Dataset schema (column stats):
- id (int64): 393k – 2.82B
- repo (string): 68 distinct values
- title (string): length 1 – 936
- body (string): length 0 – 256k
- labels (string): length 2 – 508
- priority (string): 3 values
- severity (string): 3 values
2,759,502,661
create-react-app
[JEST] Error `prettier.resolveConfig.sync is not a function` when running unit tests with Jest
With current react-scripts, I get the issue below when using **Prettier v3**. It seems to be resolved in the latest Jest version; however, react-scripts only supports **Jest v27**. The issue occurs when using a **snapshot** function (e.g., **toMatchInlineSnapshot**) in my tests. `TypeError: prettier.resolveConfig.sync is not a function at runPrettier (node_modules/jest-snapshot/build/InlineSnapshots.js:390:30) at TestScheduler.scheduleTests (node_modules/@jest/core/build/TestScheduler.js:333:13) at runJest (node_modules/@jest/core/build/runJest.js:404:19)`
needs triage,issue: bug report
low
Critical
2,759,504,476
node
Socket.connect results in an abort
### Version v22.11.0 ### Platform ```text Linux u24vm 6.8.0-50-generic #51-Ubuntu SMP PREEMPT_DYNAMIC Sat Nov 9 17:58:29 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux ``` ### Subsystem net ### What steps will reproduce the bug? Set up a node instance, ``` » node ``` and run the following JavaScript code. ``` net = require('net'); s = new net.Socket(()=>{}); s.connect({port:250,localAddress:'string'}); s.connect('string', ()=>{}); ``` The node instance then aborts. ### How often does it reproduce? Is there a required condition? This abort can always be triggered by following the steps above. ### What is the expected behavior? Why is that the expected behavior? If any error occurs, an exception or a similar error report should be thrown, caught, and handled correctly. There is no reason to abort the whole node process. ### What do you see instead? ``` » node Welcome to Node.js v22.11.0. Type ".help" for more information. > net = require('net'); { _createServerHandle: [Function: createServerHandle], _normalizeArgs: [Function: normalizeArgs], _setSimultaneousAccepts: [Function: _setSimultaneousAccepts], BlockList: [Getter], SocketAddress: [Getter], connect: [Function: connect], createConnection: [Function: connect], createServer: [Function: createServer], isIP: [Function: isIP], isIPv4: [Function: isIPv4], isIPv6: [Function: isIPv6], Server: [Function: Server], Socket: [Function: Socket], Stream: [Function: Socket], getDefaultAutoSelectFamily: [Function: getDefaultAutoSelectFamily], setDefaultAutoSelectFamily: [Function: setDefaultAutoSelectFamily], getDefaultAutoSelectFamilyAttemptTimeout: [Function: getDefaultAutoSelectFamilyAttemptTimeout], setDefaultAutoSelectFamilyAttemptTimeout: [Function: setDefaultAutoSelectFamilyAttemptTimeout] } > s = new net.Socket(()=>{}); Socket { connecting: false, _hadError: false, _parent: null, _host: null, _closeAfterHandlingError: false, _events: { close: undefined, error: undefined, prefinish: undefined, finish: undefined, 
drain: undefined, data: undefined, end: [Function: onReadableStreamEnd], readable: undefined }, _readableState: ReadableState { highWaterMark: 65536, buffer: [], bufferIndex: 0, length: 0, pipes: [], awaitDrainWriters: null, [Symbol(kState)]: 1052932 }, _writableState: WritableState { highWaterMark: 65536, length: 0, corked: 0, onwrite: [Function: bound onwrite], writelen: 0, bufferedIndex: 0, pendingcb: 0, [Symbol(kState)]: 17564420, [Symbol(kBufferedValue)]: null }, allowHalfOpen: false, _maxListeners: undefined, _eventsCount: 1, _sockname: null, _pendingData: null, _pendingEncoding: '', server: null, _server: null, [Symbol(async_id_symbol)]: -1, [Symbol(kHandle)]: null, [Symbol(lastWriteQueueSize)]: 0, [Symbol(timeout)]: null, [Symbol(kBuffer)]: null, [Symbol(kBufferCb)]: null, [Symbol(kBufferGen)]: null, [Symbol(shapeMode)]: true, [Symbol(kCapture)]: false, [Symbol(kSetNoDelay)]: false, [Symbol(kSetKeepAlive)]: false, [Symbol(kSetKeepAliveInitialDelay)]: 0, [Symbol(kBytesRead)]: 0, [Symbol(kBytesWritten)]: 0 } > s.connect({port:250,localAddress:'string'}); Uncaught TypeError [ERR_INVALID_IP_ADDRESS]: Invalid IP address: string at lookupAndConnect (node:net:1289:11) at Socket.connect (node:net:1258:5) { code: 'ERR_INVALID_IP_ADDRESS' } > s.connect('string', ()=>{}); # node[2385420]: static void node::TCPWrap::Connect(const v8::FunctionCallbackInfo<v8::Value>&) at ../src/tcp_wrap.cc:290 # Assertion failed: args[2]->IsUint32() ----- Native stack trace ----- 1: 0xf76527 node::Assert(node::AssertionInfo const&) [node] 2: 0x10d2fb3 node::TCPWrap::Connect(v8::FunctionCallbackInfo<v8::Value> const&) [node] 3: 0x7e29d7c0f5e2 ----- JavaScript stack trace ----- 1: internalConnect (node:net:1086:24) 2: defaultTriggerAsyncIdScope (node:internal/async_hooks:464:18) 3: Socket.connect (node:net:1254:5) 4: REPL4:1:3 5: runInThisContext (node:vm:137:12) 6: defaultEval (node:repl:598:22) 7: bound (node:domain:433:15) 8: runBound (node:domain:444:12) 9: onLine (node:repl:927:10) 
10: emit (node:events:530:35) [1] 2385420 IOT instruction (core dumped) node ``` ### Additional information _No response_
net
low
Critical
2,759,529,674
ant-design
Image component: add a way to set the preview image size
### What problem does this feature solve? The preview image currently appears with a default inline style of transform: translate3d(0px, 0px, 0px) scale3d(1, 1, 1) rotate(0deg);. On a 2K or 4K screen with the browser zoom at 100%, clicking the image preview pops up a very small preview image. I hope a field can be exposed to change the default size of the preview image. ### What does the proposed API look like? <Image ... preview={{ mask: ( <span>Preview</span> ), img: { style: { transform: translate3d(0px, 0px, 0px) scale3d(2, 2, 1) rotate(0deg);, // set the zoom of the image when previewing }, }, }} /> <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
unconfirmed
low
Minor
2,759,556,895
godot
Impossible to edit .gdshaderinc files in the built-in shader editor reliably
### Tested versions Reproducible in: v4.3.stable.mono.official [77dcf97d8] ### System information Windows 10 - v4.3.stable.mono.official [77dcf97d8] ### Issue description It is borderline impossible to edit .gdshaderinc files in the built-in Shader Editor because at some point the Shader Editor starts jumping between open files on each warning/error, since the shaders are re-evaluated at every slight pause while writing code. This leaves external code editors, with their insufficient autocomplete, as the only option for reliably editing .gdshaderinc files. ### Steps to reproduce 1. Modify a .gdshaderinc file that is included by other shader(s) that the Godot project uses in the open scene Result: The Shader Editor jumps to another file on every pause during typing Expected: The Shader Editor never jumps between files ### Minimal reproduction project (MRP) I did my best to create a minimal repro project, but I haven't managed to reproduce it from scratch; it only starts to happen once the project reaches some complexity. Below is a video showing that it happens in a complex project but not in the simple one: https://youtu.be/nN4xDDwYzSE
bug,topic:editor,topic:shaders
low
Critical
2,759,559,927
go
proposal: io: add `RuneWriter` interface
### Proposal Details Today I was surprised that `io` does not define a `RuneWriter` interface. I think we should add one, considering that it is implemented by types in the standard library, and that we already have an `io.RuneReader` interface as well as `io.ByteWriter`/`io.ByteReader`. `(*strings.Builder).WriteRune` `(*bytes.Buffer).WriteRune` `(*bufio.Writer).WriteRune` Proposed API: ```go type RuneWriter interface { // WriteRune writes the UTF-8 encoding of Unicode code point r, // and returns the number of bytes written. In case of an error // while writing, the WriteRune method might write part of a // UTF-8 representation of that rune. // If the rune is out of range, it writes the encoding of [utf8.RuneError]. WriteRune(r rune) (int, error) } ```
Proposal
low
Critical
2,759,562,254
vscode
Rerun Tasks
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> I'm aware there is already "Rerun Last Task" functionality, but this request is for rerunning any of the tasks that appear here: ![Image](https://github.com/user-attachments/assets/30678fbd-44ac-43b4-9e61-c48d4625b5a3) For example, I could right click either of these and there would be a "Rerun" option in the menu. If I select multiple, it could run all of them (I use "dedicated terminal per task", but if someone is using "reuse for all tasks" I'm not sure how good the UX would be with them all running in the same terminal. Maybe in this case, ignore the users preference and run each task in a separate terminal?). Also, a keybind that reruns the task I currently have open/selected/focused would be extremely helpful: ![Image](https://github.com/user-attachments/assets/7deed07d-c271-4baa-a4ef-379ccc919812) If the tasks are split-screen, maybe there could be a configuration option to either run all that are joined to the selected, or just the specific one with focus. ![Image](https://github.com/user-attachments/assets/04397c22-bbeb-4f61-8de4-37c2f738d565) Thank you for this amazing ide, it's one of the best pieces of software I've used.
feature-request,tasks
low
Major
2,759,562,543
flutter
[Proposal] Disable pasting images into text input on HyperOS
### Steps to reproduce This seems to be an OS-specific problem on HyperOS. HyperOS introduced a feature that automatically writes screenshots to the clipboard. 1. Take a screenshot; it will be copied to the clipboard automatically. 2. Paste it into a TextField. A long press will not bring up the paste button, but there is a shortcut paste button on the keyboard. ### Expected results Nothing happens. ### Actual results Garbled characters; the page becomes unusable. ### Code sample <details open><summary>Code sample</summary> ```dart void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', home: Scaffold( body: Container( child: TextField( minLines: 5, maxLines: null, ), ), ), ); } } ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> <img width="500" alt="image" src="https://github.com/user-attachments/assets/9bc4526d-b803-45c1-beab-bb94fdd88064" /> <img width="500" alt="image" src="https://github.com/user-attachments/assets/36e0aeb4-48ec-4a9d-99f5-2519db05a334" /> </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [!] Flutter (Channel stable, 3.27.1, on macOS 14.1 23B74 darwin-arm64, locale zh-Hans-CN) • Flutter version 3.27.1 on channel stable at /Users/stuff/fvm/versions/3.27.1 ! Warning: `dart` on your path resolves to /opt/homebrew/Cellar/dart/2.19.3/libexec/bin/dart, which is not inside your current Flutter SDK checkout at /Users/stuff/fvm/versions/3.27.1. Consider adding /Users/stuff/fvm/versions/3.27.1/bin to the front of your path. 
• Upstream repository https://github.com/flutter/flutter.git • Framework revision 17025dd882 (10 天前), 2024-12-17 03:23:09 +0900 • Engine revision cb4b5fff73 • Dart version 3.6.0 • DevTools version 2.40.2 • Pub download mirror https://pub.flutter-io.cn • Flutter download mirror https://storage.flutter-io.cn • If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades. [!] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at /Users/stuff/Library/Android/sdk • Platform android-34, build-tools 34.0.0 • ANDROID_HOME = /Users/stuff/Library/Android/sdk • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874) ! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses [✓] Xcode - develop for iOS and macOS (Xcode 15.4) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 15F31d • CocoaPods version 1.15.2 [✓] Android Studio (version 2023.2) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874) [✓] IntelliJ IDEA Community Edition (version 2022.3.2) • IntelliJ at /Applications/IntelliJ IDEA CE.app • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart [✓] Connected device (3 available) • 2407FRK8EC (mobile) • DIAINNSWQGBIUOMB • android-arm64 • Android 15 (API 35) ``` </details>
a: text input,c: new feature,e: device-specific,platform-android,framework,f: material design,c: proposal,P3,team-framework,triaged-framework
low
Minor
2,759,577,995
vscode
vscode server on mac ssh UnpackFailed, probably using wrong tar
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes/No <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: 1.96.2 for client - OS Version: mac sequoia 15.2 for the remote server Steps to Reproduce: 1. install a tar (GNU tar) 1.35 on mac 2. ssh to that mac using vs remote 3. got error Status UnpackFailed the log is: [18:06:19.890] > tar --version: tar (GNU tar) 1.35 > Copyright (C) 2023 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. > > Written by John Gilmore and Jay Fenlason. [18:06:19.905] > [18:06:19.920] > tar: This does not look like a tar archive > tar: Skipping to next header > tar: Exiting with failure status due to previous errors [18:06:19.929] > > ERROR: tar exited with a non-zero exit code: 2 > Already attempted local download, failing what is the cause of this? I tried to `alias tar=/usr/bin/tar` in ~/.bash_profile but the problem persists.
ssh
low
Critical
2,759,583,716
tauri
[bug] unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX`
### Describe the bug tauri app crashes with: ``` thread 'main' panicked at core/src/panicking.rs:221:5: unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX` note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace thread caused non-unwinding panic. aborting. ``` when i call this command: ```rs #[tauri::command] pub async fn open_pipe_window( app_handle: tauri::AppHandle<tauri::Wry>, port: u16, title: String, ) -> Result<(), String> { // Close existing window if it exists if let Some(existing_window) = app_handle.get_webview_window(&title) { if let Err(e) = existing_window.destroy() { error!("failed to destroy existing window: {}", e); } tokio::time::sleep(tokio::time::Duration::from_millis(100)).await; } let window = match tauri::WebviewWindowBuilder::new( &app_handle, &title, tauri::WebviewUrl::External(format!("http://localhost:{}", port).parse().unwrap()), ) .title(title) .inner_size(1200.0, 850.0) .always_on_top(true) .visible_on_all_workspaces(true) .build() { Ok(window) => window, Err(e) => { error!("failed to create window: {}", e); return Err(format!("failed to create window: {}", e)); } }; // Only try to manipulate window if creation succeeded if let Err(e) = window.set_focus() { error!("failed to set window focus: {}", e); } if let Err(e) = window.show() { error!("failed to show window: {}", e); } #[cfg(target_os = "macos")] if let Err(e) = app_handle.set_activation_policy(tauri::ActivationPolicy::Accessory) { error!("failed to set activation policy: {}", e); } Ok(()) } ``` note that this is non deterministic :/ most of the time it works, 5-10% of the time it crashes :( ### Reproduction n/a ### Expected behavior _No response_ ### Full `tauri info` output ```text (base) louisbeaumont@MacBook-Pro-2:~/Documents/screenpipe/screenpipe-app-tauri$ bun tauri info $ tauri info WARNING: Only one package manager should be used, but found 
bun and npm. Please remove unused package manager lock files, will use bun for now! [✔] Environment - OS: Mac OS 15.1.0 arm64 (X64) ✔ Xcode Command Line Tools: installed ✔ rustc: 1.83.0 (90b35a623 2024-11-26) ✔ cargo: 1.83.0 (5ffbef321 2024-10-29) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-aarch64-apple-darwin (default) - node: 20.9.0 - pnpm: 9.14.4 - yarn: 1.22.19 - npm: 10.1.0 - bun: 1.1.38 - deno: deno 2.0.2 [-] Packages - tauri 🦀: 2.1.1 - tauri-build 🦀: 2.0.3 - wry 🦀: 0.47.2 - tao 🦀: 0.30.8 - tauri-cli 🦀: 1.6.2 - @tauri-apps/api : 2.1.1 - @tauri-apps/cli : 2.1.0 [-] Plugins - tauri-plugin-store 🦀: 2.2.0 - @tauri-apps/plugin-store : 2.2.0 - tauri-plugin-deep-link 🦀: 2.2.0 - @tauri-apps/plugin-deep-link : 2.2.0 - tauri-plugin-autostart 🦀: 2.0.0 - @tauri-apps/plugin-autostart : not installed! - tauri-plugin-os 🦀: 2.0.0 - @tauri-apps/plugin-os : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-fs 🦀: 2.0.0 - @tauri-apps/plugin-fs : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-notification 🦀: 2.0.0 - @tauri-apps/plugin-notification : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-updater 🦀: 2.0.1 - @tauri-apps/plugin-updater : 2.0.0 (outdated, latest: 2.3.0) - tauri-plugin-dialog 🦀: 2.0.0 - @tauri-apps/plugin-dialog : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-http 🦀: 2.0.0 - @tauri-apps/plugin-http : 2.2.0 - tauri-plugin-shell 🦀: 2.0.0 - @tauri-apps/plugin-shell : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-global-shortcut 🦀: 2.0.0 - @tauri-apps/plugin-global-shortcut : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-cli 🦀: 2.0.0 - @tauri-apps/plugin-cli : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-single-instance 🦀: 2.0.0 - @tauri-apps/plugin-single-instance : not installed! 
- tauri-plugin-process 🦀: 2.0.0 - @tauri-apps/plugin-process : 2.0.0 (outdated, latest: 2.2.0) [-] App - build-type: bundle - CSP: unset - frontendDist: ../out - devUrl: http://localhost:3000/ - framework: React (Next.js) - bundler: Webpack ``` ### Stack trace _No response_ ### Additional context my users run a bunch of nextjs apps that i embed in tauri webviews that's why the localhost thing
type: bug,status: needs triage
low
Critical
2,759,631,923
pytorch
`@torch.jit.script` causes `pytest-cov` to miss function body
### 🐛 Describe the bug When decorating a function with `@torch.jit.script`, its body's code coverage is ignored by `pytest-cov`. Even with exhaustive testing, the coverage report always considered the function code as uncovered. ### Instructions to reproduce ``` root/ │ ├── ml_framework/ │ └── module.py │ └── tests/ └── test_module.py ``` `module.py` ```python import torch @torch.jit.script def function() -> int: return 0 ``` `test_module.py` ``` from unittest import TestCase from module import function class TestModule(TestCase): def test_function(self) -> None: self.assertEqual(0, function()) ``` Run: ``` pytest --cov=ml_framework.module test_module.py --cov-report html cov/ ``` - with the decorator (the function body is tested hence should appear as covered) <img width="400" alt="image" src="https://github.com/user-attachments/assets/c1b2e28e-b61b-4126-a2ac-b38b8a511844" /> - without the decorator <img width="400" alt="image" src="https://github.com/user-attachments/assets/201d21e1-e822-4daa-8bb8-e41748e45de5" /> ### Versions PyTorch version: 2.2.2 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 14.6.1 (x86_64) GCC version: Could not collect Clang version: 15.0.0 (clang-1500.3.9.4) CMake version: Could not collect Libc version: N/A Python version: 3.11.7 (main, May 15 2024, 22:19:42) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime) Python platform: macOS-14.6.1-x86_64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz Versions of relevant libraries: [pip3] mypy==1.13.0 [pip3] mypy-boto3-ecr==1.35.21 [pip3] mypy-boto3-s3==1.35.76.post1 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] onnx==1.17.0 [pip3] onnxruntime==1.20.1 [pip3] 
pytorch-lightning==2.4.0 [pip3] torch==2.2.2+cpu [pip3] torchmetrics==1.4.2 [pip3] torchvision==0.17.2+cpu cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
oncall: jit,feature
low
Critical
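A common mitigation for the coverage gap above, sketched here under the assumption that the project uses pytest: PyTorch documents the `PYTORCH_JIT=0` environment variable as disabling TorchScript, so `@torch.jit.script`-decorated functions run as plain Python and stay visible to coverage tracing. The variable must be set before `torch` is imported, which a `conftest.py` at the repo root can guarantee:

```python
# conftest.py -- minimal sketch: force TorchScript off for coverage runs.
# With PYTORCH_JIT=0, PyTorch treats @torch.jit.script as a no-op, so the
# original Python function body executes and pytest-cov can trace it.
import os

os.environ["PYTORCH_JIT"] = "0"  # must happen before `import torch`
```

Coverage gathered this way measures the eager-mode path, so it is worth keeping at least one test run with scripting enabled to confirm the functions still compile.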
2,759,665,288
pytorch
MPSNDArray caps the memory size of a single NDArray at 4 GB
### 🐛 Describe the bug /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:850: failed assertion `[MPSNDArray initWithDevice:descriptor:isTextureBacked:] Error: total bytes of NDArray > 2**32' [1] 13512 abort /opt/homebrew/Caskroom/miniforge/base/envs/pyTrim/bin/python /opt/homebrew/Caskroom/miniforge/base/envs/pyTrim/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' However, my program consumes only a little RAM overall; it just allocates several very large tensors, and the total RAM is sufficient. How can I work around this limit? ### Versions pytorch 2.5.1 cpu_generic_py312h99d64c8_6 conda-forge I am using a Mac mini with an M4 Pro, 64 GB RAM, macOS 15.2. cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
needs reproduction,module: crash,triaged,module: mps
low
Critical
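Until the backend lifts the limit, the usual workaround is to split work along a leading dimension so that no single buffer crosses the 2**32-byte assertion. The helper below is a hypothetical sketch of the chunk-size arithmetic (plain Python, not a PyTorch API):

```python
import math

MPS_NDARRAY_LIMIT = 2 ** 32  # bytes, from the failed assertion message

def row_chunks(n_rows: int, bytes_per_row: int, limit: int = MPS_NDARRAY_LIMIT):
    """Split a leading dimension into chunk sizes whose byte size fits `limit`."""
    if bytes_per_row > limit:
        raise ValueError("a single row already exceeds the per-NDArray limit")
    rows_per_chunk = limit // bytes_per_row        # largest chunk that still fits
    n_chunks = math.ceil(n_rows / rows_per_chunk)  # how many chunks are needed
    sizes = [rows_per_chunk] * (n_chunks - 1)
    sizes.append(n_rows - rows_per_chunk * (n_chunks - 1))  # remainder chunk
    return sizes
```

The resulting list can be fed to `torch.split(tensor, sizes, dim=0)` and the per-chunk results concatenated, at the cost of extra kernel launches.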
2,759,672,417
langchain
Usage details are not returned by OpenAIAssistantRunnable
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python OpenAIAssistantFinish( return_values={ "output": answer, "thread_id": run.thread_id, "run_id": run.id, }, log="", run_id=run.id, thread_id=run.thread_id, ) ``` This only returns the fields above; it should return usage details as well. ### Error Message and Stack Trace (if applicable) _No response_ ### Description I want to charge users based on tokens. I am using the OpenAI Assistant, but it does not return usage details, so even the callbacks do not have any usage details. ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 24.0.0: Mon Aug 12 20:54:26 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T8112 > Python Version: 3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)] Package Information ------------------- > langchain_core: 0.3.18 > langchain: 0.3.7 > langchain_community: 0.3.7 > langsmith: 0.1.142 > langchain_chroma: 0.1.4 > langchain_google_genai: 2.0.6 > langchain_openai: 0.2.6 > langchain_text_splitters: 0.3.2 > langchain_unstructured: 0.1.5 Optional packages not installed ------------------------------- > langgraph > langserve Other Dependencies ------------------ > aiohttp: 3.10.10 > async-timeout: Installed. No version info available. 
> chromadb: 0.5.18 > dataclasses-json: 0.6.7 > fastapi: 0.115.5 > filetype: 1.2.0 > google-generativeai: 0.8.3 > httpx: 0.27.0 > httpx-sse: 0.4.0 > jsonpatch: 1.33 > numpy: 1.26.4 > openai: 1.54.3 > orjson: 3.10.11 > packaging: 24.2 > pydantic: 2.9.2 > pydantic-settings: 2.6.1 > PyYAML: 6.0.2 > requests: 2.32.3 > requests-toolbelt: 1.0.0 > SQLAlchemy: 2.0.35 > tenacity: 9.0.0 > tiktoken: 0.8.0 > typing-extensions: 4.12.2 > unstructured-client: 0.25.9 > unstructured[all-docs]: Installed. No version info available.
investigate
low
Critical
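As a stopgap (a hypothetical helper, not a LangChain API): `OpenAIAssistantFinish` does return `thread_id` and `run_id`, and the Assistants API run object exposes a `usage` field once a run completes, so the token counts can be fetched with the raw `openai` client (assuming an `openai.OpenAI()` instance):

```python
def usage_from_finish(client, finish) -> dict:
    """Fetch token usage for a finished assistant run via the raw client.

    `finish` is an OpenAIAssistantFinish; `client` is an openai.OpenAI()
    instance. Looks the run up by the ids the finish object does carry.
    """
    run = client.beta.threads.runs.retrieve(
        thread_id=finish.return_values["thread_id"],
        run_id=finish.return_values["run_id"],
    )
    usage = run.usage  # populated once the run has completed
    return {
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
        "total_tokens": usage.total_tokens,
    }
```

This adds one extra API round-trip per run, but gives per-run counts suitable for billing until the finish object carries usage itself.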
2,759,683,229
PowerToys
PowerToys Run stopped working
### Microsoft PowerToys version 0.87.1 ### Installation method Microsoft Store ### Running as admin No ### Area(s) with issue? PowerToys Run ### Steps to reproduce Win + R [PowerToysReport_2024-12-26-09-54-13.zip](https://github.com/user-attachments/files/18252218/PowerToysReport_2024-12-26-09-54-13.zip) ### ✔️ Expected Behavior The PowerToys Run window should replace the Windows default Run window. ### ❌ Actual Behavior The PowerToys Run window does not show and falls back to the Windows default Run window. After upgrading from 0.86 to 0.87.1 it stopped working. Running as admin, restarting the program, re-enabling the PowerToys Run, rebooting the system, or re-installing the program do not resolve the issue. ### Other Software _No response_
Issue-Bug,Product-PowerToys Run,Needs-Triage,Needs-Team-Response
low
Major
2,759,696,192
pytorch
_transform_bias_rescale_qkv cpu op get error on debug build
### 🐛 Describe the bug The following code will produce the following error code ``` import torch qkv = torch.randn([4, 16, 576]) qkv_bias = torch.randn([576]) num_heads=4 torch._transform_bias_rescale_qkv( qkv, qkv_bias, num_heads ) ``` output ``` Traceback (most recent call last): File "/workspace/testops.py", line 7, in <module> torch._transform_bias_rescale_qkv( RuntimeError: t.storage().use_count() == 1 INTERNAL ASSERT FAILED at "/workspace/pytorch/torch/csrc/autograd/autograd_not_implemented_fallback.cpp":413, please report a bug to PyTorch. ``` ### Versions nightly hash b74622335a2c4776fa654939ec89bf1ef45b8a2f (pytorch) root@bjys1040:/workspace# python collect_env.py Collecting environment information... PyTorch version: 2.6.0a0+gitb746223 Is debug build: True CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.31.2 Libc version: glibc-2.35 Python version: 3.10.8 (main, Nov 6 2024, 16:44:26) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 104 On-line CPU(s) list: 0-103 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 26 Socket(s): 2 Stepping: 7 CPU max MHz: 4000.0000 CPU min MHz: 1000.0000 BogoMIPS: 4200.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art 
arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 1.6 MiB (52 instances) L1i cache: 1.6 MiB (52 instances) L2 cache: 52 MiB (52 instances) L3 cache: 71.5 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-25,52-77 NUMA node1 CPU(s): 26-51,78-103 Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] optree==0.13.1 [pip3] torch==2.6.0.dev20241215+cpu [pip3] torchaudio==2.6.0.dev20241215+cpu [pip3] torchvision==0.22.0.dev20241215+cpu [conda] Could not collect cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
module: nn,triaged
low
Critical
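For reference, the shape contract the op is expected to satisfy, as a plain-Python sketch with hypothetical helper names (not a PyTorch API): the packed `[B, T, 3*E]` projection is split into q, k and v of shape `[B, H, T, E // H]`, with q rescaled, per the usual attention convention, by `1 / sqrt(E // H)`:

```python
import math

def qkv_shapes(batch: int, seq: int, packed_dim: int, num_heads: int):
    """Return the per-output shape and q's rescale factor for a packed QKV."""
    if packed_dim % 3:
        raise ValueError("packed dim must hold q, k and v (divisible by 3)")
    embed = packed_dim // 3
    if embed % num_heads:
        raise ValueError("embed dim must be divisible by num_heads")
    head_dim = embed // num_heads
    return (batch, num_heads, seq, head_dim), 1.0 / math.sqrt(head_dim)
```

For the report's tensors (`qkv = [4, 16, 576]`, `num_heads = 4`) this gives three outputs of shape `(4, 4, 16, 48)`; the assertion failure itself comes from the autograd not-implemented fallback of the debug build, not from this arithmetic.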
2,759,706,336
flutter
Flutter example app crashes on Linux: PVR: (Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES)
### Steps to reproduce flutter create example cd example flutter run -v ### Expected results The example app runs without crashing. ### Actual results Flutter crashes when running the flutter example program on Linux. The reason for the crash is: PVR: (Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) ```sh 12月 26 20:12:39 wzh-pc systemd-coredump[501384]: Process 497986 (example) of user 1000 dumped core.
Stack trace of thread 497986: #0 0x0000007f787114d8 n/a (libGL_PVR_MESA.so + 0x1684d8) #1 0x0000007f7865a738 n/a (libGL_PVR_MESA.so + 0xb1738) #2 0x0000007f7865abdc n/a (libGL_PVR_MESA.so + 0xb1bdc) #3 0x0000007f7f6eeff0 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c6eff0) #4 0x0000007f7f6eeff0 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c6eff0) #5 0x0000007f7f700f50 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c80f50) #6 0x0000007f7f6ebb8c n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c6bb8c) #7 0x0000007f7f7980e8 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1d180e8) #8 0x0000007f7f7a5f94 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1d25f94) #9 0x0000007f7f7a4cf8 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1d24cf8) #10 0x0000007f7fddfc74 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x235fc74) #11 0x0000007f7fddf39c n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x235f39c) #12 0x0000007f7fddd864 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x235d864) #13 0x0000007f7fdde824 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x235e824) #14 0x0000007f7fde151c n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x236151c) #15 0x0000007f7fdddf2c n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x235df2c) #16 0x0000007f7fdddb50 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x235db50) #17 0x0000007f7fdf9204 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 
0x2379204) #18 0x0000007f7f7abaa8 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1d2baa8) #19 0x0000007f7f795244 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1d15244) #20 0x0000007f7f707ad4 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c87ad4) #21 0x0000007f7f7079b8 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c879b8) #22 0x0000007f7f7010e4 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c810e4) #23 0x0000007f7f7107b8 n/a (/home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so + 0x1c907b8) #24 0x0000007f7cbfcb74 g_closure_invoke (libgobject-2.0.so.0 + 0x14b74) #25 0x0000007fcdc44708 n/a (n/a + 0x0) ``` ### Code sample flutter example demo ### Screenshots or Video https://github.com/user-attachments/assets/43ffa43d-f867-4e0f-99f1-d0b943a0f3cc ### Logs ``` sh flutter run -v [ +12 ms] Could not interpret results of "git describe": 3.28.0-2.0.pre.38577 [ +74 ms] executing: uname -m [ +7 ms] Exit code 0 from: uname -m [ ] aarch64 [ +157 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update. [ +6 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update. [ +1 ms] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update. [ +2 ms] Artifact Instance of 'FlutterWebSdk' is not required, skipping update. [ +1 ms] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update. [ +8 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update. [ +1 ms] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update. [ +1 ms] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update. [ +1 ms] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update. 
[ +1 ms] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update. [ +1 ms] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update. [ +1 ms] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update. [ +115 ms] executing: /usr/lib/android-sdk/platform-tools/adb devices -l [ +77 ms] List of devices attached [ +11 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update. [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update. [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update. [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update. [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update. [ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update. [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update. [ +3 ms] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update. [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update. [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update. [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update. [ +214 ms] Skipping pub get: version match. [ +410 ms] Generating /home/wzh/桌面/example/android/app/src/main/java/io/flutter/plugins/GeneratedPlugi nRegistrant.java [ +331 ms] Initializing file store [ +20 ms] Skipping target: gen_localizations [ +10 ms] gen_dart_plugin_registrant: Starting due to {InvalidatedReasonKind.inputChanged: The following inputs have updated contents: /home/wzh/桌面/example/.dart_tool/package_config_subset} [ +299 ms] gen_dart_plugin_registrant: Complete [ +2 ms] Skipping target: _composite [ +4 ms] complete [ +13 ms] Launching lib/main.dart on Linux in debug mode... 
[ +11 ms] /home/wzh/桌面/flutter/bin/cache/dart-sdk/bin/dartaotruntime /home/wzh/桌面/flutter/bin/cache/dart-sdk/bin/snapshots/frontend_server_aot.dart.s napshot --sdk-root /home/wzh/桌面/flutter/bin/cache/artifacts/engine/common/flutter_patched_sdk/ --incremental --target=flutter --experimental-emit-debug-metadata --output-dill /tmp/flutter_tools.MDWRET/flutter_tool.WGEPDJ/app.dill --packages /home/wzh/桌面/example/.dart_tool/package_config.json -Ddart.vm.profile=false -Ddart.vm.product=false --enable-asserts --track-widget-creation --filesystem-scheme org-dartlang-root --initialize-from-dill build/cache.dill.track.dill --verbosity=error --enable-experiment=alternative-invalidation-strategy [ +64 ms] Building Linux application... [ +14 ms] <- compile package:example/main.dart [ +4 ms] executing: [build/linux/arm64/debug/] cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -DFLUTTER_TARGET_PLATFORM=linux-arm64 /home/wzh/桌面/example/linux [ +90 ms] -- Configuring done [ +19 ms] -- Generating done [ +2 ms] -- Build files have been written to: /home/wzh/桌面/example/build/linux/arm64/debug [ +35 ms] executing: ninja -C build/linux/arm64/debug install [ +21 ms] ninja: Entering directory `build/linux/arm64/debug' [+5742 ms] [1/5] Generating /home/wzh/桌面/example/linux/flutter/ephemeral/libflutter_linux_gtk.so, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_basic_message_chan nel.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_binary_codec.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_binary_messenger.h , /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_dart_project.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_engine.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_json_message_codec .h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_json_method_codec. 
h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_message_codec.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_method_call.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_method_channel.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_method_codec.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_method_response.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_plugin_registrar.h , /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_plugin_registry.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_standard_message_c odec.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_standard_method_co dec.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_string_codec.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_value.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/fl_view.h, /home/wzh/桌面/example/linux/flutter/ephemeral/flutter_linux/flutter_linux.h, _phony_ [ +6 ms] [ +13 ms] Could not interpret results of "git describe": 3.28.0-2.0.pre.38577 [ ] [ +94 ms] executing: uname -m [ ] [ +8 ms] Exit code 0 from: uname -m [ ] [ ] aarch64 [ +1 ms] [ +133 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update. [ +1 ms] [ +1 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update. [ +3 ms] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update. [ +1 ms] [ +5 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update. [ +8 ms] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update. [ +4 ms] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update. 
[ +1 ms] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update. [ +1 ms] [ +298 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update. [ +1 ms] [ +7 ms] Artifact Instance of 'GradleWrapper' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update. [ +1 ms] [ +2 ms] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update. 
[ +2 ms] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update. [ +1 ms] [ ] Artifact Instance of 'PubDependencies' is not required, skipping update. [ ] [ +146 ms] Initializing file store [ +2 ms] [ +46 ms] Done initializing file store [ ] [ +196 ms] Skipping target: gen_localizations [ ] [ +19 ms] Skipping target: gen_dart_plugin_registrant [ ] [+1360 ms] Skipping target: unpack_linux [ ] [+1009 ms] Skipping target: kernel_snapshot_program [ ] [ +9 ms] Skipping target: dart_build [ ] [ +1 ms] Skipping target: install_code_assets [ ] [ +699 ms] Skipping target: debug_bundle_linux-arm64_assets [ ] [ +1 ms] Persisting file store [ ] [ +26 ms] Done persisting file store [ ] [ +17 ms] build succeeded. [ ] [ +22 ms] "flutter assemble" took 3,946ms. [ ] [ +16 ms] Running 1 shutdown hook [ ] [ +2 ms] Shutdown hooks complete [ ] [ +305 ms] exiting with code 0 [+1483 ms] [2/5] Building CXX object runner/CMakeFiles/example.dir/__/flutter/generated_plugin_registrant.cc.o [ +313 ms] [3/5] Building CXX object runner/CMakeFiles/example.dir/my_application.cc.o [ +362 ms] [4/5] Linking CXX executable intermediates_do_not_run/example [ ] [4/5] Install the project... 
[ +28 ms] -- Install configuration: "Debug" [ +37 ms] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/example [ +9 ms] -- Set runtime path of "/home/wzh/桌面/example/build/linux/arm64/debug/bundle/example" to "$ORIGIN/lib" [ +1 ms] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/icudtl.dat [ +1 ms] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib/libflutter_linux_gtk.so [ +125 ms] -- Up-to-date: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/lib [ +1 ms] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/NOTICES. Z [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/shaders [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/shaders/ ink_sparkle.frag [ +1 ms] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/AssetMan ifest.bin [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/NativeAs setsManifest.json [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/AssetMan ifest.json [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/packages [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/packages /cupertino_icons [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/packages /cupertino_icons/assets [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/packages /cupertino_icons/assets/CupertinoIcons.ttf [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/FontMani fest.json [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/version. 
json [ ] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/kernel_b lob.bin [ +102 ms] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/fonts [ +1 ms] -- Installing: /home/wzh/桌面/example/build/linux/arm64/debug/bundle/data/flutter_assets/fonts/Ma terialIcons-Regular.otf [ +40 ms] Building Linux application... (completed in 8.5s) [ +5 ms] ✓ Built build/linux/arm64/debug/bundle/example [ +642 ms] VM Service URL on device: http://127.0.0.1:44053/vQY89oqOJAY=/ [ +12 ms] Caching compiled dill [ +141 ms] Connecting to service protocol: http://127.0.0.1:44053/vQY89oqOJAY=/ [ +382 ms] Launching a Dart Developer Service (DDS) instance at http://127.0.0.1:0, connecting to VM service at http://127.0.0.1:44053/vQY89oqOJAY=/. [+1389 ms] Successfully connected to service protocol: http://127.0.0.1:44053/vQY89oqOJAY=/ [ +40 ms] DevFS: Creating new filesystem on the device (null) [ +43 ms] DevFS: Created new filesystem on the device (file:///tmp/exampleFJOSHA/example/) [ +7 ms] Updating assets [ +292 ms] Syncing files to device Linux... [ +6 ms] Compiling dart to kernel with 0 updated files [ ] Processing bundle. [ +4 ms] <- recompile package:example/main.dart 0d95faee-bd12-428a-be65-384b26756b53 [ +12 ms] <- 0d95faee-bd12-428a-be65-384b26756b53 [ +8 ms] Bundle processing done. [ +317 ms] Updating files. [ ] Pending asset builds completed. Writing dirty entries. [ ] DevFS: Sync finished [ +2 ms] Syncing files to device Linux... (completed in 351ms) [ +1 ms] Synced 0.0MB. [ +10 ms] <- accept [ +6 ms] Connected to _flutterView/0xb6cbe50. [ +6 ms] Flutter run key commands. [ +3 ms] r Hot reload.  [ +2 ms] R Hot restart. [ +1 ms] h List all available interactive commands. [ +1 ms] d Detach (terminate "flutter run" but leave application running). [ +1 ms] c Clear the screen [ +16 ms] q Quit (terminate the application on the device). 
[ +2 ms] A Dart VM Service on Linux is available at: http://127.0.0.1:38353/BGi2Y3UPbDA=/ [ +451 ms] The Flutter DevTools debugger and profiler on Linux is available at: http://127.0.0.1:9100?uri=http://127.0.0.1:38353/BGi2Y3UPbDA=/ [+12013 ms] (455025) PVR:(Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) [ :239 ] [ ] (455025) PVR:(Error): DevmemSubAllocate: Failed! Error is PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES. Allocation size: 0x0000808000 [ :1623 ] [ +25 ms] (455025) PVR:(Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) [ :239 ] [ ] (455025) PVR:(Error): DevmemSubAllocate: Failed! Error is PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES. Allocation size: 0x0000808000 [ :1623 ] [ +26 ms] (455025) PVR:(Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) [ :239 ] [ ] (455025) PVR:(Error): DevmemSubAllocate: Failed! Error is PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES. Allocation size: 0x0000808000 [ :1623 ] [ +795 ms] (455025) PVR:(Error): AllocateDeviceMemory: Failed to allocate memory for RGXExportableZSBuff (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) [ :239 ] [ ] (455025) PVR:(Error): DevmemAllocateExportable: Failed! Error is PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES. Allocation size: 0x0000404000 [ :1720 ] [ ] (455025) PVR:(Error): DevmemAllocateExportable() failed (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) in PVRSRVAllocExportableDevMem() [ :747 ] [ ] (455025) PVR:(Error): RGXCreateZSBuffer: Failed to allocate ZS-Buffer (error = 52) [ :108 ] [ +67 ms] (455025) PVR:(Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) [ :239 ] [ ] (455025) PVR:(Error): DevmemSubAllocate: Failed! Error is PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES. 
Allocation size: 0x0000808000 [ :1623 ] [ +25 ms] (455025) PVR:(Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) [ :239 ] [ ] (455025) PVR:(Error): DevmemSubAllocate: Failed! Error is PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES. Allocation size: 0x0000808000 [ :1623 ] [ +33 ms] (455025) PVR:(Error): AllocateDeviceMemory: Failed to allocate memory for PMR sub-allocated (PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES) [ :239 ] [ ] (455025) PVR:(Error): DevmemSubAllocate: Failed! Error is PVRSRV_ERROR_PMR_FAILED_TO_ALLOC_PAGES. Allocation size: 0x0000808000 [ :1623 ] [ +919 ms] Service protocol connection closed. [ ] Lost connection to device. [ +1 ms] DevFS: Deleting filesystem on the device (file:///tmp/exampleFJOSHA/example/) [ +1 ms] DevFS: Deleted filesystem on the device (file:///tmp/exampleFJOSHA/example/) [ +6 ms] "flutter run" took 28,004ms. [ +13 ms] Running 3 shutdown hooks [ +33 ms] Shutdown hooks complete [ +136 ms] exiting with code 0 ``` ### Flutter Doctor output ```sh wzh@wzh-pc:~/桌面/example$ flutter doctor Doctor summary (to see all details, run flutter doctor -v): [!] Flutter (Channel master, 3.28.0-2.0.pre.38577, on Kylin V10 SP1 5.4.18-110-generic, locale zh_CN.UTF-8) ! Upstream repository http://github.com/flutter/flutter.git is not a standard remote. Set environment variable "FLUTTER_GIT_URL" to http://github.com/flutter/flutter.git to dismiss this error. [✗] Android toolchain - develop for Android devices ✗ cmdline-tools component is missing Run `path/to/sdkmanager --install "cmdline-tools;latest"` See https://developer.android.com/studio/command-line for more details. [✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome) ! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable. [✓] Linux toolchain - develop for Linux desktop [!] Android Studio (not installed) [✓] Connected device (1 available) [✓] Network resources ! 
Doctor found issues in 4 categories. ```
c: crash,e: device-specific,engine,platform-linux,e: OS-version specific,P3,c: fatal crash,team-linux,triaged-linux
low
Critical
2,759,727,500
excalidraw
safari: double-click to create text doesn't work due to focus issues
steps:

1. focus a button in the properties panel using DevTools
2. double-click on canvas to create text: it won't work

Tested on Safari 15. The focus step tends to happen during normal use of excalidraw.
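The suspected condition can be modeled as a tiny predicate; this is a hypothetical sketch (the function name and tag checks are illustrative, not Excalidraw's actual code):

```typescript
// Hypothetical model of the report: a double-click on the canvas should only
// create text when nothing else holds focus. On Safari, clicking the canvas
// does not blur a previously focused properties-panel button, so the stale
// focus blocks the action.
function focusBlocksTextCreation(activeElementTag: string): boolean {
  // "BODY" means nothing is focused; "CANVAS" means the canvas itself is.
  // Anything else (e.g. a focused BUTTON) blocks text creation.
  return activeElementTag !== "BODY" && activeElementTag !== "CANVAS";
}

console.log(focusBlocksTextCreation("BUTTON")); // true: a focused button blocks it
```

A workaround along these lines would blur `document.activeElement` on `pointerdown` before the double-click is processed.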
bug
low
Minor
2,759,751,986
kubernetes
kubernetes 1.32: Informer with WatchListClient fails to send events with fake client
### What happened? I tried to start an informer with WatchListClient enabled in client-go and I see this error ``` W1220 19:44:49.417766 26071 reflector.go:1044] k8s.io/[email protected]/tools/cache/reflector.go:243: awaiting required bookmark event for initial events stream, no events received for 10.000196792s ``` and the above error keeps coming every 10 seconds. The informer factory is stuck at this line `xInformerFactory.WaitForCacheSync(ctx.Done())` ### What did you expect to happen? The WatchList feature to work and events to propagate when a resource is created. ### How can we reproduce it (as minimally and precisely as possible)? ``` package main import ( "context" "flag" "fmt" "os" "time" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" utilruntime "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/client-go/informers" "k8s.io/client-go/kubernetes/fake" "k8s.io/client-go/tools/cache" "k8s.io/klog/v2" ) func init() { scheme := runtime.NewScheme() utilruntime.Must(corev1.AddToScheme(scheme)) } func main() { os.Setenv("KUBE_FEATURE_WatchListClient", "true") client := fake.NewSimpleClientset() defer klog.Flush() flagSet := flag.NewFlagSet("test", flag.ExitOnError) klog.InitFlags(flagSet) _ = flagSet.Parse([]string{"--v", "6"}) // Create an informer factory for the fake client informerFactory := informers.NewSharedInformerFactory(client, 0) // Get the Pod informer podInformer := informerFactory.Core().V1().Pods().Informer() // Add an event handler to the informer podInformer.AddEventHandler(&cache.ResourceEventHandlerFuncs{ AddFunc: func(obj interface{}) { pod := obj.(*corev1.Pod) fmt.Printf("Pod added: %s/%s\n", pod.Namespace, pod.Name) }, UpdateFunc: func(oldObj, newObj interface{}) { newPod := newObj.(*corev1.Pod) fmt.Printf("Pod updated: %s/%s\n", newPod.Namespace, newPod.Name) }, DeleteFunc: func(obj interface{}) { pod := obj.(*corev1.Pod) fmt.Printf("Pod deleted: %s/%s\n", pod.Namespace, pod.Name) }, }) // 
Start the informer stopCh := make(chan struct{}) defer close(stopCh) informerFactory.Start(stopCh) // Wait for cache sync if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) { fmt.Println("Failed to sync cache") return } // Use the client to create, update, or fetch resources pod := &corev1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: "example-pod", Namespace: "default", }, Spec: corev1.PodSpec{ Containers: []corev1.Container{ { Name: "example-container", Image: "nginx", }, }, }, } // Create a Pod _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}) if err != nil { panic(err) } time.Sleep(time.Second * 5) stopCh <- struct{}{} } ``` go.mod file: ``` module myworks/watchlist-test go 1.23.0 require ( k8s.io/api v0.32.0 k8s.io/apimachinery v0.32.0 k8s.io/client-go v0.32.0 k8s.io/klog/v2 v2.130.1 ) require ( github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/emicklei/go-restful/v3 v3.11.0 // indirect github.com/fxamacker/cbor/v2 v2.7.0 // indirect github.com/go-logr/logr v1.4.2 // indirect github.com/go-openapi/jsonpointer v0.21.0 // indirect github.com/go-openapi/jsonreference v0.20.2 // indirect github.com/go-openapi/swag v0.23.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/protobuf v1.5.4 // indirect github.com/google/gnostic-models v0.6.8 // indirect github.com/google/go-cmp v0.6.0 // indirect github.com/google/gofuzz v1.2.0 // indirect github.com/google/uuid v1.6.0 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect github.com/mailru/easyjson v0.7.7 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.2 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/pkg/errors v0.9.1 // indirect github.com/x448/float16 v0.8.4 // indirect golang.org/x/net v0.30.0 // indirect golang.org/x/oauth2 
v0.23.0 // indirect golang.org/x/sys v0.26.0 // indirect golang.org/x/term v0.25.0 // indirect golang.org/x/text v0.19.0 // indirect golang.org/x/time v0.7.0 // indirect google.golang.org/protobuf v1.35.1 // indirect gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect sigs.k8s.io/yaml v1.4.0 // indirect ) ``` Error logs: ``` go run main.go I1223 14:40:04.018354 35133 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false I1223 14:40:04.018890 35133 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1223 14:40:04.018897 35133 envvar.go:169] "Feature gate updated state" feature="WatchListClient" enabled=true I1223 14:40:04.018901 35133 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false I1223 14:40:04.018930 35133 reflector.go:313] Starting reflector *v1.Pod (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251 I1223 14:40:04.018968 35133 reflector.go:349] Listing and watching *v1.Pod from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251 W1223 14:40:14.020097 35133 reflector.go:1052] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: awaiting required bookmark event for initial events stream, no events received for 10.000426333s W1223 14:40:24.019566 35133 reflector.go:1052] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: awaiting required bookmark event for initial events stream, no events received for 20.000542791s ``` ### Anything else we need to know? change the env var to false and you see things are working. 
``` os.Setenv("KUBE_FEATURE_WatchListClient", "false") ``` Logs: ``` go run main.go I1223 14:45:47.374375 39780 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false I1223 14:45:47.374853 39780 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false I1223 14:45:47.374859 39780 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1223 14:45:47.374864 39780 envvar.go:169] "Feature gate updated state" feature="WatchListClient" enabled=false I1223 14:45:47.374889 39780 reflector.go:313] Starting reflector *v1.Pod (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251 I1223 14:45:47.374927 39780 reflector.go:349] Listing and watching *v1.Pod from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251 I1223 14:45:47.375349 39780 reflector.go:376] Caches populated for *v1.Pod from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251 Pod added: default/example-pod ``` ### Kubernetes version Client Version: v1.32.0 ### Cloud provider Local ### OS version MacOS Darwin Kernel Version 23.6.0 ### Install tools golang ``` go version go version go1.23.0 darwin/arm64 ``` ### Container runtime (CRI) and version (if applicable) <details> </details> ### Related plugins (CNI, CSI, ...) and versions (if applicable) <details> </details>
kind/bug,needs-sig,needs-triage
low
Critical
2,759,761,246
vscode
DEBUG LOGS IN TERMINAL
We need a nicely formatted list of logs in the terminal.
info-needed,triage-needed
low
Critical
2,759,763,829
PowerToys
PowerToys 0.87.1 can't be installed/updated using Winget
### Microsoft PowerToys version 0.87.1 ### Installation method WinGet ### Running as admin Yes ### Area(s) with issue? Installer ### Steps to reproduce To reproduce this bug, install PowerToys with `winget install powertoys --source winget` or upgrade it with `winget upgrade powertoys` ### ✔️ Expected Behavior Version 0.87.1 would install properly after running the commands ### ❌ Actual Behavior The installer fails with the following error: `Installer failed with exit code: 1603`; the only way to get this PowerToys version installed is to manually download the machine-wide version from GitHub ### Other Software [WinGet-Microsoft.PowerToys.0.87.1-2024-12-26-07-00-16.265.log](https://github.com/user-attachments/files/18253013/WinGet-Microsoft.PowerToys.0.87.1-2024-12-26-07-00-16.265.log)
Issue-Bug,Needs-Triage,Needs-Team-Response
low
Critical
2,759,832,648
vscode
suggestion widget focused, but Page Down and ↓ don't work normally
When there is only one option and the widget has focus, Page Down cannot scroll the text in the widget, but it works when there are multiple options.

https://github.com/user-attachments/assets/6ba31585-0ce4-4744-b9a2-8f84fed80172

I think the inconsistency between these two behaviors is not right.

VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.26100
Modes:
suggest
low
Minor
2,759,835,514
svelte
Feature request: standard way to provide default prop values with TypeScript
### Describe the problem Currently, the only documented way to create default props is by [destructuring them](https://svelte.dev/docs/svelte/$props#Fallback-values). While this is fine for small components, larger components benefit from using the `props.` prefix. ```svelte <script lang="ts"> const props: { name: string; age?: number; // how to give age a default value? } = $props(); </script> <div>{props.name}</div> <div>{props.age}</div> ``` I mean, I can do this: ```svelte <script lang="ts"> interface Props { name: string; age?: number; } const defaultProps = { age: 40, }; const props: Props = $props(); const props2 = {...defaultProps, ...props}; </script> <div>{props2.name}</div> <div>{props2.age}</div> ``` ...which gives me the correct assignment and proper typing, but it's just [an ugly React hack](https://stackoverflow.com/questions/40209352/how-to-specify-optional-default-props-with-typescript-for-stateless-functiona/73040312#73040312). ### Describe the proposed solution Vue 3 provides the [`withDefaults`](https://vuejs.org/api/sfc-script-setup.html#default-props-values-when-using-type-declaration) API, which is a rather elegant solution to the problem: ``` const props = withDefaults(defineProps<{ name: string; age?: number; }>(), { age: 40, }); ``` Something similar could be added to the Svelte runes API. ### Importance i cannot use svelte without it
feature request
low
Minor
2,759,846,975
ollama
Unexpected Connection Closure and GPU Memory Not Releasing
### What is the issue? ### Problem Description I am using Ubuntu 22.04 and making network requests to a local Ollama service with Python to run a series of models sequentially. After each model runs, it is unloaded using the following Python code. However, the task encounters an issue after reaching a certain point, where the Ollama network service unexpectedly stops when loading a model, resulting in the following error: ``` Traceback (most recent call last): File "/root/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/root/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 449, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/root/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 444, in _make_request httplib_response = conn.getresponse() File "/root/miniconda3/lib/python3.10/http/client.py", line 1374, in getresponse response.begin() File "/root/miniconda3/lib/python3.10/http/client.py", line 318, in begin version, status, reason = self._read_status() File "/root/miniconda3/lib/python3.10/http/client.py", line 287, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/miniconda3/lib/python3.10/site-packages/requests/adapters.py", line 667, in send resp = conn.urlopen( File "/root/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen retries = retries.increment( File "/root/miniconda3/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment raise six.reraise(type(error), error, _stacktrace) File "/root/miniconda3/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise raise value.with_traceback(tb) File 
"/root/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/root/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 449, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/root/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 444, in _make_request httplib_response = conn.getresponse() File "/root/miniconda3/lib/python3.10/http/client.py", line 1374, in getresponse response.begin() File "/root/miniconda3/lib/python3.10/http/client.py", line 318, in begin version, status, reason = self._read_status() File "/root/miniconda3/lib/python3.10/http/client.py", line 287, in _read_status raise RemoteDisconnected("Remote end closed connection without" urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/autodl-tmp/EvalLLM/main.py", line 109, in <module> generator.generate_and_save(section) File "/root/autodl-tmp/EvalLLM/main.py", line 93, in generate_and_save response_text = ollamaCaller.generate_response(prompt=question) File "/root/autodl-tmp/EvalLLM/Caller/OllamaCaller.py", line 18, in generate_response response = requests.post("http://localhost:11434/api/generate", json=payload) File "/root/miniconda3/lib/python3.10/site-packages/requests/api.py", line 115, in post return request("post", url, data=data, json=json, **kwargs) File "/root/miniconda3/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/root/miniconda3/lib/python3.10/site-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/root/miniconda3/lib/python3.10/site-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) 
File "/root/miniconda3/lib/python3.10/site-packages/requests/adapters.py", line 682, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) ``` This could be caused by the **service not responding for a long time**, or it might be due to the **daemon process being unexpectedly closed**, right? At the time of the error, The program was just finished loading a new model (**a relatively small model for my gpu, llama3:latest**), and the Ollama logs are as follows: ```log time=2024-12-25T22:05:34.725+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/autodl-tmp/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=GPU-30d02008-3980-d577-cd07-7e3fcabf1d31 parallel=4 available=25158156288 required="6.2 GiB" time=2024-12-25T22:05:34.977+08:00 level=INFO source=server.go:104 msg="system memory" total="377.3 GiB" free="327.5 GiB" free_swap="0 B" time=2024-12-25T22:05:34.977+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[23.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB" time=2024-12-25T22:05:34.977+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /root/autodl-tmp/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 112 --parallel 4 --port 41461" time=2024-12-25T22:05:34.978+08:00 level=INFO 
source=sched.go:449 msg="loaded runners" count=1 time=2024-12-25T22:05:34.978+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding" time=2024-12-25T22:05:34.978+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error" time=2024-12-25T22:05:35.021+08:00 level=INFO source=runner.go:945 msg="starting go runner" ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes time=2024-12-25T22:05:35.029+08:00 level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=112 time=2024-12-25T22:05:35.029+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:41461" llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23992 MiB free llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /root/autodl-tmp/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct llama_model_loader: - kv 2: llama.block_count u32 = 32 llama_model_loader: - kv 3: llama.context_length u32 = 8192 llama_model_loader: - kv 4: llama.embedding_length u32 = 4096 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 6: llama.attention.head_count u32 = 32 llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: general.file_type u32 = 2 llama_model_loader: - kv 11: llama.vocab_size u32 = 128256 llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... time=2024-12-25T22:05:35.230+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ... 
llama_model_loader: - kv 21: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_0: 225 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens cache size = 256 llm_load_vocab: token to piece cache size = 0.8000 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 8192 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 8192 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 8B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 8.03 B llm_load_print_meta: model size = 4.33 GiB (4.64 BPW) 
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOG token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading output layer to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: CPU_Mapped model buffer size = 281.81 MiB llm_load_tensors: CUDA0 model buffer size = 4155.99 MiB llama_new_context_with_model: n_seq_max = 4 llama_new_context_with_model: n_ctx = 8192 llama_new_context_with_model: n_ctx_per_seq = 2048 llama_new_context_with_model: n_batch = 2048 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 2 ``` The logs end here. Regardless of the cause of this issue, after it occurs, **about 6GB of GPU memory remains unreleased and cannot be freed even by restarting Ollama**. 
Below is the output of `nvidia-smi`: ``` (base) root@autodl-container-d33848b29e-31b2d2f4:~# nvidia-smi Thu Dec 26 22:29:15 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.78 Driver Version: 550.78 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 3090 On | 00000000:10:00.0 Off | N/A | | 30% 28C P8 17W / 350W | 6002MiB / 24576MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| +-----------------------------------------------------------------------------------------+ ``` Has anyone encountered this issue? How can it be resolved? Thank you! ### OS Linux ### GPU Nvidia ### CPU _No response_ ### Ollama version 0.5.4
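Regardless of the server-side root cause, the `RemoteDisconnected` failures above can be made survivable on the client by retrying the request with a backoff. A minimal sketch, not part of the report (the retry counts and the idea of passing the request as a callable are my own assumptions; note that `requests.exceptions.ConnectionError` derives from `OSError`, not from the builtin `ConnectionError`):

```python
import time

def post_with_retries(post_fn, retries=3, backoff_s=2.0):
    """Call post_fn(); on a connection-level failure, retry with linear backoff.

    post_fn is any zero-argument callable performing the HTTP request, e.g.
    lambda: requests.post("http://localhost:11434/api/generate", json=payload)
    """
    last_exc = None
    for attempt in range(retries):
        try:
            return post_fn()
        except OSError as exc:  # covers requests.exceptions.ConnectionError
            last_exc = exc
            time.sleep(backoff_s * (attempt + 1))
    raise last_exc
```

In the failing script, `OllamaCaller.generate_response` could route its `requests.post` call through this wrapper; that only papers over the crash, though, and the server-side abort still needs diagnosing.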
bug
low
Critical
2,759,847,453
tauri
[bug] tray disappears in macOS
### Describe the bug The tray in macOS behaves inconsistently. It appears and disappears unpredictably under these circumstances: - When I run or build the app, the tray sometimes disappears. - Changing the title in the setup causes the tray to vanish, but also not always. - Restarting macOS temporarily resolves the issue, but after a few app restarts or recompiles, the tray disappears again. Additional observations: - Occasionally, the tray reappears if I wait while the app is still running. - No errors or panics are logged during these events. ### Reproduction ```rust #[tauri::command] fn change_tray_title(app_handle: AppHandle, title: String) { println!("title from frontend: {}", title); let tray_handle = app_handle.tray_by_id("tray").unwrap(); if let Err(e) = tray_handle.set_title(Some(&title)) { eprintln!("failed to set title: {}", e); } else { println!("title set successfully to '{}'", title); } } ``` ### Expected behavior tray should not vanish on title or icon changes ### Full `tauri info` output ```text [✔] Environment - OS: Mac OS 15.0.1 arm64 (X64) ✔ Xcode Command Line Tools: installed ✔ rustc: 1.81.0 (eeb90cda1 2024-09-04) ✔ cargo: 1.81.0 (2dbb1af80 2024-08-20) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-aarch64-apple-darwin (default) - node: 20.12.0 - pnpm: 8.7.5 - npm: 10.5.0 - bun: 1.0.2 [-] Packages - tauri 🦀: 2.1.1 - tauri-build 🦀: 2.0.3 - wry 🦀: 0.47.0 - tao 🦀: 0.30.6 - @tauri-apps/api : not installed! - @tauri-apps/cli : 2.1.0 [-] Plugins - tauri-plugin-shell 🦀: 2.2.0 - @tauri-apps/plugin-shell : not installed! [-] App - build-type: bundle - CSP: unset - frontendDist: ../dist - devUrl: http://localhost:1420/ - framework: React - bundler: Vite ``` ### Stack trace ```text no errors ``` ### Additional context _No response_
type: bug,platform: macOS,status: needs triage
low
Critical
2,759,865,829
go
go/types: panic("t.fromRHS = %s, typ = %s") in setDefType
``` #!stacks "runtime.gopanic" && "setDefType:+7" ``` Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks). ```go func setDefType(def *TypeName, typ Type) { if def != nil { switch t := def.typ.(type) { case *Alias: // t.fromRHS should always be set, either to an invalid type // in the beginning, or to typ in certain cyclic declarations. if t.fromRHS != Typ[Invalid] && t.fromRHS != typ { panic(sprintf(nil, nil, true, "t.fromRHS = %s, typ = %s\n", t.fromRHS, typ)) } t.fromRHS = typ case *Basic: assert(t == Typ[Invalid]) case *Named: t.underlying = typ default: panic(fmt.Sprintf("unexpected type %T", t)) } } } ``` This stack `PrtfKQ` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-12-25.json): - `crash/crash` - [`runtime.gopanic:+69`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/panic.go;l=804) - [`go/types.(*Checker).handleBailout:+7`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/check.go;l=404) - [`go/types.(*Checker).Files.deferwrap1:+0`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/check.go;l=421) - [`runtime.gopanic:+50`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/panic.go;l=785) - [`go/types.setDefType:+7`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/typexpr.go;l=422) - [`go/types.(*Checker).typInternal:+175`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/typexpr.go;l=411) - [`go/types.(*Checker).definedType:+1`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/typexpr.go;l=201) - [`go/types.(*Checker).typeDecl:+55`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/decl.go;l=610) - [`go/types.(*Checker).objDecl:+141`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/decl.go;l=191) - [`go/types.(*Checker).packageObjects:+59`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/resolver.go;l=710) - 
[`go/types.(*Checker).checkFiles:+29`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/check.go;l=459) - [`go/types.(*Checker).Files:+13`](https://cs.opensource.google/go/go/+/go1.23.4:src/go/types/check.go;l=422) - [`golang.org/x/tools/gopls/internal/cache.(*typeCheckBatch).checkPackage:+73`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=1543) - [`golang.org/x/tools/gopls/internal/cache.(*typeCheckBatch).getPackage.func1:+49`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=420) - `golang.org/x/tools/gopls/internal/cache.(*futureCache[...]).get:+32` - [`golang.org/x/tools/gopls/internal/cache.(*typeCheckBatch).getPackage:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/cache/check.go;l=371) ``` golang.org/x/tools/[email protected] go1.23.4 linux/amd64 vscode (1) ``` cc @timothy-king Dups: FBA0bw RBzqiQ ucULZw Ct75kw kiWqdA NSTQeg fHIAHw --O60g LRZG_A ToWwaw
NeedsInvestigation,gopls,Tools,gopls/telemetry-wins
low
Critical
2,759,877,901
flutter
Accent (non-ASCII) characters in iPhone’s device name prevent Dart VM Service from connecting
### Steps to reproduce 1. On your iPhone, open **Settings > General > About**, and rename the device to include an accented (non-ASCII) character. For example: ``` José's iPhone ``` 2. Connect the device to your Mac (via USB or Wi-Fi) and attempt to debug a Flutter app: ```bash flutter run ``` 3. Observe that the Dart VM Service fails to connect, or you see an error like **“The Dart VM Service was not discovered after 75 seconds...”**. 4. Rename the iPhone to remove accents (e.g. `Jose's iPhone`), and run/debug again. The issue disappears. ### Expected results - Flutter should successfully connect to the Dart VM Service, regardless of whether the device name contains accented or non-ASCII characters. - The app should launch in debug mode, allowing breakpoints, hot reload, and normal debugging features. ### Actual results - With an accented character in the iPhone’s device name, the Dart VM Service never connects, preventing debugging or hot reload. - Removing the accent from the device name immediately resolves the issue. ### Code sample <details open><summary>Code sample</summary> ```dart // A simple Flutter app for reproduction import 'package:flutter/material.dart'; void main() => runApp(const MyApp()); class MyApp extends StatelessWidget { const MyApp({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp( title: 'Accent Device Name Bug', home: Scaffold( appBar: AppBar( title: const Text('Accent Bug Demo'), ), body: const Center( child: Text('If device name has accents, Dart VM fails to connect.'), ), ), ); } } ``` </details> ### Screenshots or Video _No response_ ### Logs <details open><summary>Logs</summary> ```console % flutter run No devices found yet. Checking for wireless devices... Launching lib/main.dart on **Redacted for privacy** in debug mode... Automatically signing iOS for device deployment using specified development team in Xcode project: **Redacted for privacy** Running Xcode build... 
└─Compiling, linking and signing... 5,2s Xcode build done. 21,7s You may be prompted to give access to control Xcode. Flutter uses Xcode to run your app. If access is not allowed, you can change this through your Settings > Privacy & Security > Automation. The Dart VM Service was not discovered after 75 seconds. This is taking much longer than expected... Open the Xcode window the project is opened in to ensure the app is running. If the app is not running, try selecting "Product > Run" to fix the problem. Click "Allow" to the prompt asking if you would like to find and connect devices on your local network. This is required for wireless debugging. If you selected "Don't Allow", you can turn it on in Settings > Your App Name > Local Network. If you don't see your app in the Settings, uninstall the app and rerun to see the prompt again. Installing and launching... ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-x64, locale es-ES) • Flutter version 3.27.1 on channel stable at **Redacted for privacy** • Upstream repository https://github.com/flutter/flutter.git • Framework revision 17025dd882 (10 days ago), 2024-12-17 03:23:09 +0900 • Engine revision cb4b5fff73 • Dart version 3.6.0 • DevTools version 2.40.2 [!] Android toolchain - develop for Android devices (Android SDK version 29.0.3) • Android SDK at **Redacted for privacy** ✗ cmdline-tools component is missing Run path/to/sdkmanager --install "cmdline-tools;latest" See https://developer.android.com/studio/command-line for more details. ✗ Android license status unknown. Run flutter doctor --android-licenses to accept the SDK licenses. See https://flutter.dev/to/macos-android-setup for more details. [✓] Xcode - develop for iOS and macOS (Xcode 16.2) • Xcode at /Applications/[Xcode.app/Contents/Developer](https://xcode.app/Contents/Developer) • Build 16C5032a • CocoaPods version 1.16.2 [!] 
Android Studio (not installed) • Android Studio not found; download from https://developer.android.com/studio/index.html (or visit https://flutter.dev/to/macos-android-setup for detailed instructions). [✓] Connected device (1 available) • **Redacted for privacy** (mobile) • **Redacted for privacy** • ios • iOS 18.2 22C152 [✓] Network resources • All expected network resources are available. ! Doctor found issues in 2 categories. ``` </details>
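To pin down which character in a device name is the trigger, it helps to list the non-ASCII code points and to preview what an accent-stripped rename would look like. This is an illustrative helper, not part of Flutter's tooling (the idea that mDNS/Bonjour hostname encoding is involved is speculation):

```python
import unicodedata

def non_ascii_chars(name):
    """List the non-ASCII characters in a device name with their Unicode names."""
    return [(ch, unicodedata.name(ch, "UNKNOWN")) for ch in name if ord(ch) > 127]

def ascii_fold(name):
    """Suggest an accent-free rename: NFKD-decompose, then drop non-ASCII marks."""
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(ch for ch in decomposed if ord(ch) < 128)
```

`ascii_fold("José's iPhone")` produces the accent-free rename that made debugging work again in the report.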
c: regression,platform-ios,waiting for customer response,has reproducible steps,P1,team-ios,triaged-ios,fyi-tool,found in release: 3.27,found in release: 3.28
medium
Critical
2,759,878,173
kubernetes
Transparent Huge Pages (THP)
### What would you like to be added? Transparent Huge Pages (THP) ### Why is this needed? The Go runtime has some memory-related optimizations, as outlined in [this guide](https://tip.golang.org/doc/gc-guide), which include Linux Transparent Huge Pages (THP). Is it recommended to enable Transparent Huge Pages (THP) on the master node when installing the core components of Kubernetes (K8s)? Relevant materials: Oracle recommends disabling transparent huge pages: [Disabling Transparent HugePages](https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/disabling-transparent-hugepages.html#GUID-02E9147D-D565-4AF8-B12A-8E6E9F74BEEA) Redis recommends disabling transparent huge pages: [Redis Documentation on Transparent Huge Pages](https://redis.io/docs/latest/operate/oss_and_stack/management/optimization/latency/)
kind/support,needs-sig,needs-triage
low
Major
2,759,932,415
rust
Target feature implications for negative features are handled inconsistently between codegen and `cfg(target_feature)`
The logic that computes `cfg(target_feature)` takes into account target feature implications when handling something like `-sse`: it will also remove `avx` from the list of enabled target features in that case. However, the logic that computes which flags we set for codegen (which, unfortunately, is completely separate), does not do the same: it will add `-sse` to LLVM's target feature list, but does not do anything about Rust-level target feature implications. This can't be correct -- either negative target features also imply that their "reverse dependencies" get disabled, or they do not. We shouldn't do one thing in codegen and a different thing for `cfg`. Or is there some good reason for this? The logic for this was added in https://github.com/rust-lang/rust/pull/128221. Cc @calebzulawski @Amanieu @workingjubilee
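To make the expected semantics concrete, here is a sketch of implication-aware flag processing in the style of the `cfg` logic described above (the feature graph is an illustrative subset of x86, and the code models the semantics only, not rustc's actual implementation): enabling a feature pulls in everything it implies, and disabling a feature also disables every feature that implies it.

```python
# Feature -> features it (directly) implies. Illustrative subset.
IMPLIES = {
    "avx2": {"avx"},
    "avx": {"sse4.2"},
    "sse4.2": {"sse"},
    "sse": set(),
}

def transitive(graph, start):
    """All features reachable from `start` in `graph`, including `start`."""
    seen, stack = set(), [start]
    while stack:
        f = stack.pop()
        if f not in seen:
            seen.add(f)
            stack.extend(graph.get(f, ()))
    return seen

# Reverse edges: feature -> features that directly imply it.
IMPLIED_BY = {f: set() for f in IMPLIES}
for feat, deps in IMPLIES.items():
    for dep in deps:
        IMPLIED_BY[dep].add(feat)

def apply_flags(flags):
    """Process +feat / -feat flags in order, expanding implications both ways."""
    enabled = set()
    for flag in flags:
        name = flag[1:]
        if flag.startswith("+"):
            enabled |= transitive(IMPLIES, name)     # enabling pulls in implied features
        else:
            enabled -= transitive(IMPLIED_BY, name)  # disabling removes reverse deps too
    return enabled
```

Under this rule `-sse` leaves no `avx` behind, which is what the `cfg(target_feature)` computation already does; handing `-sse` to LLVM without the reverse-dependency expansion is the codegen-side half of the inconsistency.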
T-compiler,C-bug,A-target-feature
low
Minor
2,759,949,743
pytorch
Adaptive pool MPS
### 🚀 The feature, motivation and pitch Hello, I've been trying to train a VGG architecture on an M3 chip. I get this error: RuntimeError: Adaptive pool MPS: output sizes must be divisible by input sizes. Non-divisible input sizes are not implemented on MPS device yet. For now, you can manually transfer tensor to cpu in this case. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/96056) ### Alternatives _No response_ ### Additional context _No response_ cc @mikaylagawarecki @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
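The divisibility requirement in the error follows from how adaptive pooling chooses its regions. Below is a pure-Python sketch of the standard floor/ceil region formula (illustrative, not the actual kernel): when the output size divides the input size, every region has the same width and the regions tile the input exactly, which is the case the MPS backend currently handles.

```python
import math

def adaptive_regions(in_size, out_size):
    """Pooling region [start, end) for each output index, per the adaptive formula."""
    return [
        (math.floor(i * in_size / out_size), math.ceil((i + 1) * in_size / out_size))
        for i in range(out_size)
    ]

def tiles_exactly(in_size, out_size):
    """True when regions have equal width and no overlap (the divisible case)."""
    regions = adaptive_regions(in_size, out_size)
    widths = {end - start for start, end in regions}
    no_overlap = all(regions[i][1] == regions[i + 1][0] for i in range(len(regions) - 1))
    return len(widths) == 1 and no_overlap
```

The workaround the message itself suggests is to run that one layer on the CPU, e.g. `x = pool(x.cpu()).to('mps')`, or to pick sizes where `tiles_exactly` holds (VGG's `AdaptiveAvgPool2d((7, 7))` satisfies it whenever the spatial input is a multiple of 7).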
triaged,enhancement,module: pooling,module: mps
low
Critical
2,760,006,375
create-react-app
there should be a single place that holds the dependencies
### Is your proposal related to a problem? Not exactly a problem; while reading the code I saw a TODO comment about having a single place that holds the dependencies instead of hard-coding the values at the declaration site. <!-- Provide a clear and concise description of what the problem is. For example, "I'm always frustrated when..." --> ### Describe the solution you'd like // dependencies.js module.exports = ['react', 'react-dom', 'react-scripts']; // createReactApp.js const dependencies = require('./dependencies').sort(); <!-- Provide a clear and concise description of what you want to happen. --> Instead of hardcoding the values in place, I think we can have a single dependencies file that holds the dependencies array. ### Describe alternatives you've considered <!-- Let us know about other solutions you've tried or researched. --> ### Additional context // TODO: there should be a single place that holds the dependencies const dependencies = ['react', 'react-dom', 'react-scripts'].sort(); if (dependencies.includes(appName)) { console.error( chalk.red( `Cannot create a project named ${chalk.green( `"${appName}"` )} because a dependency with the same name exists.\n` + `Due to the way npm works, the following names are not allowed:\n\n` ) + chalk.cyan(dependencies.map(depName => ` ${depName}`).join('\n')) + chalk.red('\n\nPlease choose a different project name.') ); process.exit(1); } } I thought I would give it a try; could you please take a look when you are free. <!-- Is there anything else you can add about the proposal? You might want to link to related issues here, if you haven't already. -->
issue: proposal,needs triage
low
Critical
2,760,011,611
vscode
VSCode does not inherit command prompt environment variables when launched using code.exe
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: 1.97.0 insiders - OS Version: Windows 10 (build 19045) Steps to Reproduce: 1. Open command prompt and launch vscode with the following commands: ``` set PATH=C:\msys64\mingw64\bin start /d "C:\path\to\vscode\install\directory\" Code.exe -n ``` 2. In the VSCode instance, open a Command Prompt terminal and execute `echo %PATH%` 3. Open a new command prompt window and launch vscode with the following commands: ``` set PATH=C:\msys64\ucrt64\bin start /d "C:\path\to\vscode\install\directory\" Code.exe -n ``` 4. In the new VSCode instance, open a Command Prompt terminal and execute `echo %PATH%` Expected behavior: The second instance of VSCode will echo `C:\msys64\ucrt64\bin`. Actual behavior: The second instance of VSCode echos `C:\msys64\mingw64\bin`. Related to the following closed issues: #15452 #109238. After reviewing these issues, I do believe this is a bug and not the intended design. I wasn't able to find any open issues regarding this topic. 
I also read the loosely related open feature-request #47816, but it doesn't capture this issue specifically since it is talking about updating the environment for new terminal instances within an already open instance of VSCode.
feature-request,terminal-process
medium
Critical
2,760,041,893
transformers
DeepSeek V3 Support
### Model description #### Transformer model DeepSeek V3 is a Transformer model that utilizes Mixture of Experts (similar to Qwen2 MoE) and Multi-head Latent Attention (MLA). ![image](https://github.com/user-attachments/assets/351e5e4b-63c5-47c0-888d-109c90a78549) #### Multi-token Prediction The model is able to predict multiple tokens sequentially at each step through the MTP modules. The first token is generated by the causal LM which feeds the output token into what I would describe as a "Transformer head" to generate additional tokens for the current step. DeepSeek notes in their release that *"MTP support is currently under active development within the community, and we welcome your contributions and feedback."* (i.e. code for this is not released). ![image](https://github.com/user-attachments/assets/35bce43f-9b7a-4a95-9062-35ae072e9771) ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Transformers Code: https://huggingface.co/deepseek-ai/DeepSeek-V3 GitHub Code (minimal implementation): https://github.com/deepseek-ai/DeepSeek-V3/tree/main/inference Paper: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf
New model
low
Major
2,760,047,556
flutter
[infra] Should Cocoon be able to backfill broken engine builds?
## Problem? Occasionally the engine could fail to build in the MQ due to infra issues. For example, recently we hit an issue with LUCI bots checking out a wrong engine revision: * https://github.com/flutter/flutter/issues/160704 * https://github.com/flutter/flutter/issues/160795 Other possible scenarios could include files lost in transmission to GCS, failed sync to Google CDN (gstatic), etc. If such a git revision lands on `master` it will stay in a forever broken state. However, the _code_ isn't broken, only infra is. After the infra is fixed, it would be nice to be able to reset the status of that commit in `master` and backfill it along with all the engine artifacts. However, right now we don't have a way to do this in post-submit. There's no way to tell the infra to build the engine after the commit lands in `master`. A workaround is to push a new PR, have it land normally, and then ask everyone to skip the "broken" one (like I did [here](https://github.com/flutter/flutter/issues/160795#issuecomment-2563008883)). This method is OK, but it requires that everyone get the message. ## What we could do Have a button on the dashboard (perhaps available to admins only), which would do three things: * Reset the statuses of all tasks to "New". * Tag the commit as one needing an engine build. * Kick off a special post-submit build that runs both the engine and framework CI stages for that commit. If the code and the infra are indeed in good shape then that should fix the respective Flutter revision. ## Why not do this Ideally if broken infra leads to a failed engine build, it should result in the MQ check failing, and the PR should be kicked back into pre-submit. Failed infra that results in the MQ landing the commit to post-submit is a serious issue. One could reasonably argue that such things should _never_ happen, and instead of providing a way to backfill post-submit commits, we should just make our infra more robust.
team-infra,P2,triaged-infra,monorepo
low
Critical
2,760,047,872
pytorch
[Performance] Simple arithmetic operations are slower using MPS than Metal
### 🐛 Describe the bug Reported by @swolchok and could be confirmed by running something like the following ```python import torch from timeit import default_timer from torch.utils.benchmark import Measurement, Timer def bench_binary( n, binary_func, dtype=torch.float32, ) -> Measurement: t = Timer( stmt=f"f(x, y);f(x, y); f(x, y); torch.mps.synchronize()", setup=f"x, y=torch.rand((2, {n}), dtype={dtype}, device='mps').unbind(0)", globals = {'f': binary_func}, language="python", timer=default_timer ) return t.blocked_autorange() mps_lib = torch.mps._compile_shader(""" #include <metal_stdlib> using namespace metal; template<typename T> kernel void add(constant T* x, constant T* y, device T* out, uint index [[thread_position_in_grid]]) { out[index] = static_cast<T>(x[index] + y[index]); } template [[host_name("add_float")]] kernel void add(constant float*, constant float*, device float*, uint); template [[host_name("add_half")]] kernel void add(constant half*, constant half*, device half*, uint); template [[host_name("add_bfloat")]] kernel void add(constant bfloat*, constant bfloat*, device bfloat*, uint); """) def metal_add(x, y): rc = torch.empty_like(x) { torch.float: mps_lib.add_float, torch.half: mps_lib.add_half, torch.bfloat16: mps_lib.add_bfloat }[x.dtype](x, y, rc) return rc if __name__ == "__main__": n = 1024**2 for dtype in [torch.float32, torch.float16, torch.bfloat16]: # Validate correctness first inp = torch.rand(2, n, dtype=dtype, device="mps").unbind(0) out = torch.add(*inp) out_metal = metal_add(*inp) if not torch.allclose(out, out_metal): raise RuntimeError(f"out-out_metal.abs().max() is {(out-out_metal).abs().max().item()} for {dtype}") eager_t = bench_binary(n, torch.add, dtype) metal_t = bench_binary(n, metal_add, dtype) use_msec = eager_t.mean > 1e-4 or metal_t.mean > 1e-4 multiplier = 1e3 if use_msec else 1e6 uname = "msec" if use_msec else "usec" print(f"torch.add()x3 {str(dtype):>14} {eager_t.mean*multiplier:>7.2f} {uname} metal_add()x3: 
{metal_t.mean*multiplier:>7.2f} {uname} speedup: {eager_t.mean/metal_t.mean:>7.2f}") ``` On M1 pro Metal implementation of the same shader runs 20% faster than MPS one for 1 million elements ``` torch.add()x3 torch.float32 0.53 msec metal_add()x3: 0.42 msec speedup: 1.27 torch.add()x3 torch.float16 0.45 msec metal_add()x3: 0.37 msec speedup: 1.21 torch.add()x3 torch.bfloat16 0.44 msec metal_add()x3: 0.37 msec speedup: 1.19 ``` More involved example can be seen here: https://github.com/pytorch/pytorch/pull/143656 ### Versions 2.5.1, nightly cc @msaroufim @kulinseth @albanD @DenisVieriu97 @jhavukainen
module: performance,triaged,module: mps
low
Critical
2,760,059,559
pytorch
`aten._assert_scalar` can hard error the partitioner
internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1567692087202330/ (second xref: https://fb.workplace.com/groups/1075192433118967/posts/1574136133224592/?comment_id=1575214129783459&reply_comment_id=1577334836238055) I haven't been able to run the internal repro properly, but I did make a (hopefully representative) tiny OSS repro: ``` import torch torch._dynamo.config.capture_dynamic_output_shape_ops = True from torch._functorch import config config.ban_recompute_not_in_allowlist = False @torch.compile(backend="aot_eager") def f(x): y = x.nonzero() tmp = torch.ones_like(y) return x.sum() + tmp.sum() x = torch.ones(4, requires_grad=True) out = f(x) ``` cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519 @yf225
triaged,oncall: pt2,module: dynamic shapes,module: aotdispatch,module: pt2-dispatcher
low
Critical
2,760,067,448
ui
[feat]: add display icons on mobile too
### Feature description Add functionality to display icons in the sidebar on mobile screens. ### Affected component/components sidebar ### Additional Context _No response_ ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues and PRs
area: request
low
Minor
2,760,085,236
svelte
`Spring` isn't deeply reactive (`target` cannot be mutated)
### Describe the bug I have something like this in my app: ```ts export class BrokenCounter { #spring = new Spring({ value: 0 }); get count() { return this.#spring.current.value } incr() { this.#spring.target.value++ } decr() { this.#spring.target.value-- } } ``` **Rather than re-assigning `target` to a new object, I directly mutate the `target` object.** Doing `target.value++` instead of `target = { value: target.value + 1 }` causes the `Spring` to "miss" the update. It surprised me because I assumed `target` would be proxied by the `Spring` (to make it deeply reactive), just like `$state`. Plus, `Spring` already "looks deeply" into the `target` value (to animate individual properties and array items). Creating a new object and assigning it to `target` makes for a quick workaround, but it is a bit inefficient and in some situations will lead to a lot of unnecessary garbage being created! This might be a regression from `spring`, which allowed mutating its target through the `update` method/function (I think?). I see three potential solutions to this issue: 1. Make `Spring` (and maybe others?) deeply reactive just like `$state` 2. Warn against this in the documentation and hope people read it 3. Somehow detect mutations to `target` and issue a warning in the console (probably just in development) ### Reproduction https://svelte.dev/playground/b9defdca3842490bb907fcfadaedfecc?version=5.16.0 ### Logs _No response_ ### System Info ```shell System: OS: macOS 15.1.1 CPU: (8) arm64 Apple M2 Memory: 72.67 MB / 16.00 GB Shell: 5.9 - /bin/zsh Binaries: Node: 22.11.0 - ~/Library/pnpm/node Yarn: 1.22.22 - ~/Library/pnpm/yarn npm: 10.9.0 - ~/Library/pnpm/npm pnpm: 9.14.2 - ~/Library/pnpm/pnpm bun: 1.1.2 - ~/.bun/bin/bun Browsers: Safari: 18.1.1 ``` ### Severity annoyance
transition/animation
low
Critical
2,760,105,286
flutter
Decide if `.ci.yaml` parser should infer "required" `runIf` deps
This is a TODO to discuss with @jtmcdole and @christopherfujino after the holidays as a part of https://github.com/flutter/cocoon/pull/4137. As of https://github.com/flutter/cocoon/pull/4137, if `runIf` is used, both: - `DEPS` - `.ci.yaml` (or `engine/src/flutter/.ci.yaml`) ... are required entries. @yjbanov asked in https://github.com/flutter/cocoon/pull/4137 if it would be possible to have the `.ci.yaml` parser auto-infer these entries, instead of requiring them explicitly for each task. I can see PROs and CONs to both approaches, so we should discuss this informally before making a decision.
c: proposal,team-infra,P2,c: tech-debt,triaged-infra,monorepo
low
Minor
2,760,108,234
godot
External Editor line flag differs in behavior from internal editor
### Tested versions - Tested in Godot v4.3.stable.official [77dcf97d8] - Reproducible in both VSCode and SublimeText - Unsure if it applies to any other editor ### System information Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - AMD Radeon(TM) Vega 8 Graphics (Advanced Micro Devices, Inc.; 31.0.12027.9001) - AMD Ryzen 3 3200G with Radeon Vega Graphics (4 Threads) ### Issue description When using an external editor, files open 1 line before the expected line while debugging. Compare: ![Image](https://github.com/user-attachments/assets/5682ed17-27d3-414d-9aa8-b00e66f50b9c) ![Image](https://github.com/user-attachments/assets/871bca5c-43c3-410e-a6be-ead5c1201764) This happens with the official VSCode settings as well, which is why I tried a different one here. I'm unfamiliar with how this works internally, but this seems to be the line that replaces the flag for the external editor: https://github.com/godotengine/godot/blob/99a8ab795d65e816ea7c452aa0fb55d02385c048/editor/plugins/script_editor_plugin.cpp#L2508 And I found this case where it is being modified: https://github.com/godotengine/godot/blob/99a8ab795d65e816ea7c452aa0fb55d02385c048/editor/plugins/script_editor_plugin.cpp#L617 It didn't seem like it was possible to modify the line flag directly in the editor settings in any way (like adding a +1) ### Steps to reproduce - Configure a project with an external editor, like VSCode - Use External Editor for Debugging - insert a breakpoint at any line - see the wrong line being opened ### Minimal reproduction project (MRP) N/A
bug,topic:editor,needs testing
low
Critical
2,760,165,150
flutter
[Android] Investigate implementing `autoSensitive` sensitive content mode
The initial implementation of sensitive content support on Android (to address https://github.com/flutter/flutter/issues/150218) will not implement the `autoSensitive` mode, where Android uses a heuristic based on autofill hints to detect sensitive content, see https://developer.android.com/reference/android/view/View#CONTENT_SENSITIVITY_AUTO. Once the initial implementation lands, we should consider implementing this mode, which may require usage of additional APIs. @gmackall has already done some investigation as to how this may be done in https://github.com/flutter/engine/compare/main...gmackall:engine:content_sensitivity_auto_poc.
platform-android,P2,team-android,triaged-android
low
Minor
2,760,165,319
rust
Tracking issue for release notes of #133073: `--nocapture` doesn't follow common CLI conventions, making it a stumbling block to people debugging failures
This issue tracks the release notes text for #133073. ### Steps - [ ] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ````markdown # Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...) - [`--nocapture` doesn't follow common CLI conventions, making it a stumbling block to people debugging failures](https://github.com/rust-lang/rust/issues/133073) ```` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ````markdown ```` cc @epage -- origin issue/PR authors and assignees for starting to draft text
relnotes,A-libtest,T-testing-devex,relnotes-tracking-issue
low
Critical
2,760,170,573
go
cmd/cgo/internal/testplugin: TestMethod2 failures
``` #!watchflakes default <- pkg == "cmd/cgo/internal/testplugin" && test == "TestMethod2" ``` Issue created automatically to collect these failures. Example ([log](https://ci.chromium.org/b/8727708871676804273)): === RUN TestMethod2 ( GOPATH=$TMPDIR PWD=$TMPDIR/src/testplugin LD_LIBRARY_PATH=$TMPDIR/src/testplugin /Users/swarming/.swarming/w/ir/x/w/goroot/bin/go build -gcflags '' -buildmode=plugin -o method2.so ./method2/plugin.go ) plugin_test.go:332: /Users/swarming/.swarming/w/ir/x/w/goroot/bin/go build -gcflags -buildmode=plugin -o method2.so ./method2/plugin.go: exit status 1 # command-line-arguments ../../../../w/goroot/pkg/tool/darwin_amd64/link: running clang failed: exit status 1 /usr/bin/clang -arch x86_64 -m64 -Wl,-headerpad,1144 -Wl,-flat_namespace -Wl,-bind_at_load -dynamiclib -o a.out.so -Qunused-arguments ../../../go-link-3320883559/go.o ../../../go-link-3320883559/000000.o ../../../go-link-3320883559/000001.o ../../../go-link-3320883559/000002.o ../../../go-link-3320883559/000003.o ../../../go-link-3320883559/000004.o ../../../go-link-3320883559/000005.o ../../../go-link-3320883559/000006.o ../../../go-link-3320883559/000007.o ../../../go-link-3320883559/000008.o ../../../go-link-3320883559/000009.o -O2 -g -lpthread ld: warning: -bind_at_load is deprecated on macOS 0 0x1001acc3b __assert_rtn + 64 1 0x1000f146c ld::AliasAtom::targetOfAliasAtom() const + 236 2 0x1000cb07f ld::setAtomDebugFileInfo(ld::Atom const*, mach_o::DebugNoteFileInfo const*) + 79 3 0x1000aa6ff ld::AtomPlacement::forEachAtom(void (ld::Atom const*, bool&) block_pointer) const + 287 4 0x1000c75ef ld::InputFiles::SliceParser::parseObjectFile(mach_o::Header const*) const + 27951 5 0x1000d6c91 ld::InputFiles::parseAllFiles(void (ld::AtomFile const*) block_pointer)::$_7::operator()(unsigned long, ld::FileInfo const&) const + 657 6 0x7ff8047d1066 _dispatch_client_callout2 + 8 7 0x7ff8047e418f _dispatch_apply_invoke_and_wait + 213 8 0x7ff8047e3692 _dispatch_apply_with_attr_f + 1207 9 
0x7ff8047e3847 dispatch_apply + 45 10 0x1001736b2 ld::AtomFileConsolidator::parseFiles(bool) + 370 11 0x1000f8597 main + 12471 ld: Assertion failed: (targetIndex < atoms.size()), function directTargetNoFollow, file Fixup.cpp, line 374. clang: error: linker command failed with exit code 1 (use -v to see invocation) --- FAIL: TestMethod2 (1.10s) — [watchflakes](https://go.dev/wiki/Watchflakes)
NeedsInvestigation
low
Critical
2,760,176,146
flutter
PageView's nextPage with Curves.elasticInOut doesn't navigate to the next page
I'm encountering an issue with the PageView widget in Flutter. When using pageController.nextPage with the Curves.elasticInOut curve, the page animates as expected (with the elastic effect), but it doesn't actually navigate to the next page. It stays on the current page. Other curves, like Curves.easeInOut, work correctly. ### Steps to reproduce 1. Create a PageView with a PageController. 2. Set physics to NeverScrollableScrollPhysics to disable manual scrolling. 3. Implement a button or other interaction that calls pageController.nextPage with duration and Curves.elasticInOut. 4. Observe that the page animates elastically but does not transition to the next page. 5. Change the curve to Curves.easeInOut and observe the page transitions as expected. ### Expected results The PageView should smoothly transition to the next page with the elastic in-out animation provided by Curves.elasticInOut. ### Actual results The PageView performs the elastic animation but remains on the current page. It does not navigate to the next page. ### Code sample ```dart import 'package:flutter/material.dart'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( appBar: AppBar(title: const Text('PageView Bug')), body: const MyHomePage(), ), ); } } class MyHomePage extends StatefulWidget { const MyHomePage({Key? 
key}) : super(key: key); @override State<MyHomePage> createState() => _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { final PageController _pageController = PageController(); void _nextPage() { _pageController.nextPage( duration: const Duration(milliseconds: 500), curve: Curves.elasticInOut, // This causes the issue ); //_pageController.nextPage( //duration: const Duration(milliseconds: 500), //curve: Curves.easeInOut, // This works correctly //); } @override Widget build(BuildContext context) { return Column( children: [ Expanded( child: PageView( controller: _pageController, physics: const NeverScrollableScrollPhysics(), children: [ Container(color: Colors.red), Container(color: Colors.blue), ], ), ), ElevatedButton( onPressed: _nextPage, child: const Text('Next Page (elasticInOut)'), ), ], ); } } ``` ### Screenshots or Video _No response_ ### Logs _No response_ ### Flutter Doctor output ```dart [√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.26100.2605], locale tr-TR) • Flutter version 3.27.1 on channel stable at D:\fluttersdk\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 17025dd882 (10 days ago), 2024-12-17 03:23:09 +0900 • Engine revision cb4b5fff73 • Dart version 3.6.0 • DevTools version 2.40.2 [√] Windows Version (Installed version of Windows is version 10 or higher) [√] Android toolchain - develop for Android devices (Android SDK version 35.0.0) • Android SDK at C:\Users\user\AppData\Local\Android\sdk • Platform android-35, build-tools 35.0.0 • Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java • Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11) • All Android licenses accepted. 
[√] Chrome - develop for the web • Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe [√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.12.3) • Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community • Visual Studio Community 2022 version 17.12.35527.113 • Windows 10 SDK version 10.0.22621.0 [√] Android Studio (version 2024.2) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin can be installed from: https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11) [√] VS Code (version 1.96.2) • VS Code at C:\Users\user\AppData\Local\Programs\Microsoft VS Code • Flutter extension can be installed from: https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter [√] Connected device (4 available) • SM G998B (mobile) • emulator-5554 • android-x64 • Android 9 (API 28) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.26100.2605] • Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205 • Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.112 [√] Network resources • All expected network resources are available. • No issues found! ```
framework,f: material design,f: gestures,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28
low
Critical
2,760,193,145
flutter
`dev/bots/analyze_snippet_code.dart` does not allow engine to override `analysis_options.yaml`
See https://github.com/flutter/flutter/blob/29d04353558aca228c52c0b692bf25a8020c0934/dev/bots/analyze_snippet_code.dart#L998-L1001 In https://github.com/flutter/engine/pull/53223, we made an intentional decision to no longer require `always_specify_types`, and indeed that change is still in effect in the [merged monorepo](https://github.com/flutter/flutter/blob/29d04353558aca228c52c0b692bf25a8020c0934/engine/src/flutter/analysis_options.yaml#L36). However, there are scripts that assume certain properties that live outside the engine. I had to undo style suggestions recommended by @jtmcdole (https://github.com/flutter/flutter/pull/160798#discussion_r1896860808) as a result.
engine,team-infra,P2,c: tech-debt,monorepo
low
Minor
2,760,193,915
TypeScript
Typed Object.entries/Object.fromEntries - Or Type Utility to infer Unions with matching length so that a new type safe object can be made
### 🔍 Search Terms "Typed Object.entries", "Typed Object.fromEntries", "Object.fromEntries", "Object.entries", "Infer Union Types" ### ✅ Viability Checklist - [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code - [x] This wouldn't change the runtime behavior of existing JavaScript code - [x] This could be implemented without emitting different JS based on the types of the expressions - [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.) - [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types - [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals ### ⭐ Suggestion I have an issue where I'm trying to Match Two Union Types together to make Object.entries and map() back to a transformed Object.fromEntries(). I believe I almost can do it, but may lack the typescript tool to do a correct inference to make sure two unions with the same length and match up properly. I would like to refer to this stack overflow question I have asked: https://stackoverflow.com/questions/79310390/i-need-help-strongly-typing-typescript-object-entries?noredirect=1#comment139855755_79310390 I think Object.entries() and adding transformations to the Value is a common operation. It would be amazing if typescript had implicitly made it easy for it to correctly become a typed object with Object.fromEntries(). Preferable Feature: The feature suggestion hopes that either native **es** `Object.fromEntries( Object.entries().map(([key, val]) => [key, () => val]) )` can infer value transformations and match a key automatically. ----- Alternative Feature: Type utilities to make the above possible if the user has to create their own types. 
Currently I find it difficult to work with generic unions that match the same length and match them to map back to an object. You can view my code question: https://stackoverflow.com/questions/79310390/i-need-help-strongly-typing-typescript-object-entries?noredirect=1#comment139855755_79310390 and notice that I'm trying to `Extract<Val, Val>` in the case it comes back as a union. The code already feels complex. It would be great if additional type tools can help with Entry Types or Unions with the same length. Maybe a syntax to declaratively state `EntryUnion<UnionOne, UnionTwo, FallbackValue>` which would do its best to match each union index accordingly if the lengths fit or go to a fallback value if UnionTwo does not fit UnionOne length. ```ts // Note: The union matching to create a working Object.fromEntries() works if we incorrectly use infer _V. // ------------------------------------------------------------ // Context: // Entry: ["fruitName", "apple"] | ["description", string] | ["weight", number] | ["isFresh", boolean] // infer K: "fruitName" | "description" | "weight" | "isFresh" // infer _V: "apple" | string | number | boolean // Val: // - Example 1: 123 - number // - Example 2: entry[1] - "apple" | string | number | boolean - Same union as infer _V but does not key match with infer K // - Example 3: () => entry[1] - () => string | number | boolean // - Example 4: { test: entry[1] } - { test: string | number | boolean } type TransformEntry<Entry, Val> = Entry extends [infer K, infer _V] ? [K, Extract<Val, Val>] : never; ``` ### 📃 Motivating Example Please refer to this stack overflow question and view the examples: https://stackoverflow.com/questions/79310390/i-need-help-strongly-typing-typescript-object-entries?noredirect=1#comment139855755_79310390 JS has a large api that deal with the concept of entries that work with Object.fromEntries() that will eventually become an object. 
If we can parse these patterns as special, it would be more type safe rather than doing type assertions to correct the output. - `Object.entries` - `new Map().entries()` - `new Set().entries()` - `Array.entries()` --- - `Object.fromEntries()` ### 💻 Use Cases 1. What do you want to use this for? - I would use this with everyday Object transformation of values. It would be amazing to just iterate objects and tap into a value update with seamless type inference. 2. What shortcomings exist with current approaches? The current approach is more tedious and less type safe. Most consumers apparently just use type assertion when doing Object.fromEntries(Object.entries()) which can be prone to human error. 3. What workarounds are you using in the meantime? This example in the stack overflow question is what I use: https://stackoverflow.com/questions/79310390/i-need-help-strongly-typing-typescript-object-entries?noredirect=1#comment139855755_79310390 or sometimes my own type assertion.
Suggestion,Awaiting More Feedback
low
Critical
2,760,200,913
yt-dlp
[bandcamp] Support user-purchased Bandcamp albums
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm requesting a site-specific feature - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region USA ### Example URLs N/A ### Provide a description that is worded well enough to be understood Currently yt-dlp only supports downloading MP3 of albums which are not free, even if the album has been purchased by the user and the login cookies are supplied (see examples below). It would be nice to support lossless downloading of purchased albums. Side note: it seems like yt-dlp prefers to download the 128 kbps mp3 stream over the v0 stream (which ends up being about twice the filesize), which I believe is a separate bug.
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell yt-dlp https://cadeparker.bandcamp.com/track/ghettotech-forever -F [Bandcamp] Extracting URL: https://cadeparker.bandcamp.com/track/ghettotech-forever [Bandcamp] ghettotech-forever: Downloading webpage [info] Available formats for 1180292997: ID EXT RESOLUTION │ FILESIZE TBR PROTO │ VCODEC ACODEC ABR ────────────────────────────────────────────────────────────────────── mp3-128 mp3 audio only │ ≈12.00MiB 128k https │ audio only mp3 128k yt-dlp https://cadeparker.bandcamp.com/track/ghettotech-forever -F --cookies-from-browser firefox [Bandcamp] Extracting URL: https://cadeparker.bandcamp.com/track/ghettotech-forever [Bandcamp] ghettotech-forever: Downloading webpage Extracting cookies from firefox Extracted ____ cookies from firefox [info] Available formats for 1180292997: ID EXT RESOLUTION │ FILESIZE TBR PROTO │ VCODEC ACODEC ABR ────────────────────────────────────────────────────────────────────── mp3-v0 mp3 audio only │ https │ audio only mp3 mp3-128 mp3 audio only │ ≈12.00MiB 128k https │ audio only mp3 128k ```
account-needed,site-enhancement
low
Critical
2,760,201,171
rust
strange borrowing suggestion
Rustc suggests borrowing of a function's return type when there is a trait not implemented error for the function's parameters. Not only does the borrow not help, a dereferencing is required instead, but it is suggested for the return type, which is not where the problem is at all. Consider the following program: ```Rust #[derive(Clone, Copy)] struct Hello; trait Tr: Clone + Copy {} impl Tr for Hello {} fn foo<T: Tr>(_v: T, _w: T) {} fn bar<T: Tr>(_v: T) {} fn main() { let hellos = [Hello; 3]; for hi in hellos.iter() { foo(hi, hi); bar(hi); } } ``` On latest nightly `1.85.0-nightly (2024-12-25 7c002ff9a70cb84fd1a9)` (same as on stable and beta): ``` error[E0277]: the trait bound `&Hello: Tr` is not satisfied --> src/main.rs:14:9 | 14 | foo(hi, hi); | ^^^ the trait `Tr` is not implemented for `&Hello` | note: required by a bound in `foo` --> src/main.rs:7:11 | 7 | fn foo<T: Tr>(_v: T, _w: T) {} | ^^ required by this bound in `foo` help: consider borrowing here | 14 | &foo(hi, hi); | + error[E0277]: the trait bound `&Hello: Tr` is not satisfied --> src/main.rs:15:13 | 15 | bar(hi); | --- ^^ the trait `Tr` is not implemented for `&Hello` | | | required by a bound introduced by this call | = help: the trait `Tr` is implemented for `Hello` note: required by a bound in `bar` --> src/main.rs:9:11 | 9 | fn bar<T: Tr>(_v: T) {} | ^^ required by this bound in `bar` ``` The example shows that if there is one parameter, a borrow is not suggested. It happens only if there are two parameters. Related (but not the same): #132041
A-diagnostics,A-borrow-checker,T-compiler
low
Critical
2,760,201,584
flutter
SearchAnchor.bar listener does not fire open/close events
### Steps to reproduce 1- Create a SearchAnchor.bar() widget 2- Add a listener to the `SearchController` ```dart _searchController.addListener(() { if (_searchController.isOpen) { debugPrint('SearchAnchor.bar() is open.'); } else { debugPrint('SearchAnchor.bar() is closed.'); } }); ``` 3- It won't report correctly whether the widget is open or not ### Expected results Know whether the widget is open ### Actual results The result is uncertain: it only changes to open if you type something in the bar. ### Code sample [SearchError.zip](https://github.com/user-attachments/files/18255911/SearchError.zip) ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console Doctor summary (to see all details, run flutter doctor -v): [√] Flutter (Channel master, 3.28.0-2.0.pre.38578, on Microsoft Windows [versao 10.0.26100.2605], locale pt-BR) [√] Windows Version (11 Home 64-bit, 24H2, 2009) [√] Android toolchain - develop for Android devices (Android SDK version 35.0.0) [√] Chrome - develop for the web [X] Visual Studio - develop Windows apps X Visual Studio not installed; this is necessary to develop Windows apps. Download at https://visualstudio.microsoft.com/downloads/. Please install the "Desktop development with C++" workload, including all of its default components [√] Android Studio (version 2024.1) [√] VS Code (version 1.96.2) [√] Connected device (3 available) [√] Network resources ! Doctor found issues in 1 category. ``` </details>
framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28
low
Critical
2,760,211,205
yt-dlp
[bandcamp] yt-dlp chooses worse quality MP3 on purchased albums
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region USA ### Provide a description that is worded well enough to be understood When extracting a track/album which the user has already purchased, mp3-v0 and mp3-128 formats are shown. The mp3-v0 format is higher quality than mp3-128, yet yt-dlp chooses the mp3-128 format to download instead. New to the codebase but looks like the format `quality` option should be set here to prefer v0 over 128 mp3. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell yt-dlp https://cadeparker.bandcamp.com/track/ghettotech-forever -F --cookies-from-browser firefox [Bandcamp] Extracting URL: https://cadeparker.bandcamp.com/track/ghettotech-forever [Bandcamp] ghettotech-forever: Downloading webpage Extracting cookies from firefox Extracted 2257 cookies from firefox [info] Available formats for 1180292997: ID EXT RESOLUTION │ FILESIZE TBR PROTO │ VCODEC ACODEC ABR ────────────────────────────────────────────────────────────────────── mp3-v0 mp3 audio only │ https │ audio only mp3 mp3-128 mp3 audio only │ ≈12.00MiB 128k https │ audio only mp3 128k yt-dlp https://cadeparker.bandcamp.com/track/ghettotech-forever --cookies-from-browser firefox -vU [debug] Command-line config: ['https://cadeparker.bandcamp.com/track/ghettotech-forever', '--cookies-from-browser', 'firefox', '-vU'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] Extracting cookies from firefox Extracted ____ cookies from firefox [Bandcamp] Extracting URL: https://cadeparker.bandcamp.com/track/ghettotech-forever [Bandcamp] ghettotech-forever: Downloading webpage [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] 1180292997: Downloading 1 format(s): mp3-128 ```
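The suggestion above, assigning a `quality` value per format so the sorter prefers `mp3-v0`, can be sketched as follows. The numeric ranks and the `pick_best` helper are hypothetical stand-ins for illustration, not yt-dlp's actual extractor or format-sorting code:

```python
# Hypothetical sketch of the proposed fix: give each Bandcamp format dict a
# 'quality' rank so a quality-aware sorter prefers mp3-v0 over mp3-128.
# The numeric values and pick_best() are illustrative, not yt-dlp code.
FORMAT_QUALITY = {'mp3-v0': 5, 'mp3-128': 0}  # assumed ranking: higher wins

def pick_best(formats):
    # Fall back to -1 when a format carries no quality rank.
    return max(formats, key=lambda f: f.get('quality', -1))

formats = [
    {'format_id': 'mp3-128', 'abr': 128, 'quality': FORMAT_QUALITY['mp3-128']},
    {'format_id': 'mp3-v0', 'quality': FORMAT_QUALITY['mp3-v0']},
]
best = pick_best(formats)
```

With a rank like this in place, the default `best` selection would land on `mp3-v0` instead of `mp3-128`.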
account-needed,site-bug,triage
low
Critical
2,760,215,087
flutter
camera_android_camerax and flutter_plugin_android_lifecycle require a higher NDK version than Flutter master's `flutter.ndkVersion`
On Flutter master channel I added the camera plugin to a fresh app, and on Android build I got a scary-looking red "Your project is configured with Android NDK 26.3.11579264, but the following plugin(s) depend on a different Android NDK version:" error: ``` $ flutter create flutter_camera_test_app $ cd flutter_camera_test_app $ flutter pub add camera $ flutter build apk ... Your project is configured with Android NDK 26.3.11579264, but the following plugin(s) depend on a different Android NDK version: - camera_android_camerax requires Android NDK 27.0.12077973 - flutter_plugin_android_lifecycle requires Android NDK 27.0.12077973 Fix this issue by using the highest Android NDK version (they are backward compatible). Add the following to /Users/m/Projects/github-repro/flutter_camera_test_app/android/app/build.gradle.kts: android { ndkVersion = "27.0.12077973" ... } ... ``` The app actually builds fine. The `flutter create` template is `ndkVersion = flutter.ndkVersion` ``` $ flutter doctor -v [!] Flutter (Channel main, 3.28.0-2.0.pre.38578, on macOS 15.1.1 24B91 darwin-arm64, locale en-US) • Flutter version 3.28.0-2.0.pre.38578 on channel main at /Users/m/Projects/flutter ... • Framework revision 29d0435355 (5 hours ago), 2024-12-26 18:40:29 +0100 ``` Tool: 1. As a developer what am I supposed to do about this error, actually follow the instructions to hardcode `ndkVersion`? Why do we have `flutter.ndkVersion` if we immediately recommend the developer overwrite it? I know this error was just greatly improved with https://github.com/flutter/flutter/pull/147809. 2. The app successfully built. What are the consequences of not updating the NDK version as suggested? Can the tool just silently do the work of swapping in the higher version, if it always "just works"? Will devs be doomed to constantly chase this hardcoded value? Package: 1. Why is `camera_android_camerax` and `flutter_plugin_android_lifecycle` requiring higher NDK versions than what's in Flutter master? 
Should that be allowed? Should the package repo have lints to prevent this? Where exactly is this version number coming from, since it's not actually specified in the package (as far as I can see)?
platform-android,tool,a: first hour,P1,team-android,triaged-android
medium
Critical
2,760,217,772
tensorflow
Failing to convert MobileNetV3Large to TFLite w/ Integer q
### 1. System information - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 and Windows 10 WSL - TensorFlow installation (pip package or built from source): 2.10 (on Win 10) and 2.16.2 (on WSL) ### 2. Code ``` import tensorflow as tf import numpy as np from tensorflow.keras.applications import MobileNetV3Large from tensorflow.keras.applications.mobilenet_v3 import preprocess_input import matplotlib.pyplot as plt from scipy.stats import pearsonr # Generate one sample image for testing test_image = np.random.normal(loc=127.5, scale=50, size=(1, 224, 224, 3)) test_image = np.clip(test_image, 0, 255).astype(np.float32) preprocessed_image = preprocess_input(test_image.copy()) # Load model model = MobileNetV3Large( weights='imagenet', include_top=True, input_shape=(224, 224, 3) ) # Get original prediction original_pred = model.predict(preprocessed_image, verbose=0) # Convert to TFLite converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] # Enable dynamic range quantization converter.target_spec.supported_ops = [ tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS ] converter._experimental_disable_per_channel = True converter.experimental_new_converter = True # Convert tflite_model = converter.convert() # Get TFLite prediction interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() interpreter.set_tensor(input_details[0]['index'], preprocessed_image) interpreter.invoke() tflite_pred = interpreter.get_tensor(output_details[0]['index']) # Calculate correlation correlation, _ = pearsonr(original_pred.flatten(), tflite_pred.flatten()) # Visualize plt.figure(figsize=(10, 5)) # Scatter plot plt.subplot(1, 2, 1) plt.scatter(original_pred.flatten(), tflite_pred.flatten(), alpha=0.5) plt.plot([original_pred.min(), original_pred.max()], [original_pred.min(), 
original_pred.max()], 'r--', label=f'Perfect Correlation\nActual: {correlation:.4f}') plt.title('Original vs Quantized Predictions') plt.xlabel('Original Model') plt.ylabel('Quantized Model') plt.legend() # Distribution plot plt.subplot(1, 2, 2) plt.hist(np.abs(original_pred.flatten() - tflite_pred.flatten()), bins=50, alpha=0.75, label='Prediction Differences') plt.title('Distribution of Prediction Differences') plt.xlabel('|Original - Quantized|') plt.ylabel('Count') plt.legend() plt.tight_layout() plt.show() print(f"\nResults:") print(f"Prediction correlation: {correlation:.4f}") print(f"Original model size: {sum(w.nbytes for w in model.get_weights()) / 1024 / 1024:.2f} MB") print(f"Quantized model size: {len(tflite_model) / 1024 / 1024:.2f} MB") print(f"Size reduction: {(1 - len(tflite_model) / sum(w.nbytes for w in model.get_weights())) * 100:.1f}%") ``` ### 3. Failure after conversion 1. TF 2.10 in Win10 Log: Model produces wrong results. See plot made from code: ![image](https://github.com/user-attachments/assets/8823cbfc-88d9-4e2d-9ef7-c8a2adc3ef0a) 2. TF 2.16 in WSL: Model fails to convert. Gets error: `LLVM ERROR: Failed to infer result type(s).` (see log) ### 5. (optional) Any other info / logs I ran this on 2 systems: 1. TF 2.10 in Win10 Log: ``` import sys; print('Python %s on %s' % (sys.version, sys.platform)) D:\code\ai_dev\venv\Scripts\python.exe "C:/Program Files/JetBrains/PyCharm 2023.2.4/plugins/python/helpers/pydev/pydevd.py" --multiprocess --qt-support=auto --client 127.0.0.1 --port 54366 --file C:\Users\Administrator\AppData\Roaming\JetBrains\PyCharm2023.2\scratches\tfmodel_tflite.py Connected to pydev debugger (build 232.10203.26) 2024-12-26 15:17:07.215039: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
2024-12-26 15:17:07.670016: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7423 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:0a:00.0, compute capability: 8.6 2024-12-26 15:17:11.111912: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8906 2024-12-26 15:17:12.036555: I tensorflow/stream_executor/cuda/cuda_blas.cc:1614] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once. WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 5 of 64). These functions will not be directly callable after loading. 2024-12-26 15:17:44.746406: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format. 2024-12-26 15:17:44.746529: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency. 2024-12-26 15:17:44.747230: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: C:\Users\ADMINI~1\AppData\Local\Temp\tmprjeve5nr 2024-12-26 15:17:44.771028: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve } 2024-12-26 15:17:44.771129: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: C:\Users\ADMINI~1\AppData\Local\Temp\tmprjeve5nr 2024-12-26 15:17:44.886049: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled 2024-12-26 15:17:44.904668: I tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle. 
2024-12-26 15:17:45.275249: I tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: C:\Users\ADMINI~1\AppData\Local\Temp\tmprjeve5nr 2024-12-26 15:17:45.402632: I tensorflow/cc/saved_model/loader.cc:305] SavedModel load for tags { serve }; Status: success: OK. Took 655396 microseconds. 2024-12-26 15:17:45.811466: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable. INFO: Created TensorFlow Lite XNNPACK delegate for CPU. ``` 3. TF 2.16.2 in WSL log: ``` /root/ai_dev/.venv/bin/python /root/.pycharm_helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 55955 --file /mnt/c/Users/Administrator/AppData/Roaming/JetBrains/PyCharm2023.2/scratches/tfmodel_tflite.py Connected to pydev debugger (build 232.10203.26) 2024-12-26 15:18:56.434366: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used. 2024-12-26 15:18:57.803991: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used. 2024-12-26 15:18:58.473972: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:479] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2024-12-26 15:18:58.972456: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:10575] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2024-12-26 15:18:58.975123: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1442] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2024-12-26 15:18:59.910805: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. 
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2024-12-26 15:19:06.124533: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2024-12-26 15:19:15.989425: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0a:00.0/numa_node Your kernel may have been built without NUMA support. 2024-12-26 15:19:17.013008: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... WARNING: All log messages before absl::InitializeLog() is called are written to STDERR W0000 00:00:1735255170.947341 469943 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format. W0000 00:00:1735255170.947401 469943 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency. 2024-12-26 15:19:30.948060: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: /tmp/tmp9lkkwnp9 2024-12-26 15:19:30.953309: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve } 2024-12-26 15:19:30.953334: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: /tmp/tmp9lkkwnp9 2024-12-26 15:19:31.011901: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled 2024-12-26 15:19:31.020594: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle. 
2024-12-26 15:19:31.231606: I tensorflow/cc/saved_model/loader.cc:218] Running initialization op on SavedModel bundle at path: /tmp/tmp9lkkwnp9 2024-12-26 15:19:31.297302: I tensorflow/cc/saved_model/loader.cc:317] SavedModel load for tags { serve }; Status: success: OK. Took 349244 microseconds. 2024-12-26 15:19:31.779723: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable. loc(fused["ReadVariableOp:", callsite("MobileNetV3Large_1/conv_1/convolution/ReadVariableOp@__inference_serving_default_5035"("/root/.pycharm_helpers/pydev/pydevd.py":2199:1) at callsite("/root/.pycharm_helpers/pydev/pydevd.py":2181:1 at callsite("/root/.pycharm_helpers/pydev/pydevd.py":1493:1 at callsite("/root/.pycharm_helpers/pydev/pydevd.py":1500:1 at callsite("/root/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py":18:1 at callsite("/mnt/c/Users/Administrator/AppData/Roaming/JetBrains/PyCharm2023.2/scratches/tfmodel_tflite.py":34:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/tensorflow/lite/python/lite.py":1175:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/tensorflow/lite/python/lite.py":1129:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/tensorflow/lite/python/lite.py":1636:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/tensorflow/lite/python/lite.py":1614:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/tensorflow/lite/python/convert_phase.py":205:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/tensorflow/lite/python/lite.py":1537:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/backend/tensorflow/layer.py":58:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/backend/tensorflow/layer.py":112:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py":117:1 at 
callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/layers/layer.py":899:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py":117:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/ops/operation.py":46:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py":156:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/models/functional.py":182:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/ops/function.py":171:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/models/functional.py":632:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py":117:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/layers/layer.py":899:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py":117:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/ops/operation.py":46:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py":156:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/layers/convolutional/base_conv.py":243:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/layers/convolutional/base_conv.py":233:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/ops/nn.py":1183:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/backend/tensorflow/nn.py":301:1 at callsite("/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/backend/tensorflow/nn.py":274:1 at "/root/ai_dev/.venv/lib/python3.10/site-packages/keras/src/backend/tensorflow/core.py":85:1))))))))))))))))))))))))))))))))]): error: missing attribute 'value' LLVM ERROR: Failed to infer result type(s). Process finished with exit code 134 ```
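As a side note on the wrong TF 2.10 results: the script sets the private flag `converter._experimental_disable_per_channel = True`, and per-tensor weight quantization loses much more accuracy than per-channel quantization when channel ranges are uneven, as in MobileNetV3's depthwise layers. A small numpy sketch of that trade-off (an illustration of the effect, not TFLite converter internals):

```python
import numpy as np

# Fake-quantize a weight matrix to int8 and back, per-tensor vs per-channel.
# This only illustrates why forcing per-tensor quantization (what the private
# `_experimental_disable_per_channel = True` flag does) can hurt accuracy when
# channel ranges are uneven; it is not TFLite converter internals.
def fake_quantize(w, per_channel):
    if per_channel:
        scale = np.abs(w).max(axis=1, keepdims=True) / 127.0  # one scale per row
    else:
        scale = np.abs(w).max() / 127.0                       # one global scale
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale  # dequantized back to float

rng = np.random.default_rng(0)
# Rows with very different dynamic ranges, as in depthwise-conv weights.
w = rng.normal(size=(8, 16)) * rng.uniform(0.01, 2.0, size=(8, 1))
err_per_tensor = np.abs(fake_quantize(w, per_channel=False) - w).mean()
err_per_channel = np.abs(fake_quantize(w, per_channel=True) - w).mean()
```

On matrices like this, the per-channel reconstruction error is consistently smaller, which may explain part of the accuracy gap even before the 2.16 converter crash is considered.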
stat:awaiting tensorflower,type:bug,comp:lite,TFLiteConverter,TF 2.16
low
Critical
2,760,218,075
PowerToys
Always On Top - Reading its config file (settings.json) way too often
### Microsoft PowerToys version 0.87.1 ### Installation method GitHub ### Running as admin No ### Area(s) with issue? Always on Top ### Steps to reproduce Enable the Always On Top feature. Monitor its activity using Process Monitor or a similar tool. ### ✔️ Expected Behavior The tool should load its settings file once and keep its config in memory (it's tiny). If any other PowerToy tools exhibit the same behaviour then they could also benefit from such a change. ### ❌ Actual Behavior The program reads its `settings.json` file *every single time* window focus changes, which is very inefficient. ![Image](https://github.com/user-attachments/assets/8b19caa0-d1a1-4406-9b2b-e15476dca3d6) In the interests of improving SSD performance and longevity, I'll be disabling this feature for now, just like I disable the Indexing service. ### Other Software _No response_
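The fix being requested, load once and re-read only when the file actually changes, is a standard mtime-guarded cache. An illustrative sketch follows (in Python for brevity; the actual module is C++, and this class is hypothetical, not PowerToys code):

```python
import json
import os

class CachedSettings:
    """Parse settings.json once; re-read it only when its mtime changes.

    Hypothetical illustration of the requested behaviour: a focus-change
    handler can call get() on every event, and the disk is touched only
    with a cheap stat(), never a full read + parse, until the file is
    actually edited.
    """

    def __init__(self, path):
        self._path = path
        self._mtime = None
        self._settings = None

    def get(self):
        mtime = os.stat(self._path).st_mtime_ns
        if mtime != self._mtime:  # first call, or the file was modified
            with open(self._path, encoding='utf-8') as f:
                self._settings = json.load(f)
            self._mtime = mtime
        return self._settings
```

A file-system change watcher would remove even the per-event `stat`, but an mtime guard alone already cuts the steady-state I/O to zero reads.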
Issue-Bug,Product-FancyZones,Resolution-Fix Committed,Product-Always On Top
low
Major
2,760,238,677
rust
Warn on Fullwidth Exclamation Mark (U+FF01) in comment
```rust //! A //！B //! C ``` For the above Rust source code, rustdoc produces documentation containing only "A C". The line containing "B" is only an ordinary comment: it starts with a Fullwidth Exclamation Mark (U+FF01, https://www.compart.com/en/unicode/U+FF01) rather than an ASCII `!`. I caught this in a real-world documentation PR from a Chinese contributor. See https://github.com/rust-lang/rust/pull/134241#pullrequestreview-2523541705. In GitHub, the distinction is practically invisible. <p align="center"><img src="https://github.com/user-attachments/assets/fefc1fb0-a20a-4bf5-afdf-72a9041a5426" width="700"></p> In my text editor, it is more obvious. <p align="center"><img src="https://github.com/user-attachments/assets/37c64868-9d99-42d6-b16d-cd57b8a0ffed" width="700"></p> Would it be reasonable for rustc and/or rustdoc to have reported a lint on such code?
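Until such a lint exists, a repository can catch this with a few lines of script. This sketch (not part of rustc or rustdoc, just an illustration) flags line comments whose third character is a fullwidth lookalike of `!` or `/`:

```python
# Flag line comments whose marker is followed by a fullwidth lookalike,
# e.g. `//！` (U+FF01) instead of `//!`: rustdoc treats the former as a
# plain comment and silently drops the line from the generated docs.
FULLWIDTH = {'\uFF01': '!', '\uFF0F': '/'}

def suspicious_doc_comments(source):
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        stripped = line.lstrip()
        if stripped.startswith('//') and len(stripped) > 2 and stripped[2] in FULLWIDTH:
            hits.append((lineno, stripped))
    return hits

# The three-line sample from this report, with the fullwidth '!' on line 2:
src = "//! A\n//\uFF01B\n//! C\n"
hits = suspicious_doc_comments(src)
```

On the sample, only the middle line is reported, which is exactly the line rustdoc silently dropped.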
T-rustdoc,A-lints,A-Unicode,T-compiler,C-feature-request
low
Major
2,760,254,098
pytorch
[RFC] Identifying dynamic int8 symmetric vs asymmetric quantization of activation/input in Inductor-CPU
### 🚀 The feature, motivation and pitch ## Problem statement If int8 asymmetric quantization is used, at Inductor compile time, the input used while invoking `torch.compile` might be such that the zero-points of activation for some quantized linear may _coincidentally_ be zero (per-tensor quantization) or all zeros (per-token quantization). In such a case, we might mistakenly treat this case as symmetric quantization. Please suggest some solutions to this problem besides these two. ## Potential solution 1 One solution is to make zero-point optional for dequantizing an int8 tensor. In torchao, it is possible to make some changes to ensure that int8 symmetric quantization would not have zero-points, so they wouldn’t be present in the Inductor graph. But similar changes would have to be made for PT2E quantization as well. Nevertheless, if this change is made only in torchao, then we could still leverage this change with Inductor patterns corresponding to int8 symmetrically quantized activations that don't use zero-points for dequantization, but users who don't use torchao wouldn't benefit. cc @chauhang @penguinwu @leslie-fang-intel @Xia-Weiwen # Alternatives ## Potential solution 2 For per-tensor quantization, we could add a runtime check in Inductor codegened code that'd detect whether the int8 quantization-type of an activation is symmetric or asymmetric (by checking if zp is 0). But this approach may not be as performant for per-channel quantization (would need to check if any zp value is non-zero). #### This approach needs some new infra in Inductor-CPU codegen - Support of two variants of epilogues, both of which are compiled, but only one of which is used at runtime depending upon some check. In this case, one variant would only apply activation & weight scales, while the second one would also compute compensation; the decision to use one of them is to be made at runtime for the whole quantized linear. 
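The runtime check in potential solution 2 boils down to "are all activation zero-points zero?", with the cheap epilogue taken when they are. A numpy sketch of that dispatch (illustrative only; the real selection would live in Inductor's codegened epilogues, and the function here is a made-up name):

```python
import numpy as np

def linear_epilogue(acc_i32, s_x, s_w, zp_x, w_colsum):
    """Apply scales, adding zero-point compensation only when needed.

    acc_i32:  int32 accumulator of the plain int8 GEMM, q_x @ q_w
    zp_x:     per-token activation zero-points; all-zero means symmetric
    w_colsum: precomputed per-column sums of the int8 weights
    """
    if np.any(zp_x != 0):  # runtime check: asymmetric activation quantization
        acc_i32 = acc_i32 - zp_x[:, None] * w_colsum
    return acc_i32 * s_x[:, None] * s_w  # pointwise scales on either path

rng = np.random.default_rng(0)
q_x = rng.integers(-128, 128, size=(4, 8))   # int8 activations
q_w = rng.integers(-127, 128, size=(8, 3))   # symmetric int8 weights
zp_x = rng.integers(-5, 6, size=(4,))        # per-token zero-points
s_x = rng.uniform(0.01, 0.1, size=(4,))      # activation scales
s_w = rng.uniform(0.01, 0.1, size=(3,))      # weight scales

out = linear_epilogue(q_x @ q_w, s_x, s_w, zp_x, q_w.sum(axis=0))
ref = ((q_x - zp_x[:, None]) * s_x[:, None]) @ (q_w * s_w)  # dequantize-first
```

For a whole quantized linear the branch is taken once, so its cost is the zero-point reduction, not a per-element check.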
### Additional context We can compute int8 quantized linear with int8xint8 -> int32 GEMMs, so long as weights are not asymmetrically quantized. If activations are asymmetrically quantized, we can apply compensation pertaining to zero-points of activation, after applying activation & weight scales. If activations are symmetrically quantized, the computation is straightforward, and after int8 x int8 -> int32 GEMMs, we only need to apply pointwise activation & weight scales (which can happen at the block-level if we apply epilogues at micro-kernel level).
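The compensation described above can be checked numerically: with symmetric int8 weights, `(q_x - zp_x) @ q_w` equals `q_x @ q_w - zp_x * colsum(q_w)`, so the int8 x int8 -> int32 GEMM never needs to materialize `q_x - zp_x`. A small numpy check (illustrative, and exact in integer arithmetic):

```python
import numpy as np

rng = np.random.default_rng(1)
q_x = rng.integers(-128, 128, size=(5, 7))   # asymmetrically quantized int8 activations
zp_x = rng.integers(-10, 11, size=(5, 1))    # per-token zero-points
q_w = rng.integers(-127, 128, size=(7, 3))   # symmetrically quantized int8 weights

ref = (q_x - zp_x) @ q_w                     # dequantize-first form
comp = q_x @ q_w - zp_x * q_w.sum(axis=0)    # plain int GEMM + compensation
```

Since the per-column weight sums depend only on the (static) weights, they can be precomputed once, leaving the compensation as a rank-1 correction applied in the epilogue.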
oncall: pt2,oncall: cpu inductor
low
Minor
2,760,258,584
transformers
`model.config.to_diff_dict()` delivers different result to `model.save_pretrained()`
### System Info - `transformers` version: 4.48.0.dev0 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.12.5 - Huggingface_hub version: 0.25.1 - Safetensors version: 0.4.5 - Accelerate version: 0.34.2 - Accelerate config: not found - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA GeForce RTX 4090 ### Who can help? @ArthurZuc ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have a use case that requires that model weights always be encrypted when in local storage and only be decrypted in memory. As a result, it is not an option to use `model.from_pretrained(dir)`. Instead, my workaround has been to do: ```python import msgspec from pyfakefs.fake_filesystem_unittest import Patcher as ffspatcher from transformers import AutoConfig, AutoModelForSequenceClassification, PreTrainedModel weights = {...} # Deserialized to `dict` from an encrypted file elsewhere. config = {...} # Deserialized to `dict` from an encrypted file elsewhere. 
json_encoder = msgspec.json.encode with ffspatcher() as patcher: fakepath = f'FAKE_FILE_SYSTEM://config.json' patcher.fs.create_file(fakepath, contents = json_encoder(config)) config = AutoConfig.from_pretrained(fakepath) model: PreTrainedModel = AutoModelForSequenceClassification.from_config(config) model.load_state_dict(weights) ``` The problem I've noticed, however, is that when I serialize my config like so: ```python config = model.config.to_diff_dict() ``` The resulting config includes the key `_attn_implementation_autoset` set to `True`, whereas the actual config of the model does not include that key. As a result, when I try loading the config with `AutoConfig.from_pretrained()`, it ends up not using the default attention implementation for my model, SDPA, delivering effectively a different model with different logits. My current hotfix is to just delete the key `_attn_implementation_autoset` from all of my configs. But is it really necessary to add that key to `to_diff_dict()` when it is not added when you do `save_pretrained()`? ### Expected behavior I get the same model in a reproducible way whether I save the config with `to_diff_dict()` or `save_pretrained()`.
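The hotfix can be wrapped where the config is serialized. A sketch of the workaround (the helper name is mine; the key name is the one observed in this report):

```python
# Workaround sketch: strip the autoset marker at serialization time so that
# AutoConfig.from_pretrained() re-runs the default attention-implementation
# selection (SDPA here). The helper is hypothetical; the key name is the one
# observed in this report.
def config_to_portable_dict(model):
    cfg = model.config.to_diff_dict()
    cfg.pop('_attn_implementation_autoset', None)  # drop key absent from save_pretrained() output
    return cfg
```

This keeps the serialized dict aligned with what `save_pretrained()` writes, at the cost of having to remember the workaround at every serialization site.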
Core: Modeling,bug
low
Minor
2,760,265,574
flutter
[Proposal] Add a onClose callback to SearchAnchor to intercept suggestion close events
### Use case I need to call a function after SearchAnchor View closes ### Proposal Add an "onClose" function to the widget
c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design
low
Minor
2,760,268,096
pytorch
`torch.accelerator` cross-device utilities and properties
### 🚀 The feature, motivation and pitch as suggested by @albanD [here](https://pytorch.slack.com/archives/C3PDTEV8E/p1735120754479929?thread_ts=1735017298.875249&cid=C3PDTEV8E) opening an issue to discuss which cross-device utilities and device property fields should pytorch support. 1. properties report at the moment is inconsistent - `torch.cuda.get_device_properties` does work for CUDA and ROCm but not other accelerators, e.g. one needs to use `torch.hpu.get_device_properties` for Gaudi. - depending on whether it's CUDA or ROCm - the fields it outputs aren't the same - so a programmer can't reliably write cross-device applications - a lot of info is missing, e.g. to get CUDA cores count I have to use `nvidia-settings -q CUDACores -t` or `cuda.bindings.runtime.cudaGetDeviceProperties()` - need to depend on other libs/utils and again this is not cross-device (albeit one could argue that this is a cuda-specific setting, so there is no cross-device cuda-core count - not sure) - @albanD mentioned that `torch.accelerator` API should be overcoming the above issues 2. then let's discuss which cross-device utils should be there. ### cache clearing - one that I started the discussion on is cache clearing - this is important for benchmarking - currently various hacks are used to perform that e.g. see for example how a hardcoded 256MB tensor is used by triton's `do_bench` - [init](https://github.com/triton-lang/triton/blob/a2b398e0bb1b120f31cf386d6ae3261c3ab84207/third_party/nvidia/backend/driver.py#L555-L556), [clearing](https://github.com/triton-lang/triton/blob/6ad95ee4fd9b1e172717323460fd54c250dd7d65/python/triton/testing.py#L120-L127) - so that anybody using that either wastes compute on clearing more cache than there is or as accelerators get bigger 256MB will be not enough and the benchmark will return flawed results. The other complication is that which cache are we clearing? 
In the NVIDIA world it's L1+L2, but for AMD it's L1+L2+AMD Infinity cache (Last Level Cache). You will find the table of high end accelerator caches here https://github.com/stas00/ml-engineering/tree/master/compute/accelerator#caches - it's very inconsistent across accelerators - e.g. Intel Gaudi3 cache can be either L3 or L2 depending on the use case! cc @albanD @guangyey @EikanWang
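For the cache-clearing utility, the point is that the flush-buffer size should come from device-reported cache sizes rather than Triton's hardcoded 256 MB. A sketch of the shape such an API could take (everything here, including the `cache_sizes` mapping and its keys, is a proposal rather than an existing torch API):

```python
# Proposal sketch: size the cache-flushing buffer for benchmarking from a
# device-reported cache hierarchy rather than a hardcoded 256 MB. The
# `cache_sizes` mapping and its keys stand in for fields a cross-device
# torch.accelerator get_device_properties() could expose; none of this is
# an existing API.
def flush_buffer_bytes(cache_sizes, levels=('l1', 'l2', 'llc')):
    """Total bytes to overwrite so the listed cache levels are evicted."""
    return sum(cache_sizes.get(level, 0) for level in levels)

# e.g. an accelerator with a large last-level ("Infinity"-style) cache:
sizes = {'l1': 4 * 2**20, 'l2': 8 * 2**20, 'llc': 256 * 2**20}
nbytes = flush_buffer_bytes(sizes)
```

Benchmarks like `do_bench` could then zero a buffer of exactly `nbytes`, wasting no compute on accelerators with small caches and staying correct on ones whose last-level cache already exceeds 256 MB.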
triaged,module: accelerator
low
Minor
2,760,268,255
ui
[bug]: login and authentication blocks are the same
### Describe the bug On the home page we have the block tabs with some shadcn component examples, and I noticed that there is both a "login" tab and an "authentication" tab. They are the same: the tab names differ but the content is identical. This was probably merged by mistake in some PR. ### Affected component/components blocks ### How to reproduce 1. Go to [https://ui.shadcn.com/blocks/authentication](https://ui.shadcn.com/blocks/authentication) 2. Go to [https://ui.shadcn.com/blocks/login](https://ui.shadcn.com/blocks/login) 3. Notice how they are the same. Screenshots: ![image](https://github.com/user-attachments/assets/9f442eea-acf4-4193-9e9a-edeb05c13bea) ![image](https://github.com/user-attachments/assets/be0e5b76-d2e6-4705-a39b-95a72f644524) ### Codesandbox/StackBlitz link _No response_ ### Logs _No response_ ### System Info ```bash Happens in any browser or operating system. ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,760,270,526
godot
[Godot 4.3-4.4beta1] GPUParticle2D animation bug when re-entering the camera frame
### Tested versions

- Reproducible in 4.3 - 4.4beta1, Windows + Linux, AMD and Nvidia

### System information

(Forward+) Godot 4.3 - 4.4beta1, Windows 11, Linux KDE NEON, AMD and Nvidia GPU

### Issue description

GPU particles animate just fine while they are ON SCREEN. If they exit the screen and then return "on camera", the animation goes wrong (values start jumping) and ends abruptly.

I made a video showing the problem: https://www.youtube.com/watch?v=e-aD5Sclv_M&ab_channel=FrancoGast%C3%B3nPellegrini

1. At 0:00, you can see a normal particle fading out (particle animations using alpha values). A normal animation should fade smoothly until it disappears.
2. At 0:24, I move the camera until the particles disappear, then focus on them BEFORE they finish their animations.
3. At 0:30, you can see them "killing themselves" because "the animation finished".
4. At 0:34, I create more particles and confirm that they work just fine while ON camera.
5. At 0:42, I create lots of particles but move only half of them out of the camera. The particles that re-enter the camera get "bugged" and finish abruptly (at least, the fade-out is not smooth).

### Steps to reproduce

In the example project, use ASDW to move the camera, the scroll wheel to zoom, and RIGHT CLICK to spawn particles, as in the video.

### Minimal reproduction project (MRP)

[bug-gpu-particle-2d.zip](https://github.com/user-attachments/files/18445736/bug-gpu-particle-2d.zip)
bug,topic:2d,topic:particles
low
Critical
2,760,272,121
PowerToys
OutOfMemoryException
### Microsoft PowerToys version

0.87.1.0

### Installation method

WinGet

### Running as admin

No

### Area(s) with issue?

General

### Steps to reproduce

```log
[2024-12-27 00:28:09.0331] [INFO] [D:\a\_work\1\s\src\modules\launcher\PowerLauncher\MainWindow.xaml.cs::103] Send Run settings telemetry
[2024-12-27 03:51:27.2711] [FATAL]
## Exception
```

```
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
Source: System.Private.CoreLib
TargetAssembly: System.Private.CoreLib, Version=9.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e
TargetModule: System.Private.CoreLib.dll
TargetSite: Void StartInternal(System.Threading.ThreadHandle, Int32, Int32, Char*)
   at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
   at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
   at System.Threading.Thread.StartCore()
   at System.Threading.PortableThreadPool.WorkerThread.CreateWorkerThread()
   at System.Threading.PortableThreadPool.WorkerThread.MaybeAddWorkingWorker(PortableThreadPool threadPoolInstance)
   at System.Threading.ThreadPool.RequestWorkerThread()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()

## Environment
* Command Line: C:\Users\poo00\AppData\Local\PowerToys\PowerToys.PowerLauncher.dll -powerToysPid 32744 --started-from-runner
* Timestamp: 12/27/2024 03:51:27
* Wox version: 0.87.1.0
* OS Version: Microsoft Windows NT 10.0.26100.0
* IntPtr Length: 8
* x64: True
* CLR Version: 9.0.0
* Installed .NET Framework:
  * v4 Client 4.8.09032
  * v4 Full 4.8.09032
  * v4.0 Client 4.0.0.0

## Assemblies - PowerToys.PowerLauncher
* System.Private.CoreLib, Version=9.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e (C:\Users\poo00\AppData\Local\PowerToys\System.Private.CoreLib.dll)
* PowerToys.PowerLauncher, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null
(C:\Users\poo00\AppData\Local\PowerToys\PowerToys.PowerLauncher.dll) * PresentationFramework, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\PresentationFramework.dll) * WindowsBase, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\WindowsBase.dll) * System.Runtime, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Runtime.dll) * System.Xaml, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 (C:\Users\poo00\AppData\Local\PowerToys\System.Xaml.dll) * System.Runtime.InteropServices, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Runtime.InteropServices.dll) * WinRT.Runtime, Version=2.1.0.0, Culture=neutral, PublicKeyToken=99ea127f02d97709 (C:\Users\poo00\AppData\Local\PowerToys\WinRT.Runtime.dll) * System.Collections, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Collections.dll) * System.Collections.Concurrent, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Collections.Concurrent.dll) * PowerToys.ManagedCommon, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\PowerToys.ManagedCommon.dll) * Wox.Plugin, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\Wox.Plugin.dll) * PowerToys.GPOWrapperProjection, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\PowerToys.GPOWrapperProjection.dll) * PowerToys.Common.UI, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\PowerToys.Common.UI.dll) * System.Threading.Thread, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a 
(C:\Users\poo00\AppData\Local\PowerToys\System.Threading.Thread.dll) * System.Text.Json, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Text.Json.dll) * NLog, Version=5.0.0.0, Culture=neutral, PublicKeyToken=5120e14c03d0593c (C:\Users\poo00\AppData\Local\PowerToys\NLog.dll) * netstandard, Version=2.1.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\netstandard.dll) * TestableIO.System.IO.Abstractions.Wrappers, Version=21.0.0.0, Culture=neutral, PublicKeyToken=96bf224d23c43e59 (C:\Users\poo00\AppData\Local\PowerToys\TestableIO.System.IO.Abstractions.Wrappers.dll) * TestableIO.System.IO.Abstractions, Version=21.0.0.0, Culture=neutral, PublicKeyToken=96bf224d23c43e59 (C:\Users\poo00\AppData\Local\PowerToys\TestableIO.System.IO.Abstractions.dll) * System.IO.FileSystem.DriveInfo, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.IO.FileSystem.DriveInfo.dll) * System.IO.FileSystem.Watcher, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.IO.FileSystem.Watcher.dll) * System.Diagnostics.FileVersionInfo, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Diagnostics.FileVersionInfo.dll) * System.ComponentModel, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.ComponentModel.dll) * System.Threading, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Threading.dll) * System.ComponentModel.Primitives, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.ComponentModel.Primitives.dll) * System.Net.Primitives, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a 
(C:\Users\poo00\AppData\Local\PowerToys\System.Net.Primitives.dll) * System.Net.Mail, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Net.Mail.dll) * System.Private.Uri, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Private.Uri.dll) * System.Linq, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Linq.dll) * System.ObjectModel, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.ObjectModel.dll) * System.Memory, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Memory.dll) * Wox.Infrastructure, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\Wox.Infrastructure.dll) * hyjiacan.py4n, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\hyjiacan.py4n.dll) * PowerToys.ManagedTelemetry, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\PowerToys.ManagedTelemetry.dll) * Microsoft.Win32.Registry, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\Microsoft.Win32.Registry.dll) * System.Diagnostics.Tracing, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Diagnostics.Tracing.dll) * System.Diagnostics.DiagnosticSource, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Diagnostics.DiagnosticSource.dll) * System.IO.Packaging, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.IO.Packaging.dll) * PresentationCore, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 
(C:\Users\poo00\AppData\Local\PowerToys\PresentationCore.dll) * Microsoft.Win32.Primitives, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\Microsoft.Win32.Primitives.dll) * DirectWriteForwarder, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\DirectWriteForwarder.dll) * System.Runtime.Extensions, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Runtime.Extensions.dll) * System.Diagnostics.Debug, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Diagnostics.Debug.dll) * System.Runtime.CompilerServices.VisualC, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Runtime.CompilerServices.VisualC.dll) * System.Collections.NonGeneric, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Collections.NonGeneric.dll) * System.Collections.Specialized, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Collections.Specialized.dll) * System.Configuration.ConfigurationManager, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Configuration.ConfigurationManager.dll) * System.Xml.ReaderWriter, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Xml.ReaderWriter.dll) * System.Private.Xml, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Private.Xml.dll) * System.Net.WebClient, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Net.WebClient.dll) * System.Text.Encoding.Extensions, Version=9.0.0.0, Culture=neutral, 
PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Text.Encoding.Extensions.dll) * System.Numerics.Vectors, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Numerics.Vectors.dll) * System.Threading.ThreadPool, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Threading.ThreadPool.dll) * PowerToys.PowerLauncher.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\zh-CN\PowerToys.PowerLauncher.resources.dll) * System.Diagnostics.TraceSource, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Diagnostics.TraceSource.dll) * System.ComponentModel.TypeConverter, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.ComponentModel.TypeConverter.dll) * System.Windows.Extensions, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Windows.Extensions.dll) * System.Net.Requests, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Net.Requests.dll) * System.Net.WebHeaderCollection, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Net.WebHeaderCollection.dll) * System.Diagnostics.Process, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Diagnostics.Process.dll) * PresentationFramework.Aero2, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\PresentationFramework.Aero2.dll) * System.Diagnostics.StackTrace, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Diagnostics.StackTrace.dll) * 
PowerToys.PowerLauncher.Telemetry, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\PowerToys.PowerLauncher.Telemetry.dll) * PowerToys.Settings.UI.Lib, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\PowerToys.Settings.UI.Lib.dll) * System.Text.Encodings.Web, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Text.Encodings.Web.dll) * System.Runtime.Intrinsics, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Runtime.Intrinsics.dll) * System.Reflection.Emit.Lightweight, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Reflection.Emit.Lightweight.dll) * System.Reflection.Primitives, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Reflection.Primitives.dll) * System.Reflection.Emit.ILGeneration, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Reflection.Emit.ILGeneration.dll) * System.Text.RegularExpressions, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Text.RegularExpressions.dll) * System.Runtime.Loader, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Runtime.Loader.dll) * Microsoft.PowerToys.Run.Plugin.Calculator, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Calculator\Microsoft.PowerToys.Run.Plugin.Calculator.dll) * Microsoft.Plugin.Folder, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Folder\Microsoft.Plugin.Folder.dll) * System.Collections.Immutable, Version=9.0.0.0, Culture=neutral, 
PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Collections.Immutable.dll) * Microsoft.PowerToys.Run.Plugin.History, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\History\Microsoft.PowerToys.Run.Plugin.History.dll) * Microsoft.Plugin.Indexer, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Indexer\Microsoft.Plugin.Indexer.dll) * Microsoft.PowerToys.Run.Plugin.OneNote, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\OneNote\Microsoft.PowerToys.Run.Plugin.OneNote.dll) * Microsoft.PowerToys.Run.Plugin.PowerToys, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\PowerToys\Microsoft.PowerToys.Run.Plugin.PowerToys.dll) * Microsoft.Plugin.Program, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Program\Microsoft.Plugin.Program.dll) * Microsoft.Windows.SDK.NET, Version=10.0.22621.38, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\Microsoft.Windows.SDK.NET.dll) * System.Threading.Overlapped, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Threading.Overlapped.dll) * Microsoft.PowerToys.Run.Plugin.Registry, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Registry\Microsoft.PowerToys.Run.Plugin.Registry.dll) * Microsoft.PowerToys.Run.Plugin.Service, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Service\Microsoft.PowerToys.Run.Plugin.Service.dll) * System.ServiceProcess.ServiceController, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.ServiceProcess.ServiceController.dll) * 
Microsoft.Plugin.Shell, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Shell\Microsoft.Plugin.Shell.dll) * Microsoft.PowerToys.Run.Plugin.System, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\System\Microsoft.PowerToys.Run.Plugin.System.dll) * System.Net.NetworkInformation, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Net.NetworkInformation.dll) * Microsoft.PowerToys.Run.Plugin.TimeDate, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\TimeDate\Microsoft.PowerToys.Run.Plugin.TimeDate.dll) * Community.PowerToys.Run.Plugin.UnitConverter, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\UnitConverter\Community.PowerToys.Run.Plugin.UnitConverter.dll) * Microsoft.Plugin.Uri, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Uri\Microsoft.Plugin.Uri.dll) * Community.PowerToys.Run.Plugin.ValueGenerator, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\ValueGenerator\Community.PowerToys.Run.Plugin.ValueGenerator.dll) * System.Security.Cryptography, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Security.Cryptography.dll) * Community.PowerToys.Run.Plugin.VSCodeWorkspaces, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\VSCodeWorkspaces\Community.PowerToys.Run.Plugin.VSCodeWorkspaces.dll) * System.Drawing.Common, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Drawing.Common.dll) * System.Private.Windows.Core, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 
(C:\Users\poo00\AppData\Local\PowerToys\System.Private.Windows.Core.dll) * System.Drawing.Primitives, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Drawing.Primitives.dll) * Community.PowerToys.Run.Plugin.WebSearch, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WebSearch\Community.PowerToys.Run.Plugin.WebSearch.dll) * Microsoft.PowerToys.Run.Plugin.WindowsSettings, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WindowsSettings\Microsoft.PowerToys.Run.Plugin.WindowsSettings.dll) * Microsoft.PowerToys.Run.Plugin.WindowsTerminal, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WindowsTerminal\Microsoft.PowerToys.Run.Plugin.WindowsTerminal.dll) * Microsoft.Plugin.WindowWalker, Version=0.87.1.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WindowWalker\Microsoft.Plugin.WindowWalker.dll) * UIAutomationTypes, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\UIAutomationTypes.dll) * UIAutomationProvider, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\UIAutomationProvider.dll) * Microsoft.Xaml.Behaviors, Version=1.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\Microsoft.Xaml.Behaviors.dll) * Microsoft.Win32.SystemEvents, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\Microsoft.Win32.SystemEvents.dll) * PresentationFramework.Fluent, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\PresentationFramework.Fluent.dll) * System.ComponentModel.EventBasedAsync, Version=9.0.0.0, Culture=neutral, 
PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.ComponentModel.EventBasedAsync.dll) * Microsoft.Toolkit.Uwp.Notifications, Version=7.1.0.0, Culture=neutral, PublicKeyToken=4aff67a105548ee2 (C:\Users\poo00\AppData\Local\PowerToys\Microsoft.Toolkit.Uwp.Notifications.dll) * System.IO.FileSystem, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.IO.FileSystem.dll) * System.Security.Cryptography.Algorithms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Security.Cryptography.Algorithms.dll) * System.Security.Cryptography.Primitives, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Security.Cryptography.Primitives.dll) * System.Reflection.Emit, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Reflection.Emit.dll) * DynamicComActivator, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null (dynamic assembly doesn't has location) * System.Security.Principal.Windows, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Security.Principal.Windows.dll) * System.Security.Claims, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Security.Claims.dll) * Microsoft.PowerToys.Run.Plugin.Calculator.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Calculator\zh-CN\Microsoft.PowerToys.Run.Plugin.Calculator.resources.dll) * Mages.Core, Version=2.0.2.0, Culture=neutral, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\Mages.Core.dll) * System.Runtime.Numerics, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Runtime.Numerics.dll) * 
Microsoft.Plugin.Folder.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Folder\zh-CN\Microsoft.Plugin.Folder.resources.dll) * Microsoft.PowerToys.Run.Plugin.History.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\History\zh-CN\Microsoft.PowerToys.Run.Plugin.History.resources.dll) * Microsoft.Plugin.Indexer.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Indexer\zh-CN\Microsoft.Plugin.Indexer.resources.dll) * Microsoft.PowerToys.Run.Plugin.OneNote.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\OneNote\zh-CN\Microsoft.PowerToys.Run.Plugin.OneNote.resources.dll) * Microsoft.PowerToys.Run.Plugin.PowerToys.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\PowerToys\zh-CN\Microsoft.PowerToys.Run.Plugin.PowerToys.resources.dll) * Microsoft.Plugin.Program.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Program\zh-CN\Microsoft.Plugin.Program.resources.dll) * Microsoft.PowerToys.Run.Plugin.Registry.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Registry\zh-CN\Microsoft.PowerToys.Run.Plugin.Registry.resources.dll) * Microsoft.PowerToys.Run.Plugin.Service.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Service\zh-CN\Microsoft.PowerToys.Run.Plugin.Service.resources.dll) * Microsoft.Plugin.Shell.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Shell\zh-CN\Microsoft.Plugin.Shell.resources.dll) * Microsoft.PowerToys.Run.Plugin.System.resources, Version=0.87.1.0, Culture=zh-CN, 
PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\System\zh-CN\Microsoft.PowerToys.Run.Plugin.System.resources.dll) * Microsoft.PowerToys.Run.Plugin.TimeDate.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\TimeDate\zh-CN\Microsoft.PowerToys.Run.Plugin.TimeDate.resources.dll) * Community.PowerToys.Run.Plugin.UnitConverter.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\UnitConverter\zh-CN\Community.PowerToys.Run.Plugin.UnitConverter.resources.dll) * Microsoft.Plugin.Uri.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\Uri\zh-CN\Microsoft.Plugin.Uri.resources.dll) * Community.PowerToys.Run.Plugin.ValueGenerator.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\ValueGenerator\zh-CN\Community.PowerToys.Run.Plugin.ValueGenerator.resources.dll) * Community.PowerToys.Run.Plugin.VSCodeWorkspaces.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\VSCodeWorkspaces\zh-CN\Community.PowerToys.Run.Plugin.VSCodeWorkspaces.resources.dll) * Community.PowerToys.Run.Plugin.WebSearch.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WebSearch\zh-CN\Community.PowerToys.Run.Plugin.WebSearch.resources.dll) * Microsoft.PowerToys.Run.Plugin.WindowsSettings.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WindowsSettings\zh-CN\Microsoft.PowerToys.Run.Plugin.WindowsSettings.resources.dll) * Microsoft.PowerToys.Run.Plugin.WindowsTerminal.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null 
(C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WindowsTerminal\zh-CN\Microsoft.PowerToys.Run.Plugin.WindowsTerminal.resources.dll) * Microsoft.Plugin.WindowWalker.resources, Version=0.87.1.0, Culture=zh-CN, PublicKeyToken=null (C:\Users\poo00\AppData\Local\PowerToys\RunPlugins\WindowWalker\zh-CN\Microsoft.Plugin.WindowWalker.resources.dll) * System.IO.Pipelines, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.IO.Pipelines.dll) * System.Threading.Tasks.Parallel, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Threading.Tasks.Parallel.dll) * System.Linq.Expressions, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Linq.Expressions.dll) * System.Linq.Parallel, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Linq.Parallel.dll) * System.Xml.XDocument, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Xml.XDocument.dll) * System.Private.Xml.Linq, Version=9.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51 (C:\Users\poo00\AppData\Local\PowerToys\System.Private.Xml.Linq.dll) * Accessibility, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (C:\Users\poo00\AppData\Local\PowerToys\Accessibility.dll) * System.Reflection.Metadata, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a (C:\Users\poo00\AppData\Local\PowerToys\System.Reflection.Metadata.dll) * PresentationFramework-SystemXmlLinq, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 (C:\Users\poo00\AppData\Local\PowerToys\PresentationFramework-SystemXmlLinq.dll) * System.Xml.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 (C:\Users\poo00\AppData\Local\PowerToys\System.Xml.Linq.dll) * System.Windows.Forms, Version=9.0.0.0, 
Culture=neutral, PublicKeyToken=b77a5c561934e089 (C:\Users\poo00\AppData\Local\PowerToys\System.Windows.Forms.dll)
* System.Windows.Forms.Primitives, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 (C:\Users\poo00\AppData\Local\PowerToys\System.Windows.Forms.Primitives.dll)
* System.Reactive, Version=6.0.0.0, Culture=neutral, PublicKeyToken=94bc3704cddfc263 (C:\Users\poo00\AppData\Local\PowerToys\System.Reactive.dll)
```

```log
Version: 0.87.1.0
OS Version: Microsoft Windows NT 10.0.26100.0
IntPtr Length: 8
x64: True
Date: 2024/12/27 3:51:27
Exception:
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
   at System.Threading.Thread.StartInternal(ThreadHandle t, Int32 stackSize, Int32 priority, Char* pThreadName)
   at System.Threading.Thread.StartCore()
   at System.Threading.PortableThreadPool.WorkerThread.CreateWorkerThread()
   at System.Threading.PortableThreadPool.WorkerThread.MaybeAddWorkingWorker(PortableThreadPool threadPoolInstance)
   at System.Threading.ThreadPool.RequestWorkerThread()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
```

### ✔️ Expected Behavior

_No response_

### ❌ Actual Behavior

_No response_

### Other Software

_No response_
Issue-Bug,Product-PowerToys Run,Needs-Triage
low
Critical
2,760,276,541
vscode
Pasting keeps spinning and takes forever; typing the text by hand would be faster
Type: <b>Bug</b>

Copy anything at all, and pasting starts spinning; only rarely does it paste immediately.

VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.22631
Modes:

<details>
<summary>System Info</summary>

|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i9-13980HX (32 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.63GB (10.79GB free)|
|Process Argv|--crash-reporter-id 197511d7-6e6e-4ef3-9519-d06e56f1ea95|
|Screen Reader|no|
|VM|0%|

</details><details><summary>Extensions (8)</summary>

Extension|Author (truncated)|Version
---|---|---
tongyi-lingma|Ali|2.0.1
vue-peek|dar|1.0.2
vscode-eslint|dba|3.0.10
prettier-vscode|esb|11.0.0
vue-alias-skip|lih|0.0.25
vscode-language-pack-zh-hans|MS-|1.96.2024121109
vue-helper|she|4.3.2
volar|Vue|2.1.8

</details><details>
<summary>A/B Experiments</summary>

```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```

</details>

<!-- generated by issue reporter -->
info-needed,*english-please,translation-required-chinese-simplified
low
Critical
2,760,294,396
deno
The requested module `http2` does not provide an export named `Http2ServerRequest`
Version: Deno 2.1.4 This bug is still happening on GitHub Actions: latest version of Deno, same problem. Same as #23326 <img width="902" alt="image" src="https://github.com/user-attachments/assets/9eae5d45-cfc8-4522-bfec-93f17b8a2e5e" /> <img width="541" alt="image" src="https://github.com/user-attachments/assets/a6c1f772-bba0-4da9-a214-3b1f30bc680e" /> > [!NOTE] > I'm using @hono/node-server, and this error is happening on GitHub Actions.
needs investigation
low
Critical
2,760,310,498
pytorch
The in-place version of unsqueeze is not supported by TorchDynamo when called as `torch.Tensor.unsqueeze_(x, y)`
### 🐛 Describe the bug If I directly call `torch.Tensor.unsqueeze_(x,y)` in my function, torch.compile fails with InternalTorchDynamoError. However, if I change the code to the `x.unsqueeze_(y)` form, torch.compile works. code: ```python import torch @torch.compile def f1(x, y): return x.unsqueeze(y) @torch.compile def f2(x, y): return torch.Tensor.unsqueeze_(x, y) x = torch.tensor([1, 2, 3, 4]) y = 0 print(f1(x, y)) print(f2(x, y)) ``` When running `f2`, pytorch throws the following error: ``` torch._dynamo.exc.InternalTorchDynamoError: IndexError: list index out of range You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True ``` ### Versions [pip3] numpy==1.26.2 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] optree==0.13.1 [pip3] torch==2.5.1 [pip3] triton==3.1.0 [conda] numpy 1.26.2 pypi_0 pypi [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] optree 0.13.1 pypi_0 pypi [conda] torch 2.5.1 pypi_0 pypi [conda] triton 3.1.0 pypi_0 pypi cc @chauhang
@penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
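Worth noting: in plain Python the two spellings are the same call, which is why eager mode accepts both while Dynamo only traces the bound form. A torch-free sketch of the equivalence, using `list` as a stand-in:

```python
# Calling a method through the class with the instance as the first argument
# (the unbound form) is equivalent to the usual bound-method call.
lst = [1, 2, 3]
list.append(lst, 4)  # unbound form, the same call as lst.append(4)
lst.append(5)        # bound form
print(lst)  # [1, 2, 3, 4, 5]
```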
triaged,oncall: pt2,module: dynamo
low
Critical
2,760,311,243
godot
(C#, Tool) overridden _Get(property_name)/_Set(property_name) are ignored if the class has a field named `property_name`
### Tested versions v4.3.stable.mono.official [77dcf97d8] ### System information Godot v4.3.stable.mono - Pop!_OS 22.04 LTS - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 Ti (nvidia; 560.35.03) - AMD Ryzen 9 9900X 12-Core Processor (24 Threads) ### Issue description I have a class annotated with [Tool]: ```csharp [Tool] public partial class ToolShape: Resource { ... } ``` In that class, I have a field that is "manually" exported: ```csharp enum ShapeType: long { Sphere = 0, Box = 1, Cylinder = 2, Capsule = 3, } ShapeType ty = ShapeType.Sphere; public override Array<GdDictionary> _GetPropertyList() { GdDictionary type_property = new() { ["name"] = nameof(this.ty), ["type"] = (long) Variant.Type.Int, ["hint"] = (long) PropertyHint.Enum, ["hint_string"] = "Sphere:0,Box:1,Cylinder:2,Capsule:3", ["usage"] = (long) (PropertyUsageFlags.Editor | PropertyUsageFlags.Storage | PropertyUsageFlags.ScriptVariable), }; return [type_property]; } ``` Which is *supposed* to be handled by `_Get`/`_Set`: ```csharp public override bool _Set(StringName property, Variant value) { GD.Print($"Setting {property} to {value}"); string property_name = property.ToString(); switch (property_name) { case nameof(this.ty): { this.ty = value.As<ShapeType>(); return true; } default: return false; } } public override Variant _Get(StringName property) { string property_name = property.ToString(); if (property_name is not ("script" or "resource_local_to_scene" or "resource_name" or "resource_path")) { GD.Print($"Getting {property}"); } return property_name switch { nameof(this.ty) => Variant.From(this.ty), _ => default, }; } ``` In the Editor, `_Get`/`_Set` are never called for the property `ty`. I haven't tested the behavior in exported builds. 
### Steps to reproduce - Open the project - Build the solution - Select the resource `tool_shape.res` - Modify its fields in the inspector Note that modifying the fields `Ty`, `String Name` and `Res` does not trigger any of the manual `GD.Print()` calls. Also note that modifying the property "fictional", which has no backing field, does cause `_Set` and `_Get` to be called. ### Minimal reproduction project (MRP) [set_bug.zip](https://github.com/user-attachments/files/18256775/set_bug.zip)
bug,topic:dotnet
low
Critical
2,760,318,522
excalidraw
Add triangle shape
I downloaded the source code and added the triangle, but when I use the elbow connector to connect to the triangle, the frame used to bind to it is still a rectangle. How can I change it to a triangle? Thanks a lot
enhancement
low
Minor
2,760,322,796
PowerToys
AutoHotkey-like Feature
### Description of the new feature / enhancement Add a feature to PowerToys that lets users type a shortcut to expand text and optionally open a file, template, or application. ### Scenario when this would be used? It would be a valuable addition to PowerToys, catering to users who frequently type repetitive text. By providing this functionality natively, PowerToys can simplify workflows and enhance productivity without requiring third-party software. **Key Features** **Customizable Text Triggers:** Define text shortcuts that trigger both text expansion and additional actions. **Example:** Typing !report could insert "Monthly Sales Report" and open a preformatted Excel file. **File and Template Automation:** Configure shortcuts to open files (e.g., Word templates, Excel sheets, PDFs). Include support for linking cloud-hosted files via URLs or mapped drives. **Application Launching:** Allow shortcuts to open specific applications with optional arguments. Example: Typing !email could open Outlook ready to compose a new email. **Multi-Action Macros:** Enable shortcuts to perform multiple actions, such as expanding text and opening a template simultaneously. ### Supporting information _No response_
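A minimal sketch of the trigger table this feature describes (all trigger names and actions below are invented for illustration; this is not PowerToys code):

```python
# Map a text trigger to an expansion plus an optional file/app to open.
SHORTCUTS = {
    "!report": ("Monthly Sales Report", "report_template.xlsx"),
    "!email": ("", "outlook"),
}

def expand(trigger):
    """Return (expanded_text, target_to_open); unknown triggers pass through."""
    text, target = SHORTCUTS.get(trigger, (trigger, None))
    return text, target

print(expand("!report"))  # ('Monthly Sales Report', 'report_template.xlsx')
```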
Needs-Triage
low
Minor
2,760,340,814
ant-design
Popover may be partially obscured when opened after setting placement
### Reproduction link [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/p/sandbox/ji-ben-antd-5-22-5-forked-5rt6sj?file=%2Fdemo.tsx%3A13%2C27) ### Steps to reproduce https://codesandbox.io/p/sandbox/ji-ben-antd-5-22-5-forked-5rt6sj?file=%2Fdemo.tsx%3A13%2C27 Open the console and scroll all the way to the bottom. ### What is expected? No occlusion at any resolution. ### What is actually happening? After scrolling to the bottom, opening the popover leaves part of it obscured. ![image](https://github.com/user-attachments/assets/24af5119-636e-41a2-8258-9cb035b00e49) | Environment | Info | | --- | --- | | antd | 5.21.6 | | React | 17 | | System | macos | | Browser | google latest version | <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
unconfirmed
low
Minor
2,760,341,042
create-react-app
Unable to create react app with typescript
Hi, I have been trying to create a React app with the following command: "npx create-react-app my-app --template typescript". However, upon running the command I am getting the following set of errors: Installing template dependencies using npm... npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: [email protected] npm ERR! node_modules/react npm ERR! react@"^19.0.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer react@"^18.0.0" from @testing-library/[email protected] npm ERR! node_modules/@testing-library/react npm ERR! @testing-library/react@"^13.0.0" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. npm ERR! Could you please suggest how to move ahead from here? Thanks
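The conflict is React 19 (pulled in by the template) against `@testing-library/react@13`, which only declares support for React 18. A commonly suggested workaround, offered here as an untested sketch rather than an official fix, is to let npm accept the peer mismatch:

```shell
# Configuration change: npm will tolerate mismatched peerDependencies.
# Reversible later with: npm config delete legacy-peer-deps
npm config set legacy-peer-deps true
npx create-react-app my-app --template typescript
```

Alternatively, downgrading react/react-dom to ^18 in the generated package.json keeps the testing-library peer range satisfied.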
needs triage
low
Critical
2,760,342,091
pytorch
TORCH_NCCL_ENABLE_TIMING breaks NCCL/matmul overlapping
### 🐛 Describe the bug I am using Megatron-LM for training. I found that if I set `TORCH_NCCL_ENABLE_TIMING=1`, kernels that used to overlap in Megatron-LM no longer overlap, including the dw/dx backward in layer norm and the ZeRO-1 reduce-scatter/all-gather overlapping with matmul. I have submitted an issue to `TransformerEngine` (https://github.com/NVIDIA/TransformerEngine/issues/1353); maybe it relates to `CUDA_DEVICE_MAX_CONNECTIONS=1` ### Versions I am using 4 A100-SXM4 GPUs with pytorch2.4.0+cu124 and mcore0.9.0 with transformer engine(0.11.0+fc03478) ### update1 I used FSDP1 with 8 A100-SXM4 GPUs with pytorch2.4.0+cu124, and I traced 3 configurations - TORCH_NCCL_ENABLE_TIMING=1 does not break overlapping of NCCL and matmul - CUDA_DEVICE_MAX_CONNECTIONS=1 breaks reduce-scatter in `FullyShardedDataParallel._post_backward_hook` ![image](https://github.com/user-attachments/assets/42f3fae0-9554-400b-8ef0-ca0b99541c4c) - CUDA_DEVICE_MAX_CONNECTIONS=1 TORCH_NCCL_ENABLE_TIMING=1 breaks all overlap, including all-gather in forward ![forward allgather overlap](https://github.com/user-attachments/assets/a608c054-8c64-4a0d-9f7f-198fe145b03e) ![backward allgather/reduce scatter](https://github.com/user-attachments/assets/495b07a1-99c2-47ef-90d1-a16b386894b4) I uploaded the timeline and reproduction code; just set different env vars and run ```bash CUDA_DEVICE_MAX_CONNECTIONS=1 TORCH_NCCL_ENABLE_TIMING=1 python -m torch.distributed.run --master-addr localhost --master-port 5555 --nnodes 1 --nproc-per-node 8 --node-rank 0 ``` [timeline.tar.gz](https://github.com/user-attachments/files/18326850/timeline.tar.gz) cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
oncall: distributed,module: nccl
medium
Critical
2,760,355,344
flutter
[go_router_builder] Cannot handle query parameters that fail to parse
### Steps to reproduce 1. Use `go_router_builder: ^2.7.1` and run on web 2. Navigate to a route with correct query params 3. Change the url to have invalid query params ### Expected results It should be possible to handle this in errorPageBuilder, or the value should be parsed with a safe try-cast to avoid the error ### Actual results The error cannot be handled ### Code sample <details open><summary>Code sample</summary> ```dart // Copyright 2013 The Flutter Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. // ignore_for_file: public_member_api_docs, unreachable_from_main import 'dart:async'; import 'package:flutter/material.dart'; import 'package:go_router/go_router.dart'; part 'main.g.dart'; void main() => runApp(App()); class App extends StatelessWidget { App({super.key}); static const String title = 'GoRouter'; @override Widget build(BuildContext context) => MaterialApp.router( title: title, routeInformationProvider: _router.routeInformationProvider, routeInformationParser: _router.routeInformationParser, backButtonDispatcher: RootBackButtonDispatcher(), routerDelegate: _router.routerDelegate, debugShowCheckedModeBanner: false, ); late final GoRouter _router = GoRouter( debugLogDiagnostics: true, routes: $appRoutes, initialLocation: '/home', // redirect to the login page if the user is not logged in redirect: (BuildContext context, GoRouterState state) { // no need to redirect at all return null; }, ); } @TypedGoRoute<DetailRoute>( path: '/detail', ) class DetailRoute extends GoRouteData { const DetailRoute({this.id}); final int? id; @override Widget build(BuildContext context, GoRouterState state) => const DetailScreen(); } @TypedGoRoute<HomeRoute>( path: '/home', ) class HomeRoute extends GoRouteData { const HomeRoute({this.fromPage}); final String?
fromPage; @override Widget build(BuildContext context, GoRouterState state) => HomeScreen(); } class HomeScreen extends StatelessWidget { const HomeScreen({super.key}); @override Widget build(BuildContext context) { return Scaffold( body: Center( child: MaterialButton( color: Colors.blue, onPressed: () { DetailRoute(id: 1).go(context); }, child: Text('Go to detail'), ), ), ); } } class DetailScreen extends StatelessWidget { const DetailScreen({super.key}); @override Widget build(BuildContext context) { return Scaffold( body: Center( child: Text('Detail with ${GoRouterState.of(context).uri.query}'), ), ); } } ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> Uploading Ghi Màn hình 2024-12-27 lúc 10.55.03.mov… </details> ### Logs <details open><summary>Logs</summary> ```console ══╡ EXCEPTION CAUGHT BY FOUNDATION LIBRARY ╞════════════════════════════════════════════════════════ The following FormatException was thrown while dispatching notifications for GoRouteInformationProvider: adaw When the exception was thrown, this was the stack: dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 288:3 throw_ errors.dart:288 dart-sdk/lib/_internal/js_dev_runtime/private/profile.dart 110:39 parse profile.dart:110 packages/error_template/main.g.dart 47:42 _$36convertMapValue main.g.dart:47 packages/error_template/main.g.dart 21:13 $36DetailRouteExtension$124_fromState main.g.dart:21 packages/go_router/src/route_data.dart 102:53 factoryImpl route_data.dart:102 packages/go_router/src/route_data.dart 112:28 redirect route_data.dart:112 packages/go_router/src/configuration.dart 443:56 [_getRouteLevelRedirect] configuration.dart:443 packages/go_router/src/configuration.dart 400:13 processTopLevelRedirect configuration.dart:400 packages/go_router/src/configuration.dart 417:16 processRedirect configuration.dart:417 packages/go_router/src/configuration.dart 423:14 redirect configuration.dart:423 
packages/go_router/src/parser.dart 164:10 [_redirect] parser.dart:164 packages/go_router/src/parser.dart 101:7 parseRouteInformationWithDependencies parser.dart:101 packages/flutter/src/widgets/router.dart 753:12 [_processRouteInformation] router.dart:753 packages/flutter/src/widgets/router.dart 772:5 [_handleRouteInformationProviderNotification] router.dart:772 packages/flutter/src/foundation/change_notifier.dart 437:24 notifyListeners change_notifier.dart:437 packages/go_router/src/information_provider.dart 139:11 notifyListeners information_provider.dart:139 packages/go_router/src/information_provider.dart 245:5 [_platformReportsNewRouteInformation] information_provider.dart:245 packages/go_router/src/information_provider.dart 289:5 didPushRouteInformation information_provider.dart:289 packages/flutter/src/widgets/binding.dart 944:25 <fn> binding.dart:944 dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 610:19 <fn> async_patch.dart:610 dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 634:23 <fn> async_patch.dart:634 dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 581:31 <fn> async_patch.dart:581 dart-sdk/lib/async/zone.dart 1676:54 runUnary zone.dart:1676 dart-sdk/lib/async/future_impl.dart 204:18 handleValue future_impl.dart:204 dart-sdk/lib/async/future_impl.dart 902:44 handleValueCallback future_impl.dart:902 dart-sdk/lib/async/future_impl.dart 931:13 _propagateToListeners future_impl.dart:931 dart-sdk/lib/async/future_impl.dart 707:5 [_completeWithValue] future_impl.dart:707 dart-sdk/lib/async/future_impl.dart 777:7 callback future_impl.dart:777 dart-sdk/lib/async/schedule_microtask.dart 40:11 _microtaskLoop schedule_microtask.dart:40 dart-sdk/lib/async/schedule_microtask.dart 49:5 _startMicrotaskLoop schedule_microtask.dart:49 dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 186:7 <fn> async_patch.dart:186 The GoRouteInformationProvider sending notification was: Instance of 'GoRouteInformationProvider' 
════════════════════════════════════════════════════════════════════════════════════════════════════ ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [✓] Flutter (Channel stable, 3.27.1, on macOS 14.5 23F79 darwin-arm64, locale vi-VN) • Flutter version 3.27.1 on channel stable at /Users/hieucg/flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 17025dd882 (10 days ago), 2024-12-17 03:23:09 +0900 • Engine revision cb4b5fff73 • Dart version 3.6.0 • DevTools version 2.40.2 [✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0) • Android SDK at /Users/hieucg/Library/Android/sdk • Platform android-35, build-tools 35.0.0 • ANDROID_HOME = /Users/hieucg/Library/Android/sdk • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 16.0) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 16A242d • CocoaPods version 1.15.2 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2024.1) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314) [✓] VS Code (version 1.96.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.102.0 [✓] Connected device (3 available) • macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64 • Chrome (web) • chrome • web-javascript • Google Chrome 
131.0.6778.205 [✓] Network resources • All expected network resources are available. • No issues found! ``` </details>
package,has reproducible steps,P2,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.27,found in release: 3.28
low
Critical
2,760,357,748
flutter
[google_maps_flutter] [web] Support disableAutoPan
### Use case I would like to use [InfoWindowOptions.disableAutoPan](https://developers.google.com/maps/documentation/javascript/reference/info-window#InfoWindowOptions.disableAutoPan) in flutter web. Mobile does not have autopan behaviour by default, which is good. But web automatically pans the map if the info window is too close to the edge of the map. I would love to opt out of that behaviour. `disableAutoPan` was made for this purpose. ### Proposal Add a mapping from flutter to the google maps javascript sdk property: `disableAutoPan`. Pass in `disableAutoPan: true` to the `InfoWindow` constructor.
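A sketch of what the web-side mapping could look like (the interface and function names below are invented for illustration; the real plugin internals may differ):

```typescript
// Flutter's InfoWindow would gain a disableAutoPan flag; the web
// implementation copies it into the JS SDK's InfoWindowOptions.
interface FlutterInfoWindow {
  title?: string;
  disableAutoPan?: boolean;
}

interface GmapsInfoWindowOptions {
  content?: string;
  disableAutoPan?: boolean;
}

function toInfoWindowOptions(w: FlutterInfoWindow): GmapsInfoWindowOptions {
  return {
    content: w.title,
    // Defaulting to false preserves today's behaviour (auto-pan enabled).
    disableAutoPan: w.disableAutoPan ?? false,
  };
}

console.log(toInfoWindowOptions({ title: "Hello", disableAutoPan: true }));
```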
c: new feature,p: maps,platform-web,package,c: proposal,P2,team-web,triaged-web
low
Minor
2,760,374,002
godot
Web editor not building with Emscripten 3.1.64
### Tested versions - Reproducible in: 4.4.dev [99a8ab7] ### System information macOS Sequoia 15.2, Emscripten 3.1.64 ### Issue description Building the editor using [the steps described in the documentation](https://docs.godotengine.org/en/stable/contributing/development/compiling/compiling_for_web.html#building-the-editor): ``` scons platform=web target=editor ``` Results in an error like the following: ``` Unknown option '--enable-bulk-memory-opt' em++: error: '/Users/malcolmanderson/emsdk/upstream/bin/wasm-opt --strip-target-features --post-emscripten -Os --low-memory-unused --zero-filled-memory --pass-arg=directize-initial-contents-immutable --no-stack-ir bin/godot.web.editor.dev.wasm32.wasm -o bin/godot.web.editor.dev.wasm32.wasm -g --mvp-features --enable-threads --enable-bulk-memory --enable-bulk-memory-opt --enable-call-indirect-overlong --enable-exception-handling --enable-multivalue --enable-mutable-globals --enable-reference-types --enable-sign-ext' failed (returned 1) scons: *** [bin/godot.web.editor.dev.wasm32.js] Error 1 scons: building terminated because of errors. ``` As someone noted on the comments for the documentation page, adding `production=yes` seems to allow the compilation to progress further. It seems to have gotten stuck in a caching step, though – these are the last lines it has printed, and (so far) it has not yet returned control to the shell, after a long amount of time: ``` cache:INFO: generating system asset: symbol_lists/05684251a2ff21b52145c66272b434cba942070b.json... 
(this will be cached in "/Users/malcolmanderson/emsdk/upstream/emscripten/cache/symbol_lists/05684251a2ff21b52145c66272b434cba942070b.json" for subsequent builds) cache:INFO: - ok ``` **EDIT: It *finally* finished on my computer - the overall process was 00:58:05.41.** This may be related to https://github.com/godotengine/godot/issues/99818, although the main subject of that issue was a different error encountered when using a latest version of Emscripten, not 3.1.64 specifically. ### Steps to reproduce Using Emscripten 3.1.64, build the editor using the command ``` scons platform=web target=editor ``` You should receive an error like `Unknown option '--enable-bulk-memory-opt'`. If you instead use the command ``` scons platform=web target=editor production=yes ``` the compilation process will take a *very* long time, with ``` cache:INFO: generating system asset: symbol_lists/05684251a2ff21b52145c66272b434cba942070b.json... (this will be cached in "/Users/malcolmanderson/emsdk/upstream/emscripten/cache/symbol_lists/05684251a2ff21b52145c66272b434cba942070b.json" for subsequent builds) cache:INFO: - ok ``` as the final output for a while. ### Minimal reproduction project (MRP) N/A
bug,platform:web,topic:buildsystem
low
Critical
2,760,381,426
transformers
cannot customize `warmup_min_lr` of the DeepSpeed LR scheduler
### System Info transformers 4.45 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction [link](https://github.com/huggingface/transformers/blob/4eb17b26e77611d4fbcdcbbc20c7bf275eb015c9/src/transformers/integrations/deepspeed.py#L171) I don't know why it's hardcoded ### Expected behavior The `warmup_min_lr` value I set should take effect, but it is ignored
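A possible workaround (an assumption, not verified here): declare the scheduler block explicitly in the DeepSpeed JSON config, so the integration only fills in the `"auto"` fields instead of constructing the scheduler with its hardcoded minimum:

```json
{
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": 1e-6,
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  }
}
```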
DeepSpeed,bug
low
Minor
2,760,391,944
transformers
Support `causal_mask` in the `GPT2Attention._attn()` method when `add_cross_attention=True` (and therefore `is_cross_attention=True`)
### Feature request ### Model description It seems like the `GPT2Attention()` class applies `causal_mask` in the `_attn()` method only when `is_cross_attention=False`, but not when `is_cross_attention=True`. It would be more useful if `GPT2Attention()` supported the `_attn()` method with `causal_mask` even when `is_cross_attention=True`. ### Motivation When developing an `EncoderDecoderModel` where the encoder is `ViTModel` and the decoder is `GPT2Model`, the current `GPT2Model` class does not support `causal_mask` if `add_cross_attention=True` and therefore `is_cross_attention=True`. ### Your contribution No contribution.
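For reference, the `causal_mask` in question is just a lower-triangular keep-mask over (query, key) positions; a torch-free sketch (applying the same mask when the keys come from the encoder is what this request asks for):

```python
def causal_keep_mask(q_len, k_len):
    """True where query position i may attend to key position j (no future)."""
    offset = k_len - q_len  # aligns the last query with the last key
    return [[j <= i + offset for j in range(k_len)] for i in range(q_len)]

for row in causal_keep_mask(3, 3):
    print(row)
# [True, False, False]
# [True, True, False]
# [True, True, True]
```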
Feature request,bug
low
Minor
2,760,410,728
ant-design
ConfigProvider.useConfig should be able to read the size prop coming from Space.Compact
### What problem does this feature solve? Currently, components under Space.Compact receive its size, but useConfig cannot read it ### What does the proposed API look like? ConfigProvider.useConfig can read the size prop coming from Space.Compact <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
unconfirmed
low
Minor
2,760,424,228
rust
Tracking Issue for `keylocker_x86`
Feature gate: `#![feature(keylocker_x86)]` This is a tracking issue for the `kl` and `widekl` target features and the associated intrinsics in `stdarch` ### Public API The following target features and all their associated intrinsics - `kl`: Intel Key Locker - `widekl`: Intel Key Locker Wide ### Steps / History - [ ] Add the target features to Rust - [ ] Add the Intrinsics and Feature Detection to `stdarch` - [ ] Final comment period (FCP)[^1] - [ ] Stabilization PR ### Unresolved Questions - None yet. ### Implementation History [^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
T-libs-api,C-tracking-issue
low
Minor
2,760,429,422
ui
[bug]: Canary monorepo CLI ENOENT error
### Describe the bug The following error occurs after running the CLI install command: `pnpm dlx shadcn@canary init` ``` ✖ Something went wrong creating a new Next.js monorepo. Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. Command failed with ENOENT: cd /home/gsdev/dev/w82 spawn cd ENOENT ``` **NOTE**: The monorepo seems to install but only apps/web gets installed. I also tested this on an M1 mac and that worked fine. ### Affected component/components n/a ### How to reproduce 1. Install using the command provided in the monorepo docs: `pnpm dlx shadcn@canary init` 2. Complete install wizard 3. Error is thrown shortly after ### Codesandbox/StackBlitz link n/a ### Logs _No response_ ### System Info ```bash Windows 11, WSL, Ubuntu 24.04 ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,760,431,116
PowerToys
Windows 11: hosts file edits sometimes take effect and sometimes don't
### Microsoft PowerToys version 0.87.1 ### Installation method GitHub ### Running as admin Yes ### Area(s) with issue? Hosts File Editor ### Steps to reproduce Edit the hosts file, flush the DNS cache, and enter the URL in the browser: the change takes effect. After ten-odd minutes, refresh the page again: the request no longer resolves to the correct IP, and the change stops working. ### ✔️ Expected Behavior _No response_ ### ❌ Actual Behavior _No response_ ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Minor
2,760,441,591
pytorch
When using torch.compile to compile the function _kernel_make_viewless_tensor, an error occurs: AssertionError: wrong number of dimensions
### 🐛 Describe the bug test device: NVidia L20 software version: torch 2.5.1 torchaudio 2.5.1 torchvision 0.20.1 triton 3.1.0 The test codes are as follows. I’m sure it’s related to the parameter “requires_grad” of the function ”_kernel_make_viewless_tensor“, because changing it to False allows the codes to pass, and the graph generated by torch.compile will be different. ``` # The codes are sourced from https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/core/utils.py:183 def _kernel_make_viewless_tensor(inp, requires_grad): """Make a viewless tensor. View tensors have the undesirable side-affect of retaining a reference to the originally-viewed tensor, even after manually setting the '.data' field. This method creates a new tensor that links to the old tensor's data, without linking the viewed tensor, referenced via the '._base' field. """ out = torch.empty((1,), dtype=inp.dtype, device=inp.device, requires_grad=requires_grad) out.data = inp.data return out t1 = torch.randn(20, 50, 30, dtype=torch.bfloat16).to('cuda') c = torch.compile(_kernel_make_viewless_tensor) t2 = _kernel_make_viewless_tensor(t1, True) t3 = c(t1, True) print(f"allclose result = {torch.allclose(t2, t3, atol=1e-5, rtol=1e-5)}") ``` The test results are as follows: ``` Traceback (most recent call last): File "/data/test/b.py", line 20, in <module> t3 = c(t1, True) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 465, in _fn return fn(*args, **kwargs) File "/data/test/b.py", line 12, in _kernel_make_viewless_tensor out = torch.empty((1,), dtype=inp.dtype, device=inp.device, requires_grad=requires_grad) File "/data/test/b.py", line 12, in torch_dynamo_resume_in__kernel_make_viewless_tensor_at_12 out = torch.empty((1,), dtype=inp.dtype, device=inp.device, requires_grad=requires_grad) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 632, in _fn return fn(*args, **kwargs) File 
"/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1100, in forward return compiled_fn(full_args) File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 321, in runtime_wrapper all_outs = call_func_at_runtime_with_args( File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args out = normalize_as_list(f(args)) File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 667, in inner_fn outs = compiled_fn(args) File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 488, in wrapper return compiled_fn(runtime_args) File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/codecache.py", line 1478, in __call__ return self.current_callable(inputs) File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/utils.py", line 1977, in run return model(new_inputs) File "/tmp/torchinductor_root/ld/cldpqwvjtbpm3peqlchlnst5etn44gyzsepxnck35bn7pm4epvvj.py", line 35, in call assert_size_stride(arg0_1, (20, 50, 30), (1500, 30, 1)) AssertionError: wrong number of dimensions # /tmp/torchinductor_root/ld/cldpqwvjtbpm3peqlchlnst5etn44gyzsepxnck35bn7pm4epvvj.py # AOT ID: ['0_inference'] from ctypes import c_void_p, c_long, c_int import torch import math import random import os import tempfile from math import inf, nan from torch._inductor.hooks import run_intermediate_hooks from torch._inductor.utils import maybe_profile from torch._inductor.codegen.memory_planning import _align as align from torch import device, empty_strided from torch._inductor.async_compile import AsyncCompile from torch._inductor.select_algorithm import extern_kernels from torch._inductor.codegen.multi_kernel import MultiKernelCall aten = torch.ops.aten inductor_ops = torch.ops.inductor _quantized = torch.ops._quantized assert_size_stride = 
torch._C._dynamo.guards.assert_size_stride empty_strided_cpu = torch._C._dynamo.guards._empty_strided_cpu empty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda empty_strided_xpu = torch._C._dynamo.guards._empty_strided_xpu reinterpret_tensor = torch._C._dynamo.guards._reinterpret_tensor alloc_from_pool = torch.ops.inductor._alloc_from_pool async_compile = AsyncCompile() async_compile.wait(globals()) del async_compile def call(args): arg0_1, arg1_1 = args args.clear() assert_size_stride(arg0_1, (20, 50, 30), (1500, 30, 1)) assert_size_stride(arg1_1, (20, 50, 30), (1500, 30, 1)) with torch.cuda._DeviceGuard(0): torch.cuda.set_device(0) # Topologically Sorted Source Nodes: [], Original ATen: [] buf0 = torch.ops.aten.set_.source_Tensor(arg0_1, arg1_1) assert_size_stride(buf0, (20, 50, 30), (1500, 30, 1)) del arg0_1 del arg1_1 return (buf0, ) def benchmark_compiled_module(times=10, repeat=10): from torch._dynamo.testing import rand_strided from torch._inductor.utils import print_performance arg0_1 = rand_strided((20, 50, 30), (1500, 30, 1), device='cuda:0', dtype=torch.bfloat16) arg1_1 = rand_strided((20, 50, 30), (1500, 30, 1), device='cuda:0', dtype=torch.bfloat16) fn = lambda: call([arg0_1, arg1_1]) return print_performance(fn, times=times, repeat=repeat) if __name__ == "__main__": from torch._inductor.wrapper_benchmark import compiled_module_main compiled_module_main('None', benchmark_compiled_module) ``` ### Versions Collecting environment information... 
PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.29.2 Libc version: glibc-2.35 Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.4.131 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA L20 Versions of relevant libraries: [pip3] numpy==1.24.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cudnn-frontend==1.3.0 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] nvtx==0.2.5 [pip3] onnx==1.16.0 [pip3] optree==0.11.0 [pip3] pynvjitlink==0.1.13 [pip3] pytorch-quantization==2.1.2 [pip3] pytorch-triton==3.0.0+989adb9a2 [pip3] torch==2.5.1 [pip3] torchaudio==2.5.1 [pip3] torchvision==0.20.1 [pip3] transformer-engine-torch==1.9.0 [pip3] triton==3.1.0 cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
triaged,oncall: pt2,module: inductor
low
Critical
2,760,449,601
electron
Minimized Window is Forcibly Restored
### Preflight Checklist

- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.

### Electron Version

33.2.1

### What operating system(s) are you using?

Windows

### Operating System Version

Windows 11

### What arch are you using?

x64

### Last Known Working Electron version

_No response_

### Expected Behavior

The minimized window should remain minimized, and the flashFrame should appear instead.

### Actual Behavior

After minimizing, the window is forcibly restored and steals focus.

### Testcase Gist URL

https://gist.github.com/emfprhs119/b613df8c3c341f3a763b889a6d53f0d2

### Additional Information

![Image](https://github.com/user-attachments/assets/890430f1-e528-4dcd-af15-4938f9094d16)

The "close" button on the thumbnail in the taskbar for Windows 10 and 11 causes issues with managing multiple windows. Even if the close action is disabled through code, the problem still persists.

`subWindow.on('close', (event) => event.preventDefault());`

https://github.com/user-attachments/assets/082b30cc-a7f1-43cd-9b8e-31c407416655

https://github.com/user-attachments/assets/27fb39a2-3a62-45a4-b832-15842346147d

I was trying to create an additional window that would minimize and blink without stealing focus, but this bug has made it impossible. If you have any suggestions for a workaround, please feel free to share.
platform/windows,bug :beetle:,status/confirmed,has-repro-gist,33-x-y
low
Critical
2,760,464,979
godot
Godot 4.3 stable - PhysicalBoneSimulator3D/Skeleton3D Documentation Not Updated
## Tested versions

tested: 4.3 stable

### System information

Mac OS (latest version; m2 8gb) - Godot 4.3 Stable - Forward+

### Issue description

```diff
- Followed official tutorial + 3rd party ones for making a ragdoll using a model’s skeleton by creating a physical skeleton, setting to active, and starting the simulation after adjusting physical bones
- Doesn’t work with just model, with entire model/skeleton under a CharacterBody3d, etc. It just doesn’t move. Doesn’t work with default nor jolt physics engines
- This is a problem in my main game, does not work in MRP either. Tried to make it as simple as possible
```

* The official tutorial + documentation is not updated properly for the migration of `physical_bones_…` functions from Skeleton3D into `PhysicalBoneSimulator3D`. Causing confusion.
* The editor does not give a deprecation warning either when calling these functions from a `Skeleton3D` object

### Steps to reproduce

Open MRP, start game

### Minimal reproduction project (MRP)
bug,topic:physics,topic:3d
low
Major
2,760,477,587
godot
Godot Autoload disappeared, but still exists?
### Tested versions

4.3 Stable

### System information

Windows 10

### Issue description

My autoload went missing from the autoload list; it won't let me re-add it because it already exists, and all of the references to it now throw errors that it doesn't exist.

![Image](https://github.com/user-attachments/assets/1695f568-6be6-49d0-a38c-cabe17f8652e)

### Steps to reproduce

Created an autoload, SignalManager with associated script.
Edit the editor settings to use external editor
close godot
open godot
Autoload is gone, but if I try to add it again, it says it already exists.

### Minimal reproduction project (MRP)

I can't reproduce it.
bug,topic:editor
low
Critical
2,760,481,623
react-native
ScrollView
### Description

ScrollView not scrolling after upgrading to react-native 0.76

### Steps to reproduce

Create a new app and run it on Android

### React Native Version

0.67

### Affected Platforms

Runtime - Android

### Output of `npx react-native info`

```text
System:
  OS: macOS 14.5
  CPU: (8) arm64 Apple M1
  Memory: 105.95 MB / 8.00 GB
  Shell:
    version: "5.9"
    path: /bin/zsh
Binaries:
  Node:
    version: 20.10.0
    path: /usr/local/bin/node
  Yarn:
    version: 1.22.22
    path: /opt/homebrew/bin/yarn
  npm:
    version: 9.5.1
    path: /opt/homebrew/bin/npm
  Watchman:
    version: 2023.04.03.00
    path: /opt/homebrew/bin/watchman
Managers:
  CocoaPods:
    version: 1.12.0
    path: /opt/homebrew/bin/pod
SDKs:
  iOS SDK:
    Platforms:
      - DriverKit 23.5
      - iOS 17.5
      - macOS 14.5
      - tvOS 17.5
      - visionOS 1.2
      - watchOS 10.5
  Android SDK: Not Found
IDEs:
  Android Studio: 2024.1 AI-241.18034.62.2411.12169540
  Xcode:
    version: 15.4/15F31d
    path: /usr/bin/xcodebuild
Languages:
  Java:
    version: 17.0.9
    path: /usr/bin/javac
  Ruby:
    version: 2.6.10
    path: /usr/bin/ruby
npmPackages:
  "@react-native-community/cli":
    installed: 15.0.1
    wanted: 15.0.1
  react:
    installed: 18.3.1
    wanted: 18.3.1
  react-native:
    installed: 0.76.5
    wanted: 0.76.5
  react-native-macos: Not Found
npmGlobalPackages:
  "*react-native*": Not Found
Android:
  hermesEnabled: true
  newArchEnabled: true
iOS:
  hermesEnabled: Not found
  newArchEnabled: false
```

### Stacktrace or Logs

```text
no crash only run time
```

### Reproducer

no repo

### Screenshots and Videos

![Screenshot_1735279960](https://github.com/user-attachments/assets/a502bfab-0acd-43a8-9a04-34de0b196b72)
Component: ScrollView,Needs: Repro,Needs: Attention,Needs: Version Info
low
Critical
2,760,501,169
tauri
[bug] Allow Fullscreen Videos Android Webview
### Describe the bug

Currently the only solution is https://stackoverflow.com/questions/62573334/android-webview-fullscreen-on-videos-not-working, but there should be a fix in Tauri itself, because it will break on every build

### Reproduction

_No response_

### Expected behavior

WebView fullscreen should open normally

### Full `tauri info` output

```text
newest tauri version ( appears on all current versions )
```

### Stack trace

```text
Uncaught (in promise) TypeError: fullscreen error
```

### Additional context

only appears on mobile ( android )
type: bug,status: needs triage
low
Critical
2,760,519,049
langchain
AttributeError: 'str' object has no attribute 'tool'
### Checked other resources

- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

### Example Code

        self.prompt = config.get_prompt("system_monitor_template_test3")
        self.system = self.prompt
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_agents)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges(
            "llm", self.exists_action, {True: "action", False: END}
        )
        self.llm = self.interface.get_current_model(0.2, 0.2, 10)
        self.create_tools()
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.memory = MemorySaver()
        self.graph = graph.compile(checkpointer=self.memory)

    def create_agent_node(self):
        prompt_template = PromptTemplate(template=self.system)
        system_message = prompt_template.format()
        if self.agent_node is None:
            self.agent_node = create_react_agent(
                self.llm,
                self.tools,
                state_modifier=system_message,
            )
        self.agent_executor = AgentExecutor(
            agent=self.agent_node, tools=self.tools, verbose=True
        )

    ..............

    async def print_response(self, initial_message: str):
        messages = [("user", initial_message)]
        thread = {"configurable": {"thread_id": self.user_id}}
        async for event in self.graph.astream_events({"messages": messages}, thread, version="v2"):
            kind = event["event"]
            if kind == "on_chat_model_stream":
                content = event["data"]["chunk"].content
                if content:
                    print(content, end="", flush=True)

# Example usage:
if __name__ == "__main__":
    agent = Agent("12345")
    input = "check CPU "
    asyncio.run(agent.print_response(input))

### Error Message and Stack Trace (if applicable)

> Entering new AgentExecutor chain...
[2024-12-27 15:24:47,480]-INFO-[functionCall.py:28]: executing shell command: top -b -n 1 | grep "Cpu(s)" result output:%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st The current CPU usage is 0.0% for user, 0.0% for system, 0.0% for nice, 100.0% for idle, 0.0% for wait, 0.0% for hardware interrupt, 0.0% for software interrupt, and 0.0% for steal.[2024-12-27 15:24:48,379]-WARNING-[manager.py:287]: Error in StdOutCallbackHandler.on_agent_action callback: AttributeError("'str' object has no attribute 'log'") Traceback (most recent call last): File "/mnt/m/flyang/pr_train/LLMs/src/langgraphsearch/agentWorkflow.py", line 190, in <module> asyncio.run(agent.print_response(input)) File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/asyncio/runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/mnt/m/flyang/pr_train/LLMs/src/langgraphsearch/agentWorkflow.py", line 175, in print_response async for event in self.graph.astream_events({"messages": messages}, thread, version="v2"): File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1388, in astream_events async for event in event_stream: File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/tracers/event_stream.py", line 1012, in _astream_events_implementation_v2 await task File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/tracers/event_stream.py", line 967, in consume_astream async for _ in event_streamer.tap_output_aiter(run_id, stream): File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/tracers/event_stream.py", 
line 180, in tap_output_aiter first = await py_anext(output, default=sentinel) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 76, in anext_impl return await __anext__(iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1822, in astream async for _ in runner.atick( File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langgraph/pregel/runner.py", line 221, in atick await arun_with_retry( File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langgraph/pregel/retry.py", line 115, in arun_with_retry async for _ in task.proc.astream(task.input, config): File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langgraph/utils/runnable.py", line 576, in astream async for chunk in aiterator: File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/tracers/event_stream.py", line 180, in tap_output_aiter first = await py_anext(output, default=sentinel) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 76, in anext_impl return await __anext__(iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1455, in atransform async for ichunk in input: File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1455, in atransform async for ichunk in input: File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1455, in atransform async for ichunk in input: File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1018, in astream yield await self.ainvoke(input, config, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langgraph/utils/runnable.py", line 236, in ainvoke ret = await asyncio.create_task(coro, context=context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 588, in run_in_executor return await asyncio.get_running_loop().run_in_executor( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 579, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/mnt/m/flyang/pr_train/LLMs/src/langgraphsearch/agentWorkflow.py", line 110, in call_agents result = self.agent_executor.invoke({"messages": messages}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain/chains/base.py", line 170, in invoke raise e File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain/chains/base.py", line 160, in invoke self._call(inputs, run_manager=run_manager) File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain/agents/agent.py", line 1624, in _call next_step_output = self._take_next_step( ^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain/agents/agent.py", line 1332, in _take_next_step for a in self._iter_next_step( ^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain/agents/agent.py", line 1415, in _iter_next_step yield self._perform_agent_action( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/flyang/anaconda3/envs/LLMs/lib/python3.12/site-packages/langchain/agents/agent.py", 
line 1429, in _perform_agent_action if agent_action.tool in name_to_tool_map: ^^^^^^^^^^^^^^^^^ AttributeError: 'str' object has no attribute 'tool' ### Description AgentExecutor AttributeError: 'str' object has no attribute 'tool' ### System Info System Information ------------------ > OS: Linux > OS Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024 > Python Version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] Package Information ------------------- > langchain_core: 0.3.28 > langchain: 0.3.12 > langchain_community: 0.3.9 > langsmith: 0.1.143 > langchain_anthropic: 0.3.0 > langchain_ark: 0.1.4 > langchain_cli: 0.0.35 > langchain_experimental: 0.3.3 > langchain_google_community: 2.0.2 > langchain_google_genai: 2.0.7 > langchain_groq: 0.2.1 > langchain_huggingface: 0.1.2 > langchain_milvus: 0.1.7 > langchain_openai: 0.2.11 > langchain_text_splitters: 0.3.3 > langchain_together: 0.2.0 > langchain_xai: 0.1.1 > langgraph_sdk: 0.1.36 > langserve: 0.3.0 Other Dependencies ------------------ > aiohttp: 3.11.2 > anthropic: 0.40.0 > async-timeout: Installed. No version info available. > beautifulsoup4: 4.12.3 > dataclasses-json: 0.6.7 > db-dtypes: Installed. No version info available. > defusedxml: 0.7.1 > fastapi: 0.115.5 > filetype: 1.2.0 > gapic-google-longrunning: Installed. No version info available. > gitpython: 3.1.43 > google-api-core: 2.23.0 > google-api-python-client: 2.153.0 > google-auth-httplib2: 0.2.0 > google-auth-oauthlib: Installed. No version info available. > google-cloud-aiplatform: Installed. No version info available. > google-cloud-bigquery: Installed. No version info available. > google-cloud-bigquery-storage: Installed. No version info available. > google-cloud-contentwarehouse: Installed. No version info available. > google-cloud-core: 2.4.1 > google-cloud-discoveryengine: Installed. No version info available. > google-cloud-documentai: Installed. No version info available. > google-cloud-documentai-toolbox: Installed. 
No version info available. > google-cloud-speech: Installed. No version info available. > google-cloud-storage: Installed. No version info available. > google-cloud-texttospeech: Installed. No version info available. > google-cloud-translate: Installed. No version info available. > google-cloud-vision: Installed. No version info available. > google-generativeai: 0.8.3 > googlemaps: Installed. No version info available. > gritql: 0.1.5 > groq: 0.13.0 > grpcio: 1.67.1 > httpx: 0.27.2 > httpx-sse: 0.4.0 > huggingface-hub: 0.26.2 > jsonpatch: 1.33 > langserve[all]: Installed. No version info available. > numpy: 2.2.0 > openai: 1.54.4 > orjson: 3.10.11 > packaging: 24.2 > pandas: 2.2.3 > pyarrow: 18.1.0 > pydantic: 2.9.2 > pydantic-settings: 2.6.1 > pymilvus: 2.5.0 > PyYAML: 6.0.2 > requests: 2.32.3 > requests-toolbelt: 1.0.0 > sentence-transformers: 3.3.0 > SQLAlchemy: 2.0.35 > sse-starlette: 1.8.2 > tenacity: 9.0.0 > tiktoken: 0.8.0 > tokenizers: 0.20.3 > tomlkit: 0.12.0 > transformers: 4.46.2 > typer[all]: Installed. No version info available. > typing-extensions: 4.12.2 > uvicorn: 0.32.0 > volcengine-python-sdk[ark]: Installed. No version info available.
🤖:bug
low
Critical
2,760,526,027
transformers
Memory leak on python 3.10.*
### System Info

A memory leak is observed when using the `KVEmbedding` class with Python version `3.10.*`. The same code does not exhibit the memory leak issue when running on Python `3.8.11`. The issue may arise due to differences in how Python `3.10.*` handles memory allocation, deallocation, or compatibility with the libraries used.

---

**Setup:**

1. **Environment:**
   - Python `3.8.11` (No memory leak observed)
   - Python `3.10.*` (Memory leak occurs)
2. **Dependencies:**
   - `tokenizers==0.20.3`
   - `torch==2.0.1+cu117`
   - `torchvision==0.15.2+cu117`
   - `tqdm==4.67.0`
   - `transformers==4.46.0`

---

**Attempts to Resolve:**

We tried various strategies to address the memory leak, but none were successful. These include:

1. **Explicit Garbage Collection:**
   - Used `gc.collect()` to manually invoke garbage collection after each batch.
2. **Variable Deletion:**
   - Explicitly deleted intermediate variables with `del` to release memory.
3. **CUDA Cache Management:**
   - Used `torch.cuda.empty_cache()` to free up GPU memory.
4. **Library Versions:** Tried multiple versions of tokenizers and transformers libraries but observed no improvement.

Despite these efforts, the memory leak persisted in Python `3.10.*`.

---

**Call for Assistance**: We have exhausted our efforts to identify and resolve the memory leak issue. If anyone with expertise in Python memory management, PyTorch, or Hugging Face Transformers can assist, we would greatly appreciate your help

### Who can help?

@ArthurZucker

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below) ### Reproduction ```python import profile import torch import torch.nn.functional as F from transformers import AutoModel from transformers import AutoTokenizer import gc import numpy as np import onnxruntime as ort from faker import Faker from memory_profiler import profile from typing import Mapping, Dict, List import json # Create a Faker instance with Japanese locale fake = Faker('ja_JP') # Generate random Japanese text def generate_random_japanese_text(): return fake.text() def move_to_cuda(sample): if len(sample) == 0: return {} def _move_to_cuda(maybe_tensor): if torch.is_tensor(maybe_tensor): return maybe_tensor.cuda(non_blocking=True) elif isinstance(maybe_tensor, dict): return {key: _move_to_cuda(value) for key, value in maybe_tensor.items()} elif isinstance(maybe_tensor, list): return [_move_to_cuda(x) for x in maybe_tensor] elif isinstance(maybe_tensor, tuple): return tuple([_move_to_cuda(x) for x in maybe_tensor]) elif isinstance(maybe_tensor, Mapping): return type(maybe_tensor)({k: _move_to_cuda(v) for k, v in maybe_tensor.items()}) else: return maybe_tensor return _move_to_cuda(sample) def create_batch_dict(tokenizer, input_texts, max_length: int = 512): return tokenizer( input_texts, max_length=max_length, padding=True, pad_to_multiple_of=8, return_token_type_ids=False, truncation=True, return_tensors='pt' ) def pool(last_hidden_states, attention_mask, pool_type: str): last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) if pool_type == "avg": emb = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] elif pool_type == "weightedavg": # position-weighted mean pooling from SGPT (https://arxiv.org/abs/2202.08904) attention_mask *= attention_mask.cumsum(dim=1) # [0,1,1,1,0,0] -> [0,1,2,3,0,0] s = torch.sum(last_hidden * attention_mask.unsqueeze(-1).float(), dim=1) d = attention_mask.sum(dim=1, keepdim=True).float() emb = s / d elif pool_type == "cls": emb = 
last_hidden[:, 0] elif pool_type == "last": left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0]) if left_padding: emb = last_hidden[:, -1] else: sequence_lengths = attention_mask.sum(dim=1) - 1 batch_size = last_hidden.shape[0] emb = last_hidden[torch.arange(batch_size, device=last_hidden.device), sequence_lengths] else: raise ValueError(f"pool_type {pool_type} not supported") return emb class KVEmbedding: def __init__(self, device): self.device = device # Load tokenizer and model from pretrained multilingual-e5-small self.tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-small") self.model = AutoModel.from_pretrained("intfloat/multilingual-e5-small").to(self.device) self.model.eval() # Set model to evaluation mode def average_pool(self, last_hidden_states, attention_mask): # Apply mask to hidden states, set masked positions to 0 last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) # Average the hidden states along the sequence dimension return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] @profile def embedding(self, l_transcription, batch_size=32): # Tokenize input transcriptions batch_dict = self.tokenizer( l_transcription, max_length=512, padding=True, truncation=True, return_tensors="pt", ).to(self.device) return batch_dict def _do_encode(self, input_texts) -> np.ndarray: encoded_embeds = [] batch_size = 64 for start_idx in range(0, len(input_texts), batch_size): batch_input_texts = input_texts[start_idx: start_idx + batch_size] batch_dict = create_batch_dict(self.tokenizer, batch_input_texts) batch_dict = move_to_cuda(batch_dict) return encoded_embeds import random from faker import Faker # # Lists of Japanese characters hiragana = ["あ", "い", "う", "え", "お", "か", "き", "く", "け", "こ", "さ", "し", "す", "せ", "そ", "た", "ち", "つ", "て", "と", "な", "に", "ぬ", "ね", "の", "は", "ひ", "ふ", "へ", "ほ", "ま", "み", "む", "め", "も", "や", "ゆ", "よ", "ら", "り", "る", "れ", "ろ", "わ", "を", "ん"] katakana = ["ア", 
"イ", "ウ", "エ", "オ", "カ", "キ", "ク", "ケ", "コ", "サ", "シ", "ス", "セ", "ソ", "タ", "チ", "ツ", "テ", "ト", "ナ", "ニ", "ヌ", "ネ", "ノ", "ハ", "ヒ", "フ", "ヘ", "ホ", "マ", "ミ", "ム", "メ", "モ", "ヤ", "ユ", "ヨ", "ラ", "リ", "ル", "レ", "ロ", "ワ", "ヲ", "ン"] kanji = ["日", "本", "語", "学", "校", "生", "時", "間", "人", "大", "小", "中", "山", "川", "口", "目", "耳", "手", "足", "力", "男", "女", "子", "父", "母"] # Combine all character sets all_characters = hiragana + katakana + kanji # Generate random Japanese text def generate_random_japanese(length): return ''.join(random.choices(all_characters, k=length)) def remove_invalid_characters(valid_chars, text): """ Removes all invalid characters from the given text, keeping only the characters present in char_dicts. Args: char_dicts (dict): Dictionary of valid characters. text (str): Input text string. Returns: str: Text string with only valid characters. """ # Convert dict keys to a set for faster lookup filtered_text = ''.join(c for c in text if c in valid_chars) return filtered_text if __name__ == "__main__": print("Start app ...") with open("multilingual-e5-small/tokenizer.json", 'r') as file: character_info = json.load(file) character_dict = {} print("Vocab is loading ...") for data in character_info["model"]["vocab"]: character_dict[data[0]] = data[1] valid_chars = set(character_dict.keys()) print("Start loading model") kv_embedding = KVEmbedding('cuda') print("Loading model: Done!!!") for i in range(7500): print(f"============{i}==============") length = random.randint(600, 1000) # print(length) input_texts = [] for s in range(length): text_length = random.randint(1, 10000) random_text = generate_random_japanese(text_length) # before = len(random_text) random_text = remove_invalid_characters(valid_chars, random_text) # after = len(random_text) # if after != before: # print(before, after) random_text = random_text[:450] input_texts.append(random_text) filter_output = input_texts[:512] del input_texts # print(len(filter_output)) output = 
kv_embedding.embedding(filter_output) ``` ### Logs ![newplot](https://github.com/user-attachments/assets/732937a4-3747-4bbd-ae3c-4b4ee214c52c) ```txt ============4============== Filename: test_kv_embed.py Line # Mem usage Increment Occurrences Line Contents ============================================================= 92 2293.9 MiB 2293.9 MiB 1 @profile 93 def embedding(self, l_transcription, batch_size=32): 94 # Tokenize input transcriptions 95 2295.7 MiB 1.8 MiB 3 batch_dict = self.tokenizer( 96 2293.9 MiB 0.0 MiB 1 l_transcription, 97 2293.9 MiB 0.0 MiB 1 max_length=512, 98 2293.9 MiB 0.0 MiB 1 padding=True, 99 2293.9 MiB 0.0 MiB 1 truncation=True, 100 2293.9 MiB 0.0 MiB 1 return_tensors="pt", 101 2295.7 MiB 0.0 MiB 1 ).to(self.device) 102 103 2295.7 MiB 0.0 MiB 1 return batch_dict ============5============== Filename: test_kv_embed.py Line # Mem usage Increment Occurrences Line Contents ============================================================= 92 2295.7 MiB 2295.7 MiB 1 @profile 93 def embedding(self, l_transcription, batch_size=32): 94 # Tokenize input transcriptions 95 2296.5 MiB 0.8 MiB 3 batch_dict = self.tokenizer( 96 2295.7 MiB 0.0 MiB 1 l_transcription, 97 2295.7 MiB 0.0 MiB 1 max_length=512, 98 2295.7 MiB 0.0 MiB 1 padding=True, 99 2295.7 MiB 0.0 MiB 1 truncation=True, 100 2295.7 MiB 0.0 MiB 1 return_tensors="pt", 101 2296.5 MiB 0.0 MiB 1 ).to(self.device) 102 103 2296.5 MiB 0.0 MiB 1 return batch_dict ``` ### Expected behavior No memory leaks occur on Python 3.10.*.
dependencies,bug
low
Critical
2,760,534,230
neovim
`:keeppatterns` does not keep the last substitution string
### Problem

Hi, I encountered this problem with the trim.nvim plugin, which uses `:keeppatterns %s/...` under the hood. Every time I wrote a file with this plugin enabled, my last substitution was just an empty string. I reported this [here](https://github.com/cappyzawa/trim.nvim/issues/27). I have since replaced trim.nvim with conform.nvim, but in order to help the plugin author out, I wrote up repro steps for him and discovered that the problem seems to come from neovim itself instead of the plugin. I did find multiple issues in the neovim repo that somewhat relate to this problem, but I got the impression that `:keeppatterns` should already work as I expected. Especially since `:help :keeppatterns` also already says it should preserve the substitution string.

### Steps to reproduce

1. Start neovim with `nvim --clean test.txt`
2. Add these buffer contents:
   ```
   foo
   foo
   foo
   ```
3. Execute `:%s/foo/bar/`.
4. Observe that all "foo"s have been replaced by "bar"s.
5. Press "u" to undo.
6. Execute `:keeppatterns %s/bla//`
7. Observe that nothing happened to the buffer.
8. Execute `:%s/foo/~/`.
9. Observe that all "foo"s have been replaced with an empty string. The expectation is that they would be replaced by "bar"s.
10. Press "u" to undo.
11. Execute `:%s/foo/bar/`.
12. Press "u" to undo.
13. Execute `%s/foo/~/`.
14. Observe that all "foo"s have been replaced by "bar"s.

Note that this can also be reproduced with the `g&` normal mode command.

### Expected behavior

The substitution pattern should remain the one of the last substitute command called without `:keeppatterns`.

### Nvim version (nvim -v)

NVIM v0.11.0-dev-1419+g46c7faa00b

### Vim (not Nvim) behaves the same?

yes, vim 9.1, patches 1-954

### Operating system/version

Linux 6.12.6-arch1-1

### Terminal name/version

wezterm 20240922-151228-2b76c63b

### $TERM environment variable

xterm-256color

### Installation

AUR: neovim-git
needs:vim-patch
low
Minor
2,760,537,897
godot
GPUParticles3D using Directed Points does not have correct initial velocity
### Tested versions

Reproducible in v4.3.stable.mono.official [77dcf97d8]

### System information

Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 (NVIDIA; 30.0.15.1006) - Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (12 Threads)

### Issue description

When using Directed Points, the particles don't actually move in the direction of the normal. Instead the direction is unpredictable.

A similar issue was brought up in 4.2 https://github.com/godotengine/godot/issues/83811 and resolved with https://github.com/godotengine/godot/pull/83831 so it's likely this regression was added there.

### Steps to reproduce

Same as https://github.com/godotengine/godot/issues/83811

Create MeshInstance3D and give it a mesh. Create particles and give it a Draw pass. Give it a ParticleProcessMaterial and give the particles an initial velocity. Use "Create Emission Points from Node" and choose Surface Points + Normal (Directed) to create a Emission Points Texture and Emission Normal Texture.

### Minimal reproduction project (MRP)

[particlesissue.zip](https://github.com/user-attachments/files/18258542/particlesissue.zip)
bug,topic:3d,topic:particles
low
Minor
2,760,548,493
opencv
Please upload ONNX model to reproduce error
[ERROR:[email protected]] global net_impl.cpp:1164 cv::dnn::dnn4_v20221220::Net::Impl::getLayerShapesRecursively OPENCV/DNN: [Reshape]:(onnx_node!/model.22/dfl/Reshape): getMemoryShapes() throws exception. inputs=1 outputs=1/1 blobs=0
[ERROR:[email protected]] global net_impl.cpp:1167 cv::dnn::dnn4_v20221220::Net::Impl::getLayerShapesRecursively input[0] = [ 1 64 1029 ]
[ERROR:[email protected]] global net_impl.cpp:1171 cv::dnn::dnn4_v20221220::Net::Impl::getLayerShapesRecursively output[0] = [ ]
[ERROR:[email protected]] global net_impl.cpp:1177 cv::dnn::dnn4_v20221220::Net::Impl::getLayerShapesRecursively Exception message: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\layers\reshape_layer.cpp:109: error: (-215:Assertion failed) total(srcShape, srcRange.start, srcRange.end) == maskTotal in function 'cv::dnn::computeShapeByReshapeMask'
invalid
low
Critical
2,760,549,157
ui
[bug]: Context Menu Not Updating Position on Consecutive Right-Clicks
### Describe the bug

When using the Context Menu component, the menu opens correctly upon the first right-click. However, if the user moves the cursor to a different position within the context and right-clicks again (without first performing a left-click elsewhere), the menu does not update its position. Instead, it continues to open at the previous position until the user performs a left-click.

### Affected component/components

Context Menu

### How to reproduce

1. Right-click within the context area to open the context menu.
2. Move the cursor to another position within the context area.
3. Right-click again without performing a left-click in between.
4. Observe that the context menu still opens at the original position rather than the new cursor position.

### Codesandbox/StackBlitz link

_No response_

### Logs

_No response_

### System Info

```bash
Windows 11 Pro 24H2
```

### Before submitting

- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
bug
low
Critical
2,760,557,135
deno
`deno check` doesn't output source file information in stack trace when it can't open library file
I'm running `deno check` and getting a stack trace that only includes Deno internals, not my source file(s):

```
$ deno check main.ts
Check file:///home/dandv/deno-bugs/main.ts
error: Uncaught Error: Unable to load /home/dandv/.cache/deno/npm/registry.npmjs.org/@anthropic-ai/sdk/0.33.1/src/resources/messages.d.ts: No such file or directory (os error 2)
    at Object.getSourceFile (ext:deno_tsc/99_main_compiler.js:621:28)
    at findSourceFileWorker (ext:deno_tsc/00_typescript.js:126789:23)
    at findSourceFile (ext:deno_tsc/00_typescript.js:126705:20)
    at processImportedModules (ext:deno_tsc/00_typescript.js:127105:11)
    at findSourceFileWorker (ext:deno_tsc/00_typescript.js:126854:7)
    at findSourceFile (ext:deno_tsc/00_typescript.js:126705:20)
    at processImportedModules (ext:deno_tsc/00_typescript.js:127105:11)
    at findSourceFileWorker (ext:deno_tsc/00_typescript.js:126854:7)
    at findSourceFile (ext:deno_tsc/00_typescript.js:126705:20)
    at ext:deno_tsc/00_typescript.js:126654:22
```

I would expect to see `main.ts` and `lib.ts` at the top of (actually instead of) that stack trace.

**main.ts**

```ts
import { message } from './lib.ts';
console.log(message);
```

**lib.ts**

```ts
import type { Message } from '@anthropic-ai/sdk/src/resources/messages';
export const message: Message = 'foo'; // this is wrong, Message is an object
```

# Reproduction

1. `git clone https://github.com/dandv/deno-bug-check-stack-trace-omits-source-file`
2. `deno check main.ts`

Version: Deno 2.1.4

# Related

#27472
types
low
Critical
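For context on the type error `deno check` is surfacing: the reporter notes that `Message` is an object type, so assigning the string `'foo'` fails. The sketch below uses a hypothetical stand-in for the SDK's `Message` shape (the real `@anthropic-ai/sdk` type has more fields); it only illustrates why the assignment in `lib.ts` is rejected:

```typescript
// Hypothetical stand-in for the SDK's Message type -- an object,
// not a string, which is what makes the reporter's lib.ts fail.
interface Message {
  id: string;
  role: "assistant";
  content: { type: "text"; text: string }[];
}

// `const message: Message = 'foo'` is the error deno check reports;
// a well-typed value looks like this instead:
const message: Message = {
  id: "msg_1",
  role: "assistant",
  content: [{ type: "text", text: "foo" }],
};
```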
2,760,560,026
TypeScript
lib.dom: DOMMatrix constructor accepts TypedArrays, not only Arrays.
### ⚙ Compilation target n/a ### ⚙ Library lib.dom ### Missing / Incorrect Definition The DOMMatrix and DOMMatrixReadOnly constructors are typed as accepting only `number[]`, but they also accept `TypedArray`s in all browsers. The following code works in all browsers: ### Sample Code ```TypeScript new DOMMatrix(new Float32Array(16)) ``` ### Documentation Link The [MDN documentation](https://developer.mozilla.org/en-US/docs/Web/API/DOMMatrix/DOMMatrix) doesn't really mention this detail, and the [spec](https://drafts.fxtf.org/geometry/#dom-dommatrix-dommatrix) mentions a ["sequence"](https://webidl.spec.whatwg.org/#idl-sequence), which to me isn't very specific. Isn't a TypedArray a "sequence of numbers"? Perhaps the type should be `ArrayLike<number>`.
Bug,Help Wanted,Domain: lib.d.ts
low
Minor
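The widening the reporter suggests can be illustrated with a stand-in function (not the real lib.dom declaration): a parameter typed `ArrayLike<number>` accepts both plain arrays and TypedArrays with no cast, which is exactly what the `DOMMatrix` constructor does at runtime.

```typescript
// Stand-in mirroring how lib.dom could widen the constructor
// parameter from number[] to ArrayLike<number>.
function matrixInit(init: ArrayLike<number>): number[] {
  // Array.from accepts any ArrayLike, so both plain arrays and
  // TypedArrays flow through without a cast.
  return Array.from(init);
}

// Both calls type-check under the widened signature:
const fromArray = matrixInit([1, 0, 0, 1, 0, 0]);
const fromTyped = matrixInit(new Float32Array(16));
```

Under the current `number[]` typing, the `fromTyped` call would be a compile error even though it works in every browser.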
2,760,565,815
node
http2 compat response doesn't always emit `'close'`
Noticed this in production. The http2 compat response doesn't always emit `'close'`, which can cause a memory leak. Not sure yet when/how this occurs; I suspect it happens when the client aborts a request.
http2
low
Minor
2,760,568,080
react-native
[iOS] ScrollView with TextInput / TouchableOpacity causes weird scroll position behavior
### Description After enabling the new architecture, upgrading navigation to v7, and upgrading to react-native-screens v4.4.0, the following issues occur: When a ScrollView contains a TextInput or TouchableOpacity, the scroll behaves abnormally, and the components are re-rendered incorrectly (flickering). This issue is only observed on iOS and does not occur on Android. The app was working fine before upgrading the React Native version, as well as the versions of react-native-screens and react-navigation. ### Steps to reproduce 1. use rn 0.76.4, rn-screens v4.4.0, react-navigation v7 2. make a ScrollView with TextInput / TouchableOpacity ### React Native Version 0.76.5 ### Affected Platforms Runtime - iOS ### Areas Fabric - The New Renderer ### Output of `npx react-native info` ```text System: OS: macOS 15.1.1 CPU: (8) arm64 Apple M2 Memory: 733.66 MB / 16.00 GB Shell: version: "5.9" path: /bin/zsh Binaries: Node: version: 18.20.4 path: ~/.nvm/versions/node/v18.20.4/bin/node Yarn: version: 4.4.0 path: ~/Desktop/studymoa-app/node_modules/.bin/yarn npm: version: 10.7.0 path: ~/.nvm/versions/node/v18.20.4/bin/npm Watchman: version: 2024.10.21.00 path: /opt/homebrew/bin/watchman Managers: CocoaPods: version: 1.16.2 path: /Users/inbeomjin/.rbenv/shims/pod SDKs: iOS SDK: Platforms: - DriverKit 24.1 - iOS 18.1 - macOS 15.1 - tvOS 18.1 - visionOS 2.1 - watchOS 11.1 Android SDK: Not Found IDEs: Android Studio: 2024.1 AI-241.15989.150.2411.11948838 Xcode: version: 16.1/16B40 path: /usr/bin/xcodebuild Languages: Java: version: 17.0.13 path: /usr/bin/javac Ruby: version: 3.1.2 path: /Users/inbeomjin/.rbenv/shims/ruby npmPackages: "@react-native-community/cli": Not Found react: Not Found react-native: Not Found react-native-macos: Not Found npmGlobalPackages: "*react-native*": Not Found Android: hermesEnabled: true newArchEnabled: true iOS: hermesEnabled: true newArchEnabled: true ``` ### Stacktrace or Logs ```text non ``` ### Reproducer none ### Screenshots and Videos
https://github.com/user-attachments/assets/66ba2460-91aa-484c-bf01-57a42c1d7d46
Platform: iOS,Component: TextInput,Component: ScrollView,Needs: Author Feedback,Needs: Repro,Type: New Architecture
low
Major
2,760,599,597
rust
APIs don't just depend on function signatures
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. --> Normally Rust has the really nice property that I can focus on getting each function to compile in isolation, without worries that refactoring the body of a function will break backwards compatibility. However, I recently stumbled across a bizarre circumstance where that isolation appears to be violated. After spending a day pruning it down to a [small-ish reproducible example](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=29a0027681cc0e1326a3cdfad0380276): ```rust use std::future::Future; // Note: The original code isn't actually object-oriented; the // object-oriented framing is just a way to give the isolated example // more intuitive/memorable names than `Foo`, `Bar`, `Baz`, etc... type Object = Box<dyn std::any::Any + Send>; fn get_object() -> Object { unimplemented!() } trait AsyncFrom<T>: Sized { fn async_from(src: T) -> impl Future<Output = Self>; } impl<T, U> AsyncFrom<U> for (T, U) { async fn async_from(_src: U) -> Self { unimplemented!() } } async fn downcast<T: AsyncFrom<Object>>() -> T { T::async_from(get_object()).await } async fn get_thing() -> u32 { // Oddly, inlining this call fixes everything. downcast::<(_, _)>().await.0 //<(_, _)>::async_from(get_object()).await.0 } pub async fn does_compile() -> u32 { get_thing().await } // Commenting out all code below this line causes everything to work. // Oddly, complains about an implementation of AsyncFrom here, // even though this does not directly use that at all. 
pub fn does_not_compile() -> impl AsyncFnOnce { move || does_compile() } trait AsyncFnOnce: FnOnce() -> Self::Future { type Future: Future + Send; } impl<F, Fut> AsyncFnOnce for F where F: FnOnce() -> Fut, Fut: Future + Send, { type Future = Fut; } ``` I find the way Rust handles this code to be problematic in a couple of ways: First, the error message is nonsensical: ``` error: implementation of `AsyncFrom` is not general enough --> src/lib.rs:42:5 | 42 | move || does_compile() | ^^^^^^^^^^^^^^^^^^^^^^ implementation of `AsyncFrom` is not general enough | = note: `(u32, Box<(dyn Any + Send + '0)>)` must implement `AsyncFrom<Box<(dyn Any + Send + '1)>>`, for any two lifetimes `'0` and `'1`... = note: ...but it actually implements `AsyncFrom<Box<dyn Any + Send>>` ``` This could be an acceptable error message if it were pointing to code in `downcast`, but it's pointing to code in `does_not_compile` which doesn't even (directly) use `AsyncFrom`. This really threw me for a loop, especially because I encountered it while working on a large codebase. Second, I was surprised to discover that the body of `get_thing` influences whether the code compiles, not just the signature of `get_thing`. Inlining `downcast` into `get_thing` appears to resolve the compile error without changing code in `does_not_compile`. Alternatively, adjusting the body of `does_not_compile` to avoid calling `does_compile` fixes the compile error. So, the compile only fails due to the interaction between two different function bodies... which feels very un-Rusty to me. After seeking help in the community discord, **one 🐸, two :frog:, tree :frog:** explained that combining closures with existential types allows you to violate the usual Rust philosophy that only function signatures matter. I feel like this is extremely unfortunate, both because it destroys local reasoning and because it makes it extra hard to be sure you haven't broken backwards compatibility when merely refactoring a function body. 
Per **one 🐸, two :frog:, tree :frog:**'s suggestion, I'm writing this issue in the hopes that providing counterintuitive example code may help push for the current behavior to be changed. I'd really like it if I could worry about getting each function to compile one at a time, without having to consider the possibility that different function bodies may interact negatively. Thank you! ### Meta (this behavior is also exhibited on beta and nightly versions of Rust) `rustc --version --verbose`: ``` rustc 1.80.1 (3f5fd8dd4 2024-08-06) binary: rustc commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23 commit-date: 2024-08-06 host: x86_64-unknown-linux-gnu release: 1.80.1 LLVM version: 18.1.7 ``` (the backtrace doesn't reveal anything new:) <!-- Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your environment. E.g. `RUST_BACKTRACE=1 cargo build`. --> <details><summary>Backtrace</summary> <p> ``` --> src/lib.rs:42:5 | 42 | move || does_compile() | ^^^^^^^^^^^^^^^^^^^^^^ implementation of `AsyncFrom` is not general enough | = note: `(u32, Box<(dyn Any + Send + '0)>)` must implement `AsyncFrom<Box<(dyn Any + Send + '1)>>`, for any two lifetimes `'0` and `'1`... = note: ...but it actually implements `AsyncFrom<Box<dyn Any + Send>>` ``` </p> </details>
T-lang,T-compiler,C-bug,A-crate-compat
low
Critical
2,760,611,003
svelte
Introduce $typed() rune, complementary to $bindable()
### Describe the problem As explained in #14810 and #9241, using $props rune is inconvenient with TypeScript. ### Describe the proposed solution There is already a typed `$bindable` rune that can be used for inline types. We could introduce a `$typed` (non-bindable) rune to be able to specify fallback value and type at the declaration site of the prop, not separately: ```ts let { size = $typed<'sm' | 'md' | 'lg'>('md'), // optional color = $typed<Color>(), // required value = $bindable<string>() } = $props() ``` instead of ```ts let { size = 'md', color, value = $bindable() }: { size?: 'sm' | 'md' | 'lg', color: Color, value: string } = $props() ``` ### Importance would make my life easier
feature request,types / typescript,runes
low
Major
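A runtime mock can make the proposal concrete. The real rune would be handled by the Svelte compiler; the function below is only a sketch of the intended typing, where the type lives in the type parameter and the runtime value is just the fallback:

```typescript
// Mock of the proposed $typed rune (assumption: same call shapes as
// $bindable). At runtime it only passes the fallback through; the
// point is the declaration-site type parameter.
function $typed<T>(fallback?: T): T {
  return fallback as T;
}

const size = $typed<'sm' | 'md' | 'lg'>('md'); // optional, inferred union
const color = $typed<string>();                // required: no fallback
```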
2,760,611,517
PowerToys
Command Not Found unhandled exception causes PowerShell (pwsh) to crash
### Microsoft PowerToys version 0.87.1 ### Installation method WinGet ### Running as admin No ### Area(s) with issue? Command not found ### Steps to reproduce 1. Open Windows Terminal 2. Wait for it to launch 3. PowerShell crashes with Command Not Found unhandled exception. ![Image](https://github.com/user-attachments/assets/93f021ce-dd22-4066-9cd1-c2c3858c3067) [PowerToysReport_2024-12-27-10-01-36.zip](https://github.com/user-attachments/files/18259112/PowerToysReport_2024-12-27-10-01-36.zip) ### ✔️ Expected Behavior PowerShell should work normally alongside with Command Not Found. ### ❌ Actual Behavior PowerShell crashes from Command Not Found unhandled exception. Is it because I wasn't connected to the internet? ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Critical
2,760,618,409
next.js
Refused to apply inline style because it violates the Content Security Policy directive
### Verify canary release - [X] I verified that the issue exists in the latest Next.js canary release ### Provide environment information ```bash Operating System: Platform: win32 Arch: x64 Version: Windows 10 Enterprise Available memory (MB): 32562 Available CPU cores: 8 Binaries: Node: 20.16.0 npm: 10.9.2 Yarn: N/A pnpm: N/A Relevant Packages: next: 15.1.1-canary.21 // Latest available version is detected (15.1.1-canary.21). eslint-config-next: N/A react: 18.3.1 react-dom: 18.3.1 typescript: N/A Next.js Config: output: N/A ``` ### Which example does this report relate to? with-strict-csp ### What browser are you using? (if relevant) Chrome Version 131.0.6778.205 (Official Build) (64-bit), Microsoft Edge Version 131.0.2903.112 (Official build) (64-bit) ### How are you deploying your application? (if relevant) next start, Vercel ### Describe the Bug It appears that some of Next.js's built-in pages (e.g. the not-found page) are unable to render their CSS styling because of the Content Security Policy (CSP) value `style-src 'self' 'nonce-${nonce}';`. Console Error Message: `Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self' 'nonce-ZmU5NjMwMzgtOWE0Yi00MGMyLTliNTgtZjc4NzNjNDVjODky'". Either the 'unsafe-inline' keyword, a hash ('sha256-B3Jk7Rws8l6DgOhoy0oMP4k8gk16joGygBpDXz05ZXo='), or a nonce ('nonce-...') is required to enable inline execution. 
Note that hashes do not apply to event handlers, style attributes and javascript: navigations unless the 'unsafe-hashes' keyword is present.` Console Error Screenshot: ![screenshot-of-error](https://github.com/user-attachments/assets/47867b95-2f7e-44e9-a1d5-0f056ae13f98) Browser view Screenshot: ![browser-view-screenshot](https://github.com/user-attachments/assets/b3b5e552-acf8-47f8-bebd-e70ba95981d0) ### Expected Behavior Since the not-found page is part of the Next.js framework, it should be able to render its CSS without any issue The page should look like this: ![expected-behaviour-screenshot](https://github.com/user-attachments/assets/a1b22694-a632-45c8-86e5-dff78f5ff581) ### To Reproduce **Development mode** 1. `npx create-next-app --example with-strict-csp with-strict-csp-app` 2. `npm run dev` 3. Visit `http://localhost:3000/adasd` or any other route 4. Open the browser DevTools with `Ctrl + Shift + I` 5. Click the `Console` tab to see the console error message 7. You will see the 404 page is shown without CSS styling applied. The Console will also show the error message **Production mode** 1. `npx create-next-app --example with-strict-csp with-strict-csp-app` 2. `npm run build` 3. `npm run start` 4. Visit `http://localhost:3000/adasd` or any other route 5. Open the browser DevTools with `Ctrl + Shift + I` 6. Click the `Console` tab to see the console error message 7. You will see the 404 page is shown without CSS styling applied. The Console will also show the error message
examples
low
Critical
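For reference, the directive in question is built per-request from a fresh nonce; a minimal sketch (generic string building, not Next.js's actual middleware API) looks like this:

```typescript
// Build the style-src directive from the report for a given nonce.
// A fresh nonce must be generated per request; "abc" below is only a
// placeholder for testing the format.
function styleSrc(nonce: string): string {
  return `style-src 'self' 'nonce-${nonce}';`;
}
```

Any inline `<style>` the framework emits must carry that same nonce attribute, which is what the built-in not-found page fails to do.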
2,760,648,533
terminal
CloudShell - CloudShell in Windows Terminal Exiting
### Windows Terminal version 1.21.2361.0 ### Windows build number Windows 11 Version 23H2 (OS Build 22631.3380) ### Other Software _No response_ ### Steps to reproduce **Impact**: Customers are unable to launch Azure Cloud Shell from Windows Terminal, hindering the release of Windows Terminal to customers. Customers are experiencing an error when attempting to launch Azure Cloud Shell from Windows Terminal. . It fails with this error: [process exited with code 1 (0x00000001)] **Lab repro:** Deployed CloudShell integrated with Vnet same as customer: [Deploy Azure Cloud Shell in a virtual network with quickstart templates | Microsoft Learn](https://learn.microsoft.com/en-us/azure/cloud-shell/vnet/deployment) **Test 1:** Filled all details in the onboarding UX: ![Image](https://github.com/user-attachments/assets/7ab33fb4-9a93-45ef-a749-89fe52a9b061) ![Image](https://github.com/user-attachments/assets/208d2dcf-ff14-4284-b337-dac67d334269) We can successfully reproduce the error when Portal cloudshell is either stuck here: ![Image](https://github.com/user-attachments/assets/5c3531c0-fdfb-4cb2-8b73-7b07480edc73) or it has timed out after 20 mins In either of the above conditions, we get the same error in Terminal: ![Image](https://github.com/user-attachments/assets/5254f25e-175c-4f4f-8bf9-b04a3d0c178f) **Test 2:** Connected to cloudshell portal, then connected cloudshell terminal, both worked well. Left cloudshell portal idle for about 6 hours. ![Image](https://github.com/user-attachments/assets/b30cc6e8-6242-40e9-8f01-c13763d681f1) Session on cloudshell Windows terminal ended after a while which, I assume, is expected. ![Image](https://github.com/user-attachments/assets/5273c3f4-21db-462c-9c71-9519b65ff1f5) Launched a new cloudshell terminal session without touching cloudshell in portal. Got the same error as customer. 
![Image](https://github.com/user-attachments/assets/0f2d0926-4963-4671-9435-d2d9a27b2afd) **Workaround**: As long as portal cloudshell is not left idle to timeout, customer as well as we can successfully launch terminal cloudshell as well. If we return to portal cloudshell and relaunch it, then windows terminal cloudshell starts successfully. ### Expected Behavior Cloudshell in Windows terminal should launch ### Actual Behavior Customer is trying to launch Azure Cloud Shell from Windows Terminal. It fails with this error: "Requesting a cloud shell instance...Succeeded.Requesting a terminal (this might take a while)...[process exited with code 1 (0x00000001)] "
Issue-Bug,Product-Terminal,Area-AzureShell,Priority-1
low
Critical
2,760,668,431
excalidraw
Shortcut to write [needed QOL feature]
If I want to quickly scribble something on my laptop, it's hard to write while keeping the trackpad pressed down, so please add an option to write using a shortcut key. In case I'm not being clear, an example: if I hold, say, the Option or Command key on my Mac with one hand and then just swipe on my trackpad, that should register as an indication to track the mouse position and draw whatever I swipe. It's like signing a PDF on a Mac using the Preview app: when you have to sign, you just click once on the spot and draw your signature on the trackpad without it being pressed down the whole time while writing. I think this could be implemented easily, so please do; it would make writing on laptops so much easier.
enhancement,freedraw
low
Minor
2,760,686,503
flutter
TextField EditableText class causing crashes
### Steps to reproduce 1. create a text field with expands enabled 2. enter long text in it, then directly scroll with selection up and down fast; you can see the crash ### Expected results the text field should not crash ### Actual results ``` EditableTextState._handleContextMenuOnScroll.<fn> (editable_text.dart:3880) SchedulerBinding._invokeFrameCallback (binding.dart:1397) SchedulerBinding.handleDrawFrame (binding.dart:1331) SchedulerBinding._handleDrawFrame (binding.dart:1176) ``` ### Code sample <details open><summary>Code sample</summary> ```dart [Paste your code here] ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [Paste your output here] ``` </details>
waiting for customer response,in triage
low
Critical