| column | dtype | range / distinct values |
| --- | --- | --- |
| id | int64 | 393k – 2.82B |
| repo | stringclasses | 68 values |
| title | stringlengths | 1 – 936 |
| body | stringlengths | 0 – 256k |
| labels | stringlengths | 2 – 508 |
| priority | stringclasses | 3 values |
| severity | stringclasses | 3 values |
2,721,971,529
angular
Improve Lifecycle Hooks Documentation with Expanded Examples and Error Handling
### Describe the problem that you experienced The documentation provides good insights into lifecycle hooks but lacks depth in certain areas. Key concerns include: 1. Insufficient examples for real-world scenarios (e.g., combining hooks effectively). 2. Limited guidance on debugging errors such as `ExpressionChangedAfterItHasBeenCheckedError`. 3. No recommendations for testing lifecycle hooks. ### Enter the URL of the topic with the problem https://angular.dev/guide/components/lifecycle ### Describe what you were looking for in the documentation I was looking for: 1. Clear examples of how to use lifecycle hooks in realistic scenarios. 2. Debugging strategies for common errors (e.g., `ExpressionChangedAfterItHasBeenCheckedError`). 3. A section on testing lifecycle hooks for developers writing unit tests. ### Describe the actions that led you to experience the problem 1. Navigated to the Lifecycle Hooks documentation to understand ngOnChanges and ngOnInit. 2. Attempted to find guidance on debugging `ExpressionChangedAfterItHasBeenCheckedError`. 3. Looked for examples of testing hooks but found none. ### Describe what you want to experience that would fix the problem To address the gaps, I propose the following: 1. Add detailed examples demonstrating real-world usage for hooks like `ngOnChanges `and `ngAfterViewInit`. 2. Introduce a debugging section for common lifecycle-related errors. 3. Include a section with sample unit tests for lifecycle hooks. ### Add a screenshot if that helps illustrate the problem _No response_ ### If this problem caused an exception or error, please paste it here ```true No direct error was caused by this problem, but the lack of debugging guidance for `ExpressionChangedAfterItHasBeenCheckedError` is a significant gap. ``` ### If the problem is browser-specific, please specify the device, OS, browser, and version ```true Not applicable for this documentation issue. ``` ### Provide any additional information here in as much as detail as you can ```true Adding examples and debugging guidance will significantly improve the usability of this documentation, helping both novice and advanced developers. Suggestions for debugging errors can also include practical tips like isolating changes to earlier hooks or deferring state changes to subsequent lifecycle phases. ```
area: docs
low
Critical
2,721,980,257
tensorflow
The warning "The structure of `inputs` doesn't match the expected structure" when training a functional model
### Issue type Bug ### Have you reproduced the bug with TensorFlow Nightly? Yes ### Source binary ### TensorFlow version v2.13.1-0-gf841394b1b7 2.13.1 (Nightly: v1.12.1-119104-gf8fd6f53fa3 2.19.0-dev20241204) ### Custom code Yes ### OS platform and distribution Windows 11 23H2 22631.4460 ### Mobile device Windows 11 23H2 22631.4460 ### Python version 3.10 ### Bazel version _No response_ ### GCC/compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current behavior? When the model is functional, not Sequential, the warning has occured: ``` Epoch 1/5 <path-to-python>\lib\site-packages\keras\src\models\functional.py:225: UserWarning: The structure of `inputs` doesn't match the expected structure: ['keras_tensor']. Received: the structure of inputs=* warnings.warn( ``` Yes, the warning message has interrupted on parenthesis. When I've run the same code in Nightly, the warning message is: ``` Epoch 1/5 <path-to-python>\lib\site-packages\keras\src\models\functional.py:237: UserWarning: The structure of `inputs` doesn't match the expected structure. Expected: ['keras_tensor'] Received: inputs=Tensor(shape=(None, 10)) warnings.warn(msg) ``` After the warning, the training continues normally, but because of this warning, I can't be sure that the model works as I expect. I've traced the source and found that in `Lib\site-packages\keras\src\tree\optree_impl.py` on line 95 comparasion of expected and actual structure failed. Now I place the traced variables here: ``` >>> a <tf.Tensor 'data:0' shape=(None, 10) dtype=float64> >>> b [<KerasTensor shape=(None, 10), dtype=float32, sparse=False, name=keras_tensor>] >>> a_structure PyTreeSpec(*, NoneIsLeaf) >>> b_structure PyTreeSpec([*], NoneIsLeaf) ``` The data passed to the `fit` function fully corresponds to the [documentation](https://keras.io/api/models/model_training_apis/#fit-method). The warning appears independently of whether I use numpy array or PyDataset as dataset of `fit` function. ### Standalone code to reproduce the issue ```shell from keras.models import Model from keras.layers import Dense, Input, Flatten, Concatenate from keras import utils import numpy as np import tensorflow as tf class SamplesSet(utils.PyDataset): def __init__(self, batch_size, **kwargs): super().__init__(**kwargs) self.batch_size = batch_size def __len__(self): return 1 def __getitem__(self, idx): x1 = np.random.uniform(size=10*self.batch_size).reshape((self.batch_size, 10)) y = np.arange(self.batch_size) return x1, y train = SamplesSet(100) x1_train = np.random.uniform(size=10*100).reshape((100, 10)) y_train = np.arange(100) input1 = Input(shape=(10,)) l1 = Dense(1)(input1) d2 = Dense(1, activation='sigmoid')(l1) model = Model(inputs=[input1], outputs=[d2]) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) history = model.fit(x1_train, y_train, epochs=5, verbose=1) # In the all cases below warning occures too # history = Model.fit(train, epochs=5, verbose=1) # ret = model.predict(np.arange(10)[np.newaxis,:]) # ret = model.predict(tf.constant([[0,1,2,3,4,5,6,7,8,9]])) ``` ### Relevant log output _No response_
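A quick way to probe whether the mismatch is purely about nesting (an assumption on my part, not something confirmed upstream) is to pass the training data wrapped in a single-element list so it mirrors the expected `['keras_tensor']` structure. A trimmed-down sketch of the reproducer above:

```python
# Sketch: compare a bare array against a single-element list as fit() input.
# If the second call no longer warns and gives the same metrics, the warning
# is only about nesting (bare tensor vs. one-element list), not the data.
import numpy as np
from keras.models import Model
from keras.layers import Dense, Input

x1_train = np.random.uniform(size=(100, 10))
y_train = (np.arange(100) % 2).astype("float32")

input1 = Input(shape=(10,))
d2 = Dense(1, activation="sigmoid")(Dense(1)(input1))
model = Model(inputs=[input1], outputs=[d2])
model.compile(loss="binary_crossentropy", optimizer="adam")

model.fit(x1_train, y_train, epochs=1, verbose=0)    # bare array: may trigger the structure warning
model.fit([x1_train], y_train, epochs=1, verbose=0)  # one-element list matching ['keras_tensor']
```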
type:bug,comp:keras,TF 2.13
low
Critical
2,721,993,647
rust
Compiling `#[derive(Debug)] enum` + `dbg!` is quadratic with enum variants
I tried compiling this code: [lib.rs.zip](https://github.com/user-attachments/files/18032518/lib.rs.zip) The code above was generated with the following python script: ```python print( """#[derive(Debug)] pub enum Sprites {""" ) for i in range(10000): print(f" Sprite{i},") print( """} pub fn foo() { println!("{:?}", Sprites::Sprite1); }""" ) ``` I then compiled the code with `cargo build --release`. It takes 21.00s to compile on my machine, which is rather slow. It would be nice if it compiled faster. Note that reducing the number of variants from 10000 to 1000 makes the compilation take only 0.20s to compile on my machine. I suspect that something in the Debug trait impl is taking quadratic time, but only if you actually try to debug-print something. This issue was discovered by tongke on the rust community discord. ### Meta `rustc --version --verbose`: ``` rustc 1.83.0 (90b35a623 2024-11-26) binary: rustc commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf commit-date: 2024-11-26 host: aarch64-apple-darwin release: 1.83.0 LLVM version: 19.1.1 ```
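To make the quadratic suspicion measurable, a timing sweep along these lines could help (my own sketch; it shells out to plain `rustc -O` on a generated `lib.rs` rather than `cargo build --release`, so absolute times will differ from those above):

```python
# Rough timing sweep: does compile time grow quadratically with the number of
# enum variants? File and output names are arbitrary.
import os
import subprocess
import tempfile
import time

def make_source(n_variants: int) -> str:
    variants = "\n".join(f"    Sprite{i}," for i in range(n_variants))
    return (
        "#[derive(Debug)]\n"
        "pub enum Sprites {\n"
        f"{variants}\n"
        "}\n\n"
        "pub fn foo() {\n"
        '    println!("{:?}", Sprites::Sprite1);\n'
        "}\n"
    )

for n in (1000, 2000, 4000, 8000):
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "lib.rs")
        with open(src, "w") as f:
            f.write(make_source(n))
        start = time.perf_counter()
        subprocess.run(
            ["rustc", "--edition=2021", "--crate-type=lib", "-O", src, "--out-dir", tmp],
            check=True,
        )
        print(f"{n:>5} variants: {time.perf_counter() - start:.2f}s")
```

If the time roughly quadruples each time the variant count doubles, that supports the quadratic hypothesis.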
I-compiletime,A-macros,T-compiler,C-bug
low
Critical
2,722,015,966
pytorch
Wrong result from `torch.dist` with complex `dtype=torch.complex32`
### 🐛 Describe the bug The `torch.dist` function produces incorrect results when `dtypes=complex32` code: ``` python import torch complex_tensor = [1.0 + 0.0j, 0.0 + 1.0j] x_c32 = torch.tensor(complex_tensor, dtype=torch.complex32) x_c64 = torch.tensor(complex_tensor, dtype=torch.complex64) y_c32 = torch.tensor([1.0+1.0j, 1.0+1.0j], dtype=torch.complex32) y_c64 = y_c32.to(torch.complex64) print("torch.dist(x_c32, y_c32, p=2):", torch.dist(x_c32, y_c32, p=2).item()) print("torch.dist(x_c64, y_c64, p=2):", torch.dist(x_c64, y_c64, p=2).item()) ``` Output: ``` torch.dist(x_c32, y_c32, p=2): 1.0 torch.dist(x_c64, y_c64, p=2): 1.4142135381698608 ``` ### Versions ``` PyTorch version: 2.5.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.30.5 Libc version: glibc-2.35 Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.1.85+-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.20GHz CPU family: 6 Model: 79 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 Stepping: 0 BogoMIPS: 4399.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB (1 instance) L1i cache: 32 KiB (1 instance) L2 cache: 256 KiB (1 instance) L3 cache: 55 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0,1 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled) Vulnerability Srbds: Not affected 
Vulnerability Tsx async abort: Vulnerable Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.6.4.1 [pip3] nvidia-cuda-cupti-cu12==12.6.80 [pip3] nvidia-cuda-runtime-cu12==12.6.77 [pip3] nvidia-cudnn-cu12==9.5.1.17 [pip3] nvidia-cufft-cu12==11.3.0.4 [pip3] nvidia-curand-cu12==10.3.7.77 [pip3] nvidia-cusolver-cu12==11.7.1.2 [pip3] nvidia-cusparse-cu12==12.5.4.2 [pip3] nvidia-nccl-cu12==2.23.4 [pip3] nvidia-nvjitlink-cu12==12.6.85 [pip3] nvtx==0.2.10 [pip3] optree==0.13.1 [pip3] pynvjitlink-cu12==0.4.0 [pip3] torch==2.5.1+cu121 [pip3] torchaudio==2.5.1+cu121 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.20.1+cu121 [conda] Could not collect cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @anjali411 @dylanbespalko @mruberry @amjames
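Until the `complex32` path is fixed, one possible workaround (my own sketch, not an official recommendation) is to upcast to `complex64` just for the distance computation, or to take the vector norm of the upcast difference explicitly:

```python
# Workaround sketch: upcast chalf inputs before computing the distance.
import torch

x_c32 = torch.tensor([1.0 + 0.0j, 0.0 + 1.0j], dtype=torch.complex32)
y_c32 = torch.tensor([1.0 + 1.0j, 1.0 + 1.0j], dtype=torch.complex32)

d_upcast = torch.dist(x_c32.to(torch.complex64), y_c32.to(torch.complex64), p=2)
d_manual = torch.linalg.vector_norm(
    x_c32.to(torch.complex64) - y_c32.to(torch.complex64), ord=2
)
print(d_upcast.item(), d_manual.item())  # both should print ~1.4142, matching the complex64 result above
```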
module: distributions,triaged,module: complex
low
Critical
2,722,034,638
pytorch
[CI] Manywheel image should use hash based on `.ci/docker` directory
The CI docker images use the hash based on `.ci/docker` directory (default param for docker-build-dir [here](https://github.com/pytorch/pytorch/blob/39425feac799905402abe4d15667fa47c344f2d7/.github/workflows/docker-builds.yml#L105)), but the manywheel docker images use the hash based on `.ci/docker/manywheel` directory [here](https://github.com/pytorch/pytorch/blob/39425feac799905402abe4d15667fa47c344f2d7/.github/workflows/build-manywheel-images.yml#L68). This is incorrect because [build-manywheel-images.yml](https://github.com/pytorch/pytorch/blob/39425feac799905402abe4d15667fa47c344f2d7/.github/workflows/build-manywheel-images.yml) uses scripts from `.ci/docker/common` and therefore should take into account any changes in that directory as well. This will also ensure that both CI and manywheel docker images have the same tag when any changes are made in `.ci/docker`. cc @seemethere @malfet @pytorch/pytorch-dev-infra
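For illustration only (this is a hypothetical sketch of the principle, not the actual image-tagging code), the point is which directory tree feeds the hash: a content hash computed over `.ci/docker/manywheel` alone cannot change when only `.ci/docker/common` changes, while a hash over `.ci/docker` can:

```python
# Hypothetical illustration: a content hash over ".ci/docker/manywheel" misses
# edits in ".ci/docker/common", while a hash over ".ci/docker" picks them up,
# so CI and manywheel images would share the same tag for the same tree state.
import hashlib
from pathlib import Path

def tree_hash(root: str) -> str:
    h = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()[:12]

print(tree_hash(".ci/docker"))            # changes when .ci/docker/common changes
print(tree_hash(".ci/docker/manywheel"))  # does not
```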
module: ci,triaged
low
Minor
2,722,048,977
langchain
NotImplementedError in RootListenersTracer.on_llm_end callback
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code from langchain_core.chat_history import BaseChatMessageHistory, InMemoryChatMessageHistory from langchain_core.runnables.history import RunnableWithMessageHistory from langchain_ollama import ChatOllama llm = ChatOllama(base_url="http://127.0.0.1:32101", model="qwen2:latest", temperature=0.8, num_predict=1024, ) store = {} def get_session_history(session_id: str) -> BaseChatMessageHistory: if session_id not in store: store[session_id] = InMemoryChatMessageHistory() return store[session_id] with_message_history = RunnableWithMessageHistory(llm, get_session_history) config = {"configurable": {"session_id": "abc2"}} response = with_message_history.invoke( [("human", "编程.")], config=config, ) print(response.content) ### Error Message and Stack Trace (if applicable) 当然,我很乐意帮助您解决编程问题。请告诉我具体需要帮助的内容: 1. 您正在使用哪种编程语言(例如:Python, Java, C++, JavaScript等)? 2. 需要实现的功能或项目类型是什么? 3. 遇到的具体问题或错误信息是什么? 比如如果您在学习 Python 并遇到了一些问题,您可以告诉我您在编写代码时遇到的特定错误、尝试运行但没有得到预期结果的部分代码或具体场景。这样我就能提供更具体的帮助和指导。 请提供更多详细信息以便我能更好地协助您! NotImplementedError in RootListenersTracer.on_llm_end callback: NotImplementedError('Trying to load an object that doesn\'t implement serialization: {\'lc\': 1, \'type\': \'not_implemented\', \'id\': [\'ollama\', \'_types\', \'Message\'], \'repr\': "Message(role=\'assistant\', content=\'\', images=None, tool_calls=None)"}') ### Description I'm trying to use langchain ChatOllama RunnableWithMessageHistory I expect it to run normally but An abnormality message has appeared `NotImplementedError in RootListenersTracer.on_llm_end callback: NotImplementedError('Trying to load an object that doesn\'t implement serialization: {\'lc\': 1, \'type\': \'not_implemented\', \'id\': [\'ollama\', \'_types\', \'Message\'], \'repr\': "Message(role=\'assistant\', content=\'\', images=None, tool_calls=None)"}')` ### System Info aiohappyeyeballs==2.4.4 aiohttp==3.11.9 aiosignal==1.3.1 annotated-types==0.7.0 anyio==4.6.2.post1 async-timeout==4.0.3 attrs==24.2.0 certifi==2024.8.30 charset-normalizer==3.4.0 distro==1.9.0 exceptiongroup==1.2.2 filelock==3.16.1 frozenlist==1.5.0 fsspec==2024.10.0 greenlet==3.1.1 h11==0.14.0 httpcore==1.0.7 httpx==0.27.2 huggingface-hub==0.26.3 idna==3.10 Jinja2==3.1.4 jiter==0.8.0 jsonpatch==1.33 jsonpointer==3.0.0 langchain==0.3.9 langchain-core==0.3.21 langchain-ollama==0.2.1 langchain-openai==0.2.11 langchain-text-splitters==0.3.2 langsmith==0.1.147 MarkupSafe==3.0.2 mpmath==1.3.0 multidict==6.1.0 networkx==3.4.2 numpy==1.26.4 ollama==0.4.2 openai==1.57.0 orjson==3.10.12 packaging==24.2 pandas==2.2.3 propcache==0.2.1 pydantic==2.10.3 pydantic_core==2.27.1 python-dateutil==2.9.0.post0 pytz==2024.2 PyYAML==6.0.2 regex==2024.11.6 requests==2.32.3 requests-toolbelt==1.0.0 safetensors==0.4.5 six==1.17.0 sniffio==1.3.1 SQLAlchemy==2.0.36 sympy==1.13.3 tenacity==9.0.0 tiktoken==0.8.0 tokenizers==0.20.3 torch==2.2.2 tqdm==4.67.1 transformers==4.46.3 typing_extensions==4.12.2 tzdata==2024.2 urllib3==2.2.3 yarl==1.18.3
🤖:bug
low
Critical
2,722,051,977
pytorch
[MPS] nansum crashes with scalar input
### 🐛 Describe the bug The following `test_ops` test crashes on MPS and has been skipped. Checkout the following branch to repro: https://github.com/pytorch/pytorch/pull/142202 `pytest test/test_ops.py -v -k test_out_nansum_mps_float32` ### Versions Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-comprehensions==3.15.0 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] mypy==1.11.2 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] optree==0.13.0 [pip3] torch==2.6.0a0+git1841c2c [conda] numpy 1.26.4 pypi_0 pypi [conda] optree 0.13.0 pypi_0 pypi [conda] torch 2.6.0a0+git51e0996 pypi_0 pypi [conda] torchfix 0.4.0 pypi_0 pypi cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
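A guess at what the skipped `test_out` case boils down to (the exact sample inputs the OpInfo test uses are not shown here, so treat this as a hypothetical distillation): `nansum` over a 0-d input with an `out=` tensor on the MPS backend.

```python
# Hypothetical minimal repro distilled from test_out_nansum_mps_float32:
# a scalar (0-d) input together with an out= tensor on the MPS device.
import torch

if torch.backends.mps.is_available():
    x = torch.tensor(1.0, device="mps")   # 0-d (scalar) input
    out = torch.empty((), device="mps")   # 0-d out= tensor
    torch.nansum(x, out=out)              # assumed to be the case that crashes on MPS
    print(out)
```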
module: crash,triaged,module: mps
low
Critical
2,722,074,392
next.js
Monorepo build failing from version 15.0.3 onwards
### Link to the code that reproduces this issue https://github.com/t3-oss/create-t3-turbo ### To Reproduce 1. Clone repo and install the deps with `pnpm` 2. Update to version `15.0.2`, build it and it works OBS: change the root `package.json` to include the build command to only build nextjs `"build:next": "CI=true turbo run build -F @acme/nextjs...",` 3. Update to version `15.0.3` or `15.0.4`, build it again and you will see that the external `packages` are not found and the build fails, for example: ```bash @acme/nextjs:build: Creating an optimized production build ... @acme/nextjs:build: Failed to compile. @acme/nextjs:build: @acme/nextjs:build: ./src/app/layout.tsx @acme/nextjs:build: Module not found: Can't resolve '@acme/ui/theme' @acme/nextjs:build: @acme/nextjs:build: https://nextjs.org/docs/messages/module-not-found @acme/nextjs:build: @acme/nextjs:build: ./src/app/layout.tsx @acme/nextjs:build: Module not found: Can't resolve '@acme/ui/toast' @acme/nextjs:build: @acme/nextjs:build: https://nextjs.org/docs/messages/module-not-found ``` ### Current vs. Expected behavior I expected to build it normally like in the `15.0.2` version ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000 Available memory (MB): 16384 Available CPU cores: 8 Binaries: Node: 22.12.0 npm: 10.9.2 Yarn: N/A pnpm: 9.14.4 Relevant Packages: next: 15.0.4 // Latest available version is detected (15.0.4). eslint-config-next: N/A react: 18.3.1 react-dom: 18.3.1 typescript: 5.6.3 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Module Resolution, Output (export/standalone), Webpack ### Which stage(s) are affected? (Select all that apply) next build (local) ### Additional context _No response_
bug,Output (export/standalone),Module Resolution
low
Critical
2,722,076,520
deno
Deno requests env access to unspecified env var
Version: Deno 2.1.2 ```ts import AnthropicBedrock from '@anthropic-ai/bedrock-sdk'; const clientBedrock = new AnthropicBedrock(); async function consecutiveRoles(client: AnthropicBedrock, model = 'anthropic.claude-3-haiku-20240307-v1:0') { const response = await client.messages.create({ max_tokens: 10, stream: false, messages: [ { role: 'user', content: 'What is the color of the sky' }, ], model, }); console.log('Response content: ', response.content); } await consecutiveRoles(clientBedrock); // [ { type: 'text', text: 'Blue' } ] ``` Output: ``` $ deno run --env-file --allow-env=ANTHROPIC_*,AWS_*,DEBUG --allow-net bed-deno.ts ┏ ⚠ Deno requests env access. ┠─ To see a stack trace for this prompt, set the DENO_TRACE_PERMISSIONS environmental variable. ┠─ Learn more at: https://docs.deno.com/go/--allow-env ┠─ Run again with --allow-env to bypass this prompt. ┗ Allow? [y/n/A] (y = yes, allow; n = no, deny; A = allow all env permissions) > ``` This reproduces without a `.env` file, with an empty one, or with one containing valid AWS_ credentials. I expected Deno to specify which env var it wants access to, as it does for (most) other environment variables.
question
low
Critical
2,722,101,470
deno
More BrotliDecompress error info
I am getting a BrotliDecompress error on several YT clients. It fails somewhere inside Deno's Node.js compat files. ```ts import { Innertube } from "npm:youtubei.js"; const yt = await Innertube.create(); ``` Error: ```sh error: Uncaught TypeError: Failed to decompress at BrotliDecompress.transform [as _transform] (ext:deno_node/_brotli.js:69:23) at BrotliDecompress.Transform._write (ext:deno_node/_stream.mjs:4563:12) at writeOrBuffer (ext:deno_node/_stream.mjs:3520:16) at _write (ext:deno_node/_stream.mjs:3465:14) at BrotliDecompress.Writable.write (ext:deno_node/_stream.mjs:3468:14) at Readable.ondata (ext:deno_node/_stream.mjs:2744:26) at Readable.emit (ext:deno_node/_events.mjs:393:28) at Readable.read (ext:deno_node/_stream.mjs:2597:14) at flow (ext:deno_node/_stream.mjs:2933:38) at resume_ (ext:deno_node/_stream.mjs:2915:7) ``` deno: 2.1.3
node compat,dx
low
Critical
2,722,240,230
langchain
Issue with LangChain LCEL-based Chain, gemini models and google storage bucket inputs #gemini
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```from langchain_google_vertexai import ChatVertexAI from langchain_core.output_parsers import PydanticOutputParser from langchain_core.prompts import ChatPromptTemplate from langchain_core.pydantic_v1 import BaseModel, Field from langchain_core.runnables import RunnableSequence,RunnablePassthrough from langchain_google_vertexai import HarmBlockThreshold, HarmCategory from typing import List class Highlight(BaseModel): startTime: str = Field(description="Start time of the highlight.") endTime: str = Field(description="End time of the highlight.") caption: str = Field(description="A short caption.") scene_type: str = Field(description="Type of scene.") class Highlights(BaseModel): highlights: List[Highlight] parser = PydanticOutputParser(pydantic_object=Highlights) prompt = ChatPromptTemplate.from_messages([ ("system", "{system_instruction}\n'{format_instructions}'\n"), ("human", '{human_message}') ]) chain = prompt | ChatVertexAI(model="gemini-1.5-flash-002", temperature=0.4, max_tokens=8192, top_p=0.95, safety_settings={ HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE, HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE, HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE, HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE, }) | parser system_instruction_template = """ #### You are an AI-agent specializing in creating video hightlights.""" text_message1 = """Please analyze the video clip and extract highlights scences.""" media_message1 ={ "type": "media_input", "video_url": { "url": "https://storage.googleapis.com/sample-video/video.mp4", }, } result = await chain.ainvoke({ "language": "English", "system_instruction": system_instruction_template, "format_instructions": parser.get_format_instructions(), "human_message": [media_message1, text_message1] }) print(result.json(indent=2))``` ### Error Message and Stack Trace (if applicable) _No response_ ### Description I’m working on a GenAI-based application where video files are uploaded to Google Cloud Storage and processed through the Gemini Flash 1.5 model via LangChain. The application has been running successfully since early September, but recently it started returning responses that are irrelevant to the actual video content. The issue seems to lie in how LangChain is orchestrating prompts and handling the video input. Despite the video files being correctly retrieved via gsutil, it appears that the files are not being passed correctly to the LLM, resulting in hallucinated or off-topic responses. When I directly use the Gemini Flash 1.5 API without LangChain, the responses are accurate. Expected Behavior: The model should return video content-specific insights (e.g., extracting key scenes based on dialogues and interactions). Current Behavior: The model is generating irrelevant responses that do not reflect the video content, suggesting a breakdown in the input-output handling when using LangChain. 
### System Info Python 3.8.10 langchain-google-vertexai==1.0.6 langchain-core==0.2.41 Model version "gemini-1.5-flash-002"
Ɑ: core
low
Critical
2,722,258,341
storybook
[Bug]: Storybook hangs/doesn't finish building 8.4.6/8.4.7
### Describe the bug We're using Storybook (with React) in version 8.4.5. I noticed that our Renovate update failed for 8.4.6 and now 8.4.7 as well. I didn't create a bug ticket so far, because I cannot reproduce it locally and only see it in our CI (Gitlab). I decided to create a ticket now as I was checking if someone else has the same problem, but I couldn't find anything related. Maybe it helps someone else to jump into the discussion (and ideally a solution later). (https://github.com/storybookjs/storybook/issues/29715 seems to be about a different issue.) ### Reproduction link #sorry-:-( ### Reproduction steps _No response_ ### System Storybook in the mentioned versions, Gitlab CI and `$ node -v v22.11.0`. ### Additional context The last output I get after running `$ storybook build --output-dir storybook` is this: > Generate File to /builds/xxx/yyy/storybook/meta.json info => Preview built (1.33 min) info => Output directory: /builds/xxx/yyy/storybook attention => Storybook now collects completely anonymous telemetry regarding usage. This information is used to shape Storybook's roadmap and prioritize features. You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL: https://storybook.js.org/telemetry Afterwards my CI job runs into a timeout. I tried to get more information by using different log levels and --debug, but couldn't get any further detail. I will try to debug more, and if I find a solution/workaround I will update (and close) this issue.
bug,has workaround,build-storybook,sev:S2
medium
Critical
2,722,302,069
yt-dlp
Not invoking proper extractor on particular olympics.com URL
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting a bug unrelated to a specific site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) ### Provide a description that is worded well enough to be understood The "standart" olympics.com URL format has english-based slug and looks like this - https://olympics.com/ru/video/mex-v-jpn-men-s-semifinal-london-2012-replays But there are some videos with non-english-based slugs that are URLEncoded, like this - https://olympics.com/ru/video/%D0%B1%D1%80%D0%B0%D0%B7%D0%B8%D0%BB%D0%B8%D1%8F-%D0%BC%D0%B5%D0%BA%D1%81%D0%B8%D0%BA%D0%B0-%D1%84%D1%83%D1%82%D0%B1%D0%BE%D0%BB-%D0%BC-%D0%BB%D0%BE%D0%BD%D0%B4%D0%BE%D0%BD-2012-%D0%BA%D0%B0%D0%BA-%D1%8D%D1%82%D0%BE-%D0%B1%D1%8B%D0%BB%D0%BE Looks like URLs with these non-english slugs are not parsed properly and yt-dlp does not invoke an "OlympicsReplay" extractor trying to use "generic" extractor instead. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell yt-dlp.exe https://olympics.com/ru/video/%D0%B1%D1%80%D0%B0%D0%B7%D0%B8%D0%BB%D0%B8%D1%8F-%D0%BC%D0%B5%D0%BA%D1%81%D0%B8%D0%BA%D0%B0-%D1%84%D1%83%D1%82%D0%B1%D0%BE%D0%BB-%D0%BC-%D0%BB%D0%BE%D0%BD%D0%B4%D0%BE%D0%BD-2012-%D0%BA%D0%B0%D0%BA-%D1%8D%D1%82%D0%BE-%D0%B1%D1%8B%D0%BB%D0%BE -F -vU [debug] Command-line config: ['https://olympics.com/ru/video/%D0%B1%D1%80%D0%B0%D0%B7%D0%B8%D0%BB%D0%B8%D1%8F-%D0%BC%D0%B5%D0%BA%D1%81%D0%B8%D0%BA%D0%B0-%D1%84%D1%83%D1%82%D0%B1%D0%BE%D0%BB-%D0%BC-%D0%BB%D0%BE%D0%BD%D0%B4%D0%BE%D0%BD-2012-%D0%BA%D0%B0%D0%BA-%D1%8D%D1%82%D0%BE-%D0%B1%D1%8B%D0%BB%D0%BE', '-F', '-vU'] [debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [2b67ac300] (win_exe) [debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023) [debug] exe versions: ffmpeg 2024-11-28-git-bc991ca048-full_build-www.gyan.dev (setts), ffprobe 2024-11-28-git-bc991ca048-full_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1837 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [generic] Extracting URL: https://olympics.com/ru/video/%D0%B1%D1%80%D0%B0%D0%B7%D0%B8%D0%BB%D0%B8%D1%8F-%D0%BC%D0%B5%D0%BA%D1%81%D0%B8%D0%BA%D0%B0-%D1%84%D1%83%D1%82%D0%B1%D0%BE%D0%BB-%D0%BC-%D0%BB%D0%BE%D0%BD%D0%B4%D0%BE%D0%BD-2012-%D0%BA%D0%B0%D0%BA-%D1%8D%D1%82%D0%BE-%D0%B1%D1%8B%D0%BB%D0%BE [generic] бразилия-мексика-футбол-м-лондон-2012-как-это-было: Downloading webpage WARNING: [generic] Falling back on generic information extractor [generic] бразилия-мексика-футбол-м-лондон-2012-как-это-было: Extracting information [debug] Looking for embeds [debug] Identified a JSON LD [generic] Extracting URL: https://vod.olympicchannel.com/NBCR_-_OCS_Prod/623/92/GCDG-LONDONS14S003-_E18060601_master.m3u8#__youtubedl_smuggle=%7B%22force_videoid%22%3A+%22%5Cu0431%5Cu0440%5Cu0430%5Cu0437%5Cu0438%5Cu043b%5Cu0438%5Cu044f-%5Cu043c%5Cu0435%5Cu043a%5Cu0441%5Cu0438%5Cu043a%5Cu0430-%5Cu0444%5Cu0443%5Cu0442%5Cu0431%5Cu043e%5Cu043b-%5Cu043c-%5Cu043b%5Cu043e%5Cu043d%5Cu0434%5Cu043e%5Cu043d-2012-%5Cu043a%5Cu0430%5Cu043a-%5Cu044d%5Cu0442%5Cu043e-%5Cu0431%5Cu044b%5Cu043b%5Cu043e%22%2C+%22to_generic%22%3A+true%2C+%22referer%22%3A+%22https%3A%2F%2Folympics.com%2Fru%2Fvideo%2F%25D0%25B1%25D1%2580%25D0%25B0%25D0%25B7%25D0%25B8%25D0%25BB%25D0%25B8%25D1%258F-%25D0%25BC%25D0%25B5%25D0%25BA%25D1%2581%25D0%25B8%25D0%25BA%25D0%25B0-%25D1%2584%25D1%2583%25D1%2582%25D0%25B1%25D0%25BE%25D0%25BB-%25D0%25BC-%25D0%25BB%25D0%25BE%25D0%25BD%25D0%25B4%25D0%25BE%25D0%25BD-2012-%25D0%25BA%25D0%25B0%25D0%25BA-%25D1%258D%25D1%2582%25D0%25BE-%25D0%25B1%25D1%258B%25D0%25BB%25D0%25BE%22%7D [generic] бразилия-мексика-футбол-м-лондон-2012-как-это-было: Downloading webpage ERROR: 
[generic] Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>) File "yt_dlp\extractor\common.py", line 742, in extract File "yt_dlp\extractor\generic.py", line 2393, in _real_extract File "yt_dlp\extractor\common.py", line 911, in _request_webpage File "yt_dlp\extractor\common.py", line 898, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4162, in urlopen File "yt_dlp\networking\common.py", line 117, in send File "yt_dlp\networking\_helper.py", line 208, in wrapper File "yt_dlp\networking\common.py", line 340, in send File "yt_dlp\networking\_requests.py", line 365, in _send yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden E:\>yt-dlp.exe https://olympics.com/ru/video/mex-v-jpn-men-s-semifinal-london-2012-replays -F -vU [debug] Command-line config: ['https://olympics.com/ru/video/mex-v-jpn-men-s-semifinal-london-2012-replays', '-F', '-vU'] [debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [2b67ac300] (win_exe) [debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023) [debug] exe versions: ffmpeg 2024-11-28-git-bc991ca048-full_build-www.gyan.dev (setts), ffprobe 2024-11-28-git-bc991ca048-full_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1837 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: [email protected] from yt-dlp/yt-dlp yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp) [OlympicsReplay] Extracting URL: https://olympics.com/ru/video/mex-v-jpn-men-s-semifinal-london-2012-replays [OlympicsReplay] mex-v-jpn-men-s-semifinal-london-2012-replays: Downloading webpage [OlympicsReplay] 419ecb8b-7686-4892-88ea-27ac567f3741: Downloading legacy tokenized m3u8 url [OlympicsReplay] 419ecb8b-7686-4892-88ea-27ac567f3741: Downloading m3u8 information [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [info] Available formats for 419ecb8b-7686-4892-88ea-27ac567f3741: (formats list here) ```
site-bug,triage
low
Critical
2,722,342,121
vscode
registerCodeLensProvider
Version: 1.95.3 (user setup) Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813 Date: 2024-11-13T14:50:04.152Z Electron: 32.2.1 ElectronBuildId: 10427718 Chromium: 128.0.6613.186 Node.js: 20.18.0 V8: 12.8.374.38-electron.0 OS: Windows_NT x64 10.0.22631 Steps to Reproduce: 1. Install the register-codelens-provider extension. Please download the extension register-codelens-provider-0.0.1.vsix from https://github.com/1684838553/store. 2. Open a Java file, activate the extension, and display the CodeLens. The editor's visible area starts from the third line of code, and when you enter characters the visible area of the editor shifts. ![Image](https://github.com/user-attachments/assets/881e3119-7686-4d19-ba6b-3f7ad4819f27) The top of the editor is displayed starting from the third line. This problem can be reproduced only after the CodeLens is hidden.
info-needed,code-lens
low
Minor
2,722,392,501
godot
Wrong path to cached files in service worker for web export.
### Tested versions - Reproducible in version 4.3 stable ### System information macOS Sonoma - MacBook Pro M1 Pro (Apple Silicon) ### Issue description When exporting with the HTML web template, the project produces a service worker with the filename **index.service.worker.js**. Inside the script is a constant defined as CACHED_FILES. One of the items inside the array is **index.worker.js**. The file index.worker.js does not exist and therefore the PWA registration fails in the browser. When that entry is deleted from the array, the PWA registers correctly. I think it should either be index.service.worker.js or be removed from the array completely, since the browser handles caching of the service worker. Looks like this in the exported service worker file: ```javascript // Files that will be cached on load. /** @type {string[]} */ const CACHED_FILES = ["index.html","index.js","index.offline.html","index.icon.png","index.apple-touch-icon.png","index.worker.js","index.audio.worklet.js"]; ``` ### Steps to reproduce - Create project - Download latest export templates - Export to web ### Minimal reproduction project (MRP) Since this is about the export templates, project files don't matter.
bug,platform:web,topic:buildsystem,topic:export
low
Minor
2,722,411,435
vscode
Support `"regex"` in `"quickSuggestions"`
I would like `"regex"` to be added to `"editor.quickSuggestions"` Currently `"editor.quickSuggestions"` supports `"other"`, `"comments"` and `"strings"` `"regex"` is currently defaulted to `"off"` and cannot be changed Currently it is impossible to automatically show inline snippet suggestions within regular expressions ```jsonc "editor.quickSuggestions": { "other": "inline", "comments": "off", "strings": "inline", "regex": "inline" // New! }, ``` https://github.com/microsoft/vscode/blob/main/src/vs/editor/common/config/editorOptions.ts#L3502-L3584 https://github.com/microsoft/vscode/blob/main/src/vs/editor/contrib/suggest/browser/suggest.ts#L453-L470 https://github.com/microsoft/vscode/blob/main/src/vs/editor/contrib/suggest/browser/suggestInlineCompletions.ts#L148
feature-request,suggest
low
Minor
2,722,418,030
ant-design
Typography ellipsis does not work when used inside Checkbox
### Reproduction link [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/p/sandbox/yu-fa-tang-antd-5-22-3-forked-sl47c6?file=%2Fdemo.tsx%3A8%2C8-8%2C18) ### Steps to reproduce See the example ### What is expected? The text should be ellipsized ### What is actually happening? The text is not ellipsized | Environment | Info | | --- | --- | | antd | 5.22.3 | | React | latest | | System | mac | | Browser | chrome | <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Inactive,improvement
low
Minor
2,722,470,888
material-ui
[Dialog] Narrator automatically jumps back to the start of the dialog content when navigating to a button in the dialog
### Search keywords accessibility narrator dialog button ### Latest version - [X] I have tested the latest version ### Steps to reproduce Steps: 1. visit https://mui.com/material-ui/react-dialog/ 2. enable Windows Narrator 3. enable scan mode 4. use CapsLock + arrow keys to navigate to any sample open dialog button 5. press enter 6. use CapsLock + arrow keys to navigate to a button in the dialog 7. focus jumps back to the dialog content automatically when it lands on the button ### Current behavior 7. focus jumps back to the dialog content automatically when it lands on the button https://github.com/user-attachments/assets/9f36336c-4e97-478d-9dbf-5caf3a215938 ### Expected behavior 7. focus should stay on the button and not jump anywhere else. ### Context _No response_ ### Your environment You can reproduce it on the official documentation website.
accessibility,component: dialog,package: material-ui
low
Minor
2,722,475,217
ui
[bug]: require is not defined while setting up the shadcn-ui with vite.
### Describe the bug I am getting a `require is not defined` error while setting up shadcn-ui with Vite. PFA ![Screenshot (3)](https://github.com/user-attachments/assets/c1a9478e-c676-4708-aa97-cfab28ec56db) ![Screenshot (4)](https://github.com/user-attachments/assets/ca42eeee-4ce3-4459-ac11-ae15193ea1dc) ### Affected component/components na ### How to reproduce Set up shadcn-ui with Vite following https://ui.shadcn.com/docs/installation/vite ### Codesandbox/StackBlitz link _No response_ ### Logs _No response_ ### System Info ```bash node: v22.12.0 ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,722,536,755
vscode
On macOS with VoiceOver, one can't know the cell number in the notebook view
Type: <b>Bug</b> 1. Launch VS Code, open a Jupyter notebook, enable VoiceOver, and focus on the cells view (focused by default). 2. Navigate between cells using the up and down arrow keys. 3. Observe how VoiceOver announces the cells. Expected Behavior: In addition to the existing announcement, VoiceOver should report the cell’s positional information, e.g., “Cell 3 of 20.” Actual Behavior: No such announcement is made. Note: This appears to be a regression, possibly introduced in VS Code 195 or a similar version. While the removal of the aforementioned announcement is a welcome change on Windows—where screen readers like NVDA automatically provide such information—it creates an issue on macOS, as VoiceOver fails to replicate this behavior. As a result, users cannot easily determine their position within the notebook, leading to a more confusing experience. VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z) OS version: Darwin arm64 24.2.0 Modes: Remote OS version: Linux x64 5.4.0-176-generic <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Apple M1 Max (10 x 2400)| |GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off| |Load (avg)|3, 3, 4| |Memory (System)|32.00GB (0.60GB free)| |Process Argv|--crash-reporter-id 9b5d4c22-c9c6-418d-9268-ef070e8d74f8| |Screen Reader|yes| |VM|0%| |Item|Value| |---|---| |Remote|SSH: 10.33.80.124| |OS|Linux x64 5.4.0-176-generic| |CPUs|Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (80 x 0)| |Memory (System)|377.54GB (370.51GB free)| |VM|0%| </details><details><summary>Extensions (16)</summary> Extension|Author (truncated)|Version ---|---|--- jupyter-keymap|ms-|1.1.2 remote-ssh|ms-|0.115.1 remote-ssh-edit|ms-|0.87.0 vscode-remote-extensionpack|ms-|0.26.0 remote-explorer|ms-|0.4.3 remote-server|ms-|1.5.2 debugpy|ms-|2024.12.0 python|ms-|2024.20.0 vscode-pylance|ms-|2024.12.1 jupyter|ms-|2024.10.0 jupyter-keymap|ms-|1.1.2 jupyter-renderers|ms-|1.0.21 tensorboard|ms-|2023.10.1002992421 vscode-jupyter-cell-tags|ms-|0.1.9 vscode-jupyter-slideshow|ms-|0.1.6 code-spell-checker|str|4.0.21 (1 theme extensions excluded) </details><details> <summary>A/B Experiments</summary> ``` vsliv368cf:30146710 vspor879:30202332 vspor708:30202333 vspor363:30204092 vscod805:30301674 binariesv615:30325510 vsaa593cf:30376535 py29gd2263:31024239 c4g48928:30535728 azure-dev_surveyone:30548225 962ge761:30959799 pythonnoceb:30805159 pythonmypyd1:30879173 h48ei257:31000450 pythontbext0:30879054 cppperfnew:31000557 dsvsc020:30976470 pythonait:31006305 dsvsc021:30996838 dvdeprecation:31068756 dwnewjupyter:31046869 2f103344:31071589 nativerepl2:31139839 pythonrstrctxt:31112756 nativeloc1:31192215 cf971741:31144450 iacca1:31171482 notype1:31157159 5fd0e150:31155592 dwcopilot:31170013 stablechunks:31184530 ``` </details> <!-- generated by issue reporter -->
bug,notebook-accessibility
low
Critical
2,722,543,743
deno
error: [ERR_MODULE_NOT_FOUND] Cannot find module
Version: `Deno 2.1.3`, using `fastify`, `postgres`, and `Prisma`. **My message:** error: [ERR_MODULE_NOT_FOUND] Cannot find module 'file:///home/raqeb/.cache/deno/npm/registry.npmjs.org/undici-types/6.19.8/dispatcher' imported from 'file:///home/raqeb/.cache/deno/npm/registry.npmjs.org/undici-types/6.19.8/fetch.d.ts' at file:///home/raqeb/.cache/deno/npm/registry.npmjs.org/undici-types/6.19.8/fetch.d.ts:10:24
needs info
low
Critical
2,722,555,826
vscode
line numbers are not highlighted on the wrapped parts of long lines
Type: <b>Bug</b> when line warpping is enabled and the cursor is placed on the wrapped portion of the line, the line numbers that usually turn bold to indicate selection are not ![highlight on short line](https://github.com/user-attachments/assets/2e2af4a1-1ab2-470c-924e-fc51c301e4d3) ![highlight on main part of long line](https://github.com/user-attachments/assets/0eeea22f-f20c-4f89-bd6f-0ac188894288) ![no highlight on wrapped part](https://github.com/user-attachments/assets/1694d4b2-454a-44be-941f-0a0cdb2a93d4) VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z) OS version: Windows_NT x64 10.0.22631 Modes: <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz (16 x 2904)| |GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off| |Load (avg)|undefined| |Memory (System)|15.72GB (4.33GB free)| |Process Argv|--crash-reporter-id f1ede49c-e57c-4921-bbff-13fec967fa80| |Screen Reader|no| |VM|0%| </details><details><summary>Extensions (2)</summary> Extension|Author (truncated)|Version ---|---|--- git-graph|mhu|1.30.0 remote-ssh|ms-|0.115.1 </details><details> <summary>A/B Experiments</summary> ``` vsliv368:30146709 vspor879:30202332 vspor708:30202333 vspor363:30204092 vscod805:30301674 binariesv615:30325510 vsaa593:30376534 py29gd2263:31024239 vscaac:30438847 c4g48928:30535728 azure-dev_surveyone:30548225 962ge761:30959799 pythonnoceb:30805159 asynctok:30898717 pythonmypyd1:30879173 h48ei257:31000450 pythontbext0:30879054 cppperfnew:31000557 dsvsc020:30976470 pythonait:31006305 dsvsc021:30996838 jg8ic977:31013176 dvdeprecation:31068756 dwnewjupytercf:31046870 impr_priority:31102340 nativerepl1:31139838 pythonrstrctxt:31112756 cf971741:31144450 iacca1:31171482 notype1:31157159 5fd0e150:31155592 dwcopilot:31170013 j44ff735:31181874 ``` </details> <!-- generated by issue reporter -->
bug,polish,editor-wrapping
low
Critical
2,722,569,288
puppeteer
[Feature]: new deprecations should show up in the changelog
### Feature description Currently, if a deprecation happens as part of a `docs:` change, it will not be reflected in the changelog because all `docs:` changes are excluded. At the same time, deprecations are neither fixes nor features. Perhaps we need to find a way to automatically get the list of newly deprecated APIs for each release.
feature,good first issue,P3
low
Minor
2,722,624,215
vscode
`IgnoreFile` compares file paths in a case-sensitive way on windows/macOS
`IgnoreFile` checks if a path belongs to a folder in the following way: `if (!path.startsWith(dirPath)) { return false; }` This does not take into account that paths are not case-sensitive on windows and macOS. Should it? https://github.com/microsoft/vscode/blob/f9ec787a7770b07b72b76e8dbdd56ef949fa2f70/src/vs/workbench/services/search/common/ignoreFile.ts#L117-L125
bug,search
low
Minor
2,722,635,703
vscode
Color Picker shows in comments for github issue numbers
![Image](https://github.com/user-attachments/assets/89b118ce-645f-416b-ae09-e008404f26f7) Are we able to tell if the HEX code is located inside a comment? If so, could we not show the color picker in that case?
polish,under-discussion,editor-color-picker
low
Minor
2,722,660,543
electron
Docs: Missing some properties in AuthenticationResponseDetails
### Preflight Checklist - [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success. ### Electron Version 33.3.0 ### What operating system(s) are you using? Windows ### Operating System Version Windows 11 ### What arch are you using? x64 ### Last Known Working Electron version _No response_ ### Expected Behavior ![Image](https://github.com/user-attachments/assets/7f12dd31-002d-4221-add4-75f03ce9589a) ```ts interface AuthenticationResponseDetails { url: string; pid: number; isMainFrame: boolean; // .... } ``` ### Actual Behavior ```ts interface AuthenticationResponseDetails { url: string; pid: number; } ``` ### Testcase Gist URL _No response_ ### Additional Information _No response_
bug :beetle:,status/confirmed,component/webcontents,component/app
low
Critical
2,722,672,503
flutter
Assets not available in web tests (`flutter test --platform chrome`)
### Steps to reproduce 1. Create a blank flutter project. Place any image at `assets/whatever.png` and add it in your pubspec assets. 2. Run the test code provided in the "Code sample" section: - `flutter test` succeeds - `flutter test --platform chrome` times out ### Expected results Without `--platform chrome`: ```console ahann@Adils-Mac-mini example % flutter test test/asset_bundle_test.dart 00:01 +2: All tests passed! ``` ### Actual results With `--platform chrome`: ```console ahann@Adils-Mac-mini example % flutter test test/asset_bundle_test.dart --platform chrome 01:01 +0 -1: precache AssetImage [E] TimeoutException after 0:01:00.000000: Test timed out after 1 minutes. packages/test_api/src/backend/declarer.dart.js 766:36 <fn> dart-sdk/lib/async/zone.dart 1391:47 _rootRun dart-sdk/lib/async/zone.dart 1301:19 run packages/test_api/src/backend/declarer.dart.js 764:12 [_handleError] packages/test_api/src/backend/declarer.dart.js 741:29 <fn> dart-sdk/lib/async/zone.dart 1399:13 _rootRun dart-sdk/lib/async/zone.dart 1301:19 run packages/test_api/src/backend/declarer.dart.js 740:48 <fn> dart-sdk/lib/_internal/js_dev_runtime/private/isolate_helper.dart 48:11 internalCallback To run this test again: /Volumes/m2/ExternUsers/ahann/Documents/Sources/flutter/bin/cache/dart-sdk/bin/dart test test/asset_bundle_test.dart -p chrome --plain-name 'precache AssetImage' 02:01 +0 -2: load asset manually [E] TimeoutException after 0:01:00.000000: Test timed out after 1 minutes. packages/test_api/src/backend/declarer.dart.js 766:36 <fn> dart-sdk/lib/async/zone.dart 1391:47 _rootRun dart-sdk/lib/async/zone.dart 1301:19 run packages/test_api/src/backend/declarer.dart.js 764:12 [_handleError] packages/test_api/src/backend/declarer.dart.js 741:29 <fn> dart-sdk/lib/async/zone.dart 1399:13 _rootRun dart-sdk/lib/async/zone.dart 1301:19 run packages/test_api/src/backend/declarer.dart.js 740:48 <fn> dart-sdk/lib/_internal/js_dev_runtime/private/isolate_helper.dart 48:11 internalCallback To run this test again: /Volumes/m2/ExternUsers/ahann/Documents/Sources/flutter/bin/cache/dart-sdk/bin/dart test test/asset_bundle_test.dart -p chrome --plain-name 'load asset manually' 02:01 +0 -2: Some tests failed. 
``` ### Code sample <details open><summary>Code sample</summary> ```dart import 'package:flutter/material.dart'; import 'package:flutter/services.dart'; import 'package:flutter_test/flutter_test.dart'; void main() { testWidgets('precache AssetImage', (tester) async { const imageProvider = AssetImage('assets/whatever.png'); await tester.pumpWidget(const Image( image: imageProvider, )); final BuildContext context = tester.element(find.byType(Image)); await expectLater( tester.runAsync(() => precacheImage(imageProvider, context)), completes, ); }, timeout: const Timeout(Duration(minutes: 1))); test('load asset manually', () async { await expectLater(rootBundle.load('assets/whatever.png'), completes); }, timeout: const Timeout(Duration(minutes: 1))); } ``` </details> ### Screenshots or Video _No response_ ### Logs See above ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console ahann@Adils-Mac-mini example % flutter doctor -v [✓] Flutter (Channel stable, 3.24.5, on macOS 15.1.1 24B2091 darwin-arm64, locale en-GB) • Flutter version 3.24.5 on channel stable at /Volumes/m2/ExternUsers/ahann/Documents/Sources/flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision dec2ee5c1f (3 weeks ago), 2024-11-13 11:13:06 -0800 • Engine revision a18df97ca5 • Dart version 3.5.4 • DevTools version 2.37.3 [✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at /Volumes/m2/ExternUsers/ahann/Library/Android/sdk • Platform android-34, build-tools 34.0.0 • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version Java(TM) SE Runtime Environment (build 17.0.12+8-LTS-286) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 16.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 16B40 • CocoaPods version 1.16.2 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2024.2) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version Java(TM) SE Runtime Environment (build 17.0.12+8-LTS-286) [✓] VS Code (version 1.95.3) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.102.0 [✓] Connected device (5 available) • Adil’s iPhone (mobile) • 00008101-001A54CE1E51003A • ios • iOS 18.1.1 22B91 • iPhone 16 Plus (mobile) • A4AA4E6A-327B-4D18-B084-13FEB7A62F84 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator) • macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B2091 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B2091 darwin-arm64 • Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.109 [✓] Network resources • All expected network resources are available. • No issues found! ``` </details>
a: tests,tool,a: assets,has reproducible steps,team-tool,found in release: 3.24,found in release: 3.27
low
Critical
2,722,683,495
opencv
fastNlMeansDenoisingMulti does not support 16 bit values
### System Information OpenCV version: 4.10.0 Operating System / Platform: All Compiler & compiler version: All ### Detailed description The documentation for [fastNlMeansDenoisingMulti](https://docs.opencv.org/4.10.0/d1/d79/group__photo__denoise.html#ga723ffde1969430fede9241402e198151) states that it should support 16 bit values. The [implementation](https://github.com/opencv/opencv/blob/3d91d75f1a48dbba46f8cb7f264485a37e6102a2/modules/photo/src/denoising.cpp#L244) however, enforces 8 bit values even though the underlying code supports 16 bit. A fix is available in [my branch](https://github.com/opencv/opencv/compare/4.x...dchansen:opencv:nlMeansFix) ### Steps to reproduce 1. Run fastNlMeansDenoisingMulti with a 16 bit image. 2. Get error ### Issue submission checklist - [X] I report the issue, it's not a question - [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution - [X] I updated to the latest OpenCV version and the issue is still there - [ ] There is reproducer code and related data files (videos, images, onnx, etc)
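Since the checklist notes there is no reproducer attached, here is a minimal Python sketch of steps 1–2 (my own; the frame contents and filter parameters are arbitrary placeholders, only the `uint16` dtype matters):

```python
# Minimal repro sketch. Per the documentation, the vector-h / NORM_L1 variant
# of fastNlMeansDenoisingMulti should accept 16-bit single-channel input, but
# per this report the implementation rejects anything that is not 8-bit.
import numpy as np
import cv2

frames_8u = [np.full((64, 64), 128, dtype=np.uint8) for _ in range(3)]
frames_16u = [np.full((64, 64), 30000, dtype=np.uint16) for _ in range(3)]

common = dict(imgToDenoiseIndex=1, temporalWindowSize=3, h=[3.0], normType=cv2.NORM_L1)

cv2.fastNlMeansDenoisingMulti(frames_8u, **common)   # works
cv2.fastNlMeansDenoisingMulti(frames_16u, **common)  # expected to raise cv2.error (step 2 above)
```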
bug
low
Critical
2,722,693,765
vscode
Show Primary Side Bar in new empty window when Activity Bar position not Default
#226027 originally raised this. The fix for it by @benibenj (#227677) was subsequently reverted by #230919 because it caused regressions, particularly when the restored workspace had had its Primary Side Bar hidden at the time of closing. The current behaviour means that extensions using the `viewsWelcome` contribution point to add "getting started" aids to the empty Explorer view can no longer help the user. When a new window is opened and isn't restoring a previous workspace plus its state, please make sure that the Explorer view is visible (whichever container it is in), either unconditionally or else at least if it had been visible when VS Code was previously closed.
feature-request,file-explorer,layout,workbench-views
low
Minor
2,722,703,665
pytorch
Scaled Dot-Product Attention Invalid Configuration Error on Large batch size
# Scaled Dot-Product Attention Invalid Configuration Error on Large batch size ## Summary The `torch.nn.functional.scaled_dot_product_attention` (sdpa) function is not working as expected when the batch size is large. It causes a `RuntimeError: CUDA error: invalid configuration argument`. This problem affects also the `torch.nn.TransformerEncoderLayer` as it relies on the `scaled_dot_product_attention` function. ## Reproducing the error The following scripts are run with: ```bash CUDA_LAUNCH_BLOCKING=1 python3 test.py ``` However, the same results can be obtained without setting `CUDA_LAUNCH_BLOCKING=1`. ### SDPA Example This code will raise the error: ```python import torch x = torch.rand(100000,1,10,16,dtype=torch.float32,device="cuda:0") a = torch.nn.functional.scaled_dot_product_attention(x,x,x) ``` ```bash Traceback (most recent call last): File "/path/to/repo/test.py", line 3, in <module> a = torch.nn.functional.scaled_dot_product_attention(x,x,x) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: CUDA error: invalid configuration argument Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ### TransformerEncoderLayer Example This code will raise the error: ```python import torch transformer_encoder = torch.nn.TransformerEncoderLayer( d_model = 16 , nhead = 1 , dim_feedforward = 64 , device = "cuda:0", batch_first = True , ) x = torch.rand(100000,10,16,dtype=torch.float32,device="cuda:0") transformer_encoder(x) ``` ```bash Traceback (most recent call last): File "path/to/repo/test.py", line 10, in <module> transformer_encoder(x) File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/modules/transformer.py", line 904, in forward + self._sa_block(x, src_mask, src_key_padding_mask, is_causal=is_causal) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/modules/transformer.py", line 918, in _sa_block x = self.self_attn( ^^^^^^^^^^^^^^^ File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/modules/activation.py", line 1368, in forward attn_output, attn_output_weights = F.multi_head_attention_forward( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/repo/venv/lib/python3.12/site-packages/torch/nn/functional.py", line 6278, in multi_head_attention_forward attn_output = scaled_dot_product_attention( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: CUDA error: invalid configuration argument Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ## Cause of The Error I searched a little bit for the cause of the error. Here is what I found: The error is caused in file `pytorch/aten/src/ATen/native/transformers/cuda/attention.cu`. In particular the `launchKernel` lambda inside `_efficient_attention_forward` is responsible for the error. 
To my understanding this is the code snippet that triggers the invalid configuration: ```cuda auto blocks = p.getBlocksGrid(); if (blocks.x * blocks.y * blocks.z == 0 || key.size(1) == 0) { res.zero_(); return; } Kernel::check_supported(p); kernel_fn<<<blocks, p.getThreadsGrid(), smem_bytes, stream>>>(p); ``` It appears that, with a large batch size, this kernel is launched with a number of blocks that exceeds the maximum number of kernel blocks, `65,535`. This triggers the invalid configuration error. As a matter of fact, running the snippet ```python import torch x = torch.rand(65535,1,10,16,dtype=torch.float32,device="cuda:0") a = torch.nn.functional.scaled_dot_product_attention(x,x,x) ``` will not raise the error. Doing the same with a batch size of `65536` will raise the error. Note that the number of heads also contributes to the number of CUDA blocks, so an increased number of heads can trigger the error as well. ## SDPA Workaround The error can be avoided by simply removing the number-of-heads dimension from the input tensor. For example, ```python import torch #x = torch.rand(100000,1,10,16,dtype=torch.float32,device="cuda:0") x = torch.rand(100000,10,16,dtype=torch.float32,device="cuda:0") a = torch.nn.functional.scaled_dot_product_attention(x,x,x) ``` Note that the PyTorch documentation does not mention that the number-of-heads dimension can be omitted in `torch.nn.functional.scaled_dot_product_attention`. However, flattening the batch dimension into the heads dimension resolves the issue. To my understanding this operation should not affect the result, as demonstrated in the following code snippet: ```python import torch x0 = torch.rand(1000,2,10,16,dtype=torch.float32,device="cuda:0") x1 = x0.clone().flatten(0,1) a = torch.nn.functional.scaled_dot_product_attention(x0,x0,x0) b = torch.nn.functional.scaled_dot_product_attention(x1,x1,x1) print(a.flatten().isclose(b.flatten(), atol=1e-4).all()) # >>> tensor(True, device='cuda:0') ``` ## TransformerEncoderLayer Workaround The workaround for the `torch.nn.TransformerEncoderLayer` requires modifying the internal Python implementation. In particular, we need to modify the `_sa_block` method in the `torch.nn.modules.transformer.TransformerEncoderLayer` class. The following code snippet demonstrates the workaround. I comment only on the modified lines: ```python def _sa_block( self, x: Tensor, attn_mask: Optional[Tensor], key_padding_mask: Optional[Tensor], is_causal: bool = False, ) -> Tensor: shape = x.shape ### added line x = x.flatten(0,1) ### added line x = self.self_attn( x, x, x, attn_mask=attn_mask, key_padding_mask=key_padding_mask, need_weights=False, is_causal=is_causal, )[0] x = x.view(shape) ### added line return self.dropout1(x) ``` This resolves the transformer issue. It simply flattens the batch dimension with the heads dimension and restores the original shape after the attention operation. I do not comment on the efficiency of doing this, as it may have some performance implications. As a more general solution, one could check whether the number of blocks would exceed the maximum and, if so, flatten the batch dimension with the heads dimension (a sketch of this follows the environment details below). ## Environment I have tested this issue with torch version 2.5.1. I have also tested the issue on the main branch (at the time of writing 2.6.0a0+git65c2086).
Result from `collect_env.py` using the 2.5.1: ``` PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Arch Linux (x86_64) GCC version: (GCC) 14.2.1 20240910 Clang version: 18.1.8 CMake version: Could not collect Libc version: glibc-2.40 Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime) Python platform: Linux-6.12.1-arch1-1-x86_64-with-glibc2.40 Is CUDA available: True CUDA runtime version: 12.6.85 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080 Nvidia driver version: 565.57.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz CPU family: 6 Model: 165 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 5 CPU(s) scaling MHz: 92% CPU max MHz: 5100.0000 CPU min MHz: 800.0000 BogoMIPS: 7602.45 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi pku ospke md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 256 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 2 MiB (8 instances) L3 cache: 16 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-15 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop Vulnerability Srbds: Mitigation; Microcode Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.1.3 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] torch==2.5.1 [pip3] triton==3.1.0 [conda] 
Could not collect ```
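A minimal sketch of that more general approach follows; the helper name and the 65,535 threshold are illustrative assumptions rather than PyTorch API, and only the mask-free case from the examples above is handled:

```python
import torch
import torch.nn.functional as F

# Assumed per-dimension grid limit discussed above; not a value exported by PyTorch.
MAX_GRID_BLOCKS = 65_535

def sdpa_large_batch(q, k, v):
    """Fold batch and head dims together when batch*heads would overflow the grid."""
    if q.dim() == 4 and q.size(0) * q.size(1) > MAX_GRID_BLOCKS:
        out = F.scaled_dot_product_attention(
            q.flatten(0, 1), k.flatten(0, 1), v.flatten(0, 1)
        )
        return out.reshape(q.size(0), q.size(1), *out.shape[1:])
    return F.scaled_dot_product_attention(q, k, v)

x = torch.rand(100_000, 1, 10, 16, dtype=torch.float32, device="cuda:0")
out = sdpa_large_batch(x, x, x)  # no invalid-configuration error, shape preserved
```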
triaged,module: sdpa
low
Critical
2,722,709,753
pytorch
[Pipelining] Backward pass tries to detach view tensor in-place
### 🐛 Describe the bug When using ``torch.distributed.pipelining`` with a model stage that outputs a view, the error ``RuntimeError: Can't detach views in-place. Use detach() instead.`` is raised. This is due to https://github.com/pytorch/pytorch/commit/32a3dbc6450171dec4ef62a36037dd5dc24790d2#diff-c4c60c227aa9333879c26bbee32896f44902fea81894fe04078a0310897b1eedR698 that detaches the output in place to free memory after the backward pass. The fix should simply be to check ``t._is_view()`` and either don't detach, or detach ``t._base``. ```py import torch import torch.nn as nn import torch.distributed as dist from torch.distributed.pipelining import PipelineStage, Schedule1F1B import os local_rank = int(os.getenv("LOCAL_RANK")) torch.cuda.set_device(local_rank) dist.init_process_group("nccl") class ViewModel(nn.Module): def forward(self, x): return x.view(x.size(0), -1) model = ViewModel() stage = PipelineStage(model, local_rank, 2, torch.cuda.current_device()) schedule = Schedule1F1B(stage, 2, loss_fn = nn.functional.mse_loss) x = torch.randn(4, 3, device = local_rank) if local_rank == 0: schedule.step(x) if local_rank == 1: schedule.step(target = x) dist.destroy_process_group() ``` Started with command ``torchrun --nproc-per-node 2 --standalone file.py`` ### Versions PyTorch version: 2.6.0.dev20241205+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Red Hat Enterprise Linux 9.2 (Plow) (x86_64) GCC version: (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4) Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.34 Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.14.0-284.55.1.el9_2.x86_64-x86_64-with-glibc2.34 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB Nvidia driver version: 550.54.15 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 80 On-line CPU(s) list: 0-79 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 20 Socket(s): 2 Stepping: 7 CPU max MHz: 3900.0000 CPU min MHz: 1000.0000 BogoMIPS: 5000.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 1.3 MiB (40 instances) L1i cache: 1.3 MiB (40 instances) L2 cache: 40 MiB (40 instances) L3 cache: 55 MiB (2 
instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-19,40-59 NUMA node1 CPU(s): 20-39,60-79 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Versions of relevant libraries: [pip3] configmypy==0.2.0 [pip3] flopco-pytorch==0.1.4 [pip3] numpy==2.1.2 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] pytorch-triton==3.2.0+git35c6c7c6 [pip3] torch==2.6.0.dev20241205+cu124 [pip3] torchaudio==2.5.0.dev20241205+cu124 [pip3] torchvision==0.20.0.dev20241205+cu124 [pip3] triton==3.0.0 cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
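A sketch of the guard proposed in the report; the helper name is hypothetical, and the real change would live in the pipelining stage code that currently calls `detach_()` unconditionally:

```python
import torch

def _release_stage_output(t: torch.Tensor) -> None:
    # In-place detach raises for view tensors, so detach the view's base
    # (or simply skip the detach) instead of calling t.detach_() directly.
    if t._is_view():
        t._base.detach_()
    else:
        t.detach_()
```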
oncall: distributed,triaged
low
Critical
2,722,722,699
ollama
Add stop word <|endoftext|> to qwq models
### What is the issue? The [qwq models](https://ollama.com/library/qwq) currently go into an infinite loop. The reason for this appears to be that the model outputs <|endoftext|> at the end of its response, but ollama does not handle this as a stop word. The model therefore continues with hallucinations and goes into an infinite loop. I tested locally that the issue is resolved by using a custom model file containing ``` FROM qwq:latest # Adding additional stop as otherwise the qwq model goes into an infinite loop PARAMETER stop <|endoftext|> ``` A per-request alternative is sketched below. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.4.7
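For reference, the same stop token can also be supplied per request, without a custom Modelfile, through Ollama's REST API; the prompt below is only an example:

```python
import requests

# Per-request workaround: pass the extra stop token via the `options` field of
# Ollama's /api/generate endpoint instead of baking it into a Modelfile.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwq:latest",
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"stop": ["<|endoftext|>"]},
    },
    timeout=600,
)
print(response.json()["response"])
```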
bug
low
Minor
2,722,729,855
langchain
Crash "Unknown field for Schema: title" when using `langchain_google_genai.ChatGoogleGenerativeAI`
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ```python class PictureCategorizationOutputState(BaseModel): """Picture categorization output state model. Attributes: picture_type: The picture type. image_texts: The image texts. operational_error: The operational error, if you encounter any issue during your task. """ picture_type: AIPictureType = Field(description="The type of the picture.") image_texts: list[ImageText] = Field([], description="The texts found in the image.") dimensions: Dimensions = Field(description="The dimensions of the image in pixels (height and width).") operational_error: str = Field( "", description="The operational error, if you encounter any issue during your task." ) highlighted_image_b64: str = Field(description="The b64 encoded image with highlighted text.") ``` ```python def question_vehicle_picture_category_node(state: PictureCategorizationSharedState) -> PictureCategorizationOutputState: """Prompt AI for vehicle picture type (INTERIOR / EXTERIOR). Args: state: Picture categorization shared state. Returns: AIVehiclePicture: Vehicle picture """ llm = ChatGoogleGenerativeAI( model="gemini-1.5-pro", temperature=0, project="some_project" ).with_structured_output(PictureCategorizationOutputState) #!!!!!! THIS CRASHES !!!!!!!!! prompt = get_picture_categorization_prompt_template(str(state.picture_url)) return cast(PictureCategorizationOutputState, (prompt | llm).invoke({})) ``` ### Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2024.2.4\plugins\python-ce\helpers\pydev\pydevd.py", line 1570, in _exec pydev_imports.execfile(file, globals, locals) # execute the script ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\JetBrains\PyCharm 2024.2.4\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:\dev\lizy-ai\projects\picture_categorization\graph_v1.py", line 49, in <module> compiled_graph.invoke({"picture_url": "https://media.gq.com/photos/6508829d305ef4e0229049b3/master/pass/plane.jpg"}) File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langgraph\pregel\__init__.py", line 1929, in invoke for chunk in self.stream( ^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langgraph\pregel\__init__.py", line 1649, in stream for _ in runner.tick( ^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langgraph\pregel\runner.py", line 105, in tick run_with_retry(t, retry_policy, writer=writer) File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langgraph\pregel\retry.py", line 44, in run_with_retry task.proc.invoke(task.input, config) File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langgraph\utils\runnable.py", line 410, in invoke input = context.run(step.invoke, input, config, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langgraph\utils\runnable.py", line 184, in invoke ret = context.run(self.func, input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\dev\lizy-ai\projects\picture_categorization\nodes.py", line 28, in question_vehicle_picture_category_node ).with_structured_output(PictureCategorizationOutputState) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langchain_google_genai\chat_models.py", line 1239, in with_structured_output tool_choice = _get_tool_name(schema) if self._supports_tool_choice else None ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langchain_google_genai\chat_models.py", line 1383, in _get_tool_name genai_tool = tool_to_dict(convert_to_genai_function_declarations([tool])) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langchain_google_genai\_function_utils.py", line 173, in convert_to_genai_function_declarations fd = _format_to_gapic_function_declaration(tool) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langchain_google_genai\_function_utils.py", line 197, in _format_to_gapic_function_declaration return _convert_pydantic_to_genai_function(tool) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\langchain_google_genai\_function_utils.py", line 270, in _convert_pydantic_to_genai_function function_declaration = gapic.FunctionDeclaration( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\message.py", line 728, in __init__ pb_value = marshal.to_proto(pb_type, value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\marshal\marshal.py", line 235, in to_proto pb_value = self.get_rule(proto_type=proto_type).to_proto(value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\marshal\rules\message.py", line 45, in to_proto return self._wrapper(value)._pb ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\message.py", line 728, in __init__ pb_value = marshal.to_proto(pb_type, value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\marshal\marshal.py", line 233, in to_proto return {k: self.to_proto(recursive_type, v) for k, v in value.items()} ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\marshal\marshal.py", line 235, in to_proto pb_value = self.get_rule(proto_type=proto_type).to_proto(value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\marshal\rules\message.py", line 45, in 
to_proto return self._wrapper(value)._pb ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\message.py", line 728, in __init__ pb_value = marshal.to_proto(pb_type, value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\marshal\marshal.py", line 235, in to_proto pb_value = self.get_rule(proto_type=proto_type).to_proto(value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\marshal\rules\message.py", line 45, in to_proto return self._wrapper(value)._pb ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\giloz\AppData\Local\pypoetry\Cache\virtualenvs\lizy-ai-8ngQPScW-py3.12\Lib\site-packages\proto\message.py", line 724, in __init__ raise ValueError( ValueError: Unknown field for Schema: title ### Description it seems the "Title" property is giving an issue during the marshalling. same basic structure is working with `ChatOpenAI` class. ### System Info System Information ------------------ > OS: Windows > OS Version: 10.0.22631 > Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)] Package Information ------------------- > langchain_core: 0.3.21 > langchain: 0.3.9 > langchain_community: 0.3.9 > langsmith: 0.1.147 > langchain_anthropic: 0.2.4 > langchain_chroma: 0.1.4 > langchain_fireworks: 0.2.5 > langchain_google_genai: 2.0.6 > langchain_google_vertexai: 2.0.8 > langchain_openai: 0.2.11 > langchain_pinecone: 0.2.0 > langchain_text_splitters: 0.3.2 > langgraph_sdk: 0.1.43 Optional packages not installed ------------------------------- > langserve Other Dependencies ------------------ > aiohttp: 3.9.5 > anthropic: 0.40.0 > anthropic[vertexai]: Installed. No version info available. > async-timeout: Installed. No version info available. > chromadb: 0.5.21 > dataclasses-json: 0.6.7 > defusedxml: 0.7.1 > fastapi: 0.115.6 > filetype: 1.2.0 > fireworks-ai: 0.15.9 > google-cloud-aiplatform: 1.74.0 > google-cloud-storage: 2.19.0 > google-generativeai: 0.8.3 > httpx: 0.27.2 > httpx-sse: 0.4.0 > jsonpatch: 1.33 > langchain-mistralai: Installed. No version info available. > langsmith-pyo3: Installed. No version info available. > numpy: 1.26.4 > openai: 1.57.0 > orjson: 3.10.12 > packaging: 24.2 > pinecone-client: 5.0.1 > pydantic: 2.9.2 > pydantic-settings: 2.6.1 > PyYAML: 6.0.2 > requests: 2.32.3 > requests-toolbelt: 1.0.0 > SQLAlchemy: 2.0.36 > tenacity: 8.2.3 > tiktoken: 0.8.0 > typing-extensions: 4.12.2
🤖:bug
low
Critical
2,722,740,073
react
Bug: Expected Static Flag was missing. Please Notify React Team
React version: "react": "18.3.1", ## Steps To Reproduce ``` const MessagesDetailsPage = () => { const { id, orderId } = useLocalSearchParams<{ id: string; orderId?: string; }>(); const { mutateAsync: sendMessage, isPending: isSending, variables, } = useSendMessage(); const { data, isPending, hasNextPage, fetchNextPage, isFetchingNextPage } = useGetUserMessagesById(Number(id)); if (isPending) return ( <View className="flex-1 bg-background justify-center items-center"> <ActivityIndicator size={"large"} /> <Stack.Screen options={{ headerShown: screenHeaderShown, title: "Loading...", headerBackTitle: "back", }} /> </View> ); const messages = data?.pages?.flatMap((item) => item?.content) || []; const user = data?.pages[0]?.user; const canReply = data?.pages[0]?.canReply; const loadMore = () => { if (hasNextPage && !isFetchingNextPage) { fetchNextPage(); } }; return ( <View className={cn("flex-1 bg-background", !!orderId ? "pb-14" : "")}> <Stack.Screen options={{ title: user?.name, headerBackTitle: "back", }} /> <KeyboardAvoidingView className="flex-1 bg-background p-3" keyboardVerticalOffset={isIOS ? 80 : 0} behavior={isIOS ? "padding" : "height"} > <ChatArea isFetchingNextPage={isFetchingNextPage} messages={messages} onLoadMore={loadMore} isSending={isSending} variables={variables} /> {canReply ? ( <MessageActions sendMessage={sendMessage} isPending={isSending} /> ) : ( <View className="items-center p-2"> <P>You can't reply to this user</P> </View> )} </KeyboardAvoidingView> </View> ); }; export default MessagesDetailsPage; ``` ``` import { MessagesSquare } from "@/components/icons/message-square-icon"; import { P } from "@/components/ui/typography"; import useI18n from "@/hooks/useI81n"; import { IChatMessage, IMessageInput } from "@/types/IChat"; import { FlashList } from "@shopify/flash-list"; import React, { useRef } from "react"; import { ActivityIndicator, View } from "react-native"; import { ChatMessages } from "../chat-messages"; import { MessagePendingStates } from "../message-pending-states"; export type ChatAreaProps = { messages: IChatMessage[]; className?: string; isFetchingNextPage: boolean; onLoadMore: () => void; footerComponent?: React.ReactNode; isSending?: boolean; variables?: IMessageInput; hasNextPage?: boolean; }; export const ChatArea = ({ messages, className, isFetchingNextPage, onLoadMore, footerComponent, isSending, variables, }: ChatAreaProps) => { const { getText } = useI18n(); const ref = useRef<FlashList<IChatMessage>>(null); const thumbnails = messages ?.map((c) => !!c.image && c.image) .filter( (url): url is string => typeof url === "string" && url.trim().length > 0 ) || []; const scrollToTop = () => { if (ref.current) { ref.current.scrollToOffset({ animated: true, offset: 0, }); } }; // useEffect(() => { // scrollToTop(); // }, [variables]); // useEffect(() => { // scrollToTop(); // }, []); return ( <FlashList ref={ref} keyboardDismissMode="on-drag" keyboardShouldPersistTaps="handled" data={messages} showsVerticalScrollIndicator={false} className={className} renderItem={({ item }) => ( <ChatMessages item={item} thumbnails={thumbnails} /> )} keyExtractor={({ id }) => id.toString()} inverted estimatedItemSize={100} onEndReachedThreshold={0.1} ListEmptyComponent={() => ( <View className="flex-1 bg-background items-center justify-center gap-2"> <MessagesSquare className="text-foreground" size={40} /> <P>{getText("send_message")}</P> </View> )} onEndReached={onLoadMore} ListHeaderComponent={() => { if (isSending) return <MessagePendingStates variables={variables} 
/>; }} ListFooterComponent={() => { return ( <View> {isFetchingNextPage && ( <View className="py-5 self-center"> <ActivityIndicator size={"small"} /> </View> )} {footerComponent} </View> ); }} /> ); }; ``` ## The current behavior ## The expected behavior
Status: Unconfirmed
medium
Critical
2,722,963,095
rust
Tracking Issue for spurious mingw CI failures
The spurious `mingw` failures might be *related* to #127883, but for the purpose of tracking, we should delineate the two general categories separately to avoid conflating them. Please feel free to update specific classifications of kinds of spurious `mingw` failures. ### Known problems - https://github.com/rust-lang/rust/issues/108227 ### Misc Label: https://github.com/rust-lang/rust/labels/CI-spurious-fail-mingw
T-compiler,O-windows-gnu,T-bootstrap,T-infra,C-tracking-issue,A-CI,CI-spurious-fail-mingw
low
Critical
2,722,975,598
rust
Meta tracking issue for spurious CI failures
This is a (meta) tracking issue intended to make it easier to find the specific spurious CI failure issues. ## Platform-specific - https://github.com/rust-lang/rust/issues/127883; https://github.com/rust-lang/rust/labels/CI-spurious-fail-msvc - https://github.com/rust-lang/rust/issues/134351; https://github.com/rust-lang/rust/labels/CI-spurious-fail-msvc - https://github.com/rust-lang/rust/issues/133958; https://github.com/rust-lang/rust/labels/CI-spurious-fail-mingw - https://github.com/rust-lang/rust/issues/134220; https://github.com/rust-lang/rust/labels/CI-spurious-x86_64-apple-SIGSEGV-SIGILL ## Runner-specific - https://github.com/rust-lang/rust/issues/97669 ## Known problems ### LLVM - https://github.com/rust-lang/rust/issues/109624 - https://github.com/rust-lang/rust/issues/110290 ### Pointer provenance ABA lockless queue test failures - ABA pointer provenance lockless queue test failures due to #121950 in the miri test suite, https://github.com/rust-lang/rust/labels/CI-ABA-ptr-provenance-lockless-queue-fail ### Linkers #### `rust-lld` can sometimes crash quite often - https://github.com/rust-lang/rust/issues/128286 ## Networking failures - https://github.com/rust-lang/rust/issues/40474 ## Bot troubles - https://github.com/rust-lang/rust/issues/43535 ## Possible mitigation approaches - https://github.com/rust-lang/rust/issues/134472 ## Others - Suspected cosmic bitflips or faulty memory cell: https://github.com/rust-lang/rust/pull/136084#issuecomment-2614238643
T-compiler,T-bootstrap,A-spurious,T-infra,C-tracking-issue,A-CI,S-tracking-forever
low
Critical
2,723,056,584
material-ui
[docs] add examples of testing with mui with react-testing-library
### Related page https://mui.com/material-ui/guides/testing/ ### Kind of issue Unclear explanations ### Issue description I have already spent a day trying to figure out a good way to write tests with mui. From docs: > For example, when rendering a TextField your test should not need to query for the specific Material UI instance of the TextField but rather for the input, or [role="textbox"]. Okay, I should query for role=textbox, but I am not sure how I would test some mui components without data-testid. For example, password and confirm password: I can't really get the first input without specifying its exact label. Should I do that? Or should I use data-testid then? I understand that mui is not about testing, but having at least one example for reference would save a lot of time for those developing with mui. ### Context I have a form with different submit schemas and it is relatively big. It consists of 6 steps (sign up, email confirmation, 4 other steps for company creation specific to our business). Just the sign-up step with firebase is around 300 lines of code, so I am trying to write tests for it to ensure I won't break anything in the future. For the form itself we use zod with react-hook-form + firebase for user authentication. We also use next and next-intl for internationalization. **Search keywords**: testing mui jest vitest react-testing-library rtl @testing-library/react
docs
low
Minor
2,723,092,771
langchain
HuggingFaceEndpoint returning buggy responses and prompt template back
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` llm = HuggingFaceEndpoint( repo_id= 'meta-llama/Llama-3.1-8B-Instruct', huggingfacehub_api_token=api_token ) ``` ``` llm_chain = ( { "context": lambda inputs: retrieve_context(inputs['question'], inputs['vector']), "question": RunnablePassthrough() } | PromptTemplate( template=BASE_TEMPLATE, input_variables=["context", "question"] ) | llm | StrOutputParser() ) llm_chain.invoke({ 'question': user_query, 'vector': vector }) ``` ### Error Message and Stack Trace (if applicable) As you can see in langsmith, it returned this output. <img width="344" alt="image" src="https://github.com/user-attachments/assets/0cdf266f-4f63-4537-ab75-ef9b61cfdda9"> ### Description I'm using HuggingFaceEndpoint for inference to avoid storing model on my local machine, and I've noticed it gives buggy responses quite a few times. I'm using it for a RAG and a lot of times it just returns back the entire base prompt template inside [INST]...[/INST]. And as seen in screenshot attached, it returned "[/INST]" in a loop until max tokens limit reached. ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:17 PST 2023; root:xnu-8796.101.5~3/RELEASE_X86_64 > Python Version: 3.12.6 (main, Sep 6 2024, 19:03:47) [Clang 15.0.0 (clang-1500.1.0.2.5)] Package Information ------------------- > langchain_core: 0.3.19 > langchain: 0.3.7 > langchain_community: 0.3.7 > langsmith: 0.1.143 > langchain_huggingface: 0.1.2 > langchain_text_splitters: 0.3.2 > langchainhub: 0.1.21 Optional packages not installed ------------------------------- > langgraph > langserve Other Dependencies ------------------ > aiohttp: 3.11.4 > async-timeout: Installed. No version info available. > dataclasses-json: 0.6.7 > httpx: 0.27.2 > httpx-sse: 0.4.0 > huggingface-hub: 0.26.2 > jsonpatch: 1.33 > numpy: 1.26.4 > orjson: 3.10.11 > packaging: 24.2 > pydantic: 2.9.2 > pydantic-settings: 2.6.1 > PyYAML: 6.0.2 > requests: 2.32.3 > requests-toolbelt: 1.0.0 > sentence-transformers: 3.3.1 > SQLAlchemy: 2.0.35 > tenacity: 9.0.0 > tokenizers: 0.20.3 > transformers: 4.46.3 > types-requests: 2.32.0.20241016 > typing-extensions: 4.12.2
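One possible mitigation, assuming the behaviour comes from the raw text-generation endpoint never applying the instruct model's chat template (the `[INST]`/`[/INST]` markers are Mistral-style tags, not Llama 3.1's chat format): wrap the endpoint in `ChatHuggingFace`, which applies the model's own chat template before inference. A minimal sketch:

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

api_token = "hf_..."  # placeholder for the token used in the report

llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Llama-3.1-8B-Instruct",
    huggingfacehub_api_token=api_token,
)
chat_model = ChatHuggingFace(llm=llm)  # applies the Llama 3.1 chat template

print(chat_model.invoke("What does retrieval-augmented generation mean?").content)
```

In the RAG chain above, `chat_model` would take the place of `llm`, with the formatted prompt passed through as a single human message.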
🤖:bug,investigate
low
Critical
2,723,093,112
tauri
[bug] libarchive.13.dylib' (no such file),
### Describe the bug dyld[82730]: Library not loaded: /opt/homebrew/opt/libarchive/lib/libarchive.13.dylib Referenced from: <3A92023B-E694-3FA1-B4FC-5056A4F353A7> /Users/suzhiwei/Documents/gitsource/fom/src-tauri/target/release/omg-ff Reason: tried: '/opt/homebrew/opt/libarchive/lib/libarchive.13.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/opt/libarchive/lib/libarchive.13.dylib' (no such file), '/opt/homebrew/opt/libarchive/lib/libarchive.13.dylib' (no such file) ### Reproduction _No response_ ### Expected behavior _No response_ ### Full `tauri info` output ```text > [email protected] tauri > tauri info [✔] Environment - OS: Mac OS 15.0.1 x86_64 (X64) ✔ Xcode Command Line Tools: installed ✔ rustc: 1.81.0 (eeb90cda1 2024-09-04) ✔ cargo: 1.81.0 (2dbb1af80 2024-08-20) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: 1.81-x86_64-apple-darwin (default) - node: 20.11.1 - pnpm: 8.6.11 - yarn: 1.22.22 - npm: 10.2.4 [-] Packages - tauri 🦀: 2.1.1 - tauri-build 🦀: 2.0.3 - wry 🦀: 0.47.2 - tao 🦀: 0.30.8 - @tauri-apps/api : 2.1.1 - @tauri-apps/cli : 2.1.0 [-] Plugins - tauri-plugin-fs 🦀: 2.1.0 - @tauri-apps/plugin-fs : not installed! - tauri-plugin-dialog 🦀: 2.0.4 - @tauri-apps/plugin-dialog : 2.0.1 - tauri-plugin-shell 🦀: 2.0.2 - @tauri-apps/plugin-shell : 2.0.1 [-] App - build-type: bundle - CSP: unset - frontendDist: ../dist - devUrl: http://localhost:1420/ - framework: React - bundler: Vite ``` ### Stack trace _No response_ ### Additional context _No response_
type: bug,platform: macOS,status: needs triage
low
Critical
2,723,111,778
pytorch
modified CPUReproTests.test_two_local_buffers_in_outer_loop_fusion fails Codegen on all CPU platforms
### 🐛 Describe the bug Whilst investigating another bug, I discovered this happens on aarch64 and x86 ``` scalar_kernel = codegen_kernel(CppKernel) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/codegen/cpp.py", line 3830, in codegen_kernel run(kernel) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/codegen/cpp.py", line 3842, in run fn(vars, reduction_vars) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/codegen/cpp.py", line 4019, in fn return node.codegen(index_vars) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/scheduler.py", line 1053, in codegen self._body(*index_vars) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/loop_body.py", line 404, in __call__ result = self.root_block() File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/loop_body.py", line 638, in __call__ return InterpreterShim(graph, submodules).run(V.get_ops_handler()) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/loop_body.py", line 60, in run return super().run(*args, **kwargs) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/interpreter.py", line 167, in run self.env[node] = self.run_node(node) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/loop_body.py", line 56, in run_node return super().run_node(n) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/interpreter.py", line 228, in run_node return getattr(self, n.op)(n.target, args, kwargs) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/interpreter.py", line 332, in call_method return getattr(self_obj, target)(*args_tail, **kwargs) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/sizevars.py", line 915, in load return self._inner.load(name, self._simplify(index)) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/codegen/common.py", line 1973, in load out = self.load(name, index) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/codegen/cpp.py", line 1973, in load var = self.args.input(name) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/codegen/common.py", line 1138, in input assert name not in V.graph.removed_buffers, name torch._dynamo.exc.BackendCompilerFailed: backend='compile_fx_wrapper' raised: AssertionError: buf1 ``` full stacktrace attached in text file. [bug.txt](https://github.com/user-attachments/files/18038947/bug.txt) This is a modified version of fn from **CPUReproTests.test_two_local_buffers_in_outer_loop_fusion** ``` import torch def fn(x): softmax = torch.nn.functional.softmax(x, dim=-1) sum = torch.sum(softmax, dim=-1) sum_broadcast = torch.broadcast_to( sum.unsqueeze(-1), [*(sum.size()[0:3]), 256] ) sum_exp = torch.exp(sum_broadcast) return torch.sum(sum_exp, dim=-1) x = torch.randn(4, 12, 1023, 1022) foo = torch.compile(fn) foo(x) ``` This should be reproducible on any platform , eager mode seems to work fine. ### Versions Collecting environment information... 
PyTorch version: 2.6.0a0+gitcc64ad6 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (aarch64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.31.1 Libc version: glibc-2.35 Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime) Python platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: aarch64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 48 On-line CPU(s) list: 0-47 Vendor ID: ARM Model name: Neoverse-N1 Model: 1 Thread(s) per core: 1 Core(s) per cluster: 48 Socket(s): - Cluster(s): 1 Stepping: r3p1 BogoMIPS: 243.75 Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs L1d cache: 3 MiB (48 instances) L1i cache: 3 MiB (48 instances) L2 cache: 48 MiB (48 instances) L3 cache: 32 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-47 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; __user pointer sanitization Vulnerability Spectre v2: Mitigation; CSV2, BHB Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy==1.13.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.22.4 [pip3] onnx==1.17.0 [pip3] onnxscript==0.1.0.dev20240817 [pip3] optree==0.13.0 [pip3] torch==2.6.0a0+gitcc64ad6 [conda] No relevant packages cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
module: cpu,triaged
low
Critical
2,723,127,798
vscode
Need a Sort Order Option for Source Control
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> Hello, I would like the files and folders in the Source Control panel to be displayed in the same order as configured in the Explorer's Sort Order settings. Currently, there doesn't seem to be an option to control the sort order of files and folders in the Source Control panel. ![Image](https://github.com/user-attachments/assets/bfb6c7ca-251f-4423-95be-64790bafcfd1)
feature-request,scm
low
Minor
2,723,138,018
pytorch
[ARM] (Neoverse-N1) Tensor-likes are not close! for CPUReproTests.test_two_local_buffers_in_outer_loop_fusion
### 🐛 Describe the bug Identified on Arm Neoverse-N1 ``` AssertionError: Tensor-likes are not close! Mismatched elements: 1019 / 12570624 (0.0%) Greatest absolute difference: 0.0009765625 at index (1, 5, 672, 24) (up to 1e-05 allowed) Greatest relative difference: 1.4066824860492488e-06 at index (1, 5, 672, 24) (up to 1.3e-06 allowed) To execute this test, run the following from the base repo dir: python test/inductor/test_cpu_repro.py CPUReproTests.test_two_local_buffers_in_outer_loop_fusion This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` Discovered #142230 whilst trying to debug this. These types of unit test failures are difficult to debug as there is expected to be a difference between inductor and eager paths, and the error will increase the larger the tensor size and number of operations? There is also a statistical element as it's possible to find a manual seed that causes this test to pass. `torch.manual_seed(1)` prior to the torch.randn call will cause the test to pass. ### Versions Collecting environment information... PyTorch version: 2.6.0a0+gitcc64ad6 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (aarch64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.31.1 Libc version: glibc-2.35 Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime) Python platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: aarch64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 48 On-line CPU(s) list: 0-47 Vendor ID: ARM Model name: Neoverse-N1 Model: 1 Thread(s) per core: 1 Core(s) per cluster: 48 Socket(s): - Cluster(s): 1 Stepping: r3p1 BogoMIPS: 243.75 Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs L1d cache: 3 MiB (48 instances) L1i cache: 3 MiB (48 instances) L2 cache: 48 MiB (48 instances) L3 cache: 32 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-47 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; __user pointer sanitization Vulnerability Spectre v2: Mitigation; CSV2, BHB Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy==1.13.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.22.4 [pip3] onnx==1.17.0 [pip3] onnxscript==0.1.0.dev20240817 [pip3] optree==0.13.0 [pip3] torch==2.6.0a0+gitcc64ad6 [conda] No relevant packages cc @malfet @snadampal @milpuz01
triaged,module: arm
low
Critical
2,723,140,314
rust
Tracking Issue for `bool::select_unpredictable`
Feature gate: `#![feature(select_unpredictable)]` This is a tracking issue for `bool::select_unpredictable`, which selects between two values and hints to the optimizer that it should try to generate branchless code. ### Public API ```rust impl bool { /// Returns either `true_val` or `false_val` depending on the value of /// `condition`, with a hint to the compiler that `condition` is unlikely /// to be correctly predicted by a CPU’s branch predictor. pub fn select_unpredictable<T>(self, true_val: T, false_val: T) -> T; } ``` ### Steps / History - [x] ACP: https://github.com/rust-lang/libs-team/issues/468 - [ ] Implementation: https://github.com/rust-lang/rust/pull/133964 - [ ] Final comment period (FCP)[^1] - [ ] Stabilization PR ### Unresolved Questions - This should be `const`, right? [^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
T-libs-api,C-tracking-issue
low
Major
2,723,142,538
pytorch
torch.tensor.to(‘cuda’) so slow in jetson orin
Here is my code for running ViT inference.
```python
for i in range(100):
    s_time0 = time.time()
    image = Image.open(image_file).convert('RGB')
    print('open img time:', int((time.time() - s_time0) * 1000))
    s_time = time.time()
    image_tensor = process_anyres_image(
        image, model.image_processor, grid_points, False, False
    )
    print('process time:', int((time.time() - s_time0) * 1000))
    s_time = time.time()
    image_tensor = torch.from_numpy(image_tensor)
    print('array to tensor time:', int((time.time() - s_time0) * 1000))
    s_time = time.time()
    image_tensor = image_tensor.to('cuda', dtype=torch.float16)
    print('to gpu time:', int((time.time() - s_time0) * 1000))
    s_time = time.time()
    tokens = model(image_tensor)  # torch.Size([1, 3, 224, 224])
    endtime = time.time()
    print('forward time:', int((endtime - s_time) * 1000))
    print('total:', int((endtime - s_time0) * 1000))
    print('-----------------------')
```
And the output is like this. ![image](https://github.com/user-attachments/assets/c4f8bf32-8084-4c61-818f-ec214e8e35e9) It seems like tensor.to('cuda') costs a lot of time, except for the first two iterations. Here is the machine information. ![image](https://github.com/user-attachments/assets/3a67b961-b2f1-49a8-ad3c-dc4a70a1fd7d) And the PyTorch version. ![image](https://github.com/user-attachments/assets/3dad0436-e5a6-4ba3-996f-2a74a74801f6) cc @ptrblck @msaroufim @eqy @puririshi98
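Two caveats apply when reading these numbers. All prints except 'forward time' measure elapsed time since `s_time0`, so 'to gpu time' is cumulative for the whole iteration up to that point. More importantly, CUDA calls are asynchronous, so the time attributed to `.to('cuda')` can include waiting for the previous iteration's forward pass to finish. A minimal synchronized-timing sketch (the tensor shape is a stand-in, not the report's exact pipeline):

```python
import time
import torch

x = torch.randn(1, 3, 224, 224)  # stand-in for the preprocessed image tensor

torch.cuda.synchronize()   # wait for any earlier GPU work
t0 = time.time()
x_gpu = x.to("cuda", dtype=torch.float16)
torch.cuda.synchronize()   # wait for the copy itself to finish
print("to gpu time (ms):", int((time.time() - t0) * 1000))
```

Timing the forward pass the same way shows where the time is actually spent; pinning the host tensor with `x.pin_memory()` and copying with `non_blocking=True` may also reduce the transfer cost.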
module: cuda,triaged,module: jetson
low
Major
2,723,157,056
langchain
`InMemoryRateLimiter` does not work with `BaseLLM` child classes
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code The `InMemoryRateLimiter` is not used when passed to a child class of `BaseLLM` (e.g. `VertexAI`). It is only used for child classes of `BaseChatModel` (e.g. `ChatVertexAI`). Additionally, no exception is raised when passing `InMemoryRateLimiter` to `BaseLLM`. ```python from langchain_core.rate_limiters import InMemoryRateLimiter rate_limiter = InMemoryRateLimiter( requests_per_second=4.5, check_every_n_seconds=0.5, max_bucket_size=280, ) from langchain_google_vertexai import ChatVertexAI, VertexAI chat = ChatVertexAI( model_name="gemini-1.0-pro", rate_limiter=rate_limiter ) # Rate limiter will get used by this object llm = VertexAI( model_name="gemini-1.0-pro", rate_limiter=rate_limiter ) # Rate limiter will not get used by this object ``` ### Error Message and Stack Trace (if applicable) N/A ### Description The `InMemoryRateLimiter` is not used when passed to a child class of `BaseLLM` (e.g. `VertexAI`). It is only used for child classes of `BaseChatModel` (e.g. `ChatVertexAI`). Additionally, no exception is raised when passing `InMemoryRateLimiter` to `BaseLLM`. From what I can tell, this is not discussed anywhere in the documentation, so it will likely cause confusion for many users. From further investigation, I found that there is no use of `InMemoryRateLimiter` in the constructor of `BaseLLM`: https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/llms.py#L292 but there is in the constructor of `BaseChatModel`: https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/chat_models.py#L207 ### System Info System Information ------------------ > OS: Linux > OS Version: #1 SMP Debian 5.10.226-1 (2024-10-03) > Python Version: 3.10.15 | packaged by conda-forge | (main, Sep 30 2024, 17:51:04) [GCC 13.3.0]
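Until `BaseLLM` honors the `rate_limiter` argument, the same throttling can be applied manually by acquiring from the limiter before each call; `acquire()` blocks until the token bucket has capacity. A sketch (the helper function is illustrative, not LangChain API):

```python
from langchain_core.rate_limiters import InMemoryRateLimiter
from langchain_google_vertexai import VertexAI

rate_limiter = InMemoryRateLimiter(
    requests_per_second=4.5,
    check_every_n_seconds=0.5,
    max_bucket_size=280,
)
llm = VertexAI(model_name="gemini-1.0-pro")

def rate_limited_invoke(prompt: str) -> str:
    rate_limiter.acquire()  # blocks until a request "token" is available
    return llm.invoke(prompt)
```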
🤖:bug
low
Critical
2,723,157,785
godot
Error `Cannot get class ''. Parameter "obj" is null.` under specific conditions
### Tested versions - Reproducible in: v4.3.stable.official [77dcf97d8] ### System information Windows 10 and Ubuntu 20.04.6 - Compatibility renderer ### Issue description I encountered this bug while making [my game](https://github.com/OgGhostJelly/Zombie-BurgerZ) and [this](https://github.com/OgGhostJelly/Zombie-BurgerZ/commit/e6c37a912ecf8d6a1ca74b40c4157c82377bdbc9#diff-76682f3289631b5dee833f7ddebba25fa0541aea3fffca458c8c925fa528e347) is the commit that started causing the errors, more specifically it seems the changes to `player.gd` and `gun.gd` are what caused it. If you try to select the smg in the shop it will segfault. The bug is now fixed in the later commits, I fixed it by replacing `preload` with `load` but I'm unsure why that worked. ### Steps to reproduce - Download the attached Godot project - Run it normally and see how there are no errors - Export it into the .exports folder inside the project, it MUST be exported into that folder or else the bug won't appear. (not sure why, maybe something to do with the filename having a special character in it) - Run the project in the terminal and see how there are now errors appearing in the exported version ### Minimal reproduction project (MRP) The MRP is my game with all the code unrelated to the bug stripped away [bug.zip](https://github.com/user-attachments/files/18045954/bug.zip)
topic:gdscript,needs testing
low
Critical
2,723,208,626
go
x/pkgsite: automate trivial package removal requests
This is a reminder issue for a future team friction fixit week: we could reduce toil significantly by automating pkgsite removal requests. Right now, the pkgsite removal process requires users to file an issue requesting the removal of a package from pkgsite. Our triage person handles this request by (1) verifying some form of ownership for the relevant paths, and (2) sending an internal CL to exclude the paths from pkgsite. We get a steady stream of these requests, on the order of a few a week. They are almost all trivial, but still a source of interrupts and toil. Additionally, sometimes it takes several days (or even more than a week) to get to them, which is not a great experience for the requester. We have considered completely changing the removal process to be more self-service, but one of the goals of the current process is to preserve an audit log, and github issues are (for better or worse), the canonical audit log for changes to the Go project. We thought that module retractions would be a self-service way to remove content from pkgsite, but they only hide packages from search. Also, in the common case the repo has been deleted, so retractions are onerous on authors. For this reason, retractions do not seem to have reduced the number of removal requests we receive. We could keep the current process, and reduce most of the toil, with automation: 1. Find pkgsite removal issues. Extract the requested path(s) to remove. 2. Check for ownership. It may suffice to check that the requester is the owner of repos containing the paths, though in some cases we need to check for organization membership. 3. Send an internal auto-submit CL to the tools team to process the removal. 4. Comment and close the issue.
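A rough sketch of what step 1 might look like against the public GitHub search API; the title pattern in the query is an assumption about how removal requests are typically filed, and real tooling would presumably live in the Go project's existing infrastructure rather than a standalone script:

```python
import requests

# Step 1: find open pkgsite removal requests and pull the requested paths
# out of the issue titles (crude extraction, for illustration only).
resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": 'repo:golang/go is:issue is:open "x/pkgsite: remove" in:title'},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
for issue in resp.json().get("items", []):
    requested_path = issue["title"].split("remove", 1)[-1].strip()
    print(issue["number"], requested_path)
```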
pkgsite,Friction
low
Minor
2,723,228,774
electron
Transparency true makes the electron window flash when interacting with some other windows
### Preflight Checklist - [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success. ### Electron Version 33.0.0 ### What operating system(s) are you using? Windows ### Operating System Version 10.0.22631 ### What arch are you using? x64 ### Last Known Working Electron version 26.0.0 ### Expected Behavior With transparency enabled and alwaysOnTop set to true, the window should remain visible at all times; when we interact with other windows after focusing the Electron window, it should not flash or vanish. It used to work before version 26 and doesn't work in any version after that. ### Actual Behavior When clicking away from the window, it vanishes and the visibility event does not get triggered; the Chrome profiler shows the window is still there, and only on the actual desktop does it vanish. To reproduce: first click on the transparent window to bring it into focus, then click on some other window, and it vanishes. I saw this happen when clicking on a WPF window or on text editors like IntelliJ. You will be able to reproduce it consistently if you interact with another window that also has alwaysOnTop true. ### Testcase Gist URL https://gist.github.com/ImrozKhan21/d5944d0d4867aa3a18c0a2556c44f2c8 ### Additional Information _No response_
platform/windows,bug :beetle:,has-repro-gist,33-x-y
low
Critical
2,723,238,603
rust
ICE: E0107 points into derived code
<!-- ICE: Rustc ./a.rs '' 'thread 'rustc' panicked at /home/gh-matthiaskrgr/vcs/github/rust_debug_assertions/compiler/rustc_errors/src/diagnostic.rs:1006:9: 'Span must not be empty and have no suggestion'', 'thread 'rustc' panicked at /home/gh-matthiaskrgr/vcs/github/rust_debug_assertions/compiler/rustc_errors/src/diagnostic.rs:1006:9: 'Span must not be empty and have no suggestion'' File: /tmp/f/2/a.rs --> auto-reduced (treereduce-rust): ````rust struct NonGeneric {} #[derive(Default)] struct NonGeneric<'a, const N: usize> {} ```` original: ````rust struct NonGeneric {} #[derive(Default)] struct NonGeneric<'a, const N: usize> {} pub fn main() {} ```` Version information ```` rustc 1.85.0-dev binary: rustc commit-hash: unknown commit-date: unknown host: x86_64-unknown-linux-gnu release: 1.85.0-dev LLVM version: 19.1.5 ````
A-diagnostics,I-ICE,T-compiler,C-bug,S-bug-has-test,requires-debug-assertions
low
Critical
2,723,259,019
vscode
Add .gitignore file in new projects when initializing new Git repos
Type: <b>Feature Request</b> I created a new project in VSCode and created a new repo on GitHub from it using VSCode. But a .gitignore file is not added to the project by default. VS Code version: Code 1.95.3 (Universal) (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z) OS version: Darwin arm64 24.1.0 Modes: <!-- generated by issue reporter -->
feature-request,git
low
Minor
2,723,271,825
langchain
DOC: Exceptions that can be raised by the invocation of LLM models are not described
### URL https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.llms.BaseLLM.html ### Checklist - [X] I added a very descriptive title to this issue. - [X] I included a link to the documentation page I am referring to (if applicable). ### Issue with current documentation: The invoke() or ainvoke() methods of BaseLLMs may raise exceptions, yet these exceptions are not described in a "**Raises**" section. Some specific implementations like OpenAI's models should have their own exceptions documented too (like RateLimitError, I guess). The problem is that it is currently difficult to write code that properly handles all possible exceptions without knowing what they are. ### Idea or request for content: On top of the "Return type" section, a "Raises" section should be added for invoke() and ainvoke(), and maybe other methods.
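For context, this is roughly the defensive pattern users currently fall back on; the caught exception types are provider-specific examples from the OpenAI SDK, not a list taken from the LangChain reference (which is precisely the gap being reported):

```python
import openai
from langchain_openai import OpenAI

llm = OpenAI()  # a BaseLLM subclass; requires OPENAI_API_KEY in the environment

try:
    result = llm.invoke("Write one sentence about error handling.")
except openai.RateLimitError as exc:
    print(f"Rate limited, retry later: {exc}")
except openai.APIError as exc:
    print(f"Provider error: {exc}")
except Exception as exc:
    # Whatever else LangChain itself may raise is currently undocumented.
    print(f"Unexpected failure: {exc}")
```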
🤖:docs
low
Critical
2,723,281,436
react-native
[iOS] TextInput fontFamily issue with some keyboards in iOS
### Description I'm not sure what the main cause of the issue is, but I ran into an issue with CJK keyboards in iOS, that the fontFamily is not set correctly. ```tsx const [text, setText] = useState(""); const fontFamily = !text ? "ZenMaruGothic" : "IMHyemin"; return ( <SafeAreaView> <View style={styles.view}> <TextInput multiline defaultValue={text} onChangeText={setText} placeholder="フォントテスト" style={[styles.textInput, { fontFamily }]} /> {/* ... */} ``` When the text is typed on keyboards having "pending states" . That is, the state in which the text can be modified by the additional keystrokes and confirmed by the user to be finalized. Examples include: * with the Chinese pinyin keyboard, one types `nh` and selects 你好 to finalize the input * with the standard Korean (qwerty) keyboard, keystrokes `ㄱㅏ` yield `가` (U+AC00) in its pending states, and an additional keystroke `ㄴ` modifies it to be `간` (U+AC04) As you can see in the video, when typed using the standard English keyboard, at the first keystroke, the letter is correctly styled with the provided font. But using Chinese/Japanese keyboards, the first keystroke does not change the font and the following keystrokes change it. Even worse, using the Korean keyboard, the font of the TextInput is not changed at all. ### Steps to reproduce Expo snack: https://snack.expo.dev/@kanukim/textinputfonttest **Issues with Korean characters:** 1. Open the project in an iOS device 2. Add a Korean Keyboard (Standard) 3. Type "한글" (FYI, you can type it by pressing ㅎㅏㄴㄱㅡㄹ in sequence) 4. The fonts are not set correctly (as the text input below) You may test the issue with various keyboards including Japanese (romaji) and Chinese (pinyin) inputs. ### React Native Version 0.76.3 ### Affected Platforms Runtime - iOS ### Areas Fabric - The New Renderer ### Output of `npx react-native info` ```text System: OS: macOS 15.1.1 CPU: (8) arm64 Apple M1 Memory: 206.56 MB / 8.00 GB Shell: version: 3.7.1 path: /opt/homebrew/bin/fish Binaries: Node: version: 22.9.0 path: ~/.local/share/nvm/v22.9.0/bin/node Yarn: version: 1.22.22 path: ~/.bun/bin/yarn npm: version: 10.8.3 path: ~/.local/share/nvm/v22.9.0/bin/npm Watchman: version: 2024.09.09.00 path: /opt/homebrew/bin/watchman Managers: CocoaPods: version: 1.14.3 path: /usr/local/bin/pod SDKs: iOS SDK: Platforms: - DriverKit 24.1 - iOS 18.1 - macOS 15.1 - tvOS 18.1 - visionOS 2.1 - watchOS 11.1 Android SDK: API Levels: - "33" - "34" Build Tools: - 30.0.3 - 33.0.0 - 33.0.1 - 34.0.0 Android NDK: Not Found IDEs: Android Studio: 2024.1 AI-241.18034.62.2411.12071903 Xcode: version: 16.1/16B40 path: /usr/bin/xcodebuild Languages: Java: version: 17.0.12 path: /opt/homebrew/opt/openjdk@17/bin/javac Ruby: version: 2.6.10 path: /usr/bin/ruby npmPackages: "@react-native-community/cli": installed: 16.0.0 wanted: latest react: installed: 18.3.1 wanted: 18.3.1 react-native: installed: 0.76.3 wanted: 0.76.3 react-native-macos: Not Found npmGlobalPackages: "*react-native*": Not Found Android: hermesEnabled: Not found newArchEnabled: Not found iOS: hermesEnabled: Not found newArchEnabled: Not found ``` ### Stacktrace or Logs ```text No errors ``` ### Reproducer https://snack.expo.dev/@kanukim/textinputfonttest ### Screenshots and Videos https://github.com/user-attachments/assets/dbd6a83a-4f22-45c3-aa61-f17c8c53f7bf Please focus on the missing font changes at the first keystrokes.
Platform: iOS,Issue: Author Provided Repro,Component: TextInput,API: Keyboard
low
Critical
2,723,284,523
flutter
`SegmentedButton` does not set its MaterialState for side
### Steps to reproduce Launch the app and interact with `SegmentedButton` (press, hover etc.). `SegmentedButton` does not trigger `MaterialState` for side ### Expected results Border color should change ### Actual results Border color does not change ### Code sample <details open><summary>Code sample</summary> ```dart ThemeData( colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple), useMaterial3: true, visualDensity: VisualDensity.standard, segmentedButtonTheme: SegmentedButtonThemeData( style: ButtonStyle( // SegmentedButton does not trigger side side: WidgetStateProperty.resolveWith((states) { debugPrint('Button states: $states'); if (states.contains(WidgetState.disabled)) { return BorderSide(color: Colors.grey); } if (states.contains(WidgetState.hovered) || states.contains(WidgetState.focused)) { return BorderSide(color: Colors.green); } if (states.contains(WidgetState.selected)) { return BorderSide(color: Colors.purple); } if (states.contains(WidgetState.pressed)) { return BorderSide(color: Colors.blue); } return BorderSide(color: Colors.red); }), ), ), ) ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [✓] Flutter (Channel stable, 3.22.2, on macOS 15.0 24A335 darwin-arm64, locale en-PL) • Flutter version 3.22.2 on channel stable at /Users/bartosz.gasztych/fvm/versions/3.22.2 • Upstream repository https://github.com/flutter/flutter.git • Framework revision 761747bfc5 (6 months ago), 2024-06-05 22:15:13 +0200 • Engine revision edd8546116 • Dart version 3.4.3 • DevTools version 2.34.3 [✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2) • Android SDK at /Users/bartosz.gasztych/Library/Android/sdk • Platform android-33, build-tools 33.0.2 • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 16.0) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 16A242d • CocoaPods version 1.15.2 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2024.1) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314) [✓] VS Code (version 1.95.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.100.0 [✓] Connected device (3 available) • macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A335 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A335 darwin-arm64 • Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.109 [✓] Network resources • All expected network resources are available. • No issues found! ``` </details>
framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28
low
Critical
2,723,342,682
go
x/tools/gopls: feature request: refactor.extract.struct
[in slack](https://gophers.slack.com/archives/C0VPK4Z5E/p1733498044427149) Suppose I have: ```go type X struct { A, B, C, D string } ``` and I want to move to: ```go type Y struct { A, B string } type X struct { Y C, D string } ``` It would be nice to have some help with that.
FeatureRequest,gopls,Tools,Refactoring
low
Minor
2,723,359,060
rust
ICE: `expected wide pointer extra data (e.g. slice length or trait object vtable)`
<!-- ICE: Rustc ./a.rs '' 'error: internal compiler error: compiler/rustc_const_eval/src/interpret/place.rs:36:17: expected wide pointer extra data (e.g. slice length or trait object vtable)', 'error: internal compiler error: compiler/rustc_const_eval/src/interpret/place.rs:36:17: expected wide pointer extra data (e.g. slice length or trait object vtable)' File: /tmp/im/a.rs --> snippet: ````rust pub struct Data([[&str]; 5_i32]); const _: &'static Data = unsafe { &*(&[] as *const Data) }; ```` Version information ```` rustc 1.85.0-nightly (acf48426b 2024-12-06) binary: rustc commit-hash: acf48426b64d24f372d534f634072de1f4c7e588 commit-date: 2024-12-06 host: x86_64-unknown-linux-gnu release: 1.85.0-nightly LLVM version: 19.1.5 ```` Possibly related line of code: https://github.com/rust-lang/rust/blob/acf48426b64d24f372d534f634072de1f4c7e588/compiler/rustc_const_eval/src/interpret/place.rs#L30-L42 Command: `/home/matthias/.rustup/toolchains/master/bin/rustc ` <details><summary><strong>Program output</strong></summary> <p> ``` error[E0106]: missing lifetime specifier --> /tmp/icemaker_global_tempdir.aSYe5Is2GJl3/rustc_testrunner_tmpdir_reporting.gKjFHYgJOJDL/mvce.rs:1:19 | 1 | pub struct Data([[&str]; 5_i32]); | ^ expected named lifetime parameter | help: consider introducing a named lifetime parameter | 1 | pub struct Data<'a>([[&'a str]; 5_i32]); | ++++ ++ error[E0601]: `main` function not found in crate `mvce` --> /tmp/icemaker_global_tempdir.aSYe5Is2GJl3/rustc_testrunner_tmpdir_reporting.gKjFHYgJOJDL/mvce.rs:2:60 | 2 | const _: &'static Data = unsafe { &*(&[] as *const Data) }; | ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.aSYe5Is2GJl3/rustc_testrunner_tmpdir_reporting.gKjFHYgJOJDL/mvce.rs` error: internal compiler error: compiler/rustc_const_eval/src/interpret/place.rs:36:17: expected wide pointer extra data (e.g. 
slice length or trait object vtable) thread 'rustc' panicked at compiler/rustc_const_eval/src/interpret/place.rs:36:17: Box<dyn Any> stack backtrace: 0: 0x7ad09f54895a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h69edfa39ad21a03a 1: 0x7ad09fc13e26 - core::fmt::write::h54afaacea6c2c9e2 2: 0x7ad0a0bf3ad1 - std::io::Write::write_fmt::h00e73f8f3070368c 3: 0x7ad09f5487b2 - std::sys::backtrace::BacktraceLock::print::h2b0900033e71643f 4: 0x7ad09f54acca - std::panicking::default_hook::{{closure}}::h325b3b03f2ea7a39 5: 0x7ad09f54ab13 - std::panicking::default_hook::haaf25a6e7afde799 6: 0x7ad09e6c1238 - std[85c61899f894cf29]::panicking::update_hook::<alloc[98b3c20953f4fbe5]::boxed::Box<rustc_driver_impl[45224e78b6d2fe59]::install_ice_hook::{closure#0}>>::{closure#0} 7: 0x7ad09f54b488 - std::panicking::rust_panic_with_hook::hdc045113cf0fafba 8: 0x7ad09e6f6771 - std[85c61899f894cf29]::panicking::begin_panic::<rustc_errors[2297728cd06420b6]::ExplicitBug>::{closure#0} 9: 0x7ad09e6eb916 - std[85c61899f894cf29]::sys::backtrace::__rust_end_short_backtrace::<std[85c61899f894cf29]::panicking::begin_panic<rustc_errors[2297728cd06420b6]::ExplicitBug>::{closure#0}, !> 10: 0x7ad09e6eb6d3 - std[85c61899f894cf29]::panicking::begin_panic::<rustc_errors[2297728cd06420b6]::ExplicitBug> 11: 0x7ad09e700711 - <rustc_errors[2297728cd06420b6]::diagnostic::BugAbort as rustc_errors[2297728cd06420b6]::diagnostic::EmissionGuarantee>::emit_producing_guarantee 12: 0x7ad09ece86d3 - rustc_middle[2b65deb07cad4442]::util::bug::opt_span_bug_fmt::<rustc_span[267b34eb586afd37]::span_encoding::Span>::{closure#0} 13: 0x7ad09ecd0c0a - rustc_middle[2b65deb07cad4442]::ty::context::tls::with_opt::<rustc_middle[2b65deb07cad4442]::util::bug::opt_span_bug_fmt<rustc_span[267b34eb586afd37]::span_encoding::Span>::{closure#0}, !>::{closure#0} 14: 0x7ad09ecd0a9b - rustc_middle[2b65deb07cad4442]::ty::context::tls::with_context_opt::<rustc_middle[2b65deb07cad4442]::ty::context::tls::with_opt<rustc_middle[2b65deb07cad4442]::util::bug::opt_span_bug_fmt<rustc_span[267b34eb586afd37]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !> 15: 0x7ad09ce2a8d0 - rustc_middle[2b65deb07cad4442]::util::bug::bug_fmt 16: 0x7ad0a07da666 - <rustc_const_eval[c1d2b202d9973a02]::interpret::validity::ValidityVisitor<rustc_const_eval[c1d2b202d9973a02]::const_eval::machine::CompileTimeMachine> as rustc_const_eval[c1d2b202d9973a02]::interpret::visitor::ValueVisitor<rustc_const_eval[c1d2b202d9973a02]::const_eval::machine::CompileTimeMachine>>::visit_value 17: 0x7ad0a07d8eeb - <rustc_const_eval[c1d2b202d9973a02]::interpret::validity::ValidityVisitor<rustc_const_eval[c1d2b202d9973a02]::const_eval::machine::CompileTimeMachine> as rustc_const_eval[c1d2b202d9973a02]::interpret::visitor::ValueVisitor<rustc_const_eval[c1d2b202d9973a02]::const_eval::machine::CompileTimeMachine>>::visit_value 18: 0x7ad0a07d86fe - <rustc_const_eval[c1d2b202d9973a02]::interpret::validity::ValidityVisitor<rustc_const_eval[c1d2b202d9973a02]::const_eval::machine::CompileTimeMachine> as rustc_const_eval[c1d2b202d9973a02]::interpret::visitor::ValueVisitor<rustc_const_eval[c1d2b202d9973a02]::const_eval::machine::CompileTimeMachine>>::visit_value 19: 0x7ad0a0808a9c - rustc_const_eval[c1d2b202d9973a02]::const_eval::eval_queries::eval_to_allocation_raw_provider 20: 0x7ad0a08079be - 
rustc_query_impl[31526c927efa8eec]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[31526c927efa8eec]::query_impl::eval_to_allocation_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[2b65deb07cad4442]::query::erase::Erased<[u8; 24usize]>> 21: 0x7ad0a07efc22 - rustc_query_system[8b7296974754ceb3]::query::plumbing::try_execute_query::<rustc_query_impl[31526c927efa8eec]::DynamicConfig<rustc_query_system[8b7296974754ceb3]::query::caches::DefaultCache<rustc_middle[2b65deb07cad4442]::ty::PseudoCanonicalInput<rustc_middle[2b65deb07cad4442]::mir::interpret::GlobalId>, rustc_middle[2b65deb07cad4442]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[31526c927efa8eec]::plumbing::QueryCtxt, false> 22: 0x7ad0a07ef781 - rustc_query_impl[31526c927efa8eec]::query_impl::eval_to_allocation_raw::get_query_non_incr::__rust_end_short_backtrace 23: 0x7ad0a07f18f9 - rustc_const_eval[c1d2b202d9973a02]::const_eval::eval_queries::eval_to_const_value_raw_provider 24: 0x7ad0a07f1702 - rustc_query_impl[31526c927efa8eec]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[31526c927efa8eec]::query_impl::eval_to_const_value_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[2b65deb07cad4442]::query::erase::Erased<[u8; 24usize]>> 25: 0x7ad0a07efbd9 - rustc_query_system[8b7296974754ceb3]::query::plumbing::try_execute_query::<rustc_query_impl[31526c927efa8eec]::DynamicConfig<rustc_query_system[8b7296974754ceb3]::query::caches::DefaultCache<rustc_middle[2b65deb07cad4442]::ty::PseudoCanonicalInput<rustc_middle[2b65deb07cad4442]::mir::interpret::GlobalId>, rustc_middle[2b65deb07cad4442]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[31526c927efa8eec]::plumbing::QueryCtxt, false> 26: 0x7ad0a07ef689 - rustc_query_impl[31526c927efa8eec]::query_impl::eval_to_const_value_raw::get_query_non_incr::__rust_end_short_backtrace 27: 0x7ad0a003a154 - <rustc_middle[2b65deb07cad4442]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[4e6fa7c2cad24599]::check_crate::{closure#3}>::{closure#0} 28: 0x7ad0a0037b37 - rustc_hir_analysis[4e6fa7c2cad24599]::check_crate 29: 0x7ad0a0042584 - rustc_interface[51c1904b6ef58290]::passes::run_required_analyses 30: 0x7ad0a0bd88de - rustc_interface[51c1904b6ef58290]::passes::analysis 31: 0x7ad0a0bd88af - rustc_query_impl[31526c927efa8eec]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[31526c927efa8eec]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[2b65deb07cad4442]::query::erase::Erased<[u8; 1usize]>> 32: 0x7ad0a0baf7ba - rustc_query_system[8b7296974754ceb3]::query::plumbing::try_execute_query::<rustc_query_impl[31526c927efa8eec]::DynamicConfig<rustc_query_system[8b7296974754ceb3]::query::caches::SingleCache<rustc_middle[2b65deb07cad4442]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[31526c927efa8eec]::plumbing::QueryCtxt, false> 33: 0x7ad0a0baf48e - rustc_query_impl[31526c927efa8eec]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace 34: 0x7ad0a0c503eb - rustc_interface[51c1904b6ef58290]::interface::run_compiler::<core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>, rustc_driver_impl[45224e78b6d2fe59]::run_compiler::{closure#0}>::{closure#1} 35: 0x7ad0a0b41ca1 - 
std[85c61899f894cf29]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[51c1904b6ef58290]::util::run_in_thread_with_globals<rustc_interface[51c1904b6ef58290]::util::run_in_thread_pool_with_globals<rustc_interface[51c1904b6ef58290]::interface::run_compiler<core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>, rustc_driver_impl[45224e78b6d2fe59]::run_compiler::{closure#0}>::{closure#1}, core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>>::{closure#0}, core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>> 36: 0x7ad0a0b41948 - <<std[85c61899f894cf29]::thread::Builder>::spawn_unchecked_<rustc_interface[51c1904b6ef58290]::util::run_in_thread_with_globals<rustc_interface[51c1904b6ef58290]::util::run_in_thread_pool_with_globals<rustc_interface[51c1904b6ef58290]::interface::run_compiler<core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>, rustc_driver_impl[45224e78b6d2fe59]::run_compiler::{closure#0}>::{closure#1}, core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>>::{closure#0}, core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ce4f568dace374e]::result::Result<(), rustc_span[267b34eb586afd37]::ErrorGuaranteed>>::{closure#1} as core[ce4f568dace374e]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} 37: 0x7ad0a0b4107b - std::sys::pal::unix::thread::Thread::new::thread_start::h6f16c939ae6ab66d 38: 0x7ad09aea339d - <unknown> 39: 0x7ad09af2849c - <unknown> 40: 0x0 - <unknown> note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md note: please make sure that you have updated to the latest nightly note: rustc 1.85.0-nightly (acf48426b 2024-12-06) running on x86_64-unknown-linux-gnu query stack during panic: #0 [eval_to_allocation_raw] const-evaluating + checking `_` #1 [eval_to_const_value_raw] simplifying constant for the type system `_` end of query stack error: aborting due to 3 previous errors Some errors have detailed explanations: E0106, E0601. For more information about an error, try `rustc --explain E0106`. ``` </p> </details> <!-- query stack: 2 | const _: &'static Data = unsafe { &*(&[] as *const Data) }; #0 [eval_to_allocation_raw] const-evaluating + checking `_` #1 [eval_to_const_value_raw] simplifying constant for the type system `_` -->
I-ICE,T-compiler,C-bug,A-const-eval,S-has-mcve,S-bug-has-test
low
Critical
2,723,362,146
ollama
Administrative / silent install is borked
### What is the issue? For deployment scenarios like classrooms or regular managed devices, the setup needs to support an administrative / unattended install. For Inno Setup-built installers this can normally be done via the command-line switches /SILENT (or /VERYSILENT), or via a response file with /LOADINF. (See: https://jrsoftware.org/ishelp/index.php?topic=setupcmdline ) If the setup can be performed by both an unprivileged user and an administrator, the switch "/ALLUSERS" helps to select the latter. If the install paths are (correctly) derived from the common environment variables, this should automagically move the files to "C:\Program Files" or "C:\Program Files (x86)" instead of the user's AppData folder. Normally this would also lead the installer engine to register the application under the system registry (HKLM:) and create a system-wide start menu entry. With Ollama's setup this does not work, even though setting the install path to 'C:\Program Files\Ollama' as mentioned in the documentation makes it look like it should. Instead it creates a mix of both methods where the files are installed to the Program Files folder, but the registry and start menu entries are limited to the user (context) installing the software. And as most management systems use a local system service as their context, this creates a rather weird state. Even if I could limit the permissions, for a classroom, for example, I can never know which user will access the device... ### OS Windows ### GPU _No response_ ### CPU _No response_ ### Ollama version 0.4.7
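For reference, a sketch of the kind of unattended, machine-wide install command being attempted; the switches are the standard Inno Setup switches cited above, while the installer filename is an assumption.

```bat
:: Sketch only: standard Inno Setup switches; the "OllamaSetup.exe" filename is assumed.
:: Expected result: files under "C:\Program Files\Ollama", HKLM registry entries,
:: and a system-wide Start Menu entry, which currently does not happen.
OllamaSetup.exe /VERYSILENT /ALLUSERS /DIR="C:\Program Files\Ollama"
```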
feature request,windows,install
low
Minor
2,723,375,895
flutter
[packages] Eliminate use of Pigeon `dartHostTestHandler`
In my experience doing Pigeon conversions of our plugins, `dartHostTestHandler` didn't make tests noticeably easier than simply mocking/faking/stubbing the host API class directly, and in some cases it was actually harder to use correctly to simulate things like exceptions. We have a robust test suite for Pigeon itself, so having package-level unit tests covering part of the Pigeon serialization isn't necessary, and the test generator is another generator we need to maintain. We should try to eliminate all use of `dartHostTestHandler` in our own plugins, and if we are able to do so without any significant issues, remove the test generator from Pigeon entirely to reduce complexity and ongoing maintenance costs. Current usage: - [ ] `camera_android_camerax` - [ ] `file_selector_ios` - [ ] `file_selector_linux` - [ ] `file_selector_macos` - [ ] `file_selector_windows` - [ ] `image_picker_android` - [ ] `image_picker_ios` - [ ] `in_app_purchase_storekit` - [ ] `path_provider_android` - [ ] `path_provider_foundation` - [ ] `shared_preferences_android` - [ ] `shared_preferences_foundation` - [ ] `url_launcher_windows` - [ ] `video_player_android` - [ ] `video_player_avfoundation` - [ ] `webview_flutter_wkwebview`
team,package,team-ecosystem,P2,triaged-ecosystem
low
Minor
2,723,411,796
rust
Tracking issue for release notes of #43244: Tracking issue for Vec::extract_if and LinkedList::extract_if
This issue tracks the release notes text for #43244. ### Steps - [ ] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ````markdown # Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...) - [Tracking issue for Vec::extract_if and LinkedList::extract_if](https://github.com/rust-lang/rust/issues/43244) ```` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ````markdown ```` cc @Gankra, @the8472 -- origin issue/PR authors and assignees for starting to draft text
A-collections,T-libs-api,relnotes,relnotes-tracking-issue
low
Minor
2,723,419,339
flutter
Default `GestureDetector` Hit Test Behavior of `deferToChild` Is Counterintuitive
### Steps to reproduce 1. Create a text widget with some padding around it and wrap it in a gesture detector 2. Tap on the visible text and note that the on tap is fired 3. Tap on the padded space immediately next to the text ### Expected results The on tap is fired in step 3 ### Actual results The on tap is not fired in step 3 ### Why this is a problem This is "working as intended" because the default hit test behavior for `GestureDetector` is `deferToChild` and `Padding` does not consider the padded space for hit testing. This is reasonable if not desirable behavior in many circumstances, **but it is not intuitive enough to be a sane default**. Developers who are unaware of this nuanced behavior introduce usability issues everywhere in their app by creating text buttons with much smaller tappable regions than intended. Ideally, developers would just use `InkWell` or `TextButton`, but in practice the vast majority of major companies do not want to be that tied to Material and choose to write their own custom button wrappers as a part of their company-specific design system. ### Proposal Make `HitTestBehavior behavior` a required argument with no default value. This will be slightly more annoying for developers, but it will force them to consciously choose the desired hit test behavior. IMO the increase in verbosity is more than justified by the reduction in developer confusion and bugs-explicitness is pretty reasonable here given that `GestureDetector` is a comparatively low level widget and if you want ease-of-use you should probably be using the more ready-made equivalents such as `InkWell`, etc. ### Code sample <details open><summary>Code sample</summary> ```dart return MaterialApp( title: 'Flutter Demo', home: Scaffold( body: Column( mainAxisAlignment: MainAxisAlignment.center, children: [ // The padding around the button does not detect hit events because the // default hit test behavior is `deferToChild` and `Padding` doesn't // count padded space for hit testing. GestureDetector( onTap: () { print('Button A pressed!'); }, child: const Padding( padding: EdgeInsets.all(40), child: Text('Button A'), ), ), const Divider(), // Setting `behavior` to translucent fixes this. GestureDetector( behavior: HitTestBehavior.translucent, onTap: () { print('Button B pressed!'); }, child: const Padding( padding: EdgeInsets.all(40), child: Text('Button B'), ), ), ], ), ), ); ``` </details> ### Screenshots or Video _No response_ ### Logs _No response_ ### Flutter Doctor output Not relevant.
c: new feature,framework,d: api docs,f: gestures,c: proposal,P2,team-framework,triaged-framework
low
Critical
2,723,436,212
flutter
Cocoon: be explicit about what "a long time" is when removing an assignment
https://github.com/flutter/cocoon/blob/08d6d03f1cd7d62febe1901274250e05912d9673/triage_bot/lib/engine.dart#L1282 We know the value. Instead of saying `but has had no status updates in a long time` we could say `but has had no status updates for ${dayCount} days`.
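A minimal Dart sketch of the suggested wording; `dayCount` and the helper function are illustrative only, not the actual implementation in `triage_bot/lib/engine.dart`.

```dart
// Sketch only: names are illustrative, not taken from the Cocoon source.
String staleAssignmentMessage(int dayCount) {
  // Before: '... but has had no status updates in a long time'
  // After: state the known threshold explicitly.
  return 'This issue is assigned but has had no status updates for $dayCount days.';
}

void main() {
  print(staleAssignmentMessage(60)); // 60 is an arbitrary example value
}
```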
team-infra,P3,triaged-infra
low
Minor
2,723,444,640
puppeteer
[Feature]: No way to set a proxy to connect to a Chrome instance running on another PC.
### Feature description My testing Chrome instance is running on another PC on the LAN. I can connect to the services running on that PC via an SSH SOCKS5 proxy. However, Puppeteer's `.connect()` method does not provide a parameter like `proxy` or `agent` (like the `node-fetch` package does) to allow connecting to a Chrome instance running on a different PC from the one running the Node script. I tried to redirect the traffic using tools like proxychains; it works for `fetch()`, but not for Puppeteer's `.connect()`.
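Until a `proxy`/`agent` option exists, one possible workaround sketch is to forward the remote DevTools port over SSH and connect to it as if it were local; the port number, host name, and ESM context are assumptions.

```ts
// Workaround sketch, assuming Chrome on the remote PC was started with
// --remote-debugging-port=9222 and an SSH tunnel is already running, e.g.:
//   ssh -N -L 9222:localhost:9222 user@remote-pc
import puppeteer from 'puppeteer';

const browser = await puppeteer.connect({
  browserURL: 'http://localhost:9222', // tunneled to the remote Chrome instance
});
const page = await browser.newPage();
await page.goto('https://example.com');
await browser.disconnect();
```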
feature,P3
low
Minor
2,723,452,178
TypeScript
Add caching to improve performance of rebuilds in --build mode
### 🔎 Search Terms "processSourceFile cache" ### 🕗 Version & Regression Information - I was unable to test this on prior versions because: I don't think this has changed since it was implemented ### ⏯ Playground Link _No response_ ### 💻 Code We have a large monorepo with several very large React apps. ### 🙁 Actual behavior Rebuilds wind up re-processing lots of files that did not change ### 🙂 Expected behavior Files that did not change can reuse their parsing from earlier ### Additional information about the issue It does not appear that --build mode uses caches for file reads and getSourceFile computation between runs. I’ve been investigating long delays between the time a user saves a file in their editor and when the tsc process finishes and displays success or errors that it found. In doing some profiling, nearly 70% of the total time was being spent in processSourceFile and its descendants. This was surprising to me because my mental model was that it would have already processed these files and cached the results, and that only the changed files would have to be reevaluated. From looking at the code, it appears that there are two fairly distinct codepaths depending on whether --build has been specified. When --build is used, it winds up in buildNextInvalidatedProjectWorker which clears the cache of parsed source files after every iteration. This is in contrast with createWatchProgram, which appears to keep a persistent cache of the parsed files and takes care to evict things from the cache when necessary. As a naive test, I added [caching](https://github.com/microsoft/TypeScript/compare/v5.6.2...mfedderly:TypeScript:mf/getSourceFile-cached?expand=1) to createGetSourceFile which still performs the file read, compares the source file’s string against the previous string for this fileName, and then skips the parse if they match (I assume this is flawed for many reasons, including leaking memory from files that have been deleted). This led to nearly a ~50% reduction in time to finish the type checking and print its status (30.5s to 17s). I assume this can be further improved by avoiding the file reads or string compares using something like file mtime or hashes. Would it be possible to improve caching of the parsed source files in --build mode between runs? There appears to be quite a bit of benefit available if it is technically feasible.
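For concreteness, a rough TypeScript sketch of the naive caching approach described above (reuse the parse when the file text is unchanged); all names are hypothetical and, like the linked experiment, it does no eviction.

```ts
// Hypothetical sketch of the experiment described above, not the actual compiler code.
import * as ts from "typescript";

const cache = new Map<string, { text: string; sourceFile: ts.SourceFile }>();

function getSourceFileCached(
  fileName: string,
  text: string,
  languageVersion: ts.ScriptTarget,
): ts.SourceFile {
  const hit = cache.get(fileName);
  if (hit && hit.text === text) {
    return hit.sourceFile; // text unchanged: skip re-parsing
  }
  const sourceFile = ts.createSourceFile(fileName, text, languageVersion);
  cache.set(fileName, { text, sourceFile }); // note: never evicted (leaks deleted files)
  return sourceFile;
}
```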
Needs Investigation
low
Critical
2,723,464,443
godot
`EditorUndoRedoManager` `custom_context` results in "UndoRedo history mismatch" error
### Tested versions - v4.3.stable.official [77dcf97d8] - 1f47e4c4e3a09a422e96880a7918d986dd575a63 ### System information Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Mobile) - dedicated NVIDIA GeForce GTX 980 Ti (NVIDIA; 32.0.15.6614) - Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 Threads) ### Issue description In both the [description](https://docs.godotengine.org/en/stable/classes/class_editorundoredomanager.html#description) and [create_action](https://docs.godotengine.org/en/stable/classes/class_editorundoredomanager.html#class-editorundoredomanager-method-create-action) documentation of `EditorUndoRedoManager`, it states: >If `custom_context` object is provided, it will be used for deducing target history (instead of using the first operation). But providing this `custom_context` often results in an error something like this: ``` UndoRedo history mismatch: expected 0, got 1. ``` ### Steps to reproduce ``` @tool extends EditorPlugin func _enter_tree() -> void: # Initialization of the plugin goes here. add_tool_menu_item("perform undo redo test action", undo_redo_test_action) func _exit_tree() -> void: # Clean-up of the plugin goes here. remove_tool_menu_item("perform undo redo test action") func undo_redo_test_action() -> void: var sc: Shortcut = Shortcut.new() var undo_redo: EditorUndoRedoManager = get_undo_redo() undo_redo.create_action("performed test action", UndoRedo.MERGE_DISABLE, ProjectSettings) undo_redo.add_do_property(sc, "events", []) undo_redo.add_undo_property(sc, "events", []) undo_redo.commit_action() ``` In the above example, `ProjectSettings` is passed as a `custom_context`, which instructs `EditorUndoRedoManager` to use the global history, regardless of the first operation. When `add_do_property` is called with a `Shortcut` resource, the `EditorUndoRedoManager` throws this error message: ``` UndoRedo history mismatch: expected 0, got 1. ``` If I do not provide a `custom_context` when calling `create_action`, this results in the **scene** history being used because my `add_do_property` uses a `Shortcut` object. This is an incorrect deduction of which undo/redo history should be used, so I expect I should be able to use a `custom_context` object without any error messages to force the Global history. ### Minimal reproduction project (MRP) [undo-redo-context-bug.zip](https://github.com/user-attachments/files/18040935/undo-redo-context-bug.zip)
topic:core,documentation
low
Critical
2,723,483,734
go
x/net/html: improper newline handling
### Go version go version go1.22.7 linux/amd64 ### Output of `go env` in your module/workspace: ```shell GO111MODULE='' GOARCH='amd64' GOBIN='' GOCACHE='/home/gopher/.cache/go-build' GOENV='/home/gopher/.config/go/env' GOEXE='' GOEXPERIMENT='' GOFLAGS='' GOHOSTARCH='amd64' GOHOSTOS='linux' GOINSECURE='' GOMODCACHE='/home/gopher/go/pkg/mod' GONOPROXY='' GONOSUMDB='' GOOS='linux' GOPATH='/home/gopher/go' GOPRIVATE='' GOPROXY='direct' GOROOT='/usr/lib/golang' GOSUMDB='off' GOTMPDIR='' GOTOOLCHAIN='local' GOTOOLDIR='/usr/lib/golang/pkg/tool/linux_amd64' GOVCS='' GOVERSION='go1.22.7' GCCGO='gccgo' GOAMD64='v1' AR='ar' CC='gcc' CXX='g++' CGO_ENABLED='1' GOMOD='/dev/null' GOWORK='' CGO_CFLAGS='-O2 -g' CGO_CPPFLAGS='' CGO_CXXFLAGS='-O2 -g' CGO_FFLAGS='-O2 -g' CGO_LDFLAGS='-O2 -g' PKG_CONFIG='pkg-config' GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build410475203=/tmp/go-build -gno-record-gcc-switches' ``` ### What did you do? In trying to understand why our HTML to LateX parser sometimes produced incorrect entries (sentence endings that didn't make sense), we found an explanation in the html tokenizer. In fact, when using CRLF (\r\n) line breaks, we found that calling the Token() method caused some invalid entries when calling Raw() (doubled characters). The documentation does mention that calling Token() can modify Raw entries, but I can't see why my characters would be doubled. [Here is an exemple of the issue](https://go.dev/play/p/ryt_NlVhkOX) ### What did you see happen? Parsing `ABC\r\nDEF\r\nGHI\r\n` is returning: ``` ABC DEF GHI I ``` ### What did you expect to see? ``` ABC DEF GHI ```
NeedsInvestigation
low
Minor
2,723,526,203
godot
Bad PackedScene load() performance when handling GDScript inheritance
### Tested versions - Reproducible in latest version: 4.4 dev4, 4.4 dev6 - Haven't tested on prior versions, but I suspect that this is a long standing issue. - Tested on Windows 10, Android (several) ### System information Windows 10 - Godot 4.4 dev 6 - Any pipeline - GeForce GTX 970 ### Issue description Loading times of empty, inherited GDScript node scenes are unreasonably high. I know Godot Engine is more oriented to node composition than inheritance, and that binary scene format is more performant than .tscn. However, these PackedScene load() times of just a small chain of 3 or 4 **empty** inherited classes are very high. Each "extends X" adds more than 1ms to load() on an EMTPY class, the issue gets way worse when you have actual code in the classes. To me, that smells fishy as hell. And that there must be some big innefficiency in the GDScript class loading code. I include an extremely simple MRP where you can see the degrade of performance on every inheritance step. We are talking +1ms on a Windows 10 i7 with an empty parent class, and an empty child (extends) class. Imagine with, you know, actual code. On a low end Android device. ### Steps to reproduce Just mount a simple project like I include as MRP, a few simple scenes with 1 single node: a.gd: extends Control # Can be Node, Object, Control whatever ... class_name A a.tscn: - A # That's it, just a single node with the empty class, all properties to defaults b.gd: extends A class_name B b.tscn: - B c.gd: extends B class_name C c.tscn: - C ... Then on your main scene run this simple test to load the scenes and time it (on _ready() or wherever you want): for type in ["a", "b", "c", "d"]: var tstart = Time.get_ticks_usec() load("res://" + type + ".tscn") var time = (Time.get_ticks_usec() - tstart) / 1000.0 print("%s base load: %.2fms" % [type, time]) # Output (on a Windows 10 Intel i7, 16GB ram) #a base load: 4.89ms #b base load: 5.70ms #c base load: 6.55ms #d base load: 7.94ms We are talking 8 ms loading an "empty" scene with an "empty" class on an Intel i7 with 16GB RAM. You can imagine the performance on a real case scenario (actual code, export vars, etc.) on a low end device (like mobile). I got to this MRP after a thorough investigation of why a not too complex scene load() was taking 30ms on my desktop computer (after seeing truly atrocious performance on Android). ### Minimal reproduction project (MRP) [high_scene_load_times.zip](https://github.com/user-attachments/files/18041051/high_scene_load_times.zip)
bug,topic:gdscript,needs testing,performance
low
Major
2,723,544,436
flutter
Gesture Detection Fails When WebViewWidget is Nested Inside a Scaling Layout Widget
### What package does this bug report belong to? webview_flutter ### What target platforms are you seeing this bug on? macOS ### Have you already upgraded your packages? Yes ### Dependency versions _No response_ ### Steps to reproduce 1. Attempt to interact with the web content: 2. Try scrolling, tapping buttons, or interacting with any clickable element within the scaled WebView. 3. Observe the behavior: 4. Gestures may not register correctly, if at all. 5. Interactions that normally work in an unscaled WebViewWidget fail. ### Expected results Gestures also work when the WebViewWidget was scaled. ### Actual results It seems like all pointers are offset by a linear factor proportional to the scaling. ### Code sample <details open><summary>Code sample</summary> ```dart import 'package:flutter/material.dart'; import 'package:webview_flutter/webview_flutter.dart'; void main() { runApp(const MaterialApp( home: ScaledWebViewTest(), )); } class ScaledWebViewTest extends StatelessWidget { const ScaledWebViewTest({super.key}); @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar(title: const Text('Scaled WebView Gesture Test')), body: Transform.scale( scale: 0.9, child: WebViewWidget( controller: WebViewController() ..setJavaScriptMode(JavaScriptMode.unrestricted) ..loadRequest( Uri.parse('https://codepen.io/BananaCoding/pen/mdrGjpL')), ), ), ); } } ``` </details> ### Screenshots or Videos <details open> <summary>Screenshots / Video demonstration</summary> https://github.com/user-attachments/assets/03e356d9-586c-4088-b059-187fc1005bf6 </details> ### Logs _No response_ ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console Doctor summary (to see all details, run flutter doctor -v): [✓] Flutter (Channel stable, 3.24.5, on macOS 14.7 23H124 darwin-arm64, locale en-US) [✗] Android toolchain - develop for Android devices ✗ cmdline-tools component is missing Run `path/to/sdkmanager --install "cmdline-tools;latest"` See https://developer.android.com/studio/command-line for more details. [✓] Xcode - develop for iOS and macOS (Xcode 15.3) [✓] Chrome - develop for the web [!] Android Studio (not installed) [✓] VS Code (version 1.94.2) [✓] Connected device (3 available) [✓] Network resources ! Doctor found issues in 2 categories. ``` </details>
framework,f: gestures,a: platform-views,p: webview,package,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27
low
Critical
2,723,561,122
langchain
ValueError: Self query retriever with Vector Store type <class 'langchain_weaviate.vectorstores.WeaviateVectorStore'> not supported.
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` import weaviate from langchain.schema import Document from langchain_weaviate.vectorstores import WeaviateVectorStore from langchain.retrievers.self_query.base import SelfQueryRetriever from langchain.chains.query_constructor.base import AttributeInfo from weaviate.classes.init import Auth client = weaviate.connect_to_weaviate_cloud( cluster_url=weaviate_url, auth_credentials=Auth.api_key(weaviate_api_key), ) documents = [ Document( page_content="Weavite is a powerful vector database for embeddings.", metadata={"author": "John Doe", "category": "Technology"} ), Document( page_content="Python is a versatile programming language used for various applications.", metadata={"author": "Jane Smith", "category": "Programming"} ), Document( page_content="Machine learning enables computers to learn from data.", metadata={"author": "Alice Johnson", "category": "AI"} ) ] db_from_docs = WeaviateVectorStore.from_documents(documents, embeddings, client=client) pdf_metadata_field_info = [ AttributeInfo(name="source", description="The PDF file the chunk is from", type="string"), AttributeInfo(name="page", description="The page number from the PDF", type="integer") ] search_kwargs = {"k": 6} data_retriever = SelfQueryRetriever.from_llm( llm, db_from_docs, "Company information from PDF files", pdf_metadata_field_info, search_kwargs=search_kwargs ) ``` ### Error Message and Stack Trace (if applicable) File /opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:333, in SelfQueryRetriever.from_llm(cls, llm, vectorstore, document_contents, metadata_field_info, structured_query_translator, chain_kwargs, enable_limit, use_original_query, **kwargs) [319](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:319) @classmethod [320](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:320) def from_llm( [321](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:321) cls, (...) 
[330](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:330) **kwargs: Any, [331](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:331) ) -> "SelfQueryRetriever": [332](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:332) if structured_query_translator is None: --> [333](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:333) structured_query_translator = _get_builtin_translator(vectorstore) [334](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:334) chain_kwargs = chain_kwargs or {} [336](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:336) if ( [337](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:337) "allowed_comparators" not in chain_kwargs [338](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:338) and structured_query_translator.allowed_comparators is not None ... [208](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:208) f"Self query retriever with Vector Store type {vectorstore.__class__}" [209](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:209) f" not supported." [210](https://file+.vscode-resource.vscode-cdn.net/opt/conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:210) ) ValueError: Self query retriever with Vector Store type <class 'langchain_weaviate.vectorstores.WeaviateVectorStore'> not supported. ### Description Is this related to the fact that Weavite is now in V4, creating a necessity to update the SelfQueryRetriever to support WeaviateVectorStore? The current Weavite support is targeted at Weavite V3. ### System Info langchain==0.3.9 langchain-community==0.3.9 langchain-weaviate==0.0.3 langchain-openai==0.2.11
Ɑ: vector store
low
Critical
2,723,571,177
ollama
Pleias
[Pleias has been announced!](https://huggingface.co/blog/Pclanglais/common-models) An LLM trained only on text it's allowed to train on. It would be fantastic to have this available in ollama, especially for us folks using computers with plenty of system RAM but barely any VRAM. Here's a list of the models: https://huggingface.co/collections/PleIAs/common-models-674cd0667951ab7c4ef84cc4
model request
low
Major
2,723,585,864
deno
Ability to read the current stdin raw options so that setRaw() can be part of a reversible transaction
I want to create an operation that uses raw input from stdin, but then resets it afterwards to whatever it was when the operation started. In other words, if it was already raw, then it should remain raw. If it was not, then it should be reset to be not raw. The same applies for the value of the raw options: e.g. {cbreak: true } However, raw options are currently write-only and there is no way to get the current state that it should be set back to. I imagine it would look something like: ```ts let originalRawOptions = Deno.stdin.getRawOptions(); try { Deno.stdin.setRaw(true, { cbreak: true }); // do stuff } finally { Deno.stdin.setRaw(...originalRawOptions) } ``` See [`ReadStream.isRaw`](https://nodejs.org/api/tty.html#readstreamisraw) for a similar API in Node.
public API,suggestion
low
Minor
2,723,624,074
vscode
Characters go into each other with fontLigatures and RTL characters.
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: 1.93.1 - OS Version: NixOS 24.05 (not relevant) Steps to Reproduce: 1. Set the font to `FiraCode` 2. Enable `editor.fontLigatures` 3. Type or copy `سلام ===` into the editor. 4. See that characters go into each other: ![Image](https://github.com/user-attachments/assets/a6324377-dd36-47e5-b389-e81c8faeaa74) The desired output is something like this: ![Image](https://github.com/user-attachments/assets/ffc668fd-ca33-45a7-98f0-863e34d8f75c)
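For reference, steps 1 and 2 correspond to a minimal `settings.json` sketch like the following (assuming the Fira Code font is installed on the system):

```jsonc
// settings.json: minimal repro configuration sketch
{
  "editor.fontFamily": "Fira Code",
  "editor.fontLigatures": true
}
```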
bug,editor-core,editor-rendering
low
Critical
2,723,690,577
godot
Setting Control `size` doesn't update when difference is small
### Tested versions - Reproducible in: v4.3.stable.official [77dcf97d8], v4.4.dev6.official [1f47e4c4e] ### System information Godot v4.4.dev6 - Debian GNU/Linux trixie/sid trixie on Wayland - X11 display driver, Multi-window, 2 monitors - Vulkan (Forward+) - dedicated AMD Radeon RX 7600 (RADV NAVI33) - AMD Ryzen 5 7600 6-Core Processor (12 threads) ### Issue description If one tries to set the `size` of a `Control` to something which is nearly, but not quite, the same as its current `size`, nothing happens. Eg. when the `size` is `(463.9962, 44)`, setting it to `(464, 44)` does nothing. This can leave UI elements slightly off: ![Image](https://github.com/user-attachments/assets/3ba913b1-58ef-44c4-9fa5-b11f5897d802) I hit this when interpolating one control's position and size to another. When the two controls are close enough together, the interpolating control's `size` is set to the target control's `size`. However, sometimes the interpolating Control would stop at a point slightly smaller than the target. I've worked around it for now by adding and them immediately removing a larger value: ```gdscript if size.is_equal_approx(target_size): size = target_size + Vector2.ONE size -= Vector2.ONE ``` EDIT: I've just noticed that this seems to also affect `Control.position`. ### Steps to reproduce Create a scene with a `Control` node, anchor it to top-left (to avoid some warnings), and attach this script: ```gdscript extends Control func _ready() -> void: size = Vector2(120, 40.0003) print("start => ", size) size.y = 40 print("40 => ", size) size = Vector2(120, 40) print("(120, 40) => ", size) size = Vector2(121, 40) print("(121, 40) => ", size) ``` Output: ``` start => (120.0, 40.0003) 40 => (120.0, 40.0003) (120, 40) => (120.0, 40.0003) (121, 40) => (121.0, 40.0) ``` ### Minimal reproduction project (MRP) n/a
bug,confirmed,regression,topic:gui
low
Minor
2,723,702,209
TypeScript
Discriminating property with never as possible type makes its enclosing object type disappear when narrowing
### 🔎 Search Terms discriminated union never, discriminating property never ### 🕗 Version & Regression Information It happens in every version I tried. ### ⏯ Playground Link https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241206#code/C4TwDgpgBAIglgZwMYFUB2cD2aoF4BQAPlAN5QAmiSACgE6ZgBcUaEAbhLQDRQCGzARigBfIqQpU6DZgCIw9MDJ4AjZgCYRYspWRSmUAK5pyEAGZxW5HkmYBmTfhNIANr1rQk2BMCiZm8ZHQsNHx8OFMACkwAOh0aBQBKUnwoXxSoAHoMqAA9AH58YSgIZwRoEnTMdKzcgtEwyIBCGLi9JIrUqtSa-MLi0vLK6uze4SA ### 💻 Code ```ts type DiscUnion = | { discProp: never, a: 1 } | { discProp: "prop", b: 2 } | { discProp: undefined, c: 3 } declare const o: DiscUnion if(o.discProp) { o // ^? { discProp: "prop"; b: 2; } } else { o // ^? { discProp: undefined; c: 3; } } if(!o.discProp) { o // ^? { discProp: undefined; c: 3; } } else { o // ^? { discProp: "prop"; b: 2; } } ``` ### 🙁 Actual behavior `{ discProp: never, a: 1 }` disappears! ### 🙂 Expected behavior In the first `if`, I expect `{ discProp: never, a: 1 }` to be a possible type of `o` in the `else` branch. Viceversa, in the second `if` I expect it to be a possible type of `o` in the `then` branch. ### Additional information about the issue _No response_
Suggestion,In Discussion
low
Major
2,723,737,362
rust
Tracking issue for release notes of #130190: [discussion] `ErrorKind::FilesystemQuotaExceeded` from `io_error_more`
This issue tracks the release notes text for #130190. ### Steps - [ ] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ````markdown # Stabilized APIs - [`io::ErrorKind::QuotaExceeded`](https://doc.rust-lang.org/stable/std/io/enum.ErrorKind.html#variant.QuotaExceeded) ```` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ````markdown ```` cc @GrigorenkoPV -- origin issue/PR authors and assignees for starting to draft text
T-libs-api,relnotes,relnotes-tracking-issue
low
Critical
2,723,737,420
rust
Tracking issue for release notes of #130191: [discussion] `ErrorKind::CrossesDevices` from `io_error_more`
This issue tracks the release notes text for #130191. ### Steps - [ ] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ````markdown # Stabilized APIs - [`io::ErrorKind::CrossesDevices`](https://doc.rust-lang.org/stable/std/io/enum.ErrorKind.html#variant.CrossesDevices) ```` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ````markdown ```` cc @GrigorenkoPV -- origin issue/PR authors and assignees for starting to draft text
T-libs-api,relnotes,relnotes-tracking-issue
low
Critical
2,723,738,600
rust
Adding `lto = true` causes duplicated symbol errors on `.weak` symbols
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. --> Here's a minimal reproducer for the issue described in the title. `.cargo/config.toml`: ```toml [build] target = "riscv32imc-unknown-none-elf" [unstable] build-std = ["core,panic_abort"] ``` `memory.x` (exact contents unimportant, but provided for convenience): ``` MEMORY { IMEM : ORIGIN = 0x80000000, LENGTH = 1024K DMEM : ORIGIN = 0x10000000, LENGTH = 256K } REGION_ALIAS("REGION_TEXT", IMEM); REGION_ALIAS("REGION_RODATA", DMEM); REGION_ALIAS("REGION_DATA", DMEM); REGION_ALIAS("REGION_BSS", DMEM); REGION_ALIAS("REGION_HEAP", DMEM); REGION_ALIAS("REGION_STACK", DMEM); ``` `build.rs`: ```rs use std::env; use std::fs; use std::path::Path; fn main() { let out_dir = env::var("OUT_DIR").expect("No out dir"); let dest_path = Path::new(&out_dir).join("memory.x"); fs::write(dest_path, include_bytes!("memory.x")).expect("Could not write file"); if env::var("CARGO_CFG_TARGET_ARCH").unwrap() == "riscv32" { println!("cargo:rustc-link-arg=-Tmemory.x"); println!("cargo:rustc-link-arg=-Tlink.x"); } println!("cargo:rustc-link-search={out_dir}"); println!("cargo:rerun-if-changed=memory.x"); println!("cargo:rerun-if-changed=build.rs"); } ``` `src/main.rs`: ```rs #![no_std] #![no_main] use riscv_rt::entry; #[entry] fn main() -> ! { loop {} } #[panic_handler] fn panic_handler(_: &core::panic::PanicInfo) -> ! { loop {} } #[unsafe(export_name = "ExceptionHandler")] fn exception_handler(_: &riscv_rt::TrapFrame) -> ! { loop {} } #[unsafe(export_name = "DefaultHandler")] fn default_handler(_: &riscv_rt::TrapFrame) -> ! { loop {} } ``` `Cargo.toml`: ```toml [package] name = "lto-bug-repro" version = "0.1.0" edition = "2021" [profile.release] lto = true # Commenting out this line should allow it to compile successfully [dependencies] riscv-rt = "0.13.0" ``` As is documented in `riscv-rt`, `ExceptionHandler` and `DefaultHandler` are weak symbols, and I verified in the source code for the crate that they're marked with `.weak` in `global_asm!` ([relevant src](https://docs.rs/crate/riscv-rt/0.13.0/source/src/asm.rs#263-270)). What I don't understand is why enabling `lto = true` causes the build to fail with the errors ``` error: DefaultHandler changed binding to STB_GLOBAL error: symbol 'DefaultHandler' is already defined error: ExceptionHandler changed binding to STB_GLOBAL error: could not compile `lto-bug-repro` (bin "lto-bug-repro") due to 3 previous errors ``` I also have no idea how to track down what's causing this.
A-linkage,A-inline-assembly,T-compiler,C-bug,A-LTO
low
Critical
2,723,746,478
rust
Tracking issue for release notes of #132187: Add Extend impls for tuples of arity 1 through 12
This issue tracks the release notes text for #132187. ### Steps - [ ] Proposed text is drafted by PR author (or team) making the noteworthy change. - [ ] Issue is nominated for release team review of clarity for wider audience. - [ ] Release team includes text in release notes/blog posts. ### Release notes text The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing). ````markdown # Stabilized APIs - [impl std::iter::Extend for tuples with arity 1 through 12](https://doc.rust-lang.org/stable/std/iter/trait.Extend.html#impl-Extend%3C(A,)%3E-for-(EA,)) ```` > [!TIP] > Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use. > The category will be de-duplicated with all the other ones by the release team. > > *More than one section can be included if needed.* ### Release blog section If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section. *Otherwise leave it empty.* ````markdown ```` cc @shahn, @dtolnay -- origin issue/PR authors and assignees for starting to draft text
T-libs-api,relnotes,relnotes-tracking-issue
low
Minor
2,723,757,575
godot
[4.3] VRAM compressed textures import at extremely low quality
### Tested versions v4.3.stable.official [77dcf97d8] ### System information Windows 10 - Godot v4.3 stable - Vulkan Forward+ renderer. nVidia GTX 1080. 16GB RAM. Intel Core i5-4690K CPU @ 3.50GHz. ### Issue description Importing textures intended for 3D models using the default VRAM compressed mode (with "high quality" turned off) produces extremely low quality results; worse than they should be. This is particularly noticeable with textures 1024 pixels square and lower. As an example, below is a container prop with an ORM material applied and 512 x 512 textures. Godot has converted the ORM and base colour to DXT1 and the normal map to RGTC. The texture should appear as mostly yellowy orange with some light edge wear exposing the underlying metal. Due to the low quality of the texture compression, however, the edge wear displays as large, blocky green blotches along all the mesh edges. ![Image](https://github.com/user-attachments/assets/9a6f595d-9457-4a1b-ae50-2a395ecbaf89) Below is the original source file for the base colour, in PNG format. ![Image](https://github.com/user-attachments/assets/fb1365cd-a32c-484e-9edc-a87b61e40cd6) For comparison, here's the same texture run through nVidia's DXT Tools at the default settings. The below image was saved to DDS, then converted to PNG for upload. ![Image](https://github.com/user-attachments/assets/b4909cad-aab3-4c6a-a6eb-0b2785d2325d) Note how the compression artifacts are barely visible. Below is another example, this time run through an online image converter called [Convertio](https://convertio.co/dds-converter/). While the quality is not quite as good as nVidia's, the results are significantly better than Godot's own. ![Image](https://github.com/user-attachments/assets/852ab0b0-20e8-474d-af32-a78d85b2e45c) Here is the container again, only this time with the base colour texture replaced with the Convertio version. Notice how the green discolouration is completely gone. ![Image](https://github.com/user-attachments/assets/56bbf629-51d7-4773-98fb-765c0be94bed) ### Steps to reproduce 1. Right-click and download the above source base colour texture for the crate. Drop it into a Godot project and set the compress mode to VRAM compressed 2. Create a MeshInstance3D, assign it a plane mesh, a material, and assign this texture as the base colour 3. Use any suitable image converter (Convertio is a good choice if you wish to exactly replicate my results) to convert the PNG to DDS yourself, then add the DDS to the project 4. Create a new MeshInstance3D, assign it another plane mesh and material then assign this new texture to the base colour 5. Observe the quality difference between the two examples ### Minimal reproduction project (MRP) N/A
bug,topic:import
medium
Major
2,723,770,614
pytorch
Embedding forward performance analysis
### 🐛 Describe the bug

The embedding operation maps indices into a dense vector space and is typically used to look up embeddings for discrete input tokens in tasks like natural language processing. The Inductor-compiled kernel is slower than the Liger version for this op. For input 11, Liger is 1.26X faster than Inductor.

## **Reproduce tests**

```
% python run.py --op embedding --mode fwd --num-inputs 1 --input-id 11 --precision fp32 --metrics latency,speedup --cudagraph
  0%|          | 0/1 [00:00<?, ?it/s]INFO:tritonbench.utils.triton_op:Took 178.71ms to get benchmark function for torch_embedding
INFO:tritonbench.utils.triton_op:Took 178.65ms to get benchmark function for liger_embedding
INFO:tritonbench.utils.triton_op:Took 822.35ms to get benchmark function for inductor_embedding
100%|██████████| 1/1 [00:02<00:00,  2.54s/it]
  (B, T, D, V)           torch_embedding-latency    liger_embedding-speedup    liger_embedding-latency    inductor_embedding-speedup    inductor_embedding-latency
---------------------  -------------------------  -------------------------  -------------------------  ----------------------------  ----------------------------
(8, 2048, 4096, 8192)                   0.472864                    2.50288                   0.188928                        1.9795                       0.23888
```

## Report Analysis

For input 11, the input dimensions are `(B, T, D, V) = (8, 2048, 4096, 8192)`, so

```
embeddings.shape = (V, D) = (8192, 4096)
indices.shape = (B, T) = (8, 2048)
output.shape = (B, T, D) = (8, 2048, 4096)
```

For FP32, the theoretical optimal write bytes are sizeof(output) = 8*2048*4096*4/1024/1024 = 256 MB.
The theoretical optimal read bytes are sizeof(indices + embeddings) = (8*2048 + 8192*4096) * 4/1024/1024 = 128.0625 MB.

### Inductor compiled triton kernel

The following is the compiled Triton kernel and its wrapper code. In the `grid=grid(67108864)` function, it computes the final grid size:
https://github.com/pytorch/pytorch/blob/a84779040049377aec7b62e37becb7327950541e/torch/_inductor/runtime/triton_heuristics.py#L1962-L2004

Kernel `triton_poi_fused_embedding_0`'s final grid size is (65536, 1, 1), while its XBLOCK = 1024. So the theoretical number of memory write requests is 65536*1024/32/4 = 524,288 (32 threads per warp, 4 FP32 values per STG.E.128). This matches the memory write requests in the ncu report.

<img width="1009" alt="image" src="https://github.com/user-attachments/assets/c3c589c5-9c82-4d06-a939-541492916604">

However, for memory reads, the indices load is compiled to the following Triton lines:

```
x1 = (xindex // 4096)
tmp0 = tl.load(in_ptr0 + (x1), None, eviction_policy='evict_last')
```

Because each thread also executes this load, the total number of read requests for indices is 524,288 as well. So the total number of load requests is 1,048,576. This matches the memory table in the ncu report.

<img width="919" alt="image" src="https://github.com/user-attachments/assets/32af253e-23c0-44b4-8aa1-035edb681cfd">

I also confirmed this with a modified [mem_trace tool](https://github.com/FindHao/nvbit_test2) in nvbit:

```
MEMTRACE: Instruction execution counts for kernel triton_poi_fused_embedding_0 in context 0x55ed1dd95330:
Instruction: STG.E.128 Count: 524288
Instruction: LDG.E.128 Count: 524288
Instruction: LDG.E.EL.64 Count: 524288
```

LDG.E.128 corresponds to the embedding loads and LDG.E.EL.64 to the indices loads in this case.
### Liger implementation

https://github.com/linkedin/Liger-Kernel/blob/main/src/liger_kernel/ops/experimental/embedding.py#L81-L95

The Liger version reshapes the inputs to make them compatible with two-dimensional blocks of (128, 128) and uses a grid size of (128, 32), which is much smaller than the Inductor-compiled version's. Triton also automatically places the smaller indices tensor in shared memory, so the Liger version saves a lot of global memory loads.

<img width="1001" alt="image" src="https://github.com/user-attachments/assets/a01d1de0-c9f1-4d1b-bd1d-297a839664a3">

### Potential solution: fix its tiling

### Error logs

_No response_

### Versions

main branch

cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
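For anyone who wants to poke at this outside tritonbench, here is a rough standalone timing sketch (my addition; shapes taken from input 11, and the eager vs. compiled comparison is only a proxy for the tritonbench numbers above):

```python
import torch

# Shapes from input 11 in the report above.
B, T, D, V = 8, 2048, 4096, 8192
emb = torch.nn.Embedding(V, D, device="cuda", dtype=torch.float32)
idx = torch.randint(0, V, (B, T), device="cuda")

compiled = torch.compile(emb)  # Inductor-compiled lookup

def bench(fn, iters=100):
    # Warm up first so compilation time is excluded from the measurement.
    for _ in range(10):
        fn(idx)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(idx)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per call

print("eager    :", bench(emb))
print("inductor :", bench(compiled))
```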
triaged,oncall: pt2,module: inductor
low
Critical
2,723,773,080
flutter
RawImage Leaks when the image is changed
### Steps to reproduce Use RawImage, change the image it references every frame, it will leak the images and run flutter out of memory (flutter web). A while back I looked into RawImage to see why it was leaking. Looks like it clones your image handle before it passes it to RenderImage: https://github.com/flutter/flutter/blob/1fb077118d0a4d576fa7eb82c05f07f4748e4bf3/packages/flutter/lib/src/widgets/basic.dart#L6131 But then RenderImage only disposes the image when it changes if it's a clone of the image that it currently has: https://github.com/flutter/flutter/blob/1fb077118d0a4d576fa7eb82c05f07f4748e4bf3/packages/flutter/lib/src/rendering/image.dart#L92 ### Expected results Don't crash flutter ### Actual results Flutter runs out of memory in canvaskit heap ### Code sample ```dart import 'package:flutter/material.dart'; import 'package:flutter/scheduler.dart'; import 'dart:ui' as ui; import 'dart:math'; import 'dart:async'; void main() { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { return MaterialApp( debugShowCheckedModeBanner: false, home: Scaffold( body: Center( child: RandomImageWidget() ), ), ); } } class RandomImageWidget extends StatefulWidget { @override _RandomImageWidgetState createState() => _RandomImageWidgetState(); } class _RandomImageWidgetState extends State<RandomImageWidget> with SingleTickerProviderStateMixin { ui.Image? _image; late Ticker _ticker; final _random = Random(); @override void initState() { super.initState(); _ticker = createTicker((_) { // Generate a new random image each frame. _generateRandomImage(); }); _ticker.start(); } @override void dispose() { _ticker.dispose(); super.dispose(); } Future<void> _generateRandomImage() async { final width =1000; final height = 1000; // Create a PictureRecorder and Canvas to draw on. final recorder = ui.PictureRecorder(); final canvas = Canvas(recorder); final paint = Paint()..style = PaintingStyle.fill; // Draw a random color for each pixel. for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { paint.color = Color.fromARGB( 255, _random.nextInt(256), _random.nextInt(256), _random.nextInt(256), ); canvas.drawRect(Rect.fromLTWH(x.toDouble(), y.toDouble(), 1, 1), paint); } } // Finalize drawing and create an image. final picture = recorder.endRecording(); final image = await picture.toImage(width, height); // Update the state with the newly generated image. setState(() { _image?.dispose(); _image = image; }); } @override Widget build(BuildContext context) { return RawImage( image: _image, width: 200, height: 200, ); } } ``` ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [Paste your output here] ``` </details>
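Until the clone/dispose mismatch is fixed, one possible workaround (my sketch, not an official recommendation, and untested against the leak on web) is to bypass `RawImage` and paint the `ui.Image` yourself, so its lifetime stays entirely under the caller's control:

```dart
import 'dart:ui' as ui;
import 'package:flutter/material.dart';

/// Draws a caller-owned [ui.Image] without cloning it; the caller remains
/// responsible for calling image.dispose() when swapping the image out.
class OwnedImageView extends StatelessWidget {
  const OwnedImageView({super.key, required this.image, this.size = const Size(200, 200)});
  final ui.Image? image;
  final Size size;

  @override
  Widget build(BuildContext context) {
    return CustomPaint(
      size: size,
      painter: image == null ? null : _ImagePainter(image!),
    );
  }
}

class _ImagePainter extends CustomPainter {
  _ImagePainter(this.image);
  final ui.Image image;

  @override
  void paint(Canvas canvas, Size size) {
    // paintImage is the framework helper used by RenderImage internally.
    paintImage(canvas: canvas, rect: Offset.zero & size, image: image, fit: BoxFit.contain);
  }

  @override
  bool shouldRepaint(_ImagePainter oldDelegate) => oldDelegate.image != image;
}
```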
framework,P2,team-framework,triaged-framework
low
Critical
2,723,776,169
vscode
Brackets Snippet deleting chars
Brackets snippet in replace mode deletes a character when the prefix already exists

![Image](https://github.com/user-attachments/assets/70acd3cb-be83-486f-a9bd-9acd809b45c4)

Steps to Reproduce:

1. Set `editor.suggest.insertMode` to `replace`
2. Create a new global snippet via `workbench.action.openSnippets`
```jsonc
{
  "Brackets": {
    "scope": "plaintext",
    "prefix": "()",
    "body": "()",
    "description": "Set `editor.suggest.insertMode` to `replace`\nWhen `()` already exists, the snippet deletes the next following char\n "
  }
}
```
3. Create a new `plaintext` file with the example text
```
()!!
()
()aa
```
4. Place the cursor in between the brackets
5. Press ctrl+space and select the `Brackets` snippet

Expected: nothing is changed (the cursor moves 1 to the right)

![Image](https://github.com/user-attachments/assets/5b1c21c8-0ecc-4593-9c10-505f995e9880)

Actual: 1 character is deleted after the `()` 🐛. It doesn't matter whether that character matches `wordPattern`, is a `wordSeparator`, or is `whitespace`.

![Image](https://github.com/user-attachments/assets/26548e46-133c-4f66-94df-4d06c23c6579)

Note that using characters other than brackets `()` does **not** result in characters being deleted.

Does this issue occur when all extensions are disabled?: Yes

- VS Code Version: 1.95.3
- OS Version: Windows 11
bug,snippets
low
Critical
2,723,780,746
vscode
Automatically dismiss messages
![Image](https://github.com/user-attachments/assets/ca488ea1-9074-478a-89e2-ee3fd6fb324e)

How can I configure a timeout to automatically dismiss these notifications? It's annoying to constantly click the X.
feature-request,workbench-notifications
low
Major
2,723,797,326
go
x/tools/gopls: invoking source.addTest from CLI fails with ENOENT
While working on https://go.dev/cl/634197, I tried the new "Add test for function 'fingerprint'" code action within Emacs+eglot, and got an unhelpful "Internal error". I tried it again from the gopls CLI, and got a different error: ``` $ gopls codeaction -kind=source.addTest -exec ./gopls/internal/cache/methodsets/fingerprint.go:54 gopls: edits not applied because of getFile: file:///Users/adonovan/w/xtools/gopls/internal/cache/methodsets/fingerprint_test.go: open /Users/adonovan/w/xtools/gopls/internal/cache/methodsets/fingerprint_test.go: no such file or directory ``` Then I tried it from VS Code, and it worked great. (That said, the code action was not visible in the Command-. quick menu, and had to be selected from the right-click > Source Actions... menu. We should reinvestigate the endlessly fiddly algorithm that governs this.) So, I don't think there's a bug in gopls' Code Action or command handler, but there is in the CLI, and in Eglot, and perhaps they are related.
gopls,Tools
low
Critical
2,723,811,243
godot
Create Physical Skeleton option doesn't take parent node scale into account
### Tested versions

- Functionality broken in 4.3dev3 and later (4.3dev4/dev5/dev6)
- Works as intended in 4.3dev2 and earlier (dev1, 4.2)

### System information

win 10 : forward + : rtx4060

### Issue description

After using Create Physical Skeleton on a Skeleton3D, the bone (PhysicalBone3D) collision shape (CollisionShape3D) sizes are out of proportion.

4.3dev2:

![4.3dev2](https://github.com/user-attachments/assets/d9692b03-f806-4d2c-9fc9-46e1e5bef0f9)

4.3dev3:

![Image](https://github.com/user-attachments/assets/066cc377-b557-4d79-8637-bb607eb8ec16)

### Steps to reproduce

From a 3D model (.glb) -> New Inherited Scene -> Skeleton3D -> Create Physical Skeleton

*Used Mixamo to get a model (.fbx) -> Blender -> export as glTF 2.0

### Minimal reproduction project (MRP)

[repro.zip](https://github.com/user-attachments/files/18042919/repro.zip)

Model used ^ then New Inherited Scene -> Skeleton3D -> Create Physical Skeleton
bug,topic:editor,topic:3d
low
Critical
2,723,843,270
rust
"error[E0106]: missing lifetime specifiers" suggests creating unusable `&'a mut Foo<'a>` references
### Code ```Rust struct Borrowed<'a>(&'a ()); struct Transformed<'a>(&'a ()); fn foo(_borrowed: &mut Borrowed) -> &mut Transformed {} ``` ### Current output ```Shell error[E0106]: missing lifetime specifiers --> src/lib.rs:4:37 | 4 | fn foo(_borrowed: &mut Borrowed) -> &mut Transformed {} | ------------- ^ ^^^^^^^^^^^ expected named lifetime parameter | | | expected named lifetime parameter | = help: this function's return type contains a borrowed value, but the signature does not say which one of `_borrowed`'s 2 lifetimes it is borrowed from help: consider introducing a named lifetime parameter | 4 | fn foo<'a>(_borrowed: &'a mut Borrowed<'a>) -> &'a mut Transformed<'a> {} | ++++ ++ ++++ ++ ++++ ``` ### Desired output ```Shell error[E0106]: missing lifetime specifiers --> src/lib.rs:4:37 | 4 | fn foo(_borrowed: &mut Borrowed) -> &mut Transformed {} | ------------- ^ ^^^^^^^^^^^ expected named lifetime parameter | | | expected named lifetime parameter | = help: this function's return type contains a borrowed value, but the signature does not say which one of `_borrowed`'s 2 lifetimes it is borrowed from help: consider introducing two named lifetime parameters | 4 | fn foo<'short, 'long>(_borrowed: &'short mut Borrowed<'long>) -> &'short mut Transformed<'long> {} | +++++++++++++++ ++++++ +++++++ ++++++ +++++++ ``` ### Rationale and extra context This error trips up many new Rustaceans, and the suggestion creates [unusable references](https://quinedot.github.io/rust-learning/pf-borrow-forever.html) which just leads to errors that cannot be resolved. Recent example in the wild: https://users.rust-lang.org/t/how-to-reuse-a-mutably-borrowed-variable-in-rust-after-passing-it-to-a-function/122153 ### Other cases ```Rust ``` ### Rust Version ```Shell $ rustc --version --verbose rustc 1.85.0-nightly (c94848c04 2024-12-05) binary: rustc commit-hash: c94848c046d29f9a80c09aae758e27e418a289f2 commit-date: 2024-12-05 host: aarch64-apple-darwin release: 1.85.0-nightly LLVM version: 19.1.5 ``` ### Anything else? The erroneous suggestion is the same on the current latest stable, 1.83.0.
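To make the rationale concrete, here is a small sketch (mine, not part of the original report) showing why the currently suggested signature is unusable in practice while the two-lifetime one is fine:

```rust
struct Borrowed<'a>(&'a ());
struct Transformed<'a>(&'a ());

// The signature the compiler currently suggests: every lifetime tied together.
fn suggested<'a>(b: &'a mut Borrowed<'a>) -> &'a mut Transformed<'a> {
    unimplemented!()
}

// The signature this issue argues should be suggested instead.
fn desired<'short, 'long>(b: &'short mut Borrowed<'long>) -> &'short mut Transformed<'long> {
    unimplemented!()
}

fn main() {
    let unit = ();
    let mut b = Borrowed(&unit);

    // With the two-lifetime signature, `b` is usable again once the returned
    // reference is no longer needed.
    let _t = desired(&mut b);
    println!("{:p}", b.0); // OK

    // With the suggested signature, the exclusive borrow is forced to last as
    // long as the lifetime stored inside `b` itself, so `b` stays mutably
    // borrowed for the rest of the function. Uncommenting these lines fails:
    // let _t2 = suggested(&mut b);
    // println!("{:p}", b.0); // error[E0502]: cannot borrow `b.0` as immutable
}
```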
A-diagnostics,T-compiler
low
Critical
2,723,884,058
godot
Late property initialization in a child node is not reflected in parent's `_integrate_forces()` tick
### Tested versions - Reproducible: v4.3.stable.official [77dcf97d8], v4.4.dev6.official [1f47e4c4e] ### System information Godot v4.3.stable - macOS 11.7.10 - Vulkan (Forward+) - integrated Intel HD Graphics 5000 - Intel(R) Core(TM) i5-4260U CPU @ 1.40GHz (4 Threads) ### Issue description A `RigidBody2D` (probably applies also to 3D) has a child `Node` (or any derivative). If during its `_integrate_forces()` invocation the parent calls a method of the child node, the latter doesn't see any updates to its own properties, if those were changed _after_ the child was added to the scene and the `_integrate_forces()` was called at least once. If the properties were somehow modified in the parent's `_ready()` before the first invocation of `_integrate_forces()`, everything works as expected. ### Steps to reproduce In the MRP there's a `body.gd` script, attached to `RigidBody2D`, and `some_component.gd` script, attached to `Node`, child of the body. The `_integrate_forces()` passes its direct `state` to the node's `update_parent_state()`, which sets the `linear_velocity` of the parent's state based on two own properties, `move_direction` and `speed`. The `move_direction` property is set in parent's `_ready()`, _after_ a one second delay, and thus, after `_integrate_forces()` is called the first time. This triggers the bug - the `move_direction` as seen by the child node during the `update_parent_state()` call is still the default `Vector2(0,0)`. 1) Open the attached MRP 2) Launch `main.tscn` scene 3) Notice that the labels are not in sync: the label updated in child's `update_parent_state()` method during `_integrate_forces()` call shows the default value. ![Image](https://github.com/user-attachments/assets/db3cb214-ab8f-4a98-a4bd-224f19e9d638) "work-around": 1) Comment the `await ...` line in `body.gd`, which modifies the child's property before `_integrate_forces()` is called for the first time. 2) Launch the `main.tscn` scene 3) Notice that the labels are in sync and the Godot logo is moved by the direct state update in child's `update_parent_state()` method: ![Image](https://github.com/user-attachments/assets/db52f47f-62d8-47ec-8c04-d0f9ceef3e3a) ### Minimal reproduction project (MRP) [physics-props-out-of-date.zip](https://github.com/user-attachments/files/18043497/physics-props-out-of-date.zip)
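For readers who don't want to open the zip, the setup described above boils down to roughly these two scripts (reconstructed from the description; node names and exact values are approximations of the MRP, not its actual contents):

```gdscript
# body.gd - attached to the RigidBody2D
extends RigidBody2D

@onready var component = $SomeComponent

func _ready() -> void:
    # Commenting out the await (so the property changes before the first
    # physics tick) is the "work-around" mentioned above.
    await get_tree().create_timer(1.0).timeout
    component.move_direction = Vector2.RIGHT

func _integrate_forces(state: PhysicsDirectBodyState2D) -> void:
    component.update_parent_state(state)
```

```gdscript
# some_component.gd - attached to a child Node of the body
extends Node

var move_direction: Vector2 = Vector2.ZERO
var speed: float = 100.0

func update_parent_state(state: PhysicsDirectBodyState2D) -> void:
    # During this call, move_direction is reportedly still Vector2(0, 0),
    # even though the parent already assigned a new value in _ready().
    state.linear_velocity = move_direction * speed
```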
bug,topic:physics,needs testing
low
Critical
2,723,903,487
flutter
Eliminate use of `integration_test_driver_extended.dart`
`integration_test_driver_extended.dart` was added in order to support a host-side component for `integration_test` tests, specifically screenshot testing on the web. Unlike mobile/desktop apps, the web does not have target-side mechanisms for capturing screenshots of the browser window, so a host-side mechanism was added.

A goal of `integration_test` is to move all screenshot testing to underlying target-side APIs that capture not just Flutter content but any platform content, including system chrome, dialogs, etc.; screenshotting should therefore happen target side for mobile/desktop apps. We'll eventually want a host-side mechanism for exporting those screenshots, but that should be added to the standard `integration_test_driver.dart` when it's implemented. The host-side pattern being phased out is sketched below.

`integration_test_driver_extended.dart` is currently used in a [few places](https://github.com/search?q=org%3Aflutter%20%22integration_test_driver_extended.dart%22&type=code):

* [devtools_extensions](https://github.com/flutter/devtools/blob/7488fd09e333e3c523b57d5e8ce5d045f6ebb7f5/packages/devtools_extensions/test_driver/integration_test.dart#L6)
* [devtools_app](https://github.com/flutter/devtools/blob/7488fd09e333e3c523b57d5e8ce5d045f6ebb7f5/packages/devtools_app/test_driver/integration_test.dart#L13), which is using the `onScreenshot` handler.
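For context, that host-side pattern looks roughly like this (a sketch based on the package's documented usage; the exact `onScreenshot` signature varies between Flutter versions, and the output path is made up):

```dart
// test_driver/integration_test.dart (runs on the host, driving the web test)
import 'dart:io';
import 'package:integration_test/integration_test_driver_extended.dart';

Future<void> main() async {
  await integrationDriver(
    onScreenshot: (String name, List<int> bytes, [Map<String, Object?>? args]) async {
      // Persist the bytes captured on the target; returning true marks the
      // screenshot as successfully handled.
      final file = await File('screenshots/$name.png').create(recursive: true);
      await file.writeAsBytes(bytes);
      return true;
    },
  );
}
```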
a: tests,f: integration_test,P2,team-framework,triaged-framework
low
Minor
2,723,924,999
PowerToys
Force Focus on Program/App/Window
### FORCED FOCUS

Similar to enabling Always On Top, there could be an option to force Windows to keep focus on a window of an app or a browser. This might be part of the options in Always On Top or its own separate feature.

### Scenario when this would be used?

Specifically, the scenario is caused by a program I am using called ProtoPie (PP, for short) and its extension ProtoPie Connect. PP is a digital prototyping tool for HMI. PP Connect allows you to connect to Arduino or even Xbox hardware. The trouble is that Windows apparently has no universal input monitoring, so if the PP Connect window loses focus for some reason, the connection to the external hardware is effectively lost and in-browser prototypes stop responding.

However it might be implemented, this request is a workaround to provide what the Input Monitoring permission on Macs readily enables.

### Supporting information

https://www.protopie.io/learn/docs/connect/getting-started
https://support.apple.com/guide/mac-help/control-access-to-input-monitoring-on-mac-mchl4cedafb6/mac
Idea-New PowerToy,Product-Always On Top,Needs-Triage
low
Minor
2,723,957,004
vscode
Missing or invalid credentials. Error: connect ENOENT /run/user/1000/vscode-git-76f8f1b8ce.sock
Does this issue occur when all extensions are disabled?: Yes/No

- VS Code Version: 1.95.3
- OS Version: NixOS (`nixos-unstable`)

Steps to Reproduce: N/A, intermittent issue

It sometimes happens during git operations that require auth, such as push/pull. Restarting or reloading the window fixes the issue.

```
> git push origin master:master
Missing or invalid credentials.
Error: connect ENOENT /run/user/1000/vscode-git-76f8f1b8ce.sock
    at PipeConnectWrap.afterConnect [as oncomplete] (node:net:1607:16) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'connect',
  address: '/run/user/1000/vscode-git-76f8f1b8ce.sock'
}
Missing or invalid credentials.
Error: connect ENOENT /run/user/1000/vscode-git-76f8f1b8ce.sock
    at PipeConnectWrap.afterConnect [as oncomplete] (node:net:1607:16) {
  errno: -2,
  code: 'ENOENT',
  syscall: 'connect',
  address: '/run/user/1000/vscode-git-76f8f1b8ce.sock'
}
remote: No anonymous write access.
fatal: Authentication failed for '<redacted>'
```
info-needed,git
low
Critical
2,723,972,496
deno
Deno panics when using VS Code to debug
Platform: linux x86_64 Version: 2.1.3 Args: ["/home/matthias/.deno/bin/deno", "run", "--inspect-wait", "--allow-read=input.txt", "--watch", "main.ts", "--quiz=1"] thread 'main' panicked at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/deno_core-0.324.0/inspector.rs:381:16: internal error: entered unreachable code stack backtrace: 0: 0x55da62330cca - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h304520fd6a30aa07 1: 0x55da62361f7b - core::fmt::write::hf5713710ce10ff22 2: 0x55da6232a4d3 - std::io::Write::write_fmt::hda708db57927dacf 3: 0x55da623324d2 - std::panicking::default_hook::{{closure}}::he1ad87607d0c11c5 4: 0x55da6233213e - std::panicking::default_hook::h81c8cd2e7c59ee33 5: 0x55da6297f2b5 - deno::setup_panic_hook::{{closure}}::h349393b90919cd59 6: 0x55da62332e02 - std::panicking::rust_panic_with_hook::had2118629c312a4a 7: 0x55da62332a83 - std::panicking::begin_panic_handler::{{closure}}::h7fa5985d111bafa2 8: 0x55da623311a9 - std::sys::backtrace::__rust_end_short_backtrace::h704d151dbefa09c5 9: 0x55da62332744 - rust_begin_unwind 10: 0x55da6235ef33 - core::panicking::panic_fmt::h3eea515d05f7a35e 11: 0x55da6235efbc - core::panicking::panic::h102d65dbfa674afe 12: 0x55da646acb3b - deno_core::inspector::JsRuntimeInspector::poll_sessions::hea23432d5a0acfb1 13: 0x55da6338bd7a - deno_runtime::worker::MainWorker::wait_for_inspector_session::hf36f3251a08fc007 14: 0x55da623cfa69 - deno_runtime::worker::MainWorker::evaluate_module::{{closure}}::h6d6d786aa86b2c32 15: 0x55da62964a8a - deno::worker::CliMainWorker::execute_main_module::{{closure}}::h413ab1558753234b 16: 0x55da629641dd - deno::worker::CliMainWorker::run_for_watcher::{{closure}}::h18e575000b9b1ea6 17: 0x55da6291a57a - deno::tools::run::run_script::{{closure}}::hb4796d71edee2e40 18: 0x55da6296aeb1 - deno::spawn_subcommand::{{closure}}::h29e4d91a11efa971 19: 0x55da62378261 - <deno_unsync::tokio::task::MaskFutureAsSend<F> as core::future::future::Future>::poll::h526086c43a2329d0 20: 0x55da625a9c67 - tokio::runtime::task::raw::poll::h0534bc9bd8f67ab4 21: 0x55da62980dca - deno::main::h2cfbf468374a5d6f 22: 0x55da6246c6aa - std::sys::backtrace::__rust_begin_short_backtrace::h8c0fe723132db713 23: 0x55da62465181 - std::rt::lang_start::{{closure}}::h39735df29837b89e 24: 0x55da6231ec90 - std::rt::lang_start_internal::h4d90db0530245041 25: 0x55da62a02845 - main 26: 0x7f0786275d90 - <unknown> 27: 0x7f0786275e40 - __libc_start_main 28: 0x55da60c75029 - _start 29: 0x0 - <unknown> `launch.json` ```json { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "request": "launch", "name": "Launch Program", "type": "node", "cwd": "${workspaceFolder}", "env": {}, "runtimeExecutable": "/home/matthias/.deno/bin/deno", "runtimeArgs": ["run", "debug"], "attachSimplePort": 9229 } ] } ``` `deno.json` ```json { "tasks": { "dev": "deno run --allow-read=input.txt --watch main.ts", "debug": "deno run --inspect-wait --allow-read=input.txt --watch main.ts --quiz=1" }, "imports": { "@std/assert": "jsr:@std/assert@1" } } ```
bug,debugger,--watch,panic
low
Critical
2,723,985,255
godot
CLI import hangs in docker environment with blend file
### Tested versions - Reproducible in: v4.3-stable, 4.4-dev6 ### System information Linux 6.11.10-2-MANJARO, 64-bit, Docker version 27.3.1, build ce1223035a ### Issue description When using a Docker image in a CI/CD setup to build a Godot game, the command: ```shell godot --headless --import --verbose project.godot ``` unexpectedly hangs if a .blend file is present, regardless of whether it is utilized in the project. The import completes successfully if the .blend file is removed. **What have I tried?** - Installed Blender within the Docker image, but the issue persists. - Defined a .gdignore file and moved all .blend files into it, yet the problem remains. <details> <summary>View Detailed Docker Logs</summary> ```plaintext #10 [6/6] RUN /opt/godot --import --headless --verbose /app/test/project.godot #10 0.244 Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org #10 0.245 TextServer: Added interface "Dummy" #10 0.252 TextServer: Added interface "ICU / HarfBuzz / Graphite (Built-in)" #10 0.257 JoypadLinux: udev enabled and loaded successfully. #10 0.257 Using "default" pen tablet driver... #10 0.258 #10 0.262 TextServer: Primary interface set to: "ICU / HarfBuzz / Graphite (Built-in)". #10 0.317 CORE API HASH: 966092234 #10 0.318 EDITOR API HASH: 444927587 #10 0.320 EditorTheme: Generating new theme for the config '66886095'. #10 0.320 EditorTheme: Generating new icons. #10 0.354 EditorTheme: Generating new fonts. #10 0.358 EditorTheme: Generating new styles. #10 0.991 Regenerating editor help cache #10 0.991 Class 'AbstractPolygon2DEditor' is not exposed, skipping. #10 0.991 Class 'AbstractPolygon2DEditorPlugin' is not exposed, skipping. #10 0.992 Class 'ActionMapEditor' is not exposed, skipping. #10 0.992 Class 'AnchorPresetPicker' is not exposed, skipping. #10 0.995 Class 'AnimationBezierTrackEdit' is not exposed, skipping. #10 0.995 Class 'AnimationLibraryEditor' is not exposed, skipping. #10 0.996 Class 'AnimationNodeBlendSpace1DEditor' is not exposed, skipping. #10 0.997 Class 'AnimationNodeBlendSpace2DEditor' is not exposed, skipping. #10 0.997 Class 'AnimationNodeBlendTreeEditor' is not exposed, skipping. #10 0.997 Class 'AnimationNodeStateMachineEditor' is not exposed, skipping. #10 0.998 Class 'AnimationPlayerEditor' is not exposed, skipping. #10 0.998 Class 'AnimationPlayerEditorPlugin' is not exposed, skipping. #10 0.998 Class 'AnimationTimelineEdit' is not exposed, skipping. #10 0.998 Class 'AnimationTrackEditDefaultPlugin' is not exposed, skipping. #10 0.998 Class 'AnimationTrackEditPlugin' is not exposed, skipping. #10 0.998 Class 'AnimationTrackEditor' is not exposed, skipping. #10 0.998 Class 'AnimationTrackKeyEditEditorPlugin' is not exposed, skipping. #10 0.999 Class 'AnimationTreeEditor' is not exposed, skipping. #10 0.999 Class 'AnimationTreeEditorPlugin' is not exposed, skipping. #10 0.999 Class 'AnimationTreeNodeEditorPlugin' is not exposed, skipping. #10 1.002 Class 'AssetLibraryEditorPlugin' is not exposed, skipping. #10 1.002 Class 'AtlasMergingDialog' is not exposed, skipping. #10 1.002 Class 'AtlasTileProxyObject' is not exposed, skipping. #10 1.003 Class 'AudioBusesEditorPlugin' is not exposed, skipping. #10 1.006 Class 'AudioStreamEditorPlugin' is not exposed, skipping. #10 1.006 Class 'AudioStreamImportSettingsDialog' is not exposed, skipping. #10 1.006 Class 'AudioStreamInteractiveEditorPlugin' is not exposed, skipping. #10 1.006 Class 'AudioStreamInteractiveTransitionEditor' is not exposed, skipping. 
#10 1.007 Class 'AudioStreamPlayerInternal' is not exposed, skipping. #10 1.007 Class 'AudioStreamPreviewGenerator' is not exposed, skipping. #10 1.007 Class 'AudioStreamRandomizerEditorPlugin' is not exposed, skipping. #10 1.007 Class 'BackgroundProgress' is not exposed, skipping. #10 1.015 Class 'BitMapEditorPlugin' is not exposed, skipping. #10 1.016 Class 'BoneMapEditorPlugin' is not exposed, skipping. #10 1.019 Class 'CPUParticles2DEditorPlugin' is not exposed, skipping. #10 1.021 Class 'CPUParticles3DEditor' is not exposed, skipping. #10 1.021 Class 'CPUParticles3DEditorPlugin' is not exposed, skipping. #10 1.034 Class 'CSGShape3DGizmoPlugin' is not exposed, skipping. #10 1.037 Class 'Camera3DEditorPlugin' is not exposed, skipping. #10 1.040 Class 'CanvasItemEditor' is not exposed, skipping. #10 1.040 Class 'CanvasItemEditorPlugin' is not exposed, skipping. #10 1.040 Class 'CanvasItemEditorViewport' is not exposed, skipping. #10 1.040 Class 'CanvasItemMaterialConversionPlugin' is not exposed, skipping. #10 1.041 Class 'Cast2DEditor' is not exposed, skipping. #10 1.041 Class 'Cast2DEditorPlugin' is not exposed, skipping. #10 1.048 Class 'CollisionPolygon2DEditor' is not exposed, skipping. #10 1.048 Class 'CollisionPolygon2DEditorPlugin' is not exposed, skipping. #10 1.048 Class 'CollisionShape2DEditor' is not exposed, skipping. #10 1.048 Class 'CollisionShape2DEditorPlugin' is not exposed, skipping. #10 1.053 Class 'ConnectDialog' is not exposed, skipping. #10 1.053 Class 'ConnectDialogBinds' is not exposed, skipping. #10 1.053 Class 'ConnectionsDock' is not exposed, skipping. #10 1.054 Class 'ControlEditorPlugin' is not exposed, skipping. #10 1.054 Class 'ControlEditorPopupButton' is not exposed, skipping. #10 1.054 Class 'ControlEditorPresetPicker' is not exposed, skipping. #10 1.054 Class 'ControlEditorToolbar' is not exposed, skipping. #10 1.055 Class 'CreateDialog' is not exposed, skipping. #10 1.055 Class 'CurveEditorPlugin' is not exposed, skipping. #10 1.055 Class 'CurvePreviewGenerator' is not exposed, skipping. #10 1.057 Class 'DebugAdapterParser' is not exposed, skipping. #10 1.057 Class 'DebugAdapterServer' is not exposed, skipping. #10 1.057 Class 'DebuggerEditorPlugin' is not exposed, skipping. #10 1.058 Class 'DefaultThemeEditorPreview' is not exposed, skipping. #10 1.058 Class 'DependencyEditor' is not exposed, skipping. #10 1.058 Class 'DependencyEditorOwners' is not exposed, skipping. #10 1.058 Class 'DependencyErrorDialog' is not exposed, skipping. #10 1.058 Class 'DependencyRemoveDialog' is not exposed, skipping. #10 1.062 Class 'DirectoryCreateDialog' is not exposed, skipping. #10 1.062 Class 'DockContextPopup' is not exposed, skipping. #10 1.062 Class 'DockSplitContainer' is not exposed, skipping. #10 1.062 Class 'DynamicFontImportSettingsData' is not exposed, skipping. #10 1.062 Class 'DynamicFontImportSettingsDialog' is not exposed, skipping. #10 1.063 Class 'EditorAbout' is not exposed, skipping. #10 1.063 Class 'EditorAssetLibrary' is not exposed, skipping. #10 1.063 Class 'EditorAudioBuses' is not exposed, skipping. #10 1.063 Class 'EditorAudioStreamPreviewPlugin' is not exposed, skipping. #10 1.063 Class 'EditorAudioStreamTooltipPlugin' is not exposed, skipping. #10 1.063 Class 'EditorAutoloadSettings' is not exposed, skipping. #10 1.063 Class 'EditorBitmapPreviewPlugin' is not exposed, skipping. #10 1.063 Class 'EditorBottomPanel' is not exposed, skipping. #10 1.063 Class 'EditorBuildProfileManager' is not exposed, skipping. 
#10 1.064 Class 'EditorDebuggerInspector' is not exposed, skipping. #10 1.064 Class 'EditorDebuggerNode' is not exposed, skipping. #10 1.064 Class 'EditorDebuggerRemoteObject' is not exposed, skipping. #10 1.064 Class 'EditorDebuggerTree' is not exposed, skipping. #10 1.064 Class 'EditorDirDialog' is not exposed, skipping. #10 1.064 Class 'EditorDockManager' is not exposed, skipping. #10 1.064 Class 'EditorExport' is not exposed, skipping. #10 1.064 Class 'EditorExportGDScript' is not exposed, skipping. #10 1.284 Class 'EditorFeatureProfileManager' is not exposed, skipping. #10 1.290 Class 'EditorFileServer' is not exposed, skipping. #10 1.291 Class 'EditorFileSystemImportFormatSupportQueryBlend' is not exposed, skipping. #10 1.291 Class 'EditorFontPreviewPlugin' is not exposed, skipping. #10 1.291 Class 'EditorGradientPreviewPlugin' is not exposed, skipping. #10 1.291 Class 'EditorHelpBit' is not exposed, skipping. #10 1.291 Class 'EditorHelpSearch' is not exposed, skipping. #10 1.291 Class 'EditorImagePreviewPlugin' is not exposed, skipping. #10 1.291 Class 'EditorImportBlendRunner' is not exposed, skipping. #10 1.291 Class 'EditorInspectorDefaultPlugin' is not exposed, skipping. #10 1.291 Class 'EditorInspectorParticleProcessMaterialPlugin' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPlugin3DTexture' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginAnimationTrackKeyEdit' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginAudioStream' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginAudioStreamInteractive' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginBitMap' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginBoneMap' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginControl' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginCurve' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginFontPreview' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginFontVariation' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginGradient' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginGradientTexture2D' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginInputEvent' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginLayeredTexture' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginMaterial' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginMesh' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginPackedScene' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginSkeleton' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginStyleBox' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginSubViewportPreview' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginSystemFont' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginTexture' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginTextureRegion' is not exposed, skipping. #10 1.291 Class 'EditorInspectorPluginTileData' is not exposed, skipping. #10 1.291 Class 'EditorInspectorRootMotionPlugin' is not exposed, skipping. #10 1.291 Class 'EditorInspectorVisualShaderModePlugin' is not exposed, skipping. #10 1.291 Class 'EditorJSONSyntaxHighlighter' is not exposed, skipping. #10 1.291 Class 'EditorLayoutsDialog' is not exposed, skipping. #10 1.291 Class 'EditorLocaleDialog' is not exposed, skipping. #10 1.291 Class 'EditorLog' is not exposed, skipping. 
#10 1.291 Class 'EditorMaterialPreviewPlugin' is not exposed, skipping. #10 1.291 Class 'EditorMeshPreviewPlugin' is not exposed, skipping. #10 1.291 Class 'EditorNativeShaderSourceVisualizer' is not exposed, skipping. #10 1.291 Class 'EditorNode' is not exposed, skipping. #10 1.292 Class 'EditorOBJImporter' is not exposed, skipping. #10 1.292 Class 'EditorObjectSelector' is not exposed, skipping. #10 1.292 Class 'EditorPackedScenePreviewPlugin' is not exposed, skipping. #10 1.292 Class 'EditorPerformanceProfiler' is not exposed, skipping. #10 1.292 Class 'EditorPluginCSG' is not exposed, skipping. #10 1.292 Class 'EditorPluginSettings' is not exposed, skipping. #10 1.292 Class 'EditorProfiler' is not exposed, skipping. #10 1.292 Class 'EditorPropertyInteger' is not exposed, skipping. #10 1.292 Class 'EditorPropertyNameProcessor' is not exposed, skipping. #10 1.292 Class 'EditorPropertyPath' is not exposed, skipping. #10 1.292 Class 'EditorPropertyVector2i' is not exposed, skipping. #10 1.292 Class 'EditorPropertyVectorN' is not exposed, skipping. #10 1.292 Class 'EditorQuickOpen' is not exposed, skipping. #10 1.293 Class 'EditorRunBar' is not exposed, skipping. #10 1.293 Class 'EditorRunNative' is not exposed, skipping. #10 1.293 Class 'EditorSceneExporterGLTFSettings' is not exposed, skipping. #10 1.293 Class 'EditorSceneFormatImporterCollada' is not exposed, skipping. #10 1.293 Class 'EditorSceneFormatImporterESCN' is not exposed, skipping. #10 1.293 Class 'EditorSceneTabs' is not exposed, skipping. #10 1.293 Class 'EditorScriptPreviewPlugin' is not exposed, skipping. #10 1.294 Class 'EditorSettingsDialog' is not exposed, skipping. #10 1.297 Class 'EditorTexturePreviewPlugin' is not exposed, skipping. #10 1.297 Class 'EditorTextureTooltipPlugin' is not exposed, skipping. #10 1.297 Class 'EditorTheme' is not exposed, skipping. #10 1.297 Class 'EditorTitleBar' is not exposed, skipping. #10 1.297 Class 'EditorToaster' is not exposed, skipping. #10 1.297 Class 'EditorValidationPanel' is not exposed, skipping. #10 1.297 Class 'EditorVisualProfiler' is not exposed, skipping. #10 1.297 Class 'EditorZoomWidget' is not exposed, skipping. #10 1.298 Class 'EventListenerLineEdit' is not exposed, skipping. #10 1.298 Class 'ExportTemplateManager' is not exposed, skipping. #10 1.298 Class 'FBXImporterManager' is not exposed, skipping. #10 1.303 Class 'FileSystemList' is not exposed, skipping. #10 1.303 Class 'FindInFiles' is not exposed, skipping. #10 1.303 Class 'FindInFilesDialog' is not exposed, skipping. #10 1.303 Class 'FindInFilesPanel' is not exposed, skipping. #10 1.303 Class 'FindReplaceBar' is not exposed, skipping. #10 1.304 Class 'FogMaterialConversionPlugin' is not exposed, skipping. #10 1.305 Class 'FontEditorPlugin' is not exposed, skipping. #10 1.306 Class 'GDScriptEditorTranslationParserPlugin' is not exposed, skipping. #10 1.306 Class 'GDScriptLanguageServer' is not exposed, skipping. #10 1.306 Class 'GDScriptNativeClass' is not exposed, skipping. #10 1.306 Class 'GDScriptSyntaxHighlighter' is not exposed, skipping. #10 1.306 Class 'GLTFDocumentExtensionPhysics' is not exposed, skipping. #10 1.306 Class 'GLTFDocumentExtensionTextureKTX' is not exposed, skipping. #10 1.306 Class 'GLTFDocumentExtensionTextureWebP' is not exposed, skipping. #10 1.307 Class 'GPUParticles2DEditorPlugin' is not exposed, skipping. #10 1.308 Class 'GPUParticles3DEditor' is not exposed, skipping. #10 1.308 Class 'GPUParticles3DEditorBase' is not exposed, skipping. 
#10 1.308 Class 'GPUParticles3DEditorPlugin' is not exposed, skipping. #10 1.319 Class 'GPUParticlesCollisionSDF3DEditorPlugin' is not exposed, skipping. #10 1.324 Class 'Gizmo3DHelper' is not exposed, skipping. #10 1.324 Class 'GodotNavigationServer2D' is not exposed, skipping. #10 1.324 Class 'GodotPhysicsServer2D' is not exposed, skipping. #10 1.324 Class 'GodotPhysicsServer3D' is not exposed, skipping. #10 1.324 Class 'GradientEditorPlugin' is not exposed, skipping. #10 1.324 Class 'GradientTexture2DEditorPlugin' is not exposed, skipping. #10 1.325 Class 'GraphEditFilter' is not exposed, skipping. #10 1.325 Class 'GraphEditMinimap' is not exposed, skipping. #10 1.326 Class 'GridMapEditor' is not exposed, skipping. #10 1.326 Class 'GridMapEditorPlugin' is not exposed, skipping. #10 1.327 Class 'GroupSettingsEditor' is not exposed, skipping. #10 1.327 Class 'GroupsEditor' is not exposed, skipping. #10 1.336 Class 'HistoryDock' is not exposed, skipping. #10 1.336 Class 'IPUnix' is not exposed, skipping. #10 1.337 Class 'ImportDefaultsEditor' is not exposed, skipping. #10 1.337 Class 'ImportDefaultsEditorSettings' is not exposed, skipping. #10 1.337 Class 'ImportDock' is not exposed, skipping. #10 1.337 Class 'ImportDockParameters' is not exposed, skipping. #10 1.338 Class 'InputEventConfigurationDialog' is not exposed, skipping. #10 1.338 Class 'InputEventEditorPlugin' is not exposed, skipping. #10 1.344 Class 'InspectorDock' is not exposed, skipping. #10 1.353 Class 'LightOccluder2DEditor' is not exposed, skipping. #10 1.353 Class 'LightOccluder2DEditorPlugin' is not exposed, skipping. #10 1.354 Class 'LightmapGIEditorPlugin' is not exposed, skipping. #10 1.355 Class 'Line2DEditor' is not exposed, skipping. #10 1.355 Class 'Line2DEditorPlugin' is not exposed, skipping. #10 1.357 Class 'LocalizationEditor' is not exposed, skipping. #10 1.358 Class 'MaterialEditorPlugin' is not exposed, skipping. #10 1.359 Class 'MeshEditorPlugin' is not exposed, skipping. #10 1.360 Class 'MeshInstance3DEditor' is not exposed, skipping. #10 1.360 Class 'MeshInstance3DEditorPlugin' is not exposed, skipping. #10 1.360 Class 'MeshLibraryEditor' is not exposed, skipping. #10 1.360 Class 'MeshLibraryEditorPlugin' is not exposed, skipping. #10 1.362 Class 'MovieWriterMJPEG' is not exposed, skipping. #10 1.362 Class 'MovieWriterPNGWAV' is not exposed, skipping. #10 1.362 Class 'MultiMeshEditor' is not exposed, skipping. #10 1.362 Class 'MultiMeshEditorPlugin' is not exposed, skipping. #10 1.363 Class 'MultiplayerEditorDebugger' is not exposed, skipping. #10 1.363 Class 'MultiplayerEditorPlugin' is not exposed, skipping. #10 1.364 Class 'NavigationLink2DEditor' is not exposed, skipping. #10 1.364 Class 'NavigationLink2DEditorPlugin' is not exposed, skipping. #10 1.365 Class 'NavigationMeshEditor' is not exposed, skipping. #10 1.365 Class 'NavigationMeshEditorPlugin' is not exposed, skipping. #10 1.365 Class 'NavigationObstacle2DEditor' is not exposed, skipping. #10 1.365 Class 'NavigationObstacle2DEditorPlugin' is not exposed, skipping. #10 1.365 Class 'NavigationObstacle3DEditor' is not exposed, skipping. #10 1.365 Class 'NavigationObstacle3DEditorPlugin' is not exposed, skipping. #10 1.365 Class 'NavigationPolygonEditor' is not exposed, skipping. #10 1.365 Class 'NavigationPolygonEditorPlugin' is not exposed, skipping. #10 1.367 Class 'Node3DEditor' is not exposed, skipping. #10 1.367 Class 'Node3DEditorPlugin' is not exposed, skipping. #10 1.367 Class 'Node3DEditorViewport' is not exposed, skipping. 
#10 1.367 Class 'Node3DEditorViewportContainer' is not exposed, skipping. #10 1.367 Class 'NodeDock' is not exposed, skipping. #10 1.367 Class 'NoiseEditorInspectorPlugin' is not exposed, skipping. #10 1.367 Class 'NoiseEditorPlugin' is not exposed, skipping. #10 1.371 Class 'ORMMaterial3DConversionPlugin' is not exposed, skipping. #10 1.372 Class 'OccluderInstance3DEditorPlugin' is not exposed, skipping. #10 1.379 Class 'OrphanResourcesDialog' is not exposed, skipping. #10 1.379 Class 'PackedSceneEditorPlugin' is not exposed, skipping. #10 1.379 Class 'PackedSceneEditorTranslationParserPlugin' is not exposed, skipping. #10 1.380 Class 'PanoramaSkyMaterialConversionPlugin' is not exposed, skipping. #10 1.380 Class 'ParallaxBackgroundEditorPlugin' is not exposed, skipping. #10 1.381 Class 'ParticleProcessMaterialConversionPlugin' is not exposed, skipping. #10 1.381 Class 'Path2DEditor' is not exposed, skipping. #10 1.381 Class 'Path2DEditorPlugin' is not exposed, skipping. #10 1.381 Class 'Path3DEditorPlugin' is not exposed, skipping. #10 1.381 Class 'Path3DGizmoPlugin' is not exposed, skipping. #10 1.383 Class 'PhysicalBone3DEditorPlugin' is not exposed, skipping. #10 1.384 Class 'PhysicalSkyMaterialConversionPlugin' is not exposed, skipping. #10 1.397 Class 'PluginConfigDialog' is not exposed, skipping. #10 1.399 Class 'Polygon2DEditor' is not exposed, skipping. #10 1.399 Class 'Polygon2DEditorPlugin' is not exposed, skipping. #10 1.399 Class 'Polygon3DEditor' is not exposed, skipping. #10 1.399 Class 'Polygon3DEditorPlugin' is not exposed, skipping. #10 1.400 Class 'PostImportPluginSkeletonRenamer' is not exposed, skipping. #10 1.400 Class 'PostImportPluginSkeletonRestFixer' is not exposed, skipping. #10 1.400 Class 'PostImportPluginSkeletonTrackOrganizer' is not exposed, skipping. #10 1.401 Class 'ProceduralSkyMaterialConversionPlugin' is not exposed, skipping. #10 1.404 Class 'ProgressDialog' is not exposed, skipping. #10 1.404 Class 'ProjectExportDialog' is not exposed, skipping. #10 1.404 Class 'ProjectExportTextureFormatError' is not exposed, skipping. #10 1.404 Class 'ProjectSettingsEditor' is not exposed, skipping. #10 1.404 Class 'PropertySelector' is not exposed, skipping. #10 1.409 Class 'RenameDialog' is not exposed, skipping. #10 1.412 Class 'ReparentDialog' is not exposed, skipping. #10 1.412 Class 'ReplicationEditor' is not exposed, skipping. #10 1.412 Class 'ResourceFormatImporterSaver' is not exposed, skipping. #10 1.413 Class 'ResourcePreloaderEditor' is not exposed, skipping. #10 1.413 Class 'ResourcePreloaderEditorPlugin' is not exposed, skipping. #10 1.418 Class 'RunInstancesDialog' is not exposed, skipping. #10 1.418 Class 'SceneCacheInterface' is not exposed, skipping. #10 1.418 Class 'SceneCreateDialog' is not exposed, skipping. #10 1.418 Class 'SceneExporterGLTFPlugin' is not exposed, skipping. #10 1.418 Class 'SceneImportSettingsData' is not exposed, skipping. #10 1.418 Class 'SceneImportSettingsDialog' is not exposed, skipping. #10 1.418 Class 'SceneRPCInterface' is not exposed, skipping. #10 1.418 Class 'SceneReplicationInterface' is not exposed, skipping. #10 1.418 Class 'SceneTileProxyObject' is not exposed, skipping. #10 1.418 Class 'SceneTreeDialog' is not exposed, skipping. #10 1.418 Class 'SceneTreeDock' is not exposed, skipping. #10 1.418 Class 'SceneTreeEditor' is not exposed, skipping. #10 1.418 Class 'ScreenSelect' is not exposed, skipping. #10 1.429 Class 'ScriptEditorDebugger' is not exposed, skipping. 
#10 1.429 Class 'ScriptEditorPlugin' is not exposed, skipping. #10 1.434 Class 'SectionedInspector' is not exposed, skipping. #10 1.434 Class 'SectionedInspectorFilter' is not exposed, skipping. #10 1.436 Class 'ShaderCreateDialog' is not exposed, skipping. #10 1.436 Class 'ShaderEditorPlugin' is not exposed, skipping. #10 1.436 Class 'ShaderFileEditor' is not exposed, skipping. #10 1.436 Class 'ShaderFileEditorPlugin' is not exposed, skipping. #10 1.436 Class 'ShaderGlobalsEditor' is not exposed, skipping. #10 1.436 Class 'ShaderGlobalsEditorInterface' is not exposed, skipping. #10 1.437 Class 'SizeFlagPresetPicker' is not exposed, skipping. #10 1.437 Class 'Skeleton2DEditor' is not exposed, skipping. #10 1.437 Class 'Skeleton2DEditorPlugin' is not exposed, skipping. #10 1.437 Class 'Skeleton3DEditorPlugin' is not exposed, skipping. #10 1.437 Class 'Skeleton3DGizmoPlugin' is not exposed, skipping. #10 1.438 Class 'SkeletonIK3DEditorPlugin' is not exposed, skipping. #10 1.445 Class 'SnapDialog' is not exposed, skipping. #10 1.448 Class 'SplitContainerDragger' is not exposed, skipping. #10 1.449 Class 'Sprite2DEditor' is not exposed, skipping. #10 1.449 Class 'Sprite2DEditorPlugin' is not exposed, skipping. #10 1.454 Class 'SpriteFramesEditor' is not exposed, skipping. #10 1.454 Class 'SpriteFramesEditorPlugin' is not exposed, skipping. #10 1.458 Class 'StandardMaterial3DConversionPlugin' is not exposed, skipping. #10 1.461 Class 'StyleBoxEditorPlugin' is not exposed, skipping. #10 1.464 Class 'SubViewportPreviewEditorPlugin' is not exposed, skipping. #10 1.464 Class 'SurfaceUpgradeDialog' is not exposed, skipping. #10 1.464 Class 'SurfaceUpgradeTool' is not exposed, skipping. #10 1.468 Class 'Texture3DEditorPlugin' is not exposed, skipping. #10 1.470 Class 'TextureEditorPlugin' is not exposed, skipping. #10 1.470 Class 'TextureLayeredEditorPlugin' is not exposed, skipping. #10 1.473 Class 'TextureRegionEditor' is not exposed, skipping. #10 1.473 Class 'TextureRegionEditorPlugin' is not exposed, skipping. #10 1.473 Class 'ThemeContext' is not exposed, skipping. #10 1.473 Class 'ThemeEditor' is not exposed, skipping. #10 1.473 Class 'ThemeEditorPlugin' is not exposed, skipping. #10 1.473 Class 'ThemeEditorPreview' is not exposed, skipping. #10 1.473 Class 'ThemeItemEditorDialog' is not exposed, skipping. #10 1.473 Class 'ThemeItemImportTree' is not exposed, skipping. #10 1.473 Class 'ThemeTypeDialog' is not exposed, skipping. #10 1.473 Class 'ThemeTypeEditor' is not exposed, skipping. #10 1.473 Class 'TileAtlasView' is not exposed, skipping. #10 1.474 Class 'TileMapEditorPlugin' is not exposed, skipping. #10 1.474 Class 'TileMapLayerEditor' is not exposed, skipping. #10 1.474 Class 'TileMapLayerEditorTerrainsPlugin' is not exposed, skipping. #10 1.474 Class 'TileMapLayerEditorTilesPlugin' is not exposed, skipping. #10 1.474 Class 'TileProxiesManagerDialog' is not exposed, skipping. #10 1.474 Class 'TileSetAtlasSourceEditor' is not exposed, skipping. #10 1.474 Class 'TileSetAtlasSourceProxyObject' is not exposed, skipping. #10 1.474 Class 'TileSetEditor' is not exposed, skipping. #10 1.474 Class 'TileSetEditorPlugin' is not exposed, skipping. #10 1.474 Class 'TileSetScenesCollectionProxyObject' is not exposed, skipping. #10 1.475 Class 'TileSetScenesCollectionSourceEditor' is not exposed, skipping. #10 1.475 Class 'TileSourceInspectorPlugin' is not exposed, skipping. #10 1.475 Class 'TilesEditorUtils' is not exposed, skipping. #10 1.477 Class 'UVEditDialog' is not exposed, skipping. 
#10 1.484 Class 'VersionControlEditorPlugin' is not exposed, skipping. #10 1.484 Class 'ViewPanner' is not exposed, skipping. #10 1.486 Class 'ViewportNavigationControl' is not exposed, skipping. #10 1.486 Class 'ViewportRotationControl' is not exposed, skipping. #10 1.489 Class 'VisualShaderConversionPlugin' is not exposed, skipping. #10 1.514 Class 'VoxelGIEditorPlugin' is not exposed, skipping. #10 1.517 Class 'WindowWrapper' is not exposed, skipping. #10 1.530 Loaded builtin CA certificates #10 2.540 EditorSettings: Save OK! ^C#10 CANCELED ERROR: failed to solve: Canceled: context canceled ``` </details> ### Steps to reproduce Prerequisites: Linux environment with docker installed. 1. Download the testproject: [docker-setup-blend-bug.zip](https://github.com/user-attachments/files/18043979/docker-setup-blend-bug.zip) 2. Copy the godot executable you want to test into the folder besides the run.sh 3. Rename the godot executable to `godot` 4. Run `run.sh`, expected outcome: stuck / endless loop 5. Remove blend file 6. Run `run.sh`, expected outcome: terminates ### Minimal reproduction project (MRP) [docker-setup-blend-bug.zip](https://github.com/user-attachments/files/18043979/docker-setup-blend-bug.zip)
bug,needs testing,topic:import
low
Critical
2,723,995,134
godot
Export Error with .NET 9 on iOS
### Tested versions

Reproducible in 4.3.stable.mono

### System information

macOS Sequoia (15.1.1) - Mac Mini (2018)

### Issue description

With `.NET 9` out of RC, I tested for any errors and found one that specifically prevents exporting for iOS:

```
MSB3030: Could not copy the file "/Users/[username]/.nuget/packages/microsoft.netcore.app.runtime.nativeaot.ios-arm64/9.0.0/runtimes/ios-arm64/native/icudt.dat" because it was not found. /Users/[username]/.nuget/packages/godot.net.sdk/4.3.0/Sdk/iOSNativeAOT.targets(18,5)
```

Checking this directory shows that the file indeed no longer exists and that the entire directory structure is different now. A quick search shows that `icudt.dat` is no longer bundled in `.NET 9`, as outlined [here](https://github.com/xamarin/xamarin-macios/issues/17877#issuecomment-2243000189).

### Steps to reproduce

* Create a C#-enabled Godot project
* Use `.NET 9`
* Export for iOS

### Minimal reproduction project (MRP)

N/A
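Until the Godot.NET.Sdk targets catch up with the new runtime-pack layout, one workaround that may be worth trying (my suggestion, untested with Godot's iOS NativeAOT export) is to opt out of ICU in the game's `.csproj`, since `InvariantGlobalization` is a standard .NET MSBuild property that removes the `icudt.dat` dependency:

```xml
<!-- Sketch only, not a confirmed fix: InvariantGlobalization disables
     ICU-based globalization, so the build should no longer need icudt.dat.
     Culture-specific formatting and collation fall back to invariant behavior. -->
<PropertyGroup>
  <InvariantGlobalization>true</InvariantGlobalization>
</PropertyGroup>
```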
bug,platform:ios,topic:dotnet,topic:export
low
Critical
2,724,006,003
vscode
Block Extension Access to Sensitive Files in the Project
related to #52116

Currently, VSCode extensions have access to all files within a project by default. This poses a potential security risk, as projects may include files containing sensitive information, such as .env files. To enhance security, it would be valuable to allow developers to explicitly mark certain files or patterns as **sensitive** in .vscode/settings.json. Files marked as sensitive would remain invisible to all third-party code, including extensions.

Proposed Feature: Introduce a files.sensitive setting in the workspace configuration to define sensitive files. Example:

```json
{
  "files.sensitive": {
    "**/.env": true
  }
}
```

This feature would:

- Ensure extensions cannot access or read marked files.
- Improve trust and security when using third-party extensions.
feature-request,extensions,file-io
low
Major
2,724,015,853
godot
Remote debug fails in chrome browser Godot 4.3
### Tested versions Reproducible in 4.3 ### System information Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4070 Ti (NVIDIA; 32.0.15.6614) - AMD Ryzen 9 7950X 16-Core Processor (32 Threads) ### Issue description Remote debug works for Firefox and Edge browsers, but fails for Chrome browser. The following error is logged in the web debugger: Uncaught (in promise) TypeError: WebAssembly.instantiate(): Import #0 "a": module is not an object or function at tmp_js_export.js:153:17 This has happened for any project I have exported in Godot 4.3. Couldn't find an issue for it, so wasn't sure if this was known or if others don't have this issue. Projects work fine in chrome once they are hosted on an external site such as itch.io. It just fails to load in the debugger. Thanks! ### Steps to reproduce - Open any Godot project. - Export to Web. - Run remote debugger ### Minimal reproduction project (MRP) [new-game-project.zip](https://github.com/user-attachments/files/18044245/new-game-project.zip)
bug,platform:web,needs testing
low
Critical
2,724,021,888
next.js
Subpath imports broken since 15.0.3-canary.7
### Link to the code that reproduces this issue

https://github.com/fluidsonic/next-import-subpaths-bug

### To Reproduce

1. `cd` into the cloned repo
2. `pnpm i` to install dependencies (it's a pnpm workspace)
3. `cd project-a`
4. `pnpm build`

### Current vs. Expected behavior

Build fails because it can't resolve `import "#internal/internal"` in `Test.tsx`.

```text
% pnpm build

> [email protected] build /Users/marc/Documents/next-test/next-import-subpaths-bug/project-a
> next build

   ▲ Next.js 15.0.3-canary.7

   Creating an optimized production build ...
Failed to compile.

../common/src/Test.tsx
Module not found: Can't resolve '#internal/internal'

https://nextjs.org/docs/messages/module-not-found

Import trace for requested module:
./src/app/page.tsx

> Build failed because of webpack errors
```

### Provide environment information

```bash
Operating System:
  Platform: darwin
  Arch: arm64
  Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
  Available memory (MB): 16384
  Available CPU cores: 10
Binaries:
  Node: 22.12.0
  npm: 10.9.0
  Yarn: 1.22.19
  pnpm: 9.15.0
Relevant Packages:
  next: 15.0.3-canary.7 // There is a newer canary version (15.0.4-canary.45) available, please upgrade!
  eslint-config-next: N/A
  react: 19.0.0
  react-dom: 19.0.0
  typescript: 5.7.2
Next.js Config:
  output: N/A
 ⚠ There is a newer canary version (15.0.4-canary.45) available, please upgrade!
   Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
   Read more - https://nextjs.org/docs/messages/opening-an-issue
```

### Which area(s) are affected? (Select all that apply)

Not sure

### Which stage(s) are affected? (Select all that apply)

next dev (local), next build (local)

### Additional context

It works in `15.0.3-canary.6` but fails in `15.0.3-canary.7` and later.
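For readers unfamiliar with the feature: subpath imports are declared in the importing package's own package.json via the `imports` field, with `#`-prefixed keys. The snippet below is a generic illustration, not the exact config from the repro repository, whose names and paths may differ.

```json
{
  "name": "common",
  "imports": {
    "#internal/*": "./src/internal/*.ts"
  }
}
```

With a mapping like this, `import "#internal/internal"` is expected to resolve to `./src/internal/internal.ts`.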
bug
low
Critical
2,724,038,076
TypeScript
Writes to indexed access of an index signature are not sufficiently constrained
### 🔎 Search Terms

"return type" "generic" "type parameter" "indexed access" "index signature" "assignable" "constraint"

### 🕗 Version & Regression Information

- This is the behavior in every version I tried, and I reviewed the FAQ for entries about index signatures

### ⏯ Playground Link

https://www.typescriptlang.org/play/?ts=5.7.2#code/C4TwDgpgBAogHgQwLZgDbQLxQN4CgpQIBcUA5AqflANoDWEIJAzsAE4CWAdgOYC6zbLt1wBfXLgAmEAMaoEraNID2nFlAiIU6EvGRoIuZauBR6IAIwCOPKFnKUNe9HQbnetspyUmKUAPR+UF7qrKxKrOJSsvKKKmpmAEwk9lAAPmQARg6a+i4gCe52Xj6k-oEQoeEANIRM6nCQ0sAQEkA

### 💻 Code

```ts
type Example = {
  a: 'a'
  [key: string]: string
}

declare const example: Example

const key1: string = 'a'
example[key1] = 'not a' // no error

declare const key2: 'a' | 'b'
example[key2] = 'not a' // error, as expected
//^^^^^^^^^^^
// Type '"not a"' is not assignable to type '"a"'.
```

### 🙁 Actual behavior

The index signature allows any `string` value to be assigned, ignoring the narrower type of `a`.

### 🙂 Expected behavior

Both property assignments in the above code should raise errors, for the same reason.

### Additional information about the issue

This is a more generalized version of #60700. @MartinJohns helped me realize that type parameters need not be involved. Here's another pair of examples ([1](https://www.typescriptlang.org/play/?ts=5.7.2#code/C4TwDgpgBA6glgEwgJwCrmgXigbygQwH4AuKAZ2GTgDsBzAGigCMTzKbaoBfAKFEigBRAB74AtmAA2WXAVIByfPO5QAZLEQp0kHkgDGk-Mmh6A9tQpQIoidNIjxUiLogGjJ85YDWEEKR8gpgBmQjZOPNaO0gDaAQC6UNjy1KbABPJAA), [2](https://www.typescriptlang.org/play/?ts=5.7.2#code/C4TwDgpgBA6glgEwgJwCrmgXigbygbQGsIQAuKAZ2GTgDsBzAXXKpoagF8AoUSKAUQAeAQwC2YADZZcUYeQDkw+ZygAyWIhTpIXJAGMJw5ND0B7WlSgQR4qeSFjJEXRANGT5y8TJRvpgGYCNk5c1o5SRCSMUNjytKbAssoA9MlQ8VbIyKbIQA)) which I expected to be reasoned through similarly:

```ts
type WiderType = { a?: string, b?: string }
type Example = { a: 'a' } & WiderType

declare const example: Example
declare const key: keyof Example
example[key] = 'not a'
//^^^^^^^^^^
// Type '"not a"' is not assignable to type '"a"'.
```

```ts
type WiderType = { [key: string]: string }
type Example = { a: 'a' } & WiderType

declare const example: Example
declare const key: keyof Example
example[key] = 'not a' // no error
```

In both cases `Example` carries with it the information that the property `a` must have a value assignable to `'a'` (as can be seen with `example.a = 'not a'`). But this is not enforced when assigning to a dynamic property using a key whose type matches the index signature.

[Here's another example](https://www.typescriptlang.org/play/?ts=5.7.2#code/C4TwDgpgBA6glgEwgJwCrmgXigbygbQGsIQAuKAZ2GTgDsBzKAH0pAFsAjAewBsBdclRoMoAXwBQoSFACiADwCGbMDyy4CxMpWp16AqAHIFBsVABksRCnSRx4pAGMeC5NAddaVKBEXLV5eSUVCHsIJxc3Dy9NQR0RFgp2bh5xHyDVIhI+KGwDWi5gKGMoAHoS72RkLmQAGiKKbzlIB2AIBCA) which does produce the error I expect:

```ts
type WiderType = { [key: string | symbol]: string }
type Example = { [key: string]: 'a' } & WiderType

declare const example: Example
declare const key: string | symbol
example[key] = 'not a' // error, as expected
//^^^^^^^^^^
// Type '"not a"' is not assignable to type '"a"'.
```
Suggestion,Experimentation Needed
low
Critical
2,724,048,216
rust
Better explanation when evaluation of a constant value fails due to an overflowing literal
### Code

```Rust
fn main() {
    const _:() = assert!(0 <= 2_147_483_648);
}
```

### Current output

```Shell
error[E0080]: evaluation of constant value failed
 --> src/main.rs:4:18
  |
4 |     const _:() = assert!(0 <= 2_147_483_648);
  |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: 0 <= 2_147_483_648', src/main.rs:4:18
  |
  = note: this error originates in the macro `assert` (in Nightly builds, run with -Z macro-backtrace for more info)

For more information about this error, try `rustc --explain E0080`.
```

### Desired output

```Shell
error[E0080]: evaluation of constant value failed
 --> src/main.rs:4:18
  |
4 |     const _:() = assert!(0 <= 2_147_483_648);
  |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: 0 <= 2_147_483_648', src/main.rs:4:18
  |
  = note: the literal `2_147_483_648` does not fit into the type `i32` whose range is `-2147483648..=2147483647`
  = help: consider using the type `u32` instead

For more information about this error, try `rustc --explain E0080`.
```

### Rationale and extra context

See: https://users.rust-lang.org/t/const-assertion-fails-to-infer-correct-type/122166/3

Desired output based on the [message returned in normal (not const) context](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=7136d82a7b4a351769c38695ab75a871).

### Other cases

```Rust
```

### Rust Version

```Shell
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```

### Anything else?

_No response_
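For comparison, the non-const context referenced by the playground link looks roughly like the sketch below (the exact playground contents are an assumption). Outside of a const context, rustc's deny-by-default `overflowing_literals` lint already produces the note and help shown in the desired output.

```Rust
fn main() {
    // In a non-const context, rustc reports "literal out of range for `i32`",
    // notes that `2_147_483_648` does not fit into `i32`, and suggests `u32`.
    let _ = 0 <= 2_147_483_648;
}
```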
A-diagnostics,T-compiler
low
Critical
2,724,100,193
TypeScript
Rethink AutoImportProvider automatic limits
There have been a few issues with `"includePackageJsonAutoImports": "auto"` recently. Today, we scan your project's package.json for dependencies not yet included in your TS program, scanning those dependencies' package.jsons for entrypoints to include as root files for the auxiliary program that collects additional auto-importable files. By default, we limit this to **10 entrypoints** before giving up and not making an auto import provider at all. I think this was intended to be **10 packages**, not 10 entrypoints. At the time the feature was added, most packages only had one detectable entrypoint (`"main"`/`"types"`). Now, it's quite common to see packages with more than 10 `"exports"` subpaths, which effectively guarantees that the feature will be disabled. https://github.com/microsoft/TypeScript/issues/53116#issuecomment-2466507060 shows an example of this.

On the flip side, once we've added our (10 or fewer) root files, there's no limit to how many files they might transitively pull in via imports. AutoImportProvider size was an editor startup time issue for the Teams codebase, not because of package.json inclusion, but because of referenced project inclusion (which has no limiter in place). At any rate, a limiter on root files is not a great solution for preventing costly project load, since a single file could pull in an unbounded number of others.

I'm interested in experimenting to see what would happen if we limit the auto import provider program's module resolution in certain ways, and then relax the limits on adding entrypoints. We could try limiting the depth of module resolution (e.g. root files can pull in other files, but then those files cannot pull in any more). At least, we could prevent explosions of dependencies by not allowing a node_modules package to transitively include any files outside of itself. Experimentation is needed, but off the top of my head, I think this wouldn't remove any available auto-imports except ones generated by `export * from "other-package"`, since we don't (shouldn't) need to know the contents of `other-package` to generate an auto-import for `foo` after seeing `export { foo } from "other-package"`.

I'm less sure what to do about referenced project inclusion, since projects often have `include` globs instead of distinct entrypoints, so it's very possible to add a huge number of root files even without performing any module resolution.
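To illustrate the first problem, here is a hypothetical dependency's package.json (name and paths made up) with more than 10 `"exports"` subpaths; under the current entrypoint-based limit, scanning a package shaped like this is enough on its own to disable the auto import provider.

```json
{
  "name": "some-dependency",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": "./dist/index.js",
    "./client": "./dist/client.js",
    "./server": "./dist/server.js",
    "./react": "./dist/react.js",
    "./node": "./dist/node.js",
    "./browser": "./dist/browser.js",
    "./testing": "./dist/testing.js",
    "./adapters": "./dist/adapters.js",
    "./plugins": "./dist/plugins.js",
    "./utils": "./dist/utils.js",
    "./types": "./dist/types.js"
  }
}
```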
Needs Investigation
low
Minor
2,724,104,089
deno
Do not require cached packuments to construct npm resolution from the lockfile
If we store more information in the lockfile, it should be possible to avoid reaching into the global cache to get information for running a Deno program that already has the tarballs cached. Right now we load:

- cpu, os
- dependencies
- dist
- optional dependencies
- bin
- scripts
- deprecated

But in the case that the tarballs are cached, we can probably avoid reading the packuments entirely and only require reading them for npm resolution or for fetching the tarballs (ideally we don't store tarball URLs in the lockfile, because someone changing the URLs in the lockfile could cause security issues). This would require a new lockfile version.

Also, we should investigate how we optimize for these three scenarios:

1. information required when everything is cached (this issue)
2. information required when doing npm resolution
3. information required when the package itself is not cached (info to download tarballs)

It might mean doing something like downloading the packument and then creating a subset from it (ex. a shortened version of the packument that only has the information we need).

Related:

* https://github.com/denoland/deno/issues/24322
* https://github.com/denoland/deno/pull/27261
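To make the idea concrete, here is a purely hypothetical sketch of what such a lockfile entry could carry so that the packument is not needed when the tarball is already cached. This is not the actual or a proposed deno.lock format; the version number, field names, and package are all made up for illustration.

```jsonc
{
  // hypothetical new lockfile version
  "version": "5",
  "npm": {
    "example-package@1.2.3": {
      "integrity": "sha512-...",
      "dependencies": ["example-dep@4.5.6"],
      // metadata we currently read from the cached packument:
      "os": ["linux", "darwin"],
      "cpu": ["x64", "arm64"],
      "bin": { "example": "bin/cli.js" },
      "hasScripts": true,
      "deprecated": false
    }
  }
}
```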
perf
low
Minor
2,724,155,947
TypeScript
Design Meeting Notes, 11/19/2024
*Notes from @RyanCavanaugh*

# Node unflagging `--experimental-strip-types`

https://github.com/nodejs/typescript/issues/17

* Removes types
    * Doesn't do downleveling or e.g. `enum` syntax
        * Covered by `--experimental-transform-types`
* Requires no commandline flag
* Node.js docs cover recommended tsconfig settings
* We don't have any flag to cover the "no enum" restriction though
    * We should have this flag
        * See #59601
    * We should name it
    * Is this just a temporary flag if nodejs et al will support these features anyway?
        * Many people want this for purity reasons (complimentary)
    * Should *imply* verbatimModuleSyntax and isolatedModules but does not *enforce* that
* TODO: Name this (extremely difficult), implement it (easy, probably)
* Should we support `const x = require('foo')` in TS?
    * Probably (eventually)
* Excited!
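For illustration (not part of the meeting notes), a minimal sketch of the distinction the notes rely on: type-only syntax that `--experimental-strip-types` can simply erase, versus syntax like `enum` that emits runtime code and therefore needs `--experimental-transform-types` (and that the proposed, as-yet-unnamed flag would reject).

```ts
// Erasable: purely type-level syntax; stripping it leaves valid JavaScript.
interface User {
  name: string;
}
export function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// Not erasable: an enum compiles to a runtime object, so deleting the types
// would change behavior; this is the kind of syntax the flag would disallow.
export enum Color {
  Red,
  Green,
}
```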
Design Notes
low
Minor
2,724,156,328
TypeScript
Design Meeting Notes, 12/3/2024
## New `--module` Targets for Node.js

https://github.com/Microsoft/TypeScript/issues/60589
https://github.com/Microsoft/TypeScript/issues/60534

* Previously, we were pretty loose about JSON `import`s in `nodenext`.
    * We need to stop allowing named imports from these JSON files.
    * Also, Node.js *requires* you to use import attributes for JSON files.
* In 5.7 we added fixes for cases when we resolve to JSON files.
    * We enabled this for `node16` as well, but we error saying that you can only use import attributes in `nodenext`.
    * In Node.js 16, you can only use import *assertions* (the old deprecated thing), not import attributes.
    * We will probably make things loose again in `node16` as well - no checks for JSON modules.
        * Node.js 16 is already EOL.
* So where does this leave us?
    * Node.js 18 is still in maintenance, but does support import attributes.
    * Node 23 will have `require(ESM)` support.
* Proposal:
    * `--module nodenext` allows `require(ESM)` (for Node.js 23/24, possibly 22)
        * Eventually settle on a specific line.
    * `--module node20` depending on if the above features get back-ported.
    * `--module node18` for import assertions and JSON import checks (what `nodenext` should be in TypeScript 5.7)
    * `--module node16` to be deprecated farther down.
        * It is not the most ideal state (it is EOL, doesn't support JSON imports correctly), but TBD.
* We like this plan, we feel okay to move forward with this for TypeScript 5.8.
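As a concrete illustration of the behavior the notes describe (file name made up), a JSON import under modern Node.js ESM and `--module nodenext` looks like this:

```ts
// Node.js requires an import attribute for JSON modules, and only a default
// import is available - named imports of the JSON file's properties are not
// allowed. Node.js 16 only understood the deprecated `assert { type: "json" }`
// assertion form instead of `with`.
import config from "./config.json" with { type: "json" };

console.log(config);
```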
Design Notes
low
Critical