id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,816,079,924 | vscode | Detects changes but values are not updated | Testing #238896
Repro:
1. Open PowerShell (v5) in VS Code
2. Run `set SOMETHING=1`
This triggers the `onDidChangeTerminalShellIntegration` event, but the `env` field does not contain the new `SOMETHING` variable.
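For reference, this is roughly how the event was observed (a sketch; `env` comes from the proposed shell integration API, so its exact shape is an assumption):
```ts
import * as vscode from 'vscode';

// Sketch: log the reported env on every shell integration change.
vscode.window.onDidChangeTerminalShellIntegration(e => {
  // Expected to eventually contain '1'; it stays undefined instead.
  console.log(e.shellIntegration.env?.['SOMETHING']);
});
```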
```
> $PSVersionTable
Name                           Value
----                           -----
PSVersion                      5.1.26100.2161
PSEdition                      Desktop
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
BuildVersion                   10.0.26100.2161
CLRVersion                     4.0.30319.42000
WSManStackVersion              3.0
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
> set SOMETHING=1
``` | bug | low | Minor |
2,816,092,388 | rust | NaNs are quieted on Emscripten | `tests/ui/abi/numbers-arithmetic/return-float.rs` currently fails under `wasm32-unknown-emscripten`:
```
---- [ui] tests/ui/abi/numbers-arithmetic/return-float.rs stdout ----
error: test run failed!
status: exit status: 101
command: cd "/home/purplesyringa/rust/build/x86_64-unknown-linux-gnu/test/ui/abi/numbers-arithmetic/return-float" && RUSTC="/home/purplesyringa/rust/build/x86_64-unknown-linux-gnu/stage1/bin/rustc" RUST_TEST_THREADS="8" "node" "/home/purplesyringa/rust/build/x86_64-unknown-linux-gnu/test/ui/abi/numbers-arithmetic/return-float/a.js"
stdout: none
--- stderr -------------------------------
thread 'main' panicked at /home/purplesyringa/rust/tests/ui/abi/numbers-arithmetic/return-float.rs:25:13:
assertion `left == right` failed
left: 2144687445
right: 2140493141
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
------------------------------------------
```
This reproduces on 66d6064f9eb888018775e08f84747ee6f39ba28e on Node v23.6.0. Note that the two values differ by exactly 0x0040_0000 (bit 22), the f32 quiet-NaN bit, which is consistent with a signaling NaN being quieted. WASI is fine, and I think it uses Node during tests too, so this is probably not a Node bug.
@rustbot label +A-ABI +A-floating-point +I-miscompile +O-emscripten +T-compiler +E-needs-investigation | T-compiler,C-bug,A-floating-point,A-ABI,needs-triage,I-miscompile,O-emscripten,E-needs-investigation | low | Critical |
2,816,104,172 | rust | `-C force-frame-pointers=yes` not respected by `-Z build-std` or `opt-level = "z"` | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
So I've got a project I'm trying to debug, but I'm running into something of an issue. The setup is that I have RISC-V CPUs (`riscv32imc-unknown-none-elf`) debuggable over JTAG using OpenOCD + GDB. I would like to be able to get a backtrace out of GDB in the case that a program panics, but previously I was never able to guarantee that such a trace would be there since the stack would collapse as soon as execution entered into `core::panicking::panic` or the like. The way I was attempting to get backtraces from GDB was by doing something to the effect of
```gdb
file target/riscv32imc-unknown-none-elf/release/crate
set remotetimeout unlimited
break crate::panic_handler
target extended-remote :3333
load
define hook-stop
printf "!!!!! hit panic, execution has stopped !!!!!\n"
backtrace
disconnect
quit 1
end
continue
```
I figured that if I:
- Added `panic = "abort"` to the release profile in `Cargo.toml`
- Switched to nightly
- Added the following to `.cargo/config.toml`:
```toml
[build]
# ...
# added:
rustflags = [
    # ...
    "-C", "force-frame-pointers=yes",
]
# ...
[unstable]
build-std = ["core", "panic_abort"]
```
Then in theory, I should be guaranteed to get backtraces. So I wrote a simple program:
```rs
#![no_std]
#![no_main]

use core::fmt::Write;
// The particular impl here doesn't matter.
use some_hal_crate::uart::Uart;
use riscv_rt::entry;

const UART_ADDR: *const () = (0b11 << 30) as *const ();

// And neither does the implementation here - I'm just using it for some extra debugging output.
#[panic_handler]
fn panic_handler(info: &core::panic::PanicInfo) -> ! {
    let mut uart = Uart::new(UART_ADDR);
    writeln!(uart, "{info:?}").unwrap();
    loop {}
}

#[entry]
fn main() -> ! {
    skooks();
    loop {}
}

// macro_rules! definitions must appear before their textual use.
macro_rules! misadventures {
    (@rev [] [$($rev:ident)*]) => {
        misadventures! {
            @defs [skooks $($rev)*]
        }
    };
    (@rev [$first:ident $($rest:ident)*] [$($rev:ident)*]) => {
        misadventures! {
            @rev [$($rest)*] [$first $($rev)*]
        }
    };
    (@defs [$last:ident]) => {
        #[inline(never)]
        fn $last() {
            panic!();
        }
    };
    (@defs [$n0:ident $n1:ident $($rest:ident)*]) => {
        #[inline(never)]
        fn $n0() {
            $n1();
        }
        misadventures! {
            @defs [$n1 $($rest)*]
        }
    };
    ($($words:ident)+) => {
        misadventures! {
            @rev [$($words)+] []
        }
    }
}
pub(crate) use misadventures;

misadventures! {
    am i glad hes frozen in_ there and that were out here and_ that_ hes_ the
    sheriff and__ that__ were_ frozen_ out_ here_ and___ that___ were__ in__
    there__ and____ i_ just remembered were___ out__ here__ what i__ want to know
    is wheres the_ caveman
}
```
However, I have two problem cases:
1. In debug builds, the stack trace disappears as soon as execution enters the panicking code (`crate::am::panic_cold_explicit` I believe). Until then, I get a full trace (very nice!).
2. In release builds (with `debug = true`, `split-debuginfo = "unpacked"`, and `opt-level = "z"`), the stack trace is never deeper than `#0` and `#1` in GDB. It reports `Backtrace stopped: frame did not save the PC`.
`panic = "abort"` was used for all of my testing.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
This was done on `nightly-2024-11-18`.
| C-bug,needs-triage | low | Critical |
2,816,111,628 | ui | [bug]: Incorrect Import Path for Utility Function in Sidebar Component | ### Describe the bug
In the `Sidebar` component, the import statement for the `cn` utility function is written as:
```tsx
import { cn } from "@/components/lib/utils"
```
However, the correct import path should be:
```tsx
import { cn } from "@/lib/utils"
```
The import statement points at the wrong location for the `utils` file; updating the import path to `@/lib/utils` resolves the issue.
### Affected component/components
Sidebar
### How to reproduce
1. Go to https://ui.shadcn.com/docs/components/sidebar
2. Scroll down to Installation
3. Select "Manual"
4. See error
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
Browser
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,816,113,863 | transformers | Mangled tokenization with Llama 3.1 for string sequences containing `<space>'m` | We observed that trying to tokenize/detokenize strings containing the sequence `<space>'m` would not give back the initial string, but would "eat" the leading whitespace.
For example, the string "for 'manual'" will be transformed into "for'manual'"
Investigating further, we also observed issues with strings containing `<space>'s`, making us think the issue may be related to trying to handle sequences such as "I'm".
### System Info
transformers==4.46.2
### Who can help?
I guess it's for @ArthurZucker and @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Running:
```python
from transformers import AutoTokenizer
prompt = """for 'manual'"""
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
tokenizer.batch_decode(tokenizer([prompt])["input_ids"], skip_special_tokens=True)[0]
```
prints
```
"for'manual'"
```
(missing whitespace before the leading ')
### Expected behavior
It should output the following
```
"for'manual'"
``` | bug | low | Minor |
2,816,151,321 | react | Why does DevTools need permission to write to the clipboard? | React DevTools is asking for permission to read the clipboard.
Why?
![Image](https://github.com/user-attachments/assets/026238d9-d6db-4aaf-8b1e-44e949409b99) | Component: Developer Tools | low | Major |
2,816,197,665 | react | [React 19] HMR Bugs |
### 1st issue: The error gets shown right before the HMR reloads (not updates) the page after changes are made.
![Image](https://github.com/user-attachments/assets/a1bc7ead-6c64-46f7-9ae0-fff59c06051f)
### 2nd issue: HMR reloads the page instead of updating trivial components (App).
**./package.json:**
```json
{
  "name": "chat-contests",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "dev": "npx webpack serve",
    "build": "npx webpack"
  },
  "author": "",
  "license": "ISC",
  "description": "",
  "devDependencies": {
    "@babel/core": "^7.26.0",
    "@babel/preset-env": "^7.26.0",
    "@babel/preset-react": "^7.26.3",
    "babel-loader": "^9.2.1",
    "css-loader": "^7.1.2",
    "html-webpack-plugin": "^5.6.3",
    "style-loader": "^4.0.0",
    "webpack": "^5.97.1",
    "webpack-cli": "^6.0.1",
    "webpack-dev-server": "^5.2.0"
  },
  "dependencies": {
    "@reduxjs/toolkit": "^2.5.0",
    "axios": "^1.7.9",
    "react": "^19.0.0",
    "react-dom": "^19.0.0",
    "react-redux": "^9.2.0"
  }
}
```
**./src/index.js**:
In the logs there's only the "HOT AVAILABLE" message. For some reason HMR doesn't accept my module.
```js
import React from 'react';
import { createRoot } from 'react-dom/client';
import App from './App';

const app = createRoot(
  document.getElementById('app')
);

const render = () => app.render(<App />);

if (module.hot) {
  console.error('HOT AVAILABLE');
  module.hot.accept('./App', () => {
    console.error('HOT ACCEPTED');
    const NextApp = require('./App').default;
    console.error('HOT APP LOADED');
    app.render(<NextApp />);
  });
} else {
  console.error('NOT HOT');
}

// Initial render (must also happen when HMR is available).
render();
```
**./src/App.js:**
```js
import React from 'react';

const App = () => {
  return (
    <div>Hello World!!10!</div>
  );
};

export default App;
```
**./webpack.config.js:**
```js
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  mode: 'development',
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.[contenthash].js',
    clean: true
  },
  devServer: {
    static: {
      directory: './dist',
      publicPath: '/apps/chat-contests/dev'
    },
    port: 1010,
    hot: true,
    allowedHosts: 'all',
    client: {
      webSocketURL: {
        protocol: 'wss',
        port: 443,
        hostname: 'my-hostname.com',
        pathname: '/chat-contests-ws'
      }
    }
  },
  module: {
    rules: [{
      test: /\.jsx?$/,
      exclude: /node_modules/,
      use: 'babel-loader'
    }, {
      test: /\.css$/,
      use: [ 'style-loader', 'css-loader' ]
    }]
  },
  resolve: {
    extensions: [ '.js', '.jsx' ]
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: './public/index.html'
    })
  ]
};
``` | React 19 | medium | Critical |
2,816,210,134 | PowerToys | new+ takes over the "new folder" buttons... | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
New+
### Steps to reproduce
After activating New+, if I'm in a dialog box that offers a "New Folder" button, or in Windows Explorer, clicking the "New folder" button now simply opens a new Explorer window at the New+ template folder location.
### ✔️ Expected Behavior
I only expected this to affect the right click menu.
### ❌ Actual Behavior
If you click a "new folder" button in Explorer while New+ is running, instead of the standard windows action of creating a new folder in the current folder, a new Explorer window gets opened and the dialog box does nothing.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,816,214,988 | kubernetes | [CLE] Add integration test for third party strategy | https://github.com/kubernetes/kubernetes/blob/master/test/integration/apiserver/coordinated_leader_election_test.go#L214-L222 only tests for OldestEmulationVersion. Since this strategy could be a third party strategy (https://github.com/kubernetes/kubernetes/blob/master/pkg/controlplane/controller/leaderelection/leaderelection_controller_test.go#L316), we should add an integration test to ensure that this works.
/cc @henrywu573
/triage accepted
/sig api-machinery
/assign | sig/api-machinery,triage/accepted | low | Minor |
2,816,216,789 | vscode | replace `isFile`, `isDirectory` on proposed terminal completion provider with `kind` | revert this hack
https://github.com/microsoft/vscode/pull/238900
cc @Tyriar | bug,terminal-suggest | low | Minor |
2,816,221,313 | PowerToys | something went wrong | ### Microsoft PowerToys version
0.87.1.0
### Installation method
WinGet
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
Version: 0.87.1.0
OS Version: Microsoft Windows NT 10.0.26100.0
IntPtr Length: 8
x64: True
Date: 28-01-2025 10:10:09 PM
Exception:
System.InvalidOperationException: Cyclic reference found while evaluating the Style property on element 'System.Windows.Controls.ScrollViewer'.
at System.Windows.FrameworkElement.UpdateStyleProperty()
at System.Windows.TreeWalkHelper.InvalidateStyleAndReferences(DependencyObject d, ResourcesChangeInfo info, Boolean containsTypeOfKey)
at System.Windows.TreeWalkHelper.OnResourcesChanged(DependencyObject d, ResourcesChangeInfo info, Boolean raiseResourceChangedEvent)
at System.Windows.TreeWalkHelper.OnResourcesChangedCallback(DependencyObject d, ResourcesChangeInfo info, Boolean visitedViaVisualTree)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.VisitNode(DependencyObject d, Boolean visitedViaVisualTree)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.WalkLogicalChildren(FrameworkElement feParent, FrameworkContentElement fceParent, IEnumerator logicalChildren)
at System.Windows.DescendentsWalker`1.WalkFrameworkElementLogicalThenVisualChildren(FrameworkElement feParent, Boolean hasLogicalChildren)
at System.Windows.DescendentsWalker`1.IterateChildren(DependencyObject d)
at System.Windows.DescendentsWalker`1.StartWalk(DependencyObject startNode, Boolean skipStartNode)
at System.Windows.TreeWalkHelper.InvalidateOnResourcesChange(FrameworkElement fe, FrameworkContentElement fce, ResourcesChangeInfo info)
at System.Windows.ResourceDictionary.NotifyOwners(ResourcesChangeInfo info)
at System.Windows.ResourceDictionary.OnMergedDictionariesChanged(Object sender, NotifyCollectionChangedEventArgs e)
at System.Collections.ObjectModel.ObservableCollection`1.OnCollectionChanged(NotifyCollectionChangedEventArgs e)
at System.Windows.ThemeManager.AddOrUpdateThemeResources(ResourceDictionary rd, ResourceDictionary newDictionary)
at System.Windows.ThemeManager.ApplyFluentOnWindow(Window window)
at System.Windows.ThemeManager.OnSystemThemeChanged()
at System.Windows.SystemResources.SystemThemeFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,816,243,859 | pytorch | RecursionError: maximum recursion depth exceeded in comparison | ### 🐛 Describe the bug
Running the following small test results in a `RecursionError`:
```py
import torch


def test_a():
    def process_tensors(
        in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0, out_ptr1
    ):
        for x0 in range(32):
            for x1 in range(6):
                for x2 in range(64):
                    tmp0 = in_ptr0[x0, x1, x2]
                    tmp1 = in_ptr1[x1, x2]
                    tmp6 = in_ptr2[x0, x1, x2]
                    tmp13 = in_ptr3[x0 // 8, x1, x2]
                    tmp15 = in_ptr4[x0 // 8, x1, x2]
                    tmp2 = torch.cos(tmp1)
                    tmp3 = 1.0
                    tmp4 = tmp2 * tmp3
                    tmp5 = tmp0 * tmp4
                    tmp7 = torch.sin(tmp1)
                    tmp8 = tmp7 * tmp3
                    tmp9 = tmp6 * tmp8
                    tmp10 = tmp5 + tmp9
                    tmp11 = 0.3535533905932738
                    tmp12 = tmp10 * tmp11
                    tmp14 = tmp13 * tmp4
                    tmp16 = tmp15 * tmp8
                    tmp17 = tmp14 + tmp16
                    tmp18 = tmp17 * tmp11
                    out_ptr0[x0, x1, x2] = tmp12
                    out_ptr1[x0, x1, x2] = tmp18

    # Example usage:
    with torch.no_grad():
        in_ptr0 = torch.randn(32, 6, 64)
        in_ptr1 = torch.randn(6, 64)
        in_ptr2 = torch.randn(32, 6, 64)
        in_ptr3 = torch.randn(4, 6, 64)
        in_ptr4 = torch.randn(4, 6, 64)
        out_ptr0 = torch.zeros(32, 6, 64)
        out_ptr1 = torch.zeros(32, 6, 64)
        out_ptr0_ = torch.zeros(32, 6, 64)
        out_ptr1_ = torch.zeros(32, 6, 64)
        compiled_fn = torch.compile(
            process_tensors,
            backend="inductor",
            fullgraph=True,
        )
        process_tensors(in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0, out_ptr1)
        compiled_fn(in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0_, out_ptr1_)
```
### Error logs
[log_.txt](https://github.com/user-attachments/files/18576776/log_.txt)
### Versions
PyTorch version: 2.4.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.5 (++20241125104649+086d8e6bb5da-1~exp1~20241125104703.66)
CMake version: version 3.29.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9124 16-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 1
Frequency boost: disabled
CPU max MHz: 3711.9141
CPU min MHz: 1500.0000
BogoMIPS: 5991.11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.16.3
[pip3] onnxscript==0.1.0.dev20240327
[pip3] torch==2.4.1+cpu
[pip3] torch_geometric==2.5.2
[pip3] torch_qaic==0.1.0
[pip3] torch-tb-profiler==0.4.3
[conda] Could not collect
cc @chauhang @penguinwu | oncall: pt2 | low | Critical |
2,816,247,529 | vscode | Trust dialog disappears when clicking links | Testing #238847
When I click on any link from the Trust dialog that opens the browser, the dialog is dismissed (assume Cancel). I then have to go to the Extensions View and click on Install again to bring up the dialog.
I'm a little mixed on this behavior. I would prefer the dialog to stay open so that if I want to click on multiple links to learn more then I don't have to keep going back to the Extensions View and clicking Install again. At the same time, I don't want a modal dialog.
Can we keep the same dialog but just not close it when going to the browser? | feature-request,dialogs | low | Minor |
2,816,247,860 | bitcoin | build: depends cross-compile using Clang fails | Seems like CMake is getting confused, and trying to build and link an `x86_64` binary, when it's meant to be (and thinks it is, according to the configure output) building for `aarch64`:
```bash
make -C depends/ CC=clang CXX=clang++ AR=llvm-ar NM=llvm-nm RANLIB=llvm-ranlib STRIP=llvm-strip NO_QT=1 NO_WALLET=1 NO_USDT=1 NO_ZMQ=1 -j18 HOST=aarch64-linux-gnu
<snip>
copying packages: boost libevent
to: /root/ci_scratch/depends/aarch64-linux-gnu
To build Bitcoin Core with these packages, pass '--toolchain /root/ci_scratch/depends/aarch64-linux-gnu/toolchain.cmake' to the first CMake invocation.
cmake -B build --toolchain /root/ci_scratch/depends/aarch64-linux-gnu/toolchain.cmake
<snip>
Cross compiling ....................... TRUE, for Linux, aarch64
C++ compiler .......................... Clang 18.1.3, /usr/bin/clang++
<snip>
cmake --build build -j18 --target bitcoind --verbose
[ 98%] Linking CXX executable bitcoind
cd /root/ci_scratch/build/src && /usr/bin/cmake -E cmake_link_script CMakeFiles/bitcoind.dir/link.txt --verbose=1
/usr/bin/clang++ -pipe -std=c++20 -O2 -O2 -g -fstack-protector-all -fcf-protection=full -fstack-clash-protection -Wl,-z,relro -Wl,-z,now -Wl,-z,separate-code -fPIE -pie CMakeFiles/bitcoind.dir/bitcoind.cpp.o CMakeFiles/bitcoind.dir/init/bitcoind.cpp.o -o bitcoind libbitcoin_node.a libbitcoin_common.a libbitcoin_consensus.a secp256k1/lib/libsecp256k1.a util/libbitcoin_util.a crypto/libbitcoin_crypto.a crypto/libbitcoin_crypto_sse41.a crypto/libbitcoin_crypto_avx2.a crypto/libbitcoin_crypto_x86_shani.a ../libleveldb.a ../libcrc32c.a ../libcrc32c_sse42.a ../libminisketch.a univalue/libunivalue.a /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_pthreads.a /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_core.a
/usr/bin/ld: /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a(http.c.o): Relocations in generic ELF (EM: 183)
/usr/bin/ld: /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a(http.c.o): Relocations in generic ELF (EM: 183)
/usr/bin/ld: /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a(http.c.o): Relocations in generic ELF (EM: 183)
/usr/bin/ld: /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a(http.c.o): Relocations in generic ELF (EM: 183)
/usr/bin/ld: /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a: error adding symbols: file in wrong format
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
gmake[3]: *** [src/CMakeFiles/bitcoind.dir/build.make:130: src/bitcoind] Error 1
```
Maybe there's something missing from the toolchain that we should be passing through. The libs in depends have been compiled for aarch64, i.e.:
```bash
objdump -x /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a | grep aarch64
In archive /root/ci_scratch/depends/aarch64-linux-gnu/lib/libevent_extra.a:
event_tagging.c.o: file format elf64-littleaarch64
architecture: aarch64, flags 0x00000011:
http.c.o: file format elf64-littleaarch64
```
However CMake is building x86_64 object files:
```bash
file build/CMakeFiles/leveldb.dir/src/leveldb/db/filename.cc.o
build/CMakeFiles/leveldb.dir/src/leveldb/db/filename.cc.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), with debug_info, not stripped
```
and binaries:
```bash
file build/src/bitcoin-util
build/src/bitcoin-util: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=4ccab1d492154df5dd9a115e6bbc52371d7439b1, for GNU/Linux 3.2.0, with debug_info, not stripped
``` | Build system | low | Critical |
2,816,251,298 | ui | [bug]: Cannot read properties of undefined (reading 'resolvedPaths') | ### Describe the bug
![Image](https://github.com/user-attachments/assets/b49f5f3e-d96a-4340-b633-fa458ab8695c)
### Affected component/components
Almost all components
### How to reproduce
Any `npx shadcn@latest add` command causes this issue:
```
npx shadcn@latest add calendar
```
### System Info
```bash
Windows 11
```
| bug | low | Critical |
2,816,264,279 | vscode | Should the unverified link go to a different docs page? | Testing #238847
When an extension is unverified, the link currently goes to a page meant for extension authors telling them how to become verified: https://code.visualstudio.com/api/working-with-extensions/publishing-extension#verify-a-publisher
However, the dialog is intended for end users, not publishers. Suggest that the unverified link goes here:
https://code.visualstudio.com/docs/editor/extension-runtime-security#_determine-extension-reliability. This tells the user more about extension reliability including verified publishers. | info-needed | low | Minor |
2,816,266,632 | TypeScript | Not possible to use const string as enum value | ### 🔎 Search Terms
enum const string computed value 18033
### 🕗 Version & Regression Information
I tried 2 TS versions, reproducible with both:
- 5.6.2
- 5.7.3
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/MYewdgzgLgBCAOUCW4BMMC8MDkDluwG4AoUSWPFMAZkx0vGuxgEMIYzoTiBTAD3ggATrB5gArgFsYAFR7QAohOkBvYjBgB5RFQCMdXDvC7sAGnVajYdFgbXTMAPSOYAHncwwIGDyFDh7AAWvjwW2vg0dHbUDs5uHlAQABS6ABwADNTUAJQ+fsIwwUKhAL5AA
### 💻 Code
```ts
const option2 = 'option2';
const option3 = 'option3' as const;
export enum TestEnum {
Option1 = 'option1',
Option2 = option2, // <<< no errors here
Option3 = option3, // <<< ts(18033) error here
}
```
### 🙁 Actual behavior
Blocked by `ts(18033)` error
### 🙂 Expected behavior
No `ts(18033)` error
### Additional information about the issue
When a string variable is created `as const`, it's impossible to use it as an enum value because of the `ts(18033)` error.
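For what it's worth, a possible workaround sketch (not from the original report) is a const object plus a derived union type, which accepts both variables:
```ts
const option2 = 'option2';
const option3 = 'option3' as const;

const TestEnum = {
  Option1: 'option1',
  Option2: option2,
  Option3: option3,
} as const;
// 'option1' | 'option2' | 'option3'
type TestEnum = (typeof TestEnum)[keyof typeof TestEnum];
```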
As far as I'm aware this behaviour should've been implemented as part of this issue https://github.com/microsoft/TypeScript/issues/40793 | Duplicate | low | Critical |
2,816,271,460 | godot | 4.4 Beta 1 - Animation Sub-Properties Create Wrong Key Type | ### Tested versions
4.4.beta1
### System information
Godot v4.4.beta1 - Windows 11 (build 22631) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 31.0.15.4633) - AMD Ryzen 7 5800H with Radeon Graphics (16 threads)
### Issue description
I noticed that 4.4 Beta 1 introduced individual property selection for animations, which is great.
It's bugged, however: picking the Y value for Position when adding a track still creates a Vector2 key.
![Image](https://github.com/user-attachments/assets/f1b0513a-3cfe-403e-a477-0d15d0fd7e44)
The key also shows up as a red X in the timeline when a value is assigned to it.
![Image](https://github.com/user-attachments/assets/acfc1b99-dddb-4639-81a1-1d5619f47f9d)
### Steps to reproduce
1 - Create an animation
2 - Add a track for a property like Position's Y
### Minimal reproduction project (MRP)
[AnimationBug.zip](https://github.com/user-attachments/files/18576945/AnimationBug.zip) | bug,topic:editor,topic:animation | low | Critical |
2,816,277,402 | neovim | 'background' detection not working on Windows Terminal | ### Problem
The `background` option is not set automatically on Windows Terminal.
As far as I investigated, the `TermResponse` autocommand is not getting called with `OSC 11` events.
I have tried to log all `TermResponse` events:
```lua
vim.api.nvim_create_autocmd('TermResponse', {
  callback = function(args)
    print(vim.inspect(args))
  end
})
```
But apparently `OSC 11` is not being triggered, even if I run `:= io.stdout:write('\027]11;?\007')`.
### Steps to reproduce
- Select a light theme on Windows Terminal
- Run `nvim --clean`
### Expected behavior
`background` option should be set to `light` automatically when the terminal is using a `light` theme.
### Nvim version (nvim -v)
NVIM v0.11.0-dev-1417+g487c48ec86
### Vim (not Nvim) behaves the same?
no, VIM - Vi IMproved 9.1 (2024 Jan 02, compiled Dec 22 2024 22:14:21)
### Operating system/version
Windows
### Terminal name/version
Windows Terminal 1.21.3231.0
### $TERM environment variable
empty
### Installation
chocolatey | platform:windows,tui,terminal,events | low | Minor |
2,816,288,116 | vscode | How to reset or revoke Trusted Publishers? | Testing #238847
Is there a way to reset or untrust an existing trusted publisher? I can't find any commands in the Command Palette...
The scenario is that I accidentally clicked on "Trust" or that I learned later that a publisher is not to be trusted, so I want the ability to revoke this trust. | feature-request,extensions | low | Minor |
2,816,294,959 | rust | Lifetime generics on Generic Const Items should not affect if const is checked for evaluatability | > There's a difference between `const _: () = panic!();` and `const _<'a>: () = panic!();`: The former is a pre-mono error, the latter is a post-mono error.
This seems inconsistent, and pretty bad. But luckily something we can change before stabilizing generic consts :)
I think we should probably change the code introduced in https://github.com/rust-lang/rust/pull/121387 to instead call `requires_monomorphization` instead of `is_empty` on the generics.
It should also preferably check for impossible predicates in the same way we do for `-Clink-dead-code`/`-Zcollect-mono-items=eager`, since we must avoid monomorphizing consts with impossible (possibly trivial) preds. You could probably turn that into an ICE today.
_Originally posted by @compiler-errors in https://github.com/rust-lang/rust/pull/136168#pullrequestreview-2576994338_
| C-bug,T-types,F-generic_const_items | low | Critical |
2,816,298,677 | vscode | Make the splitter/sash size configurable | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Splitters and sash handles (for things like the left edge of the search box) are only 2px wide. I constantly find them hard to hover on and grab (big screen, sensitive mouse, old hands). Why not make it configurable? You _can_ configure a sash size like this:
```json
{
"workbench.sash.size": 8
}
```
But it only takes effect once you've managed to hover over the splitter. I want to make the hover target bigger. | feature-request,layout | low | Minor |
2,816,301,165 | vscode | Filtering seems to give up after a lot of results | Testing #238846
Searching the pty host channel with trace logs for terminal, notice the overview ruler stops showing results and the filter stops working from then on:
<img width="1172" alt="Image" src="https://github.com/user-attachments/assets/1fc992eb-fb3f-4d79-b945-a5d35f6ee7fd" />
With Terminal highlighted, you can see more matches in the overview ruler:
<img width="1172" alt="Image" src="https://github.com/user-attachments/assets/0a022b8b-7fa1-46ef-a842-4ccc98733883" />
| output | low | Minor |
2,816,308,907 | vscode | Filtering of output channels should run asynchronously and avoid locking up the renderer | Testing #238846
Filtering to a lot of content will lock up the renderer process. In a log with ~11000 lines it blocked the renderer for around 0.5s:
<img width="719" alt="Image" src="https://github.com/user-attachments/assets/24351126-45b5-47e4-bef0-b6ad21a14280" />
Can we do this processing async using something like this so the renderer doesn't lock up?
https://github.com/microsoft/vscode/blob/8081bc3f04fb46dcf8755937107ecfc31f6ed96e/src/vs/editor/browser/gpu/taskQueue.ts#L115-L135
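For illustration, a minimal sketch of idle-time chunking (this is not the linked `TaskQueue` implementation, just the general idea):
```ts
// Sketch: filter lines in idle-time chunks so large logs don't block the renderer.
function filterLinesAsync(
  lines: string[],
  match: (line: string) => boolean,
): Promise<string[]> {
  const result: string[] = [];
  let i = 0;
  return new Promise(resolve => {
    const work = (deadline: IdleDeadline) => {
      // Process as many lines as fit into this idle period.
      while (i < lines.length && deadline.timeRemaining() > 1) {
        if (match(lines[i])) {
          result.push(lines[i]);
        }
        i++;
      }
      if (i < lines.length) {
        requestIdleCallback(work); // yield control back to the renderer
      } else {
        resolve(result);
      }
    };
    requestIdleCallback(work);
  });
}
```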
Based on the numbers above I would guess an output channel of 110000 lines (which isn't unreasonable imo) would block the renderer for around 5 seconds. | output | low | Minor |
2,816,324,698 | kubernetes | Basing removed APIs on git tag always causes verify failures when the tag is changed to beta | This follows up on https://github.com/kubernetes/kubernetes/issues/128616#issuecomment-2465832999.
Per @liggitt's comment
> the reason we fail on beta and not alpha is that master transitions to 1.$next.0-alpha.0 as soon as we make a release branch and rc.0 for the current minor
When we transition from `1.$x.0-alpha.0` to `1.$x.0-beta.0`, the alpha flag is changed and certain resource are [dropped](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/server/deleted_kinds.go#L139), causing an OpenAPI diff.
I have a sample PR to force the beta tag to see the OpenAPI diff: https://github.com/kubernetes/kubernetes/pull/129863#issuecomment-2619604117.
At the moment, we can't preemptively remove these "removed" APIs from the OpenAPI as the tag has not transitioned to beta. So we have a chicken-and-egg problem: bumping the tag will cause CI failures because of removed types, and we cannot preemptively remove the types because the tag is not bumped.
Can we rely on something other than the git tag alpha/beta/etc for determining what APIs to serve? Perhaps a constant in a file?
cc @aojea @BenTheElder @liggitt @deads2k
| sig/api-machinery,sig/testing,sig/release,triage/accepted | low | Critical |
2,816,337,218 | vscode | Editor GPU: Filtering in compound log shows stale content frequently | Testing #238846
Playing around with the log categories and the filter text I hit this pretty easily:
<img width="744" alt="Image" src="https://github.com/user-attachments/assets/ca5b54ce-899e-4e95-a25c-85550d3fb160" />
<img width="744" alt="Image" src="https://github.com/user-attachments/assets/9d871377-9846-44ec-b9f3-975005e71c43" />
Output filtering just works in such a way that makes this bug happen often: the text at the bottom isn't actually there, but it's still hanging around in the GPU buffer. | editor-gpu | low | Critical |
2,816,375,252 | ant-design | Space.Compact Bug | ### Reproduction link
[![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-7cs4n9)
### Steps to reproduce
Use `Space.Compact` with an `Input` and a `Button`, as in the sketch below.
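A minimal sketch of that layout (antd 5.x; the exact props are illustrative):
```tsx
import React from "react";
import { Button, Input, Space } from "antd";

// Sketch of the reported layout: the Button renders shorter than the Input.
export const Demo = () => (
  <Space.Compact>
    <Input placeholder="search" />
    <Button type="primary">Go</Button>
  </Space.Compact>
);
```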
### What is expected?
The button is the same height as the input
### What is actually happening?
The height of the button is less than the height of the input.
| Environment | Info |
| --- | --- |
| antd | 5.23.3 |
| React | v19 Vite |
| System | Kubuntu 24.04.1 LTS |
| Browser | Google Chrome Version 132.0.6834.83 (Official Build) (64-bit) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Critical |
2,816,383,701 | transformers | Learning rate logging off by one training step | ### System Info
transformers==4.48.1
### Who can help?
@muellerz @SunMarc
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The Trainer class steps the learning rate scheduler at the end of each training loop iteration before calling `Trainer._maybe_log_save_evaluate()`. As a result, the logged learning rate is the learning rate of the next training step, not the one that was just performed. This off-by-one error is usually not much of a problem with monotonic schedulers, but with cycling schedulers such as `cosine_with_restarts`, it can be the difference between something very close to zero and the max learning rate at cycle boundaries.
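As a toy illustration of the off-by-one (a TypeScript sketch, not the actual Python Trainer code; the schedule values are made up):
```ts
// A cycling schedule: the LR restarts at each cycle boundary.
class ToyScheduler {
  private step = 0;
  constructor(private readonly lrs: number[]) {}
  advance(): void {
    this.step = Math.min(this.step + 1, this.lrs.length - 1);
  }
  get lr(): number {
    return this.lrs[this.step];
  }
}

const sched = new ToyScheduler([1.0, 0.5, 0.0, 1.0, 0.5, 0.0]);
for (let step = 0; step < 5; step++) {
  // ...the optimizer step for `step` runs here, using sched.lr...
  sched.advance(); // scheduler stepped first
  console.log(`step ${step}: lr=${sched.lr}`); // logs the *next* step's LR
}
// Step 2 trained with lr=0.0 but logs lr=1.0 (the cycle restart).
```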
LR scheduler is stepped here: https://github.com/huggingface/transformers/blob/4ec425ffad56cdbedfb97ab2d11243e42889f71c/src/transformers/trainer.py#L2605
Logging and checkpointing occurs here: https://github.com/huggingface/transformers/blob/4ec425ffad56cdbedfb97ab2d11243e42889f71c/src/transformers/trainer.py#L2611
### Expected behavior
The call to `Trainer._maybe_log_save_evaluate()` should be moved above the LR scheduler step (or vice versa), otherwise an incorrect learning rate will be logged. | bug | low | Critical |
2,816,397,830 | deno | OpenTelemetry setup does not work with `--env-file` | Version: Deno 2.1.7
Configuration of OTEL via (at least these) environment variables from `--env-file` is not respected as stated in [the OTEL docs](https://docs.deno.com/runtime/fundamentals/open_telemetry/).
- `OTEL_SERVICE_NAME` - default is `unknown_service` ([/ext/telemetry/lib.rs#L626](https://github.com/denoland/deno/blob/02ed3005259d71abd125ecf8f07f5e14c7a86fa5/ext/telemetry/lib.rs#L626))
- `OTEL_EXPORTER_OTLP_ENDPOINT` - default is `localhost:4318` ([/ext/telemetry/lib.rs#L662](https://github.com/denoland/deno/blob/02ed3005259d71abd125ecf8f07f5e14c7a86fa5/ext/telemetry/lib.rs#L662))
- `OTEL_EXPORTER_OTLP_PROTOCOL` - default is `http/protobuf ` ([/ext/telemetry/lib.rs#L606](https://github.com/denoland/deno/blob/02ed3005259d71abd125ecf8f07f5e14c7a86fa5/ext/telemetry/lib.rs#L606C34-L606C61))
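As a quick sanity check (a sketch; the file name is illustrative), the variables can be read back from user code to confirm whether the env file was applied:
```ts
// check.ts: run with `deno run --allow-env --env-file check.ts`
for (const name of [
  "OTEL_SERVICE_NAME",
  "OTEL_EXPORTER_OTLP_ENDPOINT",
  "OTEL_EXPORTER_OTLP_PROTOCOL",
]) {
  console.log(name, "=", Deno.env.get(name));
}
```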
[Example repository with reproduction](https://github.com/tolu/deno-otel) involving
1. A server bootstrapped with OTEL (either via env file, shell or dockerfile)
2. A logger that listens on the configured `OTLP_ENDPOINT` and print `content-type` and the payload of requests
3. A client that spams the server with some requests
Assigning values like:
- `OTEL_SERVICE_NAME=my-service`
- `OTEL_EXPORTER_OTLP_PROTOCOL=http/json`
Still uses protobuf and yields `service.name: <unknown_service>`
Changing the port of:
- `OTEL_EXPORTER_OTLP_ENDPOINT=localhost:4317`
results in no reports arriving at that endpoint.
| bug,otel | low | Minor |
2,816,404,230 | rust | `tail_expr_drop_order` lint can mention internal `__awaitee` name | ### Code
```Rust
#![warn(tail_expr_drop_order)]
async fn foo() -> bool {
false
}
pub async fn bar() {
while let true = foo().await {}
}
```
### Current output
```Shell
warning: relative drop order changing in Rust 2024
--> src/lib.rs:8:28
|
8 | while let true = foo().await {}
| ------^^^^^ - now the temporary value is dropped here, before the local variables in the block or statement
| | |
| | this value will be stored in a temporary; let us call it `#1`
| | up until Edition 2021 `#1` is dropped last but will be dropped earlier in Edition 2024
| `__awaitee` calls a custom destructor
| `__awaitee` will be dropped later as of Edition 2024
|
= warning: this changes meaning in Rust 2024
= note: for more information, see <https://doc.rust-lang.org/nightly/edition-guide/rust-2024/temporary-tail-expr-scope.html>
= note: most of the time, changing drop order is harmless; inspect the `impl Drop`s for side effects like releasing locks or sending messages
```
### Desired output
Something else instead of “`__awaitee`”.
### Rationale and extra context
Note that arguably the lint shouldn’t be firing at all because there are no other local variables, but that is just because the example is minimal; in the real cases, there were other local variables with significant `Drop`s.
### Rust Version
```Shell
rustc 1.86.0-nightly (2f348cb7c 2025-01-27)
binary: rustc
commit-hash: 2f348cb7ce4063fa4eb40038e6ada3c5214717bd
commit-date: 2025-01-27
host: aarch64-apple-darwin
release: 1.86.0-nightly
LLVM version: 19.1.7
```
### Anything else?
@rustbot label A-edition-2024 | A-diagnostics,T-compiler,A-edition-2024,L-tail_expr_drop_order,I-edition-triaged | low | Minor |
2,816,406,179 | vscode | Editor GPU: The viewport rendering strategy stops working after > 20k lines | Testing #238846
Everything goes blank suddenly when scrolling down, even previously visible lines:
<img width="814" alt="Image" src="https://github.com/user-attachments/assets/bed89772-1d89-43b8-9f14-f8d00a60e1a0" />
| bug,editor-gpu | low | Minor |
2,816,416,438 | rust | Add a test ensuring all our targets set the features required by their ABI. | With https://github.com/rust-lang/rust/pull/136147, we emit a warning (and hopefully eventually a hard error) when a target does not enable all the target features required by the declared ABI, or when it enables a target feature incompatible with the declared ABI. Ideally, we'd ensure that all built-in targets pass this test, but that is non-trivial: we need to actually generate an LLVM TargetMachine for the target and ask LLVM about the enabled target features and then check that against our ABI compatibility list. I'm not sure how to best write such a test.
It could be a ui test compiling an empty program, with a revision for each target, but that seems annoying... I'd prefer something that iterates over all targets automatically, rather than having to list all targets in some ui test.
Cc @workingjubilee @rust-lang/wg-llvm | needs-triage | low | Critical |
2,816,425,021 | vscode | VSCode is probing files in a way that causes "fatal error C1056: cannot update the time date stamp field in 'file.obj'" errors |
Type: <b>Bug</b>
This only happens randomly. Something in VSCode is probing files within the source tree. I build using CMake+ninja and the CMake tools and C/C++ tools extensions on Windows. On occasion, the build just fails because VSCode is holding a handle to the generated file which prevents the operation.
Retrying the build typically succeeds (the issue is pretty random).
Building on a command-line window outside of VSCode still reproduces the problem. You don't have to use the extensions to build in order to reproduce the issue.
I suspect that if I were to move the "build" output directory outside of the source root this issue would disappear. This really seems to stem from some file-change monitoring system running within VSCode.
This started happening at some point in a recent update, say within the past 2-3 months. I've used VSCode with CMake like this for 4 years now, and it was never an issue in the past.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 9 7950X 16-Core Processor (32 x 4491)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|127.14GB (66.93GB free)|
|Process Argv|--crash-reporter-id 51152fc6-57cf-45c0-ba26-7ec617922cec|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (36)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-research|dev|1.2025.16001
xml|Dot|2.5.1
vscode-wasm|dts|1.4.1
gitlens|eam|16.2.1
x8664assembly|fre|0.1.0
copilot|Git|1.259.0
copilot-chat|Git|0.23.2
vscode-pull-request-github|Git|0.102.0
todo-tree|Gru|0.0.226
vscode-graphviz|joa|0.0.6
devinsights|Mic|2022.6.23-2
razzle-sources|Mic|0.0.1
tdpcode|Mic|10.2304.707
vscode-azdo-codereview|Mic|1.2023.331002
wavework|Mic|1.2024.124001
azure-pipelines|ms-|1.249.0
vscode-azureresourcegroups|ms-|0.10.3
csharp|ms-|2.61.28
vscode-dotnet-runtime|ms-|2.2.5
sarif-viewer|MS-|3.4.4
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
azure-account|ms-|0.13.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.23.4
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
vsliveshare|ms-|1.0.5948
llvm-ir|rev|1.0.5
windbg-debug|rez|0.3.4
rust-analyzer|rus|0.3.2282
llvmir|sun|0.1.31
cmake|twx|0.0.17
errorlens|use|3.22.0
markdown-all-in-one|yzh|3.6.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | triage-needed | low | Critical |
2,816,455,001 | vscode | terminal.integrated.suggest.providers should suggest all providers | We want this to give all valid suggestions:
<img width="656" alt="Image" src="https://github.com/user-attachments/assets/73b6195e-bd5f-46b0-a7e2-647ff5bd3b38" />
You can see how defaultProfile does this dynamically here:
<img width="406" alt="Image" src="https://github.com/user-attachments/assets/421cce4b-5cc9-4c52-8169-6468a108f347" />
https://github.com/microsoft/vscode/blob/8081bc3f04fb46dcf8755937107ecfc31f6ed96e/src/vs/platform/terminal/common/terminalPlatformConfiguration.ts#L396-L403
Note that `registerTerminalDefaultProfileConfiguration` is called whenever the value could possibly change:
https://github.com/microsoft/vscode/blob/8081bc3f04fb46dcf8755937107ecfc31f6ed96e/src/vs/workbench/contrib/terminal/browser/terminalProfileService.ts#L202-L206
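For illustration, a hedged sketch of what that dynamic re-registration could look like (the registry shape here is an assumption, not VS Code's actual API):
```ts
// Sketch: re-register the setting whenever the set of providers changes so
// the enum in settings completions always lists every valid provider.
interface ConfigRegistry {
  registerConfiguration(node: object): void;
}

function registerSuggestProvidersConfiguration(
  registry: ConfigRegistry,
  providerIds: string[],
): void {
  registry.registerConfiguration({
    id: 'terminal',
    properties: {
      'terminal.integrated.suggest.providers': {
        type: 'array',
        items: { type: 'string', enum: providerIds },
      },
    },
  });
}
```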
It may be necessary to register it early without the enum information and then, once extensions are ready, re-register the config. | feature-request,terminal-suggest | low | Minor |
2,816,460,264 | rustdesk | Rustdesk not working on AD Computer | ### Bug Description
Rustdesk software does not seem to work on computers in an Active Directory domain.
It says it is not ready and asks to check the network connection.
I should point out that I have this problem at other sites too, and the common factor is always an AD.
Can someone help me?
### How to Reproduce
This happens whenever an Active Directory is present on the network.
### Expected Behavior
Rustdesk software does not seem to work on computers in an Active Directory domain.
### Operating system(s) on local (controlling) side and remote (controlled) side
Windows 11
### RustDesk Version(s) on local (controlling) side and remote (controlled) side
1.2.3-2
### Screenshots
![Image](https://github.com/user-attachments/assets/31ad4150-89d1-42c7-ae4a-ba9b38797613)
### Additional Context
_No response_ | bug | low | Critical |
2,816,461,135 | ollama | ollama list command not listing installed models | ### What is the issue?
I installed Ollama, then changed the drive used to install models by following this article:
"https://medium.com/@dpn.majumder/how-to-deploy-and-experiment-with-ollama-models-on-your-local-machine-windows-34c967a7ab0e"
Then I installed deepseek r1-7b. I have since restarted and also tried reinstalling everything, but the `ollama list` command still doesn't show the installed models.
The models themselves are running correctly.
Any solutions?
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7 | bug | low | Major |
2,816,477,512 | react | [React 19] aria attributes on custom elements behave maybe incorrectly | ## Summary
In [React 18](https://codesandbox.io/p/sandbox/react-18-9r99s), passing `aria-disabled={true}` to a custom element serialized that value as "true". That seemed good and as far as I can tell worked as expected.
In [React 19](https://codesandbox.io/p/sandbox/intelligent-dawn-mv9p62), that same code adds the `aria-disabled` attribute, but doesn't set it to anything. That doesn't work as well. It appears to me (although I'm not certain) that it breaks accessibility for these custom elements. It definitely breaks `@testing-library/jest-dom`'s `toBeChecked` matcher with `aria-checked` (https://github.com/testing-library/jest-dom/blob/918b6fbcde10d4409ee8f05c6e4eecbe96a72b7a/src/to-be-checked.js#L31).
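A minimal sketch of the setup described above (the `my-button` element name and its type declaration are made up for illustration):
```tsx
import React from "react";

// Let TSX accept the hypothetical custom element.
declare global {
  namespace JSX {
    interface IntrinsicElements {
      "my-button": React.HTMLAttributes<HTMLElement>;
    }
  }
}

// React 18 serializes this as aria-disabled="true"; React 19 emits a bare
// aria-disabled attribute with no value.
export const Demo = () => <my-button aria-disabled={true}>Save</my-button>;
```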
For these aria attributes that require strings instead of booleans to function properly, is this an intended change? | React 19 | medium | Minor |
2,816,488,075 | vscode | Suggest widget position goes out of sync when the panel view is set to maximised | Testing #238901
- Open Terminal and trigger suggest widget
- Maximise the panel size
🐛 Suggest widget goes out of sync with position
![Image](https://github.com/user-attachments/assets/6a5e5e25-835c-47d9-ad69-db45e3f7ed34)
| bug,terminal-suggest | low | Minor |
2,816,503,503 | godot | Incorrect AABB when another instance is animated | ### Tested versions
- Reproducible in v4.4.beta1.official [d33da79d3] , v4.3.stable.official [77dcf97d8]
### System information
Godot v4.4.beta1 - Ubuntu 24.04.1 LTS 24.04 on X11 - X11 display driver, Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce GTX 1080 Ti (nvidia; 535.183.01) - 12th Gen Intel(R) Core(TM) i3-12100F (8 threads)
### Issue description
Objects disappear when they are near the edges of the frame.
When I import a glTF object, create two instances of it in a scene, and animate one of them, the other instance gets an incorrect AABB.
![Image](https://github.com/user-attachments/assets/fcf7ec43-14d1-4630-b568-77b939337dd4)
_In this image the character has missing elements. There is a second instance of the character on the scene that is playing an animation._
![Image](https://github.com/user-attachments/assets/916b1dff-e574-49bc-b1bd-6afe0291c78a)
_In this image the character looks correctly. The second instance of the character on the scene is not playing any animation._
### Steps to reproduce
I've prepared a minimal project. You can just open and run it to see the issue.
- It features a .glb object, imported 2 times in the main scene.
- Through code I'm playing an animation on one of them.
- The other one is on the side of the camera frame.
Several pieces are missing on the second instance.
### Minimal reproduction project (MRP)
[aabb_bug.zip](https://github.com/user-attachments/files/18578076/aabb_bug.zip) | bug,topic:import,topic:animation,topic:3d | low | Critical |
2,816,529,885 | godot | Keys get virtually "stuck" when using non-US modifiers (e.g. alt gr) | ### Tested versions
- Reproducible in v4.4.beta.custom_build [e5498020b]
Looking at the git blame, at least for the X11 backend, the core issue is probably very old.
### System information
Godot v4.4.beta (e5498020b) - KISS Linux Community #7 SMP PREEMPT_DYNAMIC Mon Dec 30 07:40:28 CET 2024 on Wayland - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - integrated AMD Radeon Vega 8 Graphics (RADV RAVEN) - AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx (8 threads)
### Issue description
A continuation of #100713 and #100879
The `keycode` field in `InputEventKey` is supposed to be "unshifted" but this is not the case and from testing this can't be without some other change.
At least the X11 backend disables only the ctrl and shift modifier, leaving the keycode shifted by other modifiers (such as alt gr), if pressed. This confuses the `Input` backend as some release events are never sent, leaving certain keys in the "keys pressed" set. Notably this affects the new editor popups.
This started when I was trying to debug a side effect of #101848 as, after disabling key shifting for that particular field, the numpad stopped working. I think that this is particularly exacerbated by the fact that the Wayland backend is currently single window only; the X11 backend unpresses any input when focusing any different native subwindow (e.g tooltips).
#### Hypothetical solutions
We could collect "pressed keys" by their "physical key", as @pcvonz originally implemented in PR #101131 but, as we learned in #99633, a "real" key event is not guaranteed to have a physical keycode associated. @bruvzg also noted on RC that we should probably not depend on `keycode` so this might be a solution, but we should see how to handle events that do not map cleanly on an american keyboard.
The core issue seems to be the fact that we have the numpad keys mapped "wrong", as in, `keycode` should, according to the docs, always be the number label, but then how would we represent the various "unshifted" actions (begin, up, page up, right, etc.)? If we were to change that I'm also afraid we'd have compatibility issues.
Supposing this is limited to the X11/Wayland backends, we could filter out all but the "numpad" modifier but there's technically no guarantee about which modifier is which so this would be quite unreliable. Maybe we could just filter alt gr out but then other layouts with yet other modifiers could still cause issues.
I'm not really sure about what would be a good solution; I don't have a very deep knowledge of godot's keyboard handling logic.
### Steps to reproduce
Single window mode helps to avoid spurious focus changes from tooltips and whatnot but this can be replicated in multiwindow mode as well.
1. Choose a layout with an alt-gr modifier (e.g. italian);
2. Open the script editor or any other text field;
3. Keep holding alt-gr;
4. Keep holding the `à` key (see https://en.wikipedia.org/wiki/Italian_keyboard_layout for reference), resulting in `#`;
5. Release alt-gr;
6. Now try to trigger one of the new editor popups, as reported in #100713.
### Minimal reproduction project (MRP)
N/A | bug,topic:input | low | Critical |
2,816,532,662 | pytorch | CPU Model compile not working for flexattention | ### 🐛 Describe the bug
Using a CPU-only setup (i.e., CUDA_VISIBLE_DEVICES=-1), running the following code throws an error.
```
from torch.utils.data import DataLoader, Dataset
import torch
from torch import nn
from torch.nn.attention.flex_attention import flex_attention, create_block_mask, _create_empty_block_mask, BlockMask

# flex_attention = torch.compile(flex_attention)


# Create a simple custom dataset
class MyDataset(Dataset):
    def __init__(self):
        self.data = torch.randn(100, 32)
        self.labels = torch.randint(0, 2, (100,))

    def __len__(self):
        return 10000000

    def __getitem__(self, idx):
        return self.data[idx % 100], self.labels[idx % 100]


class MyModel(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.loss = torch.nn.MSELoss()
        self.q_proj = nn.LazyLinear(128)
        self.k_proj = nn.LazyLinear(128)
        self.v_proj = nn.LazyLinear(128)

    def forward(self, x, y):
        x = x[None, None, :, :]
        r = x
        q = self.q_proj(x)
        k = self.k_proj(x)
        v = self.v_proj(x)
        r = flex_attention(q, k, v)
        return self.loss(torch.sum(r[0, 0, :, :], dim=-1), y)


dataset = MyDataset()
data_loader = DataLoader(dataset, batch_size=3, shuffle=True)

model = MyModel()
model.compile()

for x, y in data_loader:
    print(model(x, y))
    break
```
Error trace:
```
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Traceback (most recent call last):
File "/data/mizhou/workspace/torch_p13n_embedding/py/tmp/test_flex_attn.py", line 44, in <module>
print(model(x, y))
^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1737, in _wrapped_call_impl
return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 678, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1496, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1074, in call_function
return super().call_function(target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 310, in call_function
return target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
IndexError: tuple index out of range
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.226-214.879.amzn2.x86_64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3582.060
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0+cu124
[pip3] torchmetrics==1.0.3
[pip3] torchrec==1.1.0+cu124
[pip3] triton==3.2.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torchaudio 2.3.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi | oncall: cpu inductor | low | Critical |
2,816,536,507 | yt-dlp | player.pl | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Poland
### Example URLs
https://player.pl/programy-online,2/fakty-odcinki,37703/odcinek-9889,S45E9889,10977228
### Provide a description that is worded well enough to be understood
Please add the ability to download videos from the site https://player.pl
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-F', 'https://player.pl/programy-online,2/fakty-odcinki,37703/odcinek-9889,S45E9889,10977228']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [3b4531934] (zip)
[debug] Python 3.11.2 (CPython aarch64 64bit) - Linux-6.6.62+rpt-rpi-v8-aarch64-with-glibc2.36 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.36)
[debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0
[debug] Optional libraries: Cryptodome-3.11.0, certifi-2022.09.24, requests-2.28.1, sqlite3-3.40.1, urllib3-1.26.12
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://player.pl/programy-online,2/fakty-odcinki,37703/odcinek-9889,S45E9889,10977228
[generic] odcinek-9889,S45E9889,10977228: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] odcinek-9889,S45E9889,10977228: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://player.pl/programy-online,2/fakty-odcinki,37703/odcinek-9889,S45E9889,10977228
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1637, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1772, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://player.pl/programy-online,2/fakty-odcinki,37703/odcinek-9889,S45E9889,10977228
``` | site-request,DRM,triage | low | Critical |
2,816,537,140 | electron | `openDevTools` window mode is inconsistent | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.0
### What operating system(s) are you using?
Windows
### Operating System Version
Edition Windows 11 Enterprise Version 24H2 Installed on 2024-10-04 OS build 26100.2894 Experience Windows Feature Experience Pack 1000.26100.36.0
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
The location of opening dev tools is inconsistent.
These all open dev tools:
```js
// example 1
app.on('web-contents-created', (event, contents) => {
contents.openDevTools();
})
// example 2
app.on('web-contents-created', (event, contents) => {
contents.on('dom-ready', ()=>{contents.openDevTools();});
})
// example 3
app.on('web-contents-created', (event, contents) => {
contents.openDevTools({mode: ""});
})
```
My naive expectation is that these would all open the dev tools the same way - either detached, docked, or undocked.
### Actual Behavior
example 1 opens the dev tools in "detached" mode, which is the documented default.
example 2 opens the dev tools in the same mode it was last opened in.
example 3 opens the dev tools in the same mode it was last opened in. This is mentioned in the documentation, but `""` is not a type-correct value for "mode".
https://github.com/electron/electron/blob/0e388bce3eb27b67391eb0f75f268608550fbcca/docs/api/web-contents.md?plain=1#L1907-L1910
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/windows,bug :beetle:,34-x-y | low | Critical |
2,816,539,865 | flutter | Run `Linux analyze` in the Merge Queue | Today, 2 PRs [collided](https://ci.chromium.org/ui/p/flutter/builders/prod/Linux%20analyze/22976/overview) mid-air and caused the tree to go red ([PR1](https://github.com/flutter/flutter/pull/162279), [PR2](https://github.com/flutter/flutter/pull/161911)).
This kind of collision is preventable by running some sanity check tests in the MQ (e.g. `Linux analyze`). Such a test would've caught the error and prevented the 2nd PR from landing; keeping the tree green. | team-infra,P1,monorepo | medium | Critical |
2,816,548,692 | vscode | Suggestions are not triggered on backspace | Testing #238901
https://github.com/user-attachments/assets/f66911c5-c412-4c65-b82b-a7f6b6b0283c | feature-request,terminal-suggest | low | Minor |
2,816,555,974 | vscode | A lot of the extra detail boxes just contain the command name again | Testing #238901
![Image](https://github.com/user-attachments/assets/4ca2abdf-2e6c-4cee-bf03-c49695431f7e) | feature-request,terminal-shell-pwsh,terminal-suggest | low | Minor |
2,816,565,729 | kubernetes | HPA wrongly assumes that terminated pods have an utilization of 100% | ### What happened?
A pod that terminated was considered by the HPA controller to be at its target utilization.
The controller logic ([1](https://github.com/kubernetes/kubernetes/blob/ed9572d9c7733602de43979caf886fd4092a7b0f/pkg/controller/podautoscaler/replica_calculator.go#L106-L120), [2](https://github.com/kubernetes/kubernetes/blob/ed9572d9c7733602de43979caf886fd4092a7b0f/pkg/controller/podautoscaler/replica_calculator.go#L211-L223)) is such that, while scaling up, it conservatively assumes that pods for which it couldn't get a utilization metric from the metrics API are at their target utilization. (On scale down, it conservatively assumes their utilization is 0.)
### What did you expect to happen?
I expected the controller to assume that a terminated pod has a utilization of 0.
This is [already correctly handled for pods that terminated with a failure](https://github.com/kubernetes/kubernetes/blob/ed9572d9c7733602de43979caf886fd4092a7b0f/pkg/controller/podautoscaler/replica_calculator.go#L383), but the case where a pod terminated successfully isn't handled.
### How can we reproduce it (as minimally and precisely as possible)?
Create a Deployment with pods that terminate (without a failure) and observe that an HPA targeting this Deployment will assume that the terminated pods are at target utilization.
### Anything else we need to know?
Handling the case where the pod is terminated normally [here](https://github.com/kubernetes/kubernetes/blob/ed9572d9c7733602de43979caf886fd4092a7b0f/pkg/controller/podautoscaler/replica_calculator.go#L383) will fix this.
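For illustration, a minimal sketch of the proposed change (the helper name is made up; the real logic lives inline in `groupPods` in `replica_calculator.go`):

```go
package podautoscaler

import v1 "k8s.io/api/core/v1"

// shouldIgnorePod reports whether a pod should be excluded from the HPA's
// utilization calculation entirely. Today only deleting/failed pods are
// excluded; also excluding succeeded pods stops their missing metrics from
// being read as "at target utilization" during scale up.
func shouldIgnorePod(pod *v1.Pod) bool {
	return pod.DeletionTimestamp != nil ||
		pod.Status.Phase == v1.PodFailed ||
		pod.Status.Phase == v1.PodSucceeded // proposed addition
}
```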
### Kubernetes version
<details>
```console
$ kubectl version
v1.29.10-gke.1280000
```
</details>
### Cloud provider
<details>
GKE
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux rodete"
NAME="Debian GNU/Linux rodete"
VERSION_CODENAME=rodete
```
</details>
### Install tools
<details>
N/A
</details>
### Container runtime (CRI) and version (if applicable)
<details>
N/A
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
None
</details>
| kind/bug,sig/autoscaling,needs-triage | low | Critical |
2,816,628,151 | rustdesk | Remote Desktop Window Freezes with Artifacts | ### Bug Description
At random times it will freeze with artifacts. (Only the remote desktop window)
I have to close the window from the taskbar.
### How to Reproduce
Connect to a remote desktop
### Expected Behavior
Not to freeze and see artifacts
### Operating system(s) on local (controlling) side and remote (controlled) side
Windows 11 24H2
### RustDesk Version(s) on local (controlling) side and remote (controlled) side
1.3.7
### Screenshots
![Image](https://github.com/user-attachments/assets/7e1a0ba2-43ad-464c-be84-fa8d82497f6c)
### Additional Context
_No response_ | bug | low | Critical |
2,816,643,570 | iptv | Add: RadioFannJordanStudio.jo | ### Channel ID (required)
RadioFannJordanStudio.jo
### Stream URL (required)
http://45.63.116.205/hls2/stream1.m3u8
### Quality
1080p
### Label
Not 24/7
### Timeshift
_No response_
### HTTP User Agent
_No response_
### HTTP Referrer
_No response_
### Notes
_No response_
### Contributing Guide
- [x] I have read [Contributing Guide](https://github.com/iptv-org/iptv/blob/master/CONTRIBUTING.md) | approved,streams:add | low | Minor |
2,816,647,917 | terminal | Emulated terminal (Git Bash, CMD, PowerShell, IDE) freeze when inserting a symbol (@,-,`,',",...) when Using Terminal | ### Windows Terminal version
1.21.3231.0
### Windows build number
10.0.26100.0
### Other Software
Git bash
Powershell 7.4.6
CMD
### Steps to reproduce
1. Open terminal.
2. Open a tab that emulates an installed terminal (Git Bash, CMD, PowerShell, IDE).
3. Try to type a symbol on the keyboard (for example, using ssh to connect to user@host).
4. The terminal freezes.
5. After about 10 seconds it prints the symbol and continues until a new symbol is pressed.
### Expected Behavior
The terminal should not freeze while typing.
### Actual Behavior
When I type something that uses symbols, the emulated terminal (Git Bash, CMD, PowerShell, IDE) freezes; however, I can still go to Settings or interact with the window without issue | Issue-Bug,Needs-Triage,Needs-Attention | low | Minor |
2,816,651,043 | PowerToys | Custom VPN | ### Description of the new feature / enhancement
Hello,
I have other types of VPN which are not supported by the built-in Windows 11 VPN. I really wish there were a way to either link some executables so I can reach unsupported protocols, or just add more protocols to the built-in VPN.
### Scenario when this would be used?
I don't want to open an app that loads forever just to connect to a VPN; I just want to click on my command center and join a VPN, just like Wi-Fi.
### Supporting information
![Image](https://github.com/user-attachments/assets/3f17409e-fe43-411a-8579-8d7363ee1761)
I meant this VPN button | Needs-Triage | low | Minor |
2,816,651,905 | vscode | Terminal Suggest Widget is cutoff when invoked from a maximized terminal | Testing #238901
Repro Steps:
1. Close all other panels in vscode
2. Expand the terminal to be the only visible panel inside vscode and invoke the suggest widget
🐛
![Image](https://github.com/user-attachments/assets/5f647362-6337-4abd-8c81-b1de68213942) | bug,terminal-suggest | low | Minor |
2,816,676,513 | PowerToys | Add App Exclusion Option for Shortcut Remappings | ### Description of the new feature / enhancement
It would be incredibly helpful to have the option to exclude specific apps from being affected by certain shortcut remappings. For example, if a global shortcut remap conflicts with the functionality of a particular app, users could specify that this app should ignore the remap entirely.
### Scenario when this would be used?
There are scenarios where a globally remapped shortcut interferes with app-specific features or workflows. Allowing exclusions would provide greater flexibility and avoid these conflicts, enhancing both user experience and productivity.
### Supporting information
I use compact keyboard layouts (<60%) to minimize finger and hand travel while typing. To achieve this, I’ve created custom ‘layers’ around the 'home row keys'. For example, I’ve remapped **Ctrl + L** to produce the _%_ character.
This setup is incredibly efficient for my workflow since I code and type extensively. However, I also work in the digital audio workstation Ableton, where **Ctrl + L** is a frequently used shortcut essential to the workflow. Unfortunately, this shortcut cannot be remapped within Ableton itself.
Currently, to avoid conflicts, I have to disable PowerToys Keyboard Manager entirely whenever I use Ableton. While this workaround is functional, it’s far from ideal, as switching between apps becomes tedious—my custom shortcuts are unavailable while Keyboard Manager is disabled.
Adding the ability to exclude specific apps from shortcut remappings would solve this issue elegantly, enabling seamless transitions between different workflows without compromising productivity.
| Needs-Triage | low | Minor |
2,816,690,663 | pytorch | [binary builds] Anaconda. Remove dependency on conda libuv module in MacOS and Windows nightly builds | ### 🐛 Describe the bug
Related to: https://github.com/pytorch/pytorch/issues/138506
In Windows and macOS wheel builds we use libuv as a build-time dependency.
On Windows we use libuv during the nightly build:
.ci/pytorch/windows/condaenv.bat
On macOS:
.ci/wheel/build_wheel.sh
.github/requirements/conda-env-macOS-ARM64
This package is available via conda (https://anaconda.org/anaconda/libuv), but not via pip install.
As the first step in refactoring the workflows that depend on conda, we want to change the code to use a different install method for this package. Maybe installing via download (https://github.com/libuv/libuv#downloading)? If possible, one should try to use an installation method that works the same way on Windows and macOS.
### Versions
2.7.0 | oncall: releng,topic: binaries | low | Critical |
2,816,690,767 | rust | Missed `match` optimization of riscv | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I was code golfing different implementations for a function that "inverts" (rotates by 2) an enum with discriminants `1`, `2`, `4`, `8` (i.e. `f(1) = 4`), the obvious implementation being this:
```rust
pub enum Side {
Top = 1 << 0,
Right = 1 << 1,
Bottom = 1 << 2,
Left = 1 << 3,
}
pub fn opposite_match(x: Side) -> Side {
use Side::*;
match x {
Top => Bottom,
Right => Left,
Bottom => Top,
Left => Right,
}
}
```
I expected to see this happen: on riscv `opposite_match` produces the optimal code, or at least a reasonable one.
Instead, this happened: compiler generates the following asm:
```asm
opposite_match:
addi a0, a0, -1
slli a0, a0, 24
srai a0, a0, 24
lui a1, %hi(.Lswitch.table.opposite_match)
addi a1, a1, %lo(.Lswitch.table.opposite_match)
add a0, a1, a0
lbu a0, 0(a0)
example::table_lookup::T::h07c70895e308d45b:
.ascii "\004\b\004\001\004\004\004\002"
```
The problem is the `slli` (shift left logical immediate) and `srai` (shift right arithmetic immediate) instructions -- together they are a no-op, since `a0 <= 7` at that point and `7 << 24 < 1.rotate_right(1)`.
The underlying issue is that LLVM at some point replaces indexing by `u8` with indexing by `u32`, inserting a `sext` in the process; the `sext` is not optimized out afterwards and is later lowered to `slli`+`srai`. LLVM issue: https://github.com/llvm/llvm-project/issues/124841.
If you write the lookup table by hand compiler generates better asm:
```rust
pub fn table_lookup(x: Side) -> Side {
static T: [Side; 8] = [
Side::Bottom, // <--
Side::Left, // <--
Side::Bottom,
Side::Top, // <--
Side::Bottom,
Side::Bottom,
Side::Bottom,
Side::Right, // <--
];
T[x as usize - 1]
}
```
```
example::table_lookup::h8d56712109e87652:
lui a1, %hi(example::table_lookup::T::h07c70895e308d45b)
addi a1, a1, %lo(example::table_lookup::T::h07c70895e308d45b)
add a0, a1, a0
lbu a0, -1(a0)
ret
example::table_lookup::T::h07c70895e308d45b:
.ascii "\004\b\004\001\004\004\004\002"
```
There are no shifts and the `-1` is merged into the `lbu` (load byte unsigned).
[Godbolt link](https://godbolt.org/z/vGjGT9oW1).
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
rustc version:
```
1.86.0-nightly (2025-01-27 2f348cb7ce4063fa4eb4)
``` | A-LLVM,I-slow,A-codegen,C-bug,needs-triage | low | Critical |
2,816,706,974 | puppeteer | [Bug]: Puppeteer produces untagged PDF starting from v.23.8.0 | ### Minimal, reproducible example
```TypeScript
import puppeteer from "puppeteer";

// placeholders for illustration
const html = "<h1>Hello</h1>";
const outputPath = "out.pdf";

const browser = await puppeteer.launch({ headless: "shell" });
const page = await browser.newPage();
await page.setContent(html);
await page.pdf({ path: outputPath });
await browser.close();
```
### Background
We are doing a scheduled upgrade of 3rd-party dependencies for our product and noticed an issue with PDF generation when upgrading the puppeteer library. The resulting PDF must be tagged by default, which has been the case for a while (I believe since Chrome 81). After trying different versions we narrowed it down: puppeteer 23.7.1 still produces a tagged PDF, but the very next version, 23.8.0, produces a PDF which is not accessible. We tried everything up through the current version 24.1.1 and none of them produce a tagged PDF.
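Possibly relevant: recent Puppeteer versions expose a `tagged` flag in `PDFOptions`; if it exists in the version in use, forcing it on explicitly might be a workaround (untested sketch):

```TypeScript
await page.pdf({ path: outputPath, tagged: true });
```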
### Expectation
Must produce tagged PDF by default.
### Reality
Starting from version 23.8.0 doesn't produce tagged PDF.
### Puppeteer configuration file (if used)
```TypeScript
```
### Puppeteer version
23.8.0 through 24.1.1
### Node version
Node 20
### Package manager
npm
### Package manager version
9.6.7
### Operating system
Windows | bug | low | Critical |
2,816,711,472 | vscode | Node.js fetch failing with mitm proxy and basic auth | > @chrmarti Yes that appears to be spot on
>
> <details>
>
> <summary>GitHub Copilot Chat Diagnostics output</summary>
>
> ## GitHub Copilot Chat
>
> - Extension Version: 0.24.2025012802 (prod)
> - VS Code: vscode/1.97.0-insider
> - OS: Mac
>
> ## Network
>
> User Settings:
> ```json
> "http.proxy": "http://localhost:9999",
> "github.copilot.advanced.debug.useElectronFetcher": true,
> "github.copilot.advanced.debug.useNodeFetcher": false,
> "github.copilot.advanced.debug.useNodeFetchFetcher": true
> ```
>
> Connecting to https://api.github.com:
> - DNS ipv4 Lookup: 140.82.114.5 (11 ms)
> - DNS ipv6 Lookup: ::ffff:140.82.114.5 (5 ms)
> - Proxy URL: http://localhost:9999 (0 ms)
> - Proxy Connection: 407 Proxy Authentication Required
> proxy-authenticate: Basic realm="mitmproxy" (34 ms)
> - Electron fetch (configured): HTTP 200 (82 ms)
> - Node.js https: HTTP 200 (80 ms)
> - Node.js fetch: Error (23 ms): fetch failed
> Request was cancelled.
> Proxy response (407) !== 200 when HTTP Tunneling
> - Helix fetch: Error (174 ms): Miscellaneous failure (see text): no credential for 1221457B-73E4-463F-9A4A-B270AD5C8FF3
>
> Connecting to https://api.githubcopilot.com/_ping:
> - DNS ipv4 Lookup: 140.82.114.22 (27 ms)
> - DNS ipv6 Lookup: ::ffff:140.82.114.22 (2 ms)
> - Proxy URL: http://localhost:9999 (1 ms)
> - Proxy Connection: 407 Proxy Authentication Required
> proxy-authenticate: Basic realm="mitmproxy" (31 ms)
> - Electron fetch (configured): HTTP 200 (74 ms)
> - Node.js https: HTTP 200 (88 ms)
> - Node.js fetch: Error (24 ms): fetch failed
> Request was cancelled.
> Proxy response (407) !== 200 when HTTP Tunneling
> - Helix fetch: Error (12 ms): Miscellaneous failure (see text): no credential for 1221457B-73E4-463F-9A4A-B270AD5C8FF3 (negative cache)
>
> ## Documentation
>
> In corporate networks: [Troubleshooting firewall settings for GitHub Copilot](https://docs.github.com/en/copilot/troubleshooting-github-copilot/troubleshooting-firewall-settings-for-github-copilot).
> </details>
>
> Running a proxy, but not in wsl or any vs code remote, so I'm not sure why electron wouldn't be enabled, per: https://github.com/devm33/vscode/blob/419205a8ff1252fc6a3a8e085b8645f2ab51a0ec/extensions/github-authentication/src/node/fetch.ts#L6-L12
>
> The `github-authentication` extension runs in the standard extension host right?
_Originally posted by @devm33 in [#207867](https://github.com/microsoft/vscode/issues/207867#issuecomment-2619408420)_ | bug,proxy | low | Critical |
2,816,715,796 | pytorch | Use of broadcast_shapes() errors attempting to guard on symbolic nested int | Repro:
```python
import torch
torch._dynamo.config.capture_dynamic_output_shape_ops = True
torch._dynamo.config.capture_scalar_outputs = True
nt = torch.nested.nested_tensor([
torch.randn(2),
torch.randn(3),
torch.randn(4),
], layout=torch.jagged, device="cuda")
@torch.compile(fullgraph=True)
def f(t, mask):
nt = torch.nested.masked_select(t, mask)
return torch.where(nt > 0., torch.ones_like(nt), torch.zeros_like(nt))
t = torch.randn(3, 5)
mask = torch.randint(0, 2, t.shape, dtype=torch.bool)
output = f(t, mask)
```
Problem: `torch.nested.masked_select()` constructs an NJT in-graph, which invokes a special path for constructing a new symbolic nested int. Attempting to guard on this (due to usage of `where()` -> `broadcast_shapes()` -> `Ne(s1, 1)` guard on the symbolic nested int) gives this error:
```
...
AssertionError: s0 (could be from ['<ephemeral: intermediate_offsets_or_lengths>']) not in {s0: []}. If this assert is failing, it could be due to the issue described in https://github.com/pytorch/pytorch/pull/90665
```
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: nestedtensor | low | Critical |
2,816,730,937 | PowerToys | Explorer shortcut for adding/removing a directory from PATH | ### Description of the new feature / enhancement
When in a directory in Explorer, right clicking on empty space should reveal an option for adding the current directory to PATH if it isn't there already. If the directory already exists in PATH, the option should instead be a button for removing the current directory from PATH.
### Scenario when this would be used?
When installing a new program (like Postgres), it's common to not have useful tools available in PowerShell because the proper directories aren't added to PATH. Having a button in Explorer to add/remove these directories would make dealing with this issue easier.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,816,754,686 | flutter | `Mac ruby` failing due to ActiveSupport issue | ### Type of Request
bug
### Infrastructure Environment
cocoon
### What is happening?
```
/opt/s/w/ir/x/w/cocoon/cipd_packages/ruby/build/bin/darwin_ruby/lib/ruby/gems/3.1.0/gems/activesupport-7.0.8/lib/active_support/logger_thread_safe_level.rb:12:in `<module:LoggerThreadSafeLevel>': uninitialized constant ActiveSupport::LoggerThreadSafeLevel::Logger (NameError)
Logger::Severity.constants.each do |severity|
```
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac%20ruby/1197/overview
Rails tracking issue: https://github.com/rails/rails/issues/54260
I'm assuming we need to pin the version of `concurrent-ruby` to `1.3.4` in https://github.com/flutter/cocoon/blob/main/cipd_packages/ruby/third_party/ruby_ship/ruby_ship_build.sh#L246
### Steps to reproduce
Run test
### Expected results
Expect test to pass | team-infra | low | Critical |
2,816,754,908 | flutter | [SwiftPM] Remove warnings exception for google_sign_in_ios plugin | ### Background
Xcode produces warnings for the AppAuth-iOS Swift package (see https://github.com/openid/AppAuth-iOS/issues/865), which is a dependency of the google_sign_in_ios plugin.
We cannot suppress these warnings when using Swift Package Manager. As a result, we added the google_sign_in_ios plugin to the Xcode warnings exception list (see https://github.com/flutter/flutter/issues/146904).
### Work
Once the AppAuth-iOS warnings are addressed (see https://github.com/openid/AppAuth-iOS/pull/888), remove google_sign_in_ios from this exception list. | platform-ios,platform-mac,p: google_sign_in,P3,a: plugins,team-ios,triaged-ios | low | Minor |
2,816,767,650 | vscode | Disable telemetry reporting via GPO | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
You asked for suggestions on what enterprise needs are regarding future GPO settings.
1. [disable telemetry reporting](https://code.visualstudio.com/docs/supporting/FAQ#_how-to-disable-telemetry-reporting) and other privacy related settings.
| triage-needed | low | Minor |
2,816,781,381 | transformers | Mixed-precision with `torch.autocast` is broken for many models when using `attn_implementation="flash_attention_2"` | ### System Info
transformers==4.48.1
python=3.11.11
### Who can help?
@ArthurZucker
### Expected behavior
Mixed-precision training via `torch.autocast` is broken for most models inspired by the HF Llama code (which is a lot of models) when using `attn_implementation="flash_attention_2"`, and potentially not working as intended in general.
**Snippet to reproduce on `transformers==4.48.1`**:
```python
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_default_device("cuda:0")

model_name = "meta-llama/Llama-3.2-3B"  # many others, e.g. "allenai/OLMo-2-1124-7B"
inputs = AutoTokenizer.from_pretrained(model_name)("I ❤️ 🤗", return_tensors="pt")
model = AutoModel.from_pretrained(model_name, attn_implementation="flash_attention_2", torch_dtype=torch.float32)

with torch.autocast("cuda", dtype=torch.bfloat16):
    # errors with -> RuntimeError: FlashAttention only support fp16 and bf16 data type
    outputs = model(**inputs)
```
Indeed after calling `AutoModel.from_pretrained(model_name, attn_implementation="flash_attention_2", torch_dtype=torch.float32)`, we get
```
Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in LlamaModel is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. ...`
```
but the snippet fails even though we do use `torch.autocast` as suggested. For mixed-precision training, we do actually want to load the weights in `float32`.
Concretely, the source is two different issues:
- The common llama-inspired implementation of RMSNorm silently upcasts to `float32` even within an `autocast` context, which is propagated from the `q_norm` / `v_norm` up until passing the projections to the attention function (FA2 fails here)
- The RMSNorm issue has been discussed in these related issues: [here](https://github.com/NVIDIA/TransformerEngine/issues/1132#issuecomment-2307468268) and [here](https://github.com/huggingface/transformers/issues/30236) and [here](https://github.com/huggingface/transformers/issues/33133)
- We have this comment in the FlashAttention integration about `RMSNorm` usually handling silent upcasting correctly but it seems at some point this broke: https://github.com/huggingface/transformers/blob/ec7afad60909dd97d998c1f14681812d69a15728/src/transformers/integrations/flash_attention.py#L32-L36
- The `cos` and `sin` position embeddings for RoPE are in `float32` even within an `autocast` context, which will again silently upcast the `query_states`/`key_states` to `float32` before passing to the attention function:
https://github.com/huggingface/transformers/blob/ec7afad60909dd97d998c1f14681812d69a15728/src/transformers/models/llama/modeling_llama.py#L275
- the reason they are in `float32` is that `cos` and `sin` are created in e.g. `LlamaModel`: https://github.com/huggingface/transformers/blob/ec7afad60909dd97d998c1f14681812d69a15728/src/transformers/models/llama/modeling_llama.py#L571
where the `hidden_states` come from the `nn.Embedding` which is never autocasted by `torch.autocast`. So:
https://github.com/huggingface/transformers/blob/ec7afad60909dd97d998c1f14681812d69a15728/src/transformers/models/llama/modeling_llama.py#L141
does not work as intended (?) because the input at that point has not been autocasted yet.
One fix is to remove the silent upcasting of the *output* to `float32` in `RMSNorm` if the input is `bfloat16`, and to directly cast `cos` and `sin` to the `torch.get_autocast_dtype` if in autocast.
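As a rough illustration of the `cos`/`sin` part, a minimal sketch (the helper name is made up, and `torch.get_autocast_dtype` is assumed to be available in the installed PyTorch):

```python
import torch

def maybe_match_autocast_dtype(cos, sin, device_type="cuda"):
    # Under torch.autocast the projections produce bf16/fp16 activations,
    # but cos/sin derived from fp32 embeddings stay fp32 and silently
    # upcast query/key states in apply_rotary_pos_emb, which then trips
    # FA2's dtype check.
    if torch.is_autocast_enabled():
        target = torch.get_autocast_dtype(device_type)
        return cos.to(target), sin.to(target)
    return cos, sin
```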
[In this discussion](https://github.com/huggingface/transformers/pull/25598#discussion_r1338745787) it seems that this might come with some issues, so a different solution might be needed (I am not quite sure of the exact reasons for the potential issues though).
It's important to note that through all this silent upcasting, we're probably (I haven't benchmarked though) using a lot of extra memory when doing mixed-precision training (regardless of whether we use `attn_implementation="flash_attention_2"` or not). | bug | low | Critical |
2,816,782,717 | flutter | `Mac ruby` failing due to missing command line tools | ### Type of Request
bug
### Infrastructure Environment
cocoon
### What is happening?
```
Error: Xcode alone is not sufficient on Sonoma.
Install the Command Line Tools:
xcode-select --install
```
https://ci.chromium.org/ui/p/flutter/builders/try/Mac_arm64%20ruby/45/infra
### Steps to reproduce
Run test
### Expected results
Expect test to pass | team-infra | low | Critical |
2,816,787,602 | TypeScript | Missing computed properties in declaration emit | With #60052, we now emit qualified names rather than generate an index signature for computed names in classes; however, this now puts us in an awkward place where regardless of errors, declaration emit diverges on `isolatedDeclarations`.
Playground Links:
* [5.7, without `isolatedDeclarations`](https://www.typescriptlang.org/play/?isolatedDeclarations=false&ts=5.7.3#code/KYDwDg9gTgLgBAYwgOwM7wOZWMGBLZDALjgFdk8BHU4OVATwFsAjCAGzgF44BlJ1tgAoAlAG4AUKEixEKdLPIwS5KjTr92XXhqFjxk8NHgI2AQ1So4AWXoBhMxYCMcAN7i4HuAG0sOfIQBdEnQoAgwtACIAd2g2ABMIiU9vJEUguGRSFmAoLQAWACYJAF99KSM4YEzGOAAFKAgwHJh6ADlTRmBLN2TfXDDIvv8MCIAad09U5HhuCKmYMfFSg2ljB0sbe3NUAtcJjy96xua2jq6AOiGw9JCB2ZioeMT97yOm2FPO1HP59OrmHL5IpLMqGGRsXBwMANMAAQmCMFChAkKwqJm21js6wAzHtkl5oY10iIuAA+OAANwgeDiWhJnHJLlKpSAA)
* [Nightly, without `isolatedDeclarations`](https://www.typescriptlang.org/play/?isolatedDeclarations=false&ts=5.8.0-dev.20250128#code/KYDwDg9gTgLgBAYwgOwM7wOZWMGBLZDALjgFdk8BHU4OVATwFsAjCAGzgF44BlJ1tgAoAlAG4AUKEixEKdLPIwS5KjTr92XXhqFjxk8NHgI2AQ1So4AWXoBhMxYCMcAN7i4HuAG0sOfIQBdEnQoAgwtACIAd2g2ABMIiU9vJEUguGRSFmAoLQAWACYJAF99KSM4YEzGOAAFKAgwHJh6ADlTRmBLN2TfXDDIvv8MCIAad09U5HhuCKmYMfFSg2ljB0sbe3NUAtcJjy96xua2jq6AOiGw9JCB2ZioeMT97yOm2FPO1HP59OrmHL5IpLMqGGRsXBwMANMAAQmCMFChAkKwqJm21js6wAzHtkl5oY10iIuAA+OAANwgeDiWhJnHJLlKpSAA)
* [Nightly, with `isolatedDeclarations`](https://www.typescriptlang.org/play/?isolatedDeclarations=true&ts=5.8.0-dev.20250128#code/KYDwDg9gTgLgBAYwgOwM7wOZWMGBLZDALjgFdk8BHU4OVATwFsAjCAGzgF44BlJ1tgAoAlAG4AUKEixEKdLPIwS5KjTr92XXhqFjxk8NHgI2AQ1So4AWXoBhMxYCMcAN7i4HuAG0sOfIQBdEnQoAgwtACIAd2g2ABMIiU9vJEUguGRSFmAoLQAWACYJAF99KSM4YEzGOAAFKAgwHJh6ADlTRmBLN2TfXDDIvv8MCIAad09U5HhuCKmYMfFSg2ljB0sbe3NUAtcJjy96xua2jq6AOiGw9JCB2ZioeMT97yOm2FPO1HP59OrmHL5IpLMqGGRsXBwMANMAAQmCMFChAkKwqJm21js6wAzHtkl5oY10iIuAA+OAANwgeDiWhJnHJLlKpSAA)
```ts
export const greeting: unique symbol = Symbol();
export const count: unique symbol = Symbol();
export class MyClass1 {
[greeting]: string = "world";
[count]: number = 42;
}
export enum PropertyNames {
greeting = "greeting",
count = "count",
}
export class MyClass2 {
[PropertyNames.greeting]: string = "world";
[PropertyNames.count]: number = 42;
}
export let prop!: string;
export class MyClass3 {
[prop]: () => void = () => {}
}
```
Compared to `--isolatedDeclarations false`, the declaration emit lacks the expected contents in `MyClass1` and `MyClass2`:
```diff lang=ts
export declare const greeting: unique symbol;
export declare const count: unique symbol;
export declare class MyClass1 {
- [greeting]: string;
- [count]: number;
}
export declare enum PropertyNames {
greeting = "greeting",
count = "count"
}
export declare class MyClass2 {
- [PropertyNames.greeting]: string;
- [PropertyNames.count]: number;
}
export declare let prop: string;
export declare class MyClass3 {
[prop]: () => void;
}
```
It is surprising that `prop` is emitted regardless of `--isolatedDeclarations`, but not the other contents. | Suggestion,In Discussion | low | Critical |
2,816,800,806 | go | os/exec: LookPath considers paths containing ":" to be absolute on windows | os/exec.LookPath considers all paths containing `:` to be absolute, as one use of the colon is to indicate drive letters. Unfortunately it is also used for "alternate data streams" on NTFS filesystems, which allow attaching additional data to a file, accessed using a special suffix.
This can result in LookPath unexpectedly returning a file (with an ADS) in the current directory.
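For context, a paraphrased sketch (not a verbatim quote) of the check in `os/exec/lp_windows.go` that causes this:

```go
// Any candidate containing a path separator *or* a colon skips the PATH
// walk and is resolved directly, relative to the current directory:
if strings.ContainsAny(file, `:\/`) {
	f, err := findExecutable(file, exts)
	if err == nil {
		// so "prog.exe:stream" can match a file's alternate data
		// stream in the current directory
		return f, nil
	}
	return "", &Error{file, err}
}
```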
This is a PUBLIC track security issue per our security policy.
Thanks to Juho Forsén for reporting this issue. | Security,OS-Windows | low | Minor |
2,816,802,325 | electron | BaseWindow.contentView.addChildView(WebContentsView) is not rendering a page properly | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.1
### What operating system(s) are you using?
Windows
### Operating System Version
Version 10.0.19045 Build 19045
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
This is an excerpt of my entry file
```js
import { app, screen, BaseWindow, WebContentsView } from 'electron';
app.whenReady().then(() => {
const
{ workAreaSize } = screen.getPrimaryDisplay()
,
parentView = new BaseWindow({
width: workAreaSize.width,
height: workAreaSize.height,
})
,
mainPage = new WebContentsView()
;
if (parentView) parentView.maximize();
parentView.contentView.addChildView(mainPage); // [FAILING] # fails even with the line below commented out (avoiding any clash between the two approaches); only the line below works:
parentView.contentView = mainPage; // [PASSING] # temporary workaround (read "Additional Information" in this issue)
mainPage.webContents.loadFile('index.html');
mainPage.webContents.openDevTools();
parentView.on('closed', () => {
mainPage.webContents.close();
});
})
```
This is what I have expected to see:..
![Image](https://github.com/user-attachments/assets/64197452-e4ae-4feb-877d-becd0423ae27)
### Actual Behavior
This is what I actually get, i.e. the web page is only visible in the debugger's outliner, but not actually rendered in the WebContentsView:
![Image](https://github.com/user-attachments/assets/34026b2f-1760-4c54-9544-f10eb7ef6cb1)
### Testcase Gist URL
_No response_
### Additional Information
Possibly related to [the issue](https://github.com/electron/electron/issues/43255) recently raised. However, it suggests a possible, although temporary, workaround (see diff below): the drawback of the workaround itself is that it overrides `BaseWindow.contentView` (and with it the `children` list), which is obviously BAD PRACTICE, especially if we need our list to contain multiple `View` instances!
```diff
// workaround (BAD PRACTICE)
- parentView.contentView.addChildView(mainPage); // [FAILING]
+ parentView.contentView = mainPage; // [PASSING]
```
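For completeness, a less destructive variant that might be worth trying (a sketch on my side, assuming child `View`s added via `addChildView` simply have no bounds by default; `getContentBounds`/`setBounds` are documented APIs):

```js
parentView.contentView.addChildView(mainPage);
// give the child view explicit bounds instead of replacing contentView
const { width, height } = parentView.getContentBounds();
mainPage.setBounds({ x: 0, y: 0, width, height });
```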
If this is accepted as a bug by Electron's maintainers team, it means composing multiple Views is not possible until the bug is fixed.
Thank you. | platform/windows,bug :beetle:,34-x-y | low | Critical |
2,816,802,698 | iptv | Add: Telecinco | ### Channel ID (required)
Telecinco.es
### Stream URL (required)
https://spa-ha-p005.cdn.masmediatv.es/SVoriginOperatorEdge3/smil:8_HD.smil/manifest.m3u8
### Quality
1080p
### Label
None
### Timeshift
_No response_
### HTTP User Agent
_No response_
### HTTP Referrer
_No response_
### Notes
_No response_
### Contributing Guide
- [x] I have read [Contributing Guide](https://github.com/iptv-org/iptv/blob/master/CONTRIBUTING.md) | streams:add | low | Minor |
2,816,805,504 | go | cmd/go: toolchain directive can point to file relative to go.mod with ADS on windows | Due to https://github.com/golang/go/issues/71469, a toolchain directive with an ADS suffix (e.g. `toolchain go1.25-:alt`) can result in the toolchain attempting to execute a file's alternate data stream located alongside the go.mod file.
Since it's somewhat complex to create a file with an ADS, and as far as we could tell no source control software supports it, we do not consider this significantly dangerous, although it is clearly unexpected.
This is a PUBLIC track security issue per our security policy.
Thanks to Juho Forsén for reporting this issue. | Security,OS-Windows | low | Minor |
2,816,805,792 | iptv | Add: Antena3 | ### Channel ID (required)
Antena3.es
### Stream URL (required)
https://spa-ha-p005.cdn.masmediatv.es/SVoriginOperatorEdge3/smil:7_HD.smil/manifest.m3u8
### Quality
1080p
### Label
None
### Timeshift
_No response_
### HTTP User Agent
_No response_
### HTTP Referrer
_No response_
### Notes
_No response_
### Contributing Guide
- [x] I have read [Contributing Guide](https://github.com/iptv-org/iptv/blob/master/CONTRIBUTING.md) | streams:add | low | Minor |
2,816,837,142 | iptv | Broken: es.m3u LaSexta.es Boing.es Cuatro.es | ### Broken Links
[LaSexta.es](https://spa-ha-p005.cdn.masmediatv.es/SVoriginOperatorEdge2/smil:11_HD.smil/index.m3u8)
[Boing.es](https://spa-ha-p002.cdn.masmediatv.es/SVoriginOperatorEdge/smil:17_HD.smil/index.m3u8)
[Cuatro.es](https://spa-ha-p002.cdn.masmediatv.es/SVoriginOperatorEdge/smil:9_HD.smil/index.m3u8)
### What happened to the stream?
Not loading
### Notes (optional)
New links:
[LaSexta.es](https://spa-ha-p005.cdn.masmediatv.es/SVoriginOperatorEdge3/smil:11_HD.smil/manifest.m3u8)
[Boing.es](https://spa-ha-p005.cdn.masmediatv.es/SVoriginOperatorEdge/smil:17_HD.smil/index.m3u8)
[Cuatro.es](https://spa-ha-p005.cdn.masmediatv.es/SVoriginOperatorEdge3/smil:9_HD.smil/index.m3u8)
resolution does not change
### Contributing Guide
- [x] I have read [Contributing Guide](https://github.com/iptv-org/iptv/blob/master/CONTRIBUTING.md) | broken stream | low | Critical |
2,816,845,065 | kubernetes | apiserver incompatible with cel-go 0.23.0 | I hit this error when a robot updated google/cel-go to 0.23.0 in a project that vendors k8s.io/apiserver v0.32.1:
```
# k8s.io/apiserver/pkg/cel/environment
vendor/k8s.io/apiserver/pkg/cel/environment/base.go:176:19: cannot use ext.TwoVarComprehensions (value of type func(options ...ext.TwoVarComprehensionsOption) "github.com/google/cel-go/cel".EnvOption) as func() "github.com/google/cel-go/cel".EnvOption value in argument to UnversionedLib
```
https://github.com/cilium/tetragon/pull/3290
I traced the problem, and it seems the incompatibility was introduced by https://github.com/google/cel-go/pull/1075/, which changed the type of `TwoVarComprehensions` (from `func() cel.EnvOption` to `func(...TwoVarComprehensionsOption) cel.EnvOption`). | sig/api-machinery,needs-triage | low | Critical |
2,816,851,199 | rust | Missed optimization: big immutable locals are not promoted to constants | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code, on `riscv32i-unknown-none-elf` (this code is originally from #136216):
```rust
#![no_std]
pub enum Idx { _0, _1, _2, _3 }
#[no_mangle]
pub unsafe fn lookup_local(x: Idx) -> u8 {
[0xEF, 0x0E, 0x0D, 0xBD][x as usize]
}
#[no_mangle]
pub unsafe fn lookup_const(x: Idx) -> u8 {
(const { [0xEF, 0x0E, 0x0D, 0xBD] })[x as usize]
}
```
I expected to see this happen: compiler outputs identical asm for both functions.
Instead, this happened: the asm for `lookup_local` is suboptimal ([godbolt](https://godbolt.org/z/7ea5Tn4zE)):
```asm
lookup_local:
addi sp, sp, -16
li a1, -17
sb a1, 12(sp)
li a1, 14
sb a1, 13(sp)
li a1, 13
sb a1, 14(sp)
li a1, 189
sb a1, 15(sp)
addi a1, sp, 12
add a0, a1, a0
lbu a0, 0(a0)
addi sp, sp, 16
ret
lookup_const:
lui a1, %hi(.L__unnamed_1)
addi a1, a1, %lo(.L__unnamed_1)
add a0, a1, a0
lbu a0, 0(a0)
ret
.L__unnamed_1:
.ascii "\357\016\r\275"
```
Instead of using a constant, `lookup_local` constructs the array on the stack.
This might be a missed optimization in LLVM, but we could also potentially handle it in rustc (and personally I feel like the latter might be easier).
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (2f348cb7c 2025-01-27)
binary: rustc
commit-hash: 2f348cb7ce4063fa4eb40038e6ada3c5214717bd
commit-date: 2025-01-27
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.7
```
<details><summary>above is the code from godbolt, locally I get slightly different output</summary>
<p>
`rustc +nightly ./t.rs --target riscv32i-unknown-none-elf -Copt-level=3 -Zexport-executable-symbols`, `objdump --disassemble t`
```rust
#![no_std]
#![no_main]
pub enum Idx {
_0,
_1,
_2,
_3,
}
#[no_mangle]
#[inline(never)]
pub unsafe fn lookup_local(x: Idx) -> u8 {
[0xEF, 0x0E, 0x0D, 0xBD][x as usize]
}
#[inline(never)]
#[no_mangle]
pub unsafe fn lookup_const(x: Idx) -> u8 {
(const { [0xEF, 0x0E, 0x0D, 0xBD] })[x as usize]
}
#[panic_handler]
fn disco_loop(_: &core::panic::PanicInfo) -> ! {
loop {}
}
```
```asm
000110d8 <lookup_local>:
110d8: ff010113 addi sp, sp, -0x10
110dc: fef00593 li a1, -0x11
110e0: 00b10623 sb a1, 0xc(sp)
110e4: 00e00593 li a1, 0xe
110e8: 00b106a3 sb a1, 0xd(sp)
110ec: 00d00593 li a1, 0xd
110f0: 00b10723 sb a1, 0xe(sp)
110f4: 0bd00593 li a1, 0xbd
110f8: 00b107a3 sb a1, 0xf(sp)
110fc: 00c10593 addi a1, sp, 0xc
11100: 00a58533 add a0, a1, a0
11104: 00054503 lbu a0, 0x0(a0)
11108: 01010113 addi sp, sp, 0x10
1110c: 00008067 ret
00011110 <lookup_const>:
11110: 000105b7 lui a1, 0x10
11114: 0d458593 addi a1, a1, 0xd4
11118: 00a58533 add a0, a1, a0
1111c: 00054503 lbu a0, 0x0(a0)
11120: 00008067 ret
```
</p>
</details>
| A-LLVM,I-slow,A-codegen,C-bug,needs-triage | low | Critical |