id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,736,101,917 | react-native | [0.76.5] Modal glitchy behavior on iOS | ### Description
My issue is closely related to https://github.com/facebook/react-native/issues/47694. Initially I didn't see the 2nd modal at all (in 0.76.3), but after updating to 0.76.5 the 2nd Modal at least started appearing.
What I noticed is that the hiding animation shows artifacts of the previous render of a Modal if I render it conditionally.
A possible "fix/hack" that I found for now is to render both Modals, but to control the `visible` property. This is of course a sub-optimal solution, since no one wants to render all Modals.
Please check my videos and reproduction repository.
### Steps to reproduce
1. Install the app from the repo
2. Open `ModalContent.tsx`
3. Ensure that the following lines **are not commented out**:
```
if (!visible) {
return null;
}
```
4. Run the iOS app.
5. Open/hide the Modals multiple times. After the first time, you'll start seeing a UI glitch where a Modal from the previous state is rendered during the hiding transition.
Possible fix:
1. Comment out the lines
```
if (!visible) {
return null;
}
```
This will force both components to render, but visibility will be controlled by the `visible` property of the `Modal` (see the sketch below).
2. Run the iOS app
3. Try opening/hiding the Modals
4. Verify they hide/show properly without issues.
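To illustrate the difference, here is a minimal sketch of the two patterns described above. This is not code from the reproduction repo; apart from `Modal`'s standard `visible`, `animationType`, and `onRequestClose` props, the names are made up for the example.
```tsx
import React from 'react';
import { Modal, Text, View } from 'react-native';

type Props = { visible: boolean; onClose: () => void };

// Pattern 1 (glitchy on 0.76.5): unmount the Modal entirely while hidden.
export function ConditionalModal({ visible, onClose }: Props) {
  if (!visible) {
    return null; // removing this early return is the workaround
  }
  return (
    <Modal visible animationType="slide" onRequestClose={onClose}>
      <View>
        <Text>Modal content</Text>
      </View>
    </Modal>
  );
}

// Pattern 2 (workaround): keep the Modal mounted and only toggle `visible`.
export function AlwaysMountedModal({ visible, onClose }: Props) {
  return (
    <Modal visible={visible} animationType="slide" onRequestClose={onClose}>
      <View>
        <Text>Modal content</Text>
      </View>
    </Modal>
  );
}
```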
### React Native Version
0.76.5
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (10) arm64 Apple M1 Pro
Memory: 85.80 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.19.1
path: ~/.nvm/versions/node/v18.19.1/bin/node
Yarn: Not Found
npm:
version: 10.7.0
path: ~/.nvm/versions/node/v18.19.1/bin/npm
Watchman:
version: 2024.01.22.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 2.7.8
path: /opt/homebrew/opt/[email protected]/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
It's a UI bug
```
### Reproducer
https://github.com/vsheyanov/react-76-4-modal-bug/tree/main
### Screenshots and Videos
<table>
<tr>
<td><b>Not rendering a Modal based on a state</b></td>
<td><b>Hiding using visibility based on a state</b></td>
</tr>
<tr>
<td>
1st time - looks ok<br/>
2nd and 3rd - during <code>closing</code> animation you can see that it shows the previous state
https://github.com/user-attachments/assets/be4bc510-6a1e-40a1-af0d-8b0fc27f97e4
</td>
<td>
Looks as expected. Modals switch each other without any glitches.
https://github.com/user-attachments/assets/a1953523-b9bd-4f68-88c0-c0a5d6965f4d
</td>
</tr>
</table>
| Platform: iOS,Issue: Author Provided Repro,Resolution: PR Submitted,Component: Modal | low | Critical |
2,736,141,952 | godot | Bug with VisibilityEnabler and AudioStreamPlayer | ### Tested versions
The bug is present in the stable 4.3 version and in several 4.4 dev versions that I have tried.
### System information
Godot v4.4.dev6 - Windows 10.0.19045 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 2700X Eight-Core Processor (16 threads)
### Issue description
Having a scene with this layout:
```
Node2D
-- VisibilityEnabler2D
-- AudioStreamPlayer
```
prints this error in the debugger:
```
E 0:00:01:0513 can_process: Condition "!is_inside_tree()" is true. Returning: false
<Source C++> scene/main/node.cpp:865 @ can_process()
```
~~In the editor version it does not pose a problem; on the other hand, in the exported version it causes undesirable behavior (queue_free no longer destroys the scenes affected by this problem in the project where I encountered it).~~
**Edit:** After further testing I realized that the bug related to `queue_free` comes from elsewhere; the error exists, but it does not seem to cause further problems in exported builds.
### Steps to reproduce
- Create a scene with a Node2D that has a VisibilityEnabler2D and an AudioStreamPlayer as children
- Launch the game
- An error appears in the debugger
### Minimal reproduction project (MRP)
[Minimal Project.zip](https://github.com/user-attachments/files/18113305/Minimal.Project.zip)
| bug,topic:core | low | Critical |
2,736,230,216 | vscode | Paste slow | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.0
- OS Version: macOS 14.7.1
Steps to Reproduce:
1. When I paste into a TypeScript file, it is very slow. It shows a small circular progress indicator for several seconds before finishing the paste.
This is a very large repo, but pasting was fast even in this repo before version 1.96. | info-needed,typescript | high | Critical |
2,736,255,973 | go | x/net/http2: when StrictMaxConcurrentStreams enabled ClientConn.ReserveNewRequest() causes stalls in request processing | ### Go version
go1.23.3
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/michal/Library/Caches/go-build'
GOENV='/Users/michal/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/michal/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/michal/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/michal/go/pkg/mod/golang.org/[email protected]'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/michal/go/pkg/mod/golang.org/[email protected]/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/michal/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/7q/nyynjpwj5p19npby48ykjpx00000gn/T/go-build1425036568=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Please find a minimal reproducer below:
```go
package reprod
import (
"flag"
"fmt"
"net"
"net/http"
"sync"
"testing"
"golang.org/x/net/http2"
)
var reserve = flag.Bool("reserve", false, "Reserve new request")
type singleConnPool struct {
cc *http2.ClientConn
}
func (s *singleConnPool) GetClientConn(req *http.Request, addr string) (*http2.ClientConn, error) {
if (*reserve && s.cc.ReserveNewRequest()) || s.cc.CanTakeNewRequest() {
return s.cc, nil
}
return nil, fmt.Errorf("no available connection")
}
func (s *singleConnPool) MarkDead(conn *http2.ClientConn) {
panic("not implemented")
}
func TestReserveNewRequest(t *testing.T) {
h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
})
tr := &http2.Transport{
AllowHTTP: true,
StrictMaxConcurrentStreams: true,
DisableCompression: true,
}
c0, c1 := net.Pipe()
s := http2.Server{
MaxConcurrentStreams: 1,
}
go s.ServeConn(c0, &http2.ServeConnOpts{Handler: h})
cc, err := tr.NewClientConn(c1)
if err != nil {
t.Fatal(err)
}
tr.ConnPool = &singleConnPool{cc: cc}
client := http.Client{Transport: tr}
var wg sync.WaitGroup
for range 2 {
wg.Add(1)
go func() {
defer wg.Done()
t.Log("sending request")
res, err := client.Get("http://foobar")
if err != nil {
t.Error(err)
}
t.Log("got response", res.StatusCode)
}()
}
wg.Wait()
}
```
### What did you see happen?
Running it without the reserve flag works as expected:
```
$ go test -race -v .
=== RUN TestReserveNewRequest
rep_test.go:61: sending request
rep_test.go:61: sending request
rep_test.go:66: got response 200
rep_test.go:66: got response 200
--- PASS: TestReserveNewRequest (0.00s)
PASS
```
**Running it with the reserve flag set hangs:**
```
$ go test -race -v . -reserve
=== RUN TestReserveNewRequest
rep_test.go:61: sending request
rep_test.go:61: sending request
```
I tracked this bug down to this change: https://go-review.googlesource.com/c/net/+/617655
### What did you expect to see?
I expect it to work as in x/net version 0.30.0. | NeedsInvestigation | low | Critical |
2,736,272,796 | ui | [bug]: 2.1.7 breaks alias resolution | ### Describe the bug
`npx shadcn@latest add sidebar` doesn't work (points to 2.1.7)
`npx shadcn@2.1.6 add sidebar` works as expected.
Here are examples of my config:
```json
{
...
"aliases": {
"components": "@org/package1/components",
"utils": "@org/package1/lib/utils",
"ui": "@org/package1/components/ui",
"lib": "@org/package1/lib",
"hooks": "@org/package1/hooks"
},
}
```
```json
{
"compilerOptions": {
"paths": {
"@org/package1/*": ["./src/*"],
}
},
"include": ["src"],
"references": []
}
```
With 2.1.6, the components are created with these types of imports:
```typescript
import { cn } from "@org/package1/lib/utils"
```
With 2.1.7:
```typescript
import { cn } from "@org/lib/utils"
```
### Affected component/components
all
### How to reproduce
With the configs above you can:
1. run `npx shadcn@2.1.6 add sidebar` and see how the utils import resolves
2. run `npx shadcn@2.1.7 add sidebar` and see that it's different
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
macos 15.1.1
Tmux
node 20
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,736,293,506 | pytorch | [FlexAttention] Support the number of shared query heads in GQA to not be the power of 2 | ### ๐ The feature, motivation and pitch
I've implemented FlexAttention for SmolLM, but since the HF team uses 3 query heads per KV head, compilation fails for `seq_length=5`:
```
LoweringException: ValueError: Number of shared query heads sharing the same KV head must be power of 2.
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 9, 5, 64], stride=[2880, 320, 64, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='primals_2', layout=FixedLayout('cuda:0', torch.float32, size=[1, 3, 5, 64], stride=[960, 320, 64, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='primals_3', layout=FixedLayout('cuda:0', torch.float32, size=[1, 3, 5, 64], stride=[960, 320, 64, 1]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (5, 5, TensorBox(StorageBox(
InputBuffer(name='primals_5', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_4', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_9', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_10', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_11', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_12', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_13', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_14', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]))
)), 128, 128, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.125
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': True}
args[7]: ()
args[8]: (TensorBox(StorageBox(
InputBuffer(name='primals_6', layout=FixedLayout('cuda:0', torch.int64, size=[5], stride=[1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_7', layout=FixedLayout('cuda:0', torch.bool, size=[5], stride=[1]))
)), TensorBox(StorageBox(
InputBuffer(name='primals_8', layout=FixedLayout('cuda:0', torch.int64, size=[5], stride=[1]))
)))
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
The weird thing is that the compilation works for `seq_length>128`!
### Alternatives
I believe I can pad the head dimension with zeros while I wait for the fix. Please correct me if I am wrong.
### Additional context
I use nightly build: `2.6.0.dev20241212+cu124`
Minimal reproduction code:
```python
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
import torch
seq_len = 128 # 5
# smollest SmolLM
torch.compile(flex_attention, fullgraph=True, dynamic=False, mode="max-autotune-no-cudagraphs", disable=False)(
torch.randn(1, 9, seq_len, 64).to('cuda'),
torch.randn(1, 3, seq_len, 64).to('cuda'),
torch.randn(1, 3, seq_len, 64).to('cuda'),
block_mask=create_block_mask(
lambda b, h, q_idx, kv_idx: q_idx >= kv_idx,
None, None, seq_len, seq_len,
_compile=True
),
enable_gqa=True
)
```
Change `seq_len` to 5 to reproduce the error.
@drisspg you've been a great help with FlexAttention, could you have a look here? Thank you!
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,736,329,518 | node | module.registerHooks() tracking issue | - [x] Initial proposal: https://github.com/nodejs/node/issues/52219
- [x] Loaders design proposal: https://github.com/nodejs/loaders/pull/198
- [x] Initial implementation: https://github.com/nodejs/node/pull/55698
- [ ] Implement evaluation hook for at least CJS to help use cases like require-in-the-middle, which was supposed to be based on https://github.com/nodejs/loaders/pull/198 but we had some discussions over the collaboration summit/NodeConf EU about changing the hook to be "wrapping around evaluation", not "after evaluation": [WIP](https://github.com/joyeecheung/node/tree/evaluate-hooks)
- For ESM, there's currently no way for Node.js to hook into the evaluation of child modules, will need to discuss with V8 about it ([issue](https://issues.chromium.org/u/1/issues/384413088)). Allowing mutation of the exports would require a spec change so out of scope for `module.registerHooks()` or Node.js in general
- [ ] Implement link hook for ESM, as proposed in https://github.com/nodejs/loaders/pull/198 to make use cases like import-in-the-middle less hacky
- [ ] `Symbol.dispose` integration (requested in https://github.com/nodejs/node/pull/55698#discussion_r1855211481)
- [ ] Reordering of the documentation when `module.registerHooks()` is battle tested enough and should be preferred over `module.register` to avoid various caveats (requested in https://github.com/nodejs/node/pull/55698#discussion_r1849538179 and https://github.com/nodejs/node/pull/55698#pullrequestreview-2471072027)
- [ ] Advocating it to popular npm packages doing CJS monkey-patching to reduce the overall dependency of CJS loader internals in the ecosystem
- [ ] Support of unknown extensions (learned about it when I was initially proposing https://github.com/nodejs/node/issues/52219)
- [ ] Support `node:builtin?param=val`
- [ ] Figure out how to make search params work with CJS cache
- [ ] Create a polyfill of `module.register()` built on top of `module.registerHooks()`
- [ ] `startGraph` hook proposed in https://github.com/nodejs/loaders/pull/205
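As context for the items above, here is a rough sketch of registering synchronous hooks with this API (based on the design proposal and initial implementation linked above; the exact option names, signatures, and return shapes are assumptions and may differ):
```typescript
// Sketch only: assumes a Node.js build that already ships module.registerHooks().
import { registerHooks } from 'node:module';

registerHooks({
  // Synchronous resolve hook: observe or rewrite the specifier, then delegate.
  resolve(specifier, context, nextResolve) {
    return nextResolve(specifier, context);
  },
  // Synchronous load hook: delegate, then optionally inspect or transform the loaded source.
  load(url, context, nextLoad) {
    const result = nextLoad(url, context);
    return result;
  },
});
``` | module,loaders | low | Minor |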
2,736,331,021 | rust | Lint for returning a pointer to stack memory associated with a local variable | ### Code
```Rust
pub fn foo() -> *const i32 {
let x = 42;
&x
}
```
### Current output
rustc doesn't currently warn about this.
### Desired output
A warning that this pointer is referencing a local variable and is going to be dangling immediately. E.g.
```text
error: returning a pointer to stack memory associated with a local variable
--> <source>:12:5
|
LL| &x
| ^^
```
### Rationale and extra context
This code is very unlikely to be correct, which is exactly why a warning should be emitted. This would be similar to Clang's `-Wreturn-stack-address` warning.
In the case of returning a reference, rustc would catch this and emit a compiler error. While it is fine to create such a pointer, it is very unlikely that the code should be written this way.
### Rust Version
```Shell
rustc 1.84.0-nightly (b8c8287a2 2024-11-03)
binary: rustc
commit-hash: b8c8287a229cd79604aa84c25e1235fc78cd5f2e
commit-date: 2024-11-03
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
### Anything else?
If this becomes a `Warn` lint it will start warning immediately and could potentially break users who have configured warnings as errors. So maybe this should be `Allow` to begin with? | A-diagnostics,T-compiler | low | Critical |
2,736,332,834 | go | x/crypto/sha3: TestCSHAKELargeS times out on s390x | ### Go version
go version go1.23.2 linux/s390x
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='s390x'
GOBIN=''
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='s390x'
GOHOSTOS='linux'
GOINSECURE=''
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPRIVATE=''
GOPROXY='direct'
GOROOT='/usr/lib/golang'
GOSUMDB='off'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/usr/lib/golang/pkg/tool/linux_s390x'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GCCGO='gccgo'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -march=z196 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build429958401=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
go test ./...
### What did you see happen?
```
[snip]
panic: test timed out after 10m0s
running tests:
TestCSHAKELargeS (9m59s)
goroutine 27 [running]:
testing.(*M).startAlarm.func1()
/usr/lib/golang/src/testing/testing.go:2373 +0x464
created by time.goFunc
/usr/lib/golang/src/time/sleep.go:215 +0x42
goroutine 1 [chan receive, 9 minutes]:
testing.(*T).Run(0xc00009a680, {0x1f3110, 0x10}, 0x1fec40)
/usr/lib/golang/src/testing/testing.go:1751 +0x48e
testing.runTests.func1(0xc00009a680)
/usr/lib/golang/src/testing/testing.go:2168 +0x62
testing.tRunner(0xc00009a680, 0xc00008eca0)
/usr/lib/golang/src/testing/testing.go:1690 +0x132
testing.runTests(0xc000092180, {0x319cc0, 0xc, 0xc}, {0xc1cee04b5ea3e990, 0x8bb2cd7205, 0x31e7e0})
/usr/lib/golang/src/testing/testing.go:2166 +0x4b4
testing.(*M).Run(0xc0000a00a0)
/usr/lib/golang/src/testing/testing.go:2034 +0x704
main.main()
_testmain.go:95 +0xd8
goroutine 14 [runnable]:
golang.org/x/crypto/sha3.(*asmState).Read(0xc00072c008, {0xc000732000, 0x200003e8, 0x200003e8})
/home/jcajka/upstream/crypto/sha3/sha3_s390x.go:194 +0x3d6
golang.org/x/crypto/sha3.TestCSHAKELargeS(0xc00009a4e0)
/home/jcajka/upstream/crypto/sha3/sha3_test.go:470 +0xf8
testing.tRunner(0xc00009a4e0, 0x1fec40)
/usr/lib/golang/src/testing/testing.go:1690 +0x132
created by testing.(*T).Run in goroutine 1
/usr/lib/golang/src/testing/testing.go:1743 +0x46e
FAIL golang.org/x/crypto/sha3 600.056s
[snip]
```
### What did you expect to see?
All tests passing.
It seems that the failure was introduced in commit https://github.com/golang/crypto/commit/80ea76eb17c0c52f5d5d04e833d6aeb6b062d81d. I suspect that it might also be related to the removal of the s390x ASM in https://github.com/golang/crypto/commit/59b5a86796b9d310b31d416f56d93b5ce30da22b. In the context of losing the optimized path, have you reached out to the original contributor before removing/disabling it? | NeedsInvestigation,arch-s390x | low | Critical |
2,736,335,695 | react-native | Missing '#include "ReactCommon/SchedulerPriority.h"'; 'SchedulerPriority' must be declared before it is used | ### Description
I'm trying to upgrade the React Native version to 0.76.3, but an error occurs on iOS and the build fails. What's the problem?
### Steps to reproduce
1. pod install
2. bundle install && RCT_NEW_ARCH_ENABLED=1 bundle exec pod install
3. Xcode ios build
### React Native Version
0.76.3
### Affected Platforms
Build - MacOS
### Areas
Other (please specify)
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (12) arm64 Apple M3 Pro
Memory: 114.44 MB / 36.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.19.1
path: ~/.nvm/versions/node/v18.19.1/bin/node
Yarn:
version: 4.2.2
path: ~/.nvm/versions/node/v18.19.1/bin/yarn
npm:
version: 10.2.4
path: ~/.nvm/versions/node/v18.19.1/bin/npm
Watchman:
version: 2024.11.25.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.13.0
path: /Users/benji/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java: Not Found
Ruby:
version: 2.7.4
path: /Users/benji/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
Missing '#include "ReactCommon/SchedulerPriority.h"'; 'SchedulerPriority' must be declared before it is used
```
### Reproducer
private repository
### Screenshots and Videos
<img width="490" alt="แแ
ณแแ
ณแ
แ
ตแซแแ
ฃแบ 2024-12-13 แแ
ฉแแ
ฅแซ 1 09 34" src="https://github.com/user-attachments/assets/87ad278a-fd0b-44ee-ad41-2cc73f56cbb2" />
<img width="1107" alt="แแ
ณแแ
ณแ
แ
ตแซแแ
ฃแบ 2024-12-13 แแ
ฉแแ
ฅแซ 1 14 12" src="https://github.com/user-attachments/assets/70a85ba4-2422-4b33-b29e-aaf91c4eb750" />
| Needs: Repro,Newer Patch Available,Needs: Attention,Type: New Architecture | medium | Critical |
2,736,340,904 | go | proposal: structs: NoCopy | **Background:** In general, copying of mutable Go data structures is a risky business. Go has no analogue of C++'s DISALLOW_COPY_CONSTRUCTORS macro or Rust's Copy trait, so it is always possible to copy values of any type. However, shallow copying may hide very subtle aliasing bugs:
- The original and the copy can no longer both be used. For example, after copying a bytes.Buffer, subsequent mutations to either the original or the copy may cause unpredictable effects on the other one.
- "Linear copying", in which the old value is no longer used after the copy, is typically less hazardous, but may still be unsafe or unpredictable. In the past, bytes.Buffer contained a small inline array that was used as the initial space for slice allocation. Copying a Buffer would cause the new copy to refer to the array field of the original. (This wasn't incorrect, but it was surprising and unintended.)
- More complex data structures are not safe to use even after linear copying. For example, sync.WaitGroup and sync.Mutex contain a semaphore, which has special treatment in the runtime; a copy won't do. And strings.Builder uses unsafe tricks to avoid allocation; mutating a copy leads to [crashes](https://github.com/golang/go/issues/47276).
In general, Go programmers should assume, in the absence of explicit documentation or insider knowledge, that copying any arbitrary nonzero value of type T, where methods are defined on *T, is not safe.
However, for certain types (such as those mentioned above) the potential for latent serious bugs is high, and it is worth making an effort to detect copying mistakes statically (using cmd/vet's [copylock](https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/copylock) check) and perhaps even dynamically, using the self-pointer technique demonstrated by strings.Builder.
**Proposal:** We propose to publish two new types, shown below, in the `structs` package. The first is merely an annotation for static analysis tools; the second additionally performs a dynamic check (at the cost of one extra word in your data structure).
We also propose to rename vet's copylock check to nocopy, and change it to look for structs.NoCopy fields instead of the presence of Lock/Unlock methods (which are an indirect proxy for non-copyability that dates from when the check applied only to Mutex and WaitGroup).
```go
package structs
// NoCopy helps statically detect copying of non-zero values of
// mutable data types that must not be copied. (Examples include
// [strings.Builder] and [sync.WaitGroup].)
//
// Embed this type within your data structure to indicate that copying
// non-zero values of that type is not safe. This type has no dynamic
// effect, but static analysis tools may report an error when they
// encounter statements that copy a NoCopy value.
//
// See [NoCopyCheck] for a variant that additionally incorporates a
// run-time check.
//
// As a rule, unless explicitly documented to the contrary, you should
// assume that it is not safe to make copies of any non-zero value of
// a type T that defines methods on *T. Calling such a method may
// cause the data structure to incorporate its original address in
// some way--for example, by causing one field to point to
// another--such that copying the value violates invariants of the
// representation.
type NoCopy struct{}
// NoCopyCheck helps detect copying of mutable data types that must
// not be copied (for example, [strings.Builder]), using both runtime
// assertions and static checks.
//
// Embed a NoCopyCheck within your data structure, and call its check
// method at the start of each method:
//
// type Buffer struct {
// data []byte
// nocopy NoCopyCheck
// }
//
// func (b *Buffer) Write(data []byte) {
// b.nocopy.Check()
// b.data = append(b.data, data)
// }
//
// If the structure has been copied, it will panic:
//
// var old Buffer
// old.Write([]byte("hello"))
// new := old // Buffer values must not be copied!
// old.Write([]byte("world"))
type NoCopyCheck struct {
_ NoCopy
ptr *NoCopyCheck // points to self after first call of Check
}
// Check panics if the NoCopyCheck has been copied since the previous
// call to Check, if any.
func (nocopy *NoCopyCheck) Check() {
if nocopy.ptr == nil {
nocopy.ptr = nocopy
} else if nocopy.ptr != nocopy {
panic("a struct with a NoCopyCheck field has been copied")
}
}
```
Related:
- https://github.com/golang/go/issues/23764
- https://github.com/golang/go/issues/25907
- https://github.com/golang/go/issues/47276
- https://github.com/golang/go/issues/67265 (this proposal was split out from that one)
(Thanks to @ianthehat for pointing out that NoCopyCheck can contain a NoCopy so that vet need only consider the latter.) | Proposal,Analysis | medium | Critical |
2,736,351,799 | rust | ICE: `assertion failed: self.let_source != LetSource::None` | <!--
ICE: Rustc ./a.rs '' 'thread 'rustc' panicked at compiler/rustc_mir_build/src/thir/pattern/check_match.rs:437:9: 'assertion failed: self.let_source != LetSource::None'', 'thread 'rustc' panicked at compiler/rustc_mir_build/src/thir/pattern/check_match.rs:437:9: 'assertion failed: self.let_source != LetSource::None''
File: /tmp/im/2/a.rs
-->
auto-reduced (treereduce-rust):
````rust
impl<A> std::ops::CoerceUnsized<A> for A {}
fn main() {
match true {
_ if let true = true
&& true => {}
_ => {}
}
}
````
original:
````rust
impl<A> std::ops::CoerceUnsized<A> for A {}
fn main() {
match true {
_ if let true = true && true => {}
//~^ ERROR `if let` guards are
//~| ERROR `let` expressions in this
_ => {}
}
}
````
Version information
````
rustc 1.85.0-nightly (8e37e1518 2024-12-12)
binary: rustc
commit-hash: 8e37e151835d96d6a7415e93e6876561485a3354
commit-date: 2024-12-12
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.5
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/8e37e151835d96d6a7415e93e6876561485a3354/compiler/rustc_mir_build/src/thir/pattern/check_match.rs#L431-L443
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0658]: `if let` guards are experimental
--> /tmp/icemaker_global_tempdir.LCpqT8xE8i8B/rustc_testrunner_tmpdir_reporting.HaAO0ramSh9i/mvce.rs:5:11
|
5 | _ if let true = true
| ___________^
6 | | && true => {}
| |___________________^
|
= note: see issue #51114 <https://github.com/rust-lang/rust/issues/51114> for more information
= help: add `#![feature(if_let_guard)]` to the crate attributes to enable
= note: this compiler was built on 2024-12-12; consider upgrading it if it is out of date
= help: you can write `if matches!(<expr>, <pattern>)` instead of `if let <pattern> = <expr>`
error[E0658]: `let` expressions in this position are unstable
--> /tmp/icemaker_global_tempdir.LCpqT8xE8i8B/rustc_testrunner_tmpdir_reporting.HaAO0ramSh9i/mvce.rs:5:14
|
5 | _ if let true = true
| ^^^^^^^^^^^^^^^
|
= note: see issue #53667 <https://github.com/rust-lang/rust/issues/53667> for more information
= help: add `#![feature(let_chains)]` to the crate attributes to enable
= note: this compiler was built on 2024-12-12; consider upgrading it if it is out of date
error[E0658]: use of unstable library feature `coerce_unsized`
--> /tmp/icemaker_global_tempdir.LCpqT8xE8i8B/rustc_testrunner_tmpdir_reporting.HaAO0ramSh9i/mvce.rs:1:9
|
1 | impl<A> std::ops::CoerceUnsized<A> for A {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: see issue #18598 <https://github.com/rust-lang/rust/issues/18598> for more information
= help: add `#![feature(coerce_unsized)]` to the crate attributes to enable
= note: this compiler was built on 2024-12-12; consider upgrading it if it is out of date
error[E0210]: type parameter `A` must be used as the type parameter for some local type (e.g., `MyStruct<A>`)
--> /tmp/icemaker_global_tempdir.LCpqT8xE8i8B/rustc_testrunner_tmpdir_reporting.HaAO0ramSh9i/mvce.rs:1:6
|
1 | impl<A> std::ops::CoerceUnsized<A> for A {}
| ^ type parameter `A` must be used as the type parameter for some local type
|
= note: implementing a foreign trait is only possible if at least one of the types for which it is implemented is local
= note: only traits defined in the current crate can be implemented for a type parameter
error[E0376]: the trait `CoerceUnsized` may only be implemented for a coercion between structures
--> /tmp/icemaker_global_tempdir.LCpqT8xE8i8B/rustc_testrunner_tmpdir_reporting.HaAO0ramSh9i/mvce.rs:1:1
|
1 | impl<A> std::ops::CoerceUnsized<A> for A {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
thread 'rustc' panicked at compiler/rustc_mir_build/src/thir/pattern/check_match.rs:437:9:
assertion failed: self.let_source != LetSource::None
stack backtrace:
0: 0x7d588315e3aa - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hdcffbc1065d74d75
1: 0x7d5883813c66 - core::fmt::write::h55baa0ecd0176531
2: 0x7d58847da091 - std::io::Write::write_fmt::h49f3dcbd2134b25a
3: 0x7d588315e202 - std::sys::backtrace::BacktraceLock::print::h19877f04c7770d81
4: 0x7d588316071a - std::panicking::default_hook::{{closure}}::h812458b2e3e6cae6
5: 0x7d5883160563 - std::panicking::default_hook::hc806cb2847473728
6: 0x7d58822c4e08 - std[f11eb203934acfd6]::panicking::update_hook::<alloc[a23c71bf91e329cf]::boxed::Box<rustc_driver_impl[b5a68f650dfe829f]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7d5883160f18 - std::panicking::rust_panic_with_hook::ha79d4bc654fea354
8: 0x7d5883160bd6 - std::panicking::begin_panic_handler::{{closure}}::h54a022eda853152d
9: 0x7d588315e859 - std::sys::backtrace::__rust_end_short_backtrace::hd4a582557f71a384
10: 0x7d58831608cd - rust_begin_unwind
11: 0x7d587fdb9560 - core::panicking::panic_fmt::he1c36effa16de984
12: 0x7d588093558c - core::panicking::panic::h70de7f5a871adbc2
13: 0x7d5884200e94 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
14: 0x7d58841ffed4 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
15: 0x7d5884ada95c - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor>::visit_land_rhs
16: 0x7d5884ada88f - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor>::visit_land
17: 0x7d5884ada873 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor>::visit_land
18: 0x7d5884ada7a1 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor>::visit_land
19: 0x7d58841ff29e - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
20: 0x7d58841feb83 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
21: 0x7d58841feaf3 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor>::with_let_source::<<rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_arm::{closure#0}::{closure#0}>
22: 0x7d5884200746 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
23: 0x7d58841ffed4 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
24: 0x7d58841feb83 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
25: 0x7d58841ffed4 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
26: 0x7d58841ffed4 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
27: 0x7d58841feb83 - <rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::MatchVisitor as rustc_middle[ec650cf5d749f3c]::thir::visit::Visitor>::visit_expr
28: 0x7d5884201f27 - rustc_mir_build[2e387cfc83cb721c]::thir::pattern::check_match::check_match
29: 0x7d5884201b33 - rustc_query_impl[37a5c04746b38b2a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[37a5c04746b38b2a]::query_impl::check_match::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 1usize]>>
30: 0x7d5884217347 - rustc_query_system[292d97488c9a101d]::query::plumbing::try_execute_query::<rustc_query_impl[37a5c04746b38b2a]::DynamicConfig<rustc_data_structures[35c611aae2a30678]::vec_cache::VecCache<rustc_span[89c59f74d8e61a10]::def_id::LocalDefId, rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[292d97488c9a101d]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[37a5c04746b38b2a]::plumbing::QueryCtxt, false>
31: 0x7d5884216fdd - rustc_query_impl[37a5c04746b38b2a]::query_impl::check_match::get_query_non_incr::__rust_end_short_backtrace
32: 0x7d588421ba85 - rustc_mir_build[2e387cfc83cb721c]::build::mir_build
33: 0x7d5883809c44 - rustc_mir_transform[ef0b54958ae2c620]::mir_built
34: 0x7d5883809c07 - rustc_query_impl[37a5c04746b38b2a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[37a5c04746b38b2a]::query_impl::mir_built::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 8usize]>>
35: 0x7d5883acfdd1 - rustc_query_system[292d97488c9a101d]::query::plumbing::try_execute_query::<rustc_query_impl[37a5c04746b38b2a]::DynamicConfig<rustc_data_structures[35c611aae2a30678]::vec_cache::VecCache<rustc_span[89c59f74d8e61a10]::def_id::LocalDefId, rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 8usize]>, rustc_query_system[292d97488c9a101d]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[37a5c04746b38b2a]::plumbing::QueryCtxt, false>
36: 0x7d5883acf98d - rustc_query_impl[37a5c04746b38b2a]::query_impl::mir_built::get_query_non_incr::__rust_end_short_backtrace
37: 0x7d5880a13015 - rustc_mir_build[2e387cfc83cb721c]::check_unsafety::check_unsafety
38: 0x7d588418a6fd - rustc_query_impl[37a5c04746b38b2a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[37a5c04746b38b2a]::query_impl::check_unsafety::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 0usize]>>
39: 0x7d588418a99a - rustc_query_system[292d97488c9a101d]::query::plumbing::try_execute_query::<rustc_query_impl[37a5c04746b38b2a]::DynamicConfig<rustc_data_structures[35c611aae2a30678]::vec_cache::VecCache<rustc_span[89c59f74d8e61a10]::def_id::LocalDefId, rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 0usize]>, rustc_query_system[292d97488c9a101d]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[37a5c04746b38b2a]::plumbing::QueryCtxt, false>
40: 0x7d588418a641 - rustc_query_impl[37a5c04746b38b2a]::query_impl::check_unsafety::get_query_non_incr::__rust_end_short_backtrace
41: 0x7d5883c5efec - rustc_interface[2b9613d0d03e32ba]::passes::run_required_analyses
42: 0x7d58847c569e - rustc_interface[2b9613d0d03e32ba]::passes::analysis
43: 0x7d58847c566f - rustc_query_impl[37a5c04746b38b2a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[37a5c04746b38b2a]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 0usize]>>
44: 0x7d588483fbd5 - rustc_query_system[292d97488c9a101d]::query::plumbing::try_execute_query::<rustc_query_impl[37a5c04746b38b2a]::DynamicConfig<rustc_query_system[292d97488c9a101d]::query::caches::SingleCache<rustc_middle[ec650cf5d749f3c]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[37a5c04746b38b2a]::plumbing::QueryCtxt, false>
45: 0x7d588483f90e - rustc_query_impl[37a5c04746b38b2a]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
46: 0x7d5884883a3c - rustc_interface[2b9613d0d03e32ba]::interface::run_compiler::<(), rustc_driver_impl[b5a68f650dfe829f]::run_compiler::{closure#0}>::{closure#1}
47: 0x7d5884742ec7 - std[f11eb203934acfd6]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[2b9613d0d03e32ba]::util::run_in_thread_with_globals<rustc_interface[2b9613d0d03e32ba]::util::run_in_thread_pool_with_globals<rustc_interface[2b9613d0d03e32ba]::interface::run_compiler<(), rustc_driver_impl[b5a68f650dfe829f]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
48: 0x7d5884743362 - <<std[f11eb203934acfd6]::thread::Builder>::spawn_unchecked_<rustc_interface[2b9613d0d03e32ba]::util::run_in_thread_with_globals<rustc_interface[2b9613d0d03e32ba]::util::run_in_thread_pool_with_globals<rustc_interface[2b9613d0d03e32ba]::interface::run_compiler<(), rustc_driver_impl[b5a68f650dfe829f]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[42065d3ef999533b]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
49: 0x7d588474492f - std::sys::pal::unix::thread::Thread::new::thread_start::h8c3d5e2b3337de45
50: 0x7d587eaa339d - <unknown>
51: 0x7d587eb2849c - <unknown>
52: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (8e37e1518 2024-12-12) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [check_match] match-checking `main`
#1 [mir_built] building MIR for `main`
end of query stack
error: aborting due to 5 previous errors
Some errors have detailed explanations: E0210, E0376, E0658.
For more information about an error, try `rustc --explain E0210`.
```
</p>
</details>
<!--
query stack:
#0 [check_match] match-checking `main`
#1 [mir_built] building MIR for `main`
-->
| I-ICE,T-compiler,C-bug,F-coerce_unsized,S-has-mcve,S-bug-has-test | low | Critical |
2,736,366,745 | storybook | [Documentation]: Improve docs for storybook/@nextjs-experimental-vite | ### Describe the problem
Could the docs be improved for the Next.js 'With Vite' setup?
I would personally greatly appreciate a template repo as I have not been able to get the right config to get this specific setup working.
I understand this is experimental - but its hard to be a tester if I can't get it running.
@valentinpalkovic, who I think is the expert on this plugin.
### Additional context
Just for clarity of my goals:
My final desired stack is a monorepo of Storybook + UI library + tailwindcss (using turborepo), with the UI components ultimately consumed in a Next.js app in an external repo. | documentation | low | Minor |
2,736,369,998 | godot | Scene takes ~10 seconds longer on startup if another Godot Instance or the Godot Project Manager is open in the background | ### Tested versions
Godot v4.3.stable (77dcf97d8)
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 Laptop GPU (NVIDIA; 32.0.15.6614) - 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 Threads)
### Issue description
If the project manager is open, starting the main or a custom scene takes an extra 10 seconds.
Here is a comparison video:
Left shows me running a project.
Right shows me starting the same project, but with the project manager also open in the background.
https://github.com/user-attachments/assets/168d5940-d833-450a-a49b-44c36d31f405
The same happens if another instance of the Godot editor is open in the background, but I found it interesting that the project manager alone already causes this issue.
### Steps to reproduce
Open a Godot Project.
Open a new Godot Project manager.
Run your project.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,performance | low | Minor |
2,736,388,377 | pytorch | assert_size_strided hit on Conv1d with strided input | ### ๐ Describe the bug
backend: inductor, any mode
reproducer:
```python
import torch
from torch.nn import Conv1d
net_d = Conv1d(in_channels=1, out_channels=16, kernel_size=15, stride=1, padding=7, device='cuda')
net_d = torch.compile(net_d)
wave = torch.randn((8, 1, 1024), device='cuda').as_strided(
size=(8, 1, 1024), stride=(1024, 1, 1))
a = net_d(wave)
print("ok")
```
### Error logs
```
File "/home/yorick/code/w/.venv/lib/python3.11/site-packages/torch/_inductor/utils.py", line 1977, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_yorick/3s/c3spesdi5qxuareundjfschy7f3ctwcp7uwflyqcfmv24amwvwxj.py", line 83, in call
assert_size_stride(buf0, (8, 16, 1024), (16384, 1024, 1))
AssertionError: expected size 16==16, stride 1==1024 at dim=1; expected size 1024==1024, stride 16==1 at dim=2
```
### Versions
(it also happens on `2.6.0.dev20241210+cu126` nightly)
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: NixOS 24.11 (Vicuna) (x86_64)
GCC version: (GCC) 13.3.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.40
Python version: 3.11.10 (main, Sep 7 2024, 01:03:31) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.63-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000
Nvidia driver version: 565.77
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 3950X 16-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 64%
CPU max MHz: 3500.0000
CPU min MHz: 2200.0000
BogoMIPS: 6999.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnx2torch==1.5.15
[pip3] onnxruntime==1.20.1
[pip3] open_clip_torch==2.29.0
[pip3] rotary-embedding-torch==0.8.6
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchcrepe==0.0.23
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,736,392,325 | angular | Inconsistency between "Accessing query parameters and fragments" headline and provided example | ### Describe the problem that you experienced
The current documentation under the headline "Accessing query parameters and fragments" does not actually demonstrate how to handle query parameters or fragments. Instead, it focuses on accessing a route (path) parameter and navigating to a route with a path parameter.
For example, the text states:
"Sometimes, a feature of your application requires accessing a part of a route, such as a query parameter or a fragment."
However, the example and code snippets only show how to retrieve a path parameter (e.g., hero/:id) and do not include any instructions or examples related to query parameters (?key=value) or URL fragments (#fragment).
I think accessing all 3 kinds of parameters is important and should be explained on the "Common Routing Tasks" page.
### Enter the URL of the topic with the problem
https://angular.dev/guide/routing/common-router-tasks#accessing-query-parameters-and-fragments
### Describe what you were looking for in the documentation
I was looking in the guide for how the docs use path params, query params, and fragments.
Especially because `ActivatedRoute` has a `queryParamMap` and a `paramMap` (and not a `pathParamMap`).
### Describe the actions that led you to experience the problem
It's the default.
### Describe what you want to experience that would fix the problem
I think accessing all 3 kinds of parameters is important and should be explained on the "Common Routing Tasks" page (see the sketch below).
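For illustration, here is a minimal sketch of the kind of example the page could add (the component and route names are made up; the `ActivatedRoute` members used are `snapshot.paramMap`, `queryParamMap`, and `fragment`):
```typescript
import { Component, inject } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({
  selector: 'app-hero-detail',
  standalone: true,
  template: '<p>hero detail</p>',
})
export class HeroDetailComponent {
  private route = inject(ActivatedRoute);

  constructor() {
    // Path parameter, e.g. /hero/42 for the route 'hero/:id'
    const id = this.route.snapshot.paramMap.get('id');

    // Query parameters, e.g. /hero/42?tab=stats
    this.route.queryParamMap.subscribe(params => {
      console.log(id, params.get('tab'));
    });

    // Fragment, e.g. /hero/42#weapons
    this.route.fragment.subscribe(fragment => {
      console.log(fragment);
    });
  }
}
```
An example along these lines would cover all three access patterns (path parameter, query parameters, fragment) in one place.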
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
``` | area: docs | low | Critical |
2,736,405,054 | vscode | VSCode launch failed (code 43) (conflict with MacType?) | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96?
- OS Version: Windows 11
Steps to Reproduce:
1. Ensure there are no VS Code instances running
2. Try to open VS Code "indirectly", e.g. via GitHub Desktop "Open in Visual Studio Code" or via command line `code`
3. Immediately see error; this error holds for a short while, and then the entire window disappears
Directly opening VS Code from e.g. the start menu works, but that defeats the entire purpose of being able to start VS Code from other places. If somehow there is already a VS Code window running, then the above error doesn't occur.
Example error window:

Normally, even `code --crash-reporter-directory` as mentioned in https://github.com/microsoft/vscode/wiki/Native-Crash-Issues#creating-a-crash-report can result in the same error, and this will NOT result in crash dumps (???), but I am lucky enough that it sometimes passes through, and now I have a crash dump for this error.
**Crash dump**:
[20241212_vscode_dump.zip](https://github.com/user-attachments/files/18114765/20241212_vscode_dump.zip)
For further context, my Windows 11 has OneDrive cloud backup enabled. I notice this cloud backup may somehow hold file locks that it should not (especially when it has completed the backup), which can sometimes block other apps, e.g. Office Word, from saving a file because the file is "in use" by OneDrive. Not sure if this is relevant to this problem. | bug,freeze-slow-crash-leak,windows,sandbox,mitigated | low | Critical |
2,736,411,159 | ollama | Windows NUMA 4 socket, 144 core system, default thread count causes very poor performance | ### What is the issue?
A 135M-parameter model only yielded 4 words after running for 3.5 hours on one 36-core CPU @ 100% load.
A 3.8B model yielded only 10 words after 10.5 hours on the same machine.
Prompt in both cases: "Introduce yourself."
Windows Server 2016 OS (direct install, no Docker).
Ollama 0.5.1 on 4 Xeon Gold 6140 CPUs (144 logical cores in total) and 768 GB of system RAM (6-channel, NUMA architecture).
No GPU.
Tried two small LLMs for starters, namely smollm:135m and phi3.5 (3.8B).
The correct runner for that CPU type was loaded (cpu_avx2).
smollm:135m was saying (after 3.5 h): "I'm thrilled to introduce..."
phi3.5 (3.8B) was saying (after 10.5 h): "Hello! I am Phi, an artificial intelligence designed to interact..."
I have run larger LLMs with Q4 and FP16 quantizations on a much older server machine running Windows 10 with dual Xeons 5600 (Intel Westmere, no AVX), 288 GB of RAM (and no GPU), and the "cpu" runner worked fine. Indeed, a 30B Q4 model runs very slowly (~one word/second), but nothing like one word per hour!
On the newer machine (Win Server 2016), Ollama seems to run 288 parallel threads on one of the four 36-core (logical) CPUs; here's an excerpt from the server.log:
time=2024-12-12T16:47:26.192+01:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=288
On the older machine (Win 10 Pro x64), Ollama used both CPUs and the load peaked at ~60%. RAM is DDR3 @ 1333 MHz, 3 channels/CPU (6 channels for DDR4 @ 2666 MHz on the newer machine).
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.5.1 | bug,performance,windows | low | Major |
2,736,438,995 | rust | Detect different borrowing in different sub-expressions the same expression and suggest borrowing when applicable | ```
error[E0308]: mismatched types
--> compiler/rustc_middle/src/ty/context.rs:3212:73
|
3210 | match expr.kind {
| --------- this expression has type `rustc_hir::ExprKind<'_>`
3211 | hir::ExprKind::Path(qpath)
| ----- first introduced with type `rustc_hir::QPath<'_>` here
3212 | | hir::ExprKind::Call(hir::Expr { kind: hir::ExprKind::Path(qpath), .. }, []) => {
| ^^^^^ expected `QPath<'_>`, found `&QPath<'_>`
|
= note: in the same arm, a binding must have the same type in all alternatives
```
The above should suggest `match &expr.kind`. | A-diagnostics,P-low,T-compiler,A-suggestion-diagnostics,D-terse | low | Critical |
2,736,440,309 | rust | SIGSEGV/SIGILL while building `rustc` itself in x86_64 apple CI jobs | This does not seem like an isolated case, see label https://github.com/rust-lang/rust/labels/CI-spurious-x86_64-apple-SIGSEGV-SIGILL
---
In the revert PR #134207, non-compiler (and AFAICT unrelated) changes ran into a SIGSEGV while compiling `rustc_codegen_llvm` on the `dist-x86_64-apple` job.
CI job: https://github.com/rust-lang-ci/rust/actions/runs/12298794250/job/34323085670
Log: https://github.com/rust-lang/rust/pull/134207#issuecomment-2539381085
```
[RUSTC-TIMING] rustc_codegen_llvm test:false 62.093
rustc exited with signal: 11 (SIGSEGV)
error: could not compile `rustc_codegen_llvm` (lib)
Caused by:
process didn't exit successfully: `/Users/runner/work/rust/rust/build/bootstrap/debug/rustc /Users/runner/work/rust/rust/build/bootstrap/debug/rustc --crate-name rustc_codegen_llvm --edition=2021 compiler/rustc_codegen_llvm/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C codegen-units=1 --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=31812fcdce1ed1a6 -C extra-filename=-31812fcdce1ed1a6 --out-dir /Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps --target x86_64-apple-darwin -L dependency=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps -L dependency=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/release/deps --extern bitflags=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libbitflags-a1ecb01bec05c607.rmeta --extern itertools=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libitertools-fc8482292e402829.rmeta --extern libc=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/liblibc-71418dd2bae71546.rmeta --extern measureme=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libmeasureme-65f26e237d9cbc69.rmeta --extern object=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libobject-b9fdd0c4eec82855.rmeta --extern rustc_demangle=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_demangle-59d4195c10f8ce59.rmeta --extern rustc_abi=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_abi-8c308e15a91667ae.rmeta --extern rustc_ast=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_ast-6307ddaffbb78c2c.rmeta --extern rustc_attr=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_attr-a64f29895039e85d.rmeta --extern rustc_codegen_ssa=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_codegen_ssa-471fc4cdea12e5aa.rmeta --extern rustc_data_structures=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_data_structures-ad6787aabac41fc7.rmeta --extern rustc_errors=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_errors-ae10afa2bd2798f4.rmeta --extern rustc_fluent_macro=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/release/deps/librustc_fluent_macro-e27f644eab9503e4.dylib --extern rustc_fs_util=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_fs_util-1af14c49c485068e.rmeta --extern rustc_hir=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_hir-ddd2bd35821eba08.rmeta --extern rustc_index=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_index-ba2f5cf42e80b6b3.rmeta --extern 
rustc_llvm=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_llvm-a806c45190fb0592.rmeta --extern rustc_macros=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/release/deps/librustc_macros-21f9064df7350bce.dylib --extern rustc_metadata=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_metadata-8c77a936e16e5034.rmeta --extern rustc_middle=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_middle-8709a3bcf14547b0.rmeta --extern rustc_query_system=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_query_system-531ae3841194b38a.rmeta --extern rustc_sanitizers=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_sanitizers-21a20f1a3f01e840.rmeta --extern rustc_session=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_session-63c14fae59e29aee.rmeta --extern rustc_span=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_span-2d8da5b611828dda.rmeta --extern rustc_symbol_mangling=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_symbol_mangling-595e7538c1ed6229.rmeta --extern rustc_target=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/librustc_target-bcb3e09908ca7102.rmeta --extern serde=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libserde-632f5b42d8037b7d.rmeta --extern serde_json=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libserde_json-b239b225d337f1f5.rmeta --extern smallvec=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libsmallvec-e8983336671b9a36.rmeta --extern tracing=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/deps/libtracing-6a46015c9b81a7be.rmeta --cfg=windows_raw_dylib -Csymbol-mangling-version=v0 -Zunstable-options '--check-cfg=cfg(bootstrap)' '--check-cfg=cfg(llvm_enzyme)' -Zmacro-backtrace -Csplit-debuginfo=unpacked '-Wrustc::internal' '-Drustc::symbol_intern_string_literal' -Wkeyword_idents_2024 -Wunsafe_op_in_unsafe_fn -Zosx-rpath-install-name '-Clink-args=-Wl,-rpath,@loader_path/../lib' -Zon-broken-pipe=kill -Zdylib-lto -Clto=thin -Cembed-bitcode=yes -Z binary-dep-depinfo -L native=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/build/psm-c80cca6b00966791/out -L native=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/build/blake3-dcfcf6f77ce37e35/out -L native=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/build/blake3-dcfcf6f77ce37e35/out -L native=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/stage1-rustc/x86_64-apple-darwin/release/build/rustc_llvm-d5c0fad7751ae19c/out -L native=/Users/runner/work/rust/rust/build/x86_64-apple-darwin/llvm/lib` (exit status: 254)
warning: build failed, waiting for other jobs to finish...
[RUSTC-TIMING] rustc_hir_analysis test:false 291.587
```
Unfortunately, I don't have a repro or details, and it might be spurious (which is still not good). Opening this to keep a record in case anyone else runs into it, and in case it turns out to be a genuine issue (like a miscompile). | I-crash,O-x86_64,T-compiler,A-spurious,C-bug,S-needs-repro,O-apple,CI-spurious-x86_64-apple-SIGSEGV-SIGILL | low | Critical |
2,736,447,987 | terminal | GitHub Copilot on Terminal Chat might be adding some shortcut hotkey | ### Description of the new feature
It would be helpful if the `Clear the message history` action could be bound to a hotkey, for example `Ctrl+L`.

### Proposed technical implementation details
_No response_ | Issue-Feature,Product-Terminal,Needs-Tag-Fix,Priority-3,Area-Chat | low | Minor |
2,736,477,871 | pytorch | MPS incompatibility: Calls into the C++ engine to run the backward pass | ### 🐛 Describe the bug
I've been building a CLIP model.
This is my architecture:
```python
from transformers import DistilBertModel
import torch
import torch.nn as nn


class TextEncoderHead(nn.Module):
    def __init__(self, model):
        super(TextEncoderHead, self).__init__()
        self.model = model
        self.seq1 = nn.Sequential(
            nn.Linear(768, 512),
            nn.LayerNorm(512)
        )

    def forward(self, input_ids, attention_mask):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
        outputs = outputs.last_hidden_state.mean(dim=1)
        outputs = self.seq1(outputs)
        return outputs.contiguous()


class ImageEncoderHead(nn.Module):
    def __init__(self, model):
        super(ImageEncoderHead, self).__init__()
        self.model = model
        self.seq1 = nn.Sequential(
            nn.Linear(768, 512),
            nn.LayerNorm(512)
        )

    def forward(self, pixel_values):
        outputs = self.model(pixel_values=pixel_values)
        outputs = outputs.last_hidden_state.mean(dim=1)
        outputs = self.seq1(outputs)
        return outputs.contiguous()


class CLIPChemistryModel(nn.Module):
    def __init__(self, text_encoder, image_encoder):
        super(CLIPChemistryModel, self).__init__()
        self.text_encoder = text_encoder
        self.image_encoder = image_encoder

    def forward(self, image, input_ids, attention_mask):
        # calculate the embeddings
        ie = self.image_encoder(image)
        te = self.text_encoder(input_ids, attention_mask)
        return ie, te
```
I have this trainer and loss function
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from tqdm import tqdm


def trainer_fn(model, dataloader_train, dataloader_val, epochs, loss_fn, optimizer, device):
    total_loss_train = []
    total_loss_val = []
    model.to(device)
    for epoch in tqdm(range(epochs), desc="Training..."):
        # MODEL TRAINING
        model.train()
        running_loss = 0
        counter = 0
        for batch in dataloader_train:
            image, input_ids, attention_mask = batch
            image = image.to(device)
            input_ids = input_ids.to(device)
            attention_mask = attention_mask.to(device)
            # Check whether the input tensors are contiguous
            print(f"image is contiguous: {image.is_contiguous()}")
            print(f"input_ids is contiguous: {input_ids.is_contiguous()}")
            print(f"attention_mask is contiguous: {attention_mask.is_contiguous()}")
            # Forward pass
            image_embeddings, text_embeddings = model(image, input_ids, attention_mask)
            # Check whether the embedding tensors are contiguous
            print(f"image_embeddings is contiguous: {image_embeddings.is_contiguous()}")
            print(f"text_embeddings is contiguous: {text_embeddings.is_contiguous()}")
            # Calculate the loss
            loss = loss_fn(image_embeddings, text_embeddings)
            print(loss)
            # Backward pass
            loss.backward()
            # Optimize the weights
            optimizer.step()
            # Zero the gradients
            optimizer.zero_grad()
            # Update the learning rate
            running_loss += loss.item()
            counter += 1
            print(counter)
        total_loss_train.append(running_loss / counter)

        # MODEL EVALUATION
        model.eval()
        running_vloss = 0
        vcounter = 0
        with torch.no_grad():
            for batch in dataloader_val:
                image, input_ids, attention_mask = batch
                image = image.to(device)
                input_ids = input_ids.to(device)
                attention_mask = attention_mask.to(device)
                # forward pass
                image_embeddings, text_embeddings = model(image, input_ids, attention_mask)
                print(f"image_embeddings is contiguous: {image_embeddings.is_contiguous()}")
                print(f"text_embeddings is contiguous: {text_embeddings.is_contiguous()}")
                # calculate the loss
                loss = loss_fn(image_embeddings=image_embeddings, text_embeddings=text_embeddings)
                running_vloss += loss.item()
                vcounter += 1
        total_loss_val.append(running_vloss / vcounter)

        # PRINT THE LOSS
        print(f"Epoch {epoch+1} - Train Loss: {total_loss_train[-1]} - Validation Loss: {total_loss_val[-1]}")


def contrastive_loss(image_embeddings, text_embeddings, temperature=1.0):
    """
    Compute contrastive loss between image and text embeddings.
    """
    temperature = torch.tensor(temperature, device=image_embeddings.device).float()
    image_embeddings = image_embeddings.contiguous().float()
    text_embeddings = text_embeddings.contiguous().float()
    batch_size = image_embeddings.shape[0]
    image_embeddings = F.normalize(image_embeddings, p=2, dim=1)
    text_embeddings = F.normalize(text_embeddings, p=2, dim=1)
    logits = torch.einsum('nc,mc->nm', [image_embeddings, text_embeddings])
    logits = logits * torch.exp(temperature)
    labels = torch.arange(batch_size, device=image_embeddings.device)
    loss_i2t = F.cross_entropy(logits, labels)
    loss_t2i = F.cross_entropy(logits.t(), labels)
    loss = (loss_i2t + loss_t2i) / 2
    return loss
```
When I run on MPS, this causes the following error:
```
Traceback (most recent call last):
  File "/Users/sebastianalejandrosarastizambonino/Documents/projects/CLIP_Pytorch/src/trainer.py", line 74, in <module>
    main()
  File "/Users/sebastianalejandrosarastizambonino/Documents/projects/CLIP_Pytorch/src/trainer.py", line 61, in main
    trainer_fn(
  File "/Users/sebastianalejandrosarastizambonino/Documents/projects/CLIP_Pytorch/src/utils.py", line 37, in trainer_fn
    loss.backward()
  File "/opt/anaconda3/envs/clip/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
    torch.autograd.backward(
  File "/opt/anaconda3/envs/clip/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
    _engine_run_backward(
  File "/opt/anaconda3/envs/clip/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
If I switch to CPU it works fine. Does anyone know why this happens?
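For reference, the error class named in the message can be reproduced standalone (this only illustrates the view-vs-reshape rule, not the MPS-specific behavior):
```python
import torch

# A transposed tensor is a non-contiguous view, so .view() raises the same
# "view size is not compatible ..." error, while .reshape() copies if needed.
t = torch.randn(4, 5).t()
print(t.is_contiguous())    # False
try:
    t.view(20)
except RuntimeError as e:
    print(e)                # view size is not compatible with input tensor's size and stride ...
print(t.reshape(20).shape)  # torch.Size([20])
```
In my repro the failing call is inside autograd's backward on MPS, so the question is why a non-contiguous view shows up there but not on the CPU path.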
### Versions
torch 2.5.1
macos 15.1
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,module: regression,module: mps | low | Critical |
2,736,495,479 | react | [Compiler Bug]: Handle TSInstantiationExpression expressions | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhHCA7MAXABABwEMYwFcBeXAHgCUAaXAQQD4AKAN0IBsoFkmAlPxoVmuYAB0MuXAHpZBYqVwAzKBjjYAlplwB3LdgAWuSAFsyZwgHMtcXFqxaAJggB0HqTJgJssaZw8CFIAvlJSCAAe+BAweGoa2roAKgg4AMIQZvhaXAgwrALiXrjoWHjWvgAihNiEFLiForgACjBZWqRuPpBc7AisANoAugLh0nIAVLjJRp2lWTl5YKqOZJOyJWU4uM619ZSV2DV1hW7GCBisRCQIYxglstOz88bteiuE0vntMLgbjwU2zwezqDSOJ0IZwuVxupCoOBgjmsI2Y9xKPj8MGkVEYGAgFxgmWymEueECvHIwFBhBCcmYoSkIBCQA
### Repro steps
Basically, just use TypeScript's instantiation expressions inside a component (see the sketch below).
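For reference, a minimal sketch of the construct (the helper and component names below are made up, not taken from the linked playground):
```tsx
function identity<T>(value: T): T {
  return value;
}

export function Example() {
  // `identity<string>` without a call is a TSInstantiationExpression (TS 4.7+),
  // and this is the node the compiler currently fails to handle.
  const idString = identity<string>;
  return <span>{idString("hello")}</span>;
}
```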
### How often does this bug happen?
Every time
### What version of React are you using?
19
### What version of React Compiler are you using?
19.0.0-beta-37ed2a7-20241206 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,736,496,041 | kubernetes | Failed to get rootfs info if the host machine uses bcachefs raid array | ### What happened?
If you create a host machine with bcachefs file system with multiple drives:
```bash
โฏ bcachefs format /dev/nvme0n1p2 /dev/nvme1n1p2 --replicas=2
โฏ mount -t bcachefs /dev/nvme0n1p2:/dev/nvme1n1p2 /mnt
```
In every Linux tool, the filesystem device shows up as `/dev/nvme0n1p2:/dev/nvme1n1p2`.
```bash
โฏ mount
/dev/nvme0n1p2:/dev/nvme1n1p2 on / type bcachefs (rw,relatime,metadata_replicas=2,data_replicas=2,compression=lz4,background_compression=lz4,nopromote_whole_extents,fix_errors=yes,nojournal_transaction_names)
โฏ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/nvme0n1p2:/dev/nvme1n1p2 3593522709 1260627309 2297004701 36% /
```
As you might know, Docker reports this filesystem source if you add any unnamed volumes to your container. And this brings us to the k8s issue: if you create a cluster, e.g. with kind or minikube, it starts a kubelet service in the container. Kubelet tries to read all mounts and fails to parse the bcachefs filesystem:
```
kubelet.go:1566] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get mount point for device \"/dev/nvme0n1p2:/dev/nvme1n1p2\": no partition info for device \"/dev/nvme0n1p2:/dev/nvme1n1p2\""
```
What kubelet _could_ do is simply take the first device from the list and use that (a rough sketch of that fallback is below). But instead it just crashes, rendering systems such as kind or minikube unusable.
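For illustration, a minimal sketch of that fallback (my assumption, not the actual kubelet/cAdvisor code): split the bcachefs multi-device source on `:` and look up partition info for the first member device.
```go
package main

import (
    "fmt"
    "strings"
)

// primaryDevice returns the first member of a bcachefs multi-device source
// such as "/dev/nvme0n1p2:/dev/nvme1n1p2"; single-device sources pass through.
func primaryDevice(source string) string {
    if i := strings.Index(source, ":"); i > 0 && strings.HasPrefix(source, "/dev/") {
        return source[:i]
    }
    return source
}

func main() {
    fmt.Println(primaryDevice("/dev/nvme0n1p2:/dev/nvme1n1p2")) // /dev/nvme0n1p2
}
```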
### What did you expect to happen?
Kubelet should not care about the filesystem device name format. The system should start even when using bcachefs.
### How can we reproduce it (as minimally and precisely as possible)?
Spin up a virtual machine with two disks. It's probably easiest to run on NixOS:
https://nixos.wiki/wiki/Bcachefs
Create the root volume with
```bash
โฏ bcachefs format /dev/nvme0n1p2 /dev/nvme1n1p2 --replicas=2
โฏ mount -t bcachefs /dev/nvme0n1p2:/dev/nvme1n1p2 /mnt
```
Continue the installation as instructed.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
```
</details>
### Cloud provider
<details>
Local machine.
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
# paste output here
Linux docker-container 6.12.3-cachyos #1-NixOS SMP PREEMPT_DYNAMIC Fri Dec 6 06:20:46 UTC 2024 x86_64 GNU/Linux
```
</details>
### Install tools
_No response_
### Container runtime (CRI) and version (if applicable)
_No response_
### Related plugins (CNI, CSI, ...) and versions (if applicable)
_No response_ | kind/bug,priority/backlog,sig/node,triage/accepted | low | Critical |
2,736,508,208 | TypeScript | Scoped module declarations | ### 🔍 Search Terms
scoped module declarations, scoped globals, scoping global types
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
In today's web ecosystem, which keeps getting more complex and feature-rich, it's very hard to get the desired functionality by declaring normal modules. I will cover one primary example so you can understand the issue and how it would be solved, using Vitest's global test context.
```ts
declare module "vitest" {
  export interface TestContext {
    foo: typeof foo
    bar: typeof bar
  }
}
```
This makes it globally available in all test suites. BUT, here's the problem: let's say you add integration tests as well, and you have another setup file that adds the following to the test context:
```ts
declare module "vitest" {
  export interface TestContext {
    integration: typeof integration
  }
}
```
Well, this creates an issue: whenever you try to access these globally defined context properties, you have no idea whether they are actually defined in the context they are running in, because the normal test suite has `foo` and `bar`, while the integration test suite has `integration` defined in the context. If you try to use `foo` in integration tests, TS wouldn't complain, but it would fail at runtime.
My idea is the following:
```ts
// Define a unique `scope` keyword that scopes the following module augmentation to specific files that match a glob pattern
declare module "vitest" scope "**/*.unit.test.*" {
  export interface TestContext {
    foo: typeof foo
    bar: typeof bar
  }
}
```
This would apply the augmentation ONLY in files that match the scope (in this case `.unit.test.*` files), so `foo` and `bar` would be usable there, and trying to use them anywhere else wouldn't type-check. So to fix the problem from above, one would simply do the following:
```ts
// This overrides the unit.test suite
declare module "vitest" scope "**/*.unit.test.*" {
  export interface TestContext {
    foo: typeof foo
    bar: typeof bar
  }
}

// This overrides the integration.test suite
declare module "vitest" scope "**/*.integration.test.*" {
  export interface TestContext {
    integration: typeof something
  }
}
```
now in your integration tests you could do:
```ts
it("works", ({ integration }) => {
// this is defined
integration.something
})
```
and in your unit tests:
```ts
it("works", ({ renderDevTools, integration }) => {
// this is defined
renderDevTools
// this is NOT defined and Typescript would complain
integration
})
```
Other useful use cases (a rough syntax sketch follows this list):
- `*.server.ts` => make `window` object undefined
- `*.client.ts` => make `process` object undefined
- help framework authors / normal users declare globals in specific files where they are exposed
- help framework authors / normal users inject custom utilities into specific functions/files
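To illustrate the "declare globals in specific files" bullet, a purely hypothetical syntax sketch (neither `scope` nor a scoped `declare global` exists today):
```ts
// Hypothetical: these globals would only be visible in files matching the glob.
declare global scope "**/*.server.ts" {
  var requestId: string;
}

declare global scope "**/*.client.ts" {
  interface Window {
    analytics: { track(event: string): void };
  }
}
```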
### 📃 Motivating Example
- Allows you to create multiple overrides in specific contexts giving you actual typesafety instead of a potential footgun
- Allows framework authors to exclude/include specific utilities/globals in places where they should be included/omitted for you, without you needing to guard against it (like `window` and `process` objects in SSR frameworks like Remix/React Router and Next.js)
- Allows third party tooling to provide you with scoped globals without you having to configure anything manually
- and more...
### 💻 Use Cases
1. To make `process` object undefined in client only files
2. To make `window` object undefined in server only files
3. There are no real workarounds for this; the only thing you can do today is type the object as possibly `undefined` so you don't forget to guard it with `?.`, which keeps it from crashing your application if it's used somewhere it isn't supposed to be used.
| Suggestion,Needs Proposal | low | Critical |
2,736,513,497 | kubernetes | Don't use namespace index when btree is enabled | ### What would you like to be added?
There are two competing improvements to the performance of namespaced LIST:
* StorageNamespaceIndex - introduced to Beta in 1.30, configures pod resource to index by namespace.
* BtreeWatchCache - introduced to Beta in 1.31, allows prefix based listing which includes namespace. Removes need to filter and sort results from cache.
By default, the apiserver watch cache will try to use an index if available, so `StorageNamespaceIndex` will be preferred. However, with the introduction of the btree it is faster to list from the btree than from the index.
Benchmark results for the current scalability envelope:
```
$ benchstat -col /Store,/Indexed -filter '/Paginate:0 -/RV:Exact' ./benchmark/master2.txt
goos: linux
goarch: amd64
pkg: k8s.io/apiserver/pkg/storage/cacher
cpu: AMD Ryzen Threadripper PRO 3945WX 12-Cores
โ Btree โ Map โ
โ true โ false โ true โ false โ
โ sec/op โ sec/op vs base โ sec/op vs base โ sec/op vs base โ
StoreList/Namespaces=10000/Pods=150000/Nodes=5000/RV=/Scope=Namespace/Paginate=0-24 20.77ยต ยฑ 10% 20.14ยต ยฑ 13% ~ (p=1.000 n=10) 19.73ยต ยฑ 6% ~ (p=0.105 n=10) 1067.34ยต ยฑ 10% +5037.73% (p=0.000 n=10)
StoreList/Namespaces=10000/Pods=150000/Nodes=5000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 3.943ยต ยฑ 6% 3.928ยต ยฑ 6% ~ (p=0.218 n=10) 3.665ยต ยฑ 3% -7.05% (p=0.000 n=10) 944.641ยต ยฑ 1% +23857.41% (p=0.000 n=10)
StoreList/Namespaces=50/Pods=150000/Nodes=5000/RV=/Scope=Namespace/Paginate=0-24 303.3ยต ยฑ 2% 258.2ยต ยฑ 2% -14.85% (p=0.000 n=10) 340.1ยต ยฑ 3% +12.15% (p=0.000 n=10) 1668.6ยต ยฑ 4% +450.23% (p=0.000 n=10)
StoreList/Namespaces=50/Pods=150000/Nodes=5000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 286.2ยต ยฑ 3% 234.7ยต ยฑ 1% -17.99% (p=0.000 n=10) 326.9ยต ยฑ 2% +14.22% (p=0.000 n=10) 1347.7ยต ยฑ 4% +370.91% (p=0.000 n=10)
StoreList/Namespaces=100/Pods=110000/Nodes=1000/RV=/Scope=Namespace/Paginate=0-24 125.3ยต ยฑ 2% 112.3ยต ยฑ 5% -10.38% (p=0.000 n=10) 137.5ยต ยฑ 2% +9.81% (p=0.000 n=10) 1395.1ยต ยฑ 8% +1013.78% (p=0.000 n=10)
StoreList/Namespaces=100/Pods=110000/Nodes=1000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 120.6ยต ยฑ 2% 113.2ยต ยฑ 1% -6.13% (p=0.000 n=10) 133.8ยต ยฑ 1% +10.92% (p=0.000 n=10) 1719.1ยต ยฑ 5% +1325.35% (p=0.000 n=10)
geomean 68.94ยต 62.73ยต -9.02% 72.72ยต +5.48% 1.326m +1823.40%
โ Btree โ Map โ
โ true โ false โ true โ false โ
โ B/op โ B/op vs base โ B/op vs base โ B/op vs base โ
StoreList/Namespaces=10000/Pods=150000/Nodes=5000/RV=/Scope=Namespace/Paginate=0-24 28.43Ki ยฑ 3% 27.91Ki ยฑ 0% -1.82% (p=0.000 n=10) 28.41Ki ยฑ 0% ~ (p=0.671 n=10) 2371.96Ki ยฑ 0% +8243.54% (p=0.000 n=10)
StoreList/Namespaces=10000/Pods=150000/Nodes=5000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 11.69Ki ยฑ 0% 11.16Ki ยฑ 0% -4.56% (p=0.000 n=10) 11.69Ki ยฑ 0% ~ (p=0.065 n=10) 2355.11Ki ยฑ 0% +20047.27% (p=0.000 n=10)
StoreList/Namespaces=50/Pods=150000/Nodes=5000/RV=/Scope=Namespace/Paginate=0-24 1.790Mi ยฑ 0% 1.742Mi ยฑ 0% -2.64% (p=0.000 n=10) 1.790Mi ยฑ 0% ~ (p=0.579 n=10) 4.031Mi ยฑ 0% +125.24% (p=0.000 n=10)
StoreList/Namespaces=50/Pods=150000/Nodes=5000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 1.773Mi ยฑ 0% 1.726Mi ยฑ 0% -2.66% (p=0.000 n=10) 1.773Mi ยฑ 0% ~ (p=0.424 n=10) 4.015Mi ยฑ 0% +126.46% (p=0.000 n=10)
StoreList/Namespaces=100/Pods=110000/Nodes=1000/RV=/Scope=Namespace/Paginate=0-24 690.1Ki ยฑ 0% 671.9Ki ยฑ 0% -2.64% (p=0.000 n=10) 690.3Ki ยฑ 0% ~ (p=0.089 n=10) 2391.7Ki ยฑ 0% +246.56% (p=0.000 n=10)
StoreList/Namespaces=100/Pods=110000/Nodes=1000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 673.3Ki ยฑ 0% 655.0Ki ยฑ 0% -2.72% (p=0.000 n=10) 673.3Ki ยฑ 0% ~ (p=0.516 n=10) 2374.9Ki ยฑ 0% +252.74% (p=0.000 n=10)
geomean 283.0Ki 274.9Ki -2.84% 283.0Ki +0.00% 2.785Mi +907.87%
โ Btree โ Map โ
โ true โ false โ true โ false โ
โ allocs/op โ allocs/op vs base โ allocs/op vs base โ allocs/op vs base โ
StoreList/Namespaces=10000/Pods=150000/Nodes=5000/RV=/Scope=Namespace/Paginate=0-24 325.0 ยฑ 0% 319.0 ยฑ 0% -1.85% (p=0.000 n=10) 325.0 ยฑ 0% ~ (p=0.736 n=10) 322.0 ยฑ 1% -0.92% (p=0.016 n=10)
StoreList/Namespaces=10000/Pods=150000/Nodes=5000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 68.00 ยฑ 0% 62.00 ยฑ 0% -8.82% (p=0.000 n=10) 68.00 ยฑ 0% ~ (p=1.000 n=10) 63.00 ยฑ 0% -7.35% (p=0.000 n=10)
StoreList/Namespaces=50/Pods=150000/Nodes=5000/RV=/Scope=Namespace/Paginate=0-24 357.0 ยฑ 3% 348.0 ยฑ 2% -2.52% (p=0.003 n=10) 358.5 ยฑ 2% ~ (p=0.669 n=10) 343.0 ยฑ 1% -3.92% (p=0.000 n=10)
StoreList/Namespaces=50/Pods=150000/Nodes=5000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 88.00 ยฑ 1% 83.00 ยฑ 1% -5.68% (p=0.000 n=10) 88.00 ยฑ 1% ~ (p=1.000 n=10) 83.00 ยฑ 0% -5.68% (p=0.000 n=10)
StoreList/Namespaces=100/Pods=110000/Nodes=1000/RV=/Scope=Namespace/Paginate=0-24 344.0 ยฑ 2% 340.0 ยฑ 2% -1.16% (p=0.038 n=10) 347.0 ยฑ 3% ~ (p=0.117 n=10) 338.0 ยฑ 0% -1.74% (p=0.001 n=10)
StoreList/Namespaces=100/Pods=110000/Nodes=1000/RV=NotOlderThan/Scope=Namespace/Paginate=0-24 85.00 ยฑ 1% 79.00 ยฑ 0% -7.06% (p=0.000 n=10) 85.00 ยฑ 0% ~ (p=0.087 n=10) 79.00 ยฑ 0% -7.06% (p=0.000 n=10)
geomean 165.2 157.6 -4.56% 165.5 +0.21% 157.8 -4.48%
```
Those result show:
* Namespace index is hugely beneficial when using map
* With btree disabling namespace index brings 4-17% performance improvement
/cc @jpbetz @deads2k @wojtek-t
/sig api-machinery
### Why is this needed?
Simplify and improve performance of storage layer. With btree, namespace indexing is not needed. | sig/api-machinery,kind/feature,triage/accepted | low | Major |
2,736,550,372 | terminal | Make sure foo.ico,0 works as an icon specifier | ### Windows Terminal version
1.21.3231.0
### Windows build number
10.0.19045.5198
### Other Software
_No response_
### Steps to reproduce
Change default PNG resource with any std Windows ICO resource and you get one behavior. Add ",0" at the end for another behavior. Both together would be correct as a whole. Individually, both are wrong.
[Screenshots](https://postimg.cc/gallery/pP9SHw6)
### Expected Behavior
Correct usage of the icon resource for the given place it's supposed to be displayed.
### Actual Behavior
Wrong or no icon displayed in taskbar jump list or in app. | Issue-Bug,Area-UserInterface,Product-Terminal,Needs-Tag-Fix,Priority-3 | low | Major |
2,736,589,961 | go | x/vulndb/internal/worker/log: TestGoogleCloudHandler failures | ```
#!watchflakes
default <- pkg == "golang.org/x/vulndb/internal/worker/log" && test == "TestGoogleCloudHandler"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8728801344753110513)):
=== RUN TestGoogleCloudHandler
log_test.go:28:
got {"time":"2024-12-11T16:30:47-08:00","severity":"INFO","message":"hello","logging.googleapis.com/trace":"tid","foo":"bar","count":17}
want {"time":"2024-12-11T16:30:46-08:00","severity":"INFO","message":"hello","logging.googleapis.com/trace":"tid","foo":"bar","count":17}
--- FAIL: TestGoogleCloudHandler (0.00s)
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,vulncheck or vulndb | low | Critical |
2,736,591,853 | PowerToys | Add greater than to the possible key remapping | ### Description of the new feature / enhancement
Hi,
In **PowerToys Keyboard Manager**, remapping the "<" key is straightforward, but ">" must be done by using the SHIFT+ combination for "<" (not very intuitive).
I am writing this from a laptop keyboard that (for some strange reason) has neither "<" nor ">", but I have seen some keyboards where the combination is exactly AltGr+z for "<" and AltGr+x for ">".
For ease of use, ">" should be added to the possible key mappings; this way people could map AltGr+z for "<" and AltGr+x for ">" and possibly use stickers to show the mappings on the keyboard.
PS: Congrats on the PowerToys tools.
### Scenario when this would be used?
"<" and ">" are keys that need to be easily acessible for coding, math formulas, teaching...
This change would make it possible to use keyboard stykers in the usual place reserved for the AltGr combination and visually help users find the keys.
### Supporting information
I've seen numerous posts on this subject of keyboards missing "<" and ">"; many people seem to never even find the SHIFT "trick" in PowerToys to get the ">" sign.
Other tools seem to offer this possibility, but for anything that alters keyboard behavior, I think Microsoft should be the (safest) way.
| Issue-Bug,Product-Keyboard Shortcut Manager,Priority-2 | low | Minor |
2,736,596,840 | go | proposal: slices: MutableValues iterator | ### Proposal Details
It's a common task to iterate over a slice and modify its elements. For the following structure:
```go
type element struct {
    field int
}

var (
    elements = []element{
        {field: 1},
        {field: 2},
        {field: 3},
    }
    doubledElements = []element{
        {field: 2},
        {field: 4},
        {field: 6},
    }
)
```
we have to use one of:
```go
func TestDoubleWithRangeAndIndex(t *testing.T) {
    // change base elements without MutableValues
    // using indexing
    for i, e := range elements {
        e.field *= 2
        elements[i] = e
    }
    if !slices.Equal(elements, doubledElements) {
        t.Fail()
    }
}
```
or
```go
func TestDoubleWithRangeAndPointer(t *testing.T) {
    // change base elements without MutableValues
    // using a pointer to the element (useful in case of nested loops
    // or operations on more than one field)
    for i := range elements {
        e := &elements[i]
        e.field *= 2
    }
    if !slices.Equal(elements, doubledElements) {
        t.Fail()
    }
}
```
The proposal is to add a `MutableValues` iterator to the `slices` package:
```go
func MutableValues[Slice ~[]E, E any](s Slice) iter.Seq[*E] {
    return func(yield func(*E) bool) {
        for i := range s {
            if !yield(&s[i]) {
                return
            }
        }
    }
}
```
Mutating all elements of the slice can then look like:
```go
func TestDoubleWithMutableValuesIterator(t *testing.T) {
    // values in elements will be changed
    for e := range slices.MutableValues(elements) {
        // we can directly change element via pointer
        e.field *= 2
    }
    if !slices.Equal(elements, doubledElements) {
        t.Fail()
    }
}
```
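For contrast, and reusing the `elements`/`doubledElements` fixtures from the snippets above, here is roughly why the existing `slices.Values` (Go 1.23) doesn't cover this case: it yields copies, so the analogous loop compiles but leaves the slice unchanged, which is exactly the gap `MutableValues` closes:
```go
func TestDoubleWithValuesDoesNotMutate(t *testing.T) {
    // slices.Values yields element copies, so this loop does not modify `elements`.
    for e := range slices.Values(elements) {
        e.field *= 2
    }
    if slices.Equal(elements, doubledElements) {
        t.Fail()
    }
}
```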
| Proposal | medium | Major |
2,736,603,389 | vscode | Use unique sounds for code action triggered and applied | At the moment we're re-using sounds; it would be better to have unique ones to reduce confusion.
cc @jooyoungseo, @afre100 | feature-request,accessibility | low | Minor |
2,736,617,423 | vscode | Expose syntactic or semantic tokens with styles to extensions | While VSCode has for a while supported the "text/html" clipboard format[^1], it doesn't expose this in a way that extensions could use. The [Clipboard](https://code.visualstudio.com/api/references/vscode-api#Clipboard) API only allows you to get plain text - can't specify a format - and I can find nothing attached to or "under" `window` or on `Selection`, `TextEditor`, `TextDocument`, etc.
To facilitate an extension that could, for example, render SVG (or an image) a la Carbon[^2], extensions would need to get these tokens and, ideally, the styles currently associated with them based on the current theme. A more robust API, not requiring the user to change the current theme, would instead provide those tokens - based on TextMate grammars or any registered token provider - and, for a list of themes, maps of tokens to styles (this would also be a new API, it seems).
Even just extending the clipboard API to accept an optional clipboard format and return either a `Uint8Array` or raw text (e.g., the raw HTML of the "text/html" clipboard format) would be useful, but it would still require extra steps, since the user has to copy the text first and there appear to be no APIs to copy the current selection to the clipboard - only APIs to get the plain text of the selection(s) and place it in the clipboard as plain text yourself.
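To make the request concrete, a purely hypothetical API shape (none of these calls exist in the current `vscode` API; `editor` is assumed to be an active `TextEditor`):
```ts
// Hypothetical: read a specific clipboard format from an extension.
const html = await vscode.env.clipboard.read("text/html");

// Hypothetical: ask for tokens of the current selection with theme styles attached.
const themedTokens = await vscode.languages.getSemanticTokensWithStyles(
  editor.document,
  editor.selection,
  { theme: "Default Dark Modern" }
);
```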
[^1]: Which seems to be inconsistent on Windows vs. macOS: macOS 18.2 does not contain the "application/vnd.code.copymetadata" or "vscode-editor-data" - or perhaps Safari is suppressing it - nor does it seem to use the full HTML clipboard recommendation of using fragments.
[^2]: Carbon doesn't export SVGs in a way that most apps I've used can correctly consume. It'd also save a lot of time just to be able to export an SVG or rendered image to use in presentations, social media, etc., perhaps with adornments like window chrome similar to what Carbon does. | feature-request,api,semantic-tokens | low | Minor |
2,736,620,437 | rust | alloc_error_handler can be an `unsafe fn` which is then unsoundly invoked | This code currently compiles:
```rust
#![feature(alloc_error_handler)]
#![no_std]
extern crate alloc;
#[alloc_error_handler]
unsafe fn f(_: alloc::alloc::Layout) -> ! {
    core::hint::unreachable_unchecked();
}
```
This is unsound if the alloc error handler ever gets invoked: the runtime calls the handler directly, with no caller discharging any `unsafe` obligation, so the `unreachable_unchecked` above is reached and causes undefined behavior.
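For contrast, a safe handler of the shape the feature's examples use (and presumably the only shape the attribute should accept), assuming the same crate setup as above:
```rust
#[alloc_error_handler]
fn handler(_: alloc::alloc::Layout) -> ! {
    loop {}
}
```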
The alloc_error_handler feature is still unstable, tracking issue: https://github.com/rust-lang/rust/issues/51540 | T-compiler,I-unsound,C-bug,requires-nightly | low | Critical |
2,736,640,387 | rust | bootstrap/compiletest: msys2 msys `--skip` filter behaves strangely | > As I suspected (unfortunately), inside MSYS2 MSYS, the `--skip` on the exact path does not work (I suspect this is a compiletest problem, not bootstrap). However, MSYS2 MSYS also has several other `./x test`-related issues, so this is not worth blocking on MSYS2 problems IMO.
```bash
$ ./x test tests/ui/traits/fn-pointer/bare-fn-no-impl-fn-ptr-99875.rs --stage 1 --skip tests/ui/traits/fn-pointer/bare-fn-no-impl-fn-ptr-99875.rs
running 1 tests
.
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 18066 filtered out; finished in 97.23ms
```
_Originally posted by @jieyouxu in https://github.com/rust-lang/rust/issues/134209#issuecomment-2539760846_
| E-hard,C-enhancement,T-bootstrap,A-compiletest,A-test-infra | low | Critical |
2,736,690,029 | pytorch | Support Mutation on Epilogue Fusion on template | ### 🐛 Describe the bug
It seems that `torch.compile` does not fuse matmul followed by `index_add`, even though scatter (`index_put_`) is considered pointwise and should theoretically be fusable.
When attempting to fuse `op1 (matmul)` -> `op2 (index_put_)`, the inductor throws a fusion error due to `index_put_` involving mutation https://github.com/pytorch/pytorch/blob/0b75b7ff2b8ab8f40e433a52b06a671d6377997f/torch/_inductor/scheduler.py#L3102
Here's the code snippet:
```python
import torch


def matmul(a, b, n, d):
    C = torch.matmul(a, b)
    e = d.index_add_(dim=0, index=n, source=C)
    return e


a = torch.randn(100, 100, device="cuda")
b = torch.randn(100, 100, device="cuda")
d = torch.randn(100, 100, device="cuda")
n = torch.randint(100, (100,), device="cuda")

torch._inductor.epilogue_fusion_first = True
torch._inductor.epilogue_fusion = True
torch._inductor.benchmark_epilogue_fusion = True

compiled = torch.compile(matmul, mode="max-autotune-no-cudagraphs")
result = compiled(a, b, n, d)
```
When I manually inserted scatter into the matmul template, it produced the correct result, which suggests these two operations should be fusable. However, if I remove the mutation check `node2.has_aliasing_or_mutation()`, another error occurs during fusion (see attached).
Is there a strict correctness requirement preventing the fusion of mutation nodes with templates, or is this something that could be addressed?
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @jansel
### Error logs
## Error Logs
```
$ TORCH_LOGS="output_code,fusion" python test.py
warnings.warn(
V1212 13:20:34.264000 791207 site-packages/torch/_inductor/scheduler.py:2272] [0/0] [__fusion] ===== attempting fusion (1/10): 2 nodes =====
V1212 13:20:34.265000 791207 site-packages/torch/_inductor/scheduler.py:2547] [0/0] [__fusion] fuse_nodes_once, candidates:
V1212 13:20:34.265000 791207 site-packages/torch/_inductor/scheduler.py:2549] [0/0] [__fusion] SchedulerNode(name='op0')
V1212 13:20:34.266000 791207 site-packages/torch/_inductor/scheduler.py:2549] [0/0] [__fusion] SchedulerNode(name='op1'), Scatter(['[100, 100]', 'origins=OrderedSet([index_put])'])
V1212 13:20:34.266000 791207 site-packages/torch/_inductor/scheduler.py:2673] [0/0] [__fusion] found 1 possible fusions
V1212 13:20:34.267000 791207 site-packages/torch/_inductor/scheduler.py:2558] [0/0] [__fusion] fusing op0 with op1
V1212 13:20:34.267000 791207 site-packages/torch/_inductor/scheduler.py:2279] [0/0] [__fusion] completed fusion round (1/10): fused 2 nodes into 1 nodes
V1212 13:20:34.267000 791207 site-packages/torch/_inductor/scheduler.py:2279] [0/0] [__fusion]
V1212 13:20:34.267000 791207 site-packages/torch/_inductor/scheduler.py:2286] [0/0] [__fusion] ===== fusion complete (1 iterations) =====
Traceback (most recent call last):
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/__init__.py", line 2234, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1253, in compile_fx
return compile_fx(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1521, in compile_fx
return aot_autograd(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 878, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1913, in compile_to_fn
return self.compile_to_module().call
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1839, in compile_to_module
return self._compile_to_module()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1845, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1784, in codegen
self.scheduler.codegen()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 3383, in codegen
return self._codegen()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 3445, in _codegen
self.get_backend(device).codegen_template(node, epilogue)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/codegen/cuda_combined_scheduling.py", line 75, in codegen_template
return self._triton_scheduling.codegen_template(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_inductor/codegen/simd.py", line 1461, in codegen_template
kernel, render = template_node.node.make_kernel_render(template_node.node)
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jaeyeon/Playground/halide_pointcloud/template_fusion_test/test.py", line 22, in <module>
result = compiled(a, b, n,d)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
1
result = self._inner_convert(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2987, in RETURN_VALUE
self._return(inst)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2972, in _return
self.output.compile_subgraph(
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1117, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1369, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1416, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/jaeyeon/miniconda3/envs/halide/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1465, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
TypeError: 'NoneType' object is not callable
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
## ir_post_fusion.txt
```
op0_op1: FusedSchedulerNode(SchedulerNode,SchedulerNode)
op0_op1.writes =
[ MemoryDep('buf0', c0, {c0: 10000}, None),
MemoryDep('buf1', c1 + 100*tmp0, {c0: 100, c1: 100}, atomic_add)]
op0_op1.unmet_dependencies = []
op0_op1.met_dependencies =
[ MemoryDep('arg3_1', c0, {c0: 100}, None),
StarDep(name='arg0_1', mode=None),
StarDep(name='arg1_1', mode=None),
StarDep(name='arg2_1', mode='atomic_add')]
op0_op1.outputs = [
buf0: MultiTemplateBuffer
buf0.layout = FixedLayout('cuda', torch.float32, size=[100, 100], stride=[100, 1])
buf0.users = [NodeUser(node=SchedulerNode(name='op1'), can_inplace=False, is_weak=False)]
buf1: ComputedBuffer
buf1.layout = MutationLayoutSHOULDREMOVE('cuda', torch.float32, size=[100, 100], stride=[100, 1])
buf1.mutations = ['arg2_1']
buf1.users = [NodeUser(node=OUTPUT, can_inplace=False, is_weak=False)]
]
op0_op1.snodes[0] =
op0: SchedulerNode(MultiTemplateBuffer)
op0.writes = [MemoryDep('buf0', c0, {c0: 10000}, None)]
op0.unmet_dependencies = []
op0.met_dependencies = [StarDep(name='arg0_1', mode=None), StarDep(name='arg1_1', mode=None)]
op0.outputs = [
buf0: MultiTemplateBuffer
buf0.layout = FixedLayout('cuda', torch.float32, size=[100, 100], stride=[100, 1])
buf0.users = [NodeUser(node=SchedulerNode(name='op1'), can_inplace=False, is_weak=False)]
]
op0.group.device = cuda:0
op0.group.iteration = (10000, 1)
op0.sizes = ([100, 100], ())
arg1_1_layout = FixedLayout('cuda', torch.float32, size=[100, 100], stride=[100, 1])
arg0_1_layout = FixedLayout('cuda', torch.float32, size=[100, 100], stride=[100, 1])
buf0_layout = FixedLayout('cuda', torch.float32, size=[100, 100], stride=[100, 1])
op0 Unfinalized multi template buffer
op0_op1.snodes[1] =
op1: SchedulerNode(ComputedBuffer)
op1.writes = [MemoryDep('buf1', c1 + 100*tmp0, {c0: 100, c1: 100}, atomic_add)]
op1.unmet_dependencies = [MemoryDep('buf0', c0, {c0: 10000}, None)]
op1.met_dependencies =
[ MemoryDep('arg3_1', c0, {c0: 100}, None),
StarDep(name='arg2_1', mode='atomic_add')]
op1.outputs = [
buf1: ComputedBuffer
buf1.layout = MutationLayoutSHOULDREMOVE('cuda', torch.float32, size=[100, 100], stride=[100, 1])
buf1.mutations = ['arg2_1']
buf1.users = [NodeUser(node=OUTPUT, can_inplace=False, is_weak=False)]
]
op1.group.device = cuda:0
op1.group.iteration = (10000, 1)
op1.sizes = ([100, 100], [])
arg3_1_layout = FixedLayout('cuda', torch.int64, size=[100], stride=[1])
buf0_layout = FixedLayout('cuda', torch.float32, size=[100, 100], stride=[100, 1])
arg2_1_layout = FixedLayout('cuda', torch.float32, size=[100, 100], stride=[100, 1])
buf1_layout = MutationLayoutSHOULDREMOVE('cuda', torch.float32, size=[100, 100], stride=[100, 1])
class op1_loop_body:
var_ranges = {z0: 100, z1: 100}
index0 = z0
index1 = 100*z0 + z1
index2 = 100*indirect0 + z1
def body(self, ops):
get_index = self.get_index('index0')
load = ops.load('arg3_1', get_index)
set_indirect0 = self.set_indirect0(load)
get_index_1 = self.get_index('index1')
load_1 = ops.load('buf0', get_index_1)
get_index_2 = self.get_index('index2')
store = ops.store('buf1', get_index_2, load_1, 'atomic_add')
return store
op1 Triton code:
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, instance_descriptor, DeviceProperties
@triton_heuristics.pointwise(
size_hints=[16384],
filename=__file__,
triton_meta={'signature': {0: '*i64', 1: '*fp32', 2: '*fp32', 3: 'i32'}, 'device': DeviceProperties(type='cuda', index=0, cc=86, major=8, regs_per_multiprocessor=65536, max_threads_per_multi_processor=1536, multi_processor_count=82), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2, 3), equal_to_1=())]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'Placeholder.DESCRIPTIVE_NAME', 'mutated_arg_names': ['out_ptr0'], 'no_x_dim': False, 'num_load': 2, 'num_reduction': 0, 'backend_hash': 'EB0670E067E4DCC123BDA265AA7F4C62765F41E5CCF5A66A7FA518D5515E450E', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': True, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False},
min_elem_per_thread=0
)
@triton.jit
def triton_(in_ptr0, in_ptr1, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 10000
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x1 = (xindex // 100)
x2 = xindex
x0 = xindex % 100
tmp0 = tl.load(in_ptr0 + (x1), xmask, eviction_policy='evict_last')
tmp6 = tl.load(in_ptr1 + (x2), xmask)
tmp1 = tl.full([XBLOCK], 100, tl.int32)
tmp2 = tmp0 + tmp1
tmp3 = tmp0 < 0
tmp4 = tl.where(tmp3, tmp2, tmp0)
tl.device_assert(((0 <= tmp4) & (tmp4 < 100)) | ~(xmask), "index out of bounds: 0 <= tmp4 < 100")
tl.atomic_add(out_ptr0 + (x0 + (100*tmp4)), tmp6, xmask, sem='relaxed')
op0_op1 Unfinalized multi template buffer
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.31.0-rc1
Libc version: glibc-2.31
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 165
Model name: Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz
Stepping: 5
CPU MHz: 3700.000
CPU max MHz: 5300.0000
CPU min MHz: 800.0000
BogoMIPS: 7399.70
Virtualization: VT-x
L1d cache: 320 KiB
L1i cache: 320 KiB
L2 cache: 2.5 MiB
L3 cache: 20 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Vulnerable: No microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp pku ospke md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] jax-triton==0.2.0
[pip3] numpy==2.0.1
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.1
[pip3] torch_cluster==1.6.3+pt24cu121
[pip3] torch-geometric==2.6.1
[pip3] torch_scatter==2.1.2+pt24cu121
[pip3] torch_sparse==0.6.18+pt24cu121
[pip3] torch_spline_conv==1.2.2+pt24cu121
[pip3] torchaudio==2.5.1
[pip3] torchsparse==2.1.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.55 0 nvidia/label/cuda-12.1.0
[conda] cuda-cudart-dev 12.1.55 0 nvidia/label/cuda-12.1.0
[conda] cuda-cudart-static 12.1.55 0 nvidia/label/cuda-12.1.0
[conda] cuda-cupti 12.1.62 0 nvidia/label/cuda-12.1.0
[conda] cuda-cupti-static 12.1.62 0 nvidia/label/cuda-12.1.0
[conda] cuda-libraries 12.1.0 0 nvidia/label/cuda-12.1.0
[conda] cuda-libraries-dev 12.1.0 0 nvidia/label/cuda-12.1.0
[conda] cuda-libraries-static 12.1.0 0 nvidia/label/cuda-12.1.0
[conda] cuda-nvrtc 12.1.55 0 nvidia/label/cuda-12.1.0
[conda] cuda-nvrtc-dev 12.1.55 0 nvidia/label/cuda-12.1.0
[conda] cuda-nvrtc-static 12.1.55 0 nvidia/label/cuda-12.1.0
[conda] cuda-nvtx 12.1.66 0 nvidia/label/cuda-12.1.0
[conda] cuda-opencl 12.1.56 0 nvidia/label/cuda-12.1.0
[conda] cuda-opencl-dev 12.1.56 0 nvidia/label/cuda-12.1.0
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] jax-triton 0.2.0 pypi_0 pypi
[conda] libcublas 12.1.0.26 0 nvidia/label/cuda-12.1.0
[conda] libcublas-dev 12.1.0.26 0 nvidia/label/cuda-12.1.0
[conda] libcublas-static 12.1.0.26 0 nvidia/label/cuda-12.1.0
[conda] libcufft 11.0.2.4 0 nvidia/label/cuda-12.1.0
[conda] libcufft-dev 11.0.2.4 0 nvidia/label/cuda-12.1.0
[conda] libcufft-static 11.0.2.4 0 nvidia/label/cuda-12.1.0
[conda] libcurand 10.3.2.56 0 nvidia/label/cuda-12.1.0
[conda] libcurand-dev 10.3.2.56 0 nvidia/label/cuda-12.1.0
[conda] libcurand-static 10.3.2.56 0 nvidia/label/cuda-12.1.0
[conda] libcusolver 11.4.4.55 0 nvidia/label/cuda-12.1.0
[conda] libcusolver-dev 11.4.4.55 0 nvidia/label/cuda-12.1.0
[conda] libcusolver-static 11.4.4.55 0 nvidia/label/cuda-12.1.0
[conda] libcusparse 12.0.2.55 0 nvidia/label/cuda-12.1.0
[conda] libcusparse-dev 12.0.2.55 0 nvidia/label/cuda-12.1.0
[conda] libcusparse-static 12.0.2.55 0 nvidia/label/cuda-12.1.0
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] libnvjitlink-dev 12.1.55 0 nvidia/label/cuda-12.1.0
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-fft 1.3.11 pypi_0 pypi
[conda] mkl-random 1.2.8 pypi_0 pypi
[conda] mkl-service 2.4.0 pypi_0 pypi
[conda] mkl_fft 1.3.11 py310h5eee18b_0
[conda] mkl_random 1.2.8 py310h1128e8f_0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] numpy-base 2.0.1 py310hb5e798b_1
[conda] nvidia-cublas-cu12 12.6.3.3 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.23.4 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.77 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pynvjitlink-cu12 0.4.0 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt24cu121 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt24cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt24cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt24cu121 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchsparse 2.1.0 pypi_0 pypi
[conda] torchtriton 3.1.0 py310 pytorch
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
``` | triaged,oncall: pt2,module: inductor | low | Critical |
2,736,729,528 | terminal | SUI Nullable Colors & Icon control Follow-ups | Follow-ups from #17870:
- [x] [A11y] Screen readers don't read expander's preview text
- Tracked in https://github.com/microsoft/microsoft-ui-xaml/issues/10282
- ~Hmm... it might be better to just directly set the `AutomationProperty.Name`~
- PR #18418
- [ ] Add Tab Color to settings UI
- [ ] Update CursorColor preview to display #FFFFFF as "invert"
- [ ] Use Leonard's font picker UI, with the Segoe icon picker, so that you can filter the list
- [ ] [High contrast] we should probably add borders to the color chip previews
### From Bug bash 1/21
- [ ] [Windows 10] Conflicting Windows & app theme causes this:

- [ ] [Windows 10] Checkmark icon is missing!

- [ ] Narrow window = cut off

- [ ] When using an emoji for the Profile icon, I run into this while using certain emojis (shield) though that might just be an issue with the emoji itself (not our problem?)
 | Product-Terminal,Issue-Task,Needs-Tag-Fix,Area-SettingsUI | low | Critical |
2,736,729,758 | storybook | [Bug]: At the end of a test run, the description briefly flashes "Not run" | When running tests, the description goes through these phases:
1. _"Starting..."_
1. _"Testing... X/Y"_
1. _"Not run"_ ๐ **Bug**
1. Empty, causing the UI to jump for a frame or two ๐ **Bug**
2. _"Ran X tests Y seconds ago"_
(The video is slowed down x0.5)
https://github.com/user-attachments/assets/e00ee708-0fbb-47e3-a231-d97c35a5d04e
| bug,ui,sev:S3,addon: test | low | Critical |
2,736,748,539 | ui | [bug]: Missing tailwind config for borderRadius xl | ### Describe the bug
Affects components that use `rounded-xl`, e.g. the `<Card />` component.
See discussion:
https://github.com/shadcn-ui/ui/discussions/4902
### Affected component/components
Card
### How to reproduce
1. Change your global CSS `--radius` to `0rem`
2. You'll see every component except `<Card />` be affected by this change.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Firefox, Mac
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,736,756,260 | svelte | More detailed explanation of why using $effect is not a good idea if it can be avoided | ### Describe the bug
I think the explanation of when not to use `$effect` is rather lacking in terms of the exact reasons why. Instead of just saying `don't do this...`, which is fine, it would be more helpful for learning to understand the exact reasons.
This section: https://svelte.dev/docs/svelte/$effect#When-not-to-use-$effect
Starting a summary of the reasons:
1. Using effects generally adds more overhead, due to how effects and signals work internally, versus using something like simple callbacks. For example:
- Effects have to run at least once regardless whether or not they read any states / signals.
- if you return a function (dispose function) from an $effect, it runs not only when a component is destroyed but every time before the $effect is executed again.
- Even if you don't return a dispose function, internally, before the $effect is run again, its own dispose actions have to run, like removing all previous subscriptions.
- Effects run every time any of the observable states / signals change, and every time subscriptions to these changes have to be set up anew.
2. Mutating states / signals inside effects can cause unintended reads of these states / signals, thus causing unintended subscriptions. In turn, these will lead to infinite loops. Using `untrack()` or async code should remedy such situations, however, it may not be sufficient (see below).
3. If mutating the same states / signals that are observed by the same $effect, even if mutations happen inside `untrack()` or async code, it will lead to an infinite loop. It can be remedied by using conditional mutations.
4. Mutating state inside effects across multiple effects can lead to circular logic. For example, given two states `a` and `b` with 2 separate effects for each, if the effect for `a` mutates `b`, it will cause `b`'s effect to run and if `b`'s effect mutates `a`, it will cause the `a`'s effect to run, thus causing an infinite loop. Conditional mutations can resolve such infinite loops due to circular logic, however, they may be difficult to discover across multiple effects and components.
5. Understanding the timing of when effects run is important to avoid unexpected behavior.
- `$effect()` runs only after components mount or after the template (which is also a type of effect) is updated, whereas `$effect.pre()` is executed before.
- Effects are completely skipped and not executed in SSR mode. Even changing any of the state / signals variables will not cause the effects to run.
In summary, there are 3 main reasons: usage overhead, infinite loops caused by mutations, and timing. I broke out each infinite-loop case separately for easier reference, but they could just be under one section.
Anything else?
Also, the phrase at the end of this section is not accurate because of the reason `3.` above:
`If you absolutely have to update $state within an effect and run into an infinite loop because you read and write to the same $state, use [untrack](https://svelte.dev/docs/svelte/svelte#untrack).`
### Reproduction
docs
### Logs
_No response_
### System Info
```shell
docs
```
### Severity
annoyance | documentation | low | Critical |
2,736,768,105 | pytorch | bug with pytorch compile + DTensor on latest nighly | ### ๐ Describe the bug
```shell
[rank1]: Traceback (most recent call last):
[rank1]: File "<frozen runpy>", line 198, in _run_module_as_main
[rank1]: File "<frozen runpy>", line 88, in _run_code
[rank1]: File "/u/mayank98/scratch/tmp1/dolomite-engine/dolomite_engine/pretrain.py", line 411, in <module>
[rank1]: main()
[rank1]: File "/u/mayank98/scratch/tmp1/dolomite-engine/dolomite_engine/pretrain.py", line 394, in main
[rank1]: train(
[rank1]: File "/u/mayank98/scratch/tmp1/dolomite-engine/dolomite_engine/pretrain.py", line 158, in train
[rank1]: loss_step_dict = train_step(
[rank1]: ^^^^^^^^^^^
[rank1]: File "/u/mayank98/scratch/tmp1/dolomite-engine/dolomite_engine/train_utils.py", line 79, in train_step
[rank1]: metrics_tracker = _train_step_without_pipeline_parallel(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/u/mayank98/scratch/tmp1/dolomite-engine/dolomite_engine/train_utils.py", line 262, in _train_step_without_pipeline_parallel
[rank1]: loss_micro_step_scaled.backward()
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/_tensor.py", line 626, in backward
[rank1]: torch.autograd.backward(
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank1]: _engine_run_backward(
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
[rank1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply
[rank1]: return user_fn(self, *args)
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1697, in backward
[rank1]: all_args = CompiledFunction._backward_prologue(ctx, *flat_args)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1914, in _backward_prologue
[rank1]: flat_processed_tangents = list(
[rank1]: ^^^^^
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1917, in <genexpr>
[rank1]: AOTDispatchAutograd.process_runtime_tangent(
[rank1]: File "/u/mayank98/miniconda3/envs/new-ai/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1503, in process_runtime_tangent
[rank1]: raise RuntimeError(
[rank1]: RuntimeError:
[rank1]: During the backward, we encountered a tensor subclass where we guessed its
[rank1]: metadata incorrectly.
[rank1]: Expected metadata: (DTensorSpec(mesh=DeviceMesh('cuda', [0, 1], mesh_dim_names=('tp',)), placements=(Replicate(),), tensor_meta=TensorMeta(shape=torch.Size([]), stride=(), dtype=torch.float32)), False), expected type: <class 'torch.distributed.tensor.DTensor'>
[rank1]: Runtime metadata: None, runtime type: <class 'torch.Tensor'>
[rank1]: shape: torch.Size([])
[rank1]: To fix this, your tensor subclass must implement the dunder method __force_to_same_metadata__
```
Unfortunately, I don't have a minimal repro for this.
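For reference, below is only a rough, heavily hedged sketch of the kind of setup involved (a 2-way "tp" DeviceMesh, DTensor parameters created via `parallelize_module`, `torch.compile`, and a backward on a scalar loss). It is not a confirmed reproduction; the model, shapes, and parallelization plan are placeholders I picked for the sketch.

```python
# Hypothetical sketch, NOT a confirmed repro -- it only mirrors the shape of the failing setup.
# Run with: torchrun --nproc-per-node=2 sketch.py
import os
import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import parallelize_module, ColwiseParallel

def main():
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    mesh = init_device_mesh("cuda", (2,), mesh_dim_names=("tp",))

    model = nn.Linear(16, 16, device="cuda")
    # parallelize_module turns the parameters into DTensors placed on the "tp" mesh.
    model = parallelize_module(model, mesh, ColwiseParallel())
    model = torch.compile(model)

    x = torch.randn(4, 16, device="cuda")
    loss = model(x).pow(2).mean()  # scalar loss, analogous to loss_micro_step_scaled above
    loss.backward()

if __name__ == "__main__":
    main()
```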
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241212+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.42.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 100%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241212+cu124
[pip3] torchao==0.7.0
[pip3] triton==3.1.0
[conda] numpy 2.1.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241212+cu124 pypi_0 pypi
[conda] torchao 0.7.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | oncall: distributed,triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,736,820,588 | flutter | Overlay content scroll should not change Appbar's Background Color | As per the [Material 3 guide for the app bar](https://m3.material.io/components/top-app-bar/overview), the app bar should apply a container fill color to separate the app bar from body content. This works well when using the Flutter AppBar.
But when the body contains an overlay with a scrollable, the AppBar color changes when scrolling the overlay content. A simple reproduction of this uses an Autocomplete widget.
I may be wrong, but my understanding is that the overlay is not part of the body content.
### Steps to reproduce
1. Run the below code sample
2. tap the input and scroll the overlay
### Expected results
App bar should not change color
### Actual results
App bar is changing the color
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Autocomplete Basic'),
),
body: const Center(
child: Padding(
padding: EdgeInsets.all(10.0),
child: AutocompleteText(),
),
),
),
);
}
}
class AutocompleteText extends StatelessWidget {
const AutocompleteText({Key? key}) : super(key: key);
static const List<String> data = <String>[
'Adrian',
'Axel',
'jhonn',
'bobcat',
'chameleon',
'Oliver',
'William',
'Ethan',
'Everett',
'Jayden',
];
@override
Widget build(BuildContext context) {
return Autocomplete<String>(
optionsViewOpenDirection: OptionsViewOpenDirection.up,
optionsBuilder: (TextEditingValue textEditingValue) {
if (textEditingValue.text == '') {
return const Iterable<String>.empty();
}
return data.where((String option) {
return option.contains(textEditingValue.text.toLowerCase());
});
},
onSelected: (String selection) {
debugPrint('You just selected $selection');
},
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Flutter (Channel master, 3.28.0-1.0.pre.64, on macOS 15.1.1 24B2091 darwin-arm64, locale en-IN)
โข Flutter version 3.28.0-1.0.pre.64 on channel master at /Users/mahesh/Development/flutter_master
```
```
[โ] Flutter (Channel stable, 3.27.0, on macOS 15.1.1 24B2091 darwin-arm64, locale en-IN)
โข Flutter version 3.27.0 on channel stable at /Users/mahesh/Development/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 8495dee1fd (2 days ago), 2024-12-10 14:23:39 -0800
โข Engine revision 83bacfc525
โข Dart version 3.6.0
โข DevTools version 2.40.2
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/mahesh/Library/Android/sdk
โข Platform android-35, build-tools 34.0.0
โข ANDROID_HOME = /Users/mahesh/Library/Android/sdk
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 16.0)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 16A242d
โข CocoaPods version 1.16.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2024.1)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[โ] IntelliJ IDEA Community Edition (version 2021.2.1)
โข IntelliJ at /Applications/IntelliJ IDEA CE.app
โข Flutter plugin version 61.2.4
โข Dart plugin version 212.5080.8
[โ] VS Code (version 1.94.0)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.100.0
[โ] Connected device (3 available)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 15.1.1 24B2091 darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 15.1.1 24B2091 darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 128.0.6613.85
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Critical |
2,736,809,868 | flutter | Incorrect flutter tools output for profile mode web apps | DevTools is not available for profile mode web apps, but "Open Flutter DevTools" is still available in the help menu. We should remove this option for profile mode web apps.
It would be nice if the flutter tools output gave an instruction for how to profile this type of app: something that instructs the user to use their browser's profiling tools (likely Chrome DevTools) for any debugging and profiling of the app running in this mode.
```
flutter run -d chrome --profile
...
Launching lib/main.dart on Chrome in profile mode...
Compiling lib/main.dart for the Web... 82.2s
โ Built build/web
๐ฅ To hot restart changes while running, press "r" or "R".
For a more detailed help message, press "h". To quit, press "q".
> h
v Open Flutter DevTools.
``` | tool,platform-web,team-tool | low | Critical |
2,736,810,488 | go | x/tools/gopls/internal/analysis: add "modernizer" analyzers | Between generics, iterators, min and max, new standard packages such as `slices`, `maps`, and `cmp`, there are a large number of new ways to write Go code more concisely and no less clearly. We should develop "modernizer" analyzer(s) that identify such opportunities and offer suggested fixes.
Many of these are already implemented in `staticcheck` (see [S1000](https://staticcheck.dev/docs/checks/) et seq); we would just need to enable them.
When the appropriate fix is to inline a call to a deprecated function whose body illustrates the recommended approach (e.g. a call to a more modern function, as in the case of the functions in the golang.org/x/exp/maps package), these will be handled by annotating the deprecated function as prescribed by proposal #32816, and letting the [inliner](https://go.dev/cl/634919) take care of it.
Examples:
- [x] b.N โ b.Loop in Benchmarks (https://go.dev/cl/638337)
- [x] interface{} โ any (CL 636276)
- [ ] explicit loops โ slices.{Equal,Compare,Index,Contains}{,Func}. The challenge is treating the `return false` and `return true` steps in the slices package as wildcards that match "equal and opposite" actions--not just return statements--in the user code.
- [x] explicit loops โ maps.{Clone,Collect.Insert,Copy} (https://go.dev/cl/638875)
- ~maps.Keys(m) + slices.Collect (or loop equivalent of Keys+Collect) + sort -> `slices.Sorted(maps.Keys(m))`. Eliminate temporaries when used once in `range _`. Downside: de-optimizes preallocation of correct array size; perhaps best left until resolution of #68598.~
- [ ] various append + s[::] combinations -> slices.Grow, Clip, Concat & Clone (https://go.dev/cl/637735), etc.
- [x] `append(s[:i], s[i+k:]...)` -> `slices.Delete(s, i, i+k)`, where k is a non-negative constant
- [x] exp/{slices,maps} -> std {slices,maps} (see CL 635680 + issue #32816)
- [x] [exp/maps.Clear](https://github.com/golang/go/issues/70717) โ clear etc (https://go.dev/cl/635680)
- [x] if/else โ min/max ([CL 635777](https://go.dev/cl/635777))
- ~awkward logic in slice.Sort's less function โ `if cmp := cmp.Compare(x, y); cmp != 0 { return cmp < 0 }`~
- [x] sort.Slice -> slices.Sort, avoiding the less func when it is the natural order (https://go.dev/cl/635681)
- ~sort.Slice -> slices.SortFunc, eliminating the indices~
- [x] json:omitempty โ json:omitzero for basic types
- ~SetFinalizer โ AddCleanup~
- ~use strings.Lines instead of strings.Split("\n") (but: "\r"?!) or text.scanner(strings.NewReader)/Scan/Text~
- [x] `[]byte(fmt.Sprintf...)` โ `fmt.Appendf(nil, ...)` (https://go.dev/cl/638436)
- ~implementing {Text,Binary}Appender for types that currently implement {Text,Binary}Marshaler~
- [x] replacing `ctx, cancel := context.WithCancel(context.Background()); defer cancel()` in tests with `ctx := t.Context()`.
They should of course respect the effective version of Go determined by go.mod and build tags.
@aclements
| FeatureRequest,gopls,Tools,Analysis,gopls/analysis,Refactoring | medium | Critical |
2,736,820,588 | react | [React 19] [SUGGESTION] Consistency: Rename useFormStatus().pending to useFormStatus().isPending | This isnโt a bug report or a question, apologies if this isnโt the right place to post.
Throughout the documentation, conventions like adding prefixes to booleans (e.g., isPending for startTransition, useActionState, etc.) are consistently followed.
To keep things consistent, could useFormStatus().pending be renamed to useFormStatus().isPending?
To prevent a breaking change, you could deprecate pending and introduce isPending as an alternative. | React 19 | low | Critical |
2,736,826,792 | go | x/build: set a maintenance window for the kubernetes instance | The Kubernetes instance we have on GKE should have a maintenance window set. The Go 1.24 RC1 release was interrupted by a Kubernetes upgrade. We should decide when an optimal window is and set it.
@golang/release | Builders,NeedsFix | low | Minor |
2,736,870,967 | ui | [bug]: 'SidebarContext' is already defined @typescript-eslint/no-redeclare | ### Describe the bug
Great work with the sidebar component. It works very well!
While cleaning up my project and reducing all warnings, I found that the sidebar component contains one.
Of course I can eslint-ignore the line, but I think it's worth revisiting so that others don't need to do this as well:
https://github.com/shadcn-ui/ui/blob/704991247c748ea0ecf37f946482b9fdb04d4acd/apps/www/registry/default/ui/sidebar.tsx#L29
https://github.com/shadcn-ui/ui/blob/704991247c748ea0ecf37f946482b9fdb04d4acd/apps/www/registry/default/ui/sidebar.tsx#L39
### Affected component/components
Sidebar
### How to reproduce
1. cra typescript (yarn create react-app my-app --template typescript)
2. run lint -> no warnings (./node_modules/.bin/eslint .)
3. manual install shadcn (components.json from docs)
4. add sidebar npx shadcn@latest add sidebar
5. run lint
expect: no warnings (./node_modules/.bin/eslint .)
actual:
```
shadcn-app/@/components/ui/sidebar.tsx
37:7 warning 'SidebarContext' is already defined @typescript-eslint/no-redeclare
โ 1 problem (0 errors, 1 warning)
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
node 20/alpine, not really relevant
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,736,911,996 | ollama | Add support for setting models to private on `ollama push` | Currently, while it's possible to create private repositories through the ollama.com web interface, there's no way to initialize a new private repository directly through the ollama push CLI command. This creates friction in automated workflows and requires context switching between CLI and web interface when working with private models.
# Proposed Solution
Extend the ollama push command to support creating private repositories on initialization. This would allow users to create and push to private repos in a single command.
Example usage:
```bash
ollama push username/my-model:latest --private
```
# Justification
- Enables fully automated workflows for private model management
- Maintains consistency with other CLI tools (like GitHub CLI) that support private repo creation
- Reduces context switching between CLI and web interface
- Important for organizations that want to automate private model deployment pipelines
# Alternative Solutions
1. Status Quo: Continue requiring web interface for private repo creation
   - Pro: Simpler CLI interface
   - Con: Breaks automation workflows
2. A user-level setting on ollama.com that can set new repos to automatically be private for the user. | feature request | low | Minor |
2,736,936,556 | rust | The combination of alloc_error_handler and target_features_11 is unsound | The following code compiles:
```
#![feature(target_feature_11, alloc_error_handler)]
#[target_feature(enable = "avx")]
#[alloc_error_handler]
fn f(_: std::alloc::Layout) -> ! {
panic!()
}
```
However, this is unsound if the allocation error handler is invoked on a machine that does not support `avx`.
The alloc_error_handler feature is still unstable, tracking issue: https://github.com/rust-lang/rust/issues/51540
This issue emerged during the stabilization PR for target_features_11: https://github.com/rust-lang/rust/pull/134090 | T-compiler,I-unsound,C-bug,requires-nightly,F-target_feature_11 | low | Critical |
2,737,062,102 | go | runtime: panic in runtime.mapiterinit (loong64) | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/gopls/internal/test/integration/misc" && test == "TestIssue38815/default"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8728815051462899121)):
=== RUN TestIssue38815/default
panic: runtime error: invalid memory address or nil pointer dereference
panic: invalid handle
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x12007eee8]
goroutine 45720 [running]:
golang.org/x/tools/gopls/internal/cache.assert(...)
/home/swarming/.swarming/w/ir/x/w/targetrepo40136824/gopls/internal/cache/debug.go:10
golang.org/x/tools/gopls/internal/cache.(*packageHandleBuilder).evaluatePackageHandle.func1()
/home/swarming/.swarming/w/ir/x/w/targetrepo40136824/gopls/internal/cache/check.go:1074 +0x2e8
panic({0x120bd6e40?, 0x12170d4a0?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:785 +0x13c
golang.org/x/tools/gopls/internal/cache/typerefs.(*PackageSet).Union(...)
/home/swarming/.swarming/w/ir/x/w/targetrepo40136824/gopls/internal/cache/typerefs/packageset.go:111
golang.org/x/tools/gopls/internal/cache.(*packageHandleBuilder).evaluatePackageHandle(0xc00c63cb00, {0x121038b40, 0xc006aa0960}, 0xc00cdefe00)
/home/swarming/.swarming/w/ir/x/w/targetrepo40136824/gopls/internal/cache/check.go:1199 +0xcbc
golang.org/x/tools/gopls/internal/cache.(*Snapshot).getPackageHandles.func2.1()
/home/swarming/.swarming/w/ir/x/w/targetrepo40136824/gopls/internal/cache/check.go:896 +0xc8
golang.org/x/sync/errgroup.(*Group).Go.func1()
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:78 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 45718
/home/swarming/.swarming/w/ir/x/w/gopath/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:75 +0xb4
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,NeedsInvestigation,Tools,arch-loong64,compiler/runtime | low | Critical |
2,737,080,683 | angular | Incorrect parsing of complex selectors inside `:host-context` | ### Which @angular/* package(s) are the source of the bug?
compiler
### Is this a regression?
No
### Description
Basically the expectation is:
```
expect(shim(':host-context(:where(.foo:not(.bar))) {}', 'contenta', 'hosta')).toEqualCss(
':where(.foo:not(.bar))[hosta], :where(.foo:not(.bar)) [hosta] {}',
);
```
But it returns `([hosta]:where(.foo:not(.bar))) {}`. It looks like the regular expression used here doesn't parse nested selectors with parentheses (i.e. `:not()` within `:where()`) correctly.
### Please provide a link to a minimal reproduction of the bug
Run the tests with `yarn test packages/compiler/test:test`, adding the test case above to the list.
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Reproducible among different versions.
```
### Anything else?
I tried it with and without recent changes to the compiler, that changed the way `:where` and `:host-context` are being parsed, the result is identical between the versions. | area: compiler,compiler: styles | low | Critical |
2,737,082,772 | godot | TileMap Pixel Snapping Precision - Black Lines | ### Tested versions
4.3
4.4-dev6
### System information
Windows 11 - Godot 4.3
### Issue description
TileMap pixel snapping loses precision the further away from the origin you get. Near (0,0) the TileMap behaves perfectly. For the sake of exaggerating the effect, I've attached a minimal project at ~20,000 tiles from the origin in both the x and y directions, where the black lines become far more frequent both horizontally and vertically.
In my personal project the issue occurs at ~1,000 tiles from the origin (I'm guessing due to the number of layers/other sprites, but I'm not sure).
The issue is still present even with pixel snapping enabled.
Pictures: Make sure you open at full resolution



[](https://www.youtube.com/watch?v=H72fgI7LFZY)
### Steps to reproduce
Make a TileMap
Make a Camera2D
Draw some tiles ~20,000 tiles away from the origin
Set camera position to those tiles
Add some WASD/Arrow key movement and camera zoom functions
Start project and move around and test different zoom functions
### Minimal reproduction project (MRP)
[tilemap.zip](https://github.com/user-attachments/files/18118781/tilemap.zip)
| bug,topic:2d | low | Minor |
2,737,115,483 | pytorch | is it desirable to assume complex expressions in torch._check to be true | ### ๐ The feature, motivation and pitch
For the following code,
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
def f(x):
c = x.item()
return torch.ones(c*2)
out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64))
```
I'm getting a data-dependent error:
```
GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(2*u0, 0) (unhinted: Eq(2*u0, 0)). (Size-like symbols: none)
```
So ideally I could just add a torch._check to unblock myself:
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
class Mod(torch.nn.Module):
def forward(self, x):
c = x.item()
torch._check(c > 0)
# torch._check(c*2 > 0) also doesn't work.
return torch.ones(c*2)
out = torch.compile(Mod(), fullgraph=True)(torch.tensor(3, dtype=torch.int64))
```
The above doesn't work due to `Could not guard on data-dependent expression Eq(2*u0, 0)`.
Then I try:
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
def f(x):
c = x.item()
torch._check(not (c*2 == 0))
return torch.ones(c*2)
out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64))
```
But the error is still there. After talking to @pianpwk, it seems that it's difficult to reason about the range of the `not equal` operator and of more complicated ones like `%`, etc.
So I'm wondering if it's desirable that we can assume the information we get from users, e.g. via `torch._check`, is correct. This would be mostly useful in draft-mode export, I think, where correctness doesn't matter that much.
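For completeness, below is the size-like hint that exists today. I have not verified whether marking `c` as size-like actually discharges the `Eq(2*u0, 0)` guard in this example, so treat it as a possible thing to try rather than a known fix.

```python
import torch

torch._dynamo.config.capture_scalar_outputs = True

def f(x):
    c = x.item()
    # Mark the unbacked symbol as size-like so size-oblivious reasoning applies to it.
    # Whether this resolves the Eq(2*u0, 0) guard above is unverified.
    torch._check_is_size(c)
    return torch.ones(c * 2)

out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64))
```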
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @pianpwk | oncall: pt2,module: dynamic shapes,oncall: export | low | Critical |
2,737,118,469 | rust | Trivial bounds with associated types regression | ### Code
I tried writing some macro code that in some cases generates code similar to:
```rust
pub trait A {
type X;
}
pub trait B: A<X = Self::Y> {
type Y: C;
}
impl<L> A for L {
type X = L;
}
pub trait C {}
impl C for () {}
impl<T> B for T
where
T: A,
T::X: C,
{
type Y = T::X;
}
fn demo() where for<'a> (): B {}
```
I expected to see this happen: The code should compile without error.
Instead, this happened: The following compile error happens.
```rust
error[E0284]: type annotations needed
--> <source>:25:1
|
25 | fn demo() where for<'a> (): B {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot infer type
|
= note: cannot satisfy `<() as B>::Y == _`
error[E0284]: type annotations needed
--> <source>:25:29
|
25 | fn demo() where for<'a> (): B {}
| ^ cannot infer type
|
= note: cannot satisfy `<() as B>::Y == _`
```
After discussing with @compiler-errors a bit, it would appear https://github.com/rust-lang/rust/pull/122791 is the cause of the change. The above code does compile if it isn't a trivial bound, for example when you do this instead:
```rust
fn demo<T>() where T: B {}
fn other() {
demo::<()>();
}
```
For more context: on 1.78.0 and before, if you remove the `for<'a>` you get the following.
```rust
error[E0271]: type mismatch resolving `<() as A>::X == <() as B>::Y`
--> <source>:25:17
|
25 | fn demo() where (): B {}
| ^^^^^ type mismatch resolving `<() as A>::X == <() as B>::Y`
|
note: expected this to be `()`
--> <source>:10:14
|
10 | type X = L;
| ^
= note: expected unit type `()`
found associated type `<() as B>::Y`
= help: consider constraining the associated type `<() as B>::Y` to `()`
= note: for more information, visit https://doc.rust-lang.org/book/ch19-03-advanced-traits.html
= help: see issue #48214
```
### Version it worked on
It most recently worked on: 1.78.0
### Version with regression
All versions including 1.79.0 and after. Specifically starting with nightly-2024-04-04 (found with a bisection).
@rustbot modify labels: +regression-from-stable-to-stable -regression-untriaged | A-trait-system,P-low,regression-from-stable-to-stable,C-bug,T-types,fixed-by-next-solver | low | Critical |
2,737,141,779 | PowerToys | Keep Awake - option for 'until another program is not busy' | ### Description of the new feature / enhancement
I would like to be able to select another option on the Keep Awake tray icon context menu for 'keep awake until program finishes'.
Then I would choose the process from a list, or maybe just select an active window on the screen, like a terminal window.
Once the selected program exits, or its CPU utilization drops near zero for a consistent amount of time, the PC can go to sleep as normal.
### Scenario when this would be used?
My most common reason for using Keep Awake is that I need my PC to stay awake while some process, like a large file copy or a media transcoding job, is in progress. Currently I will choose 'indefinitely', but then my computer is just left on, sometimes for a day or so. Or I go for the max of 2 hours and the PC sleeps before the long-running operation finishes.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Needs-Team-Response,Product-Awake | low | Minor |
2,737,169,657 | kubernetes | "Error syncing pod, skipping" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" pod="" podUID="" | ### What happened?
The Pod has been stuck terminating with the error: "Error syncing pod, skipping" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" pod="" podUID="". In fact, containerd is running normally.
### What did you expect to happen?
Pod pulled up normally
### How can we reproduce it (as minimally and precisely as possible)?
Dec 5 23:53 onwards: caas upgrade
Dec 5 23:56: node restart
Dec 5 23:56:52: containerd starts after power on
Dec 6 00:00:23: the kubelet process starts and has not been restarted since; the kubelet upgrade should have happened at this point
Dec 6 00:00:06: crictl shows the start and run time of the container process for nbidataserver (the cinit process inside the container)
Dec 6 00:01:16: ps shows the startup time of the business process; the probe may not be ready for a short period after starting, and the kubelet may only sync after reaching certain conditions
Dec 6 00:01:37: updatePodContainerResources for nbidataservice first reported a failure (containerd.sock connection refused) and kept reporting errors afterwards
Dec 6 00:01:38: containerd upgraded and started, and has been running until now
Latest progress: This issue is caused by enabling the InPlacePodVerticalScaling feature
Event replay:
Event 1: 2024-12-24 22:41:33, machine powered off/on
Event 2: 2024-12-24 22:47:53, kubelet starts and Pods start to be pulled up
Event 3: 2024-12-24 22:49:03, containerd starts
Event 4: 2024-12-24 22:49:04, kubelet started pulling up smauthservice but received the error message "server is not initialized yet" when accessing containerd. After containerd started, smauthservice started normally.
Event 5: 2024-12-24 22:54:12, the upgrade of smauthservice started, and the Kubernetes control plane set the pod smauthservice to Terminating state, where it has remained ever since.
Cause analysis:
1. The kubelet caches real pod information (the pod cache), which is synchronized from containerd periodically by the PLEG (Pod Lifecycle Event Generator) goroutine. PLEG itself also caches the real pod state (the PLEG cache). If PLEG detects a difference between the real container state and its own cache, it synchronizes the real pod information into the pod cache.
2. After enabling the InPlacePodVerticalScaling feature, during the pod startup phase, if the pod is being resized, the kubelet queries the real pod status from containerd and updates the pod cache directly without going through PLEG. At this time, because containerd cannot be reached, an unknown status is saved to the pod cache (Event 4). PLEG will not update the pod cache again afterwards, because the actual pod state is no different from its own cache.
3. Later, when the pod was upgraded, the goroutine handling pod deletion needed to retrieve the actual pod state from the pod cache for reconciliation. However, it got the unknown state, so the deletion of the pod was stopped and the pod remained Terminating (Event 5).
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.28.1
</details>
### Cloud provider
<details>
na
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Critical |
2,737,171,624 | pytorch | Cannot compile a block that contains Flex attention without graph breaks | ### ๐ Describe the bug
Flex attention needs to be compiled independently of the rest of the block, which creates graph breaks.
To make it work I wrap flex attention as follows:
```python
flex_attention_compiled = torch.compile(
flex_attention,
dynamic=False,
)
@torch.compiler.disable(recursive=False)
def compile_friendly_flex_attention(q, k, v, block_mask=None):
return flex_attention_compiled(q, k, v, block_mask=block_mask)
```
Then when I compile my transformer block I get a graph break.
I tried not compiling flex attention independently and only compiling it as part of the full block, but then I ran out of memory.
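For context, here is a minimal sketch of the kind of block I mean; the module layout, names, and shapes (`Block`, `dim`, `heads`) are placeholders made up for the example, and only the wrapper matches the snippet above.

```python
# Minimal illustrative block, assuming the wrapper above; names/shapes are placeholders.
import torch
import torch.nn as nn
from torch.nn.attention.flex_attention import flex_attention

flex_attention_compiled = torch.compile(flex_attention, dynamic=False)

@torch.compiler.disable(recursive=False)
def compile_friendly_flex_attention(q, k, v, block_mask=None):
    return flex_attention_compiled(q, k, v, block_mask=block_mask)

class Block(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # (batch, seq, dim) -> (batch, heads, seq, head_dim), the layout flex_attention expects
        q, k, v = (t.view(b, s, self.heads, -1).transpose(1, 2) for t in (q, k, v))
        out = compile_friendly_flex_attention(q, k, v)  # this call is where the graph break shows up
        return self.proj(out.transpose(1, 2).reshape(b, s, d))

block = torch.compile(Block().cuda())
y = block(torch.randn(2, 128, 64, device="cuda"))
```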
### Error logs
_No response_
### Versions
PyTorch version: 2.6.0.dev20241212+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
Stepping: 7
CPU MHz: 2900.000
CPU max MHz: 3900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 32 MiB
L3 cache: 44 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Drapaux : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.13
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241212+cu126
[pip3] torch-fidelity==0.3.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.22.0.dev20241212+cu126
[pip3] triton==3.1.0
[conda] lovely-numpy 0.2.13 pypi_0 pypi
[conda] numpy 2.2.0 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241212+cu126 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241212+cu126 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,737,191,457 | kubernetes | store more event in watch cache when apiserver restart to avoid watch request trigger relist | ### What would you like to be added?
Due to the existence of the bookmark mechanism, the difference between the resource version that the client carries when rewatching and the resource version that the apiserver gets from the list in etcd when it starts up is not too big.
It would be nice to be able to cache more events in the watch cache, instead of starting the cache only from the list resource version returned by etcd. The result would be that a client re-watch would not trigger a "too old resource version" error.
### Why is this needed?
In a stable k8s cluster, when the apiserver on a node is restarted, the clients that were list-watching that apiserver will trigger a relist operation when they re-watch, and the relist places a heavy load on the apiserver.
2,737,192,396 | flutter | [SwiftPM] When Swift Package Manager migration fails, link to documentation instructions to set it up manually | The Xcode project format has a notoriously bad reputation among iOS/macOS software engineers. It is fragile, hard to read, and difficult to maintain. This is why iOS engineers often reinvent alternatives or generators for Xcode project files, such as [XcodeGen](https://github.com/yonaskolb/XcodeGen) and [Tuist](https://tuist.io/). Personally, I no longer use plain Xcode project files, except for Flutter-related projects. Flutter performs a lot of "magic" during the build process and relies on Xcode project files for iOS/macOS targets.
Additionally, with the decline of CocoaPods, most dependencies are now available through Swift Package Manager (SPM). Flutter has embraced this shift by introducing an option to use SPM for iOS/macOS projects in Flutter 3.27, which I find very exciting.
However, I encountered an issue when migrating to Swift Package Manager. The migration script failed with the following error:
```plaintext
Adding Swift Package Manager integration... 22ms
An error occurred when adding Swift Package Manager integration:
Exception: Unable to find PBXFrameworksBuildPhase for Runner target.
Swift Package Manager is currently an experimental feature. Please file a bug at:
https://github.com/flutter/flutter/issues/new?template=1_activation.yml
Consider including a copy of the following files in your bug report:
ios/Runner.xcodeproj/project.pbxproj
ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme (or the scheme for the flavor used)
To avoid this failure, disable Flutter Swift Package Manager integration for the project
by adding the following to the project's `pubspec.yaml` under the "flutter" section:
`disable-swift-package-manager: true`
Alternatively, disable Flutter Swift Package Manager integration globally with:
`flutter config --no-enable-swift-package-manager`
```
Upon investigation, I reviewed the migration script code in [`swift_package_manager_integration_migration.dart`](https://github.com/flutter/flutter/blob/97426566104215127a44666fe2b75c18208d1701/packages/flutter_tools/lib/src/migrations/swift_package_manager_integration_migration.dart) and discovered that it relies on hard-coded identifiers for various Xcode project sections. For example:
```dart
static const String _iosRunnerNativeTargetIdentifier = '97C146ED1CF9000F007C117D';
```
This is problematic because tools like [xUnique](https://github.com/truebit/xUnique), which normalize Xcode project files, can change these identifiers. Here's an example of how identifiers change after normalization:
#### Before Normalization
```plaintext
/* Begin PBXNativeTarget section */
97C146ED1CF9000F007C117D /* Runner */ = {
isa = PBXNativeTarget;
buildConfigurationList = 97C147051CF9000F007C117D /* Build configuration list for PBXNativeTarget "Runner" */;
buildPhases = (
EF5FC0595FAA045E778ABA64 /* [CP] Check Pods Manifest.lock */,
9740EEB61CF901F6004384FC /* Run Script */,
97C146EA1CF9000F007C117D /* Sources */,
97C146EB1CF9000F007C117D /* Frameworks */,
97C146EC1CF9000F007C117D /* Resources */,
9705A1C41CF9048500538489 /* Embed Frameworks */,
3B06AD1E1E4923F5004D2608 /* Thin Binary */,
);
productName = Runner;
...
};
/* End PBXNativeTarget section */
```
#### After Normalization
```plaintext
/* Begin PBXNativeTarget section */
E30F7D64842177B99FDDF63DCA10BCDE /* Runner */ = {
isa = PBXNativeTarget;
buildConfigurationList = 92578F7B9E3F6AC8F17192C290223DCF /* Build configuration list for PBXNativeTarget "Runner" */;
buildPhases = (
AC0061BF939EE938EA540EE9D04BF3B3 /* [CP] Check Pods Manifest.lock */,
7E9D2C9174A80CAA4BFDFDCB4F72C7A8 /* Run Script */,
FBC56BB6E87C12DAFB6BB5C3C53DF533 /* Sources */,
00C27919A6B50DC1DE2CD3F64D33BCCA /* Frameworks */,
8CF819BBBFB9AE05A1821B127DF85DF8 /* Resources */,
E55F38183CFB6C120953C92EF0C091F4 /* Embed Frameworks */,
3CBF9DA705FA99067B6DD4D7F9246CC9 /* Thin Binary */,
);
productName = Runner;
...
};
/* End PBXNativeTarget section */
```
The hard-coded identifier for the `Runner` target (`97C146ED1CF9000F007C117D`) was replaced with a new identifier (`E30F7D64842177B99FDDF63DCA10BCDE`). As a result, the migration script failed.
### Suggested Solution
To avoid these issues, the migration script should not rely on hard-coded identifiers. Instead, it should dynamically locate the required objects in the Xcode project file, possibly by matching the `name` property of targets or using other stable metadata. While implementing such a solution might be more complex, it would make the migration process more robust and compatible with various Xcode project configurations.
| platform-ios,tool,platform-mac,c: proposal,P2,team-ios,triaged-ios | low | Critical |
2,737,203,788 | react | Bug: Replace md5 with sha256 in createFastHash(). | 1. MD5 has known collision risks.
2. Even on speed grounds the change is worthwhile: in the benchmark below, MD5 was slower than SHA256.
code at:https://github.com/facebook/react/blob/e06c72fcf4632ad3117add713a25f6354631f037/packages/react-server/src/ReactServerStreamConfigNode.js#L234-L238
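For reference, a sketch of the proposed swap, assuming `createFastHash` simply wraps Node's built-in `crypto.createHash` as in the linked file (the surrounding code may differ slightly):
```js
import {createHash} from 'crypto';

// Proposed change: hash with sha256 instead of md5.
export function createFastHash(code) {
  const hash = createHash('sha256'); // currently: createHash('md5')
  hash.update(code);
  return hash.digest('hex');
}
```
Note that the numbers below come from CryptoJS in the browser; a benchmark against `node:crypto`, which is what the server runtime actually uses, would be even more representative.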
Below are the speed test results:
#### String Length: 16
MD5 Average Time: 0.0148 ms
SHA256 Average Time: 0.0131 ms
#### String Length: 256
MD5 Average Time: 0.0193 ms
SHA256 Average Time: 0.0175 ms
#### String Length: 10000
MD5 Average Time: 0.5830 ms
SHA256 Average Time: 0.4030 ms
#### String Length: 100000
MD5 Average Time: 6.2866 ms
SHA256 Average Time: 4.6130 ms
The benchmark code is:
```
<!DOCTYPE html>
<html>
<head>
<title>Hash Function Test</title>
<!-- Include the CryptoJS library -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/4.1.1/crypto-js.min.js"></script>
</head>
<body>
<h1>Hash Function Performance Test</h1>
<div id="results"></div>
<script>
const md5 = (str) => CryptoJS.MD5(str).toString();
const sha256 = (str) => CryptoJS.SHA256(str).toString();
// Generate a random string of specified length
function getRandomString(length) {
let result = '';
const characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
for (let i = 0; i < length; i++) {
result += characters.charAt(Math.floor(Math.random() * characters.length));
}
return result;
}
// Test the performance of the hash function
function testHashFunction(func, length, iterations = 1000) {
let total = 0;
const data = getRandomString(length); // Generate data once outside the loop
const startTime = performance.now();
for (let i = 0; i < iterations; i++) {
func(data); // Reuse the same data for all iterations
}
const endTime = performance.now();
total = endTime - startTime; // Calculate total time once
const average = total / iterations;
return average;
}
// Test different string lengths
const lengths = [16, 256, 10000, 100000];
const resultsElement = document.getElementById('results');
lengths.forEach(length => {
const md5Avg = testHashFunction(md5, length);
const sha256Avg = testHashFunction(sha256, length);
resultsElement.innerHTML += `
<h2>String Length: ${length}</h2>
<p>MD5 Average Time: ${md5Avg.toFixed(4)} ms</p>
<p>SHA256 Average Time: ${sha256Avg.toFixed(4)} ms</p>
<hr>
`;
});
</script>
</body>
</html>
```
| Status: Unconfirmed | low | Critical |
2,737,210,753 | flutter | Screen readers cannot list or move focus to elements outside of the current viewport | ### Steps to reproduce
<p>NVDA, JAWS, and VoiceOver Screen readers cannot list or move focus to elements outside of the current viewport. Any elements that have scrolled out of view or haven't yet been scrolled into view will NOT be reachable via screen reader basic keyboard commands for reading and navigating web content.</p>
<p>Examples:</p>
<div class="table-wrap">
Function | JAWS Shortcut | NVDA Shortcut | VoiceOver Shortcut
-- | -- | -- | --
Next Heading | H, 1-6 for heading level | H, 1-6 for heading level | Control Option Command H
List Headings | Insert F6 | Insert F7 to open Elements list, Select the "Headings" radio button | Control Option U, Press left/right arrow keys until the "Heading" menu is displayed
Next Link | U for unvisited link, V for visited link | K for link, U for unvisited, link V for visited link | Control Option Command L for link, Control Option Command V for visited link
List Links | Insert F7 | Insert F7 to open Elements list, Select the "Links" radio button | Control Option U, Press left/right arrow keys until the "Links" menu is displayed
Next Button | B | B | N/A
List Buttons | Control Insert B | Insert F7 to open Elements list, Select the "Buttons" radio button | Control Option U, Press left/right arrow keys until the "Form Controls" menu is displayed
Next Form Control | F | F | Control Option Command J
Next Table | T | T | Control Option Command T
Next Landmark | R | D | N/A
Next List | L | L | Control Option Command X
</div>
### Expected results
- The "Next" keyboard shortcuts should move the screen reader focus to the next element of that type which exists on the page, not just the elements displayed in the current viewport
- The lists should contain all elements of that type which exist on the page, not just the elements displayed in the current viewport
### Actual results
- The "Next" keyboard shortcuts only move the screen reader focus to the next element of that type which is displayed in the current viewport
- The lists only contain elements of that type which are displayed in the current viewport
### Code sample
N/A
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
`Flutter (Channel beta, 3.27.0-0.1.pre, on macOS 14.5 23F79 darwin-arm64, locale en-US)` | framework,a: accessibility,platform-mac,platform-windows,platform-web,P2,customer: castaway,team-web,triaged-web | low | Minor |
2,737,219,918 | kubernetes | 'requiredDuringSchedulingIgnoredDuringExecution' evicts Daemonset when node label removed | ### What happened?
The DaemonSet uses a node affinity of type requiredDuringSchedulingIgnoredDuringExecution, which requires its pod to run on a node carrying a specific label.
After the DaemonSet pod has been running for a while, remove that label from the node.
The DaemonSet pod gets destroyed.
However, according to [the official doc](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity-per-scheduling-profile), the affinity type is IgnoredDuringExecution, so the pod should not be destroyed after the label is removed.
```
Note:
In the preceding types, IgnoredDuringExecution means that if the node labels change after Kubernetes schedules the Pod, the Pod continues to run.
```
### What did you expect to happen?
After the label is removed, the DaemonSet pod should continue to run (consistent with the IgnoredDuringExecution behavior described in the docs).
### How can we reproduce it (as minimally and precisely as possible)?
apply the following yaml and modify the image path. [full yaml link](https://gitee.com/OpenCloudOS/xuanwan/blob/master/os-update-operator/configs/deploy/operator/up-proxy.yaml)
```
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: up-proxy
namespace: kube-system
labels:
control-plane: up-proxy
spec:
selector:
matchLabels:
control-plane: up-proxy
template:
metadata:
labels:
control-plane: up-proxy
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: xuanwan.opencloudos.org/updater-interface-version
operator: In
values:
- 1.0.0
- key: kubernetes.io/arch
operator: In
values:
- amd64
- arm64
containers:
- name: up-proxy
image: <up-proxy-container-path>
imagePullPolicy: IfNotPresent
```
Label the node with xuanwan.opencloudos.org/updater-interface-version="1.0.0" and wait for the DaemonSet pod to be deployed and running.
Then remove the node label and observe whether the DaemonSet pod continues to run.
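For reference, the label can be added and removed with commands like the following (the node name is a placeholder):
```console
# add the required label to a node
kubectl label node <node-name> xuanwan.opencloudos.org/updater-interface-version=1.0.0
# wait until the DaemonSet pod on that node is Running, then remove the label
kubectl label node <node-name> xuanwan.opencloudos.org/updater-interface-version-
```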
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
v1.27.4
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="OpenCloudOS"
VERSION="9.2"
ID="opencloudos"
ID_LIKE="opencloudos"
VERSION_ID="9.2"
PLATFORM_ID="platform:oc9"
PRETTY_NAME="OpenCloudOS 9.2"
$ uname -a
Linux kube-master 6.6.47-12.oc9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 24 16:15:42 CST 2024 x86_64 GNU/Linux
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,needs-triage | low | Minor |
2,737,274,756 | godot | `Area2D` `monitorable`, `monitoring` and `area_enter` logic flaw | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22621 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Laptop GPU (NVIDIA; 31.0.15.2887) - 13th Gen Intel(R) Core(TM) i7-13700H (20 Threads)
### Issue description
There are two Area2D nodes; call them A and B. A is set up to detect B, and A's area_enter signal is connected to a function so we can observe when it fires.
While they are overlapping, toggling A's monitoring (true -> false -> true) makes A's area_enter emit. (This is correct.)
However, toggling B's monitorable (true -> false -> true) does not make A's area_enter emit. (I think it should emit.)
Additionally, every change to B's monitoring (true -> false or false -> true) makes A's area_enter emit. (I think it should not emit, by the same principle.)
I hope an appropriate change can be made so the behavior matches what is described above.
### Steps to reproduce
1. Enable the "display collision area" (visible collision shapes) debug option.
2. Press "D" until the small area overlaps with the big one.
3. Change B's monitoring and monitorable, then observe.
### Minimal reproduction project (MRP)
[test.zip](https://github.com/user-attachments/files/18120274/test.zip)
| topic:physics,topic:2d | low | Minor |
2,737,304,911 | ant-design | Input ๅญๅจ suffix ็ๆ
ๅตไธ Popover ๅผนๅบไฝ็ฝฎ้ไนฑ | ### Reproduction link
[](https://codesandbox.io/p/sandbox/ji-ben-shi-yong-antd-5-22-4-forked-wq79lt?workspaceId=ws_UGUVoYEGqyRdAXEjC2LGE2)
### Steps to reproduce
็้็ฐ้พๆฅไธญไปฃ็
### What is expected?
Popover ๅผนๅบๆกๅง็ปไปฅ Input ๅ
็ด ็ๅ
่ฃ
dom ่็น็ไฝ็ฝฎๅฏนๅ
ถ
### What is actually happening?
Popover ๅผนๅบๆก่ทInput ๅ
้จ็ๅ็ input ๅ
็ด ๅฏน้ฝไบ
| Environment | Info |
| --- | --- |
| antd | 5.22.4 |
| React | 18.3.1 |
| System | MacOSๆๆฐ |
| Browser | Chromeๆๆฐ |
---
ๅๆญฅ็ๆตๅญๅจ suffix ็ๆ
ๅตไธ Input ็ปไปถ ref ็ nativeElement ๅผไธๆฏ Input ็ๆๅคๅฑๅ
่ฃ
่็น๏ผ่ๆฏๅ
้จ็ๅ็ input ่็น
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ๐ Bug,help wanted | low | Major |
2,737,306,104 | vscode | Add the 'preview' flag to vscode.window.createWebviewPanel | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Add `preview: boolean` to `vscode.window.createWebviewPanel`'s `showOptions` parameter so we can create preview panels that will be automatically closed when another preview editor is opened.
Also needed on:
`WebviewPanel.reveal`
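If this were added, usage might look like the following sketch. The `preview` flag is hypothetical (it does not exist today, so this would not compile against the current API); everything else uses the existing API shape.
```ts
import * as vscode from 'vscode';

// Hypothetical: pass `preview: true` so the panel behaves like a preview editor
// and is replaced when another preview editor is opened.
const panel = vscode.window.createWebviewPanel(
  'docsPreview',        // viewType
  'Docs Preview',       // title
  { viewColumn: vscode.ViewColumn.Beside, preserveFocus: true, preview: true },
  { enableScripts: false },
);

// Hypothetical third argument on reveal as well:
// panel.reveal(vscode.ViewColumn.Beside, true, { preview: true });
```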
While you're at it, it'd be nicer to have a proper interface rather than having the properties listed in-line:
```
interface ShowOptions {
readonly viewColumn?: ViewColumn;
readonly preserveFocus?: boolean;
readonly preview?: boolean;
}
```
...and then have `TextDocumentShowOptions` and `NotebookDocumentShowOptions` extend that. | feature-request,api,webview | low | Minor |
2,737,306,446 | ant-design | px values inside CSS variables are not converted when using px2remTransformer | ### Reproduction link
[](https://codesandbox.io/p/sandbox/sandpack-project-forked-3zrwyk?file=%2Fapp.tsx)
### Steps to reproduce

See the reproduction link. I found several related issues, but they were basically all closed or never got a reply. This problem really affects real-world usage. I hope the maintainers can support this or provide a fix, or I can try to help fix it:
https://github.com/ant-design/ant-design/issues/47748
https://github.com/ant-design/cssinjs/issues/203
https://github.com/ant-design/cssinjs/issues/180
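A minimal sketch of the setup involved, assuming the standard `@ant-design/cssinjs` transformer API (the actual sandbox may differ):
```tsx
import React from 'react';
import { StyleProvider, px2remTransformer } from '@ant-design/cssinjs';
import { Button, ConfigProvider } from 'antd';

// px2remTransformer rewrites px literals in the generated CSS, but px values
// that end up inside CSS custom properties (CSS variables) are reportedly
// left untouched.
const px2rem = px2remTransformer({ rootValue: 16 });

const App: React.FC = () => (
  <StyleProvider transformers={[px2rem]}>
    <ConfigProvider theme={{ cssVar: true }}>
      <Button type="primary">Button</Button>
    </ConfigProvider>
  </StyleProvider>
);

export default App;
```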
### What is expected?
px values inside CSS variables should also be converted.
### What is actually happening?
px values inside CSS variables are not converted.
| Environment | Info |
| --- | --- |
| antd | undefined |
| React | 18 |
| System | ubuntu |
| Browser | Chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ๐ Bug | low | Minor |
2,737,313,180 | kubernetes | watch always terminates early | ### What happened?
When I use
https://xxxxxx/api/v1/pods?timeoutSeconds=10000&watch=true
the watch always gets terminated early.
We have 100,000 pods.
I looked at #13969; a watch request like this returns events for all pods.
(c *cacheWatcher) processInterval only starts executing the process function after all initEvents have been sent to the result channel successfully, but that takes 5-6 seconds.
Until the process function runs, the watcher's input channel has no consumer.
The watcher therefore becomes a blockedWatcher, and when etcd changes,
the func watcher.add(event, timer) times out and kills my watcher.
### What did you expect to happen?
watcher keep to timeoutSecond
### How can we reproduce it (as minimally and precisely as possible)?
https://xxxxxx/api/v1/pods?timeoutSeconds=10000&watch=true
100000 pods
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
1.30
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Major |
2,737,406,067 | kubernetes | add `matchLabelKeys: ["pod-template-hash"]` in the default `ScheduleAnyway` topology spread | We recently added `matchLabelKeys` feature at [pod topology spread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/).
Pod topology spread has the [Cluster-level default constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#cluster-level-default-constraints);
```yaml
defaultConstraints:
- maxSkew: 3
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
- maxSkew: 5
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
```
I was wondering: for pods owned by a ReplicaSet controller, would it make sense to add `matchLabelKeys: ["pod-template-hash"]` to these default constraints? (I assume such a change to the default behavior is allowed; am I right?)
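As an illustrative sketch only (not an existing default), the proposal could look like this:
```yaml
defaultConstraints:
- maxSkew: 3
  topologyKey: "kubernetes.io/hostname"
  whenUnsatisfiable: ScheduleAnyway
  matchLabelKeys: ["pod-template-hash"]
- maxSkew: 5
  topologyKey: "topology.kubernetes.io/zone"
  whenUnsatisfiable: ScheduleAnyway
  matchLabelKeys: ["pod-template-hash"]
```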
/cc @kubernetes/sig-scheduling-misc @macsko @dom4ha
/sig scheduling
/kind feature | sig/scheduling,kind/feature,sig/apps,needs-triage | medium | Critical |
2,737,444,812 | deno | mutation testing | Deno's testing and code coverage support are great and continue to improve. Sometimes developers write code and don't realize their tests don't assert certain things. Sometimes they write tests and they pass but don't realize that they wouldn't fail even if the code under test were changed. We can "test our tests" using methods like [mutation testing](https://en.m.wikipedia.org/wiki/Mutation_testing).
I've seen some tools that automate this for JavaScript (e g. [Stryker](https://stryker-mutator.io/)) but I think these solutions rely on things that don't apply in a Deno context: Babel, CommonJS, etc.
I'd love to see support for mutation testing built-in to Deno or a way to plug in to add the capability. This might be something that would be beneficial to write in Rust (for quickly generating the different code variations, etc.). I could see using code coverage from individual tests to intelligently only run tests that could possibly kill the mutation and not running tests that clearly wouldn't.
If this isn't something worth exploring being built into Deno, then I would be interested in getting some pointers on how to build a 3rd-party library to achieve this. I'm imagining the following flow:
- run all tests collecting code coverage for each individual test (not aggregating coverage from all tests)
- if any tests are already failing then there's no point in continuing (although I suppose mutation testing could still be done for code covered by passing tests)
- have a default set of mutation operators to apply to the source code, yielding mutants (see the sketch after this list)
- for each mutant, re-run the tests that covered the statement before mutation, if any test fails then the mutant is killed and we have coverage otherwise the mutant survives and we don't have meaningful assertions for the statement before mutation
- collect how many mutants survive and report a mutation score of survived vs. generated along with more detailed reporting like code coverage to be able to inspect mutation coverage and understand uncovered code
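To make "mutation operators" concrete, here is a hypothetical TypeScript sketch of one naive operator; a real implementation would mutate the AST rather than raw text:
```ts
// Hypothetical sketch of a mutation operator: swap one binary operator in the
// source text and yield each resulting mutant location.
type Mutation = { original: string; replacement: string; index: number };

function* mutateBinaryOperators(source: string): Generator<Mutation> {
  // Naive string matching (e.g. "<" also matches inside "<="); fine for a sketch.
  const swaps: Record<string, string> = { "===": "!==", "<": ">=", ">": "<=", "+": "-" };
  for (const [op, replacement] of Object.entries(swaps)) {
    let index = source.indexOf(op);
    while (index !== -1) {
      yield { original: op, replacement, index };
      index = source.indexOf(op, index + op.length);
    }
  }
}

// Each mutation would be applied to a copy of the file and the covering tests
// re-run; if every test still passes, the mutant "survives".
```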
Additional options would be helpful to expose. Here are some ideas:
- mutation operators: to enable or disable different operators than the default set
- include/exclude filters for code to mutate
- include/exclude filters for tests to run
I think having this built-in (perhaps as an option to `deno test` itself) would provide opportunities to make it perform very well and ease adoption. Code coverage only tells us what code was executed during testing while mutation coverage tells us what code was actually asserted. This can then provide developers much more confidence in their code. | suggestion,testing | low | Major |
2,737,459,128 | svelte | Nested Destructuring of $props() | ### Describe the problem
I often find myself destructuring $props. In cases such as
```ts
const { index } = $props()
```
This is, as shown, simple and clean. But what if the prop we are destructuring is in itself an object?
```ts
const { suggestion: { id } } = $props()
```
The above does not work, and gives the following error:
> `$props()` assignment must not contain nested properties or computed keys
https://svelte.dev/e/props_invalid_pattern
Instead, we have to destructure in another line, like so:
```ts
const { suggestion } = $props()
const { id } = suggestion
```
### Describe the proposed solution
I would love to know the reasoning behind this not being supported currently. If the issue is reactivity, I believe reactivity can be achieved using $derived or $state and $effect, like so:
```ts
const { suggestion } = $props()
const { id } = $derived(suggestion)
```
which is how it could work under the hood.
So, to state the obvious, my proposed solution is to...allow nested destructuring from $props().
```ts
const { suggestion: { id } } = $props()
```
It can be done in other parts of svelte, albeit without reactivity, and would make a few people's lives a fair bit easier.
### Importance
would make my life easier | feature request | low | Critical |
2,737,466,003 | ant-design | Autocomplete dropdown not moving to next line when there is a textarea with multiple lines |
### Reproduction link
[](https://codesandbox.io/p/sandbox/h7r72q)
### Steps to reproduce
Look at the Customize Input Component for Autocomplete.
When you type on a later line of the textarea, the AutoComplete dropdown stays anchored to the top line.
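For context, a minimal sketch of the customized-input setup being described (assumed from the description; the actual sandbox may differ):
```tsx
import React from 'react';
import { AutoComplete, Input } from 'antd';

// AutoComplete with a multi-line TextArea as its customized input: the dropdown
// reportedly stays anchored to the first line even after the caret moves down.
const Demo: React.FC = () => (
  <AutoComplete
    options={[{ value: 'option-1' }, { value: 'option-2' }]}
    style={{ width: 300 }}
  >
    <Input.TextArea rows={4} placeholder="type here" />
  </AutoComplete>
);

export default Demo;
```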
### What is expected?
The dropdown should be positioned underneath the current cursor (caret) position.
### What is actually happening?
The dropdown stays anchored to the top line.
| Environment | Info |
| --- | --- |
| antd | 5.22.4 |
| React | 18.0.0 |
| System | Mac 15.0.1 (24A348) |
| Browser | Chrome Version 131.0.6778.139 (Official Build) (arm64) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | improvement | low | Minor |
2,737,473,325 | pytorch | Compiled Autograd + Activation Checkpointing/Offloading | ### 🚀 The feature, motivation and pitch
There's been many issues interested in this topic, so I'm going to use this issue to track the current status.
## Full support
- Checkpointing inside of compiled region i.e. `torch.compile(checkpoint(fn))`
- Reentrant i.e. `checkpoint(fn, use_reentrant=True)` on 2.6+
## Partial support
Eager region checkpointing i.e. `checkpoint(fn)` outside of a torch.compile wrapper
- Non-reentrant i.e. `checkpoint(fn, use_reentrant=False)`: Doesn't error, but no memory savings expected
- Selective i.e. `checkpoint(fn, context_fn=create_selective_checkpoint_contexts(...))`: Doesn't error >2.6.0, but no memory savings expected
## Unsupported
Disk offloading i.e. defining pack_hooks/unpack_hooks using torch.save and torch.load
CPU offloading i.e. defining pack_hooks/unpack_hooks using .cuda() and .cpu()
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @yf225 | triaged,oncall: pt2,module: compiled autograd,activation-checkpointing | low | Critical |
2,737,522,010 | PowerToys | Keyboard manager is not working | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I use the Dvorak keyboard layout, so I map Ctrl-J to Ctrl-C and Ctrl-K to Ctrl-V.
It works about half of the time; the other half, it simply does nothing.
### โ๏ธ Expected Behavior
It should always work: when I press Ctrl-J, it should copy to the clipboard just as Ctrl-C would.
### โ Actual Behavior
Ctrl-J still acts like Ctrl-J.
### Other Software
Vivaldi | 7.0.3495.26ย (Stable channel)ย (64-bit)
| Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage,Needs-Team-Response | low | Minor |
2,737,545,868 | pytorch | FSDP constructor gets stuck with `sync_module_states=True` | ### 🐛 Describe the bug
When using FSDP with `sync_module_states=True`, the FSDP constructor gets stuck. Since CUDA operations are asynchronous, the main process actually hangs the first time the FSDP-wrapped model is accessed.
```python
import torch
import torch.distributed as dist
from transformers import AutoModelForCausalLM, AutoConfig
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
def main():
dist.init_process_group(backend="nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)
model_path = "/checkpoints/Qwen2-0.5B-Instruct/"
if rank == 0:
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="cuda",
)
param_init_fn = None
else:
with torch.device("meta"):
model = AutoModelForCausalLM.from_config(AutoConfig.from_pretrained(model_path))
param_init_fn = lambda x: x.to_empty(device=torch.cuda.current_device(), recurse=False)
model = FSDP(
model,
sync_module_states=True,
param_init_fn=param_init_fn,
device_id=rank,
)
model = model.to('cpu')
model = model.to('cuda')
print(f"Rank {rank} Done.")
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
Output:
```
W1213 14:25:20.129000 139867745548096 torch/distributed/run.py:779]
W1213 14:25:20.129000 139867745548096 torch/distributed/run.py:779] *****************************************
W1213 14:25:20.129000 139867745548096 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W1213 14:25:20.129000 139867745548096 torch/distributed/run.py:779] *****************************************
Rank 0 Done.
```
### Versions
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.161.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-95
Off-line CPU(s) list: 96-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] flashinfer==0.1.6+cu121torch2.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0
[pip3] torchao==0.6.1
[pip3] torchmetrics==1.5.1
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] flashinfer 0.1.6+cu121torch2.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchao 0.6.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang | oncall: distributed,module: fsdp | low | Critical |
2,737,606,089 | pytorch | [Inductor] [CPU] `nn.LayerNorm` outputs numerical error compared with eager | ### 🐛 Describe the bug
Related to #142839.
To be honest, I am not sure whether this is my oversight, because it seems that many `Norm` operators produce numerical errors in Inductor mode.
This error is only obvious on `CPU`.
I will check the other `Norm` operators next.
Please feel free to correct me if I am wrong. :)
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super().__init__()
self.ln = nn.LayerNorm([10000, 1000]) # LayerNorm for 2D input
def forward(self, x):
x = self.ln(x)
return x
model = Model().eval()
x = torch.randn(1, 3, 10000, 1000) # As `H` and `W` increase, the error might be amplified
inputs = [x]
c_model = torch.compile(model)
output = model(*inputs)
c_output = c_model(*inputs)
print(torch.allclose(output, c_output, 1e-5, 1e-5)) # loose check in fp32
print(torch.max(torch.abs(output - c_output)))
fp_64_ref = c_model(x.double())
print("Eager divergence", torch.max(torch.abs(output - fp_64_ref)))
print("Compile divergence divergence", torch.max(torch.abs(c_output - fp_64_ref)))
```
### Error logs
cpu
```
False
tensor(0.0007, grad_fn=<MaxBackward1>)
Eager divergence tensor(6.1528e-07, dtype=torch.float64, grad_fn=<MaxBackward1>)
Compile divergence divergence tensor(0.0007, dtype=torch.float64, grad_fn=<MaxBackward1>)
```
### Versions
torch version: 2.6.0.dev20241205+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241205+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2500.000
BogoMIPS: 5000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241205+cu124
[pip3] torchaudio==2.5.0.dev20241205+cu124
[pip3] torchvision==0.20.0.dev20241205+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241205+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241205+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241205+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: correctness (silent),oncall: pt2,module: inductor,oncall: cpu inductor | low | Critical |
2,737,607,946 | tauri | [bug] window randomly restarts and freezes, possibly WASM related | ### Describe the bug
I am working on an application that uses Whisper from transformers.js (which downloads WASM files); in Electron and in the browser this issue doesn't exist.
I am not seeing any error messages in the console or in the terminal.
What I am seeing is that the page randomly refreshes, and sometimes freezes or turns gray, which makes me think it crashes for an unknown reason. Inspect Element also freezes when that happens.
### Reproduction
```
import { pipeline } from '@huggingface/transformers';
const transcriber = await pipeline('automatic-speech-recognition', 'onnx-community/whisper-base', {
// dtype: {
// encoder_model: 'fp32',
// decoder_model_merged: 'fp32',
// },
// device: 'webgpu' // not supported in MacOS
});
```
Run just this code and wait 10-60 seconds; the app window freezes or restarts.
### Expected behavior
No restarts / crashes
### Full `tauri info` output
```text
[โ] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
โ Xcode Command Line Tools: installed
โ rustc: 1.82.0 (f6e511eec 2024-10-15)
โ cargo: 1.82.0 (8f40fc59f 2024-08-21)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.7.0
- yarn: 1.22.22
- npm: 10.8.2
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- @tauri-apps/api ๎: 2.1.1
- @tauri-apps/cli ๎: 2.1.0
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.2.0
- @tauri-apps/plugin-shell ๎: 2.2.0
- tauri-plugin-fs ๐ฆ: 2.2.0
- @tauri-apps/plugin-fs ๎: 2.2.0
- tauri-plugin-dialog ๐ฆ: 2.2.0
- @tauri-apps/plugin-dialog ๎: 2.2.0
[-] App
- build-type: bundle
- CSP: default-src 'self' ipc: http://ipc.localhost; img-src 'self' asset: http://asset.localhost; media-src 'self' asset: http://asset.localhost
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
```text
none
```
### Additional context
```
"dependencies": {
"@huggingface/transformers": "^3.1.2",
"@tauri-apps/api": "^2",
"@tauri-apps/plugin-dialog": "~2",
"@tauri-apps/plugin-fs": "~2",
"@tauri-apps/plugin-opener": "^1.0.0",
"@tauri-apps/plugin-shell": "^2",
"axios": "^1.7.7",
"tauri-plugin-serialplugin": "^2.4.11",
"vue": "^3.3.4",
"vue-router": "^4.4.5"
},
"devDependencies": {
"@tauri-apps/cli": "^2",
"@vitejs/plugin-vue": "^5.0.5",
"sass": "^1.81.0",
"typescript": "^5.2.2",
"vite": "^5.3.1",
"vue-tsc": "^2.0.22"
}
``` | type: bug,status: upstream,platform: macOS,status: needs triage | low | Critical |
2,737,666,079 | vscode | SCM - The version-control comment is not DevContainer-specific | - VS Code Version: 1.69.0 | 92d25e35d9bf1a6b16f7d0758f25d48ace11e5b9 | x64
- OS Version: Linux | 5.15.0-40-generic | 43-Ubuntu SMP | x86_64 GNU/Linux
Steps to Reproduce:
1. Install Remote Development extension
2. Set a `name` for the `devcontainer.json`
3. Get inside the DevContainer
4. Make a commit comment
5. Switch to another DevContainer with a different `name`
6. The same comment for the first container is shown here for the 2nd one
I'm assuming that this is **not** the expected behavior since the `name` is technically identifying the container...
| bug,scm | low | Major |
2,737,709,016 | pytorch | [Inductor] [Device] `LazyConv` behave differently on CUDA and CPU with 1D, 2D, and 3D | ### ๐ Describe the bug
Using `LazyConv2d` as an example: when the `input_channels` dimension of the input tensor is set to **0**,
CUDA Inductor fails the check with an error, while the other configurations return empty tensors.
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv = nn.LazyConv2d(2, kernel_size=1)
def forward(self, x):
x = self.conv(x)
return x
model = Model()
def run_test(inputs, model, inductor_mode, device):
inputs = [x.to(device) for x in inputs]
model = model.eval().to(device)
if inductor_mode == True:
model = torch.compile(model)
try:
output = model(*inputs)
print(output)
except Exception as e:
print(e)
x = torch.randn(1, 0, 32, 32) # trigger condition: input_channel = 0
inputs = [x]
run_test(inputs, model, inductor_mode=True, device="cpu") # tensor([], size=(1, 0, 32, 32), grad_fn=<CompiledFunctionBackward>)
run_test(inputs, model, inductor_mode=False, device="cpu") # tensor([], size=(1, 0, 32, 32), grad_fn=<ConvolutionBackward0>)
run_test(inputs, model, inductor_mode=False, device="cuda") # tensor([], device='cuda:0', size=(1, 0, 32, 32), grad_fn=<ConvolutionBackward0>)
run_test(inputs, model, inductor_mode=True, device="cuda") # RuntimeError: Function ConvolutionBackward0 returned an invalid gradient at index 2 - got [0] but expected shape compatible with [2]
```
### Error logs
```
tensor([], size=(1, 0, 32, 32), grad_fn=<CompiledFunctionBackward>)
tensor([], size=(1, 0, 32, 32), grad_fn=<ConvolutionBackward0>)
tensor([], device='cuda:0', size=(1, 0, 32, 32),
grad_fn=<ConvolutionBackward0>)
RuntimeError: Function ConvolutionBackward0 returned an invalid gradient at index 2 - got [0] but expected shape compatible with [2]
```
### Versions
torch version: 2.6.0.dev20241205+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241205+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2500.000
BogoMIPS: 5000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241205+cu124
[pip3] torchaudio==2.5.0.dev20241205+cu124
[pip3] torchvision==0.20.0.dev20241205+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241205+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241205+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241205+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | module: autograd,triaged,oncall: pt2,module: inductor | low | Critical |
2,737,881,542 | tauri | [bug] Android build on Arch Linux: 'getentropy' is unavailable: introduced in Android 28 | ### Describe the bug
I use reqwest in the project and need to compile it for Android, but reqwest depends on 'openssl-sys', which fails with an error when compiling for Android.
When building for Android, openssl-sys picks up the "aarch64-linux-android24-clang" compiler (API level 24), but the error message asks for API level 28 or higher.
If the CC variable is passed explicitly when compiling openssl-sys for Android, the build succeeds, for example: `CC=$HOME/Android/Sdk/ndk/28.0.12433566/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android35-clang cargo build -p mobile --target aarch64-linux-android`.
But passing the CC variable during Tauri compilation doesn't seem to take effect. Is there another solution?
### Reproduction
_No response_
### Expected behavior
I want to be able to compile it into an Android app on Arch Linux.
### Full `tauri info` output
```text
cargo tauri info
[โ] Environment
- OS: Arch Linux 20240721.0.248532 x86_64 (X64)
โ webkit2gtk-4.1: 2.46.4
โ rsvg2: 2.59.2
โ rustc: 1.82.0 (f6e511eec 2024-10-15)
โ cargo: 1.82.0 (8f40fc59f 2024-08-21)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN)
- node: 23.1.0
- pnpm: 9.12.3
- npm: 10.9.0
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- tauri-cli ๐ฆ: 2.0.4
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.0.2
- tauri-plugin-websocket ๐ฆ: 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../../mobile-web/dist
- devUrl: http://localhost:1420/
```
### Stack trace
```text
/home/one/Android/Sdk/ndk/28.0.12433566/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android24-clang -I. -Icrypto -Iinclude -Iproviders/implementations/include -Iproviders/common/include -Iproviders/fips/include -DBSAES_ASM -DECP_NISTZ256_ASM -DECP_SM2P256_ASM -DKECCAK1600_ASM -DOPENSSL_CPUID_OBJ -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DSM4_ASM -DVPAES_ASM -DVPSM4_ASM -fPIC -pthread -Wa,--noexecstack -Qunused-arguments -Wall -O3 -O2 -ffunction-sections -fdata-sections -fPIC -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSLDIR="\"/usr/local/ssl\"" -DENGINESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/engines-3\"" -DMODULESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/ossl-modules\"" -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -DANDROID -MMD -MF providers/implementations/rands/seeding/libdefault-lib-rand_win.d.tmp -c -o providers/implementations/rands/seeding/libdefault-lib-rand_win.o providers/implementations/rands/seeding/rand_win.c
/usr/bin/perl "-I." "-Iproviders/common/der" "-Mconfigdata" "-Mconfigdata" "-Moids_to_c" "util/dofile.pl" "-oMakefile" providers/common/include/prov/der_dsa.h.in > providers/common/include/prov/der_dsa.h
/home/one/Android/Sdk/ndk/28.0.12433566/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android24-clang -Iproviders/common/include/prov -I. -Icrypto -Iinclude -Iproviders/implementations/include -Iproviders/common/include -Iproviders/fips/include -DBSAES_ASM -DECP_NISTZ256_ASM -DECP_SM2P256_ASM -DKECCAK1600_ASM -DOPENSSL_CPUID_OBJ -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DSM4_ASM -DVPAES_ASM -DVPSM4_ASM -fPIC -pthread -Wa,--noexecstack -Qunused-arguments -Wall -O3 -O2 -ffunction-sections -fdata-sections -fPIC -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSLDIR="\"/usr/local/ssl\"" -DENGINESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/engines-3\"" -DMODULESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/ossl-modules\"" -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -DANDROID -MMD -MF providers/implementations/signature/libdefault-lib-ecdsa_sig.d.tmp -c -o providers/implementations/signature/libdefault-lib-ecdsa_sig.o providers/implementations/signature/ecdsa_sig.c
/usr/bin/perl "-I." "-Iproviders/common/der" "-Mconfigdata" "-Mconfigdata" "-Moids_to_c" "util/dofile.pl" "-oMakefile" providers/common/include/prov/der_ecx.h.in > providers/common/include/prov/der_ecx.h
/home/one/Android/Sdk/ndk/28.0.12433566/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android24-clang -I. -Icrypto -Iinclude -Iproviders/implementations/include -Iproviders/common/include -Iproviders/fips/include -DBSAES_ASM -DECP_NISTZ256_ASM -DECP_SM2P256_ASM -DKECCAK1600_ASM -DOPENSSL_CPUID_OBJ -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DSM4_ASM -DVPAES_ASM -DVPSM4_ASM -fPIC -pthread -Wa,--noexecstack -Qunused-arguments -Wall -O3 -O2 -ffunction-sections -fdata-sections -fPIC -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSLDIR="\"/usr/local/ssl\"" -DENGINESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/engines-3\"" -DMODULESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/ossl-modules\"" -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -DANDROID -MMD -MF providers/implementations/signature/libdefault-lib-mac_legacy_sig.d.tmp -c -o providers/implementations/signature/libdefault-lib-mac_legacy_sig.o providers/implementations/signature/mac_legacy_sig.c
/home/one/Android/Sdk/ndk/28.0.12433566/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android24-clang -Iproviders/common/include/prov -I. -Icrypto -Iinclude -Iproviders/implementations/include -Iproviders/common/include -Iproviders/fips/include -DBSAES_ASM -DECP_NISTZ256_ASM -DECP_SM2P256_ASM -DKECCAK1600_ASM -DOPENSSL_CPUID_OBJ -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DSM4_ASM -DVPAES_ASM -DVPSM4_ASM -fPIC -pthread -Wa,--noexecstack -Qunused-arguments -Wall -O3 -O2 -ffunction-sections -fdata-sections -fPIC -DOPENSSL_USE_NODELETE -DOPENSSL_PIC -DOPENSSLDIR="\"/usr/local/ssl\"" -DENGINESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/engines-3\"" -DMODULESDIR="\"/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/ossl-modules\"" -DOPENSSL_BUILDING_OPENSSL -DNDEBUG -DANDROID -MMD -MF providers/implementations/signature/libdefault-lib-rsa_sig.d.tmp -c -o providers/implementations/signature/libdefault-lib-rsa_sig.o providers/implementations/signature/rsa_sig.c
make[1]: Leaving directory '/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src'
cargo:warning=building OpenSSL: 'make' reported failure with exit status: 2
cargo:warning=openssl-src: failed to build OpenSSL from source
--- stderr
DEBUG: all keys: APPLINKDIR, BINDIR, CMAKECONFIGDIR, ENGINESDIR, INCLUDEDIR, LDLIBS, LIBDIR, MODULESDIR, PKGCONFIGDIR, PREFIX, VERSION, libdir
No value given for CMAKECONFIGDIR
No value given for PKGCONFIGDIR
No value given for libdir
DEBUG: PREFIX = . => PREFIX = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src
DEBUG: libdir = . => libdir = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src
DEBUG: BINDIR = apps => BINDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src/apps, BINDIR_REL_PREFIX = apps
DEBUG: LIBDIR = => LIBDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src, LIBDIR_REL_PREFIX =
DEBUG: INCLUDEDIR = [ include, ./include ] => INCLUDEDIR = [ /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src/include, /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src/include ], INCLUDEDIR_REL_PREFIX = [ include, ./include ]
DEBUG: APPLINKDIR = ms => APPLINKDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src/ms, APPLINKDIR_REL_PREFIX = ms
DEBUG: ENGINESDIR = engines => ENGINESDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src/engines, ENGINESDIR_REL_LIBDIR = engines
DEBUG: MODULESDIR = providers => MODULESDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src/providers, MODULESDIR_REL_LIBDIR = providers
DEBUG: PKGCONFIGDIR = . => PKGCONFIGDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src, PKGCONFIGDIR_REL_LIBDIR = .
DEBUG: CMAKECONFIGDIR = . => CMAKECONFIGDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src, CMAKECONFIGDIR_REL_LIBDIR = .
DEBUG: all keys: APPLINKDIR, BINDIR, CMAKECONFIGDIR, ENGINESDIR, INCLUDEDIR, LDLIBS, LIBDIR, MODULESDIR, PKGCONFIGDIR, PREFIX, VERSION, libdir
DEBUG: PREFIX = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install => PREFIX = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install
DEBUG: libdir = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib => libdir = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib
DEBUG: BINDIR = bin => BINDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/bin, BINDIR_REL_PREFIX = bin
DEBUG: LIBDIR = lib => LIBDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib, LIBDIR_REL_PREFIX = lib
DEBUG: INCLUDEDIR = include => INCLUDEDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/include, INCLUDEDIR_REL_PREFIX = include
DEBUG: APPLINKDIR = include/openssl => APPLINKDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/include/openssl, APPLINKDIR_REL_PREFIX = include/openssl
DEBUG: ENGINESDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/engines-3 => ENGINESDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/engines-3, ENGINESDIR_REL_LIBDIR = engines-3
DEBUG: MODULESDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/ossl-modules => MODULESDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/ossl-modules, MODULESDIR_REL_LIBDIR = ossl-modules
DEBUG: PKGCONFIGDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/pkgconfig => PKGCONFIGDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/pkgconfig, PKGCONFIGDIR_REL_LIBDIR = pkgconfig
DEBUG: CMAKECONFIGDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/cmake/OpenSSL => CMAKECONFIGDIR = /home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/install/lib/cmake/OpenSSL, CMAKECONFIGDIR_REL_LIBDIR = cmake/OpenSSL
providers/implementations/rands/seeding/rand_unix.c:355:9: error: 'getentropy' is unavailable: introduced in Android 28
355 | if (getentropy != NULL) {
| ^
/home/one/Android/Sdk/ndk/28.0.12433566/toolchains/llvm/prebuilt/linux-x86_64/bin/../sysroot/usr/include/bits/getentropy.h:51:11: note: 'getentropy' has been explicitly marked unavailable here
51 | __wur int getentropy(void* _Nonnull __buffer, size_t __buffer_size) __INTRODUCED_IN(28);
| ^
providers/implementations/rands/seeding/rand_unix.c:356:13: error: 'getentropy' is unavailable: introduced in Android 28
356 | if (getentropy(buf, buflen) == 0)
| ^
/home/one/Android/Sdk/ndk/28.0.12433566/toolchains/llvm/prebuilt/linux-x86_64/bin/../sysroot/usr/include/bits/getentropy.h:51:11: note: 'getentropy' has been explicitly marked unavailable here
51 | __wur int getentropy(void* _Nonnull __buffer, size_t __buffer_size) __INTRODUCED_IN(28);
| ^
2 errors generated.
make[1]: *** [Makefile:13097: providers/implementations/rands/seeding/libdefault-lib-rand_unix.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make: *** [Makefile:2356: build_libs] Error 2
Error building OpenSSL:
'make' reported failure with exit status: 2
Command failed: cd "/home/one/code/mobile-llm/apps/target/aarch64-linux-android/release/build/openssl-sys-1795d91a965d6ea0/out/openssl-build/build/src" && MAKEFLAGS="-j --jobserver-fds=8,9 --jobserver-auth=8,9" "make" "build_libs"
Error `Failed to run `cargo build`: command ["cargo", "build", "--package", "mobile", "--manifest-path", "/home/one/code/mobile-llm/apps/mobile/Cargo.toml", "--target", "aarch64-linux-android", "--features", "tauri/custom-protocol tauri/rustls-tls", "--lib", "--release"] exited with code 101
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,737,891,221 | ant-design | form.setFieldsValue doesn't work when triggered twice | ### Reproduction link
[Reproduction sandbox](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-jztmdz)
### Steps to reproduce
1. Click the setValue button
2. Choose 2 in the Select
3. Click the setValue button again
### What is expected?
The value of `list` should be 1
### What is actually happening?
`list` is still 2
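A minimal sketch of the flow above (component names are mine; the real sandbox may differ, e.g. it may use `Form.List`):
```js
import React from 'react';
import { Button, Form, Select } from 'antd';
const Demo = () => {
  const [form] = Form.useForm();
  return (
    <Form form={form}>
      <Form.Item name="list" label="list">
        <Select options={[{ value: 1, label: '1' }, { value: 2, label: '2' }]} />
      </Form.Item>
      {/* click, change the Select to 2, then click again: the field stays 2 */}
      <Button onClick={() => form.setFieldsValue({ list: 1 })}>setValue</Button>
    </Form>
  );
};
export default Demo;
```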
| Environment | Info |
| --- | --- |
| antd | 5.22.4 |
| React | 18.x |
| System | macOS |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ๐ Documentation | low | Minor |
2,737,896,252 | create-react-app | Bug: creating a new React project installs the latest React (v19) | See the issue and solution at https://github.com/facebook/react/issues/31701 | needs triage,issue: bug report | low | Critical |
2,737,926,458 | godot | The value format of `_spawnable_scenes` property of `MultiplayerSpawner` in .tscn is different between 4.4.dev3 and 4.4.dev6 | ### Tested versions
- Reproducible in 4.4.dev6 official and 4.4.dev3 official
### System information
Godot v4.4.dev3 - Windows 10.0.22631 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6094) - 13th Gen Intel(R) Core(TM) i5-13600KF (20 threads)
### Issue description
When using 4.4.dev3 and setting the `MultiplayerSpawner` node's `Auto Spawn List` property, the value saved in the .tscn is:
```
[node name="Slot1Spawner" type="MultiplayerSpawner" parent="."]
_spawnable_scenes = PackedStringArray("res://player_head.tscn")
```
When using 4.4.dev6 and doing the same, the value saved in the .tscn is:
```
[node name="Slot1Spawner" type="MultiplayerSpawner" parent="."]
_spawnable_scenes = PackedStringArray("uid://docb3aisq4txw")
```
Is this supposed to happen, or is it a bug? I think using `PackedStringArray("res://player_head.tscn")` is better than using `PackedStringArray("uid://docb3aisq4txw")`, but I'm not sure.
Can an older Godot version understand `_spawnable_scenes = PackedStringArray("uid://docb3aisq4txw")`? This behavior might **break compatibility**, which is not mentioned anywhere.

### Steps to reproduce
1. using 4.4.dev3, create a project
2. create a scene
3. add a `MultiplayerSpawner` node
4. edit its property `Auto Spawn List`, add an element, then choose another scene.
5. save and check the value saved in the .tscn file
6. Do the same in 4.4.dev6 and compare the .tscn files.
### Minimal reproduction project (MRP)
NA | discussion,topic:multiplayer | low | Critical |
2,737,963,986 | ollama | ollama : /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.25 not found - Kylin Linux glibc++ version incompatible with official builds | ### What is the issue?
When running ollama, the following error occurs:
ollama : /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.25 not found
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug,linux | low | Critical |
2,737,981,698 | angular | docs: `HttpRequest` is missing constructor information. | ### Describe the problem that you experienced
An API entry like `HttpRequest` should have its (overloaded) constructor signatures displayed on the API docs page.
ex: https://angular.dev/api/common/http/HttpRequest
| area: docs-infra | low | Minor |
2,738,006,176 | three.js | NodeMaterial support in Loader | ### Description
Add support for NodeMaterial in GLTFLoader, and in loaders in general.
### Solution
Assign the node material directly instead of the legacy material in the loaders.
### Alternatives
Pass an option to the LoaderManager to return either the node or the legacy material version (e.g. StandardNodeMaterial / StandardMaterial).
Then, inside the loader, we call this option to get the material class. This way the change can be made effective for all loaders fairly quickly.
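For context, a rough sketch of the manual swap we have to do today after loading, which the loader-side option would make unnecessary ('model.glb' is a placeholder, and the node-material import path depends on the three revision):
```js
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
// recent releases export node materials from 'three/webgpu', older ones from 'three/nodes'
import { MeshStandardNodeMaterial } from 'three/webgpu';
const loader = new GLTFLoader();
loader.load('model.glb', (gltf) => {
  gltf.scene.traverse((obj) => {
    if (obj.isMesh) {
      const legacy = obj.material;
      const node = new MeshStandardNodeMaterial();
      // carry over the basic properties; a real conversion would copy more maps
      node.color.copy(legacy.color);
      node.map = legacy.map;
      node.roughness = legacy.roughness;
      node.metalness = legacy.metalness;
      obj.material = node;
    }
  });
});
```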
### Additional context
From my understanding, we're moving toward an all-node-material world in three.js.
In that sense it would be nice to directly get a NodeMaterial when loading a 3D model.
I can open a PR if you think it's a good idea. | Loaders | low | Minor |
2,738,022,645 | excalidraw | arrow binding indicator should scale with zoom | The binding-area indicator shown when binding an arrow to a shape should scale with zoom. That is, at zoom levels other than 100% it should look visibly identical to 100%.

incorrect:

| bug,Arrow Binding | low | Minor |
2,738,048,182 | TypeScript | Typescript gets confused when renaming by only changing filename case of imported file on Windows | ### ๐ Search Terms
windows rename already included
### ๐ Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _________
- I was unable to test this on prior versions because _______
### โฏ Playground Link
_No response_
### ๐ป Code
file `src/vector.js`
```js
export default class Vector {}
```
file `test/vector.test.js`
```js
import Vector from '../src/vector.js'
new Vector();
```
now rename the file
```sh
mv src/vector.js src/Vector.js
```
and update the file `test/vector.test.js`
```js
import '../src/Vector.js'
new Vector();
```
### ๐ Actual behavior
There is an error message:

### ๐ Expected behavior
No error message, as the imported file name matches the actual file name.
### Additional information about the issue
`tsconfig.json`
```json
{
"compilerOptions": {
"target": "ES2017",
"module": "commonjs",
"lib": ["es2017", "dom", "DOM.Iterable"],
"allowJs": true,
"checkJs": true,
"skipLibCheck": true,
"declaration": true,
"noEmit": true,
"strict": true,
"baseUrl": ".",
"typeRoots": [
"./node_modules/@types",
],
"esModuleInterop": true
},
"include": [
"src/**/*.js",
"test/**/*.js",
]
}
``` | Bug | low | Critical |
2,738,048,734 | rust | Cannot use unicode identifiers in `--extern` flags | I tried this code:
```console
> rustc --crate-name รถwรถ --crate-type lib - <<<'pub fn รถwรถ() {}'
> rustc --crate-name ลซwลซ --crate-type bin --extern รถwรถ=libรถwรถ.rlib - <<<'fn main() { รถwรถ::รถwรถ(); }'
```
I expected to see this happen: successful compilation
Instead, this happened:
```
error: crate name `รถwรถ` passed to `--extern` is not a valid ASCII identifier
```
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0 (c44b3d50f 2024-12-03)
binary: rustc
commit-hash: c44b3d50fea96a3e0417e8264c16ea21a0a3fca2
commit-date: 2024-12-03
host: x86_64-unknown-linux-gnu
release: 1.85.0
LLVM version: 19.1.4
``` | A-metadata,A-Unicode,T-compiler,C-feature-request,A-crates | low | Critical |
2,738,057,324 | PowerToys | Specific key remapping not working | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce

### โ๏ธ Expected Behavior
Remapping a key to send this shortcut:
Browser favorite button on keyboard --> CTRL (left) + ALT (left) + Home
### โ Actual Behavior
The command is not registered and doesn't work when hitting the remapped key.
Hitting the physical keys works instead:
CTRL (left) + ALT (left) + Home
### Other Software
This shortcut is useful when running an RDP session in full screen: it moves the focus to the host machine instead of the remote session, so that subsequent shortcuts act on the host. | Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage,Needs-Team-Response | low | Major |
2,738,110,495 | vscode | VSCode shows a loading spinner on pasting anything |
Type: <b>Bug</b>
This doesn't reproduce all the time; it happens randomly. I disabled extensions using Start Extension Bisect and it's still happening. There's a related old issue that was closed: #210540
VS Code version: Code - Insiders 1.97.0-insider (b425f4802fcbcccb11ad991208fa262c06255be3, 2024-12-13T05:04:19.453Z)
OS version: Windows_NT x64 10.0.22635
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-12700F (20 x 2112)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.84GB (6.37GB free)|
|Process Argv|--folder-uri file:///d%3A/TEC/Spartans --crash-reporter-id 2a515175-97ab-457e-94e2-d5dadae50b98|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (38)</summary>
Extension|Author (truncated)|Version
---|---|---
markdown-mermaid|bie|1.27.0
vscode-tailwindcss|bra|0.12.16
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.10.0
vscode-eslint|dba|3.0.10
gitlens|eam|2024.12.1304
prettier-vscode|esb|11.0.0
vscode-expo-tools|exp|1.5.0
codespaces|Git|1.17.3
copilot|Git|1.251.1262
copilot-chat|Git|0.24.2024121301
remotehub|Git|0.65.2024112101
vscode-pull-request-github|Git|0.103.2024121117
go|gol|0.42.1
todo-tree|Gru|0.0.226
i18n-ally|lok|2.12.0
dotenv|mik|1.0.1
python|ms-|2024.23.2024121201
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2025.1.2024121301
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.116.0
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
remote-explorer|ms-|0.4.3
remote-repositories|ms-|0.43.2024112101
material-icon-theme|PKi|5.15.0
vscode-yaml|red|1.15.0
rust-analyzer|rus|0.3.2212
code-spell-checker|str|4.0.21
even-better-toml|tam|0.19.2
vscode-mdx|uni|1.8.11
explorer|vit|1.8.3
vscode-wakatime|Wak|24.9.2
markdown-all-in-one|yzh|3.6.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vsc_aacf:30263846
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
2i9eh265:30646982
962ge761:30841072
pythonnoceb:30776497
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
dvdeprecation:31040973
dwnewjupytercf:31046870
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
2j25a237:31183119
5b1c1929:31184661
```
</details>
<!-- generated by issue reporter --> | info-needed,typescript | low | Critical |
2,738,112,289 | pytorch | `torch.linalg.qr_ex` and `torch.linalg.lstsq_ex` | ### ๐ The feature, motivation and pitch
PyTorch already includes multiple linear algebra functions that return the `info` error code from the underlying LAPACK call: [`torch.linalg.cholesky_ex`](https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html), [`torch.linalg.solve_ex`](https://pytorch.org/docs/stable/generated/torch.linalg.solve_ex.html) and [`torch.linalg.ldl_factor_ex`](https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html#torch.linalg.ldl_factor_ex).
The motivation behind this feature request aligns with one of the motivations that I believe led to exposing the previous functions: a fast and clean way to handle decomposition errors during the function call.
### Alternatives
To my knowledge, there is currently no workaround to handle decomposition errors from [`torch.linalg.qr`](https://pytorch.org/docs/stable/generated/torch.linalg.qr.html) and [`torch.linalg.lstsq`](https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html).
As noted in the [docs](https://pytorch.org/docs/stable/generated/torch.linalg.qr.html#:~:text=The%20QR%20decomposition%20is%20only%20well%2Ddefined%20if%20the%20first%20k%20%3D%20min(m%2C%20n)%20columns%20of%20every%20matrix%20in%20A%20are%20linearly%20independent.%20If%20this%20condition%20is%20not%20met%2C%20no%20error%20will%20be%20thrown%2C%20but%20the%20QR%20produced%20may%20be%20incorrect%20and%20its%20autodiff%20may%20fail%20or%20produce%20incorrect%20results.), `torch.linalg.qr` fails silently, and `torch.linalg.lstsq` also may fail silently despite what is mentioned in the [docs](https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html#:~:text=residuals%3A%20the%20squared,future%20PyTorch%20release):
> *residuals*: the squared residuals of the solutions, that is, $\lVert AX - B\rVert_{F}^{2}$. It has shape equal to the batch dimensions of $A$. It is computed when $m > n$ and every matrix in $A$ is full-rank, **otherwise, it is an empty tensor**. If $A$ is a batch of matrices and any matrix in the batch is not full rank, **then an empty tensor is returned**. This behavior may change in a future PyTorch release
Example:
```python
B, N, D = 20, 10, 3
dev = torch.device("cuda")
a = torch.rand(B, N, D, device=dev)
a[..., 0] = a[..., 1] # force rank deficient matrix
assert (torch.linalg.matrix_rank(a) < D).any(-1)
b = torch.rand(B, N, device=dev)
sol = torch.linalg.lstsq(a, b)
print(f"Empty residuals: {sol.residuals.numel()==0}. Shape: {sol.residuals.shape}")
```
### Additional context
_No response_
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | triaged,enhancement,module: linear algebra | low | Critical |
2,738,113,508 | excalidraw | Option to Lock Sizes within a Group | Currently, when a group of items is scaled, all items inside the group are scaled proportionally. There is no feature to lock the dimensions of specific items within the group.
Feature Request:
Add an option to lock the dimensions of individual items in a group so they remain unchanged during group scaling. This would provide more flexibility when working with grouped elements, especially when certain items need to maintain their original size. | enhancement | low | Minor |
2,738,114,705 | react | [React 19] `renderToString`'s output missing some rendered elements | ## Summary
The bug can be seen on [stackblitz](https://stackblitz.com/edit/stackblitz-starters-a7nkhwt8?description=Starter%20project%20for%20Node.js,%20a%20JavaScript%20runtime%20built%20on%20Chrome%27s%20V8%20JavaScript%20engine&file=index.js&title=node.new%20Starter).
It looks like a timing issue, so reproducing it might require a few runs. However, I'm able to repro this each time; the problem is that different elements are missing each time.
An example excerpt from the logged output:
```html
<div class="some-key-ervrair" data-id="101">woah there<span>hello world</span><div class="some-key-4a2dabr" data-id="100">woah there<span>hello world</span>woah there<span>hello world</span><div class="some-key-9nbpw7" data-id="97">woah there<span>hello world</span><div class="some-key-ta4dr8b" data-id="96">woah there<span>hello world</span>
```
We can see here that it skips over 99 and 98 elements, going straight from 100 to 97.
The code used on stackblitz:
```js
const renderToString = require('react-dom/server').renderToString;
const jsx = require('react').createElement;
const BigComponent = ({ count } /*: { count: number } */) => {
if (count === 0) return null;
const start = Date.now();
while (true) {
if (Date.now() - start > 5) {
break;
}
}
return jsx(
'div',
{
className: `some-key-${Math.random().toString(36).slice(6)}`,
'data-id': count,
},
'woah there',
jsx('span', {}, 'hello world'),
jsx(BigComponent, { count: count - 1 })
);
};
console.log(renderToString(jsx(BigComponent, { count: 200 })));
``` | React 19 | low | Critical |
2,738,145,218 | react-native | 0.75.4 - Android 15 for StatusBarModule (EdgeToEdge) | ### Description
Your app uses deprecated APIs or parameters for end-to-end display
One or more APIs you use or parameters you set for end-to-end screen and window display have been discontinued in Android 15. Your application uses the following deprecated APIs or parameters:
android.view.Window.getStatusBarColor
android.view.Window.setStatusBarColor
These start from:
com.facebook.react.modules.statusbar.StatusBarModule$setColor$1.runGuarded
com.facebook.react.modules.statusbar.StatusBarModule.getTypedExportedConstants
https://developer.android.com/about/versions/15/behavior-changes-15?hl=tr#edge-to-edge
### React Native Version
0.75.4
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: Windows 10 10.0.19045
CPU: (8) x64 Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Memory: 15.45 GB / 31.96 GB
Binaries:
Node:
version: 22.9.0
path: C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm:
version: 10.9.1
path: C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK:
API Levels:
- "31"
- "35"
Build Tools:
- 34.0.0
- 35.0.0
System Images:
- android-24 | Google Play Intel x86 Atom
- android-26 | Google APIs Intel x86 Atom
- android-27 | Intel x86 Atom
- android-27 | Google APIs Intel x86 Atom
- android-27 | Google Play Intel x86 Atom
- android-28 | Google Play Intel x86 Atom
- android-34 | Google Play Intel x86_64 Atom
- android-35 | Google Play Intel x86_64 Atom
- android-35 | Pre-Release 16 KB Page Size Google Play ARM Intel x86_64
Atom
Android NDK: Not Found
Windows SDK: Not Found
IDEs:
Android Studio: AI-242.23339.11.2421.12700392
Visual Studio: Not Found
Languages:
Java: 17.0.9
Ruby: Not Found
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.4
wanted: 0.75.4
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```

| Platform: Android,Component: StatusBar,Needs: Author Feedback,Needs: Repro | medium | Major |
2,738,158,272 | godot | Godot 4.3 crashes when trying to open AssetLib | ### Tested versions
Reproducible in:
Godot v4.3.stable from godotengine.org
Not reproduced in:
Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 24.08 (Flatpak runtime) - Wayland - Vulkan (Forward+) - integrated Intel(R) Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i7-12700H (20 Threads)
### System information
Godot v4.3.stable - TUXEDO OS 3 22.04.5 - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 Ti Laptop GPU - 12th Gen Intel(R) Core(TM) i7-12700H (20 Threads)
### Issue description
When trying to access the AssetLib, Godot crashes. It did not crash until I clicked "Go online". Since then, it crashes every time I access it, whether from inside a project or from the project overview.
The Flatpak version does not reproduce this issue. However, I can't use it, as I am unable to get my discrete GPU recognized; it uses the integrated Intel chip instead.
### Steps to reproduce
1. Open Godot v4.3.stable
2. Click on AssetLib from either inside a project or from the project overview.
3. Click "Go online"
4. Godot crashes
### Minimal reproduction project (MRP)
It is project independent. | bug,topic:editor,crash | low | Critical |
2,738,202,527 | godot | ctrl+f in editor has delay before switching, in which key presses can be captured by the code window | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
windows 11 (also present on earlier)
### Issue description
When pressing ctrl+f to search in the editor, there is sometimes a delay before the focus switches, so some characters are typed into the code window instead of the search bar.
### Steps to reproduce
* type in code editor
* press ctrl+f, and immediately start typing a query
* sometimes there's a brief delay, and the first several characters of the search query are entered into code editor instead
* the rest of the characters appear in the search field
### Minimal reproduction project (MRP)
any text file in editor | bug,topic:editor,usability,topic:input | low | Major |
2,738,212,271 | flutter | Update the DeviceLab README.md with updated warnings and info on how to run tests on a local machine | ### Use case
### Use case: Engineers get up-to-date information from the DeviceLab readme
The https://github.com/flutter/flutter/blob/master/dev/devicelab/README.md file explains how to run DeviceLab tests locally. But it omits some important details that differ from the usual behaviour of running tests engineers are familiar with:
1. By default, the failing tests are autoretried. It makes sense to mention this as it is likely any engineer running the tests locally does not want that behaviour and wants to disable it.
2. The DeviceLab test runner automatically reboots the Android and iOS devices it runs tests on. While this behaviour currently can't be disabled, it still makes sense to tell engineers about it, as otherwise, devices (seemingly) randomly restarting looks like a serious bug.
As an engineer, it would be very useful to have these things mentioned in the documentation.
### Proposal
### Proposal: Update the DeviceLab readme
Update the DeviceLab Readme at https://github.com/flutter/flutter/blob/master/dev/devicelab/README.md and add a warning about reboots and a small section about disabling the automatic retries for failed tests. | framework,P2,team-framework,triaged-framework,d: docs/ | low | Critical |
2,738,266,786 | react-native | SectionList renders all items at once. | ### Description
I found out that SectionList invokes the `renderItem` method for all the items, even if they are not visible on the screen.
Here's the expo snack link: https://snack.expo.dev/7cjOb_kzYa8_8gcuW9i9s
Can anyone help me understand why this is happening?
This is the basic code snippet:
```js
import React from 'react';
import {
View,
Text,
SectionList,
StyleSheet,
SafeAreaView
} from 'react-native';
const GroceryInventory = () => {
const groceryData = [
{
title: 'Fruits',
data: Array(50).fill(2).map((_, index) => ({
id: `fruit-${index}`,
name: `Fruit ${index}`,
price: `$${(Math.random() * 5).toFixed(2)}`
}))
},
{
title: 'Vegetables',
data: Array(50).fill(2).map((_, index) => ({
id: `vegetable-${index}`,
name: `Vegetable ${index}`,
price: `$${(Math.random() * 5).toFixed(2)}`
}))
},
{
title: 'Dairy',
data: Array(50).fill(2).map((_, index) => ({
id: `dairy-${index}`,
name: `Dairy ${index}`,
price: `$${(Math.random() * 5).toFixed(2)}`
}))
}
];
const renderSectionHeader = ({ section: { title } }) => {
console.log("RENDERING SECTION", title);
return (
<View style={styles.sectionHeader}>
<Text style={styles.sectionHeaderText}>{title}</Text>
</View>
)
};
const renderItem = ({ item }) => {
console.log("RENERING ITEEEM", item);
return (
<View style={styles.item}>
<Text style={styles.itemName}>{item.name}</Text>
<Text style={styles.itemPrice}>{item.price}</Text>
</View>
)
};
return (
<SafeAreaView style={styles.container}>
<SectionList
sections={groceryData}
keyExtractor={(item) => item.id}
renderSectionHeader={renderSectionHeader}
renderItem={renderItem}
// Performance optimization props
initialNumToRender={10} // Initially render 10 items
maxToRenderPerBatch={10} // Render 10 items per batch
windowSize={21} // Render items just outside of visible area
// Optional: Improve performance with these additional props
removeClippedSubviews={true} // Unmount components when outside of window
updateCellsBatchingPeriod={50} // Increase time between batch renders
listKey="grocery-section-list"
// Optional scroll performance improvements
getItemLayout={(data, index) => ({
length: 50, // Assuming a fixed item height
offset: 50 * index,
index,
})}
/>
</SafeAreaView>
);
};
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: '#f5f5f5'
},
sectionHeader: {
backgroundColor: '#e1e1e1',
padding: 10,
},
sectionHeaderText: {
fontSize: 18,
fontWeight: 'bold',
},
item: {
flexDirection: 'row',
justifyContent: 'space-between',
padding: 15,
borderBottomWidth: 1,
borderBottomColor: '#ddd',
},
itemName: {
fontSize: 16,
},
itemPrice: {
fontSize: 16,
color: '#888',
}
});
export default GroceryInventory;
```
Here's the output logs.

### Steps to reproduce
Create a basic React Native project, with or without a framework. Copy and paste the above code, or just run the Expo Snack.
### React Native Version
0.76.5
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: macOS 15.0
CPU: (10) arm64 Apple M1 Pro
Memory: 132.58 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.12.2
path: /usr/local/bin/node
Yarn:
version: 1.22.22
path: /usr/local/bin/yarn
npm:
version: 10.5.0
path: /usr/local/bin/npm
Watchman:
version: 2024.12.02.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/lib/ruby/gems/3.1.0/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.19072.14.2412.12360217
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 3.1.5
path: /opt/homebrew/opt/[email protected]/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.3
wanted: latest
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
(NOBRIDGE) LOG RENDERING SECTION Vegetables
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-0", "name": "Vegetable 0", "price": "$2.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-1", "name": "Vegetable 1", "price": "$4.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-2", "name": "Vegetable 2", "price": "$1.36"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-3", "name": "Vegetable 3", "price": "$1.35"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-4", "name": "Vegetable 4", "price": "$4.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-5", "name": "Vegetable 5", "price": "$1.82"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-6", "name": "Vegetable 6", "price": "$4.90"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-7", "name": "Vegetable 7", "price": "$3.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-8", "name": "Vegetable 8", "price": "$1.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-9", "name": "Vegetable 9", "price": "$0.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-10", "name": "Vegetable 10", "price": "$0.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-11", "name": "Vegetable 11", "price": "$3.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-12", "name": "Vegetable 12", "price": "$1.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-13", "name": "Vegetable 13", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-14", "name": "Vegetable 14", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-15", "name": "Vegetable 15", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-16", "name": "Vegetable 16", "price": "$4.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-17", "name": "Vegetable 17", "price": "$4.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-18", "name": "Vegetable 18", "price": "$2.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-19", "name": "Vegetable 19", "price": "$0.38"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-20", "name": "Vegetable 20", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-21", "name": "Vegetable 21", "price": "$3.06"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-22", "name": "Vegetable 22", "price": "$4.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-23", "name": "Vegetable 23", "price": "$2.10"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-24", "name": "Vegetable 24", "price": "$0.88"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-25", "name": "Vegetable 25", "price": "$2.51"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-26", "name": "Vegetable 26", "price": "$3.97"}
(NOBRIDGE) LOG RENDERING SECTION Fruits
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-0", "name": "Fruit 0", "price": "$2.21"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-1", "name": "Fruit 1", "price": "$0.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-2", "name": "Fruit 2", "price": "$0.86"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-3", "name": "Fruit 3", "price": "$3.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-4", "name": "Fruit 4", "price": "$2.40"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-5", "name": "Fruit 5", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-6", "name": "Fruit 6", "price": "$4.87"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-7", "name": "Fruit 7", "price": "$4.58"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-8", "name": "Fruit 8", "price": "$3.64"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-9", "name": "Fruit 9", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-10", "name": "Fruit 10", "price": "$3.99"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-11", "name": "Fruit 11", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-12", "name": "Fruit 12", "price": "$3.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-13", "name": "Fruit 13", "price": "$3.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-14", "name": "Fruit 14", "price": "$4.76"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-15", "name": "Fruit 15", "price": "$1.54"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-16", "name": "Fruit 16", "price": "$1.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-17", "name": "Fruit 17", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-18", "name": "Fruit 18", "price": "$2.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-19", "name": "Fruit 19", "price": "$4.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-20", "name": "Fruit 20", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-21", "name": "Fruit 21", "price": "$2.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-22", "name": "Fruit 22", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-23", "name": "Fruit 23", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-24", "name": "Fruit 24", "price": "$4.11"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-25", "name": "Fruit 25", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-26", "name": "Fruit 26", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-27", "name": "Fruit 27", "price": "$0.41"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-28", "name": "Fruit 28", "price": "$1.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-29", "name": "Fruit 29", "price": "$3.70"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-30", "name": "Fruit 30", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-31", "name": "Fruit 31", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-32", "name": "Fruit 32", "price": "$4.60"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-33", "name": "Fruit 33", "price": "$2.93"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-34", "name": "Fruit 34", "price": "$2.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-35", "name": "Fruit 35", "price": "$1.59"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-36", "name": "Fruit 36", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-37", "name": "Fruit 37", "price": "$3.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-38", "name": "Fruit 38", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-39", "name": "Fruit 39", "price": "$0.63"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-40", "name": "Fruit 40", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-41", "name": "Fruit 41", "price": "$1.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-42", "name": "Fruit 42", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-43", "name": "Fruit 43", "price": "$1.20"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-44", "name": "Fruit 44", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-45", "name": "Fruit 45", "price": "$3.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-46", "name": "Fruit 46", "price": "$3.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-47", "name": "Fruit 47", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-48", "name": "Fruit 48", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-49", "name": "Fruit 49", "price": "$3.66"}
(NOBRIDGE) LOG RENDERING SECTION Vegetables
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-0", "name": "Vegetable 0", "price": "$2.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-1", "name": "Vegetable 1", "price": "$4.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-2", "name": "Vegetable 2", "price": "$1.36"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-3", "name": "Vegetable 3", "price": "$1.35"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-4", "name": "Vegetable 4", "price": "$4.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-5", "name": "Vegetable 5", "price": "$1.82"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-6", "name": "Vegetable 6", "price": "$4.90"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-7", "name": "Vegetable 7", "price": "$3.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-8", "name": "Vegetable 8", "price": "$1.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-9", "name": "Vegetable 9", "price": "$0.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-10", "name": "Vegetable 10", "price": "$0.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-11", "name": "Vegetable 11", "price": "$3.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-12", "name": "Vegetable 12", "price": "$1.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-13", "name": "Vegetable 13", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-14", "name": "Vegetable 14", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-15", "name": "Vegetable 15", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-16", "name": "Vegetable 16", "price": "$4.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-17", "name": "Vegetable 17", "price": "$4.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-18", "name": "Vegetable 18", "price": "$2.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-19", "name": "Vegetable 19", "price": "$0.38"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-20", "name": "Vegetable 20", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-21", "name": "Vegetable 21", "price": "$3.06"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-22", "name": "Vegetable 22", "price": "$4.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-23", "name": "Vegetable 23", "price": "$2.10"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-24", "name": "Vegetable 24", "price": "$0.88"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-25", "name": "Vegetable 25", "price": "$2.51"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-26", "name": "Vegetable 26", "price": "$3.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-27", "name": "Vegetable 27", "price": "$3.25"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-28", "name": "Vegetable 28", "price": "$2.62"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-29", "name": "Vegetable 29", "price": "$4.50"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-30", "name": "Vegetable 30", "price": "$2.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-31", "name": "Vegetable 31", "price": "$4.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-32", "name": "Vegetable 32", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-33", "name": "Vegetable 33", "price": "$1.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-34", "name": "Vegetable 34", "price": "$3.83"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-35", "name": "Vegetable 35", "price": "$4.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-36", "name": "Vegetable 36", "price": "$2.37"}
(NOBRIDGE) LOG RENDERING SECTION Fruits
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-0", "name": "Fruit 0", "price": "$2.21"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-1", "name": "Fruit 1", "price": "$0.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-2", "name": "Fruit 2", "price": "$0.86"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-3", "name": "Fruit 3", "price": "$3.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-4", "name": "Fruit 4", "price": "$2.40"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-5", "name": "Fruit 5", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-6", "name": "Fruit 6", "price": "$4.87"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-7", "name": "Fruit 7", "price": "$4.58"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-8", "name": "Fruit 8", "price": "$3.64"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-9", "name": "Fruit 9", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-10", "name": "Fruit 10", "price": "$3.99"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-11", "name": "Fruit 11", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-12", "name": "Fruit 12", "price": "$3.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-13", "name": "Fruit 13", "price": "$3.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-14", "name": "Fruit 14", "price": "$4.76"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-15", "name": "Fruit 15", "price": "$1.54"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-16", "name": "Fruit 16", "price": "$1.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-17", "name": "Fruit 17", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-18", "name": "Fruit 18", "price": "$2.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-19", "name": "Fruit 19", "price": "$4.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-20", "name": "Fruit 20", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-21", "name": "Fruit 21", "price": "$2.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-22", "name": "Fruit 22", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-23", "name": "Fruit 23", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-24", "name": "Fruit 24", "price": "$4.11"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-25", "name": "Fruit 25", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-26", "name": "Fruit 26", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-27", "name": "Fruit 27", "price": "$0.41"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-28", "name": "Fruit 28", "price": "$1.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-29", "name": "Fruit 29", "price": "$3.70"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-30", "name": "Fruit 30", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-31", "name": "Fruit 31", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-32", "name": "Fruit 32", "price": "$4.60"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-33", "name": "Fruit 33", "price": "$2.93"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-34", "name": "Fruit 34", "price": "$2.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-35", "name": "Fruit 35", "price": "$1.59"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-36", "name": "Fruit 36", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-37", "name": "Fruit 37", "price": "$3.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-38", "name": "Fruit 38", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-39", "name": "Fruit 39", "price": "$0.63"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-40", "name": "Fruit 40", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-41", "name": "Fruit 41", "price": "$1.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-42", "name": "Fruit 42", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-43", "name": "Fruit 43", "price": "$1.20"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-44", "name": "Fruit 44", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-45", "name": "Fruit 45", "price": "$3.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-46", "name": "Fruit 46", "price": "$3.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-47", "name": "Fruit 47", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-48", "name": "Fruit 48", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-49", "name": "Fruit 49", "price": "$3.66"}
(NOBRIDGE) LOG RENDERING SECTION Vegetables
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-0", "name": "Vegetable 0", "price": "$2.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-1", "name": "Vegetable 1", "price": "$4.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-2", "name": "Vegetable 2", "price": "$1.36"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-3", "name": "Vegetable 3", "price": "$1.35"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-4", "name": "Vegetable 4", "price": "$4.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-5", "name": "Vegetable 5", "price": "$1.82"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-6", "name": "Vegetable 6", "price": "$4.90"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-7", "name": "Vegetable 7", "price": "$3.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-8", "name": "Vegetable 8", "price": "$1.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-9", "name": "Vegetable 9", "price": "$0.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-10", "name": "Vegetable 10", "price": "$0.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-11", "name": "Vegetable 11", "price": "$3.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-12", "name": "Vegetable 12", "price": "$1.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-13", "name": "Vegetable 13", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-14", "name": "Vegetable 14", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-15", "name": "Vegetable 15", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-16", "name": "Vegetable 16", "price": "$4.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-17", "name": "Vegetable 17", "price": "$4.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-18", "name": "Vegetable 18", "price": "$2.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-19", "name": "Vegetable 19", "price": "$0.38"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-20", "name": "Vegetable 20", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-21", "name": "Vegetable 21", "price": "$3.06"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-22", "name": "Vegetable 22", "price": "$4.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-23", "name": "Vegetable 23", "price": "$2.10"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-24", "name": "Vegetable 24", "price": "$0.88"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-25", "name": "Vegetable 25", "price": "$2.51"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-26", "name": "Vegetable 26", "price": "$3.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-27", "name": "Vegetable 27", "price": "$3.25"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-28", "name": "Vegetable 28", "price": "$2.62"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-29", "name": "Vegetable 29", "price": "$4.50"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-30", "name": "Vegetable 30", "price": "$2.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-31", "name": "Vegetable 31", "price": "$4.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-32", "name": "Vegetable 32", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-33", "name": "Vegetable 33", "price": "$1.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-34", "name": "Vegetable 34", "price": "$3.83"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-35", "name": "Vegetable 35", "price": "$4.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-36", "name": "Vegetable 36", "price": "$2.37"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-37", "name": "Vegetable 37", "price": "$3.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-38", "name": "Vegetable 38", "price": "$0.08"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-39", "name": "Vegetable 39", "price": "$1.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-40", "name": "Vegetable 40", "price": "$3.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-41", "name": "Vegetable 41", "price": "$3.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-42", "name": "Vegetable 42", "price": "$1.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-43", "name": "Vegetable 43", "price": "$0.19"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-44", "name": "Vegetable 44", "price": "$1.06"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-45", "name": "Vegetable 45", "price": "$3.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-46", "name": "Vegetable 46", "price": "$1.13"}
(NOBRIDGE) LOG RENDERING SECTION Fruits
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-0", "name": "Fruit 0", "price": "$2.21"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-1", "name": "Fruit 1", "price": "$0.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-2", "name": "Fruit 2", "price": "$0.86"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-3", "name": "Fruit 3", "price": "$3.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-4", "name": "Fruit 4", "price": "$2.40"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-5", "name": "Fruit 5", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-6", "name": "Fruit 6", "price": "$4.87"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-7", "name": "Fruit 7", "price": "$4.58"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-8", "name": "Fruit 8", "price": "$3.64"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-9", "name": "Fruit 9", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-10", "name": "Fruit 10", "price": "$3.99"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-11", "name": "Fruit 11", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-12", "name": "Fruit 12", "price": "$3.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-13", "name": "Fruit 13", "price": "$3.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-14", "name": "Fruit 14", "price": "$4.76"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-15", "name": "Fruit 15", "price": "$1.54"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-16", "name": "Fruit 16", "price": "$1.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-17", "name": "Fruit 17", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-18", "name": "Fruit 18", "price": "$2.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-19", "name": "Fruit 19", "price": "$4.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-20", "name": "Fruit 20", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-21", "name": "Fruit 21", "price": "$2.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-22", "name": "Fruit 22", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-23", "name": "Fruit 23", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-24", "name": "Fruit 24", "price": "$4.11"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-25", "name": "Fruit 25", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-26", "name": "Fruit 26", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-27", "name": "Fruit 27", "price": "$0.41"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-28", "name": "Fruit 28", "price": "$1.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-29", "name": "Fruit 29", "price": "$3.70"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-30", "name": "Fruit 30", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-31", "name": "Fruit 31", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-32", "name": "Fruit 32", "price": "$4.60"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-33", "name": "Fruit 33", "price": "$2.93"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-34", "name": "Fruit 34", "price": "$2.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-35", "name": "Fruit 35", "price": "$1.59"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-36", "name": "Fruit 36", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-37", "name": "Fruit 37", "price": "$3.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-38", "name": "Fruit 38", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-39", "name": "Fruit 39", "price": "$0.63"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-40", "name": "Fruit 40", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-41", "name": "Fruit 41", "price": "$1.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-42", "name": "Fruit 42", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-43", "name": "Fruit 43", "price": "$1.20"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-44", "name": "Fruit 44", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-45", "name": "Fruit 45", "price": "$3.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-46", "name": "Fruit 46", "price": "$3.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-47", "name": "Fruit 47", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-48", "name": "Fruit 48", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-49", "name": "Fruit 49", "price": "$3.66"}
(NOBRIDGE) LOG RENDERING SECTION Vegetables
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-0", "name": "Vegetable 0", "price": "$2.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-1", "name": "Vegetable 1", "price": "$4.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-2", "name": "Vegetable 2", "price": "$1.36"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-3", "name": "Vegetable 3", "price": "$1.35"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-4", "name": "Vegetable 4", "price": "$4.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-5", "name": "Vegetable 5", "price": "$1.82"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-6", "name": "Vegetable 6", "price": "$4.90"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-7", "name": "Vegetable 7", "price": "$3.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-8", "name": "Vegetable 8", "price": "$1.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-9", "name": "Vegetable 9", "price": "$0.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-10", "name": "Vegetable 10", "price": "$0.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-11", "name": "Vegetable 11", "price": "$3.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-12", "name": "Vegetable 12", "price": "$1.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-13", "name": "Vegetable 13", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-14", "name": "Vegetable 14", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-15", "name": "Vegetable 15", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-16", "name": "Vegetable 16", "price": "$4.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-17", "name": "Vegetable 17", "price": "$4.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-18", "name": "Vegetable 18", "price": "$2.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-19", "name": "Vegetable 19", "price": "$0.38"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-20", "name": "Vegetable 20", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-21", "name": "Vegetable 21", "price": "$3.06"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-22", "name": "Vegetable 22", "price": "$4.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-23", "name": "Vegetable 23", "price": "$2.10"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-24", "name": "Vegetable 24", "price": "$0.88"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-25", "name": "Vegetable 25", "price": "$2.51"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-26", "name": "Vegetable 26", "price": "$3.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-27", "name": "Vegetable 27", "price": "$3.25"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-28", "name": "Vegetable 28", "price": "$2.62"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-29", "name": "Vegetable 29", "price": "$4.50"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-30", "name": "Vegetable 30", "price": "$2.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-31", "name": "Vegetable 31", "price": "$4.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-32", "name": "Vegetable 32", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-33", "name": "Vegetable 33", "price": "$1.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-34", "name": "Vegetable 34", "price": "$3.83"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-35", "name": "Vegetable 35", "price": "$4.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-36", "name": "Vegetable 36", "price": "$2.37"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-37", "name": "Vegetable 37", "price": "$3.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-38", "name": "Vegetable 38", "price": "$0.08"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-39", "name": "Vegetable 39", "price": "$1.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-40", "name": "Vegetable 40", "price": "$3.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-41", "name": "Vegetable 41", "price": "$3.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-42", "name": "Vegetable 42", "price": "$1.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-43", "name": "Vegetable 43", "price": "$0.19"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-44", "name": "Vegetable 44", "price": "$1.06"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-45", "name": "Vegetable 45", "price": "$3.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-46", "name": "Vegetable 46", "price": "$1.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-47", "name": "Vegetable 47", "price": "$0.68"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-48", "name": "Vegetable 48", "price": "$4.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-49", "name": "Vegetable 49", "price": "$0.48"}
(NOBRIDGE) LOG RENDERING SECTION Dairy
(NOBRIDGE) LOG RENERING ITEEEM {"id": "dairy-0", "name": "Dairy 0", "price": "$2.07"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "dairy-1", "name": "Dairy 1", "price": "$1.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "dairy-2", "name": "Dairy 2", "price": "$0.21"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "dairy-3", "name": "Dairy 3", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "dairy-4", "name": "Dairy 4", "price": "$3.86"}
(NOBRIDGE) LOG RENDERING SECTION Dairy
(NOBRIDGE) LOG RENDERING SECTION Dairy
(NOBRIDGE) LOG RENDERING SECTION Vegetables
(NOBRIDGE) LOG RENDERING SECTION Dairy
(NOBRIDGE) LOG RENDERING SECTION Vegetables
(NOBRIDGE) LOG RENDERING SECTION Dairy
(NOBRIDGE) LOG RENDERING SECTION Fruits
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-0", "name": "Fruit 0", "price": "$2.21"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-1", "name": "Fruit 1", "price": "$0.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-2", "name": "Fruit 2", "price": "$0.86"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-3", "name": "Fruit 3", "price": "$3.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-4", "name": "Fruit 4", "price": "$2.40"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-5", "name": "Fruit 5", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-6", "name": "Fruit 6", "price": "$4.87"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-7", "name": "Fruit 7", "price": "$4.58"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-8", "name": "Fruit 8", "price": "$3.64"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-9", "name": "Fruit 9", "price": "$2.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-10", "name": "Fruit 10", "price": "$3.99"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-11", "name": "Fruit 11", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-12", "name": "Fruit 12", "price": "$3.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-13", "name": "Fruit 13", "price": "$3.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-14", "name": "Fruit 14", "price": "$4.76"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-15", "name": "Fruit 15", "price": "$1.54"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-16", "name": "Fruit 16", "price": "$1.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-17", "name": "Fruit 17", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-18", "name": "Fruit 18", "price": "$2.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-19", "name": "Fruit 19", "price": "$4.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-20", "name": "Fruit 20", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-21", "name": "Fruit 21", "price": "$2.17"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-22", "name": "Fruit 22", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-23", "name": "Fruit 23", "price": "$4.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-24", "name": "Fruit 24", "price": "$4.11"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-25", "name": "Fruit 25", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-26", "name": "Fruit 26", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-27", "name": "Fruit 27", "price": "$0.41"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-28", "name": "Fruit 28", "price": "$1.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-29", "name": "Fruit 29", "price": "$3.70"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-30", "name": "Fruit 30", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-31", "name": "Fruit 31", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-32", "name": "Fruit 32", "price": "$4.60"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-33", "name": "Fruit 33", "price": "$2.93"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-34", "name": "Fruit 34", "price": "$2.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-35", "name": "Fruit 35", "price": "$1.59"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-36", "name": "Fruit 36", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-37", "name": "Fruit 37", "price": "$3.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-38", "name": "Fruit 38", "price": "$1.91"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-39", "name": "Fruit 39", "price": "$0.63"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-40", "name": "Fruit 40", "price": "$3.96"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-41", "name": "Fruit 41", "price": "$1.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-42", "name": "Fruit 42", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-43", "name": "Fruit 43", "price": "$1.20"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-44", "name": "Fruit 44", "price": "$2.81"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-45", "name": "Fruit 45", "price": "$3.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-46", "name": "Fruit 46", "price": "$3.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-47", "name": "Fruit 47", "price": "$4.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-48", "name": "Fruit 48", "price": "$0.13"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "fruit-49", "name": "Fruit 49", "price": "$3.66"}
(NOBRIDGE) LOG RENDERING SECTION Vegetables
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-0", "name": "Vegetable 0", "price": "$2.14"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-1", "name": "Vegetable 1", "price": "$4.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-2", "name": "Vegetable 2", "price": "$1.36"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-3", "name": "Vegetable 3", "price": "$1.35"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-4", "name": "Vegetable 4", "price": "$4.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-5", "name": "Vegetable 5", "price": "$1.82"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-6", "name": "Vegetable 6", "price": "$4.90"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-7", "name": "Vegetable 7", "price": "$3.02"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-8", "name": "Vegetable 8", "price": "$1.79"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-9", "name": "Vegetable 9", "price": "$0.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-10", "name": "Vegetable 10", "price": "$0.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-11", "name": "Vegetable 11", "price": "$3.23"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-12", "name": "Vegetable 12", "price": "$1.73"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-13", "name": "Vegetable 13", "price": "$4.34"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-14", "name": "Vegetable 14", "price": "$3.28"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-15", "name": "Vegetable 15", "price": "$4.03"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-16", "name": "Vegetable 16", "price": "$4.69"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-17", "name": "Vegetable 17", "price": "$4.05"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-18", "name": "Vegetable 18", "price": "$2.32"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-19", "name": "Vegetable 19", "price": "$0.38"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-20", "name": "Vegetable 20", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-21", "name": "Vegetable 21", "price": "$3.06"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-22", "name": "Vegetable 22", "price": "$4.67"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-23", "name": "Vegetable 23", "price": "$2.10"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-24", "name": "Vegetable 24", "price": "$0.88"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-25", "name": "Vegetable 25", "price": "$2.51"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-26", "name": "Vegetable 26", "price": "$3.97"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-27", "name": "Vegetable 27", "price": "$3.25"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-28", "name": "Vegetable 28", "price": "$2.62"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-29", "name": "Vegetable 29", "price": "$4.50"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-30", "name": "Vegetable 30", "price": "$2.33"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-31", "name": "Vegetable 31", "price": "$4.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-32", "name": "Vegetable 32", "price": "$1.44"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-33", "name": "Vegetable 33", "price": "$1.01"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-34", "name": "Vegetable 34", "price": "$3.83"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-35", "name": "Vegetable 35", "price": "$4.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-36", "name": "Vegetable 36", "price": "$2.37"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-37", "name": "Vegetable 37", "price": "$3.89"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-38", "name": "Vegetable 38", "price": "$0.08"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-39", "name": "Vegetable 39", "price": "$1.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-40", "name": "Vegetable 40", "price": "$3.78"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-41", "name": "Vegetable 41", "price": "$3.30"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-42", "name": "Vegetable 42", "price": "$1.12"}
(NOBRIDGE) LOG RENERING ITEEEM {"id": "vegetable-43", "name": "Vegetable 43", "price": "$0.19"}
```
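For orientation, a minimal sketch of the kind of `renderItem` / `renderSectionHeader` logging that would produce output of this shape; the component and labels below are illustrative assumptions, not code from the Snack reproducer:

```tsx
import React from 'react';
import { SectionList, Text } from 'react-native';

type Item = { id: string; name: string; price: string };
type Section = { title: string; data: Item[] };

// Illustrative grocery list: every renderItem call logs the item and every
// renderSectionHeader call logs the section title, so repeated lines in the
// console correspond to extra render passes of the same rows and headers.
export function GroceryList({ sections }: { sections: Section[] }) {
  return (
    <SectionList
      sections={sections}
      keyExtractor={(item) => item.id}
      renderItem={({ item }) => {
        console.log('RENDERING ITEM', item);
        return (
          <Text>
            {item.name} ({item.price})
          </Text>
        );
      }}
      renderSectionHeader={({ section }) => {
        console.log('RENDERING SECTION', section.title);
        return <Text>{section.title}</Text>;
      }}
    />
  );
}
```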
### Reproducer
https://snack.expo.dev/7cjOb_kzYa8_8gcuW9i9s
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,Component: SectionList | low | Major |
2,738,358,305 | storybook | [Bug]: global decorators are not updated in a composed storybook | ### Describe the bug
Adding [project level modes](https://www.chromatic.com/docs/modes/#project-level-modes) to let the user switch between light and dark mode works pretty well in a single storybook package.
When the user clicks the light/dark mode button, the content is re-rendered in the expected theme.
In a [package composition](https://storybook.js.org/docs/sharing/package-composition), however, the same is not true.
Clicking the button updates [the decorator](https://storybook.js.org/docs/writing-stories/decorators#global-decorators) in the URL (address bar), but the attribute in the HTML remains unchanged.
The user has to refresh/reload the page (F5) in order to see the story in the expected theme.
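For context, a minimal sketch of the kind of global theme decorator assumed here, using `withThemeByDataAttribute` from `@storybook/addon-themes`; the theme names and attribute are illustrative, not taken from the reproduction:

```ts
// .storybook/preview.ts - illustrative global decorator that toggles a data
// attribute on the preview document when the theme toolbar value changes.
import type { Preview } from '@storybook/web-components';
import { withThemeByDataAttribute } from '@storybook/addon-themes';

const preview: Preview = {
  decorators: [
    withThemeByDataAttribute({
      themes: { light: 'light', dark: 'dark' },
      defaultTheme: 'light',
      attributeName: 'data-theme',
    }),
  ],
};

export default preview;
```

In a single Storybook, clicking the toolbar re-runs this decorator and swaps the attribute without a reload; the report is that the same click inside a composed ref only updates the URL.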
### Reproduction link
https://stackblitz.com/edit/github-nk6zbbmz?file=.storybook-composed%2Fmain.js
### Reproduction steps
1. Go to the above link
2. Start the composed storybook: `yarn storybook`
3. Navigate to the composed storybook running on port 6006
4. Open the top Button Docs and switch between light and dark theme. It works.
5. Open the bottom Button Docs (under the composed ref) and switch between light and dark theme. Nothing happens unless you refresh the page.
> [!NOTE]
> Switching between light and dark mode in the composed stories, opened directly on port 6007, works as expected.
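For completeness, a hedged sketch of the kind of composition config assumed in `.storybook-composed/main.js`, pulling the inner Storybook in via `refs`; the ref name, title, and URL are illustrative:

```ts
// .storybook-composed/main.ts - illustrative composition config; the inner
// Storybook served on port 6007 is embedded into this instance as a ref.
import type { StorybookConfig } from '@storybook/web-components-vite';

const config: StorybookConfig = {
  stories: ['../stories/**/*.stories.@(js|ts)'],
  framework: { name: '@storybook/web-components-vite', options: {} },
  refs: {
    composed: {
      title: 'Composed package',
      url: 'http://localhost:6007',
    },
  },
};

export default config;
```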
### System
```bash
Storybook Environment Info:
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.2.3 - /usr/local/bin/npm <----- active
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/addon-themes: 8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/blocks: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/test: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/web-components: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/web-components-vite: ^8.5.0-alpha.21 => 8.5.0-alpha.21
storybook: ^8.5.0-alpha.21 => 8.5.0-alpha.21
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,738,379,105 | bitcoin | build: compiler flags in linker flags output | This is confusing and undermines the printed output, so it'd be good if it could be improved, or fixed entirely.
Seeing:
```bash
Linker flags .......................... <a bunch of compiler flags>, e.g. `-ftrivial-auto-var-init=pattern`
```
just doesn't make sense. | Build system | low | Major |
2,738,438,700 | next.js | Cannot enable incremental PPR in a page file that is marked with 'use cache' | ### Link to the code that reproduces this issue
https://github.com/focux/dynamicio-with-inc-ppr-repro
### To Reproduce
1. Start the app
2. Run `npm run build`
### Current vs. Expected behavior
When you try to build the app, it throws this error:
```
Error: x Only async functions are allowed to be exported in a "use cache" file.
| export const experimental_ppr = true;
```
I would expect the `use cache` directive to work with incremental PPR. It currently works when `ppr: true` is set (non-incremental).
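For context, a minimal sketch of a page file that triggers this error, assuming `next.config.ts` sets `experimental: { dynamicIO: true, ppr: 'incremental' }`; the route, URL, and variable names are illustrative, not taken from the reproducer:

```tsx
// app/products/page.tsx - combining "use cache" with the incremental PPR opt-in.
'use cache'

// Under ppr: 'incremental', a route opts in via this export, but the build
// rejects it because a "use cache" file may only export async functions.
export const experimental_ppr = true;

export default async function Page() {
  // Cached server work is fine on its own under "use cache".
  const res = await fetch('https://api.example.com/products');
  const products: string[] = await res.json();
  return (
    <ul>
      {products.map((name) => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}
```

Dropping either the directive or the `experimental_ppr` export should let the build pass, which suggests the two currently cannot be combined in the same page file.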
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.22
pnpm: 9.15.0
Relevant Packages:
next: 15.1.1-canary.2 // Latest available version is detected (15.1.1-canary.2).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO, Partial Prerendering (PPR)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | Partial Prerendering (PPR),dynamicIO | low | Critical |