# x/vuln: default govulncheck output is very long

Repo: go (issue id 2,808,297,031)

### govulncheck version
Go: go1.23.4
Scanner: [email protected]
DB: https://vuln.go.dev
DB updated: 2025-01-17 21:48:34 +0000 UTC
### Does this issue reproduce at the latest version of golang.org/x/vuln?
Yes
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/julieqiu/Library/Caches/go-build'
GOENV='/Users/julieqiu/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/julieqiu/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/julieqiu/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/julieqiu/bin/homebrew/Cellar/go/1.23.4/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/Users/julieqiu/bin/homebrew/Cellar/go/1.23.4/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='off'
GOTELEMETRYDIR='/Users/julieqiu/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/julieqiu/code/googleapis/generator/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/_j/n7nk6w3x6t3f0xjc0nz2chr80000gn/T/go-build2644197326=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Ran `govulncheck ./...` on https://github.com/googleapis/generator at commit `b242d1e499f3d874ac95ad5b5aabc75a690bcf91`
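As context for the length complaint below: govulncheck (v1.1+) can also emit machine-readable output via `-format json`, which a small script can summarize instead of printing every trace. The sketch below is a hypothetical workaround, not part of the original report; the field names (`finding`, `osv`, `trace`) are assumptions based on the published govulncheck JSON schema and may differ by scanner version.

```python
import json
from collections import defaultdict

def parse_json_stream(text):
    """Parse a concatenated stream of JSON objects.

    govulncheck -format json emits one JSON object per message,
    not a single array, so we decode repeatedly with raw_decode.
    """
    decoder = json.JSONDecoder()
    idx, msgs = 0, []
    while idx < len(text):
        while idx < len(text) and text[idx].isspace():
            idx += 1  # skip whitespace between objects
        if idx >= len(text):
            break
        obj, idx = decoder.raw_decode(text, idx)
        msgs.append(obj)
    return msgs

def summarize_findings(msgs):
    """Count call-stack traces per OSV id.

    Assumes messages may carry a top-level "finding" object with an
    "osv" id and a "trace" list (field names are assumptions; check
    them against your govulncheck version).
    """
    counts = defaultdict(int)
    for msg in msgs:
        finding = msg.get("finding")
        if finding and finding.get("trace"):
            counts[finding["osv"]] += 1
    return dict(counts)

if __name__ == "__main__":
    # Tiny mock stream standing in for `govulncheck -format json ./...`.
    sample = (
        '{"config": {"scan_level": "symbol"}}\n'
        '{"finding": {"osv": "GO-2025-3368", "trace": [{"module": "m"}]}}\n'
        '{"finding": {"osv": "GO-2025-3368", "trace": [{"module": "m"}]}}\n'
    )
    print(summarize_findings(parse_json_stream(sample)))
    # prints {'GO-2025-3368': 2}
```

In practice this would be piped, e.g. `govulncheck -format json ./... | python summarize.py` (command shape assumed), yielding one count per vulnerability rather than hundreds of trace lines.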
### What did you see happen?
```
=== Symbol Results ===
Vulnerability #1: GO-2025-3368
Argument Injection via the URL field in github.com/go-git/go-git
More info: https://pkg.go.dev/vuln/GO-2025-3368
Module: github.com/go-git/go-git/v5
Found in: github.com/go-git/go-git/[email protected]
Fixed in: github.com/go-git/go-git/[email protected]
Example traces found:
#1: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls binary.Read
#2: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls binary.ReadHash
#3: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls binary.ReadUint16
#4: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls binary.ReadUint32
#5: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls binary.ReadUntil
#6: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls binary.ReadVariableWidthInt
#7: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls binary.Write
#8: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls binary.WriteUint32
#9: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls binary.WriteUint64
#10: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which calls binary.init
#11: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls cache.BufferLRU.Get
#12: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls cache.BufferLRU.Put
#13: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls cache.NewBufferLRUDefault
#14: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen, which eventually calls cache.NewObjectLRUDefault
#15: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls cache.ObjectLRU.Get
#16: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls cache.ObjectLRU.Put
#17: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls cache.init
#18: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls capability.Capability.String
#19: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.DefaultAgent
#20: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.List.Decode
#21: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.List.Delete
#22: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.List.Get
#23: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.List.IsEmpty
#24: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.List.Set
#25: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.List.String
#26: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.List.Supports
#27: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls capability.NewList
#28: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls capability.init
#29: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls client.NewClient
#30: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls client.init
#31: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which eventually calls color.init
#32: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls common.DecodeUploadPackResponse
#33: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls common.NewClient
#34: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls common.client.NewUploadPackSession
#35: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls common.init
#36: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls common.session.AdvertisedReferencesContext
#37: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls common.session.Close
#38: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls common.session.UploadPack
#39: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls common.session.onError
#40: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Branch.Validate
#41: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Config.Marshal
#42: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Config.Section
#43: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Config.Validate
#44: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls config.Decoder.Decode
#45: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Encoder.Encode
#46: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls config.LoadConfig
#47: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls config.Modules.Unmarshal
#48: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls config.New
#49: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls config.NewConfig
#50: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls config.NewDecoder
#51: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.NewEncoder
#52: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls config.NewModules
#53: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls config.Options.Get
#54: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls config.Options.GetAll
#55: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls config.ReadConfig
#56: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RefSpec.Dst
#57: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RefSpec.IsExactSHA1
#58: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RefSpec.IsForceUpdate
#59: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RefSpec.IsWildcard
#60: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RefSpec.Match
#61: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RefSpec.Reverse
#62: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls config.RefSpec.Src
#63: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RefSpec.Validate
#64: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.RemoteConfig.Validate
#65: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Section.SetOption
#66: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls config.Subsection.Option
#67: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Subsection.RemoveOption
#68: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls config.Subsection.SetOption
#69: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls config.init
#70: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls config.init
#71: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which calls diff.init
#72: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which calls diff.init
#73: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.AddAlternate
#74: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.DotGit.Alternates
#75: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.DotGit.Config
#76: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.ConfigWriter
#77: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.DotGit.Fs
#78: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.DotGit.Index
#79: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls dotgit.DotGit.IndexWriter
#80: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.Initialize
#81: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls dotgit.DotGit.Module
#82: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.DotGit.NewObject
#83: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.NewObjectPack
#84: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.DotGit.Object
#85: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.DotGit.ObjectPack
#86: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.DotGit.ObjectPackIdx
#87: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.DotGit.ObjectPacks
#88: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.DotGit.Ref
#89: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.Refs
#90: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.RemoveRef
#91: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.DotGit.SetRef
#92: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.Shallow
#93: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.DotGit.ShallowWriter
#94: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.EncodedObject.Hash
#95: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.EncodedObject.Reader
#96: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.EncodedObject.SetSize
#97: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.EncodedObject.SetType
#98: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.EncodedObject.Size
#99: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.EncodedObject.Type
#100: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.EncodedObject.Writer
#101: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.NewEncodedObject
#102: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen, which eventually calls dotgit.NewRepositoryFilesystem
#103: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen, which eventually calls dotgit.NewWithOptions
#104: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.ObjectWriter.Close
#105: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.PackWriter.Close
#106: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.PackWriter.Write
#107: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.RepositoryFilesystem.Open
#108: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls dotgit.RepositoryFilesystem.Root
#109: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls dotgit.init
#110: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls dotgit.syncedReader.Read
#111: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls dotgit.syncedReader.Seek
#112: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls file.command.Close
#113: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls file.command.Kill
#114: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls file.command.Start
#115: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls file.command.StderrPipe
#116: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls file.command.StdinPipe
#117: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls file.command.StdoutPipe
#118: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls file.init
#119: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls file.runner.Command
#120: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filemode.FileMode.Bytes
#121: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls filemode.FileMode.IsRegular
#122: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls filemode.FileMode.String
#123: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filemode.FileMode.ToOSFileMode
#124: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filemode.New
#125: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls filemode.NewFromOSFileMode
#126: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which calls filemode.init
#127: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls filesystem.ConfigStorage.Config
#128: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.ConfigStorage.SetConfig
#129: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which calls filesystem.IndexStorage.Index
#130: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls filesystem.IndexStorage.SetIndex
#131: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.ModuleStorage.Module
#132: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.NewRootNode
#133: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen, which eventually calls filesystem.NewStorage
#134: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls filesystem.ObjectStorage.EncodedObject
#135: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls filesystem.ObjectStorage.HasEncodedObject
#136: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls filesystem.ObjectStorage.NewEncodedObject
#137: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.ObjectStorage.PackfileWriter
#138: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls filesystem.ObjectStorage.SetEncodedObject
#139: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls filesystem.PackfileWriter
#140: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.ReferenceStorage.CheckAndSetReference
#141: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.ReferenceStorage.IterReferences
#142: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls filesystem.ReferenceStorage.Reference
#143: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.ReferenceStorage.RemoveReference
#144: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls filesystem.ReferenceStorage.SetReference
#145: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.ShallowStorage.SetShallow
#146: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.ShallowStorage.Shallow
#147: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.Storage.AddAlternate
#148: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.Storage.Filesystem
#149: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls filesystem.Storage.Init
#150: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls filesystem.deltaObject.ActualSize
#151: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls filesystem.init
#152: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls filesystem.init
#153: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls filesystem.init
#154: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.node.Children
#155: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.node.Hash
#156: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.node.IsDir
#157: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.node.Name
#158: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.node.NumChildren
#159: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls filesystem.node.Skip
#160: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls frame.Frame.Drop
#161: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls frame.Frame.First
#162: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls frame.Frame.Len
#163: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls frame.New
#164: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls frame.byName.Len
#165: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls frame.byName.Less
#166: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls frame.byName.Swap
#167: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which eventually calls frame.init
#168: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls git.NoMatchingRefSpecError.Error
#169: internal/gitrepo/gitrepo.go:22:2: gitrepo.init calls os.init, which eventually calls git.NoMatchingRefSpecError.Is
#170: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone
#171: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen
#172: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject
#173: internal/gitrepo/gitrepo.go:145:37: gitrepo.PrintStatus calls git.Repository.Worktree
#174: internal/gitrepo/gitrepo.go:155:19: gitrepo.PrintStatus calls git.Status.IsClean
#175: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions
#176: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit
#177: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status
#178: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls git.command.Close
#179: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls git.command.Start
#180: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls git.command.StderrPipe
#181: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls git.command.StdinPipe
#182: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls git.command.StdoutPipe
#183: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls git.init
#184: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init
#185: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls git.runner.Command
#186: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which eventually calls git.sortableEntries.Len
#187: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which eventually calls git.sortableEntries.Less
#188: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which eventually calls git.sortableEntries.Swap
#189: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls gitignore.NewMatcher
#190: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls gitignore.ReadPatterns
#191: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls gitignore.init
#192: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls gitignore.matcher.Match
#193: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls hash.New
#194: internal/gitrepo/gitrepo.go:27:2: gitrepo.init calls plumbing.init, which calls hash.init
#195: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls http.Err.Error
#196: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls http.client.NewUploadPackSession
#197: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls http.init
#198: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls http.upSession.AdvertisedReferencesContext
#199: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls http.upSession.Close
#200: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls http.upSession.UploadPack
#201: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls idxfile.Decoder.Decode
#202: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls idxfile.Encoder.Encode
#203: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls idxfile.MemoryIndex.FindHash
#204: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls idxfile.MemoryIndex.FindOffset
#205: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls idxfile.NewDecoder
#206: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls idxfile.NewEncoder
#207: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls idxfile.NewMemoryIndex
#208: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls idxfile.Writer.Finished
#209: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls idxfile.Writer.Index
#210: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls idxfile.Writer.OnFooter
#211: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls idxfile.Writer.OnHeader
#212: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls idxfile.Writer.OnInflatedObjectContent
#213: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls idxfile.Writer.OnInflatedObjectHeader
#214: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls idxfile.init
#215: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls idxfile.objects.Len
#216: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls idxfile.objects.Less
#217: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls idxfile.objects.Swap
#218: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls index.Decoder.Decode
#219: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls index.Encoder.Encode
#220: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls index.Index.Add
#221: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls index.Index.Entry
#222: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls index.Index.Remove
#223: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls index.Index.SkipUnless
#224: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls index.NewDecoder
#225: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls index.NewEncoder
#226: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls index.NewRootNode
#227: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls index.byName.Len
#228: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls index.byName.Less
#229: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls index.byName.Swap
#230: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls index.init
#231: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls index.init
#232: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls index.node.Children
#233: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls index.node.Hash
#234: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls index.node.IsDir
#235: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls index.node.Name
#236: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls index.node.NumChildren
#237: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls index.node.Skip
#238: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls ioutil.CheckClose
#239: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.NewContextReader
#240: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.NewContextWriteCloser
#241: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.NewReadCloser
#242: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls ioutil.NewReadCloserWithCloser
#243: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.NewReaderOnError
#244: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls ioutil.NewReaderUsingReaderAt
#245: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.NewWriteCloserOnError
#246: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.NonEmptyReader
#247: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls ioutil.WriteNopCloser
#248: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which calls ioutil.init
#249: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls ioutil.readCloserCloser.Close
#250: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls ioutil.readerAtAsReader.Read
#251: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls ioutil.readerOnError.Read
#252: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.writeCloser.Close
#253: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls ioutil.writerOnError.Write
#254: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls memory.ConfigStorage.Config
#255: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls memory.ConfigStorage.SetConfig
#256: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which calls memory.IndexStorage.Index
#257: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls memory.IndexStorage.SetIndex
#258: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls memory.ModuleStorage.Module
#259: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls memory.ObjectStorage.AddAlternate
#260: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls memory.ObjectStorage.EncodedObject
#261: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls memory.ObjectStorage.HasEncodedObject
#262: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls memory.ObjectStorage.NewEncodedObject
#263: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls memory.ObjectStorage.SetEncodedObject
#264: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls memory.ReferenceStorage.CheckAndSetReference
#265: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls memory.ReferenceStorage.IterReferences
#266: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls memory.ReferenceStorage.Reference
#267: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls memory.ReferenceStorage.RemoveReference
#268: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls memory.ReferenceStorage.SetReference
#269: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls memory.ShallowStorage.SetShallow
#270: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls memory.ShallowStorage.Shallow
#271: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls memory.init
#272: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls merkletrie.Action.String
#273: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls merkletrie.Change.Action
#274: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls merkletrie.DiffTree
#275: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which calls merkletrie.init
#276: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls noder.Path.Children
#277: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls noder.Path.Compare
#278: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls noder.Path.Hash
#279: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls noder.Path.IsDir
#280: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls noder.Path.NumChildren
#281: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls noder.Path.Skip
#282: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls noder.Path.String
#283: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init, which calls noder.init
#284: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.Blob.Reader
#285: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.Blob.Type
#286: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls object.Commit.Encode
#287: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls object.Commit.EncodeWithoutSignature
#288: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls object.Commit.String
#289: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.Commit.Tree
#290: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.Commit.Type
#291: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.DecodeObject
#292: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.DecodeTag
#293: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which calls object.GetCommit
#294: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.NewCommitPreorderIter
#295: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.NewTreeRootNode
#296: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.Tag.Type
#297: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls object.Tree.Encode
#298: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.Tree.File
#299: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.Tree.FindEntry
#300: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.Tree.Type
#301: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls object.commitPreIterator.ForEach
#302: internal/gitrepo/gitrepo.go:28:2: gitrepo.init calls object.init
#303: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.treeNoder.Children
#304: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.treeNoder.Hash
#305: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.treeNoder.IsDir
#306: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.treeNoder.Name
#307: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.treeNoder.NumChildren
#308: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls object.treeNoder.Skip
#309: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls objfile.NewReader
#310: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls objfile.NewWriter
#311: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls objfile.Reader.Close
#312: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls objfile.Reader.Header
#313: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen, which eventually calls objfile.Reader.Read
#314: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls objfile.Writer.Close
#315: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls objfile.Writer.Hash
#316: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls objfile.Writer.Write
#317: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls objfile.Writer.WriteHeader
#318: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls objfile.init
#319: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.Error.AddDetails
#320: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls packfile.Error.Error
#321: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.FSObject.Hash
#322: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.FSObject.Reader
#323: internal/gitrepo/gitrepo.go:150:32: gitrepo.PrintStatus calls git.Worktree.Status, which eventually calls packfile.FSObject.Size
#324: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.FSObject.Type
#325: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.NewPackfile
#326: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.NewPackfileWithCache
#327: internal/gitrepo/gitrepo.go:70:20: gitrepo.Clone calls os.Getenv, which eventually calls packfile.ObjectsToPack
#328: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packfile.Packfile.Close
#329: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.Packfile.GetByOffset
#330: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.Packfile.Scanner
#331: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.Scanner.NextObject
#332: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls packfile.Scanner.SeekObjectHeader
#333: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packfile.UpdateObjectStorage
#334: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packfile.asyncReader
#335: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packfile.byTypeAndSize.Len
#336: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packfile.byTypeAndSize.Less
#337: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packfile.byTypeAndSize.Swap
#338: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls packfile.init
#339: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packfile.offsetWriter.Write
#340: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packfile.scannerReader.Read
#341: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen, which eventually calls packfile.scannerReader.ReadByte
#342: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.AdvRefs.AllReferences
#343: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.AdvRefs.Decode
#344: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.AdvRefs.IsEmpty
#345: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.DepthCommits.IsZero
#346: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.DepthReference.IsZero
#347: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.DepthSince.IsZero
#348: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls packp.ErrUnexpectedData.Error
#349: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.GitProtoRequest.Encode
#350: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.NewAdvRefs
#351: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.NewUploadPackRequestFromCapabilities
#352: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.NewUploadPackResponse
#353: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.UploadHaves.Encode
#354: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.UploadPackRequest.IsEmpty
#355: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packp.UploadPackResponse.Close
#356: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.UploadPackResponse.Decode
#357: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls packp.UploadPackResponse.Read
#358: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.UploadRequest.Encode
#359: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.UploadRequest.Validate
#360: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls packp.init
#361: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls packp.resolveHead
#362: internal/gitrepo/gitrepo.go:86:28: gitrepo.Open calls git.PlainOpen, which eventually calls path_util.ReplaceTildeWithHome
#363: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which calls path_util.init
#364: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls pktline.Encoder.Encode
#365: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls pktline.Encoder.EncodeString
#366: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls pktline.Encoder.Encodef
#367: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls pktline.Encoder.Flush
#368: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls pktline.ErrorLine.Error
#369: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls pktline.NewEncoder
#370: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls pktline.NewScanner
#371: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls pktline.Scanner.Bytes
#372: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls pktline.Scanner.Err
#373: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls pktline.Scanner.Scan
#374: internal/gitrepo/gitrepo.go:26:2: gitrepo.init calls git.init, which eventually calls pktline.init
#375: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.Hash.IsZero
#376: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.Hash.String
#377: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.HashSlice.Len
#378: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.HashSlice.Less
#379: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.HashSlice.Swap
#380: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.Hasher.Sum
#381: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls plumbing.HashesSort
#382: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls plumbing.IsHash
#383: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.MemoryObject.Close
#384: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.MemoryObject.Hash
#385: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.MemoryObject.Reader
#386: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls plumbing.MemoryObject.SetSize
#387: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls plumbing.MemoryObject.SetType
#388: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.MemoryObject.Size
#389: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls plumbing.MemoryObject.Type
#390: internal/generator/generator.go:53:14: generator.parseArgs calls fmt.Fprintf, which calls plumbing.MemoryObject.Write
#391: internal/gitrepo/gitrepo.go:101:31: gitrepo.AddAll calls git.Worktree.AddWithOptions, which eventually calls plumbing.MemoryObject.Writer
#392: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls plumbing.NewBranchReferenceName
#393: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls plumbing.NewHash
#394: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.NewHashReference
#395: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.NewHasher
#396: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls plumbing.NewPermanentError
#397: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.NewReferenceFromStrings
#398: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls plumbing.NewRemoteHEADReferenceName
#399: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls plumbing.NewSymbolicReference
#400: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls plumbing.NewUnexpectedError
#401: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.ObjectType.Bytes
#402: internal/gitrepo/gitrepo.go:74:29: gitrepo.Clone calls git.PlainClone, which eventually calls plumbing.ObjectType.IsDelta
#403: internal/gitrepo/gitrepo.go:139:23: gitrepo.Commit calls fmt.Sprint, which eventually calls plumbing.ObjectType.String
#404: internal/gitrepo/gitrepo.go:122:32: gitrepo.Commit calls git.Worktree.Commit, which eventually calls plumbing.ObjectType.Valid
#405: internal/gitrepo/gitrepo.go:135:37: gitrepo.Commit calls git.Repository.CommitObject, which eventually calls plumbing.ParseObjectType
...
```
The output exceeded the 65536 character limit on this field, but you can reproduce with `git clone https://github.com/googleapis/generator.git` and `git checkout b242d1e499f3d874ac95ad5b5aabc75a690bcf91`.
### What did you expect to see?
The output was very long, even though the final action I took was just `go get github.com/go-git/go-git/[email protected] && go mod tidy`. I think it would be more helpful if:
1. The output were grouped by module at the top level, as modules are the primary unit of upgrades.
2. The number of traces displayed by default were limited. | vulncheck or vulndb | low | Critical |
2,808,307,809 | TypeScript | Type inference fails to recognize never when it's the result of a template tag function | ### 🔎 Search Terms
tagged template literal never type
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about the never type and about template literals.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/GYVwdgxgLglg9mABMAhjANgFQKYFsAO6KU2AFAM5QBOMYA5uQFyI4FEkDK1tDAglVRQBPADSIAdJIBuKdCGxNEKMEIDaAXQCUzMNinYqiAN4AoROcRQAFlTgB3RLocBRAXCqkA5KgytCxbCV0O2FySxt7ck9NAG4TAF8TE1BIWARkNHQAMXBoeDBSbUc9A2MzC2tbBydEV1sPb0yc1PygkKEwysjouMSTAHp+xABhdypsaHQhRHGIODowGAAvBXDAlLz0iFl0RAATOFWwOCgZ7CgQKjBk3LSkKCF8bGGrCYBrA1GBCagpgBkYB9yJgrDByKQUFQ6AB+ZiUGj0IrwnhlCyIGDARAQqGaVFoizjC5XJRQuJoxJonzZW75Qq9JKDRAASUgYx+U2QRAYazORPuj0Cdhg1mYzU2SCIEDeYWwYD2KMJlyQlACuFlp2Ue15SssAv2hzCx1OtAgcj2gU84HNwFo2D2nnEpAATABmABsbs0Nxa6QeTxe7wMLLm30mQgAIgaAHInAEfEFg7EwuHcREphF0PHmDFYyF0XGmfEE846vNkiwUixUvzsbAAAwARig9nX6QMhl9Zr9puRsKtrMQeb25nLtcSVSQ1WBjWFwOMUBArCgG+hsN7xbr-a8pZ82WHIwoY1A49gE+C87DEMi01fU5nCxYc0mC+Ui4riWXX4hK+YqWK7nSX7vkgnivOg6BwJ4baMlkmSdHAV59uEg7WIEw4IFqwG3qq6rorOYDzouy6ruudybs824fFQwZ7t2J7AqC55Qpe175umKIPtmmLPlm+JYZ+5JftWeD+CQjbNq2QElsSoHYOBkH0kAA
### 💻 Code
```ts
function failTemplate(strings: TemplateStringsArray, ...values: any[]): never {
  throw new Error('failTemplate always throws');
}

function failFunction(): never {
  throw new Error('failFunction always throws');
}

// Correctly recognizes the function call does not return
function typeCheckerCorrectlyLikesThis(arg?: string): string {
  if (arg) {
    return arg;
  }
  failFunction();
}

// Incorrectly flags the return type with: Function lacks ending return statement and return type does not include 'undefined'.(2366)
function typeCheckerIncorrectlyDoesNotLikeThis(arg?: string): string {
  if (arg) {
    return arg;
  }
  failTemplate`bad`;
}

// Correctly sees that the second return statement is unreachable
function typeCheckerCorrectlyDoesNotLikeThis(arg?: string): string {
  if (arg) {
    return arg;
  }
  failFunction();
  return 'hello';
}

// Fails to see that the second return statement is unreachable
function typeCheckerIncorrectlyLikesThis(arg?: string): string {
  if (arg) {
    return arg;
  }
  failTemplate`bad`;
  return 'hello';
}
```
### 🙁 Actual behavior
The invocation of a template tag function (via a tagged template literal) that is declared to return type `never` terminates the flow of control of the containing function, but the type checker does not recognize this and complains as if execution had continued beyond the invocation.
It appears, to my naive sensibilities, that the use of a tagged template literal is not recognized as a function invocation even though it is one. However, if the return type of the tag function is, say, `number`, the type inference engine _does_ correctly recognize this, so it seems like the issue is specifically with the flow control logic associated with `never`.
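For contrast, a small sketch of the working non-`never` case mentioned above (the `numTag` name is hypothetical): a tag function returning `number` is inferred correctly, which suggests the gap is specific to `never` in control-flow analysis.

```typescript
// Hypothetical tag returning a non-never type: inference works as expected.
function numTag(strings: TemplateStringsArray, ...values: unknown[]): number {
  return strings.length;
}

const n = numTag`bad`; // correctly inferred as number (here: 1)
```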
### 🙂 Expected behavior
A template literal using a `never` returning template tag function should be recognized the same as an ordinary invocation of a `never` returning function.
### Additional information about the issue
_No response_ | Suggestion,Awaiting More Feedback | low | Critical |
2,808,356,388 | node | Intl.DateTimeFormat.format() output for fi locale no longer matches chrome or latest ICU data (v22) | ### Version
22.13
### Platform
```text
MacOS, Windows, and Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
v22 sandbox (incorrect month format): https://codesandbox.io/p/devbox/node22-finnish-datetimeformat-bug-vt3jx6
```sh
en: Jan 24
fi: 24.1.
```
v20 sandbox (correct): https://codesandbox.io/p/devbox/node22-finnish-datetimeformat-bug-v20-f8zvvg
```sh
en: Jan 24
fi: 24. tammik.
```
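A minimal inline repro, for convenience (the exact `DateTimeFormat` options used in the sandboxes are assumed here):

```javascript
// Assumed options: short month + numeric day, matching the sandbox output above.
const date = new Date(2025, 0, 24); // January 24
const options = { month: "short", day: "numeric" };

for (const locale of ["en", "fi"]) {
  console.log(`${locale}: ${new Intl.DateTimeFormat(locale, options).format(date)}`);
}
// Per the report: Node 20 prints "fi: 24. tammik."; Node 22 prints "fi: 24.1.".
```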
### How often does it reproduce? Is there a required condition?
Consistently
### What is the expected behavior? Why is that the expected behavior?
FI `short` month format should match ICU data. Finnish month in `short` format should be the abbreviated version of the month (e.g. `tammik.`)
### What do you see instead?
It currently is outputting the same as `numeric` format. e.g. `1.`
### Additional information
I noticed that the ICU data did change in the November update for the Finnish locale, but not in this way. This only seems to be impacting Finnish from what I can tell, and only the `short` month format. Furthermore, this is working fine in v20. It only presented this regression after updating to v22.
 | v8 engine | low | Critical |
2,808,360,780 | pytorch | torch crashes on ubuntu:24.04 during SDPA-CuDNN test | ### 🐛 Describe the bug
Please see issue https://github.com/pytorch/pytorch/issues/138340. This happens with the cu118, cu124, and cu126 binaries.
Test:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from dataclasses import dataclass
from torch.nn.attention import bias, sdpa_kernel, SDPBackend


@dataclass
class Config:
    n_embd: int = 512
    n_head: int = 8
    n_layer: int = 6
    n_ctx: int = 2048
    bias: bool = False


class CausalSelfAttention(nn.Module):
    def __init__(self, config):
        super().__init__()
        assert config.n_embd % config.n_head == 0
        # key, query, value projections for all heads, but in a batch
        self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.bias)
        # output projection
        self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias)
        self.n_head = config.n_head
        self.n_embd = config.n_embd

    def forward(self, x):
        B, T, C = x.size()  # batch size, sequence length, embedding dimensionality (n_embd)
        q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)  # (B, nh, T, hs)
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)  # (B, nh, T, hs)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)  # (B, nh, T, hs)
        y = F.scaled_dot_product_attention(q, k, v, attn_mask=None, is_causal=True)
        # HERE, WE NEED THIS CONTIGUOUS TO BE A NO-OP
        # y = y.transpose(1, 2).contiguous().view(B, T, C)
        y = y.transpose(1, 2).view(B, T, C)
        y = self.c_proj(y)
        return y


def test_attention(backend: SDPBackend):
    config = Config()
    Attention = CausalSelfAttention(config).to("cuda", dtype=torch.float16)
    sample_input = torch.randn(1, 2048, config.n_embd, device="cuda", dtype=torch.float16)
    with sdpa_kernel(backend):
        try:
            out = Attention(sample_input)
            print("ALL GOOD")
        except RuntimeError as e:
            print("❗ NOT GOOD ❗")
            print(e)


if __name__ == "__main__":
    width = 100
    print("SDPA-Flash".center(width, "-"))
    test_attention(SDPBackend.FLASH_ATTENTION)
    print("SDPA-CuDNN".center(width, "-"))
    test_attention(SDPBackend.CUDNN_ATTENTION)
```
Observing a crash like this:
```
.venv) root@b4ffe5c8ac8c:/pytorch/.ci/pytorch/smoke_test# python3 cudnn_test.py
---------------------------------------------SDPA-Flash---------------------------------------------
ALL GOOD
---------------------------------------------SDPA-CuDNN---------------------------------------------
Could not load library libnvrtc.so.12. Error: libnvrtc.so.12: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so. Error: libnvrtc.so: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so.12. Error: libnvrtc.so.12: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so. Error: libnvrtc.so: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so.12. Error: libnvrtc.so.12: cannot open shared object file: No such file or directory
Could not load library libnvrtc.so. Error: libnvrtc.so: cannot open shared object file: No such file or directory
❗ NOT GOOD ❗
cuDNN Frontend error: No valid engine configs for Matmul_MUL_GEN_INDEX_GEN_INDEX_CMP_GE_BINARY_SELECT_Reduction_SUB_EXP_Reduction_LOG_ADD_DIV_Matmul_
```
### Versions
2.6.0
cc @csarofeen @ptrblck @xwang233 @eqy | module: cudnn,triaged,module: sdpa | low | Critical |
2,808,372,299 | flutter | Flutter Web: Text <input> form elements that collect information about the user require the autocomplete attribute (Accessibility) | ### Use case
Text `<input>` form elements that collect information about the user require the `autocomplete` attribute. The `autocomplete` attribute lets the browser pattern-match against a list of values stored locally in the browser and supply the appropriate corresponding value when the input is programmatically tagged.
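For reference, this is what the pattern looks like in plain HTML (a minimal example; the token value comes from the WHATWG list linked under the proposal):

```html
<!-- The autocomplete token tells the browser which locally stored value to offer. -->
<input type="email" name="email" autocomplete="email">
```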
### Proposal
Add all values for the `autocomplete` attribute. A full list can be found at [HTML autocomplete attribute specification](https://html.spec.whatwg.org/multipage/form-control-infrastructure.html#autofilling-form-controls:-the-autocomplete-attribute). | a: text input,c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,808,381,073 | flutter | Flutter Web: Radio buttons require proper keyboard interactions (Accessibility) | ### Use case
Radio buttons (contained in a radiogroup) do not have the expected keyboard interactions.
### Proposal
Add the following keyboard interactions for radio buttons contained inside of a radiogroup:
- **Tab** and **Shift + Tab**: Move focus into and out of the radio group. When focus moves into a radio group:
  - If a radio button is checked, focus is set on the checked button.
  - If none of the radio buttons are checked, focus is set on the first radio button in the group.
- **Space**: checks the focused radio button if it is not already checked.
- **Right Arrow** and **Down Arrow**: move focus to the next radio button in the group, uncheck the previously focused button, and check the newly focused button. If focus is on the last button, focus moves to the first button.
- **Left Arrow** and **Up Arrow**: move focus to the previous radio button in the group, uncheck the previously focused button, and check the newly focused button. If focus is on the first button, focus moves to the last button. | a: text input,c: new feature,framework,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,808,386,818 | ollama | Please separate deepseek-r1 from deepseek-r1-Distill! | Please separate deepseek-r1 from deepseek-r1-Distill!
These are not the same model, and their architectures are different!
The way the model is presented on the official ollama website completely obscures this difference! | model request | low | Major |
2,808,396,754 | PowerToys | Copilot and PowerToys Run have the same keyboard shortcut, so maybe add Copilot as a plugin | ### Description of the new feature / enhancement
I noticed that Copilot and PowerToys Run both use the same keyboard shortcut, Alt + Space, so it makes sense to add Copilot as a plugin. For example, anything typed after a '.' keyword could become a Copilot prompt; that way both can be used with the same hotkey.
### Scenario when this would be used?
If the user enables the option to use Alt + Space hotkey for Copilot, then it will override PowerToys Run, so this would enable both to be used
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,808,398,740 | ui | [bug]: PostCSS and Tailwind new configuration: components no longer work / Outdated documentation | ### Describe the bug
The documentation no longer works. I believe it has to do with the new version of Tailwind, or it might just be an error in the documentation. I tried multiple times and followed tutorials to see if I was messing something up, but the documentation needs an update because of how Tailwind now works with PostCSS:
[plugin:vite:css] [postcss] It looks like you're trying to use tailwindcss directly as a PostCSS plugin. The PostCSS plugin has moved to a separate package, so to continue using Tailwind CSS with PostCSS you'll need to install @tailwindcss/postcss and update your PostCSS configuration
### Affected component/components
General function of PostCSS with Tailwind
### How to reproduce
Follow Vite's documentation to run shadCN, and it will say the following error:
[plugin:vite:css] [postcss] It looks like you're trying to use tailwindcss directly as a PostCSS plugin. The PostCSS plugin has moved to a separate package, so to continue using Tailwind CSS with PostCSS you'll need to install @tailwindcss/postcss and update your PostCSS configuration
### Codesandbox/StackBlitz link
Can't reproduce in Codesandbox because it doesn't let me use the terminal
### Logs
```bash
```
### System Info
```bash
Browser: Opera, using vscode,
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug,tailwind | low | Critical |
2,808,400,931 | flutter | Flutter Web: Landmark roles are required to identify regions of a page (Accessibility) | ### Use case
WAI-ARIA landmark roles provide a powerful way to identify the organization and structure of a web page. By classifying and labelling sections of a page, they enable structural information that is conveyed visually through layout to be represented programmatically. Screen readers exploit landmark roles to provide keyboard navigation to important sections of a page. Landmark regions can also be used as targets for "skip links" and by browser extensions to enhance keyboard navigation.
### Proposal
Add the following landmark roles to the available semantic roles in existing widget libraries:
- `banner` (when in context of the `body` element)
- `complementary`
- `contentinfo` (when in context of the `body` element)
- `main`
- `navigation`
- `region` (when it has an accessible name using `aria-labelledby` or `aria-label`)
| c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,808,407,177 | tailwindcss | Variant doesn't expand when inside a media query | **What version of Tailwind CSS are you using?**
v4.0.0
**What build tool (or framework if it abstracts the build tool) are you using?**
Astro 5.1.9
@tailwindcss/vite 4.0.0
**What version of Node.js are you using?**
v23.4.0
**What browser are you using?**
Chrome
**What operating system are you using?**
Windows
**Reproduction URL**
https://play.tailwindcss.com/11WjRHqAY2?file=css
**Describe your issue**
`@variant dark` will expand if it wraps `@media`, but not if it's inside it.
I'd expect this
```css
@media (prefers-reduced-motion: reduce) {
@variant dark {
a {
color: red;
}
}
}
```
to become this
```css
@media (prefers-reduced-motion: reduce) {
&:where(.dark, .dark *) {
a {
color: red;
}
}
}
```
but `@variant dark` remains in the output css.
| bug,v4 | low | Minor |
2,808,407,431 | vscode | Inline Color Picker should be enabled/disabled depending on context (file type) | Similar Issues:
#233193
#235479
A `#XXXXXX` annotation does not always mean an html color, as obviously `#` just means number or id in natural languages, and in programming it can refer to an issue id or line number.
When not in a web development context, displaying such inline color pickers can be quite annoying - not only because it shows a color block, but also because it makes cursor/selection behavior unintuitive. But it can still be helpful when a user happens to need to hack on some web stuff.
So we need a feature to **enable/disable it depending on context; an easy way to guess the context is the file type**. For example:
- html/css/php, you may always want this feature.
- js: **_likely_** web development, but the user may not be interested in colors; there are also Node.js contexts, which are not web development.
- csharp/python/java/go: they **can** be used in web development
- c/c++/...: unlikely but possible
- scm/changelogs: no matter whether it's web development, a `#XXXXXX` likely does not mean an html color here.
It's kind of complex to design a more intelligent way to guess the probability of _whether a `#XXXXXX` means an html color_, but at least one approach that better satisfies user needs is:
**Disable** the Inline Color Picker **globally** and enable it for html/css/php/... file types (and let the user add file types). | editor-color-picker | low | Minor |
2,808,408,362 | flutter | Flutter Web: aria-required must be an available property on form inputs (Accessibility) | ### Use case
The WAI-ARIA property `aria-required` must be available on all Flutter form input element types/widgets when user input is required on the element before a form may be submitted.
### Proposal
Add the WAI-ARIA property `aria-required` to the list of semantics flags on the following element types/widgets. Its value is either `true` or `false`.
- `<input>` (all types)
- `checkbox`
- `combobox`
- `gridcell`
- `listbox`
- `radiogroup`
- `spinbutton`
- `textbox`
- `tree` | c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,808,413,919 | flutter | Flutter Web: aria-describedby must be an available property on all component types (Accessibility) | ### Use case
The WAI-ARIA property `aria-describedby` must be available on all Flutter component types/widgets to provide more information about an element that the user might need.
### Proposal
Add the WAI-ARIA property `aria-describedby` to the list of semantics flags on all Flutter component types/widgets. Its value must only be the ID of another text string element on the page (text strings are not a valid value for `aria-describedby`). Multiple IDs can be listed (each separated by a single space) in order to form a single accessible description.
Examples:
```
aria-describedby="text1"
aria-describedby="text1 text2 text3"
``` | c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,808,425,569 | flutter | Flutter Web: aria-expanded must be an available property on elements that toggle visibility of content (Accessibility) | ### Use case
The `aria-expanded` attribute is applied to a focusable, interactive element that toggles visibility of content in another element. For example, show/hide or "accordion" elements, `combobox` elements, `listbox` elements, and more.
### Proposal
Add the WAI-ARIA property `aria-expanded` to the list of semantics flags on the following element types/widgets. Its value is `true` when the content is expanded/visible, and `false` when the content is collapsed/not visible.
- `application`
- `button`
- `checkbox`
- `combobox`
- `gridcell`
- `link`
- `listbox`
- `menuitem`
- `row`
- `rowheader`
- `tab`
- `treeitem`
| c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,808,430,951 | rust | Inconsistent `fn` casting behavior in `if`-`else` branches for tuple and struct | I tried this code:
#### Version 1 (Compiles successfully):
```rust
use std::hint::black_box;
use std::any::type_name_of_val;
fn foo() {}
fn bar() {}
fn main() {
let x = if black_box(true) { foo } else { bar };
println!("{}", type_name_of_val(&x)); // observing `fn` items being cast to `fn` pointers, output: `fn()`
}
```
#### Version 2 (Fails to compile, tuple)
```rust
use std::hint::black_box;
use std::any::type_name_of_val;
fn foo() {}
fn bar() {}
fn main() {
let x = if black_box(true) { (foo,) } else { (bar,) }; // mismatched types, no casting observed
println!("{}", type_name_of_val(&x.0));
}
```
#### Version 3 (Fails to compile, struct)
```rust
use std::any::type_name_of_val;
use std::hint::black_box;
fn foo() {}
fn bar() {}
struct F<T>
where
T: Fn(),
{
inner: T,
}
fn main() {
let x = if black_box(true) {
F { inner: foo }
} else {
F { inner: bar }
}; // mismatched types, no casting observed
println!("{}", type_name_of_val(&x.inner));
}
```
I expected to see this happen:
In all versions, I expected the code to either compile successfully or fail with the same type mismatch error.
Instead, this happened:
Version 1 compiles successfully, while versions 2 and 3 fail with the same error `expected fn item, found a different fn item`. Explicitly specifying the `fn` pointer type for `x` resolves the issue. By the way, the use of the `impl_trait_in_bindings` feature does not affect this behavior.
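To make the workaround concrete, here is a minimal stable-Rust sketch (names `pick`, `foo`, `bar` are illustrative, not from the compiler): casting each `fn` item to a `fn` pointer, or equivalently annotating the binding with a `fn` pointer type, unifies the branch types so the tuple version type-checks.

```rust
fn foo() -> i32 { 1 }
fn bar() -> i32 { 2 }

// Declaring the return type as a tuple of fn *pointers* forces both
// distinct fn-item types to coerce, so the branches now agree.
fn pick(cond: bool) -> (fn() -> i32,) {
    if cond { (foo as fn() -> i32,) } else { (bar as fn() -> i32,) }
}

fn main() {
    let x = pick(true);
    assert_eq!((x.0)(), 1);
    println!("ok");
}
```

The same explicit cast applied to the struct fields makes version 3 compile as well.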
### Meta
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (99768c80a 2025-01-23)
binary: rustc
commit-hash: 99768c80a1c094a5cfc3b25a04e7a99de7210eae
commit-date: 2025-01-23
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.7
```
<details><summary>
`RUST_BACKTRACE=1` doesn't provide additional information for any of the versions.
Version 1 only provides `Finished ...` and no other information.
</summary>
<p>
**Version 2**
```
Compiling rust-test v0.1.0 (.../rust-test)
error[E0308]: `if` and `else` have incompatible types
--> src/main.rs:8:50
|
8 | let x = if black_box(true) { (foo,) } else { (bar,) }; // mismatched types, no casting observed
| ------ ^^^^^^ expected `(fn() {foo},)`, found `(fn() {bar},)`
| |
| expected because of this
|
= note: expected tuple `(fn() {foo},)`
found tuple `(fn() {bar},)`
For more information about this error, try `rustc --explain E0308`.
error: could not compile `rust-test` (bin "rust-test") due to 1 previous error
```
**Version 3**
```
Compiling rust-test v0.1.0 (.../rust-test)
error[E0308]: `if` and `else` have incompatible types
--> src/main.rs:18:9
|
15 | let x = if black_box(true) {
| _____________-
16 | | F { inner: foo }
| | ---------------- expected because of this
17 | | } else {
18 | | F { inner: bar }
| | ^^^^^^^^^^^^^^^^ expected `F<fn() {foo}>`, found `F<fn() {bar}>`
19 | | }; // mismatched types, no casting observed
| |_____- `if` and `else` have incompatible types
|
= note: expected struct `F<fn() {foo}>`
found struct `F<fn() {bar}>`
For more information about this error, try `rustc --explain E0308`.
error: could not compile `rust-test` (bin "rust-test") due to 1 previous error
```
</p>
</details> | T-compiler,C-bug,needs-triage | low | Critical |
2,808,431,291 | flutter | Flutter Web: aria-invalid must be an available property on form inputs (Accessibility) | ### Use case
The WAI-ARIA property `aria-invalid` must be available on all Flutter form input element types/widgets when the value is computed to be invalid or out-of-range.
### Proposal
Add the WAI-ARIA property `aria-invalid` to the list of semantics flags on the following element types/widgets. Its value is `true` when the entered value is invalid, and `false` when the entered value is valid.
- `<input>` (all types)
- `application`
- `checkbox`
- `combobox`
- `gridcell`
- `listbox`
- `radiogroup`
- `slider`
- `spinbutton`
- `textbox`
- `tree`
| c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,808,493,928 | yt-dlp | LBRY/ODYSEE: downloading non-video content (zip file) fails with error "This stream is not live" | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
I attempted to download [a non-video resource](https://odysee.com/@TheGatalog-Accessories:e/RedYankeeShieldHolster:2) from odysee.com (a zip file) but was unable to do so as yt-dlp died with the error "This stream is not live". Following output was produced with the latest yt-dlp.sh script cloned from master. I confirmed that this version is able to download a video from odysee.com without difficulty.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://odysee.com/@TheGatalog-Accessories:e/RedYankeeShieldHolster:2']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [c8541f8b1] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: ccda63934
[debug] Python 3.11.10 (CPython x86_64 64bit) - Linux-6.11.2-1-default-x86_64-with-glibc2.40 (OpenSSL 3.2.3 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 6.1.1 (fdk,setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.0, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1844 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[lbry] Extracting URL: https://odysee.com/@TheGatalog-Accessories:e/RedYankeeShieldHolster:2
[lbry] @TheGatalog-Accessories#e/RedYankeeShieldHolster#2: Downloading stream JSON metadata
[lbry] e56e732ba39db09b4b4ad43cd399732cf226591f: Downloading livestream JSON metadata
ERROR: [lbry] e56e732ba39db09b4b4ad43cd399732cf226591f: This stream is not live
File "/home/me/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/yt-dlp/yt_dlp/extractor/lbry.py", line 354, in _real_extract
self.raise_no_formats('This stream is not live', True, claim_id)
File "/home/me/yt-dlp/yt_dlp/extractor/common.py", line 1276, in raise_no_formats
raise ExtractorError(msg, expected=expected, video_id=video_id)
``` | site-bug,triage | low | Critical |
2,808,511,867 | tensorflow | LiteRT build for Android failing | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
v2.17.0
### Custom code
No
### OS platform and distribution
Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.10
### Bazel version
bazel 6.5.0
### GCC/compiler version
NDK26,NDK28
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Following this link https://ai.google.dev/edge/litert/build/android for building libtensorflowlite.so for my Android JNI project, but it's failing every time with the below error snapshot
**Initial Steps**
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v2.17.0
./configure (for Android environment)
**Below is the content of .tf_configure.bazelrc in my build environment**
build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
build --action_env PYTHON_LIB_PATH="/usr/lib/python3.10/dist-packages"
build --python_path="/usr/bin/python3"
build --action_env CLANG_COMPILER_PATH="/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-17"
build --repo_env=CC=/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-17
build --repo_env=BAZEL_COMPILER=/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-17
build --copt=-Wno-gnu-offsetof-extensions
build:opt --copt=-Wno-sign-compare
build:opt --host_copt=-Wno-sign-compare
build --action_env ANDROID_NDK_HOME="/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/"
build --action_env ANDROID_NDK_VERSION="26"
build --action_env ANDROID_NDK_API_LEVEL="21"
build --action_env ANDROID_BUILD_TOOLS_VERSION="35.0.0"
build --action_env ANDROID_SDK_API_LEVEL="35"
build --action_env ANDROID_SDK_HOME="/media/avaish/aiwork/Android-sdk/"
test --test_size_filters=small,medium
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-oss_excluded,-gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-oss_excluded,-gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-oss_excluded,-gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-oss_excluded,-gpu,-v1only
**Expected Output**
libtensorflowlite.so should get built successfully.
### Standalone code to reproduce the issue
```shell
**Build Command**
avaish@avaish-dekstop:/media/avaish/linux-games/litebuild/tensorflow$ bazel build -c opt --cxxopt=--std=c++17 --config=android_arm64 --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a --define=android_dexmerger_tool=d8_dexmerger --define=android_incremental_dexing_tool=d8_dexbuilder //tensorflow/lite/java:tensorflow-lite
```
### Relevant log output
```shell
INFO: Reading 'startup' options from /media/avaish/linux-games/litebuild/tensorflow/.bazelrc: --windows_enable_symlinks
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=153
INFO: Reading rc options for 'build' from /media/avaish/linux-games/litebuild/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /media/avaish/linux-games/litebuild/tensorflow/.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Reading rc options for 'build' from /media/avaish/linux-games/litebuild/tensorflow/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3.10/dist-packages --python_path=/usr/bin/python3 --action_env CLANG_COMPILER_PATH=/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-17 --repo_env=CC=/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-17 --repo_env=BAZEL_COMPILER=/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/bin/clang-17 --copt=-Wno-gnu-offsetof-extensions --action_env ANDROID_NDK_HOME=/media/avaish/aiwork/Android-sdk/ndk/26.1.10909125/ --action_env ANDROID_NDK_VERSION=26 --action_env ANDROID_NDK_API_LEVEL=21 --action_env ANDROID_BUILD_TOOLS_VERSION=35.0.0 --action_env ANDROID_SDK_API_LEVEL=35 --action_env ANDROID_SDK_HOME=/media/avaish/aiwork/Android-sdk/
INFO: Found applicable config definition build:short_logs in file /media/avaish/linux-games/litebuild/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /media/avaish/linux-games/litebuild/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:android_arm64 in file /media/avaish/linux-games/litebuild/tensorflow/.bazelrc: --config=android --cpu=arm64-v8a --fat_apk_cpu=arm64-v8a
INFO: Found applicable config definition build:android in file /media/avaish/linux-games/litebuild/tensorflow/.bazelrc: --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --dynamic_mode=off --define=xnn_enable_avxvnniint8=false --noenable_platform_specific_config --copt=-w --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --define=with_xla_support=false --config=no_tfrt
INFO: Found applicable config definition build:no_tfrt in file /media/avaish/linux-games/litebuild/tensorflow/.bazelrc: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/ir,tensorflow/compiler/mlir/tfrt/ir/mlrt,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ifrt,tensorflow/compiler/mlir/tfrt/tests/mlrt,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/compiler/mlir/tfrt/transforms/mlrt,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/runtime_fallback/test,tensorflow/core/runtime_fallback/test/gpu,tensorflow/core/runtime_fallback/test/saved_model,tensorflow/core/runtime_fallback/test/testdata,tensorflow/core/tfrt/stubs,tensorflow/core/tfrt/tfrt_session,tensorflow/core/tfrt/mlrt,tensorflow/core/tfrt/mlrt/attribute,tensorflow/core/tfrt/mlrt/kernel,tensorflow/core/tfrt/mlrt/bytecode,tensorflow/core/tfrt/mlrt/interpreter,tensorflow/compiler/mlir/tfrt/translate/mlrt,tensorflow/compiler/mlir/tfrt/translate/mlrt/testdata,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/
core/tfrt/utils,tensorflow/core/tfrt/utils/debug,tensorflow/core/tfrt/saved_model/python,tensorflow/core/tfrt/graph_executor/python,tensorflow/core/tfrt/saved_model/utils
INFO: Analyzed target //tensorflow/lite/java:tensorflow-lite (2 packages loaded, 8416 targets configured).
INFO: Found 1 target...
ERROR: /media/avaish/linux-games/litebuild/tensorflow/tensorflow/lite/c/jni/BUILD:12:43: Compiling tensorflow/lite/c/jni/jni_utils.cc failed: undeclared inclusion(s) in rule '//tensorflow/lite/c/jni:jni_utils':
this rule is missing dependency declarations for the following files included by 'tensorflow/lite/c/jni/jni_utils.cc':
'external/androidndk/toolchains/llvm/prebuilt/linux-x86_64/lib/clang/17/include/stdarg.h'
'external/androidndk/toolchains/llvm/prebuilt/linux-x86_64/lib/clang/17/include/stdint.h'
'external/androidndk/toolchains/llvm/prebuilt/linux-x86_64/lib/clang/17/include/stddef.h'
'external/androidndk/toolchains/llvm/prebuilt/linux-x86_64/lib/clang/17/include/__stddef_max_align_t.h'
'external/androidndk/toolchains/llvm/prebuilt/linux-x86_64/lib/clang/17/include/stdbool.h'
Target //tensorflow/lite/java:tensorflow-lite failed to build
Use --verbose_failures to see the command lines of failed build steps.
``` | stat:awaiting response,type:build/install,comp:lite,2.17 | medium | Critical |
2,808,521,925 | ui | [bug]: ShadCn Ui Not Working :tailwind css not installing using vite after update of tailwind v3 to v4 | ### Describe the bug
Tailwind CSS is not installing with Vite after updating from Tailwind v3 to v4.
I tried many times since yesterday, but it is not working. I think the code and the installation docs need to be updated.
The error occurs when running `npx tailwindcss init -p`; it does not work.
### Affected component/components
installation
### How to reproduce
any suggestions
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
System Information:
Operating System:
- Windows 11
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug,tailwind | low | Critical |
2,808,552,608 | rust | Tracking Issue for `Vec::push_mut` | Feature gate: `#![feature(push_mut)]`
This is a tracking issue for `Vec::push_mut`, as discussed in the comments of [this ACP](https://github.com/rust-lang/libs-team/issues/465). This adds a way to get a reference to the just-pushed value, which can eliminate having to `.unwrap()` or access the back of the list twice.
### Public API
```rust
#[must_use = "if you don't need a reference to the value, use Vec::push"]
pub fn push_mut(&mut self, value: T) -> &mut T { /* ... */ }
```
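For comparison, this is the stable-Rust pattern that `push_mut` is intended to replace (the helper name `push_and_annotate` is illustrative; the `push_mut` call itself appears only as a comment, since the feature is nightly-only):

```rust
// Stable-Rust sketch of the pattern `push_mut` replaces:
// push, then re-borrow the just-pushed element and mutate it.
fn push_and_annotate(v: &mut Vec<String>, s: &str) {
    v.push(String::from(s));
    // With the feature enabled, the push and the re-borrow collapse
    // into a single call that returns the reference directly:
    // let last = v.push_mut(String::from(s));
    let last = v.last_mut().unwrap();
    last.push('!');
}

fn main() {
    let mut v = Vec::new();
    push_and_annotate(&mut v, "hello");
    assert_eq!(v[0], "hello!");
    println!("ok");
}
```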
### Steps / History
- [x] Implementation: #135975
- [ ] Final comment period (FCP)
- [ ] Stabilization PR
### Unresolved Questions
- None yet
| T-libs-api,C-tracking-issue | low | Minor |
2,808,566,400 | godot | ColorRect not scaling to entire viewport under BackBufferCopy with anchor preset set to full rect (& expand) | ### Tested versions
- Reproducible in: 4.3-stable (commit 77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Compiled for double precision floats
### System information
6.12.9 (NixOS) 4.3-stable-77dcf97d82cbfe4e4615475fa52ca03da645dbd8 (Mobile preset)
### Issue description
A simple ColorRect won't scale to the window, even though its parent (a BackBufferCopy) has its copy mode set to Viewport.

Reparenting said ColorRect to be directly under a Control node allows it to fill the screen

### Steps to reproduce
Add anything that should scale to the entire viewport under a BackBufferCopy. It won't scale unless you do hacks to work around it.
### Minimal reproduction project (MRP)
If it scales correctly, everything should be white and you won't see the low-poly sphere.
[Godot issue.zip](https://github.com/user-attachments/files/18531088/Godot.issue.zip) | documentation,topic:2d | low | Major |
2,808,599,814 | ui | [bug]: Animation stuttering on Sheet component after update tailwind v3 to v4 | ### Describe the bug
The Sheet component somehow loses its smooth animation.
I don't know if updating Tailwind v3 to v4 is related to the problem, but I tried the version just before the update and the Sheet component opened smoothly without any problem.
### Bad opening
https://github.com/user-attachments/assets/ca326b72-fadd-4ab4-b45d-030af7e20b4c
### Expected opening
https://github.com/user-attachments/assets/7e145edb-f31a-441b-9c83-71307227b99e
### Affected component/components
Sheet
### How to reproduce
1. Install Sheet component
2. Build the component
3. Test
4. Update Tailwind v3 to v4
5. Click in the trigger component
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
Fedora 41, Firefox-Aurora, NextJs 15.1.4, Tailwindcss 4.0.0, @tailwindcss/postcss ^4.0.0
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug,tailwind | low | Critical |
2,808,609,669 | tailwindcss | Out of memory after migrating from 3 > 4 | **What version of Tailwind CSS are you using?**
v4.0.0
**What build tool (or framework if it abstracts the build tool) are you using?**
Angular 19
**What version of Node.js are you using?**
v22.0.x
**What browser are you using?**
Chrome
**What operating system are you using?**
macOS
**Reproduction URL**
NA
**Describe your issue**
We have a 2000+ module monorepo where a few hundred of these modules use Tailwind 3.x and are built as libraries with Nx.
Then we tried migrating.
1. Added .postcssrc.json with this content:
```
{
"plugins": {
"@tailwindcss/postcss": {}
}
}
```
2. Ran `npx @tailwindcss/upgrade@next`
Tried to run one of our Tailwind apps. It takes forever to build and eventually fails with an out-of-memory error:
```
> nx run online-travel-agency:serve:development --host=dev.wink.travel --port=4200 --ssl=true --ssl-key=./certs/dev.wink.travel-key.pem --ssl-cert=./certs/dev.wink.travel.pem
Warning: This is a simple server for use in testing or debugging Angular applications
locally. It hasn't been reviewed for security issues.
Binding this server to an open connection can result in compromising your application or
computer. Using a different host than the one passed to the "--host" flag might result in
websocket connection issues. You might need to use "--disable-host-check" if that's the
case.
Component HMR has been enabled.
If you encounter application reload issues, you can manually reload the page to bypass HMR and/or disable this feature with the `--no-hmr` command line option.
Please consider reporting any issues you encounter here: https://github.com/angular/angular-cli/issues
⠋ Building...
<--- Last few GCs --->
[26609:0x120008000] 495037 ms: Mark-Compact 4014.9 (4129.8) -> 4001.1 (4132.3) MB, pooled: 5 MB, 1536.67 / 0.00 ms (average mu = 0.125, current mu = 0.032) allocation failure; scavenge might not succeed
[26609:0x120008000] 496680 ms: Mark-Compact 4016.9 (4132.3) -> 4003.0 (4134.0) MB, pooled: 3 MB, 1631.75 / 0.00 ms (average mu = 0.067, current mu = 0.007) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0x1006a874c node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
2: 0x10084fff0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
3: 0x10084ffa4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
4: 0x1009f59d8 v8::internal::Heap::CallGCPrologueCallbacks(v8::GCType, v8::GCCallbackFlags, v8::internal::GCTracer::Scope::ScopeId) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
5: 0x1009f7720 v8::internal::Heap::DevToolsTraceEventScope::~DevToolsTraceEventScope() [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
6: 0x1009f6090 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags)::$_1::operator()() const [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
7: 0x1009f5d44 void heap::base::Stack::SetMarkerAndCallbackImpl<v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags)::$_1>(heap::base::Stack*, void*, void const*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
8: 0x1005b8028 PushAllRegistersAndIterateStack [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
9: 0x1009f4700 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
10: 0x1009eace8 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
11: 0x1009eb454 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
12: 0x1009c49ac v8::internal::MaybeHandle<v8::internal::SeqOneByteString> v8::internal::FactoryBase<v8::internal::Factory>::NewRawStringWithMap<v8::internal::SeqOneByteString>(int, v8::internal::Tagged<v8::internal::Map>, v8::internal::AllocationType) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
13: 0x100c3ee7c v8::internal::String::SlowFlatten(v8::internal::Isolate*, v8::internal::Handle<v8::internal::ConsString>, v8::internal::AllocationType) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
14: 0x100cfd634 v8::internal::Runtime_StringCharCodeAt(int, unsigned long*, v8::internal::Isolate*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
15: 0x100499af4 Builtins_CEntry_Return1_ArgvOnStack_NoBuiltinExit [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
16: 0x109ad3e18
17: 0x10976c2ec
18: 0x10976efbc
19: 0x10976ee1c
20: 0x10976e830
21: 0x1004c865c Builtins_ArrayMap [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
22: 0x1096e3450
23: 0x100441290 Builtins_AsyncFunctionAwaitResolveClosure [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
24: 0x10050c4d8 Builtins_PromiseFulfillReactionJob [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
25: 0x100431594 Builtins_RunMicrotasks [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
26: 0x100402af4 Builtins_JSRunMicrotasksEntry [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
27: 0x1009610f8 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
28: 0x1009617f0 v8::internal::(anonymous namespace)::InvokeWithTryCatch(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
29: 0x100984cf0 v8::internal::MicrotaskQueue::RunMicrotasks(v8::internal::Isolate*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
30: 0x100984a80 v8::internal::MicrotaskQueue::PerformCheckpointInternal(v8::Isolate*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
31: 0x1005b95e4 node::InternalCallbackScope::Close() [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
32: 0x1005b9b64 node::InternalMakeCallback(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*, node::async_context, v8::Local<v8::Value>) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
33: 0x1005d1a04 node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
34: 0x1006ae958 node::fs::FSReqCallback::Resolve(v8::Local<v8::Value>) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
35: 0x1006b1420 node::fs::AfterScanDir(uv_fs_s*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
36: 0x10069f094 node::MakeLibuvRequestCallback<uv_fs_s, void (*)(uv_fs_s*)>::Wrapper(uv_fs_s*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
37: 0x10498214c uv__work_done [/opt/homebrew/Cellar/libuv/1.50.0/lib/libuv.1.dylib]
38: 0x104985a74 uv__async_io [/opt/homebrew/Cellar/libuv/1.50.0/lib/libuv.1.dylib]
39: 0x10499601c uv__io_poll [/opt/homebrew/Cellar/libuv/1.50.0/lib/libuv.1.dylib]
40: 0x104985f08 uv_run [/opt/homebrew/Cellar/libuv/1.50.0/lib/libuv.1.dylib]
41: 0x1005ba428 node::SpinEventLoopInternal(node::Environment*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
42: 0x1006f2774 node::NodeMainInstance::Run(node::ExitCode*, node::Environment*) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
43: 0x1006f24c8 node::NodeMainInstance::Run() [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
44: 0x100668ca0 node::Start(int, char**) [/opt/homebrew/Cellar/node@22/22.13.0/bin/node]
45: 0x19d158274 start [/usr/lib/dyld]
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan receive]:
github.com/evanw/esbuild/internal/helpers.(*ThreadSafeWaitGroup).Wait(...)
github.com/evanw/esbuild/internal/helpers/waitgroup.go:36
main.runService.func2()
github.com/evanw/esbuild/cmd/esbuild/service.go:114 +0x88
main.runService(0x1)
github.com/evanw/esbuild/cmd/esbuild/service.go:160 +0x47c
main.main()
github.com/evanw/esbuild/cmd/esbuild/main.go:241 +0x898
goroutine 7 [chan receive]:
main.runService.func1()
github.com/evanw/esbuild/cmd/esbuild/service.go:98 +0x40
created by main.runService in goroutine 1
github.com/evanw/esbuild/cmd/esbuild/service.go:97 +0x19c
goroutine 8 [chan receive]:
main.(*serviceType).sendRequest(0x140000ea2d0, {0x102b81d60, 0x140151e9080})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0x138
main.runService.func3()
github.com/evanw/esbuild/cmd/esbuild/service.go:125 +0x44
created by main.runService in goroutine 1
github.com/evanw/esbuild/cmd/esbuild/service.go:122 +0x2e4
goroutine 997 [chan receive, 8 minutes]:
main.(*serviceType).sendRequest(0x140000ea2d0, {0x102b81d60, 0x14003930a20})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0x138
main.(*serviceType).convertPlugins.func2.4({{0x14004bb9b90, 0x90}, {0x102a473ef, 0x4}, {0x0, 0x0}, {0x0, 0x0}, 0x0})
github.com/evanw/esbuild/cmd/esbuild/service.go:1069 +0x634
github.com/evanw/esbuild/pkg/api.(*pluginImpl).onLoad.func1({{0x0, 0x0}, {{0x14004bb9b90, 0x90}, {0x102a473ef, 0x4}, {0x0, 0x0}, {{0x0, 0x0}}, ...}})
github.com/evanw/esbuild/pkg/api/api_impl.go:1992 +0x130
github.com/evanw/esbuild/internal/bundler.runOnLoadPlugins({0x140011a2840?, 0x0?, 0x0?}, {0x102c0a1c0, 0x14001240d40}, 0x1400109baa0, {0x140000f9f10, 0x14000c88168, 0x14000c88210, 0x14001240d20, ...}, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:1072 +0x9cc
github.com/evanw/esbuild/internal/bundler.parseFile({{0x102c0a1c0, 0x14001240d40}, {0x140000f9f10, 0x14000c88168, 0x14000c88210, 0x14001240d20, 0x6, 0x140010f6930}, 0x14002267208, 0x1400109baa0, ...})
github.com/evanw/esbuild/internal/bundler/bundler.go:162 +0x1f0
created by github.com/evanw/esbuild/internal/bundler.(*scanner).maybeParseFile in goroutine 451
github.com/evanw/esbuild/internal/bundler/bundler.go:1512 +0xa0c
goroutine 49 [chan receive, 8 minutes]:
main.(*serviceType).sendRequest(0x140000ea2d0, {0x102b81d60, 0x140000ea930})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0x138
main.(*serviceType).convertPlugins.func2.2()
github.com/evanw/esbuild/cmd/esbuild/service.go:944 +0x100
github.com/evanw/esbuild/pkg/api.(*pluginImpl).onStart-fm.(*pluginImpl).onStart.func1()
github.com/evanw/esbuild/pkg/api/api_impl.go:1843 +0x34
github.com/evanw/esbuild/internal/bundler.ScanBundle.func1({{0x102a50abf, 0x12}, {0x1400020e420, 0x1, 0x1}, {0x14000260b40, 0x1, 0x1}, {0x14000260ba0, 0x1, ...}}, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:1286 +0x7c
created by github.com/evanw/esbuild/internal/bundler.ScanBundle in goroutine 19
github.com/evanw/esbuild/internal/bundler/bundler.go:1285 +0xb4c
goroutine 19 [semacquire, 8 minutes]:
sync.runtime_Semacquire(0x1400031c380?)
runtime/sema.go:71 +0x2c
sync.(*WaitGroup).Wait(0x14000146590)
sync/waitgroup.go:118 +0x74
github.com/evanw/esbuild/internal/bundler.ScanBundle(_, {_, _, _, _, _, _}, {_, _}, _, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:1369 +0x6c4
github.com/evanw/esbuild/pkg/api.rebuildImpl({0x14000200420, {0x1400020e498, 0x1, 0x1}, {0x0, 0x0, 0x0}, {0x0, 0x1, 0x2, ...}, ...}, ...)
github.com/evanw/esbuild/pkg/api/api_impl.go:1479 +0x1f8
github.com/evanw/esbuild/pkg/api.(*internalContext).rebuild(_)
github.com/evanw/esbuild/pkg/api/api_impl.go:998 +0x2cc
github.com/evanw/esbuild/pkg/api.(*internalContext).Rebuild(0x1400027c508?)
github.com/evanw/esbuild/pkg/api/api_impl.go:1059 +0x3c
main.(*serviceType).handleIncomingPacket.func5()
github.com/evanw/esbuild/cmd/esbuild/service.go:293 +0xa0
created by main.(*serviceType).handleIncomingPacket in goroutine 1
github.com/evanw/esbuild/cmd/esbuild/service.go:290 +0x12b0
goroutine 14 [chan send, 8 minutes]:
github.com/evanw/esbuild/internal/bundler.ScanBundle.func2()
github.com/evanw/esbuild/internal/bundler/bundler.go:1320 +0x23c
created by github.com/evanw/esbuild/internal/bundler.ScanBundle in goroutine 19
github.com/evanw/esbuild/internal/bundler/bundler.go:1318 +0x6bc
goroutine 1096 [chan receive, 8 minutes]:
main.(*serviceType).sendRequest(0x140000ea2d0, {0x102b81d60, 0x14006b15e60})
github.com/evanw/esbuild/cmd/esbuild/service.go:192 +0x138
main.(*serviceType).convertPlugins.func2.4({{0x1400b552140, 0x91}, {0x102a473ef, 0x4}, {0x0, 0x0}, {0x0, 0x0}, 0x0})
github.com/evanw/esbuild/cmd/esbuild/service.go:1069 +0x634
github.com/evanw/esbuild/pkg/api.(*pluginImpl).onLoad.func1({{0x0, 0x0}, {{0x1400b552140, 0x91}, {0x102a473ef, 0x4}, {0x0, 0x0}, {{0x0, 0x0}}, ...}})
github.com/evanw/esbuild/pkg/api/api_impl.go:1992 +0x130
github.com/evanw/esbuild/internal/bundler.runOnLoadPlugins({0x1400016b8c0?, 0x140021d1f88?, 0x10262e1bc?}, {0x102c0a1c0, 0x14001192ae0}, 0x14001590fc0, {0x14001c2c380, 0x14000c88858, 0x14000c88870, 0x14001192ac0, ...}, ...)
github.com/evanw/esbuild/internal/bundler/bundler.go:1072 +0x9cc
github.com/evanw/esbuild/internal/bundler.parseFile({{0x102c0a1c0, 0x14001192ae0}, {0x14001c2c380, 0x14000c88858, 0x14000c88870, 0x14001192ac0, 0x6, 0x14000e7ca80}, 0x14001f40d88, 0x14001590fc0, ...})
github.com/evanw/esbuild/internal/bundler/bundler.go:162 +0x1f0
created by github.com/evanw/esbuild/internal/bundler.(*scanner).maybeParseFile in goroutine 443
github.com/evanw/esbuild/internal/bundler/bundler.go:1512 +0xa0c
goroutine
```
Not really sure what else to report. Guessing it's the number of modules that causes this. Works great with Tailwind 3.x.
Looking at what the migration tool did, it changed our libs from this:
```css
@config "./tailwind-component.config.js";
@tailwind components;
@tailwind utilities;
```
to this:
```css
@import 'tailwindcss/utilities' layer(utilities);
@config "./tailwind-component.config.js";
```
And our app css from this:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
to this:
```css
@import 'tailwindcss';
/* You can add global styles to this file, and also import other style files */
/*
The default border color has changed to `currentColor` in Tailwind CSS v4,
so we've added these compatibility styles to make sure everything still
looks the same as it did with Tailwind CSS v3.
If we ever want to remove these styles, we need to add an explicit border
color utility to any element that depends on these defaults.
*/
@layer base {
*,
::after,
::before,
::backdrop,
::file-selector-button {
border-color: var(--color-gray-200, currentColor);
}
}
```
Reverting back to v3 for now. | needs reproduction | low | Critical |
2,808,612,495 | pytorch | _pickle.UnpicklingError: invalid load key, ''. | ### 🐛 Describe the bug
```python
import pickle
import torch
import io

_pickler = pickle.Pickler
_unpickler = pickle.Unpickler

tensor = torch.tensor([126, 188, 133, 30, 60, 138, 188], dtype=torch.uint8)
buf = tensor.numpy().tobytes()[:3]
_unpickler(io.BytesIO(buf)).load()
```
Unpickling the truncated `tobytes()` buffer fails with the same error seen in `torch.distributed`'s `_tensor_to_object`:
```
[rank14]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2362, in _tensor_to_object
[rank14]: return _unpickler(io.BytesIO(buf)).load()
[rank14]: _pickle.UnpicklingError: invalid load key, '~'.
```
The fix is to serialize with pickle instead of raw bytes: change `tensor.numpy().tobytes()` to `pickle.dumps(tensor.numpy())`.
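To illustrate why that change helps, here is a minimal stdlib-only sketch (an assumption of this sketch: a plain `bytes` payload stands in for the real tensor buffer; its leading byte 126 is ASCII `'~'`, matching the reported `invalid load key, '~'`):

```python
import io
import pickle

# Stand-in for the tensor's raw buffer (assumption: plain bytes, not a torch tensor).
payload = bytes([126, 188, 133, 30, 60, 138, 188])

# Raw bytes are not a pickle stream: byte 126 is '~', which is not a valid
# pickle opcode, so Unpickler raises "invalid load key, '~'".
try:
    pickle.Unpickler(io.BytesIO(payload)).load()
    raw_load_failed = False
except pickle.UnpicklingError:
    raw_load_failed = True

# Serializing with pickle.dumps first produces a valid stream that round-trips.
restored = pickle.Unpickler(io.BytesIO(pickle.dumps(payload))).load()

print(raw_load_failed)      # True
print(restored == payload)  # True
```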
### Versions
Python 3.10
torch 2.3.0
Ubuntu 20.04
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,808,624,780 | yt-dlp | Mango TV partially unsupported. Error message: HTTP Error 401: Unauthorized | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
China
### Provide a description that is worded well enough to be understood
Mango TV: when trying to download `https://www.mgtv.com/b/693354/21790490.html?lastp=list_index`, the download fails with `yt_dlp.networking.exceptions.HTTPError: HTTP Error 401: Unauthorized`. After investigation, the `https://pcweb.api.mgtv.com/player/getSource` endpoint returns an error. The error message is as follows:
```json
{
  "msg": "Due to copyright restrictions, this video is temporarily not available for playback in this country/region.",
  "code": 40005,
  "data": {},
  "err_msg": "Due to copyright restrictions, this video is temporarily not available for playback in this country/region.",
  "err_code": 40005,
  "seqid": "ce599784a6a34dbaabfdc3a60acbee3a",
  "timestamp": 1737699781060
}
```
Help!!!
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-F', '-vU', '--no-config', '--cookies-from-browser', 'firefox', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:134.0) Gecko/20100101 Firefox/134.0', 'https://www.mgtv.com/b/693354/21790490.html?lastp=list_index']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [c8541f8b1] (source)
[debug] Git HEAD: 35421d9b6
[debug] Python 3.10.7 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1q 5 Jul 2022)
Extracting cookies from firefox
Extracted 254 cookies from firefox
[debug] exe versions: ffmpeg n7.1-153-gaeb8631048-20250122 (setts), ffprobe n7.1-153-gaeb8631048-20250122
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, mutagen-1.47.0, requests-2.32.3, sqlite3-3.37.2, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\Administrator\AppData\Roaming\Mozilla\Firefox\Profiles\mfekanbk.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1844 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[MangoTV] Extracting URL: https://www.mgtv.com/b/693354/21790490.html?lastp=list_index
[MangoTV] 21790490: Downloading JSON metadata
[MangoTV] 21790490: Downloading JSON metadata
ERROR: [MangoTV] 21790490: Unable to download JSON metadata: HTTP Error 401: Unauthorized (caused by <HTTPError 401: Unauthorized>)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\extractor\common.py", line 742, in extract
ie_result = self._real_extract(url)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\extractor\mgtv.py", line 96, in _real_extract
stream_data = self._download_json(
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\extractor\common.py", line 1152, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\extractor\common.py", line 1112, in download_handle
res = self._download_webpage_handle(
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\extractor\common.py", line 962, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data,
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\extractor\common.py", line 911, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\extractor\common.py", line 898, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\YoutubeDL.py", line 4175, in urlopen
return self._request_director.send(req)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\networking\common.py", line 117, in send
response = handler.send(request)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\networking\_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\networking\common.py", line 340, in send
return self._send(request)
File "C:\Users\Administrator\PycharmProjects\yt-dlp\yt_dlp\networking\_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 401: Unauthorized
``` | geo-blocked,site-bug,triage | low | Critical |
2,808,634,114 | flutter | [iOS] Password save dialog not appearing with autofillHints on Flutter 3.27.1 | ### Description
In Flutter 3.10.5, the password save dialog was displayed using `autofillHints`; after upgrading to Flutter 3.27.1, the same code no longer shows the dialog. Please check why it has stopped appearing.
### Steps to reproduce
1. Install the app for the first time.
2. Enter ID/password.
3. The password save dialog does not appear.
### Flutter Version Comparison
Version where it worked: Flutter 3.10.5
Version where the issue occurs: Flutter 3.27.1
### Reproduction Environment
iOS Device: iPhone 11, iOS 18.2
Xcode Version: 15.3
CocoaPods: 1.16.2
### Expected results
The password save dialog should appear after entering the password.
### Actual results
The password save dialog does not appear even after entering the password.
### Code sample
<details open><summary>Code sample</summary>
```dart
Widget _passwordField(BuildContext ctx) {
final obscure = ctx.select<LoginState, bool>((state) => state.obscure);
final validity = ctx.select<LoginState, bool>((state) => state.passwordValidity);
return Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
_title(AppKey().l10n!.password),
Padding(
padding: const EdgeInsets.only(bottom: 10, left: 30, right: 30),
child: ConstrainedBox(
constraints: const BoxConstraints(maxWidth: itemMaxSize),
child: Visibility(
visible: ctx.select<LoginState, bool>((state) => state.isMain),
child: TextFormField(
textInputAction: TextInputAction.done,
controller: ctx.read<LoginStateNotifier>().passwordController,
decoration: InputDecoration(
border: const OutlineInputBorder(),
fillColor: errorBackgroundColor,
filled: !validity,
suffixIcon: IconButton(
icon: Icon(
obscure ? Icons.visibility_off : Icons.visibility,
color: validity ? Colors.grey : Colors.red,
),
onPressed: ctx.read<LoginStateNotifier>().onChangeObscure,
),
counterText: '',
),
obscureText: obscure,
autofillHints: Platform.isIOS ? const [AutofillHints.password] : null,
onEditingComplete: TextInput.finishAutofillContext,
maxLength: 50,
keyboardType: TextInputType.visiblePassword,
onChanged: ctx.read<LoginStateNotifier>().onChangePassword,
validator: ctx.read<LoginStateNotifier>().validatePassword,
),
),
),
),
],
);
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
fvm flutter doctor -v
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.4.1 23E224 darwin-arm64, locale ja-JP)
• Flutter version 3.27.1 on channel stable at /Users/tf/fvm/versions/3.27.1
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (5 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/tf/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: /opt/homebrew/Cellar/openjdk@17/17.0.13/libexec/openjdk.jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment Homebrew (build 17.0.13+0)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode_15-3.app/Contents/Developer
• Build 15E204a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio_2023-2.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.90.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
```
</details>
| in triage | low | Critical |
2,808,642,022 | puppeteer | [Feature]: Convert Puppeteer Response object into Response object of Fetch API | ### Feature description
The `Page.goto` method returns an `HTTPResponse`:
https://pptr.dev/api/puppeteer.httpresponse
In my project, I'm encapsulating Puppeteer behind an interface like:
```ts
export type IPage = {
  goto(
    url: string,
    options?: { timeout: number }
  ): Promise<Response | null>;
};
```
To keep callers unaware of Puppeteer, I'd be happy if I could convert Puppeteer's `HTTPResponse` class into the `Response` class of the Fetch API shown below:
https://developer.mozilla.org/en-US/docs/Web/API/Response
Thank you. | feature,P3 | low | Minor |
2,808,675,495 | vscode | Python file is not running |
Type: <b>Bug</b>
print("adithya"+"ranjit")
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 3 3250U with Radeon Graphics (4 x 2595)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|5.94GB (0.88GB free)|
|Process Argv|--crash-reporter-id 27a663b3-1774-456e-a8d4-f79d63956c8a|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (20)</summary>
Extension|Author (truncated)|Version
---|---|---
auto-save-toggler|Bac|1.0.6
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
java|red|1.39.0
LiveServer|rit|5.7.9
pdf|tom|1.2.2
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,808,684,287 | next.js | [Turbopack] Error: ENOENT: no such file or directory, open '[project]/......' | ### Link to the code that reproduces this issue
https://github.com/HMubaireek/nextjs-turbopack-zeebe-node
### To Reproduce
1. Clone the repo, run npm install, then serve the app with: `npx nx dev nextjs-with-zeebe-node --turbo`
2. Open this link in the browser: `http://localhost:3000/api/zeebe`
3. See the error in terminal.
### Current vs. Expected behavior
This is the error that appears:
```sh
Error: ENOENT: no such file or directory, open '[project]/node_modules/zeebe-node/dist/proto/zeebe.proto'
at GET (apps/nextjs-with-zeebe-node/src/app/api/zeebe/route.ts:10:22)
8 |
9 | try{
> 10 | const zeebeClient = new ZBClient('localhost:26500');
| ^
11 | isConnected = zeebeClient.connected;
12 | } catch (error) {
13 | console.error(error); {
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '[project]/node_modules/zeebe-node/dist/proto/zeebe.proto'
}
```
The library can't find a file, which is expected, but the problem is the `[project]` prefix in the path. When running with `webpack` instead, the error is:
```sh
Error: ENOENT: no such file or directory, open '/home/hasm/repositories/sandbox/org/apps/nextjs-with-zeebe-node/.next/proto/zeebe.proto'
```
So, I know which path it is looking for, and that is fixed easily with this addition to the `next.config.js` file:
```js
webpack: (config, { isServer }) => {
config.plugins = [
...(config.plugins || []),
new CopyPlugin({
patterns: [
{
from: '../../node_modules/zeebe-node/proto',
to: 'proto',
},
],
})
];
return config;
}
```
With this, serving the app without ` --turbo` flag works as expected.
I tried putting the file manually in many places, but that didn't work.
So, the question is: what is this `[project]` in the path, and how can it be fixed with Turbopack?
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #202409151536 SMP PREEMPT_DYNAMIC Sun Sep 15 16:01:12 UTC 2024
Available memory (MB): 47875
Available CPU cores: 12
Binaries:
Node: 22.6.0
npm: 10.8.2
Yarn: 0.32+git
pnpm: 9.15.2
Relevant Packages:
next: 15.1.6 // Latest available version is detected (15.1.6).
eslint-config-next: 15.1.6
react: 19.0.0
react-dom: 19.0.0
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
1. [The library tries to load the file like this](https://github.com/camunda-community-hub/zeebe-client-node-js/blob/7969ce1808c96a87519cb1a3f279287f30637c4b/src/zb/ZBClient.ts#L1351):
```js
protoPath: path.join(__dirname, '../../proto/zeebe.proto'),
```
2. I used nx to create the sample repo but the issue is the same without it, and here a sample repo created with nextjs only: https://github.com/HMubaireek/nextjs-turbopack-zeebe-node-standalone it's exactly the same error. | Turbopack | low | Critical |
2,808,702,791 | neovim | :terminal can display images (graphic protocol support) | ### Problem
in neovim , create a terminal by :terminal
ma@ubuntu2204:~$ kitten icat en.png
Error: Terminal does not support reporting screen sizes in pixels, use a terminal such as kitty, WezTerm, Konsole, etc. that does.
### Expected behavior
I apologize if my previous explanation was unclear, but I believe this issue is not a duplicate of #30889. That issue appears to address a consistent general interface for rendering images in Neovim's normal buffer, as seen in plugins like [image.nvim](https://github.com/3rd/image.nvim) or [hologram.nvim](https://github.com/edluffy/hologram.nvim). However, in the context of a terminal buffer (e.g., VT220/xterm terminal emulator), it seems that additional work is required to enhance it into a graphic protocol-compatible terminal emulator. This would allow commands like kitten icat en.png to render images properly. | terminal,image | low | Critical |
2,808,703,487 | tauri | [bug] ureq proxy agent failing | #7338
I don't know where the problem is, but I tried with the reqwest library and it works. Can anyone rewrite this block of code?
[file](https://github.com/tauri-apps/tauri/blob/dev/crates/tauri-bundler/src/utils/http_utils.rs)
```rust
fn create_agent_and_url(url: &str) -> (ureq::Agent, String) {
    generate_github_alternative_url(url).unwrap_or((
        ureq::AgentBuilder::new().try_proxy_from_env(true).build(),
        url.to_owned(),
    ))
}

#[allow(dead_code)]
pub fn download(url: &str) -> crate::Result<Vec<u8>> {
    let (agent, final_url) = create_agent_and_url(url);
    log::info!(action = "Downloading"; "{}", final_url);
    let response = agent.get(&final_url).call().map_err(Box::new)?;
    let mut bytes = Vec::new();
    response.into_reader().read_to_end(&mut bytes)?;
    Ok(bytes)
}
```
| type: bug,status: needs triage | low | Critical |
2,808,758,507 | rust | Rust can't figure out that two types are the same with supertraits and associated types. | I tried this code:
```rust
trait Foo {
type ToBar;
}
trait Bar {
type ToFoo: Foo<ToBar = Self>;
}
trait SubFoo: Foo {
type SubToBar: Bar<ToFoo = Self>;
}
fn works<F: SubFoo>(x: F::SubToBar) -> F::ToBar {
fn helper<B: Bar>(x: B) -> <B::ToFoo as Foo>::ToBar {
x
}
helper::<F::SubToBar>(x)
}
fn fails<F: SubFoo>(x: F::SubToBar) -> F::ToBar {
x
}
```
I expected the code to compile. However, the `works` function compiles, while the `fails` function fails to compile, producing the error:
```
error[E0308]: mismatched types
--> src/lib.rs:22:5
|
21 | fn fails<F: SubFoo>(x: F::SubToBar) -> F::ToBar {
| -------- expected `<F as Foo>::ToBar` because of return type
22 | x
| ^ expected `Foo::ToBar`, found `SubFoo::SubToBar`
|
= note: expected associated type `<F as Foo>::ToBar`
found associated type `<F as SubFoo>::SubToBar`
= note: an associated type was expected, but a different one was found
For more information about this error, try `rustc --explain E0308`.
```
For every `SubFoo` type, the `ToBar` and `SubToBar` associated types will always be the same (`Self::ToBar == Self::SubToBar`). This can be proven by the following chain of reasoning for any type `F: SubFoo`:
```
<F as SubFoo>::SubToBar
== <<<F as SubFoo>::SubToBar as Bar>::ToFoo as Foo>::ToBar
(from applying the trait bound `ToFoo: Foo<ToBar = Self>` to the type `<F as SubFoo>::SubToBar`)
== <F as Foo>::ToBar
(from applying the trait bound `SubToBar: Bar<ToFoo = Self>` to the type `F`)
```
It seems like the trait solver can't figure this out by itself. But prodding it with the `helper()` function makes it able to figure this out. This workaround is rather inconvenient though, and it would be nice if the trait solver can figure this out itself, or if there were some way to give hints to the trait solver to reach this conclusion.
Note: my attempts to simplify things have repeatedly ran into the problem described at #65913. Also, the original use case had GATs, which made associated type bounds not usable on the Foo/SubFoo traits.
### Meta
Reproducible on the playground with stable rust version 1.84.0, and nightly rust version `1.86.0-nightly (2025-01-23 99768c80a1c094a5cfc3)`
Using `-Z next-solver=globally` doesn't fix the problem.
@rustbot labels +A-trait-system +A-associated-items | A-trait-system,A-associated-items,C-bug,F-associated_type_bounds,T-types | low | Critical |
2,808,782,849 | angular | bug(mat-tab): MatTab content is removed instantly at the start of route transition animation | ### Is this a regression?
- [ ] Yes, this behavior used to work in the previous version
### The previous version in which this bug was not present was
_No response_
### Description
The content of a mat-tab instantly gets removed when implementing a route transition animation between two components.
Here is a GIF showing the problem:

### Reproduction
StackBlitz link: https://stackblitz.com/~/github.com/FlepTheFlabbergasted/mat-tab-bug-v19.1.0?file=src/app/one/one.component.html
Steps to reproduce:
1. Create a route transition animation between two components
2. Have the first components display some content in a mat-tab-group
3. Trigger the animation between the components
4. Observe that the mat-tab content disappears immediately when the router animation starts
### Expected Behavior
The content of the mat-tab fades away slowly together with the component using it.
### Actual Behavior
The mat-tab content gets removed instantly when the animations start.
### Environment
- Angular:
- @angular-devkit/architect 0.1901.3
@angular-devkit/build-angular 19.1.3
@angular-devkit/core 19.1.3
@angular-devkit/schematics 19.1.3
@angular/cli 19.1.3
@schematics/angular 19.1.3
- CDK/Material:
- @angular/cdk 19.1.0
@angular/material 19.1.0
- Browser(s):
- Chrome (Version 131.0.6778.265 (Official Build) (64-bit))
- Firefox (Version 134.0.1 (64-bit)
- Operating System (e.g. Windows, macOS, Ubuntu):
- Windows 10
| area: router,needs triage | low | Critical |
2,808,799,040 | rust | u128 manual byte-reading is not optimized, in contrast to u64 | ## Code
https://play.rust-lang.org/?version=stable&mode=release&edition=2021&gist=7e184ab3e02960b05e214be4d73d84e8
```rust
use std::hint::black_box;
use std::array::from_fn;
#[inline(never)]
fn test_u64() {
let n: u64 = black_box(137_777);
let a: [u8; 8] = black_box(from_fn(|i| ((n >> (i*8)) & 255) as u8));
let b: [u8; 8] = black_box(n.to_le_bytes());
}
#[inline(never)]
fn test_u128() {
let n: u128 = black_box(137_777_u128);
let a: [u8; 16] = black_box(from_fn(|i| ((n >> (i*8)) & 255) as u8));
let b: [u8; 16] = black_box(n.to_le_bytes());
}
fn main() {
test_u64();
test_u128();
}
```
## Expected behavior
In both functions, same LLVM IR and same assembly is generated to evaluate `a` and `b`.
## Current behavior
`u64` byte-by-byte read is optimized so that it just reads a pointer. (The optimization seems to happen at MIR -> LLVM IR level.)
```asm
playground::test_u64:
movq $137777, -24(%rsp)
leaq -24(%rsp), %rax
#APP
#NO_APP
movq -24(%rsp), %rax
movq %rax, -16(%rsp)
leaq -16(%rsp), %rax
#APP
#NO_APP
movq -24(%rsp), %rax
movq %rax, -8(%rsp)
leaq -8(%rsp), %rax
#APP
#NO_APP
retq
```
`u128` read, however, is not optimized in the same way, and seems to use a whole lot of registers.
```asm
playground::test_u128:
movq $0, -48(%rsp)
movq $137777, -56(%rsp)
leaq -56(%rsp), %rax
#APP
#NO_APP
movq -48(%rsp), %rax
movq %rax, %rcx
shrq $56, %rcx
movd %ecx, %xmm0
movq %rax, %rcx
shrq $48, %rcx
movd %ecx, %xmm1
movq %rax, %rcx
shrq $40, %rcx
movd %ecx, %xmm4
movq %rax, %rcx
shrq $32, %rcx
movd %ecx, %xmm3
movl %eax, %ecx
shrl $24, %ecx
movd %ecx, %xmm5
movl %eax, %ecx
shrl $16, %ecx
movd %ecx, %xmm6
movq -56(%rsp), %rcx
movd %eax, %xmm2
shrl $8, %eax
movd %eax, %xmm7
movq %rcx, %rax
shrq $56, %rax
movd %eax, %xmm9
movq %rcx, %rax
shrq $48, %rax
movd %eax, %xmm10
movq %rcx, %rax
shrq $40, %rax
movd %eax, %xmm11
movq %rcx, %rax
shrq $32, %rax
movd %eax, %xmm8
movl %ecx, %eax
shrl $24, %eax
movd %eax, %xmm12
movl %ecx, %eax
shrl $16, %eax
movd %eax, %xmm13
punpcklbw %xmm0, %xmm1
punpcklbw %xmm4, %xmm3
punpcklwd %xmm1, %xmm3
punpcklbw %xmm5, %xmm6
punpcklbw %xmm7, %xmm2
punpcklwd %xmm6, %xmm2
punpckldq %xmm3, %xmm2
punpcklbw %xmm9, %xmm10
punpcklbw %xmm11, %xmm8
punpcklwd %xmm10, %xmm8
movd %ecx, %xmm0
movl %ecx, %eax
shrl $8, %eax
punpcklbw %xmm12, %xmm13
movd %eax, %xmm1
punpcklbw %xmm1, %xmm0
punpcklwd %xmm13, %xmm0
punpckldq %xmm8, %xmm0
punpcklqdq %xmm2, %xmm0
movdqa %xmm0, -40(%rsp)
leaq -40(%rsp), %rax
#APP
#NO_APP
movaps -56(%rsp), %xmm0
movaps %xmm0, -24(%rsp)
leaq -24(%rsp), %rax
#APP
#NO_APP
retq
```
@rustbot labels +C-optimization | A-LLVM,I-slow,T-compiler,C-optimization | low | Minor |
2,808,800,704 | go | cmd/link: place Darwin bind entries on the __DATA_CONST segment | #38830 moved the .rodata, .typelink, .itablink, and .gopclntab sections to the __DATA_CONST segment. Looks like the .got section can also go into that new segment. It is currently placed in the __DATA segment.
You can verify that the macOS clang linker already does what I'm suggesting here by following these steps:
1. Build a hello world binary with `go build -ldflags=-linkmode=external -o ext`
2. Get the bind table by running `objdump --macho --bind ./ext`
The result is something like this:
```
Bind table:
segment section address type addend dylib symbol
__DATA_CONST __got 0x100088000 bind ptr 0 libSystem ___error
__DATA_CONST __got 0x100088008 bind ptr 0 libSystem ___stderrp
...
```
While the same binary built with the Go internal linker has the following bind table:
```
Bind table:
segment section address type addend dylib symbol
__DATA __nl_symbol_ptr 0x1000EC2B0 pointer 0 libSystem ___error
__DATA __nl_symbol_ptr 0x1000EC2B8 pointer 0 libSystem ___stderrp
...
```
Also note that clang names the `.got` section `__got` on ARM, while we use `__nl_symbol_ptr` unconditionally. Don't know if this makes any real difference, but clang made that change consciously in this commit: https://cgit.geodns-americas.gentoo.org/fork/llvm-project.git/commit/?h=release/3.8.x&id=e8d9df4ea52a5652a13f080614507d70e9f9ad79.
Go version:
> go version go1.23.3 darwin/arm64
Clang version:
> Apple clang version 16.0.0 (clang-1600.0.26.4)
@golang/compiler @golang/darwin | compiler/runtime,FixPending,Implementation | low | Critical |
2,808,825,122 | flutter | [PageView] The BouncingScrollPhysics is not applied when the scroll direction is scrolling up | ### Steps to reproduce
### Title
[PageView] The BouncingScrollPhysics is not applied when the scroll direction is scrolling up
---
### Question
I'm experiencing an issue with Flutter's `PageView` when using `BouncingScrollPhysics`. The behavior of the scroll physics changes unexpectedly when scrolling vertically and locking a specific page. Here's how you can reproduce the issue:
1. Scroll to page 6.
2. Press the lock button (to activate the lock).
3. Scroll up.
---
### Context
I am working on a video app similar to YouTube Shorts. In this app, I need to dynamically manage pages because I may need to reload the previously visited pages due to the limited number of `VideoPlayerController` instances I can create. This is why I introduced the `lock` functionality to control access to certain pages dynamically. However, this appears to interfere with the expected behavior of the `BouncingScrollPhysics`.
---
### Expected results
When scrolling sequentially (e.g., 0 → 1 → 2 → 3 → 4 → 5), the BouncingScrollPhysics blocks the scroll and rebounds as expected.
It should behave consistently regardless of the scroll direction or lock state.
### Actual results
The scroll index changes like this: 7 → 6 → 10.
The index jumps to 10 and is not blocked by the BouncingScrollPhysics.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:math';
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
bool lock = false;
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Random Color PageView',
theme: ThemeData(primarySwatch: Colors.blue),
home: Scaffold(
appBar: AppBar(
title: const Text('Random Color PageView'),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
setState(() {
lock = !lock;
});
},
child: const Icon(Icons.lock),
),
body: RandomColorPageView(
lock: lock,
),
),
);
}
}
class RandomColorPageView extends StatelessWidget {
final Random _random = Random();
RandomColorPageView({super.key, required this.lock});
final bool lock;
@override
Widget build(BuildContext context) {
return PageView.builder(
physics: const BouncingScrollPhysics(),
scrollDirection: Axis.vertical,
itemBuilder: (context, index) {
final Color randomColor = Color.fromARGB(
255,
_random.nextInt(256),
_random.nextInt(256),
_random.nextInt(256),
);
if (index == 5 && lock) {
return null;
}
Widget child = Container(
width: double.infinity,
height: double.infinity,
color: randomColor,
child: Center(
child: Text(
'Page $index',
style: const TextStyle(
fontSize: 48,
color: Colors.white,
fontWeight: FontWeight.bold,
),
),
),
);
if (lock) {
child = ColoredBox(
color: Colors.red,
child: Padding(
padding: const EdgeInsets.all(8),
child: child,
),
);
}
return child;
},
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
### When scrolling up
https://github.com/user-attachments/assets/63f18d06-fa57-42a1-9c18-8c35c4e721cd
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.24.5, on macOS 14.5 23F79 darwin-arm64, locale en-HK)
! Flutter version 3.24.5 on channel [user-branch] at /Users/wtcheung/dev/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at
https://flutter.dev/setup.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
• Framework revision dec2ee5c1f (2 months ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
• If those were intentional, you can disregard the above warnings; however it is recommended to
use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/wtcheung/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.96.4)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (5 available)
• A015 (mobile) • 00112346F004917 • android-arm64 • Android 15 (API
35)
• W T’s iPhone (mobile) • 00008110-001A03E62EF0401E • ios • iOS 17.6.1
21G93
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5
23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5
23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome
131.0.6778.265
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,f: scrolling,has reproducible steps,team-framework,found in release: 3.27,found in release: 3.29 | low | Critical |
2,808,831,401 | pytorch | [Dynamo] compile torch.logit with different data types | ### 🐛 Describe the bug
When fixing https://github.com/pytorch/pytorch/issues/145379, I encountered a failure that seems related to Dynamo. When testing with `torch.float64`, the example below works well. However, it fails with `torch.float32` with the following error:
```
File "/home/leslie/community/pytorch/torch/_dynamo/variables/builder.py", line 2167, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/home/leslie/community/pytorch/torch/_dynamo/variables/builder.py", line 2233, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/home/leslie/community/pytorch/torch/_dynamo/variables/builder.py", line 2329, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/home/leslie/community/pytorch/torch/_dynamo/utils.py", line 2965, in get_fake_value
unimplemented(
File "/home/leslie/community/pytorch/torch/_dynamo/exc.py", line 361, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: data dependent operator: aten._local_scalar_dense.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True
from user code:
File "/home/leslie/community/pytorch/torch/_dynamo/external_utils.py", line 48, in inner
return fn(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Repro:
```
import torch
# dtype = torch.float64 # Pass
dtype = torch.float32 # Fail
input = torch.tensor(0.3, dtype=dtype)
eps = torch.tensor(0.9, dtype=dtype)
compiled = torch.compile(torch.logit, fullgraph=True)
print(compiled(input, eps))
```
### Versions
```
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git95a92b5
[pip3] torchaudio==2.5.0a0+a95cfa8
[pip3] torchdata==0.10.0a0+2631c38
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] torchvision==0.20.0a0+945bdad
[conda] mkl 2024.2.2 ha957f24_15 conda-forge
[conda] mkl-include 2024.2.2 ha957f24_15 conda-forge
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+git95a92b5 dev_0 <develop>
[conda] torchaudio 2.5.0a0+a95cfa8 dev_0 <develop>
[conda] torchdata 0.10.0a0+2631c38 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop>
[conda] torchvision 0.20.0a0+945bdad dev_0 <develop>
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,808,851,638 | terminal | Crash when closing one terminal window, all other terminal instances get killed | ### Windows Terminal version
1.21.3231.0
### Windows build number
22621.1413
### Other Software
powershell 5.1
### Steps to reproduce
I have several powershell instances, running in separate windows terminal instances. (So they should be separate processes)
I closed one, and it seems like it crashed. (PowerShell itself did not crash, I believe.)
All my other terminal instances get killed (including those running powershell), and I am toasted. 💢
### Expected Behavior
_No response_
### Actual Behavior
The dump is too big to upload to GitHub, so I submitted feedback with the dump attached.
https://aka.ms/AAu35be | Issue-Bug,Needs-Triage | low | Critical |
2,808,857,272 | rust | Meta Tracking Issue for LLVM workarounds | This is a meta tracking issue for LLVM workarounds that should eventually be removed once newer LLVM versions render the workarounds obsolete. Please edit this issue to backlink to relevant PRs or issues as suitable (preferably open new issues for specific LLVM workarounds). You can also leave a `FIXME(#135981)` in code, or a FIXME pointing to a dedicated issue, to backlink to this tracking issue. Please remove entries if they are cleaned up.
### Tracked by WG-llvm
- https://github.com/rust-lang/rust/labels/llvm-fixed-upstream
### Fixed in LLVM 20
- https://github.com/rust-lang/rust/issues/135982
| A-LLVM,T-compiler,C-tracking-issue,WG-llvm,T-libs,PG-portable-simd,S-tracking-forever | low | Minor |
2,808,883,825 | godot | Editing an exported array of type Resource, will not update in a running debug instance. | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6094) - 12th Gen Intel(R) Core(TM) i7-12700K (20 Threads)
### Issue description
If I try to edit an exported `Array[Resource]` inside the editor while a debug instance is running, the array will not update in the running instance.
If I edit exported ints, floats, bools, Strings and more; it will update in the running debug instance, allowing me to test game state. For example: reducing the player's health to see HUD updates and character animations/display. I expected this functionality would work with Arrays filled with objects but it does not. I would like to add items to a player inventory, quests to a quest log, status effects and state to a player character and have it update in the running instance.
**Edit:** I should also add that Arrays of primitives like ints, floats, bools, and Strings update in the debug instance as you would expect. The issue seems to happen exclusively with Resource objects. (I have not tried other objects like Nodes.)
Additionally (and I don't know if this is related), non-typed arrays can be edited; however, if they are, all Resources inside the array are changed to an "EncodedObjectAsID", which breaks Resources used as constant types. I'm not sure if this is intended, or if I should open another issue.
### Steps to reproduce
1) Create an exported array with 'Resource' types.
2) Create a timer that prints the array to console every few seconds.
3) Try editing the array, add/remove elements, drag/drop Resource objects from file system; the array will not update as seen in the console window.
### Minimal reproduction project (MRP)
[Godot Array Issue.zip](https://github.com/user-attachments/files/18532843/Godot.Array.Issue.zip) | bug,topic:editor | low | Critical |
2,808,885,721 | rust | Workaround for llvm bug for f32x3 saturating fp->int opt-level=0 on aarch64 is fixed in LLVM 20 | workaround added in https://github.com/rust-lang/portable-simd/pull/422#issuecomment-2153517021 for https://github.com/llvm/llvm-project/issues/94694
Fixed in LLVM 20.
2,808,923,715 | pytorch | add scalar inputs with out causes error in torch.compile | ### 🐛 Describe the bug
```
import torch
def add_fn(params):
res = torch.add(**params)
return res
if __name__ == "__main__":
add_fn = torch.compile(add_fn)
params = {'other': 1.1, 'alpha': 0.4, 'input': 2, 'out': torch.tensor(1.)}
res = add_fn(params)
print(res)
```
It causes the error below:
```
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] failed while attempting to run meta for aten.add.out
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] Traceback (most recent call last):
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2016, in _dispatch_impl
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] r = func(*args, **kwargs)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] return self._op(*args, **kwargs)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] result = fn(*args, **kwargs)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 141, in _fn
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] result = fn(**bound.arguments)
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] File "/tmp/lib/python3.10/site-packages/torch/_refs/__init__.py", line 1087, in add
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] dtype = a.dtype if isinstance(a, TensorLike) else b.dtype # type: ignore[union-attr]
E0124 11:04:10.160000 286274 torch/_subclasses/fake_tensor.py:2020] [0/0] AttributeError: 'float' object has no attribute 'dtype'
Traceback (most recent call last):
File "/tmp/val.py", line 12, in <module>
res = add_fn(params)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/tmp/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 897, in call_function
tensor_variable = wrap_fx_proxy(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2037, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2124, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2082, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2017, in get_fake_value
ret_val = wrap_fake_exception(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1574, in wrap_fake_exception
return fn()
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2018, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2150, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/tmp/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 2132, in run_node
return node.target(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1241, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1695, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1342, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2016, in _dispatch_impl
r = func(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
result = fn(*args, **kwargs)
File "/tmp/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 141, in _fn
result = fn(**bound.arguments)
File "/tmp/lib/python3.10/site-packages/torch/_refs/__init__.py", line 1087, in add
dtype = a.dtype if isinstance(a, TensorLike) else b.dtype # type: ignore[union-attr]
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method add of type object at 0x7f10670a3a20>(*(), **{'other': 1.1, 'alpha': 0.4, 'input': 2, 'out': FakeTensor(..., size=())}):
'float' object has no attribute 'dtype'
from user code:
File "/tmp/val.py", line 5, in add_fn
res = torch.add(**params)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
PyTorch version: 2.5.1
cc @chauhang @penguinwu @SherlockNoMad @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: decompositions,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,808,937,994 | godot | Screenspace Reflections do not change intensity when changing materials Specular value | ### Tested versions
Tested in 4.4 master
### System information
Windows 11 - RTX 2070 Super - Forward +
### Issue description
When changing the specular value on a material while SSR is enabled, you would expect the intensity of the reflections to change with the specular value.
Also worth noting that this happens for all spatial shaders, not just StandardMaterial3D.
https://github.com/user-attachments/assets/aa4bf610-bffc-4515-9b63-48453bca5d8a
I haven't gotten to test this in prior versions, but from what I recall this did work properly at one time.
### Steps to reproduce
- Enable SSR
- Add reflective material to object
- Change the specular intensity
### Minimal reproduction project (MRP)
[ssr_bug_mrp_2025-01-24_01-14-18.zip](https://github.com/user-attachments/files/18533174/ssr_bug_mrp_2025-01-24_01-14-18.zip) | bug,topic:rendering,topic:3d | low | Critical |
2,808,953,153 | kubernetes | ReplicaSet controller may create extra Pods when expectations expire during informer delays | ### What happened?
The ReplicaSet controller may create more Pods than desired under the following conditions:
1. ReplicaSet controller creates N Pods and sets expectations (+N)
2. Due to network issues or high latency, PodInformer hasn't received the Pod creation events
3. After 5 minutes, the expectations expire (isExpired() == true)
4. The `SatisfiedExpectations` function returns `true` immediately when expectations expire, without checking if they are actually fulfilled
5. This causes the controller to create additional Pods, even though the previously created Pods may still exist
Code path: https://github.com/kubernetes/kubernetes/blob/f6f06806cc43ed9f7eb2f68368c90a8239884118/pkg/controller/controller_utils.go#L193
```
func (r *ControllerExpectations) SatisfiedExpectations(logger klog.Logger, controllerKey string) bool {
if exp, exists, err := r.GetExpectations(controllerKey); exists {
if exp.Fulfilled() {
logger.V(4).Info("Controller expectations fulfilled", "expectations", exp)
return true
} else if exp.isExpired() {
logger.V(4).Info("Controller expectations expired", "expectations", exp)
return true // There is an issue here. !!!
} else {
logger.V(4).Info("Controller still waiting on expectations", "expectations", exp)
return false
}
} else if err != nil {
logger.V(2).Info("Error encountered while checking expectations, forcing sync", "err", err)
} else {
// When a new controller is created, it doesn't have expectations.
// When it doesn't see expected watch events for > TTL, the expectations expire.
// - In this case it wakes up, creates/deletes controllees, and sets expectations again.
// When it has satisfied expectations and no controllees need to be created/destroyed > TTL, the expectations expire.
// - In this case it continues without setting expectations till it needs to create/delete controllees.
logger.V(4).Info("Controller either never recorded expectations, or the ttl expired", "controller", controllerKey)
}
// Trigger a sync if we either encountered and error (which shouldn't happen since we're
// getting from local store) or this controller hasn't established expectations.
return true
}
```
If the PodInformer is delayed, manageReplicas will create too many Pods.
https://github.com/kubernetes/kubernetes/blob/f6f06806cc43ed9f7eb2f68368c90a8239884118/pkg/controller/replicaset/replica_set.go#L572
```go
if diff < 0 {
diff *= -1
if diff > rsc.burstReplicas {
diff = rsc.burstReplicas
}
rsc.expectations.ExpectCreations(logger, rsKey, diff)
successfulCreations, err := slowStartBatch(diff, controller.SlowStartInitialBatchSize, func() error {
err := rsc.podControl.CreatePods(ctx, rs.Namespace, &rs.Spec.Template, rs, metav1.NewControllerRef(rs, rsc.GroupVersionKind))
if err != nil {
if apierrors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
return nil
}
}
return err
})
if skippedPods := diff - successfulCreations; skippedPods > 0 {
logger.V(2).Info("Slow-start failure. Skipping creation of pods, decrementing expectations", "podsSkipped", skippedPods, "kind", rsc.Kind, "replicaSet", klog.KObj(rs))
for i := 0; i < skippedPods; i++ {
rsc.expectations.CreationObserved(logger, rsKey)
}
}
return err
}
```
### What did you expect to happen?
When controller expectations expire, the ReplicaSet controller should still ensure the actual Pod count matches the desired count, even if there are informer delays. Specifically:
1. If expectations expire but are not fulfilled (add/delete operations not observed), the controller should wait for the informer to catch up rather than immediately proceeding with new Pod creations.
2. The `SatisfiedExpectations` function should return `false` when expectations are expired but not fulfilled, preventing potential Pod over-provisioning during informer delays.
This would maintain the safety guarantee that a ReplicaSet never creates more Pods than desired, even in the presence of informer delays.
### How can we reproduce it (as minimally and precisely as possible)?
NONE
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/apps,needs-triage | low | Critical |
2,808,967,214 | vscode | ctrl+v, paste not working after electron update | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.4
- OS Version: Arch Linux (6.12.10-arch1-1 x86_64 GNU/Linux)
- Desktop: Gnome 47 - X11 or Wayland (code-flags.conf: --enable-features=UseOzonePlatform,WaylandWindowDecorations --ozone-platform-hint=wayland)
Steps to Reproduce:
1. upgrade electron to current version: 32.3.0-1-x86_64
2. in VSCode, open any file, select and copy text via ctrl-c
3. paste via ctrl-v into the same editor or any other editor: nothing is pasted (the same applies to other input fields, such as find, and to paste via the context menu)
4. the selection is actually copied, since it is possible to paste it into e.g. gedit
5. downgrading electron to version electron32-32.2.8-3-x86_64 solves the issue
| triage-needed | low | Critical |
2,808,999,555 | ollama | Use cases for using Ollama in Microsoft Word | If Microsoft Word users are a potential target audience for Ollama, what use cases would you expect? We recently released the following quick demo based on Ollama, and we are curious about what the next use case could be from this community's perspective. We’d greatly appreciate any advice.
* [Use Ollama in Microsoft Word Locally](https://medium.com/@gptlocalhost/using-ollama-in-microsoft-word-locally-b713d65d11b0)
| feature request | low | Minor |
2,809,002,551 | transformers | ERROR: Video features and Video Tokens do not match!!! | Hi, i tried to fine tune LLava Next Video using a custom Dataset, but every time i try to train the model i receive this error in output. I followed the guide for the fine tuning here https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LLaVA-NeXT-Video/Fine_tune_LLaVa_NeXT_Video_with_HFTrainer.ipynb.
I do not understand why i receive this error because if i pass a video composed by only one frame the model accept it and start the training. Actually I am using the Atlas Dione Dataset to fine tune this model.
This is my notebook for the fine tuning:
https://github.com/nicol-buratti/activity-recognition/blob/main/llava_finetuning.ipynb
this is the script for the dataset preparation:
https://github.com/nicol-buratti/activity-recognition/blob/main/activity_dataset.py
Thank you all! | bug,Multimodal,VLM | low | Critical |
2,809,014,671 | PowerToys | Service DCOM 100% CPU Usage when Powertoys and Edge is Running | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
It doesn't happen all the time, and I don't understand what might trigger it. I have two Edge windows and several hundred tabs open. In PowerToys, everything is disabled except Color.
Even when DCOM doesn't stick at 100% load, when I open the Task Manager with Edge and PowerToys running, I see a load spike from this service.
The sticking itself is strange because the CPU does not increase its frequency or heat up, yet it shows 100% load and benchmark results drop 10x.
CPU: Ryzen 5700X
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,809,042,931 | flutter | [tool_crash] _TypeError: (#0 Plugin._getDefaultPackageForPlatform (package:flutter_tools/src/plugins.dart:351:71)) | ## Command
```sh
flutter --no-color run --machine --track-widget-creation --device-id=chrome --start-paused --dart-define=flutter.inspector.structuredErrors=true lib/main.dart
```
## Steps to Reproduce
1. ...
2. ...
3. ...
## Logs
_TypeError: (#0 Plugin._getDefaultPackageForPlatform (package:flutter_tools/src/plugins.dart:351:71))
```console
#0 Plugin._getDefaultPackageForPlatform (package:flutter_tools/src/plugins.dart:351:71)
#1 new Plugin._fromMultiPlatformYaml (package:flutter_tools/src/plugins.dart:163:40)
#2 new Plugin.fromYaml (package:flutter_tools/src/plugins.dart:78:21)
#3 _pluginFromPackage (package:flutter_tools/src/flutter_plugins.dart:71:17)
<asynchronous suspension>
#4 findPlugins (package:flutter_tools/src/flutter_plugins.dart:93:28)
<asynchronous suspension>
#5 injectBuildTimePluginFiles (package:flutter_tools/src/flutter_plugins.dart:1058:32)
<asynchronous suspension>
#6 ResidentWebRunner._generateEntrypoint (package:flutter_tools/src/isolated/resident_web_runner.dart:524:7)
<asynchronous suspension>
#7 ResidentWebRunner._updateDevFS (package:flutter_tools/src/isolated/resident_web_runner.dart:581:16)
<asynchronous suspension>
#8 ResidentWebRunner.run.<anonymous closure> (package:flutter_tools/src/isolated/resident_web_runner.dart:331:41)
<asynchronous suspension>
#9 asyncGuard.<anonymous closure> (package:flutter_tools/src/base/async_guard.dart:111:24)
<asynchronous suspension>
```
```console
[✓] Flutter (Channel stable, 3.27.3, on macOS 15.1.1 24B91 darwin-arm64, locale en-IN)
• Flutter version 3.27.3 on channel stable at /Users/deendayal/flutter_sdk
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision c519ee916e (3 days ago), 2025-01-21 10:32:23 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/deendayal/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.67.2)
• VS Code at /Users/deendayal/Downloads/Visual Studio Code.app/Contents
• Flutter extension version 3.60.0
[✓] Connected device (4 available)
• M2006C3MI (mobile) • TWHUROIV6HPNAUZL • android-arm • Android 10 (API 29)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
! Error: Browsing on the local area network for Deendayal’s iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
## Flutter Application Metadata
**Type**: app
**Version**: 1.0.0+1
**Material**: true
**Android X**: false
**Module**: false
**Plugin**: true
**Android package**: null
**iOS bundle identifier**: null
**Creation channel**: stable
**Creation framework version**: 17025dd88227cd9532c33fa78f5250d548d87e9a
| c: crash,tool,P2,team-tool,triaged-tool | low | Critical |
2,809,061,187 | vscode | New file using keyboard only in file explorer not working on macbook | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
Version: 1.96.4 (Universal)
Commit: cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba
Date: 2025-01-16T00:16:19.038Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.2.0
- OS Version: Sequoia 15.2
Steps to Reproduce:
1. open any project in VSC
2. navigate to the file explorer using ⌘ ⇧ E (Command shift E)
3. navigate to the folder you want the file to be created in, using the arrow keys: ⬆ ⬇ to move, ⬅ to collapse a folder, ➡ to expand a folder
4. when the folder is selected, press ⇧ F10 (the default context-menu shortcut on MacBook); the menu appears

5. using the arrow keys, select create folder or create file and press Enter
6. observe that in both cases it starts renaming the folder instead; it doesn't matter whether the folder is expanded or collapsed

Whereas if I use the physical mouse to do the same, it starts the interaction correctly (unless VS Code wasn't focused, which makes the interaction buggy, but that is not the case here)

| bug,file-explorer | low | Critical |
2,809,099,990 | deno | `npm:google-auth-library` causes unresolved top-level await | Version: Deno 2.1.6
If the Google Auth library fails to look up googleapis.com, that error is apparently impossible to `catch` from Deno scripts.
# Reproduction
1. Download the password-protected service account file from the bottom of the first comment of [this issue](https://github.com/denoland/deno/issues/27384) and unzip `service-account-key-deno-firestore-dns-bug.json`.
2. Save the following Deno script as `deno-fail.ts`:
```ts
import { GoogleAuth } from 'npm:google-auth-library';
export async function getOAuthToken() {
try {
// Ensure this call succeeds by using the valid service account JSON file extracted earlier
const auth = new GoogleAuth({
keyFilename: 'service-account-key-deno-firestore-dns-bug.json',
});
// Ensure this call succeeds by using the valid service account JSON file extracted earlier
const client = await auth.getClient();
console.log('---- Before .getAccessToken() call');
// await fetch('https://pleasedontregisterthisdomainnam.e'); // this IS caught in both Deno and Node
await client.getAccessToken(); // this will throw if offline
console.log('---- execution correctly never reaches this point');
} catch (e) {
console.log('🦕🦕🦕🦕 Why does execution not reach this point in Deno but does in Node?', (e as Error).message);
}
}
const token = await getOAuthToken();
Deno.test('access token', async () => {
console.log(token);
});
```
3. Go offline
4. Run the test:
```
$ deno test -A deno-fail.ts
------- pre-test output -------
---- Before .getAccessToken() call
Uncaught error from ./deno-fail.ts FAILED
Uncaught error from ./deno-fail.ts FAILED
ERRORS
./deno-fail.ts (uncaught error)
error: Top-level await promise never resolved
const token = await getOAuthToken();
^
at <anonymous> (file:///home/dandv/deno-bugs/deno-fail.ts:23:15)
This error was not caught from a test and caused the test runner to fail on the referenced module.
It most likely originated from a dangling promise, event/timeout handler or top-level code.
FAILURES
./deno-fail.ts (uncaught error)
FAILED | 0 passed | 2 failed (0ms)
error: Test failed
```
Also note that the error reason (`getaddrinfo EAI_AGAIN www.googleapis.com`) was not output.
# Node version
The node version of the script behaves as expected. Here is `node-works.ts`:
```ts
import test from 'node:test';
import { GoogleAuth } from 'google-auth-library';
export async function getOAuthToken() {
try {
// Ensure this call succeeds by using the valid service account JSON file extracted earlier
const auth = new GoogleAuth({
keyFilename: '../../service-account-key-cfsh.json',
});
// Ensure this call succeeds by using the valid service account JSON file extracted earlier
const client = await auth.getClient();
console.log('---- Before .getAccessToken() call');
// await fetch('https://pleasedontregisterthisdomainnam.e'); // this IS caught in both Deno and Node
await client.getAccessToken(); // this will throw if offline
console.log('---- execution correctly never reaches this point');
} catch (e) {
debugger;
console.log('🦕🦕🦕🦕 Why does execution not reach this point in Deno but does in Node?', (e as Error).message);
}
}
const token = await getOAuthToken();
test('access token', async () => {
console.log(token);
});
```
When run with node, the script correctly prints the dinos line.
```
$ node --experimental-strip-types node-works.ts
(node:1523620) ExperimentalWarning: Type Stripping is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
(node:1523620) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
---- Before .getAccessToken() call
🦕🦕🦕🦕 Why does execution not reach this point in Deno but does in Node? request to https://www.googleapis.com/oauth2/v4/token failed, reason: getaddrinfo EAI_AGAIN www.googleapis.com
undefined
✔ access token (1.308108ms)
``` | node compat | low | Critical |
2,809,127,396 | rust | Missing Safety Guarantee in `merge_down` Function Documentation (`smallsort` Module) | ### Location
The _SAFETY_ comment in question is located in the [`merge_down`](https://github.com/rust-lang/rust/blob/48ef38d3503a04e5e18157e664e3e65dc7eca1a5/library/core/src/slice/sort/shared/smallsort.rs#L721) function in the `smallsort` module in `shared` in `sort` in the `slice` module of the `core` crate.
### Summary
While working on the [Rust std-lib verification](https://github.com/model-checking/verify-rust-std), I identified a missing requirement in the _SAFETY_ comment in the [`merge_down`](https://github.com/rust-lang/rust/blob/48ef38d3503a04e5e18157e664e3e65dc7eca1a5/library/core/src/slice/sort/shared/smallsort.rs#L721) function in the `smallsort` module.
## Description of the problem
In the following code, the _SAFETY_ comment does not guarantee that `dst.sub(1)` remains within the same allocated object as `dst`. This violates one of the [safety requirements for `pointer.sub`](https://doc.rust-lang.org/std/primitive.pointer.html#safety-8).
```rust
unsafe fn merge_down<T, F: FnMut(&T, &T) -> bool>(
mut left_src: *const T,
mut right_src: *const T,
mut dst: *mut T,
is_less: &mut F,
) -> (*const T, *const T, *mut T) {
// snip
// SAFETY: The caller must guarantee that `left_src`, `right_src` are valid
// to read and `dst` is valid to write, while not aliasing.
unsafe {
// snip
dst = dst.sub(1); // <- issue here
}
(left_src, right_src, dst)
}
```
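For illustration, a self-contained example (not the library code) of the guarantee the updated comment would ask callers to uphold: `dst` must not point at the first element of its allocation, so that `dst.sub(1)` stays within the same allocated object:

```rust
fn main() {
    let mut buf = [0i32, 10, 20];
    // dst points at buf[1], not buf[0], so dst.sub(1) still lands
    // inside the same allocated object -- the guarantee merge_down
    // needs from its caller.
    let dst: *mut i32 = unsafe { buf.as_mut_ptr().add(1) };
    unsafe {
        let prev = dst.sub(1); // in bounds: points at buf[0]
        *prev = 99;
    }
    assert_eq!(buf[0], 99);
    println!("ok");
}
```

Had `dst` pointed at `buf[0]` instead, the same `sub(1)` would leave the allocation and be undefined behavior, which is exactly what the proposed comment change documents.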
## Proposed fix
Update the _SAFETY_ comment as follows.
```diff
// SAFETY: The caller must guarantee that `left_src`, `right_src` are valid
- // to read and `dst` is valid to write, while not aliasing.
+ // to read, `dst` is valid to write, while not aliasing, and `dst.sub(1)`
+ // is within the same allocated object as `dst`.
```
## `merge_up` does not have the same problem
Although the _SAFETY_ comment in the [`merge_up`](https://github.com/rust-lang/rust/blob/48ef38d3503a04e5e18157e664e3e65dc7eca1a5/library/core/src/slice/sort/shared/smallsort.rs#L688) function may appear to have a similar issue, I believe this is not the case. Since `dst` must already be valid for a write, `dst.add(1)` will remain within the bounds of the same allocated object. | A-docs,T-libs | low | Minor |
2,809,131,152 | vscode | Cannot open a terminal inside a dev container |
Type: <b>Bug</b>
It was working fine. Today when I re-opened, the terminals that were already open showed a red disconnection symbol, but hovering over it gave no info.
I closed it completely and reopened; however, the issue persists: I cannot open any terminal. When I try to do so, some sort of ghost tab appears in the tabs panel on the left (in the terminal section at the bottom), but it has no title and absolutely nothing happens.
I would really appreciate it if anyone takes a look into it :)
VS Code version: Code 1.93.1 (Universal) (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Darwin arm64 24.1.0
Modes:
Remote OS version: Linux x64 6.10.14-linuxkit
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M3 Pro (12 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 3, 3|
|Memory (System)|18.00GB (0.26GB free)|
|Process Argv|--crash-reporter-id ec1e1863-9ec5-463a-a8b2-77556a5a69e4|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|Dev Container: Existing Docker Compose (Extend) @ desktop-linux|
|OS|Linux x64 6.10.14-linuxkit|
|CPUs|unknown (12 x 0)|
|Memory (System)|7.65GB (1.26GB free)|
|VM|0%|
</details><details><summary>Extensions (16)</summary>
Extension|Author (truncated)|Version
---|---|---
vsc-python-indent|Kev|1.19.0
jupyter-keymap|ms-|1.1.2
remote-containers|ms-|0.394.0
remote-ssh|ms-|0.115.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
pdf|tom|1.2.2
copilot|Git|1.250.0
copilot-chat|Git|0.20.3
vscode-docker|ms-|1.29.4
debugpy|ms-|2024.14.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.12.1
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed,terminal | low | Critical |
2,809,151,507 | ollama | burn windows update at the stake | Listen, Its 5am, I am tired, I dont want to be awake anymore, if this makes any sense or is logically possible, please do ANYTHING
I just posted [this](https://www.reddit.com/r/ollama/comments/1i8spym/recovering_lost_model_files_due_to_forced_windows/) to the subreddit
Hours and hours lost to Windows Update. **Please just automatically turn it off while downloading models.**
While downloading, run `net stop wuauserv` or whatever you need to do.
While writing the reddit post, I was again prompted to update windows. Mid recovery attempt.
WHEN WRITING THIS, I HAVE AGAIN BEEN PROMPTED TO UPDATE WINDOWS, AFTER I TOLD IT "I DONT WANT TO UPDATE"

(my server is on the wrong timezone for some reason; I don't care to fix it)
You have no idea the pain and rage that consumes me.
Please save others from knowing what I know........ | feature request | low | Minor |
2,809,177,304 | kubernetes | [Flaking test] [sig-node] Kubernetes e2e suite.[It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails | ### Which jobs are flaking?
master-blocking
- gce-ubuntu-master-containerd
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-gce-containerd/1882490406982127616)
[Triage](https://storage.googleapis.com/k8s-triage/index.html?test=ods%20Extended%20Pod%20Container%20Status%20should%20never%20report%20container%20start%20when%20an%20init%20container%20fails&xjob=e2e-kops)
### Since when has it been flaking?
[1/15/2025, 1:23:19 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-ec2-arm64-containerd/1879564729022681088)
[1/20/2025, 7:25:30 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-ec2-containerd/1881467771536019456)
[1/21/2025, 7:26:40 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-ec2-arm64-containerd/1881830414499188736)
[1/22/2025, 1:24:08 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ec2-eks-al2023-arm64/1881920257635913728)
[1/23/2025, 3:07:44 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-gce-containerd/1882490406982127616)
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#gce-ubuntu-master-containerd
### Reason for failure (if possible)
```
{ failed [FAILED] 1 errors:
pod pod-terminate-status-2-10 on node bootstrap-e2e-minion-group-5d4d container unexpected exit code 2: start=2025-01-23 18:26:43 +0000 UTC end=2025-01-23 18:26:44 +0000 UTC reason=Error message=
In [It] at: k8s.io/kubernetes/test/e2e/node/pods.go:548 @ 01/23/25 18:27:35.688
}
```
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig node
cc: @kubernetes/release-team-release-signal | sig/node,kind/flake,needs-triage | low | Critical |
2,809,178,195 | ollama | Error: server metal not listed in available servers map | ### What is the issue?
I downloaded Ollama today on my Macbook (Apple M3 Pro, with MacOS Sonoma 14.3 23D56), and tried to run deepseek-r1:8b, but ollama failed with this error:
> $ ollama run deepseek-r1:8b
> Error: [0] server metal not listed in available servers map[]
p.s. I can run this model with llama-cli on the same device.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7 | bug | low | Critical |
2,809,182,773 | ollama | Error when trying to download deepseek-r1:7b | ### What is the issue?
I tried `ollama run deepseek-r1:7b`.
It started downloading, then after a minute this error appeared:
Error: Post "http://127.0.0.1:11434/api/show": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7 | bug | low | Critical |
2,809,189,231 | flutter | Add ability to tree shake icon for rfw | ### Use case
Example code:
```
import 'package:flutter/widgets.dart';
import 'package:rfw/rfw.dart';
void main() {
final runtime = Runtime()
..update(const LibraryName(['core']), createCoreWidgets());
runApp(
RemoteWidget(
runtime: runtime,
data: DynamicContent(),
widget: const FullyQualifiedWidgetName(LibraryName(['remote']), 'root'),
),
);
}
```
Try to build in profile mode:
`flutter build apk --profile`
Build failed and below is the log.
```
This application cannot tree shake icons fonts. It has non-constant instances of IconData at the following locations:
- file:///Users/zhenhao.ng/.pub-cache/hosted/pub.dev/rfw-1.0.30/lib/src/flutter/argument_decoders.dart:806:12
Target aot_android_asset_bundle failed: Error: Avoid non-constant invocations of IconData or try to build again with --no-tree-shake-icons.
```
### Proposal
Allow tree shaking of the IconData instances used by rfw. | package,team-ecosystem,p: rfw | low | Critical |
2,809,189,946 | transformers | ZeroShotClassificationArgumentHandler should be explicit it has a somewhat unsafe internal behaviour. | ### Feature request
Currently, `ZeroShotClassificationArgumentHandler::__call__` will execute https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/zero_shot_classification.py#L41 , that is, it will call Python's `.format()` on the provided hypothesis template to interpolate the label, allowing the full extent of the `.format()` placeholder syntax, which is quite large.
For example, passing `hypothesis_template = "{:>9999999999}"` and any label will happily eat 100Go of RAM because the whole scope of python formatting is allowed.
This is not made clear anywhere, but library users need to know they have to sanitize those inputs very carefully.
I think that at least the docstring of the class, and ideally the reference doc for "hypothesis_template" on https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_classification should be updated to mention this, it's quite important for users of the lib (in particular for parameters that will naturally tend to be user facing in the end).
Alternatively, this call could accept {} only as a placeholder, it's hard to see a legitimate use case for exotic formatting of labels in the hypothesis template.
Thanks :-)
### Motivation
I think it's good to help the internet be a safer place in general :-)
### Your contribution
It's unclear to me whether I can contribute to the documentation on huggingface.co.
I could contribute a fix to be stricter on allowed hypothesis_template in transformers though if you want to take this route (I'm pretty sure even an AI model could contribute the two lines needed though...) | Feature request | low | Minor |
2,809,211,922 | vscode | Allow customization of editor tab height and editor tab bar scrollbar height | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I would like to be able to customize the height of tabs and of the scrollbar in the tab bar. There are options to adjust the size of these but they're really only choices between small and smaller.
I always find myself fighting with the minuscule scrollbar and inadvertently clicking other open files. It gets frustrating when it happens dozens of times in the same session.
If these were customizable in a way similar to how you can set rulers in the editor, it would not remove current functionality for users who want these to be tiny, and it would make it easier for users needing them to be larger(for disabilities or other reasons).
Thank you! | feature-request,workbench-tabs | low | Minor |
2,809,213,866 | deno | Repl prints an error twice | Version: Deno 2.1.7
1. Run `deno` to start the repl
2. Execute `for (console.log("a") of [1]);`
It reports the syntax error twice:
```
➜ deno
Deno 2.1.7
exit using ctrl+d, ctrl+c, or close()
REPL is running with all permissions allowed.
To specify permissions, run `deno repl` with allow flags.
> for (console.log("a") of [1]);
error: The left-hand side of an assignment expression must be a variable or a property access. at file:///repl.tsx:1:6
for (console.log("a") of [1]);
~~~~~~~~~~~~~~~~
The left-hand side of an assignment expression must be a variable or a property access. at file:///repl.tsx:1:6
for (console.log("a") of [1]);
~~~~~~~~~~~~~~~~
```
This is not an SWC bug (in their playground the error is only reported once), and I could not find any other syntax error for which this double-printing happens. | repl | low | Critical |
2,809,214,360 | pytorch | `torch.ops.aten.copy` causes SIGSEGV when handling sparse CSR tensors with invalid metadata | ### 🐛 Describe the bug
Using torch.ops.aten.copy with sparse CSR tensors can cause a segmentation fault (SIGSEGV). The issue appears to stem from a lack of validation for the sparse tensor metadata (crow_indices, col_indices, and values). When the metadata contains invalid or uninitialized data (e.g., due to torch.randn generating sparse CSR tensors with incomplete initialization), torch.ops.aten.copy attempts to access this data directly, leading to undefined behavior and a crash.
example:
```python
import torch
print(torch.__version__)
# Create sparse CSR tensors with torch.randn
sym_0 = (5, 5)
sym_1 = torch.sparse_csr
var_1 = torch.randn(size=sym_0, layout=sym_1) # Generates sparse CSR tensor
var_2 = torch.randn(size=sym_0, layout=sym_1)
# Attempt to copy data
res = torch.ops.aten.copy(var_1, var_2)
print(res)
```
Observed behavior:
```
2.7.0.dev20250116+cu124
/home/user/test.py:8: UserWarning: Sparse CSR tensor support is in beta state.
If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues.
(Triggered internally at /pytorch/aten/src/ATen/SparseCsrTensorImpl.cpp:53.)
var_1 = torch.randn(size=sym_0, layout=sym_1)
fish: Job 2, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
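For context, the CSR invariants that a copy implicitly relies on can be sketched in plain Python — this is illustrative only; the function name and checks below are not PyTorch internals:

```python
# Illustrative sketch of CSR metadata invariants (not PyTorch's actual
# validation code). Per the report, aten.copy walks this metadata directly,
# so violating any of these can lead to out-of-bounds reads.
def validate_csr(crow_indices, col_indices, num_rows, num_cols):
    # One compressed row pointer per row, plus a trailing end pointer.
    assert len(crow_indices) == num_rows + 1
    # Row pointers start at 0, never decrease, and end at nnz.
    assert crow_indices[0] == 0
    assert all(a <= b for a, b in zip(crow_indices, crow_indices[1:]))
    assert crow_indices[-1] == len(col_indices)
    # Every column index must be in range.
    assert all(0 <= c < num_cols for c in col_indices)

# A well-formed 2x3 CSR layout with three non-zeros passes:
validate_csr([0, 1, 3], [2, 0, 1], num_rows=2, num_cols=3)
```

Under this reading, checks along these lines (or the `check_invariants` option on the sparse constructors) would turn the crash into a clear error before the copy dereferences the metadata.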
### Versions
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,module: crash,triaged | low | Critical |
2,809,238,675 | TypeScript | Type variables in type abstractions are not properly concretized | ### 🔎 Search Terms
type variables unknown
### 🕗 Version & Regression Information
- This changed between versions 4.9.5 and 5.0.4.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.3#code/MYewdgzgLgBAZjAvDAPADQDQD4AUBzALlQEFscAjItASiSxkxgitsXrxwmoG4AoX0JFgAnAKaxkcHCgBCZAB5EZregEYMMAETRhmnrwD0BmCZgA9APxA
### 💻 Code
```ts
const f = <X,>(g: <A,>(b: X) => X, s: X) => g(s);
const ret = f(<B,>(x: B) => 1, "str");
// ^? const ret: unknown
```
### 🙁 Actual behavior
No type error is reported and `ret` is inferred as `unknown` type.
### 🙂 Expected behavior
I expect a type error to be reported, because the call to `f` attempts to concretize the type variable X both as a number and as a string. In fact, TypeScript 4.9.5 reported a type error.
### Additional information about the issue
I am not sure that TypeScript 4.9.5 was perfect. The following code is inferred as `unknown`, but I think `string` is correct. This behavior goes back to TypeScript 3.5.1.
```ts
const f = <X,>(g: <A,>(b: X) => X, s: X) => g(s);
const ret = f(<B,>(x: B) => x, "str");
// ^? const ret: unknown
// actual: ret is inferred as unknown
// expected: ret is inferred as string
```
https://www.typescriptlang.org/play/?ts=5.7.3#code/MYewdgzgLgBAZjAvDAPADQDQD4AUBzALlQEFscAjItASiSxkxgitsXrxwmoG4AoX0JFgAnAKaxkcHCgBCZAB5EZrevIwwARNGEaevAPT6YxmAD0A-P0MwAhsCgBXGwBsiY2AEsIMD2DijhMQATW28HMABrMBAAdzADI1F5AAdRe1Egt3Efb19-QIzQpihhXzwgA
I also found a strange behavior in which a type variable that should have been concretized was returned without being concretized. I think this may be related, but I could be wrong.
```ts
const f = <X,>(g: <A,>(x: X) => X) => g<string>;
const h = f<number>(<B,>(x: number) => 1);
// ^? const h: (x: X) => X
h(1);
// actual: type error
// expected: no type error
```
https://www.typescriptlang.org/play/?ts=5.7.3#code/MYewdgzgLgBAZjAvDAPADQDQD4AUBzALlQEFscAPItASiSxhrpjxWgCcBLMPLAbgCh+oSLAAWSeCjABXALYAjAKZtcKAEJlKMGQuW1E9AIzUBAelMxLMAHoB+QaJzGzFgIbAo01wBsiUAJ4ADoowymwgbPzmoeTBHooAJkRgIDABwaFs4WxAA | Bug,Fix Available | low | Critical |
2,809,263,612 | kubernetes | [Flaking Test] [sig-api-machinery] k8s.io/kubernetes/test/integration/apiserver/coordinatedleaderelection.coordinatedleaderelection | ### Which jobs are flaking?
master-blocking
- integration-master
### Which tests are flaking?
k8s.io/kubernetes/test/integration/apiserver/coordinatedleaderelection.coordinatedleaderelection
[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1881787128904421376)
[Triage](https://storage.googleapis.com/k8s-triage/index.html?test=k8s.io%2Fkubernetes%2Ftest%2Fintegration%2Fapiserver%2Fcoordinatedleaderelection.coordinatedleaderelection&xjob=e2e-kops)
### Since when has it been flaking?
[1/14/2025, 12:41:26 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1878882224963588096)
[1/18/2025, 1:29:23 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1880343855438499840)
[1/20/2025, 1:20:29 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1881247568734720000)
[1/20/2025, 1:37:27 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-1-32/1881251846668947456)
[1/22/2025, 1:04:24 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1881787128904421376)
[1/23/2025, 1:45:57 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-1-32/1882341172085526528)
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#integration-master
### Reason for failure (if possible)
```
{Failed === RUN TestCoordinatedLeaderElectionLeaseTransfer
testserver.go:582: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
testserver.go:402: runtime-config=map[api/all:true]
testserver.go:403: Starting kube-apiserver on port 43809...
testserver.go:438: Waiting for /healthz to be ok...
[-]poststarthook/start-apiextensions-controllers failed: not finished
[-]poststarthook/crd-informer-synced failed: not finished
[-]poststarthook/start-service-ip-repair-controllers failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]poststarthook/bootstrap-controller failed: not finished
[-]poststarthook/apiservice-registration-controller failed: not finished
[-]poststarthook/apiservice-discovery-controller failed: not finished
[-]autoregister-completion failed: missing APIService: [v1. v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.discovery.k8s.io v1.events.k8s.io v1.flowcontrol.apiserver.k8s.io v1.networking.k8s.io v1.node.k8s.io v1.policy v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1alpha1.admissionregistration.k8s.io v1alpha1.internal.apiserver.k8s.io v1alpha1.storage.k8s.io v1alpha2.coordination.k8s.io v1alpha3.resource.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.networking.k8s.io v1beta1.resource.k8s.io v1beta1.storage.k8s.io v2.autoscaling]
[-]poststarthook/start-apiextensions-controllers failed: not finished
[-]poststarthook/crd-informer-synced failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]poststarthook/bootstrap-controller failed: not finished
[-]poststarthook/apiservice-registration-controller failed: not finished
[-]poststarthook/apiservice-discovery-controller failed: not finished
[-]autoregister-completion failed: missing APIService: [v1. v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.discovery.k8s.io v1.events.k8s.io v1.flowcontrol.apiserver.k8s.io v1.networking.k8s.io v1.node.k8s.io v1.policy v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1alpha1.admissionregistration.k8s.io v1alpha1.internal.apiserver.k8s.io v1alpha1.storage.k8s.io v1alpha2.coordination.k8s.io v1alpha3.resource.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.networking.k8s.io v1beta1.resource.k8s.io v1beta1.storage.k8s.io v2.autoscaling]
[-]poststarthook/start-apiextensions-controllers failed: not finished
[-]poststarthook/crd-informer-synced failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]autoregister-completion failed: missing APIService: [v1. v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.discovery.k8s.io v1.events.k8s.io v1.flowcontrol.apiserver.k8s.io v1.networking.k8s.io v1.node.k8s.io v1.policy v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1alpha1.admissionregistration.k8s.io v1alpha1.internal.apiserver.k8s.io v1alpha1.storage.k8s.io v1alpha2.coordination.k8s.io v1alpha3.resource.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.networking.k8s.io v1beta1.resource.k8s.io v1beta1.storage.k8s.io v2.autoscaling]
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]autoregister-completion failed: missing APIService: [v1. v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.discovery.k8s.io v1.events.k8s.io v1.flowcontrol.apiserver.k8s.io v1.networking.k8s.io v1.node.k8s.io v1.policy v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1alpha1.admissionregistration.k8s.io v1alpha1.internal.apiserver.k8s.io v1alpha1.storage.k8s.io v1alpha2.coordination.k8s.io v1alpha3.resource.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.networking.k8s.io v1beta1.resource.k8s.io v1beta1.storage.k8s.io v2.autoscaling]
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]autoregister-completion failed: missing APIService: [v1. v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.discovery.k8s.io v1.events.k8s.io v1.flowcontrol.apiserver.k8s.io v1.networking.k8s.io v1.node.k8s.io v1.policy v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1alpha1.admissionregistration.k8s.io v1alpha1.internal.apiserver.k8s.io v1alpha1.storage.k8s.io v1alpha2.coordination.k8s.io v1alpha3.resource.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.networking.k8s.io v1beta1.resource.k8s.io v1beta1.storage.k8s.io v2.autoscaling]
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.22697537 +0000 UTC m=+2.405278625"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "system" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.232527805 +0000 UTC m=+2.410831090"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "node-high" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "True",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.236808066 +0000 UTC m=+2.415111321"},
- Reason: "",
+ Reason: "NotFound",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "exempt" but there is no such object`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.236810496 +0000 UTC m=+2.415113761"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "leader-election" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.244349304 +0000 UTC m=+2.422652559"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "leader-election" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.244351044 +0000 UTC m=+2.422654299"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "workload-high" and it exists`,
}
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.253019695 +0000 UTC m=+2.431322950"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "workload-high" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.253021215 +0000 UTC m=+2.431324470"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "workload-high" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.253025715 +0000 UTC m=+2.431328970"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "workload-high" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.266023747 +0000 UTC m=+2.444327022"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "exempt" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.266032617 +0000 UTC m=+2.444335882"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "global-default" and it exists`,
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.266040247 +0000 UTC m=+2.444343502"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "workload-low" and it exists`,
}
Type: "Dangling",
- Status: "True",
+ Status: "False",
- LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28 +0000 UTC"},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.266050447 +0000 UTC m=+2.444353702"},
- Reason: "NotFound",
+ Reason: "Found",
Message: strings.Join({
"This FlowSchema references the PriorityLevelConfiguration object",
` named "exempt" `,
- "but there is no such object",
+ "and it exists",
}, ""),
}
Type: "Dangling",
- Status: "",
+ Status: "False",
- LastTransitionTime: v1.Time{},
+ LastTransitionTime: v1.Time{Time: s"2025-01-21 19:59:28.284227301 +0000 UTC m=+2.462530576"},
- Reason: "",
+ Reason: "Found",
- Message: "",
+ Message: `This FlowSchema references the PriorityLevelConfiguration object named "catch-all" and it exists`,
}
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
leases.coordination.k8s.io "leader-election-controller" not found
leases.coordination.k8s.io "leader-election-controller" not found
testserver.go:582: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
testserver.go:402: runtime-config=map[api/all:true]
testserver.go:403: Starting kube-apiserver on port 42539...
testserver.go:438: Waiting for /healthz to be ok...
[-]poststarthook/start-apiextensions-controllers failed: not finished
[-]poststarthook/crd-informer-synced failed: not finished
[-]poststarthook/start-service-ip-repair-controllers failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]poststarthook/bootstrap-controller failed: not finished
[-]poststarthook/apiservice-registration-controller failed: not finished
[-]poststarthook/apiservice-discovery-controller failed: not finished
[-]autoregister-completion failed: missing APIService: [v1. v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.discovery.k8s.io v1.events.k8s.io v1.flowcontrol.apiserver.k8s.io v1.networking.k8s.io v1.node.k8s.io v1.policy v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1alpha1.admissionregistration.k8s.io v1alpha1.internal.apiserver.k8s.io v1alpha1.storage.k8s.io v1alpha2.coordination.k8s.io v1alpha3.resource.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.networking.k8s.io v1beta1.resource.k8s.io v1beta1.storage.k8s.io v2.autoscaling]
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]poststarthook/bootstrap-controller failed: not finished
[-]poststarthook/apiservice-registration-controller failed: not finished
[-]poststarthook/apiservice-discovery-controller failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/priority-and-fairness-config-producer failed: not finished
[-]poststarthook/apiservice-registration-controller failed: not finished
[-]poststarthook/apiservice-discovery-controller failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
client rate limiter Wait returned an error: context deadline exceeded
leaderelection_test.go:162: Expected the cle lease lock to transition to the first apiserver
--- FAIL: TestCoordinatedLeaderElectionLeaseTransfer (25.47s)
}
```
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig api-machinery | sig/api-machinery,kind/flake,needs-triage | low | Critical |
2,809,271,562 | ollama | Model loaded each time | ### What is the issue?
The model reloads every time: after I terminate the cmd and run it again, it needs the internet to load the model again.

### OS
_No response_
### GPU
_No response_
### CPU
Intel
### Ollama version
0.5.4 | bug | low | Minor |
2,809,305,316 | next.js | Automatically adding initial-scale to meta viewport | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/fervent-microservice-fxis37?fork=true
### To Reproduce
1. open root layout.tsx
2. add
```ts
export const viewport: Viewport = {
width: 390,
userScalable: false,
};
```
### Current vs. Expected behavior
**Current**
`<meta name="viewport" content="width=390, initial-scale=1, user-scalable=no">`
**Expected**
`<meta name="viewport" content="width=390, user-scalable=no">`
`initial-scale` is not expected here as a default, and I am not able to remove it. I expect full control over the meta viewport tag.
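The expected behavior — serializing only the keys the user actually set — can be sketched as follows (illustrative only, not Next.js internals; the key map is an assumption for this sketch):

```javascript
// Illustrative sketch: serialize a viewport object to a meta "content"
// string without injecting defaults (like initial-scale) the user never set.
const KEY_MAP = {
  width: "width",
  initialScale: "initial-scale",
  userScalable: "user-scalable",
};

function serializeViewport(viewport) {
  return Object.entries(viewport)
    .map(([key, value]) => {
      // userScalable is a boolean in the config but yes/no in the meta tag.
      if (key === "userScalable") value = value ? "yes" : "no";
      return `${KEY_MAP[key]}=${value}`;
    })
    .join(", ");
}

console.log(serializeViewport({ width: 390, userScalable: false }));
// width=390, user-scalable=no
```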
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 18:56:34 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 22.10.0
npm: 10.9.0
Yarn: N/A
pnpm: 8.15.1
Relevant Packages:
next: 15.1.6 // Latest available version is detected (15.1.6).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Metadata
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
_No response_ | Metadata | low | Minor |
2,809,323,533 | vscode | Source Control is missing a disable for this file option |
Type: <b>Feature Request</b>
After I enabled source control for my project, the green change-indicator text became annoying, so I wanted to turn it off for that file, but there is no option to do so. Please consider adding one.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.26100
| info-needed | low | Minor |
2,809,326,090 | langchain | Azure search similarity_search_with_score method error | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores.azuresearch import AzureSearch
azure_search = AzureSearch(
azure_search_endpoint=search_url,
index_name=index_name,
embedding_function=embedding_function,
azure_credential=credential,
)
result = azure_search.similarity_search_with_score(
query="example query",
k=10,
search_type='similarity'
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When passing the `search_type` parameter to `similarity_search_with_score`, the call fails later because the argument is forwarded as a kwarg. The error is:
TypeError: Session.request() got an unexpected keyword argument 'search_type'
The error comes from the method definition [in line 682](https://github.com/langchain-ai/langchain/blob/dbb6b7b103d9c32cea46d3848839a4c9cbb493c3/libs/community/langchain_community/vectorstores/azuresearch.py#L682)
and the solution would simply be to add `search_type` as a parameter in the method:

```python
def similarity_search_with_score(
    self, query: str, *, k: int = 4, search_type: str, **kwargs: Any
) -> List[Tuple[Document, float]]:
```
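The failure mode is easy to reproduce in isolation — a minimal, illustrative sketch with stand-in functions (not the actual library code):

```python
# Illustrative only: shows why forwarding an unrecognized keyword through
# **kwargs fails deep in the call chain rather than at the public API.
def session_request(url, timeout=None):  # stand-in for requests' Session.request()
    return f"GET {url}"

def similarity_search_with_score(query, *, k=4, **kwargs):
    # 'search_type' is never consumed here, so it leaks into the HTTP call.
    return session_request(query, **kwargs)

try:
    similarity_search_with_score("example query", k=10, search_type="similarity")
except TypeError as e:
    print(e)  # session_request() got an unexpected keyword argument 'search_type'
```

Declaring `search_type` explicitly in the signature, as proposed above, makes the wrapper consume the argument instead of forwarding it.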
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.2 (tags/v3.11.2:878ead1, Feb 7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.3.1
> langchain_openai: 0.2.3
> langchain_postgres: 0.0.12
> langchain_text_splitters: 0.3.5
> langchain_unstructured: 0.1.6
> langgraph_sdk: 0.1.44
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> onnxruntime: 1.19.2
> openai: 1.56.2
> orjson: 3.10.15
> packaging: 24.2
> pgvector: 0.2.5
> psycopg: 3.2.3
> psycopg-pool: 3.2.3
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> pytest: 8.3.3
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: Installed. No version info available.
> sqlalchemy: 2.0.37
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> unstructured-client: 0.28.1
> unstructured[all-docs]: Installed. No version info available.
> zstandard: 0.23.0 | Ɑ: vector store | low | Critical |
2,809,328,154 | react-native | [Text on Android]: setting selectable to true breaks the text truncation and lineHeight on Android | ### Description
I use Text component that has to contain selectable text and should be truncated if it extends the width of the parent component.
On iOS everything works fine. On Android, if I set `selectable={true}`, the truncation and line height of the text break.
### Steps to reproduce
1. import Text component from react-native
2. render Text component, add `selectable={true}`, `numberOfLines={1}`
3. the text is cut off instead of being truncated; the line height is larger than it should be, and the top of the second line of text is visible
### React Native Version
0.75.4
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
-
```
### Stacktrace or Logs
```text
-
```
### Reproducer
https://github.com/dariakoko/text-component-on-android
### Screenshots and Videos
<img width="338" alt="Image" src="https://github.com/user-attachments/assets/a1ae3cd8-4f1f-4a5a-99af-9f65e04d66e3" />
```jsx
<View style={{width: 200}}>
<Text style={{color: 'green'}}>default: </Text>
<Text style={{backgroundColor: 'yellow'}}>
Some long very very very very very long text
</Text>
<Text>-------------</Text>
<Text style={{color: 'green'}}>not selectable: </Text>
<Text numberOfLines={1} style={{backgroundColor: 'yellow'}}>
Some long very very very very very long text
</Text>
<Text>-------------</Text>
<Text style={{color: 'red'}}>selectable (issue): </Text>
<Text
selectable={true}
numberOfLines={1}
style={{backgroundColor: 'yellow'}}>
Some long very very very very very long text
</Text>
<View style={{height: 20, backgroundColor: 'pink'}}></View>
</View>
``` | Platform: Android,Needs: Triage :mag: | low | Major |
2,809,342,200 | pytorch | Kw argument `dtype` less relative with the functions themselves | ### 🐛 Describe the bug
The docs of [`torch.randint()`](https://pytorch.org/docs/stable/generated/torch.randint.html#torch-randint), [`torch.randint_like()`](https://pytorch.org/docs/stable/generated/torch.randint_like.html#torch-randint-like), and [`torch.randperm()`](https://pytorch.org/docs/stable/generated/torch.randperm.html#torch-randperm) describe their return values as follows:
> torch.randint: Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive).
> torch.randint_like: Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive).
> torch.randperm: Returns a random permutation of integers from 0 to n - 1.
All of them state that the returned tensor should contain integers. They share the keyword argument `dtype`, but when `dtype` is a floating-point type they all run without error (a TypeError would be expected).
### Minified repro
```python
import torch
dtype = torch.double # choice: torch.double, torch.half, torch.float
randint_result = torch.randint(0, 10, (100, 100), dtype=dtype)
randint_like_result = torch.randint_like(randint_result, 0, 10, dtype=dtype)
randperm_result = torch.randperm(10, dtype=dtype)
print('randint_result:', randint_result)
print('randint_like_result:', randint_like_result)
print('randperm_result:', randperm_result)
```
### Outputs
```txt
randint_result: tensor([[0., 4., 5., ..., 3., 3., 4.],
[6., 0., 2., ..., 0., 1., 4.],
[8., 3., 0., ..., 5., 6., 9.],
...,
[8., 9., 5., ..., 2., 8., 6.],
[2., 5., 4., ..., 5., 8., 4.],
[6., 1., 5., ..., 3., 8., 6.]], dtype=torch.float64)
randint_like_result: tensor([[5., 7., 6., ..., 0., 0., 0.],
[1., 0., 7., ..., 1., 6., 0.],
[1., 7., 0., ..., 9., 7., 8.],
...,
[9., 1., 4., ..., 5., 5., 5.],
[2., 1., 8., ..., 7., 9., 8.],
[8., 9., 1., ..., 1., 5., 3.]], dtype=torch.float64)
randperm_result: tensor([0., 2., 1., 5., 7., 6., 3., 9., 8., 4.], dtype=torch.float64)
```
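The guard that the docs imply could look like the following sketch (names are hypothetical, not PyTorch internals):

```python
# Illustrative sketch of the dtype guard the documented return types imply.
# The dtype names here are strings for self-containment; a real check would
# test against torch's integer dtypes.
INTEGER_DTYPES = {"uint8", "int8", "int16", "int32", "int64"}

def check_integer_dtype(dtype_name):
    if dtype_name not in INTEGER_DTYPES:
        raise TypeError(f"expected an integer dtype, got {dtype_name}")

check_integer_dtype("int64")        # ok
try:
    check_integer_dtype("float64")  # what torch.randint(..., dtype=torch.double) would hit
except TypeError as e:
    print(e)  # expected an integer dtype, got float64
```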
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1
cc @pbelevich @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @svekars @brycebortree @sekyondaMeta @mruberry @walterddr @mikaylagawarecki | triaged,module: random,module: edge cases | low | Critical |
2,809,352,899 | pytorch | ROCm+gcc 15 asserts | ### 🐛 Describe the bug
Fedora 42 will have gcc 15.
GCC 15's libstdc++ asserts in multiple places in the ROCm build.
The errors look like this:
```
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/array:210:2: error: reference to __host__ function '__glibcxx_assert_fail' in __host__ __device__ function
210 | __glibcxx_requires_subscript(__n);
| ^
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/debug/assertions.h:39:3: note: expanded from macro '__glibcxx_requires_subscript'
39 | __glibcxx_assert(_N < this->size())
| ^
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/x86_64-redhat-linux/bits/c++config.h:2553:12: note: expanded from macro '__glibcxx_assert'
2553 | std::__glibcxx_assert_fail(); \
| ^
/home/trix/ai/pytorch/aten/src/ATen/hip/detail/OffsetCalculator.cuh:89:7: note: called by 'get'
89 | offsets[arg] = linear_idx;
| ^
/home/trix/ai/pytorch/aten/src/ATen/native/hip/MemoryAccess.cuh:213:45: note: called by 'load<std::tuple<double, double>>'
213 | auto offset = input_offset_calculator.get(linear_idx);
| ^
/home/trix/ai/pytorch/aten/src/ATen/native/hip/Loops.cuh:59:10: note: called by 'elementwise_kernel_helper<(lambda at /home/trix/ai/pytorch/aten/src/ATen/native/hip/ActivationHardtanhKernel.hip:27:3), at::native::memory::policies::unroll<std::array<char *, 3>, TrivialOffsetCalculator<2>, TrivialOffsetCalculator<1>, at::native::memory::LoadWithoutCast, at::native::memory::StoreWithoutCast, 4>>'
59 | policy.load(args, idx);
| ^
/home/trix/ai/pytorch/aten/src/ATen/native/hip/HIPLoops.cuh:148:5: note: called by 'vectorized_elementwise_kernel<16, (lambda at /home/trix/ai/pytorch/aten/src/ATen/native/hip/ActivationHardtanhKernel.hip:27:3), std::array<char *, 3>>'
148 | elementwise_kernel_helper(f, policy);
| ^
/usr/lib/gcc/x86_64-redhat-linux/15/../../../../include/c++/15/x86_64-redhat-linux/bits/c++config.h:2547:3: note: '__glibcxx_assert_fail' declared here
2547 | __glibcxx_assert_fail()
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0a0+git446bca5
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42133-0
OS: Fedora Linux 42 (Workstation Edition Prerelease) (x86_64)
GCC version: (GCC) 15.0.1 20250114 (Red Hat 15.0.1-0)
Clang version: 19.1.6 (Fedora 19.1.6-2.fc42)
CMake version: version 3.31.4
Libc version: glibc-2.40.9000
Python version: 3.13.1 (main, Dec 9 2024, 00:00:00) [GCC 14.2.1 20241104 (Red Hat 14.2.1-6)] (64-bit runtime)
Python platform: Linux-6.13.0-0.rc7.20250114gitc45323b7560e.56.fc42.x86_64-x86_64-with-glibc2.40.9000
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon 780M (gfx1103)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42133
MIOpen runtime version: 3.3.0
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
CPU family: 25
Model: 116
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 53%
CPU max MHz: 5263.0000
CPU min MHz: 400.0000
BogoMIPS: 7984.65
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.5.0a0+git446bca5
[conda] Could not collect
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: rocm,triaged | low | Critical |
2,809,354,138 | yt-dlp | how to download playlist in bilibili.tv | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
how to download playlist from https://www.bilibili.tv/en/video/4787291598363136?bstar_from=bstar-web.ugc-video-detail.playlist.all
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-v', 'https://www.bilibili.tv/en/video/4787291598363136?bstar_from=bstar-web.ugc-video-detail.playlist.all']
[debug] User config "C:\Users\Candra\AppData\Roaming\yt-dlp.conf":['-S', 'res:360', '--concurrent-fragments', '10', '--output', '%(title)s.%(ext)s', '--retries', '100', '--file-access-retries', '100', '--fragment-retries', '100']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [c8541f8b1] (pip)
[debug] Python 3.13.1 (CPython AMD64 64bit) - Windows-11-10.0.22631-SP0 (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg 7.1-essentials_build-www.gyan.dev (setts), ffprobe 7.1-essentials_build-www.gyan.dev
[debug] Optional libraries: sqlite3-3.45.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Extractor Plugins: YTSE (YoutubeIE)
[debug] Plugin directories: ['C:\\Users\\Candra\\AppData\\Roaming\\yt-dlp-plugins\\yt-dlp-ChromeCookieUnlock\\yt_dlp_plugins', 'C:\\Users\\Candra\\pipx\\venvs\\yt-dlp\\Lib\\site-packages\\yt_dlp_plugins']
[debug] Loaded 1837 extractors
[BiliIntl] Extracting URL: https://www.bilibili.tv/en/video/4787291598363136?bstar_from=bstar-web.ugc-video-detail.playlist.all
[BiliIntl] 4787291598363136: Downloading webpage
[BiliIntl] 4787291598363136: Downloading video formats
[debug] Sort order given by user: res:360
[debug] Formats sorted by: hasvid, ie_pref, res:360(360.0), lang, quality, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 4787291598363136: Downloading 1 format(s): 9+1
[debug] Invoking http downloader on "https://upos-sz-mirroralibstar1.bilivideo.com/iupxcodeboss/80/02/n230529ad1el84aamq3ro3i06mje0280-1-111220110000.m4s?e=ig8euxZM2rNcNbdlhoNvNC8BqJIzNbfqXBvEqxTEto8BTrNvN0GvT90W5JZMkX_YN0MvXg8gNEV4NC8xNEV4N03eN0B5tZlqNxTEto8BTrNvNeZVuJ10Kj_g2UB02J0mN0B5tZlqNCNEto8BTrNvNC7MTX502C8f2jmMQJ6mqF2fka1mqx6gqj0eN0B599M=&uipk=5&nbs=1&deadline=1737747332&gen=playurlv2&os=alibstar1bv&oi=1734954642&trid=242d6bba969840fa91bafcdf0fb292a6i&mid=0&platform=pc&upsig=d7ea20abf2a38152454112267f659454&uparams=e,uipk,nbs,deadline,gen,os,oi,trid,mid,platform&bvc=vod&nettype=0&orderid=0,2&logo=00000000&f=i_0_0"
[debug] File locking is not supported. Proceeding without locking
``` | site-enhancement,triage | low | Critical |
2,809,423,565 | electron | Ubuntu 24.04 BrowserWindow setIcon does not change the icon | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.1
### What operating system(s) are you using?
Ubuntu
### Operating System Version
Ubuntu 24.04
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
The `setIcon(nativeImage)` method of a `BrowserWindow` instance changes the icon shown in the Ubuntu dock.
### Actual Behavior
The method call does not have any effect.
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/linux,bug :beetle:,component/BrowserWindow,has-repro-gist,33-x-y | low | Critical |
2,809,429,647 | langchain | LangGraph + PromptFlow KeyError __pf_main__ on instruction StateGraph(State) | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from typing import Annotated, Optional, TypedDict
class State(TypedDict):
question: Optional[str]
messages: Annotated[list, add_messages]
check_ups: Optional[list]
check_ups_preparations: Optional[list]
triggered_actions: Optional[list]
triggered_actions_indexes: Optional[list]
triggered_actions_thought: Optional[str]
step_is_manual: Optional[bool]
step_has_preparation: Optional[bool]
manual_step_executed: Optional[dict]
interaction_attempt_executed: Optional[dict]
graph_builder = StateGraph(State)
### Error Message and Stack Trace (if applicable)
KeyError __pf_main__
### Description
Hi,
I am building RAG applications using Azure PromptFlow and LangGraph, but my app crashes when the StateGraph (more specifically, its schema) is initialized on the instruction `StateGraph(State)`, and I get a `KeyError: __pf_main__`.
The crash happens on the instruction `self._add_schema(state_schema)` in StateGraph's init in the script ...\Lib\site-packages\langgraph\graph\state.py of LangGraph.
I tried even with the latest versions of the libraries LangChain (0.2.17), LangGraph (0.2.61), PromptFlow (1.17.1)
It seems to be due to the addition of state_schema handling in LangGraph and the use of PromptFlow in Azure. Does anyone have any idea how to fix this?
Thank you in advance
### System Info
C:\XX\pythonProject3\YY\Scripts\python.exe -m promptflow._cli._pf.entry flow test --flow c:\XX\YY\agent_assistant\flows\standard --user-agent "prompt-flow-extension/1.20.2 (win32; x64) VSCode/1.94.0"
Prompt flow service has started... | 🤖:bug | low | Critical |
2,809,457,146 | transformers | Support Shared Cache | ### Feature request
A new cache class that supports sharing the same or part of the KV cache between different layers to improve cache efficiency.
### Motivation
Many studies have shown that attention weights across different attention layers are often similar, and `KV cache sharing` causes only a small quality degradation while improving throughput by **2~3x tokens/sec**.
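The idea can be sketched roughly as follows. This is an illustrative Python sketch only; the class and method names are hypothetical and not the transformers `Cache` API: a layer-to-slot mapping lets several layers reuse one KV store.

```python
# Illustrative sketch only: names are hypothetical, not the transformers
# Cache API. Layers mapped to the same slot reuse a single KV store.
class SharedKVCache:
    def __init__(self, num_layers, share_map=None):
        # share_map[i] = index of the slot that layer i reads/writes;
        # default is one private slot per layer (no sharing).
        self.share_map = share_map or {i: i for i in range(num_layers)}
        self.slots = {}  # slot index -> (keys, values)

    def update(self, layer_idx, keys, values):
        slot = self.share_map[layer_idx]
        if slot not in self.slots:
            self.slots[slot] = (keys, values)  # first writer fills the slot
        return self.slots[slot]

# Layers 0/1 share slot 0, layers 2/3 share slot 2: half the KV memory.
cache = SharedKVCache(num_layers=4, share_map={0: 0, 1: 0, 2: 2, 3: 2})
cache.update(0, "k0", "v0")
cache.update(2, "k2", "v2")
print(cache.update(1, "k1", "v1"))  # -> ('k0', 'v0'): layer 1 reuses layer 0
print(len(cache.slots))             # -> 2: only two distinct KV stores kept
```

A real implementation would store tensors and handle concatenation along the sequence axis, but the mapping above is the core of the memory saving.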
### Your contribution
I would try to submit a PR. | Feature request,Cache | low | Minor |
2,809,458,797 | transformers | Mask2Former _init_weights | ### System Info
- `transformers` version: 4.45.2
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.9
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.4
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA RTX A6000
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run _init_weights
### Expected behavior
The `_init_weights` method of Mask2Former has multiple problems. It initializes `nn.Embedding` layers with a std of 0.02 (the original Mask2Former code uses PyTorch's default init with a std of 1.0). Similarly, the mask MLP is wrongly initialized with zero biases. Finally, the initialization of the multi-scale deformable attention is overwritten by the branch for `Mask2FormerPixelDecoderEncoderOnly`. | Core: Modeling,bug,Vision | low | Minor |
2,809,464,988 | PowerToys | Environmental Variables are not included in Backup | ### Microsoft PowerToys version
0.87.1
### Installation method
Other (please specify in "Steps to Reproduce")
### Running as admin
Yes
### Area(s) with issue?
Environment Variables
### Steps to reproduce
1) Having some additional/any environment variables in Windows
or having some added through PowerToys as a profile
2) backing up settings (as admin or not)
3) fresh installed windows 10 (or you could just remove a variable you put in for testing)
4) restoring backup
-> fancyzones are back
-> environment variables are not.
### ✔️ Expected Behavior
having all the variables back
### ❌ Actual Behavior
no variables are restored
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,809,465,812 | svelte | Cannot read properties of undefined (reading 'call') | ### Describe the bug
It works in the browser and with Astro's `npm run dev`.
I'm getting this error when trying to test with Vitest of Astro page with svelte file.
```
TypeError: Cannot read properties of undefined (reading 'call')
// edited - moved to Logs section
```
### Reproduction
If required, I might try to create a repro later. It's a big page with runes, functions passed as props, and templates.
EDIT:
https://stackblitz.com/edit/github-ch9tapzs?file=test%2Fhappy-dom-env.test.ts&on=stackblitz
After the dependencies install, stop npm in the terminal and run `npm run test`
### Logs
```shell
TypeError: Cannot read properties of undefined (reading 'call')
❯ get_first_child node_modules/svelte/src/internal/client/dom/operations.js:81:28
80| /**
81| * @template {Node} N
82| * @param {N} node
| ^
83| * @returns {Node | null}
84| */
❯ eval node_modules/svelte/src/internal/client/dom/template.js:48:91
❯ eval node_modules/svelte/src/internal/client/dev/elements.js:13:15
❯ Test src/components/test.svelte:15:12
❯ render node_modules/svelte/src/internal/server/index.js:121:2
❯ Object.renderToStaticMarkup node_modules/@astrojs/svelte/server.js:45:49
❯ renderFrameworkComponent node_modules/astro/dist/runtime/server/render/component.js:189:68
❯ renderComponent node_modules/astro/dist/runtime/server/render/component.js:368:10
```
### System Info
```shell
System:
OS: Linux 5.15 Ubuntu 22.04.5 LTS 22.04.5 LTS (Jammy Jellyfish)
CPU: (2) x64 Intel(R) Core(TM)2 Duo CPU E4500 @ 2.20GHz
Memory: 464.32 MB / 3.82 GB
Container: Yes
Shell: 5.1.16 - /bin/bash
Binaries:
Node: 20.11.1 - ~/.nvm/versions/node/v20.11.1/bin/node
Yarn: 1.22.17 - /usr/local/bin/yarn
npm: 10.2.4 - ~/.nvm/versions/node/v20.11.1/bin/npm
pnpm: 9.15.2 - ~/.local/share/pnpm/pnpm
bun: 1.1.6 - ~/.bun/bin/bun
Watchman: 20201115.021953.0 - /usr/local/bin/watchman
Browsers:
Chrome: 132.0.6834.83
npmPackages:
svelte: ^5.1.3 => 5.16.2
```
### Severity
blocking all usage of svelte | awaiting submitter | low | Critical |
2,809,471,982 | pytorch | mmap fails on 64k page aarch64 systems for AOTI model loading | ### 🐛 Describe the bug
The AOTI loader mmap's a file with an offset `weights_offset`. An assertion ensures that `weights_offset` is a multiple of 16k. However, the offset in an `mmap` syscall needs to be a multiple of the page size, causing this mmap to fail on kernels with 64k pages but not on 4k pages.
https://github.com/pytorch/pytorch/blob/629840e038ee623911bedc8fef1ab84acce5ba39/torch/csrc/inductor/aoti_runtime/model.h#L601
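The usual fix for this class of bug is to round the offset down to the mapping granularity and add the remainder back to the returned pointer. A minimal Python sketch of that arithmetic (the helper name is hypothetical, not PyTorch code) shows why the traced offset works on 4k pages but not on 64k pages:

```python
import mmap

# Hypothetical helper (not part of PyTorch): align a file offset down to the
# mmap granularity, returning the aligned offset plus the delta the caller
# must add back to the returned mapping pointer.
def align_mmap_offset(offset, granularity=None):
    gran = granularity if granularity is not None else mmap.ALLOCATIONGRANULARITY
    aligned = (offset // gran) * gran
    return aligned, offset - aligned

# The offset from the failing trace: 0xd8000 is a multiple of 4096 (4k pages)
# but not of 65536 (64k pages), which is exactly why the syscall fails there.
print(align_mmap_offset(0xD8000, 4096))   # -> (884736, 0): already aligned
print(align_mmap_offset(0xD8000, 65536))  # -> (851968, 32768): must realign
```

The C++ fix in `model.h` would presumably query the page size at runtime (e.g. via `sysconf(_SC_PAGESIZE)`) instead of asserting a fixed 16k multiple.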
**Failing syscall with 64k page kernel:**
```
newfstatat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.json", 0xffffec3d0838, 0) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.so", O_RDONLY) = 3
mmap(NULL, 6144375816, PROT_READ|PROT_WRITE, MAP_PRIVATE, 3, 0xd8000) = -1 EINVAL (Invalid argument) # 0xd8000 is no multiple of 65536
```
**Same syscalls with 4k page kernel:**
```
newfstatat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.json", 0xfffff418a778, 0) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/pytorch-llama/torchchat/exportedModels/llama3.1.so", O_RDONLY) = 3
mmap(NULL, 6144375816, PROT_READ|PROT_WRITE, MAP_PRIVATE, 3, 0xd8000) = 0xfff9d10a7000 # 0xd8000 is a multiple of 4096
```
**How to reproduce:**
Follow these steps https://learn.arm.com/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama/ on an aarch64 system with 64k page size kernel. The error occurs during model loading when running `torchchat.py` (last step).
**Error message:**
```
PyTorch version 2.5.0.dev20240828+cpu available.
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Using device=cpu
Loading model...
Time to load model: 0.05 seconds
Error: mmap() failed
Traceback (most recent call last):
File "/pytorch-llama/torchchat/build/builder.py", line 480, in _initialize_model
model.forward = torch._export.aot_load(
File "/usr/local/lib/python3.10/dist-packages/torch/_export/__init__.py", line 300, in aot_load
runner = torch._C._aoti.AOTIModelContainerRunnerCpu(so_path, 1) # type: ignore[call-arg]
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /pytorch/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 70
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/pytorch-llama/torchchat/torchchat.py", line 88, in <module>
generate_main(args)
File "/pytorch-llama/torchchat/generate.py", line 901, in main
gen = Generator(
File "/pytorch-llama/torchchat/generate.py", line 253, in __init__
self.model = _initialize_model(self.builder_args, self.quantize, self.tokenizer)
File "/pytorch-llama/torchchat/build/builder.py", line 484, in _initialize_model
raise RuntimeError(f"Failed to load AOTI compiled {builder_args.dso_path}")
RuntimeError: Failed to load AOTI compiled exportedModels/llama3.1.so
```
**GDB backtrace:**
```
Catchpoint 6 (call to syscall mmap), __GI___mmap64 (offset=884736, fd=3, flags=2, prot=3, len=6144375816, addr=<optimized out>) at ../sysdeps/unix/sysv/linux/mmap64.c:58
58 in ../sysdeps/unix/sysv/linux/mmap64.c
(gdb) bt
#0 __GI___mmap64 (offset=884736, fd=3, flags=2, prot=3, len=6144375816, addr=<optimized out>)
at ../sysdeps/unix/sysv/linux/mmap64.c:58
#1 __GI___mmap64 (addr=<optimized out>, len=6144375816, prot=3, flags=2, fd=3, offset=884736)
at ../sysdeps/unix/sysv/linux/mmap64.c:46
#2 0x0000e286e96d2fe8 in torch::aot_inductor::AOTInductorModelBase<torch::aot_inductor::AOTInductorModel>::load_constants() () from /pytorch-llama/torchchat/exportedModels/llama3.1.so
#3 0x0000e286e96ee31c in torch::aot_inductor::AOTInductorModelContainer::AOTInductorModelContainer(unsigned long, std::string const&, std::optional<std::string> const&) ()
from /pytorch-llama/torchchat/exportedModels/llama3.1.so
#4 0x0000e286e96cd0e8 in AOTInductorModelContainerCreateWithDevice ()
from /pytorch-llama/torchchat/exportedModels/llama3.1.so
#5 0x0000e287174d8888 in torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::string const&, unsigned long, std::string const&, std::string const&) ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#6 0x0000e287174d97bc in torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::string const&, unsigned long) () from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cpu.so
#7 0x0000e2871c841618 in pybind11::cpp_function::initialize<pybind11::detail::initimpl::constructor<std::string const&, int>::execute<pybind11::class_<torch::inductor::AOTIModelContainerRunnerCpu>, , 0>(pybind11::class_<torch::inductor::AOTIModelContainerRunnerCpu>&)::{lambda(pybind11::detail::value_and_holder&, std::string const&, int)#1}, void, pybind11::detail::value_and_holder&, std::string const&, int, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<torch::inductor::AOTIModelContainerRunnerCpu>&&, void (*)(pybind11::detail::value_and_holder&, std::string const&, int), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so
#8 0x0000e2871c2c0814 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so
#9 0x0000b02a971d4b54 in ?? ()
#10 0x0000b02a971cb100 in _PyObject_MakeTpCall ()
#11 0x0000b02a971e58c4 in ?? ()
#12 0x0000b02a971e1a28 in ?? ()
#13 0x0000b02a971cb568 in ?? ()
#14 0x0000e2871c2be5dc in pybind11_meta_call ()
from /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so
#15 0x0000b02a971cb100 in _PyObject_MakeTpCall ()
#16 0x0000b02a971c2334 in _PyEval_EvalFrameDefault ()
#17 0x0000b02a971d57e8 in _PyFunction_Vectorcall ()
#18 0x0000b02a971c1bcc in _PyEval_EvalFrameDefault ()
#19 0x0000b02a971d57e8 in _PyFunction_Vectorcall ()
#20 0x0000b02a971bd764 in _PyEval_EvalFrameDefault ()
#21 0x0000b02a971ca164 in _PyObject_FastCallDictTstate ()
#22 0x0000b02a971e162c in ?? ()
#23 0x0000b02a971cb078 in _PyObject_MakeTpCall ()
#24 0x0000b02a971c1f68 in _PyEval_EvalFrameDefault ()
#25 0x0000b02a971d57e8 in _PyFunction_Vectorcall ()
#26 0x0000b02a971bd764 in _PyEval_EvalFrameDefault ()
#27 0x0000b02a972be070 in ?? ()
#28 0x0000b02a972bdef4 in PyEval_EvalCode ()
#29 0x0000b02a972f151c in ?? ()
#30 0x0000b02a972e9c38 in ?? ()
#31 0x0000b02a972f11cc in ?? ()
```
### Versions
PyTorch version: 2.5.0.dev20240820+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-<....>
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Vendor ID: ARM
Model name: Neoverse-V2
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 72
Socket(s): -
Cluster(s): 1
Stepping: r0p0
Frequency boost: disabled
CPU max MHz: 3447.0000
CPU min MHz: 81.0000
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti
L1d cache: 4.5 MiB (72 instances)
L1i cache: 4.5 MiB (72 instances)
L2 cache: 72 MiB (72 instances)
L3 cache: 114 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-71
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240828+cpu
[pip3] torchao==0.4.0+git174e630a
[conda] Could not collect
cc @malfet @snadampal @milpuz01 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | module: crash,module: arm,oncall: pt2,export-triaged,oncall: export,module: aotinductor | low | Critical |
2,809,473,270 | godot | Always on top does not work | ### Tested versions
[alwaysontop.zip](https://github.com/user-attachments/files/18536259/alwaysontop.zip)
Reproducible in 4.4 Beta 1 but not in 4.3
### System information
Godot v4.4.beta1 - Windows 11 (build 26100) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Laptop GPU (NVIDIA; 31.0.15.4618) - 13th Gen Intel(R) Core(TM) i9-13900HX (32 threads)
### Issue description
I expected the windows to be on top once I tick "Always on Top" in the project settings, but it doesn't work as intended.
### Steps to reproduce
Go to Project Settings > Windows > tick Always On Top
[alwaysontop.zip](https://github.com/user-attachments/files/18536286/alwaysontop.zip)
### Minimal reproduction project (MRP)
[alwaysontop.zip](https://github.com/user-attachments/files/18536290/alwaysontop.zip) | needs testing,topic:gui | low | Minor |
2,809,503,213 | tensorflow | tensorflow takes a long time to prepare before the first iteration | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
TF 2.10.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.8
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
11.4/8.9.1
### GPU model and memory
Nvidia Tesla K20m
### Current behavior?
TensorFlow takes a long time to prepare before the first iteration. I used my custom model for training, but it took 40-60 minutes from the time the data was ready to the first iteration. This was true even for a very small dataset, and my model only had 835,620 parameters.
This model is used to pick phases in seismic data. To run an experiment, the input data can be synthesized.
### Standalone code to reproduce the issue
```python
import numpy as np
import matplotlib
matplotlib.use('agg')
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # Suppress TensorFlow logs
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import (
    Add, Activation, LSTM, Conv1D, MaxPooling1D, UpSampling1D,
    Cropping1D, SpatialDropout1D, Bidirectional, BatchNormalization, add, InputSpec,
    LayerNormalization, Layer, Dense, Dropout
)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import initializers, regularizers, constraints, activations
def f1(y_true, y_pred):
def recall(y_true, y_pred):
'''Recall metric. Computes the recall, a metric for multi-label classification.'''
true_positives = tf.reduce_sum(tf.round(tf.clip_by_value(y_true * y_pred, 0, 1)))
possible_positives = tf.reduce_sum(tf.round(tf.clip_by_value(y_true, 0, 1)))
recall = true_positives / (possible_positives + tf.keras.backend.epsilon())
return recall
def precision(y_true, y_pred):
'''Precision metric. Computes the precision, a metric for multi-label classification.'''
true_positives = tf.reduce_sum(tf.round(tf.clip_by_value(y_true * y_pred, 0, 1)))
predicted_positives = tf.reduce_sum(tf.round(tf.clip_by_value(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + tf.keras.backend.epsilon())
return precision
precision_val = precision(y_true, y_pred)
recall_val = recall(y_true, y_pred)
# F1 score calculation
return 2 * (precision_val * recall_val) / (precision_val + recall_val + tf.keras.backend.epsilon())
class LayerNormalization(keras.layers.Layer):
def __init__(self,
center=True,
scale=True,
epsilon=None,
gamma_initializer='ones',
beta_initializer='zeros',
**kwargs):
super(LayerNormalization, self).__init__(**kwargs)
self.supports_masking = True
self.center = center
self.scale = scale
if epsilon is None:
epsilon = K.epsilon() * K.epsilon()
self.epsilon = epsilon
self.gamma_initializer = keras.initializers.get(gamma_initializer)
self.beta_initializer = keras.initializers.get(beta_initializer)
def get_config(self):
config = {
'center': self.center,
'scale': self.scale,
'epsilon': self.epsilon,
'gamma_initializer': keras.initializers.serialize(self.gamma_initializer),
'beta_initializer': keras.initializers.serialize(self.beta_initializer),
}
base_config = super(LayerNormalization, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def compute_output_shape(self, input_shape):
return input_shape
def compute_mask(self, inputs, input_mask=None):
return input_mask
def build(self, input_shape):
self.input_spec = InputSpec(shape=input_shape)
shape = input_shape[-1:]
if self.scale:
self.gamma = self.add_weight(
shape=shape,
initializer=self.gamma_initializer,
name='gamma',
)
if self.center:
self.beta = self.add_weight(
shape=shape,
initializer=self.beta_initializer,
name='beta',
)
super(LayerNormalization, self).build(input_shape)
def call(self, inputs, training=None):
mean = K.mean(inputs, axis=-1, keepdims=True)
variance = K.mean(K.square(inputs - mean), axis=-1, keepdims=True)
std = K.sqrt(variance + self.epsilon)
outputs = (inputs - mean) / std
if self.scale:
outputs *= self.gamma
if self.center:
outputs += self.beta
return outputs
class FeedForward(keras.layers.Layer):
def __init__(self,
units,
activation='relu',
use_bias=True,
kernel_initializer='glorot_normal',
bias_initializer='zeros',
dropout_rate=0.0,
**kwargs):
self.supports_masking = True
self.units = units
self.activation = keras.activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = keras.initializers.get(kernel_initializer)
self.bias_initializer = keras.initializers.get(bias_initializer)
self.dropout_rate = dropout_rate
self.W1, self.b1 = None, None
self.W2, self.b2 = None, None
super(FeedForward, self).__init__(**kwargs)
def get_config(self):
config = {
'units': self.units,
'activation': keras.activations.serialize(self.activation),
'use_bias': self.use_bias,
'kernel_initializer': keras.initializers.serialize(self.kernel_initializer),
'bias_initializer': keras.initializers.serialize(self.bias_initializer),
'dropout_rate': self.dropout_rate,
}
base_config = super(FeedForward, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def compute_output_shape(self, input_shape):
return input_shape
def compute_mask(self, inputs, input_mask=None):
return input_mask
def build(self, input_shape):
feature_dim = int(input_shape[-1])
self.W1 = self.add_weight(
shape=(feature_dim, self.units),
initializer=self.kernel_initializer,
name='{}_W1'.format(self.name),
)
if self.use_bias:
self.b1 = self.add_weight(
shape=(self.units,),
initializer=self.bias_initializer,
name='{}_b1'.format(self.name),
)
self.W2 = self.add_weight(
shape=(self.units, feature_dim),
initializer=self.kernel_initializer,
name='{}_W2'.format(self.name),
)
if self.use_bias:
self.b2 = self.add_weight(
shape=(feature_dim,),
initializer=self.bias_initializer,
name='{}_b2'.format(self.name),
)
super(FeedForward, self).build(input_shape)
def call(self, x, mask=None, training=None):
h = K.dot(x, self.W1)
if self.use_bias:
h = K.bias_add(h, self.b1)
if self.activation is not None:
h = self.activation(h)
if 0.0 < self.dropout_rate < 1.0:
def dropped_inputs():
return K.dropout(h, self.dropout_rate, K.shape(h))
h = K.in_train_phase(dropped_inputs, h, training=training)
y = K.dot(h, self.W2)
if self.use_bias:
y = K.bias_add(y, self.b2)
return y
class SeqSelfAttention(keras.layers.Layer):
ATTENTION_TYPE_ADD = 'additive'
ATTENTION_TYPE_MUL = 'multiplicative'
def __init__(self,
units=32,
attention_width=None,
attention_type=ATTENTION_TYPE_ADD,
return_attention=False,
history_only=False,
kernel_initializer='glorot_normal',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
use_additive_bias=True,
use_attention_bias=True,
attention_activation=None,
attention_regularizer_weight=0.0,
**kwargs):
super().__init__(**kwargs)
self.supports_masking = True
self.units = units
self.attention_width = attention_width
self.attention_type = attention_type
self.return_attention = return_attention
self.history_only = history_only
if history_only and attention_width is None:
self.attention_width = int(1e9)
self.use_additive_bias = use_additive_bias
self.use_attention_bias = use_attention_bias
self.kernel_initializer = keras.initializers.get(kernel_initializer)
self.bias_initializer = keras.initializers.get(bias_initializer)
self.kernel_regularizer = keras.regularizers.get(kernel_regularizer)
self.bias_regularizer = keras.regularizers.get(bias_regularizer)
self.kernel_constraint = keras.constraints.get(kernel_constraint)
self.bias_constraint = keras.constraints.get(bias_constraint)
self.attention_activation = keras.activations.get(attention_activation)
self.attention_regularizer_weight = attention_regularizer_weight
self._backend = keras.backend.backend()
if attention_type == SeqSelfAttention.ATTENTION_TYPE_ADD:
self.Wx, self.Wt, self.bh = None, None, None
self.Wa, self.ba = None, None
elif attention_type == SeqSelfAttention.ATTENTION_TYPE_MUL:
self.Wa, self.ba = None, None
else:
raise NotImplementedError('No implementation for attention type : ' + attention_type)
def get_config(self):
config = {
'units': self.units,
'attention_width': self.attention_width,
'attention_type': self.attention_type,
'return_attention': self.return_attention,
'history_only': self.history_only,
'use_additive_bias': self.use_additive_bias,
'use_attention_bias': self.use_attention_bias,
'kernel_initializer': keras.regularizers.serialize(self.kernel_initializer),
'bias_initializer': keras.regularizers.serialize(self.bias_initializer),
'kernel_regularizer': keras.regularizers.serialize(self.kernel_regularizer),
'bias_regularizer': keras.regularizers.serialize(self.bias_regularizer),
'kernel_constraint': keras.constraints.serialize(self.kernel_constraint),
'bias_constraint': keras.constraints.serialize(self.bias_constraint),
'attention_activation': keras.activations.serialize(self.attention_activation),
'attention_regularizer_weight': self.attention_regularizer_weight,
}
base_config = super(SeqSelfAttention, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def build(self, input_shape):
if self.attention_type == SeqSelfAttention.ATTENTION_TYPE_ADD:
self._build_additive_attention(input_shape)
elif self.attention_type == SeqSelfAttention.ATTENTION_TYPE_MUL:
self._build_multiplicative_attention(input_shape)
super(SeqSelfAttention, self).build(input_shape)
def _build_additive_attention(self, input_shape):
feature_dim = int(input_shape[2])
self.Wt = self.add_weight(shape=(feature_dim, self.units),
name='{}_Add_Wt'.format(self.name),
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
self.Wx = self.add_weight(shape=(feature_dim, self.units),
name='{}_Add_Wx'.format(self.name),
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
if self.use_additive_bias:
self.bh = self.add_weight(shape=(self.units,),
name='{}_Add_bh'.format(self.name),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint)
self.Wa = self.add_weight(shape=(self.units, 1),
name='{}_Add_Wa'.format(self.name),
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
if self.use_attention_bias:
self.ba = self.add_weight(shape=(1,),
name='{}_Add_ba'.format(self.name),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint)
def _build_multiplicative_attention(self, input_shape):
feature_dim = int(input_shape[2])
self.Wa = self.add_weight(shape=(feature_dim, feature_dim),
name='{}_Mul_Wa'.format(self.name),
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
if self.use_attention_bias:
self.ba = self.add_weight(shape=(1,),
name='{}_Mul_ba'.format(self.name),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint)
def call(self, inputs, mask=None, **kwargs):
input_len = K.shape(inputs)[1]
if self.attention_type == SeqSelfAttention.ATTENTION_TYPE_ADD:
e = self._call_additive_emission(inputs)
elif self.attention_type == SeqSelfAttention.ATTENTION_TYPE_MUL:
e = self._call_multiplicative_emission(inputs)
if self.attention_activation is not None:
e = self.attention_activation(e)
e = K.exp(e - K.max(e, axis=-1, keepdims=True))
if self.attention_width is not None:
if self.history_only:
lower = K.arange(0, input_len) - (self.attention_width - 1)
else:
lower = K.arange(0, input_len) - self.attention_width // 2
lower = K.expand_dims(lower, axis=-1)
upper = lower + self.attention_width
indices = K.expand_dims(K.arange(0, input_len), axis=0)
e = e * K.cast(lower <= indices, K.floatx()) * K.cast(indices < upper, K.floatx())
if mask is not None:
mask = K.cast(mask, K.floatx())
mask = K.expand_dims(mask)
e = K.permute_dimensions(K.permute_dimensions(e * mask, (0, 2, 1)) * mask, (0, 2, 1))
# a_{t} = \text{softmax}(e_t)
s = K.sum(e, axis=-1, keepdims=True)
a = e / (s + K.epsilon())
# l_t = \sum_{t'} a_{t, t'} x_{t'}
v = K.batch_dot(a, inputs)
if self.attention_regularizer_weight > 0.0:
self.add_loss(self._attention_regularizer(a))
if self.return_attention:
return [v, a]
return v
def _call_additive_emission(self, inputs):
input_shape = K.shape(inputs)
batch_size = input_shape[0]
input_len = inputs.get_shape().as_list()[1]
# h_{t, t'} = \tanh(x_t^T W_t + x_{t'}^T W_x + b_h)
q = K.expand_dims(K.dot(inputs, self.Wt), 2)
k = K.expand_dims(K.dot(inputs, self.Wx), 1)
if self.use_additive_bias:
h = K.tanh(q + k + self.bh)
else:
h = K.tanh(q + k)
# e_{t, t'} = W_a h_{t, t'} + b_a
if self.use_attention_bias:
e = K.reshape(K.dot(h, self.Wa) + self.ba, (batch_size, input_len, input_len))
else:
e = K.reshape(K.dot(h, self.Wa), (batch_size, input_len, input_len))
return e
def _call_multiplicative_emission(self, inputs):
# e_{t, t'} = x_t^T W_a x_{t'} + b_a
e = K.batch_dot(K.dot(inputs, self.Wa), K.permute_dimensions(inputs, (0, 2, 1)))
if self.use_attention_bias:
e += self.ba[0]
return e
def compute_output_shape(self, input_shape):
output_shape = input_shape
if self.return_attention:
attention_shape = (input_shape[0], output_shape[1], input_shape[1])
return [output_shape, attention_shape]
return output_shape
def compute_mask(self, inputs, mask=None):
if self.return_attention:
return [mask, None]
return mask
def _attention_regularizer(self, attention):
batch_size = K.cast(K.shape(attention)[0], K.floatx())
input_len = K.shape(attention)[-1]
indices = K.expand_dims(K.arange(0, input_len), axis=0)
diagonal = K.expand_dims(K.arange(0, input_len), axis=-1)
eye = K.cast(K.equal(indices, diagonal), K.floatx())
return self.attention_regularizer_weight * K.sum(K.square(K.batch_dot(
attention,
K.permute_dimensions(attention, (0, 2, 1))) - eye)) / batch_size
@staticmethod
def get_custom_objects():
return {'SeqSelfAttention': SeqSelfAttention}
def _block_BiLSTM(filters, drop_rate, padding, inpR):
    'Returns an LSTM residual block.'
prev = inpR
# x_rnn = Bidirectional(LSTM(filters, return_sequences=True, dropout=drop_rate, recurrent_dropout=drop_rate))(prev)
    # LSTM configured so that the cuDNN kernel can be used
x_rnn = Bidirectional(LSTM(filters, return_sequences=True, dropout=drop_rate, recurrent_dropout=0, activation='tanh', recurrent_activation='sigmoid', use_bias=True, unroll=False))(prev)
NiN = Conv1D(filters, 1, padding = padding)(x_rnn)
res_out = BatchNormalization()(NiN)
return res_out
def _block_CNN_1(filters, ker, drop_rate, activation, padding, inpC):
' Returns CNN residual blocks '
prev = inpC
layer_1 = BatchNormalization()(prev)
act_1 = Activation(activation)(layer_1)
act_1 = SpatialDropout1D(drop_rate)(act_1, training=True)
conv_1 = Conv1D(filters, ker, padding = padding)(act_1)
layer_2 = BatchNormalization()(conv_1)
act_2 = Activation(activation)(layer_2)
act_2 = SpatialDropout1D(drop_rate)(act_2, training=True)
conv_2 = Conv1D(filters, ker, padding = padding)(act_2)
res_out = add([prev, conv_2])
return res_out
def _transformer(drop_rate, width, name, inpC):
    ' Returns a transformer block containing one additive attention and one feed-forward layer with residual connections '
x = inpC
att_layer, weight = SeqSelfAttention(return_attention =True,
attention_width = width,
name=name)(x)
# att_layer = Dropout(drop_rate)(att_layer, training=True)
att_layer2 = add([x, att_layer])
norm_layer = LayerNormalization()(att_layer2)
FF = FeedForward(units=128, dropout_rate=drop_rate)(norm_layer)
FF_add = add([norm_layer, FF])
norm_out = LayerNormalization()(FF_add)
return norm_out, weight
def _encoder(filter_number, filter_size, depth, drop_rate, ker_regul, bias_regul, activation, padding, inpC):
' Returns the encoder that is a combination of residual blocks and maxpooling.'
e = inpC
for dp in range(depth):
e = Conv1D(filter_number[dp],
filter_size[dp],
padding = padding,
activation = activation,
kernel_regularizer = ker_regul,
bias_regularizer = bias_regul,
)(e)
e = MaxPooling1D(2, padding = padding)(e)
return(e)
def _decoder(filter_number, filter_size, depth, drop_rate, ker_regul, bias_regul, activation, padding, inpC):
    ' Returns the decoder that is a combination of residual blocks and upsampling. '
d = inpC
for dp in range(depth):
d = UpSampling1D(2)(d)
if dp == 2:
d = Cropping1D(cropping=(1, 1))(d)
d = Conv1D(filter_number[dp],
filter_size[dp],
padding = padding,
activation = activation,
kernel_regularizer = ker_regul,
bias_regularizer = bias_regul,
)(d)
return(d)
def _lr_schedule(epoch):
    ' Learning rate is scheduled to be reduced after 20, 40, 60, 90 epochs.'
lr = 1e-3
if epoch > 90:
lr *= 0.5e-3
elif epoch > 60:
lr *= 1e-3
elif epoch > 40:
lr *= 1e-2
elif epoch > 20:
lr *= 1e-1
print('Learning rate: ', lr)
return lr
class cred2():
def __init__(self,
nb_filters=[8, 16, 16, 32, 32, 96, 96, 128],
kernel_size=[11, 9, 7, 7, 5, 5, 3, 3],
padding='same',
activationf='relu',
endcoder_depth=7,
decoder_depth=7,
cnn_blocks=5,
BiLSTM_blocks=3,
drop_rate=0.1,
loss_weights=[0.2, 0.3, 0.5],
loss_types=['binary_crossentropy', 'binary_crossentropy', 'binary_crossentropy'],
kernel_regularizer=keras.regularizers.l1(1e-4),
bias_regularizer=keras.regularizers.l1(1e-4),
):
self.kernel_size = kernel_size
self.nb_filters = nb_filters
self.padding = padding
self.activationf = activationf
self.endcoder_depth= endcoder_depth
self.decoder_depth= decoder_depth
self.cnn_blocks= cnn_blocks
self.BiLSTM_blocks= BiLSTM_blocks
self.drop_rate= drop_rate
self.loss_weights= loss_weights
self.loss_types = loss_types
self.kernel_regularizer = kernel_regularizer
self.bias_regularizer = bias_regularizer
def __call__(self, inp):
x = inp
x = _encoder(self.nb_filters,
self.kernel_size,
self.endcoder_depth,
self.drop_rate,
self.kernel_regularizer,
self.bias_regularizer,
self.activationf,
self.padding,
x)
for cb in range(self.cnn_blocks):
x = _block_CNN_1(self.nb_filters[6], 3, self.drop_rate, self.activationf, self.padding, x)
if cb > 2:
x = _block_CNN_1(self.nb_filters[6], 2, self.drop_rate, self.activationf, self.padding, x)
for bb in range(self.BiLSTM_blocks):
x = _block_BiLSTM(self.nb_filters[1], self.drop_rate, self.padding, x)
x, weightdD0 = _transformer(self.drop_rate, None, 'attentionD0', x)
encoded, weightdD = _transformer(self.drop_rate, None, 'attentionD', x)
decoder_D = _decoder([i for i in reversed(self.nb_filters)],
[i for i in reversed(self.kernel_size)],
self.decoder_depth,
self.drop_rate,
self.kernel_regularizer,
self.bias_regularizer,
self.activationf,
self.padding,
encoded)
d = Conv1D(1, 11, padding = self.padding, activation='sigmoid', name='detector')(decoder_D)
'''
The requirements to use the cuDNN implementation are:
activation == tanh
recurrent_activation == sigmoid
recurrent_dropout == 0
unroll is False
use_bias is True
Inputs, if use masking, are strictly right-padded.
Eager execution is enabled in the outermost context.
'''
# PLSTM = LSTM(self.nb_filters[1], return_sequences=True, dropout=self.drop_rate, recurrent_dropout=self.drop_rate)(encoded)
        # Assumes self.nb_filters and self.drop_rate are already defined
PLSTM = LSTM(self.nb_filters[1],
return_sequences=True,
dropout=self.drop_rate,
recurrent_dropout=0,
activation='tanh',
recurrent_activation='sigmoid',
use_bias=True,
unroll=False)(encoded)
norm_layerP, weightdP = SeqSelfAttention(return_attention=True,
attention_width= 3,
name='attentionP')(PLSTM)
decoder_P = _decoder([i for i in reversed(self.nb_filters)],
[i for i in reversed(self.kernel_size)],
self.decoder_depth,
self.drop_rate,
self.kernel_regularizer,
self.bias_regularizer,
self.activationf,
self.padding,
norm_layerP)
P = Conv1D(1, 11, padding = self.padding, activation='sigmoid', name='picker_P')(decoder_P)
# SLSTM = LSTM(self.nb_filters[1], return_sequences=True, dropout=self.drop_rate, recurrent_dropout=self.drop_rate)(encoded)
SLSTM = LSTM(self.nb_filters[1],
return_sequences=True,
dropout=self.drop_rate,
recurrent_dropout=0,
activation='tanh',
recurrent_activation='sigmoid',
use_bias=True,
unroll=False)(encoded)
norm_layerS, weightdS = SeqSelfAttention(return_attention=True,
attention_width= 3,
name='attentionS')(SLSTM)
decoder_S = _decoder([i for i in reversed(self.nb_filters)],
[i for i in reversed(self.kernel_size)],
self.decoder_depth,
self.drop_rate,
self.kernel_regularizer,
self.bias_regularizer,
self.activationf,
self.padding,
norm_layerS)
S = Conv1D(1, 11, padding = self.padding, activation='sigmoid', name='picker_S')(decoder_S)
model = keras.models.Model(inputs=inp, outputs=[d, P, S])
model.compile(loss=self.loss_types, loss_weights=self.loss_weights,
optimizer=Adam(lr=_lr_schedule(0)), metrics=[f1])
return model
# input=keras.layers.Input(shape=(12000,3))
# model=cred2()(input)
# model.summary()
```
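For reference, the `LayerNormalization.call` above computes standard per-feature layer normalization; here is a minimal NumPy sketch of the same arithmetic (editor's illustration, not from the report; shapes and epsilon are chosen arbitrarily):

```python
import numpy as np

# Standalone sketch of the arithmetic in LayerNormalization.call above.
# The epsilon value here is assumed, not taken from the model config.
eps = 1e-6
x = np.random.RandomState(0).randn(2, 5, 8)  # (batch, time, features)

mean = x.mean(axis=-1, keepdims=True)
std = np.sqrt(((x - mean) ** 2).mean(axis=-1, keepdims=True) + eps)
y = (x - mean) / std

# Each feature vector is now zero-mean with (near-)unit std.
print(np.allclose(y.mean(axis=-1), 0.0, atol=1e-6),
      np.allclose(y.std(axis=-1), 1.0, atol=1e-3))  # prints: True True
```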
### Relevant log output
```shell
``` | stat:awaiting response,type:performance,TF 2.10 | low | Critical |
2,809,513,501 | vscode | Settings option to disable overtype mode. |
Type: <b>Feature Request</b>
I don't begrudge people who use overtype/overstrike mode, but I expected there would be a settings entry to choose between the previous behaviour and the new one. That is usually the case for new features that change existing behaviour.
Accessibility-wise, if one's keyboard aim for the Home or Del keys isn't perfect, a user can end up in overtype mode without realizing it. The only remedy for such bad aim, combined with a no-choice overtype mode, is to remove the Insert key to eliminate accidents.
Please implement a settings option that allows a user to "disable overtype".
VS Code version: Code 1.96.3 (91fbdddc47bc9c09064bf7acf133d22631cbf083, 2025-01-09T18:14:09.060Z)
OS version: Windows_NT x64 10.0.22631
Modes:
Remote OS version: Linux x64 5.15.153.1-microsoft-standard-WSL2
<!-- generated by issue reporter --> | info-needed | low | Minor |
2,809,528,054 | langchain | ChatOllama with_structured_output not honoured by langchain. Works fine using direct ollama chat() call. | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Python versions are:
langchain 0.3.15
langchain-community 0.3.15
langchain-core 0.3.31
langchain-ollama 0.2.2
ollama 0.4.7
Running ollama 0.5.7 (pip install -U ollama did not increase the version beyond 0.4.7)
Using with_structured_output() seems to work for a very simple example such as the following:
```python
from langchain_ollama import ChatOllama
from typing import Optional
from pydantic import BaseModel, Field
class Person(BaseModel):
name: str
age: int
llm = ChatOllama(
model="qwen2.5:1.5b",
temperature=0,
).with_structured_output(Person)
llm.invoke("Erick 27")
```
However, for a more complex requirement, it fails with Ollama returning a value of None.
```python
from pydantic import BaseModel, Field
from typing import Optional
from langchain_ollama import ChatOllama
# Define the output model
class Experience(BaseModel):
company: str = Field(..., description="The name of the company.")
position: str = Field(..., description="The job title held at the company.")
start_date: str = Field(..., description="The date when you started working at the company.")
end_date: str = Field(..., description="The date when you left the company. If still employed, use 'Present'.")
class Education(BaseModel):
institution_name: str = Field(..., description="The name of the educational institution.")
degree: str = Field(..., description="The degree obtained from the institution.")
start_date: str = Field(..., description="The date when you started attending school at the institution.")
end_date: str = Field(..., description="The date when you graduated. If still enrolled, use 'Present'.")
class Resume(BaseModel):
full_name: str = Field(..., description="The full name of the person on the resume.")
contact_email: str = Field(..., description="The email address for contacting the person.")
phone_number: str = Field(..., description="The phone number for contacting the person.")
summary: str = Field(..., description="A brief summary of the person's career highlights.")
experience: Optional[list[Experience]] = Field([], description="List of experiences held by the person.")
education: Optional[list[Education]] = Field([], description="List of educational institutions attended by the person.")
with open('CVs/resume.md', 'r') as file:
resume_data = file.read()
verbose=True
model = "qwen2.5:14b"
prompt = f"""
Analyse the resume from the content between the triple backticks below. For this resume, identify the following information:
1) Their personal details, including name, email, phone number and anything else they provide.
2) An overall summary of their experience to provide a general background.
3) A list of the companies they have worked for. This should include the company name, the dates they started and ended working for the company, and the tasks and activities they carried out.
4) A list of universities or colleges that the person went to. This should include the name of the college, the title of the qualification, and the dates they started and ended.
The raw data is here:
```{resume_data}```
"""
llm = ChatOllama(model=model,
                 num_ctx=32000,
                 timeout=600,
                 temperature=0.0,
                 verbose=verbose,
                 format="json")
structured_llm = llm.with_structured_output(Resume)
print("Calling LLM")
response = structured_llm.invoke(prompt)
print(response)
```
It also fails without `format="json"` included.
I just get a None response.
Oddly, this is not consistent. Sometimes I do get a response back, but it fails validation against the `Resume` type because no education items are found, even though `education` is an optional field in the `Resume` class.
Many thanks to @rick-github for help with the RCA for this.
The request sent to ollama was as follows:
```
{
"model": "qwen2.5:14b",
"stream": true,
"options": {
"num_ctx": 32000,
"temperature": 0.0
},
"messages": [
{
"role": "user",
"content": " \nAnalyse the following ... or collaboration.\n```\n"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "Resume",
"description": "",
"parameters": {
"type": "object",
"required": [
"full_name",
"contact_email",
"phone_number",
"summary"
],
"properties": {
"full_name": {
"type": "string",
"description": "The full name of the person on the resume."
},
"contact_email": {
"type": "string",
"description": "The email address for contacting the person."
},
"phone_number": {
"type": "string",
"description": "The phone number for contacting the person."
},
"summary": {
"type": "string",
"description": "A brief summary of the person's career highlights."
},
"experience": {
"description": "List of experiences held by the person."
},
"education": {
"description": "List of educational institutions attended by the person."
}
}
}
}
}
]
}
```
The schema hasn't been recursed and is passed as a tool definition, so the model doesn't know what data is needed. It does its best and fills in a bunch of details, but the returned data fails validation checks.
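The recursion point can be illustrated locally, with no Ollama instance needed (a sketch with a hypothetical two-level model, not code from the report): Pydantic's `model_json_schema()` emits nested models in full under `$defs`, which is exactly what the direct call forwards in its `format` field and what the flat tool stub above loses.

```python
from typing import Optional
from pydantic import BaseModel, Field

# Hypothetical two-level model mirroring the Resume/Experience shape.
class Job(BaseModel):
    company: str = Field(..., description="The name of the company.")

class CV(BaseModel):
    full_name: str
    jobs: Optional[list[Job]] = Field([], description="Jobs held by the person.")

schema = CV.model_json_schema()

# The nested Job model is fully described under $defs and referenced from
# the `jobs` property, so a server receiving this schema knows every field.
print(sorted(schema["$defs"]))  # prints: ['Job']
print(schema["$defs"]["Job"]["properties"]["company"]["description"])
```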
If the request is sent directly to ollama, the results are better.
Using a direct call to ollama via the chat() function works perfectly.
```python
import ollama, json, sys
response = ollama.chat(model=model,
messages=[{"role":"user","content":prompt}],
options={"num_ctx":32000, "temperature":0.0},
format=Resume.model_json_schema(),
)
print(json.dumps(json.loads(response.message.content), indent=4))
```
The prompt sent to Ollama has the full schema in the format field:
```
{
"model": "qwen2.5:14b",
"stream": false,
"options": {
"num_ctx": 32000,
"temperature": 0.0
},
"format": {
"$defs": {
"Education": {
"properties": {
"institution_name": {
"description": "The name of the educational institution.",
"title": "Institution Name",
"type": "string"
},
"degree": {
"description": "The degree obtained from the institution.",
"title": "Degree",
"type": "string"
},
"start_date": {
"description": "The date when you started attending school at the institution.",
"title": "Start Date",
"type": "string"
},
"end_date": {
"description": "The date when you graduated. If still enrolled, use 'Present'.",
"title": "End Date",
"type": "string"
}
},
"required": [
"institution_name",
"degree",
"start_date",
"end_date"
],
"title": "Education",
"type": "object"
},
"Experience": {
"properties": {
"company": {
"description": "The name of the company.",
"title": "Company",
"type": "string"
},
"position": {
"description": "The job title held at the company.",
"title": "Position",
"type": "string"
},
"start_date": {
"description": "The date when you started working at the company.",
"title": "Start Date",
"type": "string"
},
"end_date": {
"description": "The date when you left the company. If still employed, use 'Present'.",
"title": "End Date",
"type": "string"
}
},
"required": [
"company",
"position",
"start_date",
"end_date"
],
"title": "Experience",
"type": "object"
}
},
"properties": {
"full_name": {
"description": "The full name of the person on the resume.",
"title": "Full Name",
"type": "string"
},
"contact_email": {
"description": "The email address for contacting the person.",
"title": "Contact Email",
"type": "string"
},
"phone_number": {
"description": "The phone number for contacting the person.",
"title": "Phone Number",
"type": "string"
},
"summary": {
"description": "A brief summary of the person's career highlights.",
"title": "Summary",
"type": "string"
},
"experience": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Experience"
},
"type": "array"
},
{
"type": "null"
}
],
"default": [],
"description": "List of experiences held by the person.",
"title": "Experience"
},
"education": {
"anyOf": [
{
"items": {
"$ref": "#/$defs/Education"
},
"type": "array"
},
{
"type": "null"
}
],
"default": [],
"description": "List of educational institutions attended by the person.",
"title": "Education"
}
},
"required": [
"full_name",
"contact_email",
"phone_number",
"summary"
],
"title": "Resume",
"type": "object"
},
"messages": [
{
"role": "user",
"content": " \nAnalyse the following ... or collaboration.\n```\n"
}
],
"tools": []
}
```
This was a trace from one run that provided a relatively comprehensive answer:
```
{
"full_name": "Not provided in the resume",
"contact_email": "Not explicitly provided, but a placeholder is given: Feel free to reach out via email",
"phone_number": "Not provided in the resume",
"summary": "The individual has over two decades of experience in software engineering and architecture roles. They have worked at several companies including Tech Innovators Inc., NextGen Solutions, Alpha Development Corp., and CodeSphere LLC. Their career highlights include designing scalable microservices architectures, leading development teams, integrating AI/ML capabilities into legacy systems, and automating internal processes to reduce operational costs.",
"experience": [
{
"company": "Tech Innovators Inc.",
"position": "Senior Software Engineer",
"start_date": "June 2015",
"end_date": "Present"
},
{
"company": "NextGen Solutions",
"position": "Software Architect",
"start_date": "March 2010",
"end_date": "May 2015"
},
{
"company": "Alpha Development Corp.",
"position": "Lead Developer",
"start_date": "January 2005",
"end_date": "February 2010"
},
{
"company": "CodeSphere LLC",
"position": "Software Engineer",
"start_date": "June 2000",
"end_date": "December 2004"
}
],
"education": [
{
"institution_name": "Massachusetts Institute of Technology",
"degree": "Master of Science in Computer Science",
"start_date": "August 1998",
"end_date": "May 2000"
},
{
"institution_name": "University of California, Berkeley",
"degree": "Bachelor of Science in Computer Science",
"start_date": "August 1994",
"end_date": "May 1998"
}
]
}
```
For some reason, I cannot upload the small resume file, so here it is in cleartext:
## **Professional Experience**
### **Senior Software Engineer**
**Tech Innovators Inc.**
_June 2015 – Present_
- Designed and implemented scalable microservices architecture for a SaaS platform, improving performance by 30%.
- Led a team of 12 engineers, mentoring junior developers and conducting regular code reviews.
- Integrated AI/ML capabilities into legacy systems, increasing operational efficiency by 20%.
- Championed DevOps practices, reducing deployment times from days to hours.
### **Software Architect**
**NextGen Solutions**
_March 2010 – May 2015_
- Architected and delivered a real-time analytics platform for financial services, handling millions of transactions daily.
- Migrated a monolithic system to a distributed microservices-based architecture, enabling faster feature delivery.
- Partnered with product managers to define technical requirements and roadmap, aligning business goals with engineering efforts.
### **Lead Developer**
**Alpha Development Corp.**
_January 2005 – February 2010_
- Built a high-availability e-commerce platform that handled over 500,000 daily users.
- Created APIs to integrate third-party payment gateways, enhancing user experience and reducing downtime.
- Conducted performance optimizations that improved application speed by 40%.
### **Software Engineer**
**CodeSphere LLC**
_June 2000 – December 2004_
- Developed enterprise-grade web applications using Java and C++.
- Automated internal processes, saving the company 15% in operational costs annually.
- Collaborated with cross-functional teams to deliver projects on time and within budget.
---
## **Education**
### **Master of Science in Computer Science**
**Massachusetts Institute of Technology**
_August 1998 – May 2000_
### **Bachelor of Science in Computer Science**
**University of California, Berkeley**
_August 1994 – May 1998_
---
## **Skills**
- Programming Languages: Python, Java, C++, JavaScript
- Cloud Platforms: AWS, Azure, Google Cloud
- Architecture: Microservices, Distributed Systems, RESTful APIs
- Tools: Docker, Kubernetes, Terraform
- Agile Development, DevOps, AI/ML Integration
---
## **Certifications**
- AWS Certified Solutions Architect – Professional
- Certified Kubernetes Administrator (CKA)
- Certified ScrumMaster (CSM)
---
## **Contact**
Feel free to reach out via email or phone for opportunities or collaboration.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use the LangChain ChatOllama call to return a Pydantic class with well-defined fields, filled in as a result of a context search.
I expect that, where the required fields are present in the context, the Pydantic fields will be populated.
It seems that the schema is not being passed through to Ollama by the ChatOllama call, but it does work when the native ollama library is used.
This results in no valid response being returned by the ChatOllama call.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.2.0: Fri Dec 6 18:56:34 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6020
> Python Version: 3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.1.128
> langchain_anthropic: 0.3.0
> langchain_chroma: 0.2.0
> langchain_cohere: 0.3.1
> langchain_experimental: 0.3.2
> langchain_google_genai: 2.0.4
> langchain_groq: 0.2.0
> langchain_mistralai: 0.2.2
> langchain_nomic: 0.1.3
> langchain_ollama: 0.2.2
> langchain_openai: 0.2.14
> langchain_pinecone: 0.2.2
> langchain_tests: 0.3.8
> langchain_text_splitters: 0.3.5
> langchainhub: 0.1.15
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.11
> anthropic: 0.42.0
> async-timeout: 5.0.1
> chromadb: 0.6.3
> cohere: 5.13.11
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.115.7
> google-generativeai: 0.8.3
> groq: 0.15.0
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> nomic: 3.1.3
> numpy: 1.26.4
> ollama: 0.4.7
> openai: 1.59.7
> orjson: 3.10.3
> packaging: 23.2
> pandas: 2.2.3
> pillow: 10.4.0
> pinecone: 5.4.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> pytest: 8.3.4
> pytest-asyncio: 0.25.2
> pytest-socket: 0.7.0
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.32
> syrupy: 4.8.1
> tabulate: 0.9.0
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.21.0
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2
| Ɑ: core | low | Critical |
2,809,530,687 | rust | `extern { fn f(_: *const __m128); }` emits `improper_ctypes` | ### Code
```Rust
use std::arch::x86_64::__m128;
extern
{
fn f(_: *const __m128);
}
```
### Current output
```Shell
warning: `extern` block uses type `__m128`, which is not FFI-safe
--> <source>:5:13
|
5 | fn f(_: *const __m128);
| ^^^^^^^^^^^^^ not FFI-safe
|
= help: consider adding a `#[repr(C)]` or `#[repr(transparent)]` attribute to this struct
= note: this struct has unspecified layout
= note: `#[warn(improper_ctypes)]` on by default
```
### Desired output
```Shell
```
### Rationale and extra context
I don’t see why passing `*const __m128` over FFI could cause ABI problems (as opposed to passing `__m128` by value, which is [known to miscompile](https://github.com/rust-lang/rust/issues/116558)). C and Rust agree on the size and alignment of `__m128`, and for `extern` functions, on the argument passing convention for pointers.
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
### Anything else?
_No response_ | A-diagnostics,T-compiler,L-improper_ctypes,L-false-positive | low | Minor |
2,809,558,466 | kubernetes | Add ability to override maxWaitForUnmountDuration for attach detach controller in controller manager | ### What would you like to be added?
When a non-graceful node shutdown occurs, the kube-controller-manager updates the node's taints and sets the `node.kubernetes.io/unreachable` taint.
When a pod's `tolerationSeconds` expires, the controller manager evicts the pod from the node. I set `tolerationSeconds: 60` for my pod and it gets evicted in time, but it cannot start because I use RWO storage and the volume must be detached from the failed node first. So the controller manager tries to detach the volume and after 6 minutes logs:
```log
I0124 12:52:02.752565 1 reconciler.go:279] "attacherDetacher.DetachVolume started: this volume is not safe to detach, but maxWaitForUnmountDuration expired, force detaching" logger="persistentvolume-attach-detach-controller" duration="6m0s" node="prod-tages-k8s-worker-2" volumeName="kubernetes.io/csi/linstor.csi.linbit.com^pvc-fb36d4b7-d16d-4e6b-a82f-70cdf5dbf7d0"
```
I know that the `NodeOutOfServiceVolumeDetach` feature gate has been GA since Kubernetes 1.28.
With that logic I have to taint the node with `node.kubernetes.io/out-of-service=nodeshutdown:NoExecute` to get the volume attached on the new node and detached from the failed one.
As far as I can see, `maxWaitForUnmountDuration` is hardcoded to 6 minutes in the [code](https://github.com/kubernetes/kubernetes/blob/2d32348f86c1e557faef0e8d7aa3722ea921ae72/pkg/controller/volume/attachdetach/attach_detach_controller.go#L94).
### Why is this needed?
I think this is a common case.
My goal is fast pod migration in Kubernetes; I can't run this pod in multiple replicas.
But for that I would need to write a special controller that taints nodes on failure, marking a node out of service some time after it becomes Unreachable.
| sig/storage,kind/feature,needs-triage | low | Critical |
2,809,593,625 | rust | Replace `rustc_layout_scalar_valid_range_start` attribute with pattern types | the `rustc_layout_scalar_valid_range_start` and `rustc_layout_scalar_valid_range_end` attributes require a lot of machinery to stay sanely usable. Pattern types don't require less machinery, but they are simpler as they don't break through layers of layouts and then require surprisingly subtle unsafe code to be used correctly.
So we're ripping out the attributes and replacing their usages in libcore and rustc:
* `std::ptr::NonNull`
* `std::num::NonZero`
* `rustc_index_macros::newtype::NewType`
* `std::num::Nanoseconds`
cc @Veykril @scottmcm | F-pattern_types | low | Minor |
2,809,597,478 | rust | invalid opcode regression in `x86_64-unknown-linux-musl` release builds while compiling code using `generic-array` | [Repro](https://github.com/SohumB/rustc-musl-opcode-regression-202501).
searched nightlies: from nightly-2024-11-21 to nightly-2025-01-24
regressed nightly: nightly-2025-01-10
searched commit range: https://github.com/rust-lang/rust/compare/a580b5c379b4fca50dfe5afc0fc0ce00921e4e00...824759493246ee383beb9cd5ceffa0e15deb9fa4
regressed commit: https://github.com/rust-lang/rust/commit/b6b8361bce8561fb8786ad33ca1abfdf4bc487b6
<details>
<summary>bisected with <a href='https://github.com/rust-lang/cargo-bisect-rustc'>cargo-bisect-rustc</a> v0.6.9</summary>
Host triple: x86_64-unknown-linux-gnu
Reproduce with:
```bash
cargo bisect-rustc -vv --start=2024-11-21 --target=x86_64-unknown-linux-musl --script=run.sh
```
</details>
I definitely don't understand MIR, but I was curious enough to look at it, so in case it helps anyone else, here's what appears to be the relevant section (without the `StorageLive`/`StorageDead` calls)
<details>
<summary>succeeding</summary>
```rust
_2 = copy (_1.0: Box<GenericArray<usize, U1>>);
_3 = copy (((_2.0: Unique<GenericArray<usize, U1>>).0: NonNull<GenericArray<usize, U1>>).0:
*const GenericArray<usize, U1>);
_4 = *const [usize] from (copy _3, const 1_usize);
_6 = NonNull::<[usize]> { pointer: copy _4 };
_11 = copy _6 as *mut [usize] (Transmute);
_10 = copy _11 as *const usize (PtrToPtr);
_5 = NonNull::<usize> { pointer: move _10 };
_9 = copy _5 as *mut usize (Transmute);
_8 = Offset(copy _9, const 1_usize);
_7 = move _8 as *const usize (PtrToPtr);
drop(_1) -> [return: bb2, unwind continue];
```
</details>
<details>
<summary>failing</summary>
```rust
_2 = copy (_1.0: Box<GenericArray<usize, U1>>);
_3 = copy ((_2.0: Unique<GenericArray<usize, U1>>).0: NonNull<GenericArray<usize, U1>>)
as *const GenericArray<usize, U1> (Transmute);
_4 = *const [usize] from (copy _3, const 1_usize);
_9 = copy _4 as *const usize (Transmute);
_5 = NonNull::<usize> { pointer: move _9 };
_8 = copy _4 as *mut usize (Transmute);
_7 = Offset(copy _8, const 1_usize);
_6 = move _7 as *const usize (PtrToPtr);
drop(_1) -> [return: bb2, unwind continue];
```
</details>
| T-compiler,O-musl,C-bug,A-mir-opt,I-miscompile,A-mir-opt-GVN | low | Minor |
2,809,617,914 | flutter | Impeller Errors | ### Steps to reproduce
```
[ERROR:flutter/impeller/renderer/backend/metal/context_mtl.mm(218)] Break on 'ImpellerValidationBreak' to inspect point of failure: Could not set up the command queue.
[ERROR:flutter/shell/platform/darwin/graphics/FlutterDarwinContextMetalImpeller.mm(31)] Could not create Metal Impeller Context.
Error initializing DevFS: DevFSException(Service disconnected, _createDevFS: (-32000) Service connection disposed, null)
```
The error described above only happens on iOS. The app compiles but does not start up. On Android devices it works without problems and my application runs correctly.
### Expected results
I want it to work on iOS
### Actual results
```
Launching lib/flavors/Instituciones/CentroAsistencial/main_centro_asistencial.dart on Gema in debug mode...
Xcode build done. 95.6s
[ERROR:flutter/impeller/renderer/backend/metal/context_mtl.mm(218)] Break on 'ImpellerValidationBreak' to inspect point of failure: Could not set up the command queue.
[ERROR:flutter/shell/platform/darwin/graphics/FlutterDarwinContextMetalImpeller.mm(31)] Could not create Metal Impeller Context.
Lost connection to device.
Error initializing DevFS: DevFSException(Service disconnected, _createDevFS: (-32000) Service connection disposed, null)
Exited.
```
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/flavors/Instituciones/CentroAsistencial/main_centro_asistencial.dart on Gema in debug mode...
Xcode build done. 95.6s
[ERROR:flutter/impeller/renderer/backend/metal/context_mtl.mm(218)] Break on 'ImpellerValidationBreak' to inspect point of failure: Could not set up the command queue.
[ERROR:flutter/shell/platform/darwin/graphics/FlutterDarwinContextMetalImpeller.mm(31)] Could not create Metal Impeller Context.
Lost connection to device.
Error initializing DevFS: DevFSException(Service disconnected, _createDevFS: (-32000) Service connection disposed, null)
Exited.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.3, on macOS 14.7.2 23H311 darwin-arm64, locale es-419)
• Flutter version 3.27.3 on channel stable at /Users/user222320/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision c519ee916e (3 days ago), 2025-01-21 10:32:23 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.1)
• Android SDK at /Users/user222320/Library/Android/sdk
• Platform android-35, build-tools 35.0.1
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.4)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• Gema (mobile) • 44C97DDD-2613-44F7-B262-5EAF786AD097 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-2 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.7.2 23H311 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.7.2 23H311 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| waiting for customer response,in triage | low | Critical |
2,809,618,969 | vscode | Remotes UI is totally confusing |
Type: <b>Feature Request</b>
The "Remotes (Tunnels/SSH)" tab of REMOTES EXPLORER lists remotes, but shows no way to edit them. I forget how I added them originally and I can't right-click on them, and the "..." menu is useless.
VS Code version: Code 1.96.3 (91fbdddc47bc9c09064bf7acf133d22631cbf083, 2025-01-09T18:14:09.060Z)
OS version: Windows_NT x64 10.0.22631
Modes:
Remote OS version: Linux x64 5.4.0-204-generic
Remote OS version: Linux x64 5.15.167.4-microsoft-standard-WSL2
Connection to 'SSH: computelab-303-docker' could not be established Connecting with SSH timed out
<!-- generated by issue reporter --> | info-needed | low | Major |
2,809,620,839 | kubernetes | Pop from the backoff queue whenever the active queue is empty | ### What would you like to be added?
Disclaimer: this is a high level description of the idea without providing all necessary details, which can be worked on later whenever there is agreement regarding the general approach.
Whenever the active queue is empty, scheduler could pick the first pod from the backoff queue (the one with the shortest expiration time) and either schedule (assign) it right away or put it back to the unschedulable queue.
The next time the pod goes to the backoff queue, it should either reuse its previous backoff time if it hasn't expired yet, or have it increased as is done currently. This way, in the worst scenario, pods would have to wait out their whole backoff time to get to the active queue (be rescheduled), so this does not degrade the current behavior.
### Why is this needed?
The main purpose of the backoff queue is to prevent scheduler from processing unschedulable pods over and over again, which can lead to performance degradation of scheduling lower priority pods, including their starvation.
This problem is prominent because rescheduling of unschedulable pods engages time consuming preemption process, which basically depends on the overall number of lower priority pods (see https://github.com/kubernetes/kubernetes/pull/128466).
The higher the max backoff time used, the smaller the observed performance degradation. Unfortunately, pods waiting in the backoff queue are "punished" for their unschedulable status by additional waiting time once they eventually become schedulable. This punishment is a side effect, not a feature in itself, so this change would reduce the punishment time to the very minimum without defeating the purpose of the backoff queue.
With this feature, unschedulable pods could be rescheduled earlier, shortening the punishment time to the very minimum without impacting scheduler performance. In effect, unschedulable pods would be expected to be rescheduled soon after some event triggers rescheduling (after passing the QHints filter).
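The proposed behavior can be sketched as a tiny simulation (a hypothetical, simplified model — not the actual scheduler code): when the active queue is empty, the pod with the nearest backoff expiry is popped early, and an unexpired backoff duration is reused on the next failure instead of being increased.

```python
import heapq

class BackoffQueue:
    """Toy model: pods normally wait out their backoff, but may be
    popped early when the active queue is empty."""

    def __init__(self):
        self._heap = []  # (expiry_time, pod_name)

    def push(self, pod, expiry):
        heapq.heappush(self._heap, (expiry, pod))

    def pop_expired(self, now):
        """Normal path: move pods whose backoff expired to the active queue."""
        out = []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[1])
        return out

    def pop_earliest(self):
        """Proposed path: active queue is empty, so take the pod with the
        shortest remaining backoff even though it has not expired yet."""
        if self._heap:
            return heapq.heappop(self._heap)[1]
        return None

def next_backoff(previous, expired, max_backoff=120.0):
    # Proposed rule: reuse an unexpired backoff; otherwise grow it as today.
    return previous if not expired else min(previous * 2, max_backoff)

# Active queue empty at t=5: pod-a (expiry 10) is tried early anyway.
q = BackoffQueue()
q.push("pod-a", 10.0)
q.push("pod-b", 30.0)
assert q.pop_expired(now=5.0) == []   # nothing has expired yet
assert q.pop_earliest() == "pod-a"    # popped early because active is empty
# pod-a fails again before its old backoff expired: keep 10s, don't double.
assert next_backoff(10.0, expired=False) == 10.0
assert next_backoff(10.0, expired=True) == 20.0
```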
In the worst case, the backoff queue would work as it does currently (moving pods to active on backoff expiration), although having this feature would allow increasing `maxPodBackoffSeconds` from the default 10s to 120s, as in practice the waiting time is expected to be very small anyway. This is important because there are known reports that a 10s backoff time still does not prevent the scheduler from running into the starvation problem; the feature won't address that directly, but indirectly by allowing the default time to be safely increased. | sig/scheduling,kind/feature,needs-triage | low | Major |
2,809,622,402 | next.js | Error during dev and warning at build in middleware.ts with --turbo flag | ### Link to the code that reproduces this issue
https://github.com/sommeeeer/next-middleware-edge-turbo-node
### To Reproduce
1. start the dev server with `pnpm dev`
2. open up [http://localhost:3000](http://localhost:3000) and look for the request header: `hello-from-function`.
3. close the dev server and now run it with `--turbo` flag: `pnpm devturbo`
### Current vs. Expected behavior

So `src/lib/node-native.ts` exports two functions: one is used by `middleware.ts`, and the other is an unrelated async `hi()` function that dynamically imports a native Node.js API from `fs/promises`.
Since this function is not used in the middleware, it should not be flagged. I think it should work the same way as plain `next dev` without the `--turbo` flag.
Note that `next build` works, but with warnings.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Oct 2 16:16:55 UTC 2
Available memory (MB): 14800
Available CPU cores: 16
Binaries:
Node: 20.18.0
npm: 10.9.1
Yarn: 1.22.19
pnpm: 9.15.4
Relevant Packages:
next: 15.1.6 // Latest available version is detected (15.1.6).
eslint-config-next: 15.1.6
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack, Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | Middleware,Runtime,Turbopack | low | Critical |
2,809,681,423 | pytorch | torch.compiler.disable should have the option to raise an informative exception (other than `torch._dynamo.exc.Unsupported`) | ### 🚀 The feature, motivation and pitch
## Context
I'm working on making distributions compatible with compile. We expect that the validate step will never be compilable.
Ideally, users who run into that graph break (whether fullgraph=False and they're using TORCH_LOGS or using fullgraph=True and they look at the exception) should be guided to using `torch.distributions.Distribution.set_default_validate_args(False)` or similar to deactivate the validation step.
That's a reasonable thing to ask users to do if we tell them the pros and cons of doing that.
Currently, the error message just says "if statements are not supported" and if we `disable()` it, we'll get even less context.
In general, if there's a way out of a graph break and we can tell people about it, we should have the tools to provide informative error messages about the macro-context.
## Feature request
An option we talked about with @bdhirsh is to have some way of customize the Unsupported error message or provide a custom error (eg, ValueError or whatever) that tells people how to avoid the graph break.
Something like `validate = torch.compiler.disable(validate, custom_error=RuntimeError(msg))`
Before:
```
torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment.
```
After
```
RuntimeError: You are attempting to compile a distribution constructors with validate_args=True (default). To compile this without graph breaks, make sure to turn validate_args to False through the constructor or distributions.Distribution.set_default_validate_args.
```
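The requested behavior can be sketched in plain Python (names and semantics are hypothetical — `custom_error` does not exist in torch today, and this is not torch code): `disable` records the custom error on the wrapped function, and the compiler raises it instead of a generic `Unsupported` when tracing reaches the disabled region.

```python
class Unsupported(Exception):
    """Stand-in for torch._dynamo.exc.Unsupported."""

def disable(fn, custom_error=None):
    # Hypothetical extension of torch.compiler.disable: remember what to
    # raise if the compiler hits this function under fullgraph=True.
    fn._compile_error = custom_error or Unsupported(
        "Dynamic control flow is not supported at the moment."
    )
    return fn

def trace(fn):
    # Toy "compiler": surfaces the attached error instead of a generic one.
    err = getattr(fn, "_compile_error", None)
    if err is not None:
        raise err
    return fn

def validate(x):
    return x > 0

validate = disable(
    validate,
    custom_error=RuntimeError(
        "You are attempting to compile a distribution constructor with "
        "validate_args=True. Turn validate_args off via the constructor or "
        "distributions.Distribution.set_default_validate_args(False)."
    ),
)

try:
    trace(validate)
except RuntimeError as e:
    assert "validate_args" in str(e)  # the informative message surfaces
```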
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo,dynamo-logging | low | Critical |
2,809,705,777 | vscode | Still showing it as an issue even after i rectified it |
Type: <b>Bug</b>
I had an issue in my code, so it was underlined in red.
But even after I rectified it, it is still underlined, even though my code runs perfectly, which means I fixed the issue.
That red underline is irritating me a lot.
Please fix it.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx (8 x 2096)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|5.91GB (0.84GB free)|
|Process Argv|--crash-reporter-id 6f5931b0-12d5-4e47-877d-6e08ee63671d|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (22)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-django|bat|1.15.0
python-environment-manager|don|1.2.7
python-extension-pack|don|1.7.0
vsc-python-indent|Kev|1.19.0
python-path|mge|0.0.14
playwright|ms-|1.1.12
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
playwright-snippets|nit|1.0.1
autodocstring|njp|0.6.1
java|red|1.39.0
python-extended-snippets|tus|0.0.1
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
jinja|who|0.0.8
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyt551:31179978
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,809,707,110 | langchain | Passing runtime value args to a tool created by subclassing BaseTool: tool.invoke doesn't see the runtime arg even when provided |
### Discussed in https://github.com/langchain-ai/langchain/discussions/29411
<div type='discussions-op-text'>
<sup>Originally posted by **shruthiR-fauna** January 24, 2025</sup>
### Checked other resources
- [X] I added a very descriptive title to this question.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Annotated, Any, Dict, List, Type
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import chain
from langchain_core.tools import BaseTool, InjectedToolArg
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
class RobotDogStateManager:
# Simulated state management for the robot dog
def __init__(self):
self.battery_level = 100
self.current_location = ""
class GoToObjectInput(BaseModel):
"""
Input for going to an object, visible to the model
"""
object: str = Field(description="The object to go to")
class GoToObjectTool(BaseTool):
name: str = "go_to_object"
description: str = "Instruct the robot dog to go to a specific object"
args_schema: Type[BaseModel] = GoToObjectInput
def _run(
self,
object: str,
# Use InjectedToolArg for arguments the model shouldn't see
robot_state: Annotated[RobotDogStateManager, InjectedToolArg],
battery_threshold: Annotated[float, InjectedToolArg] = 20.0,
) -> str:
"""
Go to an object, with additional context not visible to the model
"""
# Check battery level before proceeding
if robot_state.battery_level < battery_threshold:
return (
f"Cannot go to {object}. Battery too low: {robot_state.battery_level}%"
)
# Update robot's location
robot_state.current_location = object
return f"Successfully went to {object}"
# Example usage
def main():
# Create a shared state manager
robot_state = RobotDogStateManager()
robot_state.battery_level = 10
# Create the tool
go_to_object_tool = GoToObjectTool()
# Input and output schemas for the tool
print(f"Input schema for tool: {go_to_object_tool.get_input_schema().schema()}")
# Try to invoke the tool
go_to_object_tool.invoke({'object': 'kitchen', 'robot_state': robot_state})
if __name__ == "__main__":
main()
```
### Description
I'm trying to use the LangChain library to create an agent that can serve as the coding system for a robot dog. The minimal example above is a toy version of the model I'm using.
I've created a tool `go_to_object` that can be bound to an LLM that supports tool calling. Two arguments to the tool can only be provided at runtime so they are annotated with the `InjectedToolArg` in the `_run` method.
However, when I call this tool with `go_to_object_tool.invoke`, supplying all the required args, I see the following error:
`*** TypeError: GoToObjectTool._run() missing 1 required positional argument: 'robot_state'`
I've spent a lot of time digging through the API and the docs. I think this exposes a bug in how `tool_args` and `tool_kwargs` are parsed.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #52~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Dec 9 15:00:52 UTC 2
> Python Version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.133
> langchain_fireworks: 0.2.1
> langchain_google_genai: 2.0.1
> langchain_groq: 0.2.0
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.9
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> fireworks-ai: 0.15.4
> google-generativeai: 0.8.3
> groq: 0.11.0
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pillow: 11.0.0
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2</div> | 🤖:bug,Ɑ: core | low | Critical |
2,809,727,056 | go | text/template: improve error message "incompatible types for comparison" | ### Go version
go version go1.23.5 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='amd64'
GOBIN='/home/jdemeyer/.local/bin'
GOCACHE='/home/jdemeyer/.cache/go-build'
GOENV='/home/jdemeyer/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/jdemeyer/go/pkg/mod'
GONOPROXY='bitbucket.org/be-mobile'
GONOSUMDB='bitbucket.org/be-mobile'
GOOS='linux'
GOPATH='/home/jdemeyer/go'
GOPRIVATE='bitbucket.org/be-mobile'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/snap/go/10818'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/snap/go/10818/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.5'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/jdemeyer/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v3'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1587861785=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Actually, this problem came via using `helm` to template a Helm chart, but this is the essence of the problem in pure Go:
```
package main
import (
"io"
"text/template"
)
func main() {
const tmpl = "{{ gt .a .b }}"
var t = template.Must(template.New("test").Parse(tmpl))
args := map[string]any{
"a": 1.0, // Intentionally incomparable types
"b": 2,
}
err := t.Execute(io.Discard, args)
if err != nil {
panic(err)
}
}
```
### What did you see happen?
```
panic: template: test:1:3: executing "test" at <gt .a .b>: error calling gt: incompatible types for comparison
```
### What did you expect to see?
```
panic: template: test:1:3: executing "test" at <gt .a .b>: error calling gt: incompatible types for comparison: float64, int
```
It would be helpful if the error message included *which* types are causing the incompatibility. When using `text/template` via Helm, there are many layers in between, so the bare message `incompatible types for comparison` can be hard to debug. | NeedsInvestigation,LibraryProposal | low | Critical |
2,809,731,043 | vscode | KEY BOARD SHORTCUTS NOT WORKING PROPERLY |
Type: <b>Bug</b>
Some keyboard shortcuts are not working properly, especially GitHub Copilot (Copilot inline chat = Ctrl+I). Please fix this issue or suggest a way to correct it.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz (8 x 2112)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|23.78GB (13.54GB free)|
|Process Argv|C:\\Users\\hrish\\OneDrive\\Desktop\\script --crash-reporter-id d0290b4b-aeb7-43f6-b823-9cd419c9dd6e|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (37)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-tailwindcss|bra|0.14.1
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.10.0
es7-react-js-snippets|dsz|4.4.3
vscode-html-css|ecm|2.0.13
prettier-vscode|esb|11.0.0
auto-close-tag|for|0.5.15
auto-rename-tag|for|0.1.10
copilot|Git|1.259.0
copilot-chat|Git|0.23.2
go|gol|0.44.0
mongodb-vscode|mon|1.11.0
vscode-docker|ms-|1.29.4
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
remote-containers|ms-|0.394.0
remote-wsl|ms-|0.88.5
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
js-atom-grammar|ms-|0.1.14
js-debug-nightly|ms-|2025.1.817
live-server|ms-|0.4.15
vscode-typescript-next|ms-|5.8.20250123
vsliveshare|ms-|1.0.5948
vscode-react-native|msj|1.13.0
oracle-java|Ora|23.0.1
LiveServer|rit|5.7.9
vscode-standard|sta|2.1.3
cmake|twx|0.0.17
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-java-debug|vsc|0.58.1
vim|vsc|1.29.0
volar|Vue|2.2.0
html-css-class-completion|Zig|1.20.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,809,734,777 | go | crypto/internal/fips140/nistec: p256NegCond is variable time on ppc64le [1.22 backport] | @rolandshoemaker requested issue #71383 to be considered for backport to the next 1.22 minor release.
> We'll backport this to the next minor releases.
>
> @gopherbot please open backport issues, this is a minor security issue.
| Security,CherryPickCandidate | low | Minor |
2,809,734,810 | go | crypto/internal/fips140/nistec: p256NegCond is variable time on ppc64le [1.23 backport] | @rolandshoemaker requested issue #71383 to be considered for backport to the next 1.23 minor release.
> We'll backport this to the next minor releases.
>
> @gopherbot please open backport issues, this is a minor security issue.
| Security,CherryPickCandidate | low | Minor |
2,809,758,666 | TypeScript | Text between Unicode escapes within an identifier is skipped | ### 🔎 Search Terms
- Unicode escape sequences
- extended Unicode escapes
- variables, constants, property names, JSX identifiers, JSX attribute names…
- regular expression group names, RegExp identifiers
- bundling & transpilation
### 🕗 Version & Regression Information
- This is the behavior since #32725 lands
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/MYewdgzgLgBAhgHQK4G8BsAmAvsZ6AsWApjALwz4YDcAUHAEbAAmRVMA9OzAKIBOvIXgC4YAYThgwIWADMAlmCYwwcALYkA5A2ZENAOhgAROUoCeIJDHUSYWvJhz3CugPx16LNpxgA5EDCJ+QRpQSFg8AHY0LDheewBOGPoAGxJySloAN1i5BlSvLj4BYTEJKVkFJRV1W2zeXJTdA2MzCysiGw1I6NiEpNSNN0yG-I4uPwCg3hogA
### 💻 Code
```ts
const a\u{62}c\u{64}e = 42;
abcde; // Error: Cannot find name 'abcde'. Did you mean 'a\u{62}c\u{64}e'?
abde; // No error
const \u{76}ar\u{69}able = 42;
variable; // Error: Cannot find name 'variable'. Did you mean '\u{76}ar\u{69}able'?
viable; // No error
```
### 🙁 Actual behavior
`abde` and `viable` are recognised but not `abcde` and `variable`.
### 🙂 Expected behavior
`abcde` and `variable` are recognised but not `abde` and `viable`.
### Additional information about the issue
Precisely, text between a <ins>4-digit or extended</ins> Unicode escape and an <ins>extended</ins> Unicode escape is ignored.
(This is not the case for text between a <ins>4-digit or extended</ins> Unicode escape and a <ins>4-digit</ins> Unicode escape.)
This happens with all kinds of identifiers, not just variable names, due to the missing line `result += text.substring(start, pos);`. This issue is opened just for trackability and is fixed in #61042.
| Bug,Help Wanted | low | Critical |