Column schema (dataset viewer summary):
id: int64 (393k to 2.82B)
repo: stringclasses (68 values)
title: stringlengths (1 to 936)
body: stringlengths (0 to 256k; nullable, shown as ⌀)
labels: stringlengths (2 to 508)
priority: stringclasses (3 values)
severity: stringclasses (3 values)
367,430,397
godot
Mono: Exported projects don't include runtime assemblies from NuGet packages for the target platform
**Godot version:** 3.0.6, 3.1 Alpha Snapshot and Master **OS/device including version:** Windows 10 **Issue description:** When you use a NuGet package it'll properly add the initial NuGet package DLLs, but on export it doesn't include the runtime DLLs (if they exist) in the assemblies folder. **Steps to reproduce:** 1. Create a new project that uses a package with runtime libraries, such as sqlite 2. Export the project 3. Notice that your app is asking about a missing DLL **Minimal reproduction project:** Here's my current project; it includes the packages folder to save time. You'll notice inside the packages/sqlite core folder there's a `runtimes` folder with DLLs for each individual platform under `linux-x64` `osx-x64` `win-x64` `win-x32`. Each contains the corresponding DLLs https://g4mr.itch.io/brickhouse (30mb) **Notes** - A workaround would be to manually check each package library and move the runtimes for each platform over to its appropriate folder. - NuGet packages with platform-specific runtime libs have no problem running inside the editor.
bug,confirmed,topic:dotnet
medium
Major
367,437,047
rust
Proc-macros have no way to determine if their invocation context is `no_std`
While writing [this response on Reddit](https://www.reddit.com/r/rust/comments/9lu9xo/_/e79ivl0?context=1000) I realized that there's no way for a proc-macro to determine if it's being invoked from `no_std` code and thus should use types from `core` instead of `std`. I'm not sure what this would look like, maybe a free function in `proc_macro`. cc @dtolnay
A-macros,T-lang,C-feature-request,A-proc-macros
low
Minor
367,441,529
go
cmd/compile: clearer error message for unkeyed composite literal with unexported field
I have a struct: ``` package df type SortKey struct { Key interface{} SortDesc bool seriesIndex int } ``` From an outside package: **TRY 1** ``` z := df.SortKey{"n", false} ``` `Compiler error: too few values in df.SortKey literal` **TRY 2** ``` z := df.SortKey{"n", false, 0} ``` `Compiler error: implicit assignment of unexported field 'seriesIndex' in df.SortKey literal` You can't win. Perhaps it should just be disallowed for Go 2 - it messes up backwards compatibility anyway amongst minor versions of packages (e.g. the net/http.Request and Context situation).
help wanted,NeedsFix
low
Critical
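The way out of the bind described in this issue is a keyed composite literal: fields that are not mentioned, including the unexported one, simply take their zero values. A minimal single-file sketch (the struct is copied from the report but collapsed into one package, so it only illustrates the keyed-literal form, not the cross-package errors themselves):

```go
package main

import "fmt"

// SortKey mirrors the struct from the report. In the real scenario it lives
// in package df, so seriesIndex cannot be set from outside the package at all.
type SortKey struct {
	Key         interface{}
	SortDesc    bool
	seriesIndex int
}

func main() {
	// A keyed literal avoids both compiler errors: any field that is not
	// mentioned (here the unexported seriesIndex) is simply zero-valued.
	z := SortKey{Key: "n", SortDesc: false}
	fmt.Println(z.Key, z.SortDesc, z.seriesIndex)
}
```

This is why keyed literals are also the form that survives a package adding fields in a minor version, the compatibility concern raised at the end of the report.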
367,444,280
go
runtime: fatal error: found bad pointer in Go heap (incorrect use of unsafe or cgo?) on 386 FreeBSD after CL 138595
https://build.golang.org/log/dc8386895fee1c38f34eb9376c42f013617a2b29 https://build.golang.org/log/0c46001a74e64da259c54f716f30cc5455b97788 https://build.golang.org/log/8d212ce13ba6fc50c4ca2f0ecc386624164e7913 ``` runtime: pointer 0x39d9a000 to unallocated span span.base()=0x39d9a000 span.limit=0x0 span.state=3 runtime: found in object at *(0x39ab2c78+0x4) object=0x39ab2c78 s.base()=0x39aac000 s.limit=0x39ab4000 s.spanclass=0 s.elemsize=16384 s.state=mSpanManual *(object+0) = 0x39d911b0 *(object+4) = 0x39d9a000 <== *(object+8) = 0x39d91008 *(object+12) = 0x39d98000 *(object+16) = 0x39d91000 *(object+20) = 0x39d91008 *(object+24) = 0x100 *(object+28) = 0x100 *(object+32) = 0x80ae0ca *(object+36) = 0x39d91000 *(object+40) = 0x1b0 *(object+44) = 0x800 *(object+48) = 0x39d98000 *(object+52) = 0x2000 *(object+56) = 0x2000 *(object+60) = 0x0 *(object+64) = 0x0 *(object+68) = 0x800 *(object+72) = 0x0 *(object+76) = 0x39d91000 *(object+80) = 0x0 *(object+84) = 0x80ac08b *(object+88) = 0x3 *(object+92) = 0x39d98000 *(object+96) = 0x2000 *(object+100) = 0x2000 *(object+104) = 0x39ab2cf0 *(object+108) = 0x806e103 *(object+112) = 0x3988e10c *(object+116) = 0x39ab2d04 *(object+120) = 0x0 *(object+124) = 0x0 *(object+128) = 0x80c68c4 *(object+132) = 0x3 *(object+136) = 0x39d98000 *(object+140) = 0x2000 *(object+144) = 0x2000 *(object+148) = 0x844d520 *(object+152) = 0x39bde301 *(object+156) = 0x39a63880 *(object+160) = 0x80c8a7f *(object+164) = 0x399c9980 *(object+168) = 0x39d98000 *(object+172) = 0x2000 *(object+176) = 0x2000 *(object+180) = 0x0 *(object+184) = 0x0 *(object+188) = 0x0 *(object+192) = 0x8 *(object+196) = 0x8075ec8 *(object+200) = 0x845d740 *(object+204) = 0x2 *(object+208) = 0x845a840 *(object+212) = 0x8443380 *(object+216) = 0x0 *(object+220) = 0x64 *(object+224) = 0x2 *(object+228) = 0x39c53b00 *(object+232) = 0x39a63880 *(object+236) = 0x80c8647 *(object+240) = 0x39bde370 *(object+244) = 0xffffffff *(object+248) = 0x845a840 *(object+252) = 0x80cd731 
*(object+256) = 0x845a840 *(object+260) = 0x399c9980 *(object+264) = 0x8443380 *(object+268) = 0x80c86fc *(object+272) = 0x39bde370 *(object+276) = 0xffffffff *(object+280) = 0x0 *(object+284) = 0x39bde370 *(object+288) = 0x80cd8f4 *(object+292) = 0x3 *(object+296) = 0x39d734d0 *(object+300) = 0x21 *(object+304) = 0x1 *(object+308) = 0x39bde370 *(object+312) = 0x0 *(object+316) = 0x0 *(object+320) = 0x21 *(object+324) = 0x0 *(object+328) = 0x0 *(object+332) = 0x39d73540 *(object+336) = 0x80cbe37 *(object+340) = 0x39d734d0 *(object+344) = 0x21 *(object+348) = 0x0 *(object+352) = 0x39d734d0 *(object+356) = 0x39bde370 *(object+360) = 0x0 *(object+364) = 0x0 *(object+368) = 0x80c85b7 *(object+372) = 0x39bde370 *(object+376) = 0xffffffff *(object+380) = 0x0 *(object+384) = 0x0 *(object+388) = 0x39bde370 *(object+392) = 0x0 *(object+396) = 0x0 *(object+400) = 0x81394bf *(object+404) = 0x39bde370 *(object+408) = 0xffffffff *(object+412) = 0x39bde370 *(object+416) = 0x0 *(object+420) = 0x0 *(object+424) = 0x39d73530 *(object+428) = 0x2f *(object+432) = 0x0 *(object+436) = 0x810e514 *(object+440) = 0x39d73530 *(object+444) = 0x2f *(object+448) = 0x39bde370 *(object+452) = 0x2f *(object+456) = 0x1 *(object+460) = 0x39d73530 *(object+464) = 0x2f *(object+468) = 0x810d41b *(object+472) = 0x39d75410 *(object+476) = 0x2 *(object+480) = 0x2 *(object+484) = 0x817d2f1 *(object+488) = 0x39d734d0 *(object+492) = 0x21 *(object+496) = 0x39d75410 *(object+500) = 0x2 *(object+504) = 0x2 *(object+508) = 0x39d73530 ... fatal error: found bad pointer in Go heap (incorrect use of unsafe or cgo?) 
runtime stack: runtime.throw(0x84d5035, 0x3e) /tmp/workdir/go/src/runtime/panic.go:608 +0x64 fp=0xfb9f6af8 sp=0xfb9f6ae4 pc=0x806f0f4 runtime.findObject(0x39d9a000, 0x39ab2c78, 0x4, 0x28b94fac, 0x39822960, 0x2) /tmp/workdir/go/src/runtime/mbitmap.go:399 +0x32d fp=0xfb9f6b1c sp=0xfb9f6af8 pc=0x80597cd runtime.scanblock(0x39ab2c78, 0x20, 0x850de64, 0x39822960, 0xfb9f6db8) /tmp/workdir/go/src/runtime/mgcmark.go:1057 +0x8d fp=0xfb9f6b48 sp=0xfb9f6b1c pc=0x80641fd runtime.scanframeworker(0xfb9f6d38, 0xfb9f6db8, 0x39822960) /tmp/workdir/go/src/runtime/mgcmark.go:793 +0x126 fp=0xfb9f6b88 sp=0xfb9f6b48 pc=0x8063956 runtime.scanstack.func1(0xfb9f6d38, 0x0, 0x8821ee0) /tmp/workdir/go/src/runtime/mgcmark.go:708 +0x29 fp=0xfb9f6b98 sp=0xfb9f6b88 pc=0x8092169 runtime.gentraceback(0xffffffff, 0xffffffff, 0x0, 0x398001c0, 0x0, 0x0, 0x7fffffff, 0xfb9f6dac, 0x0, 0x0, ...) /tmp/workdir/go/src/runtime/traceback.go:341 +0x100e fp=0xfb9f6d68 sp=0xfb9f6b98 pc=0x808bd4e runtime.scanstack(0x398001c0, 0x39822960) /tmp/workdir/go/src/runtime/mgcmark.go:711 +0x147 fp=0xfb9f6e9c sp=0xfb9f6d68 pc=0x80633f7 runtime.newstack() /tmp/workdir/go/src/runtime/stack.go:1019 +0x2aa fp=0xfb9f6f64 sp=0xfb9f6e9c pc=0x8083fda runtime.morestack() /tmp/workdir/go/src/runtime/asm_386.s:475 +0x76 fp=0xfb9f6f68 sp=0xfb9f6f64 pc=0x8093f96 goroutine 1 [GC assist marking (scan)]: syscall.clen(0x39d91008, 0x100, 0x100, 0x800) /tmp/workdir/go/src/syscall/syscall_unix.go:35 +0x3d fp=0x39ab2c60 sp=0x39ab2c5c pc=0x80ae58d syscall.convertFromDirents11(0x39d91000, 0x1b0, 0x800, 0x39d98000, 0x2000, 0x2000, 0x0) /tmp/workdir/go/src/syscall/syscall_freebsd.go:371 +0x12f fp=0x39ab2c9c sp=0x39ab2c60 pc=0x80ae26f syscall.Getdirentries(0x3, 0x39d98000, 0x2000, 0x2000, 0x39ab2cf0, 0x806e103, 0x3988e10c, 0x39ab2d04) /tmp/workdir/go/src/syscall/syscall_freebsd.go:265 +0xfa fp=0x39ab2cd0 sp=0x39ab2c9c pc=0x80ae0ca syscall.ReadDirent(0x3, 0x39d98000, 0x2000, 0x2000, 0x844d520, 0x39bde301, 0x39a63880) 
/tmp/workdir/go/src/syscall/syscall_bsd.go:71 +0x4b fp=0x39ab2cfc sp=0x39ab2cd0 pc=0x80ac08b internal/poll.(*FD).ReadDirent(0x399c9980, 0x39d98000, 0x2000, 0x2000, 0x0, 0x0, 0x0) /tmp/workdir/go/src/internal/poll/fd_unix.go:416 +0x94 fp=0x39ab2d1c sp=0x39ab2cfc pc=0x80c68c4 os.(*File).readdirnames(0x39bde370, 0xffffffff, 0x845a840, 0x80cd731, 0x845a840, 0x399c9980, 0x8443380) /tmp/workdir/go/src/os/dir_unix.go:68 +0x14f fp=0x39ab2d68 sp=0x39ab2d1c pc=0x80c8a7f os.(*File).Readdirnames(0x39bde370, 0xffffffff, 0x0, 0x39bde370, 0x80cd8f4, 0x3, 0x39d734d0) /tmp/workdir/go/src/os/dir.go:45 +0x27 fp=0x39ab2d88 sp=0x39ab2d68 pc=0x80c8647 os.(*File).readdir(0x39bde370, 0xffffffff, 0x0, 0x0, 0x39bde370, 0x0, 0x0) /tmp/workdir/go/src/os/dir_unix.go:25 +0x4c fp=0x39ab2dec sp=0x39ab2d88 pc=0x80c86fc os.(*File).Readdir(0x39bde370, 0xffffffff, 0x39bde370, 0x0, 0x0, 0x39d73530, 0x2f) /tmp/workdir/go/src/os/dir.go:26 +0x27 fp=0x39ab2e0c sp=0x39ab2dec pc=0x80c85b7 io/ioutil.ReadDir(0x39d734d0, 0x21, 0x39d75410, 0x2, 0x2, 0x39d73530, 0x2f) /tmp/workdir/go/src/io/ioutil/ioutil.go:101 +0x4f fp=0x39ab2e60 sp=0x39ab2e0c pc=0x81394bf go/build.(*Context).readDir(0x8854c80, 0x39d734d0, 0x21, 0x0, 0x39d73530, 0x2f, 0x4, 0x39bde360) /tmp/workdir/go/src/go/build/build.go:179 +0x71 fp=0x39ab2e80 sp=0x39ab2e60 pc=0x817d2f1 go/build.(*Context).Import(0x8854c80, 0x39be74a1, 0xd, 0x3981eb00, 0x1d, 0x4, 0x2, 0x39b79900, 0x39ab3cfc) /tmp/workdir/go/src/go/build/build.go:739 +0x5ca fp=0x39ab3314 sp=0x39ab2e80 pc=0x817e83a cmd/go/internal/load.LoadImport(0x39be74a1, 0xd, 0x3981eb00, 0x1d, 0x39bf3680, 0x39ab3cfc, 0x39c53760, 0x1, 0x1, 0x1, ...) 
/tmp/workdir/go/src/cmd/go/internal/load/pkg.go:544 +0x152d fp=0x39ab3458 sp=0x39ab3314 pc=0x81b671d cmd/go/internal/load.(*Package).load(0x39bf3680, 0x39ab3cfc, 0x39bf4d00, 0x0, 0x0) /tmp/workdir/go/src/cmd/go/internal/load/pkg.go:1410 +0xa17 fp=0x39ab37b4 sp=0x39ab3458 pc=0x81b9dc7 cmd/go/internal/load.LoadImport(0x399a5d66, 0x9, 0x399c4d80, 0x14, 0x39954280, 0x39ab3cfc, 0x399faa40, 0x1, 0x1, 0x1, ...) /tmp/workdir/go/src/cmd/go/internal/load/pkg.go:556 +0xf14 fp=0x39ab38f8 sp=0x39ab37b4 pc=0x81b6104 cmd/go/internal/load.(*Package).load(0x39954280, 0x39ab3cfc, 0x39950680, 0x0, 0x0) /tmp/workdir/go/src/cmd/go/internal/load/pkg.go:1410 +0xa17 fp=0x39ab3c54 sp=0x39ab38f8 pc=0x81b9dc7 cmd/go/internal/load.GoFilesPackage(0x39878078, 0x1, 0x1, 0x0) /tmp/workdir/go/src/cmd/go/internal/load/pkg.go:2002 +0x693 fp=0x39ab3da4 sp=0x39ab3c54 pc=0x81bfe23 cmd/go/internal/run.runRun(0x884fa80, 0x39878078, 0x1, 0x1) /tmp/workdir/go/src/cmd/go/internal/run/run.go:78 +0x22f fp=0x39ab3eac sp=0x39ab3da4 pc=0x83e53bf main.main() /tmp/workdir/go/src/cmd/go/main.go:219 +0x8de fp=0x39ab3fd0 sp=0x39ab3eac pc=0x83f795e runtime.main() /tmp/workdir/go/src/runtime/proc.go:201 +0x1d5 fp=0x39ab3ff0 sp=0x39ab3fd0 pc=0x8070775 runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x39ab3ff4 sp=0x39ab3ff0 pc=0x8095721 goroutine 2 [force gc (idle)]: runtime.gopark(0x84f86e4, 0x8854420, 0x1410, 0x1) /tmp/workdir/go/src/runtime/proc.go:302 +0xd8 fp=0x3982cfc8 sp=0x3982cfb4 pc=0x8070b28 runtime.goparkunlock(0x8854420, 0x1410, 0x1) /tmp/workdir/go/src/runtime/proc.go:308 +0x3f fp=0x3982cfdc sp=0x3982cfc8 pc=0x8070bbf runtime.forcegchelper() /tmp/workdir/go/src/runtime/proc.go:251 +0xa3 fp=0x3982cff0 sp=0x3982cfdc pc=0x80709c3 runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x3982cff4 sp=0x3982cff0 pc=0x8095721 created by runtime.init.4 /tmp/workdir/go/src/runtime/proc.go:240 +0x25 goroutine 3 [GC sweep wait]: runtime.gopark(0x84f86e4, 0x8854630, 0x809140c, 0x1) 
/tmp/workdir/go/src/runtime/proc.go:302 +0xd8 fp=0x3982d7c4 sp=0x3982d7b0 pc=0x8070b28 runtime.goparkunlock(0x8854630, 0x856140c, 0x1) /tmp/workdir/go/src/runtime/proc.go:308 +0x3f fp=0x3982d7d8 sp=0x3982d7c4 pc=0x8070bbf runtime.bgsweep(0x39852000) /tmp/workdir/go/src/runtime/mgcsweep.go:71 +0xe3 fp=0x3982d7e8 sp=0x3982d7d8 pc=0x8065753 runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x3982d7ec sp=0x3982d7e8 pc=0x8095721 created by runtime.gcenable /tmp/workdir/go/src/runtime/mgc.go:208 +0x43 goroutine 18 [finalizer wait]: runtime.gopark(0x84f86e4, 0x886730c, 0x140f, 0x1) /tmp/workdir/go/src/runtime/proc.go:302 +0xd8 fp=0x39828794 sp=0x39828780 pc=0x8070b28 runtime.goparkunlock(0x886730c, 0x140f, 0x1) /tmp/workdir/go/src/runtime/proc.go:308 +0x3f fp=0x398287a8 sp=0x39828794 pc=0x8070bbf runtime.runfinq() /tmp/workdir/go/src/runtime/mfinal.go:175 +0x7c fp=0x398287f0 sp=0x398287a8 pc=0x805cb3c runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x398287f4 sp=0x398287f0 pc=0x8095721 created by runtime.createfing /tmp/workdir/go/src/runtime/mfinal.go:156 +0x5a goroutine 19 [syscall]: runtime.notetsleepg(0x8867700, 0xffffffff, 0xffffffff, 0x8049601) /tmp/workdir/go/src/runtime/lock_futex.go:227 +0x24 fp=0x3982c7c4 sp=0x3982c7ac pc=0x8051e44 os/signal.signal_recv(0x0) /tmp/workdir/go/src/runtime/sigqueue.go:139 +0x129 fp=0x3982c7dc sp=0x3982c7c4 pc=0x8081f19 os/signal.loop() /tmp/workdir/go/src/os/signal/signal_unix.go:23 +0x14 fp=0x3982c7f0 sp=0x3982c7dc pc=0x818d6f4 runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x3982c7f4 sp=0x3982c7f0 pc=0x8095721 created by os/signal.init.0 /tmp/workdir/go/src/os/signal/signal_unix.go:29 +0x31 goroutine 34 [GC worker (idle)]: runtime.gopark(0x84f8638, 0x3987c100, 0xffff1417, 0x0) /tmp/workdir/go/src/runtime/proc.go:302 +0xd8 fp=0x39828f9c sp=0x39828f88 pc=0x8070b28 runtime.gcBgMarkWorker(0x39822000) /tmp/workdir/go/src/runtime/mgc.go:1729 +0xd3 fp=0x39828fe8 sp=0x39828f9c 
pc=0x8060843 runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x39828fec sp=0x39828fe8 pc=0x8095721 created by runtime.gcBgMarkStartWorkers /tmp/workdir/go/src/runtime/mgc.go:1677 +0x5b goroutine 35 [GC worker (idle)]: runtime.gopark(0x84f8638, 0x39be7938, 0xffff1417, 0x0) /tmp/workdir/go/src/runtime/proc.go:302 +0xd8 fp=0x39c6c79c sp=0x39c6c788 pc=0x8070b28 runtime.gcBgMarkWorker(0x39823300) /tmp/workdir/go/src/runtime/mgc.go:1729 +0xd3 fp=0x39c6c7e8 sp=0x39c6c79c pc=0x8060843 runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x39c6c7ec sp=0x39c6c7e8 pc=0x8095721 created by runtime.gcBgMarkStartWorkers /tmp/workdir/go/src/runtime/mgc.go:1677 +0x5b goroutine 36 [GC worker (idle)]: runtime.systemstack_switch() /tmp/workdir/go/src/runtime/asm_386.s:357 fp=0x39c6cf9c sp=0x39c6cf98 pc=0x8093e90 runtime.gcBgMarkWorker(0x39824600) /tmp/workdir/go/src/runtime/mgc.go:1783 +0x19a fp=0x39c6cfe8 sp=0x39c6cf9c pc=0x806090a runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x39c6cfec sp=0x39c6cfe8 pc=0x8095721 created by runtime.gcBgMarkStartWorkers /tmp/workdir/go/src/runtime/mgc.go:1677 +0x5b goroutine 37 [GC worker (idle)]: runtime.gopark(0x84f8638, 0x39be7948, 0xffff1417, 0x0) /tmp/workdir/go/src/runtime/proc.go:302 +0xd8 fp=0x39c6d79c sp=0x39c6d788 pc=0x8070b28 runtime.gcBgMarkWorker(0x39825900) /tmp/workdir/go/src/runtime/mgc.go:1729 +0xd3 fp=0x39c6d7e8 sp=0x39c6d79c pc=0x8060843 runtime.goexit() /tmp/workdir/go/src/runtime/asm_386.s:1324 +0x1 fp=0x39c6d7ec sp=0x39c6d7e8 pc=0x8095721 created by runtime.gcBgMarkStartWorkers /tmp/workdir/go/src/runtime/mgc.go:1677 +0x5b ``` Does this code needs to mark `sl` with runtime.KeepAlive and/or keep a reference to the casted `*(*[]byte)(unsafe.Pointer(&sl))`? 
https://github.com/golang/go/blob/2294e3ebd374e18b191d0e8d8d32c46b0a1ef961/src/syscall/syscall_freebsd.go#L370-L375 (I couldn't reproduce this with a simple test doing ioutil.ReadDir + runtime.GC calls) /cc @ianlancetaylor @bradfitz
OS-FreeBSD,NeedsInvestigation,compiler/runtime
low
Critical
367,478,971
go
cmd/gofmt: unaligned end-of-line comments in switch cases
### What version of Go are you using (`go version`)? go version go1.11.1 windows/amd64 ### Does this issue reproduce with the latest release? Yes. ### What did you do? https://play.golang.org/p/hMeljDlfiZJ ### What did you expect to see? When I click Format, all end-of-line comments should be aligned in the same column. ### What did you see instead? The first comment is not aligned with the other ones.
NeedsInvestigation
low
Minor
367,482,850
rust
Rustdoc: allow collapsing "Methods from Deref" blocks
Similar to how you can fold/hide all methods from an impl block, it would be nice if you could collapse / hide an entire "Methods from Deref" block.
T-rustdoc,C-feature-request
low
Minor
367,483,501
rust
rustc 1.30 beta 12 incremental compilation hangs on win 7
I'm seeing frequent rustc freezes (0% CPU) with incremental compilation enabled on Windows 7. This happens about every 2nd or 3rd build attempt. Once this happens (and after killing the frozen rustc), any further attempts hang forever as well (i.e. I waited 5 minutes). `cargo clean` resolves the issue until it happens again a few builds later, setting CARGO_INCREMENTAL=0 appears to completely prevent it. This might be a duplicate of/related to #54627. I have attached a debugger to get the traces, but I don't have any symbols for rustc, so this is probably useless: ``` > ntdll.dll!NtWaitForKeyedEvent() ntdll.dll!RtlSleepConditionVariableSRW() kernel32.dll!SleepConditionVariableSRW() std-955ac6734338b235.dll!000007fedb3c7fa4() std-955ac6734338b235.dll!000007fedb3a7781() rustc_codegen_llvm-llvm.dll!000007fed71f49d2() rustc_codegen_llvm-llvm.dll!000007fed71c8283() rustc_codegen_llvm-llvm.dll!000007fed718739c() rustc_codegen_llvm-llvm.dll!000007fed71c2720() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc915108() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc9384ac() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc8d8fb3() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc8d4e36() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc987961() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc9e27e0() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc92e728() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc9c7908() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc92c325() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc9c6588() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc92b665() std-955ac6734338b235.dll!000007fedb3da2a2() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc9c37b3() rustc_driver-7ae0c7f4e267e2e6.dll!000007fedc9d5f8e() rustc.exe!000000013f071056() std-955ac6734338b235.dll!000007fedb3a8477() std-955ac6734338b235.dll!000007fedb3da2a2() std-955ac6734338b235.dll!000007fedb3ba4c3() rustc.exe!000000013f07104a() rustc.exe!000000013f071299() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > 
ntdll.dll!NtWaitForMultipleObjects() KernelBase.dll!WaitForMultipleObjectsEx() kernel32.dll!WaitForMultipleObjects() rustc-cfefa64b59573cce.dll!000007fedbea2476() rustc-cfefa64b59573cce.dll!000007fedbea2768() std-955ac6734338b235.dll!000007fedb3da2a2() rustc-cfefa64b59573cce.dll!000007fedbe9e662() std-955ac6734338b235.dll!000007fedb3d8672() std-955ac6734338b235.dll!000007fedb3b48e9() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!NtWaitForKeyedEvent() ntdll.dll!RtlSleepConditionVariableSRW() kernel32.dll!SleepConditionVariableSRW() std-955ac6734338b235.dll!000007fedb3c7fa4() std-955ac6734338b235.dll!000007fedb3a7781() rustc_codegen_llvm-llvm.dll!000007fed70cfb2a() rustc_codegen_llvm-llvm.dll!000007fed71c78f8() rustc_codegen_llvm-llvm.dll!000007fed70c9384() rustc_codegen_llvm-llvm.dll!000007fed70e3346() std-955ac6734338b235.dll!000007fedb3da2a2() rustc_codegen_llvm-llvm.dll!000007fed70d4e59() std-955ac6734338b235.dll!000007fedb3d8672() std-955ac6734338b235.dll!000007fedb3b48e9() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() rustc_codegen_llvm-llvm.dll!000007fed8af53a0() rustc_codegen_llvm-llvm.dll!000007fed8af564b() rustc_codegen_llvm-llvm.dll!000007fed8af554e() ntdll.dll!RtlProcessFlsData() ntdll.dll!LdrShutdownThread() ntdll.dll!RtlExitUserThread() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrGetProcedureAddressEx() ntdll.dll!LdrGetProcedureAddress() KernelBase.dll!GetProcAddress() rustc_codegen_llvm-llvm.dll!000007fed8af6236() rustc_codegen_llvm-llvm.dll!000007fed8af6bdc() rustc_codegen_llvm-llvm.dll!000007fed8af2e9a() rustc_codegen_llvm-llvm.dll!000007fed8ad3d7d() rustc_codegen_llvm-llvm.dll!000007fed73bc2fe() rustc_codegen_llvm-llvm.dll!000007fed73bc203() 
rustc_codegen_llvm-llvm.dll!000007fed73bb9e1() rustc_codegen_llvm-llvm.dll!000007fed752c6e7() rustc_codegen_llvm-llvm.dll!000007fed7c20cbc() rustc_codegen_llvm-llvm.dll!000007fed7c19b56() rustc_codegen_llvm-llvm.dll!000007fed7c239d3() rustc_codegen_llvm-llvm.dll!000007fed733a810() rustc_codegen_llvm-llvm.dll!000007fed72469f9() rustc_codegen_llvm-llvm.dll!000007fed70f40bf() rustc_codegen_llvm-llvm.dll!000007fed70cd5d2() rustc_codegen_llvm-llvm.dll!000007fed70e3399() std-955ac6734338b235.dll!000007fedb3da2a2() rustc_codegen_llvm-llvm.dll!000007fed70d4b9a() std-955ac6734338b235.dll!000007fedb3d8672() std-955ac6734338b235.dll!000007fedb3b48e9() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrShutdownThread() ntdll.dll!RtlExitUserThread() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrShutdownThread() ntdll.dll!RtlExitUserThread() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrShutdownThread() ntdll.dll!RtlExitUserThread() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrShutdownThread() ntdll.dll!RtlExitUserThread() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrpInitializeThread() ntdll.dll!_LdrpInitialize() ntdll.dll!LdrInitializeThunk() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() 
ntdll.dll!LdrpInitializeThread() ntdll.dll!_LdrpInitialize() ntdll.dll!LdrInitializeThunk() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrpInitializeThread() ntdll.dll!_LdrpInitialize() ntdll.dll!LdrInitializeThunk() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() rustc_codegen_llvm-llvm.dll!000007fed8af53e0() rustc_codegen_llvm-llvm.dll!000007fed8af5533() rustc_codegen_llvm-llvm.dll!000007fed8af5926() rustc_codegen_llvm-llvm.dll!000007fed8af13c9() rustc_codegen_llvm-llvm.dll!000007fed8ad0bf6() rustc_codegen_llvm-llvm.dll!000007fed8ad0471() rustc_codegen_llvm-llvm.dll!000007fed8ad06a1() ntdll.dll!LdrpInitializeThread() ntdll.dll!_LdrpInitialize() ntdll.dll!LdrInitializeThunk() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() rustc_codegen_llvm-llvm.dll!000007fed8af53a0() rustc_codegen_llvm-llvm.dll!000007fed8af564b() rustc_codegen_llvm-llvm.dll!000007fed8af554e() ntdll.dll!RtlProcessFlsData() ntdll.dll!LdrShutdownThread() ntdll.dll!RtlExitUserThread() kernel32.dll!BaseThreadInitThunk() ntdll.dll!RtlUserThreadStart() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrpInitializeThread() ntdll.dll!_LdrpInitialize() ntdll.dll!LdrInitializeThunk() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrpInitializeThread() ntdll.dll!_LdrpInitialize() ntdll.dll!LdrInitializeThunk() > ntdll.dll!ZwWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!LdrpInitializeThread() ntdll.dll!_LdrpInitialize() ntdll.dll!LdrInitializeThunk() ```
P-medium,T-compiler,regression-from-stable-to-stable,O-windows-7
medium
Critical
367,510,599
go
runtime: L2 arena maps are not accounted for in any runtime.MemStats field
### What version of Go are you using (`go version`)? `go version devel +2bb91e093c`, but this is also true in go 1.11.1 and every version since ec252105645 landed. ### Does this issue reproduce with the latest release? Yes. ### What operating system and processor architecture are you using (`go env`)? linux/amd64, but the issue is general. ### What did you do? Build and run this program on linux/amd64 and compare its reported value for MemStats.Sys against the value that e.g. `pmap` will report. ````package main import ( "fmt" "os" "runtime" "time" ) func main() { var ms runtime.MemStats fmt.Printf("PID %d\n", os.Getpid()) runtime.ReadMemStats(&ms) fmt.Printf("MemStats %#v\n", ms) time.Sleep(time.Second * 100000) } ```` On my machine, there is a significant difference; MemStats.Sys is short by 32 MB or so and shows very little allocation outside of HeapSys. If you look at the Linux kernel memory map (with e.g. `pmap`) or use `strace -e trace=%memory` on the program, the reported value is clearly wrong, because there is a large 32 MB allocation that is not reflected in MemStats. This allocation is the L2 arena map, allocated in (*mheap).sysAlloc (currently [here](https://github.com/golang/go/blob/master/src/runtime/malloc.go#L625)). Because this allocation is made by calling `persistentalloc()` with the last parameter being `nil`, it is not accounted for in any MemStats field, and on modest Go programs it may amount to a significant amount of their runtime OS-level memory usage. (If this is deliberate I believe it's not the right approach, but I suspect it may be an oversight and the CL for the commit doesn't contain any discussion of it either way that I could see. The related `heapArena` structure is accounted for, for example.)
NeedsInvestigation,compiler/runtime
low
Minor
367,515,370
rust
Tracking issue for RFC 2412, "The optimize attribute"
This is a tracking issue for the RFC "The optimize attribute" (rust-lang/rfcs#2412). **Steps:** - [x] Implement the RFC. Partially done, see https://github.com/rust-lang/rust/issues/54882#issuecomment-457824508. - [ ] Adjust documentation ([see instructions on forge][doc-guide]) - [ ] Stabilization PR ([see instructions on forge][stabilization-guide]) [stabilization-guide]: https://forge.rust-lang.org/stabilization-guide.html [doc-guide]: https://forge.rust-lang.org/stabilization-guide.html#updating-documentation **Unresolved questions:** - [x] Should we also implement `optimize(always)`? `optimize(level=x)`? - [x] Left for future discussion, but should make sure such extension is possible. - [ ] Should there be any way to specify what global optimization for speed level is used in conjunction with the optimization for speed option (e.g. `-Copt-level=s3` could be equivalent to `-Copt-level=3` and `#[optimize(size)]` on the crate item); - [ ] This may matter for users of `#[optimize(speed)]`. - [ ] Are the propagation and `unused_attr` approaches right? - [ ] The RFC specifies we should emit an unused_attributes warning when misapplied, but similar attributes emit an error instead. Should this emit an error too?
B-RFC-approved,T-lang,B-unstable,B-RFC-implemented,C-tracking-issue,S-tracking-impl-incomplete
medium
Critical
367,524,538
rust
Define `fn [_]::try_split_at(&self, usize) -> Option<(&Self, &Self)>`
PR #54887
T-libs-api,C-feature-request
low
Minor
367,528,027
rust
ICE during codegen of unsize coercion on `#[repr(C)]` struct
Very similar to #46152. I ran into this while creating a test case for PR #54383. ```rust #![feature(unsize, coerce_unsized)] use std::{ ops::CoerceUnsized, marker::Unsize, }; #[repr(C)] struct Ptr<T: ?Sized>(Box<T>); impl<T: ?Sized, U: ?Sized> CoerceUnsized<Ptr<U>> for Ptr<T> where T: Unsize<U>, {} fn main() { let foo = Ptr(Box::new(5)) as Ptr<dyn ::std::any::Any>; } ``` Error message: ``` error: internal compiler error: librustc_codegen_llvm/mir/rvalue.rs:268: by-ref operand OperandRef(Ref((%"Ptr<i32>"*: %1 = alloca %"Ptr<i32>", align 8), None, Align { abi_pow2: 3, pref_pow2: 3 }) @ TyLayout { ty: Ptr<i32>, details: LayoutDetails { variants: Single { index: 0 }, fields: Arbitrary { offsets: [Size { raw: 0 }], memory_index: [0] }, abi: Aggregate { sized: true }, align: Align { abi_pow2: 3, pref_pow2: 3 }, size: Size { raw: 8 } } }) in codegen_rvalue_operand ``` EDIT: <details> current ICE message: ``` error: internal compiler error: /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/compiler/rustc_codegen_ssa/src/mir/rvalue.rs:241:33: by-ref operand OperandRef(Ref((ptr: %1 = alloca %4, align 8), None, Align(8 bytes)) @ TyAndLayout { ty: Ptr<i32>, layout: Layout { size: Size(8 bytes), align: AbiAndPrefAlign { abi: Align(8 bytes), pref: Align(8 bytes) }, abi: Aggregate { sized: true }, fields: Arbitrary { offsets: [Size(0 bytes)], memory_index: [0] }, largest_niche: Some(Niche { offset: Size(0 bytes), value: Pointer(AddressSpace(0)), valid_range: 1..=18446744073709551615 }), variants: Single { index: 0 } } }) in `codegen_rvalue_operand` thread 'rustc' panicked at 'Box<dyn Any>', /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/compiler/rustc_errors/src/lib.rs:1644:9 stack backtrace: 0: 0x7f329672c81a - std::backtrace_rs::backtrace::libunwind::trace::h26518014dbf31aba at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5 1: 0x7f329672c81a - std::backtrace_rs::backtrace::trace_unsynchronized::ha516581d0aef3757 
at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5 2: 0x7f329672c81a - std::sys_common::backtrace::_print_fmt::h9eca712360b21da0 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/sys_common/backtrace.rs:65:5 3: 0x7f329672c81a - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h528fecd217131eb4 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/sys_common/backtrace.rs:44:22 4: 0x7f329679044e - core::fmt::write::h073da6791f3f2ff7 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/core/src/fmt/mod.rs:1254:17 5: 0x7f329671f395 - std::io::Write::write_fmt::h51f8756996066b5a at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/io/mod.rs:1698:15 6: 0x7f329672c5e5 - std::sys_common::backtrace::_print::h5b4ffde9ddd340d3 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/sys_common/backtrace.rs:47:5 7: 0x7f329672c5e5 - std::sys_common::backtrace::print::hde4ce191c0ed53d2 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/sys_common/backtrace.rs:34:9 8: 0x7f329672f35f - std::panicking::default_hook::{{closure}}::h23a2d3c2d62785b5 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/panicking.rs:271:22 9: 0x7f329672f09b - std::panicking::default_hook::hfe56491cf86bf314 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/panicking.rs:290:9 10: 0x7f3299a84f35 - <rustc_driver_impl[ad8fc07c03d45871]::DEFAULT_HOOK::{closure#0}::{closure#0} as core[ee581b2e0272cb3e]::ops::function::FnOnce<(&core[ee581b2e0272cb3e]::panic::panic_info::PanicInfo,)>>::call_once::{shim:vtable#0} 11: 0x7f329672fb9d - <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call::hfa9e4663303a5377 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/alloc/src/boxed.rs:2002:9 12: 0x7f329672fb9d - std::panicking::rust_panic_with_hook::hb3c1c1d27d072101 at 
/rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/panicking.rs:696:13 13: 0x7f3299fa5881 - std[f9545795ea4997bf]::panicking::begin_panic::<rustc_errors[3a23900e9dcec074]::ExplicitBug>::{closure#0} 14: 0x7f3299fa1996 - std[f9545795ea4997bf]::sys_common::backtrace::__rust_end_short_backtrace::<std[f9545795ea4997bf]::panicking::begin_panic<rustc_errors[3a23900e9dcec074]::ExplicitBug>::{closure#0}, !> 15: 0x7f3299fa1926 - std[f9545795ea4997bf]::panicking::begin_panic::<rustc_errors[3a23900e9dcec074]::ExplicitBug> 16: 0x7f3299ffcfb6 - std[f9545795ea4997bf]::panic::panic_any::<rustc_errors[3a23900e9dcec074]::ExplicitBug> 17: 0x7f3299ffaea6 - <rustc_errors[3a23900e9dcec074]::HandlerInner>::bug::<&alloc[e80928591b456ef9]::string::String> 18: 0x7f3299ffab70 - <rustc_errors[3a23900e9dcec074]::Handler>::bug::<&alloc[e80928591b456ef9]::string::String> 19: 0x7f3299ff1b2b - rustc_middle[267d604f42fb2b42]::util::bug::opt_span_bug_fmt::<rustc_span[48752dc4b679ffbb]::span_encoding::Span>::{closure#0} 20: 0x7f3299ff094a - rustc_middle[267d604f42fb2b42]::ty::context::tls::with_opt::<rustc_middle[267d604f42fb2b42]::util::bug::opt_span_bug_fmt<rustc_span[48752dc4b679ffbb]::span_encoding::Span>::{closure#0}, !>::{closure#0} 21: 0x7f3299ff0916 - rustc_middle[267d604f42fb2b42]::ty::context::tls::with_context_opt::<rustc_middle[267d604f42fb2b42]::ty::context::tls::with_opt<rustc_middle[267d604f42fb2b42]::util::bug::opt_span_bug_fmt<rustc_span[48752dc4b679ffbb]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !> 22: 0x7f3299ff1a76 - rustc_middle[267d604f42fb2b42]::util::bug::opt_span_bug_fmt::<rustc_span[48752dc4b679ffbb]::span_encoding::Span> 23: 0x7f3298029533 - rustc_middle[267d604f42fb2b42]::util::bug::bug_fmt 24: 0x7f329839b2ec - <rustc_codegen_ssa[69e83e84c4ffa06]::mir::FunctionCx<rustc_codegen_llvm[d9e68a931594c80d]::builder::Builder>>::codegen_rvalue_operand 25: 0x7f3298362779 - 
rustc_codegen_ssa[69e83e84c4ffa06]::mir::codegen_mir::<rustc_codegen_llvm[d9e68a931594c80d]::builder::Builder> 26: 0x7f3298fb42f3 - rustc_codegen_llvm[d9e68a931594c80d]::base::compile_codegen_unit::module_codegen 27: 0x7f3298fb1d6f - <rustc_codegen_llvm[d9e68a931594c80d]::LlvmCodegenBackend as rustc_codegen_ssa[69e83e84c4ffa06]::traits::backend::ExtraBackendMethods>::compile_codegen_unit 28: 0x7f3298fafed8 - rustc_codegen_ssa[69e83e84c4ffa06]::base::codegen_crate::<rustc_codegen_llvm[d9e68a931594c80d]::LlvmCodegenBackend> 29: 0x7f3298faf7ee - <rustc_codegen_llvm[d9e68a931594c80d]::LlvmCodegenBackend as rustc_codegen_ssa[69e83e84c4ffa06]::traits::backend::CodegenBackend>::codegen_crate 30: 0x7f3298c17611 - <rustc_session[6aba769f1422c309]::session::Session>::time::<alloc[e80928591b456ef9]::boxed::Box<dyn core[ee581b2e0272cb3e]::any::Any>, rustc_interface[dce09aa29aba2b19]::passes::start_codegen::{closure#0}> 31: 0x7f3298c17139 - rustc_interface[dce09aa29aba2b19]::passes::start_codegen 32: 0x7f3298c13348 - <rustc_middle[267d604f42fb2b42]::ty::context::GlobalCtxt>::enter::<<rustc_interface[dce09aa29aba2b19]::queries::Queries>::ongoing_codegen::{closure#0}::{closure#0}, core[ee581b2e0272cb3e]::result::Result<alloc[e80928591b456ef9]::boxed::Box<dyn core[ee581b2e0272cb3e]::any::Any>, rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>> 33: 0x7f3298c11974 - <rustc_interface[dce09aa29aba2b19]::queries::Queries>::ongoing_codegen 34: 0x7f3298c10f41 - <rustc_interface[dce09aa29aba2b19]::interface::Compiler>::enter::<rustc_driver_impl[ad8fc07c03d45871]::run_compiler::{closure#1}::{closure#2}, core[ee581b2e0272cb3e]::result::Result<core[ee581b2e0272cb3e]::option::Option<rustc_interface[dce09aa29aba2b19]::queries::Linker>, rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>> 35: 0x7f3298c0c0e0 - rustc_span[48752dc4b679ffbb]::with_source_map::<core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>, 
rustc_interface[dce09aa29aba2b19]::interface::run_compiler<core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>, rustc_driver_impl[ad8fc07c03d45871]::run_compiler::{closure#1}>::{closure#0}::{closure#0}> 36: 0x7f3298c0b689 - std[f9545795ea4997bf]::sys_common::backtrace::__rust_begin_short_backtrace::<rustc_interface[dce09aa29aba2b19]::util::run_in_thread_pool_with_globals<rustc_interface[dce09aa29aba2b19]::interface::run_compiler<core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>, rustc_driver_impl[ad8fc07c03d45871]::run_compiler::{closure#1}>::{closure#0}, core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>> 37: 0x7f329931111a - <<std[f9545795ea4997bf]::thread::Builder>::spawn_unchecked_<rustc_interface[dce09aa29aba2b19]::util::run_in_thread_pool_with_globals<rustc_interface[dce09aa29aba2b19]::interface::run_compiler<core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>, rustc_driver_impl[ad8fc07c03d45871]::run_compiler::{closure#1}>::{closure#0}, core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[ee581b2e0272cb3e]::result::Result<(), rustc_span[48752dc4b679ffbb]::ErrorGuaranteed>>::{closure#1} as core[ee581b2e0272cb3e]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} 38: 0x7f3296739c13 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h31d31ee934fae5d7 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/alloc/src/boxed.rs:1988:9 39: 0x7f3296739c13 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h235705d08d5be362 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/alloc/src/boxed.rs:1988:9 40: 0x7f3296739c13 - 
std::sys::unix::thread::Thread::new::thread_start::h8f78f28fa2155287 at /rustc/a266f11990d9544ee408e213e1eec8cc9eb032b7/library/std/src/sys/unix/thread.rs:108:17 41: 0x7f3296600609 - start_thread 42: 0x7f3296523133 - clone 43: 0x0 - <unknown> note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md note: rustc 1.70.0-nightly (a266f1199 2023-03-22) running on x86_64-unknown-linux-gnu note: compiler flags: --crate-type bin -C embed-bitcode=no -C codegen-units=1 -C debuginfo=2 note: some of the compiler flags provided by cargo are hidden query stack during panic: end of query stack warning: `playground` (bin "playground") generated 1 warning (run `cargo fix --bin "playground"` to apply 1 suggestion) ``` </details>
I-ICE,T-compiler,C-bug,requires-nightly,glacier,S-bug-has-test,A-repr
low
Critical
367,547,515
rust
Crawl doc.rust-lang.org for dead links on a regular basis
Refiling from https://github.com/rust-lang/rfcs/issues/669. > And open tickets here via the GitHub API :)
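The link-extraction half of such a crawler can be sketched with only the Python standard library; the crawl loop, retry logic, and GitHub API reporting are left out, and the HTML snippet in the usage note is illustrative:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return every anchor href found in the given HTML string."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

A real job would fetch each collected URL (e.g. with `urllib.request`), flag any non-2xx response as a dead link, and open a ticket for it.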
T-infra,C-feature-request
low
Major
367,549,038
godot
[3.x] Auto-complete not working for autoloaded global scripts
**Godot version:** Godot Engine v3.0.6.stable.official **OS/device including version:** Windows 8.1 Pro & Windows 10 Home **Issue description:** Member functions not auto-completing when the variable is in autoload global.gd **Steps to reproduce:** global.gd `var array = ["zero", "one", "two"]` script.gd `print(global.array.size())` global.array autocompletes correctly but it doesn't complete the function size() **Minimal reproduction project:** [minimal reproduction.zip](https://github.com/godotengine/godot/files/2453886/minimal.reproduction.zip)
bug,topic:gdscript,topic:editor
low
Minor
367,555,023
create-react-app
JetBrains Toolbox apps are not detected by guessEditor
`react-dev-utils` looks for a process named `/Applications/IntelliJ IDEA.app/Contents/MacOS/idea` but when installing IntelliJ IDEA from [JetBrains Toolbox](https://www.jetbrains.com/toolbox/) the apps are installed in `~/Library/Application Support/JetBrains/Toolbox/apps/IDEA-U/ch-0/182.4505.22/IntelliJ IDEA.app/Contents/MacOS/idea` instead. ### Is this a bug report? Yes ### Environment Environment: OS: macOS 10.14 Node: 10.11.0 Yarn: 1.10.0 npm: 6.4.1 Watchman: 4.9.0 Xcode: Xcode 10.0 Build version 10A255 Android Studio: Not Found Packages: (wanted => installed) react: ^16.4.1 => 16.4.1 react-dom: ^16.4.1 => 16.4.1 react-scripts: ^2.0.4 => 2.0.4 ### Steps to Reproduce 1. Start with a cra project 2. Start IntelliJ IDEA (installed via JetBrains Toolbox) 3. Cause a runtime error in your project 4. Click the path of the file of the error ### Expected Behavior The running IntelliJ IDEA instance would be detected and launched. ### Actual Behavior App is not launched; instead a warning about setting the app explicitly via an env var appears.
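One plausible fix, sketched in Python for brevity (`react-dev-utils` itself is JavaScript): match the trailing executable path rather than a hardcoded absolute install location, so both the classic `/Applications` path and the Toolbox path match. The function name and process list below are illustrative, not the actual `guessEditor` API:

```python
import re

def find_intellij(process_commands):
    """Return the first process command that ends with the IDEA executable
    path, regardless of where the app bundle is installed."""
    pattern = re.compile(r"IntelliJ IDEA\.app/Contents/MacOS/idea$")
    for cmd in process_commands:
        if pattern.search(cmd):
            return cmd
    return None
```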
issue: proposal
low
Critical
367,559,759
pytorch
`pstrf` on positive semi-definite matrices
## ๐Ÿ› Bug [`pstrf`](https://pytorch.org/docs/stable/torch.html#torch.pstrf) does not work on Positive Semi-Definite matrices while the documentation says it should. The issue might be a misunderstanding on my part of the documentation. Any correction to this misunderstanding is welcome :) I am assuming that for `M` to be a Positive SemiDefinite matrix, it needs to hold that for any real vector `x`, `x.t() @ M @ x >= 0`, whereas to be Positive Definite, it needs to hold that `x.t() @ M @ x > 0`. ## To Reproduce Using Pytorch 0.4.1, on CPU ``` import torch Zero = torch.tensor([0.0]).view(1,1) print(torch.pstrf(Zero)) ``` throws ``` RuntimeError: Lapack Error pstrf : matrix is rank deficient or not positive semidefinite at c:\programdata\miniconda3\conda-bld\pytorch_1532505617613\work\aten\src\th\generic/THTensorLapack.cpp:744 ``` ## Expected behavior [`pstrf`](https://pytorch.org/docs/stable/torch.html#torch.pstrf) is supposed to > Computes the pivoted Cholesky decomposition of a positive semidefinite matrix `a`. returns matrices `u` and `piv`. In the example provided above, I would expected `u` should be `[[0]]` and `piv` should be `[0]` ## Additional Details I don't understand the error message, ``` matrix is rank deficient or not positive semidefinite ``` From my understanding, Positive SemiDefinite matrices that are not Positive Definite are by definition rank deficient. cc @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk
triaged,module: linear algebra,function request
low
Critical
367,560,765
rust
Use the "efficient "pivot pointer" scheme" for Rc<Trait>
Refiling what remains of https://github.com/rust-lang/rfcs/issues/981 here... This might already be done... but @eddyb wrote: > [...] it doesn't use the efficient "pivot pointer" scheme, though. Maybe a separate issue should be opened for that.
C-enhancement,T-compiler
low
Major
367,562,397
godot
Tilemap save settings
It would be very useful and time saving if the tilemap & tileset editor settings were saved. For example: - Tilemap zoom-level - Tilemap window size - Zoom-level of the tileset editor - Region snap options Every time you open Godot you have to adjust all these settings again.
enhancement,topic:editor,usability,topic:2d
low
Major
367,575,532
TypeScript
Allow to specify return type of constructor
## Search Terms constructor return type ## Suggestion Add an ability to specify the return type of a class constructor. Right now the compiler produces the following error: "Type annotation cannot appear on a constructor declaration." ## Use Cases Consider this example: ``` interface MyInterface { readonly data: ReadonlyArray<string> add(item: string): void remove(item: string): void // ...more } class MyClass implements MyInterface { data = [] as string[] constructor(): MyInterface {} // right now it's an error add(item: string) { this.data.push(item) } // ...more } ``` Current behavior: ``` const c = new MyClass() // typeof c == MyClass c.data = [] // oops! no error from the compiler ``` Suggested behavior: ``` const c = new MyClass() // typeof c == MyInterface c.data = [] // the compiler generates an error here, correct behavior ``` Of course, there are several ways to achieve this result (like using a private constructor with a factory function, or using a type converter like this: `type AsInterface<T, C> = C extends new (...args: infer A) => T ? new (...args: A) => T : never`). But I think that the proposed feature is more concise. ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
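The factory-function workaround mentioned in the issue carries over to other structurally typed settings; here is the same idea sketched in Python with a `Protocol` (all names are hypothetical). Annotating the factory's return type narrows what callers see, which is the effect the proposal wants from annotating the constructor itself:

```python
from typing import List, Protocol

class MyInterface(Protocol):
    """Structural interface: read access to data plus an add() method."""
    @property
    def data(self) -> List[str]: ...
    def add(self, item: str) -> None: ...

class _MyClass:
    def __init__(self) -> None:
        self.data: List[str] = []

    def add(self, item: str) -> None:
        self.data.append(item)

def make_my_class() -> MyInterface:
    # The factory's annotated return type is what callers see,
    # so static checkers treat the result as the narrower interface.
    return _MyClass()
```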
Suggestion,In Discussion
medium
Critical
367,580,533
opencv
Linking against Intel-OpenMP
##### System information (version) - OpenCV => master - #a9c8a52 - Operating System / Platform => Linux - Compiler => GCC 7 + NVCC ##### Detailed description Currently OpenCV cannot be linked easily to Intel OpenMP by passing `-liomp5 -lpthread -L<icc_dir>/lib` as mentioned in [Intel User Guide - using the OpenMP Libraries](https://software.intel.com/en-us/node/522690) due to CMake. Also following limitations in the current [MKL-DNN with TBB](https://github.com/intel/mkl-dnn#intel-mkl-dnn-with-intel-tbb), both Intel MKL and MKL-DNN are built with OpenMP on my system. As I'm linking OpenCV to MKL for the BLAS and LAPACK functionalities, it's probably best to also use OpenMP for OpenCV. Following the recommendation of MKL-DNN I'm trying to link against Intel OpenMP but I'm stuck. ##### Steps to reproduce - Install Intel MKL - Install MKL-DNN - Try to build OpenCV with `WITH_OPENMP=ON` - Check the parallel lib during cmake config, it will be `pthread`
category: build/install
low
Minor
367,587,025
godot
has_method does not work on class
**Godot version:** current master **OS/device including version:** Windows, don't think it matters **Issue description:** Investigating #22833 I found out that `has_method` does not work with classes. Consider the following script: ``` extends Node class Test: static func foo(): print("foo") func _ready(): Test.foo() # works as expected print(Test.has_method("foo")) # false although Test has this method ``` In #22833 this bug makes it impossible to use a static func of a class in array.sort_custom(). **Steps to reproduce:** Call `has_method` on a class. **Minimal reproduction project:** [sfhm.zip](https://github.com/godotengine/godot/files/2454253/sfhm.zip)
bug,topic:core
medium
Critical
367,595,479
godot
Escape should discard Output panel (but not Shader editor when discarding a tooltip)
**Godot version:** b17e71b6e5e035f49b5b3b5b55b9cdac80215d72 **Issue description:** The output area is not closed when `Escape` is pressed. This used to work in previous versions. **Steps to reproduce:** 1. Open the output area 2. Press `Escape` 3. The output area will not be closed
enhancement,topic:editor,usability
low
Major
367,654,325
kubernetes
Exposing ephemeral storage metrics to prometheus
**Is this a BUG REPORT or FEATURE REQUEST?**: /kind feature Now, we only expose PVC-related volume metrics to Prometheus, https://github.com/kubernetes/kubernetes/pull/51553 but we need ephemeral storage metrics too, such as EmptyDir usage and capacity info. /assign /sig storage /cc @jingxu97 @gnufied
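For reference, the quantity an emptyDir usage gauge would expose is just the bytes consumed under the volume's directory. A minimal Python sketch of that measurement (the kubelet actually uses `du`/filesystem quota machinery, so this is illustrative only):

```python
import os

def dir_usage_bytes(path):
    """Sum regular-file sizes under a directory tree -- the quantity an
    emptyDir usage gauge would report."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if not os.path.islink(full):  # skip symlinks to avoid double counting
                total += os.path.getsize(full)
    return total
```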
sig/storage,kind/feature,help wanted,sig/instrumentation,needs-triage
high
Critical
367,664,890
rust
Add the ability to copy a skeleton trait impl from a trait doc entry
refiled from rust-lang/rfcs#621
T-rustdoc,C-feature-request,A-rustdoc-ui,T-rustdoc-frontend
low
Minor
367,679,353
pytorch
Could not find a package configuration file provided by "Torch" with any of the following names:
The C++ build documentation seems to have bugs: simply following the tutorial in the newest libtorch docs, I got errors. ``` CMake Error at CMakeLists.txt:5 (find_package): By not providing "FindTorch.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "Torch", but CMake did not find one. Could not find a package configuration file provided by "Torch" with any of the following names: TorchConfig.cmake torch-config.cmake Add the installation prefix of "Torch" to CMAKE_PREFIX_PATH or set "Torch_DIR" to a directory containing one of the above files. If "Torch" provides a separate development package or SDK, be sure it has been installed. -- Configuring incomplete, errors occurred! See also "/Volumes/xs/tmp/libtorch_learn/example-app/build/CMakeFiles/CMakeOutput.log". ``` It's a CMake issue; I think the documentation should be more specific about how to make Torch findable (e.g. pointing `CMAKE_PREFIX_PATH` at the unzipped libtorch directory, as the error message suggests). cc @yf225 @glaringlee
module: cpp-extensions,triaged
medium
Critical
367,716,856
TypeScript
[Feature] make generated codes from enum could be minified when not used
## Search Terms For now, TypeScript transforms an enum from ```ts enum Test { Key = 1 } ``` to ```ts var Test; (function (Test) { Test[Test["Key"] = 1] = "Key"; })(Test || (Test = {})); ``` This output is not friendly to uglify or tree-shaking. When `Test` is not used, the generated code block always remains as dead code. ## Suggestion Prefer to generate the code below: ```ts var Test = /*#__PURE__*/(function () { var e = {} e[e["Key"] = 1] = "Key"; return e })(); ``` ## Examples ![image](https://user-images.githubusercontent.com/1667873/46602882-0d778d00-cb24-11e8-8847-cbb3483e0960.png) The suggested version will be removed. ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
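For readers unfamiliar with the emitted pattern: the IIFE builds a two-way mapping, `name -> value` plus `value -> name`, for numeric enum members. A Python sketch of the equivalent object (illustrative only):

```python
def make_enum(**members):
    """Build the object TypeScript emits for a numeric enum: a mapping from
    names to values plus the reverse mapping from values back to names,
    i.e. e[e["Key"] = 1] = "Key"."""
    e = {}
    for name, value in members.items():
        e[name] = value
        e[value] = name  # reverse mapping, only emitted for numeric members
    return e
```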
Suggestion,In Discussion
medium
Major
367,736,768
godot
Moving folder AA to folder BB, which has a folder with the same name (AA), causes a move error instead of a merge.
**Godot version:** 3.1 b17e71b Windows 10 **Issue description:** When I try to move a folder named AA into folder BB, which already contains a folder named AA, the error "Error moving:" occurs. I think that Godot should merge these folders into one. When we paste files, the warning message should stay as is: "Please Confirm... ... Overwrite Cancel". But when we paste folders, I think the warning should change the "Overwrite" button text to "Merge and Overwrite". **Steps to reproduce:** 1. Create res://AA folder 2. Create res://BB folder 3. Create res://BB/AA folder 4. Try to cut folder res://AA and paste it to res://BB https://streamable.com/vibga
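The requested "Merge and Overwrite" semantics can be sketched in Python (illustrative only — Godot's file dock is C++, and `os.replace` assumes source and destination are on the same filesystem):

```python
import os
import shutil

def merge_move(src, dst):
    """Move src into dst, merging directory trees and overwriting
    files that already exist at the destination."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = dst if rel == "." else os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            # os.replace overwrites an existing destination file
            # (same-filesystem only, which is fine for a sketch).
            os.replace(os.path.join(root, name), os.path.join(target, name))
    shutil.rmtree(src)  # remove the now-empty source tree
```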
enhancement,topic:editor,usability
low
Critical
367,824,187
rust
Tracking issue for warning for rust_2018_idioms by default
`#[warn(rust_2018_idioms)]` is not going to be enabled by default for Rust 2018 because we are taking a conservative stance and we aren't sure about how good the suggestions are yet. The plan is to enable this lint by default some number of releases / months after Rust 2018 ships. # Progress - [x] `bare_trait_objects` is now set to warn (https://github.com/rust-lang/rust/pull/61203) - [x] `ellipsis_inclusive_range_patterns` will soon be set to warn (https://github.com/rust-lang/rust/pull/61342) - [ ] `unused_extern_crates` has [open issues](https://github.com/rust-lang/rust/pull/59483#issuecomment-478232879) - [ ] https://github.com/rust-lang/rust/issues/91639 - [x] `elided_lifetimes_in_paths` has open issues (https://github.com/rust-lang/rust/issues/60199, https://github.com/rust-lang/rust/issues/55768) - [x] #44294 - [ ] #51881 - [ ] #56038 - [ ] #56328 - [ ] #57274 - [ ] #77713 This summary was last updated from [this comment](https://github.com/rust-lang/rust/issues/54910#issuecomment-1002041036); check to see if there are new comments since then. /cc @Centril @aturon @Mark-Simulacrum
A-lints,C-tracking-issue,WG-epoch,S-tracking-impl-incomplete,A-edition-2018
medium
Major
367,843,392
opencv
T-API: Using Mat expressions with async UMat functions
Invalid object lifetimes are observed. Related: #12754 (a workaround exists in the DNN module, but the problem is still here).
bug,category: core,category: ocl,category: t-api
low
Minor
367,871,239
flutter
Flutter tool should collect C++ crashes in dev mode and offer to report them
Engine crashes should be rare. Ideally we should catch engine crashes, and offer to report them. Whether we catch them just using the flutter tool (to read logs) or we do something more like adding breakpad to the engine in debug mode, we could catch crashes (e.g. https://github.com/flutter/flutter/issues/19590) and the `flutter` tool could prompt the user to report them to our crash database. It's possible we could automatically report them, but that would require more consideration as the stacks may include non-Flutter code. Thoughts?
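The log-reading variant could be quite small; a Python sketch of scanning device logs for a native crash header (the marker string is illustrative — real crash headers vary by platform):

```python
import re

# Illustrative marker for the start of a native crash in device logs;
# Android logcat prints "Fatal signal ..." lines, other platforms differ.
CRASH_MARKER = re.compile(r"Fatal signal \d+ \(SIG\w+\)")

def find_crash_lines(log_text):
    """Return the log lines that look like the start of a native crash report."""
    return [line for line in log_text.splitlines() if CRASH_MARKER.search(line)]
```

A tool like `flutter` could run this over the attached device's log stream and, on a hit, prompt the user to report the crash.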
c: new feature,tool,P2,team-tool,triaged-tool
low
Critical
367,895,362
TypeScript
Odd quick fix ordering for misspelled identifiers
I believe that "misspelled identifier" quick fixes should precede the "remove unused declarations". ![image](https://user-images.githubusercontent.com/972891/46625525-2a429f00-cae9-11e8-9fee-27b1c38e0b37.png) If anyone wants to take this on, this should just be a matter of 1. Making a test case for this, and 2. Shifting the order of which quick fix is registered first.
Bug,Needs Proposal,In Discussion
low
Major
367,905,059
pytorch
input_device, output_device, devices_used properties
## 🚀 Feature People keep requesting a .device property on layers, and that's legitimately turned down with the response "_Modules can hold parameters of different types on different devices, and so it's not always possible to unambiguously determine the device._" e.g., #12135 A simple ".device" property doesn't work well. In fact, I think what people really want is: - Tell me where I can put inputs. - Tell me where outputs are produced. - Tell me what devices are actually used internally by this module. - Convert this input into something you can use. - Produce output in this preferred format. For example, just because a module stores parameters on a GPU doesn't mean it is limited to GPU inputs. ## Pitch I'd suggest thinking more about a simple API to query the relationship between modules and inputs. In principle, these can be pretty complex ("if you give me an X I give you a Y", etc.), but maybe just something for the common cases. E.g., m.input_devices() --> ["cuda:0", "cuda:*", "cpu"] m.output_devices() --> ["cuda:0"] m.required_devices() --> ["cuda:0"] By convention, input_devices()[0] would be the preferred device for inputs. A default implementation based on next(m.parameters()).device would return [device] for all three methods, and return [] if there are no parameters; this seems to reflect current practice and assumptions. Without an API, people are going to hardcode assumptions about the relationship between parameters, modules, and devices that are unnecessarily restrictive or simply wrong in general. In addition, forward and backward methods might get a preferred_device keyword argument that allows them to put results on the preferred output device. cc @albanD @mruberry @jbschlosser
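The proposed default behavior is easy to pin down; a Python sketch using a mock module instead of `torch.nn.Module` (all names besides the proposed `input_devices`/`output_devices`/`required_devices` are hypothetical):

```python
class _FakeParam:
    """Stand-in for a torch parameter: only the device attribute matters here."""
    def __init__(self, device):
        self.device = device

class ModuleDeviceMixin:
    """Sketch of the proposed defaults: derive all three device lists from
    the first parameter, and return [] for parameter-free modules."""
    def _default_devices(self):
        params = list(self.parameters())
        return [params[0].device] if params else []

    def input_devices(self):
        return self._default_devices()

    def output_devices(self):
        return self._default_devices()

    def required_devices(self):
        return self._default_devices()

class FakeModule(ModuleDeviceMixin):
    def __init__(self, devices):
        self._params = [_FakeParam(d) for d in devices]

    def parameters(self):
        return iter(self._params)
```

Modules with non-default behavior (e.g. a wrapper that accepts CPU inputs but computes on GPU) would override the individual methods.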
module: nn,triaged
low
Minor
367,908,921
pytorch
Certain operations cause implicit sync-points
## ๐Ÿ› Bug <!-- A clear and concise description of what the bug is. --> When profiling networks I've found that certain functions seem to cause the CUDA backend to synchronize with the host. This is similar to #1989 but is applicable to a wider variety of functions, which I believe shouldn't cause syncs, but they seem to. Even if these operations do implicitly cause syncs it would be nice to have a list of which functions do this in the docs (or perhaps someone can point me to this if this already exists). ## To Reproduce I have a MWE showing several functions that do and do not cause implicit sync points. The basic idea is I create some data, and run it through some busy work operations that do not cause syncing (i.e. convolutions), then I run it through an operation to test if it syncs. Finally, I run it through a few more busy work ops and then force a synchronization at the end. I run a line profile to check how long each call takes. If the operation causes a sync, then it will show up as taking time in the profiler, otherwise if there is no implicit syncing, then the bulk of time should be taken up by the final explicit sync. The operation that I test first is uint8 masking (i.e. 
masked_select / __getitem__) (which is how I first uncovered the issue) ```python import torch def main(): def profile_onthefly(func): def _wrapper(*args, **kw): import line_profiler from six.moves import cStringIO profile = line_profiler.LineProfiler() result = profile(func)(*args, **kw) file_ = cStringIO() profile.print_stats(stream=file_, stripzeros=True) file_.seek(0) text = file_.read() print(text) return result return _wrapper def test_syncpoint_mask(data, conv2d, do_mask=False, do_sync=False): print('---------') print(' * do_mask = {!r}'.format(do_mask)) print(' * do_sync = {!r}'.format(do_sync)) torch.cuda.synchronize() x = data N = 10 # Do some busy work for i in range(N): x = conv2d(x) # Create a mask mask = (data > .5).view(-1) x = x.view(-1) if do_mask: x = x[mask] if do_sync: torch.cuda.synchronize() # Do more busy work for i in range(N): x = x * x x = torch.sqrt(x) torch.cuda.synchronize() xpu = torch.device(0) # Setup dummy data bsize = 128 data = torch.rand(bsize, 3, 512, 512).to(xpu) conv2d = torch.nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3, padding=1).to(xpu) # Profile to show how masking causes an implicity sync-point profile_onthefly(test_syncpoint_mask)(data, conv2d, do_mask=True, do_sync=False) profile_onthefly(test_syncpoint_mask)(data, conv2d, do_mask=False, do_sync=False) profile_onthefly(test_syncpoint_mask)(data, conv2d, do_mask=False, do_sync=True) ``` The final three lines run the test function in 3 different ways. The first line (`profile_onthefly(test_syncpoint_mask)(data, conv2d, do_mask=True, do_sync=False)`) shows that if we do a mask, but we don't do an internal sync, then there is a bulk of time taken up by the masking operation (suggesting that an implicit sync is occurring). 
```python --------- * do_mask = True * do_sync = False Timer unit: 1e-06 s Total time: 0.261704 s File: /home/joncrall/_torch_sync_mwe.py Function: test_syncpoint_mask at line 22 Line # Hits Time Per Hit % Time Line Contents ============================================================== 22 def test_syncpoint_mask(data, conv2d, do_mask=False, do_sync=False): 23 1 27.0 27.0 0.0 print('---------') 24 1 8.0 8.0 0.0 print(' * do_mask = {!r}'.format(do_mask)) 25 1 4.0 4.0 0.0 print(' * do_sync = {!r}'.format(do_sync)) 26 27 1 88.0 88.0 0.0 torch.cuda.synchronize() 28 29 1 1.0 1.0 0.0 x = data 30 1 1.0 1.0 0.0 N = 10 31 # Do some busy work 32 11 15.0 1.4 0.0 for i in range(N): 33 10 10231.0 1023.1 3.9 x = conv2d(x) 34 35 # Create a mask 36 1 455.0 455.0 0.2 mask = (data > .5).view(-1) 37 1 16.0 16.0 0.0 x = x.view(-1) 38 1 1.0 1.0 0.0 if do_mask: 39 1 222128.0 222128.0 84.9 x = x[mask] 40 41 1 1.0 1.0 0.0 if do_sync: 42 torch.cuda.synchronize() 43 44 # Do more busy work 45 11 13.0 1.2 0.0 for i in range(N): 46 10 297.0 29.7 0.1 x = x * x 47 10 2895.0 289.5 1.1 x = torch.sqrt(x) 48 49 1 25523.0 25523.0 9.8 torch.cuda.synchronize() ``` We can see that in this case 84.9% of the time is spent on the mask operation because it is actually waiting for all those convolutions to finish. The next line (`profile_onthefly(test_syncpoint_mask)(data, conv2d, do_mask=False, do_sync=False)`), verifies that if we do neither a mask or a sync, then the majority of time is indeed taken up by the final explicit sync call. 
```python --------- * do_mask = False * do_sync = False Timer unit: 1e-06 s Total time: 0.233294 s File: /home/joncrall/_torch_sync_mwe.py Function: test_syncpoint_mask at line 22 Line # Hits Time Per Hit % Time Line Contents ============================================================== 22 def test_syncpoint_mask(data, conv2d, do_mask=False, do_sync=False): 23 1 8.0 8.0 0.0 print('---------') 24 1 7.0 7.0 0.0 print(' * do_mask = {!r}'.format(do_mask)) 25 1 5.0 5.0 0.0 print(' * do_sync = {!r}'.format(do_sync)) 26 27 1 14.0 14.0 0.0 torch.cuda.synchronize() 28 29 1 1.0 1.0 0.0 x = data 30 1 1.0 1.0 0.0 N = 10 31 # Do some busy work 32 11 12.0 1.1 0.0 for i in range(N): 33 10 1091.0 109.1 0.5 x = conv2d(x) 34 35 # Create a mask 36 1 32.0 32.0 0.0 mask = (data > .5).view(-1) 37 1 8.0 8.0 0.0 x = x.view(-1) 38 1 0.0 0.0 0.0 if do_mask: 39 x = x[mask] 40 41 1 0.0 0.0 0.0 if do_sync: 42 torch.cuda.synchronize() 43 44 # Do more busy work 45 11 16.0 1.5 0.0 for i in range(N): 46 10 254.0 25.4 0.1 x = x * x 47 10 5359.0 535.9 2.3 x = torch.sqrt(x) 48 49 1 226486.0 226486.0 97.1 torch.cuda.synchronize() ``` In this case 97% of the time is taken up by the final synchronize operation, which means that no intermediate syncing has taken place, all the work is done in a lazy fashion, so the final sync eats up all the time. Finally the final line `profile_onthefly(test_syncpoint_mask)(data, conv2d, do_mask=False, do_sync=True)`, shows that if we force an intermediate sync (without masking), we get a similar performance time as if we did the mask. 
```python --------- * do_mask = False * do_sync = True Timer unit: 1e-06 s Total time: 0.233516 s File: /home/joncrall/_torch_sync_mwe.py Function: test_syncpoint_mask at line 22 Line # Hits Time Per Hit % Time Line Contents ============================================================== 22 def test_syncpoint_mask(data, conv2d, do_mask=False, do_sync=False): 23 1 8.0 8.0 0.0 print('---------') 24 1 7.0 7.0 0.0 print(' * do_mask = {!r}'.format(do_mask)) 25 1 5.0 5.0 0.0 print(' * do_sync = {!r}'.format(do_sync)) 26 27 1 13.0 13.0 0.0 torch.cuda.synchronize() 28 29 1 1.0 1.0 0.0 x = data 30 1 0.0 0.0 0.0 N = 10 31 # Do some busy work 32 11 14.0 1.3 0.0 for i in range(N): 33 10 1093.0 109.3 0.5 x = conv2d(x) 34 35 # Create a mask 36 1 34.0 34.0 0.0 mask = (data > .5).view(-1) 37 1 8.0 8.0 0.0 x = x.view(-1) 38 1 0.0 0.0 0.0 if do_mask: 39 x = x[mask] 40 41 1 1.0 1.0 0.0 if do_sync: 42 1 183895.0 183895.0 78.8 torch.cuda.synchronize() 43 44 # Do more busy work 45 11 10.0 0.9 0.0 for i in range(N): 46 10 220.0 22.0 0.1 x = x * x 47 10 232.0 23.2 0.1 x = torch.sqrt(x) 48 49 1 47975.0 47975.0 20.5 torch.cuda.synchronize() ``` In this case we see if we replace the mask with a explicit sync, that takes up 78% of the time, which is similar to the 84% of the time we saw with the masking operation. 
------------------------------ I test this for a few other operations as well ```python # --- There seem to be a few operations that cause implicity syncs def test_synpoint_other(opname, op): def test_syncpoint_(data, conv2d): print('---------') print('opname = {!r}'.format(opname)) torch.cuda.synchronize() x = data N = 10 for i in range(N): x = conv2d(x) # non-syncing busy work op(x) # Do more busy work for i in range(N): x = torch.sqrt(x * x) # non-syncing busy work torch.cuda.synchronize() profile_onthefly(test_syncpoint_)(data, conv2d) test_synpoint_other('sum', lambda x: x.sum()) test_synpoint_other('sum(dim=0)', lambda x: x.sum(dim=0)) test_synpoint_other('sigmoid', lambda x: x.sigmoid()) ``` A simple sum will cause an implicit sync ```python opname = 'sum' Timer unit: 1e-06 s Total time: 0.221028 s File: /home/joncrall/_torch_sync_mwe.py Function: test_syncpoint_ at line 68 Line # Hits Time Per Hit % Time Line Contents ============================================================== 68 def test_syncpoint_(data, conv2d): 69 1 9.0 9.0 0.0 print('---------') 70 1 8.0 8.0 0.0 print('opname = {!r}'.format(opname)) 71 1 13.0 13.0 0.0 torch.cuda.synchronize() 72 1 1.0 1.0 0.0 x = data 73 1 1.0 1.0 0.0 N = 10 74 11 12.0 1.1 0.0 for i in range(N): 75 10 1148.0 114.8 0.5 x = conv2d(x) # non-syncing busy work 76 1 174351.0 174351.0 78.9 op(x) 77 # Do more busy work 78 11 9.0 0.8 0.0 for i in range(N): 79 10 455.0 45.5 0.2 x = torch.sqrt(x * x) # non-syncing busy work 80 1 45021.0 45021.0 20.4 torch.cuda.synchronize() ``` However, a `sum(dim=0)` and a `sigmoid` call will not. 
```python
---------
opname = 'sum(dim=0)'
Timer unit: 1e-06 s
Total time: 0.221 s
File: /home/joncrall/_torch_sync_mwe.py
Function: test_syncpoint_ at line 68

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    68                                           def test_syncpoint_(data, conv2d):
    69         1          8.0      8.0      0.0      print('---------')
    70         1          7.0      7.0      0.0      print('opname = {!r}'.format(opname))
    71         1         13.0     13.0      0.0      torch.cuda.synchronize()
    72         1          1.0      1.0      0.0      x = data
    73         1          1.0      1.0      0.0      N = 10
    74        11         11.0      1.0      0.0      for i in range(N):
    75        10       1099.0    109.9      0.5          x = conv2d(x)  # non-syncing busy work
    76         1         92.0     92.0      0.0      op(x)
    77                                               # Do more busy work
    78        11         10.0      0.9      0.0      for i in range(N):
    79        10        421.0     42.1      0.2          x = torch.sqrt(x * x)  # non-syncing busy work
    80         1     219337.0 219337.0     99.2      torch.cuda.synchronize()

---------
opname = 'sigmoid'
Timer unit: 1e-06 s
Total time: 0.222286 s
File: /home/joncrall/_torch_sync_mwe.py
Function: test_syncpoint_ at line 68

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    68                                           def test_syncpoint_(data, conv2d):
    69         1         24.0     24.0      0.0      print('---------')
    70         1         24.0     24.0      0.0      print('opname = {!r}'.format(opname))
    71         1         13.0     13.0      0.0      torch.cuda.synchronize()
    72         1          0.0      0.0      0.0      x = data
    73         1          1.0      1.0      0.0      N = 10
    74        11         12.0      1.1      0.0      for i in range(N):
    75        10       1088.0    108.8      0.5          x = conv2d(x)  # non-syncing busy work
    76         1         83.0     83.0      0.0      op(x)
    77                                               # Do more busy work
    78        11         11.0      1.0      0.0      for i in range(N):
    79        10        419.0     41.9      0.2          x = torch.sqrt(x * x)  # non-syncing busy work
    80         1     220611.0 220611.0     99.2      torch.cuda.synchronize()
```

Instead of increasing the length of this report, I'll simply summarize the rest.
Ops that did cause an implicit sync:

* masked_select
* nonzero
* sum(dim=None)

Ops that did not cause an implicit sync:

* sum(dim=<Int>)
* conv
* reshape
* squeeze
* sigmoid
* arithmetic operations `+ - / *`

## Expected behavior

Obviously there are lots of operations I didn't test, so it would be nice to have an understanding of when I should expect an operation to cause an implicit sync and when I shouldn't. I expect it's something to do with whether operations have clearly defined input and output shapes, but having something more than my hunches would be nice. Also, perhaps this is a bug, and masked_select should actually not force a sync.

## Environment

```
PyTorch version: 1.0.0.dev20181008
Is debug build: No
CUDA used to build PyTorch: 9.2.148

OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.12.0

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.2.148
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti

Nvidia driver version: 396.54
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```

cc @ngimel
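A crude way to probe an untested op for an implicit sync is to time how long the call blocks the host after a pile of kernels has been queued: an asynchronous launch returns almost immediately, while a syncing op blocks until the queue drains. A minimal sketch of that heuristic — the helper itself is plain Python, and the commented `torch` usage is illustrative only and assumes a CUDA device:

```python
import time

def host_blocking_time(fn, *args, repeat=5):
    """Return the minimum wall-clock time fn blocks the calling thread.

    An op that merely enqueues CUDA work returns in microseconds; an op
    that implicitly synchronizes blocks until queued kernels finish.
    """
    best = float('inf')
    for _ in range(repeat):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage on a CUDA tensor x (requires a GPU and queued work):
#   host_blocking_time(lambda: x.sum())        # large  -> implicit sync
#   host_blocking_time(lambda: x.sum(dim=0))   # tiny   -> no sync
```

The `repeat`/minimum dance reduces noise from one-time costs like kernel compilation.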
module: cuda,triaged
low
Critical
367,915,421
TypeScript
findAllRefs doesn't work for property of `typeof import("./foo")`
**TypeScript Version:** 3.2.0-dev.20181006

**Code**

```ts
/// <reference path="fourslash.ts" />

// @Filename: /a.ts
////export const [|x|] = 0;

// @Filename: /b.ts
////function f(a: typeof import("./a")) {
////    a.[|x|];
////    a.[|x|];
////}

verify.singleReferenceGroup("const x: 0");
```

**Expected behavior:** Test passes.

**Actual behavior:** Test fails. The references in `function f` aren't found.
Bug
low
Minor
367,950,241
youtube-dl
iflix support request
## Please follow the guide below

- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like

---

### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.10.05*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.

- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.10.05**

### Before submitting an *issue* make sure you have:

- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser

### What is the purpose of your *issue*?

- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other

---

### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*

---

### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:

Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here.
It should look similar to one below (replace it with **your** log inserted between triple ```):

```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2018.10.05
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
<end of log>
```

---

### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):

Requested site name: https://iflix.com
Example video: https://piay.iflix.com/play/91571

Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.

---

### Description of your *issue*, suggested solution and other information

Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible. If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
account-needed
low
Critical
367,955,101
create-react-app
Support for a workbox.config.js override file
As per https://github.com/facebook/create-react-app/pull/5111#issuecomment-425458687, there was discussion around the `c-r-a` v2 timeframe for allowing developers to override the default `workbox-webpack-plugin` (i.e. service worker generation) configuration with their own external `workbox.config.js` file. This functionality did not end up making it into the final release, though. I'd love to see that happen, as putting together a one-size-fits-all default config can [leave some developers confused](https://github.com/facebook/create-react-app/issues/5316). I'm happy to submit a PR for that and update the docs accordingly, but I wanted to see if there was any prior art around external, user-controllable configuration files that did make it into the `c-r-a` v2 release. I'd rather just use a similar approach for Workbox if that solution for another plugin already exists. If there is no prior art for configuring other plugins in v2, I'd like to confirm that folks like @gaearon and @Timer are fully on board with the idea of external config files, since it's been a contentious point in the past.
contributions: up for grabs!,issue: proposal,difficulty: medium
high
Critical
367,959,439
go
runtime: arena mapping creates large core files and bad mlockall behavior
```
package main

var p *int

func main() {
	*p = 0
}
```

Run with

```
$ ulimit -c unlimited
$ GOTRACEBACK=crash go run test.go
$ ls -l core
```

Go 1.11 and tip generate a ~100MB core. Go 1.10 and earlier only generate a ~2MB core. Probably related to the arena changes in 1.11.

@aclements @heschik @hyangah
Performance,NeedsFix
medium
Critical
367,959,853
vscode
Add selectionBackground defaults
The selectionBackground color token controls the background color of text selection outside of the editor. So this affects text in native input elements like search, selecting text in the debug console, settings editor, other places. If one isn't defined, it comes from the OS and is different between macOS and Windows. Maybe the OS picks different colors in some scenarios, like for different Windows themes. e.g. in Dark+:

macOS
![image](https://user-images.githubusercontent.com/323878/46511718-08a19900-c805-11e8-8452-f6789b3e4fc2.png)

Windows
![image](https://user-images.githubusercontent.com/323878/46511726-1820e200-c805-11e8-9322-94f11dd12115.png)

I think that adding defaults for this token would be a good idea just so it looks consistent. It's also an issue for editor instances pretending to be input boxes - the settings and extensions input boxes that use the suggest widget. If selectionBackground isn't configured, they don't know the native selection color and fall back on the editor selection background which doesn't always look good in that context, so setting a default selectionBackground will make those inputs consistent with the native inputs.

But maybe people like the native text selection look? @misolori @Tyriar ?
feature-request,ux
low
Critical
367,974,282
go
spec: clarify requirements for duplicate constants in interface-typed map literal keys and switches
The Go spec disallows duplicate constants in map literal keys, and allows compilers to reject duplicate constants in switch cases. However, the Go spec does not formally allow interface-typed constants, and doesn't mention how to handle constants that are implicitly or explicitly converted to interface type. The existing compilers handle these situations in differing ways:

```go
package p

// #1
var _ = map[interface{}]int{
	0: 0,
	0: 0,
}

// #2
var _ = map[interface{}]int{
	interface{}(0): 0,
	interface{}(0): 0,
}

func _() {
	// #3
	switch interface{}(0) {
	case 0:
	case 0:
	}

	// #4
	switch interface{}(0) {
	case interface{}(0):
	case interface{}(0):
	}
}
```

cmd/compile rejects 1, 2, and 3. go/types rejects 1 and 3. gccgo (8.0) rejects none.

/cc @griesemer @ianlancetaylor
Documentation,NeedsInvestigation
low
Minor
368,004,011
rust
Refiled: "Rustfmt/pretty-print types in error messages"
Refiling... > For any highly generic library (`futures` is the one that I hit this issue with a lot) it's easy for types to become unreadable very fast. Even `impl Trait` only helps so much (we're using `Box<Trait>` a lot in our code so `impl Trait` won't get us better error messages). A good solution would be to `rustfmt`-style pretty-print types in the error messages so that it's possible for humans to parse them. It might be useful to make this optional or otherwise limit the maximum size because I can imagine this causing `diesel`'s types to take hundreds of lines to print. Probably some kind of overrideable heuristic would be useful to implement here. https://github.com/rust-lang/rfcs/issues/2358
A-frontend,C-enhancement,A-diagnostics,T-compiler,WG-diagnostics,D-diagnostic-infra
low
Critical
368,010,550
TypeScript
More poor errors with value/type/namespace confusion
<!-- 🚨 STOP 🚨 STOP 🚨 STOP 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!

Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ

Please fill in the *entire* template below.
-->

"Cannot find name" or "Namespace ... has no exported member", but I see it right there! Some of the more common cases came up in two recent Stack Overflow questions: [1](https://stackoverflow.com/questions/52617485/typescript-constant-string-cannot-be-found), [2](https://stackoverflow.com/questions/52711753/cannot-get-func-from-proptypes). Maybe we should just fix all the cases?

<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->

**TypeScript Version:** master (f6ca10565d8fb0a9737b687b85e14ee94ad244d7)

<!-- Search terms you tried before logging this (so others can find this issue more easily) -->

**Search Terms:** value type namespace meaning error (and all the error messages below)

**Code**

```ts
const v = 42;
type t = number;
namespace n {}
const vt = 42;
type vt = number;
const vn = 42;
namespace vn {}
type tn = number;
namespace tn {}

namespace M {
    export const v = 42;
    export type t = number;
    export namespace n {}
    export const vt = 42;
    export type vt = number;
    export const vn = 42;
    export namespace vn {}
    export type tn = number;
    export namespace tn {}
}

// Actual is OK: 't' only refers to a type, but is being used as a value here.
const v1 = t;
// Actual: Property 't' does not exist on type 'typeof M'.
// Proposed: 't' only refers to a type, but is being used as a value here.
const v2 = M.t;
// Actual is OK: Cannot use namespace 'n' as a value.
const v3 = n;
// Actual: Property 'n' does not exist on type 'typeof M'.
// Proposed: Cannot use namespace 'n' as a value.
const v4 = M.n;
// Actual: Cannot use namespace 'tn' as a value.
// Proposed: 'tn' only refers to a type and a namespace, but is being used as a value here.
const v5 = tn;
// Actual: Property 'tn' does not exist on type 'typeof M'.
// Proposed: 'tn' only refers to a type and a namespace, but is being used as a value here.
const v6 = M.tn;

// Actual: Cannot find name 'v'.
// Proposed: 'v' only refers to a value, but is being used as a type here.
type t1 = v;
// Actual: Namespace 'M' has no exported member 'v'.
// Proposed: 'v' only refers to a value, but is being used as a type here.
type t2 = M.v;
// Actual is OK: Cannot use namespace 'n' as a type.
type t3 = n;
// Actual: Namespace 'M' has no exported member 'n'.
// Proposed: Cannot use namespace 'n' as a type.
type t4 = M.n;
// Actual: Cannot use namespace 'vn' as a type.
// Proposed: 'vn' only refers to a value and a namespace, but is being used as a type here.
type t5 = vn;
// Actual: Namespace 'M' has no exported member 'vn'.
// Proposed: 'vn' only refers to a value and a namespace, but is being used as a type here.
type t6 = M.vn;

// Actual: Cannot find namespace 'v'.
// Proposed: 'v' only refers to a value, but is being used as a namespace here.
type nt1 = v.oops;
// Actual: Namespace 'M' has no exported member 'v'.
// Proposed: 'v' only refers to a value, but is being used as a namespace here.
type nt2 = M.v.oops;
// Actual is OK: 't' only refers to a type, but is being used as a namespace here.
type nt3 = t.oops;
// Actual: Namespace 'M' has no exported member 't'.
// Proposed: 't' only refers to a type, but is being used as a namespace here.
type nt4 = M.t.oops;
// Actual: 'vt' only refers to a type, but is being used as a namespace here.
// Proposed: 'vt' only refers to a value and a type, but is being used as a namespace here.
type nt5 = vt.oops; // Actual: Namespace 'M' has no exported member 'vt'. // Proposed: 'vt' only refers to a value and a type, but is being used as a namespace here. type nt6 = M.vt.oops; ``` **Expected behavior:** As indicated above **Actual behavior:** As indicated above **Playground Link:** [link](https://www.typescriptlang.org/play/#src=const%20v%20%3D%2042%3B%0D%0Atype%20t%20%3D%20number%3B%0D%0Anamespace%20n%20%7B%7D%0D%0Aconst%20vt%20%3D%2042%3B%0D%0Atype%20vt%20%3D%20number%3B%0D%0Aconst%20vn%20%3D%2042%3B%0D%0Anamespace%20vn%20%7B%7D%0D%0Atype%20tn%20%3D%20number%3B%0D%0Anamespace%20tn%20%7B%7D%0D%0A%0D%0Anamespace%20M%20%7B%20%0D%0A%20%20%20%20export%20const%20v%20%3D%2042%3B%0D%0A%20%20%20%20export%20type%20t%20%3D%20number%3B%0D%0A%20%20%20%20export%20namespace%20n%20%7B%7D%0D%0A%20%20%20%20export%20const%20vt%20%3D%2042%3B%0D%0A%20%20%20%20export%20type%20vt%20%3D%20number%3B%0D%0A%20%20%20%20export%20const%20vn%20%3D%2042%3B%0D%0A%20%20%20%20export%20namespace%20vn%20%7B%7D%0D%0A%20%20%20%20export%20type%20tn%20%3D%20number%3B%0D%0A%20%20%20%20export%20namespace%20tn%20%7B%7D%20%20%20%20%0D%0A%7D%0D%0A%0D%0A%2F%2F%20Actual%20is%20OK%3A%20't'%20only%20refers%20to%20a%20type%2C%20but%20is%20being%20used%20as%20a%20value%20here.%0D%0Aconst%20v1%20%3D%20t%3B%0D%0A%2F%2F%20Actual%3A%20Property%20't'%20does%20not%20exist%20on%20type%20'typeof%20M'.%0D%0A%2F%2F%20Proposed%3A%20't'%20only%20refers%20to%20a%20type%2C%20but%20is%20being%20used%20as%20a%20value%20here.%0D%0Aconst%20v2%20%3D%20M.t%3B%0D%0A%2F%2F%20Actual%20is%20OK%3A%20Cannot%20use%20namespace%20'n'%20as%20a%20value.%0D%0Aconst%20v3%20%3D%20n%3B%0D%0A%2F%2F%20Actual%3A%20Property%20'n'%20does%20not%20exist%20on%20type%20'typeof%20M'.%0D%0A%2F%2F%20Proposed%3A%20Cannot%20use%20namespace%20'n'%20as%20a%20value.%0D%0Aconst%20v4%20%3D%20M.n%3B%0D%0A%2F%2F%20Actual%3A%20Cannot%20use%20namespace%20'tn'%20as%20a%20value.%0D%0A%2F%2F%20Proposed%3A%20'tn'%20only%20refers%20to%20a%20type%20and%20a%20namespa
ce%2C%20but%20is%20being%20used%20as%20a%20value%20here.%0D%0Aconst%20v5%20%3D%20tn%3B%0D%0A%2F%2F%20Actual%3A%20Property%20'tn'%20does%20not%20exist%20on%20type%20'typeof%20M'.%0D%0A%2F%2F%20Proposed%3A%20'tn'%20only%20refers%20to%20a%20type%20and%20a%20namespace%2C%20but%20is%20being%20used%20as%20a%20value%20here.%0D%0Aconst%20v6%20%3D%20M.tn%3B%0D%0A%0D%0A%2F%2F%20Actual%3A%20Cannot%20find%20name%20'v'.%0D%0A%2F%2F%20Proposed%3A%20'v'%20only%20refers%20to%20a%20value%2C%20but%20is%20being%20used%20as%20a%20type%20here.%0D%0Atype%20t1%20%3D%20v%3B%0D%0A%2F%2F%20Actual%3A%20Namespace%20'M'%20has%20no%20exported%20member%20'v'.%0D%0A%2F%2F%20Proposed%3A%20'v'%20only%20refers%20to%20a%20value%2C%20but%20is%20being%20used%20as%20a%20type%20here.%0D%0Atype%20t2%20%3D%20M.v%3B%0D%0A%2F%2F%20Actual%20is%20OK%3A%20Cannot%20use%20namespace%20'n'%20as%20a%20type.%0D%0Atype%20t3%20%3D%20n%3B%0D%0A%2F%2F%20Actual%3A%20Namespace%20'M'%20has%20no%20exported%20member%20'n'.%0D%0A%2F%2F%20Proposed%3A%20Cannot%20use%20namespace%20'n'%20as%20a%20type.%0D%0Atype%20t4%20%3D%20M.n%3B%0D%0A%2F%2F%20Actual%3A%20Cannot%20use%20namespace%20'vn'%20as%20a%20type.%0D%0A%2F%2F%20Proposed%3A%20'vn'%20only%20refers%20to%20a%20value%20and%20a%20namespace%2C%20but%20is%20being%20used%20as%20a%20type%20here.%0D%0Atype%20t5%20%3D%20vn%3B%0D%0A%2F%2F%20Actual%3A%20Namespace%20'M'%20has%20no%20exported%20member%20'vn'.%0D%0A%2F%2F%20Proposed%3A%20'vn'%20only%20refers%20to%20a%20value%20and%20a%20namespace%2C%20but%20is%20being%20used%20as%20a%20type%20here.%0D%0Atype%20t6%20%3D%20M.vn%3B%0D%0A%0D%0A%2F%2F%20Actual%3A%20Cannot%20find%20namespace%20'v'.%0D%0A%2F%2F%20Proposed%3A%20'v'%20only%20refers%20to%20a%20value%2C%20but%20is%20being%20used%20as%20a%20namespace%20here.%0D%0Atype%20nt1%20%3D%20v.oops%3B%0D%0A%2F%2F%20Actual%3A%20Namespace%20'M'%20has%20no%20exported%20member%20'v'.%0D%0A%2F%2F%20Proposed%3A%20'v'%20only%20refers%20to%20a%20value%2C%20but%20is%20being%20used%20as%20a%20namespace%20
here.%0D%0Atype%20nt2%20%3D%20M.v.oops%3B%0D%0A%2F%2F%20Actual%20is%20OK%3A%20't'%20only%20refers%20to%20a%20type%2C%20but%20is%20being%20used%20as%20a%20namespace%20here.%0D%0Atype%20nt3%20%3D%20t.oops%3B%0D%0A%2F%2F%20Actual%3A%20Namespace%20'M'%20has%20no%20exported%20member%20't'.%0D%0A%2F%2F%20Proposed%3A%20't'%20only%20refers%20to%20a%20type%2C%20but%20is%20being%20used%20as%20a%20namespace%20here.%0D%0Atype%20nt4%20%3D%20M.t.oops%3B%0D%0A%2F%2F%20Actual%3A%20'vt'%20only%20refers%20to%20a%20type%2C%20but%20is%20being%20used%20as%20a%20namespace%20here.%0D%0A%2F%2F%20Proposed%3A%20'vt'%20only%20refers%20to%20a%20value%20and%20a%20type%2C%20but%20is%20being%20used%20as%20a%20namespace%20here.%0D%0Atype%20nt5%20%3D%20vt.oops%3B%0D%0A%2F%2F%20Actual%3A%20Namespace%20'M'%20has%20no%20exported%20member%20'vt'.%0D%0A%2F%2F%20Proposed%3A%20'vt'%20only%20refers%20to%20a%20value%20and%20a%20type%2C%20but%20is%20being%20used%20as%20a%20namespace%20here.%0D%0Atype%20nt6%20%3D%20M.vt.oops%3B) **Related Issues:** #7900
Bug,Help Wanted,Domain: Error Messages
low
Critical
368,042,282
storybook
Storysource addon - allow prism configuration
If you are reporting a bug or requesting support, start here:

### Bug or support request summary

Have added Storysource and was expecting to be able to configure Prism syntax highlighting - eg. choose the theme, tweak the highlight style and so on. However I can't find anything in the docs about that - I can see the Prettier settings but nothing about Prism? That all seems hard-coded.

This is an issue as the chosen theme is really hard to read and clashes badly with our own design.

Not sure if I'm just missing something, or if this is a feature request...

### Steps to reproduce

n/a I think

### Please specify which version of Storybook and optionally any affected addons that you're running

- "@storybook/react": "^3.3.15",
- "@storybook/addon-storysource": "^3.4.11",

### Affected platforms

n/a I think

### Where to start

n/a I think

### Acceptance criteria

- Ability to choose theme for Prism when using Storysource
- Ability to configure/extend the styles applied to the Story tab, eg. the highlight style
feature request,ui,addon: storysource
low
Critical
368,106,663
rust
Path::ancestors can contain empty path
With the following code

```rust
use std::path::Path;

fn main() {
    let mut ancestors = Path::new("foo/bar").ancestors();
    assert_eq!(ancestors.next(), Some(Path::new("foo/bar")));
    assert_eq!(ancestors.next(), Some(Path::new("foo")));
    assert_eq!(ancestors.next(), None);
}
```

I'd expect the last assertion to hold, but it doesn't, since that final `ancestors.next()` is in fact `Some(Path::new(""))`.
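For what it's worth, Python's `pathlib` makes the analogous design choice explicit: the chain of parents of a relative path terminates in the "empty" current-directory path `.`, which suggests the trailing `Path::new("")` may be intentional API design rather than a bug. A quick check:

```python
from pathlib import PurePosixPath

# The parents of a relative path end with '.', the empty
# current-directory path -- analogous to Rust yielding Path::new("").
parents = [str(p) for p in PurePosixPath("foo/bar").parents]
print(parents)  # -> ['foo', '.']
```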
T-libs-api,A-io
low
Minor
368,176,144
opencv
Cannot silence warnings from cv::VideoCapture with cv::utils::logging::setLogLevel
<!-- If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses. If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).

Please:
* Read the documentation to test with the latest developer build.
* Check if other person has already created the same issue to avoid duplicates. You can comment on it if there already is an issue.
* Try to be as detailed as possible in your report.
* Report only one problem per created issue.

This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library. -->

##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015 -->

- OpenCV => 3.4.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017

##### Detailed description

**/modules/videoio/src/cap_ffmpeg_impl.hpp** ignores OpenCV's logging infrastructure and prints directly to STDERR, which results in an unreadable character mess for multi-threaded applications.

##### Steps to reproduce

<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file -->

```.cpp
cv::utils::logging::setLogLevel( cv::utils::logging::LOG_LEVEL_FATAL );
cv::VideoCapture capture;
capture.open( "path to a nonexistent file" );
```
feature,category: videoio
low
Critical
368,184,903
pytorch
[Feature request]: add `LayerNormLSTMCell`
As layer normalization has recently become a standard trick for training RNNs, it would be very convenient to support it. Here I write up an initial version:

```python
class LayerNormLSTMCell(RNNCellBase):

    def __init__(self, input_size, hidden_size, bias=True, ln_preact=True):
        super(LayerNormLSTMCell, self).__init__(input_size, hidden_size,
                                                bias, num_chunks=4)
        self.ln_preact = ln_preact
        if self.ln_preact:
            self.ln_ih = nn.LayerNorm(4*self.hidden_size)
            self.ln_hh = nn.LayerNorm(4*self.hidden_size)
        self.ln_cell = nn.LayerNorm(self.hidden_size)

    def forward(self, input, hx=None):
        self.check_forward_input(input)
        if hx is None:
            hx = input.new_zeros(input.size(0), self.hidden_size,
                                 requires_grad=False)
            hx = (hx, hx)
        self.check_forward_hidden(input, hx[0], '[0]')
        self.check_forward_hidden(input, hx[1], '[1]')

        # hidden states and preactivations
        h, c = hx
        ih = input @ self.weight_ih.t() + self.bias_ih
        hh = h @ self.weight_hh.t() + self.bias_hh
        if self.ln_preact:
            ih = self.ln_ih(ih)
            hh = self.ln_hh(hh)
        preact = ih + hh

        # Gates
        f, i, o, g = preact.chunk(4, dim=1)
        g = g.tanh()
        f = f.sigmoid()
        i = i.sigmoid()
        o = o.sigmoid()

        # cell computations
        c = f*c + i*g
        c = self.ln_cell(c)
        h = o*c.tanh()
        return h, c
```

cc @zou3519
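For readers unfamiliar with the transform, here is a dependency-free sketch of what `nn.LayerNorm` does to each row of gate preactivations in the cell above (ignoring the learnable affine parameters):

```python
import math

def layer_norm(row, eps=1e-5):
    """Normalize one row of preactivations to zero mean, unit variance."""
    mean = sum(row) / len(row)
    var = sum((v - mean) ** 2 for v in row) / len(row)
    return [(v - mean) / math.sqrt(var + eps) for v in row]

out = layer_norm([1.0, 2.0, 3.0, 4.0])
# mean of `out` is ~0 and its (biased) variance is ~1
```

The `eps` term matches `nn.LayerNorm`'s default and guards against division by zero for constant rows.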
module: rnn,triaged
low
Major
368,211,894
vue
A deliberately empty slot-scope attribute will not be rendered as a scoped slot
### Version
2.5.17

### Reproduction link
[https://jsfiddle.net/decademoon/50wL7mdz/759740/](https://jsfiddle.net/decademoon/50wL7mdz/759740/)

### Steps to reproduce

```
<foo>
  <bar slot-scope/>
</foo>
```

### What is expected?
`<bar slot-scope>` should be a scoped slot.

### What is actually happening?
`<bar slot-scope>` is rendered as a child as if `slot-scope` were not present.

---

As a workaround, I've been using a dummy variable to force it to be a scoped slot:

```
<bar slot-scope="scope"/>
```

<!-- generated by vue-issues. DO NOT REMOVE -->
improvement
low
Major
368,233,113
rust
Lifetime inference and Pin
There appears to be some lifetime inference issue with how `Deref` is implemented for `Pin`, so that I get issues about how the pin does not live long enough in code that should clearly work. For example, this gives an error about how the pin does not live long enough:

```rust
fn foo<'a>(x: Pin<&'a i32>) -> &'a i32 {
    &*x
}
```

https://play.rust-lang.org/?gist=5749b0645d17308d62501924e618e49d&version=nightly&mode=debug&edition=2015

cc #49150
A-type-system,T-compiler,A-inference,C-bug,T-types
low
Critical
368,237,077
vscode
Show/log which extension has thrown an exception
Please see https://github.com/Microsoft/vscode/issues/60046. When we catch the exception in the provideHover API call, we know which extension is handling the call and could report this to the user.
feature-request,extensions,log
low
Minor
368,242,132
go
cmd/vet: warn about changing fields in non-escaping variables if they are not set after assignment
It's a common mistake for new users of Go* to accidentally attempt to set a field on a non-pointer receiver. For instance:

```
type Something struct{ Done bool }

func (s Something) Bar() {
	if s.Done {
		return
	}
	// Code goes here
	s.Done = true
}
```

The above is legal in Go 1.x because s is passed by value to Bar, but the change only lasts for the duration of the function call, since the local copy on the stack was modified, not the value the method was called on. This is almost always a bug and I can't think of any situations where that would be the intention of the programmer when attempting to set a property inside of a method.

This proposal is to make it illegal to modify a property on a receiver when the receiver is passed by value. Arguments other than the receiver would be unaffected. This could be done either as a `go vet` check in Go 1.x, or a language change in Go 2.

* I don't have any numbers to support this claim, but blaming new users means I wouldn't need to admit if it were to theoretically be a mistake that I might still occasionally make.
NeedsInvestigation,Analysis
medium
Critical
368,279,124
pytorch
CuDNN convolution on some CUDA devices will not preserve NaN weights (upstream bug)
## ๐Ÿ› Bug <!-- A clear and concise description of what the bug is. --> `Conv1d` with nan weights outputs non-nan values during traing. But after saving and reloading the weights, it outputs nan values. It should output nan value not only after reloading but also during training if the weights are nan. And I don't know why the weights become nan. These lines in my repo are example. It prints only `weight has nan` after hundreds iterations. https://github.com/dhgrs/pytorch-UniWaveNet/blob/484efb51a48586f9b1189a60ce35c6408310d744/net.py#L58-L70 ## To Reproduce Steps to reproduce the behavior: 1. Clone this [repo](https://github.com/dhgrs/pytorch-UniWaveNet). 1. Download [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) and set the path in `params.py`. 1. Install librosa and tqdm. 1. `Python train.py` ## Expected behavior `Conv1d` outputs nan value or `Conv1d`'s weights are not nan values. <!-- A clear and concise description of what you expected to happen. --> ## Environment PyTorch version: 0.5.0a0+a24163a Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.5 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: GeForce GTX 1080 Ti Nvidia driver version: 390.87 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.2 /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a /usr/local/lib/python3.6/site-packages/cupy/_lib/libcudnn.so.7 Versions of relevant libraries: [pip3] msgpack-numpy (0.4.3.1) [pip3] numpy (1.15.1) [pip3] torch (0.5.0a0+a24163a) [pip3] torchtext (0.2.3) [pip3] torchvision (0.2.1) [conda] Could not collect
module: dependency bug,module: cudnn,low priority,triaged
low
Critical
368,295,890
kubernetes
Corrupted/bitflipped serialized API data present in etcd
/kind bug

**What happened**:

API server became unable to respond to requests for pod resources. There is an error in the logs deserializing the Pod protobuf.

If we look at etcd we see two entries for a datadog-agent pod ending in dxjt9, where there should only be one (the 'n' in 'agent' changes to 'l'):

* `datadog-agent-dxjt9`
* `datadog-agelt-dxjt9`

Looking at the data for these two records in etcd we can see multiple places where the data differs between them, including the entity's name and the type for WorkingDir being changed from 2 (length-delimited) to 0 (varint), which is the error the API server logs are showing. There is no call to the API server for a resource named `datadog-agelt-dxjt9`. It seems that the key and data are corrupted somewhere between kube-apiserver and etcd, or in etcd itself.

[etcd-records.zip](https://github.com/kubernetes/kubernetes/files/2461269/etcd-records.zip)

A binary diff of the two records shows multiple bits flipped. There are a few interesting things about these flipped bits:

* It is always the '2' bit in a byte.
* After the first flipped bit, they are all at byte offsets ending in 0.
* The first is in the etcd key (agent -> agelt).
* The others are in the serialized podspec data.
* The changed values alternate between +2 and -2.

```
offset: 0x22 (34), good: 6e bad: 6c
offset: 0xa0 (160), good: d9 bad: db
offset: 0x120 (288), good: 7 bad: 5
offset: 0x220 (544), good: 61 bad: 63
offset: 0x260 (608), good: 2a bad: 28
offset: 0x4e0 (1248), good: 2d bad: 2f
offset: 0x760 (1888), good: 2e bad: 2c
offset: 0x7a0 (1952), good: 5 bad: 7
```

Manually removing the corrupted record with etcdctl returned the cluster to a working state.
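The byte-level diff summarized above is easy to reproduce; here is a short illustrative script that reports the offset, values, and flipped bits for each differing byte (shown on a toy pair of buffers, not the attached etcd records):

```python
def bit_diffs(good: bytes, bad: bytes):
    """Yield (offset, good_byte, bad_byte, xor) for each differing byte."""
    assert len(good) == len(bad)
    for off, (g, b) in enumerate(zip(good, bad)):
        if g != b:
            yield off, g, b, g ^ b

good = b"datadog-agent"
bad = b"datadog-agelt"
for off, g, b, x in bit_diffs(good, bad):
    print(f"offset: {off:#x}, good: {g:x} bad: {b:x}, flipped bits: {x:#04x}")
# -> offset: 0xb, good: 6e bad: 6c, flipped bits: 0x02
```

An xor of `0x02` in every differing byte is what "always the '2' bit" means above.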
API server logs:

Host 1:

```
I0722 02:14:16.832788       1 wrap.go:42] PUT /api/v1/namespaces/default/pods/datadog-agent-dxjt9/status: (3.293948ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
I0722 02:14:41.932049       1 wrap.go:42] GET /api/v1/namespaces/default/pods/datadog-agent-dxjt9: (2.369567ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
I0722 02:14:41.936873       1 wrap.go:42] PUT /api/v1/namespaces/default/pods/datadog-agent-dxjt9/status: (3.532971ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
I0722 02:15:07.038952       1 wrap.go:42] GET /api/v1/namespaces/default/pods/datadog-agent-dxjt9: (2.708027ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
I0722 02:15:07.046041       1 wrap.go:42] PUT /api/v1/namespaces/default/pods/datadog-agent-dxjt9/status: (5.724827ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
I0722 02:16:04.449918       1 wrap.go:42] PATCH /api/v1/namespaces/default/events/datadog-agent-dxjt9.153cbf96727afa55: (9.020874ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
...
I0722 02:20:10.061938       1 wrap.go:42] GET /api/v1/namespaces/default/pods/datadog-agent-dxjt9: (3.513629ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
E0722 02:20:10.068328       1 watcher.go:268] failed to prepare current and previous objects: proto: wrong wireType = 0 for field WorkingDir
W0722 02:20:10.068400       1 reflector.go:341] storage/cacher.go:/pods: watch of *core.Pod ended with: Internal error occurred: proto: wrong wireType = 0 for field WorkingDir
I0722 02:20:10.068754       1 wrap.go:42] PUT /api/v1/namespaces/default/pods/datadog-agent-dxjt9/status: (5.488655ms) 200 [[kubelet/v1.10.3 (linux/amd64) kubernetes/2bba012] XX.XX.154.151:3002]
...
I0722 02:20:11.071738 1 get.go:238] Starting watch for /api/v1/pods, rv=4018844 labels= fields= timeout=8m18s E0722 02:20:11.072657 1 cacher.go:271] unexpected ListAndWatch error: storage/cacher.go:/pods: Failed to list *core.Pod: proto: wrong wireType = 0 for field WorkingDir I0722 02:20:11.074093 1 wrap.go:42] GET /api/v1/namespaces/kube-system/secrets/aws-node-token-9rwj8: (2.031495ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:34804] I0722 02:20:11.078168 1 wrap.go:42] GET /api/v1/namespaces/kube-system/serviceaccounts/aws-node: (3.485772ms) 200 [[kube-apiserver/v1.10.3 (linux/amd64) kubernetes/2bba012] 127.0.0.1:34804] I0722 02:20:11.078760 1 get.go:238] Starting watch for /api/v1/pods, rv=4018844 labels= fields= timeout=7m53s ... E0722 02:22:31.220597 1 status.go:64] apiserver received an error that is not an metav1.Status: rpc error: code = Unknown desc = proto: wrong wireType = 0 for field Key I0722 02:22:31.220825 1 wrap.go:42] PUT /api/v1/namespaces/ticketing/pods/uts-queuemanager-production-df64b6d54-gq5qv/status: (207.160954ms) 500 goroutine 240059803 [running]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc432e987e0, 0x1f4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc432e987e0, 0x1f4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc42336cde0, 0x1f4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc42f13e2a0, 0x1f4) 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:281 +0x45 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x3d863b2, 0x23, 0x7fa4a5f62440, 0xc428a2d780, 0x9868e80, 0xc42e1e0448, 0xc422e80500, 0x1f4, 0x984fc40, 0xc42fb29200) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:95 +0x8d k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x7fa4a5afce20, 0xc42f0ade00, 0x986b940, 0xc420a75ce0, 0x0, 0x0, 0x3d209f1, 0x2, 0x9868e80, 0xc42e1e0448, ...) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:123 +0x32b k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x7fa4a5afce20, 0xc42f0ade00, 0x983ce00, 0xc42f0add10, 0x986b940, 0xc420a75ce0, 0x0, 0x0, 0x3d209f1, 0x2, ...) 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:142 +0x163 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(0xc428c10000, 0x983ce00, 0xc42f0add10, 0x9868e80, 0xc42e1e0448, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:67 +0x10c k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.UpdateResource.func1(0x9868e80, 0xc42e1e0448, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/update.go:112 +0x12e0 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulUpdateResource.func1(0xc42f13e210, 0xc42f0351a0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1045 +0xd5 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc42f13e210, 0xc42f0351a0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:199 +0x208 k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc42078e360, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0xb18 k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc42078e360, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d34530, 0xe, 0xc42078e360, 0xc4205d91f0, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:152 +0x4e0 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc420b63c20, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) <autogenerated>:1 +0x75 k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc42a0b7e00, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:93 +0x18a k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc4286f84c0, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x26d k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc4211dba40, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc429455cb0, 0xc4211dba40, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc429e462c0, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) <autogenerated>:1 +0x75 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d net/http.HandlerFunc.ServeHTTP(0xc422da3b30, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /usr/local/go/src/net/http/server.go:1918 +0x44 
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a net/http.HandlerFunc.ServeHTTP(0xc429e030c0, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a net/http.HandlerFunc.ServeHTTP(0xc422da3c20, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1 net/http.HandlerFunc.ServeHTTP(0xc422da3d10, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb net/http.HandlerFunc.ServeHTTP(0xc429e462e0, 0x7fa4a59aa370, 0xc42e1e0438, 0xc422e80500) /usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc429e46360, 0x986d800, 0xc42e1e0438, 0xc422e80500, 0xc42f03bda0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d created by 
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab logging error output: "k8s\x00\n\f\n\x02v1\x12\x06Status\x12`\n\x06\n\x00\x12\x00\x1a\x00\x12\aFailure\x1aHrpc error: code = Unknown desc = proto: wrong wireType = 0 for field Key\"\x000\xf4\x03\x1a\x00\"\x00" ``` Host 2: ``` E0722 02:20:10.068483 1 watcher.go:268] failed to prepare current and previous objects: proto: wrong wireType = 0 for field WorkingDir W0722 02:20:10.068573 1 reflector.go:341] storage/cacher.go:/pods: watch of *core.Pod ended with: Internal error occurred: proto: wrong wireType = 0 for field WorkingDir ... E0722 02:20:10.758223 1 status.go:64] apiserver received an error that is not an metav1.Status: proto: wrong wireType = 0 for field Wor kingDir I0722 02:20:10.758561 1 wrap.go:42] GET /api/v1/pods: (3.060723ms) 500 goroutine 224226315 [running]: k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc424092bd0, 0x1f4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc424092bd0, 0x1f4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc433b005a0, 0x1f4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc424f6ee70, 0x1f4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:281 +0x45 
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x3d863b2, 0x23, 0x7fd4dad1e2f8, 0xc42f8a7a8 0, 0x9868e80, 0xc4272851f0, 0xc425cdd900, 0x1f4, 0x984fc40, 0xc431159b00) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers. go:95 +0x8d k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x7fd4dacddbe8, 0xc424f6f3e0, 0x986b94 0, 0xc420cb6120, 0x0, 0x0, 0x3d209f1, 0x2, 0x9868e80, 0xc4272851f0, ...) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers. go:123 +0x32b k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x7fd4dacddbe8, 0xc424f6f3e0, 0x9832b40, 0xc 42d900620, 0x986b940, 0xc420cb6120, 0x0, 0x0, 0x3d209f1, 0x2, ...) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers. 
go:142 +0x163 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(0xc430f3c3c0, 0x9832b40, 0xc42d900620, 0x9868e80, 0xc4272851f0, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:67 +0x10c k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ListResource.func1(0x9868e80, 0xc4272851f0, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/get.go:257 +0xc5e k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulListResource.func1(0xc424f6ede0, 0xc425e89620) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1015 +0xd0 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc424f6ede0, 0xc425e89620) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:199 +0x208 k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc42071f170, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0xb18 k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc42071f170, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d34530, 0xe, 0xc42071f170, 0xc4202eb500, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:152 +0x4e0 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc4202b6fa0, 0x7fd4da66ade0, 
0xc4272851c8, 0xc425cdd900) <autogenerated>:1 +0x75 k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc42830bb00, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:93 +0x18a k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc42275adc0, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x26d k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc42300ff80, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3d372f1, 0xf, 0xc429ea4090, 0xc42300ff80, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:160 +0x6ad k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc429e50ee0, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) <autogenerated>:1 +0x75 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:52 +0x37d n.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:78 +0x2b1 net/http.HandlerFunc.ServeHTTP(0xc4230ef400, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) 
/usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb net/http.HandlerFunc.ServeHTTP(0xc429e50f00, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc429e50f80, 0x986d800, 0xc4272851c8, 0xc425cdd900, 0xc42738ed80) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab logging error output: "k8s\x00\n\f\n\x02v1\x12\x06Status\x12F\n\x06\n\x00\x12\x00\x1a\x00\x12\aFailure\x1a.proto: wrong wireType = 0 for field WorkingDir\"\x000\xf4\x03\x1a\x00\"\x00" [[Go-http-client/2.0] 10.87.136.119:60554] HandlerFunc.ServeHTTP(0xc4230ef310, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x42a net/http.HandlerFunc.ServeHTTP(0xc429cfd880, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) /usr/local/go/src/net/http/server.go:1918 +0x44 k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a net/http.HandlerFunc.ServeHTTP(0xc4230ef360, 0x7fd4da66ade0, 0xc4272851c8, 0xc425cdd900) ... ``` **What you expected to happen**: Did not expect the corrupted etcd record to be written. **How to reproduce it (as minimally and precisely as possible)**: Unable to reproduce. **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): 1.10.3 - Cloud provider or hardware configuration: eks (aws) - OS (e.g. from /etc/os-release): amazon-linux-2 - Others: etcd - 3.1.12
kind/bug,sig/api-machinery,lifecycle/frozen
medium
Critical
368,330,704
opencv
cv::cuda::norm works only for single channel images.
https://github.com/opencv/opencv/blob/808ba552c532408bddd5fe51784cf4209296448a/modules/cudaarithm/src/cuda/norm.cu#L104 There does not seem to be any such constraint for the cpu version. Does removing this constraint have any side effects?
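Until the constraint is lifted, a workaround is to split the image into single-channel planes, take the norm of each, and combine them — for the L2 case the combination is itself an L2 norm. A CPU sketch with NumPy (on the GPU, cv::cuda::split would produce the per-channel planes):

```python
import numpy as np

img = np.arange(48, dtype=np.float32).reshape(4, 4, 3)  # 3-channel image

# per-channel L2 norms, combined with another L2 norm,
# equal the norm taken over all channels at once
per_channel = [np.linalg.norm(img[..., c]) for c in range(img.shape[2])]
combined = float(np.linalg.norm(per_channel))

assert np.isclose(combined, np.linalg.norm(img))
```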
category: gpu/cuda (contrib)
low
Minor
368,346,525
flutter
Flutter Tester binaries should be optimized with debug assertions enabled
The current binaries distributed to cloud buckets by the buildbots are from the host_debug_unopt variant. This was done so that all debug assertions would hold and the binaries would contain debug symbols to boot. We should update the buildroot to enable optimized builds with debug assertions as well as symbols. Unoptimized binaries are significantly slower.
a: tests,engine,P2,team-engine,triaged-engine
low
Critical
368,348,137
create-react-app
mini-css-extract-plugin throws "Conflicting order" errors during build
(react-scripts 2.0.4) Same as in [this issue](https://github.com/webpack-contrib/mini-css-extract-plugin/issues/250) I am getting a lot of errors from the mini-css-extract-plugin when doing a CI build - which fails since no warnings are allowed. Since CRA does not allow customizing the WebPack plugin config, I cannot use the [warningsFilters](https://github.com/webpack-contrib/mini-css-extract-plugin/issues/250#issuecomment-415345126) and the [question about CRA also already popped up](https://github.com/webpack-contrib/mini-css-extract-plugin/issues/250#issuecomment-425287049). I am not sure what the CRA team can do about this - but maybe just keep an eye on it (or have it documented as a Known Issue) until WebPack somehow solves it. PS: for now I am running "set CI=&&react-scripts build" to disable the CI build warnings limit.
contributions: up for grabs!,tag: documentation,difficulty: starter
high
Critical
368,386,383
vscode
Git - support HEAD <> working tree changes in gutter
Issue Type: <b>Feature Request</b> I'd like to see staged changes as well as unstaged changes in the gutter. Currently, VS Code only shows unstaged changes, at least when using Git. Unstaged changes: <img width="131" alt="fikcx" src="https://user-images.githubusercontent.com/103690/46696912-635e3a80-cbe1-11e8-97d4-287bc37bcfcd.png"> Staged changes: <img width="128" alt="b6y38" src="https://user-images.githubusercontent.com/103690/46696925-68bb8500-cbe1-11e8-96bb-cd074c8648ea.png"> See https://stackoverflow.com/questions/48881124/can-i-make-visual-studio-code-highlight-staged-changes as well as https://github.com/eamodio/vscode-gitlens/issues/396 . VS Code version: Code 1.28.0 (431ef9da3cf88a7e164f9d33bf62695e07c6c2a9, 2018-10-04T16:40:40.180Z) OS version: Darwin x64 16.7.0 <!-- generated by issue reporter -->
help wanted,feature-request,git
high
Critical
368,388,827
pytorch
Support calculating grad for dense in sparse @ dense
## ๐Ÿ› Bug I get an error when I try to backprop through `torch.matmul` where the first matrix is a sparse matrix and the second matrix (dense) requires gradient. I am getting the following error: ``` RuntimeError: Expected object of type torch.FloatTensor but found type torch.sparse.FloatTensor for argument #2 'mat2' ``` Note that the sparse matrix does not require gradient in my case. More specifically backpropagating through `torch.matmul(A, x)` works fine, but it doesn't work for `torch.matmul(A, x.transpose(0,1))`. It works fine when doing `torch.matmul(A, x.transpose(0,1).clone())` ## To Reproduce Steps to reproduce the behavior: Run the following script ``` import torch x = torch.ones(3, 3, requires_grad=True) i = torch.LongTensor([ [0, 1, 2], [0, 1, 2] ]) v = torch.FloatTensor([1,1,1]) A = torch.sparse.FloatTensor(i, v, (3,3)) y = torch.matmul(A, x.transpose(0,1)) loss = y.mean() loss.backward() ``` ## Expected behavior Backprop normally through this operation. ## Environment I am using **pytorch/pytorch:0.4.1-cuda9-cudnn7-runtime** ``` PyTorch version: 0.4.1 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 16.04.5 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609 CMake version: Could not collect Python version: 3.6 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce GTX TITAN Z Nvidia driver version: 384.130 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy (1.14.5) [pip] torch (0.4.1) [pip] torchvision (0.2.1) [conda] cuda90 1.0 h6433d27_0 pytorch [conda] pytorch 0.4.1 py36_cuda9.0.176_cudnn7.1.2_1 pytorch [conda] torchvision 0.2.1 py36_1 pytorch ```
module: sparse,triaged
low
Critical
368,394,197
pytorch
C++: Calling Workspace::RunNet for a prediction on a different thread each time causes a GPU memory leak
## ๐Ÿ› Bug Hi, I am trying to obtain a model prediction (on GPU) while running another piece of code in parallel (on CPU). Since I am streaming data, I instantiate a separate std::thread (or std::async) every time to call Workspace::RunNet, this causes a GPU memory leak which is not noticeable unless you are streaming data. However, if the thread used is maintained (a worker thread with the same thread_id), the leak does not occur. Thank you, <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: 1. Given a pre-trained model, please use the following sample code to reproduce the issue: <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ``` // Network setup caffe2::NetDef init_net; caffe2::NetDef predict_net; ReadProtoFromFile(init_file, &init_net); ReadProtoFromFile(predict_file, &predict_net); auto workspace = std::make_unique<caffe2::Workspace>(); const auto cuda_device = caffe2::TypeToProto(caffe2::CUDA); predict_net.mutable_device_option()->set_device_type(cuda_device); init_net.mutable_device_option()->set_device_type(cuda_device); workspace->CreateBlob(predict_net_.external_input(0)); workspace->RunNetOnce(init_net); workspace->CreateNet(predict_net); // Tensors setup auto input_host_tensor = std::make_unique<caffe2::TensorCPU>(dims, caffe2::CPU); input_host_tensor->ShareExternalPointer(some_buffer); // some_buffer will be filled with the input data auto input_device_tensor = std::make_unique<caffe2::TensorCUDA>(dims, caffe2::CUDA); workspace->GetBlob(predict_net_.external_input(0))->Reset(input_device_tensor.get()); // Prediction while(true) { FetchData(some_buffer); input_device_tensor->CopyFrom(*input_host_tensor); // This is where the leak occurs std::thread my_thread(&caffe2::Workspace::RunNet, workspace.get(), predict_net.name()); // // some host code // my_thread.join(); } ``` ## Expected behavior Not expecting a memory leak when executing on threads 
with different thread ids. <!-- A clear and concise description of what you expected to happen. --> ## Environment ``` Linux=Ubuntu 16.04.4 LTS cmake 3.5.1 gcc 5.4.0 NVIDIA CUDA 8.0 NVIDIA cuDNN v6.0 Pytorch built from source (version to date is v1.0rc1) ``` ## Additional context <!-- Add any other context about the problem here. -->
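The workaround noted in the report — keeping one long-lived worker thread instead of spawning a new std::thread per prediction — can be sketched language-agnostically. A minimal Python version of the pattern (the C++ analogue would be a task queue guarded by a mutex and condition variable):

```python
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    # one long-lived thread: every RunNet-equivalent call lands on the
    # same thread id, mirroring the configuration that does not leak
    while True:
        fn = tasks.get()
        if fn is None:  # sentinel shuts the worker down
            break
        fn()
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(3):
    tasks.put(lambda i=i: results.append(i))
tasks.join()          # wait for all queued predictions
tasks.put(None)
t.join()
print(results)  # → [0, 1, 2]
```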
caffe2
low
Critical
368,430,441
material-ui
[Accessibility] Some of the components are not visible or rendered properly in Windows High Contrast Mode
While testing our application in Windows High Contrast Mode (Both ie11 and Edge), could see that some of the material-ui/core components are not visible or rendered properly. Reproduced the issues while accessing the Demo Components in material-ui page (https://material-ui.com/) <!-- Checked checkbox should look like this: [x] --> - [x] This is not a v0.x issue. <!-- (v0.x is no longer maintained) --> - [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate. ## Expected Behavior In Windows High Contrast Mode, all elements should be visible and user friendly. ## Current Behavior In Windows High Contrast Mode, the below components are not behaving as expected - Switches are not visible. - User cannot distinguish between enabled and disabled Selection Controls like Radio button and Checkbox. - The first character in Text Field (if not intended) is cropped. - The first character in some of the Select (if not intended) is cropped. - In Menus, user is not able to distinguish between selected and de-selected MenuItem. ## Steps to Reproduce 1. In Windows 10, open High Contrast setting and select a High contrast theme (High Contrast#1 or High Contrast#2) 2. Open ie11 and Edge browsers. 3. Go to https://material-ui.com/demos/ and validate Switches, Text Field, Menus, Select etc 4. Verify the issues. ## Context Our application is used by many users who have visual problems and need to make sure it's accessible to all users across platforms. ## Your Environment | Tech | Version | |--------------|---------| | Material-UI/core | v3.0.0 | | React | v16.4.1 | | Browser | ie11, Edge |
accessibility
low
Major
368,432,814
TypeScript
Automatic typings install does not trigger project refresh
From https://github.com/Microsoft/vscode/issues/59436 <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.2.0-dev.20181009 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** - ATA, automatic typings acquisition **Code** Tested on macos with npm 6.4.1. 1. Clear the typings cache. 1. For a new JS project with contents: `package.json` ``` { "dependencies": { "express": "4.16.3" } } ``` `main.js` ```js import * as express from "express" let a = express() ``` 1. Open the project in vscode 1. Run `npm install` 1. Open `main.js` 1. Notice the `...` suggestion on the import stating `could not find a declaration file for module 'express'` 1. Check ATA directory and see that express typings have been installed **Bug** The `...` suggestion is never removed and we fail to get proper typings for express. After reloading the javascript project we do start seeing the correct suggestions
Bug
low
Critical
368,446,257
kubernetes
Add tests using storage framework for the default storageclass
<!-- This form is for bug reports and feature requests ONLY! If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). If the matter is security related, please disclose it privately via https://kubernetes.io/security/. --> **Is this a BUG REPORT or FEATURE REQUEST?**: @kubernetes/sig-storage-feature-requests **What happened**: We can consider these tests as a future conformance profile candidate. See https://github.com/kubernetes/kubernetes/issues/65155#issuecomment-411486293
sig/storage,kind/feature,area/conformance,lifecycle/frozen
low
Critical
368,449,366
pytorch
The text design (color, type) makes it hard to read
The text color and type are hard to read: the contrast is too low and the text too thin, and even playing with the browser zoom does not make it much better. I actually wrote a small bookmarklet for myself to just color the text black and use a different font as a temporary fix. Code here for those interested: `javascript:function changeFontColor() {var all = document.getElementsByTagName("*"); for (var i=0, max=all.length; i < max; i++) { all[i].style.color = "black"; all[i].style.fontFamily = 'Arial'; }}; changeFontColor();`
todo,triaged
low
Minor
368,459,358
go
runtime: segfault during test using -coverpkg and -race
### What version of Go are you using (`go version`)? go1.11.1 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? Linux amd64 ### What did you do? `go test -c -o aTestBinary -coverpkg package1,package2,package3 -covermode atomic -race -ldflags '-w -s -extldflags "-static -Wl,--no-export-dynamic"'` `./aTestBinary -test.v -test.run . -test.coverprofile coverout` ### What did you expect to see? Success ### What did you see instead? ``` unexpected fault address 0x19600 fatal error: fault [signal SIGSEGV: segmentation violation code=0x1 addr=0x19600 pc=0x12a0389] goroutine 63887 [running]: runtime: unexpected return pc for runtime.throw called from 0x5 stack: frame={sp:0x7f0386ffca70, fp:0x7f0386ffcaa0} stack=[0xc0021f4000,0xc0021f6000) runtime.throw(0x5, 0x7f0386ffcae0) /usr/local/go/src/runtime/panic.go:608 +0x72 fp=0x7f0386ffcaa0 sp=0x7f0386ffca70 pc=0x497d42 created by ... goroutine 1 [chan receive]: testing.tRunner.func1(0xc000112100) /usr/local/go/src/testing/testing.go:803 +0x2e4 testing.tRunner(0xc000112100, 0xc00011bc98) /usr/local/go/src/testing/testing.go:831 +0x17f testing.runTests(0xc0000c4060, 0x410d5c0, 0xa5, 0xa5, 0xc0000bc000) /usr/local/go/src/testing/testing.go:1117 +0x4ef testing.(*M).Run(0xc000232200, 0x0) /usr/local/go/src/testing/testing.go:1034 +0x2ef .../....TestMain(0xc000232200) /go/src/....go:36 +0x123 main.main() _testmain.go:962 +0x33f goroutine 50 [chan receive]: testing.(*testContext).waitParallel(0xc0000ad1d0) /usr/local/go/src/testing/testing.go:926 +0x15c testing.(*T).Parallel(0xc000112200) /usr/local/go/src/testing/testing.go:733 +0x3e5 ... ``` Probably related #23882?
RaceDetector,NeedsInvestigation
low
Critical
368,477,499
pytorch
how to use mask-rcnn in caffe2 c++ gpu
I converted the pkl model to a pb model using convert_pkl_to_pb.py, but only the detection net gets converted, even for Mask R-CNN, which also has a mask net. If anyone could share example working code with GPU in C++, that'd be amazing.
caffe2
low
Minor
368,575,877
vscode
[html] proper support of XHTML
XHTML is parsed as HTML rather than as XML, which causes various issues: - Self closing tags are not allowed, except for elements which in HTML are void elements. - Elements which in HTML are void elements are allowed to be unclosed, and are unclosed by default. - Elements which in HTML are void elements are not allowed to have a separate close tag. - Attribute values are allowed to be omitted or unquoted.
feature-request,html
low
Minor
368,632,431
pytorch
[caffe2] Memory usage
Hi. I am getting unexpected big amount of memory usage when running onnx models in Caffe2 on C++. For example for SSD with VGG feature extractor it gets as high as 1.2Gb, while same model with almost equal size in serialized form (about 104Mb) in TensorFlow consume 130Mb. And i getting similar behavior for other models I tried. Just wanna know is that expected to memory usage be such high in Caffe2 or I doing something wrong? Here is my code to evaluate model on Caffe2: ``` #include <string> #include <vector> #include <caffe2/onnx/backend.h> #include <caffe2/core/init.h> #include <caffe2/utils/proto_utils.h> #include <onnx/proto_utils.h> #include <onnx/onnx_ONNX_NAMESPACE.pb.h> #include <fstream> static int sleeptime = 1; static int res = 300; ONNX_NAMESPACE::ModelProto read_model(std::string path){ std::ifstream model_file(path, std::ios::binary|std::ios::ate); auto size = model_file.tellg(); model_file.seekg(0, model_file.beg); ONNX_NAMESPACE::ModelProto model_proto; std::vector<char> model_bin_str(size); model_file.readsome(model_bin_str.data(), size); ONNX_NAMESPACE::ParseProtoFromBytes(&model_proto, model_bin_str.data(), size); model_file.close(); return model_proto; } void f(std::string model_path){ caffe2::onnx::Caffe2Backend backend; auto back = std::unique_ptr<caffe2::onnx::Caffe2BackendRep>(backend.Prepare(read_model(model_path).SerializeAsString(),"CPU :0",{})); std::cout << "Backend prepared" << std::endl; std::this_thread::sleep_for(std::chrono::seconds(sleeptime)); caffe2::Predictor::TensorVector inputs, outputs; caffe2::TensorCPU input_tensor; std::vector<caffe2::TIndex> tensor_size = {1, 3, res, res}; input_tensor.Resize(tensor_size); auto tensor_data = input_tensor.mutable_data<float>(); auto tensor_size_ = std::accumulate(tensor_size.begin(), tensor_size.end(), 1, std::multiplies<int>()); for (int i = 0; i < tensor_size_; ++i) { *(tensor_data+i) = 1.f; } inputs.push_back(&input_tensor); std::cout << "Input prepared" << std::endl; 
std::this_thread::sleep_for(std::chrono::seconds(sleeptime)); back->Run(inputs, &outputs); std::cout << "Model inferenced" << std::endl; std::this_thread::sleep_for(std::chrono::seconds(sleeptime)); inputs.clear(); outputs.clear(); input_tensor.Resize(tensor_size); for (int i = 0; i < tensor_size_; ++i) { *(tensor_data+i) = 0.f; } inputs.push_back(&input_tensor); std::cout << "Input prepared" << std::endl; std::this_thread::sleep_for(std::chrono::seconds(sleeptime)); back->Run(inputs, &outputs); std::cout << "Model inferenced" << std::endl; std::this_thread::sleep_for(std::chrono::seconds(sleeptime)); } int main(int argc, char *argv[]) { sleeptime = std::stoi(std::string(argv[2])); res = std::stoi(std::string(argv[3])); std::cout << "Started" << std::endl; std::this_thread::sleep_for(std::chrono::seconds(sleeptime)); caffe2::GlobalInit(); f(std::string(argv[1])); std::cout << "Pass done" << std::endl; std::this_thread::sleep_for(std::chrono::seconds(sleeptime)); caffe2::ShutdownProtobufLibrary(); return 0; } ``` And here is my cmake to build it ``` project(caffe2_onnx_mem) cmake_minimum_required(VERSION 3.5) set(CMAKE_CXX_STANDARD 11) include(${CMAKE_ROOT}/Modules/ExternalProject.cmake) set(CAFFE2_DIR ${CMAKE_CURRENT_BINARY_DIR}/caffe2) set(CAFFE2_PREFIX ${CMAKE_CURRENT_BINARY_DIR}/Caffe2-prefix) set(CAFFE2_BUILD_DIR ${CAFFE2_PREFIX}/src/Caffe2-build) set(CAFFE2_SRC_DIR ${CAFFE2_PREFIX}/src/Caffe2/) ExternalProject_Add( Caffe2 UPDATE_COMMAND "" GIT_REPOSITORY "https://github.com/pytorch/pytorch" GIT_TAG "v0.4.1" CMAKE_ARGS -DONNX_NAMESPACE=ONNX_NAMESPACE -DBUILD_BINARY=OFF -DUSE_CUDA=OFF -DBUILD_PYTHON=OFF -DUSE_PROF=OFF -DUSE_ATEN=OFF -DUSE_OPENCV=OFF -DUSE_LMDB=OFF -DUSE_LEVELDB=OFF -DUSE_GLOO=OFF -DUSE_GLOG=OFF -DUSE_GFLAGS=OFF -DUSE_NNPACK=ON -DUSE_MKL=OFF -DUSE_MKLML=OFF -DUSE_IDEEP=OFF -DUSE_NATIVE_ARCH=ON -DUSE_LITE_PROTO=OFF -DBUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=${CAFFE2_DIR} INSTALL_COMMAND make install ) add_executable($caffe2_onnx_mem 
"c2_onnx_mem.cpp") target_include_directories(caffe2_onnx_mem PUBLIC ${CAFFE2_DIR}/include CAFFE2_SRC_DIR ) target_link_libraries(caffe2_onnx_mem -Wl,--whole-archive ${CAFFE2_DIR}/lib/libcaffe2.a ${CAFFE2_BUILD_DIR}/lib/libcaffe2_protos.a -Wl,--no-whole-archive ${CAFFE2_DIR}/lib/libcpuinfo.a ${CAFFE2_DIR}/lib/libnomnigraph.a ${CAFFE2_DIR}/lib/libonnx.a ${CAFFE2_DIR}/lib/libnnpack.a ${CAFFE2_DIR}/lib/libonnx_proto.a ${CAFFE2_BUILD_DIR}/lib/libonnxifi_loader.a ${CAFFE2_DIR}/lib/libprotobuf.a ${CAFFE2_DIR}/lib/libprotoc.a pthread dl ) ``` I measuring memory usage with ``` watch -n 0.1 'pidof ./caffe2_onnx_mem | xargs -i cat /proc/{}/status | grep "RssAnon"' ``` And here is the python code that i using to make a model ``` import torch class ConvBnAct(torch.nn.Module): def __init__(self, inp, out, stride=1, kernel=3, groups=1, dilations=1,bias=False, pad=None): super().__init__() if pad is not None: is_pad = pad else: is_pad = True if kernel != 1 else False self.conv = torch.nn.Conv2d(inp,out,kernel,stride,int(is_pad),dilations,groups,bias) self.bn = torch.nn.BatchNorm2d(out) self.relu = torch.nn.LeakyReLU(inplace=True) def forward(self, x): x = self.conv(x) x = self.bn(x) x = self.relu(x) return x class ConvAct(torch.nn.Module): def __init__(self, inp, out, stride=1, kernel=3, groups=1, dilations=1,bias=False, pad=None): super().__init__() if pad is not None: is_pad = pad else: is_pad = True if kernel != 1 else False self.conv = torch.nn.Conv2d(inp,out,kernel,stride,int(is_pad),dilations,groups,bias) self.relu = torch.nn.LeakyReLU(inplace=True) def forward(self, x): x = self.conv(x) x = self.relu(x) return x class VGGStage(torch.nn.Module): def __init__(self, input_depth, output_depth, conv_count=2, pool=True): super().__init__() self.ops = [] input_depth_ = input_depth if pool: self.ops.append(torch.nn.MaxPool2d(3,stride=2,padding=1)) for i in range(conv_count): self.ops.append(ConvBnAct(input_depth_, output_depth)) input_depth_ = output_depth self.ops = 
torch.nn.Sequential( *self.ops ) def forward(self, x): return self.ops(x) class VGG16_SSD(torch.nn.Module): def __init__(self, cls_count=20): super().__init__() stage_defs = [ (3,64,2, False), (64,128,2, True), (128,256,3, True), (256,512,3, True), (512,512,3, True) ] bbox_counts=[4,6,6,6,4] inputs=[512,1024,512,256,256] self.stages = [] for inp, out, num, pool in stage_defs: self.stages.append(VGGStage(inp, out, num, pool)) self.stages += [ ConvBnAct(512,1024,kernel=3), ConvBnAct(1024,1024,kernel=1), torch.nn.Sequential( ConvBnAct(1024,256,kernel=1), ConvBnAct(256,512,kernel=3, stride=2) ), torch.nn.Sequential( ConvBnAct(512,128,kernel=1), ConvBnAct(128,256,kernel=3, stride=2) ), torch.nn.Sequential( ConvBnAct(256,128,kernel=1, pad=False), ConvBnAct(128,256,kernel=3, pad=False) ), torch.nn.Sequential( ConvBnAct(256,128,kernel=1, pad=False), ConvAct(128,256,kernel=3, pad=False) ), ] self.stages = torch.nn.ModuleList(self.stages) self.extra_convs = torch.nn.ModuleList([ ConvBnAct(input_size,bbox_cnt * (4 + cls_count),kernel=3) for bbox_cnt, input_size in zip(bbox_counts, inputs) ] + [ ConvAct(256,4,kernel=1) ]) def forward(self, x): outputs = [] for stage in self.stages[:4]: x = stage(x) outputs.append(x) for stage in self.stages[4:7]: x = stage(x) outputs.append(x) for stage in self.stages[7:]: x = stage(x) outputs.append(x) detections = tuple( c(o) for o, c in zip(outputs, self.extra_convs) ) return detections net = VGG16_SSD() tensor = torch.ones((1,3,300,300)) out = net(tensor) torch.onnx.export(net, tensor, "./vggssd.onnx", verbose=False, input_names=['Input']) ```
caffe2
low
Minor
368,635,571
rust
Subslice search
As an enhancement, I think the stdlib should contain functions that search for a subslice inside a given slice: ``` fn contains_subslice<T: PartialEq>(data: &[T], needle: &[T]) -> bool { data .windows(needle.len()) .any(|w| w == needle) } fn position_subslice<T: PartialEq>(data: &[T], needle: &[T]) -> Option<usize> { data .windows(needle.len()) .enumerate() .find(|&(_, w)| w == needle) .map(|(i, _)| i) } fn main() { println!("{}", contains_subslice(b"hello", b"ll")); println!("{:?}", position_subslice(b"hello", b"ll")); } ``` For the common case of T: Copy items the true stdlib functions should specialize using a smarter algorithm, like: https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm (Similar functions are useful for iterators too).
T-libs-api,C-feature-request
medium
Critical
368,661,893
go
net: clarify documentation on net.ListenConfig.Listen and related calls w.r.t. context
### What version of Go are you using (`go version`)? go version go1.11.1 linux/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOOS="linux" ### What did you do? I created a net.ListenConfig.Listen(cancelableCtx, "unix", "/path/to/socket") to create a Unix socket, and called Accept on said listener. ### What did you expect to see? I expected canceling the context to cancel the Accept. ### What did you see instead? The Accept was not canceled. --- So maybe my expectation was wrong. I reasoned "_Why else_ would there be a context there if not to cancel (or time out) the Accept calls?". But clearly not. I asked on the #general channel in the Gophers Slack and was reminded that Dial, for example, specifically says that once Dial completes, the context it's given doesn't affect the connection, and that probably Listen & Accept were similar. That does appear to be the case. So, for folks like me that are not terribly familiar with network or socket programming, would it be possible to note in the Listen & Accept docs that the given context doesn't affect the later Accept? And maybe even what the Listen context actually does; it wasn't 100% clear to me. Thanks.
Documentation,help wanted,NeedsFix
low
Major
368,675,851
go
x/tools/go/packages: support for loading files/syntax irrespective of build tags
### What version of Go are you using (`go version`)? ``` go version go1.11.1 linux/amd64 go/packages commit 29f11e2b93f4d66f7c335bd7b2892836d4944f5c ``` ### Does this issue reproduce with the latest release? Yes, per above ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/home/myitcv/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/myitcv/gostuff" GOPROXY="" GORACE="" GOROOT="/home/myitcv/gos" GOTMPDIR="" GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build137760326=/tmp/go-build -gno-record-gcc-switches" ``` Picking up on the discussion from the [2018-10-09 golang-tools catch up](https://docs.google.com/document/d/1oEknhf60Cdg9p_i17ESIm3zjTuVK7Adr-lTw78D0Qrc/edit#) In the pre-modules world, `go/build` and `go/parser` could be used in combination to get file and syntax information for packages and their dependencies _irrespective_ of `GOOS`, `GOARCH` and build tags via the `UseAllFiles` [`go/build.Context`](https://godoc.org/go/build#Context) field. This use case is important for tools like [`govers`](https://github.com/rogpeppe/govers) and [`gogrep`](https://github.com/mvdan/gogrep). `go/packages` is driven by [`Config`](https://godoc.org/golang.org/x/tools/go/packages#Config) which includes `GOOS`, `GOARCH` and build tags and hence its response is specific to the provided config. `go/packages` cannot, therefore, be used in these use cases. This issue is a placeholder to continue discussion about how best to support loading of files, imports and syntax in a `GOOS`, `GOARCH` and build tags-agnostic way. cc @ianthehat @matloob @alandonovan @bcmills @rogpeppe @mvdan
modules,Tools
low
Critical
368,684,060
react
JAWS reads non-interactive elements as Clickable
**Do you want to request a *feature* or report a *bug*?** Bug **What is the current behavior?** 1. I create an app using [create-react-app](https://github.com/facebook/create-react-app) 2. I use JAWS Professional Edition Version 2018 (build 1710.42 ILM) and Internet Explorer 11 on Windows 7 3. I use arrow keys to navigate to paragraph "Edit src/App.js and save to reload." 4. Jaws announces "Edit src/App.js and save to reload. **clickable**" **What is the expected behavior?** This paragraph is non interactive, it should not be announced as clickable. **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** - JAWS Professional Edition Version 2018 (build 1710.42 ILM) - Internet Explorer 11 - Windows 7 - create-react-app 2.0.3 **suspected cause** Using Chrome Event Listener Breakpoints I observed there's a function called `trapClickOnNonInteractiveElement` which is the onclick handler for non interactive elements. This noop function is causing JAWS to think this is an interactive element **possible solution** In [trapClickOnNonInteractiveElement](https://github.com/facebook/react/blob/8a8d973d3cc5623676a84f87af66ef9259c3937c/packages/react-dom/src/client/ReactDOMComponent.js#L245) there is a comment which reads ``` // TODO: Only do this for the relevant Safaris maybe? ``` I think that this would fix this issue.
Type: Bug
low
Critical
368,686,297
pytorch
Provide better documentation for torch.Size
## ๐Ÿ“š Documentation <!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new --> Current documentation doesn't have an entry for torch.Size, in particular it is not clear how to convert torch.Tensor to torch.Size or the other way around.
module: docs,triaged
low
Major
368,722,612
pytorch
[CMake] Linking against Intel OpenMP
Checking the current CMAKE files, it seems like PyTorch will always use the compile OpenMP libraries due to `-fopenmp` in: https://github.com/pytorch/pytorch/blob/033e95765c19e208a0ac04376ae7cacb62940e9a/torch/CMakeLists.txt#L314-L321 MKL-DNN readme advises to not mix the OpenMP libraries, and to not use `-fopenmp` when using Intel-OpenMP as it would link against both intel and the compiler OpenMP: https://github.com/intel/mkl-dnn/blob/17c33b596670123f8cf74afbdcc722b7ec1e01a2/README.md#intel-mkl-dnn-with-openmp > Intel MKL-DNN library built with binary dependency will link against Intel OpenMP runtime included with Intel MKL small libraries package. Intel OpenMP runtime is binary compatible with GNU OpenMP and CLANG OpenMP runtimes and is recommended for the best performance results. Here are example linklines for GNU C++ compiler and Intel C++ compiler. > > `g++ -std=c++11 -I${MKLDNNROOT}/include -L${MKLDNNROOT}/lib simple_net.cpp -lmkldnn -lmklml_intel -liomp5` > `icpc -std=c++11 -qopenmp -I${MKLDNNROOT}/include -L${MKLDNNROOT}/lib simple_net.cpp -lmkldnn -lmklml_intel` > > Using GNU compiler with -fopenmp and -liomp5 options will link the application with both Intel and GNU OpenMP runtime libraries. This will lead to undefined behavior of the application. Also the MKL cmake seems to link against both gnu and intel threads? https://github.com/pytorch/pytorch/blob/033e95765c19e208a0ac04376ae7cacb62940e9a/cmake/Modules/FindMKL.cmake#L55-L66 cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh
module: build,triaged,module: mkldnn,module: openmp
low
Major
368,725,684
go
proposal: encoding/xml: update character ranges for names to fifth edition (2008) specification
Currently the validation of XML names is based on the original 1998 specification which defines a large set of codepoint ranges that are to be accepted. These ranges were widened and simplified in the [fifth edition of the spec](http://www.w3.org/TR/2008/REC-xml-20081126/), published in 2008 and now the current version. The name production rules are now: ``` NameStartChar ::= ":" | [A-Z] | "_" | [a-z] | [#xC0-#xD6] | [#xD8-#xF6] | [#xF8-#x2FF] | [#x370-#x37D] | [#x37F-#x1FFF] | [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF] | [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD] | [#x10000-#xEFFFF] NameChar ::= NameStartChar | "-" | "." | [0-9] | #xB7 | [#x0300-#x036F] | [#x203F-#x2040] Name ::= NameStartChar (NameChar)* Names ::= Name (#x20 Name)* Nmtoken ::= (NameChar)+ Nmtokens ::= Nmtoken (#x20 Nmtoken)* ``` This may also address the majority of the requirements for xml1.1 support (#25755) since the [changes between 1.0 and 1.1](https://www.w3.org/TR/xml11/#sec-xml11) were the expansion of the name character ranges, the addition of two line ending characters (U+0085, U+2028) and specification of additional normalisation rules The current ranges span 300 lines of code in the xml package so changing this will also contribute to #26775 If there is interest then I can submit a CL.
Proposal,FeatureRequest
low
Minor
368,740,286
flutter
ThemeData.copyWith() doesn't update dependent themes
Summary: Adding colors to ButtonThemeData (https://github.com/flutter/flutter/pull/22013) highlighted a general problem with `ThemeData.copyWith`: subordinate themes (like ThemeData.buttonTheme) aren't updated when a ThemeData property they depend on is changed. ThemeData.copyWith has simple semantics: it updates the specified fields, but not dependent fields. For example there are 10 subordinate themes, each with its own set of ThemeData dependencies: ``` colorScheme buttonTheme sliderTheme tabBarTheme chipTheme inputDecorationTheme pageTransitionsTheme textTheme primaryTextTheme accentTextTheme ``` None of the subordinate themes are changed if (say) a color they depend on is changed with ThemeData.copyWith(). For example: `myThemeData.copyWith(buttonColor: myButtonColor)` doesn't change `myThemeData.buttonTheme`. We could add a `ThemeData.apply` method, like `TextTheme.apply` but more complicated. The basic idea would be to update all of the subordinate themes that depend on ThemeData properties like primaryColor and accentColor. To implement ThemeData.apply, we'd have to implement colorScheme.apply, buttonTheme.apply etc. Documenting ThemeData.apply would be complicated itself because each subordinate theme has its own set of ThemeData dependencies. That said, in the applications I've looked at - usually - where developers use ThemeData.copyWith they're usually careful to rebuild the subordinate theme they're trying to effect. The ButtonTheme is an exception because it's new, and see https://github.com/flutter/flutter/issues/22711. For the moment I'm inclined to leave things as they are. Some improvements in the docs (including some examples) would help.
framework,f: material design,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-design,triaged-design
low
Major
368,766,655
go
gccgo: confusing error message when encountering invalid .. (dot dot) token
For ```Go $ cat y.go package p func f(x ..int) {} ``` current gccgo reports the errors: ``` $ gccgo -c y.go y.go:3:10: error: expected package 3 | func f(x ..int) {} | ^ y.go:3:8: error: invalid named/anonymous mix 3 | func f(x ..int) {} | ^ ``` I would have expected an error at the `..` as it is not a valid token. For the reference, cmd/compile reports for the same file: ``` $ go tool compile y.go y.go:3:11: syntax error: unexpected ., expecting name ``` which seems more sensible.
NeedsInvestigation
low
Critical
368,780,516
TypeScript
User-defined type guard method can be overridden with any non-type-guard return type
<!-- ๐Ÿšจ STOP ๐Ÿšจ ๐—ฆ๐—ง๐—ข๐—ฃ ๐Ÿšจ ๐‘บ๐‘ป๐‘ถ๐‘ท ๐Ÿšจ Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** master (07966dcb124ee379a4f062c1032fec0a6c4eaf19) <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** user-defined type guard predicate method override overridden return **Code** ```ts interface A { isB(): this is B; } interface B extends A { isB(): "this is nonsense"; } ``` **Expected behavior:** Error on `"this is nonsense"`. **Actual behavior:** No error. **Playground Link:** [link](https://www.typescriptlang.org/play/#src=interface%20A%20%7B%0D%0A%20%20%20%20isB()%3A%20this%20is%20B%3B%0D%0A%7D%0D%0Ainterface%20B%20%7B%0D%0A%20%20%20%20isB()%3A%20%22this%20is%20nonsense%22%3B%0D%0A%7D%0D%0A) **Related Issues:** none found
Bug
low
Critical
368,797,245
go
proposal: spec: make fewer types nillable
Apologies if this has been suggested before; I've been unable to find an issue. Currently go has many basic types where the zero value is `nil`. `nil` is a huge source of bugs, and a potential Go 2, where backwards-incompatible changes are on the table, offers an opportunity to make the language more robust by reducing its scope. Specifically, in some cases there is an obvious zero-value that makes more sense than nil. Off the top of my head: * The zero value for slices could be a slice with length and capacity of zero. * The zero value for maps could be an empty map. * The zero value for channels could be an unbuffered channel. This is more in keeping with the design principle of making the zero value useful. There isn't an obvious good zero value that I can think of for pointers and interfaces, but even this partial reduction would make it easier to build reliable software.
LanguageChange,Proposal,LanguageChangeReview
high
Critical
368,826,600
rust
Improve error message for reserved ambiguation
Compiling this code: ```rust trait Foo { type not_really_ambiguous_i_promise; } impl Foo for isize { type not_really_ambiguous_i_promise = usize; } fn main() { let x: isize::not_really_ambiguous_i_promise = 17; println!("{}", x); } ``` Produces this error: ``` error[E0223]: ambiguous associated type --> src/main.rs:10:12 | 10 | let x: isize::not_really_ambiguous_i_promise = 17; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ambiguous associated type | = note: specify the type using the syntax `<isize as Trait>::not_really_ambiguous_i_promise` ``` Presuming that this is because such code might become ambiguous in the future, we need a better error message.
C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut
low
Critical
368,828,118
TypeScript
Cannot override method in subclass when superclass instance type is a mapped type
**TypeScript Version:** 3.1.1 **Search Terms:** defines instance member property **Code** ```ts class A { foo(): void {}; bar: number; } class B extends (A as { new(): Pick<A, Exclude<keyof A, "foo">> & Pick<A, "foo"> }) { // (1) foo(): void { super.foo(); } baz: number; } ``` **Expected behavior:** Compiles successfully **Actual behavior:** Error at (1): `[ts] Class 'Pick<A, "bar"> & Pick<A, "foo">' defines instance member property 'foo', but extended class 'B' defines it as instance member function.` **Playground Link:** http://www.typescriptlang.org/play/index.html#src=class%20A%20%7B%0D%0A%20%20%20%20foo()%3A%20void%20%7B%7D%3B%0D%0A%20%20%20%20bar%3A%20number%3B%0D%0A%7D%0D%0A%0D%0Aclass%20B%20extends%20(A%20as%20%7B%20new()%3A%20Pick%3CA%2C%20Exclude%3Ckeyof%20A%2C%20%22foo%22%3E%3E%20%26%20Pick%3CA%2C%20%22foo%22%3E%20%7D)%20%7B%0D%0A%20%20%20%20foo()%3A%20void%20%7B%0D%0A%20%20%20%20%20%20%20%20super.foo()%3B%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20baz%3A%20number%3B%0D%0A%7D A more realistic example of this is when using https://github.com/bterlson/strict-event-emitter-types along with subclassing a [superclass with overridable methods](https://github.com/bterlson/strict-event-emitter-types#usage-with-subclasses): ```ts import StrictEventEmitter from "strict-event-emitter-types"; import * as inspector from "inspector"; interface SessionEvents { "Runtime.executionContextCreated": (message: inspector.InspectorNotification<Runtime.ExecutionContextCreatedEventDataType>) => void } type StrictSession = StrictEventEmitter<inspector.Session, SessionEvents>; class CustomSession extends (inspector.Session as { new (): StrictSession }) { connect() { // error here super.connect(); } } ```
Suggestion,In Discussion
medium
Critical
368,840,553
rust
".rlib: error adding symbols: File format not recognize" error on linux
I'm unable to compile a rust program that compiles fine on macos. ``` = note: "cc" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" "-L" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.15kq92zzbmxot4k9.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.16u6js6g0l3k1ic6.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.181cuta0v63atwcm.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.1c4sbqhvukbgthag.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.1im38lueib99jsk0.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.1kduva7sc7em934m.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.1mvmz58owquyropc.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.1vut2eft6nlujjxr.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.1y16o1qfye96o7m0.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.1zwd8n7bcl3vhvvh.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.23tqyymcb18u96mb.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.2imnh2vhxcqrizhm.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.2jqywn86b2gsqohu.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.2lyh15q6cjwzy18c.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.2qhkzqx5zqexj20y.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.3ayaeypdcro9d6yk.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.3py0d821mvt07s4n.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.3rngp6bm2u2q5z0y.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.3wta9ctgdrpkmlpr.rcgu.o" 
"/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.44bsbddupzfao2om.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.48721dc4k5qxei0u.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.49a7n47po4ttqjl7.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.49lx1q7cxvpykyv0.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4b8ptp1vn215jmoe.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4ezmh1vbs95c5ack.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4oc10dk278mpk1vy.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4vp4wqj2v29i7mgy.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4xq48u46a1pwiqn7.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4ybye971cqflgun6.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4yh8x2b62dcih00t.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.4ypvbwho0bu5tnww.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.56dly8q07ws8ucdq.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.57qy3vyd9bhiuaon.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.5gf6du7k58s78kob.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.8xzrsc1ux72v29j.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.98g0d9x8aw3akpe.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.9elsx31vb4it187.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.c6lbtaiefvx3wya.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.oa3rad818d8sgn4.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.spyjbt69vcsrx9q.rcgu.o" 
"/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.y08g5q2x813c4wx.rcgu.o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.z9ox7biyn1otfln.rcgu.o" "-o" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba" "/usr/src/myapp/target/debug/deps/create_snapshot-60efbce147cacbba.crate.allocator.rcgu.o" "-Wl,--gc-sections" "-pie" "-Wl,-zrelro" "-Wl,-znow" "-nodefaultlibs" "-L" "/usr/src/myapp/target/debug/deps" "-L" "/usr/src/myapp/target/debug/build/libfly-2ef616123afaf643/out" "-L" "/usr/src/myapp/libfly/third_party/v8/out.gn/x64.release/obj" "-L" "/usr/src/myapp/target/debug/build/libsqlite3-sys-7ac6aaa28420bed7/out" "-L" "/usr/src/myapp/target/debug/build/backtrace-sys-dc873d7131d9513f/out" "-L" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,-Bstatic" "/usr/src/myapp/target/debug/deps/liblibfly-973820a835f9f432.rlib" "/usr/src/myapp/target/debug/deps/liblibc-40d568469a8c2e87.rlib" "-Wl,--start-group" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-9318d1aa9575dbf9.rlib" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libpanic_unwind-e3cd3f44688b2d97.rlib" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc_jemalloc-2f4890fbea3bd5e0.rlib" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libunwind-a0ddde720c2c46c5.rlib" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc_system-9c41ffe739844496.rlib" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liblibc-af766b046896c123.rlib" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc-00b776688b98de66.rlib" 
"/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore-d05a4396ceff8bc8.rlib" "-Wl,--end-group" "/usr/local/rustup/toolchains/1.29.1-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcompiler_builtins-fe16a4dcdcd95bab.rlib" "-Wl,-Bdynamic" "-lstdc++" "-lutil" "-lutil" "-ldl" "-lrt" "-lpthread" "-lpthread" "-lgcc_s" "-lc" "-lm" "-lrt" "-lpthread" "-lutil" "-lutil" = note: /usr/src/myapp/target/debug/deps/liblibfly-973820a835f9f432.rlib: error adding symbols: File format not recognized collect2: error: ld returned 1 exit status ``` I tried this on debian stretch (using the official docker rust image) and ubuntu xenial (in a non-docker environment.) I tried both debug and release targets. if I `nm` on that file, I get a lot of these "File format not recognize" next to the lib filenames. I asked around on IRC and nobody could help. The project can be found here: https://github.com/superfly/fly.rs (very much a work in progress) It links to V8 statically, but I don't think that's the problem. I tried both pre-compiled V8 and compiling V8 in docker right before compiling the project with rust. What can cause this?
A-linkage,O-linux
low
Critical
368,882,249
kubernetes
Update default storageclasses to use delayed binding
<!-- This form is for bug reports and feature requests ONLY! If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). If the matter is security related, please disclose it privately via https://kubernetes.io/security/. --> **Is this a BUG REPORT or FEATURE REQUEST?**: @kubernetes/sig-storage-feature-requests **What happened**: Once topology goes GA, let's update the default storageclasses to set "WaitForFirstConsumer" volume binding mode. The addon is set to "EnsureExists", which means that if the object is already created, then we do not modify it. So users upgrading will not be affected. Only new clusters created will get the new default storageclass. This will need an "ACTION REQUIRED" release note that users need to update their workflow if they were dependent on the PVC being provisioned first before creating Pods.
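For illustration, the updated default StorageClass would presumably look like the following sketch (the `standard` name and GCE PD provisioner mirror the existing GCE default and are assumptions; the only substantive change is the `volumeBindingMode` field):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
# Delay provisioning until a Pod using the PVC is scheduled, so the
# volume is created in a topology (zone) compatible with that Pod.
volumeBindingMode: WaitForFirstConsumer
```

Because the addon manager mode is "EnsureExists", this manifest would only take effect on clusters where the object does not already exist.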
sig/storage,kind/feature,lifecycle/frozen
low
Critical
368,884,710
TypeScript
"An argument for 'param' was not provided." should use types to guess most likely missing parameter
**TypeScript Version:** 3.2.0-dev.20181009 **Code** ```ts function f(n: number, s: string) {} f(""); ``` **Expected behavior:** `An argument for 'n' was not provided.` **Actual behavior:** ``` src/a.ts:2:1 - error TS2554: Expected 2 arguments, but got 1. 2 f(""); ~~~~~ src/a.ts:1:23 1 function f(n: number, s: string) {} ~~~~~~~~~ An argument for 's' was not provided. ```
Suggestion,In Discussion,Domain: Error Messages
low
Critical
368,931,715
ant-design
Calendar Events View
- [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### What problem does this feature solve? 1. A notification/event can be displayed spanning multiple dates. 2. Notifications can be displayed as colored bands. ### What does the proposed API look like? **Like this:** ![image](https://user-images.githubusercontent.com/27573161/46777848-a801e700-cd43-11e8-94a8-5760f14fca8f.png) <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
๐Ÿ—ฃ Discussion,๐Ÿ’ก Feature Request,Inactive,IssueHuntFest
low
Major
368,981,191
go
proposal: encoding/json: add "readonly" tag
Previous discussion: #19423 Note: this proposal focuses on the encoding/json package, but the same could be applied to other encoding/\* packages, especially encoding/xml. Problem ------- It is currently hard to marshal json while preventing those fields being set to arbitrary values by outside input. The `-` tag prevents both unmarshalling and marshalling. This is a common requirement in for example REST APIs: type User struct { ID int `json:"id"` Name string `json:"name"` Email string `json:"email"` CreatedBy int `json:"createdBy"` CreatedAt time.Time `json:"createdAt"` DefaultUnmarshal bool `json:"-" } When showing the data to the user (e.g. `GET /user/1`) we want to marshal the `ID`, `CreatedBy`, and `CreatedAt` fields; but we *don't* want to allow users to set these fields on create or update endpoints (e.g. `PUT /user/1`). As far as I can think of, right now it's hard to write a generic fully correct solution for this: 1. It's possible to implement the `json.Unmarshaler` interface like so: func (u *User) UnmarshalJSON(data []byte) error { type Alias User var alias Alias err := json.Unmarshal(data, &alias) if err != nil { return err } if u.DefaultUnmarshal { *u = User(alias) return nil } // Unmodifiable fields alias.ID = u.ID alias.CreatedAt = u.CreatedAt alias.CreatedBy = u.CreatedBy *u = User(alias) return nil } While this works, it's verbose and difficult to generalize without reflection complexity. Every type needs to have its own `UnmarshalJSON()` (and/or `UnmarshalXML()`, etc.) A second problem is that in some cases you do want to unmarshal the readonly fields, for example from tests: func TestUser(t *testing.T) { body := callAPI() var u user u.DefaultUnmarshal = true json.Unmarshal(body, &u) if !expectedUser(u) { t.Error() } } Hence the `DefaultUnmarshal` parameter, which needs to be exported to allow setting them in other packages. 2. rsc proposed creating a custom `ReadOnlyString` with a no-op `UnmarshalJSON()` in #19423. 
this works well and is not unreasonable, but if you want to use e.g. `sql.NullString`; you would need to add a custom `ReadOnlyNullString` as well. Using it in combination with the [github.com/guregu/null](https://github.com/guregu/null) package means creating *two* extra types (`readOnlyNullString` and `readOnlyZeroString`). It also doesn't allow easy setting readonly fields from tests. 3. A third option is to unmarshal the JSON to `interface{}`, modify the data based on the struct tags, marshal back to JSON, and then unmarshal in to the type you want. I have a working implementation of this, and it works for the simple cases. But the "Unmarshal -> Marshal -> Unmarshal" dance seems needlessly complicated and inefficient, and dealing with arbitrary JSON is rather tricky/verbose in Go. 4. Another solution (which is probably the most common) is to simply use different structs for different purposes. I am not hugely in favour of this, as it leads to a lot of duplication. Proposed solution: `readonly` struct tag ---------------------------------------- With the `readonly` tags added, the above struct would look like: type User struct { ID int `json:"id,readonly"` Name string `json:"name"` Email string `json:"email"` CreatedBy int `json:"createdBy,readonly"` CreatedAt time.Time `json:"createdAt,readonly"` } Regular `Unmarshal()` will not set any of the `readonly` fields; they are silently ignored: json.Unmarshal(data, &u) An option for `json.Decoder` can be added to allow setting the `readonly` fields, similar to `DisallowUnknownFields()`: d := json.NewDecoder(r) d.AllowReadonlyFields() d.Decode(&u) The `DisallowUnknownFields()` option can be used to error out when readonly fields are attempted to be set (although this could potentially also be a new option). --- In the previous discussion rsc mentioned that this feature would not "pull its weight" as it's not common enough of a use case. 
It's true that it's *less* common of a use case, but I don't believe it's terribly uncommon, either. Implementing this correctly by users is fairly complex, and adding this feature to the encoding/json package seems – unless I am mistaken – quite simple to the point of being almost trivial. It will of course increase maintenance burden in the future, but personally, I think it's a fair trade-off. Prototype implementation ------------------------ To test the feasibility and complexity of this change I wrote an implementation of this proposal, which seems to work well. I can make a CL with an expanded version of this if this proposal is received well. <details> ```go diff --git i/src/encoding/json/decode.go w/src/encoding/json/decode.go index fd2bf92dc2..e8e2cd7486 100644 --- i/src/encoding/json/decode.go +++ w/src/encoding/json/decode.go @@ -273,6 +273,7 @@ type decodeState struct { savedError error useNumber bool disallowUnknownFields bool + allowReadonlyFields bool } // readIndex returns the position of the last byte read. @@ -695,6 +696,7 @@ func (d *decodeState) object(v reflect.Value) error { // Figure out field corresponding to key.
var subv reflect.Value destring := false // whether the value is wrapped in a string to be decoded first + readOnly := false // ,readonly tag if v.Kind() == reflect.Map { elemType := t.Elem() @@ -719,6 +721,9 @@ func (d *decodeState) object(v reflect.Value) error { if f != nil { subv = v destring = f.quoted + if !d.allowReadonlyFields { + readOnly = f.readOnly + } for _, i := range f.index { if subv.Kind() == reflect.Ptr { if subv.IsNil() { @@ -757,7 +762,9 @@ func (d *decodeState) object(v reflect.Value) error { } d.scanWhile(scanSkipSpace) - if destring { + if readOnly { + _ = d.value(reflect.Value{}) + } else if destring { q, err := d.valueQuoted() if err != nil { return err diff --git i/src/encoding/json/decode_test.go w/src/encoding/json/decode_test.go index b84bbabfcd..74b5c7eccc 100644 --- i/src/encoding/json/decode_test.go +++ w/src/encoding/json/decode_test.go @@ -2239,3 +2239,46 @@ func TestUnmarshalPanic(t *testing.T) { Unmarshal([]byte("{}"), &unmarshalPanic{}) t.Fatalf("Unmarshal should have panicked") } + +func TestReadonly(t *testing.T) { + type nested struct { + RO string `json:"ro,readonly"` + RW string `json:"rw"` + } + + type foo struct { + RO string `json:"ro,readonly"` + RW string `json:"rw"` + Nested nested `json:"nested"` + } + + f := foo{"hello", "hello", nested{"hello", "hello"}} + data := `{"ro": "XXXXX", "rw": "XXXXX", "nested": {"ro": "XXXXX", "rw": "XXXXX"}}` + + t.Run("unmarshal", func(t *testing.T) { + want := foo{"hello", "XXXXX", nested{"hello", "XXXXX"}} + err := Unmarshal([]byte(data), &f) + if err != nil { + t.Fatal(err) + } + + if !reflect.DeepEqual(f, want) { + t.Errorf("\ngot: %#v\nwant: %#v", f, want) + } + }) + + t.Run("allowReadonlyFields", func(t *testing.T) { + want := foo{"XXXXX", "XXXXX", nested{"XXXXX", "XXXXX"}} + d := NewDecoder(strings.NewReader(data)) + d.AllowReadonlyFields() + err := d.Decode(&f) + + if err != nil { + t.Fatal(err) + } + + if !reflect.DeepEqual(f, want) { + t.Errorf("\ngot: %#v\nwant: %#v", f, 
want) + } + }) +} diff --git i/src/encoding/json/encode.go w/src/encoding/json/encode.go index f10124e67d..944be253eb 100644 --- i/src/encoding/json/encode.go +++ w/src/encoding/json/encode.go @@ -1040,6 +1040,7 @@ type field struct { typ reflect.Type omitEmpty bool quoted bool + readOnly bool encoder encoderFunc } @@ -1156,6 +1157,7 @@ func typeFields(t reflect.Type) []field { index: index, typ: ft, omitEmpty: opts.Contains("omitempty"), + readOnly: opts.Contains("readonly"), quoted: quoted, } field.nameBytes = []byte(field.name) diff --git i/src/encoding/json/stream.go w/src/encoding/json/stream.go index 7d5137fbc7..14463f6842 100644 --- i/src/encoding/json/stream.go +++ w/src/encoding/json/stream.go @@ -41,6 +41,8 @@ func (dec *Decoder) UseNumber() { dec.d.useNumber = true } // non-ignored, exported fields in the destination. func (dec *Decoder) DisallowUnknownFields() { dec.d.disallowUnknownFields = true } +func (dec *Decoder) AllowReadonlyFields() { dec.d.allowReadonlyFields = true } + // Decode reads the next JSON-encoded value from its // input and stores it in the value pointed to by v. // ``` </details>
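For reference, workaround 4 above (different structs for different purposes) looks roughly like this in practice — a hedged sketch with invented names (`userInput`, `applyInput`), showing how read-only fields survive untouched because the input struct simply omits them:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// User is the full representation that gets marshalled to clients.
type User struct {
	ID    int    `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// userInput duplicates only the writable fields.
type userInput struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// applyInput copies the writable fields onto an existing User,
// leaving ID (and any other server-controlled field) untouched.
func applyInput(u User, in userInput) User {
	u.Name = in.Name
	u.Email = in.Email
	return u
}

func main() {
	var in userInput
	// The client tries to set "id", but userInput has no such field,
	// so encoding/json silently ignores it.
	body := []byte(`{"id": 999, "name": "bob", "email": "[email protected]"}`)
	if err := json.Unmarshal(body, &in); err != nil {
		panic(err)
	}
	u := applyInput(User{ID: 1}, in)
	fmt.Println(u.ID, u.Name) // prints: 1 bob
}
```

This works, but it is exactly the duplication the proposal is trying to avoid: every type grows a shadow input type that must be kept in sync.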
Proposal,Proposal-Hold
medium
Critical
368,992,958
flutter
Flutter plugin Kotlin version mismatch
Allowing plugins to be written in Kotlin is cool, but a problem occurs when a plugin uses an older version of Kotlin than the current project. We'll get something like ``` * What went wrong: The Android Gradle plugin supports only Kotlin Gradle plugin version 1.2.51 and higher. Project 'plugin_with_old_kotlin' is using version 1.2.30. ``` Is there a fix for this, other than waiting for the plugin maintainer to update the plugin to the latest Kotlin version? If not, then all plugins written in Kotlin should be avoided in order to be safe for the future, and in that case it would be better if Flutter forced plugins to be Java-only. Being blocked until all Kotlin plugins are using the latest Kotlin version is very scary.
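A commonly suggested workaround (an assumption, not verified against this exact setup) is to pin every Kotlin artifact to one version from the host app's root `build.gradle`, using Gradle's dependency `resolutionStrategy`; the version string below is illustrative:

```groovy
// Root build.gradle of the host app: force all Kotlin Gradle/stdlib
// artifacts to a single version, overriding what each plugin declares.
subprojects {
    project.configurations.all {
        resolutionStrategy.eachDependency { details ->
            if (details.requested.group == 'org.jetbrains.kotlin') {
                details.useVersion '1.2.71'
            }
        }
    }
}
```

This keeps a stale plugin buildable, at the cost of compiling it against a Kotlin version its author never tested.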
platform-android,tool,P2,a: plugins,team-android,triaged-android
low
Major
369,041,619
rust
Goal: Accept partial initialization + use of records created via such
Spawned off of #21232 In the long-term, we want to accept code like this: ```rust struct S { x: u32, y: u32 } fn main() { let mut s: S; s.x = 10; s.y = 30; a_use_of(&s); another_use_of(s); } fn a_use_of(_s: &S) { /* ... */ } fn another_use_of(_s: S) { /* ... */ } ``` We probably *also* want to start accepting this too: ```rust struct S { x: u32, y: u32 } fn main() { let mut s: S; s.x = 10; a_use_of_field(s.x); } fn a_use_of_field(_x: u32) { /* ... */ } ``` (But that second example is more debatable. I don't think there's any debate about the first example.) ---- See #54986 for the short-term goal of rejecting all partial initializations until we can actually *use* the results you get from partial initialization.
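In the meantime, the form the compiler does accept is building the whole value in one struct literal. A minimal runnable sketch of that accepted pattern (the `sum` helper is invented for illustration):

```rust
struct S {
    x: u32,
    y: u32,
}

// Helper only to show the fully-initialized value is usable.
fn sum(s: &S) -> u32 {
    s.x + s.y
}

fn main() {
    // Accepted today: initialize every field in one struct literal,
    // instead of field-by-field assignment to an uninitialized `s`.
    let s = S { x: 10, y: 30 };
    println!("{}", sum(&s)); // prints 40
}
```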
P-low,T-lang,A-NLL,NLL-complete
medium
Major
369,046,080
create-react-app
Inconsistent "Relative imports outside of src/" restriction ?
### Is this a bug report? Maybe not ### Which terms did you search for in User Guide? This is related to my previous answer: https://github.com/facebook/create-react-app/issues/1333#issuecomment-428200810 ### Steps to Reproduce I have a project which imports another "private" package. I am using sass in both packages. The private package is passed to babel only (not node-sass). I have something like the following in the private module: ```js // src/component/module.js import '../styles/module.scss'; ``` ```scss // src/styles/module.scss .module { background: url('../images/module.jpg'); } ``` Then in the main project, if I do this: ```js // src/index.js import '@my/module/dist/component/module.js'; ``` everything works fine, but if I do "the same thing" from my sass file: ```scss // src/main.scss @import '@my/module/dist/styles/module.scss'; ``` I get the following error: ``` Module not found: You attempted to import ../images/module.jpg which falls outside of the project src/ directory. Relative imports outside of src/ are not supported. ``` ### Expected Behavior Both imports should fail or succeed? To be able to split my project, I would like to be able to import other packages' components and sass files. ### Actual Behavior Importing sass from sass fails, but importing sass from js works.
issue: needs investigation
medium
Critical
369,047,341
TypeScript
Stack overflow in long concatenating string
**TypeScript Version:** 3.2.0-dev.20181011 **Search Terms:** stack overflow, long string **Code** The following link has a complete failing example: https://gist.github.com/ackava/673d9fc93cb4b48c4a1b3b4aff807c2f#file-akashdataurl-stack-overflow-ts This is a data URL generated from an image, whose base64 representation is written as concatenated strings. ```ts // A *self-contained* demonstration of the problem follows... // Test this by running `tsc` on the command-line, rather than through another build tool such as Gulp, Webpack, etc. const base64 = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + .... 800 more lines with exact same length "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; ``` **Expected behavior:** It should compile, as it is valid JavaScript. **Actual behavior:** It ends up in a stack overflow. 
``` PS D:\Temp\ts-bug> npm install -s typescript@next + [email protected] added 1 package in 3.028s PS D:\Temp\ts-bug> .\node_modules\.bin\tsc D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:69654 throw e; ^ RangeError: Maximum call stack size exceeded at needsIndentation (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:67933:34) at emitBinaryExpression (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:66629:40) at pipelineEmitWithHint (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:66043:32) at emitNodeWithSourceMap (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:64787:24) at pipelineEmitWithSourceMap (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:65746:58) at emitNodeWithNestedComments (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:65097:17) at emitNodeWithSynthesizedComments (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:65047:13) at emitNodeWithComments (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:65020:21) at pipelineEmitWithComments (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:65738:13) at emitNodeWithNotification (D:\Temp\ts-bug\node_modules\typescript\lib\tsc.js:64500:21) PS D:\Temp\ts-bug> ``` I did find an alternative, this one works correctly where strings are represented as an array and then I join in the end. Following file works correctly. https://gist.github.com/ackava/673d9fc93cb4b48c4a1b3b4aff807c2f#file-akashdataurl-working-array-ts **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> **Related Issues:** <!-- Did you find other bugs that looked similar? -->
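The array-plus-`join` alternative mentioned above works because an array literal is a flat AST node, whereas a long `+` chain is a deeply nested tree of binary expressions that the emitter recurses into. A tiny sketch of the working pattern (shortened to three chunks):

```javascript
// Flat array literal instead of a deep chain of `+` nodes;
// join once at runtime to rebuild the full string.
const chunks = [
  "aaaaaaaaaaaaaaaa",
  "aaaaaaaaaaaaaaaa",
  "aaaaaaaaaaaaaaaa",
];
const base64 = chunks.join("");
console.log(base64.length); // prints 48
```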
Bug,Crash
low
Critical
369,079,766
TypeScript
Type inference/narrowing lost after assignment
**TypeScript Version:** 3.1 **Search Terms:** type inference, type guard, narrowing, lost, assignment **Code** ```ts let a: unknown = 'x'; if (typeof a === 'string') { // a inferred as `string` a = a.substr(0, 5); // (method) String.substr(from: number, length?: number): string // a inferred as `unknown` a.length; // Failure: Object is of type 'unknown'. } ``` **Expected behavior:** This should compile without an error. **Actual behavior:** Line 7 fails with: `Object is of type 'unknown'.` **Playground Link:** https://www.typescriptlang.org/play/index.html#src=let%20a%3A%20unknown%20%3D%20'x'%3B%0D%0A%0D%0Aif%20(typeof%20a%20%3D%3D%3D%20'string')%20%7B%0D%0A%20%20%2F%2F%20a%20inferred%20as%20%60string%60%0D%0A%20%20a%20%3D%20a.substr(0%2C%205)%3B%0D%0A%20%20%2F%2F%20a%20inferred%20as%20%60unknown%60%0D%0A%20%20a.length%3B%0D%0A%7D%0D%0A **Related Issues:** #18840, #19955, #26673
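A common workaround (an illustration with an invented helper name, not from the report) is to bind the narrowed result to a fresh `const`, whose declared type is the narrowed `string` rather than `unknown`, so the narrowing survives the assignment:

```typescript
function firstFive(a: unknown): string | undefined {
  if (typeof a === 'string') {
    // `s` is declared from the narrowed expression, so it stays `string`,
    // whereas reassigning `a` itself widens it back to `unknown`.
    const s = a.substr(0, 5);
    return s.length > 0 ? s : undefined;
  }
  return undefined;
}

console.log(firstFive('hello world')); // prints "hello"
```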
Suggestion,In Discussion
medium
Critical
369,089,631
kubernetes
Log something about OOMKilled containers
**Is this a BUG REPORT or FEATURE REQUEST?**: /kind feature **What happened**: Container gets killed because it tries to use more memory than allowed. **What you expected to happen**: Have an `OOMKilled` **event** tied to the pod and logs about this /sig node
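For context, a minimal pod spec that reliably triggers such a kill (image name and numbers are illustrative assumptions): today the main surfaced signal after the fact is `Reason: OOMKilled` under `Last State` in `kubectl describe pod` output, with no event or logs explaining it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oom-demo            # hypothetical name
spec:
  containers:
  - name: stress
    image: polinux/stress   # assumption: any memory-hungry image works
    command: ["stress", "--vm", "1", "--vm-bytes", "100M"]
    resources:
      limits:
        memory: "50Mi"      # limit below what the process allocates
```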
sig/node,kind/feature
high
Critical
369,102,159
flutter
Add2app doesn't work with Android Studio 3.2 app projects
Repro using Mac and Flutter beta 0.9.4: 1. Follow Android steps in https://github.com/flutter/flutter/wiki/Add-Flutter-to-existing-apps using Android Studio 3.1.2 => App builds fine after adding in the Flutter module 1. Follow Android steps in https://github.com/flutter/flutter/wiki/Add-Flutter-to-existing-apps using Android Studio 3.2 => gradle fails: ``` org.gradle.api.ProjectConfigurationException: A problem occurred configuring root project 'AppAndroid32'. at org.gradle.configuration.project.LifecycleProjectEvaluator.addConfigurationFailure(LifecycleProjectEvaluator.java:94) at org.gradle.configuration.project.LifecycleProjectEvaluator.notifyAfterEvaluate(LifecycleProjectEvaluator.java:89) at org.gradle.configuration.project.LifecycleProjectEvaluator.doConfigure(LifecycleProjectEvaluator.java:70) at org.gradle.configuration.project.LifecycleProjectEvaluator.access$100(LifecycleProjectEvaluator.java:34) at org.gradle.configuration.project.LifecycleProjectEvaluator$ConfigureProject.run(LifecycleProjectEvaluator.java:110) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:50) at org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:667) at org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:136) at org.gradle.execution.TaskPathProjectEvaluator.configure(TaskPathProjectEvaluator.java:35) at 
org.gradle.execution.TaskPathProjectEvaluator.configureHierarchy(TaskPathProjectEvaluator.java:60) at org.gradle.configuration.DefaultBuildConfigurer.configure(DefaultBuildConfigurer.java:38) at org.gradle.initialization.DefaultGradleLauncher$ConfigureBuild.run(DefaultGradleLauncher.java:261) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.initialization.DefaultGradleLauncher.configureBuild(DefaultGradleLauncher.java:173) at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:132) at org.gradle.initialization.DefaultGradleLauncher.getConfiguredBuild(DefaultGradleLauncher.java:110) at org.gradle.internal.invocation.GradleBuildController$2.call(GradleBuildController.java:87) at org.gradle.internal.invocation.GradleBuildController$2.call(GradleBuildController.java:84) at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:152) at org.gradle.internal.invocation.GradleBuildController.doBuild(GradleBuildController.java:100) at org.gradle.internal.invocation.GradleBuildController.configure(GradleBuildController.java:84) at org.gradle.tooling.internal.provider.runner.ClientProvidedBuildActionRunner.run(ClientProvidedBuildActionRunner.java:64) at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35) at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35) at org.gradle.tooling.internal.provider.ValidatingBuildActionRunner.run(ValidatingBuildActionRunner.java:32) at 
org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner$1.run(RunAsBuildOperationBuildActionRunner.java:43) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner.run(RunAsBuildOperationBuildActionRunner.java:40) at org.gradle.tooling.internal.provider.SubscribableBuildActionRunner.run(SubscribableBuildActionRunner.java:51) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:49) at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:32) at org.gradle.launcher.exec.BuildTreeScopeBuildActionExecuter.execute(BuildTreeScopeBuildActionExecuter.java:39) at org.gradle.launcher.exec.BuildTreeScopeBuildActionExecuter.execute(BuildTreeScopeBuildActionExecuter.java:25) at org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:80) at org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:53) at org.gradle.tooling.internal.provider.ServicesSetupBuildActionExecuter.execute(ServicesSetupBuildActionExecuter.java:57) at org.gradle.tooling.internal.provider.ServicesSetupBuildActionExecuter.execute(ServicesSetupBuildActionExecuter.java:32) at org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:36) at org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:25) at 
org.gradle.tooling.internal.provider.ParallelismConfigurationBuildActionExecuter.execute(ParallelismConfigurationBuildActionExecuter.java:43) at org.gradle.tooling.internal.provider.ParallelismConfigurationBuildActionExecuter.execute(ParallelismConfigurationBuildActionExecuter.java:29) at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:64) at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:29) at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:59) at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:44) at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:45) at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:30) at org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:67) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:37) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:26) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:34) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at 
org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:74) at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:72) at org.gradle.util.Swapper.swap(Swapper.java:38) at org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:72) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.LogAndCheckHealth.execute(LogAndCheckHealth.java:55) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:62) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:82) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36) at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:50) at org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:295) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) at java.lang.Thread.run(Thread.java:745) Caused by: 
org.gradle.api.ProjectConfigurationException: A problem occurred configuring project ':flutter'. at org.gradle.configuration.project.LifecycleProjectEvaluator.addConfigurationFailure(LifecycleProjectEvaluator.java:94) at org.gradle.configuration.project.LifecycleProjectEvaluator.doConfigure(LifecycleProjectEvaluator.java:66) at org.gradle.configuration.project.LifecycleProjectEvaluator.access$100(LifecycleProjectEvaluator.java:34) at org.gradle.configuration.project.LifecycleProjectEvaluator$ConfigureProject.run(LifecycleProjectEvaluator.java:110) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:50) at org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:667) at org.gradle.api.internal.project.DefaultProject.evaluationDependsOn(DefaultProject.java:747) at org.gradle.api.internal.project.DefaultProject.evaluationDependsOn(DefaultProject.java:739) at org.gradle.api.Project$evaluationDependsOn$4.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) at include_flutter$_run_closure3$_closure4$_closure5.doCall(include_flutter.groovy:25) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at groovy.lang.Closure.call(Closure.java:414) at groovy.lang.Closure.call(Closure.java:430) at org.gradle.api.internal.ClosureBackedAction.execute(ClosureBackedAction.java:71) at org.gradle.util.ConfigureUtil.configureTarget(ConfigureUtil.java:160) at org.gradle.util.ConfigureUtil.configure(ConfigureUtil.java:106) at org.gradle.api.internal.project.BuildOperationCrossProjectConfigurator$3.run(BuildOperationCrossProjectConfigurator.java:100) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.api.internal.project.BuildOperationCrossProjectConfigurator.runProjectConfigureClosure(BuildOperationCrossProjectConfigurator.java:96) at org.gradle.api.internal.project.BuildOperationCrossProjectConfigurator.access$400(BuildOperationCrossProjectConfigurator.java:31) at org.gradle.api.internal.project.BuildOperationCrossProjectConfigurator$1.doRunProjectConfigure(BuildOperationCrossProjectConfigurator.java:81) at org.gradle.api.internal.project.BuildOperationCrossProjectConfigurator$BlockConfigureBuildOperation.run(BuildOperationCrossProjectConfigurator.java:144) at 
org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.api.internal.project.BuildOperationCrossProjectConfigurator.runBlockConfigureClosure(BuildOperationCrossProjectConfigurator.java:78) at org.gradle.api.internal.project.BuildOperationCrossProjectConfigurator.subprojects(BuildOperationCrossProjectConfigurator.java:53) at org.gradle.api.internal.project.DefaultProject.subprojects(DefaultProject.java:1119) at org.gradle.api.Project$subprojects$3.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) at include_flutter$_run_closure3$_closure4.doCall(include_flutter.groovy:23) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022) at groovy.lang.Closure.call(Closure.java:414) at 
org.gradle.listener.ClosureBackedMethodInvocationDispatch.dispatch(ClosureBackedMethodInvocationDispatch.java:40) at org.gradle.listener.ClosureBackedMethodInvocationDispatch.dispatch(ClosureBackedMethodInvocationDispatch.java:25) at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:42) at org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:230) at org.gradle.internal.event.BroadcastDispatch$SingletonDispatch.dispatch(BroadcastDispatch.java:149) at org.gradle.internal.event.AbstractBroadcastDispatch.dispatch(AbstractBroadcastDispatch.java:58) at org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:324) at org.gradle.internal.event.BroadcastDispatch$CompositeDispatch.dispatch(BroadcastDispatch.java:234) at org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:140) at org.gradle.internal.event.ListenerBroadcast.dispatch(ListenerBroadcast.java:37) at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy30.afterEvaluate(Unknown Source) at org.gradle.configuration.project.LifecycleProjectEvaluator.notifyAfterEvaluate(LifecycleProjectEvaluator.java:76) ... 85 more Caused by: org.gradle.api.GradleScriptException: A problem occurred evaluating project ':flutter'. 
at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:92) at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl$2.run(DefaultScriptPluginFactory.java:204) at org.gradle.configuration.ProjectScriptTarget.addConfiguration(ProjectScriptTarget.java:77) at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.apply(DefaultScriptPluginFactory.java:209) at org.gradle.configuration.BuildOperationScriptPlugin$1.run(BuildOperationScriptPlugin.java:61) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.configuration.BuildOperationScriptPlugin.apply(BuildOperationScriptPlugin.java:58) at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:41) at org.gradle.configuration.project.BuildScriptProcessor.execute(BuildScriptProcessor.java:26) at org.gradle.configuration.project.ConfigureActionsProjectEvaluator.evaluate(ConfigureActionsProjectEvaluator.java:34) at org.gradle.configuration.project.LifecycleProjectEvaluator.doConfigure(LifecycleProjectEvaluator.java:64) ... 156 more Caused by: org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$ArtifactResolveException: Could not resolve all artifacts for configuration 'classpath'. 
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.rethrowFailure(DefaultConfiguration.java:944) at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.access$1600(DefaultConfiguration.java:120) at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationArtifactCollection.ensureResolved(DefaultConfiguration.java:1285) at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationArtifactCollection.getArtifacts(DefaultConfiguration.java:1257) at org.gradle.composite.internal.CompositeBuildClassPathInitializer.execute(CompositeBuildClassPathInitializer.java:42) at org.gradle.composite.internal.CompositeBuildClassPathInitializer.execute(CompositeBuildClassPathInitializer.java:29) at org.gradle.api.internal.initialization.DefaultScriptClassPathResolver.resolveClassPath(DefaultScriptClassPathResolver.java:37) at org.gradle.api.internal.initialization.DefaultScriptHandler.getScriptClassPath(DefaultScriptHandler.java:72) at org.gradle.plugin.use.internal.DefaultPluginRequestApplicator.defineScriptHandlerClassScope(DefaultPluginRequestApplicator.java:204) at org.gradle.plugin.use.internal.DefaultPluginRequestApplicator.applyPlugins(DefaultPluginRequestApplicator.java:82) at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.apply(DefaultScriptPluginFactory.java:184) at org.gradle.configuration.BuildOperationScriptPlugin$1.run(BuildOperationScriptPlugin.java:61) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at 
org.gradle.configuration.BuildOperationScriptPlugin.apply(BuildOperationScriptPlugin.java:58) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction.applyScript(DefaultObjectConfigurationAction.java:109) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction.access$000(DefaultObjectConfigurationAction.java:38) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction$1.run(DefaultObjectConfigurationAction.java:68) at org.gradle.api.internal.plugins.DefaultObjectConfigurationAction.execute(DefaultObjectConfigurationAction.java:143) at org.gradle.api.internal.project.AbstractPluginAware.apply(AbstractPluginAware.java:46) at org.gradle.api.internal.project.ProjectScript.apply(ProjectScript.java:34) at org.gradle.api.Script$apply.callCurrent(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:52) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:154) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:166) at build_d9s0tegexfy6eatd5rxtnduxm.run(/Users/mit/tmp/module/.android/Flutter/build.gradle:27) at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:90) ... 
169 more Caused by: org.gradle.internal.resolve.ArtifactResolveException: Could not download httpclient.jar (org.apache.httpcomponents:httpclient:4.2.6): No cached version available for offline mode at org.gradle.api.internal.artifacts.ivyservice.ivyresolve.StartParameterResolutionOverride$FailedRemoteAccess.resolveArtifact(StartParameterResolutionOverride.java:144) at org.gradle.api.internal.artifacts.ivyservice.ivyresolve.CachingModuleComponentRepository$ResolveAndCacheRepositoryAccess.resolveArtifact(CachingModuleComponentRepository.java:427) at org.gradle.api.internal.artifacts.ivyservice.ivyresolve.ErrorHandlingModuleComponentRepository$ErrorHandlingModuleComponentRepositoryAccess.resolveArtifact(ErrorHandlingModuleComponentRepository.java:181) at org.gradle.api.internal.artifacts.ivyservice.ivyresolve.RepositoryChainArtifactResolver.resolveArtifact(RepositoryChainArtifactResolver.java:80) at org.gradle.api.internal.artifacts.ivyservice.resolveengine.artifact.DefaultArtifactSet$LazyArtifactSource.create(DefaultArtifactSet.java:170) at org.gradle.api.internal.artifacts.ivyservice.resolveengine.artifact.DefaultArtifactSet$LazyArtifactSource.create(DefaultArtifactSet.java:157) at org.gradle.api.internal.artifacts.DefaultResolvedArtifact.getFile(DefaultResolvedArtifact.java:135) at org.gradle.api.internal.artifacts.ivyservice.resolveengine.artifact.ArtifactBackedResolvedVariant$DownloadArtifactFile.run(ArtifactBackedResolvedVariant.java:148) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.access$900(DefaultBuildOperationExecutor.java:63) at 
org.gradle.internal.progress.DefaultBuildOperationExecutor$ParentPreservingQueueWorker.execute(DefaultBuildOperationExecutor.java:378) at org.gradle.internal.operations.DefaultBuildOperationQueue$WorkerRunnable.runOperation(DefaultBuildOperationQueue.java:230) at org.gradle.internal.operations.DefaultBuildOperationQueue$WorkerRunnable.access$600(DefaultBuildOperationQueue.java:172) at org.gradle.internal.operations.DefaultBuildOperationQueue$WorkerRunnable$1.call(DefaultBuildOperationQueue.java:209) at org.gradle.internal.operations.DefaultBuildOperationQueue$WorkerRunnable$1.call(DefaultBuildOperationQueue.java:203) at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:152) at org.gradle.internal.operations.DefaultBuildOperationQueue$WorkerRunnable.runBatch(DefaultBuildOperationQueue.java:202) at org.gradle.internal.operations.DefaultBuildOperationQueue$WorkerRunnable.run(DefaultBuildOperationQueue.java:177) ... 6 more ```
platform-android,tool,t: gradle,a: existing-apps,P2,team-android,triaged-android
low
Critical
369,120,056
TypeScript
Nested lookup types are not inferred correctly
While trying to scope certain keys that would point to different types, I stumbled upon a problem when I needed to nest lookup types to look up which `type` a `key` corresponds to. During declaration, I could not get the compiler to understand this. Applying the type does, however, yield the correct type inference. A simple example can be observed below.

**TypeScript Version:** 3.2.0-dev.201xxxxx — tested with both 3.1.1 and typescript@next

**Search Terms:** lookup types, type inference, keyof

**Code**

```
interface Y {
  x: {
    x1: "x1";
    x2: "x2";
  }
}

interface Z {
  x1: number;
  x2: string;
  x3: number;
}

type H<K extends keyof Y, J extends keyof Y[K]> = (k: K, j: J) => Z[Y[K][J]]; // <----- Errors incorrectly?!

type T6 = H<"x", "x1">;
type T7 = H<"x", "x2">;
type T8 = H<"x", "x3">; // Errors correctly!

// @ts-ignore
type L<K extends keyof Y, J extends keyof Y[K]> = (k: K, j: J) => Z[Y[K][J]];

type T3 = L<"x", "x1">;
type T4 = L<"x", "x2">;
type T5 = L<"x", "x3">; // Errors correctly!
```

**Expected behavior:** The `Z[Y[K][J]]` in the example should not produce an error. Applying the type seems to work correctly.

**Actual behavior:** Declaration of the type shows an error when it should not.
**Playground Link:** https://www.typescriptlang.org/play/#src=interface%20Y%20%7B%0D%0A%20%20x%3A%20%7B%0D%0A%20%20%20%20x1%3A%20%22x1%22%3B%0D%0A%20%20%20%20x2%3A%20%22x2%22%3B%0D%0A%20%20%7D%0D%0A%7D%0D%0A%0D%0Ainterface%20Z%20%7B%0D%0A%20%20x1%3A%20number%3B%0D%0A%20%20x2%3A%20string%3B%0D%0A%20%20x3%3A%20number%3B%0D%0A%7D%0D%0A%0D%0A%0D%0Atype%20H%3CK%20extends%20keyof%20Y%2C%20J%20extends%20keyof%20Y%5BK%5D%3E%20%3D%20(k%3A%20K%2C%20j%3A%20J)%20%3D%3E%20Z%5BY%5BK%5D%5BJ%5D%5D%3B%0D%0A%0D%0Atype%20T6%20%3D%20H%3C%22x%22%2C%20%22x1%22%3E%3B%0D%0Atype%20T7%20%3D%20H%3C%22x%22%2C%20%22x2%22%3E%3B%0D%0Atype%20T8%20%3D%20H%3C%22x%22%2C%20%22x3%22%3E%3B%0D%0A%0D%0A%2F%2F%20%40ts-ignore%0D%0Atype%20L%3CK%20extends%20keyof%20Y%2C%20J%20extends%20keyof%20Y%5BK%5D%3E%20%3D%20(k%3A%20K%2C%20j%3A%20J)%20%3D%3E%20Z%5BY%5BK%5D%5BJ%5D%5D%3B%0D%0A%0D%0Atype%20T3%20%3D%20L%3C%22x%22%2C%20%22x1%22%3E%3B%0D%0Atype%20T4%20%3D%20L%3C%22x%22%2C%20%22x2%22%3E%3B%0D%0Atype%20T5%20%3D%20L%3C%22x%22%2C%20%22x3%22%3E%3B%0D%0A
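Until this is addressed, one possible workaround (my own sketch, not from the issue) is to intersect the nested lookup with `keyof Z`, which lets the compiler prove the index is valid so the declaration no longer errors:

```typescript
// Sketch of a possible workaround: intersecting the nested lookup
// Y[K][J] with `keyof Z` makes the index provably valid for Z.
interface Y {
  x: {
    x1: "x1";
    x2: "x2";
  };
}

interface Z {
  x1: number;
  x2: string;
  x3: number;
}

type H<K extends keyof Y, J extends keyof Y[K]> =
  (k: K, j: J) => Z[Y[K][J] & keyof Z];

// Z["x1"] is number, so an implementation returning 42 type-checks.
const h: H<"x", "x1"> = (_k, _j) => 42;
console.log(h("x", "x1"));
```

Note that the intersection is a hedge, not a fix for the underlying inference issue: it silences the declaration-site error while preserving the resolved result type for concrete instantiations.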
Bug
low
Critical
369,124,660
flutter
Scrolling content is shown above BoxShadow instead of below it
For some reason, `BoxShadow` is not shown correctly when scrollable content needs to pass below it. I don't know how to explain this, so here is full example code to reproduce the behavior/bug, and two pictures:

```dart
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Bug demo',
      theme: new ThemeData(
        primaryColor: Colors.white,
      ),
      home: Home(),
    );
  }
}

class Home extends StatefulWidget {
  @override
  HomeState createState() => new HomeState();
}

class HomeState extends State<Home> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Colors.white,
      body: SafeArea(
        child: Container(
          child: Column(
            children: <Widget>[
              Row(
                children: <Widget>[
                  Expanded(
                    child: Container(
                      decoration: BoxDecoration(
                        color: Colors.white,
                        boxShadow: <BoxShadow>[
                          BoxShadow(
                            color: Colors.black26,
                            offset: Offset(0.0, 8.0),
                            blurRadius: 2.0,
                            spreadRadius: 0.0,
                          ),
                        ],
                      ),
                      padding: const EdgeInsets.only(top: 8.0, bottom: 8.0),
                      child: Container(),
                    ),
                  ),
                ],
              ),
              Expanded(
                child: Container(
                  child: SingleChildScrollView(
                    padding: EdgeInsets.all(20.0),
                    child: Container(
                      child: Wrap(
                        spacing: 0.0, // gap between adjacent chips
                        runSpacing: 12.0, // gap between lines
                        children: linesArray(),
                      ),
                    ),
                  ),
                ),
              ),
            ],
          ),
        ),
      ),
    );
  }

  List<Widget> linesArray() {
    List<Widget> lines = List<Widget>();
    for (var i = 0; i < 10; i++) {
      lines.add(
        Row(
          children: <Widget>[
            Expanded(
              child: Container(
                height: 100.0,
                color: Colors.yellow,
              ),
            ),
          ],
        ),
      );
    }
    return lines;
  }
}
```

EDIT: This is happening both on iOS and Android :)

| Header | Header |
|--------|--------|
| ![simulator screen shot - iphone x - 2018-10-11 at 10 32 22](https://user-images.githubusercontent.com/14978705/46807779-2ad57280-cd41-11e8-9ab2-8f3eee5fe310.png) | ![simulator screen shot - iphone x - 2018-10-11 at 10 32 25](https://user-images.githubusercontent.com/14978705/46807796-36289e00-cd41-11e8-93cd-9403e803cdb8.png) |
framework,a: fidelity,f: scrolling,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-framework,triaged-framework
low
Critical
369,124,784
go
x/crypto/sha3: add SHA3 assembly implementation for ARMv7
Currently, there's no assembly implementation of SHA3 hashing for ARM platforms (specifically ARMv7). On ARMv7+, vector assembly instructions (known as NEON) are available which greatly speed up SHA3 hashing. There is an upstream reference implementation (here: https://github.com/KeccakTeam/KeccakCodePackage/blob/master/lib/low/KeccakP-1600-times2/OptimizedAsmARM/KeccakP-1600-inplace-pl2-armv7a-neon-le-gcc.s) that implements SHA3 hashing using these vector instructions, so I have ported it to Go. Unfortunately, there is no support in the Go assembler/disassembler for ARMv7 vector instructions, so I wrote a small tool (available at https://github.com/anonymouse64/asm2go) which translates native ARM assembly code into Go's Plan 9-based unsupported-opcode assembly syntax in order to integrate the upstream implementation in Go. I see an approximately 3-4x speedup in SHA3 hashing on a reference Raspberry Pi 3 Model B Revision 1.2 board:

```
benchmark                        old ns/op     new ns/op     delta
BenchmarkPermutationFunction-4   19033         6054          -68.19%
BenchmarkSha3_512_MTU-4          388484        137001        -64.73%
BenchmarkSha3_384_MTU-4          279054        100177        -64.10%
BenchmarkSha3_256_MTU-4          224595        81443         -63.74%
BenchmarkSha3_224_MTU-4          210459        79196         -62.37%
BenchmarkShake128_MTU-4          181445        66482         -63.36%
BenchmarkShake256_MTU-4          199495        71572         -64.12%
BenchmarkShake256_16x-4          2704755       1094933       -59.52%
BenchmarkShake256_1MiB-4         151870003     53953383      -64.47%
BenchmarkSha3_512_1MiB-4         283838790     97048578      -65.81%

benchmark                        old MB/s     new MB/s     speedup
BenchmarkPermutationFunction-4   10.51        33.03        3.14x
BenchmarkSha3_512_MTU-4          3.48         9.85         2.83x
BenchmarkSha3_384_MTU-4          4.84         13.48        2.79x
BenchmarkSha3_256_MTU-4          6.01         16.58        2.76x
BenchmarkSha3_224_MTU-4          6.41         17.05        2.66x
BenchmarkShake128_MTU-4          7.44         20.31        2.73x
BenchmarkShake256_MTU-4          6.77         18.86        2.79x
BenchmarkShake256_16x-4          6.06         14.96        2.47x
BenchmarkShake256_1MiB-4         6.90         19.43        2.82x
```

I opened a CL providing this implementation here: https://go-review.googlesource.com/c/crypto/+/119255; however, I have not received any feedback on the CL, so I am opening this issue to hopefully get more visibility on it.
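As a sanity check on the benchmark table, the benchcmp-style `delta` and `speedup` columns can be recomputed from the raw numbers (a quick sketch of mine, not part of the CL):

```python
# Recompute benchcmp's derived columns from the first benchmark row above.
def delta_percent(old_ns, new_ns):
    """Relative change in ns/op, as benchcmp's delta column reports it."""
    return (new_ns - old_ns) / old_ns * 100

def speedup(old_mbs, new_mbs):
    """Throughput ratio, as benchcmp's MB/s speedup column reports it."""
    return new_mbs / old_mbs

# BenchmarkPermutationFunction-4: 19033 -> 6054 ns/op, 10.51 -> 33.03 MB/s
print(f"{delta_percent(19033, 6054):.2f}%")  # -68.19%
print(f"{speedup(10.51, 33.03):.2f}x")       # 3.14x
```

The two columns agree: a -68% drop in ns/op corresponds to roughly a 3.1x throughput improvement.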
Performance,NeedsInvestigation
medium
Major
369,136,687
neovim
pty: add options to control input buffering and echo
```vim
function! F(...) dict
  echom string(a:000).', '.string(self)
  echom string(s:opts)
endfunction

let opts = {}
let opts.stdout_buffered = 1
let opts.stderr_buffered = 1
let opts.on_exit = function('F')
" let opts.on_stdout = function('F')
" let opts.on_stderr = function('F')
let opts.pty = 1
let s:opts = opts

let job = jobstart(['cat', '/dev/stdin'], opts)
" let job = jobstart(['echo', '/dev/stdin'], s:opts)
call jobsend(job, ['foo'])
call jobclose(job, 'stdin')
```

Running this with `nvim -u t-dev-stdin-buffered.vim` causes the job to never be closed/stopped. The log says:

> INFO 2018-10-11T15:56:16.756 28066 channel_create_event:208: new channel 3 (…/t-dev-stdin-buffered.vim:16) : {"id": 3, "mode": "bytes", "stream": "job", "pty": "/dev/pts/25"}
> INFO 2018-10-11T15:56:16.824 28066 main:590: starting main loop

Calling `jobstop(job)` then says:

> INFO 2018-10-11T15:56:57.007 28066 on_process_exit:385: exited: pid=28067 status=1 stoptime=0

NVIM v0.3.2-671-g384770556
enhancement,terminal,channels-rpc,system
medium
Major
369,140,995
godot
ScrollContainer scroll_started and scroll_ended signals aren't emitted
Bug?

**Godot version:** 3.0.6 stable

**OS/device including version:** Windows 7 Pro / Linux Mint 19

**Issue description:** The signals `scroll_started()` and `scroll_ended()` are never fired.

**Steps to reproduce:**
- Create a scene with a ScrollContainer.
- Create a Panel inside the ScrollContainer (min_size > ScrollContainer).
- Link the signals (printing out to the console).

-> When scrolling with mouse or touch, the signals are never received.
bug,confirmed,topic:gui
medium
Critical
369,201,103
go
cmd/go: list prints leading underscore when given testdata directories
### What version of Go are you using (`go version`)?

go version devel +c96c2a39bb Thu Oct 11 04:45:18 2018 +0000 linux/amd64

### Does this issue reproduce with the latest release?

Yes - checked on 1.11.1.

### What operating system and processor architecture are you using (`go env`)?

```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/mvdan/go/cache"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/mvdan/go/land:/home/mvdan/go"
GOPROXY=""
GORACE=""
GOROOT="/home/mvdan/tip"
GOTMPDIR=""
GOTOOLDIR="/home/mvdan/tip/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build430264220=/tmp/go-build -gno-record-gcc-switches"
```

### What did you do?

Run `go list ./testdata` both in GOPATH and module mode. Copy and paste this into a shell:

```
GOPATH=$(mktemp -d)
mkdir -p $GOPATH/src/foo.com/bar
cd $GOPATH/src/foo.com/bar
mkdir testdata
echo 'package foo' >testdata/foo.go
go list ./testdata
export GO111MODULE=on
go mod init foo.com/bar
go list ./testdata
```

### What did you expect to see?

The same result in both cases.

### What did you see instead?

```
$ GOPATH=$(mktemp -d)
$ mkdir -p $GOPATH/src/foo.com/bar
$ cd $GOPATH/src/foo.com/bar
$ mkdir testdata
$ echo 'package foo' >testdata/foo.go
$ go list ./testdata
_/tmp/tmp.rr2RmmLENC/src/foo.com/bar/testdata
$ export GO111MODULE=on
$ go mod init foo.com/bar
go: creating new go.mod: module foo.com/bar
$ go list ./testdata
foo.com/bar/testdata
```

Note that this only happens with testdata directories containing Go files. It doesn't happen with other directories that shouldn't be importable packages, like `_foo`. First, I'm confused by the weird `_/$GOPATH/src/$IMPORT_PATH` string, when I was expecting just the import path.
I couldn't find any piece of documentation about this format. Second, I'm even more confused as to why this behaves differently depending on whether we're in GOPATH or module-aware mode. This bug report is the result of digging a bit after the tests on a linter of mine were failing on 1.10. This was because `go/types.Type.String()` was returning `mvdan.cc/unparam/check/testdata.FooType` on Go 1.11 with `GO111MODULE=on`, but `_/home/travis/gopath/src/mvdan.cc/unparam/check/testdata.FooType` on Go 1.10 with `GO111MODULE=on` (since the env var is ignored there). /cc @bcmills (and thanks to @rogpeppe and @myitcv, who helped me investigate)
NeedsInvestigation,GoCommand
low
Critical
369,215,361
flutter
Is the logical pixel size 150 ppi instead of 96 ppi?
On https://docs.flutter.io/flutter/dart-ui/Window/devicePixelRatio.html it is mentioned that there are typically 96 logical pixels per inch. But in practice I see about 150 logical pixels per inch. For example, on a Galaxy Tab A, `MediaQuery` returns 1280x800 (physically it is 1920x1200; the `devicePixelRatio` is reported correctly at 1.5). Its longest side is about 8.5" with 1280 logical pixels. Also, drawing fixed-size boxes shows the same pixel ratios. The Nexus 10 tablet has similar logical figures (its `devicePixelRatio` is 2.0, which is correct). Is the documentation referred to above wrong?
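The arithmetic behind the ~150 figure, as a back-of-the-envelope check using the Galaxy Tab A numbers quoted above (the 8.5" side length is my approximate measurement):

```python
# Logical pixels per inch for the Galaxy Tab A figures in this report:
# 1280 logical pixels along a side measured at roughly 8.5 inches.
logical_px = 1280
physical_px = 1920
side_in = 8.5  # approximate length of the longest side

device_pixel_ratio = physical_px / logical_px
logical_ppi = logical_px / side_in

print(device_pixel_ratio)   # 1.5, matching the reported devicePixelRatio
print(round(logical_ppi))   # ~151, i.e. about 150 rather than 96
```

So the observed density is roughly 150 logical px/inch, not the 96 the documentation suggests.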
framework,engine,d: api docs,P2,team-engine,triaged-engine
low
Major
369,236,853
vscode
`extensions/extension-editing/src/extensionLinter.ts` does not respect `repository` shorthand in package.json
VS Code warns about non-HTTPS relative image links in markdown files, with this message:

> Relative image URLs require a repository with HTTPS protocol to be specified in the package.json.

If I update package.json to include a repository with an HTTPS URL, this warning goes away:

```json
"repository": {
  "type": "git",
  "url": "https://github.com/user/repo"
}
```

However, there are [other ways](https://docs.npmjs.com/files/package.json#repository) to specify a repository URL. For example:

```json
"repository": "github:user/repo"
```

In this case, the "repository" link will point to https://github.com/user/repo, but the extension linter does not detect this case and still shows the warning.

- VSCode Version: 1.28.0 (user setup)
- OS Version: Windows 10

Steps to Reproduce:

1. Create a new GitHub project, and add a new VS Code extension to the repository.
2. In `package.json`, specify the `repository` as `"repository": "github:user/repo"`.
3. In `README.md`, reference an image by its relative path.
4. Open the Problems tab in VS Code.
   Expected: No problems are reported.
   Actual: `extension-linter` reports the warning `Relative image URLs require a repository with HTTPS protocol to be specified in the package.json.`
5. Change the `repository` property's value to:
   ```json
   "repository": {
     "type": "git",
     "url": "https://github.com/user/repo"
   }
   ```
   Expected: No problems are reported.
   Actual: Behaves as expected (no problems are reported).

Does this issue occur when all extensions are disabled?: **Yes**
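A rough sketch (my own, with a hypothetical helper name, covering only a subset of npm's shorthands) of how the linter could normalize the shorthand before checking the protocol:

```javascript
// Hypothetical helper showing how npm's "repository" shorthand could be
// expanded to an HTTPS URL before the linter checks for the https protocol.
// Handles the object form, the "host:user/repo" form, and the bare
// "user/repo" form (which npm treats as GitHub by default).
function expandRepository(repository) {
  if (repository && typeof repository === 'object') {
    return repository.url;
  }
  const hosts = {
    github: 'github.com',
    gitlab: 'gitlab.com',
    bitbucket: 'bitbucket.org',
  };
  const m = /^(?:(github|gitlab|bitbucket):)?([^/:]+)\/([^/:]+)$/.exec(repository);
  if (!m) {
    return repository; // full URL or unrecognized form: leave as-is
  }
  return `https://${hosts[m[1] || 'github']}/${m[2]}/${m[3]}`;
}

console.log(expandRepository('github:user/repo')); // https://github.com/user/repo
console.log(expandRepository('user/repo'));        // https://github.com/user/repo
```

With a normalization step like this, both the object form and the shorthand would pass the linter's HTTPS check.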
bug,extensions-development
low
Minor
369,237,064
vscode
[folding] show folding actions at the end of folding range as well
This is for C#, but it may affect other languages; I'm not sure. With a large, complex code block, folding it always requires scrolling to the top of the region to collapse it. There should be an identical folding icon that allows folding from the bottom of the region.
feature-request,editor-folding
medium
Major
369,240,753
go
x/crypto/acme/autocert: Expose Subject Alternative Names in manager.GetCertificate
### What version of Go are you using (`go version`)?

1.11.1

### Does this issue reproduce with the latest release?

Yes

### What operating system and processor architecture are you using (`go env`)?

darwin/amd64

autocert has a `certRequest` method which can take Subject Alternative Names as input [here](https://github.com/golang/crypto/blob/master/acme/autocert/autocert.go#L1027), but the Manager does not expose a way to pass these values through `GetCertificate` or otherwise. In theory, `manager.ExtraExtensions` could enable this, but it would require creating a new manager instance for every request, which doesn't seem right. Is there another way to use autocert to generate certificates with SANs?
NeedsInvestigation,FeatureRequest
low
Major
369,254,814
pytorch
[feature request] `ignore_label` argument in Caffe2 `SoftmaxWithLoss`
## 🚀 Feature

Similar to the PyTorch implementation of [crossentropyloss](https://pytorch.org/docs/stable/nn.html#crossentropyloss), it'd be nice to have an `ignore_index` or `ignore_label` argument in the Caffe2 [SoftmaxWithLoss](https://github.com/pytorch/pytorch/blob/a4120fa132849028d2c84dee684b3dda1ef4e8b2/caffe2/operators/softmax_with_loss_op.cc#L134) or [softmax_ops.cu](https://github.com/pytorch/pytorch/blob/7035975508ed053b5f1ac08b96ac6d6b2bbb954e/caffe2/operators/softmax_ops.cu#L13).

## Motivation

In practice, I often need to include additional loss terms (e.g. additional classification losses) that are applicable to some classes but not others. For example, say that I would like to predict gender for only the `person` class, but would like to ignore the gender classification for other classes. I could simply set the gender `ignore_label = -1` for classes where it doesn't apply. I'm currently finding this difficult to do in Caffe2.

## Additional context

I suppose the CUDA implementation in [softmax_ops.cu](https://github.com/pytorch/pytorch/blob/7035975508ed053b5f1ac08b96ac6d6b2bbb954e/caffe2/operators/softmax_ops.cu#L13) would look something like this:

```
#define DONTCARE (-1)

__global__ void LabelCrossEntropyKernel(
    const int N,
    const int D,
    const float* logPdata,
    const int* labeldata,
    const float* weights,
    float* Ydata) {
  CUDA_1D_KERNEL_LOOP(i, N) {
    const int label = static_cast<int>(labeldata[i]);
    if (label != DONTCARE) {
      CUDA_KERNEL_ASSERT(label >= 0 && label < D);
      float weight = weights ? weights[i] : 1.0;
      Ydata[i] = -logPdata[i * D + label] * weight;
    } else {
      Ydata[i] = 0;
    }
  }
}

__global__ void LabelCrossEntropyGradientKernel(
    const int N,
    const int D,
    const float* Pdata,
    const int* labeldata,
    float* dXdata) {
  CUDA_1D_KERNEL_LOOP(i, N) {
    const int label = static_cast<int>(labeldata[i]);
    if (label != DONTCARE) {
      int idx = i * D + label;
      dXdata[idx] = Pdata[idx] - 1.;
    } else {
      for (int j = 0; j < D; j++) {
        int idx = i * D + j;
        dXdata[idx] = 0.0;
      }
    }
  }
}
```
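The requested semantics can be illustrated with a tiny pure-Python sketch (my own illustration of the `ignore_label` behavior, mirroring PyTorch's `ignore_index`): ignored samples contribute neither to the loss sum nor to the averaging count.

```python
import math

IGNORE_LABEL = -1

def softmax_with_loss(probs, labels):
    """Average cross-entropy over samples whose label is not IGNORE_LABEL.

    `probs` holds per-sample class probabilities (already softmaxed);
    samples labeled IGNORE_LABEL contribute nothing to the sum or count.
    """
    total, count = 0.0, 0
    for p, label in zip(probs, labels):
        if label == IGNORE_LABEL:
            continue
        total += -math.log(p[label])
        count += 1
    return total / count if count else 0.0

probs = [[0.5, 0.5], [0.9, 0.1]]
labels = [0, IGNORE_LABEL]  # the second sample is ignored entirely
print(softmax_with_loss(probs, labels))  # -log(0.5) = 0.693...
```

The CUDA sketch above implements the same idea on the forward side (zero loss for ignored labels) and zeroes the corresponding gradient rows on the backward side.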
caffe2
low
Minor