Dataset columns:
id: int64 (393k to 2.82B)
repo: stringclasses (68 values)
title: stringlengths (1 to 936)
body: stringlengths (0 to 256k)
labels: stringlengths (2 to 508)
priority: stringclasses (3 values)
severity: stringclasses (3 values)
361,300,225
go
proposal: reflect/v2: remove Value.Bytes
**Background** As noted in https://github.com/golang/go/issues/24746#issuecomment-383720056, `(reflect.Value).Bytes` is the only `reflect.Value` method that returns mutable values without enforcing `CanSet` or `CanInterface`. That makes dangerous mutations through `[]byte` (and slices of defined types with `byte` as the underlying type) more difficult to detect, since they do not involve `unsafe` or `syscall` in the way that other dangerous mutations must. This is a proposal to close that mutability loophole and make the handling of `byte` slice types more consistent with other slices. **Proposal** This proposal consists of two parts: 1. Remove `Value.Bytes`, making it accessible only to Go 1 code. 2. Relax `reflect.Copy`, `reflect.Append`, and `reflect.AppendSlice` to allow the source argument (but not the destination) to be obtained through unexported fields, *provided that* the element type of the slice does not contain any pointers or interfaces. * Today, those functions panic if any argument was obtained through unexported fields. Users could replace a call to `Bytes` on an exported field of known element type with a call to `Interface` and a type assertion (https://play.golang.org/p/9LJIDW7ftZx). * If #19778 is accepted, they could do the same for unknown element types by adding a `Convert` call (https://play.golang.org/p/gMGApEvicM-). To examine slices obtained via unexported fields (and slices of defined types with `byte` as their underlying type), users could employ any of four strategies: * Copy the contents of the slice to a new slice of the same type (https://play.golang.org/p/R5gEasA35hS). * Copy the contents of the slice to an immutable string (https://play.golang.org/p/rrry8qv2Nxg). * Access the elements of the slice individually (https://play.golang.org/p/1xcsFOHwaJB). * Use `unsafe` to alias the contents of the slice to a slice variable (https://play.golang.org/p/LZTiDZz1FAl). 
Note that the latter three are already possible today: part (2) of this proposal only affects the clarity of the code, not what is possible to express. **Alternatives** We could keep the `Bytes` method, but make it panic if `CanInterface` returns false. * That would allow more existing code to continue to compile, but at the risk of introducing a run-time error. I prefer to make it a compile-time error in order to prompt the user to consider whether to introduce a run-time check, make a copy, or use an `unsafe` operation. We could go even further and remove `SetBytes`, since it is redundant with `Copy`. * That would make the treatment of `[]byte` more like every other slice type. * Since that method doesn't introduce any loopholes in the usual mutability rules, removing it does not seem worth the code churn. If we had read-only slices (#22876 and associated issues), we could make `Bytes` return a read-only slice. * In contrast, this proposal does not rely on any new language features.
v2,Proposal
low
Critical
361,314,446
flutter
Flutter's AppLifecycleState does not match Android/iOS's Activity/UIViewController lifecycle states
Internal: b/146299589 Hi, We want to be notified when a UIViewController/Activity lifecycle method is invoked, such as `viewWillAppear`, `viewDidAppear` in iOS, or `onPause()`, `onResume()` in Android, so that we are able to add some features there. For example, we have to record user tracking data on the Flutter side at exactly the time `viewWillAppear` is called. However, we found that Flutter reports the same state when different UIViewController/Activity lifecycle methods are invoked. For example, in iOS: - (void)viewWillAppear:(BOOL)animated - (void)viewWillDisappear:(BOOL)animated (source code in file FlutterViewController.mm) both send the same state `AppLifecycleState.inactive`, so we cannot determine which method was called. And on Android we found the same situation (in `FlutterView.java`): ```java public void onStart() { this.mFlutterLifecycleChannel.send("AppLifecycleState.inactive"); } public void onPause() { this.mFlutterLifecycleChannel.send("AppLifecycleState.inactive"); } ``` We cannot determine whether `onStart()` or `onPause()` was called when we receive the state `AppLifecycleState.inactive`. So could you please add more states to distinguish these methods?
c: new feature,platform-android,platform-ios,framework,engine,customer: alibaba,customer: money (g3),P3,team-engine,triaged-engine
low
Major
361,322,325
go
x/mobile: minimal support of Obj-C generics
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.9.2 darwin/amd64 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/tadovas/work/go/openvpnv3-go-bindings" GORACE="" GOROOT="/usr/local/Cellar/go/1.9.2/libexec" GOTOOLDIR="/usr/local/Cellar/go/1.9.2/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/_j/73ymsvw56v989xyfwn6q18_m0000gn/T/go-build533420179=/tmp/go-build -gno-record-gcc-switches -fno-common" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ### What did you do? Binding an ObjC method which accepts or returns generic types like `NSArray<NEPacket *>`. If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. ### What did you expect to see? successful bind ### What did you see instead? Method not found, which is expected. This issue is more like a feature request/discussion. Are there any plans to provide at least minimal support of generics in Obj-C? There are some very nice iOS APIs which expect NSArrays of concrete objects, and it's sad that we cannot bind them. I would like to try to create a POC myself, so maybe someone else has done some investigation or has some thoughts? As there is no generics concept in Go, the biggest problem I see here is how to distinguish, let's say, NSArray<NEPacket *> from NSArray<NSDate *>. The current import, which looks like "Objc/Foundation/NSArray", generates a Go interface NSArray with methods which are not typed (like Count), which is fine. 
So perhaps we can do something like "ObjC/Foundation/NEPacket_NSArray" to generate distinct Go interfaces with methods which accept or return NEPacket type parameters? I know it looks ugly, but I didn't find any other option for encoding the type name of the array into the interface other than as part of the interface name. Any help, ideas, or suggestions would be much appreciated.
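To make the naming idea above concrete, here is an entirely hypothetical sketch (not actual gomobile output) of what an import of "ObjC/Foundation/NEPacket_NSArray" might generate, with `NEPacket` stubbed as a plain struct and a toy slice-backed implementation standing in for the real binding:

```go
package main

import "fmt"

// NEPacket stands in for the bound Objective-C class (details elided).
type NEPacket struct{ ID int }

// NEPacketNSArray is the kind of typed interface that could be generated
// for NSArray<NEPacket *> if the element type is encoded into the
// interface name, as proposed. Hypothetical API, for illustration only.
type NEPacketNSArray interface {
	Count() int
	ObjectAtIndex(i int) *NEPacket
}

// slicePackets is a toy implementation backed by a Go slice.
type slicePackets []*NEPacket

func (s slicePackets) Count() int                    { return len(s) }
func (s slicePackets) ObjectAtIndex(i int) *NEPacket { return s[i] }

func main() {
	var a NEPacketNSArray = slicePackets{{ID: 1}, {ID: 2}}
	fmt.Println(a.Count(), a.ObjectAtIndex(0).ID)
}
```

The point of the sketch is that callers get typed `ObjectAtIndex` results without any generics in Go itself; the cost is one generated interface per element type.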
mobile
low
Critical
361,336,351
opencv
Redesign of the ElemType register for OpenCV v4
The current magic register is designed as: ``` FEDCBA987654321 --------------- ddd depth nnnnnnnnn channels - 1 r reserved (unset) c continuous flag s submatrix flag ``` However, it is slightly space-inefficient, so allow me to redesign it as: ``` FEDCBA987654321 --------------- ddddd depth nnnn channels base - 1 mmm channels exponent r reserved (unset) c continuous flag s submatrix flag ``` ### Depth 32 different depth types are allowed, and the order can be recovered again (given that user code will not be broken - e.g. by choosing new names and mapping old ones). Furthermore, we allow some discontinuity for future extensions ### Channels By utilizing `cn = (nnnn + 1) << mmm`, we can have the following channels: 1-16, 18, 20, 22, 24, 26, 28, 30, 32, 36, 40, 44, 48, 52, 56, 60, 64, 72, 80, 88, 96, 104, 112, 120, 128, 144, 160, 176, 192, 208, 224, 240, 256, 288, 320, 352, 384, 416, 448, 480, 512, 576, 640, 704, 768, 832, 896, 960, 1024, 1152, 1280, 1408, 1536, 1664, 1792, 1920, 2048 which I believe is a reasonable division. There are some channel counts that can be formed in multiple ways: - 2 ways: 2, 6, 10, 14, 20, 28, 40, 56, 80, 112, 160, 224, 320, 448, 640, 768, 896, 1024 - 3 ways: 4, 12, 24, 48, 96, 192, 384, 512 - 4 ways: 8, 256 - 5 ways: 16, 32, 64, 128 These are in some sense a waste of valuable encoding space, but since we want all of the first 1-16 channel counts to be available, it cannot be helped! As for computing the channel base and exponent from the number of channels (e.g. user input), it can be done efficiently using: ``` mmm = (0x45263107U >> ((((cn & -cn) * 0x17) >> 2) & 0x1C)) & 7 nnnn = (cn >> mmm) - 1 ``` where the channel count is deemed invalid if `mmm >= 8 || nnnn >= 16`. If the user chooses an unsupported number of channels, the next acceptable `cn` can easily be obtained by iteratively right-shifting the invalid `nnnn` and incrementing the corresponding `mmm` until a valid `nnnn` appears, which is then suggested to the user. 
Update: using the following is guaranteed to allocate at least `cn` channels without any need for iteration: ``` mmm = floor(log2((cn - 1) >> 3)) nnnn = ((cn - 1) >> mmm) ``` ### Backwards compatibility Guaranteed via fail-over! If the parser fails to interpret a sequence, it should try the legacy format before reporting an error.
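The updated formulas can be sketched as follows (a sketch under my reading of the proposal; clamping `mmm` to 0 for `cn <= 8`, where the log2 above is undefined, is my assumption):

```go
package main

import (
	"fmt"
	"math/bits"
)

// encode returns (nnnn, mmm) such that (nnnn+1)<<mmm is at least cn,
// using mmm = floor(log2((cn-1)>>3)) and nnnn = (cn-1)>>mmm.
// For cn <= 8 the log2 is undefined, so mmm is clamped to 0 (assumption).
// For cn > 2048 the result overflows mmm >= 8 and is unrepresentable.
func encode(cn uint32) (nnnn, mmm uint32) {
	if cn > 8 {
		// floor(log2((cn-1)>>3)) == bits.Len32(cn-1) - 4 when cn > 8.
		mmm = uint32(bits.Len32(cn-1)) - 4
	}
	nnnn = (cn - 1) >> mmm
	return
}

// decode recovers the channel count from the packed fields.
func decode(nnnn, mmm uint32) uint32 {
	return (nnnn + 1) << mmm
}

func main() {
	for _, cn := range []uint32{1, 16, 17, 18, 20, 1024, 2048} {
		n, m := encode(cn)
		fmt.Printf("cn=%d -> nnnn=%d mmm=%d -> %d\n", cn, n, m, decode(n, m))
	}
}
```

For unsupported counts the formulas round up to the next representable value, e.g. a request for 17 channels lands on 18.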
category: core,RFC
low
Critical
361,363,161
go
runtime: mark assist blocks GC microbenchmark for 7ms
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? 1.11 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? OSX (also observed on Linux), both AMD64. ### What did you do? This singlethreaded microbenchmark measures GC latency to allocate a byte slice and store it in a big circular buffer, repeating the operation 5 times the size of the big circular buffer (i.e. one initialization and four reuses). Live memory is about 210MB, in the form of the circular buffer (200,000 slice elements) and 200,000 pointer-free buffers of size 1k. This version of the benchmark is instrumented to collect a trace, because that's how we figured out that it was mark assist. The original benchmark source is https://github.com/WillSewell/gc-latency-experiment, adapted here to be more instrumented and more plain-spoken about what it is doing. ``` package main import ( "fmt" "os" "runtime/trace" "time" "unsafe" ) const ( bufferLen = 200000 msgCount = 1000000 ) type kbyte []byte type circularBuffer [bufferLen]kbyte var worst time.Duration // For making sense of the bad outcome. var total time.Duration var worstIndex int var allStart time.Time var worstElapsed time.Duration // newSlice allocates a 1k slice of bytes and initializes them all to byte(n) func newSlice(n int) kbyte { m := make(kbyte, 1024) for i := range m { m[i] = byte(n) } return m } // storeSlice stores a newly allocated 1k slice of bytes at circularBuffer[count%bufferLen] // (bufferLen is the length of the array circularBuffer) // It also checks the time needed to do this and records the worst case. 
func storeSlice(c *circularBuffer, highID int) { start := time.Now() c[highID%bufferLen] = newSlice(highID) elapsed := time.Since(start) candElapsed := time.Since(allStart) // Record location of worst in trace if elapsed > worst { worst = elapsed worstIndex = highID worstElapsed = candElapsed } total = time.Duration(total.Nanoseconds() + elapsed.Nanoseconds()) } func main() { trace.Start(os.Stderr) allStart = time.Now() defer trace.Stop() var c circularBuffer for i := 0; i < msgCount; i++ { storeSlice(&c, i) } fmt.Println("Worst push latency: ", worst) fmt.Println("Worst push index: ", worstIndex) fmt.Println("Worst push occurs at run elapsed time: ", worstElapsed) fmt.Println("Average push latency: ", time.Duration(total.Nanoseconds()/msgCount)) fmt.Println("Sizeof(circularBuffer) = ", unsafe.Sizeof(c)) fmt.Println("Approximate live memory = ", unsafe.Sizeof(c)+bufferLen*1024) } ``` ### What did you expect to see? I expected to see a sub-millisecond worst-case latency; the average time without GC to initialize a new slice is about a microsecond (on a 2017 Mac laptop). ### What did you see instead? Worst-case latencies on the order of 4-10ms. I've attached the trace file for the following run: ``` go run hello.go 2> trace2.out Worst push latency: 5.193319ms Worst push index: 780507 Worst push occurs at run elapsed time: 995.334915ms Average push latency: 1.018µs Sizeof(circularBuffer) = 4800000 Approximate live memory = 209600000 ``` The worst-case latency ends at 995ms, corresponding to a single 5ms mark assist. A zoom of the trace displaying this is also attached. [trace2.out.gz](https://github.com/golang/go/files/2393334/trace2.out.gz) ![screen shot 2018-09-18 at 11 17 12 am](https://user-images.githubusercontent.com/1928999/45697802-95c1ce00-bb34-11e8-9e71-c6a8c4916565.png)
Performance,GarbageCollector,NeedsInvestigation,compiler/runtime
medium
Major
361,401,006
go
encoding/json: incorrect usage of sync.Pool
https://golang.org/cl/84897 introduces a denial-of-service attack on `json.Marshal` via a live-lock situation with `sync.Pool`. Consider [this snippet](https://play.golang.org/p/htet7ILi9oJ):
```go
type CustomMarshaler int

func (c CustomMarshaler) MarshalJSON() ([]byte, error) {
	time.Sleep(500 * time.Millisecond) // simulate processing time
	b := make([]byte, int(c))
	b[0] = '"'
	for i := 1; i < len(b)-1; i++ {
		b[i] = 'a'
	}
	b[len(b)-1] = '"'
	return b, nil
}

func main() {
	processRequest := func(size int) {
		json.Marshal(CustomMarshaler(size))
		time.Sleep(1 * time.Millisecond) // simulate idle time
	}

	// Simulate a steady stream of infrequent large requests.
	go func() {
		for {
			processRequest(1 << 28) // 256MiB
		}
	}()

	// Simulate a storm of small requests.
	for i := 0; i < 1000; i++ {
		go func() {
			for {
				processRequest(1 << 10) // 1KiB
			}
		}()
	}

	// Continually run a GC and track the allocated bytes.
	var stats runtime.MemStats
	for i := 0; ; i++ {
		runtime.ReadMemStats(&stats)
		fmt.Printf("Cycle %d: %dB\n", i, stats.Alloc)
		time.Sleep(time.Second)
		runtime.GC()
	}
}
```
This is a variation of https://github.com/golang/go/issues/23199#issuecomment-353193866, a situation suggested by @bcmills. Essentially, we have a 1-to-1000 ratio of goroutines that use either 1KiB or 256MiB, respectively. The occasional insertion of a 256MiB buffer into the `sync.Pool` gets continually held by the 1KiB routines. On my machine, after 300 GC cycles, the above program occupies 6GiB of my heap, when I expect it to be 256MiB in the worst case. \cc @jnjackins @bradfitz 
Performance,NeedsFix
medium
Critical
361,408,698
flutter
Text selection vertical extent should be defined per line (not per glyph)
As [reported by @_mono on Twitter]( https://twitter.com/_mono/status/1023733699614060545), the Flutter text selection highlight rect should be computed at the line level as opposed to per glyph. To see this effect, paste the sequence `( ´・‿・`)` into a Flutter text field and select it. Example images below: Flutter selection: ![flutter](https://user-images.githubusercontent.com/351029/45705368-424c8180-bb2e-11e8-9502-3fa404afc066.png) iOS OEM selection: ![ios](https://user-images.githubusercontent.com/351029/45705373-45e00880-bb2e-11e8-930a-156f02e88a15.png) Android OEM selection: ![android](https://user-images.githubusercontent.com/351029/45706301-bdaf3280-bb30-11e8-9ee6-07384c82395c.png)
a: text input,framework,engine,a: fidelity,a: quality,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-engine,triaged-engine
low
Major
361,413,793
react
How to prevent ReactDOM.render errors from bubbling when otherwise explicitly handled
**Do you want to request a *feature* or report a *bug*?** This is a bug. Ordinarily, this would probably be considered a feature request. However, the stated purpose of the feature referenced below is being violated in certain environments. **What is the current behavior?** React 16+ surfaces an uncaught error during render, even when using `componentDidCatch` as designed or using try/catch around the render. As described in the comment [above the related code](https://github.com/facebook/react/blob/master/packages/shared/invokeGuardedCallbackImpl.js#L32:L49), this is a convenience provided for developers using DevTools for debugging purposes. However, the convenience provided for development debugging is changing behavior in specs, causing failures for otherwise protected code paths, which goes against this statement from the comment description for the code: > But because the error happens in a different event loop context, it does not interrupt the normal program flow. When the error occurs, a spec runner such as Mocha will fail the test with the uncaught error, then continue with the next test. After advancing, the second render of the component will complete and call the ReactDOM.render callback, which continues code from the already-failed test while a subsequent test is in progress. This pollutes the spec suite and leads to other issues that are not produced when using the Production version of React. **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:** All relevant code and content has been included in [this CodeSandbox](https://codesandbox.io/s/vvmv7q7o7y). Due to the use of karma/mocha, tests must be run locally. 
Inline comments add detail to behavior and expectations. To see the tests pass, switch "test" to "production" in the `karma.js` file. **What is the expected behavior?** Typically, DevTools are used in a different context from running specs—automation vs investigation, for lack of more precise terms. It should be an option rather than the default when using React in a non-production environment. At least in an environment of `test`, where spec runners are conditionally sensitive to global errors, developers must have the option to disable or disallow this behavior as it is implemented at this time. For a second, perhaps more intuitive option, refer to this portion of the mentioned comment, talking about "pause on caught exceptions": > This is unintuitive, though, because even though React has caught the error, from the developer's perspective, the error is uncaught. When an exception during render is captured using `componentDidCatch` or try/catch as mentioned above, the exception should be considered "caught," as the developer has explicitly created an error boundary around this render. In this case, expected behavior would be for the error to not be surfaced globally and for the developer to debug any exceptions within the error boundary they defined. **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** This is present only in the non-production version of React 16+. The `development` or `test` environments of React 16+ feature this behavior. React 15.* and below do not have this issue. Prior to React 16, explicit try/catch handlers were solely responsible for being an error boundary during render.
Type: Discussion
low
Critical
361,418,864
kubernetes
Setup call to volume plugins should include user and plugin provided mount options
When we create bind mounts in the `Setup` function, we should also include user-provided mount options coming from the PV or StorageClass. This will ensure that bind and remount operations have access to the full mount options rather than a partial list. related to https://github.com/kubernetes/kubernetes/pull/68626#discussion_r218534387 /sig storage /assign
sig/storage,lifecycle/frozen
low
Major
361,424,313
go
x/tools/cmd/present: add support for sub bullets
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.11 linux/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="" GOCACHE="/home/jerrin/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/jerrin/gocode" GOPROXY="" GORACE="" GOROOT="/home/jerrin/go-go1.11" GOTMPDIR="" GOTOOLDIR="/home/jerrin/go-go1.11/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build014264932=/tmp/go-build -gno-record-gcc-switches" Go present tool does not currently support sub bullets. This proposal is to add a new syntax to support sub bullets. Lines prefixed with '-- ' will be displayed as a sub bullet. For example, the following code block produces the slide shown below. ``` * Title - bullet 1 -- sub bullet 1.1 - bullet 2 -- sub bullet 2.1 -- sub bullet 2.2 ``` ![screenshot](https://user-images.githubusercontent.com/883541/45707665-6743f300-bb34-11e8-827a-155064a4dbb2.png)
NeedsDecision,FeatureRequest,Tools
low
Critical
361,431,757
TypeScript
Renaming a module with a default export should rename default imports which match the filename
## Search Terms rename default import ## Suggestion I noticed that recently it became possible to auto-import default exports of modules: ```ts // foo.ts export default 42; ``` ```ts // bar.ts const x = 3 + foo // ^ control-space here will offer to `import foo from './foo'` ``` This is great and finally makes default exports worth using in TypeScript, so thank you! Also recently, TypeScript got native support for refactoring names of modules. Now, what would be really awesome is if the rename refactor for modules could take into account default exports, so that when we rename `foo.ts` to `baz.ts` this would happen in `bar.ts`: ```diff -import foo from './foo'; +import baz from './baz'; -const x = 3 + foo; +const x = 3 + baz; ``` Basically, try to preserve the cases where the name given to a default import matches that of the module being imported. For cases where a default export is imported with a name not matching the module, it would leave it alone: ```diff -import customName from './foo'; +import customName from './baz'; const x = 3 + customName; ``` This would really make default exports exceptionally useful in TypeScript, because it would potentially reduce by half the number of refactoring gestures needed to rename a set of terms which are 1-to-1 with modules. ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback
low
Major
361,457,960
rust
Same trait in dyn / impl trait should give a warning
Having the same trait specified multiple times in the list of traits a function returns does not give a warning: ```Rust fn example() -> impl Copy + Copy + Copy { } fn main() { } ``` It should perhaps give a warning like: ``` warning: found multiple identical traits: `fn example() -> impl Copy + Copy + Copy` ```
A-lints,T-lang,C-bug
low
Minor
361,467,725
rust
return impl Trait should only permit unique lifetimes
Providing exactly the same lifetime more than once to a returned `impl Trait` gives neither a warning nor a compiler error: ```Rust fn example() -> impl Send + 'static + 'static { } fn main() { example(); } ``` The boxed version does error: ```Rust fn dyn_example() -> Box<dyn Send + 'static + 'static> { Box::new(()) } fn main() { dyn_example(); } ``` ``` Compiling playground v0.0.1 (/playground) error[E0226]: only a single explicit lifetime bound is permitted --> src/main.rs:1:46 | 1 | fn dyn_example() -> Box<dyn Send + 'static + 'static> { | ^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0226`. error: Could not compile `playground`. To learn more, run the command again with --verbose. ```
C-enhancement,A-lints,A-diagnostics,T-lang,T-compiler
low
Critical
361,548,155
pytorch
[Caffe2/Bug] Cannot enable MKL-DNN
## Solution A solution was found for the problem, but this may be a bug in Caffe2. MKL-DNN can be enabled by first compiling PyTorch, then turning on MKL by ```bash diff --git a/CMakeLists.txt b/CMakeLists.txt index 827121b1f..235d00bd6 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -111,6 +111,7 @@ option(USE_SYSTEM_EIGEN_INSTALL option(USE_TENSORRT "Using Nvidia TensorRT library" OFF) option(USE_ZMQ "Use ZMQ" OFF) option(USE_ZSTD "Use ZSTD" OFF) +option(USE_MKL "Use MKL" ON) option(USE_MKLDNN "Use MKLDNN" OFF) option(USE_IDEEP "Use IDEEP interface in MKL BLAS" ON) option(USE_MKLML "Use MKLML interface in MKL BLAS" ON) ``` and recompiling PyTorch. Then MKL-DNN is enabled. ## Issue description Trying to accelerate Caffe2 inference with MKL-DNN. The MKL-DNN lib can be detected: ``` -- Found MKLDNN: /home/jiecaoyu/.local/include -- Found MKLDNN (include: /home/jiecaoyu/.local/include, library: /home/jiecaoyu/.local/lib/libmkldnn.so) ``` But MKL operators are not compiled: ``` -- Excluding mkl operators as we are not using mkl ``` And MKL-DNN cannot be found after installation, since ```python from caffe2.python import workspace workspace.C.has_mkldnn ``` returns False. Also, if I try to enable MKL by changing the CMakeLists.txt from the start: ```bash diff --git a/CMakeLists.txt b/CMakeLists.txt index 827121b1f..235d00bd6 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -111,6 +111,7 @@ option(USE_SYSTEM_EIGEN_INSTALL option(USE_TENSORRT "Using Nvidia TensorRT library" OFF) option(USE_ZMQ "Use ZMQ" OFF) option(USE_ZSTD "Use ZSTD" OFF) +option(USE_MKL "Use MKL" ON) option(USE_MKLDNN "Use MKLDNN" OFF) option(USE_IDEEP "Use IDEEP interface in MKL BLAS" ON) option(USE_MKLML "Use MKLML interface in MKL BLAS" ON) ``` it actually causes an error: ``` ... 
/home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.cpp:397:1: error: prototype for ‘THMapAllocator::THMapAllocator(WithFd, const char*, int, int)’ does not match any in class ‘THMapAllocator’ THMapAllocator::THMapAllocator(WithFd, const char *filename, int fd, int flags) { ^~~~~~~~~~~~~~ In file included from /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.cpp:1:0: /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.h:41:3: error: candidates are: THMapAllocator::THMapAllocator(THMapAllocator&&) THMapAllocator(THMapAllocator&&) = delete; ^~~~~~~~~~~~~~ /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.h:39:3: error: THMapAllocator::THMapAllocator(const THMapAllocator&) THMapAllocator(const THMapAllocator&) = delete; ^~~~~~~~~~~~~~ /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.h:38:3: error: THMapAllocator::THMapAllocator(WithFd, const char*, int, int, size_t) THMapAllocator(WithFd, const char *filename, int fd, int flags, size_t size); ^~~~~~~~~~~~~~ /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.cpp:393:1: error: THMapAllocator::THMapAllocator(const char*, int, size_t) THMapAllocator::THMapAllocator(const char *filename, int flags, size_t size) { ^~~~~~~~~~~~~~ /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.cpp:401:52: error: destructors may not have parameters THMapAllocator::~THMapAllocator(THMapAllocator* ctx) {} ^ /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.cpp:401:1: error: redefinition of ‘THMapAllocator::~THMapAllocator()’ THMapAllocator::~THMapAllocator(THMapAllocator* ctx) {} ^~~~~~~~~~~~~~ In file included from /home/jiecaoyu/LIBS/pytorch/aten/src/TH/THAllocator.cpp:1:0: ... 
``` ## System Info PyTorch version: 1.0.0a0+98aebed Is debug build: No CUDA used to build PyTorch: None OS: Ubuntu 18.04.1 LTS GCC version: (Ubuntu 7.3.0-16ubuntu3) 7.3.0 CMake version: version 3.10.2 Python version: 2.7 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip] numpy (1.15.1) [pip] torch (1.0.0a0+98aebed) [conda] Could not collect
caffe2
low
Critical
361,558,550
kubernetes
Support an image pull credential flow built on bound service account tokens
ImagePullSecret is a mechanism to provide kubelet with credentials to pull images for pods running as a specific service account. This provides segmentation of image access between tenants of a cluster. Now that Kubernetes has a more featureful service account token issuer, we can leverage bound service account tokens rather than static secrets to authenticate kubelet to docker registries. Strawman flow: 1. Pod running as service account `foo` with image `bar.io/baz` is bound to a node 1. Kubelet on node requests a service account token for service account `foo` with audience `bar.io` 1. (Optional) Kubelet does an [OAuth token exchange](https://tools.ietf.org/html/draft-ietf-oauth-token-exchange-15) which is a general approach to what we [already do in CRI](https://github.com/kubernetes/kubernetes/blob/76518f154b717c977b1bfb506aeb7cd4b176fde0/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto#L1045) 1. Kubelet passes the resulting token as a `registry_token` (which is just an access token of type Bearer) to the CRI. @smarterclayton talked about this in the pod-identity-wg meeting a while ago. @kubernetes/sig-auth-feature-requests @kubernetes/sig-node-feature-requests /kind feature /sig auth
priority/backlog,sig/node,kind/feature,sig/auth,lifecycle/frozen
low
Major
361,581,032
rust
Rust compiler not inferring the correct trait implementation
(I've raised a question on [SO](https://stackoverflow.com/q/52153792/5099839), and the general answer is that it is due to a bug in rustc that should be reported here, so here I am :) I'm not sure about the report format you expect, so sorry if the following does not match) I'm trying to implement a reader which could be able to extract values from different types from a file. There is a `File` struct which represents the file resource (and methods to access its content), and a `Reader` trait which makes it possible to extract values based on the resulting type. The (dummy) implementation looks like this ([playground](https://play.rust-lang.org/?gist=859160ad8016cafc2ce07c7e6b724fe7&version=stable&mode=debug&edition=2015)): ````rust use std::io::Result; mod file { use std::io::Result; pub struct File {/* ... */} pub trait Reader<T> { fn read(&mut self) -> Result<T>; } impl Reader<u32> for File { fn read(&mut self) -> Result<u32> { // Dummy implementation Ok(10) } } impl Reader<u8> for File { fn read(&mut self) -> Result<u8> { // Dummy implementation Ok(0) } } impl Reader<bool> for File { fn read(&mut self) -> Result<bool> { // Dummy implementation Ok(false) } } } use file::{File, Reader}; impl<T: Default> Reader<Vec<T>> for File where File: Reader<T> + Reader<u32>, { fn read(&mut self) -> Result<Vec<T>> { let count: u32 = self.read()?; let mut array: Vec<T> = Vec::with_capacity(count as usize); for _ in 0..count { let mut item: T = self.read()?; array.push(item); } Ok(array) } } fn main() { let mut file = File {}; let _v: Vec<u8> = file.read().unwrap(); } ```` Everything worked until I added the `Reader<Vec<T>>` implementation. Vectors are stored in the file as a `u32` indicating the number of elements followed by the element's representation. 
The compiler gives the following error:

```
error[E0308]: try expression alternatives have incompatible types
  --> src/main.rs:41:26
   |
41 |         let count: u32 = self.read()?;
   |                          ^^^^^^^^^^^^
   |                          |
   |                          expected u32, found type parameter
   |                          help: try wrapping with a success variant: `Ok(self.read()?)`
   |
   = note: expected type `u32`
              found type `T`
```

Even though I specified that `File` implements both `Reader<T>` and `Reader<u32>`, it seems to be stuck on `Reader<T>`. What's even more strange is that if I only keep 2 implementations of the `Reader` trait (removing `Reader<bool>` for instance), the code compiles without any issue ([playground](https://play.rust-lang.org/?gist=0b4ead500802fcd4662413ac4ef340ea&version=stable&mode=debug&edition=2015)).

The current workaround is to explicitly tell the compiler it should use the `Reader<u32>` implementation:

````rust
let count: u32 = (self as &mut Reader<u32>).read()?;
````

**But the compiler should be able to detect this implicitly, as it does when only 2 implementations exist.**

Should Rust Playground be trusted, the issue appears in stable (1.29.0), beta (6fdf1dbb9a6d2fbd7894 aka 1.29.0-beta.15) and nightly (2224a42c353636db6ee5 aka 1.30.0-nightly).
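As a variant of the cast-based workaround, fully qualified path syntax also pins down the exact impl without a cast. This is a minimal sketch with hypothetical stand-in types, not the original reader code:

```rust
use std::io::Result;

struct File;

trait Reader<T> {
    fn read(&mut self) -> Result<T>;
}

impl Reader<u32> for File {
    fn read(&mut self) -> Result<u32> {
        Ok(10)
    }
}

impl Reader<u8> for File {
    fn read(&mut self) -> Result<u8> {
        Ok(0)
    }
}

fn count(file: &mut File) -> Result<u32> {
    // Fully qualified syntax names the impl explicitly, so no trait
    // inference is needed even with multiple `Reader<_>` impls in scope.
    let n: u32 = <File as Reader<u32>>::read(file)?;
    Ok(n)
}

fn main() {
    let mut file = File;
    assert_eq!(count(&mut file).unwrap(), 10);
    println!("count = {}", count(&mut file).unwrap());
}
```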
A-trait-system,T-compiler,A-inference,C-bug
low
Critical
361,633,229
pytorch
error: ‘array_size’ is not a class template
Pytorch: git of September 19, 2018
Eigen: eigen-eigen-b3f3d4950030

```
In file included from /usr/local/include/unsupported/Eigen/CXX11/Tensor:104:0,
                 from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorIndexList.h:220:10: error: ‘array_size’ is not a class template
 struct array_size<IndexTuple<T, O...> > {
          ^~~~~~~~~~
In file included from /usr/local/include/unsupported/Eigen/CXX11/Tensor:114:0,
                 from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:203:50: error: ‘scalar_logistic_op’ is not a member of ‘Eigen::internal’
     EIGEN_STRONG_INLINE const TensorCwiseUnaryOp<internal::scalar_logistic_op<Scalar>, const Derived>
                                                  ^~~~~~~~
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:203:50: error: ‘scalar_logistic_op’ is not a member of ‘Eigen::internal’
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:203:85: error: wrong number of template arguments (1, should be 2)
     EIGEN_STRONG_INLINE const TensorCwiseUnaryOp<internal::scalar_logistic_op<Scalar>, const Derived>
                                                                                     ^
In file included from /usr/local/include/unsupported/Eigen/CXX11/Tensor:96:0,
                 from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorForwardDeclarations.h:69:52: note: provided for ‘template<class UnaryOp, class XprType> class Eigen::TensorCwiseUnaryOp’
 template<typename UnaryOp, typename XprType> class TensorCwiseUnaryOp;
                                                    ^~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/unsupported/Eigen/CXX11/Tensor:114:0,
                 from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:203:86: error: expected unqualified-id before ‘,’ token
     EIGEN_STRONG_INLINE const TensorCwiseUnaryOp<internal::scalar_logistic_op<Scalar>, const Derived>
                                                                                      ^
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:215:50: error: ‘scalar_expm1_op’ is not a member of ‘Eigen::internal’
     EIGEN_STRONG_INLINE const TensorCwiseUnaryOp<internal::scalar_expm1_op<Scalar>, const Derived>
                                                  ^~~~~~~~
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:215:50: error: ‘scalar_expm1_op’ is not a member of ‘Eigen::internal’
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:215:82: error: wrong number of template arguments (1, should be 2)
     EIGEN_STRONG_INLINE const TensorCwiseUnaryOp<internal::scalar_expm1_op<Scalar>, const Derived>
                                                                                  ^
In file included from /usr/local/include/unsupported/Eigen/CXX11/Tensor:96:0,
                 from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorForwardDeclarations.h:69:52: note: provided for ‘template<class UnaryOp, class XprType> class Eigen::TensorCwiseUnaryOp’
 template<typename UnaryOp, typename XprType> class TensorCwiseUnaryOp;
                                                    ^~~~~~~~~~~~~~~~~~
In file included from /usr/local/include/unsupported/Eigen/CXX11/Tensor:114:0,
                 from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/local/include/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h:215:83: error: expected unqualified-id before ‘,’ token
     EIGEN_STRONG_INLINE const TensorCwiseUnaryOp<internal::scalar_expm1_op<Scalar>, const Derived>
                                                                                   ^
caffe2/CMakeFiles/caffe2.dir/build.make:3521: recipe for target 'caffe2/CMakeFiles/caffe2.dir/operators/conv_op_eigen.cc.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2.dir/operators/conv_op_eigen.cc.o] Error 1
make[2]: Leaving directory '....../pytorch/build'
CMakeFiles/Makefile2:1035: recipe for target 'caffe2/CMakeFiles/caffe2.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2.dir/all] Error 2
make[1]: Leaving directory '....../pytorch/build'
Makefile:143: recipe for target 'all' failed
make: *** [all] Error 2
```

Any suggestions?

cc @malfet @seemethere @walterddr @yf225 @glaringlee
module: build,module: cpp,triaged
low
Critical
361,671,248
go
go2draft-error-handling-overview: Double io.Closer calls
The examples in https://go.googlesource.com/proposal/+/master/design/go2draft-error-handling-overview.md arrange for w.Close() to be called more than once. https://golang.org/pkg/io/#Closer says: _The behavior of Close after the first call is undefined. Specific implementations may document their own behavior_.

The example introduced with "On the other hand, the equivalent actual Go code today would be" defers w.Close() and then calls w.Close(). Likewise, the example introduced with "For example, the corrected code above shortens to" checks w.Close() but also does that in a handler above.

It's OK for the double-Close to be one of the faults of the 'before' version, but it would be nice if the improved 'after' version didn't also do it. The version in between, introduced with "a _more robust version with more helpful errors would be_", doesn't.

CC: @rsc
NeedsInvestigation
low
Critical
361,671,420
TypeScript
Visual Studio Code intellisense on React component using typescript with union types
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**3.0.3**, also checked on 3.1.0-dev.20180919

<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
typescript visual studio code union types intellisense

**Code**

```ts
interface IA {
  name: string
}

interface IB extends IA {
  type: 'B';
  myProp: string
}

interface IC extends IA {
  type: 'C';
  secondProp: string
}

const MyComponent: React.SFC<IB | IC> = props => {
  return <span>test</span>
}

class App extends React.Component {
  public render() {
    return (
      <MyComponent type='B' />
    );
  }
}
```

**Expected behavior:** When rendering MyComponent, if I set `<MyComponent type='B' />` then I should get an autocomplete option for `myProp`.

**Actual behavior:**

![typescript_issue](https://user-images.githubusercontent.com/1786291/45746280-c4a27780-bbfa-11e8-9823-37690f2fd620.gif)

No intellisense is shown for `myProp`. If I don't give a value for `myProp` the compiler does error, but no intellisense autocomplete option is ever shown.

**Related Issues:** I posted the issue generally on [stackoverflow](https://stackoverflow.com/questions/52386145/visual-studio-code-intellisense-on-react-component-using-typescript-with-union-t/52396229#52396229) and a response was that it may be worth logging here; it may be related to [TypeScript issue #26004](https://github.com/Microsoft/TypeScript/issues/26004).
Bug
low
Critical
361,674,272
rust
Check if we can get away with making `fn()` conflict with `&T`
There's the idea that the `fn(...) -> R` types should be unsized types (like `extern type`) and function pointers should be proper pointers (e.g. `&'a fn()`). On its face this appears backwards-incompatible in ways editions can't paper over; e.g. if we desugar `fn()` to `&'static fn()` in 2015, these two impls would suddenly start conflicting:

```rust
trait Foo {}
impl<'a, T: ?Sized> Foo for &'a T {}
impl Foo for fn() {}
```

@eddyb wants to try implementing it quickly and hackily to see what breaks:

> I think we can do this in type unification, and treat `fn()` like `&SomeLibCoreLangItem` there, without changing `fn()` itself
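For reference, those two impls do coexist today precisely because `fn()` is its own sized type rather than a reference. A small runnable sketch (the `name` method is a hypothetical addition, used only to make the impl resolution observable):

```rust
trait Foo {
    fn name(&self) -> &'static str;
}

// Blanket impl over all reference types...
impl<'a, T: ?Sized> Foo for &'a T {
    fn name(&self) -> &'static str {
        "reference"
    }
}

// ...and a separate impl for the fn-pointer type. These do not conflict
// today; they would if `fn()` desugared to `&'static fn()`.
impl Foo for fn() {
    fn name(&self) -> &'static str {
        "fn pointer"
    }
}

fn noop() {}

fn main() {
    let f: fn() = noop;
    assert_eq!(f.name(), "fn pointer");
    assert_eq!((&0u8).name(), "reference");
    println!("ok");
}
```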
T-lang
low
Minor
361,699,595
rust
Confusing `multiple matching crates` error caused by 2018 edition
I tried to `use rand` when I hadn't added it to `Cargo.toml`.

```
error[E0464]: multiple matching crates for `rand`
 --> src\worker_utils\polling_worker_pool.rs:7:5
  |
7 | use rand::{self, Rng};
  |     ^^^^
  |
  = note: candidates:
          crate `rand`: \\?\C:\Users\diggs\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\x86_64-pc-windows-msvc\lib\librand-8a8c92c0d9bc43b9.rlib
          crate `rand`: \\?\C:\Users\diggs\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\x86_64-pc-windows-msvc\lib\librand-74b492d2b987a1e5.rlib

error[E0463]: can't find crate for `rand`
 --> src\worker_utils\polling_worker_pool.rs:7:5
  |
7 | use rand::{self, Rng};
  |     ^^^^ can't find crate
```

The first error is just misleading.
C-enhancement,A-diagnostics,T-compiler
low
Critical
361,725,527
pytorch
[Caffe2 installation problem]
I am trying to install Caffe2 on my Ubuntu PC, but I keep getting error messages that I don't understand. I have installed CUDA 8.0 and cuDNN 7.0. When I run `nvcc --version` I get

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
```

So everything looks like it installed there. The installation also detects the correct CUDA and cuDNN version. Here are the error messages I receive when running `python setup.py install` from my pytorch folder:

```
[ 57%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sin_op.cu.o
[ 57%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elementwise_linear_op.cu.o
[ 57%] Built target python_copy_files
[ 57%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_given_tensor_fill_op.cu.o
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
1 error detected in the compilation of "/tmp/tmpxft_0000371e_00000000-7_THCCachingAllocator.cpp1.ii".
CMake Error at caffe2_gpu_generated_THCCachingAllocator.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/core/./caffe2_gpu_generated_THCCachingAllocator.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:112714: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/core/caffe2_gpu_generated_THCCachingAllocator.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/core/caffe2_gpu_generated_THCCachingAllocator.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
/home/"myname"/pytorch/caffe2/operators/elementwise_linear_op.cu(80): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/elementwise_linear_op.cu(105): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/elementwise_linear_op.cu(106): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/elementwise_linear_op.cu(118): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/utils/math_gpu.cu(3905): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/utils/math_gpu.cu(3906): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/utils/math_gpu.cu(3907): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
1 error detected in the compilation of "/tmp/tmpxft_0000371c_00000000-7_context_gpu.cpp1.ii".
/home/"myname"/pytorch/caffe2/utils/math_gpu.cu(3908): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
CMake Error at caffe2_gpu_generated_context_gpu.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/core/./caffe2_gpu_generated_context_gpu.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:113186: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/core/caffe2_gpu_generated_context_gpu.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/core/caffe2_gpu_generated_context_gpu.cu.o] Error 1
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
/home/"myname"/pytorch/caffe2/operators/utility_ops.cu(52): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/utility_ops.cu(195): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/utility_ops.cu(226): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/utility_ops.cu(277): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/piecewise_linear_transform_op.cu(221): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/piecewise_linear_transform_op.cu(257): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
/home/"myname"/pytorch/caffe2/operators/piecewise_linear_transform_op.cu(273): warning: the "template" keyword used for syntactic disambiguation may only be used within a template
1 error detected in the compilation of "/tmp/tmpxft_00003736_00000000-7_elementwise_linear_op.cpp1.ii".
/home/"myname"/pytorch/caffe2/core/common_cudnn.h(143): error: identifier "CUDNN_DATA_INT32" is undefined
CMake Error at caffe2_gpu_generated_elementwise_linear_op.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_elementwise_linear_op.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:117489: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elementwise_linear_op.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elementwise_linear_op.cu.o] Error 1
1 error detected in the compilation of "/tmp/tmpxft_0000375a_00000000-7_sin_op.cpp1.ii".
CMake Error at caffe2_gpu_generated_sin_op.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_sin_op.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:116979: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sin_op.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sin_op.cu.o] Error 1
1 error detected in the compilation of "/tmp/tmpxft_0000377b_00000000-7_utility_ops.cpp1.ii".
CMake Error at caffe2_gpu_generated_utility_ops.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_utility_ops.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:116310: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_utility_ops.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_utility_ops.cu.o] Error 1
1 error detected in the compilation of "/tmp/tmpxft_00003768_00000000-7_math_gpu.cpp1.ii".
CMake Error at caffe2_gpu_generated_math_gpu.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/utils/./caffe2_gpu_generated_math_gpu.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:114106: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/utils/caffe2_gpu_generated_math_gpu.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/utils/caffe2_gpu_generated_math_gpu.cu.o] Error 1
1 error detected in the compilation of "/tmp/tmpxft_0000377a_00000000-7_piecewise_linear_transform_op.cpp1.ii".
CMake Error at caffe2_gpu_generated_piecewise_linear_transform_op.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_piecewise_linear_transform_op.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:115208: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_piecewise_linear_transform_op.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_piecewise_linear_transform_op.cu.o] Error 1
1 error detected in the compilation of "/tmp/tmpxft_00003819_00000000-7_given_tensor_fill_op.cpp1.ii".
CMake Error at caffe2_gpu_generated_given_tensor_fill_op.cu.o.Release.cmake:279 (message):
  Error generating file
  /home/"myname"/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/operators/./caffe2_gpu_generated_given_tensor_fill_op.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:117980: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_given_tensor_fill_op.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_given_tensor_fill_op.cu.o] Error 1
CMakeFiles/Makefile2:3734: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 libshm gloo c10d THD'
```
caffe2
low
Critical
361,767,739
go
runtime/race: race detector misses race condition in case of fmt.Println
### What version of Go are you using (`go version`)?

`go version go1.11 linux/amd64`

### Does this issue reproduce with the latest release?

`yes`

### What operating system and processor architecture are you using (`go env`)?

```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/dd/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/dd/go:/home/dd/data/Documents/dev"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go-1.11/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go-1.11/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="0"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -fmessage-length=0 -fdebug-prefix-map=/tmp/user/1000/go-build740506104=/tmp/go-build -gno-record-gcc-switches"
```

### What did you do?

`cat /home/dd/.GoLand2018.2/config/scratches/scratch_5.go`

```
package main

import "fmt"

var a int

func f() {
	fmt.Println(a) // if comment this, data race is found
	a = 1
}

func main() {
	go f()
	fmt.Println(a)
}
```

`go run -race /home/dd/.GoLand2018.2/config/scratches/scratch_5.go`

### What did you expect to see?

```
0
0
==================
WARNING: DATA RACE
Write at 0x0000005e3640 by goroutine 6:
  main.f()
      /home/dd/.GoLand2018.2/config/scratches/scratch_5.go:9 +0x69

Previous read at 0x0000005e3640 by main goroutine:
  main.main()
      /home/dd/.GoLand2018.2/config/scratches/scratch_5.go:14 +0x5e

Goroutine 6 (running) created at:
  main.main()
      /home/dd/.GoLand2018.2/config/scratches/scratch_5.go:13 +0x46
==================
Found 1 data race(s)
```

### What did you see instead?

```
0
0
```

I think this is related with: https://github.com/golang/go/issues/12664

cc @dvyukov
RaceDetector,compiler/runtime
medium
Critical
361,789,457
rust
Optimize copies of large enums
For types like `SmallVec<[T; 1000]>`, or in general an enum where the variants have a huge difference in size, we should probably try to optimize the copies better. Basically, for enums with some large-enough difference between variant sizes, we should use a branch when codegenning copies/moves. I'm not sure how common this pattern is, but it's worth looking into! cc @rust-lang/wg-codegen
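To make the size gap concrete, here is a small runnable sketch (the `Payload` type is hypothetical, not from the issue) showing that an enum is at least as large as its largest variant, so an unoptimized move copies the worst-case size even when the small variant is active:

```rust
use std::mem::size_of;

// Variants differ wildly in size: `Large` is 4000 bytes, `Small` is 1.
enum Payload {
    Small(u8),
    Large([u64; 500]),
}

fn main() {
    // The enum's size is dominated by its largest variant.
    assert!(size_of::<Payload>() >= size_of::<[u64; 500]>());
    let p = Payload::Small(7);
    // Today this move copies size_of::<Payload>() bytes regardless of
    // which variant is live; a branch on the discriminant could copy less.
    let q = p;
    if let Payload::Small(v) = q {
        assert_eq!(v, 7);
    }
    println!("Payload is {} bytes", size_of::<Payload>());
}
```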
I-slow,C-enhancement,A-codegen,T-compiler,A-mir-opt,C-optimization
low
Major
361,844,140
rust
Proc macros: ability to refer to a specific crate/symbol (something similar to `$crate`)
### The problem

In macros-by-example we have `$crate` to refer to the crate the macro is defined in. This is very useful as the library author doesn't have to assume anything about how that crate is used in the user's crate (in particular, the user can rename the crate without breaking the world). In the new proc macro system we don't seem to have this ability.

It's important to note that just `$crate` won't be useful most of the time though, because right now most crates using proc macros are structured like this:

- `foo-{macros/derive/codegen}`: this crate is `proc-macro = true` and defines the actual proc macro.
- `foo`: defines all runtime dependency stuff, has `foo-{macros/derive/codegen}` as a dependency and reexports the proc macro.
- The important part: the proc macro emits code that uses stuff from `foo`.

An example:

<details>

**`foo-macros/src/lib.rs`**

```rust
#[proc_macro]
pub fn mac(_: TokenStream) -> TokenStream {
    quote! {
        ::foo::do_the_thing();
    }
}
```

**`foo/src/lib.rs`**

```rust
pub fn do_the_thing() {
    println!("hello!");
}
```

When the user uses `mac!()` now, they have to have `do_the_thing` in scope, otherwise an error from inside the macro will occur. Not nice. Even worse: if the user has a `do_the_thing` in scope that is not from `foo`, strange things could happen.

</details>

<br />

So an equivalent of `$crate` would refer to the `foo-{macros/derive/codegen}` crate, which is not all that useful, because we mostly want to refer to `foo`. The best way to solve this right now is to use absolute paths everywhere and hope that the user doesn't rename the crate `foo` to something else. The proc macro needs to be defined in a separate crate and the main crate `foo` wants to reexport the macro. That means that `foo-macros` doesn't know anything about `foo` and thus *blindly* emits code (tokens) hoping that the crate `foo` is in scope. But this doesn't sound like a very robust solution.
Furthermore, using the macro in `foo` itself (usually for testing) is not trivial. The macro assumes `foo` is an extern crate that can be referred to with `::foo`. But that's not the case for `foo` itself. In one of my codebases I used a hacky solution: when the first token of the macro invocation is `*`, I emit paths starting with `crate::` instead of `::foo::`. But again, a better solution would be really appreciated.

---

### How can we do better?

I'm really not sure, but I hope we can use this issue as a place for discussion (I hope I didn't miss any previous discussion on IRLO). However, I have one idea: **declaring dependencies of emitted code**.

One could add another kind of dependency (apart from `dependencies`, `dev-dependencies` and `build-dependencies`) that defines what crates the emitted code depends on. (Let's call them `emit-dependencies` for now, although that name should probably be changed.) Those dependencies wouldn't be checked/downloaded/compiled when the proc macro crate is compiled, but the compiler could make sure that they are present in the crate using the proc macro.

I guess defining those dependencies globally per crate is not sufficient, since different proc macros could emit code with different dependencies. So maybe we could define the `emit-dependencies` per proc macro. But I'm not sure if that makes the check too complicated (because then Cargo would have to check which proc macros the user actually uses to collect a set of `emit-dependencies`).

That's just one idea I wanted to throw out there.
---

### Related

- [Question on StackOverflow](https://stackoverflow.com/questions/44950574/using-crate-in-rusts-procedural-macros)
- [`serde` issue related to this](https://github.com/serde-rs/serde/issues/953)
- @Ekleog already asked about that [here](https://github.com/rust-lang/rust/issues/38356#issuecomment-412920528)
- https://github.com/rust-lang/rust/issues/54647
- https://github.com/dtolnay/syn/issues/507
- https://github.com/rust-lang/rust/issues/56409
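For contrast, this is the behavior that macros-by-example already get from `$crate` and that proc macros lack for their companion runtime crate. A self-contained sketch (single crate, so `$crate` resolves to the crate root; the item names mirror the `foo` example above but are otherwise hypothetical):

```rust
// The expansion below refers to `$crate::do_the_thing`, which resolves
// to the defining crate no matter how the user renames or imports it.
// Proc macros have no equivalent handle on the runtime crate `foo`.
pub fn do_the_thing() -> &'static str {
    "hello!"
}

#[macro_export]
macro_rules! mac {
    () => {
        $crate::do_the_thing()
    };
}

fn main() {
    // Works even though `do_the_thing` was never brought into scope here.
    assert_eq!(mac!(), "hello!");
    println!("{}", mac!());
}
```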
A-resolve,A-macros,T-lang,C-feature-request,A-proc-macros
medium
Critical
361,855,111
pytorch
[Caffe2] Errors occured when running Cpp Predictor
Hello, I am writing a predictor in cpp so I can use it to predict the output class using the mnist data set. I managed to make it work completely in python, but my goal is to use the trained net from python to predict in cpp. However, I am encountering some errors when I run the predictor:

```
carlos@carlos-ubuntu:~/Documents/git/Caffe2_scripts/03_cpp_forward$ mkdir build && cd build
carlos@carlos-ubuntu:~/Documents/git/Caffe2_scripts/03_cpp_forward/build$ cmake ../
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Caffe2: /usr/local/lib/libcaffe2.so
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "8.0")
Project_include_path:/usr/local/include/usr/local/include/eigen3/usr/local/cuda/include/usr/include/opencv/usr/include
-- Configuring done
-- Generating done
-- Build files have been written to: /home/carlos/Documents/git/Caffe2_scripts/03_cpp_forward/build
carlos@carlos-ubuntu:~/Documents/git/Caffe2_scripts/03_cpp_forward/build$ make
Scanning dependencies of target classifier
[ 50%] Building CXX object CMakeFiles/classifier.dir/main.cpp.o
[100%] Linking CXX executable ../classifier
[100%] Built target classifier
carlos@carlos-ubuntu:~/Documents/git/Caffe2_scripts/03_cpp_forward/build$ cd ../
carlos@carlos-ubuntu:~/Documents/git/Caffe2_scripts/03_cpp_forward$ ./classifier --file ./test_img/3.jpg
E0919 11:13:08.691625 11152 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0919 11:13:08.692265 11152 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0919 11:13:08.692294 11152 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
== GPU processing selected  ==
== network loaded  ==
== image size: [42 x 41] ==
== simply resize: [28 x 28] ==
== Tensor  ==
== Blob got  ==
== Data copied  ==
*** Aborted at 1537348389 (unix time) try "date -d @1537348389" if you are using GNU date ***
PC: @     0x7f1db78d381d getenv
*** SIGSEGV (@0x0) received by PID 11152 (TID 0x7f1dc0900000) from PID 0; stack trace: ***
    @     0x7f1db78cf4b0 (unknown)
    @     0x7f1db78d381d getenv
    @     0x7f1dbdc40ea4 caffe2::NumCudaDevices()
    @     0x7f1dbdc41efd caffe2::GetDeviceProperty()
    @     0x7f1dbdcc4c05 caffe2::CudnnConvOpBase::DetermineComputeTypeFromInput<>()
    @     0x7f1dbdcce146 caffe2::CudnnConvOp::DoRunWithType<>()
    @     0x7f1dbdcc21f8 caffe2::CudnnConvOp::RunOnDevice()
    @     0x7f1dbdc0606b caffe2::Operator<>::Run()
    @     0x7f1dbfc1f75b caffe2::SimpleNet::Run()
    @     0x7f1dbfb9b76a caffe2::Workspace::RunNet()
    @     0x42a2e0 caffe2::run()
    @     0xbfd4bd36bfd893b0 (unknown)
Segmentation fault (core dumped)
```

Since I don't know how I can debug using cmake, I put some printouts to be able to see where the issue occurs.
It seems that error occurs in: ``` // forward workSpace.RunNet(predictNet.name()); ``` This is code of the cpp predictor: ``` /******************************************************* * Copyright (C) 2018-2019 bigballon <[email protected]> * * This file is a caffe2 C++ image classification test * by using pre-trained cifar10 model. * * Feel free to modify if you need. *******************************************************/ #include "caffe2/core/common.h" #include "caffe2/utils/proto_utils.h" #include "caffe2/core/workspace.h" #include "caffe2/core/tensor.h" #include "caffe2/core/init.h" // feel free to define USE_GPU if you want to use gpu #define USE_GPU #ifdef USE_GPU #include "caffe2/core/context_gpu.h" #endif // headers for opencv #include <opencv2/highgui/highgui.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <string> #include <iostream> #include <map> // define flags CAFFE2_DEFINE_string(init_net, "./init_net.pb", "The given path to the init protobuffer."); CAFFE2_DEFINE_string(predict_net, "./predict_net.pb", "The given path to the predict protobuffer."); CAFFE2_DEFINE_string(file, "./image_file.jpg", "The image file."); namespace caffe2{ void loadImage(std::string file_name, float* imgArray){ auto image = cv::imread(file_name); // CV_8UC3 std::cout << "== image size: " << image.size() << " ==" << std::endl; // scale image to fit cv::Size scale(28,28); cv::resize(image, image, scale); std::cout << "== simply resize: " << image.size() << " ==" << std::endl; // convert [unsigned int] to [float] image.convertTo(image, CV_32FC3); // convert NHWC to NCHW std::vector<cv::Mat> channels(3); cv::split(image, channels); std::vector<float> data; for (auto &c : channels) { data.insert(data.end(), (float *)c.datastart, (float *)c.dataend); } // do normalization & copy to imgArray int dim = 0; float image_mean[3] = {113.865, 122.95, 125.307}; float image_std[3] = {66.7048, 62.0887, 62.9932}; for(auto i = 0; i < data.size();++i){ if(i > 0 && i % (32*32) == 0) dim++; 
imgArray[i] = (data[i] - image_mean[dim]) / image_std[dim]; // std::cout << imgArray[i] << std::endl; } } void run(){ // define a caffe2 Workspace Workspace workSpace; // define initNet and predictNet NetDef initNet, predictNet; // read protobuf CAFFE_ENFORCE(ReadProtoFromFile(FLAGS_init_net, &initNet)); CAFFE_ENFORCE(ReadProtoFromFile(FLAGS_predict_net, &predictNet)); // set device type #ifdef USE_GPU predictNet.mutable_device_option()->set_device_type(CUDA); initNet.mutable_device_option()->set_device_type(CUDA); std::cout << "== GPU processing selected " << " ==" << std::endl; #else predictNet.mutable_device_option()->set_device_type(CPU); initNet.mutable_device_option()->set_device_type(CPU); for(int i = 0; i < predictNet.op_size(); ++i){ predictNet.mutable_op(i)->mutable_device_option()->set_device_type(CPU); } for(int i = 0; i < initNet.op_size(); ++i){ initNet.mutable_op(i)->mutable_device_option()->set_device_type(CPU); } #endif // load network CAFFE_ENFORCE(workSpace.RunNetOnce(initNet)); CAFFE_ENFORCE(workSpace.CreateNet(predictNet)); std::cout << "== network loaded " << " ==" << std::endl; // load image from file, then convert it to float array. 
float imgArray[1 * 28 * 28]; loadImage(FLAGS_file, imgArray); // define a Tensor which is used to store input data std::cout << "== Tensor " << " ==" << std::endl; TensorCPU input; input.Resize(std::vector<TIndex>({1, 1, 28, 28})); input.ShareExternalPointer(imgArray); // get "data" blob #ifdef USE_GPU auto data = workSpace.GetBlob("data")->GetMutable<TensorCUDA>(); #else auto data = workSpace.GetBlob("data")->GetMutable<TensorCPU>(); #endif std::cout << "== Blob got " << " ==" << std::endl; // copy from input data data->CopyFrom(input); std::cout << "== Data copied " << " ==" << std::endl; // forward workSpace.RunNet(predictNet.name()); std::cout << "== Net was run " << " ==" << std::endl; // get predictions blob and show the results std::vector<std::string> labelName = {"0","1","2","3","4", "5","6","7","8","9"}; #ifdef USE_GPU auto predictions = TensorCPU(workSpace.GetBlob("predictions")->Get<TensorCUDA>()); #else auto predictions = workSpace.GetBlob("predictions")->Get<TensorCPU>(); #endif std::cout << "== predictions got " << " ==" << std::endl; std::vector<float> probs(predictions.data<float>(), predictions.data<float>() + predictions.size()); auto max = std::max_element(probs.begin(), probs.end()); auto index = std::distance(probs.begin(), max); std::cout << "== predicted label: " << labelName[index] << " ==\n== with probability: " << (*max * 100) << "% ==" << std::endl; } } // namespace caffe2 // main function int main(int argc, char** argv) { caffe2::GlobalInit(&argc, &argv); caffe2::run(); google::protobuf::ShutdownProtobufLibrary(); return 0; } ``` This is the structure of my project: ![image](https://user-images.githubusercontent.com/25825048/45745633-8e1c2b00-bc01-11e8-8ffc-fac066470584.png) Attached my init_net.pb and predict_net.pb in `.pb` and `pbtxt` format. I had to change their extensions to .txt to be able to upload them.
[init_net.txt](https://github.com/BIGBALLON/Caffe2_Demo/files/2396478/init_net.txt) [predict_net.txt](https://github.com/BIGBALLON/Caffe2_Demo/files/2396479/predict_net.txt) [init_net_pbtxt.txt](https://github.com/BIGBALLON/Caffe2_Demo/files/2396599/init_net_pbtxt.txt) [predict_net_pbtxt.txt](https://github.com/BIGBALLON/Caffe2_Demo/files/2396600/predict_net_pbtxt.txt) **Do you have an idea what could be the reason for the error?** **My configuration:** - Caffe2 tag v0.4.0 - CUDA/cuDNN version: 8.0/7.0.5 - GPU models and configuration: GTX 1050 - CMake version:
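For reference, the preprocessing that `loadImage` performs (NHWC→NCHW conversion plus per-channel mean/std normalization) can be sanity-checked with a small NumPy sketch. The mean/std constants below are copied from the C++ snippet above; the channel order is whatever `cv::split` produced (BGR for OpenCV images). This is only a sketch of the preprocessing math, not the Caffe2 C++ API:

```python
import numpy as np

# Per-channel mean/std copied from the loadImage snippet above (CIFAR-10 stats).
image_mean = np.array([113.865, 122.95, 125.307], dtype=np.float32)
image_std = np.array([66.7048, 62.0887, 62.9932], dtype=np.float32)

def preprocess(image_hwc):
    """Convert an HxWxC uint8 image to a normalized 1xCxHxW float32 tensor."""
    img = image_hwc.astype(np.float32)
    # NHWC -> NCHW: move the channel axis to the front.
    chw = np.transpose(img, (2, 0, 1))
    # Normalize each channel independently (broadcast over H and W).
    chw = (chw - image_mean[:, None, None]) / image_std[:, None, None]
    return chw[None, ...]  # add the batch dimension

# Example: a 32x32 3-channel image of all zeros.
tensor = preprocess(np.zeros((32, 32, 3), dtype=np.uint8))
print(tensor.shape)  # (1, 3, 32, 32)
```

Note that the sketch (like the original `loadImage`) produces a 3-channel tensor, while the network's `data` blob is resized to `{1, 1, 28, 28}` — that shape mismatch is worth double-checking when debugging the `RunNet` failure.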
caffe2
low
Critical
361,865,372
pytorch
[caffe2] Unaligned AVX instruction operands in LayerNorm implementation in OSS build
On my open source build the following program hits a segmentation fault: ``` from caffe2.python import core, workspace test_net = core.Net("layer_norm_test") test_net.LayerNorm(["input"], ["output", "mean", "stddev"], epsilon=1e-5) import numpy as np workspace.FeedBlob('input', np.random.rand(20, 5, 10, 10).astype('f')) workspace.CreateNet(test_net) workspace.RunNet(test_net.Name()) ``` Looking at the stack trace it looks like Eigen is attempting to do a 256-bit wide vector add between two stack-allocated operands, which are only aligned to 16 bytes: ``` #0 0x00007fffdf694ddb in Eigen::internal::padd<float __vector>(float __vector const&, float __vector const&) (a=..., b=...) at ../cmake/../third_party/eigen/Eigen/src/Core/arch/AVX/PacketMath.h:130 #1 0x00007fffdf6988ae in Eigen::internal::scalar_sum_op<float, float>::packetOp<float __vector>(float __vector const&, float __vector const&) const (this=0x7fffffffbacf, a=..., b=...) at ../cmake/../third_party/eigen/Eigen/src/Core/functors/BinaryFunctors.h:45 #2 0x00007fffe0cbbaf4 in Eigen::internal::redux_impl<Eigen::internal::scalar_sum_op<float, float>, Eigen::internal::redux_evaluator<Eigen::Block<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> > const, 1, -1, true> >, 3, 0>::run (mat=..., func=...) at ../cmake/../third_party/eigen/Eigen/src/Core/Redux.h:240 #3 0x00007fffe0cbb6dd in Eigen::DenseBase<Eigen::Block<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> > const, 1, -1, true> >::redux<Eigen::internal::scalar_sum_op<float, float> > (this=0x7fffffffbb80, func=...) 
at ../cmake/../third_party/eigen/Eigen/src/Core/Redux.h:418 #4 0x00007fffe0cbb38f in Eigen::DenseBase<Eigen::Block<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> > const, 1, -1, true> >::mean ( this=0x7fffffffbb80) at ../cmake/../third_party/eigen/Eigen/src/Core/Redux.h:468 #5 0x00007fffe0cbb180 in Eigen::internal::member_mean<float>::operator()<Eigen::Block<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> > const, 1, -1, true> > (this=0x7fffffffbcf0, mat=...) at ../cmake/../third_party/eigen/Eigen/src/Core/VectorwiseOp.h:105 #6 0x00007fffe0cbaff3 in Eigen::internal::evaluator<Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> >::coeff (this=0x7fffffffbcd0, i=0, j=0) at ../cmake/../third_party/eigen/Eigen/src/Core/CoreEvaluators.h:1353 #7 0x00007fffe0cbae29 in Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >, Eigen::internal::evaluator<Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> >, Eigen::internal::assign_op<float, float>, 0>::assignCoeff (this=0x7fffffffbcb0, row=0, col=0) at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:631 #8 0x00007fffe0cbacab in Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >, Eigen::internal::evaluator<Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> >, Eigen::internal::assign_op<float, float>, 0>::assignCoeffByOuterInner (this=0x7fffffffbcb0, outer=0, inner=0) at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:645 #9 0x00007fffe0cba974 in 
Eigen::internal::dense_assignment_loop<Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >, Eigen::internal::evaluator<Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> >, Eigen::internal::assign_op<float, float>, 0>, 0, 0>::run (kernel=...) at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:326 #10 0x00007fffe0cba352 in Eigen::internal::call_dense_assignment_loop<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1>, Eigen::internal::assign_op<float, float> > (dst=..., src=..., func=...) at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:741 #11 0x00007fffe0cba1c8 in Eigen::internal::Assignment<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1>, Eigen::internal::assign_op<float, float>, Eigen::internal::Dense2Dense, void>::run (dst=..., src=..., func=...) at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:879 #12 0x00007fffe0cb9ea9 in Eigen::internal::call_assignment_no_alias<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1>, Eigen::internal::assign_op<float, float> > (dst=..., src=..., func=...) 
at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:836 #13 0x00007fffe0cb99ad in Eigen::internal::call_assignment<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1>, Eigen::internal::assign_op<float, float> >(Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >&, Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> const&, Eigen::internal::assign_op<float, float> const&, Eigen::internal::enable_if<!Eigen::internal::evaluator_assume_aliasing<Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1>, Eigen::internal::evaluator_traits<Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> >::Shape>::value, void*>::type) (dst=..., src=..., func=...) at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:804 #14 0x00007fffe0cb8cd5 in Eigen::internal::call_assignment<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> > (dst=..., src=...) at ../cmake/../third_party/eigen/Eigen/src/Core/AssignEvaluator.h:782 #15 0x00007fffe0cb8521 in Eigen::MatrixBase<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >::operator=<Eigen::PartialReduxExpr<Eigen::Map<Eigen::Matrix<float, -1, -1, 1, -1, -1> const, 0, Eigen::Stride<0, 0> >, Eigen::internal::member_mean<float>, 1> > (this=0x7fffffffbea0, other=...) 
at ../cmake/../third_party/eigen/Eigen/src/Core/Assign.h:66 #16 0x00007fffe0cabaf5 in caffe2::LayerNormOp<caffe2::CPUContext>::DoRunWithType<float> (this=0x5555562c23f0) at ../caffe2/operators/layer_norm_op.cc:52 #17 0x00007fffe0cbd8bc in caffe2::LayerNormOp<caffe2::CPUContext>::RunOnDevice (this=0x5555562c23f0) at ../caffe2/operators/layer_norm_op.h:24 ``` Current frame: ``` (gdb) f #0 0x00007fffdf694ddb in Eigen::internal::padd<float __vector>(float __vector const&, float __vector const&) (a=..., b=...) at ../cmake/../third_party/eigen/Eigen/src/Core/arch/AVX/PacketMath.h:130 130 template<> EIGEN_STRONG_INLINE Packet8f padd<Packet8f>(const Packet8f& a, const Packet8f& b) { return _mm256_add_ps(a,b); } ``` Debug info: ``` (gdb) p &a $1 = (const Eigen::internal::Packet8f *) 0x7fffffffb970 (gdb) p &b $2 = (const Eigen::internal::Packet8f *) 0x7fffffffb9b0 ``` I've tried building with `-DEIGEN_FORCE_ALIGN_32` but it doesn't seem to make a difference.
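The operand addresses in the debug output are consistent with the diagnosis: both are 16-byte aligned (the usual stack alignment) but not 32-byte aligned, which the aligned 256-bit loads feeding `_mm256_add_ps` require. A quick arithmetic check on the addresses copied from the gdb session above:

```python
# Operand addresses taken verbatim from the gdb output above.
a_addr = 0x7fffffffb970
b_addr = 0x7fffffffb9b0

for addr in (a_addr, b_addr):
    sse_ok = addr % 16 == 0  # sufficient for 128-bit SSE aligned loads
    avx_ok = addr % 32 == 0  # required for 256-bit AVX aligned loads
    print(hex(addr), "16-byte aligned:", sse_ok, "32-byte aligned:", avx_ok)
```

Both addresses land exactly 16 bytes past a 32-byte boundary, so any `vmovaps`-style aligned 256-bit access on them faults.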
caffe2
low
Critical
361,876,351
go
cmd/go: describe stuck ops when terminated by signal
### What version of Go are you using (`go version`)? go version go1.11 darwin/amd64 ### Does this issue reproduce with the latest release? Yes. ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOCACHE="/Users/ccahoon/Library/Caches/go-build" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64" ### What did you do? Note: I am building with `GO111MODULE=on`, from within my GOPATH. 1) I am building a command that imports a module defined as: `module github.com/Figure53/figure53.com` 2) Within that module, I import a package at `github.com/Figure53/figure53.com/backend/remote`, and it builds without issues. 3) I created a new file within a pre-existing package, and copy and pasted that import path. Somewhere along the way, I introduced a typo into the import path. The domain name in the import path was misspelled as `gitdub.com`. 4) I ran `go build` with `GO111MODULE=on`, and it hung silently until I cancelled it. I tried again a few times and got confused. It didn't seem to be a CPU issue, and I haven't experienced a feedback-less slow build for a while. After I stashed my changes, it built fine. 5) I read my diff, discovered my typo, fixed it, and the build worked again. ### What did you expect to see? I expected some indicator that `go build` was hung on trying to fetch a module, after a shorter period of time. I also thought some sort of shorter timeout might make sense. ### What did you see instead? `go build` hung indefinitely.
Once I ran `go build -v [...]`, I could see what it was working on, and the build hung indefinitely at: `Fetching https://gitdub.com/Figure53/figure53.com/backend/remote?go-get=1` It is also not a problem when the request returns quickly (as in an experiment I did to try to understand the problem, where I changed the domain to one I control that 404s quickly): `build github.com/Figure53/figure53.com/backend/act: cannot find module for path figure53.com/Figure53/figure53.com/backend/remote` --- I am loving this module system, by the way. It's gone very smoothly for the most part. Thanks for all the great work!
NeedsFix,modules
low
Major
361,921,960
rust
`#[thread_local] static mut` is allowed to have `'static` lifetime
Spinning off from https://github.com/rust-lang/rust/issues/51269: The MIR borrow checker (i.e., NLL) [permits `#[thread_local] static mut` to have `'static` lifetime](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=e4b4f7af67c16546ec3d82acce90d624). This is probably a bug. It arises because we ignore borrows of "unsafe places", which includes `static mut`. We probably ought to stop doing that — at least not ignoring them *entirely* — but if we do so, we have to ensure that we continue to accept overlapping borrows of `static mut` (even though that is basically guaranteed UB), since it compiles today: ```rust fn main() { static mut X: usize = 22; unsafe { let p = &mut X; let q = &mut X; *p += 1; *q += 1; *p += 1; } } ```
P-medium,T-compiler,A-NLL,NLL-sound,requires-nightly,F-thread_local
medium
Critical
361,925,620
vscode
Relative problem paths don't take `cwd` into account
From https://github.com/Microsoft/vscode-cpptools/issues/2518 by @galgalesh -------------------------------------------------------- **Type: LanguageService** <!----- Input information below -----> <!-- **Please review existing issues and our documentation at https://github.com/Microsoft/vscode-cpptools/tree/master/Documentation prior to filing an issue.** --> **Describe the bug** - OS and Version: Ubuntu 18.04 - VS Code Version: 1.27.1 5944e81f3c46a3938a82c701f96d7a59b074cfdc x64 - C/C++ Extension Version: 0.18.1 - Other extensions you installed (and if the issue persists after disabling them): - A clear and concise description of what the bug is. The problemMatcher doesn't take the `cwd` of a task into account when resolving relative paths. **To Reproduce** <!-- Steps to reproduce the behavior: --> <!-- *The most actionable issue reports include a code sample including configuration files such as c_cpp_properties.json* --> Create a task and set the `cwd` of that task to a different path. **Expected behavior** The problemMatcher resolves relative paths as relative to `cwd`. If you set the `cwd` of a task, relative paths will most likely be relative to the `cwd`, not to the workspaceFolder. **Behavior I get** The problemMatcher resolves relative paths as relative to `workspaceFolder`. Following is the `tasks.json` for such a task. The line starting with ` "fileLocation"` shouldn't be necessary. ``` { "label": "build adfgxx", "type": "shell", "command": "ninja", "args": [ ], "group": { "kind": "build", "isDefault": true }, "problemMatcher": { "base": "$gcc", // The next line shouldn't be needed since it is the same value as `cwd` "fileLocation" : ["relative", "${workspaceFolder}/builddir"] }, "options": { "cwd": "${workspaceFolder}/builddir", } } ``` **Additional context** <!-- *Call Stacks: For bugs like crashes, deadlocks, infinite loops, etc.
that we are not able to repro and for which the call stack may be useful, please attach a debugger and/or create a dmp and provide the call stacks. Starting with 0.17.3, Windows binaries have symbols available in VS Code by setting your "symbolSearchPath" to "http://msdl.microsoft.com/download/symbols".* Add any other context about the problem here including log messages in your Output window ("C_Cpp.loggingLevel": "Debug" in settings.json). -->
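The expected behavior can be illustrated with a small path-resolution sketch (the directory and file names are hypothetical): a compiler invoked with `cwd` set to `builddir` emits paths relative to `builddir`, so resolving them against the workspace folder points at the wrong file.

```python
import posixpath

# Hypothetical layout mirroring the tasks.json above.
workspace_folder = "/home/user/project"
cwd = posixpath.join(workspace_folder, "builddir")

# A relative path as gcc might print it when invoked from `cwd`.
relative = "../src/main.c"

# What the problemMatcher should do: resolve against the task's cwd.
resolved_against_cwd = posixpath.normpath(posixpath.join(cwd, relative))
# What it does today: resolve against the workspace folder.
resolved_against_workspace = posixpath.normpath(posixpath.join(workspace_folder, relative))

print(resolved_against_cwd)        # /home/user/project/src/main.c (correct)
print(resolved_against_workspace)  # /home/user/src/main.c (wrong file)
```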
feature-request,tasks
low
Critical
361,939,224
vscode
moving TypeScript files in explorer fails to update/prompt imports when containing folder is moved
<!-- Please search existing issues to avoid creating duplicates. --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> <!-- Use Help > Report Issue to prefill these. --> - VSCode Version: 1.27.2 - OS Version: Win 10 - Typescript: 3.0.3 Steps to Reproduce: 0. Set the TypeScript "update imports on move" setting to "prompt" 1. Create an empty project with 2 TypeScript files, one importing from the other. 2. In the explorer manually move the depended-upon file to a new directory. You will be prompted to update the import. This works fine. 3. Now move that directory into ANOTHER new directory. 4. This time there is no prompt and the import will be broken
bug,help wanted,typescript
medium
Critical
361,941,778
scrcpy
Mouse capture for shooting games
I love this app. It's very simple. But my question is: why can't I use the mouse in shooting games like on PC when moving around? All I see is the pointer. Can I suggest that when I press Ctrl+P it shows the mouse pointer, and pressing it again hides it? :) Thanks!
feature request,input events,mouse capture
low
Major
361,943,638
create-react-app
Error overlay watching mode doesn't work anymore
The build is complaining about `process` not being defined.
tag: internal,issue: needs investigation
low
Critical
361,985,811
go
syscall/js: Make syscall/js optional when compiling wasm?
At present, when we compile to wasm `syscall/js` is automatically included. This forces the runtime to either be a browser (eg Firefox, Chrome, etc), or at least pretend to be one. That's kind of non-optimal, as some of the wasm execution environments presently being developed aren't targeted at browser environments. eg: * https://github.com/Xe/olin * https://github.com/ewasm/hera * https://github.com/perlin-network/life * https://github.com/cervus-v/cervus * https://github.com/go-interpreter/wagon * https://github.com/paritytech/wasmi * https://github.com/nebulet/nebulet * https://github.com/WAVM/WAVM * https://github.com/cranestation/wasmtime In the [WebAssembly](https://gophers.slack.com/messages/C4GU6CHSQ) Gophers Slack channel we have people asking about non-browser use cases fairly often. It seems likely that'll be a fairly standard use case, if it can be catered to as well. How feasible would it be to have some way to suppress the default `syscall/js` inclusion, or otherwise make it optional?
NeedsDecision,FeatureRequest,arch-wasm
high
Critical
362,009,098
neovim
CursorLine priority issues when 'linking to Normal'
- nvim 0.3.1 - all terminals and operating systems. Since I upgraded to Neovim 0.3.1 I notice the NERDTree cursorline looks broken with my preferred Vim theme of [moonfly](https://github.com/bluz71/vim-moonfly-colors). Some Googling indicates others have the same experience, as noted in [this Reddit thread](https://www.reddit.com/r/vim/comments/91b7mg/broken_cursorline_highlighting_nvim/) and the following image. ![9g9lht08nrb11](https://user-images.githubusercontent.com/11382509/45794665-ee32d000-bcda-11e8-850d-979c5731cffe.png) Notice the `CursorLine` in the NERDTree window: it does not stretch across the full width of the window, it only highlights the section after the `zsh` extension. I get the same result with my theme. I think this relates to Neovim changes to CursorLine priority; especially if one only sets a `ctermbg` color without setting a foreground color. Reading through some of the Neovim CursorLine priority discussions it is mentioned that if one sets foreground and background then the priority system will not come into play. That is correct from my quick testing. However, in my case I do not want to set a foreground color for the `CursorLine`; I only set a low contrast background color whilst preserving the existing colors of the current line. Setting a CursorLine foreground color results in the loss of coloring for the currently selected line (aka it looks ugly with my theme). How can I bypass Neovim CursorLine priority changes (for 0.3.1 and above) whilst only setting the background color? Note, Vim 8.1 renders the NERDTree current line correctly via `CursorLine` if background is the only color set. Same went for Neovim 0.3.0 and earlier. Thanks.
compatibility,ui,display,highlight
medium
Critical
362,035,988
pytorch
[caffe2] Bug for softmaxwithloss operator
I found that the ignore label does not work for the operator softmaxwithloss in [https://github.com/pytorch/pytorch/blob/master/caffe2/operators/softmax_with_loss_op.cc](url), which means that line 151, `#define DONT_CARE (-1)`, has no effect. On the other hand, the operator spatialsoftmaxwithloss in [https://github.com/pytorch/pytorch/blob/master/caffe2/operators/spatial_softmax_with_loss_op.cc](url) does not have this bug; with it we can ignore the label -1. I hope this bug can be fixed.
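The intended semantics of the ignore label can be sketched in NumPy: samples whose label equals -1 should contribute neither to the loss sum nor to the averaging denominator. This is a reference sketch of the expected behavior, not the Caffe2 operator itself:

```python
import numpy as np

DONT_CARE = -1  # mirrors the define in softmax_with_loss_op.cc

def softmax_cross_entropy(logits, labels):
    """Average cross-entropy over samples, skipping DONT_CARE labels."""
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    mask = labels != DONT_CARE
    if not mask.any():
        return 0.0
    # Pick the log-probability of the target class for non-ignored samples.
    picked = log_probs[np.arange(len(labels))[mask], labels[mask]]
    return float(-picked.mean())

logits = np.array([[2.0, 0.0], [0.0, 2.0], [5.0, -5.0]])
labels = np.array([0, 1, DONT_CARE])
loss_with_ignore = softmax_cross_entropy(logits, labels)
# The third sample is ignored, so only the first two contribute.
```

With these semantics, appending a DONT_CARE sample to a batch leaves the loss unchanged, which is exactly the property the C++ operator currently violates.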
caffe2
low
Critical
362,111,452
flutter
Need to animate the old page during transition
Currently, the transition (PageRoute) is an overlay that only provides a child widget, which is the new page. The old page should be involved as well. If I want to fade out or apply other animations to the old page, I cannot do it with the current method. For my project, each page has an overlay which is semi-transparent. You will notice the overlay is overlapped by the new page during the transition but suddenly changes to one layer once the transition is finished, since the old page is hidden. I need to animate the old page to move and fade out as the new one comes in, to give the user a smooth visual effect. I also asked on SO ([link](https://stackoverflow.com/questions/52419056/how-to-animate-old-page-during-transition-in-flutter)) and Discord, but people on Discord suggested I ask here. For comparison, the iOS native transition is able to animate both the old and new pages, so you can make very nice switching effects.
c: new feature,framework,a: animation,f: routes,P2,team-framework,triaged-framework
low
Major
362,112,811
go
cmd/compile/internal/gc: generalize self-assignment handling in escape analysis
Right now, there are at least 2 quite adhoc patterns that are recognized by escape analysis as safe. It skips detailed analysis when it sees self-assignment statement that does not introduce new pointers that need tracking. Initially, only some simple self-slicing assignments like this were handled: ```go x.buf = x.buf[a:b] ``` Later, some more patterns were added, so these are recognized, too: ```go x.ptr1 = x.ptr2 val.pointers[i] = val.pointers[j] ``` The problem with them is that they are very fragile and can't match expressions that are different from the simplest cases, but still represents self-assignments: ```go x.a.b.buf = x.a.b.buf[a:b] ``` It is possible to generalize all self-assignment patterns with a concept of "base object". What we see above is a matching of object field assignment to the object itself. The base object is that object, the one that contains referenced field. For the simplest cases, base object is just 1 syntactical level "above": ```go base(x.y) // => x base(x.y.z) // => x.y base(x[i]) // => x base(x[i][j]) // => x[i] ``` For cases where expression effectively returns the same object, we can skip several levels: ```go base(x[:]) => x base(x[:][:]) => x ``` Given `base` function, we can express self-assignment as (pseudo Go): ```go // See samesafeexpr from cmd/compile/internal/gc/typecheck.go return samesafeexpr(base(dst), base(src)) ``` This covers all patterns above, plus a few more. 
As a bonus, it also solves the trivial `*p = *p` case (see https://github.com/golang/go/issues/5919): ```go func j(p *string) { *p = *p } ``` More interesting examples of self-assignments that were not recognized until generalization: ```go type node struct { next *node } func createLoop(n *node) { n.next = n // n was leaking param before } ``` ```go type foo struct { pi *int i int } func (f *foo) example() { f.pi = &f.i // f was leaking param before } ``` This solution (if it is correct or can be made correct): - Makes the self-assignment check less adhoc and more powerful (hopefully, more useful) - Makes the implementation simpler and reduces code duplication <hr> Collect new escape analysis results: ```bash go build -a -gcflags=-m ./src/... 2>&1 | grep 'does not escape' > new_noescapes.txt go build -a -gcflags=-m ./src/... 2>&1 | grep 'leaking param' > new_leakparam.txt ``` And now with the unpatched `go tool compile`: ```bash go build -a -gcflags=-m ./src/... 2>&1 | grep 'does not escape' > old_noescapes.txt go build -a -gcflags=-m ./src/... 2>&1 | grep 'leaking param' > old_leakparam.txt ``` Now it is possible to do some comparisons.
New non-escaping values/parameters: ``` src/net/net.go:687:7: (*Buffers).consume v does not escape src/net/net.go:674:7: (*Buffers).Read v does not escape src/internal/poll/fd.go:48:14: consume v does not escape src/compress/flate/inflate.go:325:10: (*decompressor).nextBlock &f.h1 does not escape src/compress/flate/inflate.go:326:10: (*decompressor).nextBlock &f.h2 does not escape src/cmd/compile/internal/gc/const.go:668:15: evconst &nl does not escape src/container/ring/ring.go:121:7: (*Ring).Len r does not escape src/cmd/compile/internal/types/type.go:727:13: (*Type).copy &nt does not escape ``` Since `ring.Ring.Len` receiver no longer escapes, we can try to verify and measure it with benchmarks: ```go package foo import ( "container/ring" "testing" ) func BenchmarkRingLen(b *testing.B) { for i := 0; i < b.N; i++ { var r ring.Ring _ = r.Len() } } ``` Old is unpatched escape analysis with leaking `r`. New is non-leaking `r`: ``` name old time/op new time/op delta RingLen-8 35.6ns ± 6% 3.0ns ± 0% -91.46% (p=0.000 n=10+10) name old alloc/op new alloc/op delta RingLen-8 32.0B ± 0% 0.0B -100.00% (p=0.000 n=10+10) name old allocs/op new allocs/op delta RingLen-8 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10) ``` Here is `ring.Ring.Len` method implementation, for the reference: https://golang.org/src/container/ring/ring.go?s=2869:2893#L111 <hr> For additional examples, see proposed test suite extension: ```go package foo var sink interface{} func length(xs []byte) int { // ERROR "leaking param: xs" sink = xs // ERROR "xs escapes to heap" return len(xs) } func zero() int { return 0 } type buffer struct { arr [64]byte buf1 []byte buf2 []byte bufPtr1 *[]byte bufPtr2 *[]byte bufList [][]byte bufArr [5][]byte bufPtrList []*[]byte str1 string str2 string next *buffer buffers []*buffer } func (b *buffer) getNext() *buffer { // ERROR "leaking param: b to result ~r0 level=1" return b.next } // When referring to the "old" implementation, cases that are covered in // escape2.go are 
implied. // Most tests here are based on those tests, but with slight changes, // like extra selector expression level. func (b *buffer) noEsc() { // ERROR "\(\*buffer\).noEsc b does not escape" // Like original slicing self-assignment test, but with additional // slicing expressions inside the RHS. b.buf1 = b.buf1[:][1:2] // ERROR "ignoring self-assignment to b.buf1$" b.buf1 = b.buf1[1:][1:2:3] // ERROR "ignoring self-assignment to b.buf1$" b.buf1 = b.buf2[:2][1:][1:2] // ERROR "ignoring self-assignment to b.buf1$" b.buf1 = b.buf2[:][1:2][1:2:3] // ERROR "ignoring self-assignment to b.buf1$" // The "left" (base) part is generalized and can be arbitrary, // as long as it doesn't affect memory. // Basically, these cases add "next" in the buf accessing chain. b.next.buf1 = b.next.buf1[1:2] // ERROR "ignoring self-assignment to b.next.buf1$" b.next.buf1 = b.next.buf1[1:2:3] // ERROR "ignoring self-assignment to b.next.buf1$" b.next.buf1 = b.next.buf2[1:2] // ERROR "ignoring self-assignment to b.next.buf1$" b.next.next.buf1 = b.next.next.buf2[1:2:3] // ERROR "ignoring self-assignment to b.next.next.buf1$" // Indexing functionally is almost identical to field accessing. // It's permitted to have different trailing indexes just as it's // permitted to have different trailing selectors. index1 := 10 index2 := index1 b.bufList[0] = b.bufList[1][1:2] // ERROR "ignoring self-assignment to b.bufList\[0\]$" b.bufList[index1] = b.bufList[index2][1:2:3] // ERROR "ignoring self-assignment to b.bufList\[index1]$" b.bufArr[0] = b.bufArr[1+1][1:2] // ERROR "ignoring self-assignment to b.bufArr\[0]$" *b.bufPtrList[2] = (*b.bufPtrList[1])[1:2:3] // ERROR "ignoring self-assignment to \*b.bufPtrList\[2\]$" // Same indexes should work as well. 
b.bufList[0] = b.bufList[0][1:2] // ERROR "ignoring self-assignment to b.bufList\[0\]$" b.bufList[index1] = b.bufList[index1][1:2:3] // ERROR "ignoring self-assignment to b.bufList\[index1]$" b.bufArr[1+1] = b.bufArr[1+1][1:2] // ERROR "ignoring self-assignment to b.bufArr\[1 \+ 1\]$" *b.bufPtrList[2] = (*b.bufPtrList[2])[1:2:3] // ERROR "ignoring self-assignment to \*b.bufPtrList\[2\]$" // Works for chained indexing as well, but indexing in base objects must match. b.buffers[0].bufList[0] = b.buffers[0].bufList[0][1:2] // ERROR "ignoring self-assignment to b.buffers\[0\].bufList\[0\]" b.buffers[index1+1].bufList[index1] = b.buffers[index1+1].bufList[index1][1:2:3] // ERROR "ignoring self-assignment to b.buffers\[index1\ \+ 1].bufList\[index1\]" b.buffers[1+1].bufArr[1+1] = b.buffers[1+1].bufArr[1+1][1:2] // ERROR "ignoring self-assignment to b.buffers\[1 \+ 1\].bufArr\[1 \+ 1\]" *b.buffers[1].bufPtrList[2] = (*b.buffers[1].bufPtrList[2])[1:2:3] // ERROR "ignoring self-assignment to \*b.buffers\[1\].bufPtrList\[2\]" } func (b *buffer) esc() { // ERROR "leaking param content: b" // None of the cases below should trigger self-assignment optimization. // These slice expressions contain sub-exprs that may affect memory. b.buf1 = b.buf1[:length(b.buf1)][1:2] b.buf1 = b.buf1[1:length(b.buf1)][1:2:3] b.buf1 = b.buf2[:][zero():length(b.buf2)][1:2] b.buf1 = b.buf2[zero()+1:][:][1:2:3] // Due to method call inside the chain, these should not be optimized. // The baseObject(x) returns b.getNext() node for both sides, // but samesafeexpr would not consider them as "same". b.getNext().buf1 = b.getNext().buf1[1:2] b.getNext().buf1 = b.getNext().buf1[1:2:3] b.getNext().buf1 = b.getNext().buf2[1:2] b.getNext().buf1 = b.getNext().buf2[1:2:3] // Different base objects. 
b.next.next.buf1 = b.next.buf1[1:2] b.next.next.buf1 = b.next.buf1[1:2:3] b.next.buf1 = b.next.next.buf2[1:2] b.next.buf1 = b.next.next.buf2[1:2:3] b.bufList[0] = b.bufArr[0][1:2] // Different indexes are not permitted inside base objects. index1 := 10 index2 := index1 b.buffers[0].bufList[0] = b.buffers[1].bufList[0][1:2] b.buffers[index1].bufList[index1] = b.buffers[index2].bufList[index1][1:2:3] b.buffers[1+1].bufArr[1+1] = b.buffers[1+0].bufArr[1+1][1:2] *b.buffers[0].bufPtrList[2] = (*b.buffers[1].bufPtrList[2])[1:2:3] b.buffers[index1+1].bufList[index1] = b.buffers[index1+2].bufList[index1][1:2:3] } func (b *buffer) sanity1() { // ERROR "leaking param content: b" b.next.buf1 = b.next.buf2[:] // ERROR "ignoring self-assignment to b.next.buf1" sink = b.next.buf1 // ERROR "b.next.buf1 escapes to heap" sink = b.next.buf2 // ERROR "b.next.buf2 escapes to heap" } func (b *buffer) sanity2() { // ERROR "b does not escape" b.bufList = b.bufList[:len(b.bufList)-1] // ERROR "ignoring self-assignment to b.bufList" } ```
Performance,NeedsInvestigation
low
Critical
362,114,153
godot
OBJ with many meshes is imported as one large mesh
**Godot version:** 3.0.6 **OS/device including version:** Windows 10 **Issue description:** I have added an OBJ file to my project and it is imported. The problem is that my OBJ contains many mesh parts but everything is mashed together into one mesh instead of being separated into several individual meshes. I can import it into Blender and it is split up as expected. The problem here is I can't reorganize and fine-tune my model after import. I have tried to change import -> import as Scene. It will be imported as a scene but again as one large mesh and not separated into individual meshes. **Steps to reproduce:** Take an OBJ file with separated meshes and import it into Godot [ImportObjTest.zip](https://github.com/godotengine/godot/files/2400767/ImportObjTest.zip)
enhancement,topic:core
low
Major
362,115,308
electron
add upstream doc links to default window
When we run `electron`, a default window opens that shows the Chromium and Node.js versions at the top. It would be really helpful if these were links to the upstream docs, just like the links at the bottom. Old Node.js docs can be found at https://nodejs.org/docs/, but I'm not sure where to find the Chromium HTML, CSS, and JavaScript docs for a specific version.
enhancement :sparkles:,documentation :notebook:,beginner friendly
medium
Major
362,131,704
opencv
opencv.pc file does not include static dependencies
It looks like cmake does not populate the `@OPENCV_PC_LIBS_PRIVATE@` field in [opencv.pc](https://github.com/opencv/opencv/blob/master/cmake/templates/opencv-XXX.pc.in), at least on MacOS. I am using the [homebrew](https://github.com/Homebrew/homebrew-core/blob/master/Formula/opencv.rb) build of opencv. The `opencv.pc` file looks like this: ``` prefix=/usr/local/Cellar/opencv/3.4.2 exec_prefix=${prefix} libdir=${exec_prefix}/lib includedir_old=${prefix}/include/opencv includedir_new=${prefix}/include Name: OpenCV Description: Open Source Computer Vision Library Version: 3.4.2 Libs: -L${exec_prefix}/lib -lopencv_stitching -lopencv_superres -lopencv_videostab -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dpm -lopencv_face -lopencv_photo -lopencv_fuzzy -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_optflow -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_dnn -lopencv_plot -lopencv_xfeatures2d -lopencv_shape -lopencv_video -lopencv_ml -lopencv_ximgproc -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_flann -lopencv_xobjdetect -lopencv_imgcodecs -lopencv_objdetect -lopencv_xphoto -lopencv_imgproc -lopencv_core Libs.private: Cflags: -I${includedir_old} -I${includedir_new} ``` As we can see the `Libs.private:` does not declare the static dependencies and therefore static linking fails. 
In the case of homebrew I would expect something like this in `opencv.pc`: ``` Libs.private: -ltbb -lippicv -lippiw -littnotify -llibjpeg-turbo -llibwebp -lpng -ltiff Requires.private: openexr libavdevice ``` Obviously this depends on the exact configuration, but generally speaking, `Requires.private` names the `.pc` files for external libs that opencv was built (if they have a `pc` file), and `Libs.private` lists the bundled 3rdparty libs, and also external libs that do not have a `pc` file. As a workaround I use the following to get the proper static linking `LIBS`: ```sh LIBS_OPENCV=$(pkg-config --libs --static opencv) LIBS_EXTRA=$(pkg-config --libs --static openexr libavdevice) LIBS_STATIC="-ltbb -lippicv -lippiw -littnotify -llibjpeg-turbo -llibwebp -lpng -ltiff" LIBS="$LIBS_OPENCV $LIBS_STATIC $LIBS_EXTRA" ```
priority: low,category: build/install
low
Minor
362,149,955
rust
`impl Trait` should be able to capture long-lived associated types even if the trait contains a lifetime
Basically, this should compile: ```rust trait Trait<'foo> { type Assoc: 'static; } fn test<'foo, T: Trait<'foo>>(x: T::Assoc) -> impl FnOnce() { move || { let _x = x; } } ``` Thanks to @eddyb I can work around this, writing ```rust trait Trait<'foo> { type Assoc: 'static; } fn test<'foo, A: 'static, T: Trait<'foo, Assoc=A>>(x: T::Assoc) -> impl FnOnce() { move || { let _x = x; } } ``` but that is almost impossible to discover.
A-lifetimes,T-compiler,A-impl-trait,C-bug,T-types
low
Major
362,165,628
angular
The "value" property on radio buttons with FormControl doesn't get updated
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior When I query a radio input element, for example `myRadioInput`, and check its value via `myRadioInput.value`, I get "on" instead of the actual value from the "value" property. ## Expected behavior I expect that `myRadioInput.value` will be the `inputValue` or the `formControl.value` I set on the component. ## Minimal reproduction of the problem with instructions In the following link: 1. Open the console 2. Click on the button 3. See that instead of "good" it shows "on" https://stackblitz.com/edit/angular-reactive-forms-radio-value-bug ## What is the motivation / use case for changing the behavior? For testing purposes, I need to be able to click on a certain radio input to manually select it. That's why it's important to be able to compare the radio's `value` to the manually selected value.
## Environment <pre><code> Angular version: 6.1.8 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [x] Chrome (desktop) version 68.0.3440.106 - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
freq1: low,area: forms,P3,bug
low
Critical
362,195,586
TypeScript
Use React TSX tags without having to write "import React from 'react'"
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section. --> ## Suggestion I have to write `import React from 'react'` every time I use TSX syntax even though I make no explicit mention of the name `React` in the following lines of code (Of course, TSC output will contain mentions of `React` but the key is _explicit_). It is possible to write JSX tags in Babel's JavaScript without `import React from 'react'` by using Babel's/Webpack's plugins. A notable example is [Next.js](https://github.com/zeit/next.js/blob/canary/examples/basic-export/pages/index.js). Therefore I suggest: If user are not mentioning `React` directly, user shouldn't have to declare `React` explicitly. This can be configured in compiler options, something like a combination of `--jsxFactoryPath` and `--jsxModule` for example. <!-- A summary of what you'd like to see added or changed --> ## Examples <!-- Show how this would be used and what the behavior would be --> ### React.js #### Ideal Source Code **tsconfig.json:** ```json { "compilerOptions": { "module": "commonjs", "jsxFactoryPath": ["createElement"], "jsxModule": "react" } } ``` **index.tsx:** ```typescript export = <div id='123'>hello</div> ``` #### Desired Output **index.js:** ```javascript var _jsx = require("react").createElement; module.exports = _jsx("div", { id: "123" }, "hello"); ``` ### Hyperscript #### Ideal Source Code **tsconfig.json:** ```json { "compilerOptions": { "module": "commonjs", "jsxFactoryPath": [], "jsxModule": "hyperscript" } } ``` **index.tsx:** ```typescript export = <div id='123'>hello</div> ``` #### Desired Output ```javascript var _jsx = require("hyperscript"); module.exports = _jsx("div", { id: "123" }, "hello"); ``` ### Generic #### Ideal Source Code **tsconfig.json:** ```json { "compilerOptions": { "module": "commonjs", "jsxFactoryPath": ["obj1", "obj2", 
"func"], "jsxModule": "pkg/dir1/dir2" } } ``` **index.tsx:** ```typescript export = <div id='123'>hello</div> ``` #### Desired Output **index.js:** ```javascript var _jsx = require("pkg/dir1/dir2").obj1.obj2.func; module.exports = _jsx("div", { id: "123" }, "hello"); ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback
low
Critical
362,204,433
create-react-app
Prevent shell output deletion when launching
It's quite silly, but I'd just like to propose that the start script doesn't erase the previous shell output by default. This can be a bit annoying if you ever forget to run it in a separate shell, use a screen/script command, etc. I looked for a possible setting in the docs but didn't find any. Maybe I'm missing something though.
issue: proposal
low
Minor
362,207,990
bitcoin
Callback/notification documentation and cleanup
I believe many aspects of how callbacks or notifications work are not well-documented, particularly the user-facing behaviors (like the various `-*notify` behaviors, zmq interface, and synchronization guarantees between the wallet/mempool/validation). Along with a lack of documentation, I believe there's likely a lack of regression testing to ensure that our software continues to enforce whatever implicit guarantees users may have come to expect. And it makes changes to the behavior more difficult, as it is hard to reason about changes to behavior without some context for what expectations might be. I think it'd be a great project for someone to pick some of these interfaces and document exactly how they work. I imagine this would uncover some oversights that will be a jumping off point for making improvements as well. As an example here are a few questions that show the kinds of things that I think are missing documentation: * What are the ordering guarantees of eg `-blocknotify` callbacks, and what should they be? (#14275 highlights that there is a discrepancy between our own python tests and the behavior in our software) * Exactly which transactions are announced via the zmq interface -- for instance during a reorg, or a transaction being evicted, a transaction being replaced due to conflict, etc? * What ordering guarantees exist (if any) between notifications received via zmq and when the rest of our software (wallet/mempool/validation) reflecting the same underlying event via our RPC interface? * What should the synchronization be between RPC calls that affect validation-level information (like `getbestblockhash` and RPC calls that touch the mempool? (eg see #14193 for a reasonable-looking change that IMO should be evaluated in a larger context about how our interfaces should work). etc.
Docs
low
Minor
362,239,983
pytorch
caffe2 argmax and argmin documentation incorrect for output type
## Issue description It looks like the documentation for the output types of the caffe2 argmax and argmin operators is incorrect. The documentation says that they should be floats: https://github.com/pytorch/pytorch/blob/8f4601fbac5565218212908fe9be57abcbf14145/caffe2/operators/arg_ops.cc#L163 But the infer function is making sure they are int64s: https://github.com/pytorch/pytorch/blob/8f4601fbac5565218212908fe9be57abcbf14145/caffe2/operators/arg_ops.cc#L89 - PyTorch or Caffe2: caffe2 - How you installed PyTorch (conda, pip, source): nix - Build command you used (if compiling from source): - OS: linux - PyTorch version: - Python version: - CUDA/cuDNN version: - GPU models and configuration: - GCC version (if compiling from source): - CMake version: - Versions of any other relevant libraries:
caffe2
low
Minor
362,296,330
go
cmd/compile: detect and apply more CMOV optimizations
### What version of Go are you using (`go version`)? go version go1.11 linux/amd64 ### Does this issue reproduce with the latest release? Yes. And checked with master as of (go version devel +7f3de1f275 Thu Sep 20 14:44:04 2018 +0530 linux/amd64) ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/home/agniva/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/agniva/play/go" GOPROXY="" GORACE="" GOROOT="/usr/local/go" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build407760751=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? Consider this code - ```go package main func min2(a, b int) int { if a < b { return a } return b } func min3(a, b, c int) int { if a < b { if a < c { return a } } else if b < c { return b } return c } var a, b, c int var d int func main() { d = min2(min2(a, b), c) d = min3(a, b, c) } ``` ### What did you expect to see? Here, both functions are trying to get the min of 3 integers, albeit in slightly different manners. The first one performs the min of 2 integers twice; the second one performs it in a single function. One would think that the compiler would be smart enough that the generated code would be equivalent. Or at least, not be [14% faster](https://github.com/agnivade/levenshtein/pull/3) than the other. ### What did you see instead? 
But on investigating the assembly, we see interesting results ``` 1st one 0x0000 00000 (funcreg.go:34) MOVQ "".c(SB), AX 0x0007 00007 (funcreg.go:34) MOVQ "".a(SB), CX 0x000e 00014 (funcreg.go:34) MOVQ "".b(SB), DX 0x0015 00021 (funcreg.go:34) CMPQ CX, DX 0x0018 00024 (funcreg.go:34) CMOVQLT CX, DX 0x001c 00028 (funcreg.go:34) CMPQ DX, AX 0x001f 00031 (funcreg.go:34) CMOVQLT DX, AX 0x0023 00035 (funcreg.go:34) MOVQ AX, "".d(SB) 2nd one 0x002a 00042 (funcreg.go:35) MOVQ "".c(SB), AX 0x0031 00049 (funcreg.go:35) MOVQ "".a(SB), CX 0x0038 00056 (funcreg.go:35) MOVQ "".b(SB), DX 0x003f 00063 (funcreg.go:35) CMPQ CX, DX 0x0042 00066 (funcreg.go:34) JGE 86 0x0044 00068 (funcreg.go:35) CMPQ CX, AX 0x0047 00071 (funcreg.go:35) JGE 81 0x0049 00073 (funcreg.go:35) MOVQ CX, "".d(SB) 0x0050 00080 (funcreg.go:36) RET 0x0051 00081 (funcreg.go:35) MOVQ AX, CX 0x0054 00084 (funcreg.go:35) JMP 73 0x0056 00086 (funcreg.go:35) CMPQ DX, AX 0x0059 00089 (funcreg.go:35) JGE 81 0x005b 00091 (funcreg.go:35) MOVQ DX, CX 0x005e 00094 (funcreg.go:35) JMP 73 ``` Notice that there is no CMOV generated in the 2nd case. Only if we change the min3 code to behave like min2 like this - ```go func min3(a, b, c int) int { min := b if a < b { min = a } if c < min { min = c } return min } ``` then we get CMOV generated. But this is only after I realized how the compiler is converting the code to explicitly generate the CMOV instruction. Originally discussed in a golang-dev thread here - https://groups.google.com/forum/#!topic/golang-dev/JaYi4D-tsbY. /cc @randall77 @rasky
Performance,NeedsInvestigation,compiler/runtime
low
Critical
362,341,609
TypeScript
No suggestions for extended classes.
_From @zavr-1 on September 20, 2018 6:10_ Issue Type: <b>Bug</b> There are no suggestions of the `super` class's methods. VS Code version: Code - Insiders 1.28.0-insider (cd9d71a31f731baf17330a84448c3efdeabc873f, 2018-09-17T05:12:08.581Z) OS version: Darwin x64 15.6.0 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM)2 Duo CPU P7350 @ 2.00GHz (2 x 2000)| |GPU Status|2d_canvas: unavailable_software<br>checker_imaging: disabled_off<br>flash_3d: unavailable_software<br>flash_stage3d: unavailable_software<br>flash_stage3d_baseline: unavailable_software<br>gpu_compositing: unavailable_software<br>multiple_raster_threads: disabled_off<br>native_gpu_memory_buffers: unavailable_software<br>rasterization: unavailable_software<br>video_decode: unavailable_software<br>video_encode: unavailable_software<br>webgl: unavailable_off<br>webgl2: unavailable_off| |Load (avg)|2, 2, 2| |Memory (System)|8.00GB (1.68GB free)| |Process Argv|/Applications/Visual Studio Code - Insiders.app/Contents/MacOS/Electron -psn_0_36873| |Screen Reader|no| |VM|0%| </details><details><summary>Extensions (5)</summary> Extension|Author (truncated)|Version ---|---|--- vscode-eslint|dba|1.6.0 svg|joc|0.1.2 autoimport|ste|1.5.3 code-spell-checker|str|1.6.10 vscode-wakatime|Wak|1.2.3 </details> <!-- generated by issue reporter --> Suppose I want to spawn a process, and return it: ```js export default function spawnCommand(command, args = [], options = {}) { if (!command) throw new Error('Please specify a command to spawn.') const proc = spawn(command, args, options) return proc } ``` Then in suggestions, I can easily access it. 
![screen shot 2018-09-20 at 09 03 02](https://user-images.githubusercontent.com/21156791/45798835-080cec80-bcb4-11e8-9ea4-0083307b2ca3.png) Now suppose I want to add a `promise` property resolved on exit to it: ```js export default function spawnCommand(command, args = [], options = {}) { if (!command) throw new Error('Please specify a command to spawn.') const proc = spawn(command, args, options) const promise = getPromise(proc) proc.promise = promise return proc } ``` ![screen shot 2018-09-20 at 09 04 35](https://user-images.githubusercontent.com/21156791/45798886-37235e00-bcb4-11e8-861d-63de38d6aa9c.png) It is not suggested. What I really want to do, is to be able to write ```js /** @typedef {Object} PromiseResult * @prop {string} stdout The accumulated result of the `stdout` stream. * @prop {string} stderr The accumulated result of the `stderr` stream. * @prop {number} code The code with which the process exited. * * @typedef {ChildProcess} ChildProcessWithPromise A child process with an extra `promise` property. * @prop {Promise.<PromiseResult>} promise A promise resolved when the process exits. */ ``` And then use the `ChildProcessWithPromise` as my return type. However, this does not work as `ChildProcessWithPromise` type is only recognised as `ChildProcess`. Therefore, I want to extend the `ChildProcess` class, and transform the instance I got from a `spawn` into it: ```js class ChildProcessWithPromise extends ChildProcess { /** * @param {ChildProcess} * @param {Promise.<PromiseResult>} promise The promise resolved when the process exits. 
*/ constructor(p, promise) { super() this.promise = promise Object.assign(this, p) } } export default function spawnCommand(command, args = [], options = {}) { if (!command) throw new Error('Please specify a command to spawn.') const proc = spawn(command, args, options) const promise = getPromise(proc) const p = new ChildProcessWithPromise(proc, promise) return p } ``` However, now I lose all of the `ChildProcess` properties: ![screen shot 2018-09-20 at 09 08 07](https://user-images.githubusercontent.com/21156791/45799020-bf096800-bcb4-11e8-899c-787033b87fc2.png) So could you please a) suggest extra properties written with `typedef`, and b) fix the suggestions for extended classes? Much appreciated!!! This is btw for my [`spawncommand`](https://github.com/artdecocode/spawncommand) package which just reduces the need to manually create a promise for processes. _Copied from original issue: Microsoft/vscode#59015_
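For what it's worth, one pattern that sidesteps subclassing entirely is an intersection `@typedef` (`ChildProcess & { promise: ... }`), which the JS checker can follow for completions. A runnable sketch — all names here are illustrative, and a tiny `ProcLike` object stands in for a real `ChildProcess`:

```javascript
/**
 * Sketch of the intersection-typedef workaround (illustrative names).
 * Instead of extending ChildProcess, describe "a process plus a promise"
 * as an intersection type and cast the augmented instance.
 *
 * @typedef {{ pid: number }} ProcLike
 *   Stand-in for ChildProcess so the sketch runs anywhere.
 * @typedef {ProcLike & { promise: Promise<number> }} ProcWithPromise
 */

/**
 * @param {ProcLike} proc
 * @returns {ProcWithPromise}
 */
function withPromise(proc) {
  const p = /** @type {ProcWithPromise} */ (proc)
  p.promise = Promise.resolve(p.pid) // resolve with something observable
  return p
}

const p = withPromise({ pid: 123 })
// Completions on `p` now include both `pid` and `promise`.
p.promise.then((pid) => console.log('resolved with pid', pid))
```

With a real `ChildProcess` the first typedef would instead be `import('child_process').ChildProcess`.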
Bug,Domain: Completion Lists,Domain: JavaScript
low
Critical
362,346,953
realworld
Functionality Testing
It seems that some of the apps aren't actually functioning properly. The only one I've had complete success with is the Keechma app so far: ## Quick summary of non-functional features Framework | Routing | Tagging | Article | Profile | Other --- | --- | --- | --- | --- | --- Angular | home (logo) | article tag nav | | | Images broken Angularjs | | article tag nav | | | Redux | home (logo) | article tag nav | | | No "favorite" Re-frame | | article edit, tag nav | | | Keechma | | | | | Can't delete comments Vue | | article tag nav | | | Mobx | | article tag nav | | | Svelte | | article tag nav | Publish sticks, no favorite | | Apprun | | tag nav persists over feed | | Profile Update | Dojo2 | home (link) | | | | Crizmas | | article tag nav | | | Aurelia | | article edit, display, tag nav | profile links | | Whole app locks up with some clicking around pages Elm | | article edit, tag nav | | | I think there should be some kind of user acceptance testing for these if they're supposed to compare apples to apples. I may have missed some, ran out of time to really dig into them all
v2
medium
Critical
362,354,481
rust
rustdoc: doc comments on `use` and `extern crate` statements run doctests
The following two files fail `cargo test --doc`: ```rust /// ``` /// panic!("oh no"); /// ``` use std::fs::File; ``` ```rust /// ``` /// panic!("oh no"); /// ``` extern crate asdf; ``` ...despite neither of these code blocks actually getting displayed in docs. Until we decide to display any docs written on these kinds of statements, we should stop scanning them for doctests.
T-rustdoc,C-bug,A-doctests
low
Minor
362,386,278
TypeScript
Mix on inline/external defined properties corrupt javascript intellisense
_From @mobisga on September 20, 2018 10:4_ Issue Type: <b>Bug</b> Defining a variable: var test = { prop1: '' }; test.prop2 = ''; intellisense does NOT recognize prop2 as a property belonging to test Whereas defining: var test = { }; test.prop2 = ''; intellisense does recognize prop2 as a property belonging to test as it should VS Code version: Code 1.27.2 (f46c4c469d6e6d8c46f268d1553c5dc4b475840f, 2018-09-12T16:17:45.060Z) OS version: Windows_NT x64 10.0.15063 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i5-7500T CPU @ 2.70GHz (4 x 2712)| |GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: enabled<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled| |Memory (System)|15.85GB (9.39GB free)| |Process Argv|C:\Program Files\Microsoft VS Code\Code.exe C:\Users\_\Desktop\KeyEventsRuleTest.html| |Screen Reader|no| |VM|0%| </details><details><summary>Extensions (4)</summary> Extension|Author (truncated)|Version ---|---|--- ng-template|Ang|0.1.10 tslint|eg2|1.0.39 csharp|ms-|1.16.1 debugger-for-chrome|msj|4.10.1 (1 theme extensions excluded) </details> <!-- generated by issue reporter --> _Copied from original issue: Microsoft/vscode#59033_
Bug,Domain: JavaScript
low
Critical
362,390,951
flutter
Support creation of an engine AAR instead of a JAR for Android.
Support creation of an AAR instead of a JAR for Android. Android uses XML resources as a canonical element of everyday development. It is possible to reference FlutterFragment and FlutterView from XML, but the Flutter embedding currently has no capability to expose XML attributes to customize those elements. For example, we might want to do this: ```xml <io.flutter.embedding.FlutterView android:layout_width="match_parent" android:layout_height="match_parent" app:splash="@android:color/blue" /> ``` The definition of `app:splash` needs to be setup as an XML resource. But we can't do this in the embedding because we aren't producing AARs and therefore we cannot include any resources. The JAR restriction also prevents us from defining any of our own XML layouts, defining our own colors, defining any Flutter-specific IDs, etc.
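For reference, this is the sort of resource an AAR could carry — an illustrative sketch of a `res/values/attrs.xml`; the styleable and attribute names are hypothetical, not an existing embedding API:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- Hypothetical styleable backing app:splash on FlutterView.
         Only an AAR can bundle resources like this; a JAR cannot. -->
    <declare-styleable name="FlutterView">
        <attr name="splash" format="color|reference" />
    </declare-styleable>
</resources>
```

The view would then read the attribute in its constructor via `obtainStyledAttributes`.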
platform-android,engine,e: embedder,a: existing-apps,P2,team-android,triaged-android
low
Major
362,425,846
go
doc/articles/wiki: "Writing Web Applications" example writes the response header twice
### What version of Go are you using (`go version`)? go1.11 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><br> ``` GOARCH="amd64" GOBIN="/home/jabenze/go/bin" GOCACHE="/home/jabenze/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/jabenze/go" GOPROXY="" GORACE="" GOROOT="/usr/lib/go" GOTMPDIR="" GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build876174949=/tmp/go-build -gno-record-gcc-switches" ``` </details> ### What did you do? Follow the "Writing Web Applications" tutorial, up through the error-handling section: https://golang.org/doc/articles/wiki/#tmp_9. Look at the function `renderTemplate`, reproduced here: ``` func renderTemplate(w http.ResponseWriter, tmpl string, p *Page) { t, err := template.ParseFiles(tmpl + ".html") if err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) return } err = t.Execute(w, p) if err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) } } ``` Modify the `Page` data structure by removing a field (or modify the template to reference a nonexistent field) so that the line `err = t.Execute(w, p)` causes an error. Observe the response code sent by the application. The line `http.Error(w, err.Error(), http.StatusInternalServerError)` would imply a response code of 500; however, the application returns 200. This is because `t.Execute` calls `w.Write`, which, because `w.WriteHeader` has not yet been called, causes an implicit call to `w.WriteHeader(200)`. Further calls to `w.WriteHeader`, such as the one inside `http.Error`, are ignored. 
There is no way to send a 500 error code in all error cases, since `w.Write` itself can return an error, and at some point one has to simply log the error and return without setting an error code. However, the example could render to a `bytes.Buffer` first so that template-rendering errors are caught before any bytes are written. ### What did you expect to see? Based on the code, a 500 error code. ### What did you see instead? Error code 200.
Documentation,NeedsFix
low
Critical
362,518,743
rust
Testing `Sync` diverges on recursive type
Typechecking the following program diverges: ```rust #![feature(never_type)] struct Foo<'a, T: 'a> { ph: std::marker::PhantomData<T>, foo: &'a Foo<'a, (T, T)>, } fn wub(f: Foo<!>) { sync(f) } fn sync<T: Sync>(x: T) {} fn main() {} ``` Cc @eddyb @nikomatsakis
A-type-system,T-compiler,T-types
low
Minor
362,528,603
go
path/filepath: Clean doesn't remove trailing backslash from Windows volume name
### What version of Go are you using (`go version`)? go 1.10.3 windows/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? Windows 7, Core I7, x64 ### What did you do? > filepath.Clean(\`\\\\somepath\dir\\`) ### What did you expect to see? `\\somepath\dir` (without trailing slash as per documentation) ### What did you see instead? `\\somepath\dir\` (with trailing slash) It seems the problem is the two backslashes at the beginning; all other cases work just fine.
OS-Windows,NeedsInvestigation
low
Major
362,555,988
vue
Different functional components have the same key
### Version 2.5.17 ### Reproduction link [https://github.com/vedmaque/vue-functional-bug](https://github.com/vedmaque/vue-functional-bug) ### Steps to reproduce 1) Create `Component1` ``` <template functional> <div class="a"> <div class="b">first (template)</div> <div class="b">component</div> </div> </template> ``` 2) Create `Component2` ``` <template functional> <div class="x"> <div class="y">second (template)</div> <div class="y">component</div> </div> </template> ``` 3) Render them in App. ``` <template> <div id="app"> <component1 /> <component2 /> </div> </template> ``` ### What is expected? Everything works fine ### What is actually happening? Vue warns about the same key ``` [Vue warn]: Duplicate keys detected: '__static__0'. This may cause an update error. ``` --- If you create the same components from a render function directly, it works fine, without duplicated keys (keys are `undefined` in this situation) ``` <script> export default { functional: true, render(createElement) { return createElement("div", { staticClass: "a" }, [createElement("div", { staticClass: "b" }, ["first"]), createElement("div", { staticClass: "b" }, ["component"])]) } } </script> ``` This image shows the difference in VNode objects. ![screenshot](https://github.com/vedmaque/vue-functional-bug/raw/master/functional-bug.png) Moreover, if `Component1` looks like this, it works fine too. ``` <template functional> <div class="a">first (template) component</div> </template> ``` Vue Template Explorer will also go crazy if you try to compile `Component1`: https://template-explorer.vuejs.org/#%3Cdiv%20class%3D%22a%22%3E%0A%20%20%3Cdiv%20class%3D%22b%22%3Efirst%20(template)%3C%2Fdiv%3E%0A%20%20%3Cdiv%20class%3D%22b%22%3Ecomponent%3C%2Fdiv%3E%0A%3C%2Fdiv%3E%0A <!-- generated by vue-issues. DO NOT REMOVE -->
has workaround
low
Critical
362,556,513
godot
[Bullet] Kinematicbody snap drags objects down even without any velocity
Godot version 534b7ef Kinematicbody snap drags objects down even without any velocity. **This is 3D, it is just sideview to better illustrate what is happening.** ![snap_drag](https://user-images.githubusercontent.com/40793342/45876405-433a1900-bda3-11e8-92a7-119e0840f08f.gif) This is the full code in kinematicbody. ``` extends KinematicBody var vel = Vector3() func _physics_process(delta): vel = move_and_slide_with_snap(vel, Vector3(0, -1, 0), Vector3(0, 1, 0)) ``` Copied here from one of my comments for visibility: > Increasing collision safe margin makes it go faster so I think something like this happens: > Kinematicbody snaps straight down to floor, then separates to safe margin along collision normal. > Since collision normal is not parallel to y axis it moves slightly sideways and then this repeats. [Minimal Project.zip](https://github.com/godotengine/godot/files/2404944/Minimal.Project.zip)
bug,topic:physics
medium
Major
362,565,463
rust
Should `partial_cmp` on `Range` implement interval order?
Hi, Whilst implementing a data structure, I've noticed a quirk (or bug) in the `Range` implementation. It appears that we can query the partial ordering relation between two ranges using `partial_cmp`. For example: ``` assert_eq!((0..5).partial_cmp(10..20), Some(Ordering::Less)); assert_eq!((5..6).partial_cmp(1..5), Some(Ordering::Greater)); assert_eq!((5..10).partial_cmp(8..9), None); ``` The last assertion here fails: ``` left: `Some(Less)`, right: `None`', src/lib.rs:81:9 ``` This surprised me, as (unless I am mistaken) overlapping ranges cannot be compared in the interval order. In the above example, both 8 and 9 are members of both ranges. From Wikipedia (https://en.wikipedia.org/wiki/Interval_order): > one interval, I1, being considered less than another, I2, if I1 is completely to the left of I2 I suspect that the partial order of ranges has been `derive`d, leading to pairwise comparisons of the upper and lower bounds, which is a different partial ordering (actually a total ordering I think). Am I right? If so, should `Range` implement interval order? In either case, improved documentation would really help. Thanks
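For comparison, an interval-order comparison (the behavior the report expected) could be sketched like this — `interval_cmp` is a hypothetical helper, not anything in std:

```rust
use std::cmp::Ordering;
use std::ops::Range;

// Hypothetical interval-order comparison for half-open ranges:
// a < b iff every element of a is strictly left of every element of b,
// i.e. a.end <= b.start. Overlapping, non-equal ranges are incomparable.
fn interval_cmp<T: PartialOrd>(a: &Range<T>, b: &Range<T>) -> Option<Ordering> {
    if a.start == b.start && a.end == b.end {
        Some(Ordering::Equal)
    } else if a.end <= b.start {
        Some(Ordering::Less)
    } else if b.end <= a.start {
        Some(Ordering::Greater)
    } else {
        None // the ranges overlap
    }
}

fn main() {
    assert_eq!(interval_cmp(&(0..5), &(10..20)), Some(Ordering::Less));
    assert_eq!(interval_cmp(&(5..6), &(1..5)), Some(Ordering::Greater));
    // 8 and 9 are members of both ranges, so they are incomparable:
    assert_eq!(interval_cmp(&(5..10), &(8..9)), None);
}
```

Note this sketch doesn't give special treatment to empty ranges, which an interval order on actual intervals would have to consider.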
C-enhancement,T-libs-api,A-docs
low
Critical
362,608,011
TypeScript
replace implicit 'any' with 'unknown'
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section. --> ## Search Terms noImplicitAny, strict, any, unknown <!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily --> ## Suggestion When enabling `"strict": true` or `"noImplicitAny": true` in a project, could we assign `unknown` to un-inferred/un-typed variables, instead of causing a compiler error? This would be a non-breaking change, since all cases where this would have an effect are already compiler errors. ## Use Cases It would allow avoiding having to specify a type when it's being type-checked anyway. The current implementation forces converted-from-js projects to deal with a *lot* of compiler errors from the outset (many of which use various runtime type-checking methods so should be happy with implicit `unknown`s). ## Examples ```typescript const getMessage = input => typeof input === 'string' ? input : 'hello'; const shortenMessage = message => message.substring(3); ``` the `getMessage` example would be an error currently, but would be fine in the new world and the function would have return type `string`. the `shortenMessage` example would have a changed error. instead of `Parameter 'message' implicitly has an 'any' type` we'd see an error at `message.substring` of `Object is of type 'unknown'.` To me that's more intuitive in terms of what the problem actually is. `message` _is_ unknown so it should have type `unknown`. 
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback,Add a Flag
medium
Critical
362,639,360
TypeScript
recursive return type for deterministic generators
## Suggestion
If a generator neither yields inside a loop nor defers to another generator, it's considered deterministic. For any given number of `next` calls, the compiler could theoretically reduce the generator to the exact type of its next yield. Of course, because generators are mutable, this isn't true. But what if I'm using a library that allows for [immutable generators](https://github.com/pelotom/immutagen)? For every `next` call, a new generator is returned. Maybe it should be possible to introspect the order of yields in a deterministic generator, and use that to deduce a recursive type structure that represents an immutable generator.

```ts
import immutagen from 'immutagen'

const gen = immutagen(function* foo() {
  yield 0
  yield 1
})

const a = gen()
const b = a.next()

typeof a // => { value: 0, next: () => typeof b }
typeof b // => { value: 1, next: undefined }
```

## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback
low
Minor
362,640,799
rust
Vtables not position independent for target thumbv7em-none-eabi
# Summary
I'm having trouble using dynamic dispatch for trait objects on a Cortex-M4. The setup is rather complicated, as relocated binaries only make sense for binaries which will actually be relocated. In this case I try to run a Tock app created with libtock-rs. The vtables in the binary get relocated; however, the contents of the vtables are not position independent but point to the load addresses of the functions to be called. I have written a "fixup routine" in the following pull request, but it uses a very rough heuristic to identify vtables in the .rodata section: https://github.com/tock/libtock-rs/pull/56

# Description of the problem
I compile (using nightly-2018-08-16) the traitobj example in the PR using the ropi relocation model. Access to the vtable works just fine, as the addresses of the vtables are accessed relative to the position of the application in flash. However, the addresses in the vtables point to the static load addresses in the binary of the functions to call. Currently, I solve the problem using a fixup routine which fixes the addresses in the vtables (and potentially destroys other elements of .rodata).

# Expected behavior
The addresses in the vtables should be accessed in a way which is position independent. At the very least, there should be a way to identify vtables in the .rodata segment, so that at least in theory the vtables could be fixed. However, the behavior of the Rust compiler recently changed so that it creates vtable symbols whose names no longer contain "vtable", so it is now impossible for me to find these symbols (Rust nightly 2018-06-14 still creates symbols named *vtable*).

# Provide information
The description provided here is relatively rough. Please help me provide the information necessary to investigate the problem.
A-linkage,A-LLVM,O-Arm,T-compiler,C-feature-request
low
Major
362,674,885
pytorch
torch.bmm doesn't support CUDA uint8 (byte) tensor
Hi,

It seems that byte (uint8) tensors are not supported by the `bmm` operator on GPU. The function `batch_mm` in file `pytorch/torch/jit/batchop.py` uses it. Applying the attached patch seems to solve the problem on GPU.

[patch_batchop.txt](https://github.com/pytorch/pytorch/files/2406035/patch_batchop.txt)

Have a nice day.

cc @ngimel @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk
todo,module: bootcamp,module: cuda,triaged,enhancement,module: linear algebra
low
Major
362,677,531
go
runtime,cmd/compile: string concatenation specializations for performance
[CL123256](https://go-review.googlesource.com/c/go/+/123256) started a discussion on which string concatenation specialization we should choose (if any). String concatenation can be made faster if we specialize concatenation with an escaping result and len(args) <= 5 (N) arguments. An escaping result means that we can avoid passing `buf` (which is always `nil`) as well as use `rawstring` instead of `rawstringtmp`. A known N means that we can unroll loops.

There are several ways (and these are not all of them):

1. Specialize for N=2 (`x+y`). Less code, seems to be the most frequent case, the boost is maximal, but it does not improve any other concatenation (like `x+y+z`, which uses `concatstring3`).
2. Specialize for all N, but for a marginal performance boost that may not even be worth it.
3. Specialize for N={2,3,4,5}. More code.
4. Specialize for N={2,3}, which covers **most** concatenations. Speeds up concat2 and concat3 measurably.

I've started with (1) since it's the easiest change, requires the smallest amount of changes, and gives a significant performance gain. But in order to make a decision, extra feedback is required.

Here is a comparison of (1) against unoptimized concat:

```
name            old time/op  new time/op  delta
Concat2-8       74.2ns ± 0%  53.5ns ± 1%  -27.95%  (p=0.000 n=9+15)
Concat3-8       94.9ns ± 0%  94.8ns ± 0%     ~     (p=0.082 n=14+15)
Concat2Empty-8  21.4ns ± 1%  14.2ns ± 0%  -33.75%  (p=0.000 n=15+14)
Concat3Empty-8  23.9ns ± 1%  23.9ns ± 1%     ~     (p=0.756 n=15+15)
[Geo mean]      43.6ns       36.2ns       -16.88%
```

Comparison of the (4) implementation against go tip:

```
name            old time/op  new time/op  delta
Concat2-8       74.2ns ± 0%  66.1ns ± 1%  -10.95%  (p=0.000 n=9+15)
Concat3-8       94.9ns ± 0%  71.9ns ± 1%  -24.22%  (p=0.000 n=14+15)
Concat2Empty-8  21.4ns ± 1%  21.1ns ± 0%   -1.56%  (p=0.000 n=15+15)
Concat3Empty-8  23.9ns ± 1%  16.6ns ± 1%  -30.63%  (p=0.000 n=15+12)
[Geo mean]      43.6ns       35.9ns       -17.61%
```

Note that these numbers represent overhead savings, not a general-case string concatenation performance boost, since the lengths of the strings significantly determine these timings.

Benchmarks:

```go
package foo

import "testing"

//go:noinline
func concat2(x, y string) string {
	return x + y
}

//go:noinline
func concat3(x, y, z string) string {
	return x + y + z
}

var x = "foor"
var y = "2ews"
var z = ""

func BenchmarkConcat2(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = concat2(x, "abc")
	}
}

func BenchmarkConcat3(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = concat3(x, "abc", y)
	}
}

func BenchmarkConcat2Empty(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = concat2(x, "")
	}
}

func BenchmarkConcat3Empty(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = concat3("", x, z)
	}
}
```

I would like to know:

1. That this kind of optimization is welcome.
2. If it is desired, we need to choose which approach to take.

CC @martisch
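As a language-agnostic illustration (a sketch, not the actual Go runtime code) of the single-allocation strategy that the `concatstringN` helpers use — compute the total length up front, then copy each argument into one buffer:

```python
def concat_n(*parts):
    # Mimic the runtime helpers: measure all parts first, allocate the
    # result once, then copy each part in. Assumes ASCII input for
    # simplicity, so character count equals byte count.
    total = sum(len(p) for p in parts)
    buf = bytearray(total)   # single allocation of the final size
    pos = 0
    for p in parts:
        buf[pos:pos + len(p)] = p.encode()
        pos += len(p)
    return buf.decode()

assert concat_n("foor", "abc", "2ews") == "foorabc2ews"
```

Specializing for a known N would additionally let the compiler unroll the loop above, which is where the measured overhead savings come from.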
Performance,NeedsInvestigation,compiler/runtime
low
Major
362,678,162
pytorch
[feature request] Kumaraswamy distribution
## Issue description There is currently no `Kumaraswamy` distribution in `torch.distributions`. It would be nice to have! It is already available in Tensorflow Probability. I'm happy to contribute it. cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw
module: distributions,feature,triaged
low
Minor
362,718,987
TypeScript
Object spread drops index signature
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.0.1 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** spread index signature, spread indexer **Code** ```ts declare var v: Record<string, string>; let v2 = {...v}; // works let v3 = {a: '', ...v}; // index signature lost let v4 = {...v, a: ''}; // same as above let v5 = {a: 1, ...v}; // index signature lost, should be '{a: number, [x: string]: number | string}' let v6 = {[Number()]: 1, ...v}; // empty object type ò,Ó, should be '{[x: string]: number | string}' ``` **Expected behavior:** Index signature exists on all object types. **Actual behavior:** Index signature is lost if object literal contains another property assignment. I'm not entirely sure what the type of `v6` should be. **Playground Link:** https://agentcooper.github.io/typescript-play/#code/CYUwxgNghgTiAEA3WSBc8BK4D2NgB4BnAFxgEsA7AcwBp4TzqA+AbgCg2IRikAmeALzwA3gDpxiAL4t4AelnwA7rgDWhTtyQBmQSKjoA5AbrjRUmfPiVQAD3pkqFKMQCuceBGwkNPRABZdMQk6fXgjaTkFQigAWwQoQngoACNsRBAfJABWQNCARhMJCMtrEDtCBydXd08SOkIAC2wXCGB4ZIQDYVCKFxiOmDoAbRt0BkoqAF10Xv6QGHgAH3pSCckDTMQANkChgDk+gYAKAEpp+AL4U3NI+BAYgAdiAE94bGSAK3AeF4eEACeaABl+pNFptDphYQjMarajnWYDJYrRhUdZAA **Related Issues:** #16694 #16373
Suggestion,In Discussion
medium
Critical
362,750,343
go
proposal: spec: reduce unspecified behaviors of expression evaluation orders in assignment statements
_(As [this proposal](https://github.com/golang/go/issues/27098) was rejected, I am changing it to a Go 2 proposal instead.)_

One feature of Go is its support for multi-value assignments. However, the rules for expression evaluation order in a multi-value assignment are currently under-specified, which causes many ambiguities in practice. Please read [this](https://github.com/go101/go101/wiki/Some-evaluation-orders-in-multi-value-assignments-are-unspecified) ~~and [this](https://github.com/go101/go101/wiki/An-ambiguity-of-(or-dispute-on)-the-evaluation-order-of-LHS-(left-hand-side)-items-in-a-multi-value-assignment)~~ for details.

As a so-called C+ language, one of the main goals of Go is to remove as many of C's unspecified behaviors as possible, particularly those that are commonly encountered. So here I propose that Go 2 should specify more of the expression evaluation order in a multi-value assignment.

Currently, the Go specification says:

> At package level, initialization dependencies determine the evaluation order of individual initialization expressions in variable declarations. Otherwise, when evaluating the operands of an expression, assignment, or return statement, all function calls, method calls, and communication operations are evaluated in lexical left-to-right order.

By this rule, the following program must print `1` and `2` for `y` and `z`, but the printed value of `x` might be `0`, `1`, or `2`. The standard Go compiler prints `2` for `x`, which is counterintuitive for most programmers.

```golang
package main

func main() {
	a, b := 0, 0
	f := func() int {
		a++
		b++
		return b
	}
	x, y, z := a, f(), f()
	println(x, y, z)
}
```

If we (or the compiler) rewrite the last multi-value assignment in the above example as

```golang
x, y, z := func() int { return a }(), f(), f()
```

then the printed result is `0 1 2` for sure. In other words, there are no technical obstacles to **evaluating the expressions in a ~~multi-value~~ assignment by the from-left-to-right rule**.

The reason the Go specification doesn't adopt this more specific rule might be related to code-optimization considerations. The bold line is my proposal.

[edit: the following content is moved to a new issue: https://github.com/golang/go/issues/27821]
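For comparison, here is the same experiment sketched in Python, whose language reference does guarantee strict left-to-right evaluation of the right-hand side of a tuple assignment, producing the `0 1 2` result this proposal asks for:

```python
a = 0
b = 0

def f():
    # Increments both counters and returns b, like the Go closure above.
    global a, b
    a += 1
    b += 1
    return b

# The right-hand side is evaluated strictly left to right, so x captures
# a's value *before* either call to f() runs.
x, y, z = a, f(), f()
assert (x, y, z) == (0, 1, 2)
```

With a specified left-to-right rule, the Go program would be required to print the same unambiguous result.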
LanguageChange,Proposal,LanguageChangeReview
medium
Critical
362,777,215
flutter
Option to mimic cmd+k event in IOS to open/hide virtual keyboard
Hello, I have a physical keyboard attached to my iOS device; by default the virtual keyboard is hidden. It can be brought back by pressing cmd+k on the physical keyboard. Is there a way to programmatically trigger this key event so that we can bring up the virtual keyboard by pressing a RaisedButton widget? Thank you in advance.
a: text input,c: new feature,platform-ios,framework,engine,P2,team-ios,triaged-ios
low
Minor
362,777,417
bitcoin
Unbounded growth of scheduler queue
In `validation.cpp`, both `ConnectTip` and `DisconnectTip` invoke a `GetMainSignals()...` callback, which schedules a future event through the CScheduler interface. These functions are called in 3 places:

- `ActivateBestChainTipStep`
- `InvalidateBlock`
- `RewindBlockIndex`

The first of these 3 prevents the scheduler queue from growing unboundedly, by limiting the size to 10 in `ActivateBestChainTip`. The other two however do not, and @Sjors discovered that a giant `invalidateblock` RPC call will in fact blow up the node's memory usage (which turns out to be caused by the queue events holding all disconnected blocks in memory).

I believe several improvements are necessary:

1. (short term symptom fix) If this issue also appears for `RewindBlockIndex`, we must fix it before 0.17, as it may prevent pre-0.13 nodes from upgrading. I think this is easy, as that function is called prior to normal node operation, so it can easily be reworked to release `cs_main` intermittently and drain the queue.
2. (long term fix) The use of background queues is a means to increase parallelism, not a way to break lock dependencies (as @gmaxwell pointed out privately to me, whenever a background queue is used to break a dependency, attempts to limit its size can turn into a deadlock). One idea is to have a debug mode where the background scheduler immediately runs all scheduled actions synchronously, forcing all add-to-queue calls to happen without locks held. Once everything works in such a mode, we can safely add actual queue size limits as opposed to ad-hoc checks to prevent the queue from growing too large.
3. (orthogonal memory usage reduction) It's unfortunate that our `DisconnectBlock` events hold a shared pointer to the block being disconnected. Ideally there is a shared layer that only keeps the last few blocks in memory, and releases and reloads them from disk when needed otherwise. This wouldn't fix the unbounded growth issue above, but it would reduce its impact. It's also independently useful for reducing memory usage during very large reorgs (where we can't run callbacks until the entire thing has completed).
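As a minimal illustration of the size-limiting idea (names are hypothetical, not Bitcoin Core's API): the producer drains the queue whenever a size limit is exceeded, so pending events — and the blocks they hold alive — never accumulate without bound.

```python
from collections import deque

class BoundedScheduler:
    """Toy model: callbacks queue up, but the producer drains the queue
    (runs pending callbacks) once it exceeds the limit, mimicking the
    size-10 check described for ActivateBestChainTip."""
    def __init__(self, limit=10):
        self.limit = limit
        self.queue = deque()

    def schedule(self, callback):
        self.queue.append(callback)
        if len(self.queue) > self.limit:
            self.drain()

    def drain(self):
        while self.queue:
            self.queue.popleft()()

sched = BoundedScheduler(limit=10)
ran = []
for i in range(25):
    sched.schedule(lambda i=i: ran.append(i))
# The queue never holds more than limit + 1 pending events.
assert len(sched.queue) <= 11
```

The deadlock caveat from point 2 applies here too: draining inline is only safe if `schedule` is never called with locks held that the callbacks themselves need.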
Brainstorming,Resource usage
medium
Critical
362,781,447
pytorch
dtype mismatch error messages can be misleading
Example:

```
In [2]: import torch
   ...: import torch.nn as nn
   ...:
   ...: mat1 = torch.randn(3, 3, dtype=torch.double)
   ...: mat2 = torch.randn(3, 3, dtype=torch.float)
   ...: torch.mm(mat1, mat2)
   ...:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-2-3bb89e9eafdc> in <module>()
      4 mat1 = torch.randn(3, 3, dtype=torch.double)
      5 mat2 = torch.randn(3, 3, dtype=torch.float)
----> 6 torch.mm(mat1, mat2)

RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2'
```

Consider the following:

- Let's say that, as a user, I was given mat2 from a black box.
- The user supplies mat1 and then gets this error message, which is misleading because what the user really needs to hear is that mat1 (scalar type Double) doesn't match mat2 (scalar type Float), and mat1 is the argument they can change.

Maybe we should change the error message to say "Scalar type mismatch: got scalar type Double for argument #1 mat1 and scalar type Float for argument #2 mat2"

What do we think?
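A sketch of how the proposed symmetric message could be assembled (the helper name and tuple format are hypothetical, not PyTorch internals):

```python
def scalar_type_mismatch_message(args):
    """Build an error message that names every argument and its dtype,
    rather than blaming only the later argument.
    `args` is a list of (position, name, scalar_type) tuples."""
    parts = [
        "scalar type {} for argument #{} {}".format(stype, pos, name)
        for pos, name, stype in args
    ]
    return "Scalar type mismatch: got " + " and ".join(parts)

msg = scalar_type_mismatch_message([(1, "mat1", "Double"), (2, "mat2", "Float")])
assert msg == ("Scalar type mismatch: got scalar type Double for argument #1 mat1 "
               "and scalar type Float for argument #2 mat2")
```

Because neither argument is singled out, the message stays accurate regardless of which operand the user actually controls.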
todo,module: error checking,module: molly-guard,triaged
low
Critical
362,795,325
rust
FFI mechanism to declare a symbol for an array
If I have a C declaration

```c
char *symbol = "hello world";
```

I can declare that symbol in Rust as

```rust
extern "C" {
    pub static symbol: *c_char;
}
```

But if I have a C declaration

```c
char symbol[] = "hello world";
```

I can't directly declare that in Rust in an immediately usable way. In this case, `symbol` refers *directly* to the array of characters; while in C it will "degrade" to a pointer if used in a context that expects a pointer, in Rust a declaration referring to `symbol` will refer to the characters directly. So, for instance, declaring it as a `*c_char` will result in a pointer whose numeric value contains the first `sizeof::<*c_char>()` bytes of `"hello world"`. Declaring `symbol` as a `[c_char; 0]` and then obtaining and using its pointer seems wrong.

I can think of a few useful ways to do this, one more straightforward than the others. One would be to have a means of defining, in an `extern "C"` block, something that gets the value of the *address* of the symbol, just as if in C I'd written `char *symbol_ptr = symbol;` and then referenced *that* from Rust. That seems easy enough to do, modulo bikeshedding over the syntax.

Another would be to define `symbol` as a C array of unknown length. However, that seems unfortunate to deal with.

The most ideal approach I can think of would be to define `symbol` as a `[c_char; _]` (using the elided-size syntax from https://github.com/rust-lang/rfcs/pull/2545), and then infer the size from the symbol size:

```
$ nm --print-size test.o
0000000000000000 000000000000000c D symbol
```

I don't know how feasible that would be, but it'd be incredibly convenient.
A-FFI,T-compiler,C-feature-request,A-array
low
Major
362,802,928
rust
Expand macros inside inline asm clobbers.
https://play.rust-lang.org/?gist=a0de4df14ed21add77a2f74d5665a31c&version=nightly&mode=debug&edition=2015 It'd be nice if rust allowed expanding macros inside the inline asm input/output/clobbers.
A-inline-assembly,A-macros,T-compiler,C-bug,requires-nightly
low
Critical
362,805,214
kubernetes
Support Windows Certificate Store for kubectl client certs
**Is this a BUG REPORT or FEATURE REQUEST?**:

/kind feature

(I checked and don't think this support already exists, but perhaps I just missed it somewhere.)

**Anything else we need to know?**:

.kube/config has client-certificate-data and client-key-data on Windows. Windows natively puts certificates in a certificate store and configures them via a reference to a certificate in a store, such as

```
Location: CurrentUser
Store: My
Subject: CN=CertName
```

For some examples, see:

* https://docs.microsoft.com/en-us/windows/desktop/WinHttp/winhttpcertcfg-exe--a-certificate-configuration-tool
* https://docs.microsoft.com/en-us/previous-versions/orphan-topics/ws.10/cc772898(v=ws.10)

> To view the certificate of the user that has the serial number 26e0aaaf000000000004 in the store named My, type:
> ```
> certutil -store -user My 26e0aaaf000000000004
> ```

Can .kube/config on Windows support an alternative to client-certificate-data and client-key-data that allows referencing a certificate installed in a Windows certificate store instead? This functionality would allow full integration with Windows key-storage mechanisms, such as encrypted or hardware storage of private keys. An example might be

```
user:
  client-certificate-store: "CurrentUser\My, CN=CertName"
```
kind/feature,sig/windows,lifecycle/frozen
low
Critical
362,811,515
TypeScript
findAllReferences broken when immediately looping over array literal of object literals
**TypeScript Version:** 3.1.0-dev.20180921

**Code**

```ts
for (const { a } of [{ a: 0 }, { a: 1 }]) {
    a;
}
```

**Expected behavior:** All 4 `a` are references of each other.

**Actual behavior:** The `a` in `{ a: 1 }` is not counted as a reference of the others.
Bug
low
Critical
362,829,384
pytorch
Jit cannot trace autograd for certain operator
## Issue description
`torch.jit.trace` cannot trace `torch.autograd.grad` for certain operators (SliceBackward). Same issue submitted to discuss.pytorch.org: https://discuss.pytorch.org/t/jit-cannot-trace-autograd-for-certain-operator

## Code example

code:

```python
import torch

def test(fun):
    x = torch.ones([1, 4], requires_grad=True)
    y = fun(x)

    @torch.jit.trace(torch.zeros([1, 4]))
    def difference_forward(z):
        return torch.autograd.grad(y, x, grad_outputs=z, retain_graph=True)

    print(difference_forward(torch.ones([1, 4])))

if __name__ == '__main__':
    test(lambda x: x * x)
    test(lambda x: x[0:1])
```

output:

```bash
tensor([[2., 2., 2., 2.]])
Traceback (most recent call last):
  File "playground.py", line 16, in <module>
    test(lambda x: x[0:1])
  File "playground.py", line 7, in test
    @torch.jit.trace(torch.zeros([1, 4]))
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py", line 310, in wrapper
    module._create_method_from_trace('forward', func, args)
  File "playground.py", line 9, in difference_forward
    return torch.autograd.grad(y, x, grad_outputs=z, retain_graph=True)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 145, in grad
    inputs, allow_unused)
RuntimeError: saved_variables() needed but not implemented in SliceBackward
```

## System Info
- PyTorch version: 0.5.0a0+4ad6e53
- Is debug build: No
- CUDA used to build PyTorch: 9.1.85
- OS: Ubuntu 16.04.5 LTS
- GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
- CMake version: version 3.5.1
- Python version: 3.6
- Is CUDA available: Yes
- CUDA runtime version: Could not collect
- GPU models and configuration: GPU 0: GeForce GTX 1080
- Nvidia driver version: 396.37
- cuDNN version: Probably one of the following:
  - /usr/lib/x86_64-linux-gnu/libcudnn.so.7.0.5
  - /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a

Versions of relevant libraries:
- [pip] Could not collect
- [conda] Could not collect
oncall: jit
low
Critical
362,845,419
rust
Add `String::from_with_capacity(value, capacity)`
In some situations it would be useful to create a string from a `&str` with a bigger capacity, if it is known that the string will be extended later on.

````rust
fn main() {
    let mut s = String::from("Hello world");
    println!("len: {}, cap: {}", s.len(), s.capacity());
    s.push_str("add more characters");
    s.push_str("and more");
    s.push_str("and even more");
    println!("len: {}, cap: {}", s.len(), s.capacity());
}
````

````
len: 11, cap: 11
len: 51, cap: 60
````

To not have unused capacity, I wonder if we could have something like `String::from_with_capacity()`:

````rust
fn main() {
    let mut s = String::from_with_capacity("Hello world", 51);
    println!("len: {}, cap: {}", s.len(), s.capacity());
    s.push_str("add more characters");
    s.push_str("and more");
    s.push_str("and even more");
    println!("len: {}, cap: {}", s.len(), s.capacity());
}
````

````
len: 11, cap: 51
len: 51, cap: 51
````

This would be a shorter way of writing

````rust
let mut s = String::with_capacity(51);
s.push_str("Hello world");
....
````
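A toy Python model of the proposed semantics (purely illustrative — the class and its growth rule are made up here, and Rust's actual growth strategy may differ): start from a value but reserve the larger capacity up front, growing only when the contents outgrow it.

```python
class CapString:
    """Toy model of the proposed String::from_with_capacity."""
    def __init__(self, value="", capacity=0):
        self.data = value
        self.capacity = max(capacity, len(value))

    def push_str(self, s):
        self.data += s
        if len(self.data) > self.capacity:
            # amortized doubling, as growable strings typically do
            self.capacity = max(self.capacity * 2, len(self.data))

s = CapString("Hello world", 51)
assert (len(s.data), s.capacity) == (11, 51)
for part in ("add more characters", "and more", "and even more"):
    s.push_str(part)
# total length 11 + 19 + 8 + 13 = 51, which fits the reserved capacity exactly
assert (len(s.data), s.capacity) == (51, 51)
```

This reproduces the `len: 51, cap: 51` end state from the motivating example, with no wasted capacity and no intermediate reallocation.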
T-libs-api,C-feature-request,A-str
low
Minor
362,857,947
rust
Consider replacing RawVec<T> with Box<[MaybeUninit<T>]>.
After #53508 lands, we could look into this. It might provide a bit more type safety, although it could be worse not to have the capacity as a separate field. ~~One thing to note is that setting the "length" component of a reference/pointer to `[MaybeUninit<T>]` is "less unsafe" than doing so for `[T]`, but it's unclear to me whether having a `&[MaybeUninit<T>]` that's larger than the object it points to is UB or not.~~ (*please ignore, I managed to confuse myself*)

cc @RalfJung @Gankro @rust-lang/libs
A-collections,T-libs
medium
Major
362,859,149
pytorch
Specify out= argument to convolution
Hi, I want to use the functions in `torch._C._thcunn`. Are there any docs for that? What is the meaning of the parameters? I found the C code for CudaSpatialFullDilatedConvolution_updateGradInput:

```
void THNN_(SpatialFullDilatedConvolution_updateOutput)(
    THCState *state,
    THCTensor *input,
    THCTensor *output,
    THCTensor *weight,
    THCTensor *bias,
    THCTensor *columns,
    THCTensor *ones,
    int kW, int kH,
    int dW, int dH,
    int padW, int padH,
    int dilationW, int dilationH,
    int adjW, int adjH)
```

but I don't know the meaning of the `state` parameter. How can I specify that variable in Python?
module: convolution,triaged,enhancement
low
Major
362,860,014
rust
Unused import warning fires incorrectly for imports used and shadowed by decl_macro's.
I wanted to demonstrate name resolution detecting macro expansion shadowing:

```rust
#![feature(decl_macro)]

mod foo {
    pub macro import_bar($name:ident) {
        use bar::time_travel as $name;
    }
}

mod bar {
    pub macro time_travel($name:ident) {
        // The uncommented version never errors.
        use foo::import_bar as $name;
        // use bar::time_travel as $name;
    }
}

use foo::*;

// Only if the invocation uses the glob import and `time_travel`
// shadows it to something *different*, does it produce an error.
/*foo::*/import_bar!(m2);
m2!(import_bar);
```

However, while the example works as expected, "warning: unused import: `foo::*`" is produced, despite the import being used by the first macro invocation (and trying to remove it results in an error).

This is likely caused by us marking imports as used after expansion is finished, so the `use foo::import_bar as import_bar;` import gets marked as used instead of `use foo::*;`. But the macro *couldn't* have expanded without the later-shadowed glob import, so maybe expansion *should* mark the imports used to find the macro as used, early?

cc @petrochenkov @nikomatsakis
A-lints,T-compiler,C-bug,F-decl_macro,L-unused_imports
low
Critical
362,862,209
flutter
TextEditingController doesn't keep selection on text changes
**Steps to reproduce:**

- Create a TextField with "start value" text via `final tf = TextField(controller: TextEditingController(text: 'start value'));`.
- Tap in the middle of the text field to set the cursor to the middle of the text (like "start| value").
- Programmatically update the text in the text field with `tf.controller.text = 'new value';`

**Expected result:**

- The cursor stays in the middle of the text field (like "new v|alue").

**Actual result:**

- The cursor moves to the end on iOS (like "new value|") and to the start on Android (like "|new value").

**Extra notes:**

- As I can see from the [TextEditingController sources](https://github.com/flutter/flutter/blob/989cf18b0daed99ea7c83226bd76fa39693bf94c/packages/flutter/lib/src/widgets/editable_text.dart#L80), it explicitly drops the selection on text update.

**Flutter doctor:**

```
flutter -v doctor
[✓] Flutter (Channel beta, v0.8.2, on Mac OS X 10.13.6 17G65, locale en-US)
    • Flutter version 0.8.2 at /Users/alexkorovyansky/Library/Flutter
    • Framework revision 5ab9e70727 (2 weeks ago), 2018-09-07 12:33:05 -0700
    • Engine revision 58a1894a1c
    • Dart version 2.1.0-dev.3.1.flutter-760a9690c2

[✓] Android toolchain - develop for Android devices (Android SDK 27.0.3)
    • Android SDK at /Users/alexkorovyansky/Library/Android/sdk
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-28, build-tools 27.0.3
    • ANDROID_HOME = /Users/alexkorovyansky/Library/Android/sdk
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
    • All Android licenses accepted.

[✓] iOS toolchain - develop for iOS devices (Xcode 9.4.1)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 9.4.1, Build version 9F2000
    • ios-deploy 1.9.2
    • CocoaPods version 1.5.3
```
a: text input,framework,a: fidelity,has reproducible steps,P3,team-framework,triaged-framework,found in release: 3.13,found in release: 3.17
low
Major
362,868,082
scrcpy
Not an issue but an improved version for raspberry pi
I am interested in using the OpenMAX library for hardware decoding on a Pi Zero. I have modified the decoder.c code to decode the stream from the socket with Broadcom's ilclient, but I am not an expert C/C++ coder, so I need help reviewing the code. I attach the modified code here; you can simply put a switch at compile time to compile either the ffmpeg or the ilclient/OpenMAX decoder.c code. Please give me feedback if any of you are interested in this. I am working on a project to make a Pi Zero Android docking station for desktop use.

[decoder.c.log](https://github.com/Genymobile/scrcpy/files/2407955/decoder.c.log)
raspberry pi
medium
Major
362,868,197
pytorch
[caffe2] Documentation of Optimizers (Python API)
I have been searching for a while and I have not found any documentation regarding the optimizers such as adam, sgd, rms_prop, and adagrad. It would be very nice to have documentation like PyTorch's: https://pytorch.org/docs/stable/optim.html
caffe2
low
Minor
362,869,227
rust
Failure to parse `{2} + {2}`
These examples should compile according to the Rust reference:

```rust
fn compiles() -> i32 {
    ({2}) + ({2})
}

fn fails() -> i32 {
    {2} + {2}
}

fn compiles2() -> i32 {
    2 + {2}
}

fn fails2() -> i32 {
    {2} + 2
}
```

([Playground](https://play.rust-lang.org/?gist=36a06367e5e9b2eded3b7a15067306d5&version=stable&mode=debug&edition=2015))

Errors:

```
   Compiling playground v0.0.1 (file:///playground)
error: expected expression, found `+`
 --> src/lib.rs:6:9
  |
6 |     {2} + {2}
  |         ^ expected expression

error: expected expression, found `+`
  --> src/lib.rs:14:9
   |
14 |     {2} + 2
   |         ^ expected expression

error: aborting due to 2 previous errors

error: Could not compile `playground`.

To learn more, run the command again with --verbose.
```

Rust reference snippet (thanks @Moongoodboy-K for the help here):

> Expression : BlockExpression | OperatorExpression | ..
> OperatorExpression : ArithmeticOrLogicalExpression | ..
> ArithmeticOrLogicalExpression : Expression `+` Expression | ..
> BlockExpression : `{` InnerAttribute* Statement* Expression? `}`

cc: @Moongoodboy-K
A-grammar,A-parser,T-lang,C-discussion
low
Critical
362,876,063
TypeScript
`export default interface` in augmentation shadows original default export
**TypeScript Version:** master (5fb3976)

**Search Terms:** default export interface augmentation

**Code**

```ts
declare module "example" {
    const y = 42;
    export default y;

    // Augmentation
    module "example" {
        export default interface Foo {}
    }
}

declare module "example2" {
    import y from "example";
    // Actual: error TS2693: 'y' only refers to a type, but is being used as a value here.
    const yy: typeof y;
}
```

**Expected behavior:** No error (the declarations merge properly), or an error on `export default interface Foo {}` saying that default exports in augmentations aren't supported.

**Actual behavior:** Error as marked.
**Playground Link:** [Link](https://www.typescriptlang.org/play/#src=declare%20module%20%22example%22%20%7B%0D%0A%20%20%20%20const%20y%20%3D%2042%3B%0D%0A%20%20%20%20export%20default%20y%3B%0D%0A%20%20%20%20%0D%0A%20%20%20%20%2F%2F%20Augmentation%0D%0A%20%20%20%20module%20%22example%22%20%7B%0D%0A%20%20%20%20%20%20%20%20export%20default%20interface%20Foo%20%7B%7D%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Adeclare%20module%20%22example2%22%20%7B%0D%0A%20%20%20%20import%20y%20from%20%22example%22%3B%0D%0A%20%20%20%20%2F%2F%20Actual%3A%20error%20TS2693%3A%20'y'%20only%20refers%20to%20a%20type%2C%20but%20is%20being%20used%20as%20a%20value%20here.%0D%0A%20%20%20%20const%20yy%3A%20typeof%20y%3B%0D%0A%7D%0D%0A) **Related Issues:** #14080
Bug
low
Critical
362,876,336
pytorch
[Caffe2] Attempting to install Caffe2 in Google Colab
## Issue description

Hello, I'm trying to install Caffe2 in Google Colab so I can work with Detectron. I've tried installing from source, but it takes too long (more than 2 hours), which is impractical for Colab. So I'm forced to install from binaries using Anaconda.

I install Anaconda without much trouble:

`!bash ./Anaconda2-5.2.0-Linux-x86_64.sh`

However, I'm trying to follow the [recommendation](https://caffe2.ai/docs/faq.html#why-is-caffe2-not-working-as-expected-in-anaconda) to install Caffe2 in an environment other than the base one. I manage to create the environment just fine; however, trying to activate it does not work. In particular, `source activate test_caffe2` seems to do nothing. So that seems to be out.

So I go and install Caffe2 in the base Anaconda environment. First, I have to add some code to put the Anaconda binaries on the PATH, as the usual ways of assigning environment variables in Colab seem not to work:

```
import os
os.environ['PATH'] += ":/root/anaconda2/bin"
```

Then installation as usual:

`conda install -c caffe2 caffe2-cuda9.0-cudnn7`

It installs without any issues. However, when I try to run the test code:

```
# To check if Caffe2 build was successful
python2 -c 'from caffe2.python import core'
```

it returns:

```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named caffe2.python
```

I understand this is probably some PYTHONPATH [issue](https://github.com/facebookresearch/Detectron/blob/master/INSTALL.md#caffe2); however, I can't seem to find the "build" folder in caffe2.
If I search for the caffe2 directory (`find / -type d -name 'caffe2'`) after installing, I only get the following results:

```
/root/anaconda2/pkgs/caffe2-cuda9.0-cudnn7-0.8.dev-py27_2018.08.26/lib/python2.7/site-packages/caffe2
/root/anaconda2/pkgs/caffe2-cuda9.0-cudnn7-0.8.dev-py27_2018.08.26/include/caffe2
/root/anaconda2/lib/python2.7/site-packages/caffe2
/root/anaconda2/include/caffe2
```

If I list the directories of each result I get:

`ls /root/anaconda2/pkgs/caffe2-cuda9.0-cudnn7-0.8.dev-py27_2018.08.26/lib/python2.7/site-packages/caffe2`:

```
contrib  distributed  __init__.py  perfkernels  python  core  experiments  __init__.pyc  proto
```

`ls /root/anaconda2/pkgs/caffe2-cuda9.0-cudnn7-0.8.dev-py27_2018.08.26/include/caffe2`:

```
contrib  distributed  mkl  onnx  predictor  sgd  video  core  experiments  mobile  operators  proto  share  cuda_rtc  ideep  mpi  opt  python  transforms  db  image  observers  perfkernels  queue  utils
```

`ls /root/anaconda2/lib/python2.7/site-packages/caffe2`:

```
contrib  distributed  __init__.py  perfkernels  python  core  experiments  __init__.pyc  proto
```

`ls /root/anaconda2/include/caffe2`:

```
contrib  distributed  mkl  onnx  predictor  sgd  video  core  experiments  mobile  operators  proto  share  cuda_rtc  ideep  mpi  opt  python  transforms  db  image  observers  perfkernels  queue  utils
```

None of them has a build directory. So, finally, I have some questions:

- **Was the Caffe2 installation successful?** I wonder, given that I cannot find the 'build' directory.
- **Is this only a PYTHONPATH variable issue?** If so, to which folder should I point it?

I'm sorry if the question is noobish, but I'm a bit stuck. Thanks in advance!

### System Settings

- PyTorch or Caffe2: Caffe2
- How you installed PyTorch (conda, pip, source): conda
- Build command you used (if compiling from source):
- OS: Linux, whatever Colab's using.
- PyTorch version:
- Python version: 2.7
- CUDA/cuDNN version: 9.0 and 7.0
- GPU models and configuration: Tesla K80
- GCC version (if compiling from source):
- CMake version:
- Versions of any other relevant libraries:
caffe2
low
Critical
362,880,373
pytorch
[Enhancement] Increase user-friendliness of dataset.random_split
## Issue description

Currently, when using the [random_split function](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataset.py), the parameters that need to be given are:

- dataset
- a list that contains the lengths of the splits to be produced

This means a user has to calculate these up front and pass them to the function as parameters, like this:

```python
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
```

Wouldn't it be better if the second parameter were a list of the fractions in which the user wants the split to happen, like this?

```python
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [0.8, 0.2])
```

The result of this change would be cleaner code for the user and, I believe, a more natural way of creating the splits. I would like to pick this up as a pull request!

cc @VitalyFedyunin @ejguan
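For illustration, here is a minimal pure-Python sketch of the proposed fraction-based behavior; the helper name `random_split_by_fraction` and the fraction handling are assumptions, not the actual `torch.utils.data` API:

```python
import random

def random_split_by_fraction(items, fractions, seed=None):
    """Split a sequence into random subsets sized by fractions (which must sum to 1)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    indices = list(range(len(items)))
    random.Random(seed).shuffle(indices)
    # Convert fractions to integer lengths, giving any remainder to the last split
    lengths = [int(f * len(items)) for f in fractions[:-1]]
    lengths.append(len(items) - sum(lengths))
    splits, start = [], 0
    for n in lengths:
        splits.append([items[i] for i in indices[start:start + n]])
        start += n
    return splits

train, test = random_split_by_fraction(list(range(10)), [0.8, 0.2], seed=0)
print(len(train), len(test))  # 8 2
```

Rounding is one design decision a real implementation would need to settle: here the last split absorbs the remainder so the lengths always sum to `len(dataset)`.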
triaged,enhancement,module: data
low
Major
362,885,075
pytorch
[feature request] Publish wheels with debug symbols
It would be great if wheels of pytorch with debug symbols were published (or at least made available via https://ci.pytorch.org), so that users can investigate native crashes without going through the whole (error-prone) process of building pytorch themselves. For reference, I asked [the question on the forums first, but got no answer](https://discuss.pytorch.org/t/pytorch-wheel-with-debug-symbols/25169).

cc @ezyang @seemethere @malfet @walterddr
module: binaries,triaged
medium
Critical
362,899,465
flutter
Tracing macros from FML and Fuchsia trace/event.h collide.
Their APIs do not match exactly either, which makes constructing one in terms of the other tricky. FML should just have its own tracing variants with an FML_ prefix.
engine,platform-fuchsia,P2,team-engine,triaged-engine
low
Minor
362,904,351
rust
try_replace, try_swap in RefCell (code included)
Every method has a non-panicking variant except for `replace` and `swap`. It would be fairly simple to add them, leveraging the `try_borrow_mut` method; here's the code:

```rust
/// An error returned by [`RefCell::try_replace`](struct.RefCell.html#method.try_replace).
pub struct ReplaceError { _private: () }

/// An error returned by [`RefCell::try_swap`](struct.RefCell.html#method.try_swap).
pub struct SwapError { _private: () }

/// Replaces the wrapped value with a new one, returning the old value,
/// without deinitializing either one, or an error if the value is currently
/// borrowed.
///
/// This function corresponds to [`std::mem::replace`](../mem/fn.replace.html).
///
/// This is the non-panicking variant of [`replace`](#method.replace)
#[inline]
#[unstable(feature = "try_replace_swap")]
pub fn try_replace(&self, t: T) -> Result<T, ReplaceError> {
    match self.try_borrow_mut() {
        Ok(mut b) => Ok(mem::replace(&mut *b, t)),
        Err(_) => Err(ReplaceError { _private: () })
    }
}

/// Swaps the wrapped value of `self` with the wrapped value of `other`,
/// without deinitializing either one. Returns an error if either value is
/// currently borrowed.
///
/// This function corresponds to [`std::mem::swap`](../mem/fn.swap.html).
///
/// This is the non-panicking variant of [`swap`](#method.swap)
#[inline]
#[unstable(feature = "try_replace_swap")]
pub fn try_swap(&self, other: &Self) -> Result<(), SwapError> {
    match (self.try_borrow_mut(), other.try_borrow_mut()) {
        (Ok(mut s), Ok(mut o)) => {
            mem::swap(&mut *s, &mut *o);
            Ok(())
        },
        _ => Err(SwapError { _private: () })
    }
}
```

I currently don't have the ability to clone the repo, test, etc. right now. I'm also bad at making examples, so those need to be added. 😏 So I've made this issue. If/when I have time, I will probably try to make a PR for this myself, if no one's done it yet. However, that might be months...
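Until something like this lands, the same behavior can be sketched against `RefCell`'s public API; the free functions below are stand-ins for the proposed methods, not part of std:

```rust
use std::cell::RefCell;
use std::mem;

// Stand-in for the proposed RefCell::try_replace, using only public API.
fn try_replace<T>(cell: &RefCell<T>, t: T) -> Result<T, ()> {
    match cell.try_borrow_mut() {
        Ok(mut b) => Ok(mem::replace(&mut *b, t)),
        Err(_) => Err(()),
    }
}

// Stand-in for the proposed RefCell::try_swap.
fn try_swap<T>(a: &RefCell<T>, b: &RefCell<T>) -> Result<(), ()> {
    match (a.try_borrow_mut(), b.try_borrow_mut()) {
        (Ok(mut x), Ok(mut y)) => {
            mem::swap(&mut *x, &mut *y);
            Ok(())
        }
        _ => Err(()),
    }
}

fn main() {
    let a = RefCell::new(1);
    let b = RefCell::new(2);
    assert_eq!(try_replace(&a, 10), Ok(1)); // a now holds 10
    assert!(try_swap(&a, &b).is_ok());      // a holds 2, b holds 10
    let guard = a.borrow(); // an outstanding shared borrow...
    assert!(try_replace(&a, 0).is_err()); // ...makes the non-panicking variant fail
    drop(guard);
    println!("ok");
}
```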
T-libs-api,C-feature-request
low
Critical
362,925,563
go
spec: clarify order in which operands of multi-value assignments are evaluated
(This issue is separated from https://github.com/golang/go/issues/27804 after I realized it is not a proposal but just a doc improvement. There are no language changes in the improvement.)

Currently, for assignments in Go, the Go specification says:

> The assignment proceeds in two phases. First, the operands of index expressions and pointer indirections (including implicit pointer indirections in selectors) on the left and the expressions on the right are all evaluated in the usual order. Second, the assignments are carried out in left-to-right order.

The rule description is somewhat ambiguous. It doesn't specify clearly how the operands in the mentioned expressions on the left should be **exactly** evaluated. This is the cause of [some](https://github.com/golang/go/issues/23188#issuecomment-403951267) [disputes](https://github.com/golang/go/issues/15620).

For the following program, if it is compiled with `gccgo` (version 8), it will panic. But if it is compiled with the standard Go compiler, the program exits without panicking.

```golang
package main

func main() {
	s := []int{1, 2, 3}
	s, s[2] = []int{1}, 9 // gccgo 8 thinks s[2] is out of range
}
```

Obviously, gccgo treats the multi-value assignment as equivalent to

```golang
s = []int{1}
s[2] = 9
```

However, the standard Go compiler treats it as equivalent to

```golang
tmp1 = &s
tmp2 = &s[2]
*tmp1, *tmp2 = []int{1}, 9
```

Most Go team members think the interpretation of the standard Go compiler is what the Go specification intends. I also hold this opinion. To avoid ambiguity in the description, it would be good to append the following to the rule:

> In the first phase, if the map value in a destination expression is not addressable, the map value will be saved in and replaced by a temporary map value. Just before the second phase is carried out, each destination expression on the left will be further evaluated in its elementary form.
Different destination expressions have different elementary forms:

> * If a destination expression is a blank identifier, then its elementary form is still a blank identifier.
> * If a destination expression is a map index expression `m[k]`, then its elementary form is `(*maddr)[k]`, where `maddr` is the address of `m`.
> * In all other cases, the destination expression must be addressable, and its elementary form is a dereference of its address.

I think these supplementary descriptions can avoid the above-mentioned disputes.

[Update, Oct. 18, 2019]: [an imperfection](https://github.com/golang/go/issues/27821#issuecomment-543611217) was found by @mdempsky. Below is a simpler alternative improved description:

> ... Second, the assignments are carried out in left-to-right order. The assignments carried out during the second phase don't affect the expression evaluation results obtained at the end of the first phase.
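To make the two interpretations observable, here is a small sketch; the behavior shown is that of the standard Go compiler (gc), where a second slice sharing the original backing array reveals where the write to `s[2]` lands:

```go
package main

import "fmt"

func main() {
	s := []int{1, 2, 3}
	orig := s // shares the original backing array with s

	// Phase one evaluates &s and &s[2] (against the ORIGINAL backing array);
	// phase two then assigns left to right, so no out-of-range panic occurs.
	s, s[2] = []int{1}, 9

	fmt.Println(s)    // [1]
	fmt.Println(orig) // [1 2 9]: the write went to the old backing array
}
```

Under the gccgo-8 interpretation, by contrast, `s` would be replaced first and `s[2] = 9` would panic with an index out of range.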
Documentation,NeedsInvestigation
low
Major
362,935,018
rust
Arrays not coercing a mutable reference in the first position
Given `i`, `j` and `k` of type `&mut isize`, `&mut [i, j, k]` moves `i`, but not `j` or `k`. I discovered this when calling `vec![i, j, k]`.

Similar issues arise with plain arrays: `let r = &mut 0; [&mut 0, r]; *r += 1;` works, but `let r = &mut 0; [r, &mut 0]; *r += 1;` complains about use of moved value `*r`.

See an example full program here: https://play.rust-lang.org/?gist=13c2a934ae9d3f6c22b62c433fee4244&version=stable&mode=debug&edition=2015
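A sketch of a workaround, assuming the goal is to keep the reference usable afterwards: an explicit reborrow (`&mut *r`) lends out `*r` temporarily instead of moving `r`, even in the first array position.

```rust
fn main() {
    let r = &mut 0;
    // Explicit reborrow: the array element borrows *r for the duration of
    // the statement instead of taking ownership of r.
    [&mut *r, &mut 0];
    *r += 1; // r is still usable here
    assert_eq!(*r, 1);
}
```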
T-compiler,C-bug,A-coercions,A-array
low
Critical
362,935,797
javascript-algorithms
Italian Language
Can I help you with the Italian translation? You did a great job!
enhancement
medium
Minor
362,951,837
pytorch
Test the Caffe2 Installation with GPU error
Hi everyone, when I test the Caffe2 installation with GPU with this command:

```
python caffe2/python/operator_test/activation_ops_test.py
```

it shows:

```
/usr/local/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py:75: HypothesisDeprecationWarning: The min_satisfying_examples setting has been deprecated and disabled, due to overlap with the filter_too_much healthcheck and poor interaction with the max_examples setting.
  verbosity=hypothesis.Verbosity.verbose))
/usr/local/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py:84: HypothesisDeprecationWarning: The min_satisfying_examples setting has been deprecated and disabled, due to overlap with the filter_too_much healthcheck and poor interaction with the max_examples setting.
  verbosity=hypothesis.Verbosity.verbose))
/usr/local/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py:92: HypothesisDeprecationWarning: The min_satisfying_examples setting has been deprecated and disabled, due to overlap with the filter_too_much healthcheck and poor interaction with the max_examples setting.
  verbosity=hypothesis.Verbosity.verbose))
```

What's wrong? My Hypothesis version is 3.57.0.
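For what it's worth, those log lines are deprecation warnings emitted by the hypothesis package itself rather than Caffe2 errors; the test module still imported and ran. A stdlib-only sketch of muting just those messages (the message pattern is copied from the log; whether silencing is appropriate is a judgment call):

```python
import warnings

# Ignore only warnings whose message mentions min_satisfying_examples.
warnings.filterwarnings("ignore", message=".*min_satisfying_examples.*")

# Demonstration: a matching warning is now suppressed.
with warnings.catch_warnings(record=True) as caught:
    warnings.filterwarnings("ignore", message=".*min_satisfying_examples.*")
    warnings.warn(
        "The min_satisfying_examples setting has been deprecated and disabled",
        DeprecationWarning,
    )
print("warnings shown:", len(caught))  # warnings shown: 0
```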
caffe2
low
Minor
362,952,034
flutter
iPhone X nav drawer safety issue
[This app](https://gist.github.com/jonahgreenthal/fe304c7d72e387345623f361d106c6d6) does not properly handle the iPhone X safe area:

![screen shot 2018-09-23 at 10 28 22 am](https://user-images.githubusercontent.com/737172/45929782-fe85bd80-bf1b-11e8-9d07-957a3d3e2209.png)

The gray background behind the logo should extend to the top of the drawer. This may be similar to #12103.
e: device-specific,platform-ios,framework,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-ios,triaged-ios
low
Major
362,966,933
vue
On SSR, do not escape RAW nodes
### What problem does this feature solve?

When rendering a script tag on the server, the content is escaped, breaking the JS code.

```html
<script>
var x = "y";
</script>
```

is rendered as

```html
<script>
var x = &quot;y&quot;;
</script>
```

### What does the proposed API look like?

A solution would be to allow users to define what gets escaped, i.e. to override this map: https://github.com/vuejs/vue/blob/833175e9d6e8f47367e49e1752cd149a677cdae8/src/platforms/web/server/util.js#L43

Or an option to disable escaping at all here: https://github.com/vuejs/vue/blob/52719ccab8fccffbdf497b96d3731dc86f04c1ce/src/server/optimizing-compiler/codegen.js#L228

Thanks.
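A minimal sketch of the failure mode; the escape map below is an illustration in the spirit of the linked `util.js`, not the exact Vue source:

```javascript
// Illustrative HTML escaping as SSR text rendering applies it.
const escapeMap = { '<': '&lt;', '>': '&gt;', '"': '&quot;', '&': '&amp;' };

function escapeHTML(s) {
  return s.replace(/[<>"&]/g, (c) => escapeMap[c]);
}

// Escaping is correct for ordinary text nodes, but inside a <script> tag
// the entity is not decoded by the browser, so the JS becomes invalid.
const script = 'var x = "y";';
console.log(escapeHTML(script)); // var x = &quot;y&quot;;
```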
improvement
low
Major
362,974,347
rust
Confusing error message with reference to boxed trait object
```rust
fn main() {
    let _: Box<dyn '_ + Fn()> = Box::new(|| {});
    let _: &'_ Box<dyn '_ + Fn()> = &Box::new(|| {}); //~ ERROR
}
```

[Results in](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=de814d02496552c5d35241a9f39c5faa):

```
error[E0308]: mismatched types
 --> src/main.rs:3:37
  |
3 |     let _: &'_ Box<dyn '_ + Fn()> = &Box::new(|| {});
  |                                     ^^^^^^^^^^^^^^^^ expected trait std::ops::Fn, found closure
  |
  = note: expected type `&std::boxed::Box<dyn std::ops::Fn()>`
             found type `&std::boxed::Box<[closure@src/main.rs:3:47: 3:52]>`
```

This error message is confusing, because without the reference the types match up, so it seems odd that there's a type error as soon as a reference is introduced.
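For context, a sketch of why the reference changes things and two ways to make it compile: the closure-to-`dyn Fn()` unsizing coercion happens at the `Box` binding or cast site, so behind `&` the `Box` must already hold a trait object (the `i32`-returning closures here are just for illustration):

```rust
fn main() {
    // Coercion happens here, at the annotated binding.
    let b: Box<dyn Fn() -> i32> = Box::new(|| 42);
    let ok: &Box<dyn Fn() -> i32> = &b; // fine: b is already a trait object

    // Or coerce explicitly with a cast before taking the reference.
    let also_ok: &Box<dyn Fn() -> i32> = &(Box::new(|| 42) as Box<dyn Fn() -> i32>);

    assert_eq!(ok(), 42);
    assert_eq!(also_ok(), 42);
}
```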
C-enhancement,A-diagnostics,T-compiler,A-coercions,D-confusing,A-trait-objects
low
Critical
362,983,637
flutter
Testing localized widgets can fail when assets are too big due to race condition
Fix: https://github.com/flutter/flutter/issues/22193#issuecomment-1046333568

---

I've found a [LINK](https://stackoverflow.com/questions/52463714/how-to-test-localized-widgets-in-flutter?answertab=oldest#tab-top) with the exact same question, but it didn't help. Could you write some examples of testing localized widgets? Below is what I've tried.

```dart
testWidgets('Sample Widget', (WidgetTester tester) async {
  // Create the Widget and tell the tester to build it
  await tester.pumpWidget(Localizations(
    delegates: [
      const LocalizationDelegate(),
      GlobalMaterialLocalizations.delegate,
      GlobalWidgetsLocalizations.delegate
    ],
    locale: Locale('en'),
    child: Sample(sample: 'test'),
  ));
  await tester.pumpAndSettle();

  expect(find.text('test'), findsOneWidget);
  expect(find.text('1'), findsNothing);

  // Tap the '+' icon and trigger a frame.
  // await tester.tap(find.byIcon(Icons.add));
  // await tester.pump();
});
```

Result:

```
➜  dooboo git:(master) ✗ flutter test test/widgets/sample.dart
00:07 +0: Sample Widget
locale: en
languageCode: en
══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════════════════════════════════════════════
The following TestFailure object was thrown running a test:
  Expected: exactly one matching node in the widget tree
  Actual: ?:<zero widgets with text "test" (ignoring offstage widgets)>
   Which: means none were found but one was expected

When the exception was thrown, this was the stack:
#4      main.<anonymous closure> (file:///Users/hyochan/Documents/SourceCode/FlutterProjects/dooboo/test/widgets/sample.dart:29:5)
<asynchronous suspension>
#5      testWidgets.<anonymous closure>.<anonymous closure> (package:flutter_test/src/widget_tester.dart:72:23)
#6      TestWidgetsFlutterBinding._runTestBody (package:flutter_test/src/binding.dart:559:19)
<asynchronous suspension>
#9      TestWidgetsFlutterBinding._runTest (package:flutter_test/src/binding.dart:543:14)
#10     AutomatedTestWidgetsFlutterBinding.runTest.<anonymous closure> (package:flutter_test/src/binding.dart:887:24)
#16     AutomatedTestWidgetsFlutterBinding.runTest (package:flutter_test/src/binding.dart:884:15)
#17     testWidgets.<anonymous closure> (package:flutter_test/src/widget_tester.dart:71:22)
```

flutter doctor:

```
[✓] Flutter (Channel beta, v0.7.3, on Mac OS X 10.14 18A384a, locale en-KR)
[!] Android toolchain - develop for Android devices (Android SDK 27.0.3)
    ! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[!] iOS toolchain - develop for iOS devices (Xcode 10.0)
    ✗ ios-deploy not installed. To install: brew install ios-deploy
[✓] Android Studio (version 3.1)
[!] VS Code (version 1.27.2)
[✓] Connected devices (1 available)

! Doctor found issues in 3 categories.
```
a: tests,c: crash,framework,f: material design,a: internationalization,d: api docs,has reproducible steps,P3,found in release: 3.3,workaround available,found in release: 3.7,team-design,triaged-design
high
Critical
362,989,786
vue
Using multiple selects with v-model and bound and unbound values causes value overriding
### Version
2.5.17

### Reproduction link
[https://jsfiddle.net/du578xc0/32/](https://jsfiddle.net/du578xc0/32/)

### Steps to reproduce
Change the first select to "Some", then change the newly displayed select. You will see that val2 is being set to the bound values from the val1 select instead of its own values. You can change the first select to use the `.number` modifier on the v-model and exchange the bound number values for strings to fix this issue (at least for this very specific desired effect). You can "fix" the problem by either using v-bind:value on all values, or by using non-bound values on all values, but as far as I can tell, if you mix them, it will cause this override bug.

### What is expected?
Different values for each variable

### What is actually happening?
The second variable is being overwritten with the first's value

---

This was posted/discussed in the Discord chat.
bug,has workaround
medium
Critical
363,001,560
rust
Expand examples on str functions
Functions like `contains`, `starts_with`, `ends_with`, `match_indices`, etc. could use more examples, for example `foo.contains('\n')` (check whether a string contains newlines). `find`, `split`, `matches`, `trim_matches`, etc. all have them.
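For instance, a sketch of the kind of examples that could be added (the values are illustrative, not taken from the existing docs):

```rust
fn main() {
    let s = "Hello\nworld";

    // Check for an embedded newline, the motivating example above.
    assert!(s.contains('\n'));

    assert!(s.starts_with("Hello"));
    assert!(s.ends_with("world"));

    // match_indices yields (byte_index, matched_str) pairs.
    let idx: Vec<_> = s.match_indices('l').collect();
    assert_eq!(idx, [(2, "l"), (3, "l"), (9, "l")]);
}
```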
C-enhancement,P-medium,T-libs-api,A-docs
medium
Major
363,028,436
TypeScript
Allow specifying regex for module alias paths
## Search Terms

alias regex, import alias regex, regex

## Suggestion

Allow specifying a path alias using a regex with match replacement.

## Use Cases

Using path aliases when they are automatically generated by a custom script in webpack, or for more fine-grained control over path resolution in more complex directory layouts. The current approach does not allow dynamic aliases which consist of one or more intermediate path variables.

## Examples

Given this path: `<projectDir>/modules/Users/resources/js`, it should be resolvable with a regex, e.g.

**RegEx**: `^#(.+?)\/(.*)$`
**Resolve**: `./modules/$1/resources/js/$2`

So that importing from `'#Users/manage'` would be resolved to `modules/Users/resources/js/manage(.ts|.js)` etc.

## Checklist

My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
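A sketch of the requested resolution logic as a plain function; the regex and replacement are taken from the example above, while `resolveAlias` itself is a hypothetical name, not an existing TypeScript API:

```javascript
// Resolve an import specifier through a user-supplied regex and a
// replacement template using $1, $2, ... capture references.
function resolveAlias(specifier, pattern, replacement) {
  const m = specifier.match(pattern);
  if (!m) return null; // specifier does not match this alias rule
  return replacement.replace(/\$(\d)/g, (_, n) => m[Number(n)]);
}

const out = resolveAlias(
  '#Users/manage',
  /^#(.+?)\/(.*)$/,
  './modules/$1/resources/js/$2'
);
console.log(out); // ./modules/Users/resources/js/manage
```

The compiler would then try the usual extensions (`.ts`, `.js`, etc.) against the resolved path, as it does for `paths` today.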
Suggestion,Awaiting More Feedback
low
Critical