id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
402,224,797 | flutter | Navigation with a drawer does not have closing animation | Hey,
I'm currently developing a Flutter app. Because of its flexibility I use a navigation drawer. I came across multiple tutorials about navigation with a drawer in Flutter, but none was really satisfying. The first approach is to use one Scaffold with multiple layouts inside, as described [here](https://medium.com/@kashifmin/flutter-setting-up-a-navigation-drawer-with-multiple-fragments-widgets-1914fda3c8a8). But as one of the comments says, it isn't a clean solution, especially in a big app. Another approach is described [here](https://proandroiddev.com/flutter-creating-drawers-e31414f7d71a). It uses multiple Scaffolds and pushes them with `Navigator.of(context).push(...)`. With this method you get an animation between pages and are able to use the back button on Android. So my question is whether there is a proper solution for using a navigation drawer. Maybe I'm just missing something and there's an example.
Thanks in advance!
| c: new feature,framework,f: material design,a: quality,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Critical |
402,261,141 | pytorch | why check ArgumentInfo is_pod? suffer bugs | we writing a Mixed C++/CUDA extension(https://pytorch.org/tutorials/advanced/cpp_extension.html)
success compile and running test in pytorch=0.4.1 and python= 3.6
however suffer bug in pytorch=1.0 and python= 3.6
`anaconda3/envs/python3.6_pytorch1.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/jit/argument_spec.h(59): error: static assertion failed with "ArgumentInfo is to be a POD struct`
in order to success compile ,We have to uncomment/disable line 59-60 in torch/lib/include/torch/csrc/jit/argument_spec.h
`static_assert(std::is_pod<ArgumentInfo>::value,
"ArgumentInfo is to be a POD struct");`
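As context for why such a check exists at all: POD-ness guarantees that a struct's raw bytes fully describe its value, which allows hashing and comparing entries via plain memory comparison. A rough Python analogue using `ctypes` (illustrative only; `ArgInfo` and its fields are made-up names, not the real `ArgumentInfo` layout):

```python
import ctypes

class ArgInfo(ctypes.Structure):
    """Rough analogue of a POD struct: plain fields with a fixed layout,
    no constructors or virtual functions. ctypes zero-initializes the
    backing memory, so the raw bytes fully determine the value."""
    _fields_ = [("type_id", ctypes.c_int), ("defined", ctypes.c_bool)]

def raw_bytes(s: ctypes.Structure) -> bytes:
    # Byte-wise view of the struct -- only meaningful because it is POD-like.
    return ctypes.string_at(ctypes.addressof(s), ctypes.sizeof(s))
```

Two structurally equal values have identical bytes, which is the property a byte-wise hash or comparison relies on.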
So we wonder: is the POD check necessary? What is the significance of it?
Thanks | module: docs,low priority,triaged | low | Critical |
402,327,060 | go | proposal: net/http: add MethodSearch constant for HTTP SEARCH method | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.4 darwin/amd64
</pre>
### What operating system and processor architecture are you using (`go env`)?
MacOS/amd64
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/arthur/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/arthur/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.11.4/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.11.4/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/0m/qqy9wr_n4l5_2zhm0fhnp9jw0000gp/T/go-build084923408=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Added a `MethodSearch = "SEARCH"` constant to net/http/method.go, to let developers conveniently use a public API constant for HTTP requests in web apps.
References to RFCs and examples:
https://tools.ietf.org/id/draft-snell-search-method-00.html#search
https://tools.ietf.org/id/draft-snell-search-method-00.html#rfc.section.4.2
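The convenience argument is language-independent: a named constant avoids scattering the bare `"SEARCH"` string through application code. A quick sketch (hypothetical helper names, shown in Python rather than Go):

```python
# Named method constant, mirroring the MethodGet/MethodPost pattern
# this proposal wants to extend (name is illustrative).
METHOD_SEARCH = "SEARCH"

def request_line(method: str, target: str, version: str = "HTTP/1.1") -> str:
    """Assemble the first line of an HTTP request (illustrative only)."""
    return f"{method} {target} {version}"
```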
| Proposal,Proposal-Hold | low | Critical |
402,330,799 | flutter | running integration tests with option --flavor development returns timeout | I am able to run `flutter run --flavor` development without any error, however when I intend to run integration tests with `flutter driver --target=test_driver/app.dart --flavor development`, i run into this error. App is installed but tests fail.
Any help or pointers to solve this is highly appreciated.
content of `app.dart` file:
```dart
import 'package:flutter_driver/driver_extension.dart';
import 'package:test/main.dart' as app;
void main() {
enableFlutterDriverExtension();
app.main();
}
```
*Error Log*
```
DriverError: Timed out waiting for Flutter Driver extension to become available. Ensure your test app (often: lib/main.dart) imports "package:flutter_driver/driver_extension.dart" and calls enableFlutterDriverExtension() as the first call in main().
Original error: null
Original stack trace:
null
package:flutter_driver/src/driver/driver.dart 355:9 FlutterDriver.connect
===== asynchronous gap ===========================
dart:async/future_impl.dart 22:43 _Completer.completeError
dart:async/runtime/libasync_patch.dart 40:18 _AsyncAwaitCompleter.completeError
package:flutter_driver/src/driver/driver.dart FlutterDriver.connect
===== asynchronous gap ===========================
dart:async/zone.dart 1062:19 _CustomZone.registerBinaryCallback
dart:async/runtime/libasync_patch.dart 86:23 _asyncErrorWrapperHelper
package:flutter_driver/src/driver/driver.dart FlutterDriver.connect
test_driver/security_test.dart 15:36 main.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/zone.dart 967:22 _CustomZone.bindUnaryCallbackGuarded
dart:async/future.dart 529:34 Future.doWhile
dart:async/future.dart 489:12 Future.forEach
package:test_api/src/backend/declarer.dart 291:36 Declarer._setUpAll.<fn>.<fn>
dart:async/zone.dart 1124:13 _rootRun
dart:async/zone.dart 1021:19 _CustomZone.run
dart:async/zone.dart 1516:10 _runZoned
dart:async/zone.dart 1463:12 runZoned
package:test_api/src/backend/declarer.dart 291:14 Declarer._setUpAll.<fn>
package:test_api/src/backend/invoker.dart 399:25 Invoker._onRun.<fn>.<fn>.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1045:19 _CustomZone.registerCallback
dart:async/zone.dart 962:22 _CustomZone.bindCallbackGuarded
dart:async/timer.dart 52:45 new Timer
dart:async/timer.dart 87:9 Timer.run
dart:async/future.dart 174:11 new Future
package:test_api/src/backend/invoker.dart 398:11 Invoker._onRun.<fn>.<fn>.<fn>
01:00 +0 -1: Security App (tearDownAll)
01:00 +0 -1: Some tests failed.
Unhandled exception:
Dummy exception to set exit code.
#0 _rootHandleUncaughtError.<anonymous closure> (dart:async/zone.dart:1112:29)
#1 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#2 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#3 _Timer._runTimers (dart:isolate/runtime/libtimer_impl.dart:391:30)
#4 _Timer._handleMessage (dart:isolate/runtime/libtimer_impl.dart:416:5)
#5 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:171:12)
``` | a: tests,c: crash,platform-ios,tool,t: flutter driver,P2,team-ios,triaged-ios | low | Critical |
402,355,469 | pytorch | [JIT] Support C++ front end module and JIT interop | Creating an issue to track this feature request.
## 🚀 Feature
Allow a C++ front-end module to interoperate with JIT (right now a C++ front-end module can already interoperate with Python).
## Motivation
TorchScript is ultimately very limited. If I have a module with complex logic, it would be important to be able to write the whole module in C++ and then call it from Python.
## Pitch
This would allow PyTorch to be a truly production-ready framework.
## Alternatives
Currently, we can use C++ custom ops for certain things, but for more complex use cases (for example, if the module has to keep track of complex state, e.g. tracking in robotics), C++ custom ops are unsuitable.
| oncall: jit | low | Major |
402,368,907 | pytorch | support for multiple torch.cuda.max_memory_allocated() counters | ## 🚀 Feature
Having multiple resettable torch.cuda.max_memory_allocated() counters
## Motivation
With the help of torch.cuda's `reset_max_memory_allocated` and `max_memory_allocated` one can now measure peak memory usage, which is very helpful.
Now, there is a need for identical functionality, but supported in multiple and concurrent scopes, where scopes can overlap or be nested, and as such one global counter is not enough.
Scenario 1: an application relying on the normal (pre-reset) behavior of max_memory_allocated or max_memory_cached could now malfunction if some other application resets either or both (action at a distance).
Scenario 2: two profilers measuring different scopes. Say one measuring at a function level, another at a wider or narrower scope. Since there is only one counter they will be resetting each other’s measurements. python’s tracemalloc has the same issue, since it doesn’t create an object instance for the counter.
The 2nd scenario is not hypothetical; it's actually a need I have right now, as I have different profilers measuring different scopes. They are in different applications, so they can't really communicate with each other to keep their reset calls in sync. E.g. I have one profiler running on the train loop at the epoch level, another at the jupyter cell level, yet another on larger parts of the notebook. And unfortunately, my current peak-measuring thread approach is clearly failing to catch all peaks, so it'd be extremely helpful to be able to switch to max_memory_allocated, yet with different instances of it in different scopes.
## Pitch
So I need to be able to do something like:
```
max_obj1 = MaxMemoryAllocated()
# run some code 1
for epoch in epochs:
max_obj2 = MaxMemoryAllocated()
# run epoch code
peak_epoch = max_obj2.peak()
# run some code ...
peak = max_obj1.peak()
```
Of course, those would be unrelated applications, this code sample is just demonstrating how their execution will overlap and why the current implementation is insufficient.
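The requested semantics can be sketched in a few lines of pure Python (toy model with invented names, no real CUDA hooks): each counter instance owns its own peak, so overlapping or nested scopes never reset each other.

```python
class PeakCounter:
    """One resettable peak counter per instance (hypothetical API).

    Because each scope owns its own counter, creating or discarding one
    never disturbs measurements taken by another scope."""

    def __init__(self, allocator):
        self._allocator = allocator
        self._peak = allocator.current  # baseline at creation time

    def observe(self):
        self._peak = max(self._peak, self._allocator.current)

    def peak(self):
        return self._peak


class FakeAllocator:
    """Stand-in for the CUDA caching allocator, for illustration only."""

    def __init__(self):
        self.current = 0
        self._counters = []

    def counter(self):
        c = PeakCounter(self)
        self._counters.append(c)
        return c

    def alloc(self, nbytes):
        self.current += nbytes
        for c in self._counters:
            c.observe()  # the allocator notifies every live counter

    def free(self, nbytes):
        self.current -= nbytes
```

An outer counter and a later inner counter then report different peaks, which is exactly what a single global `max_memory_allocated` cannot do.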
## Alternatives
Currently, I spawn a thread per counter that measures peak memory usage via nvml (reading nvidia stats directly), which doesn't give correct measurements, since it's easy for a thread to miss a peak.
## Additional context
Discussion thread: https://discuss.pytorch.org/t/measuring-peak-memory-usage-tracemalloc-for-pytorch/34067/19
Thank you.
| todo,feature,module: cuda,triaged | low | Major |
402,377,600 | pytorch | Complete dtype support for torch.norm | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
After https://github.com/pytorch/pytorch/pull/15414, torch.norm supports dtype, but not with the 'fro' or 'nuc' arguments. This is confusing and not documented.
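The gap can be modeled with a toy dispatch function (purely illustrative, not the real `torch.norm` implementation): vector p-norms flow through a path that honors `dtype`, while `'fro'`/`'nuc'` take a separate path that rejects it.

```python
def toy_norm(values, p=2, dtype=None):
    """Toy model of the documented gap (not real torch code): the matrix
    norms 'fro' and 'nuc' take a code path that does not accept dtype."""
    if p in ("fro", "nuc"):
        if dtype is not None:
            raise RuntimeError(f"dtype not supported for {p!r} norm")
        return sum(v * v for v in values) ** 0.5  # 'fro' of a flat vector
    total = sum(abs(v) ** p for v in values)
    result = total ** (1.0 / p)
    return dtype(result) if dtype is not None else result
```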
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
## Pitch
<!-- A clear and concise description of what you want to happen. -->
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| module: docs,triaged | low | Minor |
402,379,241 | opencv | Throw an exception on namedWindow() if monitor isn't connected or it is ran over SSH | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
Please:
* Read the documentation to test with the latest developer build.
* Check if other person has already created the same issue to avoid duplicates. You can comment on it if there already is an issue.
* Try to be as detailed as possible in your report.
* Report only one problem per created issue.
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => 4.0
- Operating System / Platform => Windows 7 x64, Linux x64
- Compiler => any
##### Detailed description
<!-- your description -->
Feature request: throw an exception instead of crashing the program with the message `QXcbConnection: Could not connect to display` when the function `cv::namedWindow()` or `cv::imshow()` is called on a computer without a monitor or run over SSH.
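The desired fallback pattern, sketched as a hypothetical Python helper (real OpenCV currently aborts rather than raising, which is precisely the complaint):

```python
def show_or_save(has_display, show_fn, save_fn):
    """Try the GUI path, fall back to saving a file (illustrative sketch).

    This only works if the GUI backend raises a catchable error instead of
    terminating the process -- the behaviour this issue requests."""
    if not has_display:
        save_fn()
        return "saved"
    try:
        show_fn()
        return "shown"
    except RuntimeError:
        save_fn()
        return "saved"
```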
##### Steps to reproduce
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
-->
For example, I want to write a `cv::Mat` to the `output.jpg` file instead of crashing the program if a monitor isn't connected or if I run it over SSH:
```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
if( argc != 2) return -1;
Mat image = imread(argv[1], CV_LOAD_IMAGE_COLOR);
if(! image.data ) return -1;
try {
namedWindow( "Display window", WINDOW_AUTOSIZE ); // throw an exception instead of crash
imshow( "Display window", image );
waitKey(0);
} catch(...) {
imwrite( "output.jpg", image );
}
return 0;
}
``` | priority: low,category: highgui-gui | low | Critical |
402,392,088 | godot | Add ability to unload resource from ResourceInteractiveLoader | **Godot version:** 3.1 beta
**OS/device including version:** Windows 10
**Issue description:**
I have an application that sometimes loads from and saves to the same file; in one area of the application it uses a ResourceInteractiveLoader and in another it just uses ResourceLoader. I was struggling to save to the file when it was loaded with ResourceInteractiveLoader, and was pointed in the right direction: the file was being locked on Windows 10.
The hackish work around I discovered was to just load a dummy file with ResourceInteractiveLoader, which unlocked the file.
My suggestion is to add an unload flag to get_resource(), or an unload_resource() method on ResourceInteractiveLoader. | enhancement,topic:core,documentation | low | Minor |
402,393,813 | go | x/website/internal/dl: store the list of validUsers in a more dynamic way | The list of validUsers is hardcoded at https://github.com/golang/tools/blob/master/godoc/dl/dl.go#L317. This means that every time someone is doing a release for a first time, we need to make a CL to add them to this list (so that they can run the release command) and redeploy golang.org.
We may want to store this list in a more dynamic way. | NeedsInvestigation | low | Minor |
402,423,476 | TypeScript | Add type definitions for Files And Directories API | ## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
`"Files And Directories API"`
`FileSystemFileEntry`
## Suggestion
Add types for the Files And Directories API: https://wicg.github.io/entries-api/
Specifically:
* `FileSystemEntry`
* `FileSystemDirectoryEntry`
* `FileSystemDirectoryReader`
* `FileSystemFileEntry`
* `FileSystem`
## Use Cases
Handling drag/drop of multiple files and/or folders, and gleaning extra information about the dropped files (which is what the API provides).
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion,Help Wanted,Domain: lib.d.ts | medium | Major |
402,436,764 | flutter | Odd spacing issue with --android-licenses | Ran `flutter doctor` to get an update. I think I'd updated to Android Studio 3.3 since the last time I'd run `flutter doctor` on this machine. Got an error message about not having accepted Android licenses. When I ran `flutter doctor --android-licenses`, it showed a progress bar for a while, and then printed a warning at an odd location on the screen. See below:
```
Microsoft Windows [Version 10.0.16299.904]
(c) 2017 Microsoft Corporation. All rights reserved.
[C:\Users\timsneath] flutter upgrade
Upgrading Flutter from c:\git\flutter...
From https://github.com/flutter/flutter
83af6f48d..e8c2f2c7f master -> origin/master
+ 3975ca5d8...4a685100e chalin-patch-1 -> origin/chalin-patch-1 (forced update)
Updating 83af6f48d..e8c2f2c7f
352 files changed, 9623 insertions(+), 3341 deletions(-)
Upgrading engine...
Checking Dart SDK version...
Downloading Dart SDK from Flutter engine 1c26bf8c4b55f4fa5f0d175768a1a0cc115c70b2...
Unzipping Dart SDK...
Building flutter tool...
Running pub upgrade...
Downloading package sky_engine... 3.4s
Downloading common tools... 4.2s
Downloading windows-x64 tools... 5.0s
Downloading android-arm-profile/windows-x64 tools... 3.8s
Downloading android-arm-release/windows-x64 tools... 3.4s
Downloading android-arm64-profile/windows-x64 tools... 3.5s
Downloading android-arm64-release/windows-x64 tools... 3.5s
Downloading android-arm-dynamic-profile/windows-x64 tools... 3.5s
Downloading android-arm-dynamic-release/windows-x64 tools... 3.3s
Downloading android-arm64-dynamic-profile/windows-x64 tools... 3.5s
Downloading android-arm64-dynamic-release/windows-x64 tools... 3.4s
Downloading android-x86 tools... 4.6s
Downloading android-x64 tools... 4.7s
Downloading android-arm tools... 4.3s
Downloading android-arm-profile tools... 4.2s
Downloading android-arm-release tools... 3.9s
Downloading android-arm64 tools... 4.2s
Downloading android-arm64-profile tools... 4.3s
Downloading android-arm64-release tools... 3.9s
Downloading android-arm-dynamic-profile tools... 4.7s
Downloading android-arm-dynamic-release tools... 4.0s
Downloading android-arm64-dynamic-profile tools... 4.2s
Downloading android-arm64-dynamic-release tools... 4.1s
Flutter 1.1.10-pre.27 • channel master • https://github.com/flutter/flutter.git
Framework • revision 83af6f48d6 (13 days ago) • 2019-01-23 12:49:40 -0500
Engine • revision 1c26bf8c4b
Tools • Dart 2.1.1 (build 2.1.1-dev.0.1 2cb346bd0c)
Running flutter doctor...
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel master, v1.1.10-pre.155, on Microsoft Windows [Version 10.0.16299.904], locale en-US)
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[√] Android Studio (version 3.3)
[√] VS Code (version 1.30.2)
[!] Connected device
! No devices available
! Doctor found issues in 2 categories.
[C:\Users\timsneath] flutter doctor --android-licenses
Warning: File C:\Users\timsneath\.android\repositories.cfg could not be loaded.
1 of 6 SDK package license not accepted.] 100% Computing updates...
Review license that has not been accepted (y/N)?
```
Expected behavior: Warning line is left-justified. | tool,platform-windows,t: flutter doctor,P2,team-tool,triaged-tool | low | Critical |
402,444,414 | scrcpy | USB devices freeze occasionally when scrcpy is running | Hi.
I discovered scrcpy just today, and it's really great and almost flawless so far!
The only thing I found annoying is that, when scrcpy is up¹, my USB devices(?) seem to die every now and then. Most noticeably, I had to unplug and re-plug my mouse and keyboard over 10 times today as they just froze (usually at the same time) and kept spamming inputs.
**Where it occured:** Two unrelated Windows 10 (1809) computers.
For this to happen it doesn't even matter whether I'm operating inside the scrcpy window or not. scrcpy just has to be running in the background.
If there's anything I can send in for further investigation (like debug logs or some sort of USB crash dumps?), let me know.
¹it happened today for the first time and on two different computers, so it's gotta be scrcpy | usb | low | Critical |
402,452,307 | pytorch | caffe2: cudaHostRegister() and mbind()-related test failures on ppc64le | ## 🐛 Bug
## Environment
```
PyTorch version: 1.0.0a0+7998997 (with some local changes)
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Red Hat Enterprise Linux Server 7.5 (Maipo)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 410.72
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/targets/ppc64le-linux/lib/libcudnn.so.7.3.1
/usr/local/cuda-10.0/targets/ppc64le-linux/lib/libcudnn_static.a
```
## Additional context
Building on ppc64le systems with CUDA / NVIDIA GPUs, we're seeing
failures in various caffe2 tests, for example:
- `TestGradientsAccumulationWithPassThroughGradients::testAccumulationRuns`:
```
E RuntimeError: [enforce fail at context_gpu.h:375] error == cudaSuccess. 61 vs 0. Error at: /opt/anaconda2/conda-bld/pytorch_1547774768751/work/caffe2/core/context_gpu.h:375: part or all of the requested memory range is already mapped
E frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0xb0 (0x7fff76a67510 in /opt/anaconda2/envs/6518b/lib/python2.7/site-packages//torch/lib/libc10.so)
...
```
- `brew_test.py::BrewTest::test_arg_scope_single`
```
...
> workspace.FeedBlob("x", X)
...
> return blob._feed(arg, device_option)
E RuntimeError: [enforce fail at numa.cc:85] mbind( (void*)page_start_ptr, size + offset, MPOL_BIND, &mask, sizeof(mask) * 8, MPOL_MF_MOVE | MPOL_MF_STRICT) == 0. Could not move memory to a NUMA node
E frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0xb0 (0x3fffa2167510 in /opt/anaconda/envs/pytorch-env/lib/python2.7/site-packages//torch/lib/libc10.so)
E frame #1: caffe2::NUMAMove(void*, unsigned long, int) + 0x210 (0x3fffae2485b0 in /opt/anaconda/envs/pytorch-env/lib/python2.7/site-packages//torch/lib/libcaffe2.so)
...
```
Other tests fail simlarly:
```
DataParallelModelTest.test_equiv
TestCloneNet.testPartialClone
TestCloneNet.testPartialClone
TestCrfDecode.test_crf_viterbi
TestOperatorTraceback.test_async_exception_handling
TestOperatorTraceback.test_operator_runtime_traceback
```
We believe the cause is in `PinnedCPUAllocator` in `caffe2/core/context_gpu.h`.
If NUMA is enabled, then `PinnedCPUAllocator`'s `allocate()` function
- calls to `DefaultCPUAllocator` to allocate a `DataPtr`,
- calls `cudaHostRegister()` on the actual memory pointer
- returns `DefaultCPUAllocator`'s `DataPtr` to the caller
`PinnedCPUAllocator` also provides a `Delete()` function that _intends_
to call `cudaHostUnregister()` when the region is freed. But in the
NUMA / `cudaHostRegister()` case, `PinnedCPUAllocator`'s `Delete()` is
never called because the `DataPtr` was constructed by
`DefaultCPUAllocator` and so refers to _that_ `Delete()` instead.
The end result is that `cudaHostRegister()` is called on memory
regions, but there are never any corresponding `cudaHostUnregister()` calls.
In addition to registering the host memory with CUDA, `cudaHostRegister()`
also pins / locks the memory.
So we think this explains both failure modes shown above. If memory
regions are allocated / freed / re-allocated, then either:
- `DefaultCPUAllocator`'s `mbind()` call will fail (prevented by the
pin/lock if the reallocation is on a different NUMA node from the
original allocation), or
- the `cudaHostRegister()` call will fail (if `mbind()` didn't fail and the
region was previously `cudaHostRegister()`'d).
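The deleter mismatch can be reproduced abstractly in a few lines of Python (toy model, not real caffe2 code): the pinned allocator registers the memory but hands back a DataPtr whose deleter belongs to the default allocator, so the unregister step never runs.

```python
registered = set()  # stands in for the set of cudaHostRegister'd regions

class DataPtr:
    """Toy at::DataPtr: a pointer bundled with the deleter to call on free."""
    def __init__(self, ptr, deleter):
        self.ptr, self._deleter = ptr, deleter
    def release(self):
        self._deleter(self.ptr)

class DefaultCPUAllocator:
    def allocate(self):
        ptr = object()
        return DataPtr(ptr, self.delete)
    def delete(self, ptr):
        pass  # plain free; knows nothing about CUDA registration

class PinnedCPUAllocator:
    def __init__(self, base):
        self._base = base
    def allocate(self):
        dp = self._base.allocate()  # deleter is DefaultCPUAllocator.delete
        registered.add(dp.ptr)      # cudaHostRegister analogue
        return dp                   # BUG: PinnedCPUAllocator.delete never runs
    def delete(self, ptr):
        registered.discard(ptr)     # cudaHostUnregister analogue -- dead code
```

Freeing a pointer allocated this way leaves its registration behind, mirroring the leak described above.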
Not sure of the best way to fix this. Maybe any of:
- extend `at::DataPtr` with something like a `PinnedDataPtr` that just
includes an extra field to hang the underlying `DataPtr` from the base
allocator. Have `PinnedCPUAllocator`'s `allocate()` always construct
with a reference to own `Delete()`, and return that to callers
- add an `allocate()` variant to `DefaultCPUAllocator` that allows a
Delete function reference to be passed in
- add an `allocate()` variant to `DefaultCPUAllocator` that just returns
the naked memory pointer, leaving `PinnedCPUAllocator` to construct
the `DataPtr` as in the non-NUMA case
- thread some `set_deleter()` functionality through the DataPtr class
hierarchy (but that seems not well-supported by the base
`std::unique_ptr` class, and you'd really want deleters to nest)
| caffe2 | low | Critical |
402,496,698 | rust | Confusing compiler error when iterating over Option<IntoIterator> | I recently wrote a type that implemented IntoIterator with a tuple as the Item type. I then tried to test this by using this iterator in a for loop. I got the following error:
```
for (i, j) in foo
^^^^^^ expected struct `MyIter`, found tuple
```
This left me confused, because I had assumed that foo's type was MyIter. Why was the iterator returning itself, instead of the item type I had specified? I began to suspect my implementation of IntoIterator before I realized that foo was actually `Option<MyIter>`. As it turns out, Option also implements IntoIterator (#27996), so the compiler thought I was trying to iterate over the Option instead of the enclosed type!
It'd be nice if there were a note associated with the compiler error in cases where the iterator is type `Option<T : IntoIterator>`. Something like this, maybe?
```
"Note: `foo` is `Option<MyIter>`. Did you forget to unwrap?"
```
Simple reproduction of the compiler error:
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=4432a78517e24ea649a31c7246b34edd | C-enhancement,A-diagnostics,P-low,T-compiler,D-papercut | low | Critical |
402,501,919 | pytorch | Confusing documentation with distributions.Categorical about logits | ## 📚 Documentation
<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->

The [documentation for Categorical distributions](https://pytorch.org/docs/stable/distributions.html) states that it can take logits, which it calls "event log **probabilities**". I think this should say log odds instead. This is minor, but the terminology around logits is confusing enough already. | module: docs,triaged | low | Minor |
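The distinction matters because logits are unnormalized: shifting all of them by a constant yields the same distribution, which true log-probabilities would not allow. A quick illustration in plain Python, independent of PyTorch:

```python
import math

def softmax(logits):
    """Map arbitrary real scores (logits) to probabilities.

    Subtracting the max first is the standard numerical-stability trick;
    it also shows that logits are only defined up to an additive constant."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```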
402,505,593 | pytorch | Unexpected behavior of jit.trace when PYTORCH_JIT=0 | One of the biggest uses of tracing is to save a snapshot of a computation graph. However, when `PYTORCH_JIT=0` (debug mode on), the input function/module of `jit.trace` is simply returned, causing undesired behavior when users expect a snapshot is saved. E.g., the following code snippets behave differently when the flag is set/unset:
1. Save a training graph & an eval graph:
```py
net.train()
saved_train = torch.jit.trace(net, ...)
net.eval()
saved_eval = torch.jit.trace(net, ...)
# and proceed to use saved_train and saved_eval
# assuming they are fixed graphs corresponding to the two modes
```
2. Save a graph with a set of substituted weights:
```py
original_ws = [conv.w for conv in net.convs]
# substitute with new weights
for new_w, conv in zip(new_ws, net.convs):
conv.w = new_w
graph_with_new_weights = jit.trace(net, ...)
for original_w, conv in zip(original_ws, net.convs):
conv.w = original_w
```
3. Crazy things like `read_next_network_from_disk_and_run()`.
The "ideal" solution would be to save a copy of everything the traced function can possibly access, but this approach is obviously not practical (we would need to save the entire internet if the function runs a file downloaded from a random URL). After a discussion with @apaszke, we think it makes sense to implement the following best-effort mechanism for `jit.trace` when `PYTORCH_JIT=0` (JIT is disabled):
1. A `jit.trace(fn, ...)` call will issue a warning.
2. It will return something that contains both the traced (unoptimized) graph and the original `fn`.
3. When that thing is called, it calls the original `fn` with the given new inputs, traces the call, and raises if the graph and/or leaf nodes that aren't inputs are different from those of the saved graph.
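A pure-Python caricature of this mechanism (hypothetical, no real JIT involved): record the behaviour at "trace" time, always re-run the original `fn`, and raise when the recorded behaviour no longer holds.

```python
def debug_trace(fn, *example_args):
    """Sketch of the proposed PYTORCH_JIT=0 fallback (names invented).

    At "trace" time we record fn's output for the example inputs; later
    calls always execute the original fn, and a repeat of the example
    inputs is checked against the recording to detect drift."""
    recorded = fn(*example_args)

    def traced(*args):
        out = fn(*args)
        if args == example_args and out != recorded:
            raise RuntimeError("traced function diverged from its recording")
        return out

    return traced
```

With something like this in place, code that flips hidden state after tracing would fail loudly instead of silently producing different results.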
It's worth noting that even with this mechanism, there is still a difference w.r.t. when `fn` is run (see table below), but that's really unavoidable. We should make this really clear in the docs and the warning message.
| |`PYTORCH_JIT=1` (default) | Current `PYTORCH_JIT=0` | Proposed `PYTORCH_JIT=0` |
| :-: |:----------: |:-------------:| :-----:|
| `traced = jit.trace(fn,...)` | `fn(...)` is called | - | `fn(...)` is called |
| `traced(...)` | - | `fn(...)` is called | `fn(...)` is called | | oncall: jit,low priority | low | Critical |
402,512,389 | pytorch | ProcessGroupGlooTest.test_gather_stress is flaky | Running [this pytorch-linux-trusty-py3.6-gcc5.4 CI docker image](308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-trusty-py3.6-gcc5.4:282-8f19dfe947a78822ca7bf490d2f375cf2800c0d0) locally, I occasionally get
```
test_gather_stress (__main__.ProcessGroupGlooTest) ... Process process 3:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "test_c10d.py", line 471, in _run
getattr(self, self.id().split(".")[2])()
File "test_c10d.py", line 437, in wrapper
fn(self)
File "test_c10d.py", line 985, in test_gather_stress
self._test_gather_stress(inputs, lambda t: t.clone())
File "test_c10d.py", line 973, in _test_gather_stress
work_handle.wait()
RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/unbound_buffer.cc:119] Timed out waiting 1000ms for send operation to complete
Process process 2:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "test_c10d.py", line 471, in _run
getattr(self, self.id().split(".")[2])()
File "test_c10d.py", line 437, in wrapper
fn(self)
File "test_c10d.py", line 985, in test_gather_stress
self._test_gather_stress(inputs, lambda t: t.clone())
File "test_c10d.py", line 973, in _test_gather_stress
work_handle.wait()
RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/unbound_buffer.cc:119] Timed out waiting 1000ms for send operation to complete
Process process 0:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "test_c10d.py", line 471, in _run
getattr(self, self.id().split(".")[2])()
File "test_c10d.py", line 437, in wrapper
fn(self)
File "test_c10d.py", line 985, in test_gather_stress
self._test_gather_stress(inputs, lambda t: t.clone())
File "test_c10d.py", line 973, in _test_gather_stress
work_handle.wait()
RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/unbound_buffer.cc:119] Timed out waiting 1000ms for send operation to complete
Process process 1:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "test_c10d.py", line 471, in _run
getattr(self, self.id().split(".")[2])()
File "test_c10d.py", line 437, in wrapper
fn(self)
File "test_c10d.py", line 985, in test_gather_stress
self._test_gather_stress(inputs, lambda t: t.clone())
File "test_c10d.py", line 973, in _test_gather_stress
work_handle.wait()
RuntimeError: [/var/lib/jenkins/workspace/third_party/gloo/gloo/transport/tcp/pair.cc:534] Read error [127.0.0.1]:37100: Connection reset by peer
FAIL
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang | oncall: distributed,triaged,module: flaky-tests,module: c10d | low | Critical |
402,526,324 | pytorch | caffe2::CudnnConvOp::RunOnDevice() fails on Squeezenet | ## 🐛 Bug
`caffe2::CudnnConvOp::RunOnDevice()` fails on Squeezenet
## To Reproduce
Steps to reproduce the behavior:
1. Unzip and copy [test_trt.zip](https://github.com/pytorch/pytorch/files/2790075/test_trt.zip) to `caffe2/python/trt/`
2. Run the `test_trt.TensorRTTransformTest.test_squeezenet_core` test:
```
Error
Traceback (most recent call last):
File "/home/snikolaev/anaconda3/lib/python3.6/unittest/case.py", line 59, in testPartExecutor
yield
File "/home/snikolaev/anaconda3/lib/python3.6/unittest/case.py", line 605, in run
testMethod()
File "/home/snikolaev/pytorch2/caffe2/python/trt/test_trt.py", line 668, in test_squeezenet_core
workspace.RunNet(pred_net.name)
File "/home/snikolaev/anaconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 236, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/home/snikolaev/anaconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 197, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at conv_op_cudnn.cc:522] X.dim() >= 3 && X.dim() <= 5.
Error from operator:
input: "data" input: "conv1_w" input: "conv1_b" output: "conv1" type: "Conv" arg { name: "stride" i: 2 } arg { name: "pad" i: 0 } arg { name: "kernel" i: 3 } device_option { device_type: 1 device_id: 0 } engine: "CUDNN"frame #0: <unknown function> + 0x3e075 (0x7f3ee48fc075 in /home/snikolaev/anaconda3/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libc10.so)
frame #1: std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>::operator()() const + 0x4c (0x7f3ee48fc504 in /home/snikolaev/anaconda3/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libc10.so)
frame #2: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0x57 (0x7f3ee48fbbf4 in /home/snikolaev/anaconda3/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libc10.so)
frame #3: bool caffe2::CudnnConvOp::DoRunWithType<float, float, float, float>() + 0xf9 (0x7f3ee8b43d25 in /home/snikolaev/anaconda3/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libcaffe2_gpu.so)
frame #4: caffe2::CudnnConvOp::RunOnDevice() + 0x4e (0x7f3ee8b3c6c2 in /home/snikolaev/anaconda3/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libcaffe2_gpu.so)
``` | caffe2 | low | Critical |
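The enforce failure in the traceback above checks that the convolution input blob has between 3 and 5 dimensions (i.e., includes the batch axis, e.g. NCHW for a 2-D conv). A minimal pure-Python sketch of the same rank check — `check_conv_input_rank` is a hypothetical helper, not caffe2 API:

```python
def check_conv_input_rank(shape):
    """Mirror the caffe2 enforce `X.dim() >= 3 && X.dim() <= 5`:
    conv inputs must be rank 3-5, so a rank-2 blob (e.g. a flattened
    image with the batch axis dropped) fails exactly like in the log."""
    if not (3 <= len(shape) <= 5):
        raise ValueError(
            "X.dim() >= 3 && X.dim() <= 5 violated: got rank %d" % len(shape))
    return True

# A typical Squeezenet-style input blob: N=1, C=3, H=227, W=227.
check_conv_input_rank((1, 3, 227, 227))  # passes
```

If the `data` blob fed to the net lost its batch dimension somewhere in the TensorRT transform, this is the shape mismatch that would trip the enforce.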
402,527,097 | TypeScript | Js file type inference error |
**TypeScript Version:** 3.2.2
**Code**
**person.js**
```javascript
let student = function () {
this.age=12;
}
let person = {};
person.name=22;
student.prototype=person;
let tom = new student();
exports.tom =tom;
```
**test.ts**
```javascript
import * as person from './person';
person.tom.name
```
**Expected behavior:** `person.tom.name` is not an error
**Actual behavior:**

| Suggestion,In Discussion,checkJs | low | Critical |
402,572,312 | godot | When zoom is big, text editor waste a lot of space and some elements aren't scaled | **Godot version:**
3.1 beta 2
**OS/device including version:**
Windows 10
**Issue description:**
When the zoom level in the text editor is large:
1. The editor wastes a lot of space showing the row numbers
2, 3. The tab markers and code-block icons are not scaled

| enhancement,topic:editor,confirmed | low | Major |
402,593,761 | rust | Self-contradictory error message about Fn types on beta/nightly | https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=e973951415f5092af183ea13d11177cf
```
struct Foo<T, F: Fn(&T, &T) -> T> {
value: T,
func: F
}
fn main() {
let lambda = |&x, &y| x + y;
let foo = Foo {
value: 5 as i32,
func: lambda
};
}
```
```
Compiling playground v0.0.1 (/playground)
error[E0308]: mismatched types
--> src/main.rs:8:15
|
8 | let foo = Foo {
| ^^^ one type is more general than the other
|
= note: expected type `std::ops::FnOnce<(&i32, &i32)>`
found type `std::ops::FnOnce<(&i32, &i32)>`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0308`.
error: Could not compile `playground`.
To learn more, run the command again with --verbose.
```
See also https://stackoverflow.com/questions/54341465/rust-expected-type-error-prints-mismatched-types-that-are-exactly-the-same?noredirect=1 | A-type-system,C-enhancement,A-diagnostics,A-lifetimes,T-compiler,T-types | low | Critical |
402,597,541 | vue | transition-group replacing group flicker | ### Version
2.5.22
### Reproduction link
[https://codesandbox.io/s/y3910wr9j9](https://codesandbox.io/s/y3910wr9j9)
### Steps to reproduce
Click the change button in the link, no matter if animation is on or off (changing the animation name) - there is a brief flicker where both groups are present.
This is barely visible in this scenario, but on a full page with more rows it's a lot more visible.
(Might need to click multiple times to notice)
### What is expected?
Groups would transition "atomically" without having a moment where both groups are present
### What is actually happening?
Both groups are present for a moment
---
The only way to solve this is to remove the transition-group component completely; not even changing the name of the transition to null or something that doesn't match works.
Happened when I was working on a data table and was using the move transition for sorting, and then when replacing rows completely I saw this flicker and couldn't get around it easily.
<!-- generated by vue-issues. DO NOT REMOVE --> | has workaround | low | Major |
402,603,307 | go | time: one-digit hour is accepted for 15:04:05 template |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.4 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What did you do?
https://play.golang.org/p/EBxIIXatR1Y
### What did you expect to see?
I expected to see an error when parsing the time "3:04:05" with layout "15:04:05", in the same way that it fails when parsing the minutes ("03:4:05") or the seconds ("03:04:5").
### What did you see instead?
No error. | NeedsInvestigation | low | Critical |
402,617,423 | go | time: extraneous information in Parse error message for interior element missing leading zero | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.4 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What did you do?
Parsing the time "03:4:05" with layout "15:04:05".
### What did you expect to see?
The last part of the error should read: cannot parse "4" as "04"
### What did you see instead?
cannot parse "4:05" as "04" | NeedsInvestigation | low | Critical |
402,623,511 | flutter | bottom_sheet.dart - expose optional parameters to control modal dismiss | I would like to suggest exposing optional parameters for the `showModalBottomSheet` function
- `enableDrag: true` (when false, the drag to close will be disabled)
- `barrierDismissible: true` (when false, tapping outside will not close the modal)
- `tapDismissible: true` (when false, tapping inside the modal will not close the modal)
This will allow more customization of the bottom-sheet modal when the developer needs an option to dismiss it only in certain cases.
I can make a PR if you'll give me a green light | c: new feature,framework,f: material design,P2,team-design,triaged-design | low | Major |
402,624,550 | angular | Duplicate views |
# 🐞 bug report
### Affected Package
I think that the issue is caused by package **@angular/core**
### Is this a regression?
Probably not. I've tried it only in **Angular >=7.0.0**
### Description
The view of the child component is duplicated.
If you run `detectChanges()` manually in the root (app) component and one of the child components has an incomplete observable, its view will be displayed twice. It seems that Angular can't detach (or remove) the view while an observable is still in progress. It only happens when the child component is rendered via the routing process, and is not reproduced if the parent component contains it in the template directly.
## 🔬 Minimal Reproduction
[https://stackblitz.com/edit/angular-8wr99p?file=src%2Fapp%2Fapp.component.ts](https://stackblitz.com/edit/angular-8wr99p?file=src%2Fapp%2Fapp.component.ts)
1. _app.ts_
```typescript
ngOnInit() {
this.loaderService.getLoaderObservable().subscribe(show => {
this.showLoader = show;
this.changeDetectionRef.detectChanges();
});
}
```
2. _info-child-1.ts_
```typescript
constructor(httpClient: HttpClient, loaderService: LoaderService) {
loaderService.show();
httpClient.get('https://raw.githubusercontent.com/ag-grid/ag-grid-docs/master/src/olympicWinners.json')
.subscribe(response => {
loaderService.hide();
});
}
```
3. The result

## 🌍 Your Environment
Windows 10
**Angular Version:**
<pre><code>
Angular CLI: 7.0.7
Node: 10.0.0
OS: win32 x64
Angular: 7.2.2
... common, compiler, core, forms, platform-browser
... platform-browser-dynamic, router
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.10.7
@angular-devkit/build-angular 0.10.7
@angular-devkit/build-optimizer 0.10.7
@angular-devkit/build-webpack 0.10.7
@angular-devkit/core 7.0.7
@angular-devkit/schematics 7.0.7
@angular/cli 7.0.7
@angular/compiler-cli 7.0.4
@angular/language-service 7.0.4
@ngtools/webpack 7.0.7
@schematics/angular 7.0.7
@schematics/update 0.10.7
rxjs 6.3.0
typescript 3.1.6
webpack 4.19.1
</code></pre>
| type: bug/fix,freq1: low,area: router,state: confirmed,router: config matching/activation/validation,P4 | low | Critical |
402,653,498 | kubernetes | Avoid partial volume binding scenario |
**What happened**:
If bindAPIUpdate binds pod volumes only partially, the rest of the assumed volumes will be reverted and can be used by another pod. Example illustrated by @msau42:
- Say if node1 has 2 PVs and node2 has 1 pv
- And pod1 has 2 PVCs and pod2 has 1 PVC
- It assumes the 2 PVs on node1 and goes to bind pod1
- It binds pv1 successfully but fails pv2
- bindAPIUpdate binds pv1 successfully, but fails to bind pv2 and reverts its assumed cache
- Then it's possible that pod2 can come in, take pv2. Now pod1 is stuck forever
If this happens, manual intervention is required.
**What you expected to happen**:
No partial volume binding scenario.
ref: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md#binding-multiple-pvcs-in-one-transaction | kind/bug,sig/storage,lifecycle/frozen | low | Critical |
402,665,041 | flutter | Failed assert in Widget Inspector | @jacob314 I'm not sure what I did here, but I was running the Gallery from the console, had devtools (built JS version) open and was clicking toggle buttons when I got the red screen in the simulator and this dumped to the console (where I'd run `flutter run`).
```
🔥 To hot reload changes while running, press "r". To hot restart (and rebuild state), press "R".
An Observatory debugger and profiler on iPhone XS Max is available at: http://127.0.0.1:8182/
For a more detailed help message, press "h". To detach, press "d"; to quit, press "q".
flutter: ══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════
flutter: The following assertion was thrown building Title(title: "Flutter Gallery", color:
flutter: MaterialColor(primary value: Color(0xff9e9e9e))):
flutter: 'package:flutter/src/widgets/widget_inspector.dart': Failed assertion: line 2181 pos 12:
flutter: 'WidgetInspectorService.instance.selectionChangedCallback == null': is not true.
flutter:
flutter: Either the assertion indicates an error in the framework itself, or we should provide substantially
flutter: more information in this error message to help you determine and fix the underlying cause.
flutter: In either case, please report this assertion by filing a bug on GitHub:
flutter: https://github.com/flutter/flutter/issues/new?template=BUG.md
flutter:
flutter: When the exception was thrown, this was the stack:
flutter: #2 _WidgetInspectorState.initState (package:flutter/src/widgets/widget_inspector.dart:2181:12)
flutter: #3 StatefulElement._firstBuild (package:flutter/src/widgets/framework.dart:3830:58)
flutter: #4 ComponentElement.mount (package:flutter/src/widgets/framework.dart:3696:5)
flutter: #5 Element.inflateWidget (package:flutter/src/widgets/framework.dart:2950:14)
flutter: #6 Element.updateChild (package:flutter/src/widgets/framework.dart:2753:12)
flutter: #7 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #8 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #9 StatelessElement.update (package:flutter/src/widgets/framework.dart:3781:5)
flutter: #10 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #11 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #12 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #13 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #14 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #15 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #16 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #17 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #18 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #19 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:4867:14)
flutter: #20 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #21 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #22 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #23 StatefulElement.update (package:flutter/src/widgets/framework.dart:3878:5)
flutter: #24 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #25 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #26 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #27 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #28 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #29 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #30 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #31 StatefulElement.update (package:flutter/src/widgets/framework.dart:3878:5)
flutter: #32 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #33 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #34 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #35 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #36 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #37 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #38 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #39 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #40 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #41 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #42 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #43 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #44 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #45 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #46 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #47 StatelessElement.update (package:flutter/src/widgets/framework.dart:3781:5)
flutter: #48 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #49 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #50 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #51 StatefulElement.update (package:flutter/src/widgets/framework.dart:3878:5)
flutter: #52 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #53 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #54 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #55 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #56 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #57 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #58 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #59 StatefulElement.update (package:flutter/src/widgets/framework.dart:3878:5)
flutter: #60 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #61 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #62 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #63 ProxyElement.update (package:flutter/src/widgets/framework.dart:3990:5)
flutter: #64 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #65 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #66 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #67 StatefulElement.update (package:flutter/src/widgets/framework.dart:3878:5)
flutter: #68 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #69 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #70 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #71 StatelessElement.update (package:flutter/src/widgets/framework.dart:3781:5)
flutter: #72 Element.updateChild (package:flutter/src/widgets/framework.dart:2742:15)
flutter: #73 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3732:16)
flutter: #74 Element.rebuild (package:flutter/src/widgets/framework.dart:3547:5)
flutter: #75 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:2286:33)
flutter: #76 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&SemanticsBinding&RendererBinding&WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:685:20)
flutter: #77 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&SemanticsBinding&RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:219:5)
flutter: #78 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:990:15)
flutter: #79 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:930:9)
flutter: #80 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:842:5)
flutter: #81 _invoke (dart:ui/hooks.dart:173:13)
flutter: #82 _drawFrame (dart:ui/hooks.dart:162:3)
flutter: (elided 2 frames from class _AssertionError)
flutter: ════════════════════════════════════════════════════════════════════════════════════════════════════
```

| c: crash,framework,f: inspector,P2,team-framework,triaged-framework | low | Critical |
402,722,428 | create-react-app | Coverage seems to run twice each test | ### Is this a bug report?
Yes
### Did you try recovering your dependencies?
Yes
### Which terms did you search for in User Guide?
test jest coverage
### Environment
Environment Info:
System:
OS: Windows 7
CPU: x64 Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
Binaries:
Yarn: 1.7.0 - C:\Program Files (x86)\Yarn\bin\yarn.CMD
npm: 6.4.1 - C:\Program Files\nodejs\npm.CMD
Browsers:
Internet Explorer: 11.0.9600.19236
npmPackages:
react: 16.7.0 => 16.7.0
react-dom: 16.7.0 => 16.7.0
react-scripts: 2.1.3 => 2.1.3
npmGlobalPackages:
create-react-app: Not Found
### Steps to Reproduce
npx create-react-app my-app-test
cd my-app-test\
npm run test -- --coverage
### Expected Behavior

### Actual Behavior

Actually, I had this problem last year and was able to resolve it: I had Jest in devDependencies, which seems to be problematic.
But now I have the same problem whatever the configuration. For example, with just a fresh install I get this problem locally on my machine, but also in VSTS during the build step... And I'm not the only one in my company. | issue: needs investigation | low | Critical |
402,726,117 | pytorch | Pytorch with CUDA aware OpenMPI for Infiniband not working with HCOLL and MXM | ## Issue description
Dear all,
I try to build PyTorch with CUDA aware OpenMPI working with Infiniband. I'm using a Mellanox Infiniband card. When running this test script
```
$ cat scatter-min
#!/usr/bin/env python
import numpy as np
import torch as to
from torch import distributed as dist
# Initialization
dist.init_process_group('mpi')
rank = dist.get_rank()
world_size = dist.get_world_size()
gpu = 1
D = 5
N = 50
N_reshape = int(50 / 5)
N_chunk = int(N_reshape / world_size)
no_GPUs = to.cuda.device_count()
if gpu:
dtype_f = to.cuda.FloatTensor
to.cuda.set_device(0)
else:
dtype_f = to.FloatTensor
dist.barrier()
# Create some array to be split up and scattered
if rank == 0:
A = to.tensor(np.arange(N).reshape(N_reshape, D)).type(dtype_f)
A_chunk = to.chunk(A, world_size, dim=0)
print("Array to be scattered from rank 0 is \n%s" % (A))
else:
A_chunk = to.tensor([])
dist.barrier()
my_A = to.zeros(N_chunk, D).type(dtype_f)
for r in range(world_size):
if r == rank:
print("Before Scatter: Rank %i has \n%s" % (rank, my_A))
dist.barrier()
# Scatter data into my_A arrays
dist.scatter(my_A, src=0, scatter_list=list(A_chunk))
for r in range(world_size):
if r == rank:
print("After Scatter: Rank %i has \n%s" % (rank, my_A))
dist.barrier()
```
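The data layout in the script above — 50 values reshaped to 10×5 rows, then split evenly across ranks — can be checked without torch. A plain-Python sketch, where the hypothetical `chunk_rows` stands in for `torch.chunk(A, world_size, dim=0)` (note torch.chunk handles uneven splits differently; here 10 rows divide evenly by 2 ranks):

```python
def chunk_rows(rows, world_size):
    """Split a list of rows into world_size contiguous chunks along
    dim 0, the way the script chunks the 10x5 tensor for scattering."""
    per_rank = len(rows) // world_size
    return [rows[r * per_rank:(r + 1) * per_rank] for r in range(world_size)]

rows = [list(range(i * 5, i * 5 + 5)) for i in range(10)]  # 10 rows of 5
chunks = chunk_rows(rows, 2)
# rank 0 receives rows 0-4 (values 0..24), rank 1 rows 5-9 (values 25..49)
```

This matches the "After Scatter" output shown below for the CPU run, confirming the failure is in the transport, not the chunking logic.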
I run into a fatal error with the following output (note that it has to be executed with -n 2):
```
$ mpirun -n 2 ./scatter-min
Array to be scattered from rank 0 is
tensor([[ 0., 1., 2., 3., 4.],
[ 5., 6., 7., 8., 9.],
[10., 11., 12., 13., 14.],
[15., 16., 17., 18., 19.],
[20., 21., 22., 23., 24.],
[25., 26., 27., 28., 29.],
[30., 31., 32., 33., 34.],
[35., 36., 37., 38., 39.],
[40., 41., 42., 43., 44.],
[45., 46., 47., 48., 49.]], device='cuda:0')
Before Scatter: Rank 0 has
tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]], device='cuda:0')
Before Scatter: Rank 1 has
tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]], device='cuda:0')
[gold04:14866:0] Caught signal 11 (Segmentation fault)
==== backtrace ====
2 0x000000000006858c mxm_handle_error() /scrap/jenkins/workspace/hpc-power-pack/label/r-vmb-rhel7-u4-x86-64-MOFED-CHECKER/hpcx_root/src/hpcx-v2.1.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.4-x86_64/mxm-hpcx-v2.1/src/mxm/util/debug/debug.c:641
3 0x0000000000068adc mxm_error_signal_handler() /scrap/jenkins/workspace/hpc-power-pack/label/r-vmb-rhel7-u4-x86-64-MOFED-CHECKER/hpcx_root/src/hpcx-v2.1.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.4-x86_64/mxm-hpcx-v2.1/src/mxm/util/debug/debug.c:616
4 0x0000000000036280 killpg() ??:0
5 0x000000000015651a __memcpy_ssse3_back() :0
6 0x000000000005ac4e mxm_proto_set_data_buf_inline() /scrap/jenkins/workspace/hpc-power-pack/label/r-vmb-rhel7-u4-x86-64-MOFED-CHECKER/hpcx_root/src/hpcx-v2.1.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.4-x86_64/mxm-hpcx-v2.1/src/mxm/proto/proto_ops.c:271
7 0x0000000000070085 mxm_shm_channel_send() /scrap/jenkins/workspace/hpc-power-pack/label/r-vmb-rhel7-u4-x86-64-MOFED-CHECKER/hpcx_root/src/hpcx-v2.1.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.4-x86_64/mxm-hpcx-v2.1/src/mxm/tl/shm/shm_channel.c:321
8 0x0000000000062030 mxm_proto_conn_send_op() /scrap/jenkins/workspace/hpc-power-pack/label/r-vmb-rhel7-u4-x86-64-MOFED-CHECKER/hpcx_root/src/hpcx-v2.1.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.4-x86_64/mxm-hpcx-v2.1/src/mxm/proto/proto_send.c:47
9 0x000000000017e2ed mca_pml_yalla_send() /usr/local/src/openmpi/ompi/mca/pml/yalla/pml_yalla.c:514
10 0x00000000000abfbb ompi_coll_base_scatter_intra_basic_linear() /usr/local/src/openmpi/ompi/mca/coll/base/coll_base_scatter.c:243
11 0x000000000009339b PMPI_Scatter() //usr/local/src/openmpi/ompi/mpi/c/profile/pscatter.c:164
12 0x000000000063a670 _ZNSt17_Function_handlerIFvRSt10unique_ptrIN4c10d9WorkEntryESt14default_deleteIS2_EEEZNS1_15ProcessGroupMPI7scatterERSt6vectorIN2at6TensorESaISB_EERS9_ISD_SaISD_EERKNS1_14ScatterOptionsEEUlS6_E_E9_M_invokeERKSt9_Any_dataS6_() ProcessGroupMPI.cpp:0
13 0x0000000000637655 _ZN4c10d15ProcessGroupMPI7runLoopEv() ??:0
14 0x00000000000b8678 execute_native_thread_routine_compat() /opt/conda/conda-bld/compilers_linux-64_1534514838838/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libstdc++-v3/src/c++11/thread.cc:94
15 0x0000000000007dd5 start_thread() pthread_create.c:0
16 0x00000000000feb3d __clone() ??:0
===================
```
When gpu = 0 is set in line 11 of the test script scatter-min, it runs as expected:
```
$ mpirun -n 2 ./scatter-min
Array to be scattered from rank 0 is
tensor([[ 0., 1., 2., 3., 4.],
[ 5., 6., 7., 8., 9.],
[10., 11., 12., 13., 14.],
[15., 16., 17., 18., 19.],
[20., 21., 22., 23., 24.],
[25., 26., 27., 28., 29.],
[30., 31., 32., 33., 34.],
[35., 36., 37., 38., 39.],
[40., 41., 42., 43., 44.],
[45., 46., 47., 48., 49.]])
Before Scatter: Rank 0 has
tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
Before Scatter: Rank 1 has
tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
After Scatter: Rank 0 has
tensor([[ 0., 1., 2., 3., 4.],
[ 5., 6., 7., 8., 9.],
[10., 11., 12., 13., 14.],
[15., 16., 17., 18., 19.],
[20., 21., 22., 23., 24.]])
After Scatter: Rank 1 has
tensor([[25., 26., 27., 28., 29.],
[30., 31., 32., 33., 34.],
[35., 36., 37., 38., 39.],
[40., 41., 42., 43., 44.],
[45., 46., 47., 48., 49.]])
```
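Since the CPU run above succeeds, one hedged workaround until the CUDA-aware transport is fixed (a pattern, not an official API) is to stage tensors through host memory around the collective. A sketch assuming torch-like tensors with `.cpu()` and `.copy_()` and the same `dist.scatter` signature the script uses:

```python
def scatter_via_cpu(dist_mod, recv_buf, scatter_list, src=0, rank=0):
    """Stage a scatter through CPU tensors: copy inputs to host, run
    the collective over host buffers (avoiding the MXM/HCOLL CUDA
    path), then copy the result back onto the original device buffer."""
    cpu_recv = recv_buf.cpu()
    cpu_list = [t.cpu() for t in scatter_list] if rank == src else []
    dist_mod.scatter(cpu_recv, src=src, scatter_list=cpu_list)
    recv_buf.copy_(cpu_recv)  # device-side copy back onto the CUDA buffer
    return recv_buf
```

This trades the extra host/device copies for stability; once the segfault in `mxm_proto_set_data_buf_inline` is resolved, the staging can be dropped again.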
This only happens when OpenMPI is compiled with MXM and HCOLL for CUDA; building with only UCX and KNEM already seems to be sufficient. OpenMPI was built the following way:
```
# Adapt contrib/platform/mellanox/optimized, otherwise libopen-pal.la has no make rule
$ sed -i '/with_devel_headers/c with_devel_headers=no' /cluster/src/programs/openmpi/3.0.0/contrib/platform/mellanox/optimized
# using environment modules pointing to my locally installed and compiled software
$ module add mellanox_hpc-x/4.3-1.0.1.0/knem mellanox_hpc-x/4.3-1.0.1.0/ucx-cuda mellanox_hpc-x/4.3-1.0.1.0/hcoll-cuda mellanox_hpc-x/4.3-1.0.1.0/mxm gcc/4.8.5 cuda/9.1.85 nccl/2.1.15-cuda-9.1 libfabric/1.6 valgrind/3.13.0 zlib/1.2.8 libibmad/1.3.9
$ mellanox_autodetect=yes mellanox_threads=yes ./configure --prefix=/usr/local/openmpi --localstatedir=/var --with-platform=contrib/platform/mellanox/optimized --enable-static --with-zlib=/usr/local/zlib --with-cuda=/usr/local/cuda-9.1 --with-jdk-dir=/usr/local/java --with-slurm=/usr/local/slurm --with-pmi=/usr/local/slurm --with-valgrind=/usr/local/valgrind --with-libfabric
[...]
Open MPI configuration:
-----------------------
Version: 3.0.0
Build MPI C bindings: yes
Build MPI C++ bindings (deprecated): no
Build MPI Fortran bindings: mpif.h, use mpi
MPI Build Java bindings (experimental): no
Build Open SHMEM support: yes
Debug build: no
Platform file: contrib/platform/mellanox/optimized
Miscellaneous
-----------------------
CUDA support: yes
Transports
-----------------------
Cray uGNI (Gemini/Aries): no
Intel Omnipath (PSM2): no
Intel SCIF: no
Intel TrueScale (PSM): no
Mellanox MXM: yes
Open UCX: yes
OpenFabrics Libfabric: yes
OpenFabrics Verbs: yes
Portals4: no
Shared memory/copy in+copy out: yes
Shared memory/Linux CMA: yes
Shared memory/Linux KNEM: yes
Shared memory/XPMEM: no
TCP: yes
Resource Managers
-----------------------
Cray Alps: no
Grid Engine: no
LSF: no
Moab: no
Slurm: yes
ssh/rsh: yes
Torque: no
[...]
$ make -j all
$ make check
# No errors
$ make install
```
Compiling mpi4py and installing it following the pytorch build command results in
```
$ module add openmpi/3.0.0
$ TORCH_CUDA_ARCH_LIST="6.1;6.0;5.2;3.5" USE_CUDNN=1 USE_DISTRIBUTED=1 USE_FFMPEG=1 NO_MKLDNN=1 CUDA_HOME=$CUDA_ROOT python setup.py install
[...]
-- ******** Summary ********
-- General:
-- CMake version : 3.12.2
-- CMake command : /usr/local/python/bin/cmake
-- System : Linux
-- C++ compiler : /usr/local/gcc/bin/c++
-- C++ compiler version : 4.8.5
-- BLAS : MKL
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;ONNX_NAMESPACE=onnx_torch;USE_GCC_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
-- CMAKE_PREFIX_PATH : /usr/local/python/lib/python2.7/site-packages
-- CMAKE_INSTALL_PREFIX : /usr/local/src/pytorchtorch/lib/tmp_install
--
-- TORCH_VERSION : 1.0.0
-- CAFFE2_VERSION : 1.0.0
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_ATEN_ONLY : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 2.7.15
-- Python executable : /usr/local/python/bin/python
-- Pythonlibs version : 2.7.15
-- Python library : /usr/local/python/lib/libpython2.7.so.1.0
-- Python includes : /usr/local/python/include/python2.7
-- Python site-packages: lib/python2.7/site-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ASAN : OFF
-- USE_CUDA : 1
-- CUDA static link : 0
-- USE_CUDNN : ON
-- CUDA version : 9.1
-- cuDNN version : 7.4.1
-- CUDA root directory : /usr/local/cuda-9.1
-- CUDA library : /usr/lib64/libcuda.so
-- cudart library : /usr/local/cuda-9.1/lib64/libcudart_static.a;-pthread;dl;/usr/lib64/librt.so
-- cublas library : /usr/local/cuda-9.1/lib64/libcublas.so;/usr/local/cuda-9.1/lib64/libcublas_device.a
-- cufft library : /usr/local/cuda-9.1/lib64/libcufft.so
-- curand library : /usr/local/cuda-9.1/lib64/libcurand.so
-- cuDNN library : /usr/local/cudnn7.4-cuda9.1lib64/libcudnn.so.7
-- nvrtc : /usr/local/cuda-9.1/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda-9.1/include
-- NVCC executable : /usr/local/cuda-9.1/bin/nvcc
-- CUDA host compiler : /usr/local/gcc/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : 0
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : 0
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_MKL : ON
-- USE_MKLDNN : OFF
-- USE_MOBILE_OPENGL : OFF
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NNPACK : 1
-- USE_NUMPY : OFF
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : 1
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : ON
-- USE_MPI : ON
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : 1
-- Public Dependencies : Threads::Threads;caffe2::mkl
-- Private Dependencies : qnnpack;nnpack;cpuinfo;fp16;/usr/local/openmpi/lib/libmpi.so;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
[...]
```
with no error. As I said, when leaving out hcoll/cuda and mxm, scatter-min runs with gpu = 1. Are MXM and HCOLL/CUDA not supported by pytorch? Are they unnecessary for a pytorch installation with CUDA-aware OpenMPI for InfiniBand? I hope someone can answer this question; I know it's kind of specific.
Looking forward to your answers and best wishes,
fwillo
## System Info
```
$ python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
OS: Red Hat Enterprise Linux Server 7.4 (Maipo)
GCC version: (GCC) 4.8.5
CMake version: version 3.12.2
Python version: 2.7
Is CUDA available: N/A
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
Nvidia driver version: 390.12
cuDNN version: Could not collect (Should be v7.4.1.5 for cuda-9.1.85)
Versions of relevant libraries:
[pip] Could not collect
[conda] blas 1.0 mkl
[conda] magma-cuda91 2.3.0 1 pytorch
[conda] mkl 2019.1 144
[conda] mkl-include 2019.1 144
[conda] mkl-rt 11.1 p0
[conda] mkl-service 1.1.0 py27_p0
[conda] mkldnn 0.16.1 0 mingfeima
[conda] numpy 1.9.3 py27_p0 [mkl]
[conda] scipy 0.16.1 np19py27_p0 [mkl]
[conda] torch 1.0.0a0+db5d313 <pip>
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar @jiayisuse @agolynski | oncall: distributed,triaged,module: c10d,distributed-backlog | low | Critical |
402,777,627 | vscode | Find in files in non-existent folder highlights wrong input field on error | Issue Type: <b>Bug</b>
If I mistakenly enter a folder that does not exist in the "files to include" box for Find in Files and hit enter to search, the red error highlight ends up on the search expression input, not on the files to include input. As a side effect, the actual error message ("No folder in the workspace with name: ...") only shows up when the input focus is on the search expression field, not the files to include field that has the error.
Verified this still happens with all extensions disabled.
Example:

VS Code version: Code 1.30.2 (61122f88f0bf01e2ac16bdb9e1bc4571755f5bd8, 2019-01-07T22:49:48.319Z)
OS version: Linux x64 4.19.0-1-amd64
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz (8 x 2809)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: disabled_software<br>video_decode: unavailable_software<br>video_encode: unavailable_software<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|4, 4, 3|
|Memory (System)|15.55GB (0.15GB free)|
|Process Argv|--unity-launch|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (24)</summary>
Extension|Author (truncated)|Version
---|---|---
Bookmarks|ale|10.2.0
ng-template|Ang|0.1.11
markdown-preview-github-styles|bie|0.1.4
bracket-pair-colorizer-2|Coe|0.0.25
vscode-markdownlint|Dav|0.23.0
vscode-eslint|dba|1.8.0
gitlens|eam|9.4.1
EditorConfig|Edi|0.12.6
tslint|eg2|1.0.42
vscode-npm-script|eg2|0.3.5
prettier-vscode|esb|1.7.3
vscode-pull-request-github|Git|0.3.2
beautify|Hoo|1.4.7
bash-ide-vscode|mad|1.3.3
terraform|mau|1.3.7
Go|ms-|0.8.0
debugger-for-chrome|msj|4.11.1
vscode-docker|Pet|0.5.1
bash-debug|rog|0.3.3
vscode-coverage-gutters|rya|2.3.0
vscode-nginx|sha|0.6.0
html-preview-vscode|tht|0.2.5
reflow-paragraph|Tro|1.3.0
vscode-proto3|zxh|0.2.2
(1 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | bug,search,confirmed | low | Critical |
402,801,155 | pytorch | We're binding a bunch of crap to 'torch' namespace which shouldn't be there | Example:
```
>>> import torch
>>> torch.hinge_embedding_loss
<built-in method hinge_embedding_loss of type object at 0x7ff9c8948d20>
```
Official docs tell you to use `torch.nn.functional.hinge_embedding_loss` and for good reason: the `hinge_embedding_loss` in torch exposes a reduction argument that is an int, not a string. | triaged,better-engineering | low | Major |
402,820,115 | angular | routerLink directive always makes element focusable | # 🐞 bug report
### Affected Package
@angular/router
### Description
Issue https://github.com/angular/angular/issues/10895 requests that the `routerLink` directive should automatically add `tabindex` on the target element if not already present (implemented via https://github.com/angular/angular/pull/13094).
I understand that this behavior might be convenient in some cases, but it should still be configurable.
For example, in order to implement a custom menu widget it's a good practice that the focus always stays on the surrounding element (e.g. `<nav>`) while the user selects the target menu item with the arrow keys.
Right now, implementing this in a clean and straightforward way is not possible due to the feature implemented in the issue above. If `<a>` is used for the menu entries, the `href` injected by `routerLink` makes it focusable, and even if `<span>` is used instead, `routerLink` still injects a `tabindex=0` to always make it focusable.
When the user then clicks on a link, the surrounding `<nav>` element loses its focus as focus moves to the clicked link instead. Even setting `tabindex="-1"` only prevents tabbing to the link but still allows focus via mouse click.
There should either be a configuration parameter to prevent setting of `tabindex` or the feature should be reverted as `tabindex` can always be configured manually if necessary.
## 🔬 Minimal Reproduction
https://stackblitz.com/edit/angular-issue-repro2-qudebd
```html
<nav tabindex="0">
<ol>
<li>
<!-- becomes focusable because of href attribute -->
<a routerLink="bikes" routerLinkActive="active">Bikes</a>
</li>
<li>
<!-- becomes focusable because of tabindex attribute -->
<span routerLink="cars" routerLinkActive="active">Cars</span>
</li>
</ol>
</nav>
```
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 6.2.9
Node: 8.11.3
OS: darwin x64
Angular: 6.1.10
... animations, common, compiler, compiler-cli, core, forms
... http, language-service, platform-browser
... platform-browser-dynamic, router
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.8.9
@angular-devkit/build-angular 0.8.9
@angular-devkit/build-optimizer 0.8.9
@angular-devkit/build-webpack 0.8.9
@angular-devkit/core 0.8.9
@angular-devkit/schematics 0.8.9
@angular/cli 6.2.9
@ngtools/webpack 6.2.9
@schematics/angular 0.8.9
@schematics/update 0.8.9
rxjs 6.3.3
typescript 2.9.2
webpack 4.16.4
</code></pre>
**Anything else relevant?**
This can be reproduced as described in Chrome v71.0.3578.98.
In Safari the focus is only lost on the second link (i.e. where `<span>` is used).
| freq2: medium,area: router,router: directives,P4,bug | medium | Critical |
402,843,937 | react | Chrome Autofill overwrites values on controlled components |
**Do you want to request a *feature* or report a *bug*?**
Report a bug. Initially reported in https://github.com/mozilla-services/react-jsonschema-form/issues/1153
**What is the current behavior?**
Autofill overwrites existing values in fields when those fields are controlled components in React.
See https://jsfiddle.net/epicfaace/9p17e2qx/21/ -- to test this, add a "Saved Address" in the Chrome options.

**What is the expected behavior?**
Autofill does not overwrite existing fields. I've made a JSFiddle with a plain HTML form, which works with the expected behavior.
See https://jsfiddle.net/epicfaace/1my3f9n4/6/

**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
React 15.3.2
Chrome 71.0.3578.98
| Type: Bug,Component: DOM,Type: Breaking Change | low | Critical |
402,846,272 | pytorch | Should torch.arange take a layout parameter? | I see it in the docs... but it seems a bit questionable as to whether or not it can do anything reasonable here.
```
>>> torch.arange(10, layout=torch.sparse_coo)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: arange_out is not implemented for type torch.sparse.LongTensor
```
cc @nikitaved @pearu @cpuhrsch @IvanYashchuk @gchanan @mruberry | module: sparse,triaged,module: tensor creation,module: ux | low | Critical |
402,851,584 | pytorch | support `unique_indices` option for `unique` | ## 🚀 Feature
Requested by @gkioxari, it would bring the `unique` API closer to its NumPy equivalent. The CUDA implementation is not hard, but I haven't looked at the CPU side.
Maybe the scope of #15804 should be widened to also include this.
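For context, the requested option mirrors NumPy's `np.unique(..., return_index=True)`. A pure-Python sketch of the semantics (illustrative only, not the proposed implementation; output sorting, which `torch.unique` also supports, is omitted):

```python
def unique_with_indices(values):
    """Return the unique values in order of first appearance, together with
    the index of the first occurrence of each one (NumPy's return_index)."""
    first_seen = {}
    for i, v in enumerate(values):
        if v not in first_seen:
            first_seen[v] = i
    uniques = list(first_seen)
    indices = [first_seen[v] for v in uniques]
    return uniques, indices
```

For example, `unique_with_indices([3, 1, 3, 2, 1])` returns `([3, 1, 2], [0, 1, 3])`.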
cc @mruberry @rgommers @ptrblck, @gkioxari, @jcjohnson
| todo,feature,triaged,module: numpy | medium | Major |
402,879,466 | godot | Drawing a texture obtained with `load` in `_draw` always fails | Godot 3.0.6
Godot 3.1 beta 2
Windows 10 64 bits
I tried to use `load` to get a texture to draw in `_draw()`. It returns a non-null texture; however, when I try to draw it, it always fails.
```gdscript
extends Node2D
func _process(delta):
update()
func _draw():
var tex = load("res://notexture.png")
assert(tex != null)
draw_texture(tex, Vector2())
```
This spam-prints the following error when reaching `draw_texture()`:
```
ERROR: getornull: Condition ' !id_map.has(p_rid.get_data()) ' is true. returned: __null
At: ./core/rid.h:164
```
Project:
[DrawTexture.zip](https://github.com/godotengine/godot/files/2793431/DrawTexture.zip)
| topic:core,confirmed,documentation | low | Critical |
402,955,255 | godot | Black lines flickering on mesh surface during movement |
**Godot version:**
3.0.5
**OS/device including version:**
Mac OS X 10.14.2
Radeon Pro Vega 20 4080 MB
**Issue description:**
When composing a scene in GDScript by placing meshes aligned with each other, black lines appear on the surface of these meshes when under a DirectionalLight. These lines always appear on the surface of the mesh that faces the directional light, and are always horizontal (x-axis, never on the y or z axes).
Here are some screenshots of the lines:



[Link to video with the problem](https://i.imgur.com/munCy23.mp4)
**Steps to reproduce:**
Open minimal reproduction project and play to reproduce.
Or in your own project, place simple cube meshes through GDScript, a DirectionalLight facing the top, and move a camera affixed to a KinematicBody on the surface.
**Minimal reproduction project:**
[flicker.zip](https://github.com/godotengine/godot/files/2794162/flicker.zip)
| bug,topic:rendering,topic:3d | medium | Major |
402,955,586 | godot | [Bullet] None of the shapes in PhysicsServer (Bullet) are initializing their members | Godot 3.1 ab843b16984d8b62a1100d9b77d8b0dcdc4252bb
Following https://github.com/godotengine/godot/issues/25301, I decided to create an issue for this too.
I've been noticing it for a while, but never actually had a problem with it until now:
In `shape_bullet.cpp`, none of the shapes initialize their members. That means if you create a shape with a script using `PhysicsServer` directly, and add it to a body without calling `set_data`, the implementation is potentially creating a Bullet shape with garbage values.
In particular, the HeightmapShape ends up with absurd sizes and a null pointer, which should have made Bullet assert, but didn't because of #25301.
Does this API really require the user to call `set_data` before adding the shape because it has no defaults? I would assume not, because Godot's own physics backend does initialize them.
Should we give defaults to all shapes at the server level? (For heightmaps, a 1x1 square, since Bullet forbids a size of 0 and, due to the server's implementation, the shape is forced to create a Bullet collider.) | enhancement,topic:physics | low | Minor |
403,099,065 | godot | One-way collision shape does not work with KinematicBody2D | **Version:**
v3.1.beta2.official (Jan-2019)
**Issue description:**
While one-way collision shapes work fine with static bodies, they do not work with kinematic bodies
(or at least not with the move_and_slide functionality).
**Steps to reproduce:**
1. Make any kinematic body with a collision shape, let's say a KinematicBody2D.
2. Set the collision shape to be one-way (one direction only).
3. Collide it with something; you will see that it does not work. Specifically, it acts as if one-way collision were not enabled at all, so in the case of a 2D rectangle it collides from all 4 directions instead of just 1.
**Minimal reproduction project:**
As an example, here is a game that has a KinematicBody2D with a collision shape deliberately facing the wrong way. It is rotated to face the right, so it should not logically collide with the box, yet it still does.
[MinimalBugKinematicOneWay.zip](https://github.com/godotengine/godot/files/2795593/MinimalBugKinematicOneWay.zip) | bug,confirmed,topic:physics,topic:2d | low | Critical |
403,138,653 | TypeScript | unclear how to make function argument check produce the same errors as variable assignment | in the following code
```ts
type A = {
a?: number
b?: number
}
type B = {
a?: number
knownField?: number
}
type C = {
a?: A,
knownField?: number | [A, B] | A
}
const c: C = {
a: {
a: 1,
b: 1,
},
knownField: [{ a: 1 }, {
knownField: 1,
unknownField: 1,
}],
}
const fn = (c: C) => {}
fn({
a: {
a: 1,
b: 1,
},
knownField: [{ a: 1 }, {
knownField: 1,
unknownField: 1,
}],
})
```
- in the case of variable assignment it correctly highlights the `unknownField` and produces a nice error that says `Object literal may only specify known properties, and 'unknownField' does not exist in type 'B'.`
- but in the case of the same value passed as a function argument (of the same type) it highlights the top-level field and a different, less readable error `Type '({ a: number; } | { b: number; unknownField: number; })[]' is missing the following properties from type '[A, B]': 0, 1`
am I missing something? is there a way to make sure I get the same error and highlighting in both cases?
in typescript [playground](https://www.typescriptlang.org/play/#src=type%20A%20%3D%20%7B%0D%0A%20%20a%3F%3A%20number%0D%0A%20%20b%3F%3A%20number%0D%0A%7D%0D%0A%0D%0Atype%20B%20%3D%20%7B%0D%0A%20%20a%3F%3A%20number%0D%0A%20%20knownField%3F%3A%20number%0D%0A%7D%0D%0A%0D%0Atype%20C%20%3D%20%7B%0D%0A%20%20a%3F%3A%20A%2C%0D%0A%20%20knownField%3F%3A%20number%20%7C%20%5BA%2C%20B%5D%20%7C%20A%0D%0A%7D%0D%0A%0D%0Aconst%20c%3A%20C%20%3D%20%7B%0D%0A%20%20a%3A%20%7B%0D%0A%20%20%20%20a%3A%201%2C%0D%0A%20%20%20%20b%3A%201%2C%0D%0A%20%20%7D%2C%0D%0A%20%20knownField%3A%20%5B%7B%20a%3A%201%20%7D%2C%20%7B%0D%0A%20%20%20%20knownField%3A%201%2C%0D%0A%20%20%20%20unknownField%3A%201%2C%0D%0A%20%20%7D%5D%2C%0D%0A%7D%0D%0A%0D%0Aconst%20fn%20%3D%20(c%3A%20C)%20%3D%3E%20%7B%7D%0D%0A%0D%0Afn(%7B%0D%0A%20%20a%3A%20%7B%0D%0A%20%20%20%20a%3A%201%2C%0D%0A%20%20%20%20b%3A%201%2C%0D%0A%20%20%7D%2C%0D%0A%20%20knownField%3A%20%5B%7B%20a%3A%201%20%7D%2C%20%7B%0D%0A%20%20%20%20knownField%3A%201%2C%0D%0A%20%20%20%20unknownField%3A%201%2C%0D%0A%20%20%7D%5D%2C%0D%0A%7D)):
 | Needs Investigation | low | Critical |
403,147,696 | rust | Coherence can be bypassed by an indirect impl for a trait object | ## Comments
The check for manual `impl Object for Object` only makes sure there is no *direct* `impl Object for dyn Object` - it does not consider such indirect impls. Therefore, you can write a blanket `impl<T: ?Sized> Object for T` that conflicts with the builtin `impl Object for dyn Object`.
## Reproducer
*edit: minimal reproducer from https://github.com/rust-lang/rust/issues/57893#issuecomment-500250283*
```rust
trait Object<U> {
type Output;
}
impl<T: ?Sized, U> Object<U> for T {
type Output = U;
}
fn foo<T: ?Sized, U>(x: <T as Object<U>>::Output) -> U {
x
}
fn transmute<T, U>(x: T) -> U {
foo::<dyn Object<U, Output = T>, U>(x)
}
```
---
I had some difficulties with getting the standard "incoherence ICE" reproducer, because the object candidate supersedes the impl candidate in selection. So here's a "transmute_lifetime" reproducer.
```Rust
trait Object {
type Output;
}
trait Marker<'b> {}
impl<'b> Marker<'b> for dyn Object<Output=&'b u64> {}
impl<'b, T: ?Sized + Marker<'b>> Object for T {
type Output = &'static u64;
}
fn foo<'a, 'b, T: Marker<'b> + ?Sized>(x: <T as Object>::Output) -> &'a u64 {
x
}
fn transmute_lifetime<'a, 'b>(x: &'a u64) -> &'b u64 {
foo::<dyn Object<Output=&'a u64>>(x)
}
// And yes this is a genuine `transmute_lifetime`!
fn get_dangling<'a>() -> &'a u64 {
let x = 0;
transmute_lifetime(&x)
}
fn main() {
let r = get_dangling();
println!("{}", r);
}
```
### Duplicates, see also
* https://github.com/rust-lang/rust/issues/114389 | I-ICE,A-trait-system,P-high,T-lang,T-compiler,I-unsound,C-bug,S-bug-has-test,T-types,A-trait-objects | high | Critical |
403,212,693 | terminal | About backspace in cursor erase state | I am outputting a text file named "curhide", in which a cursor erase sequence is written.
The sequence in the file should be (^[[25h) (\b) (0x20) (\b) (^[[25l),
but it comes out as (^[[25h) (\b) (0x20) (^[[25l); the second backspace is missing.
Also, although the erase sequence is output, the character cannot be erased.
There is no problem when the cursor is displayed.

| Work-Item,Product-Conpty,Area-VT,Issue-Bug | low | Major |
403,249,029 | TypeScript | Expose "getNewLineCharacter" function |
## Suggestion
Make `ts.getNewLineCharacter` public
## Use Cases
This is useful if developers create their own `FormatDiagnosticsHost`, but don't want to implement the logic to determine the `newLine` string based on the `newLine` compiler option.
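The logic a custom host currently has to reimplement is roughly the following (a hedged Python sketch; the string values "crlf"/"lf" stand in for TypeScript's `NewLineKind.CarriageReturnLineFeed`/`NewLineKind.LineFeed`, and the fallback to a host-supplied newline is an assumption):

```python
CRLF = "\r\n"
LF = "\n"

def get_newline_character(new_line_kind, system_newline=LF):
    """Resolve the newline string from a 'newLine' compiler option value.

    new_line_kind is "crlf", "lf", or None when the option is not set.
    """
    if new_line_kind == "crlf":
        return CRLF
    if new_line_kind == "lf":
        return LF
    return system_newline  # option unset: fall back to the host's newline
```

Exposing `ts.getNewLineCharacter` would make this boilerplate unnecessary.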
## Examples
```ts
const formatHost: ts.FormatDiagnosticsHost = {
getCurrentDirectory: () => projectDir || ts.sys.getCurrentDirectory(),
getCanonicalFileName: fileName => fileName,
getNewLine: () => ts.getNewLineCharacter(formatOptions)
};
```
| Suggestion,In Discussion,API | low | Critical |
403,249,606 | rust | `where Self: Sized` on trait method doesn't work as expected when implementing for unsized types. | I've got this code
````
trait Foo {
fn bar(&self) -> Self where Self: Sized;
}
impl Foo for str {
// fn bar(&self) -> Self { unimplemented!() }
}
````
the error is:
````
error[E0046]: not all trait items implemented, missing: `bar`
--> src/lib.rs:5:1
````
However, since `str` is not `Sized`, I don't think `bar` needs to be implemented. If I remove the "//", I get:
````
error[E0277]: the size for values of type `str` cannot be known at compilation time
--> src/lib.rs:6:22
````
So although this trait fits the object safety rules, I still can't implement it for unsized types.
| A-type-system,A-trait-system,T-lang,T-compiler,C-bug,T-types | low | Critical |
403,294,923 | pytorch | Python-bound C++ frontend modules don't handle attributes well | When we bind C++ frontend modules into Python using `torch::python::bind_module` and load them in Python, we wrap the underlying C++ class with a [wrapper class](https://github.com/pytorch/pytorch/blob/master/torch/nn/cpp.py#L49) that derives from `torch.nn.Module` and copies all methods of the underlying class onto the wrapper class (I can elaborate on design reasons for this).
The issue is that this does not support attributes. As in, if you bind an attribute in C++ using `def_readwrite("x", &MyClass::x)`, and try setting that attribute on the Python wrapper, it will not propagate. This should be handled by intelligently overriding `__getattr__` and `__setattr__`.
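A minimal, dependency-free sketch of that forwarding idea (the class names are illustrative; the real wrapper would additionally have to coordinate with `torch.nn.Module`'s own `__setattr__` handling of parameters and submodules):

```python
class AttrForwardingWrapper:
    """Forwards attribute reads/writes to a wrapped object, so attributes
    bound on the inner object (e.g. via def_readwrite) propagate."""

    def __init__(self, inner):
        # Bypass our own __setattr__ while storing the wrapped object.
        object.__setattr__(self, "_inner", inner)

    def __getattr__(self, name):
        # Only called when normal lookup fails: forward to the inner object.
        return getattr(object.__getattribute__(self, "_inner"), name)

    def __setattr__(self, name, value):
        inner = object.__getattribute__(self, "_inner")
        if hasattr(inner, name):
            setattr(inner, name, value)  # propagate bound attributes
        else:
            object.__setattr__(self, name, value)


class FakeCppModule:
    """Stand-in for a pybind11-bound class with def_readwrite("x", ...)."""
    def __init__(self):
        self.x = 0
```

With this, `wrapper.x = 5` reaches the underlying object instead of shadowing it on the wrapper.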
CC @yf225
cc @yf225 @glaringlee | module: cpp,triaged | low | Minor |
403,305,533 | rust | Compiler unable to apply trait bound | Sorry for the vague title, please change. I have found a strange bug that is at the intersection of associated types, trait type parameters, and method type parameters.
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=8584b2c75acb51a8c6082b668649ad29
```
trait Example {
type Foo;
fn foo<T: AsRef<Self::Foo>>(&self, t: T);
}
impl <A, B> Example for (A, B) where
A: Iterator<Item = u32>,
B: AsRef<A::Item>,
{
type Foo = ();
fn foo<T: AsRef<Self::Foo>>(&self, t: T) {}
}
```

The compiler output:

```
Compiling playground v0.0.1 (/playground)
error[E0277]: the trait bound `B: std::convert::AsRef<u32>` is not satisfied
--> src/main.rs:14:5
|
14 | fn foo<T: AsRef<Self::Foo>>(&self, t: T) {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::convert::AsRef<u32>` is not implemented for `B`
|
= help: consider adding a `where B: std::convert::AsRef<u32>` bound
= note: required because of the requirements on the impl of `Example` for `(A, B)`
error[E0277]: the trait bound `T: std::convert::AsRef<()>` is not satisfied
--> src/main.rs:14:5
|
14 | fn foo<T: AsRef<Self::Foo>>(&self, t: T) {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::convert::AsRef<()>` is not implemented for `T`
|
= help: consider adding a `where T: std::convert::AsRef<()>` bound
= note: required by `std::convert::AsRef`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0277`.
error: Could not compile `playground`.
```
Commenting out line 10 `B: AsRef<A::Item>` makes the program compile, or changing it to `B: AsRef<u32>` makes it compile (even though `A::Item` is a `u32`). | A-type-system,A-trait-system,A-associated-items,T-compiler,T-types | low | Critical |
403,329,713 | TypeScript | Preferred refactorings | **Problems**
- Refactorings such as `extract function` and `extract constant` may return multiple possible locations where the code could be extracted to. In many cases, however, the user would just like to quickly extract something to a reliable location and continue on.
- `extract function` is always returned alongside `extract constant`. However, it is often desirable to extract to a constant instead of a function when possible.
**Proposal**
In the TS Server protocol, mark some refactorings as `preferred`. Editors could use this information to automatically select the preferred refactoring in the list or even quick-apply it without any user input (see https://github.com/Microsoft/vscode/issues/62110 for VS Code's proposal on this)
Preferred refactorings would let users set up actions such as `extract constant` that reliably extract to the nearest scope with a single action or keyboard shortcut. The UX behavior for this type of action:
- If only a single `isPreferred` refactoring is returned, apply it automatically.
- If multiple preferred refactorings are returned, show a list of the preferred refactorings that the user can select from
The normal refactor context menu would continue to display the full list of refactorings.
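The two UX rules above can be sketched as a small selection function (Python for brevity; `isPreferred` is the proposed protocol field):

```python
def select_for_quick_apply(refactorings):
    """Decide what a quick-apply action should do with the refactorings
    returned by the server."""
    preferred = [r for r in refactorings if r.get("isPreferred")]
    if len(preferred) == 1:
        return ("apply", preferred[0])  # apply without further user input
    if len(preferred) > 1:
        return ("pick", preferred)      # show only the preferred subset
    return ("none", None)               # nothing preferred: no quick apply
```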
We can start conservative with which refactorings are preferred:
- For `extract constant`, extract to local const
- For `extract function`, extract to function at the scope of the parent function
- For `extract function` in a method, extract to a method
Related to a similar proposal for quick fixes #29450 | Suggestion,In Discussion,VS Code Priority | low | Minor |
403,359,289 | pytorch | Adding new module to caffe2 | Hi,
I am adding a new module to Caffe2 for some development. A new directory was added under caffe2/contrib, so the CMakeLists.txt file was modified to add the directory under a conditional if (USE_XXX). The CMakeLists.txt under caffe2 was modified to add the libraries to be linked, as below.
```
link_directories(AFTER /home/fpgauser/<xxx>/linux64/lib /home/fpgauser/<xxx>/linux64/host/linux64/lib )
set (EXTRA_LIBS ${EXTRA_LIBS} OpenCL alteracl)
target_link_libraries (caffe2 PUBLIC ${EXTRA_LIBS})
```
The build/rebuild then fails for a different module. The error message is below. Any help is appreciated.
==========================================================================
[ 93%] Built target c10d
[ 93%] Building CXX object caffe2/torch/lib/c10d/test/CMakeFiles/ProcessGroupGlooTest.dir/ProcessGroupGlooTest.cpp.o
[ 93%] Linking CXX executable ../bin/blob_test
CMakeFiles/blob_test.dir/core/blob_test.cc.o: In function `caffe2::(anonymous namespace)::TypedTensorTest_BigTensorSerialization_Test<double>::TestBody()':
blob_test.cc:(.text+0x4043c): warning: the use of `tmpnam' is dangerous, better use `mkstemp'
[ 93%] Built target blob_test
[ 93%] Building CXX object caffe2/torch/lib/c10d/test/CMakeFiles/TCPStoreTest.dir/TCPStoreTest.cpp.o
[ 93%] Building CXX object caffe2/torch/lib/c10d/test/CMakeFiles/FileStoreTest.dir/FileStoreTest.cpp.o
[ 93%] Linking CXX executable ../../../../../bin/FileStoreTest
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::~Test()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, testing::internal::CodeLocation, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*)'
../../../../../lib/libcaffe2.so: undefined reference to `typeinfo for testing::Test'
../../../../../lib/libcaffe2.so: undefined reference to `testing::internal::GetTestTypeId()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::SetUp()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::TearDown()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::Test()'
collect2: error: ld returned 1 exit status
make[2]: *** [caffe2/torch/lib/c10d/test/CMakeFiles/FileStoreTest.dir/build.make:96: bin/FileStoreTest] Error 1
make[1]: *** [CMakeFiles/Makefile2:7500: caffe2/torch/lib/c10d/test/CMakeFiles/FileStoreTest.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_4.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/grad_mode.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/input_buffer.cpp.o
[ 93%] Linking CXX shared library ../../lib/libcaffe2_detectron_ops.so
[ 93%] Linking CXX executable ../../../../../bin/TCPStoreTest
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::~Test()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, testing::internal::CodeLocation, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*)'
../../../../../lib/libcaffe2.so: undefined reference to `typeinfo for testing::Test'
../../../../../lib/libcaffe2.so: undefined reference to `testing::internal::GetTestTypeId()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::SetUp()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::TearDown()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::Test()'
collect2: error: ld returned 1 exit status
make[2]: *** [caffe2/torch/lib/c10d/test/CMakeFiles/TCPStoreTest.dir/build.make:96: bin/TCPStoreTest] Error 1
make[1]: *** [CMakeFiles/Makefile2:7459: caffe2/torch/lib/c10d/test/CMakeFiles/TCPStoreTest.dir/all] Error 2
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/profiler.cpp.o
[ 93%] Built target caffe2_detectron_ops
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/saved_variable.cpp.o
[ 93%] Linking CXX executable ../../../../../bin/ProcessGroupGlooTest
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::~Test()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::internal::MakeAndRegisterTestInfo(char const*, char const*, char const*, char const*, testing::internal::CodeLocation, void const*, void (*)(), void (*)(), testing::internal::TestFactoryBase*)'
../../../../../lib/libcaffe2.so: undefined reference to `typeinfo for testing::Test'
../../../../../lib/libcaffe2.so: undefined reference to `testing::internal::GetTestTypeId()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::SetUp()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::TearDown()'
../../../../../lib/libcaffe2.so: undefined reference to `testing::Test::Test()'
collect2: error: ld returned 1 exit status
make[2]: *** [caffe2/torch/lib/c10d/test/CMakeFiles/ProcessGroupGlooTest.dir/build.make:97: bin/ProcessGroupGlooTest] Error 1
make[1]: *** [CMakeFiles/Makefile2:7418: caffe2/torch/lib/c10d/test/CMakeFiles/ProcessGroupGlooTest.dir/all] Error 2
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/variable.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/VariableTypeManual.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/cuda/comm.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/autodiff.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/export.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/generated/register_aten_ops_0.cpp.o
[ 93%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/generated/register_aten_ops_1.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/generated/register_aten_ops_2.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/graph_executor.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/import.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/interpreter.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/constants.cpp.o
[ 94%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/node_hashing.cpp.o
[ 95%] Linking CXX shared module python/caffe2_pybind11_state.cpython-35m-x86_64-linux-gnu.so
[ 95%] Built target caffe2_pybind11_state
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/ir.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/operator.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/batch_mm.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/canonicalize.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/constant_propagation.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/constant_pooling.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/common_subexpression_elimination.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/create_autodiff_subgraphs.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/inline_autodiff_subgraphs.cpp.o
[ 95%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/dead_code_elimination.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/canonicalize_ops.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/erase_number_types.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/graph_fuser.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/inplace_check.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/loop_unrolling.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/lower_grad_of.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/lower_tuples.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/peephole.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/remove_expands.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/remove_inplace_ops.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/shape_analysis.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/requires_grad_analysis.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/specialize_undef.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/passes/pretty_print.cpp.o
[ 96%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/fuser/interface.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/register_prim_ops.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/register_special_ops.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/scope.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/script/compiler.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/script/builtin_functions.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/script/lexer.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/script/module.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/tracer.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/torch.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/utils/tensor_flatten.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/utils/variadic.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/__/test/cpp/jit/no-gtest.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/fuser/kernel_cache.cpp.o
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp: In function ‘at::TypeExtendedInterface& torch::CPU(at::ScalarType)’:
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:11:17: warning: ‘at::TypeExtendedInterface& torch::getVariableType(at::Backend, at::ScalarType)’ is deprecated [-Wdeprecated-declarations]
return torch::getVariableType(at::Backend::CPU, type);
^~~~~~~~~~~~~~~
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:6:28: note: declared here
at::TypeExtendedInterface& getVariableType(at::Backend backend, at::ScalarType type) {
^~~~~~~~~~~~~~~
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:11:55: warning: ‘at::TypeExtendedInterface& torch::getVariableType(at::Backend, at::ScalarType)’ is deprecated [-Wdeprecated-declarations]
return torch::getVariableType(at::Backend::CPU, type);
^
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:6:28: note: declared here
at::TypeExtendedInterface& getVariableType(at::Backend backend, at::ScalarType type) {
^~~~~~~~~~~~~~~
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp: In function ‘at::TypeExtendedInterface& torch::CUDA(at::ScalarType)’:
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:15:17: warning: ‘at::TypeExtendedInterface& torch::getVariableType(at::Backend, at::ScalarType)’ is deprecated [-Wdeprecated-declarations]
return torch::getVariableType(at::Backend::CUDA, type);
^~~~~~~~~~~~~~~
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:6:28: note: declared here
at::TypeExtendedInterface& getVariableType(at::Backend backend, at::ScalarType type) {
^~~~~~~~~~~~~~~
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:15:56: warning: ‘at::TypeExtendedInterface& torch::getVariableType(at::Backend, at::ScalarType)’ is deprecated [-Wdeprecated-declarations]
return torch::getVariableType(at::Backend::CUDA, type);
^
/home/fpgauser/FB/pytorch/pytorch/torch/csrc/torch.cpp:6:28: note: declared here
at::TypeExtendedInterface& getVariableType(at::Backend backend, at::ScalarType type) {
^~~~~~~~~~~~~~~
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/fuser/compiler.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/fuser/executor.cpp.o
[ 97%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/fuser/codegen.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/fuser/fallback.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/jit/fuser/cpu/fused_kernel.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/cuda.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/data/datasets/mnist.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/data/samplers/random.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/data/samplers/sequential.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/data/samplers/stream.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/jit.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/init.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/module.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/modules/batchnorm.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/modules/conv.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/modules/dropout.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/modules/embedding.cpp.o
[ 98%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/modules/functional.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/modules/linear.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/nn/modules/rnn.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/optim/adagrad.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/optim/adam.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/optim/lbfgs.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/optim/optimizer.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/optim/rmsprop.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/optim/serialize.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/optim/sgd.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/serialize/input-archive.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/serialize/output-archive.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/serialize/tensor.cpp.o
[ 99%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/api/src/utils.cpp.o
[ 99%] Linking CXX shared library ../../lib/libtorch.so
[ 99%] Built target torch
make: *** [Makefile:141: all] Error 2
=================================================================== | caffe2 | low | Critical |
403,376,334 | TypeScript | Future proof union to intersection type conversion
## Search Terms
union to intersection, type merge
## Suggestion
I'd like the union-to-intersection transformation to be either standardized (i.e., made documented behavior) or supported in a less hacky, future-proof way.
## Use Cases
The most common use case I can think of is a basic action system where each action has some kind of input data it can work with to determine its disabled status. An actionGroup which packs multiple actions into one updater function needs to request the right input data type, one which is compatible with **all** of its actions.
While this is possible in the current version of TypeScript, it feels like a hack.
## Examples
A more presentable form by describing a car dashboard:
```ts
interface Gauge<T> {
  display(data: T): void;
}

class SpeedGauge implements Gauge<{ speed: number; }> {
  display(data: { speed: number; }) {
  }
}

class TempGauge implements Gauge<{ temperature: number; }> {
  display(data: { temperature: number; }) {
  }
}

class RevGauge implements Gauge<{ rev: number; }> {
  display(data: { rev: number; }) {
  }
}

type ExtractGaugeType<T> = T extends Gauge<infer U> ? U : never;

// evil magic by jcalz https://stackoverflow.com/a/50375286
// I would like to future proof or have a better version of this
type UnionToIntersection<U> = (U extends any ? (k: U) => void : never) extends ((k: infer I) => void) ? I : never;

class Dashboard<T extends Gauge<unknown>> {
  constructor(public gauges: T[]) {
  }

  display(data: UnionToIntersection<ExtractGaugeType<T>>) {
    this.gauges.forEach((g) => g.display(data));
  }
}

const myDashboard = new Dashboard([new SpeedGauge(), new TempGauge(), new RevGauge()]);

/*
the type is: { rev: number; } & { speed: number; } & { temperature: number; }
*/
myDashboard.display({ // Ok
  speed: 50,
  rev: 2000,
  temperature: 85
});

myDashboard.display({ // Error: property "rev" is missing
  speed: 50,
  temperature: 85
});
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | high | Critical |
403,384,660 | vscode | Unifying "do X on save" functionality | **Problem**
VS Code ships with multiple actions that can be triggered on save, including:
- Trim trailing whitespace
- Trim trailing new line
- Format document
- Code actions (such as `organize imports` and `fix all`)
Each of these actions currently has its own setting. Many of the setting names are generic so it may not be clear they apply on save.
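For context, here is roughly how those scattered settings look in `settings.json` today (a sketch — the setting names are believed correct for current releases, but exact keys and code action ids may vary):

```json
{
  "files.trimTrailingWhitespace": true,
  "files.trimFinalNewlines": true,
  "files.insertFinalNewline": true,
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.organizeImports": true
  }
}
```

Note that only the `editor.formatOnSave` and `editor.codeActionsOnSave` keys mention saving at all; the `files.*` ones give no hint that they run on save.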
**Goals**
- Make it easier to discover built-in `do X on save` actions
- Give these settings more consistent names
- Potentially allow these actions to be used in a more flexible way (such as from the source action menu)
/cc @kieferrm | feature-request,formatting,under-discussion | low | Major |
403,389,142 | godot | area_entered, area_exited occurs at the same time with Area2D
**Godot version:**
3.1.beta 973b68f39
**OS/device including version:**
Kubuntu 18.10
**Issue description:**
The `area_entered` and `area_exited` signals are both emitted at the same time when setting `Area.monitoring = true` (or `false`).
Only `area_entered` is expected when setting `Area.monitoring = true`, and
only `area_exited` is expected when setting `Area.monitoring = false`.
probably related to #22889
**Steps to reproduce:**
```gdscript
extends Node2D

func _ready():
    $on.connect("pressed", self, "monitor_on")
    $off.connect("pressed", self, "monitor_off")
    $area_ray.connect("area_entered", self, "on_area_entered")
    $area_ray.connect("area_exited", self, "on_area_exited")

func monitor_on():
    $area_box.monitoring = true

func monitor_off():
    $area_box.monitoring = false

func on_area_entered(area):
    printt("on_area_entered", area.name)

func on_area_exited(area):
    printt("on_area_exited", area.name)
```

**Minimal reproduction project:**
[area monitor.zip](https://github.com/godotengine/godot/files/2798427/area.monitor.zip)
| enhancement,topic:physics | low | Critical |
403,400,014 | vue | Types of FunctionalComponentOptions breaks | ### Version
vue: 2.5.22
typescript: 3.2.4
### Reproduction link
[https://jsfiddle.net/meteorlxy/9x2ts16a/1/](https://jsfiddle.net/meteorlxy/9x2ts16a/1/)
### Steps to reproduce
```ts
// works with vue 2.5.17
// fails with vue 2.5.18+
import Vue, { FunctionalComponentOptions } from 'vue'
const testFunctionalOptions: FunctionalComponentOptions = {
functional: true,
}
Vue.component('Test', testFunctionalOptions)
```
### What is expected?
No types error as in v2.5.17
### What is actually happening?
```sh
error TS2345: Argument of type 'FunctionalComponentOptions<Record<string, any>, PropsDefinition<Record<string, any>>>' is not assignable to parameter of type 'ComponentOptions<Vue, DefaultData<Vue>, DefaultMethods<Vue>, DefaultComputed, PropsDefinition<Record<string, any>>, Record<string, any>>'.
Types of property 'render' are incompatible.
Type '(this: undefined, createElement: CreateElement, context: RenderContext<Record<string, any>>) => VNode | VNode[]' is not assignable to type '(createElement: CreateElement, hack: RenderContext<Record<string,
any>>) => VNode'.
Type 'VNode | VNode[]' is not assignable to type 'VNode'.
Type 'VNode[]' is missing the following properties from type 'VNode': isRootInsert, isComment
```
### PS
It seems to have been introduced here:
https://github.com/vuejs/vue/commit/bf2e2ed159f680cd4e230427ce94739c657c1b61#diff-23d7799dcc9e9be419d28a15348b0d99R116
<!-- generated by vue-issues. DO NOT REMOVE --> | typescript,has workaround | low | Critical |
403,425,122 | create-react-app | Setting a minimum font size in Firefox makes code sections on the website unreadable | I would like to report an accessibility issue on the website.
I have a minimum font size set to 15 in Firefox preferences. Most of the website looks okay, except for those animated code sections, e.g. https://facebook.github.io/create-react-app/docs/getting-started
<img width="1552" alt="screen shot 2019-01-26 at 11 36 16" src="https://user-images.githubusercontent.com/2944963/51785397-b5f73e80-215f-11e9-960d-d791cf9c8151.png">
| tag: documentation | low | Minor |
403,429,344 | pytorch | error with nccl when distributed training on caffe2 | The error is as follows. I did build Caffe2 with the flag USE_NCCL=1:
WARNING:caffe2.python.workspace:Original python traceback for operator `268` in network `resnet50_init` in exception above (most recent call last):
WARNING:caffe2.python.workspace: File "resnet50_trainer.py", line 603, in <module>
WARNING:caffe2.python.workspace: File "resnet50_trainer.py", line 599, in main
WARNING:caffe2.python.workspace: File "resnet50_trainer.py", line 434, in Train
WARNING:caffe2.python.workspace: File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 296, in Parallelize
WARNING:caffe2.python.workspace: File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 1206, in _AllReduceBlobs
WARNING:caffe2.python.workspace: File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 1370, in _AllReduceBlobsDistributed
WARNING:caffe2.python.workspace: File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 1337, in allreduce
WARNING:caffe2.python.workspace: File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 1287, in get_control_and_context
WARNING:caffe2.python.workspace: File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/data_parallel_model.py", line 1805, in _CreateOrCloneCommonWorld
Traceback (most recent call last):
File "resnet50_trainer.py", line 603, in <module>
main()
File "resnet50_trainer.py", line 599, in main
Train(args)
File "resnet50_trainer.py", line 439, in Train
workspace.RunNetOnce(train_model.param_init_net)
File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/workspace.py", line 200, in RunNetOnce
StringifyProto(net),
File "/root/miniconda2/lib/python2.7/site-packages/caffe2/python/workspace.py", line 179, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at no_default_engine_op.h:29] . The operator CreateCommonWorld does not have a default engine implementation. Please specify an engine explicitly for this operator. Error from operator:
input: "store_handler" output: "allreduce_0_cw" name: "allreduce_0_cw_op" type: "CreateCommonWorld" arg { name: "interface" s: "" } arg { name: "timeout_ms" i: 30000 } arg { name: "transport" s: "tcp" } arg { name: "rank" i: 0 } arg { name: "size" i: 2 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "NCCL" | caffe2 | low | Critical |
403,446,659 | pytorch | error on cmake_version from tools/build_pytorch_libs.py | ## 🐛 Bug
Cannot build PyTorch due to an error in `cmake_version` from the file `tools/build_pytorch_libs.py`.
## To Reproduce
1. clone repository
2. python setup.py bdist_wheel
3. Error message:
```bash
python setup.py bdist_wheel
Building wheel torch-1.1.0a0+41e9b09
-- Building version 1.1.0a0+41e9b09
Traceback (most recent call last):
File "setup.py", line 722, in <module>
build_deps()
File "setup.py", line 280, in build_deps
build_dir='build')
File "/data/pytorch/pytorch/tools/build_pytorch_libs.py", line 247, in build_caffe2
build_dir)
File "/data/pytorch/pytorch/tools/build_pytorch_libs.py", line 97, in run_cmake
get_cmake_command(),
File "/data/pytorch/pytorch/tools/build_pytorch_libs.py", line 54, in get_cmake_command
bare_version = cmake_version(cmake)
File "/data/pytorch/pytorch/tools/build_pytorch_libs.py", line 39, in cmake_version
TypeError: a bytes-like object is required, not 'str'
```
## Expected behavior
To configure and build pytorch
## Environment
``` bash
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
OS: Fedora release 29 (Twenty Nine)
GCC version: (GCC) 7.4.0
CMake version: version 3.12.1
Python version: 3.7
Is CUDA available: N/A
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 415.27
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/lib64/libcudnn.so.7.4.1
```
## Additional context
Wrapping the result of the `check_output` function in `str` fixes the problem. At least on Python 3.7, the result of `check_output` is of type `bytes`, not `str`.
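A minimal standalone sketch of the mismatch (illustrative only — the bytes literal below stands in for what `check_output` returns on Python 3; this is not PyTorch's actual code):

```python
raw = b'cmake version 3.12.1\n'  # what subprocess.check_output() yields on Python 3

# Applying a str operation directly to the bytes reproduces the reported error:
try:
    'version 3' in raw
except TypeError as exc:
    print(exc)  # a bytes-like object is required, not 'str'

# Decoding first restores str semantics and fixes the check:
text = raw.decode('utf-8')
print('version 3' in text)  # True
print(text.split()[2])      # 3.12.1
```

The same applies to regex matching: a `str` pattern cannot be applied to `bytes`, so either decode the output once or pass `universal_newlines=True` to `check_output` so it returns `str` directly.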
| module: build,triaged | low | Critical |
403,448,342 | electron | Autoupdater - progress and download choice | **What I did**
I have packaged my electron app with __electron-packager__ and build its installer with __electron-winstaller__.
I used __autoUpdater__ (from electron) for the update process.
So I got:
*RELEASES
*myapp.exe-1.0.0-full.nupkg
*myappinstaller.exe
I put them in a github release with tag: v1.0.0
Then I did the same with v2.0.0 and the v1.0.0 app updated automatically to v2.0.0 on the next restart.
**The problem and the feature requested**
Altough it worked, I find that it would be useful to have an event in wich we can find the downloading progress to show to the users. Electron apps are indeed quite heavy and take a lot of time to be downloaded.
In addition, updates are downloaded soon after a new version is found. The problem is that the user does not have a chance to discard the update if he wants to stay with the old version.
**Alternatives**
There is a third part package, __electron-updater__ wich is very powerful and allows to be notified of the download progress and to choose when install the update or wheter install it.
Its only problem is that it works with __electron-builder__ and it does not match my case, because I'm using **electron-winstaller**
| enhancement :sparkles: | low | Major |
403,467,530 | go | math/big: Mention sign inversion in the (*Int).Not documentation | I would expect that this:
```go
i := big.NewInt(-2)
log.Println(new(big.Int).Not(i).Text(2))
```
prints `-1`, but actually it prints `1`. The `(*Int).Not` documentation simply says:
> Not sets z = ^x and returns z.
I think that it's not obvious from that sentence alone that the sign is inverted. | Documentation,NeedsInvestigation | low | Minor |
403,471,990 | TypeScript | Issues trying to auto-import across project boundaries
**TypeScript Version:** 3.2.2, 3.3.0-rc
**Search Terms:** autoimport, project references, autocomplete
**Code**
Full minimal repro: https://github.com/MLoughry/typescript-repro
See _src/a/lib/index.ts_
```typescript
// If another file in the project already has an import from the other project,
// then attempting to auto-import `bar` will prompt you to import from 'obj/b/lib'
// If no other import is found within the project, no auto-import suggestions are found
console.log(bar);
```
**Expected behavior:**
Auto-import should work across project boundaries when an `outDir` is specified (with or without `paths` specified in `tsconfig.json`)
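For readers unfamiliar with the setup, a cross-project arrangement of the kind described typically combines `composite`/`outDir` in the referenced project with a `references` entry in the consumer (the paths here are hypothetical, not the actual repro's layout):

```jsonc
// b/tsconfig.json — the referenced project
{
  "compilerOptions": { "composite": true, "outDir": "../obj/b" }
}

// a/tsconfig.json — the consuming project; auto-import should suggest
// the source path ('../b/lib'), not the build output ('obj/b/lib')
{
  "compilerOptions": { "outDir": "../obj/a" },
  "references": [{ "path": "../b" }]
}
```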
**Actual behavior:**
* **If there is already a cross-project import within the project**: The auto-import attempts to import from the outDir.

* **If there is no other cross-project import within the project**: No import suggestions are found.

| Suggestion,In Discussion | low | Critical |
403,476,748 | youtube-dl | Add britbox
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2019.01.24*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2019.01.24**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
youtube-dl https://www.britbox.com/us/episode/Doctor_Who_S14_E22_7466 -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['https://www.britbox.com/us/episode/Doctor_Who_S14_E22_7466', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2019.01.24
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.14393
[debug] exe versions: ffmpeg N-93001-g87c165c237, ffprobe N-93001-g87c165c237
[debug] Proxy map: {}
[generic] Doctor_Who_S14_E22_7466: Requesting header
WARNING: Falling back on generic information extractor.
[generic] Doctor_Who_S14_E22_7466: Downloading webpage
[generic] Doctor_Who_S14_E22_7466: Extracting information
ERROR: Unsupported URL: https://www.britbox.com/us/episode/Doctor_Who_S14_E22_7466
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpa8aoq3sz\build\youtube_dl\YoutubeDL.py", line 793, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpa8aoq3sz\build\youtube_dl\extractor\common.py", line 508, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpa8aoq3sz\build\youtube_dl\extractor\generic.py", line 3320, in _real_extract
youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.britbox.com/us/episode/Doctor_Who_S14_E22_7466
...
<end of log>
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://www.britbox.com/us/episode/Doctor_Who_S4_E7_14566
- Single video: https://www.britbox.com/us/episode/Doctor_Who_S1_E1_8031
- Playlist: https://www.britbox.com/us/season/Doctor_Who_S4_12799
- Playlist: https://www.britbox.com/us/show/5969
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
I tried to add this myself, but I am unable to find the proper URL. Thanks | account-needed | low | Critical |
403,480,943 | rust | Implementing a trait for an associated type causes 'overflow evaluating the requirement' error | Implementing a trait for an associated type of something that implements the same trait causes the compiler to emit an "overflow evaluating the requirement" error. I read the error explanation, and it makes reference to trait bounds causing an endless list of things to check; it's not clear to me why this case would produce that.
```rust
trait Convertable {
    type ResultType;
}

trait SomethingElse {}

struct A {}
struct B {}
struct C {}

impl Convertable for A {
    type ResultType = B;
}

// The addition of the following causes a compiler error[E0275]: overflow evaluating the requirement `<A as Convertable>::ResultType`:
impl Convertable for <A as Convertable>::ResultType {
    type ResultType = C;
}

// Implementing any other trait for the associated type works fine with no errors.
impl SomethingElse for <A as Convertable>::ResultType {}
```
I expected to see this happen:
No compiler errors and the `Convertable` trait implemented for B as the following:
```rust
// What I wanted to happen.
impl Convertable for B {
type ResultType = C;
}
```
Instead, this happened:
```
error[E0275]: overflow evaluating the requirement `<A as Convertable>::ResultType`
error: aborting due to previous error
```
## Meta
```
rustc --version --verbose
rustc 1.32.0 (9fda7c223 2019-01-16)
binary: rustc
commit-hash: 9fda7c2237db910e41d6a712e9a2139b352e558b
commit-date: 2019-01-16
host: x86_64-apple-darwin
release: 1.32.0
LLVM version: 8.0
``` | A-type-system,A-trait-system,A-associated-items,T-lang,T-compiler,C-bug,T-types | low | Critical |
403,492,259 | godot | Convert To Mesh Library "Merge with existing" option causes confusion | **Godot version:** v3.1.beta2.official

-The "Merge with Existing" text and "ON" are so far apart that the option was invisible to me and it doesn't register in your mind as something important. This is definitely a new-user-experience issue as anyone experienced will know that it's there after they've noticed it once.
-It defaults to ON, which goes against common sense of how saving files usually works. Anyone would expect their files to normally overwrite when they save a file in Windows.
-When you save over a mesh library file with the setting on it still asks "File Exists, Overwrite?" but it should ask "Merge?". This is a very important thing to implement I think, as it would have helped me realize that I need to look for the "Merge with existing" option.
-It doesn't remember the OFF setting if you close Godot.
This has been a headache, call me blind if you wish but it took me days to even realize this option existed. I thought it was a bug with Godot that it wouldn't let me overwrite meshlib files. I had unwanted entries in my Gridmap that I was unable to remove, I was deleting the meshlib file and saving a new one in order to get around this issue, haha.
I also found a second person who most likely had the same issue: https://godotengine.org/qa/30146/problems-with-blender-export-godot-import-meshlib-gridmap?show=30146
"The only thing that seems to work is exporting the meshlib with a different name each time." | enhancement,topic:editor,usability | low | Critical |
403,574,672 | go | wiki: add page on using private repositories (e.g., on-prem GitLab, private github.com repos, etc.) | Consider adding a new wiki page on using Go with private repositories such as on-prem GitLab, on-prem GitHub, on-prem bitbucket, private github.com repos, etc.
This seems to be a fairly frequent source of confusion, and often it is when someone is new or somewhat new to Go, which means it both impacts early impressions of Go, as well as means someone at that point in time is less versed in Go overall (e.g., it is more challenging to work through how to set up something like an on-prem GitLab instance if someone is asking "what is an import path?" at more or less the same time).
It also comes up in the context of modules (because of a different error with modules, or because someone initially assumes they are seeing a module-specific issue, etc.). I would wager that at this point in time, a private repo problem that is reported in the context of modules the majority of the time ends up not actually being a module-specific issue (at least as of 1.11), but there are also module-specific aspects such as how to have an internal corporate mirror via the `replace` directive, etc.
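As a concrete illustration of that last point, an internal corporate mirror via `replace` might look like this in a `go.mod` (all module paths below are hypothetical examples):

```
module example.com/app

require github.example.com/team/lib v1.2.3

replace github.example.com/team/lib => git.internal.example.com/mirrors/lib v1.2.3
```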
Part of the current complexity comes from the fact that it is a moving target with products like GitLab and GitHub Enterprise improving over time (or in some cases, having new bugs), which also means currently people can end up talking past each other in terms of a specific solution if they are on different versions. A similar piece of current complexity is that sometimes a solution described for GitHub Enterprise also works for GitLab and other solutions, but sometimes that is not the case.
Having a wiki page could help on all of those fronts, including helping to draft off of community experience. It might also cut down on people reinventing solutions from scratch. I personally would not expect this type of material to be in the core Go documentation, nor would I expect the core Go team to be experts in something like on-prem GitLab.
I am not sure of the best title, but perhaps "Private Repositories", or "Working with Private Repositories"?
### Some current material
* https://golang.org/doc/faq#git_https
* https://golang.org/cmd/go/#hdr-Remote_import_paths
* https://github.com/golang/go/issues/26134 "cmd/go: github private repos require special configuration"
* https://github.com/golang/go/issues/27254#issuecomment-419340291 "cmd/go: custom import path private repos require special configuration"
* ["gitlab not support go-get with sub-groups"](https://gitlab.com/gitlab-org/gitlab-ce/issues/37832#note_52388964 )
* ["Jan 22 2019 GitLab release notes: Support for private Go packages in subgroups"](https://about.gitlab.com/2019/01/22/gitlab-11-7-released/#support-for-private-go-packages-in-subgroups)
### Some older items
* https://github.com/golang/go/issues/24076#issuecomment-371371558
* https://github.com/golang/go/issues/24076#issuecomment-424330150
* https://github.com/golang/go/issues/17898#issuecomment-376631102
* #26894 "cmd/go: using git over ssh fails to load modules requirements"
* #28422 "cmd/go: insteadOf broken with MacOS git 2.19.1 and go version 1.11.1?"
* #28653 "cmd/go: mod falls to download dependencies due to git not using ssh""
### Some sample community commentary
The first four are taken from [here](https://www.reddit.com/r/golang/comments/9n2phh/go_1111_module_issue_with_gitlab/) (including as an example of people sharing different solutions that don't always work for the original reporter). The remainder are from various other discussions.
> I have to do this to get to private repos in Github using go get/dep/go mod:
> `git config --global url."git@github.com:".insteadOf "https://github.com/"`
> I have that in my .gitconfig - but it seems to ignore it. I also created a private repo token and added that as well. Both seemed to do nothing.
> We setup a nginx proxy with ssl cert in front of gitlab. Also I setup a url responder to respond back from "go get" requests. There is no need to have any edit to a ~/.git/config to have insteadOf etc
> I solved these problems for GitHub Enterprise. I put together a small go service modelled after https://github.com/niemeyer/gopkg for my team.
> https://help.github.com/enterprise/2.14/admin/guides/user-management/changing-authentication-methods/ says in `Git authentication`: “LDAP supports password-based Git authentication by default, but we recommend that you disable that method and force authentication via a personal access token or SSH key.” Access with personal access tokens is described on https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
> for gitlab:
`$ git config --global -l
url.git@gitlab.int.company.com:.insteadof=https://gitlab.int.company.com/`
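Concretely, `insteadOf` rewrites like the ones quoted above live in `~/.gitconfig`; a minimal sketch (the hostname is an example only, and assumes SSH-key access already works):

```
[url "git@github.com:"]
	insteadOf = https://github.com/
```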
CC @bcmills @andybons @myitcv @mvdan @dmitris @leitzler | Documentation,help wanted,NeedsFix,modules | medium | Critical |
403,589,621 | rust | [Universes] Indirection allows calling of invalid trait method | The following code has different behaviour depending on whether the associated function is called directly or through a method call. For example:
```rust
trait X {
type G;
fn make_g() -> Self::G;
}
impl<'a> X for fn(&'a ()) {
type G = &'a ();
fn make_g() -> Self::G {
&()
}
}
fn indirect<T: X>() {
let x = T::make_g();
}
fn call_indirect() {
indirect::<fn(&())>(); // OK
}
fn direct() {
let x = <fn(&())>::make_g(); //~ ERROR
}
```
cc @nikomatsakis | A-lifetimes,A-trait-system,E-needs-test,T-compiler,T-types | low | Critical |
403,596,922 | vscode | [json] jsonValidation fileMatch is confusing when used with URIs | - VSCode Version: 1.30.2
- OS Version: macOS 10.14.3
Steps to Reproduce:
1. Create an extension that [registers a text document content provider](https://code.visualstudio.com/api/references/vscode-api#2290) for a custom scheme, say `custom`. Implement the provider to return a json object as text.
2. Declare a jsonValidation in package.json with a `fileMatch` of `custom://*.todo`.
3. [Open](https://code.visualstudio.com/api/references/vscode-api#2246) a text document for a uri under that custom scheme with path ending in something.todo, and also a defined query or fragment [1], then [set text document language](https://code.visualstudio.com/api/references/vscode-api#2319) to 'json', and [show it](https://code.visualstudio.com/api/references/vscode-api#2074).
4. Expect the json schema validation to occur on the virtual file.
5. Note that it does not.
It appears the `fileMatch` matches on the entire uri, including the query and the fragment! You can verify this by changing the `fileMatch` to `custom://*.todo?*`
It should probably only match on the path.
[1] e.g. `custom:/path/to/something.todo?foo`
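For reference, the declaration from step 2 looks roughly like this in the extension's `package.json` (the schema path here is a placeholder):

```json
{
  "contributes": {
    "jsonValidation": [
      {
        "fileMatch": "custom://*.todo",
        "url": "./schemas/todo.schema.json"
      }
    ]
  }
}
```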
| bug,json,debt | low | Minor |
403,605,225 | pytorch | Flip is much slower than advanced indexing | ## 🐛 Bug
Flip is about 3x slower than advanced indexing, even though advanced indexing is a more general operation.
I have tested this on CPU and CUDA, and on pytorch 1.0 stable and pytorch-nightly.
## To Reproduce
```python
import time
import torch
n = 1024
batch_size = 256
ntrials = 1000
x = torch.randn(batch_size, n)
start = time.perf_counter()
[x.flip(-1) for _ in range(ntrials)]
end = time.perf_counter()
print('Flip time (CPU): {}s'.format(end - start))
reverse_index = torch.arange(n - 1, -1, -1)
start = time.perf_counter()
[x[..., reverse_index] for _ in range(ntrials)]
end = time.perf_counter()
print('Advanced indexing time (CPU): {}s'.format(end - start))
x = x.to('cuda')
reverse_index = reverse_index.to('cuda')
torch.cuda.synchronize()
start = time.perf_counter()
[x.flip(-1) for _ in range(ntrials)]
torch.cuda.synchronize()
end = time.perf_counter()
print('Flip time (CUDA): {}s'.format(end - start))
start = time.perf_counter()
[x[..., reverse_index] for _ in range(ntrials)]
torch.cuda.synchronize()
end = time.perf_counter()
print('Advanced indexing time (CUDA): {}s'.format(end - start))
```
```
Flip time (CPU): 0.6906896363943815s
Advanced indexing time (CPU): 0.2781159598380327s
Flip time (CUDA): 0.1045754998922348s
Advanced indexing time (CUDA): 0.016148101538419724s
```
## Expected behavior
Flip should be faster or the same speed as advanced indexing. Right now I have to use advanced indexing for speed, which leads to less readable code.
## Environment
PyTorch version: 1.0.0.dev20190123
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.2 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB
Nvidia driver version: 390.42
cuDNN version: Probably one of the following:
/usr/local/cuda-8.0/lib64/libcudnn.so.6.0.21
/usr/local/cuda-8.0/lib64/libcudnn_static.a
/usr/local/cuda-9.0/lib64/libcudnn.so.7.0.4
/usr/local/cuda-9.0/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip] Could not collect
[conda] blas 1.0 mkl
[conda] mkl 2019.1 144
[conda] mkl-service 1.1.2 py37he904b0f_5
[conda] mkl_fft 1.0.6 py37hd81dba3_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.0.0 py3.7_cuda9.0.176_cudnn7.4.1_1 pytorch
[conda] pytorch-nightly 1.0.0.dev20190123 py3.7_cuda9.0.176_cudnn7.4.1_0 pytorch
[conda] torchvision 0.2.1 py_2 pytorch
cc @VitalyFedyunin @ngimel | module: performance,triaged,module: viewing and reshaping | low | Critical |
403,610,483 | godot | GI-Probe/Light Baking documentation lacks an important piece of info | The fact I didn't know about this from reading the documentation prevented the use of GI-probes in my game until just today.
The piece of info lacking in the docs is that GI probes and baked lighting will **only** work if the nodes are in the first level of the scene tree.
So for instance having _Top Node > GI Probe_ will work, but having _Top Node > Probes > GI Probe_ will not. I don't know if this is by design or if it can be treated like a bug, but I imagine it would throw a lot of people off if they like to categorize their nodes. | enhancement,topic:core,topic:rendering,documentation | low | Critical |
403,919,249 | TypeScript | typeof 'function' using with Exclude<T, Function> |
**TypeScript Version:** 3.2.2
**Search Terms:** Generic typeof
**Code**
```ts
function f2<T>(v: Exclude<T, Function> | (() => T)): T {
if (typeof v === 'function') {
v = v() // error!
}
return v
}
```
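A possible workaround (my assumption, not a verified fix) is to cast explicitly instead of relying on the narrowing:

```typescript
function f2b<T>(v: Exclude<T, Function> | (() => T)): T {
    if (typeof v === 'function') {
        // narrowing produces an unhelpful union here, so cast explicitly
        return (v as () => T)();
    }
    // Exclude<T, Function> is not provably T, so go through unknown
    return v as unknown as T;
}
```

With an explicit type argument, e.g. `f2b<number>(5)` and `f2b<number>(() => 5)`, this compiles and behaves as expected at runtime.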
**Expected behavior:**
No error, type v is just (() => T).
**Actual behavior:**
Line 3 report an error: Type '(() => T) | (Exclude<T, Function> & Function)' has no compatible call signatures.
**Playground Link:** https://www.typescriptlang.org/play/index.html#src=function%20f2%3CT%3E(v%3A%20Exclude%3CT%2C%20Function%3E%20%7C%20(()%20%3D%3E%20T))%3A%20T%20%7B%0D%0A%20%20%20%20if%20(typeof%20v%20%3D%3D%3D%20'function')%20%7B%0D%0A%20%20%20%20%20%20%20%20v%20%3D%20v()%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20return%20v%0D%0A%7D
| Bug,Domain: Conditional Types,Domain: Control Flow | low | Critical |
403,659,493 | pytorch | A bug in parallel.data_parallel when module_kwargs is not None | ## 🐛 Bug
A bug in [parallel.data_parallel](https://github.com/pytorch/pytorch/blob/fdaa77ae8b084eaa7535075538e9d80f4c4a8d1a/torch/nn/parallel/data_parallel.py#L159-L189) when
- The batch size is smaller than the number of GPUs
- With some keyword arguments (module_kwargs) in the custom forward function.
## To Reproduce
Steps to reproduce the behavior:
1. Please execute the test script below (with more than 4 GPUs available).
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```python
import torch
from torch import nn
from torch.nn import parallel
def check(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):
'''
This function is a just simple copy of nn.parallel.data_parallel:
https://github.com/pytorch/pytorch/blob/fdaa77ae8b084eaa7535075538e9d80f4c4a8d1a/torch/nn/parallel/data_parallel.py#L159-L189
'''
if not isinstance(inputs, tuple):
inputs = (inputs,)
if device_ids is None:
device_ids = list(range(torch.cuda.device_count()))
if output_device is None:
output_device = device_ids[0]
inputs, module_kwargs = parallel.scatter_gather.scatter_kwargs(
inputs, module_kwargs, device_ids, dim
)
print(module_kwargs)
if len(device_ids) == 1:
return module(*inputs[0], **module_kwargs[0])
used_device_ids = device_ids[:len(inputs)]
replicas = parallel.replicate(module, used_device_ids)
outputs = parallel.parallel_apply(replicas, inputs, module_kwargs, used_device_ids)
return parallel.gather(outputs, output_device, dim)
class Test(nn.Conv2d):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def forward(self, x, msg='default'):
print(msg)
return super().forward(x)
if __name__ == '__main__':
conv = Test(1, 1, 3, padding=1).cuda()
x = torch.randn(4, 1, 4, 4).cuda()
y = torch.randn(1, 1, 4, 4).cuda()
print('x on 1 GPU with kwargs')
check(conv, x, device_ids=range(1), module_kwargs={'msg': 'Hello'})
print('x on 4 GPUs with kwargs')
check(conv, x, device_ids=range(4), module_kwargs={'msg': 'Hello'})
print('y on 1 GPU')
check(conv, y, device_ids=range(1))
print('y on 1 GPU with kwargs')
check(conv, y, device_ids=range(1), module_kwargs={'msg': 'Hello'})
print('y on 4 GPUs')
check(conv, y, device_ids=range(4))
print('y on 4 GPUs with kwargs')
check(conv, y, device_ids=range(4), module_kwargs={'msg': 'Hello'}) # Error!
```
```bash
>>> python test.py
x on 1 GPU with kwargs
({'msg': 'Hello'},)
Hello
x on 4 GPUs with kwargs
({'msg': 'Hello'}, {'msg': 'Hello'}, {'msg': 'Hello'}, {'msg': 'Hello'})
Hello
Hello
Hello
Hello
y on 1 GPU
({},)
default
y on 1 GPU with kwargs
({'msg': 'Hello'},)
Hello
y on 4 GPUs
({},)
default
y on 4 GPUs with kwargs
({'msg': 'Hello'}, {'msg': 'Hello'}, {'msg': 'Hello'}, {'msg': 'Hello'})
Hello
Traceback (most recent call last):
File "test.py", line 53, in <module>
check(conv, y, device_ids=range(4), module_kwargs={'msg': 'Hello'})
File "test.py", line 24, in check
outputs = parallel.parallel_apply(replicas, inputs, module_kwargs, used_device_ids)
File "/home/sanghyun/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/home/sanghyun/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/home/sanghyun/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'x'
```
## Expected behavior
``Test.forward()`` should be executed **only once** with the ``y on 4 GPUs with kwargs`` configuration (like ``y on 4 GPUs``). However, it seems that ``Test.forward()`` is called more than once, with an empty ``inputs``. We can see one ``Hello`` message below ``y on 4 GPUs with kwargs``, and I think ``parallel.parallel_apply`` should be terminated right after then.
## Environment
- PyTorch Version (e.g., 1.0): 1.0
- OS (e.g., Linux): Ubuntu 16.04
- How you installed PyTorch (`conda`, `pip`, source): ``conda install pytorch torchvision -c pytorch``
- Python version: Anaconda 3.7 (https://repo.continuum.io/archive/Anaconda3-2018.12-Linux-x86_64.sh)
- CUDA/cuDNN version:
```bash
>>> nvcc --version
NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
```
```python
Python 3.7.1 (default, Dec 14 2018, 19:28:38)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.backends.cudnn.version()
7401
```
- GPU models and configuration:
```bash
>>> nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.98 Driver Version: 384.98 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... On | 00000000:04:00.0 Off | N/A |
| 33% 57C P2 106W / 250W | 5691MiB / 11172MiB | 84% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... On | 00000000:05:00.0 Off | N/A |
| 41% 69C P2 163W / 250W | 9115MiB / 11172MiB | 95% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 108... On | 00000000:08:00.0 Off | N/A |
| 0% 26C P8 18W / 250W | 0MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX 108... On | 00000000:09:00.0 Off | N/A |
| 23% 28C P8 16W / 250W | 0MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 4 GeForce GTX 108... On | 00000000:83:00.0 Off | N/A |
| 23% 20C P8 7W / 250W | 0MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 5 GeForce GTX 108... On | 00000000:84:00.0 Off | N/A |
| 0% 24C P8 8W / 250W | 0MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 6 GeForce GTX 108... On | 00000000:87:00.0 Off | N/A |
| 23% 27C P8 8W / 250W | 0MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 7 GeForce GTX 108... On | 00000000:88:00.0 Off | N/A |
| 23% 21C P8 8W / 250W | 0MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 5671 C python 5681MiB |
| 1 6349 C python 9105MiB |
+-----------------------------------------------------------------------------+
```
(Used 4, 5, 6, 7 for the test script.)
cc @ngimel | module: cuda,module: error checking,triaged,module: batching,module: data parallel | low | Critical |
403,682,749 | rust | Memory-alignment declaration for struct only. | Memory alignment can be declared/annotated to struct type declarations only
I would expect it to be possible to declare memory alignment for struct members and local variables as well. | A-attributes,T-lang,C-feature-request | low | Minor |
403,709,831 | pytorch | How to retrain the modelzoo model in Caffe2? | ## 📚 Documentation
Is there any update/documentation on how to retrain (transfer learning) a pertained model from modelzoo with custom dataset in Caffe2?
I want to use Mask R-CNN2Go model from modelzoo and retrain it with my own dataset, however I don't see any documentation or tutorial on this topic.
Can somebody please point me in right direction?
| caffe2 | low | Minor |
403,731,034 | react-native | Cookie based authentication issues aggregation | ## Environment
[skip envinfo]
## Reproducible Demo
Provided in corresponding issues
## Description
Issues were closed while still open. Cookie-based authentication is at this moment not usable. This is partially due to the following issues:
- https://github.com/facebook/react-native/issues/23005 //TL;DR can only ever work with one cookie on Android
- https://github.com/facebook/react-native/issues/929 //TL;DR redirect: 'manual' doesnt work
These issues have been closed even though they are still open and very relevant.
There's more around cookies/fetch that I will try to hunt down in the following days. E.g. one of the two platforms, I believe iOS, won't store cookies after an app restart.
## Conclusion
In general, cookie-based authentication is very problematic on multiple levels. If cookie-based authentication is ~~claimed~~ implied to be supported on React Native, and developers unknowingly structure their architecture around this, these issues need attention. Otherwise people need to know before implementing a project using such an authentication mechanism, as dozens of hours could be spent working on an architecture that is inevitably simply not supported.
This is not a matter of pointing fingers or demanding features. It is currently, unfortunately, misleading to leave people unaware of all these limitations, as they might set out to create an architecture that's unsupported, as I have.
At the very least, maybe we should revise the documentation of `fetch` and explain how some things like `redirect: 'manual'` don't work right now.
| Help Wanted :octocat:,📮Known Issues,Bug | high | Critical |
403,864,900 | TypeScript | Error: Debug Failure getDisplayName for babel project while getting program.getTypeChecker() |
**Search Terms**:
getTypeChecker()
Error: Debug Failure
Object.assertDefined
getDisplayName
babel
**Code**
```
var options: ts.CompilerOptions = {
noEmitOnError: true
, strict: true
, target: ts.ScriptTarget.Latest
, module: ts.ModuleKind.CommonJS
, skipLibCheck: true
, experimentalDecorators: true
, emitDecoratorMetadata : true
, allowJs: true
, noErrorTruncation : false
};
var compilerHost = ts.createCompilerHost(options);
this.program = ts.createProgram(this.sourceFilesToParse, options, compilerHost);
this.typeChecker = this.program.getTypeChecker();
```
**Expected behavior:**
The type checker should be returned successfully without throwing the above exception.
**Actual behavior:**
When we give https://github.com/rolftimmermans/babel.git as input code (for this.sourceFilesToParse in the above code), we get the following exception while creating the type checker object using this.program.getTypeChecker().
```
"Error: Debug Failure.
at Object.assertDefined (..\node_modules\typescript\lib\typescript.js:1469:24)
at getDisplayName (..\node_modules\typescript\lib\typescript.js:27251:129)
at addError (..\node_modules\typescript\lib\typescript.js:27349:150)
at Object.forEach (..\node_modules\typescript\lib\typescript.js:210:30)
at declareSymbol (..\node_modules\typescript\lib\typescript.js:27351:28)
at declareModuleMember (..\node_modules\typescript\lib\typescript.js:27370:28)
at declareSourceFileMember (..\node_modules\typescript\lib\typescript.js:28414:19)
at declareSymbolAndAddToSymbolTable (..\node_modules\typescript\lib\typescript.js:28362:28)
at bindWorker (..\node_modules\typescript\lib\typescript.js:29021:28)
at bind (..\node_modules\typescript\lib\typescript.js:28781:13)"
```
Note: The same program works well with other sample projects. | Bug,API,Crash | low | Critical |
403,864,900 | godot | InputEventMouseButton.factor is always positive | **Godot version:** 70689eb
**OS/device including version:** Ubuntu
**Issue description:** While implementing a zooming camera I realized that `InputEventMouseButton.factor` is always either 0 or a positive number. It would make sense that the number is negative if you scroll down (or up).
**Steps to reproduce:**
```
extends Node
func _unhandled_input(event : InputEvent) -> void:
if event is InputEventMouseButton:
print(event.factor)
``` | documentation,topic:input | low | Major |
403,919,249 | terminal | Visual Studio Code + zsh + ConPTY = input issues | I was sent here via https://github.com/Microsoft/vscode/issues/67227
* Your Windows build number:
Microsoft Windows [Version 10.0.18323.1000]
* What you're doing and what's happening: (Copy & paste specific commands and their output, or include screen shots)
I type in the VS Code integrated terminal. Always at the second letter after the prompt, the cursor seems to jump momentarily. Sometimes single characters of my input aren't processed, but I couldn't reproduce this when trying to take a video. However, it is definitely related to the jumping/flickering.
Looks like this:

NB: I can only reproduce this issue with zsh (version 5.4.2 (x86_64-pc-msys)), not with PowerShell nor bash.
@Tyriar pointed me at the `terminal.integrated.windowsEnableConpty` option. With that option disabled, the problem is gone.
* What's wrong / what should be happening instead:
The cursor shouldn't flicker/jump, all input should be processed, the same way as when ConPTY is disabled. Like this:

| Work-Item,Needs-Repro,Area-Performance,Product-Conpty,Area-Input,Issue-Bug | medium | Major |
403,984,425 | pytorch | TensorRTOpTest.test_vgg19 is flaky | Sample failure: https://circleci.com/gh/pytorch/pytorch/632436?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
```
Jan 25 01:19:46 =================================== FAILURES ===================================
Jan 25 01:19:46 __________________________ TensorRTOpTest.test_vgg19 ___________________________
Jan 25 01:19:46
Jan 25 01:19:46 self = <caffe2.python.trt.test_trt.TensorRTOpTest testMethod=test_vgg19>
Jan 25 01:19:46
Jan 25 01:19:46 @unittest.skipIf(not workspace.C.use_trt, "No TensortRT support")
Jan 25 01:19:46 def test_vgg19(self):
Jan 25 01:19:46 > self._test_onnx_importer('vgg19', -2, 9)
Jan 25 01:19:46
Jan 25 01:19:46 ../.local/lib/python2.7/site-packages/caffe2/python/trt/test_trt.py:176:
Jan 25 01:19:46 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Jan 25 01:19:46 ../.local/lib/python2.7/site-packages/caffe2/python/trt/test_trt.py:127: in _test_onnx_importer
Jan 25 01:19:46 op = convert_onnx_model_to_trt_op(model_def, verbosity=3)
Jan 25 01:19:46 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Jan 25 01:19:46
Jan 25 01:19:46 onnx_model = ir_version: 3
Jan 25 01:19:46 producer_name: "onnx-caffe2"
Jan 25 01:19:46 graph {
Jan 25 01:19:46 node {
Jan 25 01:19:46 input: "data_0... }
Jan 25 01:19:46 }
Jan 25 01:19:46 }
Jan 25 01:19:46 }
Jan 25 01:19:46 }
Jan 25 01:19:46 }
Jan 25 01:19:46 opset_import {
Jan 25 01:19:46 domain: ""
Jan 25 01:19:46 version: 9
Jan 25 01:19:46 }
Jan 25 01:19:46
Jan 25 01:19:46 max_batch_size = 64, max_workspace_size = 2097152, verbosity = 3
Jan 25 01:19:46 debug_builder = False
Jan 25 01:19:46
Jan 25 01:19:46 def convert_onnx_model_to_trt_op(onnx_model,
Jan 25 01:19:46 max_batch_size=64,
Jan 25 01:19:46 max_workspace_size=2*1024*1024,
Jan 25 01:19:46 verbosity=1,
Jan 25 01:19:46 debug_builder=False):
Jan 25 01:19:46 """
Jan 25 01:19:46 Convert the whole ONNX model to a TensorRT C2 op
Jan 25 01:19:46 """
Jan 25 01:19:46 check_gpu_()
Jan 25 01:19:46 trt_str = C.onnx_to_trt_op(onnx_model.SerializeToString(),
Jan 25 01:19:46 _get_output_shapes(onnx_model.graph.output),
Jan 25 01:19:46 max_batch_size,
Jan 25 01:19:46 max_workspace_size,
Jan 25 01:19:46 verbosity,
Jan 25 01:19:46 > debug_builder)
Jan 25 01:19:46 E RuntimeError: [enforce fail at trt_utils.h:43] obj. Failed to create TensorRt object
Jan 25 01:19:46 E frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0x68 (0x7febb5761be8 in /var/lib/jenkins/.local/lib/python2.7/site-packages/caffe2/python/../../torch/lib/libc10.so)
Jan 25 01:19:46 E frame #1: caffe2::tensorrt::BuildTrtEngine(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, caffe2::tensorrt::TrtLogger*, unsigned long, unsigned long, bool) + 0xbe2 (0x7febb6df95e2 in /var/lib/jenkins/.local/lib/python2.7/site-packages/caffe2/python/../../torch/lib/libcaffe2_gpu.so)
Jan 25 01:19:46 E frame #2: caffe2::TensorRTTransformer::BuildTrtOp(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<int, std::allocator<int> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::vector<int, std::allocator<int> > > > > const&) + 0xbd (0x7febb6dfdb4d in /var/lib/jenkins/.local/lib/python2.7/site-packages/caffe2/python/../../torch/lib/libcaffe2_gpu.so)
Jan 25 01:19:46 E frame #3: <unknown function> + 0xea487 (0x7febd7734487 in /var/lib/jenkins/.local/lib/python2.7/site-packages/caffe2/python/caffe2_pybind11_state_gpu.so)
Jan 25 01:19:46 E frame #4: <unknown function> + 0x90c00 (0x7febd76dac00 in /var/lib/jenkins/.local/lib/python2.7/site-packages/caffe2/python/caffe2_pybind11_state_gpu.so)
Jan 25 01:19:46 E frame #5: PyEval_EvalFrameEx + 0x5ca (0x4bc4aa in /usr/bin/python2)
Jan 25 01:19:46 E frame #6: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #7: PyEval_EvalFrameEx + 0x58e6 (0x4c17c6 in /usr/bin/python2)
Jan 25 01:19:46 E frame #8: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #9: PyEval_EvalFrameEx + 0x58e6 (0x4c17c6 in /usr/bin/python2)
Jan 25 01:19:46 E frame #10: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #11: PyEval_EvalFrameEx + 0x6076 (0x4c1f56 in /usr/bin/python2)
Jan 25 01:19:46 E frame #12: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #13: /usr/bin/python2() [0x4d57a3]
Jan 25 01:19:46 E frame #14: PyObject_Call + 0x3e (0x4a587e in /usr/bin/python2)
Jan 25 01:19:46 E frame #15: PyEval_EvalFrameEx + 0x263e (0x4be51e in /usr/bin/python2)
Jan 25 01:19:46 E frame #16: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #17: /usr/bin/python2() [0x4d57a3]
Jan 25 01:19:46 E frame #18: /usr/bin/python2() [0x4eef5e]
Jan 25 01:19:46 E frame #19: PyObject_Call + 0x3e (0x4a587e in /usr/bin/python2)
Jan 25 01:19:46 E frame #20: /usr/bin/python2() [0x548fc3]
Jan 25 01:19:46 E frame #21: PyEval_EvalFrameEx + 0x578d (0x4c166d in /usr/bin/python2)
Jan 25 01:19:46 E frame #22: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #23: PyEval_EvalFrameEx + 0x6076 (0x4c1f56 in /usr/bin/python2)
Jan 25 01:19:46 E frame #24: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #25: /usr/bin/python2() [0x4d5669]
Jan 25 01:19:46 E frame #26: PyObject_Call + 0x3e (0x4a587e in /usr/bin/python2)
Jan 25 01:19:46 E frame #27: PyEval_EvalFrameEx + 0x263e (0x4be51e in /usr/bin/python2)
Jan 25 01:19:46 E frame #28: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #29: PyEval_EvalFrameEx + 0x58e6 (0x4c17c6 in /usr/bin/python2)
Jan 25 01:19:46 E frame #30: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #31: PyEval_EvalFrameEx + 0x6076 (0x4c1f56 in /usr/bin/python2)
Jan 25 01:19:46 E frame #32: PyEval_EvalFrameEx + 0x553f (0x4c141f in /usr/bin/python2)
Jan 25 01:19:46 E frame #33: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #34: /usr/bin/python2() [0x4d57a3]
Jan 25 01:19:46 E frame #35: /usr/bin/python2() [0x4eef5e]
Jan 25 01:19:46 E frame #36: PyObject_Call + 0x3e (0x4a587e in /usr/bin/python2)
Jan 25 01:19:46 E frame #37: /usr/bin/python2() [0x548fc3]
Jan 25 01:19:46 E frame #38: PyObject_Call + 0x3e (0x4a587e in /usr/bin/python2)
Jan 25 01:19:46 E frame #39: PyEval_EvalFrameEx + 0x263e (0x4be51e in /usr/bin/python2)
Jan 25 01:19:46 E frame #40: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #41: PyEval_EvalFrameEx + 0x6076 (0x4c1f56 in /usr/bin/python2)
Jan 25 01:19:46 E frame #42: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #43: /usr/bin/python2() [0x4d57a3]
Jan 25 01:19:46 E frame #44: /usr/bin/python2() [0x4eef5e]
Jan 25 01:19:46 E frame #45: /usr/bin/python2() [0x4eeb66]
Jan 25 01:19:46 E frame #46: /usr/bin/python2() [0x4aaafb]
Jan 25 01:19:46 E frame #47: PyEval_EvalFrameEx + 0x578d (0x4c166d in /usr/bin/python2)
Jan 25 01:19:46 E frame #48: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #49: /usr/bin/python2() [0x4d57a3]
Jan 25 01:19:46 E frame #50: PyObject_Call + 0x3e (0x4a587e in /usr/bin/python2)
Jan 25 01:19:46 E frame #51: PyEval_EvalFrameEx + 0x263e (0x4be51e in /usr/bin/python2)
Jan 25 01:19:46 E frame #52: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #53: PyEval_EvalFrameEx + 0x58e6 (0x4c17c6 in /usr/bin/python2)
Jan 25 01:19:46 E frame #54: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #55: PyEval_EvalFrameEx + 0x58e6 (0x4c17c6 in /usr/bin/python2)
Jan 25 01:19:46 E frame #56: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #57: /usr/bin/python2() [0x4d5669]
Jan 25 01:19:46 E frame #58: PyObject_Call + 0x3e (0x4a587e in /usr/bin/python2)
Jan 25 01:19:46 E frame #59: PyEval_EvalFrameEx + 0x263e (0x4be51e in /usr/bin/python2)
Jan 25 01:19:46 E frame #60: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #61: PyEval_EvalFrameEx + 0x58e6 (0x4c17c6 in /usr/bin/python2)
Jan 25 01:19:46 E frame #62: PyEval_EvalCodeEx + 0x306 (0x4b9b66 in /usr/bin/python2)
Jan 25 01:19:46 E frame #63: PyEval_EvalFrameEx + 0x6076 (0x4c1f56 in /usr/bin/python2)
Jan 25 01:19:46
Jan 25 01:19:46 ../.local/lib/python2.7/site-packages/caffe2/python/trt/transform.py:51: RuntimeError
Jan 25 01:19:46 ----------------------------- Captured stdout call -----------------------------
``` | triaged,module: flaky-tests | low | Critical |
403,985,889 | kubernetes | Long running request definition in the kube-aggregator | Kube-API server has a set of long-running pattern check defined [here](https://github.com/kubernetes/kubernetes/blob/8b98e802eddb9f478ff7d991a2f72f60c165388a/cmd/kube-apiserver/app/server.go#L408-L411) that are applied to the operations taking longer than the default timeout of 60 seconds.
Unfortunately, our kube-aggregator has hard-coded the same configuration [here](https://github.com/kubernetes/kubernetes/blob/8b98e802eddb9f478ff7d991a2f72f60c165388a/staging/src/k8s.io/kube-aggregator/pkg/cmd/server/start.go#L127-L130) which makes it impossible for other API servers to define their own endpoints that will allow long running requests.
There are several possible solutions that I can think of at this point in time:
1. kube-aggregator always just passes requests through and does not define long-running checks at all. The downside of this approach is that we always rely on the remote API server to behave properly, and it does not provide the safety net we currently have.
2. kube-aggregator is notified about long-running endpoints by the remote API servers. This keeps the safety net mentioned above, at the cost of the additional complexity of communicating which endpoints serve long-running requests.
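For option 2, a minimal sketch of what an injectable long-running check could look like (hedged: this is not the actual Kubernetes API, and the function name is made up; it only mirrors the hard-coded verb/subresource pattern linked above in a configurable form):

```go
package main

// longRunningCheck returns a predicate deciding whether a request is
// long-running, based on configurable verb and subresource sets. Each
// aggregated API server could register its own sets instead of
// inheriting the hard-coded ones.
func longRunningCheck(verbs, subresources map[string]bool) func(verb, subresource string) bool {
	return func(verb, subresource string) bool {
		return verbs[verb] || subresources[subresource]
	}
}
```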
@kubernetes/sig-api-machinery-feature-requests
@deads2k @sttts | kind/bug,priority/important-soon,sig/api-machinery,lifecycle/frozen | medium | Major |
403,987,976 | TypeScript | getTrailingCommentRanges returns comments at the start of the file |
**TypeScript Version:** master
**Code:**
```ts
// one
// two
```
**Expected behavior:**
`ts.getTrailingCommentRanges(text, 0)` doesn't return a comment.
Both comments are parsed by `ts.getLeadingCommentRanges(text, 0)`.
**Actual behavior:**
`ts.getTrailingCommentRanges(text, 0)` parses `// one` as a trailing comment.
`ts.getLeadingCommentRanges(text, 0)` parses both comments.
The special handling of position 0 should also affect trailing comments.
With the current behavior, callers have to remember not to request trailing comment ranges at position 0, or the same comment gets reported twice.
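A small sketch of that workaround (the `getLeading`/`getTrailing` parameters stand in for `ts.getLeadingCommentRanges`/`ts.getTrailingCommentRanges`; this is illustrative, not the TypeScript API itself):

```typescript
type CommentRange = { pos: number; end: number };

// Collect comments at `position` without duplicates: because the trailing
// scan currently re-reports the first comment at position 0, only consult
// it for positions > 0.
function collectComments(
  position: number,
  getLeading: (pos: number) => CommentRange[],
  getTrailing: (pos: number) => CommentRange[],
): CommentRange[] {
  const leading = getLeading(position);
  const trailing = position === 0 ? [] : getTrailing(position);
  return [...leading, ...trailing];
}
```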
**Related Issues:**
This was part of https://github.com/Microsoft/TypeScript/pull/28489 where I first proposed changing this behavior.
/cc @rbuckton | Suggestion,In Discussion,API | low | Critical |
403,990,542 | kubernetes | Tracking Issue - Conformance Coverage for Networking |
**What would you like to be added**:
This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.
**Why is this needed**:
Specifically, address CNI networking and identify tests that exercise pod-to-pod communication.
/area conformance
| sig/network,kind/feature,help wanted,sig/testing,area/conformance,lifecycle/rotten | medium | Critical |
403,994,320 | flutter | webview_flutter v0.3.0 - Multiple flashes on the page before webview loads. | ## Steps to Reproduce
I created multiple projects to test this issue. On every page that contains a `webview_flutter` `WebView`, loading the page shows several quick flashes on the screen. On the iOS Simulator and Android emulator you usually do not see the flashes, but on a physical device they are very clear, especially on the `AppBar` and any `bottomNavigationBar`.
1. Create a page with body property set to the `WebView()`
2. I installed it on the iOS iPhone 6 Plus device with `flutter run --release`
3. As the application starts, or when you navigate to a page with a `WebView()`, the screen flashes quickly at load time. The effect is most visible on the `AppBar` and `bottomNavigationBar`.
If you need a video of the flashes please let me know.
```dart
@override
Widget build(BuildContext context) {
return Scaffold(
body: WebView(
initialUrl: 'https://flutter.io/',
javascriptMode: JavascriptMode.unrestricted,
onWebViewCreated: (WebViewController webViewController) {
if (!_controller.isCompleted) {
_controller.complete(webViewController);
}
},
),
);
}
```
## Logs
```
[✓] Flutter (Channel beta, v1.0.0, on Mac OS X 10.14.3 18D42, locale en-US)
• Flutter version 1.0.0 at /Users/name/Development/flutter
• Framework revision 5391447fae (9 weeks ago), 2018-11-29 19:41:26 -0800
• Engine revision 7375a0f414
• Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
• Android SDK at /Users/marco/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.1, Build version 10B61
• ios-deploy 1.9.4
• CocoaPods version 1.5.2
[✓] Android Studio (version 3.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 32.0.1
• Dart plugin version 182.5124
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
[✓] Connected device (1 available)
• iPhone • gc713353tevh533h3co3729ac46e9k37j34h3bw1 • ios • iOS 12.1.2
• No issues found!
```
| c: performance,p: webview,package,team-ecosystem,has reproducible steps,P2,found in release: 3.10,found in release: 3.11,triaged-ecosystem | low | Critical |
404,000,251 | flutter | ScrollViews need to be direct children of Scaffold for TextFields to scroll into focus | `TextField`s scroll into focus properly when the containing scroll view is a direct child of a `Scaffold`. However, if you push a new screen, a new `Scaffold` is needed for the `TextField`s to scroll into focus on the newly pushed screen. I would think that as long as a `Scaffold` is in the hierarchy, I should be able to push new screens with scrollable `TextField`s without wrapping each screen in a `Scaffold`. The example below scrolls text fields into focus properly on the first page, but after pushing a new screen by tapping the `FloatingActionButton`, the `TextField`s will not scroll.
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'TextField Scroll Demo',
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('TextField Scroll Demo'),
),
body: Form(
child: ListView.builder(
itemCount: 20,
itemBuilder: (context, index) {
return TextFormField(initialValue: index.toString());
}),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
Navigator.push(
context,
MaterialPageRoute(
builder: (context) {
return NewPage();
}),
);
},
),
);
}
}
// Won't scroll without a new Scaffold
class NewPage extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Material(
child: Form(
child: ListView.builder(
itemCount: 20,
itemBuilder: (context, index) {
return TextFormField(initialValue: index.toString());
}),
),
);
}
}
```
`flutter doctor -v`
```
[✓] Flutter (Channel unknown, v1.1.10-pre.45, on Mac OS X 10.14.2 18C54, locale en-US)
• Flutter version 1.1.10-pre.45 at /Users/albertlardizabal/dev/flutter
• Framework revision 78f4878fe1 (2 weeks ago), 2019-01-13 20:44:59 -0800
• Engine revision 5722a9685e
• Dart version 2.1.1 (build 2.1.1-dev.1.0 f0c7d971c4)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/albertlardizabal/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.1, Build version 10B61
• ios-deploy 1.9.4
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 32.0.1
• Dart plugin version 182.5124
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
[✓] IntelliJ IDEA Community Edition (version 2018.3.3)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 31.3.4
• Dart plugin version 183.5153.38
``` | a: text input,framework,f: material design,f: routes,f: focus,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-design,triaged-design | low | Major |
404,026,201 | rust | Failure to clean up incremental compilation artifacts should not be a hard error | ```
error: Failed to delete invalidated or incompatible incremental compilation session directory contents `/home/manishearth/mozilla/servo/target/debug/incremental/script-23kpbvowy6y9i/s-f8z0p73x3w-17kitl0-working/dep-graph.bin`: No such file or directory (os error 2).
```
This is basically rustc attempting to delete a file that no longer exists. It breaks the build; perhaps it should be a warning instead?
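A hedged sketch (not the actual rustc code) of how the cleanup could tolerate an already-missing file:

```rust
use std::fs;
use std::io::ErrorKind;
use std::path::Path;

// Treat "file not found" as success: the goal was for the file to be
// gone, and it already is. Any other I/O error is still reported.
fn best_effort_remove(path: &Path) -> std::io::Result<()> {
    match fs::remove_file(path) {
        Err(e) if e.kind() == ErrorKind::NotFound => Ok(()),
        other => other,
    }
}
```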
(I think you can get this kind of error if you keep target directories around across `rustup` toolchain updates.) | T-compiler,A-incr-comp | medium | Critical |
404,036,639 | rust | Defining scope of `existential type` defined by associated type | So, during our discussion earlier today, @nikomatsakis brought up this interesting example of code that should really compile, we think:
```rust
existential type X: Sized;
trait Foo {
type Bar: Iterator<Item = Self::_0>;
type _0;
}
impl Foo for () {
type Bar = std::vec::IntoIter<u32>;
type _0 = X;
}
```
Error:
```
error: could not find defining uses
--> src/main.rs:3:1
|
3 | existential type X: Sized;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
```
[Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=58b0d50d78a09e5fb79c7dc99e4ff6c8)
It seems that `type _0 = X;` isn't being considered a "defining use", even though it clearly does constrain `X`. The question is: what sort of changes do we need for this to work? (I'm pretty sure it should work.) Do we perhaps need Chalk?
CC @nikomatsakis @cramertj @scalexm
| A-type-system,A-trait-system,A-associated-items,T-lang,A-impl-trait,F-type_alias_impl_trait,requires-nightly,T-types | low | Critical |
404,041,067 | go | cmd/go: add option to prevent 'go get -u' from updating certain dependencies | We should provide a way for users to prevent `go get -u` from updating certain dependencies unless they're explicitly requested.
There are a number of reasons users may need to pin a module dependency to a lower version. For example:
1. The dependency has a breaking change, which is allowed in `v0` versions.
2. There is a cyclic dependency involving the main module, and care must be taken to keep the cycle at specific versions. This is especially important when one module is a prefix of the other, and we want to avoid ambiguous imports as packages are moved between.
3. The dependency adds many more transitive dependencies that are not needed which may need to be downloaded.
4. The dependency has new functionality which is not used and adds significant overhead.
When a user has a dependency that requires special care to update, `go get -u` is not safe to use on its own. Instead, one needs to build a list of all requirements, exclude modules that shouldn't be updated, then pass that list to `go get -u`. This is cumbersome.
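The current workaround sketched in shell (module names here are made up; in a real module the list would come from `go list -m -f '{{.Path}}' all`, and the filtered result would be passed to `go get -u`):

```shell
# All direct/indirect requirements (stand-in data for `go list -m all`):
printf 'example.com/a\nexample.com/pinned\nexample.com/b\n' > mods.txt
# Modules that must not be updated:
printf 'example.com/pinned\n' > pinned.txt
# Filter out the pinned modules; pass the remainder to `go get -u`:
grep -v -x -f pinned.txt mods.txt
```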
Some ideas for improvements:
* An `-except` or `-exclude` flag, allowing users to specify modules that shouldn't be updated.
* A comment or annotation in `go.mod` that tells `go get -u` not to update a module unless it's specifically named on the command line.
This is somewhat related to #28424. | NeedsInvestigation,FeatureRequest,GoCommand,modules | medium | Major |
404,058,779 | opencv | fully connected layer::weightsMat clone missing | I faced a bug in the fully connected layer that I could only resolve by cloning `weightsMat`:
https://github.com/opencv/opencv/blob/a65ccc06039d7e69d48bdad09f6afc45c3f37304/modules/dnn/src/layers/fully_connected_layer.cpp#L83
`weightsMat = blobs[0] = blobs[0].reshape(1, numOutput).clone();`
I noticed the same clone-on-reshape pattern for the weights Mat in the convolution layer:
https://github.com/opencv/opencv/blob/a65ccc06039d7e69d48bdad09f6afc45c3f37304/modules/dnn/src/layers/convolution_layer.cpp#L284
I wonder whether the `.clone()` of `weightsMat` in the fully connected layer is missing.
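The concern can be illustrated with a NumPy analogy (hedged: this is not the OpenCV code; like `cv::Mat::reshape`, `np.reshape` returns a view sharing the same storage, so without the clone/copy the weights alias the blob):

```python
import numpy as np

# `blob` plays the role of blobs[0]; reshape returns a view that shares
# storage, so an in-place update of the blob also changes the view.
blob = np.zeros(6, dtype=np.float32)
weights_view = blob.reshape(2, 3)          # no clone: aliases blob
weights_copy = blob.reshape(2, 3).copy()   # with clone: independent copy

blob[0] = 42.0
```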
| category: dnn | low | Critical |
404,082,187 | godot | Editor crashes with custom C++ DLL module built under MSVC |
**Godot version:**
3.0.6-stable
**OS/device including version:**
Windows 10 1803
Python 3.6.4
pywin32 224
scons 3.0.1
Visual Studio 2015 14.0.25431.01 Update 3
**Issue description:**
After building the editor (tools) with a custom C++ module in a `dll` configuration, the editor crashes on startup.
The error starts at this line:
https://github.com/godotengine/godot/blob/8ac39d886307d76c286e804e027fc39f6b5aaac6/core/string_db.cpp#L287
Then crashes at this line due to `OS::get_singleton()` returning NULL:
https://github.com/godotengine/godot/blob/8ac39d886307d76c286e804e027fc39f6b5aaac6/core/error_macros.cpp#L84
**Steps to reproduce:**
- Clone `3.0.6-stable` from git repository
- Setup build environment as above
- Extract attached zip file to `modules/`
- Run the following command line from `VS2015 x64 Native Tools Command Prompt`
```sh
scons -j8 platform=windows tools=yes target=debug
```
- Run built editor with the proper work directory.
**Minimal reproduction project:**
[saturn.zip](https://github.com/godotengine/godot/files/2805674/saturn.zip) | bug,topic:buildsystem,confirmed | low | Critical |
404,082,916 | rust | Make the `unconditional_recursion` lint work across function calls | The lint for unconditional recursion currently only handles the case where a function calls itself *directly*, which means that many useful cases are missed:
* https://github.com/rust-lang/rust/issues/40437
* https://github.com/rust-lang/rust/issues/57633
* https://github.com/rust-lang/rust/issues/57299
* https://github.com/rust-lang/rust/issues/45838
* (+ lots of duplicates)
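For illustration, a minimal case the lint misses today (hypothetical function names): each function unconditionally calls the other, so a call to either never terminates, yet no single function calls itself directly and no warning is emitted:

```rust
#[allow(dead_code)]
fn ping(n: u64) -> u64 {
    pong(n) // unconditional: always reached when `ping` is called
}

#[allow(dead_code)]
fn pong(n: u64) -> u64 {
    ping(n) // closes the cycle; calling either function overflows the stack
}
```

A whole-callgraph analysis, as discussed below, would flag the `ping` <-> `pong` cycle.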
I've talked to @eddyb about this and it seems like they've come up with a workable solution that might also benefit other MIR passes:
<details>
<summary>IRC log</summary>
<p>
```
22:32 <eddyb> anyway, consider building a callgraph where you're only considering calls that are unconditional to some extent, i.e. if the function returns, they *must* have happened
22:32 <eddyb> then just fine cycles in it
22:32 <eddyb> *find
22:33 <eddyb> the current analysis is most likely that but limited to self-cycles
22:33 <jschievink> hmm, yeah. sounds like I need to determine postdominators then
22:33 <eddyb> jschievink: the monomorphization collector is actually perfect for this - or would be, if it recorded the source of a call :P
22:34 <eddyb> but, like, you do care about call targets post-Instance::resolve
22:34 <eddyb> (I just had an idea, heh)
22:34 <eddyb> (an optimization for the monomorphization collector)
22:36 <jschievink> so you want to run the lint on monomorphized MIR instead?
22:36 <eddyb> jschievink: the lint already kind of does this by taking parameters into account, it's just polymorphically doing it
22:37 <eddyb> ("monomorphized MIR" is not a thing that is actually stored anywhere, we monomorphize on the fly)
22:37 <jschievink> yeah, I didn't know the existing lint did that
22:37 <jschievink> it seemed so limited
22:38 <eddyb> I mean, all it does is it knows whether something can refer back to itself despite being a trait impl method
22:38 <eddyb> you only need to consider partial/full monomorphization if you look at the whole callgraph
22:38 <eddyb> to be able to construct it in the first place, I mean
22:39 <eddyb> basically you should "expand" callers of trait methods, transitively, until nothing actually can still hit trait dispatch
22:39 <eddyb> so it's actually the opposite of the collector, lol
22:40 <jschievink> wow
22:40 <eddyb> since you want to demand as little specificity as possible, while still ending up with a callgraph with only function bodies (and, well, dynamic dispatch)
22:41 <eddyb> so e.g. `foo<T>` calls `bar<Vec<T>>` and `bar<X>` calls `X::default()`
22:43 <eddyb> so you start with `<Self as Default>::default`, unknown `Self`, look at its callers (which hopefully is easy on a graph), and find that `Self` could be `X` of `bar<X>`
22:44 <eddyb> you recurse, with `bar<X>`, unknown `X`, and in its callers you find that `X` could be `Vec<T>` from `foo<T>`
22:45 <eddyb> therefore, `Self` could be `Vec<_>`, and that's the first type which is enough to resolve the implementation (of `<Vec<_> as Default>::default`)
22:46 <eddyb> this means that you can ignore the fact that `foo<T>` has even a million callers, all with different `T`, and not expand `bar` or `Default::default` further (especially since `Vec<T>: Default` doesn't require `T: Default`)
22:46 <eddyb> jschievink: this seems like a viable strategy for any callgraph-based analysis, not just infinite recursion lints
22:47 <eddyb> maybe someone should note it somewhere, before I forget :P
22:47 * eddyb does need to get back to work though
22:47 <jschievink> ah, so you could use the same approach in the collector?
22:49 <eddyb> jschievink: uhhhh
22:49 <eddyb> jschievink: the collector actually needs to monomorphize a million `foo`, `bar` and `<Vec<_> as Default>::default` (even if we might alleviate this in the future)
22:50 <eddyb> jschievink: hmm maybe you can do this collection in the forward direction too, with a bit of precomputation
22:53 <eddyb> jschievink: ah, no, it doesn't work forward because you'd need to actually gather the *transitive* set of callers
22:53 <eddyb> i.e. know that `foo<T>` calls `Vec<T>::default`, transitively
22:57 <jschievink> how would this analysis start, given that I need the call graph in the first place in order to find all callers of a method?
22:58 <eddyb> jschievink: at the end of the day, monomorphization wants to know all the static call targets (potentially ignoring some type parameters?), whereas this callgraph analysis thing wants to know all the definitions involved, with no finer granularity. they could be related but I have a hard time thinking about it
22:58 <eddyb> jschievink: you can build a callgraph that refers to `Default::default`
23:00 <jschievink> ah, so you'd build the callgraph "normally" and then expand references to trait methods?
23:00 <eddyb> you should mark it as unresolved though, to distinguish it from "default trait method body" (which has the same `DefId`)
23:00 <eddyb> jschievink: yupp
23:00 <eddyb> you'd build the callgraph fully generic, perhaps with Substs on the edges
```
</p>
</details>
| C-enhancement,A-lints,T-compiler,A-MIR,C-optimization | low | Major |
404,111,214 | rust | really bad error messages for trying to `use` a macro from a module | If you read up on new-style macro imports in the [edition guide](https://rust-lang-nursery.github.io/edition-guide/rust-2018/macros/macro-changes.html) and miss the tiny note that it doesn't work for macros in modules in your own crate (why not?!), you might try this:
```rust
mod has_a_macro {
macro_rules! a_macro { () => {} }
}
use crate::has_a_macro::a_macro;
fn main() {
a_macro!();
}
```
**BIG MISTAKE.**
```
error[E0432]: unresolved import `crate::has_a_macro::a_macro`
--> src/main.rs:6:5
|
6 | use crate::has_a_macro::a_macro;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `a_macro` in `has_a_macro`
error: cannot determine resolution for the macro `a_macro`
--> src/main.rs:8:5
|
8 | a_macro!();
| ^^^^^^^
|
= note: import resolution is stuck, try simplifying macro imports
error: aborting due to 2 previous errors
```
OK, two errors here, neither of which are helpful.
The first one simply says the macro doesn't exist when it obviously does, this is just the compiler gaslighting the user. Next!
The second one is fairly hilarious (this is already the simplest situation possible) and unactionable.
Let's see... I know that in the past you used to put `#[macro_use]` above the `mod` declaration to do this. Adding that nets a different second error:
```
error[E0659]: `a_macro` is ambiguous (`macro_rules` vs non-`macro_rules` from other module)
--> src/main.rs:9:5
|
9 | a_macro!();
| ^^^^^^^ ambiguous name
|
note: `a_macro` could refer to the macro defined here
--> src/main.rs:3:5
|
3 | macro_rules! a_macro { () => {} }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: `a_macro` could also refer to the unresolved item imported here
--> src/main.rs:6:5
|
6 | use crate::has_a_macro::a_macro;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
= help: use `crate::a_macro` to refer to this unresolved item unambiguously
```
This is... really confusing. I can see that the compiler knows exactly which macro I want to call, but it refuses to do it, and for some reason it says it's ambiguous with the statement trying to import it. The `` `macro_rules` vs non-`macro_rules` `` word salad doesn't help matters.
Also, the suggestion doesn't work, you just get the original `import resolution is stuck` error back.
So, the eventual solution is to remove the `use` statement, leaving only `#[macro_use]`, but this isn't discoverable from the mess of errors. Can we take off the compiler-colored glasses and make better diagnostics for this situation? I can't imagine I'm the only person who tried this. | C-enhancement,A-diagnostics,A-resolve,A-macros,T-compiler | medium | Critical |
404,176,175 | deno | WebRTC Integration | Reference:
[WebRTC Native API](https://webrtc.googlesource.com/src/+/master/native-api.md)
[samples client & server both side](https://webrtc.googlesource.com/src/+/master/examples)
| cli,suggestion | high | Critical |
404,176,246 | pytorch | caffe2 softmaxwithloss problem | My output label blob is batch_size x 10 (for 10 labels):
```python
blob_out = model.FC(blob_out, 'pred', dim_in, 10)
softmax, loss = model.SoftmaxWithLoss([blob_out, labels], ['softmax', 'loss'], scale=scale)
```
but `SoftmaxWithLoss` reports an error like this:
```
what(): [enforce fail at softmax_ops.cu:307] T.size_from_dim(canonical_axis) == 1. 10 vs 1 Error from operator:
input: "gpu_0/pred" input: "gpu_0/labels" output: "gpu_0/softmax" output: "gpu_0/loss" name: "" type: "SoftmaxWithLoss" arg { name: "scale" f: 1 } device_option { device_type: 1 cuda_gpu_id: 0 }
0 0 0 1 0 0 0 1 0 0 *** Aborted at 1548750491 (unix time) try "date -d @1548750491" if you are using GNU date ***
PC: @ 0x7f89558a1428 gsignal
```
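The enforce says the label blob must reduce to a single value per example: by default `SoftmaxWithLoss` expects integer class indices of shape `(batch_size,)`, not one-hot rows of shape `(batch_size, 10)`. A hedged NumPy sketch of the conversion (variable names are made up):

```python
import numpy as np

# Two one-hot rows over 10 classes (classes 3 and 7).
one_hot = np.eye(10, dtype=np.float32)[[3, 7]]       # shape (2, 10)

# What SoftmaxWithLoss wants: one integer class index per example.
class_idx = one_hot.argmax(axis=1).astype(np.int32)  # shape (2,)
```

(If I remember correctly, caffe2's `SoftmaxWithLoss` also accepts probability labels via its `label_prob` argument; check the operator documentation.)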
| caffe2 | low | Critical |
404,176,291 | flutter | precacheImage should return bool instead of void | https://docs.flutter.io/flutter/widgets/precacheImage.html should return a `bool` instead of `void`, indicating whether loading the image succeeded.
Currently I use
```dart
bool isSuccess = true;
await precacheImage(..., onError: () => isSuccess = false);
if(isSuccess) {
...
}
```
which is ugly and should instead allow
```dart
if(await precacheImage(...) {
...
}
```
```
[✓] Flutter (Channel master, v1.1.10-pre.205, on Mac OS X 10.14.3 18D42, locale en-AT)
• Flutter version 1.1.10-pre.205 at /Users/zoechi/flutter/flutter
• Framework revision e38efc890b (4 hours ago), 2019-01-28 19:55:01 -0800
• Engine revision 9b6d5031a3
• Dart version 2.1.1 (build 2.1.1-dev.3.2 a5030ed92f)
```
See also https://github.com/flutter/flutter/issues/16592#issuecomment-427180769 | framework,a: quality,a: images,c: proposal,P2,team-framework,triaged-framework | low | Critical |
404,208,019 | vscode | [folding] Show tooltip on hovering collapse markers | It would be nice to be able to hover the `⊞` and `⊟` symbols in the left gutter of the editor and see a tooltip saying what will happen if they're clicked\*, and showing which keyboard shortcut (if any) is assigned to the action, thus making the UI behavior more discoverable.
<sub>\* This may sound superfluous, but IMO it would help with promoting consistent nomenclature throughout the interface, e.g. "collapse"/"expand" vs. "fold"/"unfold" vs. "hide"/"show", etc.</sub> | feature-request,editor-folding | low | Minor |
404,209,521 | godot | Navigation Polygon Instance editing to conform to other polygon instance editing | **Godot version:**
3.1 beta 2
**OS/device including version:**
Windows but any should do.
**Issue description:**
Navigation poly editing currently looks like this

and thus has to be done by hand in the editor for "exact" numbers.
**Expected**
Navigation Poly editing should look like this

which would conform to other poly editing that is already existing.
**Steps to reproduce:**
Open editor
Create Nav Poly
Painfully try to get accurate pixels on a 16 by 16 poly; you are unable to zoom at the correct level, so it's very tedious.
**Minimal reproduction project:**
Blank Project should work for starting.
| enhancement,topic:editor,usability | low | Minor |
404,224,797 | rust | Tracking Issue for making incremental compilation the default for Release Builds | Since incremental compilation can be used in conjunction with ThinLTO, the runtime performance of incrementally built artifacts is (presumably) roughly on par with non-incrementally built code. At the same time, building things incrementally is often significantly faster ([1.4-5x](https://github.com/rust-lang/rust/pull/56678#issuecomment-446606215) according to perf.rlo). As a consequence, it might be a good idea to make Cargo default to incremental compilation for release builds.
Possible caveats that need to be resolved:
- [ ] The initial build is slightly slower with incremental compilation, usually around 10%. We need to decide if this is a worthwhile tradeoff. For `debug` and `check` builds everybody seems to be fine with this already.
- [ ] Some crates, like `style-servo`, are always slower to compile with incr. comp., even if there is just a small change. In the case of `style-servo` that is 62 seconds versus 64-69 seconds on perf.rlo. It is unlikely that this would improve before we make incr. comp. the default. We need to decide if this is a justifiable price to pay for improvements in other projects.
- [ ] Even if incremental compilation becomes the default, one can still always opt out of it via the `CARGO_INCREMENTAL` flag or a local Cargo config. However, this might not be common knowledge, the same as it isn't common knowledge that one can improve runtime performance by forcing the compiler to use just one codegen unit.
- [x] It still needs to be verified that runtime performance of compiled artifacts does not suffer too much from switching to incremental compilation (see below).
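The opt-out mentioned in the caveats above can also be expressed per profile rather than through the `CARGO_INCREMENTAL` environment variable. A sketch of what that local configuration could look like (key names per the Cargo book's profile settings; worth double-checking against the Cargo version in use):

```toml
# Cargo.toml — sketch of pinning the choice per profile instead of
# relying on the CARGO_INCREMENTAL environment variable.
[profile.release]
incremental = true   # or false, to keep today's non-incremental default explicit
```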
## Data on runtime performance of incrementally compiled release artifacts
Apart from anecdotal evidence that runtime performance is "roughly the same" there have been two attempts to measure this in a more reliable way:
1. PR #56678 did an experiment where we compiled the compiler itself incrementally and then tested how the compiler's runtime performance was affected by this. The results are twofold:
1. In general performance drops by **1-2%** ([compare results](https://perf.rust-lang.org/compare.html?start=3a3121337122637fa11f0e5d42aec67551e8c125&end=26f96e5eea2d6d088fd20ebc14dc90bdf123e4a1) for `clean` builds)
2. For two of the small test cases (`helloworld`, `unify-linearly`) performance drops by 30%. It is known that these test cases are very sensitive to LLVM making the right inlining decisions, which we already saw when switching from single-CGU to non-incremental ThinLTO. This is indicative that microbenchmarks may see performance drops unless the author of the benchmark takes care of marking bottleneck functions with `#[inline]`.
2. For a limited period of time we made incremental compilation the default in Cargo (https://github.com/rust-lang/cargo/pull/6564) in order to see how this affected measurements on [lolbench.rs](https://lolbench.rs). It is not yet clear if the experiment succeeded and how much useful data it collected since we had to cut it short because of a regression (#57947). The initial data looks promising: only a handful of the ~600 benchmarks showed performance losses (see https://lolbench.rs/#nightly-2019-01-27). But we need further investigation on how reliable the results are. We might also want to re-run the experiment since the regression can easily be avoided.
One more experiment we should do is compiling Firefox because it is a large Rust codebase with an excellent benchmarking infrastructure (cc @nnethercote).
cc @rust-lang/core @rust-lang/cargo @rust-lang/compiler
| I-compiletime,T-compiler,A-incr-comp,T-core,T-cargo,C-tracking-issue,WG-compiler-performance,S-tracking-design-concerns | medium | Critical |
404,279,585 | vscode | Support filter on type should in search tree | Testing #67274

| feature-request,search | low | Major |
404,296,954 | flutter | [proposal] ValueKey<String>.toString should return the actual value without inserting quotes | Not trying to nitpick, but I just lost a lot of time debugging and I wanted to double check that others agreed this is intentional. Maybe this is a convention in Dart or Flutter, but coming from many other languages I was surprised that quotes were sometimes being inserted into the String representation of the ValueKey. I spent some time debugging, especially since sometimes the exact type could be determined at runtime and other times not. But behavior would have been as I expected if these quotes were not getting inserted.
`final String valueString = T == String ? '<\'$value\'>' : '<$value>';`
[https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/foundation/key.dart#L77](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/foundation/key.dart#L77) | framework,c: proposal,P2,team-framework,triaged-framework | low | Critical |
404,300,613 | pytorch | TorchConfig.cmake always sets _GLIBCXX_USE_CXX11_ABI | Some libraries, namely pybind11, check for the presence of this macro to determine the compiler version; see https://github.com/pybind/pybind11/blob/master/include/pybind11/numpy.h#L286.
When using TorchConfig.cmake and TORCH_CPP_FLAGS or the torch imported target, this definition is added even if the compiler doesn't support it. This breaks compatibility with those libraries. | module: build,triaged | low | Minor |
404,318,535 | go | cmd/compile: variable read should not be optimized when using -gcflags="all=-N -l" | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.4 windows/amd64
go version go1.12beta2 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\florin\AppData\Local\go-build
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=D:\go
set GOPROXY=
set GORACE=
set GOROOT=C:\Go1.12
set GOTMPDIR=
set GOTOOLDIR=C:\Go1.12\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=D:\awesomeProject9\go.mod
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\florin\AppData\Local\Temp\go-build732947326=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
- create a project using Go Modules support and the code from https://play.golang.org/p/nnDfvOVbWk9
- run go-delve/delve@b0056ebc97d8d43a13050839521a2110655fc7eb ` dlv debug blah.go `
- place a breakpoint on line 22
- run the application and make a request with ` curl -i http://localhost:8080/ `
- run ` set returnStatus = 404 ` in delve then resume the application
### What did you expect to see?
The debugger should be able to change the value of the function parameter before the call happens.
### What did you see instead?
The debugger was unable to change the value and the old value was used.
My expectation is that if I compile the application using ` -gcflags="all=-N -l" ` such optimizations are removed and I can do what I need with my program to debug it.
I also noticed that if I swap lines 22 and 23 and keep the breakpoint on line 22 then the result is the expected one. So the optimization here is not even applied consistently (should I open a separate bug for it?).
According to @aarzilli in the original issue, https://github.com/go-delve/delve/issues/1473#issuecomment-458530025:
that line compiles to:
```
blah.go:22 0x750931 488b8424e0000000 MOVQ 0xe0(SP), AX
blah.go:22 0x750939 8400 TESTB AL, 0(AX)
blah.go:22 0x75093b 488b4028 MOVQ 0x28(AX), AX
blah.go:22 0x75093f 488b8c24e8000000 MOVQ 0xe8(SP), CX
blah.go:22 0x750947 48890c24 MOVQ CX, 0(SP)
blah.go:22 0x75094b 48c7442408c8000000 MOVQ $0xc8, 0x8(SP)
blah.go:22 0x750954 ffd0 CALL AX
```
the instruction of interest is 0x75094b which sets one of the function arguments directly to the constant 0xc8 (200). The compiler noticed that the value of the variable is constant and optimized the read away. | NeedsFix,Debugging,compiler/runtime | low | Critical |
404,352,675 | TypeScript | Readonly properties can be modified in derived classes | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
This issue is created even though there was already a pretty complete issue about the purpose of the readonly keyword ( https://github.com/Microsoft/TypeScript/issues/8496#issuecomment-217500742 ), and I'll just quote @RyanCavanaugh :
> Basically we see properties as "possibly readonly" or "definitely readonly", and allow assignments from definitely-readonly things to possibly-readonly things.
I totally agree that we must guard against code from libraries and/or node modules, as still today there are modules that don't include d.ts files and the DefinitelyTyped repo doesn't have 100% up-to-date d.ts files.
BUT as the main purpose was to make a compromise, let's add a compromise on top of the compromise; see the code below to understand why we should make the user more careful about mutating a `readonly` property when deriving a class.
**TypeScript Version:** 3.3.0-dev.201xxxxx
**Search Terms:** final readonly properties derived inherits
**Code**
```ts
abstract class Pet {
protected abstract race: string;
protected abstract sayMyRace(): void;
}
class Dog extends Pet {
protected readonly race: string = "Dog";
sayMyRace() {
console.log(this.race);
}
}
class NotADog extends Dog {
protected readonly race: string = "Robot";
}
const scooby = new Dog();
scooby.sayMyRace();
const marvin = new NotADog();
marvin.sayMyRace();
```
**Expected behavior:**
An error saying that `race` in `NotADog` cannot be modified, as it was already declared in class `Dog`; we set the same modifiers in both cases, meaning we are knowingly trying to override a constant.
**Actual behavior:**
Property is overwritten, ignoring the readonly property from the already existing parent's property.
**Playground Link:**
[here](https://www.typescriptlang.org/play/#src=abstract%20class%20Pet%20%7B%0D%0A%20%20%20%20protected%20abstract%20race%3A%20string%3B%0D%0A%0D%0A%20%20%20%20protected%20abstract%20sayMyRace()%3A%20void%3B%0D%0A%7D%0D%0A%0D%0Aclass%20Dog%20extends%20Pet%20%7B%0D%0A%20%20%20%20protected%20readonly%20race%3A%20string%20%3D%20%22Dog%22%3B%0D%0A%0D%0A%20%20%20%20sayMyRace()%20%7B%0D%0A%20%20%20%20%20%20%20%20console.log(this.race)%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Aclass%20NotADog%20extends%20Dog%20%7B%0D%0A%20%20%20%20protected%20readonly%20race%3A%20string%20%3D%20%22Robot%22%3B%0D%0A%7D%0D%0A%0D%0Aconst%20scooby%20%3D%20new%20Dog()%3B%0D%0Ascooby.sayMyRace()%3B%0D%0A%0D%0Aconst%20marvin%20%3D%20new%20NotADog()%3B%0D%0Amarvin.sayMyRace()%3B)
Note that in the [documentation](https://www.typescriptlang.org/docs/handbook/classes.html#readonly-modifier) the example shows an error when trying to mutate the property created in the class.
I'm not asking to put a check rule on every keyword in every line of code, but to accept that if a class has a readonly property, nobody can mutate it from outside. I'm not talking about the interfaces and/or declaration files that help make the compromise, but the TypeScript [documentation](http://www.typescriptlang.org/docs/handbook/interfaces.html) states:
>The easiest way to remember whether to use readonly or const is to ask whether you’re using it on a variable or a property. Variables use const whereas properties use readonly.
If a property with readonly is a variable with const, then we must keep this consistent and prevent any mutation from a class property to another class property; at least that would allow adding a safety check in the code.
Also, the documentation regarding readonly properties in interfaces states that
> Some properties should only be modifiable when an object is first created.
If the team says that mutating an explicitly typed `readonly` property from a class to a subclass is normal, as the compromise was done and changing the rules would break the declaration files (even though two years have passed):
So to sum up :
I open this issue as a bug because the documentation states that a readonly property is a constant variable, and because, based on the little information about it (still from the documentation), this is not the expected behavior.
I also considered the previous discussion and the compromise made to avoid breaking backward compatibility, and I am asking only to apply the readonly check rule from class to class by default, and to allow more via a tsconfig rule later. | Suggestion,In Discussion | low | Critical |
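As an aside (not part of the original report): one pattern that already produces a compile error on redeclaration today is a private backing field, since TypeScript rejects separate declarations of a private property in a subclass. A sketch of that workaround:

```typescript
class Pet {
    // A private field cannot be redeclared in a subclass: TypeScript
    // reports that the types have separate declarations of a private
    // property, which approximates the protection the report asks for.
    private readonly race: string;

    constructor(race: string) {
        this.race = race;
    }

    sayMyRace(): string {
        return this.race;
    }
}

class Dog extends Pet {
    constructor() {
        super("Dog");
    }
}

// class NotADog extends Dog {
//     private readonly race = "Robot"; // compile error today
// }

console.log(new Dog().sayMyRace()); // "Dog"
```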