id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
503,642,043 | opencv | Prepare a guideline for universal intrinsics development | ##### Detailed description
There is active expansion of Universal Intrinsics (UI) usage across the library, and consequently active development of new UI functionality. There are also ongoing efforts to extend UI support to new architectures.
All of this leads to discussion about a UI API and structure that fits everyone, so it is reasonable to discuss and prepare a development guideline to refer to while developing new UI features.
| category: documentation,RFC | low | Minor |
503,654,715 | flutter | Let's teach GN to build the final bucket artifacts for the engine | Today, the recipes need to know the details of the build's output files.
We should have GN build targets produce the zip files we want, along with some kind of file listing the paths to the outputs the recipe should upload. That way, the recipe could simply read the list of output files and upload them, without needing to be updated when we change build rules or file names.
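As a minimal sketch of this manifest-driven approach: the build emits a small file listing its artifacts, and the recipe reads that file instead of hard-coding names. The manifest file name (`artifacts.json`) and its JSON shape are assumptions for illustration, not Flutter's actual build output.

```python
import json
import tempfile
from pathlib import Path


def collect_artifacts(manifest_path):
    """Read a build-emitted manifest (hypothetical format) and return the
    artifact paths the recipe should upload, so the recipe never needs to
    know individual file names."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [Path(p) for p in manifest["artifacts"]]


if __name__ == "__main__":
    # Simulate a manifest that a GN build target might produce.
    with tempfile.TemporaryDirectory() as tmp:
        m = Path(tmp) / "artifacts.json"
        m.write_text(json.dumps({"artifacts": ["out/engine.zip", "out/symbols.zip"]}))
        print([str(p) for p in collect_artifacts(m)])
```

With this, changing a build rule only changes the manifest contents; the upload step stays untouched.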
/cc @jason-simmons @chinmaygarde @iskakaushik @cbracken @Hixie | team,engine,P2,team-engine,triaged-engine | low | Minor |
503,669,929 | flutter | testReleaseFlutterViewController - Engine failed to release | I saw this on a PR I was working on:
```
Running add-to-app iOS integration tests...
β12:11:26β RUNNING: cd dev/integration_tests/ios_add2app; ../../../build_and_test.sh
/private/tmp/flutter sdk/dev/integration_tests/ios_add2app/flutterapp /private/tmp/flutter sdk/dev/integration_tests/ios_add2app
Downloading ios tools... 33.6s
Downloading ios-profile tools... 26.1s
Downloading ios-release tools... 107.7s
Building com.example.iosAdd2appFlutter for simulator (ios)...
Running Xcode build...
Xcode build done. 36.0s
Built /private/tmp/flutter sdk/dev/integration_tests/ios_add2app/flutterapp/build/ios/iphonesimulator/Runner.app.
/private/tmp/flutter sdk/dev/integration_tests/ios_add2app
Analyzing dependencies
Adding spec repo `trunk` with CDN `https://cdn.cocoapods.org/`
Downloading dependencies
Installing EarlGrey (1.15.1)
Installing Flutter (1.0.0)
Installing FlutterPluginRegistrant (0.0.1)
Installing ios_add2app_flutter (0.0.1)
Generating Pods project
Integrating client project
Pod installation complete! There are 4 dependencies from the Podfile and 4 total pods installed.
/usr/local/bin/xcpretty
▸ Compiling Pods-ios_add2appTests-dummy.m
▸ Compiling FlutterPluginRegistrant-dummy.m
▸ Compiling GeneratedPluginRegistrant.m
▸ Building library libPods-ios_add2appTests.a
▸ Building library libFlutterPluginRegistrant.a
▸ Compiling Pods-ios_add2app-dummy.m
▸ Building library libPods-ios_add2app.a
▸ Running script '[CP] Check Pods Manifest.lock'
▸ Running script '[CP-User] Run Flutter Build Script'
▸ Compiling MainViewController.m
▸ Compiling NativeViewController.m
▸ Compiling main.m
▸ Compiling HybridViewController.m
▸ Compiling FullScreenViewController.m
▸ Compiling DualFlutterViewController.m
▸ Compiling AppDelegate.m
▸ Linking ios_add2app
▸ Compiling Launch\ Screen.storyboard
▸ Processing Info.plist
▸ Running script '[CP] Embed Pods Frameworks'
▸ Touching ios_add2app.app (in target: ios_add2app)
▸ Processing Info.plist
▸ Running script '[CP] Check Pods Manifest.lock'
▸ Compiling IntegrationTests.m
▸ Compiling FlutterViewControllerTests.m
▸ Linking ios_add2appTests
▸ Copying /tmp/flutter\ sdk/dev/integration_tests/ios_add2app/Pods/EarlGrey/EarlGrey/EarlGrey.framework
▸ Running script '[CP] Embed Pods Frameworks'
▸ Copying /Applications/Xcode-10.2.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/Library/PrivateFrameworks/XCTAutomationSupport.framework
▸ Copying /Applications/Xcode-10.2.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/Library/Frameworks/XCTest.framework
▸ Copying /Applications/Xcode-10.2.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/lib/libXCTestBundleInject.dylib
▸ Touching ios_add2appTests.xctest (in target: ios_add2appTests)
2019-10-07 12:15:56.581 xcodebuild[95978:171247] IDETestOperationsObserverDebug: Writing diagnostic log for test session to:
/Users/anka/Library/Developer/Xcode/DerivedData/ios_add2app-csiwhdwugybmwjeggtidemdjpolj/Logs/Test/Test-ios_add2appTests-2019.10.07_12-15-23--0700.xcresult/1_Test/Diagnostics/ios_add2appTests-00CB5F11-59D5-409D-9036-1A25FDC64376/ios_add2appTests-D220C3D6-ECEA-494B-865F-DDF9EA725B79/Session-ios_add2appTests-2019-10-07_121556-pLDeOu.log
2019-10-07 12:15:56.583 xcodebuild[95978:170218] [MT] IDETestOperationsObserverDebug: (85BF7B99-172F-4555-9F54-5AA3472A64D7) Beginning test session ios_add2appTests-85BF7B99-172F-4555-9F54-5AA3472A64D7 at 2019-10-07 12:15:56.582 with Xcode 10E125 on target <DVTiPhoneSimulator: 0x7f81c6155350> {
SimDevice: iPhone X (30AB7C65-B526-4C4B-AE55-E6C7C8D8B47F, iOS 12.2, Shutdown)
} (12.2 (16E226))
2019-10-07 12:16:58.058 xcodebuild[95978:170218] [MT] IDETestOperationsObserverDebug: (85BF7B99-172F-4555-9F54-5AA3472A64D7) Finished requesting crash reports. Continuing with testing.
All tests
Test Suite ios_add2appTests.xctest started
FlutterTests
✓ testDualFlutterView (14.059 seconds)
✓ testFullScreenCanPop (9.309 seconds)
✓ testHybridView (28.382 seconds)
ViewControllerRelease
✗ testReleaseFlutterViewController, ((weakEngine) == nil) failed: "<FlutterEngine: 0x600000040690>" - Engine failed to release.
ViewControllerRelease
testReleaseFlutterViewController, ((weakEngine) == nil) failed: "<FlutterEngine: 0x600000040690>" - Engine failed to release.
/tmp/flutter sdk/dev/integration_tests/ios_add2app/ios_add2appTests/FlutterViewControllerTests.m:24
}
XCTAssertNil(weakEngine, @"Engine failed to release.");
}
```
```
Executed 4 tests, with 1 failure (0 unexpected) in 52.207 (52.222) seconds
2019-10-07 12:18:08.160 xcodebuild[95978:170218] [MT] IDETestOperationsObserverDebug: 131.607 elapsed -- Testing started completed.
2019-10-07 12:18:08.161 xcodebuild[95978:170218] [MT] IDETestOperationsObserverDebug: 0.001 sec, +0.001 sec -- start
2019-10-07 12:18:08.161 xcodebuild[95978:170218] [MT] IDETestOperationsObserverDebug: 131.607 sec, +131.607 sec -- end
Failing tests:
ios_add2appTests:
-[ViewControllerRelease testReleaseFlutterViewController]
Test session results and logs:
/Users/anka/Library/Developer/Xcode/DerivedData/ios_add2app-csiwhdwugybmwjeggtidemdjpolj/Logs/Test/Test-ios_add2appTests-2019.10.07_12-15-23--0700.xcresult
** TEST FAILED **
β12:13:22β ELAPSED TIME: 6min 48.661s for ../../../build_and_test.sh in dev/integration_tests/ios_add2app
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
ERROR: Last command exited with 65 (expected: zero).
Command: ../../../build_and_test.sh
Relative working directory: dev/integration_tests/ios_add2app
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
https://cirrus-ci.com/task/5629588356464640 | a: tests,c: regression,team,engine,a: existing-apps,P2,team-engine,triaged-engine | low | Critical |
503,692,466 | go | cmd/doc: load packages with golang.org/x/tools/go/packages | cmd/doc currently uses `go/build.Import` to locate and import packages. It should use `golang.org/x/tools/go/packages`.
This may be a little complicated because cmd/doc supports a number of shorthand formats for specifying the package to display documentation for, so we may not have the full package path. However, `go/build` has limited support for modules, and we're considering deprecating it eventually. Using `go/packages` could also simplify `cmd/doc`, since it wouldn't need code for parsing Go source. | NeedsFix,GoCommand,Tools | low | Minor |
503,696,277 | pytorch | TSAN failure related to mkldnn | ## 🐛 Bug
When running TSAN on test/cpp/jit:jit -- 'JitTest\.LiteInterpreterConv', the test fails because of a data race.
## To Reproduce
Steps to reproduce the behavior:
FB internal build
The output trace is below:
```
> WARNING: ThreadSanitizer: data race (pid=3218388)
> Write of size 8 at 0x7bbc0000a210 by thread T15:
> #0 memset <...> (jit+0xb669b):17 (libmkldnn.so+0x1d38bd):10 (libmkldnn.so+0x1d38bd):15 (libmkldnn.so+0x1d38bd)
>
> Previous write of size 8 at 0x7bbc0000a210 by main thread:
> #0 posix_memalign <...> (jit+0xa325b)
> #1 c10::alloc_cpu(unsigned long) caffe2/c10/core/CPUAllocator.cpp:55 (libcaffe2_c10_c10.so+0x21088)
> #2 c10::DefaultCPUAllocator::allocate(unsigned long) const caffe2/c10/core/CPUAllocator.cpp:115 (libcaffe2_c10_c10.so+0x22692)
> #3 c10::Allocator::raw_allocate(unsigned long) caffe2/c10/core/Allocator.h:161 (libcaffe2_aten_ATen-cpu.so+0x35cf15)
> #4 at::native::AllocForMKLDNN::malloc(unsigned long) caffe2/aten/src/ATen/native/mkldnn/MKLDNNCommon.h:15 (libcaffe2_aten_ATen-cpu.so+0x35bf39) (libcaffe2_aten_ATen-cpu.so+0x35bd12) (libcaffe2_aten_ATen-cpu.so+0x38dc5c) (libcaffe2_aten_ATen-cpu.so+0x35b3fc) (libcaffe2_aten_ATen-cpu.so+0x5e22f6) (libcaffe2_aten_ATen-cpu.so+0x5dfb4a) (libcaffe2_aten_ATen-cpu.so+0x5c7539)
> #11 at::native::_mkldnn_conv2d(ideep::tensor const&, ideep::tensor const&, c10::optional<...> const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) caffe2/aten/src/ATen/native/mkldnn/Conv.cpp:89 (libcaffe2_aten_ATen-cpu.so+0x5c5fe6)
> #12 at::native::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) caffe2/aten/src/ATen/native/mkldnn/Conv.cpp:136 (libcaffe2_aten_ATen-cpu.so+0x5c7d44)
> #13 at::TypeDefault::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/gen_aten=TypeDefault.cpp/TypeDefault.cpp:2683 (libcaffe2_aten_ATen-cpu.so+0x1d4db7e)
> #14 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) const caffe2/aten/src/ATen/core/ATenDispatch.h:242 (libcaffe2_libtorch.so+0x750661)
> #15 at::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:5702 (libcaffe2_libtorch.so+0x750080)
> #16 torch::autograd::VariableType::(anonymous namespace)::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long)::$_70::operator()() const buck-out/dev/gen/caffe2/generate-code=VariableType_0.cpp/VariableType_0.cpp:7731 (libcaffe2_libtorch.so+0x74ea91)
> #17 torch::autograd::VariableType::(anonymous namespace)::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/generate-code=VariableType_0.cpp/VariableType_0.cpp:7729 (libcaffe2_libtorch.so+0x74d161)
> #18 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) const caffe2/aten/src/ATen/core/ATenDispatch.h:219 (libcaffe2_aten_ATen-cpu.so+0x624c94)
> #19 at::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:5702 (libcaffe2_aten_ATen-cpu.so+0x60d9a6)
> #20 at::native::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) caffe2/aten/src/ATen/native/Convolution.cpp:640 (libcaffe2_aten_ATen-cpu.so+0x607491)
> #21 at::TypeDefault::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/aten/gen_aten=TypeDefault.cpp/TypeDefault.cpp:1118 (libcaffe2_aten_ATen-cpu.so+0x1d35dab)
> #22 torch::autograd::VariableType::(anonymous namespace)::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/generate-code=VariableType_1.cpp/VariableType_1.cpp:696 (libcaffe2_libtorch.so+0x8de2fb)
> #23 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) const caffe2/aten/src/ATen/core/ATenDispatch.h:219 (libcaffe2_libtorch.so+0x1771e38)
> #24 at::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:2858 (libcaffe2_libtorch.so+0x196702f)
> #25 $_2::operator()(at::Tensor, at::Tensor, c10::optional<...>, std::vector<...>, std::vector<...>, std::vector<...>, bool, std::vector<...>, long, bool, bool, bool) const caffe2/torch/csrc/jit/mobile/register_mobile_ops.cpp:33 (libcaffe2_libtorch.so+0x1966b77)
> #26 c10::detail::WrapRuntimeKernelFunctor_<...>::operator()(at::Tensor, at::Tensor, c10::optional<...>, std::vector<...>, std::vector<...>, std::vector<...>, bool, std::vector<...>, long, bool, bool, bool) caffe2/aten/src/ATen/core/boxing/kernel_lambda.h:23 (libcaffe2_libtorch.so+0x19667b3)
> #27 c10::guts::infer_function_traits<...>::type::return_type c10::detail::call_functor_with_args_from_stack_<...>(c10::detail::WrapRuntimeKernelFunctor_<...>*, std::vector<...>*, std::integer_sequence<...>) caffe2/aten/src/ATen/core/boxing/kernel_functor.h:194 (libcaffe2_libtorch.so+0x1966257)
> #28 c10::guts::infer_function_traits<...>::type::return_type c10::detail::call_functor_with_args_from_stack<...>(c10::detail::WrapRuntimeKernelFunctor_<...>*, std::vector<...>*) caffe2/aten/src/ATen/core/boxing/kernel_functor.h:202 (libcaffe2_libtorch.so+0x1965e5f)
> #29 c10::detail::wrap_kernel_functor_boxed<...>::call(c10::OperatorKernel*, std::vector<...>*) caffe2/aten/src/ATen/core/boxing/kernel_functor.h:234 (libcaffe2_libtorch.so+0x196586c)
> #30 c10::KernelFunction::callBoxed(std::vector<...>*) const caffe2/aten/src/ATen/core/boxing/KernelFunction.h:65 (libcaffe2_libtorch.so+0x15d5e25)
> #31 c10::impl::OperatorEntry::callBoxed(std::vector<...>*) const::'lambda'(c10::DispatchTable const&)::operator()(c10::DispatchTable const&) const caffe2/aten/src/ATen/core/dispatch/OperatorEntry.h:74 (libcaffe2_libtorch.so+0x15d59f8)
> #32 std::result_of<...>::type c10::LeftRight<...>::read<...>(c10::impl::OperatorEntry::callBoxed(std::vector<...>*) const::'lambda'(c10::DispatchTable const&)&&) const caffe2/c10/util/LeftRight.h:74 (libcaffe2_libtorch.so+0x15d591d)
> #33 c10::impl::OperatorEntry::callBoxed(std::vector<...>*) const caffe2/aten/src/ATen/core/dispatch/OperatorEntry.h:73 (libcaffe2_libtorch.so+0x15d55aa)
> #34 c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<...>*) const caffe2/aten/src/ATen/core/dispatch/Dispatcher.h:181 (libcaffe2_libtorch.so+0x15d4fec)
> #35 torch::jit::mobile::InterpreterState::run(std::vector<...>&) caffe2/torch/csrc/jit/mobile/interpreter.cpp:40 (libcaffe2_libtorch.so+0x15d436d)
> #36 torch::jit::mobile::Function::run(std::vector<...>&) const caffe2/torch/csrc/jit/mobile/function.cpp:89 (libcaffe2_libtorch.so+0x14e1906)
> #37 torch::jit::mobile::Module::run_method(std::__cxx11::basic_string<...> const&, std::vector<...>&) caffe2/torch/csrc/jit/mobile/module.cpp:25 (libcaffe2_libtorch.so+0x167695f)
> #38 torch::jit::testLiteInterpreterConv() caffe2/test/cpp/jit/test_lite_interpreter.cpp:67 (libcaffe2_test_cpp_jit_test_lib.so+0x1d5527)
> #39 torch::jit::JitTest_LiteInterpreterConv_Test::TestBody() caffe2/test/cpp/jit/gtest.cpp:12 (jit+0x58f24):29 (libgtest.so+0x3a3de)
> #41 main caffe2/test/cpp/common/main.cpp:31 (libcaffe2_test_cpp_common_main.so+0x2f5b)
>
> Location is heap block of size 55296 at 0x7bbc00000000 allocated by main thread:
> #0 posix_memalign <...> (jit+0xa325b)
> #1 c10::alloc_cpu(unsigned long) caffe2/c10/core/CPUAllocator.cpp:55 (libcaffe2_c10_c10.so+0x21088)
> #2 c10::DefaultCPUAllocator::allocate(unsigned long) const caffe2/c10/core/CPUAllocator.cpp:115 (libcaffe2_c10_c10.so+0x22692)
> #3 c10::Allocator::raw_allocate(unsigned long) caffe2/c10/core/Allocator.h:161 (libcaffe2_aten_ATen-cpu.so+0x35cf15)
> #4 at::native::AllocForMKLDNN::malloc(unsigned long) caffe2/aten/src/ATen/native/mkldnn/MKLDNNCommon.h:15 (libcaffe2_aten_ATen-cpu.so+0x35bf39) (libcaffe2_aten_ATen-cpu.so+0x35bd12) (libcaffe2_aten_ATen-cpu.so+0x38dc5c) (libcaffe2_aten_ATen-cpu.so+0x35b3fc) (libcaffe2_aten_ATen-cpu.so+0x5e22f6) (libcaffe2_aten_ATen-cpu.so+0x5dfb4a) (libcaffe2_aten_ATen-cpu.so+0x5c7539)
> #11 at::native::_mkldnn_conv2d(ideep::tensor const&, ideep::tensor const&, c10::optional<...> const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) caffe2/aten/src/ATen/native/mkldnn/Conv.cpp:89 (libcaffe2_aten_ATen-cpu.so+0x5c5fe6)
> #12 at::native::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) caffe2/aten/src/ATen/native/mkldnn/Conv.cpp:136 (libcaffe2_aten_ATen-cpu.so+0x5c7d44)
> #13 at::TypeDefault::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/gen_aten=TypeDefault.cpp/TypeDefault.cpp:2683 (libcaffe2_aten_ATen-cpu.so+0x1d4db7e)
> #14 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) const caffe2/aten/src/ATen/core/ATenDispatch.h:242 (libcaffe2_libtorch.so+0x750661)
> #15 at::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:5702 (libcaffe2_libtorch.so+0x750080)
> #16 torch::autograd::VariableType::(anonymous namespace)::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long)::$_70::operator()() const buck-out/dev/gen/caffe2/generate-code=VariableType_0.cpp/VariableType_0.cpp:7731 (libcaffe2_libtorch.so+0x74ea91)
> #17 torch::autograd::VariableType::(anonymous namespace)::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/generate-code=VariableType_0.cpp/VariableType_0.cpp:7729 (libcaffe2_libtorch.so+0x74d161)
> #18 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) const caffe2/aten/src/ATen/core/ATenDispatch.h:219 (libcaffe2_aten_ATen-cpu.so+0x624c94)
> #19 at::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:5702 (libcaffe2_aten_ATen-cpu.so+0x60d9a6)
> #20 at::native::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) caffe2/aten/src/ATen/native/Convolution.cpp:640 (libcaffe2_aten_ATen-cpu.so+0x607491)
> #21 at::TypeDefault::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/aten/gen_aten=TypeDefault.cpp/TypeDefault.cpp:1118 (libcaffe2_aten_ATen-cpu.so+0x1d35dab)
> #22 torch::autograd::VariableType::(anonymous namespace)::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/generate-code=VariableType_1.cpp/VariableType_1.cpp:696 (libcaffe2_libtorch.so+0x8de2fb)
> #23 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) const caffe2/aten/src/ATen/core/ATenDispatch.h:219 (libcaffe2_libtorch.so+0x1771e38)
> #24 at::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:2858 (libcaffe2_libtorch.so+0x196702f)
> #25 $_2::operator()(at::Tensor, at::Tensor, c10::optional<...>, std::vector<...>, std::vector<...>, std::vector<...>, bool, std::vector<...>, long, bool, bool, bool) const caffe2/torch/csrc/jit/mobile/register_mobile_ops.cpp:33 (libcaffe2_libtorch.so+0x1966b77)
> #26 c10::detail::WrapRuntimeKernelFunctor_<...>::operator()(at::Tensor, at::Tensor, c10::optional<...>, std::vector<...>, std::vector<...>, std::vector<...>, bool, std::vector<...>, long, bool, bool, bool) caffe2/aten/src/ATen/core/boxing/kernel_lambda.h:23 (libcaffe2_libtorch.so+0x19667b3)
> #27 c10::guts::infer_function_traits<...>::type::return_type c10::detail::call_functor_with_args_from_stack_<...>(c10::detail::WrapRuntimeKernelFunctor_<...>*, std::vector<...>*, std::integer_sequence<...>) caffe2/aten/src/ATen/core/boxing/kernel_functor.h:194 (libcaffe2_libtorch.so+0x1966257)
> #28 c10::guts::infer_function_traits<...>::type::return_type c10::detail::call_functor_with_args_from_stack<...>(c10::detail::WrapRuntimeKernelFunctor_<...>*, std::vector<...>*) caffe2/aten/src/ATen/core/boxing/kernel_functor.h:202 (libcaffe2_libtorch.so+0x1965e5f)
> #29 c10::detail::wrap_kernel_functor_boxed<...>::call(c10::OperatorKernel*, std::vector<...>*) caffe2/aten/src/ATen/core/boxing/kernel_functor.h:234 (libcaffe2_libtorch.so+0x196586c)
> #30 c10::KernelFunction::callBoxed(std::vector<...>*) const caffe2/aten/src/ATen/core/boxing/KernelFunction.h:65 (libcaffe2_libtorch.so+0x15d5e25)
> #31 c10::impl::OperatorEntry::callBoxed(std::vector<...>*) const::'lambda'(c10::DispatchTable const&)::operator()(c10::DispatchTable const&) const caffe2/aten/src/ATen/core/dispatch/OperatorEntry.h:74 (libcaffe2_libtorch.so+0x15d59f8)
> #32 std::result_of<...>::type c10::LeftRight<...>::read<...>(c10::impl::OperatorEntry::callBoxed(std::vector<...>*) const::'lambda'(c10::DispatchTable const&)&&) const caffe2/c10/util/LeftRight.h:74 (libcaffe2_libtorch.so+0x15d591d)
> #33 c10::impl::OperatorEntry::callBoxed(std::vector<...>*) const caffe2/aten/src/ATen/core/dispatch/OperatorEntry.h:73 (libcaffe2_libtorch.so+0x15d55aa)
> #34 c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<...>*) const caffe2/aten/src/ATen/core/dispatch/Dispatcher.h:181 (libcaffe2_libtorch.so+0x15d4fec)
> #35 torch::jit::mobile::InterpreterState::run(std::vector<...>&) caffe2/torch/csrc/jit/mobile/interpreter.cpp:40 (libcaffe2_libtorch.so+0x15d436d)
> #36 torch::jit::mobile::Function::run(std::vector<...>&) const caffe2/torch/csrc/jit/mobile/function.cpp:89 (libcaffe2_libtorch.so+0x14e1906)
> #37 torch::jit::mobile::Module::run_method(std::__cxx11::basic_string<...> const&, std::vector<...>&) caffe2/torch/csrc/jit/mobile/module.cpp:25 (libcaffe2_libtorch.so+0x167695f)
> #38 torch::jit::testLiteInterpreterConv() caffe2/test/cpp/jit/test_lite_interpreter.cpp:67 (libcaffe2_test_cpp_jit_test_lib.so+0x1d5527)
> #39 torch::jit::JitTest_LiteInterpreterConv_Test::TestBody() caffe2/test/cpp/jit/gtest.cpp:12 (jit+0x58f24):29 (libgtest.so+0x3a3de)
> #41 main caffe2/test/cpp/common/main.cpp:31 (libcaffe2_test_cpp_common_main.so+0x2f5b)
>
> Thread T15 (tid=3219390, running) created by main thread at:
> #0 pthread_create <...> (jit+0x90b56):13 (libgomp.so.1+0x17bd6) (libcaffe2_aten_ATen-cpu.so+0x5f5326) (libcaffe2_aten_ATen-cpu.so+0x5f51ec) (libcaffe2_aten_ATen-cpu.so+0x5ec131) (libcaffe2_aten_ATen-cpu.so+0x5e2155) (libcaffe2_aten_ATen-cpu.so+0x5dfb4a) (libcaffe2_aten_ATen-cpu.so+0x5c7539)
> #8 at::native::_mkldnn_conv2d(ideep::tensor const&, ideep::tensor const&, c10::optional<...> const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) caffe2/aten/src/ATen/native/mkldnn/Conv.cpp:89 (libcaffe2_aten_ATen-cpu.so+0x5c5fe6)
> #9 at::native::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) caffe2/aten/src/ATen/native/mkldnn/Conv.cpp:136 (libcaffe2_aten_ATen-cpu.so+0x5c7d44)
> #10 at::TypeDefault::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/gen_aten=TypeDefault.cpp/TypeDefault.cpp:2683 (libcaffe2_aten_ATen-cpu.so+0x1d4db7e)
> #11 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) const caffe2/aten/src/ATen/core/ATenDispatch.h:242 (libcaffe2_libtorch.so+0x750661)
> #12 at::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:5702 (libcaffe2_libtorch.so+0x750080)
> #13 torch::autograd::VariableType::(anonymous namespace)::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long)::$_70::operator()() const buck-out/dev/gen/caffe2/generate-code=VariableType_0.cpp/VariableType_0.cpp:7731 (libcaffe2_libtorch.so+0x74ea91)
> #14 torch::autograd::VariableType::(anonymous namespace)::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/generate-code=VariableType_0.cpp/VariableType_0.cpp:7729 (libcaffe2_libtorch.so+0x74d161)
> #15 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) const caffe2/aten/src/ATen/core/ATenDispatch.h:219 (libcaffe2_aten_ATen-cpu.so+0x624c94)
> #16 at::mkldnn_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, long) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:5702 (libcaffe2_aten_ATen-cpu.so+0x60d9a6)
> #17 at::native::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) caffe2/aten/src/ATen/native/Convolution.cpp:640 (libcaffe2_aten_ATen-cpu.so+0x607491)
> #18 at::TypeDefault::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/aten/gen_aten=TypeDefault.cpp/TypeDefault.cpp:1118 (libcaffe2_aten_ATen-cpu.so+0x1d35dab)
> #19 torch::autograd::VariableType::(anonymous namespace)::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/generate-code=VariableType_1.cpp/VariableType_1.cpp:696 (libcaffe2_libtorch.so+0x8de2fb)
> #20 at::Tensor at::ATenOpTable::callUnboxed<...>(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) const caffe2/aten/src/ATen/core/ATenDispatch.h:219 (libcaffe2_libtorch.so+0x1771e38)
> #21 at::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<...>, c10::ArrayRef<...>, c10::ArrayRef<...>, bool, c10::ArrayRef<...>, long, bool, bool, bool) buck-out/dev/gen/caffe2/aten/generated-aten-headers-cpu#header-mode-symlink-tree-with-header-map,headers/ATen/Functions.h:2858 (libcaffe2_libtorch.so+0x177189f)
> #22 torch::jit::(anonymous namespace)::$_16::operator()(std::vector<...>&) const buck-out/dev/gen/caffe2/generate-code=register_aten_ops_0.cpp/register_aten_ops_0.cpp:368 (libcaffe2_libtorch.so+0x1771468)
> #23 torch::jit::(anonymous namespace)::$_16::__invoke(std::vector<...>&) buck-out/dev/gen/caffe2/generate-code=register_aten_ops_0.cpp/register_aten_ops_0.cpp:363 (libcaffe2_libtorch.so+0x1771028) (libcaffe2_libtorch.so+0x176a123) (libcaffe2_libtorch.so+0x137c74a)
> #26 torch::jit::InterpreterStateImpl::runImpl(std::vector<...>&) caffe2/torch/csrc/jit/interpreter.cpp:853 (libcaffe2_libtorch.so+0x15c68ee)
> #27 torch::jit::InterpreterStateImpl::run(std::vector<...>&) caffe2/torch/csrc/jit/interpreter.cpp:1090 (libcaffe2_libtorch.so+0x15aa1c2)
> #28 torch::jit::InterpreterState::run(std::vector<...>&) caffe2/torch/csrc/jit/interpreter.cpp:1148 (libcaffe2_libtorch.so+0x15aa100)
> #29 torch::jit::GraphExecutorImplBase::run(std::vector<...>&) caffe2/torch/csrc/jit/graph_executor.cpp:480 (libcaffe2_libtorch.so+0x14f84a7)
> #30 torch::jit::GraphExecutor::run(std::vector<...>&) caffe2/torch/csrc/jit/graph_executor.cpp:630 (libcaffe2_libtorch.so+0x14f8c80)
> #31 torch::jit::Function::run(std::vector<...>&) caffe2/torch/csrc/jit/function.cpp:33 (libcaffe2_libtorch.so+0x14d5aba)
> #32 torch::jit::Function::operator()(std::vector<...>, std::unordered_map<...> const&) caffe2/torch/csrc/jit/function.cpp:44 (libcaffe2_libtorch.so+0x14d5d76)
> #33 torch::jit::script::Method::operator()(std::vector<...>, std::unordered_map<...> const&) caffe2/torch/csrc/jit/script/module.cpp:250 (libcaffe2_libtorch.so+0x1660705)
> #34 torch::jit::script::Module::forward(std::vector<...>) caffe2/torch/csrc/jit/script/module.h:139 (libcaffe2_test_cpp_jit_test_lib.so+0x1d6910)
> #35 torch::jit::testLiteInterpreterConv() caffe2/test/cpp/jit/test_lite_interpreter.cpp:59 (libcaffe2_test_cpp_jit_test_lib.so+0x1d5394)
> #36 torch::jit::JitTest_LiteInterpreterConv_Test::TestBody() caffe2/test/cpp/jit/gtest.cpp:12 (jit+0x58f24):29 (libgtest.so+0x3a3de)
> #38 main caffe2/test/cpp/common/main.cpp:31 (libcaffe2_test_cpp_common_main.so+0x2f5b)
>
> SUMMARY: ThreadSanitizer: data race (/data/users/myuan/fbsource/fbcode/buck-out/dev/gen/caffe2/test/cpp/jit/jit+0xb669b) in memset
```
cc @suo @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh | oncall: jit,triaged,module: mkldnn | low | Critical |
503,710,824 | electron | BrowserWindow.isFocused() can return true when it's not (Windows only) | ### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:** `6.0.11`
* **Operating System:** `Windows 7` & `Windows 10 (1803)`
* **Last Known Working Electron version:** Unknown
### Expected Behavior
Window should be focused and allow typing
### Actual Behavior
Window reports being focused, but it's not.
Strangely, if I trigger the exact same method via a global hotkey (this is how I created the demo video below), the window is properly focused.
Another indication that focus truly isn't being transferred to the window is that Windows Aero darkens the glass effect on the focused window, and whatever previous window was focused does not lighten as it should when losing focus.
### To Reproduce
Unknown. I have other windows in my application that work perfectly using the same, very basic:
```js
win.show();
```
I also tried
```js
win.showInactive();
win.focus();
```
### Screenshots
https://recordit.co/ydntE6o9dt
You can see in the video the window reports being focused and the `input` as the `document.activeElement`. The input appears to be focused (it has the blue outline and blinking caret), but it's not actually focused; at least, it won't let me type into it.
When I move focus to another window, it almost always correctly detects that it does not have focus. (It failed once to report lost focus when I moved focus to Sublime Text.)
When I re-focus the window with my mouse, rather than programmatically, only then does it actually become focused.
### Additional Information
I tested Mac 10.13 and Linux Mint 19 and the same code works as intended there. | platform/windows,bug :beetle:,8-x-y,6-1-x,11-x-y | medium | Critical |
503,712,792 | pytorch | [JIT] List Comprehensions With Ifs not Supported | ## 🐛 Bug
## To Reproduce
```
def func():
    x = {"1": 1, "2": 2}
    return [v for k, v in x.items() if k in ["1"]]
torch.jit.script(func)
```
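Until the `if` clause in comprehensions is supported, the same logic can be written as an explicit loop. The sketch below is plain Python (no torch import); the claim that the explicit-loop form scripts cleanly is my assumption, not something stated in the report:

```python
def func_loop():
    x = {"1": 1, "2": 2}
    out = []
    # Explicit-loop form of: [v for k, v in x.items() if k in ["1"]]
    for k, v in x.items():
        if k in ["1"]:
            out.append(v)
    return out

# In a TorchScript context one would then call torch.jit.script(func_loop);
# here the function is exercised as plain Python.
print(func_loop())  # [1]
```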
cc @suo | oncall: jit,triaged | low | Critical |
503,715,506 | pytorch | [JIT] ndimension not supported | ## 🐛 Bug
```
def fn(x):
    return x.ndimension()

torch.jit.script(fn)
```
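A possible workaround (my assumption, not from the report) is `Tensor.dim()`, which returns the same value as `ndimension()` and is part of the scripted operator set. The sketch below uses a stand-in object in place of a tensor so it runs without torch installed:

```python
def fn(x):
    # dim() returns the same value as ndimension(); the assumption here is
    # that torch.jit.script accepts dim() even though ndimension() fails.
    return x.dim()

# Stand-in for a real tensor, so this sketch runs without torch installed.
class FakeTensor:
    def dim(self):
        return 3

print(fn(FakeTensor()))  # 3
```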
cc @suo | oncall: jit,triaged | low | Critical |
503,720,803 | youtube-dl | www.thesportschronicle.com (Blue Billywig Video player) |
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.09.28**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
https://www.thesportschronicle.com/rugby/nigel-owens-rugby-world-cup/
## Description
Plays OK in Firefox, Safari, Chrome. youtube-dl does not currently support it
| site-support-request | low | Critical |
503,741,497 | flutter | SkSurfaces from Embedder backing stores may not be collected on engine shutdown. | Caught by Leak Sanitizer in the `embedder_unittests` target.
```
Direct leak of 48 byte(s) in 3 object(s) allocated from:
#0 0x113a5b522 in wrap__Znwm (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x66522)
#1 0x10600595d in std::__1::__unique_if<MakeSkSurfaceFromBackingStore(GrContext*, FlutterBackingStoreConfig const&, FlutterSoftwareBackingStore const*)::Captures>::__unique_single std::__1::make_unique<MakeSkSurfaceFromBackingStore(GrContext*, FlutterBackingStoreConfig const&, FlutterSoftwareBackingStore const*)::Captures>() memory:3118
#2 0x105ffed36 in MakeSkSurfaceFromBackingStore(GrContext*, FlutterBackingStoreConfig const&, FlutterSoftwareBackingStore const*) embedder.cc:377
#3 0x105ffd043 in CreateEmbedderRenderTarget(FlutterCompositor const*, FlutterBackingStoreConfig const&, GrContext*) embedder.cc:452
#4 0x105ffc499 in auto InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2::operator()<FlutterBackingStoreConfig>(GrContext*, FlutterBackingStoreConfig const&) const embedder.cc:492
#5 0x105ffc3d9 in decltype(std::__1::forward<InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2&>(fp)(std::__1::forward<GrContext*>(fp0), std::__1::forward<FlutterBackingStoreConfig const&>(fp0))) std::__1::__invoke<InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2&, GrContext*, FlutterBackingStoreConfig const&>(InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2&&&, GrContext*&&, FlutterBackingStoreConfig const&&&) type_traits:4350
#6 0x105ffc266 in std::__1::unique_ptr<flutter::EmbedderRenderTarget, std::__1::default_delete<flutter::EmbedderRenderTarget> > std::__1::__invoke_void_return_wrapper<std::__1::unique_ptr<flutter::EmbedderRenderTarget, std::__1::default_delete<flutter::EmbedderRenderTarget> > >::__call<InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2&, GrContext*, FlutterBackingStoreConfig const&>(InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2&&&, GrContext*&&, FlutterBackingStoreConfig const&&&) __functional_base:318
#7 0x105ff89fa in std::__1::__function::__func<InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2, std::__1::allocator<InferExternalViewEmbedderFromArgs(FlutterCompositor const*)::$_2>, std::__1::unique_ptr<flutter::EmbedderRenderTarget, std::__1::default_delete<flutter::EmbedderRenderTarget> > (GrContext*, FlutterBackingStoreConfig const&)>::operator()(GrContext*&&, FlutterBackingStoreConfig const&) functional:1572
#8 0x1060ba6c5 in std::__1::function<std::__1::unique_ptr<flutter::EmbedderRenderTarget, std::__1::default_delete<flutter::EmbedderRenderTarget> > (GrContext*, FlutterBackingStoreConfig const&)>::operator()(GrContext*, FlutterBackingStoreConfig const&) const functional:1923
#9 0x1060c0690 in flutter::EmbedderExternalViewEmbedder::SubmitFrame(GrContext*) embedder_external_view_embedder.cc:286
#10 0x109673356 in flutter::Rasterizer::DrawToSurface(flutter::LayerTree&) rasterizer.cc:263
#11 0x109675839 in flutter::Rasterizer::DoDraw(std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >) rasterizer.cc:166
#12 0x10968201f in flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0::operator()(std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >) const rasterizer.cc:119
#13 0x109681e0f in decltype(std::__1::forward<flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0&>(fp)(std::__1::forward<std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> > >(fp0))) std::__1::__invoke<flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0&, std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> > >(flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0&&&, std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >&&) type_traits:4350
#14 0x109681c6f in void std::__1::__invoke_void_return_wrapper<void>::__call<flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0&, std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> > >(flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0&&&, std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >&&) __functional_base:349
#15 0x10967dc06 in std::__1::__function::__func<flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0, std::__1::allocator<flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_0>, void (std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >)>::operator()(std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >&&) functional:1572
#16 0x109682c11 in std::__1::function<void (std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >)>::operator()(std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >) const functional:1923
#17 0x109674ada in flutter::Pipeline<flutter::LayerTree>::Consume(std::__1::function<void (std::__1::unique_ptr<flutter::LayerTree, std::__1::default_delete<flutter::LayerTree> >)>) pipeline.h:153
#18 0x109673dac in flutter::Rasterizer::Draw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >) rasterizer.cc:122
#19 0x109792b0a in flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28::operator()() const shell.cc:886
#20 0x10979295a in decltype(std::__1::forward<flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28&>(fp)()) std::__1::__invoke<flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28&>(flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28&&&) type_traits:4350
#21 0x1097928aa in void std::__1::__invoke_void_return_wrapper<void>::__call<flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28&>(flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28&&&) __functional_base:349
#22 0x10978edf1 in std::__1::__function::__func<flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28, std::__1::allocator<flutter::Shell::OnAnimatorDraw(fml::RefPtr<flutter::Pipeline<flutter::LayerTree> >)::$_28>, void ()>::operator()() functional:1572
#23 0x105e1303b in std::__1::function<void ()>::operator()() const functional:1923
#24 0x107593f03 in fml::MessageLoopImpl::FlushTasks(fml::FlushType) message_loop_impl.cc:121
#25 0x107593a17 in fml::MessageLoopImpl::RunExpiredTasksNow() message_loop_impl.cc:131
#26 0x1075ded22 in fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) message_loop_darwin.mm:75
#27 0x7fff5095305f in __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ (CoreFoundation:x86_64h+0x5a05f)
#28 0x7fff50952c0b in __CFRunLoopDoTimer (CoreFoundation:x86_64h+0x59c0b)
#29 0x7fff50952751 in __CFRunLoopDoTimers (CoreFoundation:x86_64h+0x59751)
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,743,631 | flutter | Message loops in TLS slots may not be collected. | As written, these should be collected when the thread dies; however, leak sanitizer seems to indicate that they are not. Possible false positive.
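The intended lifecycle (the TLS slot's destructor reclaims the per-thread loop when its owning thread exits) can be sketched by analogy with Python's `threading.local`; all names below are illustrative, not the fml API:

```python
import gc
import threading
import weakref

tls = threading.local()

class MessageLoop:
    """Stand-in for the per-thread message loop object."""
    pass

refs = {}

def worker():
    loop = MessageLoop()
    tls.loop = loop                      # stored in this thread's TLS slot
    refs["loop"] = weakref.ref(loop)     # observe its lifetime from outside

t = threading.Thread(target=worker)
t.start()
t.join()
gc.collect()

# Once the thread has died, nothing should keep the loop alive.
print(refs["loop"]() is None)  # True
```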
```
Indirect leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x11a77f522 in wrap__Znwm (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x66522)
#1 0x10e2b63cc in fml::RefPtr<fml::TaskRunner> fml::internal::MakeRefCountedHelper<fml::TaskRunner>::MakeRefCounted<fml::RefPtr<fml::MessageLoopImpl>&>(fml::RefPtr<fml::MessageLoopImpl>&&&) ref_ptr_internal.h:31
#2 0x10e2b54fc in fml::RefPtr<fml::TaskRunner> fml::MakeRefCounted<fml::TaskRunner, fml::RefPtr<fml::MessageLoopImpl>&>(fml::RefPtr<fml::MessageLoopImpl>&&&) ref_ptr.h:231
#3 0x10e2b523a in fml::MessageLoop::MessageLoop() message_loop.cc:41
#4 0x10e2b5032 in fml::MessageLoop::MessageLoop() message_loop.cc:41
#5 0x10e2b4f76 in fml::MessageLoop::EnsureInitializedForCurrentThread() message_loop.cc:32
#6 0x111ece7a3 in flutter::testing::ThreadTest::SetUp() thread_test.cc:14
#7 0x10cb23365 in flutter::testing::EmbedderTest::SetUp() embedder_test.cc:30
#8 0x1121d7ed2 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2447
#9 0x1121837e9 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2502
#10 0x1121831db in testing::Test::Run() gtest.cc:2517
#11 0x1121854c9 in testing::TestInfo::Run() gtest.cc:2698
#12 0x112187ab9 in testing::TestSuite::Run() gtest.cc:2828
#13 0x1121a0861 in testing::internal::UnitTestImpl::RunAllTests() gtest.cc:5285
#14 0x1121e4fc2 in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2447
#15 0x11219fb40 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2502
#16 0x11219f68c in testing::UnitTest::Run() gtest.cc:4873
#17 0x111ece682 in RUN_ALL_TESTS() gtest.h:2453
#18 0x111ece5e9 in main run_all_unittests.cc:9
#19 0x7fff7c8aa3d4 in start (libdyld.dylib:x86_64+0x163d4)
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,744,124 | flutter | CFRunLoopSource in MessageLoopDarwin not being collected. | A source added to the CFRunLoop is indicated as being leaked by leak sanitizer. The message loop itself is being released though. Possible false positive.
```
Indirect leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x11a772393 in wrap_malloc (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x59393)
#1 0x7fff7c8f3b10 in _Block_object_assign (libsystem_blocks.dylib:x86_64+0xb10)
#2 0x7fff7c8f39f2 in _Block_copy (libsystem_blocks.dylib:x86_64+0x9f2)
#3 0x7fff7c8f3a94 in _Block_object_assign (libsystem_blocks.dylib:x86_64+0xa94)
#4 0x7fff7c8f39f2 in _Block_copy (libsystem_blocks.dylib:x86_64+0x9f2)
#5 0x7fff7c85c5cf in _dispatch_Block_copy (libdispatch.dylib:x86_64+0x25cf)
#6 0x7fff7c86e20d in _dispatch_source_set_handler (libdispatch.dylib:x86_64+0x1420d)
#7 0x11a771909 in wrap_dispatch_source_set_event_handler (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x58909)
#8 0x7fff50931b7c in __CFRunLoopFindMode (CoreFoundation:x86_64h+0x38b7c)
#9 0x7fff5093188b in __CFRunLoopCreate (CoreFoundation:x86_64h+0x3888b)
#10 0x7fff509315e6 in _CFRunLoopGet0 (CoreFoundation:x86_64h+0x385e6)
#11 0x10e303330 in fml::MessageLoopDarwin::MessageLoopDarwin() message_loop_darwin.mm:17
#12 0x10e303e92 in fml::MessageLoopDarwin::MessageLoopDarwin() message_loop_darwin.mm:17
#13 0x10e2b9459 in fml::RefPtr<fml::MessageLoopDarwin> fml::internal::MakeRefCountedHelper<fml::MessageLoopDarwin>::MakeRefCounted<>() ref_ptr_internal.h:31
#14 0x10e2b6d11 in fml::RefPtr<fml::MessageLoopDarwin> fml::MakeRefCounted<fml::MessageLoopDarwin>() ref_ptr.h:231
#15 0x10e2b6c61 in fml::MessageLoopImpl::Create() message_loop_impl.cc:30
#16 0x10e2b5226 in fml::MessageLoop::MessageLoop() message_loop.cc:40
#17 0x10e2b5032 in fml::MessageLoop::MessageLoop() message_loop.cc:41
#18 0x10e2b4f76 in fml::MessageLoop::EnsureInitializedForCurrentThread() message_loop.cc:32
#19 0x111ece7a3 in flutter::testing::ThreadTest::SetUp() thread_test.cc:14
#20 0x10cb23365 in flutter::testing::EmbedderTest::SetUp() embedder_test.cc:30
#21 0x1121d7ed2 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2447
#22 0x1121837e9 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2502
#23 0x1121831db in testing::Test::Run() gtest.cc:2517
#24 0x1121854c9 in testing::TestInfo::Run() gtest.cc:2698
#25 0x112187ab9 in testing::TestSuite::Run() gtest.cc:2828
#26 0x1121a0861 in testing::internal::UnitTestImpl::RunAllTests() gtest.cc:5285
#27 0x1121e4fc2 in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2447
#28 0x11219fb40 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2502
#29 0x11219f68c in testing::UnitTest::Run() gtest.cc:4873
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,745,395 | flutter | Dispatched platform messages that are not handled due to isolate death are leaked. | Caught by leak sanitizer in the `embedder_unittests` harness.
```
Direct leak of 8 byte(s) in 1 object(s) allocated from:
#0 0x1169f3522 in wrap__Znwm (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x66522)
#1 0x1090252b4 in FlutterEngineRun::$_36::operator()(fml::RefPtr<flutter::PlatformMessage>) const embedder.cc:740
#2 0x1090250af in decltype(std::__1::forward<FlutterEngineRun::$_36&>(fp)(std::__1::forward<fml::RefPtr<flutter::PlatformMessage> >(fp0))) std::__1::__invoke<FlutterEngineRun::$_36&, fml::RefPtr<flutter::PlatformMessage> >(FlutterEngineRun::$_36&&&, fml::RefPtr<flutter::PlatformMessage>&&) type_traits:4350
#3 0x109024ebf in void std::__1::__invoke_void_return_wrapper<void>::__call<FlutterEngineRun::$_36&, fml::RefPtr<flutter::PlatformMessage> >(FlutterEngineRun::$_36&&&, fml::RefPtr<flutter::PlatformMessage>&&) __functional_base:349
#4 0x109020ea6 in std::__1::__function::__func<FlutterEngineRun::$_36, std::__1::allocator<FlutterEngineRun::$_36>, void (fml::RefPtr<flutter::PlatformMessage>)>::operator()(fml::RefPtr<flutter::PlatformMessage>&&) functional:1572
#5 0x1090cdb91 in std::__1::function<void (fml::RefPtr<flutter::PlatformMessage>)>::operator()(fml::RefPtr<flutter::PlatformMessage>) const functional:1923
#6 0x1090cd97d in flutter::PlatformViewEmbedder::HandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>) platform_view_embedder.cc:63
#7 0x10c744bee in flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31::operator()() const shell.cc:937
#8 0x10c7449ea in decltype(std::__1::forward<flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31&>(fp)()) std::__1::__invoke<flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31&>(flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31&&&) type_traits:4350
#9 0x10c74493a in void std::__1::__invoke_void_return_wrapper<void>::__call<flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31&>(flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31&&&) __functional_base:349
#10 0x10c740ee1 in std::__1::__function::__func<flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31, std::__1::allocator<flutter::Shell::OnEngineHandlePlatformMessage(fml::RefPtr<flutter::PlatformMessage>)::$_31>, void ()>::operator()() functional:1572
#11 0x108dab03b in std::__1::function<void ()>::operator()() const functional:1923
#12 0x10a52bf03 in fml::MessageLoopImpl::FlushTasks(fml::FlushType) message_loop_impl.cc:121
#13 0x10a52ba17 in fml::MessageLoopImpl::RunExpiredTasksNow() message_loop_impl.cc:131
#14 0x10a576d22 in fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) message_loop_darwin.mm:75
#15 0x7fff5095305f in __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ (CoreFoundation:x86_64h+0x5a05f)
#16 0x7fff50952c0b in __CFRunLoopDoTimer (CoreFoundation:x86_64h+0x59c0b)
#17 0x7fff50952751 in __CFRunLoopDoTimers (CoreFoundation:x86_64h+0x59751)
#18 0x7fff50933961 in __CFRunLoopRun (CoreFoundation:x86_64h+0x3a961)
#19 0x7fff50932ebd in CFRunLoopRunSpecific (CoreFoundation:x86_64h+0x39ebd)
#20 0x10a57740a in fml::MessageLoopDarwin::Run() message_loop_darwin.mm:46
#21 0x10a52b883 in fml::MessageLoopImpl::DoRun() message_loop_impl.cc:90
#22 0x10a52869a in fml::MessageLoop::Run() message_loop.cc:49
#23 0x10a56c95a in fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0::operator()() const thread.cc:34
#24 0x10a56c6da in decltype(std::__1::forward<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fp)()) std::__1::__invoke<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0&&) type_traits:4350
#25 0x10a56c5d2 in void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>&, std::__1::__tuple_indices<>) thread:342
#26 0x10a56bc7b in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0> >(void*) thread:352
#27 0x7fff7ca9e2ea in _pthread_body (libsystem_pthread.dylib:x86_64+0x32ea)
#28 0x7fff7caa1248 in _pthread_start (libsystem_pthread.dylib:x86_64+0x6248)
#29 0x7fff7ca9d40c in thread_start (libsystem_pthread.dylib:x86_64+0x240c)
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,745,954 | godot | Multiple debug errors when breakpoint set in loaded script using `SceneTree` | **Godot version:**
3.2.alpha1.official
**OS/device including version:**
Mac OS X Mojave
**Issue description:**
Multiple debug errors when breakpoint set in a loaded script from the main scene.
```
E 0:00:01:0044 is_refusing_new_network_connections: No network peer is assigned. Unable to get 'refuse_new_connections'.
<C++ Error> Condition ' !network_peer.is_valid() ' is true. returned: false
<C++ Source> core/io/multiplayer_api.cpp:822 @ is_refusing_new_network_connections()
<Stack Trace> somescript.gd:4 @ _init()
TestScript.gd:5 @ _ready()
```
**Main Scene**

**Loaded script**

**Steps to reproduce:**
Create a Main Control scene and add the following code:
```
extends Control
func _ready():
    var MySomescript = load("somescript.gd")
    var myInst = MySomescript.new()
    pass
```
Then create the script `somescript.gd `and add the following code:
```
extends SceneTree
var somevar = 20
var foo1='abcd'
func _init():
    print(foo1)
```
Run without breakpoints, no `is_refusing_new_network_connections` errors appear.
Set a breakpoint on the `var somevar = 20` line. Run the project and notice the multiple debugger errors.
| bug,topic:editor,confirmed,topic:network | low | Critical |
503,747,388 | flutter | Service ID platform message is leaked in case the isolate dies before the message is received by that isolate. | Caught by leak sanitizer in the `embedder_unittests` harness.
```
Indirect leak of 75 byte(s) in 3 object(s) allocated from:
#0 0x11574e522 in wrap__Znwm (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x66522)
#1 0x1078e52d6 in std::__1::__libcpp_allocate(unsigned long, unsigned long) new:259
#2 0x1079308a5 in std::__1::allocator<unsigned char>::allocate(unsigned long, void const*) memory:1800
#3 0x10793001e in std::__1::allocator_traits<std::__1::allocator<unsigned char> >::allocate(std::__1::allocator<unsigned char>&, unsigned long) memory:1549
#4 0x10792f668 in std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >::__vallocate(unsigned long) vector:972
#5 0x10b172cca in std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >::vector<std::__1::__wrap_iter<char*> >(std::__1::__wrap_iter<char*>, std::__1::enable_if<(__is_forward_iterator<std::__1::__wrap_iter<char*> >::value) && (is_constructible<unsigned char, std::__1::iterator_traits<std::__1::__wrap_iter<char*> >::reference>::value), std::__1::__wrap_iter<char*> >::type) vector:1212
#6 0x10b16b33e in std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >::vector<std::__1::__wrap_iter<char*> >(std::__1::__wrap_iter<char*>, std::__1::enable_if<(__is_forward_iterator<std::__1::__wrap_iter<char*> >::value) && (is_constructible<unsigned char, std::__1::iterator_traits<std::__1::__wrap_iter<char*> >::reference>::value), std::__1::__wrap_iter<char*> >::type) vector:1205
#7 0x10b169ec6 in flutter::Engine::Run(flutter::RunConfiguration) engine.cc:160
#8 0x10b272823 in flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6::operator()() shell.cc:408
#9 0x10b2721ce in auto fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6>::operator()<>() const make_copyable.h:24
#10 0x10b27212a in decltype(std::__1::forward<fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6>&>(fp)()) std::__1::__invoke<fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6>&>(fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6>&&&) type_traits:4350
#11 0x10b27207a in void std::__1::__invoke_void_return_wrapper<void>::__call<fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6>&>(fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6>&&&) __functional_base:349
#12 0x10b26e381 in std::__1::__function::__func<fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6>, std::__1::allocator<fml::internal::CopyableLambda<flutter::Shell::RunEngine(flutter::RunConfiguration, std::__1::function<void (flutter::Engine::RunStatus)>)::$_6> >, void ()>::operator()() functional:1572
#13 0x10795e03b in std::__1::function<void ()>::operator()() const functional:1923
#14 0x1090def03 in fml::MessageLoopImpl::FlushTasks(fml::FlushType) message_loop_impl.cc:121
#15 0x1090dea17 in fml::MessageLoopImpl::RunExpiredTasksNow() message_loop_impl.cc:131
#16 0x109129d22 in fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) message_loop_darwin.mm:75
#17 0x7fff5095305f in __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ (CoreFoundation:x86_64h+0x5a05f)
#18 0x7fff50952c0b in __CFRunLoopDoTimer (CoreFoundation:x86_64h+0x59c0b)
#19 0x7fff50952751 in __CFRunLoopDoTimers (CoreFoundation:x86_64h+0x59751)
#20 0x7fff50933961 in __CFRunLoopRun (CoreFoundation:x86_64h+0x3a961)
#21 0x7fff50932ebd in CFRunLoopRunSpecific (CoreFoundation:x86_64h+0x39ebd)
#22 0x10912a40a in fml::MessageLoopDarwin::Run() message_loop_darwin.mm:46
#23 0x1090de883 in fml::MessageLoopImpl::DoRun() message_loop_impl.cc:90
#24 0x1090db69a in fml::MessageLoop::Run() message_loop.cc:49
#25 0x10911f95a in fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0::operator()() const thread.cc:34
#26 0x10911f6da in decltype(std::__1::forward<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fp)()) std::__1::__invoke<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0&&) type_traits:4350
#27 0x10911f5d2 in void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>&, std::__1::__tuple_indices<>) thread:342
#28 0x10911ec7b in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0> >(void*) thread:352
#29 0x7fff7ca9e2ea in _pthread_body (libsystem_pthread.dylib:x86_64+0x32ea)
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,748,225 | flutter | Typography-related objects are leaked on shell launch. | Caught by leak sanitizer in the `shell_unittests` harness.
Leak of the CoreText object.
```
Direct leak of 8192 byte(s) in 8 object(s) allocated from:
#0 0x1188a4393 in wrap_malloc (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x59393)
#1 0x7fff4efd4616 in SetUpProgramPtrs(fsg_SplineKey*) (libTrueTypeScaler.dylib:x86_64+0x4616)
#2 0x7fff4efd623b in TTGetStrikeSpecs (libTrueTypeScaler.dylib:x86_64+0x623b)
#3 0x7fff4ee438b4 in TConcreteFontScaler::GetFontInfo(FPFontInfo*) const (libFontParser.dylib:x86_64+0x278b4)
#4 0x7fff4ee73db1 in TFPFont::FillFontInfo(FPFontInfo&) const (libFontParser.dylib:x86_64+0x57db1)
#5 0x7fff4ee21cb7 in TFPFont::GetFontInfo() const (libFontParser.dylib:x86_64+0x5cb7)
#6 0x7fff4ee74dd0 in FPFontGetFontInfo (libFontParser.dylib:x86_64+0x58dd0)
#7 0x7fff50d4d3df in get_font_info (CoreGraphics:x86_64h+0xe3df)
#8 0x7fff510ff208 in get_font_info (CoreGraphics:x86_64h+0x3c0208)
#9 0x7fff50d4d39a in CGFontGetNumberOfGlyphs (CoreGraphics:x86_64h+0xe39a)
#10 0x7fff52539ee7 in TBaseFont::CopyGraphicsFont() const (CoreText:x86_64+0x13ee7)
#11 0x7fff52592ca1 in TSplicedFont::CopyGraphicsFont() const (CoreText:x86_64+0x6cca1)
#12 0x7fff52539e8f in TBaseFont::GetParserFont() const (CoreText:x86_64+0x13e8f)
#13 0x7fff5253225f in TFont::InitVariationValues() (CoreText:x86_64+0xc25f)
#14 0x7fff52531515 in TFont::FinishConstruction(CGFont*) (CoreText:x86_64+0xb515)
#15 0x7fff5252fbf5 in CTFontCreateWithFontDescriptor (CoreText:x86_64+0x9bf5)
#16 0x10ca4f92f in create_from_desc(__CTFontDescriptor const*) SkFontHost_mac.cpp:794
#17 0x10ca4f217 in SkFontStyleSet_Mac::createTypeface(int) SkFontHost_mac.cpp:2527
#18 0x10eabffbc in txt::FontCollection::CreateMinikinFontFamily(sk_sp<SkFontMgr> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) font_collection.cc:216
#19 0x10eabe090 in txt::FontCollection::FindFontFamilyInManagers(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) font_collection.cc:193
#20 0x10eabd256 in txt::FontCollection::GetMinikinFontCollectionForFamilies(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) font_collection.cc:155
#21 0x10b2b7faa in flutter::testing::ShellTest_ReloadSystemFonts_Test::TestBody() shell_unittests.cc:539
#22 0x11062d6b2 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2447
#23 0x1105d84a9 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2502
#24 0x1105d7ffc in testing::Test::Run() gtest.cc:2522
#25 0x1105da189 in testing::TestInfo::Run() gtest.cc:2698
#26 0x1105dc779 in testing::TestSuite::Run() gtest.cc:2828
#27 0x1105f5521 in testing::internal::UnitTestImpl::RunAllTests() gtest.cc:5285
#28 0x11063a7a2 in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2447
#29 0x1105f4800 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2502
```
Leak of the CoreGraphics font.
```
Indirect leak of 44 byte(s) in 1 object(s) allocated from:
#0 0x11c640393 in wrap_malloc (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x59393)
#1 0x7fff7c8f39b7 in _Block_copy (libsystem_blocks.dylib:x86_64+0x9b7)
#2 0x7fff7c8f3a94 in _Block_object_assign (libsystem_blocks.dylib:x86_64+0xa94)
#3 0x7fff7c8f39f2 in _Block_copy (libsystem_blocks.dylib:x86_64+0x9f2)
#4 0x7fff7c85c5cf in _dispatch_Block_copy (libdispatch.dylib:x86_64+0x25cf)
#5 0x7fff7c86e20d in _dispatch_source_set_handler (libdispatch.dylib:x86_64+0x1420d)
#6 0x11c63f74e in wrap_dispatch_source_set_cancel_handler (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x5874e)
#7 0x7fff51bcdd64 in connectToCoreServicesD() (CarbonCore:x86_64+0x1d64)
#8 0x7fff51bcdc73 in getStatus() (CarbonCore:x86_64+0x1c73)
#9 0x7fff51bcdbee in scCreateSystemServiceVersion (CarbonCore:x86_64+0x1bee)
#10 0x7fff51bcd963 in FileIDTreeGetCachedPort (CarbonCore:x86_64+0x1963)
#11 0x7fff51bcd7cf in FSNodeStorageGetAndLockCurrentUniverse (CarbonCore:x86_64+0x17cf)
#12 0x7fff51bcd659 in FileIDTreeGetAndLockVolumeEntryForDeviceID (CarbonCore:x86_64+0x1659)
#13 0x7fff51bcd5b6 in FSMount::FSMount(unsigned int, FSMountNumberType, int*, unsigned int const*) (CarbonCore:x86_64+0x15b6)
#14 0x7fff51bcd530 in FSMountPrepare (CarbonCore:x86_64+0x1530)
#15 0x7fff64af1a2f in MountInfoPrepare(void***, unsigned int, int, void*, unsigned int const*, __CFURL const*, __CFError**) (CoreServicesInternal:x86_64+0x5a2f)
#16 0x7fff64af1427 in parseAttributeBuffer(__CFAllocator const*, unsigned char const*, unsigned char, attrlist const*, void const*, void**, _FileAttributes*, unsigned int*) (CoreServicesInternal:x86_64+0x5427)
#17 0x7fff64af0490 in corePropertyProviderPrepareValues(__CFURL const*, __FileCache*, __CFString const* const*, void const**, long, void const*, __CFError**) (CoreServicesInternal:x86_64+0x4490)
#18 0x7fff64af00ed in prepareValuesForBitmap(__CFURL const*, __FileCache*, _FilePropertyBitmap*, __CFError**) (CoreServicesInternal:x86_64+0x40ed)
#19 0x7fff64af3a0e in _FSURLCopyResourcePropertyValuesAndFlags (CoreServicesInternal:x86_64+0x7a0e)
#20 0x7fff50944bba in _CFURLCopyResourcePropertyValuesAndFlags (CoreFoundation:x86_64h+0x4bbba)
#21 0x7fff4ee6d245 in FPPathGetCatalogValues(char const*, FInfo*, unsigned long long*, unsigned long long*) (libFontParser.dylib:x86_64+0x51245)
#22 0x7fff4ee6aed4 in TFont::CreateFontEntities(char const*, bool, bool&, short, char const*, bool) (libFontParser.dylib:x86_64+0x4eed4)
#23 0x7fff4ee6d659 in TFont::CreateFontEntitiesForFile(char const*, bool, bool, short, char const*) (libFontParser.dylib:x86_64+0x51659)
#24 0x7fff4ee1d59d in FPFontCreateFontsWithPath (libFontParser.dylib:x86_64+0x159d)
#25 0x7fff50d4c7d6 in create_private_data_array_with_path (CoreGraphics:x86_64h+0xd7d6)
#26 0x7fff50d4c4de in CGFontCreateFontsWithPath (CoreGraphics:x86_64h+0xd4de)
#27 0x7fff50d4c108 in CGFontCreateFontsWithURL (CoreGraphics:x86_64h+0xd108)
#28 0x7fff5253a432 in CreateFontsWithURL(__CFURL const*, bool) (CoreText:x86_64+0x14432)
#29 0x7fff5253a1ca in CreateFontWithFontURL(__CFURL const*, bool) (CoreText:x86_64+0x141ca)
```
Leaking the system font. This is probably a false positive:
```
Direct leak of 74 byte(s) in 1 object(s) allocated from:
#0 0x11c640717 in wrap_realloc (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x59717)
#1 0x7fff4efdffec in ResizeRawMemory(void*, unsigned int, unsigned char) (libTrueTypeScaler.dylib:x86_64+0xffec)
#2 0x7fff4eff7b6c in ScalerNewBlock(memoryContext*, int, int, void*, unsigned char, unsigned char, short*) (libTrueTypeScaler.dylib:x86_64+0x27b6c)
#3 0x7fff4efd2919 in CreateScalerVariationBlock(fsg_SplineKey*, memoryContext*, unsigned int, FontVariation const*) (libTrueTypeScaler.dylib:x86_64+0x2919)
#4 0x7fff4efe05e2 in AssureStrikeBlocks(fsg_SplineKey*, memoryContext*, cacheStrike*) (libTrueTypeScaler.dylib:x86_64+0x105e2)
#5 0x7fff4effc8af in TTGetVariationValues (libTrueTypeScaler.dylib:x86_64+0x2c8af)
#6 0x7fff4ee792d0 in TConcreteFontScaler::GetVariationValues(unsigned long) const (libFontParser.dylib:x86_64+0x5d2d0)
#7 0x7fff4ee76dc0 in FPFontGetVariationValues (libFontParser.dylib:x86_64+0x5adc0)
#8 0x7fff52532270 in TFont::InitVariationValues() (CoreText:x86_64+0xc270)
#9 0x7fff52531515 in TFont::FinishConstruction(CGFont*) (CoreText:x86_64+0xb515)
#10 0x7fff5252fbf5 in CTFontCreateWithFontDescriptor (CoreText:x86_64+0x9bf5)
#11 0x7fff775d2f33 in -[__NSSharedFontInstanceInfo _platformFont] (UIFoundation:x86_64+0x4f33)
#12 0x7fff775d2c12 in -[__NSSharedFontInstanceInfo _textTransform] (UIFoundation:x86_64+0x4c12)
#13 0x7fff775d2be0 in -[__NSSharedFontInstanceInfo _matrix] (UIFoundation:x86_64+0x4be0)
#14 0x7fff775d2b5c in -[NSFont initWithInstanceInfo:renderingMode:] (UIFoundation:x86_64+0x4b5c)
#15 0x7fff775d29b8 in -[__NSSharedFontInstanceInfo fontInstanceForRenderingMode:] (UIFoundation:x86_64+0x49b8)
#16 0x7fff775d2732 in -[__NSFontTypefaceInfo fontInstanceForFontDescriptor:size:affineTransform:renderingMode:] (UIFoundation:x86_64+0x4732)
#17 0x7fff775d178f in __NSGetMetaFontInstance (UIFoundation:x86_64+0x378f)
#18 0x7fff7760089f in +[NSFont systemFontOfSize:] (UIFoundation:x86_64+0x3289f)
#19 0x112959357 in txt::GetDefaultFontFamily() platform_mac.mm:21
#20 0x11285f232 in txt::FontCollection::GetMinikinFontCollectionForFamilies(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) font_collection.cc:153
#21 0x10f059faa in flutter::testing::ShellTest_ReloadSystemFonts_Test::TestBody() shell_unittests.cc:539
#22 0x1143cf6b2 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2447
#23 0x11437a4a9 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2502
#24 0x114379ffc in testing::Test::Run() gtest.cc:2522
#25 0x11437c189 in testing::TestInfo::Run() gtest.cc:2698
#26 0x11437e779 in testing::TestSuite::Run() gtest.cc:2828
#27 0x114397521 in testing::internal::UnitTestImpl::RunAllTests() gtest.cc:5285
#28 0x1143dc7a2 in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2447
#29 0x114396800 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2502
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Critical |
503,758,833 | flutter | Dart isolate group create callback called with incorrect signature. | Caught by undefined behavior sanitizer in the `shell_unittests` harness:
```
../../third_party/dart/runtime/lib/isolate.cc:164:44: runtime error: call to function flutter::DartIsolate::DartIsolateGroupCreateCallback(char const*, char const*, char const*, char const*, Dart_IsolateFlags*, std::__1::shared_ptr<flutter::DartIsolate>*, char**) through pointer to incorrect function type '_Dart_Isolate *(*)(const char *, const char *, const char *, const char *, Dart_IsolateFlags *, void *, char **)'
dart_isolate.cc:647: note: flutter::DartIsolate::DartIsolateGroupCreateCallback(char const*, char const*, char const*, char const*, Dart_IsolateFlags*, std::__1::shared_ptr<flutter::DartIsolate>*, char**) defined here
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Critical |
503,759,376 | terminal | Terminal touch screen scrolling stops immediately once you lift your finger off the screen (touch scrolling is not inertial) |
### Environment
_Windows build number:_ Windows 10 version 1903 [[Version 10.0.18362.388](https://support.microsoft.com/en-us/help/4524147)]
_Windows Terminal version:_ [0.5.2762.0](https://github.com/microsoft/terminal/releases/tag/v0.5.2762.0)
You need a touch screen monitor or a device with a built-in touch screen.
### Reproduction steps
1. Start Terminal and open a cmd.exe tab.
1. Run this command: `dir /a C:\Windows\System32`
1. After the command completes, try using your finger on the touch screen to scroll through the output, using quick, long flicks of the finger.
### Expected behavior
When you lift your finger, the text continues to scroll in the same direction for a short period, eventually slowing to a stop. This is what happens when you try the above steps in the original console window, or when you scroll a webpage using touch in most web browsers.
### Actual behavior
When you lift your finger, the text stops scrolling immediately. | Help Wanted,Area-TerminalControl,Product-Terminal,Issue-Task,Priority-3 | low | Major |
503,762,215 | flutter | Data race on animator begin frame when the engine is shutting down. | Caught by thread sanitizer in `embedder_unittest` on shell shutdown.
```
WARNING: ThreadSanitizer: data race (pid=57738)
Write of size 8 at 0x7b6c0002eb58 by thread T13 (mutexes: write M396035296755253640):
#0 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::release() memory:2629 (embedder_unittests:x86_64+0x10211cc16)
#1 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::unique_ptr(std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >&&) memory:2481 (embedder_unittests:x86_64+0x10211cb4d)
#2 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::unique_ptr(std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >&&) memory:2481 (embedder_unittests:x86_64+0x1020d8cf6)
#3 flutter::Shell::~Shell() shell.cc:324 (embedder_unittests:x86_64+0x1020d8719)
#4 flutter::Shell::~Shell() shell.cc:314 (embedder_unittests:x86_64+0x1020d9726)
#5 std::__1::default_delete<flutter::Shell>::operator()(flutter::Shell*) const memory:2325 (embedder_unittests:x86_64+0x100226d94)
#6 std::__1::unique_ptr<flutter::Shell, std::__1::default_delete<flutter::Shell> >::reset(flutter::Shell*) memory:2638 (embedder_unittests:x86_64+0x100226c2b)
#7 std::__1::unique_ptr<flutter::Shell, std::__1::default_delete<flutter::Shell> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x100226b4a)
#8 std::__1::unique_ptr<flutter::Shell, std::__1::default_delete<flutter::Shell> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x100225056)
#9 flutter::EmbedderEngine::~EmbedderEngine() embedder_engine.cc:34 (embedder_unittests:x86_64+0x100224fd4)
#10 flutter::EmbedderEngine::~EmbedderEngine() embedder_engine.cc:34 (embedder_unittests:x86_64+0x1002250b6)
#11 FlutterEngineShutdown embedder.cc:907 (embedder_unittests:x86_64+0x10019b594)
#12 flutter::testing::UniqueEngineTraits::Free(_FlutterEngine*&) embedder_config_builder.h:24 (embedder_unittests:x86_64+0x10001e101)
#13 fml::UniqueObject<_FlutterEngine*, flutter::testing::UniqueEngineTraits>::FreeIfNecessary() unique_object.h:101 (embedder_unittests:x86_64+0x10001e016)
#14 fml::UniqueObject<_FlutterEngine*, flutter::testing::UniqueEngineTraits>::reset(_FlutterEngine* const&) unique_object.h:65 (embedder_unittests:x86_64+0x100077c89)
#15 flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0::operator()() embedder_unittests.cc:218 (embedder_unittests:x86_64+0x1000ee7bc)
#16 auto fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0>::operator()<>() const make_copyable.h:24 (embedder_unittests:x86_64+0x1000ee5d2)
#17 decltype(std::__1::forward<fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0>&>(fp)()) std::__1::__invoke<fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0>&>(fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0>&&&) type_traits:4350 (embedder_unittests:x86_64+0x1000ee4ee)
#18 void std::__1::__invoke_void_return_wrapper<void>::__call<fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0>&>(fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0>&&&) __functional_base:349 (embedder_unittests:x86_64+0x1000ee40e)
#19 std::__1::__function::__func<fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0>, std::__1::allocator<fml::internal::CopyableLambda<flutter::testing::EmbedderTest_CanSpecifyCustomTaskRunner_Test::TestBody()::$_0> >, void ()>::operator()() functional:1572 (embedder_unittests:x86_64+0x1000eb642)
#20 std::__1::function<void ()>::operator()() const functional:1923 (embedder_unittests:x86_64+0x10005c52c)
#21 fml::MessageLoopImpl::FlushTasks(fml::FlushType) message_loop_impl.cc:121 (embedder_unittests:x86_64+0x100e732ea)
#22 fml::MessageLoopImpl::RunExpiredTasksNow() message_loop_impl.cc:131 (embedder_unittests:x86_64+0x100e730db)
#23 fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) message_loop_darwin.mm:75 (embedder_unittests:x86_64+0x100ea412b)
#24 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ <null>:2125280 (CoreFoundation:x86_64h+0x5a05f)
#25 fml::MessageLoopImpl::DoRun() message_loop_impl.cc:90 (embedder_unittests:x86_64+0x100e72f87)
#26 fml::MessageLoop::Run() message_loop.cc:49 (embedder_unittests:x86_64+0x100e712ce)
#27 fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0::operator()() const thread.cc:34 (embedder_unittests:x86_64+0x100e9ce1e)
#28 decltype(std::__1::forward<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fp)()) std::__1::__invoke<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0&&) type_traits:4350 (embedder_unittests:x86_64+0x100e9cc5e)
#29 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>&, std::__1::__tuple_indices<>) thread:342 (embedder_unittests:x86_64+0x100e9cb16)
#30 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0> >(void*) thread:352 (embedder_unittests:x86_64+0x100e9c1e4)
Previous read of size 8 at 0x7b6c0002eb58 by thread T14:
#0 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::operator bool() const memory:2623 (embedder_unittests:x86_64+0x1020da782)
#1 flutter::Shell::OnAnimatorBeginFrame(fml::TimePoint) shell.cc:861 (embedder_unittests:x86_64+0x1020df958)
#2 non-virtual thunk to flutter::Shell::OnAnimatorBeginFrame(fml::TimePoint) shell.cc (embedder_unittests:x86_64+0x1020dfa3a)
#3 flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint) animator.cc:140 (embedder_unittests:x86_64+0x10204f655)
#4 flutter::Animator::AwaitVSync()::$_3::operator()(fml::TimePoint, fml::TimePoint) const animator.cc:239 (embedder_unittests:x86_64+0x10207a666)
#5 decltype(std::__1::forward<flutter::Animator::AwaitVSync()::$_3&>(fp)(std::__1::forward<fml::TimePoint>(fp0), std::__1::forward<fml::TimePoint>(fp0))) std::__1::__invoke<flutter::Animator::AwaitVSync()::$_3&, fml::TimePoint, fml::TimePoint>(flutter::Animator::AwaitVSync()::$_3&&&, fml::TimePoint&&, fml::TimePoint&&) type_traits:4350 (embedder_unittests:x86_64+0x10207a4e5)
#6 void std::__1::__invoke_void_return_wrapper<void>::__call<flutter::Animator::AwaitVSync()::$_3&, fml::TimePoint, fml::TimePoint>(flutter::Animator::AwaitVSync()::$_3&&&, fml::TimePoint&&, fml::TimePoint&&) __functional_base:349 (embedder_unittests:x86_64+0x10207a398)
#7 std::__1::__function::__func<flutter::Animator::AwaitVSync()::$_3, std::__1::allocator<flutter::Animator::AwaitVSync()::$_3>, void (fml::TimePoint, fml::TimePoint)>::operator()(fml::TimePoint&&, fml::TimePoint&&) functional:1572 (embedder_unittests:x86_64+0x10207773c)
#8 std::__1::function<void (fml::TimePoint, fml::TimePoint)>::operator()(fml::TimePoint, fml::TimePoint) const functional:1923 (embedder_unittests:x86_64+0x1021ca39a)
#9 flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0::operator()() const vsync_waiter.cc:122 (embedder_unittests:x86_64+0x1021ca0f5)
#10 decltype(std::__1::forward<flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0&>(fp)()) std::__1::__invoke<flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0&>(flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0&&&) type_traits:4350 (embedder_unittests:x86_64+0x1021c9f0e)
#11 void std::__1::__invoke_void_return_wrapper<void>::__call<flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0&>(flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0&&&) __functional_base:349 (embedder_unittests:x86_64+0x1021c9e2e)
#12 std::__1::__function::__func<flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0, std::__1::allocator<flutter::VsyncWaiter::FireCallback(fml::TimePoint, fml::TimePoint)::$_0>, void ()>::operator()() functional:1572 (embedder_unittests:x86_64+0x1021c7032)
#13 std::__1::function<void ()>::operator()() const functional:1923 (embedder_unittests:x86_64+0x10005c52c)
#14 fml::MessageLoopImpl::FlushTasks(fml::FlushType) message_loop_impl.cc:121 (embedder_unittests:x86_64+0x100e732ea)
#15 fml::MessageLoopImpl::RunExpiredTasksNow() message_loop_impl.cc:131 (embedder_unittests:x86_64+0x100e730db)
#16 fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) message_loop_darwin.mm:75 (embedder_unittests:x86_64+0x100ea412b)
#17 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ <null>:2125280 (CoreFoundation:x86_64h+0x5a05f)
#18 fml::MessageLoopImpl::DoRun() message_loop_impl.cc:90 (embedder_unittests:x86_64+0x100e72f87)
#19 fml::MessageLoop::Run() message_loop.cc:49 (embedder_unittests:x86_64+0x100e712ce)
#20 fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0::operator()() const thread.cc:34 (embedder_unittests:x86_64+0x100e9ce1e)
#21 decltype(std::__1::forward<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fp)()) std::__1::__invoke<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0&&) type_traits:4350 (embedder_unittests:x86_64+0x100e9cc5e)
#22 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>&, std::__1::__tuple_indices<>) thread:342 (embedder_unittests:x86_64+0x100e9cb16)
#23 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0> >(void*) thread:352 (embedder_unittests:x86_64+0x100e9c1e4)
```
The same issue also happens on the idle notification:
```
WARNING: ThreadSanitizer: data race (pid=57967)
Write of size 8 at 0x7b6c0000fa58 by main thread:
#0 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::release() memory:2629 (embedder_unittests:x86_64+0x10211cc16)
#1 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::unique_ptr(std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >&&) memory:2481 (embedder_unittests:x86_64+0x10211cb4d)
#2 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::unique_ptr(std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >&&) memory:2481 (embedder_unittests:x86_64+0x1020d8cf6)
#3 flutter::Shell::~Shell() shell.cc:324 (embedder_unittests:x86_64+0x1020d8719)
#4 flutter::Shell::~Shell() shell.cc:314 (embedder_unittests:x86_64+0x1020d9726)
#5 std::__1::default_delete<flutter::Shell>::operator()(flutter::Shell*) const memory:2325 (embedder_unittests:x86_64+0x100226d94)
#6 std::__1::unique_ptr<flutter::Shell, std::__1::default_delete<flutter::Shell> >::reset(flutter::Shell*) memory:2638 (embedder_unittests:x86_64+0x100226c2b)
#7 std::__1::unique_ptr<flutter::Shell, std::__1::default_delete<flutter::Shell> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x100226b4a)
#8 std::__1::unique_ptr<flutter::Shell, std::__1::default_delete<flutter::Shell> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x100225056)
#9 flutter::EmbedderEngine::~EmbedderEngine() embedder_engine.cc:34 (embedder_unittests:x86_64+0x100224fd4)
#10 flutter::EmbedderEngine::~EmbedderEngine() embedder_engine.cc:34 (embedder_unittests:x86_64+0x1002250b6)
#11 FlutterEngineShutdown embedder.cc:907 (embedder_unittests:x86_64+0x10019b594)
#12 flutter::testing::UniqueEngineTraits::Free(_FlutterEngine*&) embedder_config_builder.h:24 (embedder_unittests:x86_64+0x10001e101)
#13 fml::UniqueObject<_FlutterEngine*, flutter::testing::UniqueEngineTraits>::FreeIfNecessary() unique_object.h:101 (embedder_unittests:x86_64+0x10001e016)
#14 fml::UniqueObject<_FlutterEngine*, flutter::testing::UniqueEngineTraits>::reset(_FlutterEngine* const&) unique_object.h:65 (embedder_unittests:x86_64+0x100077c89)
#15 flutter::testing::EmbedderTest_CanLaunchAndShutdownWithValidProjectArgs_Test::TestBody() embedder_unittests.cc:50 (embedder_unittests:x86_64+0x100077a43)
#16 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2447 (embedder_unittests:x86_64+0x103211ea4)
#17 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2502 (embedder_unittests:x86_64+0x1031d918e)
#18 testing::Test::Run() gtest.cc:2522 (embedder_unittests:x86_64+0x1031d9057)
#19 testing::TestInfo::Run() gtest.cc:2698 (embedder_unittests:x86_64+0x1031da2f9)
#20 testing::TestSuite::Run() gtest.cc:2828 (embedder_unittests:x86_64+0x1031dba30)
#21 testing::internal::UnitTestImpl::RunAllTests() gtest.cc:5285 (embedder_unittests:x86_64+0x1031e9d07)
#22 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2447 (embedder_unittests:x86_64+0x10321b7f4)
#23 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2502 (embedder_unittests:x86_64+0x1031e965e)
#24 testing::UnitTest::Run() gtest.cc:4873 (embedder_unittests:x86_64+0x1031e9456)
#25 RUN_ALL_TESTS() gtest.h:2453 (embedder_unittests:x86_64+0x10302b2cb)
#26 main run_all_unittests.cc:9 (embedder_unittests:x86_64+0x10302b26b)
Previous read of size 8 at 0x7b6c0000fa58 by thread T43:
#0 std::__1::unique_ptr<flutter::Engine, std::__1::default_delete<flutter::Engine> >::operator bool() const memory:2623 (embedder_unittests:x86_64+0x1020da782)
#1 flutter::Shell::OnAnimatorNotifyIdle(long long) shell.cc:871 (embedder_unittests:x86_64+0x1020dfc48)
#2 non-virtual thunk to flutter::Shell::OnAnimatorNotifyIdle(long long) shell.cc (embedder_unittests:x86_64+0x1020dfcfa)
#3 flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1::operator()() const animator.cc:164 (embedder_unittests:x86_64+0x1020718fb)
#4 decltype(std::__1::forward<flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1&>(fp)()) std::__1::__invoke<flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1&>(flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1&&&) type_traits:4350 (embedder_unittests:x86_64+0x10207171e)
#5 void std::__1::__invoke_void_return_wrapper<void>::__call<flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1&>(flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1&&&) __functional_base:349 (embedder_unittests:x86_64+0x10207163e)
#6 std::__1::__function::__func<flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1, std::__1::allocator<flutter::Animator::BeginFrame(fml::TimePoint, fml::TimePoint)::$_1>, void ()>::operator()() functional:1572 (embedder_unittests:x86_64+0x10206ea42)
#7 std::__1::function<void ()>::operator()() const functional:1923 (embedder_unittests:x86_64+0x10005c52c)
#8 fml::MessageLoopImpl::FlushTasks(fml::FlushType) message_loop_impl.cc:121 (embedder_unittests:x86_64+0x100e732ea)
#9 fml::MessageLoopImpl::RunExpiredTasksNow() message_loop_impl.cc:131 (embedder_unittests:x86_64+0x100e730db)
#10 fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) message_loop_darwin.mm:75 (embedder_unittests:x86_64+0x100ea412b)
#11 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ <null> (CoreFoundation:x86_64h+0x5a05f)
#12 fml::MessageLoopImpl::DoRun() message_loop_impl.cc:90 (embedder_unittests:x86_64+0x100e72f87)
#13 fml::MessageLoop::Run() message_loop.cc:49 (embedder_unittests:x86_64+0x100e712ce)
#14 fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0::operator()() const thread.cc:34 (embedder_unittests:x86_64+0x100e9ce1e)
#15 decltype(std::__1::forward<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fp)()) std::__1::__invoke<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0&&) type_traits:4350 (embedder_unittests:x86_64+0x100e9cc5e)
#16 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>&, std::__1::__tuple_indices<>) thread:342 (embedder_unittests:x86_64+0x100e9cb16)
#17 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0> >(void*) thread:352 (embedder_unittests:x86_64+0x100e9c1e4)
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,763,350 | flutter | Data race in MessageLoopTaskQueues disposal. | Caught by thread sanitizer in `embedder_unittests`.
```
WARNING: ThreadSanitizer: data race (pid=58113)
Write of size 8 at 0x7b0c00006e90 by main thread (mutexes: write M976290527882352256):
#0 operator delete(void*) <null> (libclang_rt.tsan_osx_dynamic.dylib:x86_64+0x6cbd5)
#1 std::__1::_DeallocateCaller::__do_call(void*) new:340 (embedder_unittests:x86_64+0x100008b66)
#2 std::__1::_DeallocateCaller::__do_deallocate_handle_size(void*, unsigned long) new:298 (embedder_unittests:x86_64+0x100008b02)
#3 std::__1::_DeallocateCaller::__do_deallocate_handle_size_align(void*, unsigned long, unsigned long) new:268 (embedder_unittests:x86_64+0x100008a92)
#4 std::__1::__libcpp_deallocate(void*, unsigned long, unsigned long) new:346 (embedder_unittests:x86_64+0x100008a16)
#5 std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, void*> >::deallocate(std::__1::__tree_node<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, void*>*, unsigned long) memory:1803 (embedder_unittests:x86_64+0x100e7c348)
#6 std::__1::allocator_traits<std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, void*> > >::deallocate(std::__1::allocator<std::__1::__tree_node<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, void*> >&, std::__1::__tree_node<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, void*>*, unsigned long) memory:1557 (embedder_unittests:x86_64+0x100e7bee6)
#7 std::__1::__tree<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::__map_value_compare<fml::TaskQueueId, std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::less<fml::TaskQueueId>, true>, std::__1::allocator<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > > > >::erase(std::__1::__tree_const_iterator<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::__tree_node<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, void*>*, long>) __tree:2577 (embedder_unittests:x86_64+0x100e889f9)
#8 unsigned long std::__1::__tree<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::__map_value_compare<fml::TaskQueueId, std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::less<fml::TaskQueueId>, true>, std::__1::allocator<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > > > >::__erase_unique<fml::TaskQueueId>(fml::TaskQueueId const&) __tree:2598 (embedder_unittests:x86_64+0x100e88678)
#9 std::__1::map<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> >, std::__1::less<fml::TaskQueueId>, std::__1::allocator<std::__1::pair<fml::TaskQueueId const, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > > > >::erase(fml::TaskQueueId const&) map:1292 (embedder_unittests:x86_64+0x100e752e6)
#10 fml::MessageLoopTaskQueues::Dispose(fml::TaskQueueId) message_loop_task_queues.cc:58 (embedder_unittests:x86_64+0x100e74ed3)
#11 fml::MessageLoopImpl::~MessageLoopImpl() message_loop_impl.cc:50 (embedder_unittests:x86_64+0x100e724bd)
#12 fml::MessageLoopDarwin::~MessageLoopDarwin() message_loop_darwin.mm:37 (embedder_unittests:x86_64+0x100ea4430)
#13 fml::MessageLoopDarwin::~MessageLoopDarwin() message_loop_darwin.mm:34 (embedder_unittests:x86_64+0x100ea44e6)
#14 fml::MessageLoopDarwin::~MessageLoopDarwin() message_loop_darwin.mm:34 (embedder_unittests:x86_64+0x100ea454a)
#15 fml::RefCountedThreadSafe<fml::MessageLoopImpl>::Release() const ref_counted.h:73 (embedder_unittests:x86_64+0x10026b5f4)
#16 fml::RefPtr<fml::MessageLoopImpl>::~RefPtr() ref_ptr.h:104 (embedder_unittests:x86_64+0x10026b53a)
#17 fml::RefPtr<fml::MessageLoopImpl>::~RefPtr() ref_ptr.h:102 (embedder_unittests:x86_64+0x1002688f6)
#18 fml::TaskRunner::~TaskRunner() task_runner.cc:21 (embedder_unittests:x86_64+0x100e9a43c)
#19 fml::TaskRunner::~TaskRunner() task_runner.cc:21 (embedder_unittests:x86_64+0x100e9a516)
#20 fml::TaskRunner::~TaskRunner() task_runner.cc:21 (embedder_unittests:x86_64+0x100e9a57a)
#21 fml::RefCountedThreadSafe<fml::TaskRunner>::Release() const ref_counted.h:73 (embedder_unittests:x86_64+0x100051174)
#22 fml::RefPtr<fml::TaskRunner>::~RefPtr() ref_ptr.h:104 (embedder_unittests:x86_64+0x1000510ba)
#23 fml::RefPtr<fml::TaskRunner>::~RefPtr() ref_ptr.h:102 (embedder_unittests:x86_64+0x100050046)
#24 fml::Thread::~Thread() thread.cc:42 (embedder_unittests:x86_64+0x100e9b37a)
#25 fml::Thread::~Thread() thread.cc:40 (embedder_unittests:x86_64+0x100e9b4e6)
#26 std::__1::default_delete<fml::Thread>::operator()(fml::Thread*) const memory:2325 (embedder_unittests:x86_64+0x100050dc4)
#27 std::__1::unique_ptr<fml::Thread, std::__1::default_delete<fml::Thread> >::reset(fml::Thread*) memory:2638 (embedder_unittests:x86_64+0x100050c5b)
#28 std::__1::unique_ptr<fml::Thread, std::__1::default_delete<fml::Thread> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x100050b7a)
#29 std::__1::unique_ptr<fml::Thread, std::__1::default_delete<fml::Thread> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x100050b16)
#30 flutter::ThreadHost::~ThreadHost() thread_host.cc:31 (embedder_unittests:x86_64+0x1021c39d4)
#31 flutter::ThreadHost::~ThreadHost() thread_host.cc:31 (embedder_unittests:x86_64+0x1021c3a56)
#32 flutter::EmbedderThreadHost::~EmbedderThreadHost() embedder_thread_host.cc:188 (embedder_unittests:x86_64+0x10027302d)
#33 flutter::EmbedderThreadHost::~EmbedderThreadHost() embedder_thread_host.cc:188 (embedder_unittests:x86_64+0x1002730f6)
#34 std::__1::default_delete<flutter::EmbedderThreadHost>::operator()(flutter::EmbedderThreadHost*) const memory:2325 (embedder_unittests:x86_64+0x100217fb4)
#35 std::__1::unique_ptr<flutter::EmbedderThreadHost, std::__1::default_delete<flutter::EmbedderThreadHost> >::reset(flutter::EmbedderThreadHost*) memory:2638 (embedder_unittests:x86_64+0x100217e4b)
#36 std::__1::unique_ptr<flutter::EmbedderThreadHost, std::__1::default_delete<flutter::EmbedderThreadHost> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x100217d6a)
#37 std::__1::unique_ptr<flutter::EmbedderThreadHost, std::__1::default_delete<flutter::EmbedderThreadHost> >::~unique_ptr() memory:2592 (embedder_unittests:x86_64+0x10019b1d6)
#38 flutter::EmbedderEngine::~EmbedderEngine() embedder_engine.cc:34 (embedder_unittests:x86_64+0x100224fed)
#39 flutter::EmbedderEngine::~EmbedderEngine() embedder_engine.cc:34 (embedder_unittests:x86_64+0x1002250b6)
#40 FlutterEngineShutdown embedder.cc:907 (embedder_unittests:x86_64+0x10019b594)
#41 flutter::testing::UniqueEngineTraits::Free(_FlutterEngine*&) embedder_config_builder.h:24 (embedder_unittests:x86_64+0x10001e101)
#42 fml::UniqueObject<_FlutterEngine*, flutter::testing::UniqueEngineTraits>::FreeIfNecessary() unique_object.h:101 (embedder_unittests:x86_64+0x10001e016)
#43 fml::UniqueObject<_FlutterEngine*, flutter::testing::UniqueEngineTraits>::~UniqueObject() unique_object.h:55 (embedder_unittests:x86_64+0x10001df86)
#44 fml::UniqueObject<_FlutterEngine*, flutter::testing::UniqueEngineTraits>::~UniqueObject() unique_object.h:55 (embedder_unittests:x86_64+0x1000041d6)
#45 flutter::testing::EmbedderTest_CanReloadSystemFonts_Test::TestBody() embedder_unittests.cc:248 (embedder_unittests:x86_64+0x100079fbf)
#46 void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2447 (embedder_unittests:x86_64+0x103211ea4)
#47 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) gtest.cc:2502 (embedder_unittests:x86_64+0x1031d918e)
#48 testing::Test::Run() gtest.cc:2522 (embedder_unittests:x86_64+0x1031d9057)
#49 testing::TestInfo::Run() gtest.cc:2698 (embedder_unittests:x86_64+0x1031da2f9)
#50 testing::TestSuite::Run() gtest.cc:2828 (embedder_unittests:x86_64+0x1031dba30)
#51 testing::internal::UnitTestImpl::RunAllTests() gtest.cc:5285 (embedder_unittests:x86_64+0x1031e9d07)
#52 bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2447 (embedder_unittests:x86_64+0x10321b7f4)
#53 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) gtest.cc:2502 (embedder_unittests:x86_64+0x1031e965e)
#54 testing::UnitTest::Run() gtest.cc:4873 (embedder_unittests:x86_64+0x1031e9456)
#55 RUN_ALL_TESTS() gtest.h:2453 (embedder_unittests:x86_64+0x10302b2cb)
#56 main run_all_unittests.cc:9 (embedder_unittests:x86_64+0x10302b26b)
Previous read of size 8 at 0x7b0c00006e90 by thread T39 (mutexes: write M973757253091900288):
#0 fml::TaskQueueId::operator int() const message_loop_task_queues.h:28 (embedder_unittests:x86_64+0x100e6a6d0)
#1 std::__1::less<fml::TaskQueueId>::operator()(fml::TaskQueueId const&, fml::TaskQueueId const&) const __functional_base:55 (embedder_unittests:x86_64+0x100e81afa)
#2 std::__1::__map_value_compare<fml::TaskQueueId, std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::less<fml::TaskQueueId>, true>::operator()(fml::TaskQueueId const&, std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > > const&) const map:512 (embedder_unittests:x86_64+0x100e81861)
#3 std::__1::__tree_node_base<void*>*& std::__1::__tree<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::__map_value_compare<fml::TaskQueueId, std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::less<fml::TaskQueueId>, true>, std::__1::allocator<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > > > >::__find_equal<fml::TaskQueueId>(std::__1::__tree_end_node<std::__1::__tree_node_base<void*>*>*&, fml::TaskQueueId const&) __tree:2048 (embedder_unittests:x86_64+0x100e80d9b)
#4 std::__1::__tree_node_base<void*>*& std::__1::__tree<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::__map_value_compare<fml::TaskQueueId, std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > >, std::__1::less<fml::TaskQueueId>, true>, std::__1::allocator<std::__1::__value_type<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > > > >::__find_equal<fml::TaskQueueId>(std::__1::__tree_end_node<std::__1::__tree_node_base<void*>*>*&, fml::TaskQueueId const&) const __tree:1476 (embedder_unittests:x86_64+0x100e8e406)
#5 std::__1::map<fml::TaskQueueId, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> >, std::__1::less<fml::TaskQueueId>, std::__1::allocator<std::__1::pair<fml::TaskQueueId const, std::__1::unique_ptr<fml::TaskQueueEntry, std::__1::default_delete<fml::TaskQueueEntry> > > > >::at(fml::TaskQueueId const&) const map:1542 (embedder_unittests:x86_64+0x100e76a8a)
#6 fml::MessageLoopTaskQueues::GetObserversToNotify(fml::TaskQueueId) const message_loop_task_queues.cc:179 (embedder_unittests:x86_64+0x100e77345)
#7 fml::MessageLoopImpl::FlushTasks(fml::FlushType) message_loop_impl.cc:123 (embedder_unittests:x86_64+0x100e73349)
#8 fml::MessageLoopImpl::RunExpiredTasksNow() message_loop_impl.cc:131 (embedder_unittests:x86_64+0x100e730db)
#9 fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) message_loop_darwin.mm:75 (embedder_unittests:x86_64+0x100ea412b)
#10 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ <null> (CoreFoundation:x86_64h+0x5a05f)
#11 fml::MessageLoopImpl::DoRun() message_loop_impl.cc:90 (embedder_unittests:x86_64+0x100e72f87)
#12 fml::MessageLoop::Run() message_loop.cc:49 (embedder_unittests:x86_64+0x100e712ce)
#13 fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0::operator()() const thread.cc:34 (embedder_unittests:x86_64+0x100e9ce1e)
#14 decltype(std::__1::forward<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fp)()) std::__1::__invoke<fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0&&) type_traits:4350 (embedder_unittests:x86_64+0x100e9cc5e)
#15 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0>&, std::__1::__tuple_indices<>) thread:342 (embedder_unittests:x86_64+0x100e9cb16)
#16 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, fml::Thread::Thread(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::$_0> >(void*) thread:352 (embedder_unittests:x86_64+0x100e9c1e4)
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,764,642 | flutter | ShellTest::InitializeWithDifferentThread causes thread sanitizer crash. | Caught by thread sanitizer in the `shell_unittests` target. This may be a false positive.
```
FATAL: ThreadSanitizer CHECK failed: /b/s/w/ir/kitchen-workdir/llvm-project/compiler-rt/lib/tsan/rtl/tsan_rtl_proc.cc:48 "((thr->proc1)) == ((nullptr))" (0x4b40000000000000, 0x0)
FATAL: ThreadSanitizer CHECK failed: /b/s/w/ir/kitchen-workdir/llvm-project/compiler-rt/lib/tsan/rtl/tsan_mman.cc:328 "((0)) != (0)" (0x0, 0x0)
ThreadSanitizer:DEADLYSIGNAL
==58641==ERROR: ThreadSanitizer: SEGV on unknown address 0x000000000000 (pc 0x000116c949e6 bp 0x7e8001681d70 sp 0x7e8001681d50 T6482002)
==58641==The signal is caused by a READ memory access.
==58641==Hint: address points to the zero page.
#0 __sanitizer::CombinedAllocator<__sanitizer::SizeClassAllocator32<__sanitizer::AP32>, __sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator32<__sanitizer::AP32> >, __sanitizer::LargeMmapAllocator<__sanitizer::NoOpMapUnmapCallback, __sanitizer::LargeMmapAllocatorPtrArrayStatic, __sanitizer::LocalAddressSpaceView> >::Allocate(__sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator32<__sanitizer::AP32> >*, unsigned long, unsigned long) <null>:23596976 (libclang_rt.tsan_osx_dynamic.dylib:x86_64+0x29e5)
==58641==Register values:
rax = 0x000000000000001d rbx = 0x4b40000000014f88 rcx = 0x0000000000000009 rdx = 0x000000000000000b
rdi = 0x00000000000001ff rsi = 0x0000000000000000 rbp = 0x00007e8001681d70 rsp = 0x00007e8001681d50
r8 = 0x0000000116d29940 r9 = 0x0000000000000010 r10 = 0x00000000000000b3 r11 = 0x0000000000000004
r12 = 0x0000000000007400 r13 = 0x0000000000000148 r14 = 0x4b4000000000db88 r15 = 0x0000000000000008
ThreadSanitizer can not provide additional info.
SUMMARY: ThreadSanitizer: SEGV (libclang_rt.tsan_osx_dynamic.dylib:x86_64+0x29e5) in __sanitizer::CombinedAllocator<__sanitizer::SizeClassAllocator32<__sanitizer::AP32>, __sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator32<__sanitizer::AP32> >, __sanitizer::LargeMmapAllocator<__sanitizer::NoOpMapUnmapCallback, __sanitizer::LargeMmapAllocatorPtrArrayStatic, __sanitizer::LocalAddressSpaceView> >::Allocate(__sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator32<__sanitizer::AP32> >*, unsigned long, unsigned long)
==58641==ABORTING
[1] 58641 abort ./out/host_debug_unopt/shell_unittests
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Critical |
503,764,697 | pytorch | Allow explicit gradients in torch.distributed.autograd.backward() API | The [torch.distributed.autograd.backward](https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/autograd/init.cpp#L108) API only allows scalars as the roots and doesn't allow explicit gradient tensors. Similar to the [torch.autograd.backward](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward) API, we should allow explicit gradients (`grad_tensors`) in this API as well.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini | triaged,module: rpc | low | Minor |
503,770,410 | flutter | Use of uninitialized value while accessing the pending tasks heap in the embedder task runner. | Caught by memory sanitizer while running `embedder_unittests` harness.
```
./third_party/libcxx/include/__hash_table:2137
./third_party/libcxx/include/unordered_map:1633
./flutter/shell/platform/embedder/embedder_task_runner.cc:39
./flutter/shell/platform/embedder/embedder_task_runner.cc:24
./flutter/runtime/dart_isolate.cc:245
./flutter/runtime/dart_isolate.cc:190
./flutter/runtime/dart_isolate.cc:739
./flutter/runtime/dart_isolate.cc:72
./flutter/runtime/runtime_controller.cc:75
./flutter/runtime/runtime_controller.cc:30
./third_party/libcxx/include/memory:3003
./flutter/shell/common/engine.cc:60
./third_party/libcxx/include/memory:3003
./flutter/shell/common/shell.cc:132
./flutter/fml/make_copyable.h:24
./third_party/libcxx/include/type_traits:3530
./third_party/libcxx/include/__functional_base:348
./third_party/libcxx/include/functional:1533
./third_party/libcxx/include/functional:1707
./third_party/libcxx/include/functional:1860
./third_party/libcxx/include/functional:2419
./flutter/fml/message_loop_impl.cc:121
./flutter/fml/message_loop_impl.cc:131
./flutter/fml/platform/linux/message_loop_linux.cc:89
./flutter/fml/platform/linux/message_loop_linux.cc:70
./flutter/fml/message_loop_impl.cc:90
./flutter/fml/message_loop.cc:49
./flutter/fml/thread.cc:34
./third_party/libcxx/include/type_traits:3530
./third_party/libcxx/include/thread:341
./third_party/libcxx/include/thread:351
``` | engine,from: sanitizer,P2,team-engine,triaged-engine | low | Minor |
503,781,066 | TypeScript | Add types for window.performance.getEntriesByType |
## Search Terms
getEntriesByType
PerformanceMark
PerformanceMeasure
PerformanceResourceTiming
## Suggestion
Add types for the [`getEntriesByType` API](https://developer.mozilla.org/en-US/docs/Web/API/Performance/getEntriesByType)
I found [this old issue (Jul 5, 2018)](https://github.com/microsoft/TypeScript/issues/25461) and saw that experimental APIs are not included in `lib.dom.d.ts`. The following APIs don't appear to have `experimental` warnings in the MDN docs, so hopefully they are no longer experimental and types can be added now.
https://developer.mozilla.org/en-US/docs/Web/API/PerformanceMark
https://developer.mozilla.org/en-US/docs/Web/API/PerformanceMeasure
https://developer.mozilla.org/en-US/docs/Web/API/PerformanceResourceTiming
## Use Cases
Getting specific strongly typed information for the entries when accessed via this API.
## Examples
```ts
// resources would be of type PerformanceResourceTiming[]
const resources = window.performance.getEntriesByType("resource");
// measures would be of type PerformanceMeasure[]
const measures = window.performance.getEntriesByType("measure");
// marks would be of type PerformanceMark[]
const marks = window.performance.getEntriesByType("mark");
// paints would still be of type PerformanceEntryList
const paints = window.performance.getEntriesByType("paint");
```
## Checklist
My suggestion meets these guidelines (not sure about the first one):
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Bug,Help Wanted,Domain: lib.d.ts | low | Critical |
503,785,651 | godot | Disabling Script Editor removes the attach scripts option from contextual menu when right clicking nodes | **Godot version:**
3.2 alpha 1
**OS/device including version:**
windows 10
**Issue description:**
If you disable the Script Editor from the settings menu, it completely removes the ability to attach scripts from the contextual menu.
**Steps to reproduce:**
1. Disable script editor from editor profiles
2. Right click on a node
3. Watch 'attach script' be missing from the contextual menu.
| bug,topic:editor,confirmed | low | Major |
503,798,734 | pytorch | [feature request] Reduction (torch.add / torch.logaddexp / torch.max / torch.min / torch.mean) of several tensors without extra copies/allocations / memory accesses } TensorList inputs support | If I understand correctly `sum(tensor_list)` will allocate and keep O(N) intermediate tensors (same with a for loop) where N is number of tensors, which can be quite large in the case of big DenseNet. I propose to maybe generalize `torch.add` to support more than two tensors as input.
Currently one can do: `functools.reduce(lambda acc, x: acc.add_(x), tensor_list, torch.zeros_like(tensor_list[0]))`, so it's not super-urgent, but a more idiomatic, TorchScript-able way would be nice | triaged,function request | medium | Critical |
503,809,713 | tensorflow | how to assign value to a EagerTensor slice? ----'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment | As in NumPy or PyTorch we can write something like the code below, but how can the same be done in TF 2.0?
The following code raises this exception:
`'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment`
prediction[:,:,0]=tf.math.sigmoid(prediction[:,:,0])
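One common workaround (a sketch, not an official recommendation): `EagerTensor`s are immutable, so instead of assigning into a slice, rebuild the tensor from transformed slices, e.g. with `tf.concat`:

```python
import tensorflow as tf

prediction = tf.random.normal([2, 3, 4])

# EagerTensors are immutable, so build a new tensor instead of assigning:
first = tf.math.sigmoid(prediction[:, :, :1])   # channel 0, rank preserved
prediction = tf.concat([first, prediction[:, :, 1:]], axis=-1)
```

For scattered (non-contiguous) updates, `tf.tensor_scatter_nd_update` is an alternative, and a `tf.Variable` supports sliced `.assign` if mutability is really needed.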
| stat:awaiting tensorflower,type:feature,comp:ops,TF 2.11 | high | Critical |
503,896,649 | vscode | Scrollbar in editor is off | Steps to Reproduce:
1. use web (fullscreen)
2. have problems panel open with problems (e.g. strict init issues)
3. open a file with problem
4. click on the error in the scrollbar to navigate to
=> the position seems off, it does not reveal in center
Click here:

Error not shown:

| bug,editor-scrollbar | low | Critical |
504,025,654 | pytorch | CTCLoss cuda backend large batch handling takes up to 1.8x more memory | ## 🐛 Bug
The special large batch / alphabet handling, although it can sometimes provide up to a 2.5x speedup, comes at the cost of up to 1.8x more memory. For large targets, this can be a significant amount of increased memory usage (on the order of multiple GBs).
I think that if the memory consumption can't be brought down, this should be optional; many users with tight memory constraints would prefer a slower implementation to a memory-hungry one
## To Reproduce
To make a fair comparison of the effect, I changed [this line](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/LossCTC.cu#L642) from `if (is_large)` to `if (zero_infinity)`, which makes it simple to toggle `is_large` on and off without the hassle of adding a new argument, given that `zero_infinity` has negligible memory/time effects on other parts of the code
Then, I used the following snippet
```
from torch import nn
import torch
import math
import time
time_step = 5 # Input sequence length
batch_size = 4 # Batch size
target_sql = 3 # Target sequence length
iters = 5
warm = 3
ctc_loss = [nn.CTCLoss(reduction='sum',zero_infinity=False)]
ctc_loss += [nn.CTCLoss(reduction='sum',zero_infinity=True)]
for j in range(15):
time_step *= 2
target_sql *= 2
print('Logits length : ',time_step,'Label length : ',target_sql)
print('------------------------------------------')
x_lengths = torch.full(size=(batch_size,), fill_value=time_step, dtype=torch.long).cuda()
y_lengths = torch.full(size=(batch_size,), fill_value=target_sql, dtype=torch.long).cuda()
mem = [0,0]
etime = [0,0]
for r in [3,200,5000]: #vocab sizes
print('Vocabulary : ',r)
print('=================')
vocab_size = r
x = torch.randn(time_step, batch_size, vocab_size).log_softmax(2).detach().requires_grad_().cuda()
y = torch.randint(low=1, high=vocab_size-1, size=(batch_size, target_sql),dtype=torch.long).cuda()
for c in range(2):
s=0
for i in range(iters):
torch.cuda.synchronize()
a = time.perf_counter()
loss = ctc_loss[c](x, y, x_lengths, y_lengths).mean()
loss.backward()
torch.cuda.synchronize()
b = time.perf_counter()
del loss
if i>=warm:
s += b - a
mem[c] = torch.cuda.max_memory_allocated()/2**30
torch.cuda.reset_max_memory_allocated()
torch.cuda.empty_cache()
etime[c] = s/(iters-warm)
print(f'Average [{c}] Time : {etime[c]:.5f} sec Memory : {mem[c]:.5f} GB')
print(f'Speed up : {etime[0]/etime[1]:.3f} Memory increase ratio : {mem[1]/mem[0]:.3f} \n')
```
This produced the following output.
Note that for each input/target length we run the loss both without (first line) and with (second line) large batch handling; both the speed-up and the increase in memory consumption are printed. This is repeated for vocabulary sizes [3, 200, 5000].
```
Logits length : 10 Label length : 6
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.00162 sec Memory : 0.00001 GB
Average [1] Time : 0.00368 sec Memory : 0.00002 GB
Speed up : 0.439 Memory increase ratio : 1.500
Vocabulary : 200
=================
Average [0] Time : 0.00172 sec Memory : 0.00007 GB
Average [1] Time : 0.00333 sec Memory : 0.00010 GB
Speed up : 0.516 Memory increase ratio : 1.445
Vocabulary : 5000
=================
Average [0] Time : 0.00466 sec Memory : 0.00150 GB
Average [1] Time : 0.00483 sec Memory : 0.00225 GB
Speed up : 0.966 Memory increase ratio : 1.497
Logits length : 20 Label length : 12
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.00167 sec Memory : 0.00075 GB
Average [1] Time : 0.00335 sec Memory : 0.00004 GB
Speed up : 0.500 Memory increase ratio : 0.048
Vocabulary : 200
=================
Average [0] Time : 0.00194 sec Memory : 0.00014 GB
Average [1] Time : 0.00393 sec Memory : 0.00020 GB
Speed up : 0.493 Memory increase ratio : 1.435
Vocabulary : 5000
=================
Average [0] Time : 0.00618 sec Memory : 0.00300 GB
Average [1] Time : 0.00630 sec Memory : 0.00449 GB
Speed up : 0.981 Memory increase ratio : 1.497
Logits length : 40 Label length : 24
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.00165 sec Memory : 0.00149 GB
Average [1] Time : 0.00327 sec Memory : 0.00012 GB
Speed up : 0.505 Memory increase ratio : 0.078
Vocabulary : 200
=================
Average [0] Time : 0.00224 sec Memory : 0.00030 GB
Average [1] Time : 0.00389 sec Memory : 0.00042 GB
Speed up : 0.576 Memory increase ratio : 1.397
Vocabulary : 5000
=================
Average [0] Time : 0.00951 sec Memory : 0.00602 GB
Average [1] Time : 0.00920 sec Memory : 0.00901 GB
Speed up : 1.033 Memory increase ratio : 1.495
Logits length : 80 Label length : 48
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.00176 sec Memory : 0.00299 GB
Average [1] Time : 0.00332 sec Memory : 0.00042 GB
Speed up : 0.530 Memory increase ratio : 0.142
Vocabulary : 200
=================
Average [0] Time : 0.00294 sec Memory : 0.00071 GB
Average [1] Time : 0.00411 sec Memory : 0.00095 GB
Speed up : 0.716 Memory increase ratio : 1.335
Vocabulary : 5000
=================
Average [0] Time : 0.01783 sec Memory : 0.01216 GB
Average [1] Time : 0.01603 sec Memory : 0.01812 GB
Speed up : 1.112 Memory increase ratio : 1.490
Logits length : 160 Label length : 96
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.00310 sec Memory : 0.00597 GB
Average [1] Time : 0.00341 sec Memory : 0.00164 GB
Speed up : 0.911 Memory increase ratio : 0.275
Vocabulary : 200
=================
Average [0] Time : 0.00377 sec Memory : 0.00188 GB
Average [1] Time : 0.00400 sec Memory : 0.00258 GB
Speed up : 0.943 Memory increase ratio : 1.371
Vocabulary : 5000
=================
Average [0] Time : 0.03427 sec Memory : 0.02477 GB
Average [1] Time : 0.02548 sec Memory : 0.03669 GB
Speed up : 1.345 Memory increase ratio : 1.481
Logits length : 320 Label length : 192
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.00293 sec Memory : 0.01194 GB
Average [1] Time : 0.00376 sec Memory : 0.00648 GB
Speed up : 0.779 Memory increase ratio : 0.543
Vocabulary : 200
=================
Average [0] Time : 0.00505 sec Memory : 0.00559 GB
Average [1] Time : 0.00657 sec Memory : 0.00836 GB
Speed up : 0.769 Memory increase ratio : 1.496
Vocabulary : 5000
=================
Average [0] Time : 0.05630 sec Memory : 0.05137 GB
Average [1] Time : 0.03410 sec Memory : 0.07521 GB
Speed up : 1.651 Memory increase ratio : 1.464
Logits length : 640 Label length : 384
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.00402 sec Memory : 0.02388 GB
Average [1] Time : 0.00386 sec Memory : 0.02577 GB
Speed up : 1.042 Memory increase ratio : 1.079
Vocabulary : 200
=================
Average [0] Time : 0.00852 sec Memory : 0.01850 GB
Average [1] Time : 0.00817 sec Memory : 0.02953 GB
Speed up : 1.042 Memory increase ratio : 1.596
Vocabulary : 5000
=================
Average [0] Time : 0.09079 sec Memory : 0.11005 GB
Average [1] Time : 0.08103 sec Memory : 0.15774 GB
Speed up : 1.121 Memory increase ratio : 1.433
Logits length : 1280 Label length : 768
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.01199 sec Memory : 0.05877 GB
Average [1] Time : 0.00849 sec Memory : 0.10569 GB
Speed up : 1.412 Memory increase ratio : 1.798
Vocabulary : 200
=================
Average [0] Time : 0.02243 sec Memory : 0.06629 GB
Average [1] Time : 0.01753 sec Memory : 0.11320 GB
Speed up : 1.279 Memory increase ratio : 1.708
Vocabulary : 5000
=================
Average [0] Time : 0.22960 sec Memory : 0.25007 GB
Average [1] Time : 0.16240 sec Memory : 0.34577 GB
Speed up : 1.414 Memory increase ratio : 1.383
Logits length : 2560 Label length : 1536
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.03153 sec Memory : 0.23473 GB
Average [1] Time : 0.02524 sec Memory : 0.41070 GB
Speed up : 1.249 Memory increase ratio : 1.750
Vocabulary : 200
=================
Average [0] Time : 0.08150 sec Memory : 0.24976 GB
Average [1] Time : 0.04038 sec Memory : 0.42573 GB
Speed up : 2.018 Memory increase ratio : 1.705
Vocabulary : 5000
=================
Average [0] Time : 0.39866 sec Memory : 0.61731 GB
Average [1] Time : 0.39342 sec Memory : 0.80872 GB
Speed up : 1.013 Memory increase ratio : 1.310
Logits length : 5120 Label length : 3072
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 0.19747 sec Memory : 0.93821 GB
Average [1] Time : 0.08911 sec Memory : 1.64171 GB
Speed up : 2.216 Memory increase ratio : 1.750
Vocabulary : 200
=================
Average [0] Time : 0.35623 sec Memory : 0.96900 GB
Average [1] Time : 0.13945 sec Memory : 1.67250 GB
Speed up : 2.554 Memory increase ratio : 1.726
Vocabulary : 5000
=================
Average [0] Time : 1.09736 sec Memory : 1.70069 GB
Average [1] Time : 0.93109 sec Memory : 2.40419 GB
Speed up : 1.179 Memory increase ratio : 1.414
Logits length : 10240 Label length : 6144
------------------------------------------
Vocabulary : 3
=================
Average [0] Time : 1.11494 sec Memory : 3.75141 GB
Average [1] Time : 0.65411 sec Memory : 6.56467 GB
Speed up : 1.705 Memory increase ratio : 1.750
Vocabulary : 200
=================
Average [0] Time : 1.77882 sec Memory : 3.81299 GB
Average [1] Time : 0.93758 sec Memory : 6.62626 GB
Speed up : 1.897 Memory increase ratio : 1.738
Vocabulary : 5000
=================
Average [0] Time : 3.43761 sec Memory : 5.27784 GB
Average [1] Time : 2.56783 sec Memory : 8.09110 GB
Speed up : 1.339 Memory increase ratio : 1.533
Logits length : 20480 Label length : 12288
------------------------------------------
Vocabulary : 3
=================
RuntimeError: CUDA out of memory. Tried to allocate 7.50 GiB (GPU 0; 11.91 GiB total capacity; 7.50 GiB already allocated; 3.22 GiB free; 2.37 MiB cached)
```
## Environment
PyTorch Version (e.g., 1.0): tested problem on master
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Python version: 3.6
CUDA/cuDNN version: CUDA: 10.0.130, cuDNN: 7.4.2
GPU models and configuration: tested on Titan XP
@t-vi | module: loss,module: cuda,module: memory usage,triaged,enhancement | low | Critical |
504,055,077 | pytorch | Make topk sort stable | ## 🐛 Bug
torch.topk with sorted=True doesn't return a result that is consistent across different values of k when dealing with duplicate values. The position of duplicated values in the returned sorted indices varies with k. The behavior varies between CPU and CUDA, and it's inconsistent both between and within the two backends.
This affects the reference implementation for computing accuracy in e.g. imagenet classification ([link](https://github.com/pytorch/examples/blob/ee964a2eeb41e1712fe719b83645c79bcbd0ba1a/imagenet/main.py#L407)), in the sense that passing topk=(1,5) or topk=(1,10) might give different top1 accuracies. The effect is especially notable on highly quantized models, where it's more common to have duplicated values in the output of a layer.
## To Reproduce
On CPU:
```python
In [1]: import torch
In [2]: x = torch.Tensor([1, 2, 5, 4, 5])
In [3]: print(x.topk(1, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5.]),
indices=tensor([4]))
In [4]: print(x.topk(2, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5.]),
indices=tensor([2, 4]))
In [5]: print(x.topk(3, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5., 4.]),
indices=tensor([2, 4, 3]))
In [6]: print(x.topk(4, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5., 4., 2.]),
indices=tensor([2, 4, 3, 1]))
In [7]: print(x.topk(5, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5., 4., 2., 1.]),
indices=tensor([2, 4, 3, 1, 0]))
```
On CUDA:
```python
In [1]: import torch
In [2]: y = torch.Tensor([1, 2, 5, 4, 5]).cuda()
In [3]: print(y.topk(1, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5.], device='cuda:0'),
indices=tensor([2], device='cuda:0'))
In [4]: print(y.topk(2, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5.], device='cuda:0'),
indices=tensor([4, 2], device='cuda:0'))
In [5]: print(y.topk(3, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5., 4.], device='cuda:0'),
indices=tensor([4, 2, 3], device='cuda:0'))
In [6]: print(y.topk(4, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5., 4., 2.], device='cuda:0'),
indices=tensor([2, 4, 3, 1], device='cuda:0'))
In [7]: print(y.topk(5, largest=True, sorted=True))
torch.return_types.topk(
values=tensor([5., 5., 4., 2., 1.], device='cuda:0'),
indices=tensor([2, 4, 3, 1, 0], device='cuda:0'))
```
## Expected behavior
The indices should always be in the same order, independent of the value of k. Ideally they should also be in the same order between CPU and GPU.
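Until the kernels are made stable, the desired semantics can be emulated by breaking ties on the original index, so that the result for k is always a prefix of the result for k + 1. A pure-Python sketch of that contract (the `stable_topk` helper name is made up, not a torch API):

```python
def stable_topk(values, k, largest=True):
    # Sort by value, breaking ties by the original index, so ties always
    # resolve toward the lower index regardless of k.
    key = (lambda i: (-values[i], i)) if largest else (lambda i: (values[i], i))
    order = sorted(range(len(values)), key=key)
    top = order[:k]
    return [values[i] for i in top], top

x = [1, 2, 5, 4, 5]
for k in range(1, 6):
    print(stable_topk(x, k))  # indices are always a prefix of [2, 4, 3, 1, 0]
```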
## Environment
Reproduced on latest pytorch-nightly.
```
python collect_env.py
Collecting environment information...
PyTorch version: 1.3.0.dev20190917
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Quadro P6000
GPU 1: Quadro P6000
GPU 2: Quadro P6000
Nvidia driver version: 418.67
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0
Versions of relevant libraries:
[pip] numpy==1.17.2
[pip] torch==1.3.0.dev20190917
[pip] torchvision==0.5.0a0+e8b830f
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py36he904b0f_0
[conda] mkl_fft 1.0.14 py36ha843d7b_0
[conda] mkl_random 1.1.0 py36hd6b4f25_0
[conda] pytorch 1.3.0.dev20190917 py3.6_cuda10.0.130_cudnn7.6.2_0 pytorch-nightly
[conda] torchvision 0.5.0.dev20190917 py36_cu100 pytorch-nightly
```
cc @ezyang @gchanan @zou3519 @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a | triaged,enhancement,module: determinism,OSS contribution wanted,module: sorting and selection | medium | Critical |
504,056,117 | rust | custom_test_frameworks doesn't work with plain functions | The following code:
```rust
#![feature(custom_test_frameworks)]
#![test_runner(my_test_runner)]
fn my_test_runner(tests: &[&fn() -> ()]) {
for test in tests {
test();
}
}
#[test_case]
fn test_1() {}
#[test_case]
fn test_2() {}
```
results in
```
11 | fn test_1() {}
| ^^^^^^^^^^^^^^ expected fn pointer, found fn item
|
= note: expected type `&fn()`
found type `&fn() {test_1}`
```
The workaround mentioned in https://stackoverflow.com/questions/27895946/expected-fn-item-found-a-different-fn-item-when-working-with-function-pointer doesn't work, as these functions are passed in by the compiler, so we cannot do the coercion.
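For reference, a trait-object signature does accept distinct plain functions on stable Rust, because each fn-item type (`fn() {test_1}`) coerces to the trait object. A sketch outside the `custom_test_frameworks` machinery (the runner is invoked by hand here):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static RAN: AtomicUsize = AtomicUsize::new(0);

// `&[&dyn Fn()]` instead of `&[&fn()]`: each distinct fn-item type
// unsizes to the trait object, so plain functions are accepted.
fn my_test_runner(tests: &[&dyn Fn()]) {
    for test in tests {
        test();
    }
}

fn test_1() { RAN.fetch_add(1, Ordering::SeqCst); }
fn test_2() { RAN.fetch_add(1, Ordering::SeqCst); }

fn main() {
    my_test_runner(&[&test_1, &test_2]);
    assert_eq!(RAN.load(Ordering::SeqCst), 2);
}
```

Whether the attribute machinery should perform this coercion itself is the open question.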
Other variants that don't work:
```rust
fn my_test_runner(tests: &[fn() -> ()]) {
```
```rust
fn my_test_runner<F: Fn() -> ()>(tests: &[F]) {
```
```rust
fn my_test_runner<F: Fn() -> ()>(tests: &[&F]) {
``` | A-libtest,C-bug,requires-nightly,F-custom_test_frameworks | low | Minor |
504,059,021 | TypeScript | API: expose getOptionalType and getNonOptionalType | ## Suggestion
As API user I need to distinguish the `optionalType` and the real `undefined` type when handling optional chaining.
`getOptionalType` and `getNonOptionalType` are already present on `TypeChecker`, but not exposed in the public API.
/cc @rbuckton | Suggestion,In Discussion,API | low | Minor |
504,183,237 | flutter | iOS VoiceOver gets into buggy state when animating elevation to 0 | The following code will produce a buggy state of iOS VoiceOver when run on a real iOS device.
Not reproducible on Android.
1. Open the app on a real iOS device
2. Turn on VoiceOver
3. Tap the FAB
4. Observe it getting into this strange state where swiping does not go to the next element:

Strangely, if the elevation is lerped to 1, or even 1e-6, instead of 0, the VoiceOver works as intended. Also, this only seems to affect buttons. Changing the children to text also does not reproduce. Finally, if 1 item is used in the row, the bug does also not reproduce.
```dart
import 'dart:ui';
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'iOS Semantics Bug Demo',
home: MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Text('Push the FAB with iOS voice over on to reproduce bug'),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
Navigator.of(context).push(FooPageRoute());
},
child: Icon(Icons.add),
),
);
}
}
class FooPageRoute<T> extends PageRoute<T> {
@override
bool get opaque => false;
@override
Color get barrierColor => null;
@override
String get barrierLabel => null;
@override
Duration get transitionDuration => Duration(milliseconds: 1000);
@override
bool get maintainState => false;
@override
Widget buildPage(
BuildContext context,
Animation<double> animation,
Animation<double> secondaryAnimation,
) {
return FooPage(animation);
}
}
class FooPage extends StatefulWidget {
FooPage(this.animation);
final Animation<double> animation;
@override
State<StatefulWidget> createState() => _FooPageState();
}
class _FooPageState extends State<FooPage> {
@override
Widget build(BuildContext context) {
final animation = widget.animation;
return Scaffold(
body: SafeArea(
child: SizedBox.expand(
child: AnimatedBuilder(
animation: animation,
builder: (context, child) {
// Animating the Padding is necessary to produce the voiceover bug.
return Padding(
padding: EdgeInsets.lerp(
EdgeInsets.only(
left: 100,
right: 100,
top: 100,
bottom: 100,
),
EdgeInsets.zero,
animation.value,
),
// Animated the elevation by using a Material with
// animationDuration zero, or using a PhysicalShape, both
// result in the voiceover bug.
child: Material(
// Lerping from 20 to 0 causes a voiceover bug.
// Lerping from 20 to 1 will not result in a voiceover bug.
elevation: lerpDouble(20, 0, animation.value),
animationDuration: Duration.zero,
child: Row(
// If the children are Buttons, then the bug persists.
// If there is only 1 child, the bug does not persist.
// If the children are 2 Texts, the bug does not persist.
children: [
IconButton(
icon: Icon(Icons.person),
onPressed: () {},
),
IconButton(
icon: Icon(Icons.chat),
onPressed: () {},
),
],
),
),
);
},
),
),
),
);
}
}
``` | platform-ios,framework,a: accessibility,customer: google,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-ios,triaged-ios | low | Critical |
504,198,812 | TypeScript | Narrowing doesn't recognize string constant as truthy | **TypeScript Version:** 3.7-beta
**Search Terms:** cfa narrowing string truthy
**Code**
```ts
declare var a: { b: number } | null
!a || a.b > 3; // works
!a && true || a.b > 3 && true; // works
!a && "a was not defined" || a.b > 3 && "a.b was too big"; // error "Object is possibly null"
```
**Expected behavior:**
The narrowings all work
**Actual behavior:**
The third narrowing does not work and produces an error
**Playground Link:**
[Playground](https://www.typescriptlang.org/play/?ts=3.7-Beta&ssl=1&ssc=1&pln=6&pc=1#code/CYUwxgNghgTiAEA3W8oC54G94CMMDsBXAWxxBngF94AfeIiCAKCYEIpa6oA6HeAPngBmANzwA9OPgB3APYwA1gGc2HAGRr4AFxiEENLrwHD4G7bpBjJM+ctWnNAIg7SoS+rK3xQAMwCW+CDAjpyoRoJCDvDORq7uWrKyuH4A5o5WUuQw8tEA8jgAVuBefu4ADrJKSn44EACe9ISMjkxAA)
**Related Issues:**
I suspect this is a duplicate, as most CFA narrowing bugs are. But I searched as best I could and didn't find any other bugs that matched what was going on here. It seems like typescript doesn't understand that a non-empty string-literal is true in this situation? | Bug | low | Critical |
504,217,142 | rust | Confusing MaybeUninit documentation | The [as_ptr](https://doc.rust-lang.org/std/mem/union.MaybeUninit.html#method.as_ptr) method on a `MaybeUninit<T>` could perhaps use some better wording, or at least an example.
The docs say: "Writing to memory that this pointer (non-transitively) points to is undefined behavior (except inside an UnsafeCell<T>)."
It's unclear whether that means `&MaybeUninit<UnsafeCell<T>> -> *const UnsafeCell<T> -> *mut T` is okay, or that `&UnsafeCell<MaybeUninit<T>> -> *mut MaybeUninit<T> -> *mut T` is okay, if the resulting mutable pointer is written to.
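For concreteness, the second chain written out (an illustrative sketch only; the `slot` name is made up). This pattern is generally understood to be allowed, since the write goes through the `UnsafeCell`'s `get()` pointer, but the docs don't currently say so:

```rust
use std::cell::UnsafeCell;
use std::mem::MaybeUninit;

fn main() {
    // &UnsafeCell<MaybeUninit<u8>> -> *mut MaybeUninit<u8> -> write
    let slot: UnsafeCell<MaybeUninit<u8>> = UnsafeCell::new(MaybeUninit::uninit());
    unsafe {
        (*slot.get()).write(42);
        assert_eq!((*slot.get()).assume_init(), 42);
    }
}
```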
Perhaps adding a little example of which of the two is okay (or perhaps both) would help. | C-enhancement,T-libs-api,A-docs | low | Minor |
504,222,835 | rust | feature request: test binaries should support repeating runs | The ability to repeat tests in-process is useful when attempting to reproduce flaky test failures. Go's testing package has the `-count` flag (https://golang.org/cmd/go/#hdr-Testing_flags), googletest has `--gtest_repeat` (https://github.com/google/googletest/blob/master/googletest/docs/advanced.md#repeating-the-tests), but rust doesn't seem to have anything.
In a limited test, I ran a test binary 100 times with a filter that excluded all test cases, and that took over 15 seconds. It'd be nice to avoid that overhead.
cc @tmandry | T-compiler,T-dev-tools,C-feature-request,A-libtest | low | Critical |
504,228,901 | go | cmd/link/internal/ld: TestRuntimeTypeAttrInternal flaky on Windows | ```
#!watchflakes
post <- pkg == "cmd/link/internal/ld" && `This version of %1 is not compatible`
```
Seen on the `windows-amd64-2016` builder in https://storage.googleapis.com/go-build-log/4a655811/windows-amd64-2016_5d978c1a.log:
```
--- FAIL: TestRuntimeTypeAttrInternal (0.73s)
dwarf_test.go:973: could not run test program: fork/exec C:\Users\gopher\AppData\Local\Temp\1\TestRuntimeType696325427\out.exe: This version of %1 is not compatible with the version of Windows you're running. Check your computer's system information and then contact the software publisher.
FAIL
FAIL cmd/link/internal/ld 3.767s
```
CC @cherrymui @jeremyfaller | OS-Windows,NeedsInvestigation,compiler/runtime | low | Major |
504,261,269 | terminal | Enhance shell autocompletion with a cool new user interface and shell completion protocol | Hello! Thank you for the new interesting project!
I just want to let you know about [Upterm](https://github.com/railsware/upterm): a really great proof of concept, but it stopped because the maintainer was gone. This terminal looks like a 21st-century terminal. Very sad that it isn't supported.





| Issue-Feature,Area-Interop,Area-UserInterface,Area-Extensibility,Product-Terminal,InclusionBacklog,InclusionBacklog-Windows TerminalWin32,A11yMAS,Disability-All | high | Critical |
504,273,941 | flutter | XcodeProjectInterpreter violates our "no timeouts" rule | If these processes are idempotent, then I suggest that we change this to not kill the first process, but to show a message after a few seconds saying "This seems to be taking longer than usual", then start a second process in parallel to see if it returns faster; if it does, kill the first one. If the second takes more than a minute, kill it and try a third one, etc., still leaving the first one running in case it's just being super slow. | c: regression,team,tool,P2,c: flake,team-tool,triaged-tool | low | Major |
504,299,808 | terminal | Now that FontInfo supports >32 characters, make the property sheet and settings support it too | branch reference: 49691f891aeabf02dba506d4c5080c49eac3aaba
followup to #602
To give conhost support for font names greater than 32 characters in length, we need to:
* Fix the registry deserializer to not blat data directly into memory
* Consideration: Make the registry value setter a lambda/function pointer, and bind the `Settings` members into the lambda. Very cool, lets us get rid of the split between `s_GetRegDword` and `s_GetRegString`.
* Figure out what to do with the `GetCurrentConsoleFontEx` API when font names are long
* Propagate the `wstring` and `wstring_view` changes up through `TrueTypeFontList` (modernizing it along the way) and propsheet
* Figure out how to get GDI to comply (`LOGFONTW` is capped at 32 characters as well) | Product-Conhost,Area-Settings,Issue-Task,Priority-3 | low | Minor |
504,301,416 | rust | Tracking issue for `#![feature(entry_insert)]` | This is a tracking issue for the `Entry::insert_entry` method on HashMap and BTreeMap introduced in https://github.com/rust-lang/rust/pull/60142#issuecomment-487416553.
- [x] Implementation for HashMap: #64656
- [x] ~~Stabilised in 1.59: #90345~~ Re-stabilised in 1.83: #130290
- ~~De-stabilized in #94105~~
```rust
impl<'a, K, V> Entry<'a, K, V> {
pub fn insert_entry(self, value: V) -> OccupiedEntry<'a, K, V> {β¦}
}
```
- [x] Implementation for BTreeMap: #133042
- [ ] Stabilization
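For reference, a usage sketch of the `HashMap` side (compiles on Rust 1.83+, where `insert_entry` is stable again):

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<&str, i32> = HashMap::new();
    // insert_entry writes the value and hands back an OccupiedEntry,
    // so the freshly inserted value can be updated without a second lookup.
    let mut entry = map.entry("k").insert_entry(1);
    *entry.get_mut() += 10;
    assert_eq!(map["k"], 11);
}
```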
| A-collections,T-libs-api,B-unstable,C-tracking-issue,disposition-merge,finished-final-comment-period,Libs-Tracked | medium | Critical |
504,306,994 | flutter | Update the Flutter template plugin to use the new embedding and refactor MethodCallHandler to a separate class | As part of the migration to the new plugin embedding, we are refactoring many of the plugins into separate `FlutterPlugin` and `MethodCallHandler` classes.
Once we're ready to start pushing all plugin developers to the new embedding, we should update the template plugin to do so as well. Having the template plugin use two classes will ensure a consistent layout for all plugins regardless of complexity or size, and make it easier to write unit tests of the MethodCallHandler portion.
/cc @amirh | platform-android,tool,P3,a: plugins,team-android,triaged-android | low | Minor |
504,309,796 | flutter | Test with Voiceover on on a real iOS device | We should be doing some integration testing on iOS with voiceover on, to make sure that certain things get selected or that we don't crash etc. when certain gestures/etc. are used. For example, it would enable a better regression test for https://github.com/flutter/engine/pull/12990.
AFAICT, this will require a dedicated device for it in the devicelab. We can't do it on a simulator, and I don't know that there's any way to turn on voiceover in a test. We could have a dedicated iOS device with voiceover enabled, and run some tests that way.
/cc @digiter @godofredoc @jonahwilliams @goderbauer @keyonghan | a: tests,platform-ios,a: accessibility,P2,team-ios,triaged-ios | low | Critical |
504,321,324 | pytorch | Support Anomaly detection for distributed autograd. | The local autograd engine has a very useful anomaly detection mode which provides detailed traces of why the backward pass might've failed: https://pytorch.org/docs/stable/autograd.html#anomaly-detection. We should figure out what anomaly detection would look like for distributed autograd.
cc @ezyang @SsnL @albanD @zou3519 @gqchen @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @aazzolini @xush6528 | module: autograd,triaged,module: rpc | low | Critical |
504,329,270 | godot | VCS does not push changes? | **Godot version:**
3.2 alpha
**OS/device including version:**
windows 10
**Issue description:**
I have to manually push changes that were committed in Godot's new VCS system.
Is it supposed to work this way, or am I missing the push/pull button?

| enhancement,topic:editor | low | Major |
504,330,767 | TypeScript | [Array.prototype.reduce] TS infers accumulator type as any[] | **TypeScript Version:** 3.7-Beta, "noImplicitAny" compiler mode
**Search Terms:**
Array.prototype.reduce, reduce type inference, reduce implicit any
**Code**
```ts
const result = [1].reduce((acc, item) => {
acc.push(item)
return acc
}, [])
result[0].toLowerCase()
```
**Expected behavior:**
1) `result` implicitly has type `any[]` -> compiler error ("noImplicitAny")
or
2) `result` has type `unknown[]` -> compiler error
**Actual behavior:**
`result` has type `any[]`, no errors
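For completeness, the usual workaround today is to instantiate the type parameter (or annotate the seed array), which restores the expected error on misuse:

```ts
const typed = [1].reduce<number[]>((acc, item) => {
  acc.push(item)
  return acc
}, [])
// typed[0].toLowerCase() // now correctly an error: 'toLowerCase' does not exist on 'number'
```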
**Playground Link:** http://www.typescriptlang.org/play/?strictNullChecks=false&strictPropertyInitialization=false&ts=3.7-Beta&ssl=6&ssc=24&pln=1&pc=1#code/MYewdgzgLgBATgUwgVwDawLwwNoEYC6AdIgCbLAIAUlAhsMADQwCWUCAtgJQwYB8MAbwBQMUTDrBCAB2QQAFpVYdOIsYijI4YcfSEBfJtnwqhiFOmwAGIlBAAZEAHcEcAMI0IVTkA
**Related Issues:** https://github.com/microsoft/TypeScript/issues/25454, https://github.com/microsoft/TypeScript/issues/29604
P.S. It seems reasonable to align behavior with other empty array cases, e.g.:
```ts
let x = {
array: [] // Object literal's property 'array' implicitly has an 'any[]' type
}
let x = []
x[0].toLowerCase() // Variable 'x' implicitly has type 'any[]' in some locations where its type cannot be determined
``` | Suggestion,In Discussion | low | Critical |
504,341,126 | flutter | DefaultCupertinoLocalizations.delegate should fallback to English, always. | DefaultCupertinoLocalizations.delegate is only English, which is OK. But for other languages it should always fall back to English.
It should never throw, because there will always be untranslated languages. So you will always need to have a fallback.
```
The following NoSuchMethodError was thrown building LayoutBuilder:
The getter 'myLabel' was called on null.
Receiver: null
Tried calling: myLabel
User-created ancestor of the error-causing widget was:
CupertinoAlertDialog file:///C:/Users/.../dialog_padrao.dart:38:15
When the exception was thrown, this was the stack:
#0 Object.noSuchMethod (dart:core-patch/object_patch.dart:51:5)
#1 CupertinoAlertDialog.build.<anonymous closure> (package:flutter/src/cupertino/dialog.dart:245:40)
#2 _LayoutBuilderElement._layout.<anonymous closure> (package:flutter/src/widgets/layout_builder.dart)
#3 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:2328:19)
#4 _LayoutBuilderElement._layout (package:flutter/src/widgets/layout_builder.dart:95:11)
``` | c: crash,framework,a: internationalization,f: cupertino,c: proposal,P2,team-design,triaged-design | low | Critical |
504,374,969 | pytorch | RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR | I am using PyTorch 1.2.0 with CUDA 10 on Ubuntu 16.04 and a Titan Xp GPU.
When my batch size is 20, I get the following error:
```
File "/home/alireza/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/alireza/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR
```
Interestingly, if I use
`torch.backends.cudnn.enabled = False`
the error disappears and the code runs okay, BUT it is 4 times slower...
Also, if I don't have
`torch.backends.cudnn.enabled = False`
in my code, I notice that reducing the batch size to 10 lets the code run.
Can you please help me figure out what is going on?
I also read [this report](https://github.com/pytorch/pytorch/issues/13219), but it was not helpful. | module: cudnn,module: cuda,triaged | medium | Critical |
504,420,099 | TypeScript | Strict Reflect.apply Reflect.construct and Function.prototype.apply/call/bind | ## Search Terms
[Reflect.apply](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Reflect/apply)
[Reflect.construct](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Reflect/construct)
[Function.prototype.apply](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply)
[Function.prototype.call](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call)
[Function.prototype.bind](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind)
## Suggestion
These functions should not return `any`.
Related:
https://github.com/microsoft/TypeScript/issues/212
https://github.com/microsoft/TypeScript/pull/27028
## Use Cases
```ts
function getString() {
return 'string'
}
class ClassA {}
Reflect.apply(getString, null, []) // expected string but got any
Reflect.construct(ClassA, []) // expected ClassA but got any
Function.prototype.apply.call(getString, undefined) // expected string but got any
Function.prototype.call.call(getString, undefined) // expected string but got any
Function.prototype.bind.call(getString, undefined)() // expected string but got any
```
## Examples
Reflect:
```ts
// old
function apply(target: Function, thisArgument: any, argumentsList: ArrayLike<any>): any;
// new
function apply<T>(target: (...args: any) => T, thisArgument: any, argumentsList: ArrayLike<any>): T;
// old
function construct(target: Function, argumentsList: ArrayLike<any>, newTarget?: any): any;
// new
function construct<T>(target: new (...args: any) => T, argumentsList: ArrayLike<any>, newTarget?: any): T;
```
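A call site illustrating the difference (runtime behavior is identical; only the inferred type changes under the proposed signatures):

```ts
function getString(): string {
  return 'string'
}

// With the current lib typings this is inferred as `any`; under the
// generic `apply` overload above it would be inferred as `string`.
const applied = Reflect.apply(getString, null, [])
if (applied !== 'string') throw new Error('unexpected result')
```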
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
→ Add a new compiler option
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
504,472,502 | TypeScript | Allow specifying interface implements clauses for the static side of classes | ## Search Terms
class static side syntax interface type expression
## Suggestion
Currently, you can only specify the interface of the static side of a class by assigning a class expression to a typed declaration. From [the handbook](https://www.typescriptlang.org/docs/handbook/interfaces.html#difference-between-the-static-and-instance-sides-of-classes):
```ts
const Clock: ClockConstructor = class Clock implements ClockInterface {
constructor(h: number, m: number) {}
tick() {
console.log("beep beep");
}
}
```
When I first wanted to do this (before looking at the docs), I tried to do it in this fashion:
```ts
class Clock: ClockConstructor implements ClockInterface {
...
}
```
And I was surprised to see that it didn't work. My proposal is to make this a valid syntax as it's more intuitive and understandable.
I believe that forcing class expressions conflicts with TypeScript's design goals:
>5. Produce a language that is composable and easy to reason about.
Why use a class expression when there is no need for it? Why change your actual JavaScript logic for something that exists only in TypeScript and not in your production code.
## Use Cases
Anywhere you need to set the interface of the static side of a class without having a need to specify it as an expression.
## Examples
Take the example from the playground:
```ts
interface ClockConstructor {
new (hour: number, minute: number);
}
interface ClockInterface {
tick();
}
class Clock: ClockConstructor implements ClockInterface {
constructor(h: number, m: number) {}
tick() {
console.log("beep beep");
}
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | high | Critical |
504,686,568 | rust | Occasional "cannot move out" error (E0505) when adding a match guard | Given the following snippet:
```rust
struct State {
val: u8,
xy: bool,
}
struct Test {
vec: Vec<State>,
}
impl Test {
fn new() -> Self {
Test { vec: Vec::new() }
}
fn get_mut(&mut self, id: usize) -> Result<&mut u8, Option<&mut State>> {
match self.vec.get_mut(id) {
Some(State { ref mut val, ref xy }) /*if true*/ => Ok(val),
other => Err(other),
}
}
}
```
This compiles just fine. Now uncomment the _if true_ match guard and compilation fails:
```
error[E0505]: cannot move out of `_` because it is borrowed
--> src/lib.rs:18:13
|
15 | fn get_mut(&mut self, id: usize) -> Result<&mut u8, Option<&mut State>> {
| - let's call the lifetime of this reference `'1`
16 | match self.vec.get_mut(id) {
17 | Some(State { ref mut val, ref xy }) if true => Ok(val),
| ----------- ------- returning this value requires that borrow lasts for `'1`
| |
| borrow of value occurs here
18 | other => Err(other),
| ^^^^^ move out of value occurs here
```
Why does the insignificant match guard affect the borrowck?
Tested both on stable 1.38.0 and 1.40.0-nightly (032a53a06 2019-10-03). | C-enhancement,A-borrow-checker,T-compiler,A-NLL,NLL-polonius | low | Critical |
504,688,933 | opencv | Support I/O functionality through std::istream/ostream | ## OE-XX Support I/O functionality through std::istream/ostream
* Status: Draft
## Introduction and Rationale
OpenCV library includes functionality for I/O. Currently it is file-name based.
Current approach has several issues:
- lack of cross-platform Unicode symbols support (only UTF-8 file paths may work)
- it requires files (no direct access to Java packed resources from .JAR/.AAR) or has workarounds like `imdecode()`
## Proposed solutions
Extend functionality of existed I/O functions and their internal backends:
- imread/imwrite
- FileStorage
- DNN's network reader
Expected changes (in case of data reading):
- C++: add overload which accepts `std::istream`
- Python: add wrapper/implementation over `std::istream` for Python-based file/stream handles (like open() calls)
- Java: add wrapper/implementation over `std::istream` for Java InputStream / ByteBuffer / etc
Responsibility for handling Unicode file paths would be moved to the C++ standard library (through `std::wstring` or `std::filesystem::path`).
Similar idea is for Python/Java code.
Extra: Python/Java may support network(HTTP) input streams.
In-memory handling would be improved too (through `std::istream` wrapper).
## Possible issues
Not all image reader backends may support the new feature (this depends on the 3rdparty libraries).
## Impact on existing code, compatibility
No impact is expected on existing C++ code.
## References
Issues/PRs about I/O: #4292 #5631+[comment](https://github.com/opencv/opencv/issues/5631#issuecomment-291991001) #13368(PR) | evolution | low | Minor |
504,705,861 | rust | Adding imports (or even crates) to suggestions | Sometimes we (as in Rust errors or clippy lints) may suggest things that require adding a `use` somewhere, or even adding a crate to the dependencies.
I'd like to see support for this from the compiler, e.g. extend `Suggestion` with a structured way of adding or removing an import or crate dependency. | C-enhancement,T-compiler,A-suggestion-diagnostics,D-diagnostic-infra | low | Critical |
504,728,208 | angular | elements: too many change detection runs during initialization | # 🐞 bug report
### Affected Package
The issue is caused by package @angular/elements.
### Is this a regression?
No.
### Description
During custom element initialization (i.e. [initializeComponent()][1]), the default `NgElementStrategy`, [ComponentNgElementStrategy][2], (transitively) may call [setInputValue()][3], which in turn calls [scheduleDetectChanges()][4]. Right after, `initializeComponent()` calls [detectChanges()][5].
However, despite CD being run synchronously after `setInputValue()`, the scheduled, asynchronous CD (scheduled inside `setInputValue()`) will still run.
This results in two CDs per Angular custom element instance, which can negatively affect initial page loads or route transitions for pages/routes with many custom elements. (This is the case for example in https://angular.io/guide/router, which has ~200 custom elements.)
**Possible solution:**
I think it would be reasonable to cancel scheduled CD, if CD is triggered earlier than scheduled.
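A minimal sketch of that idea (the class and method shapes here are assumptions, loosely following component-factory-strategy.ts): track a pending flag and let a synchronous `detectChanges()` cancel the queued run.

```ts
class CoalescingStrategy {
  private pending = false;
  runs = 0;

  // `schedule` stands in for the async scheduler (e.g. a microtask).
  scheduleDetectChanges(schedule: (cb: () => void) => void): void {
    if (this.pending) return;
    this.pending = true;
    schedule(() => {
      if (this.pending) this.detectChanges(); // skipped if already run
    });
  }

  detectChanges(): void {
    this.pending = false; // cancels any still-queued async run
    this.runs++;
  }
}

const strategy = new CoalescingStrategy();
const queue: Array<() => void> = [];
strategy.scheduleDetectChanges(cb => queue.push(cb));
strategy.detectChanges();   // the synchronous run in initializeComponent()
queue.forEach(cb => cb());  // flushing the queue adds no extra run
if (strategy.runs !== 1) throw new Error('expected exactly one CD run');
```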
(See the reproduction below for a demo of the fix.)
### 🔬 Minimal Reproduction
https://stackblitz.com/edit/ngelements-issue-33059?file=src/app/app.component.ts
### Anything else relevant?
Related issues: #23813, #33060
[1]: https://github.com/angular/angular/blob/9f0c549bc/packages/elements/src/component-factory-strategy.ts#L146-L162
[2]: https://github.com/angular/angular/blob/9f0c549bc/packages/elements/src/component-factory-strategy.ts#L45
[3]: https://github.com/angular/angular/blob/9f0c549bc/packages/elements/src/component-factory-strategy.ts#L168
[4]: https://github.com/angular/angular/blob/9f0c549bc/packages/elements/src/component-factory-strategy.ts#L139
[5]: https://github.com/angular/angular/blob/9f0c549bc/packages/elements/src/component-factory-strategy.ts#L158
| type: bug/fix,area: performance,area: elements,state: confirmed,P4 | low | Critical |
504,737,652 | angular | elements: no coalescing of change detection invocations across instances | # 🐞 bug report
### Affected Package
The issue is caused by package @angular/elements.
### Is this a regression?
No.
### Description
While Angular custom elements do coalesce change detection invocations per element instance (see [scheduleDetectChanges()][1]), there is no coalescing across other instances of the same element or other elements (even for those on the same injector or component tree).
This results in too many CDs on pages/routes with many custom elements and can negatively affect performance. (This is the case for example in https://angular.io/guide/router, which has ~200 custom elements.)
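One possible shape for such coalescing, sketched in Python with hypothetical names (not Angular's API): elements register dirty state with a shared scheduler, which runs a single flush per microtask instead of one scheduled CD per element.

```python
class SharedScheduler:
    """Hypothetical cross-instance coalescing: all elements register
    with one scheduler, which runs a single flush per tick instead of
    one scheduled CD per element."""

    def __init__(self):
        self.dirty = set()
        self.flushes = 0

    def request_cd(self, element):
        self.dirty.add(element)

    def tick(self):
        # One microtask/tick: run CD once for every dirty element.
        if self.dirty:
            self.flushes += 1
            for el in self.dirty:
                el.detect_changes()
            self.dirty.clear()

class Element:
    def __init__(self, scheduler):
        self.scheduler = scheduler
        self.cd_runs = 0

    def set_input(self, value):
        self.scheduler.request_cd(self)

    def detect_changes(self):
        self.cd_runs += 1

scheduler = SharedScheduler()
elements = [Element(scheduler) for _ in range(200)]
for el in elements:
    el.set_input("x")
scheduler.tick()
print(scheduler.flushes)  # 1 flush for 200 elements, not 200
```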
### Anything else relevant?
Related issues: #23813, #33059
_EDIT:_
_Similar issue with useful context (closed in favor of this one): #23813_
_PR (that would at least cover some usecases): #23885_
[1]: https://github.com/angular/angular/blob/9f0c549bc/packages/elements/src/component-factory-strategy.ts#L206-L215
| type: bug/fix,area: performance,area: elements,state: confirmed,P4 | low | Critical |
504,756,611 | pytorch | Unify warning logging mechanism | We have `TORCH_WARN` macro, and glog also provides `LOG(WARNING)` macro.
To recount the differences between these macros:
* `TORCH_WARN` integrates with Python warning mechanism, when the relevant C++ code was invoked from Python. Warnings reported this way can be handled using regular facilities for capturing and aggregating warnings; e.g., in some tests, we use this to ensure that a warning is actually reported. When Python is not involved, we simply print warnings to stderr
* `LOG(WARNING)` integrates with glog, which means that its visibility can be toggled via standard command line flags from the glog/gflags ecosystem. glog based logging is the standard logging mechanism in Facebook and integrates with Facebook logging infrastructure.
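For illustration only, the combined behaviour can be modelled in Python (hypothetical helper, not the actual C++ design): one entry point that feeds both the Python warning machinery (so tests can capture it) and a glog-style logger (so logging infrastructure can filter it).

```python
import logging
import warnings

logger = logging.getLogger("warn_sketch")

def unified_warn(message):
    """Hypothetical sketch: emit the same warning through both sinks,
    mirroring TORCH_WARN (Python-visible) and LOG(WARNING) (logger)."""
    warnings.warn(message, stacklevel=2)  # capturable via Python tooling
    logger.warning(message)               # filterable via logging config

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    unified_warn("this op is deprecated")

print(len(caught), str(caught[0].message))  # 1 this op is deprecated
```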
I believe an ideal warning mechanism combines the best parts of both of these systems (integration with Python, and integration with glog infrastructure). We should unify these mechanisms. | module: internals,triaged,enhancement,better-engineering | low | Minor |
504,760,715 | pytorch | [FR][RFC] Build a serving framework to host and serve trained PyTorch models | ## π Feature
**Document: https://torchserve-docs.s3-us-west-2.amazonaws.com/docs/torchserve_architecture_v0.pdf**
**NOTE: The above document is a draft and subject to change**
Request to build a Serving framework to host and serve trained PyTorch models.
@alexwong
## Motivation
PyTorch provides an excellent and easy-to-use interface to train a model and also provides an easy-to-use, optimized inference interface through JIT. Currently, there is a need for an optimized model serving solution to take any model trained using the PyTorch framework into production. There exist multiple solutions to serve a PyTorch model in production, but most of them are generic model serving solutions. There are multiple pain points that the PyTorch community currently faces when trying to take a PyTorch model into production.
* Building a high performance web serving component to host PyTorch models is difficult to build and requires experience and domain knowledge.
* Adding custom pre-processing and post-processing for a model in service currently requires significant rework on the model server itself.
* Supporting multiple accelerators requires additional work.
* Any customization to the model server would require significant understanding of the existing serving framework itself and would also require significant rework.
We think the following are the most important requirements of a good PyTorch model serving framework.
## Pitch
We want to contribute to building a serving framework that addresses the above pain points and more. We foresee that a PyTorch model serving solution will have the following capabilities:
1. **Performance**: The server component should be highly performant with low overhead from the serving framework. This implies that the average throughput must be high and the P90 latencies should be low. It's also important to keep the P50 and P90 latencies comparatively close, signifying that all requests are treated equally.
2. **Host Multiple Models**: The server component should be able to host multiple models at the same time and customers should be able to load/unload a model at runtime. The model serving framework should expose an endpoint for each model, which can be reached by any customer.
3. **High Availability**: The serving framework should be robust. Runtime errors of one model shouldn't affect other models running on the server or the runtime of the server itself. There should be mechanisms to recover from any out-of-resource errors.
4. **Metrics and Logs**: A production grade serving framework should provide insight into the runtime of the model. The serving framework should provide easy access to logs and metrics and also provide easy hooks to add new logs and metrics without needing to understand the serving framework at a deep level.
5. **Support both Eager mode and Scripted mode models**: The serving framework should support means to run PyTorch models in scripted mode for optimized execution.
6. **Support for multiple bindings**: The serving framework should have support for models loaded via Python (eager/torchscript) or C++ bindings (JIT IR traces).
7. **Supports HTTP and gRPC Endpoints**: The serving framework should come with a full set of HTTP endpoints for managing models as well as running inference on the models. PyTorch serve would also come with an SDK to easily customize the endpoints. The serving framework would also support gRPC endpoints.
8. **Ease of use and access**: The serving framework should be easy to set up on any platform (MacOS, Linux, Windows) and should be testable on these systems. Users should have the same experience to containerize the Serving framework and launch into production using any container orchestration mechanism. The PyTorch serve framework would also have a fully featured CLI to start and run the model server.
9. **Lightweight**: This implies that the serving component itself shouldnβt have multiple dependencies.
10. **Supports features such as request batching**: The serving framework would have features such as request batching, to optimally run inference on accelerators.
11. **Support model Versioning and A/B testing**: The serving framework should have capabilities to load multiple versions of the same model and run A/B tests on the model. This is very useful for when rolling out newer versions of a model into production. This can also be used to roll back if the new model is not as performant.
12. **Zero code serving**: While providing the feature to customize the pre-processing and post-processing of inference requests, the PyTorch serve framework should also allow customers to simply drop their trained models into the server and use it for inference. In other words, the PyTorch serving framework should come with sensible defaults for pre processing and post processing.
13. **Easy customizability**: The serving framework must be easy to customize for endpoints. This means easily modifying and adding new management endpoints, defining custom request batching algorithms, defining custom AuthZ and AuthN mechanisms.
14. **Support Accelerators**: A production grade model server should be able to run on GPU hosts as well as any other custom accelerator host.
15. **Web UI**: The PyTorch serving framework should come with a Web UI to allow interaction with a served model.
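As a side note on capability 1, the P50/P90 figures it refers to are plain latency percentiles; with Python's standard library they can be computed as below (an illustrative sketch, not part of the proposal):

```python
import statistics

def latency_percentiles(samples_ms):
    """P50/P90 from a list of per-request latencies in milliseconds."""
    cuts = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": cuts[49], "p90": cuts[89]}

samples = list(range(1, 101))        # 1..100 ms, uniform
print(latency_percentiles(samples))  # {'p50': 50.5, 'p90': 90.1}
```

A well-behaved server keeps these two numbers close; a large P90/P50 gap means a tail of requests is being starved.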
### Proposed Technical Details
The model server will be based on a microservice-based architecture rather than a monolithic approach. This architecture would bring us the benefit of decoupling the work of handling ingress requests and running the actual inference. This also allows the PyTorch serving framework to scale beyond a single host. The high-level components of the serving framework are divided into a frontend and a backend, which have different responsibilities.
#### Frontend responsibilities:
1. **Manage connections**: In other words, the incoming requests and outgoing responses are managed by the frontend.
2. **Manage models**: The frontend is responsible for the lifecycle of the model. Each hosted model will have its unique endpoint created, which could take any data type and return any data type back.
3. **Manage the backend**: The frontend is responsible for providing the models to be loaded onto the backend workers and also managing the backend workers themselves.
4. **Manage requests**: Requests coming into the server's frontend will be queued in model-specific queues for handling.
5. **Request distribution**: The frontend will be responsible for request distribution, to backend workers.
6. **Metrics and Logs**: The frontend will be responsible for metrics management, logs management, and capturing any custom metrics and logs that come from the backend.
7. **Retrieve models from anywhere** : The frontend is also responsible for retrieving the models from cloud or local storage.
#### Backend responsibilities:
1. **Running inference**: Tightly integrate with PyTorch backend and also responsible for running any preprocessing required, running the *forward* method of the model with the incoming request and running post process on the inference response.
2. **Default pre-process, inference and post-process**: If no custom processing logic is provided to the backend, it would have default logic to run preprocess, inference and post process on the model.
3. **Publish custom metrics and logs**: Backend will have the capabilities to publish custom model level metrics and logs.
### Proposed sequence diagrams
#### Monitoring status of the Server

#### Checking status of the models

#### Running inference

#### Loading model

#### Deleting a model
## Next Steps
* We are looking for feedback/comments on this proposal. Specifically, we are looking for feedback on the list of capabilities outlined in this RFC and their priority. We also welcome feedback on our proposed design and implementation.
* Add details on proposed architecture and details on endpoint.
* Add additional sequence diagrams.
* Target Q4 2019 for Experimental release.
| feature,triaged | high | Critical |
504,764,087 | flutter | Support golden widget tests in Fuchsia tree | a: tests,customer: fuchsia,framework,dependency: fuchsia,platform-fuchsia,P2,team-framework,triaged-framework | low | Minor |
504,788,788 | create-react-app | Make font size of error overlay configurable | ### Is your proposal related to a problem?
[The error overlay font size is too small ](https://github.com/facebook/create-react-app/blob/68f95d41334f65fd1dc12c329fda12c719e18b3f/packages/react-dev-utils/webpackHotDevClient.js#L72) for some people
### Describe the solution you'd like
It would be good if it could be configurable
### Describe alternatives you've considered
None
### Additional context
None
| issue: proposal | low | Critical |
504,789,712 | opencv | cv::magnitudeSqr and cv::magnitude(InputArray xy,...) ? | OpenCV 4.1:
cv::cuda::magnitude... functions have more options than cv::magnitude..., because:
- there is a cv::cuda::magnitudeSqr()
- cv::cuda::magnitude(...) can accept x and y as interleaved rather than two planes.
- cv::magnitudeSqr() would require the use of ippsPowerSpectr() rather than ippsMagnitude()
- cv::magnitude(xy) would require the use of "ipp...32/64fc" rather than "ipp...32/64f" functions
- the biggest part of the work would be in ocl/hal to support new functions like magnitudeSqr() and magnitude32c().
Considering that work, would it be accepted as a pull request, or would it be rejected as not useful enough to justify more code?
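To make the planar-vs-interleaved distinction concrete, here is the arithmetic in plain Python (illustrative only, no OpenCV involved): both layouts yield the same squared magnitudes, the difference is purely how x and y are stored.

```python
def magnitude_sqr_planar(xs, ys):
    """Plain-Python model of the proposed magnitudeSqr on two planes:
    x*x + y*y, with no square root."""
    return [x * x + y * y for x, y in zip(xs, ys)]

def magnitude_sqr_interleaved(xy):
    """Same result from one interleaved x0,y0,x1,y1,... buffer, the
    layout cv::cuda::magnitude already accepts."""
    return [xy[i] ** 2 + xy[i + 1] ** 2 for i in range(0, len(xy), 2)]

print(magnitude_sqr_planar([3.0, 0.0], [4.0, 5.0]))     # [25.0, 25.0]
print(magnitude_sqr_interleaved([3.0, 4.0, 0.0, 5.0]))  # [25.0, 25.0]
```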
| feature,category: core | low | Minor |
504,790,822 | pytorch | scatter_add allows index tensor that doesn't match input size in forward pass but fails on backward pass | ## π Bug
If you have an index tensor whose shape does not match the other tensor's shape but is broadcast-compatible, the forward pass of `torch.scatter_add()` will succeed but the backward pass will throw a RuntimeError saying that the shapes do not match exactly on the non-indexed dimension.
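For context, the two shape relationships the error messages mention can be sketched without PyTorch (helper names hypothetical): the index is broadcast-compatible with the scattered values, but not equal to them apart from the scatter dimension, which is the stricter test the backward pass applies.

```python
def broadcast_compatible(a, b):
    """True if shapes a and b could broadcast together
    (trailing dimensions equal, or one of them is 1)."""
    for x, y in zip(reversed(a), reversed(b)):
        if x != y and 1 not in (x, y):
            return False
    return True

def same_except_dim(a, b, dim):
    """The stricter check the backward kernel applies: shapes equal
    everywhere except the scatter dimension `dim`."""
    return len(a) == len(b) and all(
        x == y for i, (x, y) in enumerate(zip(a, b)) if i != dim
    )

ind, ofs = (8, 626304, 1), (8, 626304, 3)
print(broadcast_compatible(ind, ofs))  # True  -> forward pass accepts it
print(same_except_dim(ind, ofs, 1))    # False -> backward pass rejects it
```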
## To Reproduce
Repro script:
```python
import torch as th
src = th.zeros(8, 105238, 3)
ind = th.zeros(8, 626304, 1).long()
ofs = th.zeros(8, 626304, 3)
src.requires_grad_(True)
ofs.requires_grad_(True)
out = src.scatter_add(1, ind, ofs)
#out = src.scatter_add(1, ind.expand(-1, -1, 3), ofs) <<< this line works
l = out.sum()
l.backward()
```
Exception (CPU):
```
RuntimeError: Expected tensor [8, 626304, 1], src [8, 105238, 3] and index [8, 626304, 1] to have the same size apart from dimension 1
```
Exception (CUDA):
```
RuntimeError: invalid argument 2: Input tensor must have same size as output tensor apart from the specified dimension at /mnt/home/gbschwartz/pytorch/aten/src/THC/generic/THCTensorScatterGather.cu:27
```
## Expected behavior
No exceptions thrown.
## Environment
- PyTorch Version (e.g., 1.0): 1.2.0
- OS (e.g., Linux): linux
- How you installed PyTorch (`conda`, `pip`, source): conda
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version: 9.0 / 7.6.0
- GPU models and configuration: V100
cc @ezyang @gchanan @zou3519 | high priority,module: crash,triaged | medium | Critical |
504,801,917 | pytorch | [feature request] [dataloader] Pad variable-sized tensors in default_collate | It's more and more frequent to deal with variable-sized inputs (with max size being determined at collation time). Currently one deals with those by writing a custom `collate_fn`, but it adds some boilerplate.
I propose to evaluate the idea of putting this into the default `collate_fn`. Some ideas:
0) the frequent cases are 1d sequences and 2d images
1) allow `torch.stack` / `torch.cat` to deal with padding variable-sized inputs
2) deal with padding in `collate_fn`
3) convert `default_collate` to a `Collate` class / module with a `__call__` interface; padding options can then be passed as arguments to the constructor
4) one useful padding config is padding the sequence size to multiples of some size like 16/64/128 (better for the PyTorch allocator)
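A dependency-free sketch of the padding ideas above (pad every sequence to the batch max, optionally rounded up to a multiple; the function name is hypothetical):

```python
def pad_collate(batch, pad_value=0, multiple_of=1):
    """Pad variable-length 1-d sequences to the batch max length,
    optionally rounded up to a multiple (e.g. 16/64/128)."""
    target = max(len(seq) for seq in batch)
    if multiple_of > 1:
        target = -(-target // multiple_of) * multiple_of  # ceil division
    return [list(seq) + [pad_value] * (target - len(seq)) for seq in batch]

batch = [[1, 2, 3], [4, 5], [6]]
print(pad_collate(batch))                 # [[1, 2, 3], [4, 5, 0], [6, 0, 0]]
print(pad_collate(batch, multiple_of=4))  # [[1, 2, 3, 0], [4, 5, 0, 0], [6, 0, 0, 0]]
```

A real implementation would stack the padded lists into a tensor, but the length arithmetic is the part a default `collate_fn` would need to standardize.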
cc @SsnL | module: dataloader,triaged,enhancement | low | Minor |
504,918,259 | godot | CLI debugger (--debug) does not work with standalone scripts (--script) | **Godot version:**
3.2.alpha1.official
**OS/device including version:**
Mac OS X Mojave
**Issue description:**
As per documentation from godot -h:
```
Debug options:
-d, --debug Debug (local stdout debugger).
-b, --breakpoints Breakpoint list as source::line comma-separated pairs, no spaces (use %20 instead).
```
I am trying to use the -b and -d options from the command line, but it's as if nothing happens.
`godot -s somescript.gd --debug --breakpoints somescript.gd::3`
**Steps to reproduce:**
Create `somescript.gd` script and execute above CLI command.
```gdscript
extends SceneTree
var somevar = 20
var foo1='abcd'
func _init():
print(foo1)
``` | bug,topic:core,confirmed | low | Critical |
504,923,782 | flutter | "Dart compiler exited unexpectedly" running `flutter test` on a non-Flutter package | ## Steps to Reproduce
I accidentally ran `flutter test` on a non-Flutter package and got "Dart compiler exited unexpectedly". While I do not expect `flutter test` to succeed running my tests in this package, I'd expect a more useful error message.
Currently this is reproducible by running `flutter test` on https://github.com/flutter/engine/tree/master/lib/web_ui.
<!--
Please tell us which target platform(s) the problem occurs (Android / iOS / Web / macOS / Linux / Windows)
Which target OS version, for Web, browser, is the test system running?
Does the problem occur on emulator/simulator as well as on physical devices?
-->
**Target Platform:** Linux
## Logs
`flutter test` output:
```
00:12 +0 -6: loading /home/yjbanov/code/flutter/engine/src/flutter/lib/web_ui/test/engine/recording_canvas_test.dart [E]
Failed to load "/home/yjbanov/code/flutter/engine/src/flutter/lib/web_ui/test/engine/recording_canvas_test.dart":
Compilation failed
Test: /home/yjbanov/code/flutter/engine/src/flutter/lib/web_ui/test/engine/recording_canvas_test.dart
Shell: /home/yjbanov/code/flutter/flutter/bin/cache/artifacts/engine/linux-x64/flutter_tester
00:12 +0 -6: loading /home/yjbanov/code/flutter/engine/src/flutter/lib/web_ui/test/text/word_breaker_test.dart [E]
Exception: the Dart compiler exited unexpectedly.
package:flutter_tools/src/base/common.dart 28:3 throwToolExit
package:flutter_tools/src/compile.dart 625:9 ResidentCompiler._compile.<fn>
package:stack_trace/src/stack_zone_specification.dart 129:26 StackZoneSpecification._registerUnaryCallback.<fn>.<fn>
package:stack_trace/src/stack_zone_specification.dart 209:15 StackZoneSpecification._run
package:stack_trace/src/stack_zone_specification.dart 129:14 StackZoneSpecification._regist
```
`flutter doctor -v`:
```
[β] Flutter (Channel unknown, v1.6.1-pre.1816, on Linux, locale en_US.UTF-8)
β’ Flutter version 1.6.1-pre.1816 at /home/yjbanov/code/flutter/flutter
β’ Framework revision 94f15559ea (2 days ago), 2019-10-07 09:37:14 -0700
β’ Engine revision 7d90779bb6
β’ Dart version 2.6.0 (build 2.6.0-dev.5.0 d6c6d12ebf)
[β] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
β’ Android SDK at /home/yjbanov/Android/Sdk
β’ Android NDK location not configured (optional; useful for native profiling support)
β’ Platform android-28, build-tools 28.0.3
β’ Java binary at: /usr/local/buildtools/java/jdk/bin/java
β’ Java version OpenJDK Runtime Environment (build 1.8.0_181-google-v7-270956655-270956655)
β’ All Android licenses accepted.
[!] Android Studio (not installed)
β’ Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/setup/#android-setup for detailed instructions).
[!] IntelliJ IDEA Ultimate Edition (version 2018.3)
β’ IntelliJ at /opt/intellij-ue-2018.3
β Flutter plugin not installed; this adds Flutter specific functionality.
β’ Dart plugin version 183.5153.38
β’ For information about installing plugins, see
https://flutter.dev/intellij-setup/#installing-the-plugins
[!] IntelliJ IDEA Ultimate Edition (version 2019.1)
β’ IntelliJ at /opt/intellij-ue-2019.1
β Flutter plugin not installed; this adds Flutter specific functionality.
β’ Dart plugin version 191.7830
β’ For information about installing plugins, see
https://flutter.dev/intellij-setup/#installing-the-plugins
[β] VS Code (version 1.37.1)
β’ VS Code at /usr/share/code
β’ Flutter extension version 3.4.1
[!] Connected device
! No devices available
! Doctor found issues in 4 categories.
```
| c: new feature,tool,P3,team-tool,triaged-tool | low | Critical |
504,940,614 | pytorch | torch.distributed.autograd.backward() should populate .grad field on Tensors by default. | In the implementation for the backward pass in https://github.com/pytorch/pytorch/pull/27022, we accumulate the gradients in the autograd context by default. In order to have symmetry with `torch.autograd.backward`, the API should be something like this:
```
torch.distributed.autograd.backward(tensors, grads, accumulate_grad_on_tensors = True)
```
When `accumulate_grad_on_tensors` is set to False, we accumulate the grads on the autograd context, otherwise we accumulate the grads on the .grad field.
cc @ezyang @SsnL @albanD @zou3519 @gqchen @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @aazzolini @xush6528 | module: autograd,triaged,module: rpc | low | Minor |
504,941,122 | pytorch | Test re-entrant backward works with torch.distributed.autograd.backward() | Need to add a unit test to ensure re-entrant backward works with the distributed backward pass.
cc @ezyang @SsnL @albanD @zou3519 @gqchen @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @aazzolini @xush6528 | module: autograd,triaged,module: rpc | low | Minor |
504,944,051 | pytorch | π Graceful RPCAgent termination in multi-driver scenario | # Problem
* Currently, `sync()`, `join()` interface is implemented in and only for `ProcessGroupAgent`.
* Other `RPCAgent`s also need it, but they shouldn't have to implement it again.
* What stops the current solution from being reused is that `ProcessGroupAgent` uses collective communication to achieve it, while by default, other `RPCAgent`s don't have this facility.
# Proposal
* Problem 1
* Statement
* Currently, we ask users to call `init_process_group` before calling `init_model_parallel`. One user error could be that the user passes 2 different init_methods, resulting 2 different node groups. One for c10d data parallel default process group and the other for model parallel node group.
* Solution
* We create a model parallel default process group inside of `init_model_parallel` for users. Notice, the default process group here is a model-parallel scope default. In order to make naming scopes of data parallelism and model parallelism distinct. We decide to do the following refactoring.
* Move `torch.distributed.*`, like `torch.distributed.barrier`, to `torch.distributed.data_parallel.*`.
* Keep `torch.distributed.rpc.*` in case data parallel want to use RPC in the future.
* Create `torch.distributed.model_parallel.*`. Inside it, it could import `torch.distributed.rpc` to use.
* The `init_model_parallel` API initializes a model parallel default process group for users. Users can call `torch.distributed.model_parallel.barrier` later.
* Problem 2
* Statement
* Is `ProcessGroupAgent::sync` still needed? It essentially calls `barrier()` twice. Shouldn't `barrier` + `drain` be enough to gracefully shut down symmetric workers (i.e. workers that are not driven by a master)?
* Solution
* Remove `sync`, `join` interface in `RpcAgent`. For user cases where there is no master in charge, make every worker call `barrier` + `drain` at the end, before shutting down local `RPCAgent`.
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar | high priority,triaged,better-engineering,module: rpc | low | Critical |
504,965,655 | pytorch | Avoid RTTI in DistEngine | As mentioned in the comment here: https://github.com/pytorch/pytorch/pull/27022#discussion_r332294507, we should try and avoid RTTI in DistEngine.
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 | triaged,better-engineering,module: rpc | low | Minor |
504,973,823 | rust | Verify trait impls' signatures before checking dyn compatibility | Given code like this:
```rust
mod obj_unsafe {
pub trait Serialize {
fn serialize<T>(&self, t: T);
}
}
mod obj_safe {
pub trait Serialize {
fn serialize(&self);
}
}
trait DoSomething {
fn foo(&self) -> Box<dyn obj_safe::Serialize>;
}
impl DoSomething for () {
fn foo(&self) -> Box<dyn obj_unsafe::Serialize> {
unimplemented!()
}
}
```
We emit an error like this:
```
error[E0038]: the trait `obj_unsafe::Serialize` cannot be made into an object
--> src/lib.rs:18:5
|
18 | fn foo(&self) -> Box<dyn obj_unsafe::Serialize> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `obj_unsafe::Serialize` cannot be made into an object
|
= note: method `serialize` has generic type parameters
```
---
This error is something that was encountered by someone who read the documentation and saw they needed to implement a method returning `Box<dyn Serialize>`. An obvious assumption is that this refers to `serde::Serialize`, so that's what they specified in their return type when implementing `DoSomething`.
The error they got does not point them at the problem at all. Sure, `Serialize` from serde is not object safe, but that is irrelevant, because they are trying to make an object of an entirely wrong trait in the first place.
Instead we should be emitting `error[E0053]: method `foo` has an incompatible type for trait` first. | C-enhancement,A-diagnostics,A-trait-system,T-compiler,A-trait-objects,A-dyn-compatibility | low | Critical |
504,980,062 | PowerToys | [FancyZones] Map keyboard shortcuts to zones directly to zone windows quicker | I want to hit a user definable shortcut and the focused window should resize to the zone which is mapped to this shortcut .
I would map some keys (e.g. on the numpad) to my zones, e.g. 4 of them. Now I can just press one of these keys and my window is resized to the chosen zone.
Cycling with WIN + arrow is nice, but just hitting a key is a lot faster! | Idea-Enhancement,FancyZones-Dragging&UI,FancyZones-Hotkeys,Product-FancyZones | high | Critical |
504,984,858 | PowerToys | PowerToys: very fast switching between apps with Alt + Tab should not show the window overview (like on linux/mac) to be less distracting | In Windows Alt+Tab shows immediately the overview. So regardless how fast you switch between apps, you see the overview flickering a short moment.
In macOS and Ubuntu the overview is only shown when you don't release the key very quickly. Please have a look at an actual machine to get a feeling for how it works there.
1. So with keeping the Alt key pressed, you get the task overview after 200ms (value is just a guess, but it's very quick and you don't notice the delay when you really want to see the overview).
2. But when releasing the keys instantly, you don't see the task overview at all, which makes fast switching between apps a lot less visually distracting.
It would be so great when I could have this in Windows too. Maybe an idea for a PowerToys tool?
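The two rules above can be captured by a tiny timing model (Python sketch; the 200 ms threshold is the guess from point 1, not a Windows constant):

```python
SHOW_DELAY_MS = 200  # the guessed threshold from point 1 above

def overlay_shown(press_ms, release_ms):
    """Deterministic model of the requested behaviour: the overview
    only appears if Alt is still held when the delay expires, so an
    instant press-and-release switches apps with no visible flicker."""
    return release_ms - press_ms >= SHOW_DELAY_MS

print(overlay_shown(0, 50))   # False: instant switch, no overlay
print(overlay_shown(0, 400))  # True: user is browsing the overview
```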
| Idea-New PowerToy | medium | Major |
505,024,974 | terminal | Support for OpenType Contextual Alternates Fonts to Allow e.g. Sparklines | <!--
-->
# Description of the new feature/enhancement
I'd love to see support for fonts like [AtF Sparks](https://aftertheflood.com/projects/sparks/).
This will allow writing much richer console apps without the need for complex dialog boxes to e.g. monitor a running process.

# Proposed technical implementation details (optional)
This is apparently achieved by supporting "OpenType Contextual Alternates".

Support for fonts like AtF Sparks will give us the ability to do line-based charting.
Apps could plot historic values as sparklines à la Task Manager, or report the distribution of values using histogram-like charts. Very helpful for IoT devices, web services, financial apps, etc.
There are many applications that do not require a GUI, and adding charting facilities would give them a significant power boost. | Issue-Feature,Area-Rendering,Product-Terminal | low | Critical |
505,034,122 | neovim | wait(): timeout may occur, before condition is eval'd timeout/interval times | - `nvim --version`: NVIM v0.5.0-164-g3b3a40978
- `vim -u DEFAULTS` (version: ) behaves differently? N/A
- Operating system/version: N/A
- Terminal name/version: N/A
- `$TERM`: N/A
### Steps to reproduce using `nvim -u NORC`
Create the following test.vim.
```vim
function! Count()
let g:counter += 1
return g:counter
endfunction
let g:num_trails = 0
let timer = timer_start(1, { -> execute('sleep 150m') }, {'repeat' : -1 })
while(v:true)
let g:counter = 0
let g:num_trails += 1
call wait(100, 'Count() >= 5', 20)
if g:counter == 1
echomsg "g:counter is 1, trails "..g:num_trails
break
endif
endwhile
call timer_stop(timer)
```
```
nvim -u NORC
:so test.vim
```
### Actual behaviour
Eventually, the while loop ends.
One case:
```
g:counter is 1, trails 147752
```
### Expected behaviour
The while loop should never end.
As is apparent from the code, after the condition is first evaluated (at 0 s), it may not be evaluated even once more during the timeout period. Is this intended?
I think it should be implemented like the following, but am I wrong?
```C
#define WAIT_UNTIL(loop, multiqueue, timeout, condtion) \
do { \
if (!(condtion) && timeout != 0) { \
int remaining = timeout; \
uint64_t before = os_hrtime(); \
do { \
if (remaining > 0) { \
uint64_t now = os_hrtime(); \
remaining -= (int) ((now - before) / 1000000); \
before = now; \
if (remaining <= 0) { \
break; \
} \
} \
LOOP_PROCESS_EVENTS(loop, multiqueue, remaining); \
} while (!(condtion)); \
} \
} while(0)
```
If we keep the current implementation, I think the following help description should be modified.
> Condition is evaluated on user events, internal events, and every {interval} milliseconds (default: 200).
In practice, the condition is evaluated only if the timeout has not been reached at the end of processing user events and internal events. I don't know how to explain the interval, but there is no guarantee that the condition will be evaluated every {interval} milliseconds.
| documentation,event-loop | low | Major |
505,037,215 | flutter | Android Javadoc emits lots of warnings and errors | For some reason, this passes on CI but not locally. Even on CI, many warnings and errors can be seen, e.g. https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket.appspot.com/8900017607924747344/+/steps/build_javadoc/0/stdout
Most of these are due to missing documentation for parameters/returns.
/cc @matthew-carroll @mklim | platform-android,engine,d: api docs,P2,team-android,triaged-android | low | Critical |
505,059,422 | pytorch | I can't set gpu is 1 it always use gpu 0 | here is my code
```cpp
torch::Device deviceInfo(torch::kCUDA, 1);
module->to(deviceInfo);
```
but I got this error:
terminate called after throwing an instance of 'c10::Error'
what(): CUDA out of memory. Tried to allocate 240.00 MiB (GPU 0; 10.92 GiB total capacity; 74.36 MiB already allocated; 147.50 MiB free; 11.64 MiB cached) (malloc at ../c10/cuda/CUDACachingAllocator.cpp:267)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7ff1cbc4201a in /home/training/pytorch/torch/lib/libc10.so)
frame #1: <unknown function> + 0x2009c (0x7ff1c32fd09c in /home/training/pytorch/torch/lib/libc10_cuda.so)
frame #2: <unknown function> + 0x20db3 (0x7ff1c32fddb3 in /home/training/pytorch/torch/lib/libc10_cuda.so)
frame #3: at::native::empty_cuda(c10::ArrayRef<long>, c10::TensorOptions const&, c10::optional<c10::MemoryFormat>) + 0x282 (0x7ff1ce9fe2b2 in /home/training/pytorch/torch/lib/libtorch.so)
frame #4: <unknown function> + 0x584a765 (0x7ff1d16a0765 in /home/training/pytorch/torch/lib/libtorch.so)
frame #5: at::native::to(at::Tensor const&, c10::Device, c10::ScalarType, bool, bool) + 0x8b9 (0x7ff1cf382889 in /home/training/pytorch/torch/lib/libtorch.so)
frame #6: at::TypeDefault::to(at::Tensor const&, c10::Device, c10::ScalarType, bool, bool) + 0x25 (0x7ff1cf6c7535 in /home/training/pytorch/torch/lib/libtorch.so)
frame #7: <unknown function> + 0x54aca6e (0x7ff1d1302a6e in /home/training/pytorch/torch/lib/libtorch.so)
frame #8: torch::jit::load(std::unique_ptr<caffe2::serialize::ReadAdapterInterface, std::default_delete<caffe2::serialize::ReadAdapterInterface> >, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x222 (0x7ff1d1304052 in /home/training/pytorch/torch/lib/libtorch.so)
frame #9: torch::jit::load(std::istream&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x75 (0x7ff1d13041f5 in /home/training/pytorch/torch/lib/libtorch.so)
frame #10: TorchNet::loadModelEncode(char const*, unsigned char*) + 0x169 (0x40cbe9 in ./vehicleFeature)
frame #11: TorchNet::LoadModel(PES_Params_Config) + 0x5e (0x40cdfe in ./vehicleFeature)
frame #12: TorchOperation::LoadModel(PES_Params_Config) + 0x48 (0x418e98 in ./vehicleFeature)
frame #13: VehicleFeature::VehicleFeature(char const*, int) + 0xc2 (0x4192d2 in ./vehicleFeature)
frame #14: test() + 0x217 (0x41a217 in ./vehicleFeature)
frame #15: main + 0x9 (0x4098c9 in ./vehicleFeature)
frame #16: __libc_start_main + 0xf0 (0x7ff1ca45c830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #17: _start + 0x29 (0x409929 in ./vehicleFeature)
Aborted (core dumped)
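For what it's worth, the trace above shows the allocation failing inside `torch::jit::load` (frames #8–#9 take an optional `c10::Device`), i.e. the weights are materialized on GPU 0 before `to()` ever runs, so passing the target device to the load call itself should help. A coarser workaround is to restrict device visibility from the environment before launching; a sketch (`./vehicleFeature` is the binary from the trace, left commented out here):

```shell
# Expose only the second physical GPU to the process. CUDA renumbers the
# visible devices, so inside the process the remaining card is device 0 and
# default allocations land on the GPU you actually wanted.
export CUDA_VISIBLE_DEVICES=1
echo "visible devices: $CUDA_VISIBLE_DEVICES"
# ./vehicleFeature    # then launch the application as usual
```

This avoids touching the code at all, at the cost of hiding the other GPUs from the whole process.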
cc @yf225 | needs reproduction,module: cpp,module: cuda,low priority,triaged | low | Critical |
505,065,653 | go | runtime: manual instrumentation of KeepAlive is fraught |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Likely, yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/bjorn/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/bjorn/src/go"
GOPROXY="http://claudette.applied-maths.local:13909"
GORACE=""
GOROOT="/home/bjorn/opt/go"
GOTMPDIR=""
GOTOOLDIR="/home/bjorn/opt/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/bjorn/work/src/i41healthapp/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build701296847=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
I am currently writing a game library in Go that avoids cgo and calls the
OS directly, certainly on Linux thanks to its stable kernel API. (See:
https://gitlab.com/beoran/galago <https://gitlab.com/beoran/galago/blob/master/os/linux/input/input_linux.go>
)
I have an input Device like this:
```go
// Device models an input device
type Device struct {
FileName string
*os.File
// Cached device information
Info struct {
Events []SupportedEvent
Keys []SupportedKey
Axes []AbsoluteAxis
Rolls []RelativeAxis
Name string
ID string
}
}
// Keeps the device's file from being garbage collected.
func (d * Device) KeepAlive() {
runtime.KeepAlive(d.File)
}
// Ioctl performs an ioctl on the given device
func (d * Device) Ioctl(code uint32, pointer unsafe.Pointer) error {
fmt.Printf("ioctl: %d %d %d\n", uintptr(d.Fd()), uintptr(code),
uintptr(pointer))
_, _, errno := syscall.Syscall(
syscall.SYS_IOCTL,
uintptr(d.Fd()),
uintptr(code),
uintptr(pointer))
if (errno != 0) {
return errno
}
d.KeepAlive()
return nil
}
```
Notice the KeepAlive? If I leave that out the program crashes, because the device's io.File gets garbage collected. This took me quite some time to figure out; it is not obvious that this would happen and that runtime.KeepAlive() is needed here. It would be great if I didn't have to manually insert runtime.KeepAlive calls when using os.File.Fd with system calls. Or if there was at least vet/lint tooling to suggest when I would probably need it.
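The pattern can be condensed into a runnable sketch (assumptions: Linux, and `Fstat` standing in for the input-device ioctl). Note that here `KeepAlive` runs on every return path, not only on success as in the snippet above:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"syscall"
)

// fdStat shows the pattern from the issue: after the last use of the raw fd,
// keep the owning *os.File reachable so its finalizer cannot close the fd
// out from under the in-flight syscall.
func fdStat(f *os.File) (syscall.Stat_t, error) {
	var st syscall.Stat_t
	err := syscall.Fstat(int(f.Fd()), &st)
	runtime.KeepAlive(f) // runs on every path, error or not
	return st, err
}

// demo exercises fdStat on a fresh temporary file and reports success.
func demo() bool {
	f, err := os.CreateTemp("", "keepalive")
	if err != nil {
		return false
	}
	defer os.Remove(f.Name())
	defer f.Close()
	_, err = fdStat(f)
	return err == nil
}

func main() {
	fmt.Println(demo())
}
```

Without the `runtime.KeepAlive(f)`, `f` is dead after `f.Fd()` returns, so a GC during the syscall may run the finalizer and close the descriptor mid-call — exactly the crash described above.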
I posted this as a new issue to split it off from #34684, where it was a tangential issue. | NeedsInvestigation | medium | Critical |
505,087,740 | TypeScript | module resolution with @types |
**TypeScript Version:** 3.5.2
**Code**
In the TypeScript handbook, the module resolution is described as:

But in a real project there is no `@types/moduleB.d.ts`; there is only a `@types/moduleB` directory, which is confusing given the resolution strategy the handbook shows.
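For reference, the single `.d.ts` line in the handbook's diagram is only the first of several candidates; under the Node strategy the compiler keeps walking the same lookup sequence it uses for any package folder, which is why a bare `@types/moduleB` directory still resolves:

```
node_modules/@types/moduleB.d.ts
node_modules/@types/moduleB/package.json   (its "types"/"typings" field)
node_modules/@types/moduleB/index.d.ts
```

So the directory form works because the lookup falls through to the `package.json` `types` field or `index.d.ts` inside the folder.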
| Docs | low | Critical |
505,100,764 | pytorch | `nn.Sequential.__setattr__` appends to the execution list | ## 🐛 Bug
Assigning `nn.Module`s as attributes of an `nn.Sequential` will append them to the "execution list" and therefore automatically call in `forward`, instead of merely registering them for inclusion in `nn.Sequential.parameters()`. This is not mentioned in the documentation.
## To Reproduce
Running the following code:
```python
import torch.nn as nn

class CustomSeq(nn.Sequential):
def __init__(self):
super(CustomSeq, self).__init__(
nn.Linear(3, 4),
nn.Linear(4, 5),
)
self.bar = nn.Linear(4, 5)
net = CustomSeq()
print(net)
```
will result in
```
CustomSeq(
(0): Linear(in_features=3, out_features=4, bias=True)
(1): Linear(in_features=4, out_features=5, bias=True)
(bar): Linear(in_features=4, out_features=5, bias=True)
)
```
and fail when passed an `N x 3` tensor (because of 5 vs 4 shape mismatch at the end of execution).
## Expected behavior
I am not sure what the print format should be, but ideally I would like to see that only the modules passed in the constructor are executed in `forward` (the [documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential) only mentions constructors) and the modules added via `__setattr__` would be treated as if they were assigned to a regular `nn.Module`, that is registered but not automatically executed. Another solution would be to just document this as a feature.
My preference for the former is that I have code which looks like this
```python
class CustomSeq(nn.Sequential):
def __init__(self):
meta = MetaParameterModule(...)
super(CustomSeq, self).__init__(
CustomLayer(meta, ...),
CustomLayer(meta, ...),
...
)
self.meta = meta
```
where `MetaParameterModule` doesn't even implement `forward`, it just stores some learnable interpolation parameters global to `CustomSeq`, which I would like to easily access. I was very surprised seeing that running `CustomSeq()(input)` results in `NotImplementedError`. I can obviously workaround with something like
```python
@property
def meta(self):
return self[0].meta
```
but it's a bit awkward.
## Environment
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] Could not collect
[conda] cpuonly 1.0 0 pytorch
[conda] pytorch 1.2.0 py3.7_cpu_0 [cpuonly] pytorch
[conda] torchvision 0.4.0 py37_cpu [cpuonly] pytorch
| module: docs,low priority,triaged | low | Critical |
505,106,613 | vscode | Clarify that proxy settings are only applied to extensions | So for me to be able to get access to the extension library I need to use the command
code --proxy-server="192.168.80.3:8080;"
My settings.xml does not seem to apply anything but includes the following just to make my case. That proxy settings in the xml do not apply.
my settings.xml
{
"http.systemCertificates": false,
"http.proxySupport": "off",
"http.proxyStrictSSL": false
} | bug,proxy | low | Major |
505,107,827 | create-react-app | Create React App inside a non-empty folder | I use `create-react-app` to create a react app **inside an existing backend project** which will serve the react application. And I have to run the `eject` command to:
- change the target directory
- change the build logic and directories
- add `.less` support with some extra configuration
and so on...
To automate this process, I forked `react-scripts` and defined my folder structure.
My final file system hierarchy should be something like:
```
- project directory
- foo
- bar
- baz
- resources
- react-app
- index.js
- package.json
- qux.php
```
So I edited the template folder to match these requirements and published the scripts to give it a try.
`npx create-react-app . --scripts-version my-react-scripts`
And as you guessed, I got
> The directory . contains files that could conflict: Backend FSH.
However:
- I need my `package.json` to be located in the project root directory.
- My scripts will create a new directory called `react-app` inside the existing `resources` folder. And react-app will live inside this directory. I don't expect an override.
- My build is customized and will use my webpack plugin to generate the built files. I don't expect an unexpected override here as well.
### Describe the solution you'd like
- Since I know what I am doing, I want some way to run `create-react-app` in an existing folder that contains files.
Maybe by setting a constant from my scripts, or by passing an option to the `npx create-react-app` call.
### Describe alternatives you've considered
I considered updating create-react-app to support this as well, but that doesn't make sense since it would be almost impossible to keep up with updates and upgrades later.
### Additional context
https://github.com/facebook/create-react-app/issues/334
https://github.com/facebook/create-react-app/issues/2776
| issue: proposal,needs triage | low | Major |
505,122,218 | flutter | Flutter doesn't recognise external USB camera |
## Use case
Not recognizing external USB camera
## Proposal
Hi, I want to use my external USB camera, but Flutter does not recognize it; it only finds my rear and front cameras. Please take a look at it.
Thank you
| c: new feature,platform-android,p: camera,package,team-ecosystem,P3,triaged-ecosystem | high | Critical |
505,124,184 | flutter | Include inherited widgets on new pushed routes |
## Use case
Why are inherited widgets removed when we push a new route onto a navigator?
It is very annoying in almost all my projects: if I have a User object I want to share with the rest of the tree, I have to wrap each newly pushed page in a Provider. The same goes for the few other objects I want to provide.
I don't want to provide it above the MaterialApp because I don't have these objects at the beginning...
I really like Provider, but this behaviour removes 50% of the interest for me, because at this point it is easier to use regular constructor arguments on the newly pushed page.
## Proposal
I guess the default behaviour of removing (or not including) them is needed for some use cases, so maybe we could just add a mixin to the class being provided that tells the navigator to provide it to the next pushed routes?
| c: new feature,framework,f: routes,P3,team-framework,triaged-framework | low | Critical |
505,173,137 | go | cmd/compile: unneeded bounds checks | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +2197321 Tue Oct 8 23:53:55 2019 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What did you do?
I compiled the following functions to check if there were any bounds checks: https://godbolt.org/z/RSrNwR.
### What did you expect to see?
I expected to see no bounds checks.
### What did you see instead?
Instead, I see 7 different bounds checks generated by the Go compiler (highlighted in the site above).
I talked with @zdjones about this and it appears like all these bounds checks can be removed.
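Not from the issue itself, but for readers who land here: until the prover handles these cases, the common source-level workaround is an explicit early bounds check, which lets the compiler drop the per-element checks that follow. A small self-contained example of the idiom:

```go
package main

import "fmt"

// sum8 indexes b eight times. The leading `_ = b[7]` is the standard
// bounds-check-elimination hint: one check up front proves all of the
// later constant indexes are in range, so the compiler elides their checks.
func sum8(b []byte) int {
	_ = b[7] // bounds check hint: one check instead of eight
	return int(b[0]) + int(b[1]) + int(b[2]) + int(b[3]) +
		int(b[4]) + int(b[5]) + int(b[6]) + int(b[7])
}

func main() {
	fmt.Println(sum8([]byte{1, 2, 3, 4, 5, 6, 7, 8})) // 36
}
```

The effect is visible in the generated assembly (e.g. on godbolt): the hinted version contains a single `panicIndex` path instead of one per access.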
| Performance,NeedsInvestigation,compiler/runtime | low | Minor |
505,182,268 | vscode | Completion should be able to retrigger completions when accepted | re https://github.com/microsoft/vscode/issues/74054
HTML/CSS completion item sometimes retrigger completions upon insertion. We should make that an editor concept with (1) proper API and (2) a setting to enable/disable this. | feature-request,api,suggest | low | Minor |
505,188,995 | pytorch | Deploying a trained model on the C++ side | When the model does not contain a custom layer, it can be deployed directly on the C++ side using the JIT mechanism and the libtorch library.
But my model contains custom C++ and CUDA layers. Now that I'm deploying the model on the C++ side, do I need to compile the custom C++ and CUDA layers into the libtorch library?
Thank you!!!
cc @suo @yf225 | needs reproduction,oncall: jit,module: docs,module: cpp,triaged | low | Major |
505,265,129 | flutter | Remove an assertion from CupertinoTabBar widget |
## Use case
When using CupertinoTabBar, there is an assertion that the "items" property must contain at least 2 items.
See https://github.com/flutter/flutter/blob/991153ada0cf571c8600ff416508ffafe665dfc0/packages/flutter/lib/src/cupertino/bottom_tab_bar.dart#L74
Therefore when single item exists (we have a legitimise use case for that), the widget's assertion will fail resulting whole screen being bloody red in debug.
## Proposal
I do not think Cupertino widgets should error when something does not conform to Apple's HIG. Warning message yes, but error? No thanks.
Therefore, I propose to remove the assertion and warn or log down for developer instead.
| framework,f: cupertino,c: proposal,a: error message,P3,team-design,triaged-design | low | Critical |
505,279,608 | flutter | Flutter driver timeout causes tests to fail | Flutter driver timeouts cause tests to fail on the Linux emulator, a Docker Android emulator, a macOS Mojave Android emulator, and a real Android device
## Steps to Reproduce
Follow the steps on https://flutter.dev/docs/cookbook/testing/integration/introduction
**Target Platform:** Android
**Target OS version/browser:** 25+
**Devices:** Emulator
## Logs
flutter drive --debug --target=test_driver/app.dart
```
Using device Android SDK built for x86 64.
Starting application: test_driver/app.dart
Initializing gradle... 10.7s
Resolving dependencies... 59.7s
"build/app/outputs/apk/app.apk" does not exist.
Running Gradle task 'assembleDebug'... 47.1s
Built build/app/outputs/apk/debug/app-debug.apk.
Installing build/app/outputs/apk/app.apk... 408.9s (!)
I/flutter ( 5096): Observatory listening on http://127.0.0.1:42379/LdGM3IHbn2o=/
00:00 +0: Counter App (setUpAll)
[info ] FlutterDriver: Connecting to Flutter application at http://127.0.0.1:33749/LdGM3IHbn2o=/
[trace] FlutterDriver: Isolate found with number: 2862227988220383
[warning] FlutterDriver: Unknown pause event type VMNoneEvent. Assuming application is ready.
00:15 +0 -1: Counter App (setUpAll) [E]
JSON-RPC error -32601 (method not found): Method not found
package:json_rpc_2/src/client.dart 110:64 Client.sendRequest
package:json_rpc_2/src/peer.dart 79:15 Peer.sendRequest
package:vm_service_client/src/scope.dart 64:23 Scope.sendRequestRaw
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async-patch/async_patch.dart 71:23 _asyncThenWrapperHelper
package:flutter_driver/src/driver/driver.dart FlutterDriver.connect
test_driver/app_test.dart 18:36 main.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/zone.dart 967:22 _CustomZone.bindUnaryCallbackGuarded
dart:async/future.dart 530:34 Future.doWhile
dart:async/future.dart 490:12 Future.forEach
package:test_api/src/backend/declarer.dart 291:36 Declarer._setUpAll.<fn>.<fn>
dart:async/zone.dart 1124:13 _rootRun
dart:async/zone.dart 1021:19 _CustomZone.run
dart:async/zone.dart 1516:10 _runZoned
dart:async/zone.dart 1463:12 runZoned
package:test_api/src/backend/declarer.dart 291:14 Declarer._setUpAll.<fn>
package:test_api/src/backend/invoker.dart 400:25 Invoker._onRun.<fn>.<fn>.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1045:19 _CustomZone.registerCallback
dart:async/zone.dart 962:22 _CustomZone.bindCallbackGuarded
dart:async/timer.dart 52:45 new Timer
dart:async/timer.dart 87:9 Timer.run
dart:async/future.dart 174:11 new Future
package:test_api/src/backend/invoker.dart 399:21 Invoker._onRun.<fn>.<fn>.<fn>
DriverError: Failed to fulfill GetHealth due to remote error
Original error: JSON-RPC error -32601 (method not found): Method not found
Original stack trace:
package:json_rpc_2/src/client.dart 110:64 Client.sendRequest
package:json_rpc_2/src/peer.dart 79:15 Peer.sendRequest
package:vm_service_client/src/scope.dart 64:23 Scope.sendRequestRaw
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async-patch/async_patch.dart 71:23 _asyncThenWrapperHelper
package:flutter_driver/src/driver/driver.dart FlutterDriver.connect
test_driver/app_test.dart 18:36 main.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1053:19 _CustomZone.registerUnaryCallback
dart:async/zone.dart 967:22 _CustomZone.bindUnaryCallbackGuarded
dart:async/future.dart 530:34 Future.doWhile
dart:async/future.dart 490:12 Future.forEach
package:test_api/src/backend/declarer.dart 291:36 Declarer._setUpAll.<fn>.<fn>
dart:async/zone.dart 1124:13 _rootRun
dart:async/zone.dart 1021:19 _CustomZone.run
dart:async/zone.dart 1516:10 _runZoned
dart:async/zone.dart 1463:12 runZoned
package:test_api/src/backend/declarer.dart 291:14 Declarer._setUpAll.<fn>
package:test_api/src/backend/invoker.dart 400:25 Invoker._onRun.<fn>.<fn>.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1045:19 _CustomZone.registerCallback
dart:async/zone.dart 962:22 _CustomZone.bindCallbackGuarded
dart:async/timer.dart 52:45 new Timer
dart:async/timer.dart 87:9 Timer.run
dart:async/future.dart 174:11 new Future
package:test_api/src/backend/invoker.dart 399:21 Invoker._onRun.<fn>.<fn>.<fn>
package:flutter_driver/src/driver/driver.dart 448:7 FlutterDriver._sendCommand
===== asynchronous gap ===========================
dart:async/zone.dart 1062:19 _CustomZone.registerBinaryCallback
dart:async-patch/async_patch.dart 80:23 _asyncErrorWrapperHelper
package:test_api/src/backend/invoker.dart Invoker._onRun.<fn>.<fn>.<fn>.<fn>
dart:async/future.dart 176:37 new Future.<fn>
package:stack_trace/src/stack_zone_specification.dart 209:15 StackZoneSpecification._run
package:stack_trace/src/stack_zone_specification.dart 119:48 StackZoneSpecification._registerCallback.<fn>
dart:async/zone.dart 1120:38 _rootRun
dart:async/zone.dart 1021:19 _CustomZone.run
dart:async/zone.dart 923:7 _CustomZone.runGuarded
dart:async/zone.dart 963:23 _CustomZone.bindCallbackGuarded.<fn>
package:stack_trace/src/stack_zone_specification.dart 209:15 StackZoneSpecification._run
package:stack_trace/src/stack_zone_specification.dart 119:48 StackZoneSpecification._registerCallback.<fn>
dart:async/zone.dart 1124:13 _rootRun
dart:async/zone.dart 1021:19 _CustomZone.run
dart:async/zone.dart 947:23 _CustomZone.bindCallback.<fn>
dart:async-patch/timer_patch.dart 21:15 Timer._createTimer.<fn>
dart:isolate-patch/timer_impl.dart 382:19 _Timer._runTimers
dart:isolate-patch/timer_impl.dart 416:5 _Timer._handleMessage
dart:isolate-patch/isolate_patch.dart 172:12 _RawReceivePortImpl._handleMessage
===== asynchronous gap ===========================
dart:async/zone.dart 1045:19 _CustomZone.registerCallback
dart:async/zone.dart 962:22 _CustomZone.bindCallbackGuarded
dart:async/timer.dart 52:45 new Timer
dart:async/timer.dart 87:9 Timer.run
dart:async/future.dart 174:11 new Future
package:test_api/src/backend/invoker.dart 399:21 Invoker._onRun.<fn>.<fn>.<fn>
00:15 +0 -1: Counter App (tearDownAll)
00:15 +0 -1: Some tests failed.
Unhandled exception:
Dummy exception to set exit code.
#0 _rootHandleUncaughtError.<anonymous closure> (dart:async/zone.dart:1112:29)
#1 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#2 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#3 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:391:30)
#4 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:416:5)
#5 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12)
Stopping application instance.
Driver tests failed: 255
``` | framework,t: flutter driver,P2,team-framework,triaged-framework | low | Critical |
505,305,206 | godot | Command-line debugger does not allow editing debug commands once written | **Godot version:**
3.2.alpha1.official
**OS/device including version:**
Mac OS X Mojave
**Issue description:**
CLI --debug option does not allow editing of debug commands except backspace.
The debugger does not allow command-line editing while in debug mode: you can only backspace over a command as you type it, but cannot arrow back over a command to edit it.
In addition, you cannot use the up arrow to recall previous commands.
I consider this a bug because it limits the use of the CLI debugger, although some may say it is a feature.
**Session example notice backarrow echoes at bottom**
```bash
~/gdalphabug$ godot2 TestScript.tscn -d
arguments
0: godot2
1: TestScript.tscn
2: -d
Current path:/gdalphabug
Godot Engine v3.2.alpha1.official - https://godotengine.org
OpenGL ES 3.0 Renderer: NVIDIA GeForce GT 650M OpenGL Engine
Registered camera FaceTime HD Camera (Built-in) with id 1 position 0 at index 0
ERROR: 'The class variable 'x' is declared but never used in the script.'
ERROR: 'The local variable 'somevarx' is declared but never used in the block.'
Debugger Break, Reason: 'Division By Zero in operator '/'.'
*Frame 0 - res://TestScript.gd:7 in function '_init'
Enter "help" for assistance.
debug> help
Built-In Debugger command list:
c,continue Continue execution.
bt,backtrace Show stack trace (frames).
fr,frame <frame>: Change current frame.
lv,locals Show local variables for current frame.
mv,members Show member variables for "this" in frame.
gv,globals Show global variables.
p,print <expr> Execute and print variable in expression.
s,step Step to next line.
n,next Next line.
fin,finish Step out of current frame.
br,break [source:line] List all breakpoints or place a breakpoint.
delete [source:line]: Delete one/all breakpoints.
set [key=value]: List all options, or set one.
q,quit Quit application.
debug> gv
debug> mv
x: Null
somevar: 20
foo1: abcd
debug> set x=20^[[D^[[D^[[D^[[D^[[D^[[D
``` | bug,topic:editor,confirmed | low | Critical |
505,316,674 | TypeScript | TS2377 reported for constructors that unconditionally throw or use return override | ## TypeScript Version:
3.7.0-dev.2019-10-10
## Search Terms:
- TS2377
- TS 2377
- TS2377 throw
- TS 2377 throw
## Code
Code like this is also generated by [`webidl2js`](https://www.npmjs.com/package/webidl2js):
```ts
class Foo extends Object {
constructor () {
throw new TypeError("Illegal constructor");
}
}
```
Same with
```ts
class Foo extends Object {
constructor () {
return Object.create(new.target.prototype);
}
}
```
### Expected behavior:
No error
### Actual behavior:
```
error TS2377: Constructors for derived classes must contain a 'super' call.
2 constructor() {
~~~~~~~~~~~~~~~
3 throw new TypeError();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4 }
~~~~~
```
and
```
error TS2377: Constructors for derived classes must contain a 'super' call.
2 constructor() {
~~~~~~~~~~~~~~~
3 return Object.create(new.target.prototype);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4 }
~~~~~
```
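A workaround that compiles today — though it slightly changes the emitted code, which a generator like `webidl2js` may not want — is to call `super()` immediately before the unconditional throw; the call is harmless because the instance is discarded anyway:

```ts
class Foo extends Object {
  constructor() {
    super(); // satisfies TS2377; the instance is thrown away on the next line
    throw new TypeError("Illegal constructor");
  }
}

try {
  new Foo();
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

The ideal fix remains for the checker to recognize that a constructor ending in an unconditional `throw` (or a `return` override) never needs `super`.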
### Playground Link:
[Playground](https://www.typescriptlang.org/play/?ssl=1&ssc=25&pln=1&pc=19#code/MYGwhgzhAEBiD29oFMAeAXZA7AJjA8gEYBWyw60A3gLABQ0D0w8WE6ATgK7nzsAUASip1Go6OgAW7eAHdoWZHIAqATwAOyAKLtp-AQG4RjAL51TtIA)
## Related Issues:
- #3696
| Suggestion,Awaiting More Feedback | low | Critical |
505,373,823 | flutter | [webview_flutter] Allow programmatic scroll control | Per https://github.com/flutter/plugins/pull/2107 users are interested in programmatically scrolling webviews and listening to current scroll positions. | c: new feature,customer: crowd,p: webview,package,team-ecosystem,P3,triaged-ecosystem | low | Major |
505,374,694 | vue | when <select> model and the option list change at the same time, the model may incorrectly be set to `undefined` | ### Version
2.6.10
### Reproduction link
[https://jsfiddle.net/4fyrj95L/](https://jsfiddle.net/4fyrj95L/)
### Steps to reproduce
Set the model bound to the select element and the array that backs the option list at the same time.
Make sure the new model value does not match any of the new options.
The model will then be set to `undefined`.
### What is expected?
model value to be set to `1`
### What is actually happening?
model's value set to `undefined`
---
This bug only appears when the model and the options are changed at the same time and the new model value does not match any option; other situations behave correctly as far as I tested.
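Until this is fixed, a workaround is to guard the new model value yourself whenever you swap the option list, instead of letting the `<select>` sync it to `undefined`. A plain-JS sketch of the guard (no Vue required):

```javascript
// When replacing options and model together, pick the model explicitly so a
// value missing from the new options falls back instead of becoming undefined.
function nextModel(wanted, options) {
  return options.includes(wanted) ? wanted : options[0];
}

console.log(nextModel(1, [1, 2, 3])); // 1
console.log(nextModel(9, [1, 2, 3])); // 1  (fallback, not undefined)
```

In a component you would run this in the same handler that replaces the option array, assigning `this.selected = nextModel(newValue, newOptions)`.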
<!-- generated by vue-issues. DO NOT REMOVE --> | bug,has workaround | low | Critical |
505,414,994 | pytorch | Documentation makefile should include torchvision | We should ideally be able to build all the docs in a single command. Right now we build the docs for pytorch and torchvision separately and "copy and paste" the docs for torchvision into pytorch. This is not very efficient and it's easy to forget this last step.
cc @ezyang | module: docs,triaged,module: doc infra | low | Minor |
505,418,894 | flutter | [google_maps_flutter] Sometimes requires the WAKE_LOCK permission | This was uncovered as part of debugging #42349.
Occasionally Maps requests the WAKE_LOCK permission when the virtual display is resized with a default `GoogleMap`. It's most likely coming from the Google Maps SDK, but there isn't any obvious reason why this should be. There are a few cases I could find in maps that require that permission, but none of them seem to be related to a default map being resized.
- There's a wearable version of maps that allows for an "ambient mode" that requires WAKE_LOCK.
- I'm also seeing a [`GeofenceHardwareImpl`](https://android.googlesource.com/platform/frameworks/base/+/master/core/java/android/hardware/location/GeofenceHardwareImpl.java) class in AOSP that also requires a wake lock, but I don't know if that itself is referenced anywhere with Maps. I don't think our plugin should be using either of those, but we are deferring to the SDK so it's hard to really tell.
In addition this reproduces extremely flakily, around 2% of the time in my testing. It's also only seen on Q, implying that it's the Maps SDK interacting with Android somehow.
The plugin should ideally avoid whatever code path requires this permission or at least alert developers to it and ask them to add it to their manifests.
| c: crash,platform-android,p: maps,package,dependency: android,e: OS-version specific,P2,team-android,triaged-android | low | Critical |
505,427,143 | TypeScript | Poor error recovery when `enum` is used as a parameter name |
**TypeScript Version:** 3.6.2
**Search Terms:** expression expected reserved word
**Code**
```ts
function converter<Enum>(values: string[], enums: Enum[]): Map<Enum, string> {
return new Map<Enum, string>(
enums.map((enum: Enum, index: number) => [ enum, values[index] ])
);
}
```
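For reference, the intended code compiles once the reserved word is replaced with an ordinary parameter name; a sketch (the tuple return-type annotation is an addition needed so `Map`'s constructor accepts the mapped array):

```typescript
function converter<Enum>(values: string[], enums: Enum[]): Map<Enum, string> {
  return new Map<Enum, string>(
    // Renamed `enum` to `e`; the `[Enum, string]` annotation keeps the
    // array typed as tuples rather than (Enum | string)[][].
    enums.map((e, index): [Enum, string] => [e, values[index]])
  );
}

console.log(converter(["red", "blue"], [0, 1]));
```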
**Expected behavior:**
After some confusion about why this displays so many red curlies, the error should make it clear to me that the reserved word `enum` is accidentally being used as an argument name. An error is given on `enum` saying something along the lines of *You cannot use `enum` as a variable/argument name because it is a reserved word*.
**Actual behavior:**
After a LOT of confusion, one realizes that the reserved word `enum` is accidentally being used as an argument name. Errors are given on several parts of the above code, where the most helpful one is *Expression expected (ts1109)*, which is just unnecessarily confusing.
**[Playground Link](https://www.typescriptlang.org/play/?ssl=1&ssc=1&pln=2&pc=22#code/GYVwdgxgLglg9mABBBA3ApgJylgPAUTBAFsA+AClQEMAbEdAZwC5EGpMYwBzAbQF0ANInRFizRIRL8AlCwCyVAA4FRQth26lEAbwBQASEzooITEjDoA7ogXLJxNe05cKB-SJIMAdMSXlyHsQs9kKcACboAB4sogBGWNKIALxaPMKqiNR0jDzhUXyIfNIG0gDcugC+QA)**
**Related Issues:**
None that seem related. | Bug,Domain: Error Messages | low | Critical |
505,486,712 | rust | Formatting of std::backtrace::Backtrace | I'd like to propose some changes to formatting of the `Backtrace` type introduced in https://github.com/rust-lang/rust/issues/53487.
- Don't include newlines in the default `Debug` output. I personally find it pretty jarring when looking at the debug output of an error where everything is on the same line except the backtrace, and then any additional properties are printed way down on the last line.
I'd suggest instead printing a "list" of "maps", such that it'd look like `[{ function: "errors::new", file: "./src/new.rs", line: 21 }, ...]`.
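A sketch of that one-line shape as a wrapper type (note: `Frame` and `OneLineTrace` are hypothetical names for illustration; the real `std::backtrace::Backtrace` does not currently expose its frames this way):

```rust
use std::fmt;

// Illustrative frame data only - not the actual Backtrace internals.
struct Frame {
    function: &'static str,
    file: &'static str,
    line: u32,
}

struct OneLineTrace(Vec<Frame>);

impl fmt::Debug for OneLineTrace {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Emit a "list" of "maps" on a single line, no embedded newlines.
        write!(f, "[")?;
        for (i, fr) in self.0.iter().enumerate() {
            if i > 0 {
                write!(f, ", ")?;
            }
            write!(
                f,
                "{{ function: {:?}, file: {:?}, line: {} }}",
                fr.function, fr.file, fr.line
            )?;
        }
        write!(f, "]")
    }
}

fn main() {
    let trace = OneLineTrace(vec![Frame {
        function: "errors::new",
        file: "./src/new.rs",
        line: 21,
    }]);
    // Stays on one line, so surrounding Debug output isn't torn apart.
    println!("{:?}", trace);
}
```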
- Allow passing the "precision" format flag to limit how many frames are printed. The most relevant frames are normally near the top, but deciding exactly how many depends on the app. Then a user could write `println!("short trace: {:.5}", trace)` to see just 5 frames.
- Remove `stack backtrace:\n` prefix entirely. When formatting, I'd think a user would pick their own prefix, like `println!("Call stack:\n{}", trace)`. | C-enhancement,T-libs-api,E-medium,PG-error-handling | medium | Critical |
505,487,334 | thefuck | Support `source ~/.bashrc` when installing | When you first install thefuck, you get this nice message.
```
$ fuck
Seems like fuck alias isn't configured!
Please put eval "$(thefuck --alias)" in your ~/.bashrc and apply changes with source ~/.bashrc or restart your shell.
Or run fuck second time for configuring it automatically.
More details - https://github.com/nvbn/thefuck#manual-installation
$ fuck
fuck alias configured successfully!
For applying changes run source ~/.bashrc or restart your shell.
```
Which is great and I love it. However, when I run `fuck` again, I get the same message.
```
$ fuck
fuck alias configured successfully!
For applying changes run source ~/.bashrc or restart your shell.
```
Seems like an oversight. It should suggest running `source ~/.bashrc`. | enhancement,help wanted,hacktoberfest | low | Major |
505,505,177 | youtube-dl | FranceTV: hlsnative corrupts soundtrack in videos with separate "audio only" HLS playlist | In several videos from www.france.tv, including all recent episodes of "Passion Outremer" and some others, the streams are split into several video-only files and one audio-only file (other videos on the site come as combined video+audio files). This audio file triggers an ffmpeg error after 10 seconds when merging, so the merged video+audio file has no sound beyond those 10 seconds. For an original sound file of about 20 MB, the sound extracted from the merged file is only 80 KB, covering only those 10 seconds.
This problem is not strictly related to youtube-dl and maybe not to ffmpeg. I think that it comes from a basic copy protection system which includes an error in the file. In the browser, it works well, but with youtube-dl the sound track is truncated.
Please note that if I download the audio only file with "-f bestaudio", the sound track is also cut, because of the postprocessing with ffmpeg.
I found a workaround to keep the entire sound track for further postprocessing by using this option: "--fixup never", combined with "-f bestaudio".
I first tried to play the sound with VLC and mplayer: it stops at 10 seconds. Then I installed ffplay 4.1.4 (Debian package from deb-multimedia.org), which works, but I hear a very short pause in the sound at 10 seconds, then at 30 seconds, and maybe every 30 seconds. I haven't tried with ffmpeg 4.2 or 4.2.1. Of course, I use the latest youtube-dl, which is 2019.09.28.
Maybe with ffmpeg options I can merge the video-only file and the audio-only file, but I'm not an expert on ffmpeg and haven't found it in the documentation. If a fix is found with ffmpeg options, it would be useful to include it in the youtube-dl code.
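One common way to do such a merge with ffmpeg is a plain stream copy of each track; a sketch (the file names are placeholders, not the actual youtube-dl output names, and the command is only echoed here so it can be reviewed before running):

```shell
# Placeholder file names - substitute the streams youtube-dl actually produced.
VIDEO="video_only.mp4"
AUDIO="audio_only.mp4"
# Stream-copy (-c copy) the video track from the first input and the
# audio track from the second input; no re-encoding takes place.
CMD="ffmpeg -i $VIDEO -i $AUDIO -c copy -map 0:v:0 -map 1:a:0 merged.mp4"
# Printed for review; run the command itself (without the echo) to execute.
echo "$CMD"
```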
Here are some examples of these videos:
https://www.france.tv/documentaires/voyages/1073909-montagne-pelee-un-volcan-sous-haute-surveillance.html
https://www.france.tv/france-o/antilles-les-volcans-se-reveillent/1020287-montserrat-la-pompei-des-caraibes.html
https://www.france.tv/documentaires/voyages/1083115-soufriere-la-vieille-dame-indomptable.html
https://www.france.tv/france-o/antilles-les-volcans-se-reveillent/1084769-dominique-une-ile-en-ebullition.html
https://www.france.tv/documentaires/animaux-nature/1085375-des-volcans-aux-lagons.html
https://www.france.tv/documentaires/animaux-nature/1085373-reunion-le-volcan-rouge.html
https://www.france.tv/france-o/passion-outre-mer/763831-passion-outre-mer.html
For the concerned videos, "youtube-dl -F" gives for example:
format code extension resolution note
hls_v5_os-audio-aacl-64-Audio_Français mp4 audio only [fr]
hls_v5_os-191 mp4 256x144 191k , avc1.42C01E, 25.0fps, video only
hls_v5_os-321 mp4 320x180 321k , avc1.42C01E, 25.0fps, video only
hls_v5_os-609 mp4 512x288 609k , avc1.42C01E, 25.0fps, video only
hls_v5_os-880 mp4 704x396 880k , avc1.4D401F, 25.0fps, video only
hls_v5_os-1554 mp4 1024x576 1554k , avc1.4D401F, 25.0fps, video only (best)
Other videos with audio merged are OK.
I attach a log of youtube-dl downloading the audio only file, with the file cut by postprocessing, as you can see in the following "ls".
[rapport_de_bug_youtube-dl.log](https://github.com/ytdl-org/youtube-dl/files/3714580/rapport_de_bug_youtube-dl.log)
| geo-restricted | medium | Critical |
505,505,412 | terminal | Touch Zoom with two fingers don't work on Touch screen (but it works on Touch Pad) | # Description of the new feature/enhancement
When I use two fingers to zoom in or out (actually making the text bigger or smaller), it works on my touch pad, but it doesn't work on my touch screen.
I think the experience would be much smoother if it worked the same on the touch pad and the touch screen.
What I also get: if I scroll with one finger on the touch screen, it works; if I try to scroll with two fingers, it flickers, as if the program doesn't realize I'm using two fingers and tries to read both inputs.
So the case with multiple fingers should probably be detected and handled specifically.
# Proposed technical implementation details (optional)
If I use two fingers on the touch screen and move them together or away from each other, the font size should decrease or increase respectively, exactly as it already does when I do that on the touch pad.
| Help Wanted,Issue-Bug,Area-TerminalControl,Product-Terminal,Priority-2 | low | Critical |
505,516,293 | godot | Script editor minimap is not exactly aligned when dragging the region | Godot 3.2 alpha1
When the edited script is long enough to make the minimap scroll while dragging it, the dragged area will actually drift away from the mouse.
Note: turning off "scroll past end of file" doesn't affect this.

| bug,topic:editor,confirmed | low | Minor |
505,527,112 | flutter | Reland "Test child isolates are terminated when root is shutdown" after test fix. | This patch was reverted in https://github.com/flutter/engine/pull/13067 due to LUCI failures. | team,engine,P2,team-engine,triaged-engine | low | Critical |