Dataset schema (7 columns):

| column   | type                 | stats               |
|----------|----------------------|---------------------|
| id       | int64                | range 393k – 2.82B  |
| repo     | string (categorical) | 68 distinct values  |
| title    | string               | 1 – 936 chars       |
| body     | string               | 0 – 256k chars      |
| labels   | string               | 2 – 508 chars       |
| priority | string (categorical) | 3 values            |
| severity | string (categorical) | 3 values            |
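As a rough sketch of working with rows in this schema, the snippet below builds a small in-memory pandas DataFrame from a few of the sample records and filters by severity. The DataFrame construction stands in for however the full dataset is actually stored (parquet, JSONL, ...), which this sample does not specify.

```python
import pandas as pd

# A few rows copied from the sample below, in this dataset's schema
# (body omitted here for brevity; it is a free-text markdown column).
rows = [
    {"id": 363029898, "repo": "flutter",
     "title": "How can I have an expansion panel with custom shadow?",
     "labels": "c: new feature,framework,f: material design,P2,team-design,triaged-design",
     "priority": "low", "severity": "Major"},
    {"id": 363045965, "repo": "scrcpy",
     "title": "Show FPS directly on screen",
     "labels": "feature request",
     "priority": "low", "severity": "Minor"},
    {"id": 363072339, "repo": "pytorch",
     "title": "caffe2's installation is still problematic...",
     "labels": "caffe2",
     "priority": "low", "severity": "Critical"},
]
df = pd.DataFrame(rows)

# severity is categorical with three classes in this sample: Minor, Major, Critical
critical = df[df["severity"] == "Critical"]
print(critical[["repo", "title"]].to_string(index=False))
```

Note that `labels` is stored as a single comma-joined string, so a real analysis would likely want `df["labels"].str.split(",")` first.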
363,029,898
flutter
How can I have an expansion panel with custom shadow?
Hi, is there a way to change the expansion panel's shadow?
c: new feature,framework,f: material design,P2,team-design,triaged-design
low
Major
363,045,965
scrcpy
Show FPS directly on screen
Thanks for your work, but could you please show the FPS on the screen instead of stdout? My English is poor, please don't mind.
feature request
low
Minor
363,061,422
angular
Animated child node's state is set to "void" when its parent is moved as a result of sorting.
In [animation_integration_spec.ts - 'should retain state styles when the underlying DOM structure changes even if there are no insert/remove animations'], if you wrap the item in another div and move the animation to that div, like this: `<div *ngFor="let item of items"> <div class="item" [@color]="colorExp">{{ item }}</div> </div>` you will see that item "1" loses background color after sorting. _Originally posted by @stmihai in https://github.com/angular/angular/pull/23534/comment#issuecomment-423910784_
type: bug/fix,area: animations,freq2: medium,P3
low
Minor
363,072,339
pytorch
caffe2's installation is still problematic...
> WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
> CRITICAL:root:Cannot load caffe2.python. Error: ~/.local/lib/python3.6/site-packages/caffe2/python/caffe2_pybind11_state.cpython-36m-x86_64-linux-gnu.so: undefined symbol: mkldnn_primitive_destroy

from caffe2.python import core

> WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
> CRITICAL:root:Cannot load caffe2.python. Error: No module named 'caffe2.python.caffe2_pybind11_state'
caffe2
low
Critical
363,107,688
go
testing: testing a list of packages blocks the real-time output
### What version of Go are you using (`go version`)?

`go version go1.11 darwin/amd64`

### Does this issue reproduce with the latest release?

Yes, reproduced on tip - `go version devel +ce58a39fca Thu Sep 20 22:52:44 2018 +0000 darwin/amd64`

### What operating system and processor architecture are you using (`go env`)?

```
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/ikorolev/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/ikorolev/.gvm/pkgsets/go1.11/global:/Users/ikorolev/dev/go"
GOPROXY=""
GORACE=""
GOROOT="/Users/ikorolev/.gvm/gos/go1.11"
GOTMPDIR=""
GOTOOLDIR="/Users/ikorolev/.gvm/gos/go1.11/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/_b/d1934m9s587_8t_6ngv3hnc00000gp/T/go-build880489854=/tmp/go-build -gno-record-gcc-switches -fno-common"
```

### What did you do?

`go test -v -count=1 ./...` doesn't print real-time test output if you have subpackages.

```
cd `mktemp -d`
go mod init example.com/test
cat <<EOD > main_test.go
package main

import (
	"fmt"
	"testing"
	"time"
)

func TestLong(t *testing.T) {
	fmt.Println("running")
	time.Sleep(2 * time.Second)
	fmt.Println("2 second later")
}
EOD
mkdir subpkg
cat <<EOD > subpkg/main_test.go
package subpkg
EOD
go test -v -count=1 ./...
```

### What did you expect to see?

I expect to see the output successively as the test is running, and I do see it if I run `go test -v -count=1 .` instead.

### What did you see instead?

Using `./...` doesn't produce real-time output; the output is produced only when all the tests are finished. It's not convenient when you run tests on a CI and see only the "go test -v ./..." line for several minutes. You don't have any idea whether the tests are stuck somewhere or just too slow. I understand that such behaviour may be connected with running the tests in parallel, but maybe we could do something about it?
Testing,NeedsInvestigation
low
Critical
363,114,907
pytorch
Caffe2 compiled with MKLDNN doesn't have device_type = MKLDNN
## Issue description

Cannot use device_type = MKLDNN after building Caffe2 from source with MKL and MKLDNN support.

```
self._device_opts = caffe2_pb2.DeviceOption()
self._device_opts.device_type = caffe2_pb2.MKLDNN
```

Then, when using RunNetOnce:

```
RuntimeError: [enforce fail at operator.cc:90] gDeviceTypeRegistry()->count(type). Device type mkldnn not registered.
```

Also

```
workspace.C.has_mkldnn
```

returns False.

## Installation process

1. Latest MKL installation:

```
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB \
  && apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB \
  && echo deb https://apt.repos.intel.com/mkl all main > /etc/apt/sources.list.d/intel-mkl.list \
  && apt-get update \
  && apt-get install -y intel-mkl-64bit-2019.0-045
```

2. Installing MKL-DNN from master to "/usr/local/mkl-dnn":

```
git clone --depth 1 https://github.com/01org/mkl-dnn.git \
  && cd mkl-dnn/scripts && ./prepare_mkl.sh && cd .. \
  && mkdir -p build && cd build && mkdir -p /usr/local/mkl-dnn \
  && cmake -DCMAKE_INSTALL_PREFIX="/usr/local/mkl-dnn" .. \
  && make -j"$(nproc)" && make install \
  && cd ../../ && rm -rf mkl-dnn
```

```
ln -s /usr/local/mkl-dnn/lib/libmklml_intel.so /usr/local/lib/libmklml_intel.so \
  && ln -s /usr/local/mkl-dnn/lib/libmkldnn.so /usr/local/lib/libmkldnn.so \
  && ln -s /usr/local/mkl-dnn/lib/libiomp5.so /usr/local/lib/libiomp5.so \
  && ldconfig
```

3. Installing PyTorch & Caffe2 from master for CPU inference (without CUDA):

```
git clone --depth 1 --recursive https://github.com/pytorch/pytorch.git \
  && cd pytorch \
  && MKLDNN_HOME="/usr/local/mkl-dnn" FULL_CAFFE2=ON NO_CUDA=TRUE python3 setup.py install \
  && cd .. && rm -rf pytorch
```

4. Seems like MKL-DNN is correctly found:

```
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API. (mkl)
disabling CUDA because NOT USE_CUDA is set
-- CuDNN not found. Compiling without CuDNN support
-- MIOpen not found. Compiling without MIOpen support
disabling ROCM because NOT USE_ROCM is set
-- Found MKLDNN: /usr/local/mkl-dnn/include
-- Found MKLDNN (include: /usr/local/mkl-dnn/include, library: /usr/local/mkl-dnn/lib/libmkldnn.so)
```

5. Whole summary:

```
-- ******** Summary ********
-- General:
--   CMake version         : 3.5.1
--   CMake command         : /usr/bin/cmake
--   Git version           : 76ab26c
--   System                : Linux
--   C++ compiler          : /usr/bin/c++
--   C++ compiler version  : 5.4.0
--   BLAS                  : MKL
--   CXX flags             : -msse3 -msse4.1 -msse4.2 --std=c++11 -fvisibility-inlines-hidden -D_FORCE_INLINES -D_MWAITXINTRIN_H_INCLUDED -D__STRICT_ANSI__ -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized
--   Build type            : Release
--   Compile definitions   : ONNX_NAMESPACE=onnx_torch;USE_GCC_ATOMICS=1;TH_BLAS_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
--   CMAKE_PREFIX_PATH     : /usr/lib/python3/dist-packages
--   CMAKE_INSTALL_PREFIX  : /pytorch/torch/lib/tmp_install
--
--   BUILD_ATEN_MOBILE     : OFF
--   BUILD_BINARY          : OFF
--   BUILD_CUSTOM_PROTOBUF : ON
--     Link local protobuf : ON
--   BUILD_DOCS            : OFF
--   BUILD_PYTHON          : ON
--     Python version      : 3.5.2
--     Python executable   : /usr/bin/python3
--     Pythonlibs version  : 3.5.2
--     Python library      : /usr/lib/python3.5
--     Python includes     : /usr/include/python3.5m
--     Python site-packages: lib/python3/dist-packages
--   BUILD_CAFFE2_OPS      : ON
--   BUILD_SHARED_LIBS     : ON
--   BUILD_TEST            : ON
--   USE_ASAN              : OFF
--   USE_CUDA              : 0
--   USE_ROCM              : OFF
--   USE_EIGEN_FOR_BLAS    :
--   USE_FFMPEG            : OFF
--   USE_GFLAGS            : OFF
--   USE_GLOG              : OFF
--   USE_LEVELDB           : OFF
--   USE_LITE_PROTO        : OFF
--   USE_LMDB              : OFF
--   USE_METAL             : OFF
--   USE_MKL               :
--   USE_MOBILE_OPENGL     : OFF
--   USE_NCCL              : OFF
--   USE_NERVANA_GPU       : OFF
--   USE_NNPACK            : 1
--   USE_OBSERVERS         : ON
--   USE_OPENCL            : OFF
--   USE_OPENCV            : OFF
--   USE_OPENMP            : OFF
--   USE_PROF              : OFF
--   USE_REDIS             : OFF
--   USE_ROCKSDB           : OFF
--   USE_ZMQ               : OFF
--   USE_DISTRIBUTED       : ON
--     USE_MPI             : ON
--     USE_GLOO            : ON
--     USE_GLOO_IBVERBS    : OFF
--   Public Dependencies   : Threads::Threads;caffe2::mkl
--   Private Dependencies  : nnpack;cpuinfo;/usr/lib/x86_64-linux-gnu/libnuma.so;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
-- Generating done
```

## System Info

PyTorch version: 1.0.0a0+76ab26c
Is debug build: No
CUDA used to build PyTorch: None

OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1

Python version: 3.5
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA

Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
caffe2
low
Critical
363,125,529
opencv
Documentation of available builders, docker images, directives for testing, etc
related: #12318

Most of us are unaware of:

- all possible build directives besides the commonly utilized `force_builders` and `WIP`
- how to filter specific test suites (e.g. `java`, `python2`)
- how to filter specific tests (e.g. `Calib3d/EstimateAffinePartial2D.testConversion` - an alias to `gtest_filter`)
- how to break tests on failure only in a specific builder
- how to break tests on failure in each builder (an alias to `gtest_break_on_failure`)
- how to globally break tests on failure in any builder
- are there event-driven directives? If not, could you please create some? For example:
  - start the ARMv7 builder only if Linux tests pass (should not that be the default?)
  - restart a specific builder in case of its failure while other builders pass
  - automatically start/stop builders according to an updated `force_builders` from the PR description without requiring `commit --amend`
  - any other directives?
- all available builder names that can be included in the `force_builders` directive (especially if there are ones not shown in the builders summary page)
- the docker images we can use with `Custom` or `Docs` builders, etc.
- are there any other builders whose image we can control? When should we do so?

Could you please provide documentation for all of that, and include a commented-out link to it in the [pull requests template](https://github.com/opencv/opencv/blob/master/.github/PULL_REQUEST_TEMPLATE.md)?
category: infrastructure
low
Critical
363,171,733
angular
No ability to check if <ng-content> was provided to the component
## I'm submitting a...

<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report
[ ] Performance issue
[x] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>

## Current behavior

We can use `ng-content` in our components, but we don't have any option out of the box to check whether anything was passed as `ng-content` - that is, whether anything was passed between the `>` and `</` of our component usage in HTML. We can do this by adding some wrapper element (and if I understood correctly, you cannot use stuff like `ng-container` there), like a div: you put the `ng-content` inside it, get a ref for the div, and check whether the div has children. If yes, `ng-content` was provided; otherwise, no `ng-content`. The huge minus of this approach is that you have to create an additional entity just to check this.

## Expected behavior

We can create some util function for that. You would have the ability to just call that util and detect whether `ng-content` was passed.

## Minimal reproduction of the problem with instructions

This is not a bug. This example shows how we can do this at the moment: https://stackblitz.com/edit/angular-xdirt8

## What is the motivation / use case for changing the behavior?

We will not be forced to add wrappers, and it will be easier to check this.

## Environment

<pre><code>
Angular version: 6.0.0
Others:
</code></pre>

I tried to ask about it on Stack Overflow, and I am not the only person wondering how to do this:

- https://stackoverflow.com/questions/52479608/how-to-check-if-ng-content-was-passed-into-the-component
- https://stackoverflow.com/questions/35107211/in-angular-2-how-to-check-whether-ng-content-is-empty
- https://stackoverflow.com/questions/38692881/how-to-check-whether-ng-content-exists

I am not a pro in vuejs, but it seems to have this feature out of the box: https://vuejs.org/v2/api/?#vm-slots
feature,area: core,core: content projection,feature: under consideration
high
Critical
363,200,090
TypeScript
Strange "Cannot use namespace 'Foo' as a type" error with dummy `declare module "foo";` declaration
Maybe this is a question, but it has come up multiple times on Stack Overflow (most recently [here](https://stackoverflow.com/q/52471580)) without a good answer, so I'm asking it here.

**TypeScript Version:** master (03af107)

**Search Terms:** "cannot use namespace" "as a type" TS2709 "declare module"

**Code**

```ts
declare module "foo";

declare module "bar" {
  import { Foo } from "foo";
  let x: Foo;
}
```

**Expected behavior:** An error that makes sense, or no error?

**Actual behavior:** `error TS2709: Cannot use namespace 'Foo' as a type.`

**Playground Link:** [link](https://www.typescriptlang.org/play/#src=declare%20module%20%22foo%22%3B%0D%0A%0D%0Adeclare%20module%20%22bar%22%20%7B%0D%0A%20%20%20%20import%20%7B%20Foo%20%7D%20from%20%22foo%22%3B%0D%0A%20%20%20%20let%20x%3A%20Foo%3B%0D%0A%7D)

**Related Issues:** None found
Bug,Help Wanted
medium
Critical
363,205,972
go
proposal: image: decoding options
# Motivation

The current `Decode` function in the `image` package provides no way to configure the underlying format decoder. The registration system used by `image` makes it difficult to extend the decoding behaviour of the registered format handlers. However, there are many open issues that could be resolved with a small number of decoding options.

This proposal describes a way of extending the existing format registration system to enable options to be passed to the individual format decoders. It introduces a `DecodeOptions` type to serve as an extension point.

There are two broad areas of configuration that are considered. This proposal tackles them both, but they are orthogonal and may be independently evaluated and implemented. The areas are:

## Avoiding large allocations on unprocessable images

This class of problem is generally caused by faulty or malicious header information in the image file. Typically this could be a very large x or y dimension that causes a huge buffer to be allocated in preparation for reading a stream of pixel data. This is currently possible to mitigate by first decoding the image config to check the dimensions before proceeding to a full decode. However, this has the disadvantage of either reading the input twice or buffering and re-reading to decode. Discussion in the relevant issues suggested that a 16k buffer would be suitable.

The following issues refer to this use case (with #5050 being the overarching issue):

* [5050](https://github.com/golang/go/issues/5050) - image/gif: decoding untrusted (very large) images can cause huge memory allocations
* [10790](https://github.com/golang/go/issues/10790) - x/image/webp: excessive memory consumption
* [12512](https://github.com/golang/go/issues/12512) - image/png: limit memory Decode can use while decoding
* [10399](https://github.com/golang/go/issues/10399) - x/image/bmp: out of memory

## Being more tolerant of invalid input

The image decoders in the standard library are strict and fail on invalid input. There are classes of invalid input that may be acceptable for some uses. The following issues suggest that a lenient decoding mode would be helpful:

* [10447](https://github.com/golang/go/issues/10447) - image/jpeg: add options to partially decode or tolerantly decode invalid images?
* [20804](https://github.com/golang/go/issues/20804) - image/gif: decoding gif returns `unknown block type: 0x01` error
* [20899](https://github.com/golang/go/issues/20899) - image/png: Decode failing on bitmap
* [20856](https://github.com/golang/go/issues/20856) - image/gif: decoding gif returns `frame bounds larger than image bounds` error

## Other options related issues

In addition, the following issues suggest other areas that could benefit from decoding options but are not considered further in this proposal:

* [8055](https://github.com/golang/go/issues/8055) - image: decode / resize into an existing buffer
* [18098](https://github.com/golang/go/issues/18098) - proposal: add Validate functions to image/jpeg, image/png etc.
* [4341](https://github.com/golang/go/issues/4341) - image/jpeg: correct for EXIF orientation?
* [18365](https://github.com/golang/go/issues/18365) - image/png: no support for setting and retrieving the PPI/DPI

# Package Changes

The primary extensibility point is a new type. Its fields are discussed at the end of this section.

```go
type DecodeOptions struct
```

Format decoders are registered via a new package level function `RegisterFormatDecoder`:

```go
func RegisterFormatDecoder(f FormatDecoder)
```

The `FormatDecoder` interface needs to be implemented by any package providing image format decoding:

```go
type FormatDecoder interface {
	// Name is the name of the format, like "jpeg" or "png".
	Name() string

	// Prefix is the magic prefix that identifies the format's encoding. The magic
	// string can contain "?" wildcards that each match any one byte.
	Prefix() string

	// Decode decodes an image using the supplied options.
	Decode(r io.Reader, o *DecodeOptions) (Image, string, error)

	// DecodeConfig decodes the color model and dimensions of an image using the supplied options.
	DecodeConfig(r io.Reader, o *DecodeOptions) (Config, string, error)
}
```

A new exported Decoder type is introduced:

```go
type Decoder struct {
	DecodeOptions
}
```

with a basic constructor function:

```go
func NewDecoder() *Decoder
```

The Decoder type has the following method set:

```go
// Decode decodes an image.
Decode(r io.Reader) (Image, string, error)

// DecodeConfig decodes the color model and dimensions of an image.
DecodeConfig(r io.Reader) (Config, string, error)
```

These Decode methods will sniff the type of the image from the reader and look up an appropriate registered FormatDecoder. If one is available then its corresponding Decode method is called, passing the Reader and the Decoder's options. This requires the Decode to have access to package level registered format decoders.

To configure decoding, the developer will create a new Decoder and then set the appropriate fields before calling Decode or DecodeConfig. The following options are proposed.

## MaxHeight and MaxWidth

MaxHeight and MaxWidth are DecodeOptions fields that set the maximum allowable dimensions of a decoded image. These fields restrict the allocation of large amounts of memory for faulty or malicious images.

```go
MaxHeight int
MaxWidth  int
```

These options are only used by the Decode method. When a Decoder attempts to decode an image with dimensions exceeding either MaxHeight or MaxWidth then it should stop processing and return an error. A new error in the image package, `ErrDimensionsExceeded`, could be defined as standard, or it could be left to the decoder to return its own error. DecodeConfig should return a Config containing the width and height described in the image data stream regardless of the config options.

## ReturnPartial

ReturnPartial is a DecodeOptions field that instructs the decoder that it may return a partially decoded image when an error is encountered.

```go
ReturnPartial bool
```

The current Decode behaviour is that no image data is returned on error. When this option is true, the caller may receive a partial image from the decoding process when the decoder encounters a format related error during decoding.

## Backwards compatibility with existing registered formats

Existing callers of RegisterFormat will be supported by adapting the existing `format` type and passing it to RegisterFormatDecoder, along these lines:

```go
func (f *format) Name() string   { return f.name }
func (f *format) Prefix() string { return f.magic }

func (f *format) Decode(r io.Reader, o *DecodeOptions) (Image, string, error) {
	m, err := f.decode(r)
	return m, f.name, err
}

func (f *format) DecodeConfig(r io.Reader, o *DecodeOptions) (Config, string, error) {
	c, err := f.decodeConfig(r)
	return c, f.name, err
}
```

The existing package level `Decode` and `DecodeConfig` functions will be rewritten to create a decoder and call it without any options:

```go
func Decode(r io.Reader) (Image, string, error) {
	return NewDecoder().Decode(r)
}

func DecodeConfig(r io.Reader) (Config, string, error) {
	return NewDecoder().DecodeConfig(r)
}
```

## Deprecation of RegisterFormat

RegisterFormat would be marked as deprecated in favour of RegisterFormatDecoder.
Proposal,Proposal-Hold
low
Critical
363,220,024
flutter
when reporting for host for flutter_tools, we should report for 'CLI' host
*@devoncarew commented on Sep 21, 2018, 3:10 PM UTC:* We report the host for flutter_tools runs - this is used to know which IDE is driving flutter_tools; we key off the value of the `FLUTTER_HOST` environment variable. It would be nice to also report a 'host' for pure CLI usage. We could infer this by there being no value set for `FLUTTER_HOST`, and for the entrypoint command into the tool not being the `daemon` command (or, `flutter run --machine`). We should also update the docs for building tools on flutter_tools to specify that IDEs - and people driving flutter_tools - should specify an `FLUTTER_HOST` env variable. cc @mit-mit *This issue was moved by [devoncarew](https://github.com/devoncarew) from [dart-lang/sdk#34543](https://github.com/dart-lang/sdk/issues/34543).*
c: new feature,tool,P2,team-tool,triaged-tool
low
Minor
363,233,434
flutter
No actionable feedback from Flutter Doctor when Android Studio plugins not installed
This weekend I was providing assistance as @sobrienpdx tried to install Flutter on a brand new Mac laptop. After installing Android Studio, but before installing the "flutter" and "dart" plugins, Flutter Doctor produced the following messages:

- Flutter plugin not installed; this adds Flutter specific functionality.
- Dart plugin not installed; this adds Dart specific functionality.

For @sobrienpdx (who had never used Android Studio before and had no familiarity with Android Studio plugins) this was not enough information to fix the problem. It would be very helpful if Flutter Doctor could provide a link to steps 1-5 on https://flutter.io/get-started/editor/#androidstudio, along with a short summary of those steps, to help the user figure out what to do next. (Note: it might also be helpful to alter https://flutter.io/get-started/editor/#androidstudio so that it's possible to link directly to steps 1-5.)

## Steps to Reproduce

1. Install Flutter.
2. Install Android Studio, but do not install the Flutter or Dart plugins.
3. Run "flutter doctor".
tool,t: flutter doctor,a: first hour,P2,team-tool,triaged-tool
low
Minor
363,284,842
go
runtime: CallersFrames showing mangled package path for nonASCII pkg name
Please answer these questions before submitting your issue. Thanks!

### What version of Go are you using (`go version`)?

go tip: go version devel +16687a3bbf Fri Sep 14 12:39:54 2018 +0000 linux/amd64

### Does this issue reproduce with the latest release?

yes

### What operating system and processor architecture are you using (`go env`)?

linux/amd64

### What did you do?

In your $GOPATH, unpack the attached tar file: [tarfile.gz](https://github.com/golang/go/files/2412519/tarfile.gz), which will create two subdirs 'Äfoo' and 'Äbar'. Change directory to the second and do 'go run':

```
$ tar zxvf - < tarfile.gz
./Äbar/
./Äbar/Äbar.go
./Äfoo/
./Äfoo/Äfoo.go
$ cd Äbar
$ go run .
- more:true | runtime.Callers
- more:true | %c3%84foo.PrintTrace
- more:true | %c3%84foo.Äblix
- more:true | %c3%84foo.Äbar
- more:true | main.main
- more:true | runtime.main
- more:false | runtime.goexit
Äfoo.Äbar(33) returns 134
$
```

Note that in the output from runtime.CallersFrames the functions in package Äfoo are shown with the package path in mangled form (e.g. %c3%84foo.Äblix) instead of regular form (Äfoo.Äblix).

### What did you expect to see?

Correct package path Äfoo.Äblix in the CallersFrames output.

### What did you see instead?

Mangled package path %c3%84foo.

Note: not sure if this is a real bug, pilot error on my part, or if this is WAI. Please close out if appropriate. Also not sure (assuming there is a bug) whether this is a compiler issue or a runtime issue (could be that the runtime is simply spitting out whatever the compiler told it). Thanks -NM.
NeedsInvestigation,compiler/runtime
low
Critical
363,330,066
flutter
Android: Run Dart code upon device boot (this is a standard Android capability)
Android: Run Dart code upon device boot (this is a standard Android capability)
c: new feature,platform-android,engine,a: existing-apps,P2,team-android,triaged-android
low
Minor
363,330,587
flutter
Provide background Service and WorkManager execution out of the box for Dart execution
Internal: b/141566426 Provide background Service and WorkManager execution out of the box for Dart execution. For example, the embedding should offer a Service that spins up a FlutterEngine, passes arbitrary primitive parameters into the engine, and executes Dart code. Then some signal should be made available for the Service to shut down after the Dart work is complete. The Dart code should optionally be able to run as a foreground Service. Something similar also needs to be provided for WorkManager or a similar construct.
platform-android,engine,a: existing-apps,customer: money (g3),P2,team-android,triaged-android
low
Critical
363,342,403
TypeScript
ES6 module declarations should be marked to exclude them from `allowSyntheticDefaultImports`
This is the restated #27293. CC @ryanelian

**TypeScript Version:** master (471bc64)

**Search Terms:** esModuleInterop allowSyntheticDefaultImports error undefined default import export

**Code:**

In the `b` subdirectory, compile with `tsc -b .` and run with `node index.js`.

```ts
// tsconfig.common.json
{
  "compilerOptions": {
    "composite": true,
    "declaration": true,
    "target": "es6",
    "module": "commonjs",
    "esModuleInterop": true
  }
}

// a/tsconfig.json
{
  "extends": "../tsconfig.common.json",
  "files": ["index.ts"]
}

// a/index.ts
export const foo = 42;

// b/tsconfig.json
{
  "extends": "../tsconfig.common.json",
  "references": [{ "path": "../a" }],
  "files": ["index.ts"]
}

// b/index.ts
// Actual: compile OK. Expected: compile error.
import A from "../a";
// Actual: runtime error.
console.log(A.foo);
```

**Expected behavior:** The generated `a/index.d.ts` uses some new syntax to mark the module as an ES6 module, so `allowSyntheticDefaultImports` does not apply to it and the default import in `b/index.ts` is a compile error.

**Actual behavior:** `allowSyntheticDefaultImports` applies to module `a`, so at compile time the default import is accepted and resolves to the entire module, but at runtime `A.foo` raises an error:

```
REDACTED/b/index.js:7
console.log(a_1.default.foo);
            ^
TypeError: Cannot read property 'foo' of undefined
```

**Playground Link:** N/A, multiple files

**Related Issues:** #27293
Suggestion,In Discussion
low
Critical
363,379,334
go
gccgo: cgo not working on Solaris 10
### What version of Go are you using (`go version`)?

`go version go1.10.3 gccgo (GCC) 8.2.1 20180813 solaris/sparc`

### Does this issue reproduce with the latest release?

Yes. Compiled from source.

### What operating system and processor architecture are you using (`go env`)?

```
GOARCH="sparc"
GOBIN=""
GOCACHE="/export/home/amandeep/.cache/go-build"
GOEXE=""
GOHOSTARCH="sparc"
GOHOSTOS="solaris"
GOOS="solaris"
GOPATH="/opt/go_pkgs"
GORACE=""
GOROOT="/usr/gnu"
GOTMPDIR=""
GOTOOLDIR="/usr/gnu/libexec/gcc/sparc-sun-solaris2.10/8.2.1"
GCCGO="/usr/gnu/bin/gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build699772262=/tmp/go-build -gno-record-gcc-switches -funwind-tables"
```

### What did you do?

Tried to compile a simple program:

```go
package main

/*
#include <stdio.h>

void testc() {
	printf("Hello cgo");
}
*/
import "C"

func main() {
	C.testc()
}
```

### What did you see instead?

Compilation fails with:

```
./abc.go:13:5: call of non-function C.testc
```
OS-Solaris,NeedsInvestigation
low
Critical
363,441,810
angular
AnimationPlayer sometimes throws
## I'm submitting a...

<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>

## Current behavior

I wanted my `directive` to play an animation on the host element while certain conditions were met. To iterate the animation, I briefly played with `animation-iteration-count` with no success, then managed to make it work using `AnimationPlayer.play()`, `reset()` and `onDone` (GitHub repo [here](https://github.com/bob-lee/ng-notes/blob/master/projects/ng-idle-click/src/lib/ng-idle-click.directive.ts#L84)). But if it played an animation and then the host component gets destroyed, it throws:

```
ERROR Error: Uncaught (in promise): Error: Unable to find the timeline player referenced by 1
Error: Unable to find the timeline player referenced by 1
    at TimelineAnimationEngine.push../node_modules/@angular/animations/fesm5/browser.js.TimelineAnimationEngine._getPlayer (browser.js:2183)
    at TimelineAnimationEngine.push../node_modules/@angular/animations/fesm5/browser.js.TimelineAnimationEngine.command (browser.js:2203)
    at InjectableAnimationEngine.push../node_modules/@angular/animations/fesm5/browser.js.AnimationEngine.process (browser.js:3786)
    at AnimationRenderer.push../node_modules/@angular/platform-browser/fesm5/animations.js.AnimationRenderer.setProperty (animations.js:288)
    at issueAnimationCommand (animations.js:96)
    at RendererAnimationPlayer.push../node_modules/@angular/platform-browser/fesm5/animations.js.RendererAnimationPlayer._command (animations.js:75)
    at RendererAnimationPlayer.push../node_modules/@angular/platform-browser/fesm5/animations.js.RendererAnimationPlayer.reset (animations.js:90)
    at NgIdleClickDirective.push../dist/ng-idle-click/fesm5/ng-idle-click.js.NgIdleClickDirective.playDefaultAnimation (ng-idle-click.js:165)
    at NgIdleClickDirective.animationDone (ng-idle-click.js:78)
    at animations.js:172
    at TimelineAnimationEngine.push../node_modules/@angular/animations/fesm5/browser.js.TimelineAnimationEngine._getPlayer (browser.js:2183)
    at TimelineAnimationEngine.push../node_modules/@angular/animations/fesm5/browser.js.TimelineAnimationEngine.command (browser.js:2203)
    at InjectableAnimationEngine.push../node_modules/@angular/animations/fesm5/browser.js.AnimationEngine.process (browser.js:3786)
    at AnimationRenderer.push../node_modules/@angular/platform-browser/fesm5/animations.js.AnimationRenderer.setProperty (animations.js:288)
    at issueAnimationCommand (animations.js:96)
    at RendererAnimationPlayer.push../node_modules/@angular/platform-browser/fesm5/animations.js.RendererAnimationPlayer._command (animations.js:75)
    at RendererAnimationPlayer.push../node_modules/@angular/platform-browser/fesm5/animations.js.RendererAnimationPlayer.reset (animations.js:90)
    at NgIdleClickDirective.push../dist/ng-idle-click/fesm5/ng-idle-click.js.NgIdleClickDirective.playDefaultAnimation (ng-idle-click.js:165)
    at NgIdleClickDirective.animationDone (ng-idle-click.js:78)
    at animations.js:172
    at resolvePromise (zone.js:814)
    at zone.js:877
    at ZoneDelegate.push../node_modules/zone.js/dist/zone.js.ZoneDelegate.invokeTask (zone.js:421)
    at Object.onInvokeTask (core.js:4053)
    at ZoneDelegate.push../node_modules/zone.js/dist/zone.js.ZoneDelegate.invokeTask (zone.js:420)
    at Zone.push../node_modules/zone.js/dist/zone.js.Zone.runTask (zone.js:188)
    at drainMicroTaskQueue (zone.js:595)
    at ZoneTask.push../node_modules/zone.js/dist/zone.js.ZoneTask.invokeTask [as invoke] (zone.js:500)
    at invokeTask (zone.js:1540)
    at IDBRequest.globalZoneAwareCallback (zone.js:1566)
```

## Expected behavior

[`AnimationPlayer`](https://github.com/angular/angular/blob/master/packages/animations/browser/src/render/timeline_animation_engine.ts#L107) shouldn't throw when a player is being reused by calling `reset()` / `destroy()` and `play()`.

## Minimal reproduction of the problem with instructions

Tried to reproduce on [Stackblitz](https://stackblitz.com/edit/angular-animation-player-destroy-issue?file=src%2Fapp%2Fapp.component.ts) but couldn't (it didn't throw). Could reproduce on the live app by following the steps below:

1. Log in [here](https://ng-notes-abb75.firebaseapp.com/login)
2.
On devtools' network tab, throttle with long latency, say 2.5s to make it animate 3. Go into `Issues` group, add any note and save (button should be animated) 4. Click user photo and logout (should throw) ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> ## Environment <pre><code> Angular version: 6.0.3 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [x] Chrome (desktop) version 69 - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: 8.11.2 <!-- run `node --version` --> - Platform: Windows<!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
type: bug/fix,area: animations,freq2: medium,P4
low
Critical
363,471,344
rust
panic on failed dead code analysis
[playground](https://play.rust-lang.org/?gist=328f9cc0e4ebf8a32a91d7b235dd1127&version=nightly&mode=release&edition=2018) ``` rust const FOO: [u8; 2] = [42; 2]; fn main() { if FOO.len() > 2 { println!(" {}", FOO[3]); } } ``` I would expect that the constant propagation pass would prune the `println!` and possibly issue a warning about it. the result instead: ``` thread 'main' panicked at 'Tried to access element 3 of array/slice with length 2', librustc_mir/interpret/place.rs:265:17 stack backtrace: 0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49 1: std::sys_common::backtrace::print at libstd/sys_common/backtrace.rs:71 at libstd/sys_common/backtrace.rs:59 2: std::panicking::default_hook::{{closure}} at libstd/panicking.rs:211 3: std::panicking::default_hook at libstd/panicking.rs:227 4: rustc::util::common::panic_hook 5: std::panicking::rust_panic_with_hook at libstd/panicking.rs:481 6: std::panicking::continue_panic_fmt at libstd/panicking.rs:391 7: std::panicking::begin_panic_fmt at libstd/panicking.rs:346 8: rustc_mir::interpret::place::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::mplace_projection 9: rustc_mir::interpret::place::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::place_projection 10: rustc_mir::interpret::place::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::eval_place 11: rustc_mir::interpret::step::<impl rustc_mir::interpret::eval_context::EvalContext<'a, 'mir, 'tcx, M>>::run 12: rustc_mir::const_eval::eval_body_using_ecx 13: rustc_mir::const_eval::const_eval_provider 14: rustc::ty::query::__query_compute::const_eval 15: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::const_eval<'tcx>>::compute 16: rustc::dep_graph::graph::DepGraph::with_task_impl 17: rustc::ty::context::tls::with_related_context 18: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, 
Q>>::start 19: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job 20: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::get_query 21: rustc::ty::query::<impl rustc::ty::context::TyCtxt<'a, 'tcx, 'lcx>>::const_eval 22: rustc_mir::monomorphize::collector::collect_items_rec 23: rustc_mir::monomorphize::collector::collect_crate_mono_items::{{closure}} 24: rustc::util::common::time 25: rustc_mir::monomorphize::collector::collect_crate_mono_items 26: rustc::util::common::time 27: rustc_codegen_llvm::base::collect_and_partition_mono_items 28: rustc::ty::query::<impl rustc::ty::query::config::QueryAccessors<'tcx> for rustc::ty::query::queries::collect_and_partition_mono_items<'tcx>>::compute 29: rustc::dep_graph::graph::DepGraph::with_task_impl 30: rustc::ty::context::tls::with_related_context 31: <rustc::ty::query::plumbing::JobOwner<'a, 'tcx, Q>>::start 32: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::force_query_with_job 33: rustc::ty::query::plumbing::<impl rustc::ty::context::TyCtxt<'a, 'gcx, 'tcx>>::get_query 34: rustc_codegen_llvm::base::codegen_crate 35: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_utils::codegen_backend::CodegenBackend>::codegen_crate 36: rustc::util::common::time 37: rustc_driver::driver::phase_4_codegen 38: rustc_driver::driver::compile_input::{{closure}} 39: rustc::ty::context::tls::enter_context 40: <std::thread::local::LocalKey<T>>::with 41: rustc::ty::context::TyCtxt::create_and_enter 42: rustc_driver::driver::compile_input 43: rustc_driver::run_compiler_with_pool 44: <scoped_tls::ScopedKey<T>>::set 45: rustc_driver::run_compiler 46: <scoped_tls::ScopedKey<T>>::set 47: syntax::with_globals 48: __rust_maybe_catch_panic at libpanic_unwind/lib.rs:102 49: rustc_driver::run 50: rustc_driver::main 51: std::rt::lang_start::{{closure}} 52: std::panicking::try::do_call at libstd/rt.rs:59 at libstd/panicking.rs:310 53: 
__rust_maybe_catch_panic at libpanic_unwind/lib.rs:102 54: std::rt::lang_start_internal at libstd/panicking.rs:289 at libstd/panic.rs:392 at libstd/rt.rs:58 55: main 56: __libc_start_main 57: <unknown> query stack during panic: #0 [const_eval] const-evaluating `main` #1 [collect_and_partition_mono_items] collect_and_partition_mono_items end of query stack error: aborting due to previous error error: internal compiler error: unexpected panic ```
T-compiler,A-const-eval
low
Critical
363,499,816
go
x/net/html: fuzz this package
Given a couple of bugs reported by @tr3ee from malformed/incomplete tags like: * https://github.com/golang/go/issues/27702 * https://github.com/golang/go/issues/27704 * https://github.com/golang/go/issues/27842 * https://github.com/golang/go/issues/27846 whose reproducers are quite simple and have caused runtime panics or infinite hangs, perhaps fuzzing could help us discover what lurks beyond and even such cases. /cc @namusyaka @dgryski @dvyukov @bradfitz @nigeltao
Testing,help wanted
low
Critical
363,524,079
go
x/text: messages cannot contain '{'
### What version of Go are you using (`go version`)? go version go1.11 darwin/amd64 golang.org/x/text 905a57155faa8230500121607930ebb9dd8e139c ### What did you do? Ran `gotext update` with a message catalog containing the Translation `expected '{'`. ### What did you expect to see? Correct import of message catalog. ### What did you see instead? ``` gotext: generation failed: error: unmatched '}' ``` There are really three problems here. First, the substitution mechanism used in translations is too simplistic; it doesn't have any escaping mechanism to allow for the use of a bare '{' in the string. Second, the error message is wrong; it should say unmatched '{', not unmatched '}'. And third, the error message is utterly useless since it doesn't identify *which* translation has the problem.
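For comparison, a common fix for the first problem is to let a doubled brace stand for a literal one. The toy substituter below shows the idea in plain Go; it is purely illustrative and is not how gotext's parser actually works:

```go
package main

import (
	"fmt"
	"strings"
)

// expand replaces {name} with values from args and treats "{{" / "}}"
// as escaped literal braces, so a message like "expected '{{'" is legal.
func expand(msg string, args map[string]string) string {
	var b strings.Builder
	for i := 0; i < len(msg); i++ {
		switch {
		case strings.HasPrefix(msg[i:], "{{"):
			b.WriteByte('{')
			i++ // skip the second brace
		case strings.HasPrefix(msg[i:], "}}"):
			b.WriteByte('}')
			i++
		case msg[i] == '{':
			j := strings.IndexByte(msg[i:], '}')
			if j < 0 {
				b.WriteString(msg[i:]) // unterminated; pass through
				return b.String()
			}
			b.WriteString(args[msg[i+1:i+j]])
			i += j // resume after the closing brace
		default:
			b.WriteByte(msg[i])
		}
	}
	return b.String()
}

func main() {
	fmt.Println(expand("expected '{{'", nil))
	fmt.Println(expand("hello {Name}", map[string]string{"Name": "gopher"}))
}
```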
NeedsInvestigation
low
Critical
363,533,219
pytorch
Create a custom function using the functions from math_cpu.cc source code in Caffe2?
I would like to write my own convolution function in C++ using the GEMM functions defined in math_cpu.cc. I am planning to write a python wrapper for the function and use it in a python environment. Is this possible? The reason for this is that the dependencies are already present with the installation of caffe2.
caffe2
low
Minor
363,563,000
vscode
Extend tasks API to allow programmatic problem matching
The new tasks APIs (fetchTask(), executeTask(), task events...) make it more convenient to use vscode's own task system to spawn child processes (as opposed to using something like `child_process`, for example). Still, AFAICT, there are two bits missing in the API: 1. A method to get the underlying `Terminal` instance of a running task (maybe through `TaskExecution`?); 2. Events that would be fired whenever something is written to the process' stdout/stderr (that would probably go in the `Terminal` instance itself). This would be very useful for implementing UI feedback (i.e. progress notifications) while using vscode's task system. Right now, the only way I see to do that is to spawn my own process, create an output channel and handle problem matching myself... Does that make sense?
feature-request,api,tasks
medium
Critical
363,641,265
rust
Confusing error message for `impl Trait` consts/statics that use generics
#53542 introduced `impl Trait` bindings in consts and statics (along with let bindings), but of course they cannot be assigned a value of a generic type. The error message is presently a bit confusing: ``` error[E0434]: can't capture dynamic environment in a fn item --> .../bindings.rs:4:29 | 4 | const foo: impl Clone = x; | ^ | = help: use the `|| { ... }` closure form instead ``` Example: ```rust #![feature(impl_trait_in_bindings)] fn a<T: Clone>(x: T) { const foo: impl Clone = x; } ``` A message like `cannot assign value of generic type to const or static of opaque type` would be better, I think (unless there's already a suitable error that can be reused). See also the [`bindings` test](https://github.com/rust-lang/rust/blob/16cf404f9853e716a216be32d05f5215ff821c00/src/test/ui/impl-trait/bindings.rs) from the above PR and its [stderr](https://github.com/rust-lang/rust/blob/16cf404f9853e716a216be32d05f5215ff821c00/src/test/ui/impl-trait/bindings.stderr). CC @cramertj @oli-obk
C-enhancement,A-diagnostics,T-compiler,A-impl-trait,F-impl_trait_in_bindings,requires-nightly
low
Critical
363,674,814
go
cmd/compile: unexpected performance difference accessing slices with different caps
Using tip (go version devel +b794ca64d2 Mon Sep 3 07:14:25 2018 +0000 linux/amd64) While implementing some bounds check optimizations in `image/draw` [CL 136935](https://go-review.googlesource.com/c/go/+/136935) I came across an unexpected performance difference between accessing elements of a slice **s** created like this s := spix[i : i+4 : len(spix)] and s := spix[i : i+4 : i+4] Both forms eliminate bound checks for accessing elements 0-3 of **s** but the second form has a significant performance improvement over the first (see CL for benchmarks). Building with `go build -gcflags="-d=ssa/check_bce/debug=1"` shows no difference in bounds checks between the two forms so presumably it has something to do with the resulting size of the slice. Attached is a simple program and disassembly derived from the `image/draw` code that demonstrates the effect. You can see that the assembly generated for the second form is quite a bit shorter. [bounds.go.txt](https://github.com/golang/go/files/2416170/bounds.go.txt) [bounds.asm.txt](https://github.com/golang/go/files/2416169/bounds.asm.txt)
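The semantic difference between the two forms is only the resulting capacity of the full slice expression, which is easy to confirm with a self-contained sketch (`spix` and `i` here are stand-ins for the variables in the CL):

```go
package main

import "fmt"

func main() {
	spix := make([]byte, 16)
	i := 4

	// Form 1: three-index slice whose cap runs to the end of spix.
	a := spix[i : i+4 : len(spix)]
	// Form 2: cap clamped to exactly the four elements in use.
	b := spix[i : i+4 : i+4]

	// For a full slice expression s[low:high:max], len = high-low
	// and cap = max-low.
	fmt.Println(len(a), cap(a)) // 4 12
	fmt.Println(len(b), cap(b)) // 4 4
}
```

Both slices alias the same backing array; only the second form tells the compiler that nothing past index 3 is reachable, which appears to be what enables the shorter code.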
Performance,NeedsFix,compiler/runtime
medium
Critical
363,676,325
kubernetes
Allow adding more volumeClaimTemplates to an existing statefulSet
<!-- This form is for bug reports and feature requests ONLY! If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). If the matter is security related, please disclose it privately via https://kubernetes.io/security/. --> /kind feature /sig storage **What happened**: Currently, you get this if you want to update an existing statefulSet with a new volumeClaimTemplate: ``` Error: UPGRADE FAILED: StatefulSet.apps "my-app" is invalid: spec: Forbidden: updates to statefulSet spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden. ``` **What you expected to happen**: Allow the creation of volumeClaimTemplates in a statefulSet. **How to reproduce it (as minimally and precisely as possible)**: For example: ``` volumeClaimTemplates: - metadata: name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi + - metadata: + name: data2 + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 100Gi ``` **Anything else we need to know?**: Some background [here](https://stackoverflow.com/questions/52502074/add-additional-volumeclaimteplate-to-statefulset) **Environment**: - Kubernetes version (use `kubectl version`): all - Cloud provider or hardware configuration: all - OS (e.g. from /etc/os-release): all - Kernel (e.g. `uname -a`): all - Install tools: - Others:
sig/storage,kind/feature
high
Critical
363,717,750
pytorch
cuda test hangs if GPUs in Exclusive Process mode
Problem was seen on ppc64le, but should happen elsewhere as well (at least on other linux platforms). Problem is present since at least 0.4.1 (probably earlier) and still exists. Problem is seen with Anaconda's python 3.6; haven't tested with other interpreters. This is an issue in the test script, rather than with PyTorch itself. Test `test_multinomial_invalid_probs_cuda` will hang if the system's GPUs are in Exclusive Process mode, rather than in Default mode. See: https://docs.nvidia.com/cuda/cuda-c-programming-guide/#compute-modes The problem occurs when the `test_multinomial_invalid_probs_cuda` test spawns child processes. Those fail to access the GPU because the main process already has an active context. The child processes fail at the top of `test_cuda.py` while trying to check for MAGMA support: ``` if TEST_CUDA: torch.ones(1).cuda() # has_magma shows up after cuda is initialized TEST_MAGMA = torch.cuda.has_magma ``` That code raises a RuntimeError: ``` ... File "/tmp/tmp.ymeGX7otav/test/test_cuda.py", line 34, in torch.ones(1).cuda() # has_magma shows up after cuda is initialized RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable ``` That then repeats over and over forever. Here's a test program that demonstrates the problem: ``` import multiprocessing as mp import os # True if you have GPUs in Exclusive mode; False to simulate if True: import torch torch.ones(1).cuda() else: if mp.current_process().name != 'MainProcess': raise RuntimeError('simulate Exclusive Mode exception') def _spawn_method(method, ctype): ctx = mp.get_context(ctype) with ctx.Pool(1) as pool: return pool.map(method, [1, 2, 3]) def f(arg): return 'foo' if __name__ == '__main__': print('From fork: {}'.format(_spawn_method(f, 'fork'))) print('From spawn: {}'.format(_spawn_method(f, 'spawn'))) ``` Note the problem does not occur when `fork` is used as the multiprocessing context method, only when `spawn` is used. 
I'm not exactly sure what causes the multiprocessing code to get stuck here, but I suspect it's similar to the issue described at: http://jessenoller.com/2009/01/08/multiprocessingpool-and-keyboardinterrupt/ I'm also not sure of the best fix. Some possibilities are: - Switch the context method to `fork`? But is that available on Windows? - Add a `try`/`except` in the code that's testing for MAGMA? But what to do there that both prevents the hang and doesn't disturb the rest of `test_cuda` if some other unrelated failure occurs? - Detect Exclusive Process mode and skip this test in that case? Compute mode is included in the CUDA Device Properties, but that field isn't captured / exposed by PyTorch now. Would require work in `torch/csrc/cuda/Module.cpp`. - Document a restriction, recommendation, or caution for Exclusive Mode. cc @ngimel
todo,module: cuda,triaged
low
Critical
363,744,626
godot
Material not saved after shader saved to its own file
**Godot version:** 3.0.6 **OS/device including version:** macOS 10.13.6 **Issue description:** When a custom shader that's associated with a material is saved to its own file, the material is not updated on disk, causing an older version of the shader to be used the next time the project is opened (and in some cases when the scene is run), while the new version is used in the editor until the editor is closed. **Steps to reproduce:** Use the Godot UI for all of the following steps: 1. Create a MeshInstance in a new scene (Optional: Assign a Sphere mesh to the Mesh property to see changes) 2. Assign a "New ShaderMaterial" to the "Material Override" property (under the Geometry section) 3. Click on the shader material you've just created to open it 4. Assign a "New Shader" to the "Shader" property 5. Click on the shader you've just created and type some basic shader code like "shader_type spatial; // Version 1" (Optional: Assign e.g. a red color to ALBEDO in the fragment function to verify that your mesh is now using this shader) 6. Go back to the Shader Material you had created, and use "Save As" in the Inspector to save it to a file (save it as a .tres file). 7. Save your scene to make sure everything is saved. 8. Click on the "Shader" again in the Inspector to open the shader again 9. Use the Inspector's "Save As" menu to save the shader to its own .shader file 10. Change shader code, e.g. to "shader_type spatial; // Version 2" (Optional: Assign e.g. a green color to ALBEDO in the fragment function to verify that your mesh is now using this shader in the editor) 11. Save the scene again If you've assigned a sphere to the MeshInstance and changed the ALBEDO color in the shader to green, now your sphere should look green in the editor. However, if you inspect the files in your file system, you'll notice that you have two versions of the shader code now, one in the material .tres file, and the other in the .shader file. 
While the editor uses the shader code in the .shader file, if you close godot and reload your project, the editor will revert back to using the old version of the shader from the .tres file. (As a side note, running the scene -- without closing godot -- also seems to use the saved version of the material, though I believe I've had one case where it used the in-memory version.) **Further analysis:** After step 11, do the following without closing Godot: 12. Go back to the Material 13. Increase "Render Priority" to 1, then decrease it back to 0 14. Save the scene again Now, if you inspect the files in the file system, you'll notice that the material file will be updated to point to the .shader file instead of including its own, stale shader code. **Conclusion:** The problem seems to be arising from the Material not being marked as dirty (for save operations) when its shader changes after the shader is saved to a file. Although the material is updated in memory correctly, because it's not marked as dirty, its resource file on disk doesn't get updated when saving the scene, until some other change is performed on the material to force it to save (e.g. manually saving it or changing some other arbitrary property of it, like "Render Priority", before saving the scene). The solution should be very simple: a material resource should be marked as dirty whenever its shader property changes, regardless of the reason for the change.
bug,topic:editor
low
Major
363,757,519
opencv
Undefined reference to 'gluLookAt' & 'gluPerspective'
##### System information (version) - OpenCV => 3.4.0-rc - Operating System / Platform => Fedora 28 x86_64 - Compiler => C++ ##### Detailed description Trying to compile from source but getting the following error: [ 86%] Linking CXX executable ../../bin/cpp-example-detect_mser CMakeFiles/example_detect_mser.dir/detect_mser.cpp.o: In function `draw(void*)': detect_mser.cpp:(.text._ZL4drawPv+0x5a): undefined reference to `gluLookAt' CMakeFiles/example_detect_mser.dir/detect_mser.cpp.o: In function `main': detect_mser.cpp:(.text.startup.main+0x2523): undefined reference to `gluPerspective' collect2: error: ld returned 1 exit status make[2]: *** [samples/cpp/CMakeFiles/example_detect_mser.dir/build.make:130: bin/cpp-example-detect_mser] Error 1 make[1]: *** [CMakeFiles/Makefile2:24582: samples/cpp/CMakeFiles/example_detect_mser.dir/all] Error 2 make: *** [Makefile:163: all] Error 2 ##### Steps to reproduce 1. Clone contrib $ git clone https://github.com/opencv/opencv_contrib.git $ cd opencv_contrib $ git checkout 3.4.0 $ cd .. 2. Clone opencv $ git clone https://github.com/opencv/opencv.git $ cd opencv $ git checkout 3.4.0-rc 3. Compiling $ mkdir build $ cd build $ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules .. $ make -j6
bug,category: build/install,category: samples
low
Critical
363,767,735
pytorch
Stop using make_intrusive directly; provide some make_tensor
Right now, if you are working with the lowest level of abstraction and you want to *create* a tensor, the current idiom is something like `Tensor(c10::make_intrusive<SparseTensorImpl>(some, args, ...))`. On further reflection, this is actually a bit strange, because the constructor you're using in this case is `Tensor(c10::intrusive_ptr<TensorImpl, UndefinedTensorImpl>)` and it's not obvious that `c10::intrusive_ptr<SparseTensorImpl>` is convertible to this. (It is, courtesy of #11260.) But what we really should do is just come up with a new constructor which handles all of the intrusive pointer details under the hood. Something like: ``` template <typename T, typename... Args> Tensor make_tensor(Args&&... args) { return Tensor(c10::make_intrusive<T>(std::forward<Args>(args)...)); } ``` Then the user-visible interface doesn't require looking into these conversions. cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang @gchanan This should be easy to do, if you want to attempt this task, CC me and I can review.
module: internals,triaged
low
Minor
363,861,244
scrcpy
Detect incompatible Android version (API < 21)
Initial title: **Segmentation fault when trying scrcpy on Kindles** ``` /usr/local/share/scrcpy/scrcpy-server.jar: 1 file pushed. 1.8 MB/s (17874 bytes in 0.010s) error: closed 2018-09-26 01:12:43.306 scrcpy[2646:20517898] ERROR: "adb reverse" returned with value 1 2018-09-26 01:12:43.306 scrcpy[2646:20517898] WARN: 'adb reverse' failed, fallback to 'adb forward' Segmentation fault ``` Model: Kindle Fire HDX 8.9 (3rd generation)
enhancement,android4
low
Critical
363,919,891
vscode
[html] support less syntax in html file
As far as I can tell, it seems like vscode doesn't support less and sass in html files, so can we make an extension to support it? Something like this: ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Document</title> <style lang="less"> .a { width: 200px; background-color: #af1b3b; .box { flex-direction: column; justify-content: center; align-items: center; } } </style> </head> <body> </body> </html> ```
feature-request,html
medium
Minor
364,020,151
flutter
Unhandled exception when calling first time RenderRepaintBoundary.toImage()
**flutter doctor -v** ``` [√] Flutter (Channel beta, v0.8.2, on Microsoft Windows [Version 10.0.17134.285], locale de-DE) • Flutter version 0.8.2 at C:\flutter • Framework revision 5ab9e70727 (3 weeks ago), 2018-09-07 12:33:05 -0700 • Engine revision 58a1894a1c • Dart version 2.1.0-dev.3.1.flutter-760a9690c2 [√] Android toolchain - develop for Android devices (Android SDK 27.0.3) • Android SDK at C:\Users\marko\AppData\Local\Android\sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-27, build-tools 27.0.3 • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b02) • All Android licenses accepted. [√] Android Studio (version 2.3) • Android Studio at C:\Program Files\Android\Android Studio_2_3_3 • Flutter plugin version 23.2.2 • Dart plugin version 173.4700 • Java version OpenJDK Runtime Environment (build 1.8.0_112-release-b06) [√] Android Studio (version 3.1) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin version 28.0.1 • Dart plugin version 173.4700 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b02) [√] VS Code, 64-bit edition (version 1.24.1) • VS Code at C:\Program Files\Microsoft VS Code • Flutter extension version 2.14.0 [√] Connected devices (1 available) • FIG LX1 • XEDDU18122003040 • android-arm64 • Android 8.0.0 (API 26) • No issues found! ``` **Error message log from Console:** ``` E/flutter (25726): [ERROR:flutter/shell/common/shell.cc(181)] Dart Error: Unhandled exception: E/flutter (25726): 'package:flutter/src/rendering/proxy_box.dart': Failed assertion: line 2623 pos 12: '!debugNeedsPaint': is not true. 
E/flutter (25726): #0 _AssertionError._doThrowNew (dart:core/runtime/liberrors_patch.dart:40:39) E/flutter (25726): #1 _AssertionError._throwNew (dart:core/runtime/liberrors_patch.dart:36:5) E/flutter (25726): #2 RenderRepaintBoundary.toImage (package:flutter/src/rendering/proxy_box.dart:2623:12) E/flutter (25726): #3 _PngHomeState._capturePng (file:///C:/Workspaces/AS/flutter/screenshot_a_widget/screenshoting_a_widget/lib/main.dart:49:37) E/flutter (25726): <asynchronous suspension> E/flutter (25726): #4 _InkResponseState._handleTap (package:flutter/src/material/ink_well.dart:503:14) E/flutter (25726): #5 _InkResponseState.build.<anonymous closure> (package:flutter/src/material/ink_well.dart:558:30) E/flutter (25726): #6 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:102:24) E/flutter (25726): #7 TapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:242:9) E/flutter (25726): #8 TapGestureRecognizer.handlePrimaryPointer (package:flutter/src/gestures/tap.dart:175:7) E/flutter (25726): #9 PrimaryPointerGestureRecognizer.handleEvent (package:flutter/src/gestures/recognizer.dart:315:9) E/flutter (25726): #10 PointerRouter._dispatch (package:flutter/src/gestures/pointer_router.dart:73:12) E/flutter (25726): #11 PointerRouter.route (package:flutter/src/gestures/pointer_router.dart:101:11) E/flutter (25726): #12 _WidgetsFlutterBinding&BindingBase&GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:143:19) E/flutter (25726): #13 _WidgetsFlutterBinding&BindingBase&GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:121:22) E/flutter (25726): #14 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerEvent (package:flutter/src/gestures/binding.dart:101:7) E/flutter (25726): #15 _WidgetsFlutterBinding&BindingBase&GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:64:7) E/flutter (25726): #16 
_WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:48:7) E/flutter (25726): #17 _invoke1 (dart:ui/hooks.dart:142:13) E/flutter (25726): #18 _dispatchPointerDataPacket (dart:ui/hooks.dart:99:5) ``` **Source code** of the program (mainly taken from the docs of proxy_box.dart in class RenderRepaintBoundary method: toImage): ```dart import 'dart:async'; import 'dart:typed_data'; import 'dart:ui' as ui; import 'package:flutter/material.dart'; import 'package:flutter/rendering.dart'; import 'package:flutter/services.dart'; void main() => runApp(new MyApp()); class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return new MaterialApp( title: 'Flutter Demo', theme: new ThemeData( primarySwatch: Colors.blue, ), home: new PngHome(), ); } } class PngHome extends StatefulWidget { PngHome({Key key}) : super(key: key); @override _PngHomeState createState() => _PngHomeState(); } class _PngHomeState extends State<PngHome> { GlobalKey globalKey = GlobalKey(); Future<void> _capturePng() async { RenderRepaintBoundary boundary = globalKey.currentContext.findRenderObject(); ui.Image image = await boundary.toImage(); ByteData byteData = await image.toByteData(format: ui.ImageByteFormat.png); Uint8List pngBytes = byteData.buffer.asUint8List(); print(pngBytes); } @override Widget build(BuildContext context) { return RepaintBoundary( key: globalKey, child: Container( color: Colors.green, child: Center( child: FlatButton( color: Colors.white, child: Text( 'Hello World', textDirection: TextDirection.ltr, ), onPressed: _capturePng, ), ), ), ); } } ``` **Reproduction:** - Start app - hit "Hello world" button Optional for wondering: - hit "Hello world" button again, then it is working. Expected: No exception on the first time hitting the button.
c: crash,framework,engine,c: rendering,customer: octopod,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-engine,triaged-engine
low
Critical
364,060,336
pytorch
Caffe2 issues with using Glog without GFlags
## Issue description

Trying to compile Caffe2 with a Glog that is not compiled with GFlags causes compilation errors. Is it required to build Glog with GFlags for Caffe2? If I `apt install libgoogle-glog-dev`, then I do not get the compilation errors; however, this will also pull in GFlags.

Also, there does not appear to be a way to specify a custom install location for Glog. I tried using `glog_DIR`, but it had no effect. I did manage to find `GLOG_LIBRARY` and `GLOG_INCLUDE_DIR`, which seem to work, but they are not in the list of available options shown in ccmake.

## Code example

CMake configuration command:

```
cmake -DBUILD_SHARED_LIBS=ON -DCMAKE_BUILD_TYPE:STRING=Release -DUSE_GFLAGS=OFF -DUSE_GLOG=ON -DUSE_LMDB=OFF -DUSE_LEVELDB=OFF -DUSE_NUMA=ON -DUSE_OPENCV=OFF -DBUILD_PYTHON=OFF -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_MPI=OFF -DUSE_NCCL=OFF -DUSE_GLOO=OFF -DUSE_IDEEP=OFF -DUSE_MKLML=OFF -DUSE_MKLDNN=OFF -DGLOG_LIBRARY=/path/to/lib/libglog.so -DGLOG_INCLUDE_DIR=/path/to/glog/include
```

Glog was installed from source from tag v0.3.5: https://github.com/google/glog/tree/v0.3.5

These are the errors I am getting when compiling:

```
caffe2/core/logging.cc:84:12: error: ‘FLAGS_minloglevel’ is already declared in this scope
 using fLI::FLAGS_minloglevel;
caffe2/core/logging.cc:85:12: error: ‘FLAGS_v’ is already declared in this scope
 using fLI::FLAGS_v;
caffe2/core/logging.cc:86:12: error: ‘FLAGS_logtostderr’ is already declared in this scope
 using fLB::FLAGS_logtostderr;
```

## System Info

- PyTorch or Caffe2: Caffe2
- How you installed PyTorch (conda, pip, source): source
- Build command you used (if compiling from source): `cmake -DBUILD_SHARED_LIBS=ON -DCMAKE_BUILD_TYPE=Release -DUSE_GFLAGS=OFF -DUSE_GLOG=ON -DUSE_LMDB=OFF -DUSE_LEVELDB=OFF -DUSE_NUMA=ON -DUSE_OPENCV=OFF -DBUILD_PYTHON=OFF -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_MPI=OFF -DUSE_NCCL=OFF -DUSE_GLOO=OFF -DUSE_IDEEP=OFF -DUSE_MKLML=OFF -DUSE_MKLDNN=OFF -DGLOG_LIBRARY=/path/to/lib/libglog.so -DGLOG_INCLUDE_DIR=/path/to/glog/include`
- OS: Ubuntu 18.04
- PyTorch version: PYTORCH_COMMIT 784d34582828c1ebcb6bd2dfc0b5a4d135c422b9
- Python version: Python 3.6.5 :: Anaconda, Inc.
- CUDA/cuDNN version: 10.0/7.3.0
- GPU models and configuration: TITAN X (Pascal)
- GCC version (if compiling from source): 7.3.0
- CMake version: 3.12.2
- Versions of any other relevant libraries: Glog from source using v0.3.5

CMake flags when building Glog:

```
-DCMAKE_INSTALL_PREFIX=${EXTERNAL_INSTALL_LOCATION} -DBUILD_SHARED_LIBS=ON -DCMAKE_BUILD_TYPE=Release -DWITH_GFLAGS=OFF
```
caffe2
low
Critical
364,065,166
create-react-app
Babel JSX Syntax error when importing from a linked library
### Is this a bug report?

Yes

### Did you try recovering your dependencies?

Yes

### Environment

Running `npx create-react-app --info` does not seem to work. Getting:

```
Please specify the project directory:
  create-react-app <project-directory>

For example:
  create-react-app my-react-app

Run create-react-app --help to see all options.
```

However, I am using:

Node v8.11.3
Yarn 1.9.4
npm 6.4.1
react-scripts 2.0.0-next.2150693d

### Steps to Reproduce

1. Create a new Create React App (the actual app)
2. Create another Create React App (the "library")
3. Link them together: import a component from the library into the app

This used to work with react-scripts 2.0.0-next.a671462c.

### Expected Behavior

The app should compile just fine, as it used to in 2.0.0-next.a671462c.

### Actual Behavior

The application fails to compile with "Syntax error: Unexpected token" in the component that was imported from the other application.

![screen shot 2018-09-26 at 16 36 00](https://user-images.githubusercontent.com/13168478/46087325-5fc2c080-c1aa-11e8-9c50-35f048f41b8b.png)

To me it seems like the babel loader is somehow not transpiling the imported component.

### Reproducible Demo

https://github.com/FelixKuehl/cra-alpha-issue-example
tag: documentation
medium
Critical
364,111,070
electron
better documentation for Debugger.sendCommand callback behavior
https://electronjs.org/docs/api/debugger

- #14810 — It seems perfectly natural to send certain init-type messages like `Page.enable` before loading anything, but they either won't be sent or won't be processed until loading starts. If this delay can't be avoided, the docs should explain that it happens and whether it is the sending or the processing that is delayed.
- #14811 — A successful completion of `sendCommand` delivers an empty plain object as the error. This, along with the shape of the error plain object on failure, should be documented.
enhancement :sparkles:,documentation :notebook:,pr welcome
low
Critical
364,113,017
flutter
Simplify symbolication
Our wiki [1] documents where to find symbol files and how to use them. It's a manual process. We should figure out how to automate this and make it easier for the user. [1] https://github.com/flutter/engine/wiki/Symbolicating-production-crash-stacks
c: new feature,team,tool,a: quality,a: debugging,P2,team-tool,triaged-tool
low
Critical
364,118,448
go
cmd/go: "go doc . foo" not working on Windows?
The documentation notes that starting an identifier with a capital letter forces the current package, but if you're attempting to look up something that is unexported then this workaround doesn't apply, which can be annoying sometimes.

I propose, therefore, that prefixing an identifier with a `.`, such as `go doc .testing`, should force `go doc` to only look up the identifier in the current package. Alternatively, when looking something up in the current package, as specified by the use of a capital letter, and `-u` is given, then it should also match unexported identifiers despite the case mismatch.
OS-Windows,NeedsInvestigation
low
Major
364,209,606
pytorch
Error: Internal Compiler Error (codegen): "there was an error in verifying the lgenfe output!"
If you attempt to put `at::optional` in a struct which is used from a .cu file, you may get this error:

```
Error: Internal Compiler Error (codegen): "there was an error in verifying the lgenfe output!"
```

A minimal reproduction is this program:

```
#include "optional.h"

struct O {
  O() {} // required!
  at::optional<int> x;
};

void blah() {
  O o;
}
```

compiled with `nvcc -c main.cu -std=c++11 --expt-relaxed-constexpr` (the `--expt-relaxed-constexpr` is mandatory).

There is a simple workaround for this problem: **explicitly initialize all optional fields in your constructor.** So for example, the following code will compile successfully:

```
#include "optional.h"

struct O {
  O() : x() {}
  at::optional<int> x;
};

void blah() {
  O o;
}
```
module: build,module: cuda,triaged,internals
low
Critical
364,217,893
TypeScript
Allow tsconfig.json when input files are specified
I don't know why `tsconfig.json` is ignored when input files are specified, even when the `--project` or `-p` option is given. In my opinion, the right implementation would be:

+ If no `--project` or `-p` option is given: ignore tsconfig.json
+ Otherwise: use the configuration file specified. Any options given on the command line should override those from the configuration file.

E.g. the include/exclude keys of the `tsconfig.json` would be ignored when input files are specified.
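The precedence the report asks for (command-line options override config-file options, unset flags fall through) is the usual layered-config pattern. A minimal, language-agnostic sketch of that merge in Python — the function name and option keys are illustrative only, not tsc's actual internals:

```python
def effective_options(config_file_options, cli_options):
    """CLI options override config-file options; CLI keys left unset fall through."""
    merged = dict(config_file_options)
    # Only explicitly provided CLI values (non-None) take precedence.
    merged.update({k: v for k, v in cli_options.items() if v is not None})
    return merged

# Hypothetical example: tsconfig values plus two explicit CLI flags.
tsconfig = {"target": "es5", "strict": True, "include": ["src/**/*"]}
cli = {"target": "es2017", "outDir": "build", "include": None}

print(effective_options(tsconfig, cli))
# {'target': 'es2017', 'strict': True, 'include': ['src/**/*'], 'outDir': 'build'}
```

Under this scheme, `include`/`exclude` from tsconfig.json would survive unless input files are given on the command line, which is exactly the behavior the report proposes.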
Suggestion,In Discussion
high
Critical
364,227,023
rust
linkcheck should check links in diagnostics messages
The linkcheck tool does not check that links emitted by diagnostics messages are correct. E.g. we have at least one wrong link as per issue #54555. It would be useful to have linkcheck be able to check those links.
A-testsuite,C-enhancement,T-infra,T-release
low
Major
364,229,846
create-react-app
2.0.0 Source map build performance
### Is this a bug report?

No

### Environment

Environment:
  OS: macOS High Sierra 10.13.6
  Node: 8.12.0
  Yarn: 1.9.4
  npm: 6.4.1
  Watchman: 4.9.0
  Xcode: Xcode 9.4.1 Build version 9F2000
  Android Studio: 3.0 AI-171.4443003

Packages: (wanted => installed)
  react: ^16.5.2 => 16.5.2
  react-dom: ^16.5.2 => 16.5.2
  react-scripts: ^2.0.0 => 2.0.0

### Steps to Reproduce

1. `npx create-react-app test-app` # must use [email protected]
2. `yarn add britecharts-react`
3. Render any graph in App.js
4. `yarn build`

### Expected Behavior

`yarn build` runs in a reasonable time.

### Actual Behavior

Build time jumps to 90 seconds, from 15 seconds.

### Reproducible Demo

[Example Project](https://github.com/cliedeman/cra2-sourcemap-performance)

```
[email protected] 14.03s. (without britecharts)
[email protected] 89.04s (with britecharts)
[email protected]: 28.73s. (with britecharts)
[email protected] : 6.93s. (without britecharts)
```
issue: needs investigation
low
Critical
364,237,713
go
x/tools/playground: vet step uses different integer sizes than build step
These two links both emit red errors:

https://play.golang.org/p/b1Bm8V8Eh4S
https://play.golang.org/p/0ZYaKc42JZe

One of them says that the array passed to the function is 16 bytes but the function requires 12, and the other one says that the array is 12 bytes but the function requires 16. The function definition is **exactly** the same in both cases, so they can't both be correct.

CC @dmitshur @griesemer @bradfitz
NeedsDecision
low
Critical
364,246,755
flutter
Add state persistence listener to PluginRegistry.Registrar
I'm not sure about the iOS side of things. To solve #17950 and to provide convenient solutions to #6827, plugins must have a chance to persist some state before the Android host activity is destroyed. On Android, `onSaveInstanceState` on the activity is called before `onDestroy` and allows you to add values to a bundle. State restoration usually happens in the `onCreate` method of the activity. I propose the addition of a new listener interface to the `PluginRegistry`:

```java
interface StatePersistenceListener {
  void onSaveInstanceState(Bundle outState);
  void onRestoreInstanceState(Bundle savedInstanceState);
}
```

Use cases:
-------------
* A router plugin that stores the current route even when the app is killed in the background
* A fix for the image_picker so that it works on all Android devices
engine,c: proposal,P2,a: plugins,team-engine,triaged-engine
low
Minor
364,319,906
godot
High frame delay causes rendering issues with auto exposure enabled [GLES3]
**Godot version:**
3.1.alpha 5a03d50

**OS/device including version:**
Windows 10
GPU: NVIDIA GeForce GTX 1070 MQ, driver: 411.63
GPU: Intel UHD Graphics 630

**Issue description:**
When the game starts, just as the first scene renders, it appears frozen for a brief second, with distorted colors:

![Flash of color](https://thumbs.gfycat.com/TastyShortFlycatcher-size_restricted.gif)

**Steps to reproduce:**
Unable to reproduce in a minimal project. The issue can be observed with the [demo version of ΔV: Rings of Saturn 0.20.0, available on itch.io](https://koder.itch.io/dv-rings-of-saturn-demo) - observed with Windows builds. It might be related to resource loading and the auto-exposure of the `WorldEnvironment`. Going to work around it by providing a blank/black initial scene.
bug,topic:rendering,confirmed
medium
Major
364,354,305
go
wasm: scheduling of JavaScript callbacks is not fair
### What version of Go are you using (`go version`)?

go version go1.11 linux/amd64

### Does this issue reproduce with the latest release?

yes

### Description

I ran the webassembly benchmark in https://djhworld.github.io/post/2018/09/21/i-ported-my-gameboy-color-emulator-to-webassembly/ and, as also noted on that page, the benchmark does not properly process keyboard input on Chrome. See https://bugs.chromium.org/p/v8/issues/detail?id=8227 for the v8 tracking bug.

Due to a performance issue in v8, the compiled Go code runs slower in v8 than in Firefox. As a consequence, there is more pressure on the Go scheduler. In particular, there is nearly always a goroutine ready to be executed, with very few idle pauses. Looking at the scheduler implementation at https://github.com/golang/go/blob/eae5fc88c13d6902a7fe9c595fb02155eb037cbe/src/runtime/proc.go#L2368 this leads to starvation of the goroutines that are waiting for a callback from JavaScript. In the case of the Gameboy emulator, this means that no more I/O events are being processed.
NeedsInvestigation,arch-wasm
low
Critical
364,392,389
TypeScript
Allow for dropping tsconfig.json in arbitrary folder and specifying the source root with tsc command
## Search Terms

1. tsconfig.json
1. TS18003
1. project root

## Suggestion

I have been developing an automated publishing tool, with TypeScript inside. Nice work, by the way! The tsconfig.json file needs to be put in a central config folder rather than the project/source-code root, and I have tried to use `tsc --watch --project path-to-tsconfig.json` to achieve this, which is not working for now. I got a `TS18003` error, and I know why this happens: tsc just wants to remind us that we may have a wrong include or exclude config. 😄

I cannot find any tsc command-line option to use in conjunction with `--project` to specify the project root. I found in the tsc source code that the `compiler.parseJsonConfigFileContent(_, _, basePath);` function lets us specify the project basePath when parsing tsconfig.json, so I want to ask if we can expose a command-line option that passes the same basePath — maybe `--configFileBasePath` or `--configContext`?

Please let me know if there is already a solution for my use case. Thank you for your work!

## Examples

`tsc --project path-to-tsconfig.json --configContext path-to-source-code-root`

## Checklist

My suggestion meets these guidelines:

* [X] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback
low
Critical
364,420,676
angular
bug(animations): Element rendered twice when animated
## Minimal reproduction of the problem with instructions

When triggering an animation of an element shown with `*ngIf` while an animation is already running, the element is rendered twice:

<img width="419" alt="bildschirmfoto 2018-09-27 um 12 31 37" src="https://user-images.githubusercontent.com/5589029/46140843-84727300-c252-11e8-90ba-caf24f9b915f.png">

The pink and the blue box should only be shown once. This happens when hitting the toggle button multiple times in the following example: [angular 7.0.0-beta.7](https://stackblitz.com/edit/angular-gitter-swjfyh?file=app/app.component.html)

This also occurs with angular 6.1.9: [angular 6.1.9](https://stackblitz.com/edit/angular-gitter-agjbr7?file=app%2Fapp.component.html)

This issue seems related to https://github.com/angular/angular/issues/23302

Note: at no time does more than one component instance exist; see this example: https://stackblitz.com/edit/angular-gitter-dp3cub

<img width="464" alt="bildschirmfoto 2018-09-27 um 12 58 12" src="https://user-images.githubusercontent.com/5589029/46141913-9a356780-c255-11e8-9fc9-dff38caeb0ba.png">

## What is the motivation / use case for changing the behavior?

It's a bug.

## Environment

<pre><code>
Angular version: 7.0.0-beta.7

Browser:
- [x] Chrome (desktop) version 68
- [x] Firefox version 62
- [x] Safari (desktop) version 12
</code></pre>
type: bug/fix,area: animations,freq2: medium,P4
medium
Critical
364,448,303
pytorch
[caffe2] How to set different learning rates for different layers?
Hi, I would like to ask how to set different learning rates for different layers in Caffe2. Can anyone provide some examples of this? Thanks in advance.
caffe2
low
Minor
364,455,262
TypeScript
Allow to explicitly pass type parameters via JSDoc
## Search Terms

jsdoc generics type parameters constraints

## Suggestion

It seems like it is not possible via JSDoc to explicitly tell the compiler which type parameters to pass to a generic function.

## Use Cases

In TypeScript it is possible to explicitly pass type parameters, such as `mongoose.model<Model, Schema>('modelName', Schema)`, while I could not find a way to do the same with JSDoc.

## Examples

`mongoose.model` has two signatures, the first one with one type parameter and the other with two. To make use of the second signature we must pass types explicitly.

```ts
export function model<T extends Document>(
  name: string,
  schema?: Schema,
  collection?: string,
  skipInit?: boolean
): Model<T>;
export function model<T extends Document, U extends Model<T>>(
  name: string,
  schema?: Schema,
  collection?: string,
  skipInit?: boolean
): U;
```

```ts
// Choose second signature in typescript.
const model = mongoose.model<Model, Schema>('modelName', Schema);

// With JSDoc, compiler always chooses first signature and we receive a type mismatch error.
/** @type {Schema & mongoose.Model<Model>} */
const model = mongoose.model('modelName', Schema);

// But something like this would be great.
const model = mongoose.model/**<Model, Schema>*/('modelName', Schema);
```

My apologies if this is already possible, but I've spent almost a week battling this.

Related: https://github.com/Microsoft/TypeScript-Node-Starter/issues/101

## Checklist

My suggestion meets these guidelines:

* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,In Discussion,Domain: JSDoc,checkJs,Domain: JavaScript
high
Critical
364,458,404
go
proposal: spec: define identifiers to be NFKC-normalized
**Background**

Go 1 allows Unicode letters as part of identifiers. Normally that's quite useful, but sometimes it can lead to confusion: some letters look like _and are acceptable substitutes for_ other similar letters.¹ For example, in [this code snippet](https://play.golang.org/p/RxwYEczV4uM), the declared identifier is U+00B5 (MICRO SIGN), which is `AltGr+m` on the Linux `altgr-intl` layout, while its use is U+03BC (GREEK SMALL LETTER MU), which is on the long-press menu for the `π` key on the Google [Gboard](https://play.google.com/store/apps/details?id=com.google.android.inputmethod.latin&hl=en_US) on-screen keyboard for Android and iOS.

Fortunately, the Unicode standard defines a number of [normalization forms](http://unicode.org/reports/tr15/) that are intended to smooth out these differences. As it happens, `µ` and `μ` are equivalent under the “compatibility” forms NFKC and NFKD.

**Proposal**

I propose that in Go 2, identifiers with the same NFKC normal form (with [identifier modifications](http://unicode.org/reports/tr31/#NFKC_Modifications)) should be considered the same identifier, and `gofmt` should automatically rewrite all identifiers within a program to their normal forms.

**Compatibility**

This change is not strictly Go 1 compatible: a Go 1 program may, for example, define and use distinct variables named `µ` and `μ` in the same scope. I argue that such programs are confusing at best, and actively misleading at worst, so the only programs that would break under this proposal should be fixed regardless of it.

----

(CC @mpvl @griesemer @robpike)

¹ In the example here, U+03BC is preferred over U+00B5: see https://unicode.org/reports/tr25/.

----

Revisions to this proposal:

* Changed to apply the tr31 NFKC modifications for identifiers.
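The µ/μ pair from the snippet above is exactly the equivalence NFKC smooths out. As a quick sanity check — sketched here in Python, simply because its standard library exposes the Unicode normalization forms directly — U+00B5 normalizes to U+03BC under NFKC but not under NFC:

```python
import unicodedata

micro = "\u00b5"  # MICRO SIGN, as typed with AltGr+m
mu = "\u03bc"     # GREEK SMALL LETTER MU, as typed on a Greek layout

# The two code points are distinct...
print(micro == mu)                                 # False
# ...but NFKC maps MICRO SIGN to GREEK SMALL LETTER MU, so under this
# proposal both spellings would name the same identifier.
print(unicodedata.normalize("NFKC", micro) == mu)  # True
# NFC leaves MICRO SIGN alone; the "compatibility" forms (NFKC/NFKD)
# are what make these equivalent.
print(unicodedata.normalize("NFC", micro) == mu)   # False
```

Under the proposal, `gofmt` would rewrite the U+00B5 spelling to the NFKC normal form, so both source spellings would compile to a single identifier.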
LanguageChange,Proposal,LanguageChangeReview
medium
Critical
364,460,232
godot
A lot of errors after delete folder with Script and Node
**Godot version:**
3.1 8fc92ae

**OS/device including version:**
Windows 10

**Issue description:**
After deleting a folder containing a script and a node, Godot shows a lot of errors, and after trying to run the scene, the editor shows more errors and closes the game. These are the errors:

```
ERROR: load_source_code: Condition ' err ' is true. returned: err
   At: modules/gdscript/gdscript.cpp:796
ERROR: load: Condition ' err != OK ' is true. returned: RES()
   At: modules/gdscript/gdscript.cpp:2122
ERROR: Failed loading resource: res://aa/Node2D.gd
   At: core/io/resource_loader.cpp:192
ERROR: poll: res://Node2D.tscn:3 - Parse Error: [ext_resource] referenced nonexistent resource at: res://aa/Node2D.gd
   At: scene/resources/scene_format_text.cpp:439
ERROR: load: Condition ' err != OK ' is true. returned: RES()
   At: core/io/resource_loader.cpp:155
ERROR: Failed loading scene: res://Node2D.tscn
   At: main/main.cpp:1704
ERROR: _get_socket_error: Socket error: 10054
   At: drivers/unix/net_socket_posix.cpp:195
WARNING: cleanup: ObjectDB Instances still exist!
   At: core/object.cpp:2081
ERROR: Failed to get modified time for: C:/Users/rafal/PR/Projekty/qwer/aa/Node2D.gd
   At: drivers/windows/file_access_windows.cpp:306
ERROR: _reload_scripts: Condition ' !rel_script.is_valid() ' is true. Continuing..:
   At: editor/plugins/script_editor_plugin.cpp:728
```

**Steps to reproduce:**
1. Create a Node and save it in a folder
2. Attach a new script to the node and save it in the same folder
3. Save this scene/node
4. Delete this folder
5. Save the Node
6. Try to run the project

Video: https://streamable.com/4gjd8
bug,topic:editor,confirmed
low
Critical
364,468,103
pytorch
[Caffe2] GAN No Gradients in Generator
Hi everyone,

In the process of my Bachelor Degree I am building a DCGAN in Caffe2. After following all the tutorials I have figured out an architecture. But when I run my code, the Generator does not update at all. I did some debugging and found out that the Generator's layers do not receive any gradients at all. Even my tutor can't find the source of the issue. I am grateful for any help! =)

Here is my code:

# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import os
import shutil
#import pickle as pkl
import numpy as np
from matplotlib import pyplot #, image
from caffe2.python import brew, core, workspace, model_helper, optimizer, visualize, memonger, utils
from caffe2.proto import caffe2_pb2

###############################################################################
# Data Import
###############################################################################

workspace.ResetWorkspace()

def DownloadResource(url, path):
    '''Downloads resources from s3 by url and unzips them to the provided path'''
    import requests, StringIO, zipfile
    print("Downloading... {} to {}".format(url, path))
    r = requests.get(url, stream=True)
    z = zipfile.ZipFile(StringIO.StringIO(r.content))
    z.extractall(path)
    print("Completed download and extraction.")

current_folder = os.getcwd()
print("The current folder is {}".format(current_folder))
data_folder = os.path.join(current_folder, 'tutorial_data', 'mnist')
root_folder = os.path.join(current_folder, 'tutorial_files', 'tutorial_mnist')
db_missing = False

if not os.path.exists(data_folder):
    os.makedirs(data_folder)
    print("Your data folder was not found!! This was generated: {}".format(data_folder))

# Look for existing database: lmdb
if os.path.exists(os.path.join(data_folder, "mnist-train-nchw-lmdb")):
    print("lmdb train db found!")
else:
    db_missing = True
if os.path.exists(os.path.join(data_folder, "mnist-test-nchw-lmdb")):
    print("lmdb test db found!")
else:
    db_missing = True

# attempt the download of the db if either was missing
if db_missing:
    print("one or both of the MNIST lmbd dbs not found!!")
    db_url = "http://download.caffe2.ai/databases/mnist-lmdb.zip"
    try:
        DownloadResource(db_url, data_folder)
    except Exception as ex:
        print("Failed to download dataset. Please download it manually from {}".format(db_url))
        print("Unzip it and place the two database folders here: {}".format(data_folder))
        raise ex

if os.path.exists(root_folder):
    print("Looks like you ran this before, so we need to cleanup those old files...")
    shutil.rmtree(root_folder)

os.makedirs(root_folder)
workspace.ResetWorkspace(root_folder)

print("training data folder:" + data_folder)
print("workspace root folder:" + root_folder)

###############################################################################
# Memory Management
###############################################################################

# Can be used later for performance optimization.
def optimize_gradient_memory(model, loss):
    model.net._net = memonger.share_grad_blobs(
        model.net,
        loss,
        set(model.param_to_grad.values()),
        # Due to memonger internals, we need a namescope here. Let's make one up; we'll need it later!
        namescope="memonger_needs_a_namescope",
        share_activations=False)

###############################################################################
# Global Parameters
###############################################################################

device_option = caffe2_pb2.DeviceOption(device_type=caffe2_pb2.CUDA)

# Larger batch size or learning rate leads to loss_d = nan
batch_size = 32
arg_scope = {"order": "NCHW"}
learning_rate = 0.00003

# For Leaky ReLU
alpha = 0.1

# Create labels for D_Real. Only ones. Soft and noisy.
label_real = np.random.rand(batch_size, 2048, 1, 1).astype('float32') * 0.1 + 0.9
# Create labels for D_Fake. Only zeros. Soft and noisy
label_fake = np.random.rand(batch_size, 2048, 1, 1).astype('float32') * 0.1
# Create labels for G. Only ones.
label_g = np.ones(batch_size, dtype=np.int32)

# Dummy Blobs for evading deadlock between G and d_fake.
# For G
dummyblob_g = np.ones(batch_size)
dummyblob_d_loss_fake = np.ones(1)
# For D
dummyblob_d = np.ones(batch_size*28*28).reshape(batch_size, 1, 28, 28)

# Noise Data Input for the Generator
noise = np.random.randn(batch_size, 100).astype(np.float32)

# Insert all relevant data to workspace
with core.DeviceScope(device_option):
    workspace.FeedBlob("tanh", dummyblob_d.astype(np.float32))
    workspace.FeedBlob("sigmoid_d_real", dummyblob_d.astype(np.float32))
    workspace.FeedBlob("sigmoid_d_fake", dummyblob_g.astype(np.float32))
    workspace.FeedBlob("d_loss_fake", dummyblob_d_loss_fake.astype(np.float32))
    workspace.FeedBlob("label_g", label_g)
    workspace.FeedBlob("label_fake", label_fake)
    workspace.FeedBlob("label_real", label_real)
    workspace.FeedBlob("noise", noise)

###############################################################################
# Define Models
###############################################################################

# Manage Input for D_real
def AddInput(model, batch_size, db, db_type):
    # load the data
    data_uint8, label = model.TensorProtosDBInput(
        [], ["data_uint8", "label"], batch_size =
batch_size, db=db, db_type=db_type) # cast the data to float data = model.Cast(data_uint8, "data", to=core.DataType.FLOAT) # scale data from [0,255] to [-1,1] data = model.Scale(data, data, scale=float(1./256 * 2 - 1)) # Caffe2 ist sehr generisch! Tatsächlich wird hier (fast) alles als # Operator behandelt. Dabei auch der ganz simple input! Die Backpropagation # welche die Werte der hidden Layers updated hat hier jedoch nichts verloren. # Deswegen muss caffe2, auf Grund seiner generischen Bauweise, noch einmal # extra gesagt werden, dass hier bitte keine updates erfolgen sollen. # Darum: StopGradient data = model.StopGradient(data, data) return data ############################################################################### # Define Generator def GenModel(model, data): # Input Layer: 1x1x100 # data is noise fc1 = brew.fc(model, data, "fc1", dim_in=100, dim_out=4096) reshape, oldshape = model.net.Reshape( ["fc1"], ["reshape", "oldshape"], shape = (batch_size, 1024, 2, 2)) relu0_g = brew.relu(model, reshape, "relu0_g") # Deconv Block 1: 256x2x2 deconv1 = brew.conv_transpose( model, relu0_g, "deconv1", dim_in=1024, dim_out=512, kernel=2, stride=2 ) batchnorm1_g = brew.spatial_bn( model, deconv1, "batchnorm1_g", 512, epsilon=1e-5, momentum=0.9, is_test=False ) relu1_g = brew.relu(model, batchnorm1_g, "relu1_g") # Deconv Block 2: 128x4x4 deconv2 = brew.conv_transpose( model, relu1_g, "deconv2", dim_in=512, dim_out=256, kernel=4, stride=1 ) batchnorm2_g = brew.spatial_bn( model, deconv2, "batchnorm2_g", 256, epsilon=1e-5, momentum=0.9, is_test=False ) relu2_g = brew.relu(model, batchnorm2_g, 'relu2_g') # Deconv Block 3: 64x7x7 deconv3 = brew.conv_transpose( model, relu2_g, "deconv3", dim_in=256, dim_out=128, kernel=2, stride=2 ) batchnorm3_g = brew.spatial_bn( model, deconv3, "batchnorm3_g", 128, epsilon=1e-5, momentum=0.9, is_test=False ) relu3_g = brew.relu(model, batchnorm3_g, 'relu3_g') # Deconv Block 4: 32x14x14 -> 1x28x28 deconv4 = brew.conv_transpose( model, 
relu3_g, "deconv4", dim_in=128, dim_out=1, kernel=2, stride=2 ) batchnorm4_g = brew.spatial_bn( model, deconv4, "batchnorm4_g", 1, epsilon=1e-5, momentum=0.9, is_test=False ) tanh = brew.tanh(model, batchnorm4_g, "tanh") return tanh ############################################################################### # Define Discriminator Real def DisModel_real(model, data): # convblock 1 28x28 -> 14x14 conv1_d = brew.conv(model, data, "conv1_d", dim_in=1, dim_out=128, kernel=2, stride=2, pad=0) batchnorm1_d = brew.spatial_bn( model, conv1_d, "batchnorm1_d", 128, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu1_d = model.net.LeakyRelu(batchnorm1_d, "lrelu1_d", alpha=alpha) # convblock 2 14x14 -> 7x7 conv2_d = brew.conv(model, lrelu1_d, "conv2_d", dim_in=128, dim_out=256, kernel=2, stride=2, pad=0) batchnorm2_d = brew.spatial_bn( model, conv2_d, "batchnorm2_d", 256, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu2_d = model.net.LeakyRelu(batchnorm2_d, "lrelu2_d", alpha=alpha) # convblock 3 7x7 -> 4x4 conv3_d = brew.conv(model, lrelu2_d, "conv3_d", dim_in=256, dim_out=512, kernel=1, stride=2, pad=0) batchnorm3_d = brew.spatial_bn( model, conv3_d, "batchnorm3_d", 512, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu3_d = model.net.LeakyRelu(batchnorm3_d, "lrelu3_d", alpha=alpha) # convblock 4 4x4 -> 2x2 conv4_d = brew.conv(model, lrelu3_d, "conv4_d", dim_in=512, dim_out=512, kernel=2, stride=2, pad=0) batchnorm4_d = brew.spatial_bn( model, conv4_d, "batchnorm4_d", 512, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu4_d = model.net.LeakyRelu(batchnorm4_d, "lrelu4_d", alpha=alpha) # Flatten 512x2x2 -> 2048x1x1 reshape_d, oldshape_d = model.net.Reshape( ["lrelu4_d"], ["reshape_d", "oldshape_d"], shape=(batch_size,2048,1,1)) sigmoid_real = model.net.Sigmoid(reshape_d, "sigmoid_d_real") return sigmoid_real ############################################################################### # Define Discriminator Fake def DisModel_fake(model, data): # convblock 1 28x28 -> 
14x14 conv1_d = brew.conv(model, data, "conv1_d", dim_in=1, dim_out=128, kernel=2, stride=2, pad=0) batchnorm1_d = brew.spatial_bn( model, conv1_d, "batchnorm1_d", 128, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu1_d = model.net.LeakyRelu(batchnorm1_d, "lrelu1_d", alpha=alpha) # convblock 2 14x14 -> 7x7 conv2_d = brew.conv(model, lrelu1_d, "conv2_d", dim_in=128, dim_out=256, kernel=2, stride=2, pad=0) batchnorm2_d = brew.spatial_bn( model, conv2_d, "batchnorm2_d", 256, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu2_d = model.net.LeakyRelu(batchnorm2_d, "lrelu2_d", alpha=alpha) # convblock 3 7x7 -> 4x4 conv3_d = brew.conv(model, lrelu2_d, "conv3_d", dim_in=256, dim_out=512, kernel=1, stride=2, pad=0) batchnorm3_d = brew.spatial_bn( model, conv3_d, "batchnorm3_d", 512, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu3_d = model.net.LeakyRelu(batchnorm3_d, "lrelu3_d", alpha=alpha) # convblock 4 4x4 -> 2x2 conv4_d = brew.conv(model, lrelu3_d, "conv4_d", dim_in=512, dim_out=512, kernel=2, stride=2, pad=0) batchnorm4_d = brew.spatial_bn( model, conv4_d, "batchnorm4_d", 512, epsilon=1e-5, momentum=0.9, is_test=False ) lrelu4_d = model.net.LeakyRelu(batchnorm4_d, "lrelu4_d", alpha=alpha) # Flatten 512x2x2 -> 2048x1x1 reshape_d, oldshape_d = model.net.Reshape( ["lrelu4_d"], ["reshape_d", "oldshape_d"], shape=(batch_size,2048,1,1)) sigmoid_fake = model.net.Sigmoid(reshape_d, "sigmoid_d_fake") return sigmoid_fake ############################################################################### # Define Training ############################################################################### # Training Operators for D def AddTrainingOperators_D(model_r, model_f, sigmoid_r, sigmoid_f, lr = learning_rate): xent_fake = model_f.net.SigmoidCrossEntropyWithLogits([sigmoid_f, "label_fake"], 'xent_fake') d_loss_fake = model_f.net.AveragedLoss(xent_fake, "d_loss_fake") xent_real = model_r.net.SigmoidCrossEntropyWithLogits([sigmoid_r, "label_real"], 'xent_real') d_loss_real = 
model_r.net.AveragedLoss(xent_real, "d_loss_real") d_loss_total_r = model_r.net.Add(["d_loss_real", "d_loss_fake"], "d_loss_total_r") d_loss_total_f = model_f.net.Add(["d_loss_real", "d_loss_fake"], "d_loss_total_f") model_r.AddGradientOperators([d_loss_total_r]) model_f.AddGradientOperators([d_loss_total_f]) optimizer.build_adam(model_f, lr) optimizer.build_adam(model_r, lr) ############################################################################### # Training Operators for G def AddTrainingOperators_Gen(model, fake_sigmoid, learning_rate=learning_rate): xent = model.net.LabelCrossEntropy([fake_sigmoid, "label_g"], 'xent_g') # compute the expected loss with the help of xent. loss_g = model.net.AveragedLoss(xent, "loss_g") # use the average loss to to add gradient operators to the model model.AddGradientOperators([loss_g]) # Use adam optimizer.build_adam(model, learning_rate) ############################################################################### # Create Models ############################################################################### # Create G generator = model_helper.ModelHelper(name="mnist_gen") # Create D_real discriminator_real = model_helper.ModelHelper( name="mnist_dis_real", arg_scope=arg_scope) # Create D_fake discriminator_fake = model_helper.ModelHelper( name="mnist_dis_fake", arg_scope=arg_scope) # Apply net-definitions with core.DeviceScope(device_option): # Get Data data = AddInput( discriminator_real, batch_size=batch_size, db=os.path.join(data_folder, 'mnist-test-nchw-lmdb'), db_type='lmdb') # With Data from noise vector tanh_gen = GenModel(generator, "noise") # Only filled with data from MNIST. true_sigmoid = DisModel_real(discriminator_real, data) # Only filled with data from G. 
fake_sigmoid = DisModel_fake(discriminator_fake, tanh_gen) # Add Trainingsoperators # For G AddTrainingOperators_Gen(generator, fake_sigmoid, learning_rate) # For D AddTrainingOperators_D(discriminator_real, discriminator_fake, true_sigmoid, fake_sigmoid, learning_rate) ############################################################################### #Initialize the network ############################################################################### workspace.RunNetOnce(discriminator_real.param_init_net) workspace.CreateNet(discriminator_real.net) workspace.RunNetOnce(generator.param_init_net) workspace.CreateNet(generator.net) workspace.RunNetOnce(discriminator_fake.param_init_net) workspace.CreateNet(discriminator_fake.net) print(workspace.Blobs()) ############################################################################### # Run the training procedure ############################################################################### # times the training will be running epochs = 10 steps = int(600/ batch_size) # MNIST size / batch_size -> 1 epoch # Containers for plotting progress loss_d = np.zeros(steps) loss_g = np.zeros(steps) images = np.empty((batch_size,1,28,28), np.float32) print("Total Number of Runs: {}".format(epochs * steps)) for e in range (epochs): # Zum Debuggen G Output als print. Bug: G updatet nicht. Werte bleiben gleich. tanh_out = workspace.FetchBlob('tanh') #print(tanh_out[0][0][0]) for i in range(steps): # Train D workspace.RunNet(discriminator_real.net) workspace.RunNet(discriminator_fake.net) # Noise Data Input for the Generator noise = np.random.randn(batch_size, 100).astype(np.float32) workspace.FeedBlob("noise", noise) # Train G workspace.RunNet(generator.net) loss_d[i] = workspace.FetchBlob("d_loss_total_r") loss_g[i] = workspace.FetchBlob("loss_g") # Nach dem Debugging wieder dekommentieren. 
"""if (i % 50) == 0: print("Round: {}".format(i * (e+1))) print("LOSS D") print(workspace.FetchBlob("d_loss_total_r")) print("LOSS G") print(workspace.FetchBlob("loss_g")) """ ############################################################################### # After the execution is done, let's plot the values. print("Final D loss: {}".format(workspace.FetchBlob("d_loss_total_r"))) print("Final G loss: {}".format(workspace.FetchBlob("loss_g"))) pyplot.plot(loss_d, 'b') pyplot.plot(loss_g, 'r') pyplot.title("Summary of Training Run") pyplot.xlabel("Iteration") pyplot.legend(('Loss_d', 'Loss_g'), loc='upper right') # Plot G Results # Use visualize module to show the examples from the last batch that was fed to the model tanh_out = workspace.FetchBlob('tanh') pyplot.figure() pyplot.title("Last Batch from G") _ = visualize.NCHW.ShowMultiple(tanh_out) # Nur für das Debugging auskommentiert. Später wieder einfügen! #pyplot.show() ############################################################################### And here are the networks blobs. 
[u'AdamOptimizer_1_lr_gpu0', u'AdamOptimizer_2_lr_gpu0', u'lrelu4_d_grad_dims', u'batchnorm1_d', u'batchnorm1_d_b', u'batchnorm1_d_b_first_moment', u'batchnorm1_d_b_grad', u'batchnorm1_d_b_second_moment', u'batchnorm1_d_grad', u'batchnorm1_d_riv', u'batchnorm1_d_rm', u'batchnorm1_d_s', u'batchnorm1_d_s_first_moment', u'batchnorm1_d_s_grad', u'batchnorm1_d_s_second_moment', u'batchnorm1_d_siv', u'batchnorm1_d_sm', u'batchnorm1_g', u'batchnorm1_g_b', u'batchnorm1_g_riv', u'batchnorm1_g_rm', u'batchnorm1_g_s', u'batchnorm1_g_siv', u'batchnorm1_g_sm', u'batchnorm2_d', u'batchnorm2_d_b', u'batchnorm2_d_b_first_moment', u'batchnorm2_d_b_grad', u'batchnorm2_d_b_second_moment', u'batchnorm2_d_grad', u'batchnorm2_d_riv', u'batchnorm2_d_rm', u'batchnorm2_d_s', u'batchnorm2_d_s_first_moment', u'batchnorm2_d_s_grad', u'batchnorm2_d_s_second_moment', u'batchnorm2_d_siv', u'batchnorm2_d_sm', u'batchnorm2_g', u'batchnorm2_g_b', u'batchnorm2_g_riv', u'batchnorm2_g_rm', u'batchnorm2_g_s', u'batchnorm2_g_siv', u'batchnorm2_g_sm', u'batchnorm3_d', u'batchnorm3_d_b', u'batchnorm3_d_b_first_moment', u'batchnorm3_d_b_grad', u'batchnorm3_d_b_second_moment', u'batchnorm3_d_grad', u'batchnorm3_d_riv', u'batchnorm3_d_rm', u'batchnorm3_d_s', u'batchnorm3_d_s_first_moment', u'batchnorm3_d_s_grad', u'batchnorm3_d_s_second_moment', u'batchnorm3_d_siv', u'batchnorm3_d_sm', u'batchnorm3_g', u'batchnorm3_g_b', u'batchnorm3_g_riv', u'batchnorm3_g_rm', u'batchnorm3_g_s', u'batchnorm3_g_siv', u'batchnorm3_g_sm', u'batchnorm4_d', u'batchnorm4_d_b', u'batchnorm4_d_b_first_moment', u'batchnorm4_d_b_grad', u'batchnorm4_d_b_second_moment', u'batchnorm4_d_grad', u'batchnorm4_d_riv', u'batchnorm4_d_rm', u'batchnorm4_d_s', u'batchnorm4_d_s_first_moment', u'batchnorm4_d_s_grad', u'batchnorm4_d_s_second_moment', u'batchnorm4_d_siv', u'batchnorm4_d_sm', u'batchnorm4_g', u'batchnorm4_g_b', u'batchnorm4_g_riv', u'batchnorm4_g_rm', u'batchnorm4_g_s', u'batchnorm4_g_siv', u'batchnorm4_g_sm', u'conv1_d', 
u'conv1_d_b', u'conv1_d_b_first_moment', u'conv1_d_b_grad', u'conv1_d_b_second_moment', u'conv1_d_grad', u'conv1_d_w', u'conv1_d_w_first_moment', u'conv1_d_w_grad', u'conv1_d_w_second_moment', u'conv2_d', u'conv2_d_b', u'conv2_d_b_first_moment', u'conv2_d_b_grad', u'conv2_d_b_second_moment', u'conv2_d_grad', u'conv2_d_w', u'conv2_d_w_first_moment', u'conv2_d_w_grad', u'conv2_d_w_second_moment', u'conv3_d', u'conv3_d_b', u'conv3_d_b_first_moment', u'conv3_d_b_grad', u'conv3_d_b_second_moment', u'conv3_d_grad', u'conv3_d_w', u'conv3_d_w_first_moment', u'conv3_d_w_grad', u'conv3_d_w_second_moment', u'conv4_d', u'conv4_d_b', u'conv4_d_b_first_moment', u'conv4_d_b_grad', u'conv4_d_b_second_moment', u'conv4_d_grad', u'conv4_d_w', u'conv4_d_w_first_moment', u'conv4_d_w_grad', u'conv4_d_w_second_moment', u'd_loss_fake', u'd_loss_fake_grad', u'd_loss_real', u'd_loss_real_grad', u'd_loss_total_f', u'd_loss_total_f_autogen_grad', u'd_loss_total_r', u'd_loss_total_r_autogen_grad', u'data', u'data_grad', u'data_uint8', u'dbreader/home/kanani/workspace/tutorial_data/mnist/mnist-test-nchw-lmdb', u'deconv1', u'deconv1_b', u'deconv1_w', u'deconv2', u'deconv2_b', u'deconv2_w', u'deconv3', u'deconv3_b', u'deconv3_w', u'deconv4', u'deconv4_b', u'deconv4_w', u'fc1', u'fc1_b', u'fc1_w', u'iteration_mutex', u'label', u'label_fake', u'label_g', u'label_real', u'loss_g', u'loss_g_autogen_grad', u'lrelu1_d', u'lrelu1_d_grad', u'lrelu2_d', u'lrelu2_d_grad', u'lrelu3_d', u'lrelu3_d_grad', u'lrelu4_d', u'lrelu4_d_grad', u'noise', u'oldshape', u'oldshape_d', u'optimizer_iteration', u'relu0_g', u'relu1_g', u'relu2_g', u'relu3_g', u'reshape', u'reshape_d', u'reshape_d_grad', u'sigmoid_d_fake', u'sigmoid_d_fake_grad', u'sigmoid_d_real', u'sigmoid_d_real_grad', u'tanh', u'tanh_grad', u'xent_fake', u'xent_fake_grad', u'xent_g', u'xent_g_grad', u'xent_real', u'xent_real_grad'] Thanks everyone! =)
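A guess at the cause, hedged: in Caffe2, `AddGradientOperators` only walks the operators of the net it is called on. Here `loss_g` depends on `sigmoid_d_fake`, which is produced by `discriminator_fake.net`, a different net, so the backward pass in `generator.net` stops at that external blob and never reaches `deconv*`/`fc1`. One common fix is to build the discriminator's forward pass for the fake batch inside the generator model as well (sharing the D weights), roughly:

```
# pseudocode sketch, not tested; param-sharing details between the two
# discriminator copies are omitted
generator = model_helper.ModelHelper(name="mnist_gen")
tanh_gen = GenModel(generator, "noise")
# run D's forward pass inside the SAME net, reusing the D parameters,
# so the gradient chain loss_g -> sigmoid -> tanh -> fc1 is unbroken
fake_sigmoid = DisModel_fake(generator, tanh_gen)
AddTrainingOperators_Gen(generator, fake_sigmoid, learning_rate)
# when training G, apply the resulting gradients only to G's params
```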
caffe2
low
Critical
364,476,930
go
cmd/go: generate should set GOFLAGS env variable
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? ``` go version go1.11 linux/amd64 ``` ### Does this issue reproduce with the latest release? Yes. ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/home/myitcv/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/myitcv/go-modules-by-example/.gopath" GOPROXY="" GORACE="" GOROOT="/home/myitcv/gos" GOTMPDIR="" GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/home/myitcv/go-modules-by-example/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build952142611=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? Given the following setup: ``` $ cd $(mktemp -d) $ go mod init example.com/hello go: creating new go.mod: module example.com/hello $ cat <<EOD >hello.go package hello EOD $ cat <<"EOD" >hello_js_wasm.go // +build constraint package hello //go:generate echo "GOARCH: $GOARCH; GOOS: $GOOS; GOFILE: $GOFILE; GOLINE: $GOLINE; GOPACKAGE: $GOPACKAGE" EOD ``` Then the following two command produce no output, i.e. `go generate` doesn't find any directives: ``` $ go generate $ GOOS=js GOARCH=wasm go generate ``` Only when we combine the setting of `GOOS`, `GOARCH` and build tags is the directive run: ``` $ GOOS=js GOARCH=wasm go generate -tags constraint GOARCH: wasm; GOOS: js; GOFILE: hello_js_wasm.go; GOLINE: 5; GOPACKAGE: hello ``` This makes sense; per `go help generate`, `GOOS` and `GOARCH` are passed through by `go generate` to each directive: ``` $ go help generate usage: go build [-o output] [-i] [build flags] [packages] ... 
Go generate sets several variables when it runs the generator: $GOARCH The execution architecture (arm, amd64, etc.) $GOOS The execution operating system (linux, windows, etc.) $GOFILE The base name of the file. $GOLINE The line number of the directive in the source file. $GOPACKAGE The name of the package of the file containing the directive. $DOLLAR A dollar sign. ... ``` However build tags are not, despite build tags clearly influencing the files that are scanned for directives. So assuming the observed behaviour is correct/consistent, I'd like to propose that we add a `GOBUILDTAGS` env variable to complement `GOOS` and `GOARCH`. This would pass through the `-tags` build flag: ``` -tags 'tag list' a space-separated list of build tags to consider satisfied during the build. For more information about build tags, see the description of build constraints in the documentation for the go/build package. ``` The use case here is needing to pass `GOOS`, `GOARCH` and build tags as supplied to `go generate` to a generator that uses [`go/packages`](https://godoc.org/golang.org/x/tools/go/packages) for type checking. cc @robpike @rsc @bcmills
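If such a variable existed, a generator would consume it the same way it consumes `$GOOS`/`$GOARCH`; a minimal sketch (note that `GOBUILDTAGS` is the proposed name and does not exist today):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// parseBuildTags splits the hypothetical $GOBUILDTAGS value, which would
// mirror the space-separated -tags list passed to the go command.
func parseBuildTags(v string) []string {
	return strings.Fields(v)
}

func main() {
	// A generator invoked by `go generate -tags constraint` would see
	// GOBUILDTAGS="constraint" under this proposal.
	fmt.Println(parseBuildTags(os.Getenv("GOBUILDTAGS")))
}
```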
NeedsInvestigation,FeatureRequest,GoCommand
low
Critical
364,479,890
go
cmd/go: ensure that 'go mod tidy' and `go get -u` do not introduce ambiguous imports
As noted in https://github.com/golang/go/issues/27858#issuecomment-425089484, when a package is moved between two modules that share one of the complete module paths as a prefix, each needs to `require` some baseline version of the other, for two reasons: 1. To ensure that `go get -u` doesn't drop any needed packages. 2. To ensure that any third module that depends on both of the common-prefix modules will always end up in a configuration that provides only one copy of each package. It would be really unfortunate if moving a package meant that you could never run `go mod tidy` the involved modules again, or that you have to apply some manual edit every time you run it. We should either modify `go mod tidy` to avoid dropping requirements that are involved in a cycle with the main module, or add some explicit annotation comment to tell it not to drop an apparently-unnecessary requirement.
NeedsDecision,early-in-cycle,modules
medium
Major
364,492,582
godot
Add "rename_dependencies" to ResourceLoader
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** 3.0.6 <!-- Specify commit hash if non-official. --> **Issue description:** <!-- What happened, and what was expected. --> Hey there, I'm trying to make a script that automates the process of **fixing dependencies**. Currently we have this available only using the interface, right? It's a very tedious job, especially if we have a lot of assets that we use through many projects. I dug into how this works in the Editor itself and I found that it uses [dependency_editor.cpp](https://github.com/godotengine/godot/blob/master/editor/dependency_editor.cpp#L69). There I found the logic of how it worked, and I made a simple script that does it as well. Until there everything was OK. But I reached the deadend when I was finally fixing the dependencies: Currently there is no easy way to reproduce the [`resource_loader.rename_dependencies`](https://github.com/godotengine/godot/blob/master/core/io/resource_loader.cpp#L434). This is not available to us, users. So I think it would be a good thing to make this method available in the `ResourceLoader` class that we can access in Godot.
enhancement,topic:editor
low
Minor
364,502,413
go
cmd/go: 'go mod why' should have an answer for every module in 'go list -m all'
Currently, if you run `go mod why -m` on the output of every module in `go list -m all`, some modules may come back with the answer `(main module does not need module […])`, even after a `go mod tidy`. The reason is that the module graph is conservative: `go list -m all` answers the question “which module versions are implied by the requirements of the main module?”, not “which modules does the main module need to download to complete `go list all`?”.¹ In contrast, `go mod why` explicitly ties its answer to `go list all`. We should have some flag to `go mod why` to answer the related `go list -m all` question, “what path of requirements imposes the version requirement on module `x`?” ---- ¹The command to answer the latter question, as it turns out, is: ``` go list -f '{{with .Module}}{{.Path}} {{.Version}}{{end}}' all | sort -u ```
NeedsFix,FeatureRequest,modules
high
Critical
364,505,506
go
testing: No way to view untruncated output from benchmarks
### What version of Go are you using (`go version`)? ``` go version go1.10.3 linux/amd64 ``` ### Does this issue reproduce with the latest release? Unknown, but the documentation has not changed. ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/home/barts/.cache/go-build" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/barts/go" GORACE="" GOROOT="/usr/lib/go-1.10" GOTMPDIR="" GOTOOLDIR="/usr/lib/go-1.10/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build014311929=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? I'm running a benchmark with the verbose flag turned on (`go test -bench . -v`) that calls `b.Logf()` multiple times. ### What did you expect to see? All logging output, like with tests. ### What did you see instead? The first few log calls, followed by `... [output truncated]`)
NeedsDecision
low
Critical
364,554,917
node
connection reset when using an https server/client combination without keepalive
<!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 10.11.0, 10.5.0 * **Platform**: Windows 10. Version 1803, build 17134.286 * **Subsystem**: https <!-- Please provide more details below this comment. --> We noticed that there are unexpected RST packages when inspecting the connection with Wireshark. This are our findings for now: - It does not happen when 'keepalive' is set to true. - It does not happen with http instead of https - It does not happen when using the client, e.g. to connect with google.com - it does not happen when using curl to connect with the server. - it does not happen if the server is running in the Linux environment Node does not trigger any error event. Data transfer goes well, but the connection is not closed correctly. The connection ends with: client -> server: ACK server -> client: FIN, ACK client -> server: ACK TLSv1.2: Encrypted Alert server -> Client: RST, ACK Test code: client: ``` const https = require("https"); const options = { hostname: "localhost", port: 8000, rejectUnauthorized: false, agent: new https.Agent({ keepAlive: false // bug will not happen if set on 'true' }) }; const req = https.request(options, (res) => { res.on("data", (data) => { // }); res.on("close", () => { console.log("response closed"); }); }); req.end() ``` Server: ``` const https = require("https"); https.createServer({ // add cert and key }, (req, res) => { res.statusCode = 200; res.end("hello world\n"); }).listen(8000); ```
help wanted,tls,https,windows
low
Critical
364,594,899
rust
Inaccurate lifetime mismatch diagnostic text when a match expression is involved
``` error[E0623]: lifetime mismatch --> bend/crates/sac/src/bin/bin.rs:281:34 | 277 | fn deep_or(&self, other: &Self) -> Self { | ----- ----- | | | these two types are declared with different lifetimes... ... 281 | (val, None) | (None, val) => val.clone(), | ^^^ ...but data from `self` flows into `other` here ```
C-enhancement,A-diagnostics,T-compiler
low
Critical
364,595,814
rust
suggestions for traits do not search the "prelude crates"
As [shown in this test case](https://github.com/rust-lang/rust/pull/54603#discussion_r221025575), if you have a crate in the prelude, but you have not imported its trait, and you try to use its methods, you don't get any nice suggestions. This is a shame! cc @davidtwco
C-enhancement,A-lints,A-diagnostics,T-compiler,A-edition-2018
low
Minor
364,615,714
vue
Check if a tag is already in the window.customElements registry in addition to checking config.ignoredElements
### What problem does this feature solve? When Vue is mounting, the function `isUnknownElement` is called and checks, among others, whether the element tag is in the list of `ignoredElements`. In the context of an app that uses multiple web components, we don't want to manually add every new component's tag to the `ignoredElements` list. Maintaining that list could quickly become a headache. If the `isUnknownElement` also checks the custom elements registry, it could figure out that the element in question is not necessarily unknown. ### What does the proposed API look like? The API would stay the same, but on top of checking for `config.ignoredElements` in the function `isUnknownElement`, we would also check if `window.customElements.get(vnode.tag)` is defined or not. If it is defined, then we return false. I am happy to submit a pull request, I just wanted to discuss this first to make sure there aren't design constraints to adding something like this. <!-- generated by vue-issues. DO NOT REMOVE -->
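A sketch of the proposed check (names are assumptions; in Vue itself this would live inside `isUnknownElement`, and the registry argument would be `window.customElements`):

```javascript
// Hypothetical sketch of the proposed extra check. The registry is passed
// as a parameter so the logic can run outside a browser; in Vue the call
// site would pass window.customElements.
function isIgnoredOrRegistered(tag, config, registry) {
  const ignored = config.ignoredElements || [];
  // Existing behavior: honor config.ignoredElements (strings or RegExps).
  if (ignored.some((p) => (p instanceof RegExp ? p.test(tag) : p === tag))) {
    return true;
  }
  // Proposed addition: also treat tags in the custom elements registry
  // as known.
  return registry !== undefined && registry.get(tag) !== undefined;
}
```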
feature request,improvement
low
Minor
364,622,457
go
net/mail: Provide Address.String() alternative which only converts to RFC 5322 and does not encode as RFC 2047
The `Address.String()` method sometimes encodes as RFC 2047 after the 5322 formatting, depending on the characters in the address. However some services (e.g. the Sparkpost transmission API header fields) want RFC 5322 formatted addresses in UTF-8, not encoded as RFC 2047. I'm not sure if the quoting is a part of RFC 2047 or not. I don't think you can easily or safely reverse the 2047 encoding via the mime package, because the code has two paths for quoting. I think you'd want a `String()` method which basically returns before [this line](https://github.com/golang/go/blob/master/src/net/mail/message.go#L222). I know this isn't required to use the API, but I'm wondering if there are any other use cases which wouldn't require RFC 2047 encoding as well. If so, it might be worthwhile to add an additional string/format function.
NeedsDecision,FeatureRequest
low
Major
364,634,338
go
x/tools/go/packages: can't distinguish dynamically-generated sources from checked-in ones
When using `go list -test`, we can distinguish user-provided sources from generated ones based on whether the paths are absolute, as documented [here](https://golang.org/cmd/go/#hdr-List_packages_or_modules) (emphasis mine): > By default, the lists **GoFiles, CgoFiles, and so on hold names of files in Dir** (that is, paths **relative to Dir**, not absolute paths). The **generated files** added when using the -compiled and -test flags **are absolute paths** referring to cached copies of generated Go source files. Although they are Go source files, the paths may not end in ".go". Unfortunately, `golang.org/x/tools/go/packages` removes that distinction by making all of the `GoFiles` it returns absolute, without apparently preserving enough information to undo that transformation. That makes it very difficult to use `go/packages` to write tools that transform or otherwise refactor user-provided source code, such as the one in [CL 137076](https://golang.org/cl/137076). CC @matloob @alandonovan @ianthehat @rsc
NeedsInvestigation,Tools,Refactoring
low
Minor
364,639,030
go
x/tools/go/packages: no way to easily distinguish the test version of the package from the real one
When `Config.Test` is set to `true`, `packages.Load` may return multiple `Package` instances with the same `PkgPath`. `go list` encodes the information about which package is which redundantly: both in the `ForTest` field and *also* in the reported `ImportPath` for test variants. `go/packages` strips out the `ImportPath` annotations, but *also* fails to propagate the `ForTest` field. As a result, the only way to distinguish the two packages is by checking which one's `GoFiles` are a superset of the other's. That works most of the time, but I suspect it will fail in some cases (for example, when `b` imports `a` and `a_test` imports `b`, the test version of `b` for `a_test` is compiled with exactly the same sources as the non-test copy of `b`). CC: @matloob @alandonovan @ianthehat
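The GoFiles-superset heuristic described above can be sketched as follows; note it is exactly the heuristic that breaks down when both variants have identical sources, since each is then a superset of the other:

```go
package main

// gofilesSuperset reports whether every file in b also appears in a.
// This is the workaround heuristic for guessing which of two
// same-PkgPath packages is the test variant when ForTest is not exposed.
func gofilesSuperset(a, b []string) bool {
	set := make(map[string]bool, len(a))
	for _, f := range a {
		set[f] = true
	}
	for _, f := range b {
		if !set[f] {
			return false
		}
	}
	return true
}

func main() {}
```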
NeedsInvestigation,Tools
low
Major
364,665,533
pytorch
Misleading step method in lr_scheduler.ReduceLROnPlateau
Since ReduceLROnPlateau takes an optimizer as an argument, one can reasonably assume that it wraps the optimizer and that the `step` method of the ReduceLROnPlateau object will actually call the `step` method of the optimizer. Alternatives:

- put a warning in the documentation: *"the optimizer's parameters are not going to be updated"*
- put an example with an `optimizer.step()` call in the documentation
- change the `step` method name; why not `update`?
- implement an `optimizer_step` method to explicitly let the user call the optimizer's `step` method from the ReduceLROnPlateau object

cc @vincentqb
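The expected-vs-actual behavior can be illustrated with a stdlib-only mock (this is not PyTorch's real implementation; `MockOptimizer` and `MockPlateauScheduler` are made-up names mirroring its contract): `scheduler.step(metric)` only adjusts the learning rate, so `optimizer.step()` must still be called separately.

```python
class MockOptimizer:
    """Stand-in for a torch optimizer: step() is what updates parameters."""
    def __init__(self, lr):
        self.param_groups = [{"lr": lr}]
        self.steps_taken = 0

    def step(self):
        self.steps_taken += 1


class MockPlateauScheduler:
    """Toy plateau scheduler. Note: step() deliberately does NOT call
    self.optimizer.step(); it only watches the metric and lowers lr."""
    def __init__(self, optimizer, factor=0.1, patience=2):
        self.optimizer = optimizer
        self.factor = factor
        self.patience = patience
        self.best = float("inf")
        self.num_bad = 0

    def step(self, metric):
        if metric < self.best:
            self.best = metric
            self.num_bad = 0
        else:
            self.num_bad += 1
            if self.num_bad > self.patience:
                for group in self.optimizer.param_groups:
                    group["lr"] *= self.factor
                self.num_bad = 0


opt = MockOptimizer(lr=0.1)
sched = MockPlateauScheduler(opt, factor=0.1, patience=2)
for val_loss in [1.0, 1.0, 1.0, 1.0]:  # no improvement after the first epoch
    opt.step()           # the call users might assume scheduler.step() makes
    sched.step(val_loss)
```

With PyTorch itself the pattern is the same: `optimizer.step()` inside the training loop, then `scheduler.step(val_loss)` once per epoch.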
todo,module: optimizer,triaged
low
Minor
364,671,742
flutter
local_auth plugin displays a FaceID error in the simulator but "Try FaceID Again" doesn't work
## Steps to Reproduce Write an app that uses the local_auth plugin. Run it on an iOS Simulator (iPhone X, etc.) on iOS 12. Using the simulator, indicate a non-matching face. A stock dialog box comes up that says "Face Not Recognized" with a "Try FaceID Again" button. Click on that button: nothing happens at all. ## Logs ``` flutter: [welcomeScreen] - Pressed Biometric Use Button 1000 = 1; }, (null) on <LAClient: 0x600001f71ef0> 7 ), NSLocalizedDescription=User interaction is required.} on <LAClient: 0x600001f71ef0> 2 = "Please authenticate to login"; }, (null) on <LAClient: 0x600001f71ef0> flutter: [WelcomeScreen - build() flutter: [welcomeScreen - _buildBiometricOption] - useBio is true and hasStoredLogin is true flutter: [welcomeScreen - _buildBiometricOption] - useBio is true! ``` <!-- Finally, paste the output of running `flutter doctor -v` here. --> ``` [✓] Flutter (Channel beta, v0.8.2, on Mac OS X 10.14 18A391, locale en) • Flutter version 0.8.2 at /Users/sjm/Development/tools/flutter • Framework revision 5ab9e70727 (3 weeks ago), 2018-09-07 12:33:05 -0700 • Engine revision 58a1894a1c • Dart version 2.1.0-dev.3.1.flutter-760a9690c2 [✓] Android toolchain - develop for Android devices (Android SDK 27.0.3) • Android SDK at /Users/sjm/Library/Android/sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-27, build-tools 27.0.3 • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06) • All Android licenses accepted. 
[✓] iOS toolchain - develop for iOS devices (Xcode 10.0) • Xcode at /Applications/Xcode.app/Contents/Developer • Xcode 10.0, Build version 10A255 • ios-deploy 1.9.2 • CocoaPods version 1.5.3 [✓] Android Studio (version 3.2) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin version 26.0.1 • Dart plugin version 173.4700 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06) [✓] Connected devices (2 available) • Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API 28) (emulator) • iPhone X • 78590509-1E81-4378-8BDE-92EBAFDB00A0 • ios • iOS 12.0 (simulator) • No issues found! ```
platform-ios,p: local_auth,package,P2,team-ios,triaged-ios
low
Critical
364,700,083
angular
Add a customizable nonce attribute to injected style elements - CSP
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior No ability to define nonce attribute on style tag. ## Expected behavior I would like the ability to define a nonce generated on the server that angular will add to the inline styles. ## What is the motivation / use case for changing the behavior? So that I can comply with business requirements not to use 'unsafe-inline' in CSP. ## Environment <pre><code> Angular version: 6.1.8 Browser: - [x] Chrome (desktop) version 69 - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: 10.11.0 - Platform: Windows 10 Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
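For reference, this is the kind of policy and markup the feature would enable (the nonce value below is just an example; the server would generate a fresh one per response):

```
Content-Security-Policy: style-src 'nonce-rAnd0m123'

<!-- Angular's injected style elements would then need to carry it: -->
<style nonce="rAnd0m123">...</style>
```

With a matching nonce on every injected style element, the policy above works without falling back to 'unsafe-inline'.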
feature,freq2: medium,security,area: core,area: security,core: stylesheets,cross-cutting: CSP,feature: under consideration
high
Critical
364,704,806
go
cmd/go: get go list context function formatted as json
The `context` template function available to `go list` has some useful data, even after ignoring the overlap with `go env -json`. There's no way to output the `context` as json, making it hard to use the results programmatically. cc: @bcmills @rsc @ianlancetaylor @alandonovan
NeedsInvestigation,FeatureRequest,GoCommand
low
Major
364,716,317
vscode
Breadcrumbs enhancement: double-click to enter an edit state
Hi, I've been waiting for this wonderful enhancement to the Breadcrumbs. The Breadcrumbs currently look like this: ![image](https://user-images.githubusercontent.com/24240963/46183679-901e7200-c304-11e8-978f-75289a339b98.png) If I double-click a specific crumb, it should change to an edit state that lets me copy the path or manually modify it, jumping to the new path when focus is lost (and remaining unchanged if the path does not exist). I think this is a more convenient way to get the path and a quicker way to jump to another file. Below is the edit state I am proposing: ``` src/models/TodoModel.js( #method | $property ) // more config for this feature would make it better, like this // ( configName: type:default -- notation ) // isShowAbsolutePath: boolean:false -- whether to show the absolute path when the Breadcrumbs is in edit state. // isShowSuffix: boolean:true -- whether to show the symbol (method / property...) the cursor was previously on when the Breadcrumbs is in edit state. ``` Thank you for considering this.
feature-request,breadcrumbs
low
Minor
364,730,172
rust
fix false negatives in explicit_outlives_requirements
Niko Matsakis [points out that](https://github.com/rust-lang/rust/pull/53013#discussion_r221053173) the explicit_outlives_requirements lint (from #53013, expected to land soon) should also fire on lifetime-outlives bounds and associated-type-outlives bounds, as illustrated by the following two examples: ```rust struct Foo<'a, 'b: 'a> { x: &'a &'b u32 } struct Bar<'a, T: Iterator> where T::Item: 'a { item: &'a T::Item, } ```
A-lints,A-lifetimes,T-compiler,C-bug,L-explicit_outlives_requirements
low
Minor
364,748,090
pytorch
clip_grad_norm_ does not work on grads of different types
see https://github.com/pytorch/pytorch/blob/117885128073c9c2b32f4b33c6c79df3895b7071/torch/nn/utils/clip_grad.py#L31-L34 example error message can be found at https://discuss.pytorch.org/t/error-when-doing-grad-clip-norm/26071/3 cc @albanD @mruberry @jbschlosser
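As an illustration of one possible fix, here is a pure-Python sketch (plain lists standing in for tensors; not the torch implementation) that casts each gradient element to a common float type before accumulating the total norm, so gradients of different types can be mixed safely:

```python
def clip_grad_norm_(grads, max_norm, norm_type=2.0):
    """Sketch of gradient clipping that accumulates every per-element
    contribution as a Python float, regardless of the element's type,
    then rescales in place if the total norm exceeds max_norm."""
    total = 0.0
    for g in grads:
        # cast to float so e.g. int-typed grads mix with float ones
        total += sum(abs(float(x)) ** norm_type for x in g)
    total_norm = total ** (1.0 / norm_type)
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1.0:
        for g in grads:
            for i, x in enumerate(g):
                g[i] = float(x) * clip_coef
    return total_norm

# second "tensor" holds ints, i.e. a different element type:
grads = [[3.0, 4.0], [0, 0]]
norm = clip_grad_norm_(grads, max_norm=1.0)
```

The real fix would do the equivalent cast on tensor norms before summing them, rather than on individual elements.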
todo,module: nn,triaged
low
Critical
364,754,361
flutter
App.framework does not support provisioning profiles.
<!-- Thank you for using Flutter! If you are looking for support, please check out our documentation or consider asking a question on Stack Overflow: * https://flutter.io/ * https://docs.flutter.io/ * https://stackoverflow.com/questions/tagged/flutter?sort=frequent If you have found a bug or if our documentation doesn't have an answer to what you're looking for, then fill our the template below. Please read our guide to filing a bug first: https://flutter.io/bug-reports/ --> ## Logs ``` [11:23:49]: $ set -o pipefail && xcodebuild -workspace Runner.xcworkspace -scheme Runner -destination 'generic/platform=iOS' -archivePath /Users/sandeepcmsm/Library/Developer/Xcode/Archives/2018-09-28/Runner\ 2018-09-28\ 11.23.49.xcarchive archive | tee /Users/sandeepcmsm/Library/Logs/gym/Runner-Runner.log | xcpretty [11:23:51]: ▸ Building Pods/FMDB [Release] [11:23:51]: ▸ Check Dependencies [11:23:51]: ▸ Processing FMDB-Info.plist [11:23:51]: ▸ Copying FMDatabase.h [11:23:51]: ▸ Copying FMDatabaseAdditions.h [11:23:52]: ▸ Copying FMDatabasePool.h [11:23:52]: ▸ Copying FMDatabaseQueue.h [11:23:52]: ▸ Copying FMDB-umbrella.h [11:23:52]: ▸ Copying FMDB.h [11:23:52]: ▸ Copying FMResultSet.h [11:23:52]: ▸ Compiling FMDatabase.m [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabase.m:1252:95: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (BOOL)executeStatements:(NSString *)sql withResultBlock:(FMDBExecuteStatementsCallbackBlock)block { [11:23:53]: ▸ ^ [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabase.m:1394:51: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (NSError*)inSavePoint:(void (^)(BOOL *rollback))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ Compiling FMDatabaseAdditions.m [11:23:53]: ▸ Compiling FMDatabasePool.m [11:23:53]: ▸ ⚠️ 
/Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:238:46: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (void)inDatabase:(void (^)(FMDatabase *db))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:273:73: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (void)inDeferredTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:277:65: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (void)inTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:281:67: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (NSError*)inSavePoint:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ Compiling FMDatabaseQueue.m [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:173:46: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (void)inDatabase:(void (^)(FMDatabase *db))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:230:73: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (void)inDeferredTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:234:65: parameter of overriding method should be annotated with 
__attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (void)inTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:238:67: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:53]: ▸ - (NSError*)inSavePoint:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:53]: ▸ ^ [11:23:53]: ▸ Compiling FMDB-dummy.m [11:23:53]: ▸ Compiling FMResultSet.m [11:23:53]: ▸ Compiling FMDB_vers.c [11:23:53]: ▸ Compiling FMDatabase.m [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabase.m:1252:95: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (BOOL)executeStatements:(NSString *)sql withResultBlock:(FMDBExecuteStatementsCallbackBlock)block { [11:23:54]: ▸ ^ [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabase.m:1394:51: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (NSError*)inSavePoint:(void (^)(BOOL *rollback))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ Compiling FMDatabaseAdditions.m [11:23:54]: ▸ Compiling FMDatabasePool.m [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:238:46: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (void)inDatabase:(void (^)(FMDatabase *db))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:273:73: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (void)inDeferredTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:277:65: 
parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (void)inTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabasePool.m:281:67: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (NSError*)inSavePoint:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ Compiling FMDatabaseQueue.m [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:173:46: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (void)inDatabase:(void (^)(FMDatabase *db))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:230:73: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (void)inDeferredTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:234:65: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (void)inTransaction:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/FMDB/src/fmdb/FMDatabaseQueue.m:238:67: parameter of overriding method should be annotated with __attribute__((noescape)) [-Wmissing-noescape] [11:23:54]: ▸ - (NSError*)inSavePoint:(void (^)(FMDatabase *db, BOOL *rollback))block { [11:23:54]: ▸ ^ [11:23:54]: ▸ Compiling FMDB-dummy.m [11:23:54]: ▸ Compiling FMResultSet.m [11:23:54]: ▸ Compiling FMDB_vers.c [11:23:54]: ▸ Linking FMDB [11:23:54]: ▸ Linking FMDB [11:23:55]: ▸ Generating 'FMDB.framework.dSYM' [11:23:55]: ▸ Touching 
FMDB.framework [11:23:55]: ▸ Aggregate Pods/Flutter [Release] [11:23:55]: ▸ Check Dependencies [11:23:55]: ▸ Building Pods/MTBBarcodeScanner [Release] [11:23:55]: ▸ Check Dependencies [11:23:55]: ▸ Processing MTBBarcodeScanner-Info.plist [11:23:55]: ▸ Copying MTBBarcodeScanner-umbrella.h [11:23:55]: ▸ Copying MTBBarcodeScanner.h [11:23:55]: ▸ Compiling MTBBarcodeScanner-dummy.m [11:23:55]: ▸ Compiling MTBBarcodeScanner.m [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:149:31: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ @property (nonatomic, strong) AVCapturePhotoOutput *output; [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:904:24: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)captureOutput:(AVCapturePhotoOutput *)captureOutput didFinishProcessingPhotoSampleBuffer:(CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(AVCaptureBracketedStillImageSettings *)bracketSettings error:(NSError *)error { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:904:222: 'AVCaptureResolvedPhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)captureOutput:(AVCapturePhotoOutput *)captureOutput didFinishProcessingPhotoSampleBuffer:(CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(AVCaptureBracketedStillImageSettings *)bracketSettings error:(NSError *)error { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ 
/Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:203:52: 'defaultDeviceWithDeviceType:mediaType:position:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:203:80: 'AVCaptureDeviceTypeBuiltInWideAngleCamera' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:537:25: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ self.output = [[AVCapturePhotoOutput alloc] init]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:563:52: 'defaultDeviceWithDeviceType:mediaType:position:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:563:80: 'AVCaptureDeviceTypeBuiltInWideAngleCamera' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ 
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:840:9: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:840:45: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:909:42: 'JPEGPhotoDataRepresentationForJPEGSampleBuffer:previewPhotoSampleBuffer:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ NSData *data = [AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:909:21: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ NSData *data = [AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ Compiling MTBBarcodeScanner_vers.c [11:23:55]: ▸ Compiling MTBBarcodeScanner-dummy.m [11:23:55]: ▸ Compiling MTBBarcodeScanner.m [11:23:55]: ▸ ⚠️ 
/Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:149:31: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ @property (nonatomic, strong) AVCapturePhotoOutput *output; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:904:24: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)captureOutput:(AVCapturePhotoOutput *)captureOutput didFinishProcessingPhotoSampleBuffer:(CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(AVCaptureBracketedStillImageSettings *)bracketSettings error:(NSError *)error { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:904:222: 'AVCaptureResolvedPhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)captureOutput:(AVCapturePhotoOutput *)captureOutput didFinishProcessingPhotoSampleBuffer:(CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(AVCaptureBracketedStillImageSettings *)bracketSettings error:(NSError *)error { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:203:52: 'defaultDeviceWithDeviceType:mediaType:position:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ 
/Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:203:80: 'AVCaptureDeviceTypeBuiltInWideAngleCamera' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:537:25: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ self.output = [[AVCapturePhotoOutput alloc] init]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:563:52: 'defaultDeviceWithDeviceType:mediaType:position:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:563:80: 'AVCaptureDeviceTypeBuiltInWideAngleCamera' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:840:9: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ 
/Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:840:45: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:909:42: 'JPEGPhotoDataRepresentationForJPEGSampleBuffer:previewPhotoSampleBuffer:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ NSData *data = [AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/temp/omcsaapp/ios/Pods/MTBBarcodeScanner/Classes/ios/Scanners/MTBBarcodeScanner.m:909:21: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ NSData *data = [AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer previewPhotoSampleBuffer:previewPhotoSampleBuffer]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ Compiling MTBBarcodeScanner_vers.c [11:23:55]: ▸ Linking MTBBarcodeScanner [11:23:55]: ▸ Linking MTBBarcodeScanner [11:23:55]: ▸ Generating 'MTBBarcodeScanner.framework.dSYM' [11:23:55]: ▸ Touching MTBBarcodeScanner.framework [11:23:55]: ▸ Building Pods/Reachability [Release] [11:23:55]: ▸ Check Dependencies [11:23:55]: ▸ Processing Reachability-Info.plist [11:23:55]: ▸ Copying Reachability-umbrella.h [11:23:55]: ▸ Copying Reachability.h [11:23:55]: ▸ Compiling Reachability-dummy.m [11:23:55]: ▸ Compiling Reachability.m [11:23:55]: ▸ Compiling Reachability_vers.c [11:23:55]: ▸ Compiling Reachability-dummy.m [11:23:55]: ▸ Compiling Reachability.m [11:23:55]: ▸ 
Compiling Reachability_vers.c [11:23:55]: ▸ Linking Reachability [11:23:55]: ▸ Linking Reachability [11:23:55]: ▸ Generating 'Reachability.framework.dSYM' [11:23:55]: ▸ Touching Reachability.framework [11:23:55]: ▸ Building Pods/image_picker [Release] [11:23:55]: ▸ Check Dependencies [11:23:55]: ▸ Processing image_picker-Info.plist [11:23:55]: ▸ Copying image_picker-umbrella.h [11:23:55]: ▸ Copying ImagePickerPlugin.h [11:23:55]: ▸ Compiling image_picker-dummy.m [11:23:55]: ▸ Compiling ImagePickerPlugin.m [11:23:55]: ▸ Compiling image_picker_vers.c [11:23:55]: ▸ Compiling image_picker-dummy.m [11:23:55]: ▸ Compiling ImagePickerPlugin.m [11:23:55]: ▸ Compiling image_picker_vers.c [11:23:55]: ▸ Linking image_picker [11:23:55]: ▸ Linking image_picker [11:23:55]: ▸ Generating 'image_picker.framework.dSYM' [11:23:55]: ▸ Touching image_picker.framework [11:23:55]: ▸ Building Pods/shared_preferences [Release] [11:23:55]: ▸ Check Dependencies [11:23:55]: ▸ Processing shared_preferences-Info.plist [11:23:55]: ▸ Copying shared_preferences-umbrella.h [11:23:55]: ▸ Copying SharedPreferencesPlugin.h [11:23:55]: ▸ Compiling shared_preferences-dummy.m [11:23:55]: ▸ Compiling SharedPreferencesPlugin.m [11:23:55]: ▸ Compiling shared_preferences_vers.c [11:23:55]: ▸ Compiling shared_preferences-dummy.m [11:23:55]: ▸ Compiling SharedPreferencesPlugin.m [11:23:55]: ▸ Compiling shared_preferences_vers.c [11:23:55]: ▸ Linking shared_preferences [11:23:55]: ▸ Linking shared_preferences [11:23:55]: ▸ Generating 'shared_preferences.framework.dSYM' [11:23:55]: ▸ Touching shared_preferences.framework [11:23:55]: ▸ Building Pods/url_launcher [Release] [11:23:55]: ▸ Check Dependencies [11:23:55]: ▸ Processing url_launcher-Info.plist [11:23:55]: ▸ Copying url_launcher-umbrella.h [11:23:55]: ▸ Copying UrlLauncherPlugin.h [11:23:55]: ▸ Compiling url_launcher-dummy.m [11:23:55]: ▸ Compiling UrlLauncherPlugin.m [11:23:55]: ▸ ⚠️ 
/Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:27:31: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)safariViewController:(SFSafariViewController *)controller [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:43:40: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)safariViewControllerDidFinish:(SFSafariViewController *)controller { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:29:31: comparison between pointer and integer ('NSInteger' (aka 'int') and 'void *') [11:23:55]: ▸ if (_previousStatusBarStyle != nil) { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:111:45: comparison between pointer and integer ('NSInteger' (aka 'int') and 'void *') [11:23:55]: ▸ if (self->_previousStatusBarStyle != nil) { [11:23:55]: ▸ ~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:108:18: 'openURL:options:completionHandler:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ [application openURL:url [11:23:55]: ▸ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:140:3: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ SFSafariViewController *safari = [[SFSafariViewController alloc] initWithURL:url]; [11:23:55]: ▸ ^~~~~~~~~~~ [11:23:55]: ▸ ⚠️ 
/Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:140:38: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ SFSafariViewController *safari = [[SFSafariViewController alloc] initWithURL:url]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ Compiling url_launcher_vers.c [11:23:55]: ▸ Compiling url_launcher-dummy.m [11:23:55]: ▸ Compiling UrlLauncherPlugin.m [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:27:31: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)safariViewController:(SFSafariViewController *)controller [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:43:40: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ - (void)safariViewControllerDidFinish:(SFSafariViewController *)controller { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:29:31: comparison between pointer and integer ('NSInteger' (aka 'long') and 'void *') [11:23:55]: ▸ if (_previousStatusBarStyle != nil) { [11:23:55]: ▸ ^ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:111:45: comparison between pointer and integer ('NSInteger' (aka 'long') and 'void *') [11:23:55]: ▸ if (self->_previousStatusBarStyle != nil) { [11:23:55]: ▸ ~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:108:18: 'openURL:options:completionHandler:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ [application openURL:url 
[11:23:55]: ▸ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:140:3: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ SFSafariViewController *safari = [[SFSafariViewController alloc] initWithURL:url]; [11:23:55]: ▸ ^~~~~~~~~~~ [11:23:55]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/url_launcher-3.0.3/ios/Classes/UrlLauncherPlugin.m:140:38: 'SFSafariViewController' is only available on iOS 9.0 or newer [-Wunguarded-availability] [11:23:55]: ▸ SFSafariViewController *safari = [[SFSafariViewController alloc] initWithURL:url]; [11:23:55]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:55]: ▸ Compiling url_launcher_vers.c [11:23:55]: ▸ Linking url_launcher [11:23:55]: ▸ Linking url_launcher [11:23:55]: ▸ Generating 'url_launcher.framework.dSYM' [11:23:55]: ▸ Touching url_launcher.framework [11:23:55]: ▸ Building Pods/device_info [Release] [11:23:55]: ▸ Check Dependencies [11:23:55]: ▸ Processing device_info-Info.plist [11:23:55]: ▸ Copying device_info-umbrella.h [11:23:55]: ▸ Copying DeviceInfoPlugin.h [11:23:55]: ▸ Compiling device_info-dummy.m [11:23:55]: ▸ Compiling DeviceInfoPlugin.m [11:23:56]: ▸ Compiling device_info_vers.c [11:23:56]: ▸ Compiling device_info-dummy.m [11:23:56]: ▸ Compiling DeviceInfoPlugin.m [11:23:56]: ▸ Compiling device_info_vers.c [11:23:56]: ▸ Linking device_info [11:23:56]: ▸ Linking device_info [11:23:56]: ▸ Generating 'device_info.framework.dSYM' [11:23:56]: ▸ Touching device_info.framework [11:23:56]: ▸ Building Pods/camera [Release] [11:23:56]: ▸ Check Dependencies [11:23:56]: ▸ Processing camera-Info.plist [11:23:56]: ▸ Copying camera-umbrella.h [11:23:56]: ▸ Copying CameraPlugin.h [11:23:56]: ▸ Compiling camera-dummy.m [11:23:56]: ▸ Compiling CameraPlugin.m [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:38:24: 
'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ - (void)captureOutput:(AVCapturePhotoOutput *)output [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:41:43: 'AVCaptureResolvedPhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:50:7: 'JPEGPhotoDataRepresentationForJPEGSampleBuffer:previewPhotoSampleBuffer:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:49:19: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ NSData *data = [AVCapturePhotoOutput [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:65:52: this block declaration is not a prototype [-Wstrict-prototypes] [11:23:56]: ▸ @property(nonatomic, copy) void (^onFrameAvailable)(); [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:70:32: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ @property(readonly, nonatomic) AVCapturePhotoOutput *capturePhotoOutput; [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:140:26: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer 
[-Wunguarded-availability] [11:23:56]: ▸ _capturePhotoOutput = [AVCapturePhotoOutput new]; [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:155:3: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:155:39: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:477:10: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ [_registry textureFrameAvailable:textureId]; [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:435:5: 'AVCaptureDeviceDiscoverySession' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:435:58: 'AVCaptureDeviceDiscoverySession' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:436:44: 'AVCaptureDeviceTypeBuiltInWideAngleCamera' is only available on iOS 
10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ discoverySessionWithDeviceTypes:@[ AVCaptureDeviceTypeBuiltInWideAngleCamera ] [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ Compiling camera_vers.c [11:23:56]: ▸ Compiling camera-dummy.m [11:23:56]: ▸ Compiling CameraPlugin.m [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:38:24: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ - (void)captureOutput:(AVCapturePhotoOutput *)output [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:41:43: 'AVCaptureResolvedPhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:50:7: 'JPEGPhotoDataRepresentationForJPEGSampleBuffer:previewPhotoSampleBuffer:' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:49:19: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ NSData *data = [AVCapturePhotoOutput [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:65:52: this block declaration is not a prototype [-Wstrict-prototypes] [11:23:56]: ▸ @property(nonatomic, copy) void (^onFrameAvailable)(); [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ 
/Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:70:32: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ @property(readonly, nonatomic) AVCapturePhotoOutput *capturePhotoOutput; [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:140:26: 'AVCapturePhotoOutput' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ _capturePhotoOutput = [AVCapturePhotoOutput new]; [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:155:3: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:155:39: 'AVCapturePhotoSettings' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings]; [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:477:10: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ [_registry textureFrameAvailable:textureId]; [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:435:5: 'AVCaptureDeviceDiscoverySession' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ 
/Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:435:58: 'AVCaptureDeviceDiscoverySession' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/camera-0.2.3/ios/Classes/CameraPlugin.m:436:44: 'AVCaptureDeviceTypeBuiltInWideAngleCamera' is only available on iOS 10.0 or newer [-Wunguarded-availability] [11:23:56]: ▸ discoverySessionWithDeviceTypes:@[ AVCaptureDeviceTypeBuiltInWideAngleCamera ] [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ Compiling camera_vers.c [11:23:56]: ▸ Linking camera [11:23:56]: ▸ Linking camera [11:23:56]: ▸ Generating 'camera.framework.dSYM' [11:23:56]: ▸ Touching camera.framework [11:23:56]: ▸ Building Pods/connectivity [Release] [11:23:56]: ▸ Check Dependencies [11:23:56]: ▸ Processing connectivity-Info.plist [11:23:56]: ▸ Copying connectivity-umbrella.h [11:23:56]: ▸ Copying ConnectivityPlugin.h [11:23:56]: ▸ Compiling connectivity-dummy.m [11:23:56]: ▸ Compiling ConnectivityPlugin.m [11:23:56]: ▸ Compiling connectivity_vers.c [11:23:56]: ▸ Compiling connectivity-dummy.m [11:23:56]: ▸ Compiling ConnectivityPlugin.m [11:23:56]: ▸ Compiling connectivity_vers.c [11:23:56]: ▸ Linking connectivity [11:23:56]: ▸ Linking connectivity [11:23:56]: ▸ Generating 'connectivity.framework.dSYM' [11:23:56]: ▸ Touching connectivity.framework [11:23:56]: ▸ Building Pods/flutter_custom_tabs [Release] [11:23:56]: ▸ Check Dependencies [11:23:56]: ▸ Processing flutter_custom_tabs-Info.plist [11:23:56]: ▸ Copying CustomTabsPlugin.h [11:23:56]: ▸ Copying flutter_custom_tabs-umbrella.h [11:23:56]: ▸ Compiling CustomTabsPlugin.m [11:23:56]: ▸ Compiling flutter_custom_tabs-dummy.m [11:23:56]: ▸ Compiling flutter_custom_tabs_vers.c [11:23:56]: ▸ Compiling CustomTabsPlugin.m 
[11:23:56]: ▸ Compiling flutter_custom_tabs-dummy.m [11:23:56]: ▸ Compiling flutter_custom_tabs_vers.c [11:23:56]: ▸ Linking flutter_custom_tabs [11:23:56]: ▸ Linking flutter_custom_tabs [11:23:56]: ▸ Generating 'flutter_custom_tabs.framework.dSYM' [11:23:56]: ▸ Touching flutter_custom_tabs.framework [11:23:56]: ▸ Building Pods/flutter_local_notifications [Release] [11:23:56]: ▸ Check Dependencies [11:23:56]: ▸ Processing flutter_local_notifications-Info.plist [11:23:56]: ▸ Copying flutter_local_notifications-umbrella.h [11:23:56]: ▸ Copying FlutterLocalNotificationsPlugin.h [11:23:56]: ▸ Copying NotificationDetails.h [11:23:56]: ▸ Copying NotificationTime.h [11:23:56]: ▸ Compiling flutter_local_notifications-dummy.m [11:23:56]: ▸ Compiling FlutterLocalNotificationsPlugin.m [11:23:56]: ▸ Compiling NotificationDetails.m [11:23:56]: ▸ Compiling NotificationTime.m [11:23:56]: ▸ Compiling flutter_local_notifications_vers.c [11:23:56]: ▸ Compiling flutter_local_notifications-dummy.m [11:23:56]: ▸ Compiling FlutterLocalNotificationsPlugin.m [11:23:56]: ▸ Compiling NotificationDetails.m [11:23:56]: ▸ Compiling NotificationTime.m [11:23:56]: ▸ Compiling flutter_local_notifications_vers.c [11:23:56]: ▸ Linking flutter_local_notifications [11:23:56]: ▸ Linking flutter_local_notifications [11:23:56]: ▸ Generating 'flutter_local_notifications.framework.dSYM' [11:23:56]: ▸ Touching flutter_local_notifications.framework [11:23:56]: ▸ Building Pods/path_provider [Release] [11:23:56]: ▸ Check Dependencies [11:23:56]: ▸ Processing path_provider-Info.plist [11:23:56]: ▸ Copying path_provider-umbrella.h [11:23:56]: ▸ Copying PathProviderPlugin.h [11:23:56]: ▸ Compiling path_provider-dummy.m [11:23:56]: ▸ Compiling PathProviderPlugin.m [11:23:56]: ▸ Compiling path_provider_vers.c [11:23:56]: ▸ Compiling path_provider-dummy.m [11:23:56]: ▸ Compiling PathProviderPlugin.m [11:23:56]: ▸ Compiling path_provider_vers.c [11:23:56]: ▸ Linking path_provider [11:23:56]: ▸ Linking path_provider 
[11:23:56]: ▸ Generating 'path_provider.framework.dSYM' [11:23:56]: ▸ Touching path_provider.framework [11:23:56]: ▸ Building Pods/video_player [Release] [11:23:56]: ▸ Check Dependencies [11:23:56]: ▸ Processing video_player-Info.plist [11:23:56]: ▸ Copying video_player-umbrella.h [11:23:56]: ▸ Copying VideoPlayerPlugin.h [11:23:56]: ▸ Compiling video_player-dummy.m [11:23:56]: ▸ Compiling VideoPlayerPlugin.m [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:93:55: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ if (_isLooping) { [11:23:56]: ▸ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:97:57: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ if (_eventSink) { [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:98:55: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ _eventSink(@{@"event" : @"completed"}); [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:115:15: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ if (_disposed) return; [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:119:16: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ [_player replaceCurrentItemWithPlayerItem:item]; 
[11:23:56]: ▸ ^ [11:23:56]: ▸ Compiling video_player_vers.c [11:23:56]: ▸ Compiling video_player-dummy.m [11:23:56]: ▸ Compiling VideoPlayerPlugin.m [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:93:55: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ if (_isLooping) { [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:97:57: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ if (_eventSink) { [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:98:55: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ _eventSink(@{@"event" : @"completed"}); [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:115:15: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ if (_disposed) return; [11:23:56]: ▸ ^ [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/hosted/pub.dartlang.org/video_player-0.6.5/ios/Classes/VideoPlayerPlugin.m:119:16: block implicitly retains 'self'; explicitly mention 'self' to indicate this is intended behavior [-Wimplicit-retain-self] [11:23:56]: ▸ [_player replaceCurrentItemWithPlayerItem:item]; [11:23:56]: ▸ ^ [11:23:56]: ▸ Compiling video_player_vers.c [11:23:56]: ▸ Linking video_player [11:23:56]: ▸ Linking video_player [11:23:56]: ▸ Generating 'video_player.framework.dSYM' [11:23:56]: ▸ Touching video_player.framework [11:23:56]: ▸ Building Pods/barcode_scan [Release] [11:23:56]: ▸ Check 
Dependencies [11:23:56]: ▸ Processing barcode_scan-Info.plist [11:23:56]: ▸ Copying barcode_scan-umbrella.h [11:23:56]: ▸ Copying BarcodeScannerViewController.h [11:23:56]: ▸ Copying BarcodeScannerViewControllerDelegate.h [11:23:56]: ▸ Copying BarcodeScanPlugin.h [11:23:56]: ▸ Compiling barcode_scan-dummy.m [11:23:56]: ▸ Compiling BarcodeScannerViewController.m [11:23:56]: ▸ Compiling BarcodeScanPlugin.m [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/git/flutter_barcode_reader-77ccdfbb60347286e83d38143eba40715b2261ae/ios/Classes/BarcodeScanPlugin.h:7:1: retain'ed block property does not copy the block - use copy attribute instead [-Wobjc-noncopy-retain-block-property] [11:23:56]: ▸ @property(nonatomic, retain) FlutterResult result; [11:23:56]: ▸ ^ [11:23:56]: ▸ Compiling barcode_scan_vers.c [11:23:56]: ▸ Compiling barcode_scan-dummy.m [11:23:56]: ▸ Compiling BarcodeScannerViewController.m [11:23:56]: ▸ Compiling BarcodeScanPlugin.m [11:23:56]: ▸ ⚠️ /Users/sandeepcmsm/.pub-cache/git/flutter_barcode_reader-77ccdfbb60347286e83d38143eba40715b2261ae/ios/Classes/BarcodeScanPlugin.h:7:1: retain'ed block property does not copy the block - use copy attribute instead [-Wobjc-noncopy-retain-block-property] [11:23:56]: ▸ @property(nonatomic, retain) FlutterResult result; [11:23:56]: ▸ ^ [11:23:56]: ▸ Compiling barcode_scan_vers.c [11:23:56]: ▸ Linking barcode_scan [11:23:56]: ▸ Linking barcode_scan [11:23:56]: ▸ Generating 'barcode_scan.framework.dSYM' [11:23:56]: ▸ Touching barcode_scan.framework [11:23:56]: ▸ Building Pods/sqflite [Release] [11:23:56]: ▸ Check Dependencies [11:23:56]: ▸ Processing sqflite-Info.plist [11:23:56]: ▸ Copying Operation.h [11:23:56]: ▸ Copying sqflite-umbrella.h [11:23:56]: ▸ Copying SqflitePlugin.h [11:23:56]: ▸ Compiling Operation.m [11:23:56]: ▸ Compiling sqflite-dummy.m [11:23:56]: ▸ Compiling SqflitePlugin.m [11:23:56]: ▸ Compiling sqflite_vers.c [11:23:56]: ▸ Compiling Operation.m [11:23:56]: ▸ Compiling sqflite-dummy.m [11:23:56]: ▸ 
Compiling SqflitePlugin.m [11:23:56]: ▸ Compiling sqflite_vers.c [11:23:56]: ▸ Linking sqflite [11:23:56]: ▸ Linking sqflite [11:23:56]: ▸ Generating 'sqflite.framework.dSYM' [11:23:57]: ▸ Touching sqflite.framework [11:23:57]: ▸ Building Pods/Pods-Runner [Release] [11:23:57]: ▸ Check Dependencies [11:23:57]: ▸ Processing Pods-Runner-Info.plist [11:23:57]: ▸ Copying Pods-Runner-umbrella.h [11:23:57]: ▸ Compiling Pods-Runner-dummy.m [11:23:57]: ▸ Compiling Pods_Runner_vers.c [11:23:57]: ▸ Compiling Pods-Runner-dummy.m [11:23:57]: ▸ Compiling Pods_Runner_vers.c [11:23:57]: ▸ Touching Pods_Runner.framework [11:23:57]: ▸ Building Runner/Runner [Release] [11:23:57]: ▸ Check Dependencies [11:23:57]: ▸ Running script '[CP] Check Pods Manifest.lock' [11:23:57]: ▸ Running script 'Run Script' [11:24:02]: ▸ Compiling AppDelegate.swift [11:24:03]: ▸ Compiling AppDelegate.swift [11:24:03]: ▸ Compiling GeneratedPluginRegistrant.m [11:24:04]: ▸ ⚠️ /Users/sandeepcmsm/Library/Developer/Xcode/DerivedData/Runner-cjwdifweejxxcqcqvgmqafkdlrks/Build/Intermediates.noindex/ArchiveIntermediates/Runner/BuildProductsPath/Release-iphoneos/barcode_scan/barcode_scan.framework/Headers/BarcodeScanPlugin.h:7:1: retain'ed block property does not copy the block - use copy attribute instead [-Wobjc-noncopy-retain-block-property] [11:24:04]: ▸ @property(nonatomic, retain) FlutterResult result; [11:24:04]: ▸ ^ [11:24:04]: ▸ Compiling Runner_vers.c [11:24:04]: ▸ Compiling GeneratedPluginRegistrant.m [11:24:04]: ▸ ⚠️ /Users/sandeepcmsm/Library/Developer/Xcode/DerivedData/Runner-cjwdifweejxxcqcqvgmqafkdlrks/Build/Intermediates.noindex/ArchiveIntermediates/Runner/BuildProductsPath/Release-iphoneos/barcode_scan/barcode_scan.framework/Headers/BarcodeScanPlugin.h:7:1: retain'ed block property does not copy the block - use copy attribute instead [-Wobjc-noncopy-retain-block-property] [11:24:04]: ▸ @property(nonatomic, retain) FlutterResult result; [11:24:04]: ▸ ^ [11:24:04]: ▸ Compiling Runner_vers.c 
[11:24:04]: ▸ Linking Runner [11:24:04]: ▸ Linking Runner [11:24:04]: ▸ Copying Flutter/flutter_assets [11:24:04]: ▸ Copying AppFrameworkInfo.plist [11:24:04]: ▸ Compiling Main.storyboard [11:24:07]: ▸ Compiling LaunchScreen.storyboard [11:24:07]: ▸ Processing Info.plist [11:24:07]: ▸ Generating 'Runner.app.dSYM' [11:24:07]: ▸ Copying Flutter/App.framework [11:24:07]: ▸ Copying Flutter/Flutter.framework [11:24:07]: ▸ Signing /Users/sandeepcmsm/Library/Developer/Xcode/DerivedData/Runner-cjwdifweejxxcqcqvgmqafkdlrks/Build/Intermediates.noindex/ArchiveIntermediates/Runner/InstallationBuildProductsLocation/Applications/Runner.app/Frameworks/App.framework [11:24:08]: ▸ Running script 'Thin Binary' [11:24:08]: ▸ Running script '[CP] Embed Pods Frameworks' [11:24:11]: ▸ Signing /Users/sandeepcmsm/Library/Developer/Xcode/DerivedData/Runner-cjwdifweejxxcqcqvgmqafkdlrks/Build/Intermediates.noindex/ArchiveIntermediates/Runner/InstallationBuildProductsLocation/Applications/Runner.app/Frameworks/Flutter.framework [11:24:12]: ▸ Touching Runner.app [11:24:14]: ▸ Signing /Users/sandeepcmsm/Library/Developer/Xcode/DerivedData/Runner-cjwdifweejxxcqcqvgmqafkdlrks/Build/Intermediates.noindex/ArchiveIntermediates/Runner/InstallationBuildProductsLocation/Applications/Runner.app [11:24:15]: ▸ Touching Runner.app.dSYM [11:24:15]: ▸ Archive Succeeded [11:24:15]: Generated plist file with the following values: [11:24:15]: ▸ ----------------------------------------- [11:24:15]: ▸ { [11:24:15]: ▸ "provisioningProfiles": { [11:24:15]: ▸ "org.omcsa.omcsaapp": "match AppStore org.omcsa.omcsaapp" [11:24:15]: ▸ }, [11:24:15]: ▸ "method": "app-store", [11:24:15]: ▸ "signingStyle": "manual" [11:24:15]: ▸ } [11:24:15]: ▸ ----------------------------------------- [11:24:15]: $ /usr/bin/xcrun /Library/Ruby/Gems/2.3.0/gems/fastlane-2.101.1/gym/lib/assets/wrap_xcodebuild/xcbuild-safe.sh -exportArchive -exportOptionsPlist 
'/var/folders/36/8bp7_qhs0s39xxlqm1748r4c0000gp/T/gym_config20180928-75053-h9971w.plist' -archivePath /Users/sandeepcmsm/Library/Developer/Xcode/Archives/2018-09-28/Runner\ 2018-09-28\ 11.23.49.xcarchive -exportPath '/var/folders/36/8bp7_qhs0s39xxlqm1748r4c0000gp/T/gym_output20180928-75053-1nhk4'
+ xcodebuild -exportArchive -exportOptionsPlist /var/folders/36/8bp7_qhs0s39xxlqm1748r4c0000gp/T/gym_config20180928-75053-h9971w.plist -archivePath '/Users/sandeepcmsm/Library/Developer/Xcode/Archives/2018-09-28/Runner 2018-09-28 11.23.49.xcarchive' -exportPath /var/folders/36/8bp7_qhs0s39xxlqm1748r4c0000gp/T/gym_output20180928-75053-1nhk4
2018-09-28 11:24:16.188 xcodebuild[76195:12498725] [MT] IDEDistribution: -[IDEDistributionLogging _createLoggingBundleAtPath:]: Created bundle at path '/var/folders/36/8bp7_qhs0s39xxlqm1748r4c0000gp/T/Runner_2018-09-28_11-24-16.187.xcdistributionlogs'.
error: exportArchive: App.framework does not support provisioning profiles.
Error Domain=IDEProvisioningErrorDomain Code=10 "App.framework does not support provisioning profiles." UserInfo={NSLocalizedDescription=App.framework does not support provisioning profiles., NSLocalizedRecoverySuggestion=App.framework does not support provisioning profiles, but provisioning profile match AppStore org.omcsa.omcsaapp has been manually specified.
Remove this item from the "provisioningProfiles" dictionary in your Export Options property list.}
```
** EXPORT FAILED **
```
[11:24:16]: Exit status: 70
+---------------+-------------------------+
|            Build environment            |
+---------------+-------------------------+
| xcode_path    | /Applications/Xcode.app |
| gym_version   | 2.101.1                 |
| export_method | app-store               |
| sdk           | iPhoneOS12.0.sdk        |
+---------------+-------------------------+
[11:24:16]: ▸ cd /Users/sandeepcmsm/temp/omcsaapp/ios
[11:24:16]: ▸ export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/Users/sandeepcmsm/Library/Android/sdk/platform-tools:/Users/sandeepcmsm/Library/Android/sdk/tools:/Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home:/Users/sandeepcmsm/flutter/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
[11:24:16]: ▸ /usr/bin/touch -c /Users/sandeepcmsm/Library/Developer/Xcode/DerivedData/Runner-cjwdifweejxxcqcqvgmqafkdlrks/Build/Intermediates.noindex/ArchiveIntermediates/Runner/BuildProductsPath/Release-iphoneos/Runner.app.dSYM
[11:24:16]: ▸ ** ARCHIVE SUCCEEDED **
[11:24:16]:
[11:24:16]: ⬆️ Check out the few lines of raw `xcodebuild` output above for potential hints on how to solve this error
[11:24:16]: 📋 For the complete and more detailed error log, check the full log at:
[11:24:16]: 📋 /Users/sandeepcmsm/Library/Logs/gym/Runner-Runner.log
[11:24:16]:
[11:24:16]: Looks like fastlane ran into a build/archive error with your project
[11:24:16]: It's hard to tell what's causing the error, so we wrote some guides on how
[11:24:16]: to troubleshoot build and signing issues: https://docs.fastlane.tools/codesigning/getting-started/
[11:24:16]: Before submitting an issue on GitHub, please follow the guide above and make
[11:24:16]: sure your project is set up correctly.
[11:24:16]: fastlane uses `xcodebuild` commands to generate your binary, you can see the
[11:24:16]: the full commands printed out in yellow in the above log.
[11:24:16]: Make sure to inspect the output above, as usually you'll find more error information there
[11:24:16]:
+------------------------------------+-------------------------------------------------------------+
|                                        Lane Context                                              |
+------------------------------------+-------------------------------------------------------------+
| DEFAULT_PLATFORM                   | ios                                                         |
| PLATFORM_NAME                      | ios                                                         |
| LANE_NAME                          | ios beta                                                    |
| SIGH_PROFILE_TYPE                  | app-store                                                   |
| MATCH_PROVISIONING_PROFILE_MAPPING | {"org.omcsa.omcsaapp"=>"match AppStore org.omcsa.omcsaapp"} |
+------------------------------------+-------------------------------------------------------------+
[11:24:16]: Error packaging up the application
+------+------------------+-------------+
|           fastlane summary            |
+------+------------------+-------------+
| Step | Action           | Time (in s) |
+------+------------------+-------------+
| 1    | default_platform | 0           |
| 2    | match            | 10          |
| 💥   | build_app        | 28          |
+------+------------------+-------------+
[11:24:16]: fastlane finished with errors

[!] Error packaging up the application
```

```
[✓] Flutter (Channel master, v0.9.5-pre.17, on Mac OS X 10.13.6 17G65, locale en-IN)
    • Flutter version 0.9.5-pre.17 at /Users/sandeepcmsm/flutter
    • Framework revision acbef6aac4 (13 hours ago), 2018-09-27 19:08:33 +0200
    • Engine revision 38a646e14c
    • Dart version 2.1.0-dev.5.0.flutter-4cf2d3990b

[✓] Android toolchain - develop for Android devices (Android SDK 28.0.2)
    • Android SDK at /Users/sandeepcmsm/Library/Android/sdk
    • Android NDK at /Users/sandeepcmsm/Library/Android/sdk/ndk-bundle
    • Platform android-28, build-tools 28.0.2
    • ANDROID_HOME = /Users/sandeepcmsm/Library/Android/sdk
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)
    • All Android licenses accepted.

[✓] iOS toolchain - develop for iOS devices (Xcode 10.0)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 10.0, Build version 10A255
    • ios-deploy 1.9.2
    • CocoaPods version 1.6.0.beta.1

[✓] Android Studio (version 3.2)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin version 28.0.2
    • Dart plugin version 181.5616
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)

[✓] IntelliJ IDEA Ultimate Edition (version 2018.1.5)
    • IntelliJ at /Applications/IntelliJ IDEA.app
    • Flutter plugin version 25.0.2
    • Dart plugin version 181.4892.1

[✓] VS Code (version 1.27.2)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 2.18.0

[!] Connected device
    ! No devices available

! Doctor found issues in 1 category.
```

Can't export the build even though it's successfully archived. I'm running on Xcode 10. Am I missing something?
platform-ios,tool,t: xcode,P2,team-ios,triaged-ios
low
Critical
364,765,088
opencv
Infrastructure: Python incorrect test summary
Builders indicate that Python has passed 0 tests, which is not correct. Here is [an example build](http://pullrequest.opencv.org/buildbot/builders/precommit_linux64/builds/18369)

#### Actual summary
```
test_python2 test_python2 ; passed: 0 ;
test_python3 test_python3 ; passed: 0 ;
```

#### Expected summary
```
test_python2 test_python2 ; cases (tests): 64 (53) ; passed: 53 ; skipped: 11 ;
test_python3 test_python3 ; cases (tests): 64 (53) ; passed: 53 ; skipped: 11 ;
```
category: infrastructure
low
Minor
364,780,335
TypeScript
__importStar sometimes inlined when using dynamic imports with --importHelpers
**TypeScript Version:** 3.1.1

**Search Terms:** __importStar tslib

**Code**

a.ts:
```ts
import('b').then(b => console.log(b));
```

compile with

```
tsc a.ts --esModuleInterop --importHelpers
```

**Expected behavior:** some type errors, but emitted code should `require("tslib")` and use __importStar from there

**Actual behavior:** some type errors, and the generated code looks like this:

```
var __importStar = (this && this.__importStar) || function (mod) {
    if (mod && mod.__esModule) return mod;
    var result = {};
    if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];
    result["default"] = mod;
    return result;
};
Promise.resolve().then(function () { return __importStar(require('b')); }).then(function (b) { return console.log(b); });
```

btw, an example that works correctly:

```
import('c').then(b => console.log(b));
class Foo {}
export class Bar extends Foo {}
```

both the extends and the export seem to be necessary

**Playground Link:** N/A (--esModuleInterop doesn't exist in the playground)

**Related Issues:** This is basically #21560 but replacing regular imports with dynamic imports
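For reference, a small runnable sketch of what the inlined helper above actually does (the function body mirrors the emitted code; the sample CommonJS object is hypothetical). The bug is only about *where* the helper comes from — inlined vs. imported from tslib — not about its behavior:

```javascript
// The __importStar helper, as inlined by the compiler above.
var __importStar = function (mod) {
    if (mod && mod.__esModule) return mod;
    var result = {};
    if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) result[k] = mod[k];
    result["default"] = mod;
    return result;
};

// Hypothetical CommonJS module (no __esModule marker): it gets wrapped
// into a namespace object with a "default" binding pointing at the module.
var cjs = { foo: 1 };
var ns = __importStar(cjs);
console.log(ns.foo);                // 1
console.log(ns["default"] === cjs); // true

// An ES-module-shaped object passes through unchanged.
var esm = { __esModule: true, bar: 2 };
console.log(__importStar(esm) === esm); // true
```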
Bug
low
Critical
364,796,517
opencv
Issue with cv::resize
The function cv::resize behaves - from my perspective - incorrectly. Resizing images is not at all a trivial task, and I believe there is a misconception about scale and target image size. My main issue is that when the size parameter is given, cv::resize ignores the scale parameters and overrides them to (as documented) (𝚍𝚘𝚞𝚋𝚕𝚎)𝚍𝚜𝚒𝚣𝚎.𝚠𝚒𝚍𝚝𝚑/𝚜𝚛𝚌.𝚌𝚘𝚕𝚜 and (𝚍𝚘𝚞𝚋𝚕𝚎)𝚍𝚜𝚒𝚣𝚎.height/𝚜𝚛𝚌.row𝚜, respectively. It should not do that, but should respect the scales when they are given. A more natural way, which I understand would break a lot of old code and is thus difficult to introduce, would be changing the scales to the more correct (𝚍𝚘𝚞𝚋𝚕𝚎)(𝚍𝚜𝚒𝚣𝚎.𝚠𝚒𝚍𝚝𝚑-1)/(𝚜𝚛𝚌.𝚌𝚘𝚕𝚜-1), same for rows. Therefore I propose to change cv::resize such that it does not ignore the scale parameters when the size is given. Let me illustrate with an example: Assume you have an image with 5x5 pixels. The most common interpretation of an image is that your image has 25 square pixels, sampled in the center of each pixel. That means, between the leftmost and rightmost pixels, there is a distance of FOUR (not five!) pixel widths. The image is 5 pixels wide, but the distance between the outermost pixels is only 4 pixels. If I want to scale this image down by a factor of 2 in either direction, I want to make the pixels double the size. By doing that, the distance between the outermost pixels should be 2 pixels (it was 4 before). Since the width of the image is one pixel larger, the target image width (and height) will be 3 afterwards. This can be done without extrapolation. Now what does OpenCV do? Scaling from 5 to 3 pixels, it calculates this as a scaling factor of 3/5. That means, the pixels are made larger by a factor of 5/3. Now when sampling a grid of pixels with size 5/3 of the original pixels in the old grid, there's more than one way to do that. I can choose an offset in an interval of length 2/3 pixels, and get a valid interpolation for each. I haven't checked, but most likely OpenCV chooses the center of this interval as the offset.
There's no way to choose the offset, but I'd like to be able to choose the scale factor so there is no degree of freedom (without extrapolation). Now, I'm arguing that when scaling an image by a factor of 2, scaling from 5 to 3 pixels fits exactly, while scaling from 4 to 2 pixels does not. To some people, this might feel counterintuitive, but when considering images as a grid of square pixels, it is the natural interpretation. Why is this so important for me? I don't just want to scale images down, I want to scale them up again, to the same size and with the same scale factors. If I scale with an exact factor that does not allow for an offset when sampling the grid, I can do the same in the opposite direction. If I downsample with an offset, I need to extrapolate when upsampling, which I want to avoid. My solution right now: cv::remap instead of cv::resize. But this really feels like a hack, and I don't see a reason why a function taking four parameters (counting cv::Size as two) ignores half of them... (and the fact that it ignores them is not documented)
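To make the two conventions concrete, here is a small pure-Python sketch (not OpenCV code; that OpenCV samples at pixel centers with the offset formula below is my assumption of typical bilinear behaviour):

```python
# Compare the source sample coordinates produced by the two scale
# conventions when resizing a 5-pixel row down to 3 pixels.
src, dst = 5, 3

# OpenCV-style convention: scale = dst / src, and destination pixel i
# samples source coordinate (i + 0.5) / scale - 0.5 (pixel centers).
scale_cv = dst / src
coords_cv = [(i + 0.5) / scale_cv - 0.5 for i in range(dst)]

# Proposed convention: scale = (dst - 1) / (src - 1), so the first and
# last pixel centers map onto each other exactly.
scale_prop = (dst - 1) / (src - 1)
coords_prop = [i / scale_prop for i in range(dst)]

print(coords_cv)    # offset samples that never hit the outermost centers
print(coords_prop)  # [0.0, 2.0, 4.0]: endpoints map exactly
```

With the proposed convention the factor is exactly 2 and the endpoints map onto each other, so the same grid can be sampled back up without extrapolation, which is the round-trip property described above.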
category: imgproc,RFC,future
low
Minor
364,814,713
neovim
shada (viminfo) forgets marks, oldfiles
- `nvim --version`: ``` NVIM v0.3.2-dev Build type: RelWithDebInfo Lua 5.1 Compilation: /usr/bin/cc -g -O2 -fdebug-prefix-map=/build/neovim-1vOYWK/neovim-0.3.0~ubuntu1+git201809272035-64a8a8f-479a1d0=. -fstack-protector-strong -Wformat -Werror=format-security -Wconversion -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -O2 -g -DMIN_LOG_LEVEL=3 -Og -g -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fdiagnostics-color=auto -Wno-array-bounds -DINCLUDE_GENERATED_DECLARATIONS -D_GNU_SOURCE -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -I/build/neovim-1vOYWK/neovim-0.3.0~ubuntu1+git201809272035-64a8a8f-479a1d0/build/config -I/build/neovim-1vOYWK/neovim-0.3.0~ubuntu1+git201809272035-64a8a8f-479a1d0/src -I/build/neovim-1vOYWK/neovim-0.3.0~ubuntu1+git201809272035-64a8a8f-479a1d0/.deps/usr/include -I/usr/include -I/build/neovim-1vOYWK/neovim-0.3.0~ubuntu1+git201809272035-64a8a8f-479a1d0/build/src/nvim/auto -I/build/neovim-1vOYWK/neovim-0.3.0~ubuntu1+git201809272035-64a8a8f-479a1d0/build/include Compiled by buildd@lgw01-amd64-018 Features: +acl +iconv +jemalloc +tui See ":help feature-compile" system vimrc file: "$VIM/sysinit.vim" fall-back for $VIM: "/usr/share/nvim" ``` - Operating system/version: `Ubuntu 18.04.1 LTS \n \l` - Terminal name/version: xfce4-terminal 0.8.7.4 (Xfce 4.12) - `$TERM`: screen-256color (within tmux) ### Problem Hello, sorry I don't think I can have a reproducible example, I don't know yet when the shada file is cleaned but it seems to be cleaned once in a while. My shada settings are as follows : `shada='10000,<10000,s100,h,f1` For some reason, once in a while (every 3 days maybe), it seems that my shada file is wiped. The use case I have is that I use the `:oldfiles` command a lot to navigate to files that I have recently seen, however, the list keeps growing up to approximately 500 or so, and the next day, the list of oldfiles is back to something like 75. 
Maybe this is because of some mishandling on my part (I sometimes run nvim as root, and also sometimes use nvim -u NONE); could that be one cause of the issue? How can you help me debug this issue?
bug-regression,core,editor-state
low
Critical
364,815,290
TypeScript
Improve error message for incompatible signatures in union type from typed/untyped function call
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section. Cannot invoke an expression whose type lacks a call signature. Type '(() => string) | (T & Function)' has no compatible call signatures. [2349] --> ## Search Terms untyped function call error message union type incompatible signature <!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily --> ## Example This code errors as follows: ```ts function foo<T>(x: T | (() => string)) { if (typeof x === "function") { let a = x(); // ^ Cannot invoke an expression whose type lacks a call signature. //Type '(() => string) | (T & Function)' has no compatible call signatures. [2349] } } ``` which feels strange given that both of the following are fine: ```ts function fooL<T>(x: T) { if (typeof x === "function") { let a = x(); } } function fooR(x: () => string) { if (typeof x === "function") { let a = x(); } } ``` If an evaluation context accepts values of type `A` and values of type `B`, then it should accept values of type `A | B`. I *think* what is going on is that the call signature from `T & Function` is *untyped*, which is incompatible with the *typed* signature of `() => string`. I don't think this is very obvious to a user, and they end up seeing an application that works for both branches of a union, but not their composition. ## Suggestion I think making the example work is out of scope. My suggestion is to improve the error message, something like: ```ts function foo<T>(x: T | (() => string)) { if (typeof x === "function") { let a = x(); // ^ Cannot invoke an expression whose type lacks a call signature. // Type '(() => string) | (T & Function)' has no compatible call signatures.
[2349] // Cannot combine untyped function call with typed function call } } ``` <!-- A summary of what you'd like to see added or changed --> ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Domain: Error Messages,Experience Enhancement
low
Critical
364,849,439
TypeScript
In JSDoc, `?` of conditional is frequently parsed as postfix-`?`
**TypeScript Version:** 3.2.0-dev.20180927 **Search Terms:** JSDoc conditional types Using [Conditional Types](https://www.typescriptlang.org/docs/handbook/advanced-types.html#conditional-types) in JSDoc comments confuses the TypeScript parser since the `T extends Y ? A : B` syntax looks similar to the `Y?` (meaning `Y|null`) syntax from JSDoc. **Code** ```js /** * @template {{}} T * @param {T} o * @returns {T extends Readonly<infer U> ? (keyof U)[] : string[]} */ function Object_keys(o) { let keys = Object.keys(o); // @ts-ignore: Type assertion for stripping `Readonly<…>` return keys; } ``` **Expected behavior:** No error should be reported. The type of `Object_keys` should be: `<T extends {}>(o: T): T extends Readonly<infer U> ? (keyof U)[] : string[]`. **Actual behavior:** The TypeScript compiler shows an error (*"?" expected*) and mistakes the existing `?` operator for the JSDoc `<type>|null` syntax. The actual type therefore ends up being: `<T extends {}>(o: T): T extends Readonly<infer U> | null ? (keyof U)[] : string[]`. **Possible fixes:** 1. When encountering a top-level (not within parentheses) `?` token in an extends clause, scan ahead and check whether it is followed by an `|` or `&` token (type intersection or union operators). If yes, treat it as `|null` and keep processing; otherwise, assume it's the start of a conditional type declaration. (Requires at least an LL(2) parser.) 2. Prohibit the JSDoc `?` operator in the top-level of an extends clause, always causing it to be treated as the start of the conditional type declaration. **Playground Link:** None, Playground does not seem to support JavaScript with JSDoc instead of TypeScript as input.
Bug
medium
Critical
364,891,225
opencv
flann::Index::radiusSearch(): rename argument "radius" to "squaredRadius"
##### System information (version) - OpenCV => 3.4.1 - Operating System / Platform => Windows 64 Bit - Compiler => VS15 x64 ##### Detailed description flann::Index::radiusSearch() expects the **squared** radius but the argument is called `radius`. This is also not clear in https://www.cs.ubc.ca/research/flann/uploads/FLANN/flann_manual-1.8.4.pdf. I only found a comment at section 3.2.4, `flann_radius_search()`.
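A toy illustration of the convention (plain Python, not the actual OpenCV/FLANN API; `radius_search_squared` is a hypothetical helper):

```python
# Naive radius search whose threshold is compared against SQUARED
# distances, mirroring the convention described above. Callers must
# therefore pass radius*radius, not radius.
def radius_search_squared(points, query, squared_radius):
    return [i for i, p in enumerate(points)
            if sum((a - b) ** 2 for a, b in zip(p, query)) <= squared_radius]

points = [(0.0, 0.0), (3.0, 0.0), (0.0, 5.0)]
query = (0.0, 0.0)
r = 4.0

# Passing r directly silently behaves like a search with radius sqrt(r) = 2:
print(radius_search_squared(points, query, r))      # [0]
# Passing r*r gives the intended radius-4 neighbourhood:
print(radius_search_squared(points, query, r * r))  # [0, 1]
```

Renaming the argument to `squaredRadius` (or documenting it) would make this mistake much harder to commit.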
category: documentation,category: flann
low
Minor
364,895,141
material-ui
[RFC] color prop API
Currently our type declarations contain the following definition for color props: ```ts type Color = 'inherit' | 'primary' | 'secondary' | 'default'; ``` indicating that there is a library-wide understanding of what these colors represent and that every component that has a color prop should implement each variant. However, only `primary` and `secondary` are implemented for every component with a color prop. `inherit` and `default` are not implemented in every component. `default` doesn't even have a consistent style. ## Implementation overview | Component | primary | secondary | inherit | default | |------------|---------|-----------|---------|---------| | AppBar | x | x | x | x | | Badge | x | x | | x | | Button | x | x | x | x | | Chip | x | x | | x | | Icon | x | x | x | | | IconButton | x | x | x | x | | SvgIcon | x | x | x | | | Typography | x | x | x | x | ## `default` variant ### Implementation | Component | color | background-color | |------------|---------------------------------------------------------|-----------------------------------------------------------------------------------------| | AppBar | `theme.palette.getContrastText(backgroundColorDefault)` | `theme.palette.type === 'light' ? theme.palette.grey[100] : theme.palette.grey[900]` | | Badge | `theme.palette.textColor` (which is `undefined`) | `theme.palette.color` (also `undefined`) | | Button | `theme.typography.button` | global stylesheet | | Chip | `theme.palette.getContrastText(backgroundColor)` | `theme.palette.type === 'light' ?
theme.palette.grey[300] : theme.palette.grey[700]` | | IconButton | `theme.palette.action.active` | `fade(theme.palette.action.active, theme.palette.action.hoverOpacity)` if `:hover` | | Typography | global stylesheet | global stylesheet | ### Proposal Remove it because: - not even the actual default value for the components - not mentioned in the [material spec](https://material.io/design/color/the-color-system.html#) - broken for `Badge` with no report (I was not able to determine when this actually broke but I guess this happened a few months ago; Edit: passed undefined even in 1.0.0-alpha.2) People can always set the `color` prop to `undefined`, which will result in no applied css rules concerning color, which is a proper default in my opinion. ## `inherit` variant ### Implementation | Component | color | backgroundColor | |------------|-------------------|-------------------| | AppBar | global stylesheet | global stylesheet | | Button | `inherit` | global stylesheet | | Icon | global stylesheet | global stylesheet | | IconButton | `inherit` | global stylesheet | | SvgIcon | global stylesheet | global stylesheet | | Typography | `inherit` | global stylesheet | Funnily enough, in `Icon` `fontSize="inherit" color="inherit"` causes `font-size: inherit;` but no defined `color` in css. Also, the default for `fontSize` in those components is `default`, which __always__ applies no css rules, but the default for `color` is `inherit`, which only __sometimes__ applies no css rules. This might as well be removed. There is no value in a named default value in my opinion, but this should be discussed separately. ### Proposal No strong opinion about that one. Either repurpose this as a `default` replacement, which means color and background-color are not set, or actually set `inherit`, which is the most obvious. AppBar for example does not do anything with `inherit`, which might be confusing. /cc @mui-org/core-contributors
new feature,design: material,discussion,breaking change,RFC
medium
Critical
364,944,653
create-react-app
Proposal: Additional JS pre-processing on module build
As started in #5103, I would like to propose a bit more module processing than v2 is intended to have. My first impression of the module build was "finally, the same build for all the code!", but as I see it is not actually true: @gaearon [explained](https://github.com/facebook/create-react-app/issues/5103#issuecomment-425459196) that modules should already be in a valid JavaScript format. Our **use case**: several projects depend on a "core" package which we decided to use as a node module via git (not a git submodule, for a couple of convenience reasons). We now have to build the "core" project files with a separate script and Babel setup. It has some components, helpers, etc., and we mainly want the ability to use flat imports. Components use JSX, which, as it turned out, is not supported by the module build. ### Proposal: Additional JS pre-processing on module build Here are a couple of options I see for consideration: 1. Ability to "turn on" the same preprocessing for modules that the current project already has. 2. Ability to add something to the Babel "plugins" section (like "transform-react-jsx"). 3. Ability to use those modules as-is in application tests (see #5164 with example repos). These options would probably be more useful with the ability to specify this for a "list of modules/mask", or "per module".
issue: proposal
low
Major
364,945,717
flutter
Offline mode needs to be extended to all flutter commands
The `--offline` argument to `flutter packages get` is helpful in preventing packages from being downloaded, but since `flutter build` can also trigger getting packages, it needs to have a `--offline` added too. Ideally, I'd like the main tool to have the --offline option, and have the individual commands accommodate it if they can. It may mean that if SDK tools need downloading that the command fails, or that additional options be added to allow specification of file locations that might be needed. This is to assist people who not only want to work offline, but also people who want to build hermetic build systems that include Flutter, which is challenging right now because of the automatic download logic that the flutter tool contains.
c: new feature,tool,a: quality,P3,team-tool,triaged-tool
low
Major
364,953,229
angular
HttpTestingController can't mock HttpErrorResponse with successful status
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [x] Regression (a behavior that used to work and stopped working in a new release) </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> I want to test an HttpInterceptor when HttpErrorResponse has a successful (200) status. The situation became possible with the latest http client if the response JSON is not valid [source](https://github.com/angular/angular/blob/317d40d879cc5eac1073a9182384423596fc098b/packages/common/http/src/xhr.ts#L199). But it's not possible to mock HttpErrorResponse with a successful status [source](https://github.com/angular/angular/blob/cf0968f98e844043a0f6c2548201f3c0dfd329a7/packages/common/http/testing/src/request.ts#L83). TestRequest.error() does not allow mocking such behavior. NOTE: I try to intercept requests with html content and perform a special action on them. With the previous implementation of the http client such responses were handled as successful HttpResponses and should be handled in the next callback; now they are considered HttpErrorResponses and should be handled in the error handler. The current http client also ignores the response Content-Type, taking into account only the request's Content-Type (Accept header): [source](https://github.com/angular/angular/blob/317d40d879cc5eac1073a9182384423596fc098b/packages/common/http/src/xhr.ts#L182) I would rather analyze both response and request Content-Type but that can be tricky and not as straightforward as extending/fixing TestRequest. ## Expected behavior <!-- Describe what the desired behavior would be.
--> I should be able to mock HttpErrorResponse with successful status because it's possible to reproduce this case with the real http client ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://stackblitz.com or similar (you can use this template as a starting point: https://stackblitz.com/fork/angular-gitter). --> ``` describe('HttpTestingController test', () => { let httpMock: HttpTestingController; let http: HttpClient; beforeEach(() => { TestBed.configureTestingModule({ imports: [HttpClientTestingModule] }); httpMock = TestBed.get(HttpTestingController); http = TestBed.get(HttpClient); }); it('should do request', () => { const uri = 'test\\api'; http.get<{ data: string }>(uri).subscribe((e) => console.log(e)); let response = httpMock.expectOne(uri); // shouldn't throw here: response.error(<any>{}, { status: 200, statusText: "OK", headers: new HttpHeaders({ 'Content-Type': 'text/html' }) }); }); }) ``` ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> I want to mock HttpErrorResponse with successful status to unit test custom HttpInterceptor ## Environment <pre><code> Angular version: 6.1.1 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [x] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
type: bug/fix,help wanted,freq1: low,area: common/http,state: confirmed,P4
low
Critical
364,954,352
TypeScript
Stack overflow within collectDynamicImportOrRequireCalls
**TypeScript Version:** 3.1.0-dev.20180925 ``` git clone --depth=1 https://github.com/zuiidea/antd-admin.git cd antd-admin tsc --init tsc --allowJs --checkJs ``` Compiler crashes with the callstack: ``` RangeError: Maximum call stack size exceeded at Object.isRequireCall (node_modules\typescript\lib\tsc.js:7304:27) at collectDynamicImportOrRequireCalls (node_modules\typescript\lib\tsc.js:69739:24) at visitNode (node_modules\typescript\lib\tsc.js:12689:24) at Object.forEachChild (node_modules\typescript\lib\tsc.js:12890:24) at collectDynamicImportOrRequireCallsForEachChild (node_modules\typescript\lib\tsc.js:69754:20) at collectDynamicImportOrRequireCalls (node_modules\typescript\lib\tsc.js:69748:17) at visitNode (node_modules\typescript\lib\tsc.js:12689:24) at Object.forEachChild (node_modules\typescript\lib\tsc.js:12890:24) at collectDynamicImportOrRequireCallsForEachChild (node_modules\typescript\lib\tsc.js:69754:20) at collectDynamicImportOrRequireCalls (node_modules\typescript\lib\tsc.js:69748:17) ```
Bug,Crash
low
Critical
365,032,217
pytorch
Network surgery for transfer fails
## Per the pytorch/caffe2 Readme I am asking here. I would like to use an existing network definition and weights from the model zoo as the backbone for a new network. In this specific example the architecture will be squeezenet, and the new network simply has a different shape for the top parameterized layers ['conv10_w', 'conv10_b'], to accommodate a different set of classes from Imagenet. Unfortunately, it is not clear to me from the documentation, tutorials, or examples how to achieve this. Some OS notes: I have built caffe2+OpenCV from source with the current master, into a python2.7.12 virtualenv, cuda 9.0, cuDNN 7.0. I wrote a script ( based on https://nbviewer.jupyter.org/gist/kyamagu/6cff70840c10ca374e069a3a7eb00cb4/dogs-vs-cats.ipynb ) that I think should do this: https://gist.github.com/johncorring/d735675e75add96fbdfbcc40fa00f3ba I get the following error message: Traceback (most recent call last): File "dogsvscats.py", line 184, in <module> shtyp = workspace.InferShapesAndTypes([train_model.net]) File "/home/john/Code/pytorch/build/caffe2/python/workspace.py", line 258, in InferShapesAndTypes blobdesc_prototxt = C.infer_shapes_and_types_from_workspace(net_protos) MemoryError: std::bad_alloc which isn't very helpful (especially since cross-referencing against the caffe2 docs doesn't yield anything). When I comment out the offending line and try to continue on to training I receive a seg fault that I have narrowed down to coming from line 204, workspace.RunNet(train_model.net).
lldb returns the following stack trace: thread #1: tid = 9130, 0x00007fffaa112240 libcaffe2.so`void caffe2::math::CopyMatrix<float, caffe2::CPUContext>(int, int, float const*, int, int, float*, int, int, caffe2::CPUContext*) + 208, name = 'python', stop reason = signal SIGSEGV: address access protected (fault address: 0xb15400000) * frame #0: 0x00007fffaa112240 libcaffe2.so`void caffe2::math::CopyMatrix<float, caffe2::CPUContext>(int, int, float const*, int, int, float*, int, int, caffe2::CPUContext*) + 208 frame #1: 0x00007fffaa11392f libcaffe2.so`void caffe2::math::Im2Col<float, caffe2::CPUContext, (caffe2::StorageOrder)2>(int, int, int, int, int, int, int, int, int, int, int, int, int, float const*, float*, caffe2::CPUContext*, int) + 1087 frame #2: 0x00007fffaa3f52b1 libcaffe2.so`caffe2::ConvOp<float, caffe2::CPUContext>::RunOnDeviceWithOrderNCHW()::{lambda(caffe2::Tensor*)#1}::operator()(caffe2::Tensor*) const + 1169 frame #3: 0x00007fffaa3f77f8 libcaffe2.so`caffe2::ConvOp<float, caffe2::CPUContext>::RunOnDeviceWithOrderNCHW() + 2712 frame #4: 0x00007fffaa1c93ed libcaffe2.so`caffe2::ConvPoolOpBase<caffe2::CPUContext>::RunOnDevice() + 301 frame #5: 0x00007fffa9fb52e5 libcaffe2.so`caffe2::Operator<caffe2::CPUContext>::Run(int) + 229 frame #6: 0x00007fffaa09275c libcaffe2.so`caffe2::SimpleNet::Run() + 460 frame #7: 0x00007fffaa0aeb8a libcaffe2.so`caffe2::Workspace::RunNet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 954 frame #8: 0x00007fffab11a277 caffe2_pybind11_state_gpu.so`void pybind11::cpp_function::initialize<caffe2::python::addGlobalMethods(pybind11::module&)::{lambda(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, bool)#21}, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, bool, pybind11::name, pybind11::scope, 
pybind11::sibling>(caffe2::python::addGlobalMethods(pybind11::module&)::{lambda(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, bool)#21}&&, bool (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, bool), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) + 311 frame #9: 0x00007fffab160220 caffe2_pybind11_state_gpu.so`pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3552 frame #10: 0x00000000004c30ce python`PyEval_EvalFrameEx + 29342 frame #11: 0x00000000004b9ab6 python`PyEval_EvalCodeEx + 774 frame #12: 0x00000000004c1e6f python`PyEval_EvalFrameEx + 24639 frame #13: 0x00000000004b9ab6 python`PyEval_EvalCodeEx + 774 frame #14: 0x00000000004c16e7 python`PyEval_EvalFrameEx + 22711 frame #15: 0x00000000004b9ab6 python`PyEval_EvalCodeEx + 774 frame #16: 0x00000000004eb30f python`??? + 63 frame #17: 0x00000000004e5422 python`PyRun_FileExFlags + 130 frame #18: 0x00000000004e3cd6 python`PyRun_SimpleFileExFlags + 390 frame #19: 0x0000000000493ae2 python`Py_Main + 1554 frame #20: 0x00007ffff7810830 libc.so.6`__libc_start_main(main=(python`main), argc=2, argv=0x00007fffffffda18, init=<unavailable>, fini=<unavailable>, rtld_fini=<unavailable>, stack_end=0x00007fffffffda08) + 240 at libc-start.c:291 frame #21: 0x00000000004933e9 python`_start + 41
caffe2
low
Critical
365,044,697
go
cmd/go: C compiler flags not passed when building buildid.s
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.10.3 gccgo (GCC) 8.2.1 20180813 solaris/sparc ### Does this issue reproduce with the latest release? yes, compiled from source. ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="sparc" GOBIN="" GOCACHE="/home/amandeep/.cache/go-build" GOEXE="" GOHOSTARCH="sparc" GOHOSTOS="solaris" GOOS="solaris" GOPATH="/opt/go_pkgs" GORACE="" GOROOT="/usr/gnu" GOTMPDIR="" GOTOOLDIR="/usr/gnu/libexec/gcc/sparc-sun-solaris2.11/8.2.1" GCCGO="/usr/gnu/bin/gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build299260346=/tmp/go-build -gno-record-gcc-switches -funwind-tables" ``` ### What did you do? tried compiling a 64 bit lib with go install. In this particular example, trying to install glog ``` amandeep@s113ldom1:/opt/go_pkgs/src/github.com/golang/glog$ sudo GOPATH=/opt/go_pkgs CC='gcc -m64' CGO_CFLAGS='-m64' CGO_LDFLAGS='-m64' GOARCH=sparc64 CGO_ENABLED=1 go install -x -gccgoflags=-m64 WORK=/tmp/go-build365019570 mkdir -p $WORK/b001/ cd $WORK /usr/gnu/bin/gccgo -fgo-importcfg=/dev/null -c -x c - || true cd /opt/go_pkgs/src/github.com/golang/glog /usr/gnu/bin/gccgo -c -g -fdebug-prefix-map=$WORK=/tmp/go-build -gno-record-gcc-switches -fgo-pkgpath=github.com/golang/glog -o $WORK/b001/_go_.o -I $WORK/b001/_importcfgroot_ -m64 ./glog.go ./glog_file.go echo ' .section .go.buildid,"e"' >> $WORK/b001/_buildid.s echo ' .byte 0x79,0x71,0x58,0x37,0x74,0x64,0x76,0x6c' >> $WORK/b001/_buildid.s echo ' .byte 0x7a,0x59,0x53,0x39,0x41,0x6a,0x30,0x58' >> $WORK/b001/_buildid.s echo ' .byte 0x34,0x53,0x78,0x39,0x2f,0x79,0x71,0x58' >> $WORK/b001/_buildid.s echo ' .byte 0x37,0x74,0x64,0x76,0x6c,0x7a,0x59,0x53' >> $WORK/b001/_buildid.s echo ' 
.byte 0x39,0x41,0x6a,0x30,0x58,0x34,0x53,0x78' >> $WORK/b001/_buildid.s echo ' .byte 0x39' >> $WORK/b001/_buildid.s echo '' >> $WORK/b001/_buildid.s /usr/gnu/bin/gccgo -xassembler-with-cpp -I $WORK/b001/ -c -o $WORK/b001/_buildid.o -D GOOS_solaris -D GOARCH_sparc64 -D GOPKGPATH=github_com_golang_glog $WORK/b001/_buildid.s ar rcD $WORK/b001/_pkg_.a $WORK/b001/_go_.o $WORK/b001/_buildid.o /usr/gnu/libexec/gcc/sparc-sun-solaris2.11/8.2.1/buildid -w $WORK/b001/_pkg_.a # internal cp $WORK/b001/_pkg_.a /root/.cache/go-build/a4/a47820a6bcb6e72c26a3be36129e80b12a0f0193dd5959e2c7304a3b169d9af8-d # internal mkdir -p /opt/go_pkgs/pkg/gccgo_solaris_sparc64/github.com/golang/ cp $WORK/b001/_pkg_.a /opt/go_pkgs/pkg/gccgo_solaris_sparc64/github.com/golang/libglog.a rm -r $WORK/b001/ ``` More info on the mailing list about it: https://groups.google.com/forum/#!topic/golang-nuts/mRVt7Ge2iaM ### What did you expect to see? a 64-bit lib ### What did you see instead? a 32-bit lib ``` amandeep@s113ldom1:/opt/go_pkgs/src/github.com/golang/glog$ file /opt/go_pkgs/pkg/gccgo_solaris_sparc64/github.com/golang/libglog.a /opt/go_pkgs/pkg/gccgo_solaris_sparc64/github.com/golang/libglog.a: current ar archive, 32-bit symbol table ```
NeedsFix
low
Critical
365,072,962
vscode
[folding] clicking on on line number of folded line should select full folding range
Issue Type: <b>Bug</b> 1. Fold an area (e.g. a block in JSON) to a line 2. Click the margin of the folded line 3. Try to drag the mouse to select the whole folded area **Expected**: Can select (only) the whole folded area **Actual**: It's either the first line of the folded area, or the whole folded area **and** the next line. Other editors do not have this problem, but the solutions are different: - Visual Studio: The whole folded area is selected when you click the margin - Sublime Text: When you drag down, it expands the selection from the first line to the whole folded area. If you keep dragging, it will expand the selection to include the next line. **Workaround**: Click the beginning of the folded line, drag to the beginning of the next line. It works, but requires much more mouse precision than clicking/dragging in the margin area. **Suggestion**: A simple question: how often do you want to select **only** the first line when you click the margin of a **folded** area? I believe it's very rare, so Visual Studio's solution is desired. VS Code version: Code 1.27.2 (f46c4c469d6e6d8c46f268d1553c5dc4b475840f, 2018-09-12T16:17:45.060Z) OS version: Windows_NT x64 10.0.17134 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-8705G CPU @ 3.10GHz (8 x 3096)| |GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: enabled<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: unavailable_off| |Memory (System)|15.85GB (4.97GB free)| |Process Argv|C:\Program Files\Microsoft VS Code\Code.exe D:\Repos\modern\NovaApi| |Screen Reader|no| |VM|0%| </details> <!-- generated by issue reporter -->
help wanted,feature-request,editor-folding
low
Critical
365,076,870
pytorch
[Feature] Support Adaptive Max Gradient Norm / Clipping
## 🚀 Feature This feature is to use a moving windowed median or exponential moving average for gradient clipping and normalization. Additionally, it'd be nice to support batch skipping if the gradient norm is above a threshold, which is usually an indication of an imminent gradient explosion. ## Motivation The threshold for gradient clipping is a hyperparameter that needs to be set uniquely for each model. However, it's supported by at least two important and well-written papers that I know of that this parameter can be set automatically: - https://arxiv.org/abs/1212.0901 "The cutoff threshold for gradient clipping is set based on the average norm of the gradient over one pass on the data" - https://arxiv.org/abs/1804.09849 "To further stabilize training, we also use adaptive gradient clipping. We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion. More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average." In addition, I've found in my own experiments in audio synthesis that setting the parameter automatically is helpful. ## Pitch Implement a gradient clipping module that stores an internal state used for deciding the threshold for clipping.
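A minimal pure-Python sketch of the heuristic quoted from the second paper (the class name, the EMA decay, and the warm-up length are my own choices, not from the paper or from any existing PyTorch API):

```python
import math

class AdaptiveGradSkipper:
    """Track an exponential moving average / std of log(grad_norm) and
    skip any step whose log-norm exceeds mean + n_sigma * std."""

    def __init__(self, beta=0.99, n_sigma=4.0, warmup=30):
        self.beta = beta        # EMA decay for the running statistics
        self.n_sigma = n_sigma  # anomaly threshold in standard deviations
        self.warmup = warmup    # steps before skipping is enabled
        self.steps = 0
        self.mean = 0.0         # EMA of log(norm)
        self.var = 0.0          # EMA of squared deviation from the mean

    def should_skip(self, grad_norm):
        log_n = math.log(grad_norm + 1e-12)
        self.steps += 1
        if self.steps == 1:
            self.mean = log_n  # initialize on the first observation
            return False
        if self.steps > self.warmup:
            std = math.sqrt(self.var)
            if log_n > self.mean + self.n_sigma * std:
                return True  # anomaly: skip the batch, keep stats frozen
        delta = log_n - self.mean
        self.mean = self.beta * self.mean + (1 - self.beta) * log_n
        self.var = self.beta * self.var + (1 - self.beta) * delta * delta
        return False

# 50 well-behaved norms followed by an exploding one:
norms = ([1.0, 1.05, 0.95] * 17)[:50] + [1e6]
skipper = AdaptiveGradSkipper()
decisions = [skipper.should_skip(n) for n in norms]
print(decisions[-1])  # True: the 1e6 spike is flagged for skipping
```

In a training loop this check would run after `backward()` and before `optimizer.step()`, skipping the step (and optionally zeroing the grads) when it returns True.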
triaged,enhancement,module: norms and normalization
low
Minor
365,078,717
javascript-algorithms
Add Persistent Vector?
I would like to contribute by adding a [persistent vector](http://www.hypirion.com/musings/understanding-persistent-vector-pt-1); it's a cool immutable data structure. Thumbs up if it interests you.
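For context, the core trick from the linked article, path copying over a wide branching trie, can be sketched as follows. This is a toy 2-way-branching version written in Go purely for illustration (real implementations, e.g. Clojure's, use 32-way branching plus a tail buffer, and every name here is made up):

```go
package main

import "fmt"

const (
	bits  = 1 // 2-way branching for readability; Clojure uses 5 bits (32-way)
	width = 1 << bits
	mask  = width - 1
)

// node holds child nodes at internal levels and values at the leaf level.
type node struct {
	child [width]interface{}
}

// PVec is an immutable vector: Push returns a new vector and never mutates
// the receiver, so old versions stay valid and share structure.
type PVec struct {
	count int
	shift uint // bits * (trie depth - 1)
	root  *node
}

func Empty() PVec { return PVec{root: &node{}} }

func (v PVec) Len() int { return v.count }

// Get walks one node per trie level, O(log n).
func (v PVec) Get(i int) interface{} {
	n := v.root
	for level := v.shift; level > 0; level -= bits {
		n = n.child[(i>>level)&mask].(*node)
	}
	return n.child[i&mask]
}

// Push appends x, copying only the O(log n) nodes on the path to the new
// slot; all untouched subtrees are shared with the old vector.
func (v PVec) Push(x interface{}) PVec {
	if v.count == 1<<(v.shift+bits) { // trie full: add a new root level
		r := &node{}
		r.child[0] = v.root
		v.root, v.shift = r, v.shift+bits
	}
	v.root = assoc(v.root, v.shift, v.count, x)
	v.count++
	return v
}

// assoc returns a copy of n with index i set to x (path copying).
func assoc(n *node, level uint, i int, x interface{}) *node {
	c := &node{}
	if n != nil {
		*c = *n
	}
	if level == 0 {
		c.child[i&mask] = x
		return c
	}
	var sub *node
	if ch := c.child[(i>>level)&mask]; ch != nil {
		sub = ch.(*node)
	}
	c.child[(i>>level)&mask] = assoc(sub, level-bits, i, x)
	return c
}

func main() {
	v := Empty()
	for i := 0; i < 10; i++ {
		v = v.Push(i)
	}
	older := v
	v = v.Push(99)
	fmt.Println(older.Len(), v.Len(), v.Get(10)) // prints: 10 11 99
}
```

Because only the path to the modified slot is copied, each Push costs O(log n) allocations while every earlier version of the vector remains readable unchanged.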
enhancement
medium
Minor
365,087,721
rust
"overflow evaluating the requirement `...: std::marker::Sync`" instead of error
I ran into some code that causes rustc to recurse until it hits whatever you have the recursion limit set to (I tried up to `#![recursion_limit="80192"]`). The code in question should actually result in a compile time error, but not that one. Here's the minimal case I could come up with: ``` static FOO: Option<Foo> = None; type NotSend = *const (); struct Foo { foo: (Box<(Foo, NotSend)>, NotSend), } fn main() { } ``` Here's the error I would've expected to get: ``` Compiling playground v0.0.1 (file:///playground) error[E0277]: `*const ()` cannot be shared between threads safely --> src/main.rs:1:1 | 1 | static FOO: Option<Foo> = None; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `*const ()` cannot be shared between threads safely | = help: within `(Foo, *const ())`, the trait `std::marker::Sync` is not implemented for `*const ()` = note: required because it appears within the type `(Foo, *const ())` = note: required because of the requirements on the impl of `std::marker::Sync` for `std::ptr::Unique<(Foo, *const ())>` = note: required because it appears within the type `std::boxed::Box<(Foo, *const ())>` = note: required because it appears within the type `Foo` = note: required because it appears within the type `std::option::Option<Foo>` = note: shared static variables must have a type that implements `Sync` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. error: Could not compile `playground`. To learn more, run the command again with --verbose. 
``` Here's what actually happened: ``` Compiling playground v0.0.1 (file:///playground) error[E0275]: overflow evaluating the requirement `std::ptr::Unique<(Foo, *const ())>: std::marker::Sync` --> src/main.rs:1:1 | 1 | static FOO: Option<Foo> = None; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: consider adding a `#![recursion_limit="128"]` attribute to your crate = note: required because it appears within the type `std::boxed::Box<(Foo, *const ())>` = note: required because it appears within the type `(std::boxed::Box<(Foo, *const ())>, *const ())` = note: required because it appears within the type `Foo` = note: required because it appears within the type `(Foo, *const ())` = note: required because of the requirements on the impl of `std::marker::Sync` for `std::ptr::Unique<(Foo, *const ())>` = note: required because it appears within the type `std::boxed::Box<(Foo, *const ())>` error: aborting due to previous error For more information about this error, try `rustc --explain E0275`. error: Could not compile `playground`. To learn more, run the command again with --verbose. ``` Here's a [link to the playground](https://play.rust-lang.org/?gist=cfa54cea5842d9b178a428f53f8a5445&version=stable&mode=debug&edition=2015). ## Meta The above output is copy and pasted from play, but I first found the issue locally with the following rust version: ``` rustc --version --verbose rustc 1.29.1 (b801ae664 2018-09-20) binary: rustc commit-hash: b801ae66425cf7c3c71052b19ef8f145b0d0513d commit-date: 2018-09-20 host: x86_64-apple-darwin release: 1.29.1 LLVM version: 7.0 ```
A-type-system,A-trait-system,T-compiler,C-bug,T-types
low
Critical
365,087,954
go
x/net/http2: Ability to set initial flow control in transport.
### What version of Go are you using (`go version`)? Go 1.11 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? linux + darwin on amd64 and android on arm. ### What did you do? Made an http2 request. ### What did you expect to see? This is a feature request. ### What did you see instead? Configurable values, such as the ones on the Server but for the Transport, would be great :D As the latency gets higher this has a sizable effect. Currently the window is at some huge value for the connection and 4MB per stream. In a server that makes lots of outbound calls these might want to be reduced, and in known high-latency cases they may need to go up.
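A quick back-of-the-envelope sketch of why this matters at high latency: sustained throughput on a single stream is bounded by the flow-control window divided by the round-trip time. The snippet below only does that arithmetic (the 4MB figure mirrors the stream window mentioned above; nothing here is an actual x/net/http2 API):

```go
package main

import "fmt"

// maxThroughput returns the bandwidth-delay-product ceiling in bytes per
// second: a sender can have at most windowBytes in flight per round trip.
func maxThroughput(windowBytes int, rttMs int) float64 {
	return float64(windowBytes) / (float64(rttMs) / 1000.0)
}

func main() {
	window := 4 << 20 // 4MB initial stream window
	for _, rtt := range []int{10, 100, 300} {
		mbit := maxThroughput(window, rtt) * 8 / 1e6
		fmt.Printf("RTT %3dms -> at most %.0f Mbit/s on one stream\n", rtt, mbit)
	}
}
```

At a 300ms RTT the 4MB window caps a single stream at roughly 112 Mbit/s no matter how fast the link is, which is why a configurable initial window on the transport side would help.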
NeedsInvestigation,FeatureRequest
low
Major
365,093,777
opencv
Allow contributors to request reviews
I believe there is no harm in allowing contributors to request a review after creating a PR.
category: infrastructure
low
Minor
365,111,683
rust
Borrow checker extends borrow range in code with early return
Borrow checker seems to extend mutable borrow of variable to the end of the function in case of an early return of value dependent on the borrow: ```rust fn foo(x: &mut u8) -> Option<&u8> { if let Some(y) = bar(x) { return Some(y) // comment this out to calm compiler } bar(x) } fn bar(x: &mut u8) -> Option<&u8> { Some(x) } ``` Gives: ``` error[E0499]: cannot borrow `*x` as mutable more than once at a time --> src/main.rs:7:9 | 4 | if let Some(y) = bar(x) { | - first mutable borrow occurs here ... 7 | bar(x) | ^ second mutable borrow occurs here 8 | } | - first borrow ends here ``` Compilation fails on stable, nightly and nightly with NLL, works on nightly with polonius. [Try it on Playground](https://play.rust-lang.org/?gist=166bc56651e9f52726237066d06624df)
A-borrow-checker,T-compiler,A-NLL,C-bug,fixed-by-polonius
medium
Critical
365,138,216
go
proposal: add container/queue
# Proposal: Built in support for high performance unbounded queue Author: Christian Petrin. Last updated: November 26, 2018 Discussion at: https://github.com/golang/go/issues/27935 Design document at https://github.com/golang/proposal/blob/master/design/27935-unbounded-queue-package.md ## Abstract I propose to add a new package, "container/queue", to the standard library to support an in-memory, unbounded, general purpose queue implementation. [Queues](https://en.wikipedia.org/wiki/Queue_(abstract_data_type)) in computer science are a very old, well-established and well-known concept, yet Go doesn't provide a specialized, safe-to-use, performant and issue-free unbounded queue implementation. Buffered channels provide an excellent option to be used as a queue, but buffered channels are bounded and so don't scale to support very large data sets. The same applies to the standard [ring package](https://github.com/golang/go/tree/master/src/container/ring). The standard [list package](https://github.com/golang/go/tree/master/src/container/list) can be used as the underlying data structure for building unbounded queues, but the performance yielded by this linked list based implementation [is not optimal](https://github.com/christianrpetrin/queue-tests/blob/master/bench_queue.md). Implementing a queue using slices as suggested [here](https://stackoverflow.com/a/26863706) is a feasible approach, but the performance yielded by this implementation can be abysmal in some [high load scenarios](https://github.com/christianrpetrin/queue-tests/blob/master/bench_queue.md). ## Background Queues that grow dynamically have many uses. As an example, I'm working on a logging system called [CloudLogger](https://github.com/cloud-logger/docs) that sends all logged data to external logging management systems, such as [Stackdriver](https://cloud.google.com/stackdriver/) and [Cloudwatch](https://aws.amazon.com/cloudwatch/).
External logging systems typically [rate limit](https://en.wikipedia.org/wiki/Rate_limiting) how much data their service will accept for a given account and time frame. So in a scenario where the hosting application is logging more data than the logging management system will accept at a given moment, CloudLogger has to queue the extra logs and send them to the logging management system at a pace the system will accept. As there's no telling how much data will have to be queued, since it depends on the current traffic, an unbounded, dynamically growing queue is the ideal data structure to use. Buffered channels in this scenario are not ideal as they have a limit on how much data they will accept, and once that limit has been reached, the producers (routines adding to the channel) start to block, making adding to the channel an "eventually" synchronous operation. A fully asynchronous operation in this scenario is highly desirable as logging data should not significantly slow down the hosting application. The above problem is one that, potentially, every system that calls another system faces. And in the [cloud](https://en.wikipedia.org/wiki/Cloud_computing) and [microservices](https://en.wikipedia.org/wiki/Microservices) era, this is an extremely common scenario. Due to the lack of support for built-in unbounded queues in Go, Go engineers are left to either: 1) Research and use external packages, or 2) Build their own queue implementation. Both approaches are riddled with pitfalls. Using external packages, especially in enterprise level software, requires a lot of care, as external, potentially untested and hard-to-understand code can have unwanted consequences.
This problem is made much worse by the fact that, currently, there's no well-established and disseminated open source Go queue implementation, according to [this stackoverflow discussion](https://stackoverflow.com/questions/2818852/is-there-a-queue-implementation), [this github search for Go queues](https://github.com/search?l=Go&q=go+queue&type=Repositories) and [Awesome Go](https://awesome-go.com/). Building a queue, on the other hand, might sound like a compelling alternative, but building an efficient, high-performance, bug-free unbounded queue is a hard job that requires a pretty solid computer science foundation as well as a good deal of time to research different design approaches, test different implementations, make sure the code is bug and memory leak free, etc. In the end, what Go engineers have been doing up to this point is building their own queues, which are for the most part inefficient and can have disastrous, yet hidden, performance and memory issues. As examples of poorly designed and/or implemented queues, the approaches suggested [here](https://stackoverflow.com/a/26863706) and [here](https://stackoverflow.com/a/11757161) (among many others) require a linear copy of the internal slice for resizing purposes. Some implementations also have memory issues, such as an ever-expanding internal slice, and memory leaks. ## Proposal I propose to add a new package, "container/queue", to the standard library to support in-memory unbounded queues.
The [proposed queue implementation](https://github.com/christianrpetrin/queue-tests/blob/master/queueimpl7/queueimpl7.go) offers [excellent performance and very low memory consumption](https://github.com/christianrpetrin/queue-tests/blob/master/bench_queue.md) when compared to three promising open source implementations ([gammazero](https://github.com/gammazero/deque), [phf](https://github.com/phf/go-queue) and [juju](https://github.com/juju/utils/tree/master/deque)); to using Go channels as a queue; to the standard list package as a queue; and to six other experimental queue implementations. The [proposed queue implementation](https://github.com/christianrpetrin/queue-tests/blob/master/queueimpl7/queueimpl7.go) offers the most balanced approach to performance given different loads, being significantly faster and still using less memory than every other queue implementation in the [tests](https://github.com/christianrpetrin/queue-tests/blob/master/benchmark_test.go). The closest data structure Go has to offer for building dynamically growing queues for large data sets is the [standard list package](https://github.com/golang/go/tree/master/src/container/list). When comparing the proposed solution to [using the list package as an unbounded queue](https://github.com/christianrpetrin/queue-tests/blob/master/benchmark_test.go) (refer to "BenchmarkList"), the proposed solution is consistently faster than using the list package as a queue, while also displaying a much lower memory footprint. ### Reasoning There are [two well-accepted approaches](https://en.wikipedia.org/wiki/Queue_(abstract_data_type)#Queue_implementation) to implementing queues when it comes to the queue's underlying data structure: 1) Using a linked list 2) Using an array A linked list as the underlying data structure for an unbounded queue has the advantage of scaling efficiently when the underlying data structure needs to grow to accommodate more values.
This is due to the fact that the existing elements don't need to be repositioned or copied around when the queue needs to grow. However, there are a few concerns with this approach: 1) The use of prev/next pointers for each value requires a good deal of extra memory 2) Due to the fact that each "node" in the linked list can be allocated far away from the previous one, navigating through the list can be slow due to its bad [memory locality](https://www.cs.cornell.edu/courses/cs3110/2012sp/lectures/lec25-locality/lec25.html) properties 3) Adding new values always requires new memory allocations and pointers being set, hindering performance On the other hand, using a slice as the underlying data structure for unbounded queues has the advantage of very good [memory locality](https://www.cs.cornell.edu/courses/cs3110/2012sp/lectures/lec25-locality/lec25.html), making retrieval of values faster when compared to linked lists. Also, an "alloc more than needed right now" approach can easily be implemented with slices. However, when the slice needs to expand to accommodate new values, a [well-adopted strategy](https://en.wikipedia.org/wiki/Dynamic_array#Geometric_expansion_and_amortized_cost) is to allocate a new, larger slice, copy over all elements from the previous slice into the new one and use the new one to add the new elements. The problem with this approach is the obvious need to copy all the values from the older, smaller slice into the new one, yielding poor performance when the number of values that need copying is fairly large. Another potential problem is a theoretical lower limit on how much data a slice-based queue can hold: slices, like arrays, have to allocate their positions at sequential memory addresses, so the maximum number of items the queue would ever be able to hold is the maximum size a slice can be allocated on that particular system at any given moment.
Due to modern memory management techniques such as [virtual memory](https://en.wikipedia.org/wiki/Virtual_memory) and [paging](https://en.wikipedia.org/wiki/Paging), this is a very hard scenario to corroborate through practical testing. Nonetheless, this approach doesn't scale well with large data sets. Having said that, there's a third, newer approach to implementing unbounded queues: use fixed size linked slices as the underlying data structure. The fixed size linked slices approach is a hybrid between the first two, providing the good memory locality arrays have alongside the efficient growing mechanism linked lists offer. It is also not limited by the maximum size a slice can be allocated, being able to hold and deal efficiently with a theoretically much larger amount of data than pure slice based implementations. ## Rationale ### Research [A first implementation](https://github.com/cloud-spin/queue) of the new design was built. The benchmark tests showed the new design was very promising, so I decided to research other possible queue designs and implementations with the goal of improving the first design and implementation. As part of the research to identify the best possible queue designs and implementations, I implemented and probed a total of 7 experimental queue implementations. Below are a few of the most interesting ones. - [queueimpl1](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl1/queueimpl1.go): custom queue implementation that stores the values in a simple slice. Pop removes the first slice element. This is a slice based implementation that tests [this](https://stackoverflow.com/a/26863706) suggestion. - [queueimpl5](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl5/queueimpl5.go): custom queue implementation that stores the values in linked slices. This implementation tests the queue performance when storing the "next" pointer as part of the values slice instead of having it as a separate "next" field.
The next element is stored in the last position of the internal slices, which is a reserved position. - [queueimpl7](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl7/queueimpl7.go): custom queue implementation that stores the values in linked slices. This implementation tests the queue performance when performing lazy creation of the internal slice as well as starting with a 1-sized slice, allowing it to grow up to 16 by using the built in append function. Subsequent slices are created with 128 fixed size. Also as part of the research, I investigated and probed below open source queue implementations as well. - [phf](https://github.com/phf/go-queue): this is a slice, ring based queue implementation. Interesting to note the author did a pretty good job researching and probing other queue implementations as well. - [gammazero](https://github.com/gammazero/deque): the deque implemented in this package is also a slice, ring based queue implementation. - [juju](https://github.com/juju/utils/tree/master/deque): the deque implemented in this package uses a linked list based approach, similarly to other experimental implementations in this package such as [queueimpl3](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl3/queueimpl3.go). The biggest difference between this implementation and the other experimental ones is the fact that this queue uses the standard list package as the linked list. The standard list package implements a doubly linked list, while the experimental implementations implements their own singly linked list. The [standard list package](https://github.com/golang/go/blob/master/src/container/list/list.go) as well as buffered channels were probed as well. 
### Benchmark Results Add and remove 100 items<br/> Performance<br/> ![ns/op](https://github.com/christianrpetrin/queue-tests/blob/master/images/queue-100-items-perf-main.jpg?raw=true "Benchmark tests") <br/> Memory<br/> ![B/op](https://github.com/christianrpetrin/queue-tests/blob/master/images/queue-100-items-mem-main.jpg?raw=true "Benchmark tests") Add and remove 100k items<br/> Performance<br/> ![ns/op](https://github.com/christianrpetrin/queue-tests/blob/master/images/queue-100k-items-perf-main.jpg?raw=true "Benchmark tests") Memory<br/> ![B/op](https://github.com/christianrpetrin/queue-tests/blob/master/images/queue-100k-items-mem-main.jpg?raw=true "Benchmark tests") <br/> Aggregated Results<br/> Performance<br/> ![ns/op](https://github.com/christianrpetrin/queue-tests/blob/master/images/queue-line-perf-main.jpg?raw=true "Benchmark tests") Memory<br/> ![B/op](https://github.com/christianrpetrin/queue-tests/blob/master/images/queue-line-mem-main.jpg?raw=true "Benchmark tests") <br/> Detailed, curated results can be found [here](https://docs.google.com/spreadsheets/d/e/2PACX-1vRnCm7v51Eo5nq66NsGi8aQI6gL14XYJWqaeRJ78ZIWq1pRCtEZfsLD2FcI-gIpUhhTPnkzqDte_SDB/pubhtml?gid=668319604&single=true) Aggregated, curated results can be found [here](https://docs.google.com/spreadsheets/d/e/2PACX-1vRnCm7v51Eo5nq66NsGi8aQI6gL14XYJWqaeRJ78ZIWq1pRCtEZfsLD2FcI-gIpUhhTPnkzqDte_SDB/pubhtml?gid=582031751&single=true) Given above results, [queueimpl7](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl7/queueimpl7.go), henceforth just "impl7", proved to be the most balanced implementation, being either faster or very competitive in all test scenarios from a performance and memory perspective. Refer [here](https://github.com/christianrpetrin/queue-tests) for more details about the tests. The benchmark tests can be found [here](https://github.com/christianrpetrin/queue-tests/blob/master/benchmark_test.go). 
#### Impl7 Design and Implementation [Impl7](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl7/queueimpl7.go) was the result of the observation that some slice based implementations such as [queueimpl1](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl1/queueimpl1.go) and [queueimpl2](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl2/queueimpl2.go) offers phenomenal performance when the queue is used with small data sets. For instance, comparing [queueimpl3](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl3/queueimpl3.go) (very simple linked slice implementation) with [queueimpl1](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl1/queueimpl1.go) (very simple slice based implementation), the results at adding 0 (init time only), 1 and 10 items are very favorable for impl1, from a performance and memory perspective. ``` benchstat rawresults/bench-impl1.txt rawresults/bench-impl3.txt name old time/op new time/op delta /0-4 6.83ns ± 3% 472.53ns ± 7% +6821.99% (p=0.000 n=20+17) /1-4 48.1ns ± 6% 492.4ns ± 5% +924.66% (p=0.000 n=20+20) /10-4 532ns ± 5% 695ns ± 8% +30.57% (p=0.000 n=20+20) /100-4 3.19µs ± 2% 2.50µs ± 4% -21.69% (p=0.000 n=18+19) /1000-4 24.5µs ± 3% 23.6µs ± 2% -3.33% (p=0.000 n=19+19) /10000-4 322µs ± 4% 238µs ± 1% -26.02% (p=0.000 n=19+18) /100000-4 15.8ms ±10% 3.3ms ±13% -79.32% (p=0.000 n=20+20) name old alloc/op new alloc/op delta /0-4 0.00B 2080.00B ± 0% +Inf% (p=0.000 n=20+20) /1-4 16.0B ± 0% 2080.0B ± 0% +12900.00% (p=0.000 n=20+20) /10-4 568B ± 0% 2152B ± 0% +278.87% (p=0.000 n=20+20) /100-4 4.36kB ± 0% 2.87kB ± 0% -34.13% (p=0.000 n=20+20) /1000-4 40.7kB ± 0% 24.6kB ± 0% -39.54% (p=0.000 n=20+20) /10000-4 746kB ± 0% 244kB ± 0% -67.27% (p=0.000 n=20+20) /100000-4 10.0MB ± 0% 2.4MB ± 0% -75.85% (p=0.000 n=15+20) name old allocs/op new allocs/op delta /0-4 0.00 2.00 ± 0% +Inf% (p=0.000 n=20+20) /1-4 1.00 ± 0% 2.00 ± 0% +100.00% (p=0.000 n=20+20) 
/10-4 14.0 ± 0% 11.0 ± 0% -21.43% (p=0.000 n=20+20) /100-4 108 ± 0% 101 ± 0% -6.48% (p=0.000 n=20+20) /1000-4 1.01k ± 0% 1.01k ± 0% +0.50% (p=0.000 n=20+20) /10000-4 10.0k ± 0% 10.2k ± 0% +1.35% (p=0.000 n=20+20) /100000-4 100k ± 0% 102k ± 0% +1.53% (p=0.000 n=20+20) ``` Impl7 is a hybrid experiment between using a simple slice based queue implementation for small data sets and the fixed size linked slice approach for large data sets, which is an approach that scales really well, offering really good performance for small and large data sets. The implementation starts by lazily creating the first slice to hold the first values added to the queue. ```go const ( // firstSliceSize holds the size of the first slice. firstSliceSize = 1 // maxFirstSliceSize holds the maximum size of the first slice. maxFirstSliceSize = 16 // maxInternalSliceSize holds the maximum size of each internal slice. maxInternalSliceSize = 128 ) ... // Push adds a value to the queue. // The complexity is amortized O(1). func (q *Queueimpl7) Push(v interface{}) { if q.head == nil { h := newNode(firstSliceSize) // Returns a 1-sized slice. q.head = h q.tail = h q.lastSliceSize = maxFirstSliceSize } else if len(q.tail.v) >= q.lastSliceSize { n := newNode(maxInternalSliceSize) // Returns a 128-sized slice. q.tail.n = n q.tail = n q.lastSliceSize = maxInternalSliceSize } q.tail.v = append(q.tail.v, v) q.len++ } ... // newNode returns an initialized node. func newNode(capacity int) *Node { return &Node{ v: make([]interface{}, 0, capacity), } } ``` The very first created slice is created with capacity 1. The implementation allows the builtin append function to dynamically resize the slice up to 16 (maxFirstSliceSize) positions. After that it reverts to creating fixed size 128 position slices, which offers the best performance for data sets above 16 items. 
16 items was chosen as this seems to provide the best balanced performance for small and large data sets according to the [array size benchmark tests](https://github.com/christianrpetrin/queue-tests/blob/master/bench_slice_size.md). Above 16 items, growing the slice means allocating a new, larger one and copying all 16 elements from the previous slice into the new one. The append function phenomenal performance can only compensate for the added copying of elements if the data set is very small, no more than 8 items in the benchmark tests. For above 8 items, the fixed size slice approach is consistently faster and uses less memory, where 128 sized slices are allocated and linked together when the data structure needs to scale to accommodate new values. Why 16? Why not 15 or 14? The builtin append function, as of "go1.11 darwin/amd64", seems to double the slice size every time it needs to allocate a new one. ```go ts := make([]int, 0, 1) ts = append(ts, 1) fmt.Println(cap(ts)) // Slice has 1 item; output: 1 ts = append(ts, 1) fmt.Println(cap(ts)) // Slice has 2 items; output: 2 ts = append(ts, 1) fmt.Println(cap(ts)) // Slice has 3 items; output: 4 ts = append(ts, 1) ts = append(ts, 1) fmt.Println(cap(ts)) // Slice has 5 items; output: 8 ts = append(ts, 1) ts = append(ts, 1) ts = append(ts, 1) ts = append(ts, 1) fmt.Println(cap(ts)) // Slice has 9 items; output: 16 ``` Since the append function will resize the slice from 8 to 16 positions, it makes sense to use all 16 already allocated positions before switching to the fixed size slices approach. #### Design Considerations [Impl7](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl7/queueimpl7.go) uses linked slices as its underlying data structure. 
The reason for the choice comes from two main observations about slice based queues: 1) When the queue needs to expand to accommodate new values, a new, larger slice needs to be allocated and used 2) Allocating and managing large slices is expensive, especially in an overloaded system with little available physical memory To help clarify the scenario, below is what happens when a slice based queue that already holds, say, 1 billion items needs to expand to accommodate a new item. Slice based implementation - Allocate a new slice, [twice the size](https://en.wikipedia.org/wiki/Dynamic_array#Geometric_expansion_and_amortized_cost) of the previously allocated one, say a 2 billion position slice - Copy over all 1 billion items from the previous slice into the new one - Add the new value into the first unused position in the new slice, position 1000000001. The same scenario for impl7 plays out like below. Impl7 - Allocate a new 128 size slice - Set the next pointer - Add the value into the first position of the new slice, position 0 Impl7 never copies data around, but slice based ones do, and if the data set is large, it doesn't matter how fast the copying algorithm is. The copying has to be done and will take some time. The decision to use linked slices was also the result of the observation that slices go to great lengths to provide predictable, indexed positions. A hash table, for instance, absolutely needs this property, but not a queue. So impl7 completely gives up this property and focuses on what really matters: add to the end, retrieve from the head. No copying around and repositioning of elements is needed for that. So when a slice goes to great lengths to provide that functionality, all the work of allocating new arrays and copying data around is wasted work. None of that is necessary. And this work costs dearly for large data sets, as observed in the [tests](https://github.com/christianrpetrin/queue-tests/blob/master/bench_queue.md).
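To make the contrast concrete, here is a stripped-down, self-contained sketch of a fixed-size linked-slice queue in the spirit of impl7. It omits impl7's lazy creation and small-first-slice optimization, and the names are mine rather than the actual impl7 code:

```go
package main

import "fmt"

const sliceSize = 128 // fixed size of each internal slice, as in impl7

// snode is one fixed-size slice in the linked chain.
type snode struct {
	v []interface{}
	n *snode
}

// Queue is a FIFO backed by linked fixed-size slices: Push never copies
// existing elements, it only links a fresh 128-slot slice when needed.
type Queue struct {
	head, tail *snode
	length     int
}

func (q *Queue) Len() int { return q.length }

// Push adds v to the back of the queue in amortized O(1).
func (q *Queue) Push(v interface{}) {
	if q.tail == nil || len(q.tail.v) >= sliceSize {
		n := &snode{v: make([]interface{}, 0, sliceSize)}
		if q.tail == nil {
			q.head = n
		} else {
			q.tail.n = n
		}
		q.tail = n
	}
	q.tail.v = append(q.tail.v, v)
	q.length++
}

// Pop removes and returns the front value, or false if the queue is empty.
func (q *Queue) Pop() (interface{}, bool) {
	if q.length == 0 {
		return nil, false
	}
	v := q.head.v[0]
	q.head.v = q.head.v[1:] // advance within the current slice
	q.length--
	if len(q.head.v) == 0 { // slice drained: drop it and move to the next one
		q.head = q.head.n
		if q.head == nil {
			q.tail = nil
		}
	}
	return v, true
}

func main() {
	q := &Queue{}
	for i := 0; i < 300; i++ { // spans three linked slices
		q.Push(i)
	}
	front, _ := q.Pop()
	fmt.Println(front, q.Len()) // prints: 0 299
}
```

Growing this queue past 128 items allocates one new 128-slot slice and sets one pointer; no existing element is ever moved, which is exactly the property the text above argues for.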
#### Impl7 Benchmark Results Below compares impl7 with a few selected implementations. The tests name are formatted given below. - Benchmark/N-4: benchmark a queue implementation where N denotes the number of items added and removed to/from the queue; 4 means the number of CPU cores in the host machine. Examples: - Benchmark/0-4: benchmark the queue by creating a new instance of it. This only test initialization time. - Benchmark/100-4: benchmark the queue by creating a new instance of it and adding and removing 100 items to/from the queue. --- Standard list used as a FIFO queue vs impl7. ``` benchstat rawresults/bench-list.txt rawresults/bench-impl7.txt name old time/op new time/op delta /0-4 34.9ns ± 1% 1.2ns ± 3% -96.64% (p=0.000 n=19+20) /1-4 77.0ns ± 1% 68.3ns ± 1% -11.21% (p=0.000 n=20+20) /10-4 574ns ± 0% 578ns ± 0% +0.59% (p=0.000 n=18+20) /100-4 5.94µs ± 1% 3.07µs ± 0% -48.28% (p=0.000 n=19+18) /1000-4 56.0µs ± 1% 25.8µs ± 1% -53.92% (p=0.000 n=20+20) /10000-4 618µs ± 1% 260µs ± 1% -57.99% (p=0.000 n=20+18) /100000-4 13.1ms ± 6% 3.1ms ± 3% -76.50% (p=0.000 n=20+20) name old alloc/op new alloc/op delta /0-4 48.0B ± 0% 0.0B -100.00% (p=0.000 n=20+20) /1-4 96.0B ± 0% 48.0B ± 0% -50.00% (p=0.000 n=20+20) /10-4 600B ± 0% 600B ± 0% ~ (all equal) /100-4 5.64kB ± 0% 3.40kB ± 0% -39.72% (p=0.000 n=20+20) /1000-4 56.0kB ± 0% 25.2kB ± 0% -55.10% (p=0.000 n=20+20) /10000-4 560kB ± 0% 243kB ± 0% -56.65% (p=0.000 n=20+20) /100000-4 5.60MB ± 0% 2.43MB ± 0% -56.66% (p=0.000 n=18+20) name old allocs/op new allocs/op delta /0-4 1.00 ± 0% 0.00 -100.00% (p=0.000 n=20+20) /1-4 2.00 ± 0% 2.00 ± 0% ~ (all equal) /10-4 20.0 ± 0% 15.0 ± 0% -25.00% (p=0.000 n=20+20) /100-4 200 ± 0% 107 ± 0% -46.50% (p=0.000 n=20+20) /1000-4 2.00k ± 0% 1.02k ± 0% -48.95% (p=0.000 n=20+20) /10000-4 20.0k ± 0% 10.2k ± 0% -49.20% (p=0.000 n=20+20) /100000-4 200k ± 0% 102k ± 0% -49.22% (p=0.000 n=20+20) ``` Impl7 is: - Up to ~29x faster (1.2ns vs 34.9ns) than list package for init time (0 items) - Up to 
~4x faster (3.1ms vs 13.1ms) than list package for 100k items - Uses ~1/2 memory (2.43MB vs 5.60MB) than list package for 100k items --- [impl1](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl1/queueimpl1.go) (simple slice based queue implementaion) vs impl7. ``` benchstat rawresults/bench-impl1.txt rawresults/bench-impl7.txt name old time/op new time/op delta /0-4 6.83ns ± 3% 1.18ns ± 3% -82.79% (p=0.000 n=20+20) /1-4 48.1ns ± 6% 68.3ns ± 1% +42.23% (p=0.000 n=20+20) /10-4 532ns ± 5% 578ns ± 0% +8.55% (p=0.000 n=20+20) /100-4 3.19µs ± 2% 3.07µs ± 0% -3.74% (p=0.000 n=18+18) /1000-4 24.5µs ± 3% 25.8µs ± 1% +5.51% (p=0.000 n=19+20) /10000-4 322µs ± 4% 260µs ± 1% -19.23% (p=0.000 n=19+18) /100000-4 15.8ms ±10% 3.1ms ± 3% -80.60% (p=0.000 n=20+20) name old alloc/op new alloc/op delta /0-4 0.00B 0.00B ~ (all equal) /1-4 16.0B ± 0% 48.0B ± 0% +200.00% (p=0.000 n=20+20) /10-4 568B ± 0% 600B ± 0% +5.63% (p=0.000 n=20+20) /100-4 4.36kB ± 0% 3.40kB ± 0% -22.02% (p=0.000 n=20+20) /1000-4 40.7kB ± 0% 25.2kB ± 0% -38.25% (p=0.000 n=20+20) /10000-4 746kB ± 0% 243kB ± 0% -67.47% (p=0.000 n=20+20) /100000-4 10.0MB ± 0% 2.4MB ± 0% -75.84% (p=0.000 n=15+20) name old allocs/op new allocs/op delta /0-4 0.00 0.00 ~ (all equal) /1-4 1.00 ± 0% 2.00 ± 0% +100.00% (p=0.000 n=20+20) /10-4 14.0 ± 0% 15.0 ± 0% +7.14% (p=0.000 n=20+20) /100-4 108 ± 0% 107 ± 0% -0.93% (p=0.000 n=20+20) /1000-4 1.01k ± 0% 1.02k ± 0% +1.09% (p=0.000 n=20+20) /10000-4 10.0k ± 0% 10.2k ± 0% +1.39% (p=0.000 n=20+20) /100000-4 100k ± 0% 102k ± 0% +1.54% (p=0.000 n=20+20) ``` Impl7 is: - Up to ~5x faster (1.18ns vs 6.83ns) than impl1 for init time (0 items) - Up to ~5x faster (3.1ms vs 15.8ms) than impl1 for 100k items - Uses ~1/4 memory (2.4MB vs 10MB) than impl1 for 100k items It's important to note that the performance and memory gains for impl7 is exponential like the larger the data set is due to the fact slice based implementations doesn't scale well, [paying a higher and higher 
price](https://en.wikipedia.org/wiki/Dynamic_array#Geometric_expansion_and_amortized_cost), performance and memory wise, every time it needs to scale to accommodate an ever expanding data set. --- [phf](https://github.com/phf/go-queue) (slice, ring based FIFO queue implementation) vs impl7. ``` benchstat rawresults/bench-phf.txt rawresults/bench-impl7.txt name old time/op new time/op delta /0-4 28.1ns ± 1% 1.2ns ± 3% -95.83% (p=0.000 n=20+20) /1-4 42.5ns ± 1% 68.3ns ± 1% +60.80% (p=0.000 n=20+20) /10-4 681ns ± 1% 578ns ± 0% -15.11% (p=0.000 n=18+20) /100-4 4.55µs ± 1% 3.07µs ± 0% -32.45% (p=0.000 n=19+18) /1000-4 35.5µs ± 1% 25.8µs ± 1% -27.32% (p=0.000 n=18+20) /10000-4 349µs ± 2% 260µs ± 1% -25.67% (p=0.000 n=20+18) /100000-4 11.7ms ±11% 3.1ms ± 3% -73.77% (p=0.000 n=20+20) name old alloc/op new alloc/op delta /0-4 16.0B ± 0% 0.0B -100.00% (p=0.000 n=20+20) /1-4 16.0B ± 0% 48.0B ± 0% +200.00% (p=0.000 n=20+20) /10-4 696B ± 0% 600B ± 0% -13.79% (p=0.000 n=20+20) /100-4 6.79kB ± 0% 3.40kB ± 0% -49.94% (p=0.000 n=20+20) /1000-4 57.0kB ± 0% 25.2kB ± 0% -55.86% (p=0.000 n=20+20) /10000-4 473kB ± 0% 243kB ± 0% -48.68% (p=0.000 n=20+20) /100000-4 7.09MB ± 0% 2.43MB ± 0% -65.77% (p=0.000 n=18+20) name old allocs/op new allocs/op delta /0-4 1.00 ± 0% 0.00 -100.00% (p=0.000 n=20+20) /1-4 1.00 ± 0% 2.00 ± 0% +100.00% (p=0.000 n=20+20) /10-4 15.0 ± 0% 15.0 ± 0% ~ (all equal) /100-4 111 ± 0% 107 ± 0% -3.60% (p=0.000 n=20+20) /1000-4 1.02k ± 0% 1.02k ± 0% +0.39% (p=0.000 n=20+20) /10000-4 10.0k ± 0% 10.2k ± 0% +1.38% (p=0.000 n=20+20) /100000-4 100k ± 0% 102k ± 0% +1.54% (p=0.000 n=20+20) ``` Impl7 is: - Up to ~23x faster (1.2ns vs 28.1ns) than phf for init time (0 items) - Up to ~3x faster (3.1ms vs 11.7ms) than phf for 100k items - Uses ~1/2 memory (2.43MB vs 7.09MB) than phf for 100k items --- Buffered channel vs impl7. 
```
benchstat rawresults/bench-channel.txt rawresults/bench-impl7.txt
name       old time/op    new time/op    delta
/0-4         30.2ns ± 1%     1.2ns ± 3%   -96.12%  (p=0.000 n=19+20)
/1-4         87.6ns ± 1%    68.3ns ± 1%   -22.00%  (p=0.000 n=19+20)
/10-4         704ns ± 1%     578ns ± 0%   -17.90%  (p=0.000 n=20+20)
/100-4       6.78µs ± 1%    3.07µs ± 0%   -54.70%  (p=0.000 n=20+18)
/1000-4      67.3µs ± 1%    25.8µs ± 1%   -61.65%  (p=0.000 n=20+20)
/10000-4      672µs ± 1%     260µs ± 1%   -61.36%  (p=0.000 n=19+18)
/100000-4    6.76ms ± 1%    3.07ms ± 3%   -54.61%  (p=0.000 n=19+20)

name       old alloc/op   new alloc/op   delta
/0-4          96.0B ± 0%      0.0B       -100.00%  (p=0.000 n=20+20)
/1-4           112B ± 0%       48B ± 0%   -57.14%  (p=0.000 n=20+20)
/10-4          248B ± 0%      600B ± 0%  +141.94%  (p=0.000 n=20+20)
/100-4       1.69kB ± 0%    3.40kB ± 0%  +101.42%  (p=0.000 n=20+20)
/1000-4      16.2kB ± 0%    25.2kB ± 0%   +55.46%  (p=0.000 n=20+20)
/10000-4      162kB ± 0%     243kB ± 0%   +49.93%  (p=0.000 n=20+20)
/100000-4    1.60MB ± 0%    2.43MB ± 0%   +51.43%  (p=0.000 n=16+20)

name       old allocs/op  new allocs/op  delta
/0-4           1.00 ± 0%      0.00       -100.00%  (p=0.000 n=20+20)
/1-4           1.00 ± 0%      2.00 ± 0%  +100.00%  (p=0.000 n=20+20)
/10-4          10.0 ± 0%      15.0 ± 0%   +50.00%  (p=0.000 n=20+20)
/100-4          100 ± 0%       107 ± 0%    +7.00%  (p=0.000 n=20+20)
/1000-4       1.00k ± 0%     1.02k ± 0%    +2.10%  (p=0.000 n=20+20)
/10000-4      10.0k ± 0%     10.2k ± 0%    +1.61%  (p=0.000 n=20+20)
/100000-4      100k ± 0%      102k ± 0%    +1.57%  (p=0.000 n=20+20)
```

Impl7 is:
- Up to ~25x faster (1.2ns vs 30.2ns) than channels for init time (0 items)
- Up to ~2x faster (3.07ms vs 6.76ms) than channels for 100k items
- Uses ~50% MORE memory (2.43MB vs 1.60MB) than channels for 100k items

The above is not really a fair comparison, as standard buffered channels don't scale (at all) and are meant for goroutine synchronization. Nonetheless, they can make for an excellent bounded FIFO queue option. Still, impl7 is consistently faster than channels across the board, but uses considerably more memory than channels.
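Impl7 gets these numbers from a linked-slices layout: fixed-capacity slices chained together like list nodes, so the queue never reallocates or copies existing items as it grows. As a rough illustration of the technique — not impl7 itself, which additionally starts with a smaller 16-position dynamic slice before switching to full-sized slices — here is a minimal sketch with hypothetical names, using a single fixed slice size:

```go
package main

import "fmt"

const sliceSize = 128 // assumed internal slice size; impl7 uses 128

// node is a fixed-capacity slice plus a link to the next node.
type node struct {
	v    []interface{}
	next *node
}

// Queue is a minimal sketch of a linked-slice FIFO queue.
type Queue struct {
	head, tail *node
	hp         int // read position within head.v
	length     int
}

func New() *Queue { return &Queue{} }

func (q *Queue) Len() int { return q.length }

// Push appends to the tail slice, chaining a new slice only when full.
func (q *Queue) Push(v interface{}) {
	if q.tail == nil {
		n := &node{v: make([]interface{}, 0, sliceSize)}
		q.head, q.tail = n, n
	} else if len(q.tail.v) == cap(q.tail.v) {
		n := &node{v: make([]interface{}, 0, sliceSize)}
		q.tail.next = n
		q.tail = n
	}
	q.tail.v = append(q.tail.v, v)
	q.length++
}

// Pop reads from the head slice, releasing it once fully consumed.
func (q *Queue) Pop() (interface{}, bool) {
	if q.length == 0 {
		return nil, false
	}
	v := q.head.v[q.hp]
	q.head.v[q.hp] = nil // drop the reference so the GC can collect it
	q.hp++
	q.length--
	if q.hp >= len(q.head.v) { // head slice exhausted; move to the next one
		q.head = q.head.next
		if q.head == nil {
			q.tail = nil
		}
		q.hp = 0
	}
	return v, true
}

func main() {
	q := New()
	for i := 1; i <= 3; i++ {
		q.Push(i)
	}
	v, ok := q.Pop()
	fmt.Println(v, ok, q.Len()) // 1 true 2
}
```

Because a full head slice is dropped as a whole once consumed, memory is released back in chunks instead of lingering forever, which is where the memory advantage over plain slice-based queues comes from.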
---

Given its excellent performance under all scenarios, the hybrid approach of impl7 seems to be the ideal candidate for a high-performance, low-memory-footprint, general-purpose FIFO queue.

For the above reasons, I propose to port impl7 to the standard library.

All raw benchmark results can be found [here](https://github.com/christianrpetrin/queue-tests/tree/master/rawresults).

### Internal Slice Size

[Impl7](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl7/queueimpl7.go) uses linked slices as its underlying data structure.

The size of the internal slice influences performance and memory consumption significantly.

According to the [internal slice size bench tests](https://github.com/christianrpetrin/queue-tests/blob/master/queueimpl7/benchmark_test.go), larger internal slice sizes yield better performance and a lower memory footprint. However, the gains diminish dramatically as the slice size increases.

Below are a few interesting results from the benchmark tests.

```
BenchmarkMaxSubsequentSliceSize/1-4     20000  76836 ns/op  53967 B/op  2752 allocs/op
BenchmarkMaxSubsequentSliceSize/2-4     30000  59811 ns/op  40015 B/op  1880 allocs/op
BenchmarkMaxSubsequentSliceSize/4-4     30000  42925 ns/op  33039 B/op  1444 allocs/op
BenchmarkMaxSubsequentSliceSize/8-4     50000  36946 ns/op  29551 B/op  1226 allocs/op
BenchmarkMaxSubsequentSliceSize/16-4    50000  30597 ns/op  27951 B/op  1118 allocs/op
BenchmarkMaxSubsequentSliceSize/32-4    50000  28273 ns/op  27343 B/op  1064 allocs/op
BenchmarkMaxSubsequentSliceSize/64-4    50000  26969 ns/op  26895 B/op  1036 allocs/op
BenchmarkMaxSubsequentSliceSize/128-4   50000  27316 ns/op  26671 B/op  1022 allocs/op
BenchmarkMaxSubsequentSliceSize/256-4   50000  26221 ns/op  28623 B/op  1016 allocs/op
BenchmarkMaxSubsequentSliceSize/512-4   50000  25882 ns/op  28559 B/op  1012 allocs/op
BenchmarkMaxSubsequentSliceSize/1024-4  50000  25674 ns/op  28527 B/op  1010 allocs/op
```

Given that larger internal slices also mean potentially more unused memory in some scenarios, 128 seems to be the perfect balance between performance and worst-case memory footprint.

Full results can be found [here](https://github.com/christianrpetrin/queue-tests/blob/master/bench_slice_size.md).

### API

[Impl7](https://github.com/christianrpetrin/queue-tests/tree/master/queueimpl7/queueimpl7.go) implements the API methods below.

| Operation | Method |
| --- | --- |
| Add | func (q *Queueimpl7) Push(v interface{}) |
| Remove | func (q *Queueimpl7) Pop() (interface{}, bool) |
| Size | func (q *Queueimpl7) Len() int |
| Return First | func (q *Queueimpl7) Front() (interface{}, bool) |

As nil values are considered valid queue values, similarly to the map data structure, "Front" and "Pop" return a second bool parameter that indicates whether the returned value is valid, i.e. whether the queue is empty or not.

The reason for the above method names and signatures is the need to keep compatibility with existing Go data structures such as the [list](https://github.com/golang/go/blob/master/src/container/list/list.go), [ring](https://github.com/golang/go/blob/master/src/container/ring/ring.go) and [heap](https://github.com/golang/go/blob/master/src/container/heap/heap.go) packages.

Below are the method names used by the existing list, ring and heap Go data structures, as well as the new proposed queue.

| Operation | list | ring | heap | queue |
| --- | --- | --- | --- | --- |
| Add | PushFront/PushBack | Link | Push | Push |
| Remove | Remove | Unlink | Pop | Pop |
| Size | Len | Len | - | Len |
| Return First | Front | - | - | Front |

For comparison purposes, below are the method names used by [C++](http://www.cplusplus.com/reference/queue/queue/), [Java](https://docs.oracle.com/javase/7/docs/api/java/util/Queue.html) and [C#](https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.queue-1?view=netframework-4.7.2) for their queue implementations.
| Operation | C++ | Java | C# |
| --- | --- | --- | --- |
| Add | push | add/offer | Enqueue |
| Remove | pop | remove/poll | Dequeue |
| Size | size | - | Count |
| Return First | front | peek | Peek |

### Drawbacks

The biggest drawback of the proposed implementation is the potential extra allocated-but-unused memory in its head and tail slices.

This scenario occurs when exactly 17 items are added to the queue, causing the creation of a full-sized internal slice of 128 positions. Initially only the first element of this new slice is used to store the added value. The other 127 elements are allocated, but not used.

```go
// Assuming a 128 internal sized slice.
q := queueimpl7.New()

// Push 16 items to fill the first dynamic slice (sized 16).
for i := 1; i <= 16; i++ {
	q.Push(i)
}

// Push 1 extra item that causes the creation of a new 128 sized slice to store this value.
q.Push(17)

// Pop the first 16 items to release the first slice (sized 16).
for i := 1; i <= 16; i++ {
	q.Pop()
}

// As unsafe.Sizeof (https://golang.org/pkg/unsafe/#Sizeof) doesn't consider the length of slices,
// we need to manually calculate the memory used by the internal slices.
var internalSliceType interface{}
fmt.Println(fmt.Sprintf("%d bytes", unsafe.Sizeof(q)+(unsafe.Sizeof(internalSliceType) /* bytes per slice position */ *127 /* head slice unused positions */)))

// Output for a 64bit system (Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz): 2040 bytes
```

The worst-case scenario occurs when exactly 145 items are added to the queue and 143 are removed. This causes the queue struct to hold a 128-sized slice as its head slice, with only the last element actually used. Similarly, the queue struct will hold a separate 128-sized slice as its tail slice, with only the first position used.

```go
// Assuming a 128 internal sized slice.
q := queueimpl7.New()

// Push 16 items to fill the first dynamic slice (sized 16).
for i := 1; i <= 16; i++ {
	q.Push(i)
}

// Push an additional 128 items to fill the first full sized slice (sized 128).
for i := 1; i <= 128; i++ {
	q.Push(i)
}

// Push 1 extra item that causes the creation of a new 128 sized slice to store this value,
// adding a total of 145 items to the queue.
q.Push(1)

// Pop the first 143 items to release the first dynamic slice (sized 16) and
// 127 items from the first full sized slice (sized 128).
for i := 1; i <= 143; i++ {
	q.Pop()
}

// As unsafe.Sizeof (https://golang.org/pkg/unsafe/#Sizeof) doesn't consider the length of slices,
// we need to manually calculate the memory used by the internal slices.
var internalSliceType interface{}
fmt.Println(fmt.Sprintf("%d bytes", unsafe.Sizeof(q)+(unsafe.Sizeof(internalSliceType) /* bytes per slice position */ *(127 /* head slice unused positions */ +127 /* tail slice unused positions */))))

// Output for a 64bit system (Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz): 4072 bytes
```

The above code was run on Go version "go1.11 darwin/amd64".

## Open Questions/Issues

Should this be a deque (double-ended queue) implementation instead? The deque could be used as a stack as well, but it would make more sense to have separate queue and stack implementations (like most mainstream languages have) instead of a deque that can be used as a stack (confusing). Stack is a very important computer science data structure as well, so I believe Go should have a specialized implementation for it too (provided the specialized implementation offers real value to users, and not just a nicely named interface and methods).

Should "Pop" and "Front" return only the value instead of the value plus a second bool parameter (which indicates whether the queue is empty or not)? The implication of the change is that adding nil values would no longer be valid, so "Pop" and "Front" would return nil when the queue is empty. Panic should be avoided in libraries.
The memory footprint of a 128-sized internal slice means, in the worst-case scenario, 2040 bytes of memory allocated (on a 64-bit system) but not used. Switching to 64 means roughly half the memory would be used, with a slight ~2.89% performance drop (252813ns vs 260137ns). That the extra memory footprint is not worth the extra performance gain is a very good point to make. Should we change this value to 64, or maybe make it configurable?

Should we also provide an implementation that is safe for concurrent use? A specialized implementation that relies on atomic operations to update its internal indices and length could offer much better performance than a similar implementation that relies on a mutex.

With the impending release of generics, should we wait to release the new queue package until the new generics framework is released?

Should we implement support for the range keyword for the new queue? It could be done in a generic way so other data structures could also benefit from this feature. For now, IMO, this is a topic for another proposal/discussion.

## Summary

I propose to add a new package, "container/queue", to the standard library to support an in-memory, unbounded, general-purpose queue implementation.

I feel strongly that this proposal should be accepted, for the reasons below.

1) The proposed solution was well researched and probed, being dramatically and consistently faster than 6 other experimental queue implementations, 3 promising open-source queue implementations, the standard list package, and buffered channels, while consuming considerably less memory than every other queue implementation tested, except for buffered channels

2) The proposed solution uses a new, unique approach to building queues, yet its [implementation](https://github.com/christianrpetrin/queue-tests/blob/master/queueimpl7/queueimpl7.go) is clean and extremely simple. The two main methods, "Push" and "Pop", are composed of only 16 and 19 lines of code (total), respectively. The proposed implementation also has proper tests with 100% test coverage and should require minimal maintenance moving forward

3) I'll implement any changes the Go community feels are needed for the proposed solution to be worthy of the standard library

----------------

Update 11/26/2018.

Due to many suggestions to make the queue a [deque](https://en.wikipedia.org/wiki/Double-ended_queue) and to deploy it as a proper external package, the deque package was built and deployed [here](https://github.com/ef-ds/deque).

The proposal now is to add the deque package to the standard library instead of impl7. Refer [here](https://github.com/golang/go/issues/27935#issuecomment-441708112) for details.
Proposal,Proposal-Hold
high
Critical
365,144,528
rust
Incorrect information attached to slice::from_ref
Head over to the official documentation of [slice::from_ref](https://doc.rust-lang.org/std/slice/fn.from_ref.html). There's a small information bubble that triggers the following, confusing information:

![img_092918_180315](https://user-images.githubusercontent.com/506592/46247886-2c826a80-c412-11e8-9360-9e37438f9f8e.png)

I think other functions from the `std` have this problem as well; this might be due to some macro or code generation.

Cheers! :beers:
T-rustdoc
low
Major
365,148,509
material-ui
[colorManipulator][docs] Document the color helpers
<!-- Checked checkbox should look like this: [x] -->
- [x] This is not a v0.x issue. <!-- (v0.x is no longer maintained) -->
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.

## Expected Behavior

I've done a comparison of various JS color utility libraries. I found that `@material-ui/core/es/styles/colorManipulator.js` offers the smallest bundle size of all options I could find.

Here's a comparison to other libraries (first number is minified; second number is minified + gzipped):

- colorManipulator.js // 3.02kb, 1.25kb gzipped
- [kewler](https://github.com/adriantoine/kewler) // 7.24kb, 2.7kb gzipped
- [spectra](https://github.com/avp/spectra) // 10.5kb, 3.38kb gzipped
- [tinycolor2](https://github.com/bgrins/TinyColor) // 15.2kb, 5.28kb gzipped
- [color](https://github.com/Qix-/color) // 23.1kb, 7.83kb gzipped
- [chroma-js](https://github.com/gka/chroma.js/) // 39.8kb, 15.9kb gzipped

## Current Behavior

@oliviertassinari has stated that the color manipulator is a [private API](https://github.com/mui-org/material-ui/issues/10789#issuecomment-375892148). He also openly asked if there were better open-source alternatives to the private implementation. In my view, there is no better alternative. colorManipulator.js offers essentially identical functionality to all of the other libraries, and does it in the fewest bytes.

## Context

Color manipulation is a very generic problem; this library has the best solution, but it is a private API. I think it could easily be published as a separate entity within the mono-repo structure. It's already fully unit tested, so there could be no disagreement that it would be in a good state to make publicly accessible.
new feature,docs,package: system
medium
Major
365,157,561
create-react-app
Write high performance webpack loader worker
x-ref: #5170

It'd be nice to write a high-performance webpack loader worker that uses shared memory instead of IPC, offers automatic caching with a customizable cache identifier, and anything else needed to make it really robust.
contributions: up for grabs!,tag: enhancement,difficulty: complex
low
Major
365,160,929
vscode
[scss] No autocomplete for built-in functions inside maps
Issue Type: <b>Bug</b>

```scss
$something: #fff;

$map: (
  key: transparentize($something, 0.4),
);
```

There is no autocomplete for `transparentize` and some other functions (`mix`, `lighten`, `darken`, `red`, `rgb`, `saturate` ...) inside the map definition.

---

VS Code version: Code - Insiders 1.28.0-insider (caaa5369cb4d22b7481e6ff97252cf92937709eb, 2018-09-28T05:18:20.266Z)
OS version: Windows_NT x64 10.0.17134

<!-- generated by issue reporter -->
feature-request,css-less-scss
low
Critical
365,175,593
node
Almost guaranteed ECONNRESET on piped sockets if connecting to Node's HTTPS server which answers with "connection: 'close'" after setImmediate or setTimeout, on OSX
* **Version**: v10.11.0
* **Platform**: Darwin localhost.local 15.6.0 Darwin Kernel Version 15.6.0: Thu Jun 21 20:07:40 PDT 2018; root:xnu-3248.73.11~1/RELEASE_X86_64 x86_64 (aka OSX 10.11.6, also happens on High Sierra)
* **Subsystem**: ???

In the following code, `pem` is [a module from NPM](https://www.npmjs.com/package/pem), and the rest is an HTTP proxy which serves CONNECT requests and pipes them to an HTTPS server defined in the same script:

```javascript
const http = require('http');
const https = require('https');
const pem = require('pem');
const net = require('net');

const createHttpsServer = (callback) => {
  pem.createCertificate({
    days: 365,
    selfSigned: true
  }, (error, {serviceKey, certificate, csr}) => {
    const server = https.createServer({
      ca: csr,
      cert: certificate,
      key: serviceKey
    }, (req, res) => {
      setImmediate(() => {
        res.writeHead(200, {
          connection: 'close'
        });
        res.end('OK');
      });
    });

    server.listen((error) => {
      if (error) {
        console.error(error);
      } else {
        callback(null, server.address().port);
      }
    });
  });
};

const createProxy = (httpsServerPort) => {
  const proxy = http.createServer();

  proxy.on('connect', (request, requestSocket, head) => {
    const serverSocket = net.connect({
      port: httpsServerPort
    }, 'localhost', () => {
      requestSocket.write(
        'HTTP/1.1 200 Connection established\r\n\r\n'
      );
      serverSocket.write(head);
      serverSocket.pipe(requestSocket);
      requestSocket.pipe(serverSocket);
    });
  });

  proxy.listen(9000);
};

createHttpsServer((error, httpsServerPort) => {
  if (error) {
    console.error(error);
  } else {
    createProxy(httpsServerPort);
  }
});
```

If you run `curl --proxy http://localhost:9000 https://qweasd/ -k`, you'll see Curl receive the reply, and meanwhile the script will almost certainly fail with:

```
events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:111:27)
Emitted 'error' event at:
    at Socket.onerror (_stream_readable.js:690:12)
    at Socket.emit (events.js:182:13)
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
    at process._tickCallback (internal/process/next_tick.js:63:19)
```

This issue is dependent on several parameters:

- You're running OSX. El Capitan and High Sierra both exhibit this; an Ubuntu VM doesn't.
- The server is HTTPS: an HTTP server and `curl --proxytunnel` work alright.
- The server is running in the same script as the proxy.
- The server answers with a `connection: 'close'` header.
- The strangest one: the server request handler has `setImmediate` or `setTimeout` around the reply (with, I guess, any timeout value). `process.nextTick` doesn't do it. And if the handler instead serves the reply immediately, the server continues to answer perfectly alright until you hit a seemingly different ECONNRESET.

If you adorn the sockets with `error` event handlers, you'll see that the errors actually don't happen on *each* request, and when they do, it's primarily on the outgoing socket from the proxy to the HTTPS server; and often, but not necessarily, a second ECONNRESET occurs on the incoming socket to the proxy.
help wanted,http
medium
Critical