Dataset columns:
- id: int64 (values 393k–2.82B)
- repo: stringclasses (68 values)
- title: stringlengths (1–936)
- body: stringlengths (0–256k)
- labels: stringlengths (2–508)
- priority: stringclasses (3 values)
- severity: stringclasses (3 values)
299,478,194
TypeScript
Add support for an `@export` tag
```ts
/**
 * @export
 * @typedef {{a: number}} X
 */
export {};
```
Suggestion,In Discussion,Domain: JSDoc,Domain: JavaScript
low
Major
299,493,708
go
spec: document special cases for floating-point maps
In https://github.com/golang/go/issues/20660#issuecomment-367781440, I noted that there are undocumented special cases in the handling of maps with floating-point keys. Regardless of what happens with that proposal for Go 2, we should fix the Go 1 spec to properly document those cases.

----

In [Map types](https://golang.org/ref/spec#Map_types), add:

Elements may be added during execution using assignments and retrieved with index expressions **and range clauses**; they may be removed with the `delete` built-in function.

----

In [Index expressions](https://golang.org/ref/spec#Index_expressions), add:

For `a` of map type `M`:

* `x`'s type must be assignable to the key type of `M`
* if the map contains an entry with key **equal to** `x`, `a[x]` is the map element with key `x` and the type of `a[x]` is the element type of `M`
* if the map is `nil` or does not contain such an entry **and the index expression is not a left-hand side operand of an assignment**, `a[x]` is the zero value for the element type of `M`

----

In [Assignments](https://golang.org/ref/spec#Assignments), add:

The assignment proceeds in two phases. First, the operands of index expressions and pointer indirections (including implicit pointer indirections in selectors) on the left and the expressions on the right are all evaluated in the usual order. Second, the assignments are carried out in left-to-right order. **If the left operand of an assignment is an index expression on a map that does not contain a key equal to the given key, the assignment creates a new entry in the map. If the map is `nil`, a run-time panic occurs.**

----

In [For statements with `range` clause](https://golang.org/ref/spec#For_range), add:

```
Range expression              1st value   2nd value
[…]
map        m  map[K]V         key k  K    see below  V
[…]
```

[…]

3. **For a map value, the "range" clause iterates over successive entries of the map. On successive iterations, the index value will be the key, and the second value will be the element associated with that key. If the key is or contains a NaN value, it may be associated with multiple entries; the elements of those entries can only be retrieved using "range" clauses (not index expressions).** The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next. If a map entry that has not yet been reached is removed during iteration, the corresponding iteration value will not be produced. If a map entry is created during iteration, that entry may be produced during the iteration or may be skipped. The choice may vary for each entry created and from one iteration to the next. If the map is nil, the number of iterations is 0.

----

In [Deletion of map elements](https://golang.org/ref/spec#Deletion_of_map_elements), add:

The built-in function `delete` removes the element with key **equal to** `k` from a map `m`.
Documentation,NeedsFix
low
Minor
299,503,006
rust
Incremental compilation regression with num-bigint
The num-bigint crate is slower by a factor of 3–4 when compiled incrementally. The crate supports `cargo bench`, which allows for an easy comparison of the performance.

Without incremental compilation (`CARGO_INCREMENTAL=0 cargo bench`):

```
test divide_0          ... bench:         867 ns/iter (+/- 17)
test divide_1          ... bench:      15,398 ns/iter (+/- 361)
test divide_2          ... bench:     770,862 ns/iter (+/- 14,336)
test fac_to_string     ... bench:       2,210 ns/iter (+/- 7)
test factorial_100     ... bench:       6,624 ns/iter (+/- 382)
test fib2_100          ... bench:       1,777 ns/iter (+/- 54)
test fib2_1000         ... bench:      33,224 ns/iter (+/- 1,143)
test fib2_10000        ... bench:   2,109,744 ns/iter (+/- 21,395)
test fib_100           ... bench:         871 ns/iter (+/- 30)
test fib_1000          ... bench:      15,811 ns/iter (+/- 467)
test fib_10000         ... bench:   1,037,026 ns/iter (+/- 32,844)
test fib_to_string     ... bench:         232 ns/iter (+/- 4)
test from_str_radix_02 ... bench:       2,671 ns/iter (+/- 50)
test from_str_radix_08 ... bench:       1,157 ns/iter (+/- 18)
test from_str_radix_10 ... bench:       1,348 ns/iter (+/- 61)
test from_str_radix_16 ... bench:       1,029 ns/iter (+/- 25)
test from_str_radix_36 ... bench:       1,069 ns/iter (+/- 38)
test hash              ... bench:      78,637 ns/iter (+/- 2,226)
test modpow            ... bench:  24,375,306 ns/iter (+/- 541,434)
test modpow_even       ... bench:  62,169,859 ns/iter (+/- 1,558,466)
test multiply_0        ... bench:         100 ns/iter (+/- 2)
test multiply_1        ... bench:      15,523 ns/iter (+/- 445)
test multiply_2        ... bench:     848,662 ns/iter (+/- 18,692)
test multiply_3        ... bench:   1,972,284 ns/iter (+/- 41,682)
test pow_bench         ... bench:   5,377,001 ns/iter (+/- 173,286)
test shl               ... bench:       4,543 ns/iter (+/- 166)
test shr               ... bench:       1,984 ns/iter (+/- 103)
test to_str_radix_02   ... bench:       2,196 ns/iter (+/- 65)
test to_str_radix_08   ... bench:         772 ns/iter (+/- 16)
test to_str_radix_10   ... bench:       6,922 ns/iter (+/- 209)
test to_str_radix_16   ... bench:         637 ns/iter (+/- 12)
test to_str_radix_36   ... bench:       6,851 ns/iter (+/- 190)

test result: ok. 0 passed; 0 failed; 0 ignored; 32 measured; 0 filtered out

     Running target/release/deps/gcd-c7301f4be4104d87

running 8 tests
test gcd_euclid_0064 ... bench:       9,621 ns/iter (+/- 380)
test gcd_euclid_0256 ... bench:      57,871 ns/iter (+/- 1,377)
test gcd_euclid_1024 ... bench:     268,900 ns/iter (+/- 6,397)
test gcd_euclid_4096 ... bench:   1,786,538 ns/iter (+/- 50,910)
test gcd_stein_0064  ... bench:       1,589 ns/iter (+/- 74)
test gcd_stein_0256  ... bench:       6,351 ns/iter (+/- 106)
test gcd_stein_1024  ... bench:      39,651 ns/iter (+/- 1,787)
test gcd_stein_4096  ... bench:     351,423 ns/iter (+/- 18,916)
```

With incremental compilation:

```
test divide_0          ... bench:       2,212 ns/iter (+/- 120)
test divide_1          ... bench:      46,185 ns/iter (+/- 1,319)
test divide_2          ... bench:   2,164,688 ns/iter (+/- 35,890)
test fac_to_string     ... bench:       3,051 ns/iter (+/- 41)
test factorial_100     ... bench:      18,034 ns/iter (+/- 427)
test fib2_100          ... bench:       6,455 ns/iter (+/- 224)
test fib2_1000         ... bench:     119,590 ns/iter (+/- 2,451)
test fib2_10000        ... bench:   6,462,893 ns/iter (+/- 129,404)
test fib_100           ... bench:       3,026 ns/iter (+/- 65)
test fib_1000          ... bench:      56,443 ns/iter (+/- 1,542)
test fib_10000         ... bench:   3,210,114 ns/iter (+/- 64,889)
test fib_to_string     ... bench:         433 ns/iter (+/- 40)
test from_str_radix_02 ... bench:       7,403 ns/iter (+/- 288)
test from_str_radix_08 ... bench:       2,741 ns/iter (+/- 99)
test from_str_radix_10 ... bench:       3,737 ns/iter (+/- 115)
test from_str_radix_16 ... bench:       2,830 ns/iter (+/- 78)
test from_str_radix_36 ... bench:       3,354 ns/iter (+/- 80)
test hash              ... bench:     140,451 ns/iter (+/- 2,757)
test modpow            ... bench:  78,250,953 ns/iter (+/- 1,400,794)
test modpow_even       ... bench: 191,208,389 ns/iter (+/- 10,505,051)
test multiply_0        ... bench:         396 ns/iter (+/- 12)
test multiply_1        ... bench:      45,034 ns/iter (+/- 1,049)
test multiply_2        ... bench:   3,110,243 ns/iter (+/- 114,081)
test multiply_3        ... bench:   7,193,430 ns/iter (+/- 498,651)
test pow_bench         ... bench:  13,125,197 ns/iter (+/- 1,299,791)
test shl               ... bench:       7,582 ns/iter (+/- 739)
test shr               ... bench:       3,056 ns/iter (+/- 58)
test to_str_radix_02   ... bench:       5,011 ns/iter (+/- 130)
test to_str_radix_08   ... bench:       1,877 ns/iter (+/- 29)
test to_str_radix_10   ... bench:       8,457 ns/iter (+/- 287)
test to_str_radix_16   ... bench:       1,401 ns/iter (+/- 45)
test to_str_radix_36   ... bench:       7,998 ns/iter (+/- 159)

test result: ok. 0 passed; 0 failed; 0 ignored; 32 measured; 0 filtered out

     Running target/release/deps/gcd-c7301f4be4104d87

running 8 tests
test gcd_euclid_0064 ... bench:      19,747 ns/iter (+/- 548)
test gcd_euclid_0256 ... bench:     119,759 ns/iter (+/- 1,850)
test gcd_euclid_1024 ... bench:     569,027 ns/iter (+/- 12,593)
test gcd_euclid_4096 ... bench:   4,087,824 ns/iter (+/- 538,444)
test gcd_stein_0064  ... bench:       6,511 ns/iter (+/- 2,291)
test gcd_stein_0256  ... bench:      25,980 ns/iter (+/- 1,552)
test gcd_stein_1024  ... bench:     138,574 ns/iter (+/- 5,185)
test gcd_stein_4096  ... bench:     989,603 ns/iter (+/- 42,293)
```

Versions:

num-bigint: f656829
cargo: 0.26.0-nightly (1d6dfea44 2018-01-26)
rustc: 1.25.0-nightly (27a046e93 2018-02-18)
I-slow,C-enhancement,T-compiler,A-incr-comp
low
Critical
299,527,612
neovim
rpcnotify: no Vim error with "no notification handler registered for"
When using deoplete and using `:call rpcnotify(g:deoplete#_channel_id, 'foo')` you will get an error displayed ("no notification handler registered for "foo""), which is not a real Neovim error: it cannot be caught using `:try` and has no error number etc. I could not find the source for it in Neovim's sources, so it might come from some library being used for this. It would also be nice to get the list of registered handlers, and a list of current channels, so that you could better inspect them and e.g. check if a particular one is (still) valid.
documentation,provider,channels-rpc,remote-plugin
low
Critical
299,549,095
rust
🛠️ specialization permits empty impls when parent has no default items
It is potentially somewhat surprising that [the following impls are accepted](https://play.rust-lang.org/?gist=a99b1b66262b0f2a529473617a3e5062&version=nightly):

```rust
#![feature(specialization)]

trait Foo {
    fn bar();
}

impl<T> Foo for T {
    fn bar() { } // no default
}

impl Foo for i32 {
    // OK so long as we define no items:
    // fn bar() { }
}

fn main() {}
```

The [original example](https://play.rust-lang.org/?gist=74df7c5648b164ce857b8db20fe49ed3&version=nightly) was unearthed by @eddyb and @varkor as part of a PR. To be clear, I don't see this as a *soundness* problem per se. Just a question of what we think it *ought* to do.

cc @aturon
C-enhancement,A-trait-system,T-compiler,A-specialization,requires-nightly,F-specialization,T-types,S-types-deferred
low
Major
299,646,797
create-react-app
Invalid string length error thrown
### Is this a bug report?

Yes

After running `react-scripts` for a while, I get an exception thrown with the stacktrace:

```
/Users/will/Code/collect/node_modules/webpack/lib/Stats.js:221
			text += `\n @ ${current.readableIdentifier(requestShortener)}`;
			        ^

RangeError: Invalid string length
    at formatError (/Users/will/Code/collect/node_modules/webpack/lib/Stats.js:221:30)
    at Array.map (<anonymous>)
    at Stats.toJson (/Users/will/Code/collect/node_modules/webpack/lib/Stats.js:229:56)
    at Compiler.<anonymous> (/Users/will/Code/collect/node_modules/html-webpack-plugin/index.js:68:44)
    at Compiler.applyPluginsAsyncSeries (/Users/will/Code/collect/node_modules/tapable/lib/Tapable.js:206:13)
    at Compiler.emitAssets (/Users/will/Code/collect/node_modules/webpack/lib/C
```

### Did you try recovering your dependencies?

Yes

### Which terms did you search for in User Guide?

`invalid string length`

### Environment

1. `node -v`: `v8.4.0`
2. `yarn --version` (if you use Yarn): `1.3.2`
3. `npm ls react-scripts` (if you haven't ejected): `1.1.1`

Then, specify:

1. Operating system: macOS
2. Browser and version (if relevant): n/a

### Steps to Reproduce

1. Run `yarn start`
2. Make a few changes, causing recompilation
3. Wait until the error happens

### Expected Behavior

This shouldn't happen.

### Actual Behavior

The error above gets thrown into the terminal, and the process stops.

---

I've seen this error reported in a few different repos:

* https://github.com/jantimon/html-webpack-plugin/issues/855
* https://github.com/webpack/webpack/issues/6501
issue: needs investigation
low
Critical
299,681,574
angular
Same route with animation and param
Hello, I'm using this child routing system with a transition animation. The problem is that when I navigate from one state to another, the animation does not run.

<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report
[x] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>

## Current behavior

When I call the index/state route, the animation plays the first time but not on subsequent navigations.

## Expected behavior

The animation is restarted (enter & leave) and the data is updated on the template.

Here is an example of the problem in plunker: https://plnkr.co/edit/hCc1l3AOYecIWk8Vg63n

I thought about this workaround, except that when we come back we hit a 404, so I guess there is a better way to go back to the old state:

    this.router.navigate(['index']);
    setTimeout(() => {
      this.router.navigate(['index/state/', id]);
    }, 1);

How can I force the animation to leave and restart when the component opens again? Thank you

## Environment

<pre><code>
Angular version: 5.1

Browser:
- [x] Chrome (desktop) version 64.0.3282.167
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX

For Tooling issues:
- Node version: 8.9.4
- Platform: Mac
</code></pre>
type: bug/fix,area: animations,freq2: medium,P4
low
Critical
299,697,181
go
cmd/go: give a better error message when building Go package with CGO_ENABLED=0
When cross compiling, the Go toolchain silently ignores any files that use an `import "C"` statement. In this example I've tried to compile for windows on a linux host machine, but this issue seems to be present on other host platforms as well. It does this because `CGO_ENABLED` is disabled by default when the host and target platform mismatch, despite the required toolchains being present (mingw-w64 in my case). I managed to resolve this by enabling `CGO_ENABLED` and setting `CC` and `CXX`, but this issue is more about the unclear error message.

### What version of Go are you using (`go version`)?

`go version go1.10 linux/amd64`

### Does this issue reproduce with the latest release?

Yes.

### What operating system and processor architecture are you using (`go env`)?

Without GOOS:

```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ikkerens/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/usr/lib/go:/var/git/Go"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build897508503=/tmp/go-build -gno-record-gcc-switches"
```

With GOOS=windows:

```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ikkerens/.cache/go-build"
GOEXE=".exe"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="windows"
GOPATH="/usr/lib/go:/var/git/Go"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="0"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-m64 -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build822882573=/tmp/go-build -gno-record-gcc-switches"
```

### What did you do?

2 Go files:

```go
// cgo.go
package main

import "C"

func test() {
}
```

```go
// main.go
package main

func main() {
	test()
}
```

### What did you expect to see?

Some kind of warning that cgo.go is being skipped in compilation due to `CGO_ENABLED` being `0`.

### What did you see instead?

```
# test
./main.go:4:2: undefined: test
```
NeedsFix
medium
Critical
299,834,896
godot
Circular script preloading causes leak without warning (both with preload or named classes)
Godot 2.1.5 beta, Godot 2.1.4, Windows 10 64 bits

I found out my project had a circular dependency between two scripts which were preloading each other into a `const` (though the game has worked perfectly fine this way for a year). The use case was only to use `extends` on it. The workaround is to use `load`...

The result of this is a *random* memory leak (which doesn't always happen) and two cryptic errors:

```
ERROR: SelfList<class GDFunction>::List::~List: Condition ' _first != 0 ' is true.
   At: core\self_list.h:80
ERROR: SelfList<class GDScript>::List::~List: Condition ' _first != 0 ' is true.
   At: core\self_list.h:80
WARNING: ObjectDB::cleanup: ObjectDB Instances still exist!
   At: core\object.cpp:1845
ERROR: ResourceCache::clear: Resources Still in use at Exit!
   At: core\resource.cpp:378
```

Reproduction project: [CircularScriptPreload.zip](https://github.com/godotengine/godot/files/1753083/CircularScriptPreload.zip)
bug,discussion,topic:gdscript,confirmed
low
Critical
299,835,530
go
archive/zip: FileInfoHeader does not handle directories
Consider the following:

```go
fi, _ := os.Stat("/home/")
h, _ := zip.FileInfoHeader(fi)
fmt.Println(h.Name)
```

This currently prints:

```
home
```

However, ZIP treats directories as files with a trailing "/" in the name, which `FileInfoHeader` should automatically append if `os.FileInfo.IsDir` reports true. This difference is minor since users are usually expected to replace the `FileHeader.Name` shortly afterwards anyway.
NeedsInvestigation,early-in-cycle
low
Minor
299,837,848
pytorch
Handle python_arg_parser dtype constants better
The default is a dtype name, but we find the default via the Type; we should use dtypes because this won't work if the Types aren't compiled.
todo,feature,triaged
low
Minor
299,855,779
pytorch
Perf regression: indexing 1-d tensor
After the Tensor/Variable merge we have a perf regression when indexing a 1-d tensor.

As of 6a2afe3:

```python
>>> import torch
>>> x = torch.randn(1000)
>>> %timeit x[0]
435 ns ± 3.34 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
>>> %timeit x[0] = 1
449 ns ± 1.41 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```

As of the Tensor/Variable merge:

```python
>>> import torch
>>> x = torch.randn(1000)
>>> %timeit x[0]
1000000 loops, best of 3: 1.05 µs per loop
>>> %timeit x[0] = 1
100000 loops, best of 3: 4.02 µs per loop
```

Note that indexing a Python list is even faster:

```python
>>> x = [0] * 1000
>>> %timeit x[0]
10000000 loops, best of 3: 36.3 ns per loop
>>> %timeit x[0] = 1
10000000 loops, best of 3: 37.7 ns per loop
```

This affects things like: https://github.com/pytorch/examples/blob/4ef2d4d0c8524372d0047e050065edcac665ce1a/word_language_model/data.py#L39-L46
module: performance,in progress,triaged
low
Minor
299,877,213
rust
#[must_use] on associated function definition when trait declaration does not have it should still trigger warning when calling function concretely
I expect the following code ([Playpen](https://play.rust-lang.org/?gist=4212fdd85fabb3b79d20186f0a7f0acb&version=nightly)) to emit a warning on the last line of `main` that the result of `s.the_must_use_fn()` must be used.

```rust
fn main() {
    let s = Struct;
    s.the_must_use_fn();
}

trait Trait {
    fn the_must_use_fn(&self) -> String;
}

struct Struct;

impl Trait for Struct {
    #[must_use]
    fn the_must_use_fn(&self) -> String {
        "".to_string()
    }
}
```

Ref #43302 as the tracking issue for this feature.
C-enhancement,A-diagnostics,T-compiler
low
Minor
299,896,770
kubernetes
Figure out a way to manage the discrepancy between Windows nodes and Linux nodes
https://github.com/kubernetes/kubernetes/pull/60275/files/2d942dab68b64e684dd2b75232cf9daabc6e0a95#diff-10055ae93a8699af13ceba0482fc43c3

I expect more Windows-specific configuration and code like this to come to the kubelet and other node components. We should figure out a way to handle it so that the code can be easily managed, instead of letting Windows-specific handling code like this scatter throughout our codebase.

cc/ @feiskyer @michmike @kubernetes/sig-windows-bugs
kind/cleanup,sig/node,sig/windows,help wanted,lifecycle/frozen,good first issue,needs-triage
medium
Critical
299,921,626
opencv
OpenCV single-camera calibration problem
This code worked without problems under OpenCV 2.4, but fails under OpenCV 3.4 with:

```
OpenCV Error: Assertion failed (_dst.fixedType()) in convertPointsHomogeneous, file /Volumes/build-storage/build/master_iOS-MAC/OpenCV/modules/calib3d/src/fundam.cpp, line 1029
```

The `cvCalibrateCamera2` parameters seem to be at fault, but comparing them against the documentation, the types look correct.

##### System information (version)

```
int cube_length = 8;
IplImage *show;
int number_image = 7;
int a = 1;
CvSize image_size;                 // size of the input image
CvSize board_size = cvSize(8, 6);  // chessboard pattern size (inner corners)
int board_width = board_size.width;
int board_height = board_size.height;
int total_per_image = board_width * board_height;  // corners per image

CvPoint2D32f *image_points_buf = new CvPoint2D32f[total_per_image];  // detected corner coordinates
CvPoint2D32f *corners = new CvPoint2D32f[total_per_image];           // corners of one image

CvMat *image_points = cvCreateMat(number_image * total_per_image, 2, CV_32FC1);  // image coordinates of corners
CvMat *object_points = cvCreateMat(number_image * total_per_image, 3, CV_32FC1); // world coordinates of corners
CvMat *point_counts = cvCreateMat(number_image, 1, CV_32SC1);                    // detected corner count per frame
CvMat *intrinsic_matrix = cvCreateMat(3, 3, CV_32FC1);
CvMat *distortion_coeffs = cvCreateMat(5, 1, CV_32FC1);

int count;          // actual number of corners found in each frame
int found;          // whether the chessboard corners were detected
int step;           // step = successes * total_per_image
int successes = 0;  // number of frames in which all corners were found

for (int K = 1; K < 7; K++) {
    UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d", K]];
    show = [self CreateIplImageFromUIImage:(image)];
    image_size = cvGetSize(show);
    found = cvFindChessboardCorners(show, board_size, image_points_buf, &count,
                                    CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
    if (found == 0) {
    } else {
        IplImage *gray_image = cvCreateImage(cvGetSize(show), 8, 1);
        cvCvtColor(show, gray_image, CV_BGR2GRAY);
        // refine to sub-pixel corner positions
        cvFindCornerSubPix(gray_image, image_points_buf, count, cvSize(11, 11), cvSize(-1, -1),
                           cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
        cvDrawChessboardCorners(show, board_size, image_points_buf, count, found);
        if (total_per_image == count) {
            step = successes * total_per_image;
            for (int j = 0; j < total_per_image; j++) {
                // store the detected image coordinates
                CV_MAT_ELEM(*image_points, float, successes * total_per_image + j, 0) = image_points_buf[j].x;
                CV_MAT_ELEM(*image_points, float, successes * total_per_image + j, 1) = image_points_buf[j].y;
                // corresponding world coordinates on the board (z = 0)
                CV_MAT_ELEM(*object_points, float, successes * total_per_image + j, 0) = (float)(j / cube_length);
                CV_MAT_ELEM(*object_points, float, successes * total_per_image + j, 1) = (float)(j % cube_length);
                CV_MAT_ELEM(*object_points, float, successes * total_per_image + j, 2) = 0.0f;
            }
            CV_MAT_ELEM(*point_counts, int, successes, 0) = total_per_image;
            successes++;
        }
    }
}
printf("/*****Get the identified images.*****/\n");

// initialize the camera intrinsic matrix
CV_MAT_ELEM(*intrinsic_matrix, float, 0, 0) = 1.0f; // fx
CV_MAT_ELEM(*intrinsic_matrix, float, 1, 1) = 1.0f; // fy

// copy only the successful frames
CvMat *object_points0 = cvCreateMat(total_per_image * successes, 3, CV_32FC1);
CvMat *image_points0 = cvCreateMat(total_per_image * successes, 2, CV_32FC1);
CvMat *point_counts0 = cvCreateMat(successes, 1, CV_32SC1);
for (int i = 0; i < successes * total_per_image; i++) {
    CV_MAT_ELEM(*image_points0, float, i, 0) = CV_MAT_ELEM(*image_points, float, i, 0);
    CV_MAT_ELEM(*image_points0, float, i, 1) = CV_MAT_ELEM(*image_points, float, i, 1);
    CV_MAT_ELEM(*object_points0, float, i, 0) = CV_MAT_ELEM(*object_points, float, i, 0);
    CV_MAT_ELEM(*object_points0, float, i, 1) = CV_MAT_ELEM(*object_points, float, i, 1);
    CV_MAT_ELEM(*object_points0, float, i, 2) = 0.0f;
}
for (int i = 0; i < successes; i++) {
    CV_MAT_ELEM(*point_counts0, int, i, 0) = CV_MAT_ELEM(*point_counts, int, i, 0);
}

CvMat *camera_matrix = cvCreateMat(3, 3, CV_32FC1);
CvMat *distortion_coe = cvCreateMat(5, 1, CV_32FC1);
CvMat *rotation_vectors = cvCreateMat(successes, 3, CV_32FC1);
CvMat *translation_vectors = cvCreateMat(successes, 3, CV_32FC1);
cvCalibrateCamera2(object_points0, image_points0, point_counts0, image_size, camera_matrix,
                   distortion_coeffs, rotation_vectors, translation_vectors, 0);
```
priority: low,category: calib3d,platform: ios/osx
low
Critical
299,938,610
rust
Operators in patterns have incorrect priorities
(Or at least unnatural priorities.) Binary range operators have higher priority than unary operators like `&` or `box`.

```rust
#![feature(exclusive_range_pattern)]

fn main() {
    // Interpreted as (&0) .. (10), ERROR mismatched types
    let x = &0 .. 10;

    match &0 {
        &0 .. 10 => {} // OK?!
        _ => {}
    }
}
```

We can change the priorities for all the unstable kinds of ranges and come up with some deprecation story for stable `BEGIN ... END`.
C-enhancement,A-grammar,A-parser,T-lang,T-compiler,F-half_open_range_patterns,F-exclusive_range_pattern,A-patterns
low
Critical
299,948,567
vscode
Licensing for packaging
Sorry if this is the wrong place, StackOverflow would tag this as irrelevant if I posted there. I'm currently building a package template for Void Linux for VSCode, and I'm wondering what licensing concerns I should consider. I plan on using the prebuilt VSCode binaries available in tarballs from Microsoft. Thanks
linux,under-discussion,license
low
Minor
299,955,777
opencv
Expose Domain_Filter functionality
The `photo` module has a `Domain Transform` filter implementation that cannot be accessed from outside. It is implemented in `npr.hpp` and is only used by several exposed functions. Direct access to and customization of this filter would be very useful for OpenCV users.
feature,category: photo
low
Minor
299,966,950
rust
Evaluation overflow with specialization feature
The current nightly compiler gives the error "overflow evaluating requirement" when a type is used as a trait with a default impl. For example:

```rust
#![feature(specialization)]

fn main() {
    println!("{}", <(usize) as TypeString>::type_string());
}

trait TypeString {
    fn type_string() -> &'static str;
}

default impl<T> TypeString for T {
    fn type_string() -> &'static str {
        "unknown type"
    }
}

impl TypeString for () {
    fn type_string() -> &'static str {
        "()"
    }
}
```

([playground link](https://play.rust-lang.org/?gist=1c378d92ac201f1ed0051b3a27cb4b94&version=nightly))

...gives the following error:

```text
error[E0275]: overflow evaluating the requirement `usize: TypeString`
 --> src/main.rs:4:20
  |
4 |     println!("{}", <(usize) as TypeString>::type_string());
  |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |
note: required by `TypeString::type_string`
 --> src/main.rs:8:5
  |
8 |     fn type_string() -> &'static str;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

Note that if we change `<(usize) as TypeString>::type_string()` to `<() as TypeString>::type_string()` - `()` has a specialized impl - it compiles and runs correctly.
C-enhancement,A-diagnostics,T-compiler,A-specialization,F-specialization
low
Critical
299,971,979
angular
Support custom TypeScript transformers
## I'm submitting a...

<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report
[x] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>

## Current behavior

Currently, angular-cli does not support custom TypeScript transformers.

## Expected behavior

Allow providing custom transformers, as Webpack's [ts-loader](https://github.com/TypeStrong/ts-loader#getcustomtransformers-----before-transformerfactory-after-transformerfactory---) and [awesome-typescript-loader](https://github.com/s-panferov/awesome-typescript-loader/blob/master/src/checker/runtime.ts#L324) do.
feature,area: compiler,feature: under consideration
medium
Critical
299,980,559
rust
clarify effects of lto, thinlto and codegen-units
There seems to be a lot of confusion about the performance implications of `lto`, `thinlto`, `codegen-units`, and the default optimizations of build targets; maybe we can clarify this somehow. Where would be the best place for this?
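For reference, these are the knobs in question as they appear in a Cargo profile (a sketch of my understanding of the accepted values; worth double-checking against Cargo's own docs before documenting):

```toml
[profile.release]
# false (the default) = no cross-crate LTO; "thin" = ThinLTO;
# true or "fat" = full cross-crate LTO; "off" = disable entirely.
lto = "thin"
# More codegen units = more parallel LLVM work but fewer
# cross-unit optimizations; 1 maximizes optimization.
codegen-units = 16
```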
E-hard,C-enhancement,P-medium,T-compiler,A-docs
medium
Major
300,018,110
angular
Nested FormGroup with 3 or more levels may cause ExpressionChangedAfterItHasBeenCheckedError
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior I have a nested FormGroup set up in the structure below: ``` formGroup1 : FormGroup { control1: FormControl; formGroup2: FormGroup { control2: FormControl; formGroup3: FormGroup { control3: FormControl; } } } ``` If formGroup1 and formGroup2 are valid, and formGroup3 is invalid by default, an ExpressionChangedAfterItHasBeenCheckedError is thrown. However, if formGroup2 is invalid, there are no errors. ## Expected behavior The nested form should be invalid with no errors. ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://stackblitz.com or similar (you can use this template as a starting point: https://stackblitz.com/fork/angular-gitter). --> Please see below Plunker for more details: https://plnkr.co/edit/Pz4js1r6TzSv3LgEsYW5 ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> Making nested formGroups work. 
## Environment <pre><code> Angular version: 5.2.6 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: All - [ ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> All Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
type: bug/fix,freq2: medium,area: forms,state: confirmed,P3
low
Critical
300,025,936
rust
libtest: utilize jobserver
Tests are normally CPU-consuming tasks, and we could transparently utilize a jobserver to achieve efficient core utilization. Some use cases I can think of: - Tests which may spawn threads (including rustc `codegen-units`, but not limited to that) - Using a parent make somewhere
T-libs-api,C-feature-request,A-libtest
low
Minor
300,061,978
rust
Extend `rustc_on_unimplemented` to query for closures
Add a filter to `rustc_on_unimplemented` to allow the querying for [the existence of a closure in the requirement chain](https://github.com/rust-lang/rust/pull/48138/files#r169475454). This should allow handling #24909 and #33307 by providing a specific message recommending marking the closure as `move`.
C-enhancement,A-diagnostics,A-closures,T-compiler
low
Minor
300,066,977
rust
🔬 implement "always applicable impls"
Part of https://github.com/rust-lang/rust/issues/31844: In order to eventually stabilize specialization, we need to make it sound. The current plan for doing so is called "always applicable impls", and is explained [in this blog post](http://smallcultfollowing.com/babysteps/blog/2018/02/09/maximally-minimal-specialization-always-applicable-impls/). This issue exists to track the implementation of that proposal. This does not yet have any mentoring instructions. Ping me if you are interested though and we can talk it over! Or maybe I'll get to it before then.
C-enhancement,A-trait-system,T-compiler,A-specialization,F-specialization,T-types,S-types-deferred
low
Major
300,069,983
go
testing: start test log for caching before TestMain
### What version of Go are you using (`go version`)? `go version go1.10 windows/amd64` ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? ``` set GOARCH=amd64 set GOBIN= set GOCACHE=C:\Users\ben\AppData\Local\go-build set GOEXE=.exe set GOHOSTARCH=amd64 set GOHOSTOS=windows set GOOS=windows set GOPATH=C:\Users\ben\dev\go set GORACE= set GOROOT=C:\Go set GOTMPDIR= set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64 set GCCGO=gccgo set CC=gcc set CXX=g++ set CGO_ENABLED=1 set CGO_CFLAGS=-g -O2 set CGO_CPPFLAGS= set CGO_CXXFLAGS=-g -O2 set CGO_FFLAGS=-g -O2 set CGO_LDFLAGS=-g -O2 set PKG_CONFIG=pkg-config set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\ben\AppData\Local\Temp\go-build167965298=/tmp/go-build -gno-record-gcc-switches ``` ### What did you do? 1. Create a `test_something.go` file whose tests require a `main_test.go` file for setup/teardown 2. Create several tests in `test_something.go` 3. Run all of the tests 4. Observe that the code in `main_test` is executed 5. Change a non-Go file that the test depends on ### What did you expect to see? `main_test` is executed again and a cache miss occurs. ### What did you see instead? `main_test` is not executed again and a cache hit occurs. See https://github.com/bakatz/golangissue21422 for more info.
NeedsDecision
medium
Critical
300,071,151
pytorch
scatter_add_ should support scalar source (including Python scalar)
I was looking to use `scatter_add_` to do `bincount`. ```python import torch a = torch.LongTensor([2, 0, 3, 3]) r = torch.LongTensor(5) # works r.zero_().scatter_(0, a, 1) # 1 # 0 # 1 # 1 # 0 # [torch.LongTensor of size 5] # scalar source doesn't work r.zero_().scatter_add_(0, a, 1) #Traceback (most recent call last): # File "<stdin>", line 1, in <module> #TypeError: scatter_add_ received an invalid combination of arguments - got (int, torch.LongTensor, int), #but expected (int dim, torch.LongTensor index, torch.LongTensor src) # no broadcasting? but no checking for memory bounds either? r.zero_().scatter_add_(0, a, torch.LongTensor([1])) #1.4033e+14 # 0.0000e+00 # 1.0000e+00 # 5.4931e+18 # 0.0000e+00 # [torch.LongTensor of size 5] # works ok r.zero_().scatter_add_(0, a, torch.LongTensor([1]).expand_as(a)) # 1 # 0 # 1 # 2 # 0 # [torch.LongTensor of size 5] ``` at `0.4.0a0+1fdb392` cc @mikaylagawarecki
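For clarity, the behavior I'd expect from a scalar-source `scatter_add_` can be sketched in plain Python (this illustrates the desired semantics on dim 0 only; it is not PyTorch code):

```python
def scatter_add_scalar(dest, index, src_scalar):
    """Desired semantics: dest[index[i]] += src_scalar for every i,
    with bounds checking instead of silently reading out of range."""
    for i in index:
        if not 0 <= i < len(dest):
            raise IndexError(f"index {i} is out of bounds for size {len(dest)}")
        dest[i] += src_scalar
    return dest

# Mirrors the bincount example above: counts of [2, 0, 3, 3] over 5 bins.
print(scatter_add_scalar([0] * 5, [2, 0, 3, 3], 1))  # → [1, 0, 1, 2, 0]
```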
triaged,module: scatter & gather ops
low
Critical
300,163,424
angular
Invalid return type of "DebugElement.query"
## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code>[ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report [ ] Feature request [ ] Documentation issue or request [ ] Support request </code></pre> ## Current behavior [`DebugElement.query` declares `DebugElement` as the return type](https://github.com/angular/angular/blob/master/packages/core/src/debug/debug_node.ts#L108). ## Expected behavior `DebugElement.query` declares `DebugElement | null` as the return type. ## Minimal reproduction of the problem with instructions ``` let debugElement: DebugElement = ...; let el = debugElement.query(By.CSS('not existent')).nativeElement; // explodes ``` ## What is the motivation / use case for changing the behavior? The result of `DebugElement.query` [can actually be `null`](https://github.com/angular/angular/blob/master/packages/core/src/debug/debug_node.ts#L110). The error should be caught by the compiler since the programmer should assert that the element was found. This is especially true for `By.CSS` predicate as it seems that the likelihood of typos and mistakes is pretty high. The reason why [this compiles without errors](https://github.com/angular/angular/blob/master/packages/core/src/debug/debug_node.ts#L110) is that the return type of array indexing is `T`, not `T | undefined`. This seems to be the [intended behaviour of the TypeScript compiler](https://github.com/Microsoft/TypeScript/issues/13778). ## Environment <pre><code>Angular version: latest Browser: all </code></pre>
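To make the motivation concrete, here is a standalone sketch (hypothetical types, not Angular's actual classes) of why declaring `| null` forces callers to handle the miss under `strictNullChecks`:

```typescript
// Hypothetical stand-in for DebugElement.
interface Dbg {
  nativeElement: string;
}

// Current declaration pattern: claims to always return a Dbg,
// so `.nativeElement` compiles even when nothing matched.
function queryLoose(found: Dbg | undefined): Dbg {
  return found as Dbg; // lies when nothing matched
}

// Proposed declaration pattern: the compiler rejects unchecked access.
function queryStrict(found: Dbg | undefined): Dbg | null {
  return found ?? null;
}

const hit: Dbg = { nativeElement: "div" };
console.log(queryLoose(hit).nativeElement);
const miss = queryStrict(undefined);
// With `Dbg | null`, accessing miss.nativeElement directly is a compile error.
console.log(miss === null ? "not found" : miss.nativeElement);
```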
type: bug/fix,area: testing,freq2: medium,state: confirmed,cross-cutting: types,P4
low
Critical
300,224,103
vue
Use better types to model prop type
### Version 2.5.13 ### Reproduction link [https://github.com/zsky/vue-date-type-issue](https://github.com/zsky/vue-date-type-issue) ### Steps to reproduce ``` npm i npm run build ``` ### What is expected? No TypeScript error ### What is actually happening? TypeScript reports the error: Property 'getTime' does not exist on type 'string' *** I use Vue with TypeScript, and I want to set a component prop type as Date, so I do this: ```typescript Vue.extend({ props: { start: Date }, created() { this.start; // Expect type Date, but String } }); ``` Then I found something that could be useful: In options.d.ts, ```typescript export type Prop<T> = { (): T } | { new (...args: any[]): T & object } ``` Make a simple test: ```typescript function test<T>(opts: { p1: Prop<T> }): T { return {} as T; } let result = test({ p1: Date }); // Expect type Date, but String ``` But I still don't know how to solve it, thanks for any suggestion. <!-- generated by vue-issues. DO NOT REMOVE -->
typescript
low
Critical
300,273,296
rust
Compiler Performance Tracking Issue
This issue is the landing page for all things compilation speed related. We define the most important usage scenarios, as seen by the @rust-lang/compiler team and the @rust-lang/wg-compiler-performance working group. Benchmarks based on these scenarios are then used as a metric for how well the compiler is doing, compile-time-wise. We then establish a model of the compilation process and use it to estimate the effect of optimization categories on the various usage scenarios. Finally, we list concrete optimization ideas and initiatives, relate them to categories and usage scenarios, and link to their various specific GH issues. # Usage Scenarios We track compiler performance in a number of use cases in order to gauge how compile times in the most important and common scenarios develop. We continuously measure how the compiler performs on [perf.rust-lang.org](perf.rust-lang.org). #48750 defines which concrete benchmarks go into measuring each usage scenario. In the following, "project" means a medium to large codebase consisting of dozens of crates, most of which come from [crates.io](https://crates.io). - **FROM-SCRATCH** - Compiling a project from scratch (*P-high*) This scenario occurs when a project is compiled for the first time, during CI builds, and when compiling after running `cargo clean`, `make clean` or something similar. The targeted runtime performance for builds like this is usually "fast enough", that is, the compiled program should execute with performance that does not hinder testing but it is not expected to have absolute peak performance as you would expect from a release build. - **SMALL-CHANGE** - Re-Compiling a project after a small change (*P-high*) This scenario is the most common during the regular edit-compile-debug cycle. Low compile times are the top-most priority in this scenario. The compiled runtime performance, again, should be good enough not to be an obstacle. 
- **RLS** - Continuously re-compiling a project for the Rust Language Server (*P-high*) This scenario covers how the Rust compiler is used by the [RLS](https://github.com/rust-lang-nursery/rls). Again, low compile times are the top-most priority here. The only difference to the previous scenario is that no executable program needs to be generated at all. The output here is diagnostics and the RLS specific `save-analysis` data. - **DIST** - Compiling a project for maximum runtime performance (*P-medium*) This scenario covers the case of generating release artifacts meant for being distributed and for reflecting runtime performance as perceived by end users. Here, compile times are of secondary importance -- they should be low if possible (especially for running benchmarks) but if there is a trade-off between compile time and runtime performance then runtime performance wins. Sometimes we also see people "measuring" Rust's compile times by compiling a "Hello World" program and comparing how long that takes in other languages. While we do have such a benchmark in our suite, we don't consider it one of the important usage scenarios. # A Model of the Compilation Process The compiler does lots of different things while it is compiling a program. Making any one of those things faster might improve the compile time in one of the scenarios above or it might not. This section will establish a few categories of tasks that the compiler executes and then will map each category to the scenarios that it impacts. ## Compilation Phases / Task Categories The compiler works in four large phases these days: 1. **Pre-Query Analysis** (pre-query) -- This roughly includes parsing, macro expansion, name resolution, and lowering to HIR. The tasks in this phase still follow the bulk processing paradigm and the results it produces cannot be cached for incremental compilation. This phase is executed on a single thread. 2. 
**Query-based Analysis** (query) -- This includes type checking and inference, lowering to MIR, borrow checking, MIR optimization, and translation to LLVM IR. The various sub-tasks in this phase are implemented as so-called *queries*, which are computations that form a DAG and the results of which can be tracked and cached for incremental compilation. Queries are also only executed if their result is requested, so in theory this system would allow for partially compiling code. This phase too is executed on a single thread. 3. **LLVM-based optimization and code generation** (llvm) -- Once the compiler has generated the LLVM IR representation of a given crate, it lets LLVM optimize and then translate it to a number of object files. For optimized builds this usually takes up 65-80% of the overall compilation time. This phase can be run on multiple threads in parallel, depending on compiler settings. Incremental compilation allows to skip LLVM work but is less effective than for queries because caching is more coarse-grained. 4. **Linking** (link) -- Finally, after LLVM has translated the program into object files, the output is linked into the final binary. This is done by an external linker which `rustc` takes care of invoking. Note that this describes the compilation process for a single crate. However, in the scenarios given above, we always deal with a whole graph of crates. Cargo will coordinate the build process for a graph of crates, only compiling crates the code of which has changed. For the overall compile time of a whole project, it is important to note that Cargo will compile multiple crates in parallel, but can only start compiling a crate once all its dependencies have been compiled. The crates in a project form a directed acyclic graph. 
## Using Task Categories To Estimate Optimization Impact On Scenarios From the description above we can infer which optimizations will affect which scenarios: * Making a (pre-query) task faster will affect all scenarios as these are unconditionally executed in every compilation session. * Making a (query) task faster will also affect all scenarios as these are also executed in every session. * Making (llvm) execute faster will have most effect on FROM-SCRATCH and DIST, and some effect on SMALL-CHANGE. It will have no effect on RLS. * Making (link) faster helps with FROM-SCRATCH, DIST, and SMALL-CHANGE, since we always have to link the whole program in these cases. In the SMALL-CHANGE scenario, linking will be a bigger portion of overall compile time than in the other two. For RLS we don't do any linking. * Turning a (pre-query) task into a (query) task will improve SMALL-CHANGE and RLS because we profit from incremental compilation in these scenarios. If done properly, it should not make a difference for the other scenarios. * Reducing the amount of work we generate for (llvm) will have the same effects as making (llvm) execute more quickly. * Reducing the time between the start of compiling a crate and the point where dependents of that crate can start compiling can bring superlinear compile-time speedups because it reduces contention in Cargo's parallel compilation flow. * Reducing the overhead for incremental compilation helps with SMALL-CHANGE and RLS and possibly FROM-SCRATCH. * Improving incr. comp. caching efficiency for LLVM artifacts helps with SMALL-CHANGE and possibly FROM-SCRATCH, but not DIST, which does not use incremental compilation, and RLS, which does not produce LLVM artifacts. * Improving generation of save-analysis data will help the RLS case, while this kind of data is not produced during any of the other scenarios. 
# Concrete Performance Improvement Initiatives There are various concrete initiatives of various sizes that strive to improve the compiler's performance. Some of them are far along, some of them are just ideas that need to be validated before pursuing them further. ## Incremental compilation Work on supporting incremental re-compilation of programs has been ongoing for quite a while now and it is available on stable Rust. However, there are still many opportunities for improving it. - Status: "version 1.0" available on stable. - Current progress is tracked in https://github.com/rust-lang/rust/issues/47660. - Affected usage scenarios: **SMALL-CHANGE**, **RLS** ## Query parallelization Currently the compiler can only evaluate queries (which comprise a large part of the non-LLVM compilation phases) in a single-threaded fashion. However, since queries have a clear evaluation model which structures computations into a directed acyclic graph, it seems feasible to implement parallel evaluation for queries at the framework level. @Zoxc has even done a proof-of-concept implementation. This would potentially help with all usage scenarios since each of them has to execute the query part of compilation. - Status: Preliminary work in progress, experimental - Current progress is tracked in https://github.com/rust-lang/rust/issues/48685 - Affected usage scenarios: **FROM-SCRATCH**, **SMALL-CHANGE**, **RLS**, **DIST** ## MIR-only RLIBs "MIR-only RLIBs" is what we call the idea of not generating any LLVM IR or machine code for RLIBs. Instead, all of this would be deferred to when executables, dynamic libraries, or static C libraries are generated. This potentially reduces the overall workload for compiling a whole crate graph and has some non-performance related benefits too. However, it might be detrimental in some other usage scenarios, especially if incremental compilation is not enabled. 
- Status: Blocked on query parallelization - Current progress is tracked in https://github.com/rust-lang/rust/issues/38913 - Affected usage scenarios: **FROM-SCRATCH**, **SMALL-CHANGE**, **DIST** ## ThinLTO ThinLTO is an LLVM mode that allows performing whole-program optimization in a mostly parallel fashion. It is currently supported by the Rust compiler and even enabled by default in some cases. It tries to reduce compile times by distributing the LLVM workload to more CPU cores. At the same time the overall workload increases, so it can also have detrimental effects. - Status: available on stable, default for optimized builds - Current progress is tracked in https://github.com/rust-lang/rust/issues/45320 - Affected usage scenarios: **FROM-SCRATCH**, **DIST** ## Sharing generic code between crates Currently, the compiler will duplicate the machine code for generic functions within each crate that uses a specific monomorphization of a given function. If there is a lot of overlap this potentially means lots of redundant work. It should be investigated how much work and compile time could be saved by re-using monomorphizations across crate boundaries. - Status: Experimental implementation in #48779 - Current progress is tracked in https://github.com/rust-lang/rust/issues/47317 - Affected usage scenarios: **FROM-SCRATCH**, **SMALL-CHANGE**, **DIST** ## Sharing closures among generic instances We duplicate code for closures within generic functions even if they do not depend on the generic parameters of the enclosing function. This leads to redundant work. We should try to be smarter about it. - Status: Unknown - Current progress is tracked in https://github.com/rust-lang/rust/issues/46477 - Affected usage scenarios: **FROM-SCRATCH**, **SMALL-CHANGE**, **DIST** ## Perform inlining at the MIR-level Performing at least some amount of inlining at the MIR-level would potentially reduce the pressure on LLVM. 
It would also reduce the overall amount of work to be done because inlining would only have to be done once for all monomorphizations of a function while LLVM has to redo the work for each instance. There is an experimental implementation of MIR-inlining but it is not production-ready yet. - Status: Experimental implementation exists on nightly, not stable or optimized yet - Current progress is tracked in #43542 (kind of) - Affected usage scenarios: **FROM-SCRATCH**, **SMALL-CHANGE**, **DIST** ## Provide tools and instructions for profiling the compiler Profiling is an indispensable part of performance optimization. We should make it as easy as possible to profile the compiler and get an idea what it is spending its time on. That includes guides on how to use external profiling tools and improving the compiler-internal profiling facilities. - Status: Idea - Current progress is tracked in (nowhere yet) - Affected usage scenarios: **FROM-SCRATCH**, **SMALL-CHANGE**, **RLS**, **DIST** ## Build released compiler binaries as optimized as possible There is still headroom for turning on more optimizations for building the `rustc` release artifacts. Right now this is blocked by a mix of CI restrictions and possibly outdated restrictions for when building Rust dylibs. - Status: In progress - Current progress is tracked in https://github.com/rust-lang/rust/issues/49180 - Affected usage scenarios: **FROM-SCRATCH**, **SMALL-CHANGE**, **RLS**, **DIST** Feel free to leave comments below if there's anything you think is pertinent to this topic!
metabug,I-compiletime,T-compiler,C-tracking-issue,WG-compiler-performance,S-tracking-impl-incomplete
high
Critical
300,320,516
go
runtime/race: simplify meta shadow mapping
Currently, the race detector runtime for Go assumes that the heap is contiguous and grows up. In particular, [`MapShadow`](https://github.com/llvm-mirror/compiler-rt/blob/5551897294887d632c71275cf11c5654fd80cda7/lib/tsan/rtl/tsan_rtl.cc#L242) always maps the meta shadow contiguously. According to @dvyukov's [comment](https://github.com/golang/go/issues/23900#issuecomment-367410078), this was done to deal with the fact that the Go heap grew irregularly and the meta shadow mapping has large alignment constraints. As of 2b415549, the Go heap is no longer contiguous and doesn't necessarily grow up. However, it also now grows in large, aligned blocks (as of 78846472, 4MB on Windows, 64MB elsewhere). This easily satisfies the alignment constraints on the meta shadow. Hence, the meta shadow mapping code can be simplified to take advantage of this while at the same time adding support for Go's new sparse heap layout.
RaceDetector,help wanted,NeedsFix,compiler/runtime
low
Minor
300,342,756
go
regexp/syntax: add examples for flag parsing
Reading the description of grouping and flags, I'm confused about how those flags would apply to theoretical strings. An example or two would be good to demonstrate the use and placement of flags, as well as the resulting capture groups.
Documentation,help wanted,NeedsInvestigation
low
Minor
300,399,236
go
net/http: make default configs have better timeouts
See https://github.com/golang/go/issues/23459. Client, Server, and Transport may all have timeout fields in which zero = infinity. Instead it should be a reasonable default.
NeedsInvestigation
low
Major
300,408,364
go
x/review/git-codereview: do not mail CLs with editor temp files, binaries
Motivated by #23800, git-codereview should probably refuse to mail CLs with editor temp files and binaries.
help wanted,NeedsFix
low
Major
300,438,037
godot
Allow drag and dropping audio effects on an audio bus
**Godot version:** 3 It would be convenient to be able to drag and drop a saved AudioEffect file on top of an audio bus to add the effect to it, see: ![audioeffect](https://user-images.githubusercontent.com/35355161/36702001-5f271172-1b23-11e8-8db4-dbfaf9e0eda0.png) I haven't found a way to add a saved effect to the bus layout using the editor, so drag and dropping would be the most intuitive way to add this functionality.
enhancement,topic:editor,usability
low
Minor
300,466,549
flutter
SliverGeometry documentation needs a diagram or an example
All the various fields are hard to visualize, but they could likely be explained easily with a picture.
framework,f: scrolling,d: api docs,P2,team-framework,triaged-framework
low
Minor
300,664,825
opencv
a bug in cuda detectMultiScale
##### System information (version) <!-- Example - OpenCV => 3.4 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 --> ##### Detailed description There is an assertion bug at line 179 of https://github.com/opencv/opencv/blob/cfe84b953cd4218df88144e1d58c2580540c6972/modules/cudaobjdetect/src/cascadeclassifier.cpp ##### Steps to reproduce ![image](https://user-images.githubusercontent.com/6119906/36736025-8ed0a280-1ba5-11e8-81c2-5b67c497a4da.png)
feature,category: gpu/cuda (contrib)
low
Critical
300,670,561
pytorch
RuntimeError: $ Torch: not enough memory: you tried to allocate 72GB. Buy new RAM!
- OS: Red Hat Enterprise Linux Server release 7.2 (Maipo) - PyTorch version: pytorch 0.3.1 py35_cuda8.0.61_cudnn7.0.5_2 pytorch torchvision 0.2.0 py35heaa392f_1 pytorch - How you installed PyTorch (conda, pip, source): $ module load anaconda3/4.3.1 $ source activate pytorchenv $ conda install pytorch torchvision -c pytorch - Python version: $ python -V Python 3.5.5 - CUDA/cuDNN version: cuda8.0.61/cudnn7.0.5_2 - GPU models and configuration: $ nvidia-smi Tue Feb 27 10:06:13 2018 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 384.69 Driver Version: 384.69 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla K80 On | 00000000:04:00.0 Off | 0 | | N/A 28C P8 26W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla K80 On | 00000000:05:00.0 Off | 0 | | N/A 22C P8 29W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 2 Tesla K80 On | 00000000:84:00.0 Off | 0 | | N/A 32C P8 26W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 3 Tesla K80 On | 00000000:85:00.0 Off | 0 | | N/A 24C P8 29W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 4 Tesla K80 On | 00000000:8A:00.0 Off | 0 | | N/A 24C P8 26W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 5 Tesla K80 On | 00000000:8B:00.0 Off | 0 | | N/A 33C P8 29W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ 
| 6 Tesla K80 On | 00000000:8E:00.0 Off | 0 | | N/A 26C P8 26W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 7 Tesla K80 On | 00000000:8F:00.0 Off | 0 | | N/A 35C P8 31W / 149W | 1MiB / 11439MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ - GCC version (if compiling from source): N/A In addition, including the following information will also be very helpful for us to diagnose the problem: - A script to reproduce the bug. Please try to provide as minimal of a test case as possible. - Error messages and/or stack traces of the bug - Context around what you are trying to do Training a model with: ``` ... os.environ['CUDA_VISIBLE_DEVICES'] = "0" ... valid_loader = DataLoader(dataset=valid_dataset, batch_size=32, shuffle=False, num_workers=12) # ====== start trianing ======= cudnn.benchmark = True N_CLASSES = 8 BATCH_SIZE = 32 ``` I've tried batch sizes from 128 to 8, and using GPUs from just one to all 8. GPU node has plenty of RAM (124G): $ free total used free shared buff/cache available Mem: 131930696 6299776 124697204 16968 933716 124718092 Swap: 16777212 441996 16335216 Do you have a maximum RAM allocation limit hardcoded in PyTorch (file "THGeneral.c")? Thank you in advance! 
Complete error message: ``` $ python3 train.py ./out/ > train.out Traceback (most recent call last): File "train.py", line 120, in <module> augment_img = torch.stack(augment_img)[perm_index] File "/path-to-anaconda/Anaconda3/4.3.1/envs/pytorchenv/lib/python3.5/site-packages/torch/functional.py", line 64, in stack return torch.cat(inputs, dim) RuntimeError: $ Torch: not enough memory: you tried to allocate 72GB. Buy new RAM! at /opt/conda/conda-bld/pytorch_1518241081361/work/torch/lib/TH/THGeneral.c:253 ```
module: memory usage,triaged
medium
Critical
300,675,046
vscode
Allow extensions to add coloured markers in the gutter similar to source control
The source control functionality currently renders nice coloured markers down the gutter. It'd be nice if this functionality was available to extensions that aren't source control (for example code coverage).
feature-request,api,editor-api
low
Major
300,687,598
vscode
[razor] comment out razor code with @* *@
Issue Type: <b>Bug</b> 1. Edit a cshtml file 2. Select html code to comment out 3. Use comment out keymap (CTRL K + C) Expected Behavior: Proper comment is used based on syntax Select Html only line should use `<!-- -->` Select Razor should use `@* *@` Actual Behavior: After comment out keymap is pressed, the selected area has // at beginning of each line. https://www.screencast.com/t/mxAOaHv7qV VS Code version: Code - Insiders 1.21.0-insider (1a84a882737184488d965426747a05474f54a759, 2018-02-27T09:01:48.555Z) OS version: Windows_NT x64 10.0.16299 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz (8 x 2394)| |Memory (System)|15.88GB (3.92GB free)| |Process Argv|C:\Program Files\Microsoft VS Code Insiders\Code - Insiders.exe| |Screen Reader|no| |VM|0%| </details><details><summary>Extensions (8)</summary> Extension|Author (truncated)|Version ---|---|--- gitlens|eam|8.0.2 mssql|ms-|1.3.0 python|ms-|2018.1.0 csharp|ms-|1.14.0 PowerShell|ms-|1.6.0 vs-keybindings|ms-|0.1.7 team|ms-|1.122.0 vscode-icons|rob|7.20.0 </details> Reproduces only with extensions <!-- generated by issue reporter -->
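For reference, the two comment forms behave differently, which is why the distinction matters (standard Razor behavior):

```
@* Razor server-side comment: stripped during rendering, never sent to the client *@
<!-- HTML comment: included in the rendered page source sent to the browser -->
```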
feature-request,languages-basic,editor-comments
low
Critical
300,689,439
pytorch
Gaussian Sampling
While releasing the code for our article, Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge (https://openreview.net/pdf?id=By4HsfWAZ), we implemented a Gaussian grid sampling scheme: https://github.com/pajotarthur/pytorch/blob/master/aten/src/THNN/generic/SpatialGridSamplerGaussian.c https://github.com/pajotarthur/pytorch/blob/master/aten/src/THCUNN/SpatialGridSamplerGaussian.cu Would it make sense to add it to the main pytorch repo? cc @albanD @mruberry
feature,module: nn,triaged
low
Minor
300,707,166
flutter
Calendar, and Datepicker does not support Jalali Calendar
Hello and good day. I have recently started using Flutter, and unfortunately I cannot use its calendar and **date picker**; the problem is clear: they do not support the **Persian calendar**. How can I use the Persian calendar? Do you have any plan to support the **Persian Jalali calendar**?
c: new feature,framework,f: material design,f: date/time picker,a: internationalization,customer: crowd,P3,team-design,triaged-design
low
Major
300,716,638
TypeScript
Improve error when parameter is missing a name
**TypeScript Version:** 2.8.0-dev.20180227 **Code** ```ts declare class C { f({ x?: number }): void; g(): void; } ``` **Expected behavior:** At the `?`, `property in destructuring may not be marked optional`. **Actual behavior:** ``` src/a.ts(2,10): error TS1005: ',' expected. src/a.ts(2,20): error TS1005: ',' expected. src/a.ts(2,21): error TS1128: Declaration or statement expected. src/a.ts(2,22): error TS1128: Declaration or statement expected. src/a.ts(2,28): error TS1109: Expression expected. src/a.ts(3,8): error TS1005: ';' expected. src/a.ts(3,14): error TS1109: Expression expected. src/a.ts(4,1): error TS1128: Declaration or statement expected. ```
Suggestion,Help Wanted,Domain: Error Messages
low
Critical
300,716,770
go
crypto/x509: ensure CreateCertificate won't generate unparsable certificates
CreateCertificate should return an error instead of generating a certificate that will fail ParseCertificate. We should check that's the case and maybe test or enforce this. See #23995.
Testing,help wanted,NeedsFix
low
Critical
300,718,132
kubernetes
Missing validation in HPA
**Is this a BUG REPORT or FEATURE REQUEST?**: > Uncomment only one, leave it on its own line: > /kind bug > /kind feature **What happened**: It looks like validation of HPA is insufficient: - metric names in pods and object metrics should be checked using IsValidPathSegmentName, as they're used as part of a request to the custom metrics API (this is analogous to https://github.com/kubernetes/kubernetes/pull/60433). - ValidateCrossVersionObjectReference allows an empty apiVersion, leading to a non-functional HPA.
kind/bug,priority/important-soon,sig/autoscaling,lifecycle/rotten
medium
Critical
300,728,497
youtube-dl
[prosiebensat1] Unable to download playlist
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.02.26*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.02.26** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser ### What is the purpose of your *issue*? 
- [x] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other https://www.sat1.de/tv/das-grosse-promibacken/playlists/das-grosse-promibacken-alle-ganzen-folgen ``` youtube-dl.exe https://www.sat1.de/tv/das-grosse-promibacken/playlists/das-grosse -promibacken-alle-ganzen-folgen -v [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['https://www.sat1.de/tv/das-grosse-promibacken/playl ists/das-grosse-promibacken-alle-ganzen-folgen', '-v'] [debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252 [debug] youtube-dl version 2018.02.26 [debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1 [debug] exe versions: ffmpeg N-69233-g1f13348, rtmpdump 2.5 [debug] Proxy map: {} [prosiebensat1] tv/das-grosse-promibacken/playlists/das-grosse-promibacken-alle- ganzen-folgen: Downloading webpage [prosiebensat1] 5558877: Downloading videos JSON ERROR: This video is DRM protected. Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\YoutubeDL.py", line 785, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\extractor\common.py", line 440, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\extractor\prosiebensat1.py", line 445, in _real_extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\extractor\prosiebensat1.py", line 394, in _extract_clip File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\extractor\prosiebensat1.py", line 32, in _extract_video_info youtube_dl.utils.ExtractorError: This video is DRM protected. <end of log> ```
geo-restricted
low
Critical
300,754,555
TypeScript
Support find-all-references for module.exports
**Code** ```ts /// <reference path='fourslash.ts' /> // @allowJs: true // @Filename: /a.js ////module.exports = 0; // @Filename: /b.js ////const [|a|] = require("./a"); verify.noErrors(); const [r0] = test.ranges(); verify.referenceGroups(r0, []); ``` This would involve modifying `importTracker` to support `module.exports =` and `const x = require("...")` better.
Suggestion,In Discussion
low
Critical
300,758,175
rust
Document minimum size for `usize` and `isize`
Right now, `usize` and `isize` are "guaranteed" to be at least a byte long, but nothing more. It seems unlikely that Rust will support 8-bit targets in the future, but this is what the `TryFrom` implementations indicate. A lot of people assume that `usize` will be at least 32 bits but this is not true for all platforms. I think that the docs should be clarified to make people more aware of this behaviour.
P-medium,T-lang,A-docs,C-feature-request,WG-embedded
medium
Major
300,764,867
TypeScript
foo.ts is resolved before foo.d.ts even if the latter is in files[]
Affects at least TS 2.7.2 and 2.6.2. This problem appears when compiling files separately, like we do under Bazel. Imagine this simple app `src/lib.ts` ``` export const a = 1; ``` `src/main.ts` ``` import {a} from './lib'; ``` Imagine `lib.ts` was compiled separately, so there already exists `dist/lib.d.ts` ``` export declare const a = 1; ``` Now, I want to compile `main.ts` as a separate program. Given `src/tsconfig.json` ``` { "compilerOptions": { "rootDirs": [ ".", "../dist" ], "outDir": "../dist", "declaration": true }, "files": [ "main.ts", // lib was compiled separately "../dist/lib.d.ts" ] } ``` We see that `lib.d.ts` is already in the program before resolution begins. However the compiler resolves the import statement to the `lib.ts` file instead, adding it to the program, and tries to emit on top of an input, so the error is ``` $ ./node_modules/.bin/tsc -p src error TS5055: Cannot write file '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/dist/lib.d.ts' because it would overwrite input file. ``` Okay, we didn't want `lib.ts` in the program, so we should just use `--noResolve` to prevent that, but it's also broken: ``` $ ./node_modules/.bin/tsc -p src --noResolve --traceResolution ======== Resolving module './lib' from '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/main.ts'. ======== Module resolution kind is not specified, using 'NodeJs'. 'rootDirs' option is set, using it to resolve relative module name './lib'. Checking if '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/' is the longest matching prefix for '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/lib' - 'true'. Checking if '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/dist/' is the longest matching prefix for '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/lib' - 'false'. 
Longest matching prefix for '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/lib' is '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/'. Loading 'lib' from the root dir '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/', candidate location '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/lib'. Loading module as file / folder, candidate module location '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/lib', target file type 'TypeScript'. File '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/lib.ts' exist - use it as a name resolution result. ======== Module name './lib' was successfully resolved to '/usr/local/google/home/alexeagle/Projects/repro_ts_resolving_ts/src/lib.ts'. ======== src/main.ts(1,17): error TS2307: Cannot find module './lib'. ``` So far our workaround at Google is one of: - use Bazel sandboxing to make the inputs appear in different places - use our own custom compiler which elides any emit to files we don't expect However under Bazel we sometimes cannot sandbox (eg. on Windows) and cannot use our custom compiler (eg. when we are compiling it) so we are stuck. I found that `ts.CompilerOptions.suppressOutputPathCheck` could be a workaround, but `tsc` doesn't allow that flag from the command line or tsconfig.json (it's not in `optionDeclarations` in `commandLineParser.ts`)
Suggestion,Help Wanted,Committed
low
Critical
300,776,238
go
proposal: crypto/rand: guard against mutation of Reader variable
`crypto/rand` exposes an `io.Reader` variable `Reader` as "a global, shared instance of a cryptographically strong pseudo-random generator." Furthermore, `crypto/rand.Read` implicitly uses `Reader` for its crypto source. This seems problematic to me because then any package can just overwrite `crypto/rand.Reader` to point to some other object, affecting the security of any packages that rely on `crypto/rand.Read` or `crypto/rand.Reader` for security, e.g. `x/crypto/nacl`. One can say that a language can never ultimately defend against code running in your same process, but I think it should be possible to write something that depends on `crypto/rand` for security that wouldn't require auditing other packages for a single malicious variable write.[1] The main API flaw here, IMO, is that `Reader` is an `io.Reader` variable, whereas it should be a function that returns an `io.Reader`. A new API would look something like: ``` // Reader returns an io.Reader that reads from a cryptographically strong pseudo-random number generator. func Reader() io.Reader // Read is a helper function that calls Reader() and then passes it to io.ReadFull. func Read(b []byte) (n int, err error) ``` Alas, with the Go 1 compatibility guarantee `Reader` would have to remain, and `Read` would still have to use `Reader`. But the above could be added as new functions, say `MakeReader()` and `SafeRead()`. And the standard library (and other important external packages like `x/crypto/nacl`) could be changed to use those safe functions. [1] Without this flaw, a malicious package would have to use the unsafe package to poke around in the internals of `crypto/rand`, or call out to the external OS to e.g. try to redirect access to the random device, which seems easier to audit for than a write to `crypto/rand.Reader`. Of course, I'm already assuming that a project worried about this is vendoring all of its dependencies.
v2,Proposal
low
Critical
300,787,269
youtube-dl
[radiocanada] Doesn't extract Zone Jeunesse videos
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.02.26*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.02.26** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [x] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` youtube-dl http://ici.radio-canada.ca/jeunesse/scolaire/emissions/5299/belle-et-sebastien/episodes/389176/dragon-blanc/emission -v [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['http://ici.radio-canada.ca/jeunesse/scolaire/emissions/5299/belle-et-sebastien/episodes/389176/dragon-blanc/emission', '-v'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2018.02.26 [debug] Python version 3.4.4 (CPython) - Windows-10-10.0.16299 [debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, rtmpdump 2.4 [debug] Proxy map: {} [generic] emission: Requesting header WARNING: Falling back on generic information extractor. [generic] emission: Downloading webpage [generic] emission: Extracting information ERROR: Unsupported URL: http://ici.radio-canada.ca/jeunesse/scolaire/emissions/5299/belle-et-sebastien/episodes/389176/dragon-blanc/emission Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\build\youtube_dl\YoutubeDL.py", line 785, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\build\youtube_dl\extractor\common.py", line 440, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\build\youtube_dl\extractor\generic.py", line 3127, in _real_extract youtube_dl.utils.UnsupportedError: Unsupported URL: http://ici.radio-canada.ca/jeunesse/scolaire/emissions/5299/belle-et-sebastien/episodes/389176/dragon-blanc/emission ... 
<end of log> ``` --- ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: http://ici.radio-canada.ca/jeunesse/scolaire/emissions/5299/belle-et-sebastien/episodes/389176/dragon-blanc/emission Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights. --- ### Description of your *issue*, suggested solution and other information It does work on ici.tou.tv, but not emissions or Zone Jeunesse videos on ici.radio.canada.ca. Need to be supported.
geo-restricted
low
Critical
300,825,863
TypeScript
Type guards not working for indexed types with generics
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.7 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** indexed type guard **Code** ```ts function f<T extends string>(q: {[k:string]: number | string}, t: T) { const v= q[t]; if (typeof(v)!=="string") { const vv: number = v; } } ``` **Expected behavior:** Type of `v` is `string|number`, and in the `if` it is just number. Compiles. **Actual behavior:** Assignment to `vv` doesn't compile: string not assignable to number. When hovering, type of `v` is `{ [k: string]: string | number; }[T]`, so eliminating string from it does not change the type, it is still the same, and only when assigning to `vv` it gets resolved to `string|number` which then causes compilation error. Changing type of `t` to string fixes the problem. The problem also appears if `q` has type keyed by `[k in T]`. **Playground Link:** [link](https://www.typescriptlang.org/play/index.html#src=function%20f%3CT%20extends%20string%3E(q%3A%20%7B%5Bk%3Astring%5D%3A%20number%20%7C%20string%7D%2C%20t%3A%20T)%20%7B%0D%0A%20%20%20%20const%20v%3D%20q%5Bt%5D%3B%0D%0A%20%20%20%20if%20(typeof(v)!%3D%3D%22string%22)%20%7B%0D%0A%20%20%20%20%20%20%20%20const%20vv%3A%20number%20%3D%20v%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A) **Related Issues:**
Bug
low
Critical
300,884,459
vscode
Feature request: Add support for modifier keys like right ctrl and right alt
### Issue Type Feature Request ### Description Currently VS Code treats left and right modifier keys as the same one (<kbd>ctrl</kbd>, <kbd>alt</kbd> and <kbd>shift</kbd>). I want to define different keyboard shortcuts for left and right modifier keys (e.g. <kbd>left ctrl</kbd> + <kbd>s</kbd> for saving files, and <kbd>right ctrl</kbd> + <kbd>s</kbd> for something else), but unfortunately this cannot be done. ### VS Code Info VS Code version: Code 1.20.0 (c63189deaa8e620f650cc28792b8f5f3363f2c5b, 2018-02-07T17:10:15.949Z) OS version: Linux x64 4.14.16-300.fc27.x86_64
feature-request,editor-core
high
Critical
300,923,956
vscode
Provide activation reason to extensions in activate call
It would be useful to know if the extension is being activated due to a command (in my case I want to show a more specific message if the extension fails to activate due to a missing SDK, and the command gives me more context to tailor this message to the user).
feature-request,api
medium
Major
300,935,374
vue
Vue warns about missing required prop that has a default value
### Version 2.5.13 ### Reproduction link [http://jsfiddle.net/df4Lnuw6/207/](http://jsfiddle.net/df4Lnuw6/207/) ### Steps to reproduce Specify a required prop with a default value: ```js Vue.component('comp', { template: '<div>{{ typeof x }} {{ x }}</div>', props: { x: { type: Number, required: true, default: 5, }, }, }); ``` Render the component without specifying a value for that prop: ```html <comp></comp> ``` ### What is expected? The component should render the following without any prop validation errors: ```html <div>number 5</div> ``` ### What is actually happening? The component renders OK, but warns about missing required prop `x`. --- While it's true that prop `x` is not specified, since it has a default value, there should be no warning message. What exactly does `required` check for? It appears that it checks two things: 1. The prop should be *provided*, as in at least `<comp :x="..."></comp>`. 2. The prop value should be non-null and non-undefined. I think in the case where a required prop has a default value, (1) should be relaxed. Otherwise, how can I enforce a prop to never be null while also providing a default value if the prop value was not provided? <!-- generated by vue-issues. DO NOT REMOVE -->
discussion,improvement
medium
Critical
300,941,129
TypeScript
Find All References from a requiring file to a module file works only for non-default CommonJS exports
_From @Hoishin on February 28, 2018 4:58_ > By "default CommonJS exports" I mean `module.exports = something`, rather than `module.exports.foo = something` or `module.exports = {foo: something}` Might belong to https://github.com/Microsoft/vscode/issues/21507, but I thought it is a bit different. - VSCode Version: 1.20.1 - OS Version: macOS High Sierra Version 10.13.3 Steps to Reproduce: 1. `module.exports` something ```js const f = 1234; module.exports = f; ``` 2. `require` it ```js const f = require('./above-file'); console.log(f); ``` 3. The `f` in `console.log` doesn't show references across files in Find All References. You can find references across files if you Find All References from the module file. However, 1. `module.exports.foo` something ```js const f = 1234; module.exports.foo = f; ``` 2. `require` it ```js const {foo} = require('./above-file'); console.log(foo); ``` 3. The `foo` in `console.log` DOES show reference across files in Find All References. You can find references across files if you Find All References from the module file. <!-- Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes _Copied from original issue: Microsoft/vscode#44700_
Bug,VS Code Tracked
low
Major
300,993,680
rust
Variable name corrections should take type into account
I'm not sure whether this is feasible in all cases, but as far as possible, suggestions to rename variables that are typos should take the types of the suggestions into account. Consider: ```rust fn index(i: usize, a: &[i32]) -> i32 { a[j] // ^ did you mean `a`? } ``` Here, `a` is clearly not a valid index for the slice (and we get an error if we attempt the suggestion), but it is suggested anyway. On the other hand, `i` clearly has the correct type and is a more sensible suggestion. <!-- TRIAGEBOT_START --> <!-- TRIAGEBOT_ASSIGN_START --> <!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":null}$$TRIAGEBOT_ASSIGN_DATA_END --> <!-- TRIAGEBOT_ASSIGN_END --> <!-- TRIAGEBOT_END -->
C-enhancement,A-diagnostics,T-compiler,D-invalid-suggestion
low
Critical
301,000,650
opencv
Zenodo DOI for OpenCV?
Quick question: for a project like OpenCV that's heavily used in an academic context, would it make sense to register a DOI at https://zenodo.org/ so that it can be cited properly in papers etc.?
RFC
low
Minor
301,037,811
opencv
vlc FTBFS with opencv-3.4.1
##### System information (version) - OpenCV => 3.4.1 - Operating System / Platform => Linux - Compiler => gcc-7-20180222 (a 7.3.1 snapshot) <!-- your description --> vlc-3.0.1 fails to build against opencv-3.4.1, because the C compiler produces an error at line 485 of /usr/include/opencv2/core/cvdef.h. The error message is: ``` /usr/include/opencv2/core/cvdef.h:485:1: error: unknown type name 'namespace' ``` This is because the C compiler is trying to compile C++ code. ##### Steps to reproduce 1. build and install opencv-3.4.1 2. configure vlc-3.0.1 to include opencv and then run make ##### Proposed fix The following patch to opencv allows the build of vlc to complete: ```.diff --- opencv-3.4.1/modules/core/include/opencv2/core/cvdef.h.orig 2018-02-28 10:16:45.000320632 +0000 +++ opencv-3.4.1/modules/core/include/opencv2/core/cvdef.h 2018-02-28 10:34:30.852346927 +0000 @@ -454,6 +454,7 @@ Cv64suf; // Integer types portatibility +#if defined __cplusplus #ifdef OPENCV_STDINT_HEADER #include OPENCV_STDINT_HEADER #else @@ -494,6 +495,9 @@ typedef ::uint64_t uint64_t; } #endif #endif +#else +#include <stdint.h> +#endif //! @} ``` With the patched opencv installed, I have also built the frei0r-plugin-1.6.0, gstreamer-plugins-bad-1.12.4 and LiVES-2.8.7, all of which use opencv (although I don't know whether they failed against an unpatched version)
category: build/install,RFC
medium
Critical
301,043,095
flutter
'[!]' in flutter doctor looks too much like an error state
## Flutter Doctor ``` C:\>flutter doctor -v [√] Flutter (on Microsoft Windows [Version 10.0.16299.248], locale en-US, channel beta) • Flutter version 0.1.4 at C:\flutter • Framework revision f914e701c5 (9 days ago), 2018-02-19 21:12:17 +0000 • Engine revision 13cf22c284 • Dart version 2.0.0-dev.27.0-flutter-0d5cf900b0 [√] Android toolchain - develop for Android devices (Android SDK 27.0.0) • Android SDK at C:\Users\zowhair\AppData\Local\Android\sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-27, build-tools 27.0.0 • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01) [√] Android Studio (version 3.0) • Android Studio at C:\Program Files\Android\Android Studio • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01) [!] Connected devices ! No devices available ! Doctor found issues in 1 category. ```
tool,t: flutter doctor,a: quality,P3,team-tool,triaged-tool
low
Critical
301,048,766
node
Merge streams handling code for http2 streams & net.Socket
Hey :) It would be great if we could unify all the code from `net` and `http2` that is only concerned with pushing data to/from the underlying stream, ideally into a common base class of `net.Socket` and `Http2Stream`, so that we could also maybe port some of the other native streams (zlib, fs) to using `StreamBase` on the native side & generally just share a lot of code. This is probably not an *easy* task, and probably not doable in one pass, because the http2 and net implementations are always just *slightly* different, but if anybody wants to try to start working on this, feel free to reach out by commenting here.
help wanted,net,http2
medium
Critical
301,061,779
create-react-app
Make launch editor a standalone package.
The launchEditor module is super useful: https://github.com/facebook/create-react-app/blob/next/packages/react-dev-utils/launchEditor.js I'd like to use it standalone, outside of create-react-app. Is there anyone willing to send a PR to make this a standalone package? :)
issue: proposal
low
Major
301,065,980
flutter
Support for progressive jpegs
I don't know if it is possible right now, but it would be great if Flutter had the capability of decoding an image progressively instead of baseline when the image supports it.
c: new feature,engine,a: images,P2,team-engine,triaged-engine
low
Major
301,069,246
create-react-app
RFC: Source Packages
# RFC: Source Packages A developer-, tool-, and ecosystem- friendly mechanism for sharing source. co-authored with @gaearon ## Overview "Source packages" are the same as standard npm packages, except they may contain non-standard language features (e.g. React JSX) and are built by the consumer. "Source packages" are included as standard dependencies in package.json and declared as source by also including them in sourceDependencies: ``` package.json { "name": "myapp", "dependencies": {"pkg1": ">0.0.0"}, "sourceDependencies": ["pkg1"] // declare as source package } ``` Being standard npm packages, source packages can be managed by standard tools and can be truly modular since they can declare their own dependencies. Being built by the consumer, the consuming build can provide the same build features and developer experience for source packages (transpiling, hot-reloading, de-duping, etc.) as it does for its own source. Since source packages may contain non-standard language features, they should be marked as "private". They can be contained in monorepos and/or published to private registries. Source packages should be testable by the consumer, just like the consumer's own source. This facilitates concurrent development of shared components. This proposal does not include a mechanism for a source package to describe its source code, e.g. which language features it uses. The proposal assumes that the consumer knows which source packages it is including and is able to build them, e.g. that the included source packages have the same build requirements as the consumer's own source. 
## Pseudo-algorithm for finding source packages ``` sourcePackages = Map() FindSourcePkgs(package.json): for each dep in package.json dependencies if dep is also in package.json sourceDependencies resolve dep package path if dep package.json has private flag sourcePackages[dep] = dep package path + dep package.json source FindSourcePkgs(dep package.json) FindSourcePkgs(initial package.json) ``` Supports: * Transitive source dependencies. * Monorepos and private registries. * Source entry points. ## Example This repo demonstrates many of the use cases supported by this proposal. ``` repo/ app1/ package.json name: "app1" dependencies: { "comp1": ">0.0.0", "kewl-comps": ">0.0.0" } sourceDependencies: [ "comp1", // single module in source package "kewl-comps" // multiple modules in source package ] src/ App.js import comp1 from 'comp1'; import kewl1 from 'kewl-comps/comp1'; comp1/ package.json name: "comp1" dependencies: { "comp2": ">0.0.0" } sourceDependencies: [ "comp2" // transitive source dependency ] private: true index.js import comp2 from 'comp2'; import intLib from './int-lib'; // internal transitive source dependency int-lib.js comp2/ package.json name: "comp2" private: true index.js kewl-comps/ package.json name: "kewl-comps" dependencies: { comp5: ">0.0.0" } sourceDependencies: [ comp5 ] source: "src" // source entry point private: true src/ comp1.js import comp2 from './comp2'; // internal transitive source dependency import comp5 from 'comp5'; comp2.js node_modules/ comp5/ // source dependency from private registry package.json name: "comp5" dependencies: { "comp6": ">0.0.0" } sourceDependencies: [ "comp6" // transitive source dependency ] source: "src" private: true src/ index.js import comp6 from 'comp6'; node_modules/ comp6/ // nested transitive source dependency from private registry package.json name: "comp6" private: true index.js ```
issue: proposal
medium
Major
301,099,621
vscode
Support environment variables for paths in .code-workspace files
### Issue Type Feature Request ### Description I'm attempting to use `{ "path": "${env:APPDATA}/Code/User" }` in my workspace to be able to easily open settings.json and keybindings.json as raw files (the custom panels take up too much space and I rarely need them) and it appears variables are not substituted when loading the workspace file. ### VS Code Info VS Code version: Code 1.20.1 (f88bbf9137d24d36d968ea6b2911786bfe103002, 2018-02-13T15:34:36.336Z) OS version: Windows_NT x64 10.0.16299
feature-request,workbench-multiroot
high
Critical
301,112,224
godot
Project Settings : Autoload tab -- confusing layout, columns cut off, icon illegible
**Godot version:** 3.0.1 **OS/device including version:** Windows 10 / Microsoft Surface Pro 4 **Issue description:** 3 usability issues in the Autoload dialog (see accompanying screenshot) 1) the input fields for node name and path are reversed from the order they appear in the table. Recommend keeping the same order so the user's mental model, established during input, matches the table shown on-screen. 2) the 'Singleton' column is cut off and the table columns don't appear to be adjustable. 3) the toggle icon for a node in the Singleton column is illegible - cannot tell whether it is on or off **Steps to reproduce:** 1. open the settings dialog 2. click the autoload tab 3. add an autoload node ![godot_301_settings_autoload_dialog](https://user-images.githubusercontent.com/253436/36802313-d897901c-1cb4-11e8-9901-aabc8231d783.PNG)
enhancement,topic:editor,usability
low
Minor
301,112,237
rust
Compilation aborted because it cannot delete dep-graph.bin
Probably not important, but I'm reporting it anyway. See this Travis log: https://travis-ci.org/rust-lang-nursery/rand/jobs/347348670 The first test command finishes successfully, but the second fails because one of the (leftover?) incremental compilation files cannot be deleted (because it does not exist).

```
The command "cargo test --tests --no-default-features --features=alloc" exited with 0.
$ cargo test --all --features serde-1,log,nightly
Compiling rand v0.4.2 (file:///home/travis/build/rust-lang-nursery/rand)
error: Failed to delete invalidated or incompatible incremental compilation session directory contents `/home/travis/build/rust-lang-nursery/rand/target/debug/incremental/rand-1tsi5gagblzct/s-eyqipa1yw1-8k3grx-working/dep-graph.bin`: No such file or directory (os error 2).
error: aborting due to previous error
error: Could not compile `rand`.
warning: build failed, waiting for other jobs to finish...
error: build failed
```
T-compiler,A-incr-comp,C-bug
low
Critical
301,115,922
go
cmd/compile: tighten CFG as well as values
```go
func f(x int, b bool) int {
    if x < 0 {
        x *= -1
    }
    if b {
        return x
    }
    return 0
}
```

The compiled code for `f` matches the input closely: It checks `x < 0`, negates it if so, then checks b. But we only need to check and modify x if b is true. The compiler should rewrite this code into:

```go
func f(x int, b bool) int {
    if b {
        if x < 0 {
            x *= -1
        }
        return x
    }
    return 0
}
```

This is similar to the tighten pass, but instead of operating on values, it should operate on subsections of the CFG. Similar to the tighten pass, it should avoid moving work into a loop.

Among other things, this would help avoid doing needless work when the return value of copy is unused; see https://go-review.googlesource.com/c/go/+/94596.

Marking 1.11 optimistically, although I have no plans to work on this myself.

cc @randall77 @dr2chase @cherrymui
Performance,compiler/runtime
low
Minor
301,132,756
flutter
`flutter channel foo` should have an option to upgrade first?
It's a bit silly that when I `flutter channel foo` it downloads all the artifacts for that version when I just plan to upgrade again immediately. Would be nice to be able to skip that download and/or just switch and upgrade at the same time?
c: new feature,tool,P3,team-tool,triaged-tool
low
Major
301,162,953
flutter
gradlew assembleDebug - Failed to run the Flutter compiler. Exit code: 255
## Steps to Reproduce Using Flutter tools from the command line. Channel - beta. While running a project. ``` [ +9 ms] Running 'gradlew assembleDebug'... [ +5 ms] [android\] D:\Studio\Flutter\proj\android\gradlew.bat -Ptarget=D:\Studio\Flutter\proj\lib/main.dart assembleDebug [+4227 ms] :app:preBuild UP-TO-DATE [ +70 ms] :app:preDebugBuild UP-TO-DATE [ +10 ms] :app:compileDebugAidl UP-TO-DATE [ +7 ms] :app:compileDebugRenderscript UP-TO-DATE [ +18 ms] :app:flutterBuildX86Jar UP-TO-DATE [ +7 ms] :app:checkDebugManifest UP-TO-DATE [ +6 ms] :app:generateDebugBuildConfig UP-TO-DATE [ +2 ms] :app:prepareLintJar UP-TO-DATE [ +2 ms] :app:cleanMergeDebugAssets UP-TO-DATE [+1564 ms] Failed to open: C:\WINDOWS\System32\flutter\bin\cache\artifacts\engine\windows-x64\vm_isolate_snapshot.bin [ +52 ms] Failed to run the Flutter compiler. Exit code: 255 [ +15 ms] :app:flutterDependenciesDebug FAILED [ +8 ms] FAILURE: Build failed with an exception. [ +11 ms] * Where: [ +1 ms] Script 'C:\WINDOWS\System32\flutter\packages\flutter_tools\gradle\flutter.gradle' line: 421 [ +3 ms] * What went wrong: [ +2 ms] Execution failed for task ':app:flutterDependenciesDebug'. [ +3 ms] > Process 'command 'C:\WINDOWS\System32\flutter\bin\flutter.bat'' finished with non-zero exit value 255 [ +7 ms] * Try: [ +1 ms] Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. [ +6 ms] * Get more help at https://help.gradle.org [ +2 ms] BUILD FAILED in 5s [ +11 ms] 9 actionable tasks: 1 executed, 8 up-to-date [ +396 ms] "flutter run" took 14,063ms. ``` ## Logs ``` Launching lib/main.dart on YU5010A in debug mode... Initializing gradle... 1.2s Resolving dependencies... 4.0s Running 'gradlew assembleDebug'... Failed to open: C:\WINDOWS\System32\flutter\bin\cache\artifacts\engine\windows-x64\vm_isolate_snapshot.bin Failed to run the Flutter compiler. Exit code: 255 FAILURE: Build failed with an exception. 
* Where: Script 'C:\WINDOWS\System32\flutter\packages\flutter_tools\gradle\flutter.gradle' line: 421 * What went wrong: Execution failed for task ':app:flutterDependenciesDebug'. > Process 'command 'C:\WINDOWS\System32\flutter\bin\flutter.bat'' finished with non-zero exit value 255 * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. * Get more help at https://help.gradle.org BUILD FAILED in 5s Gradle build failed: 1 ``` ``` Analyzing D:\Studio\Flutter\proj... No issues found! Ran in 8.8s ``` ## Flutter Doctor ``` [√] Flutter (Channel beta, v0.1.5, on Microsoft Windows [Version 10.0.10586], locale en-IN) • Flutter version 0.1.5 at C:\WINDOWS\System32\flutter • Framework revision 3ea4d06340 (6 days ago), 2018-02-22 11:12:39 -0800 • Engine revision ead227f118 • Dart version 2.0.0-dev.28.0.flutter-0b4f01f759 [√] Android toolchain - develop for Android devices (Android SDK 27.0.3) • Android SDK at D:\SDK • Android NDK at D:\SDK\ndk-bundle • Platform android-27, build-tools 27.0.3 • ANDROID_HOME = D:\SDK • Java binary at: D:\Studio\android-studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01) [√] Android Studio (version 3.0) • Android Studio at D:\Studio\android-studio • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01) [!] IntelliJ IDEA Community Edition (version 2017.3) X Flutter plugin not installed; this adds Flutter specific functionality. X Dart plugin not installed; this adds Dart specific functionality. • For information about installing plugins, see https://flutter.io/intellij-setup/#installing-the-plugins [√] VS Code (version 1.20.1) • VS Code at C:\Program Files\Microsoft VS Code • Dart Code extension version 2.9.2 [√] Connected devices (1 available) • YU5010A • b275d0c9 • android-arm • Android 5.1.1 (API 22) ! Doctor found issues in 1 category. ``` Project never runs. 
Even tried removing the cache dir and re-downloading the setup, changing channel to master and upgrading the dependencies. Thanks!
tool,platform-windows,P3,team-tool,triaged-tool
low
Critical
301,187,015
flutter
API Inconsistency: Some Animations use named parameters, some do not
Animations that combine 2 animations together have very different signatures for creating them:

```dart
TrainHoppingAnimation(this._currentTrain, this._nextTrain, { this.onSwitchedTrain }) // Unnamed

CompoundAnimation( { @required this.first, @required this.next } ) // Required named params

AnimationMean( { Animation<double> left, Animation<double> right } ) // Not required named, even though AnimationMean extends CompoundAnimation

AnimationMax(Animation<T> first, Animation<T> next) // Unnamed
```

We should clean up these APIs to all work the same way. IMO I think the unnamed, required structure works well because it's more succinct and works better for the inheritance structure custom animations need to fit into:

- When creating a new CompoundAnimation, having the constructor with only named parameters means that the dart analyzer doesn't complain when I create a subclass without a constructor
- names like 'left' and 'right' don't really apply to how animations combine together.
- names like 'first' and 'next' also don't apply very well in cases like the Mean, Max and Min animations.

I'm applying the docs - api label, as that seems most appropriate.
framework,a: animation,c: API break,c: proposal,P2,team-framework,triaged-framework
low
Minor
301,222,962
go
reflect: cannot call *T methods on addressable Values of type T
I would expect this program to work:

```go
package main

import "reflect"

type T int

func (*T) M() {}

func main() {
    var t T
    v := reflect.ValueOf(&t).Elem()
    v.MethodByName("M").Call(nil)
}
```

Currently, it produces `panic: reflect: call of reflect.Value.Call on zero Value` panic, because `M` is not in the method set of `T` (only `*T`). This seems overly strict to me. The Go spec allows calling `t.M()` where t is an addressable value of type T; it's just implicitly executed as `(&t).M()`. I would expect package reflect to handle this implicit dereference, but it does not.

For comparison, the spec also allows an implicit dereference to call value-receiver methods on pointer types, and package reflect *does* perform this implicit dereference.

/cc @ianlancetaylor @dsnet
NeedsInvestigation,compiler/runtime
low
Major
301,236,839
rust
Multiple definitions of atomic builtins on armv5te-unknown-linux-gnu
Trying to cross-compile the Rust compiler for armv5te-unknown-linux-gnu fails with: ``` Compiling panic_unwind v0.0.0 (file:///srv/glaubitz/rustc-1.24.0-src/src/libpanic_unwind) error: linking with `arm-linux-gnueabi-gcc` failed: exit code: 1 | = note: "arm-linux-gnueabi-gcc" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-L" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1/lib/rustlib/armv5te-unknown-linux-gnueabi/lib" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std0-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std1-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std10-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std11-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std12-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std13-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std14-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std15-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" 
"/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std2-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std3-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std4-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std5-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std6-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std7-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std8-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.std9-35a626bb176191a3631d2276b7a6938e.rs.rcgu.o" "-o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/libstd-c05b4157967490d3.so" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.crate.metadata.rcgu.o" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/std-c05b4157967490d3.crate.allocator.rcgu.o" "-Wl,-z,relro,-z,now" 
"-Wl,-O1" "-nodefaultlibs" "-L" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps" "-L" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/release/deps" "-L" "/srv/glaubitz/rustc-1.24.0-src/build/armv5te-unknown-linux-gnueabi/native/libbacktrace/.libs" "-L" "/srv/glaubitz/rustc-1.24.0-src/build/armv5te-unknown-linux-gnueabi/native/jemalloc/lib" "-L" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/build/compiler_builtins-830f2ed062697681/out" "-L" "/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1/lib/rustlib/armv5te-unknown-linux-gnueabi/lib" "-Wl,-Bstatic" "-Wl,--whole-archive" "-l" "backtrace" "-Wl,--no-whole-archive" "-Wl,-Bdynamic" "-l" "dl" "-l" "rt" "-l" "pthread" "-Wl,-Bstatic" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/libpanic_unwind-d795c34fd7eb72c9.rlib" "-Wl,--no-whole-archive" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/libunwind-0f4b95d88e53e455.rlib" "-Wl,--no-whole-archive" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/liballoc_jemalloc-276e820bb149c519.rlib" "-Wl,--no-whole-archive" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/liballoc_system-6d9370e956155877.rlib" "-Wl,--no-whole-archive" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/liblibc-f874e85b0fc1f5b8.rlib" "-Wl,--no-whole-archive" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/liballoc-c7758873e5bedec1.rlib" "-Wl,--no-whole-archive" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/libstd_unicode-c9b8ae6d904bd621.rlib" "-Wl,--no-whole-archive" "-Wl,--whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/libcore-c80c8268f013635d.rlib" "-Wl,--no-whole-archive" "/tmp/rustc.Z9Z1qX4dirp5/libcompiler_builtins-6389cf691b0a8db6.rlib" "-Wl,-Bdynamic" "-l" "gcc_s" "-l" "pthread" "-l" "c" "-l" "m" "-l" "rt" "-l" "pthread" "-l" "util" "-l" "util" "-shared" "-Wl,-rpath,$ORIGIN/../lib" = note: 
/usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc.a(linux-atomic.o): In function `__sync_fetch_and_add_4': (.text+0x0): multiple definition of `__sync_fetch_and_add_4' /tmp/rustc.Z9Z1qX4dirp5/libcompiler_builtins-6389cf691b0a8db6.rlib(compiler_builtins-6389cf691b0a8db6.compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs.rcgu.o):compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs:(.text.__sync_fetch_and_add_4+0x0): first defined here /usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc.a(linux-atomic.o): In function `__sync_fetch_and_sub_4': (.text+0x38): multiple definition of `__sync_fetch_and_sub_4' /tmp/rustc.Z9Z1qX4dirp5/libcompiler_builtins-6389cf691b0a8db6.rlib(compiler_builtins-6389cf691b0a8db6.compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs.rcgu.o):compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs:(.text.__sync_fetch_and_sub_4+0x0): first defined here (...) /usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc.a(linux-atomic.o): In function `__sync_synchronize': (.text+0xda4): multiple definition of `__sync_synchronize' /tmp/rustc.Z9Z1qX4dirp5/libcompiler_builtins-6389cf691b0a8db6.rlib(compiler_builtins-6389cf691b0a8db6.compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs.rcgu.o):compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs:(.text.__sync_synchronize+0x0): first defined here /usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc.a(linux-atomic.o): In function `__sync_lock_test_and_set_4': (.text+0xdb0): multiple definition of `__sync_lock_test_and_set_4' /tmp/rustc.Z9Z1qX4dirp5/libcompiler_builtins-6389cf691b0a8db6.rlib(compiler_builtins-6389cf691b0a8db6.compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs.rcgu.o):compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs:(.text.__sync_lock_test_and_set_4+0x0): first defined here /usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc.a(linux-atomic.o): In function `__sync_lock_test_and_set_2': (.text+0xde8): multiple definition of `__sync_lock_test_and_set_2' 
/tmp/rustc.Z9Z1qX4dirp5/libcompiler_builtins-6389cf691b0a8db6.rlib(compiler_builtins-6389cf691b0a8db6.compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs.rcgu.o):compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs:(.text.__sync_lock_test_and_set_2+0x0): first defined here /usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc.a(linux-atomic.o): In function `__sync_lock_test_and_set_1': (.text+0xe48): multiple definition of `__sync_lock_test_and_set_1' /tmp/rustc.Z9Z1qX4dirp5/libcompiler_builtins-6389cf691b0a8db6.rlib(compiler_builtins-6389cf691b0a8db6.compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs.rcgu.o):compiler_builtins1-4ad3f5eb4daeb9d58eb1d1b812eea230.rs:(.text.__sync_lock_test_and_set_1+0x0): first defined here collect2: error: ld returned 1 exit status error: aborting due to previous error error: Could not compile `std`. Caused by: process didn't exit successfully: `/srv/glaubitz/rustc-1.24.0-src/build/bootstrap/debug/rustc --crate-name std src/libstd/lib.rs --error-format json --crate-type dylib --crate-type rlib --emit=dep-info,link -C prefer-dynamic -C opt-level=2 --cfg feature="alloc_jemalloc" --cfg feature="backtrace" --cfg feature="jemalloc" --cfg feature="panic-unwind" --cfg feature="panic_unwind" -C metadata=c05b4157967490d3 -C extra-filename=-c05b4157967490d3 --out-dir /srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps --target armv5te-unknown-linux-gnueabi -L dependency=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps -L dependency=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/release/deps --extern unwind=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/libunwind-0f4b95d88e53e455.rlib --extern 
alloc_jemalloc=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/liballoc_jemalloc-276e820bb149c519.rlib --extern panic_unwind=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/libpanic_unwind-d795c34fd7eb72c9.rlib --extern libc=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/liblibc-f874e85b0fc1f5b8.rlib --extern alloc_system=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/liballoc_system-6d9370e956155877.rlib --extern alloc=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/liballoc-c7758873e5bedec1.rlib --extern compiler_builtins=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/libcompiler_builtins-6389cf691b0a8db6.rlib --extern panic_abort=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/libpanic_abort-28e388772239c881.rlib --extern core=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/libcore-c80c8268f013635d.rlib --extern std_unicode=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/deps/libstd_unicode-c9b8ae6d904bd621.rlib -L native=/srv/glaubitz/rustc-1.24.0-src/build/armv5te-unknown-linux-gnueabi/native/libbacktrace/.libs -l static=backtrace -l dl -l rt -l pthread -L native=/srv/glaubitz/rustc-1.24.0-src/build/armv5te-unknown-linux-gnueabi/native/jemalloc/lib -L native=/srv/glaubitz/rustc-1.24.0-src/build/x86_64-unknown-linux-gnu/stage1-std/armv5te-unknown-linux-gnueabi/release/build/compiler_builtins-830f2ed062697681/out` (exit code: 101) thread 'main' panicked at 
'command did not execute successfully: "/usr/local/bin/cargo" "build" "--target" "armv5te-unknown-linux-gnueabi" "-j" "64" "--release" "--features" "panic-unwind jemalloc backtrace" "--manifest-path" "/srv/glaubitz/rustc-1.24.0-src/src/libstd/Cargo.toml" "--message-format" "json" expected success, got: exit code: 101', src/bootstrap/compile.rs:886:9 note: Run with `RUST_BACKTRACE=1` for a backtrace. failed to run: /srv/glaubitz/rustc-1.24.0-src/build/bootstrap/debug/bootstrap build Build completed unsuccessfully in 0:19:05 glaubitz@epyc:/srv/glaubitz/rustc-1.24.0-src$ ``` This happens because Rust's own libcompiler_builtins provides atomic builtins like __sync_synchronize despite the fact that gcc already provides these builtins itself. The builtins provided by libcompiler_builtins should therefore be disabled. I could not find where those are defined, but I could only find a reference to them: https://github.com/rust-lang/rust/blob/master/src/librustc_back/target/armv5te_unknown_linux_gnueabi.rs#L30 CC @jrtc27 CC @malbarbo
O-Arm,T-compiler,C-bug
low
Critical
301,238,455
kubernetes
kube-controller-manager: healthz should indicate that the garbage collector is running.
If the GC gets stuck, the controller manager needs to be restarted. The GC should never get stuck, of course. But right now, if it did, I think we'd have no way to know?
sig/api-machinery,lifecycle/frozen
medium
Critical
301,266,814
vscode
Overview ruler becomes redundant
- VSCode Version: 1.20.1
- OS Version: Windows 10

Does this issue occur when all extensions are disabled?: Yes

Steps to Reproduce:

1. Create a TypeScript file with 2000-3000+ lines of code/blank lines (the more lines the worse it gets; it usually starts at around 1500+ lines).
2. Click to place your cursor in the editor half way down the file somewhere.
3. Scroll the page so the cursor is at the very top of the screen.
4. Observe the overview ruler on the right-hand side of the editor: the 'blip' used to indicate the location of the cursor is incorrect (not sure what you call these, but I refer to them as blips in this issue).

![image](https://user-images.githubusercontent.com/11023398/36825133-b7ea1e3c-1d6a-11e8-8487-fbfb15ee0d82.png)

This has caused me a few problems as it's also used for search, error and warning blips. Often it will say the item is in my current view when it's not. The bigger my file, the more out of sync it is. I'm guessing there's some arbitrary limit to the height of the box used to indicate what the current view is showing, and I can understand that there would need to be some limit as it may be hard to view things correctly the more lines of code you add. I propose a few solutions below; not sure which is best:

1) When you reach a point where you can't reduce the blip height and/or window preview height in the ruler, then don't show anything. This is the quickest and simplest solution, but I believe it's better than showing incorrect data. Maybe we need some way to show that it's being disabled so users don't report another bug saying it's not showing :)

2) Put the scroll bar in its own vertical container without the blips and preview, and then show 2 or more vertical rulers to the right of it, so as you scroll up and down the preview window will show in one of the vertical rulers (could be split when crossing between overview rulers). When everything fits on one ruler, the scroll bar and overview are combined as they are now; when you make the file bigger you would have two vertical overview rulers, then three, up to some limit.

3) Keep it as it is now with one vertical ruler, but only show the preview for x lines of code below and above the current view position. This means when you're half way down a 5000-line file the ruler may only show lines 1500-4000. By doing this you can keep the heights of the graphics at a reasonable value; need to show somehow at the top/bottom of the ruler if lines are being hidden.

Keep up the good work!
feature-request,editor-scrollbar
low
Critical
301,294,332
youtube-dl
Can't download from sites like zee5.com and sunnxt.com
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.02.26*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.02.26** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [x] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [x] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` C:\ffmpeg\bin>youtube-dl -v "https://www.zee5.com/movies/details/bruce-lee-the-f ighter/0-0-2702/watch" [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', 'https://www.zee5.com/movies/details/bruce-lee -the-fighter/0-0-2702/watch'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2018.02.26 [debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1 [debug] exe versions: ffmpeg N-89948-ge3d946b3f4, ffprobe N-89948-ge3d946b3f4 [debug] Proxy map: {} [generic] watch: Requesting header WARNING: Falling back on generic information extractor. [generic] watch: Downloading webpage [generic] watch: Extracting information ERROR: Unsupported URL: https://www.zee5.com/movies/details/bruce-lee-the-fighte r/0-0-2702/watch Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\YoutubeDL.py", line 785, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\extractor\common.py", line 440, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpfqykkd7x\bu ild\youtube_dl\extractor\generic.py", line 3127, in _real_extract youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.zee5.com/movies/ details/bruce-lee-the-fighter/0-0-2702/watch --- ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: https://www.zee5.com/movies/details/bruce-lee:-the-fighter/0-0-2702/watch - Single video: https://sunnxt.com Note that **youtube-dl does not support sites dedicated to [copyright 
infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for a site support request to be accepted, all provided example URLs should not violate any copyrights.

---

### Description of your *issue*, suggested solution and other information

Unable to download from sites like https://www.zee5.com and https://www.sunnxt.com. I have the SunNXT premium subscription, but I am unable to download from the site. Please add support for these sites.
geo-restricted
low
Critical
301,295,465
flutter
Make it easy to override Scrollable's gesture detectors, e.g. to enable scale+scroll
Currently if you want a Scrollable that can both be scaled and scrolled, you have to have separate Scale and VerticalDrag gesture detectors, the scale in your code, and the vertical drag in the Scrollable. This makes the UI ugly, because it means the user has to know which operation they are doing when they begin, and if they move one finger as they're putting the second down, it'll trigger the scroll instead of the scale. Another use case is two orthogonal Scrollables sharing a single Pan gesture detector. We should make it easy to override how Scrollable listens to gestures.
c: new feature,framework,f: scrolling,f: gestures,customer: crowd,P3,team-framework,triaged-framework
medium
Major
301,336,021
pytorch
torch.jit.trace(network, data) fails if data is an OrderedDict
My network takes a bunch of inputs, and I was keeping them in an ordereddict so that I could keep track of what they were. When I tried to run TensorboardX on my network, it fails with: ```Traceback (most recent call last): File "tensorboard.py", line 63, in <module> main() File "tensorboard.py", line 60, in main writer.add_graph(network, data) File "/usr/local/lib/python2.7/dist-packages/tensorboardX/writer.py", line 400, in add_graph self.file_writer.add_graph(graph(model, input_to_model, verbose)) File "/usr/local/lib/python2.7/dist-packages/tensorboardX/graph.py", line 52, in graph trace, _ = torch.jit.trace(model, args) File "/usr/local/lib/python2.7/dist-packages/torch/jit/__init__.py", line 241, in trace return TracedModule(f, nderivs=nderivs)(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python2.7/dist-packages/torch/jit/__init__.py", line 266, in forward in_vars, in_struct = _flatten((args, tuple(kw_items)), self.state_dict(keep_vars=True).values()) File "/usr/local/lib/python2.7/dist-packages/torch/jit/__init__.py", line 568, in _flatten obj_vars = tuple(itertools.chain(function._iter_variables(obj), params)) File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 277, in _iter for var in _iter(o): File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 277, in _iter for var in _iter(o): File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 281, in _iter "an input object of type " + torch.typename(obj)) ValueError: NestedIOFunction doesn't know how to process an input object of type collections.OrderedDict ``` Digging through, I found that the problem was in torch.jit.trace, tho, I'm less confident about trying to fix it. I'm running Linux, 4.4.0-112-generic, pytorch 0.3.1 on python 2.7.12 and I installed it with pip. 
(I don't think the CUDA information is relevant, because this fails on my laptop, which doesn't have a GPU.) I'm attaching two files: one called "works.py", which works because it uses a list instead of an OrderedDict, and one called "fails.py", which fails because it uses an OrderedDict. Thanks. [demo.zip](https://github.com/pytorch/pytorch/files/1770599/demo.zip) cc @gmagogsfm
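Until `torch.jit.trace` understands dict inputs, a practical workaround is to flatten the `OrderedDict` into a plain tuple before tracing, keeping the key order separately so names can be recovered inside the model. A minimal sketch of the idea (pure Python, no torch dependency; the helper names are made up for illustration, not part of any PyTorch API):

```python
from collections import OrderedDict

def flatten_inputs(inputs):
    """Split an OrderedDict of named inputs into (keys, values) so the
    values can be passed positionally to torch.jit.trace."""
    keys = tuple(inputs.keys())
    values = tuple(inputs.values())
    return keys, values

def rebuild_inputs(keys, values):
    """Reassemble the named mapping inside forward() if needed."""
    return OrderedDict(zip(keys, values))

# Stand-ins for tensors, just to show the round trip.
inputs = OrderedDict([("image", [1.0, 2.0]), ("mask", [0, 1])])
keys, values = flatten_inputs(inputs)
assert keys == ("image", "mask")
assert rebuild_inputs(keys, values) == inputs
```

With this shape, `values` can be handed to `trace(model, values)` as a plain tuple, which is what `_flatten` already knows how to walk.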
oncall: jit,module: bootcamp,days
low
Critical
301,337,802
angular
feat: `::part` support in Angular's Emulated CSS parser
## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior Angular has no real replacement for /deep/ (aka >>> and ::ng-deep), which are all marked as deprecated. ## Expected behavior Angular should be at the forefront of web technology and provide today the technologies of tomorrow. ::part and ::theme provide a controlled and safe way to style components from outside without allowing abuse of the encapsulation that Shadow DOM is meant to provide. [Explaining article](https://meowni.ca/posts/part-theme-explainer/) [Spec Draft](https://tabatkins.github.io/specs/css-shadow-parts/#intro) I believe such declared parts can be treated almost like @Input variables that trigger change detection. Although declared in the @Component styles, parts would be compiled into the .ngstyle and incorporated into the view; when another component uses the ::part-annotated component, the styles are propagated. ## What is the motivation / use case for changing the behavior? A component needs to allow external users to style its inner parts; say a dropdown needs to have its button, drop-down list, and selection hover stylable. Angular version: 5.2.6 Browser: Any, it should compile away to allow browsers to support it today.
feature,area: core,core: CSS encapsulation,feature: under consideration
medium
Critical
301,445,325
kubernetes
Overaggressive Routes Controller
**Is this a BUG REPORT or FEATURE REQUEST?**: /kind bug **What happened**: `Route controller` is unnecessarily aggressive. The [current control loop](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/route/route_controller.go#L115) loops through the node list every `10s`. Each node will generate a `GET` call to the cloud resource manager (ARM in the case of `azure`) to assert route existence. This quickly burns through the quota afforded to each subscription (the umbrella concept used by `azure` to manage multiple clusters/resource groups) by the underlying cloud. Nodes do not change their own IPs that frequently on `azure`, or on any cloud/on-prem deployment for that matter. Once the cluster burns through the quota, its control plane will become practically unusable until quota refill, refresh, or a manual shutdown/start of the controller to slow down the calls. Setting it to a higher period is not the best solution, since we need the route controller to kick in in case of scale up/down or the unlikely case of a node changing IPs. **What you expected to happen**: `Route Controller` must generate the smallest possible number of cloud calls. Ideally, only when needed. And it must be reasonably fast in reaction to changing state such as `new nodes`, `deleted nodes`, or changing `node ip`. **How to reproduce it (as minimally and precisely as possible)**: Either run many small clusters (10 x 20 nodes) in the same `azure` subscription (or the equivalent concept on different environments), or run one large cluster (400+ nodes). The controller will quickly log `HTTP 429` (in the `azure` case) throttling errors. **Proposed Solution**: change the `Route Controller` implementation to a `Service Controller`-[like](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/service/service_controller.go#L136) implementation using a `shared informer`. 
Because the `node` object gets updated frequently (e.g. node status updates), the controller should maintain an in-memory cache of `node:ip` and only call out to the `cloud provider` when the node is new or its IP has changed. We can extend this with a `full sync` every long period, just in case of faulty state drift. **Anything else we need to know?**: N/A **Environment**: - Kubernetes version (use `kubectl version`): * - Cloud provider or hardware configuration: `azure` et al - OS (e.g. from /etc/os-release): * - Kernel (e.g. `uname -a`): * - Install tools: * - Others: *
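The proposed `node:ip` cache could look roughly like the sketch below. The names (`RouteCache`, `NeedsSync`) are hypothetical, not the actual controller API; the point is just that the cloud provider is only contacted when a node is new or its IP changed:

```go
package main

import "fmt"

// RouteCache remembers the last IP reconciled per node, so the cloud
// provider is only called when something actually changed.
type RouteCache struct {
	lastIP map[string]string
}

func NewRouteCache() *RouteCache {
	return &RouteCache{lastIP: make(map[string]string)}
}

// NeedsSync reports whether node is new or its IP changed since the last
// reconcile, and records the new IP.
func (c *RouteCache) NeedsSync(node, ip string) bool {
	if old, ok := c.lastIP[node]; ok && old == ip {
		return false // unchanged: skip the cloud call
	}
	c.lastIP[node] = ip
	return true
}

func main() {
	c := NewRouteCache()
	fmt.Println(c.NeedsSync("node-1", "10.0.0.1")) // new node -> true
	fmt.Println(c.NeedsSync("node-1", "10.0.0.1")) // unchanged -> false
	fmt.Println(c.NeedsSync("node-1", "10.0.0.2")) // IP changed -> true
}
```

A periodic full sync (dropping the cache every long interval) would layer on top of this to correct state drift.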
kind/bug,area/cloudprovider,lifecycle/frozen,area/provider/azure,triage/unresolved,sig/cloud-provider,needs-triage
medium
Critical
301,456,329
vscode
Document highlight API command should return word highlights if no specific highlight provider is registered
The command should behave exactly the same as if the user sets a cursor somewhere.
feature-request,api,editor-symbols
low
Minor
301,529,155
TypeScript
API: expose IntrinsicType
**TypeScript Version:** 2.7.2, 2.8.0-dev.20180228 **Search Terms:** IntrinsicType There's currently no (official) way to distinguish the literal types `true` and `false`. Although `ts.TypeFlags.BooleanLiteral` is included in `ts.TypeFlags.Literal`, it is not a `ts.LiteralType`, so I cannot access the value of the literal. I need access to the internal `ts.IntrinsicType` for that. TSLint just declared the type in their project, which might break at any time... My current workaround is `checker.typeToString(type) === 'false'`. If you expose `ts.IntrinsicType`, does it make sense to expose `ts.TypeFlags.Intrinsic` too?
Suggestion,In Discussion,API
low
Critical
301,544,473
rust
[NLL] borrowed-universal-test.rs output is not great
The output of this test ... ```rust // compile-flags: -Znll-dump-cause #![feature(nll)] #![allow(warnings)] fn gimme(x: &(u32,)) -> &u32 { &x.0 } fn foo<'a>(x: &'a (u32,)) -> &'a u32 { let v = 22; gimme(&(v,)) //~^ ERROR borrowed value does not live long enough [E0597] } fn main() {} ``` is not great ``` [santiago@archlinux rust1 (borrowed_value_error)]$ rustc +stage1 src/test/ui/nll/borrowed-universal-error.rs -Znll-dump-cause error[E0597]: borrowed value does not live long enough --> src/test/ui/nll/borrowed-universal-error.rs:22:12 | 22 | gimme(&(v,)) | ^^^^ temporary value does not live long enough 23 | //~^ ERROR borrowed value does not live long enough [E0597] 24 | } | - temporary value only lives until here | note: borrowed value must be valid for the lifetime 'a as defined on the function body at 20:1... --> src/test/ui/nll/borrowed-universal-error.rs:20:1 | 20 | fn foo<'a>(x: &'a (u32,)) -> &'a u32 { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error If you want more information on this error, try using "rustc --explain E0597" ``` cc @nikomatsakis
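For context on why the diagnostic fires at all: the borrow of the temporary `(v,)` cannot outlive `foo`, so no diagnostic wording can make the program compile as written. A variant that does satisfy the `'a` bound, borrowing from the caller's data instead of a temporary, looks like this:

```rust
fn gimme(x: &(u32,)) -> &u32 {
    &x.0
}

// Borrowing from the caller-provided tuple, rather than from a local
// temporary, lets the returned reference live for 'a.
fn foo<'a>(x: &'a (u32,)) -> &'a u32 {
    gimme(x)
}

fn main() {
    let t: (u32,) = (22,);
    assert_eq!(*foo(&t), 22);
    println!("ok");
}
```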
C-enhancement,A-diagnostics,P-low,T-compiler,A-NLL
low
Critical
301,549,210
go
proposal: spec: allow cap(make([]T, m, n)) > n
Currently, the Go spec requires that `cap(make([]T, n)) == n` and `cap(make([]T, m, n)) == n`. I propose relaxing this constraint from `==` to `>=`. Rationale: the Go runtime already pads allocations up to the next malloc bucket size. By treating the user supplied capacity argument as a lower bound rather than exact bound, there's a possibility to make use of this padding space. Also, if users really need an exact capacity (which seems like it would be very rare), they can write `make([]T, n)[:n:n]` or `make([]T, m, n)[:m:n]`. (See also discussion in #24163.)
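To make the `make([]T, m, n)[:m:n]` escape hatch concrete, here is a small runnable illustration of pinning capacity with a full slice expression:

```go
package main

import "fmt"

func main() {
	// Under the proposal, make([]T, m, n) would only guarantee cap >= n.
	s := make([]byte, 4, 8)

	// The full slice expression s[low:high:max] pins the capacity exactly,
	// so code that truly needs cap == n can still get it.
	exact := s[:4:8]

	fmt.Println(len(exact), cap(exact)) // 4 8
}
```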
LanguageChange,Proposal,LanguageChangeReview
high
Critical
301,567,622
go
proposal: net/v2: make Pipe asynchronous
Currently, `net.Pipe` is documented as such: > Pipe creates a *synchronous*, in-memory, full duplex network connection; both ends implement the Conn interface. Reads on one end are matched with writes on the other, copying data directly between the two; there is no internal buffering. This behavior occurs because `net.Pipe` was just a thin wrapper over `io.Pipe`, which is implemented as being synchronous. However, most network connections are not synchronous as they involve buffering at multiple layers (in the OS, on the wire, etc). Thus, we should switch `net.Pipe` to act in an asynchronous fashion to better emulate true network connections.
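To illustrate the difference: with the current synchronous pipe, a write blocks until a matching read occurs on the other end. An asynchronous variant would buffer writes, roughly like this toy one-directional sketch built on a buffered channel (not the proposed implementation, just the behavioral contrast):

```go
package main

import "fmt"

// bufPipe is a toy one-directional, buffered, in-memory pipe: writes
// complete immediately as long as buffer space remains, mimicking the
// buffering a real network connection provides.
type bufPipe struct {
	ch chan byte
}

func newBufPipe(size int) *bufPipe {
	return &bufPipe{ch: make(chan byte, size)}
}

func (p *bufPipe) Write(b []byte) (int, error) {
	for i, c := range b {
		select {
		case p.ch <- c:
		default:
			// A real implementation would block here until space frees up.
			return i, fmt.Errorf("buffer full")
		}
	}
	return len(b), nil
}

func (p *bufPipe) Read(b []byte) (int, error) {
	n := 0
	for n < len(b) {
		select {
		case c := <-p.ch:
			b[n] = c
			n++
		default:
			return n, nil
		}
	}
	return n, nil
}

func main() {
	p := newBufPipe(16)
	// Unlike net.Pipe today, this write succeeds with no reader running.
	n, _ := p.Write([]byte("hello"))
	buf := make([]byte, 8)
	m, _ := p.Read(buf)
	fmt.Println(n, m, string(buf[:m])) // 5 5 hello
}
```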
v2,Proposal
medium
Major
301,615,204
go
cmd/compile: revisit append codegen
CLs 21813 and 22197 optimized the code generated for append. It was done in part to address #14921, but while I was there, I also focused on things like avoiding spills in the fast path. Several things have changed in the compiler since then: * The old backend is gone, so we are free to change the signature of growslice. * We are better at sinking spills to a narrower scope, so some of the spill-avoidance techniques used are no longer necessary. * Write barriers are inserted later in the SSA backend, rather than in the frontend and during SSA construction. All this suggests to me that we may be able to simplify and improve the code generated for append. I also suspect that the "no write barriers for in-place appends" optimization may have gotten lost during the write barrier move. More details about that in a comment-to-come over in #14921. That's a significant optimization. Tentatively marking this as a release blocker for that reason. cc @randall77 @cherrymui @aclements @mvdan
Performance
low
Minor
301,622,646
go
x/text/language: change of behavior for language matcher
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? 1.9.2 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? linux amd64 ### What did you do? The golang.org/x/text package seems to have changed its language-matching behaviour with an update a few days ago: ``` package main import ( "fmt" "golang.org/x/text/language" ) func main() { s := []language.Tag{language.MustParse("en"), language.MustParse("fr")} p := []language.Tag{language.MustParse("en-US"), language.MustParse("en")} l := language.NewMatcher(s) ll, _, _ := l.Match(p...) fmt.Println(ll) } ``` ### What did you expect to see? This used to print "en" but now prints "en-u-rg-uszzzz". This doesn't make sense because I only support "en" and "fr", so why is it returning something else? Switching the order of my preferred languages gives "en". Is there a rhyme or reason for the change? I cannot understand it. It defeats the purpose of a language "matcher" if it is going to return languages that I don't support. If this is by design, what is the best way to get just "en"? Parent()? Base()? SomethingElse()? ### What did you see instead? "en-u-rg-uszzzz", a language I do not support.
NeedsInvestigation
medium
Critical
301,642,132
rust
missed optimization: fat pointers in two-variant enums with small second variants
Currently, there's a reasonably well-known optimization for two-variant enums, where one variant is a single pointer and the other has no fields. In this case, Rust uses its knowledge that `0x0` is never a valid pointer value to optimize the discriminant into the address: `0x0` is taken to mean that the non-pointer variant is the correct one. There's a missed optimization, however, when the pointer is a fat one (e.g. a slice or a trait object) and the second variant is small (specifically, `<= usize`). In this case, the enum could be optimized by having the address field zero still indicate the second variant, with the fields being stored where the size or vtable pointer would be stored on a trait object. https://play.rust-lang.org/?gist=7398c28c7f05bc76a06a7fb6d4af40fa&version=nightly
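The layouts in question can be inspected with `std::mem::size_of`. On a 64-bit target the thin-pointer case is already packed into one word, while the size of the fat-pointer enum described here depends on whether the compiler applies the optimization (this is a rough illustration, not an assertion about any particular rustc version):

```rust
use std::mem::size_of;

// Fat pointer plus a small second variant: the proposed optimization would
// reuse the null data pointer as the discriminant and store the usize in
// the bytes where the slice length normally lives.
#[allow(dead_code)]
enum FatOrSmall<'a> {
    Fat(&'a [u8]),
    Small(usize),
}

fn main() {
    // Thin-pointer case: the all-zero address already encodes None,
    // so Option<&u32> is pointer-sized.
    println!("&u32:         {}", size_of::<&u32>());
    println!("Option<&u32>: {}", size_of::<Option<&u32>>());
    // Fat-pointer case: whether this enum is two words or three depends
    // on whether the optimization described above is applied.
    println!("&[u8]:        {}", size_of::<&[u8]>());
    println!("FatOrSmall:   {}", size_of::<FatOrSmall<'static>>());
}
```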
C-enhancement,T-compiler,C-optimization
low
Minor
301,694,819
vscode
SCM: Align next/previous commands between diff and regular editor
Extracted from https://github.com/Microsoft/vscode/issues/7717 I think the commands that got introduced should also work within a diff editor.
help wanted,feature-request,diff-editor
low
Major
301,748,271
flutter
"Charles" networking debugging tool prevents SDK download
Hi, when I run `flutter doctor --verbose` for the first time, it tries to download the Dart SDK, but it can sit there for hours and nothing happens. I have tried to delete Flutter and install it from Android Studio, but the problem is the same. It clones Flutter without problems, but when it checks for the Dart SDK version, it stays there forever. I have Windows 10 Pro. What can I do?
tool,platform-windows,a: first hour,P3,team-tool,triaged-tool
low
Critical
301,758,624
rust
#![cfg_attr(not(test), no_std)] doesn't work when building multiple crate-types
``` $ cargo new --lib foo $ cd foo $ cat >> Cargo.toml <<EOF [lib] crate-type = ["rlib", "staticlib"] [profile.dev] panic="abort" EOF $ cat > src/lib.rs <<EOF #![cfg_attr(not(test), no_std)] #![feature(lang_items)] #[cfg(not(test))] #[lang = "panic_fmt"] #[no_mangle] pub fn panic_fmt(_: core::fmt::Arguments, _: &'static str, _: u32, _: u32) -> ! { loop {} } EOF $ cargo +nightly build Compiling foo v0.1.0 (file:///tmp/foo) Finished dev [unoptimized + debuginfo] target(s) in 0.13 secs $ cargo +nightly test Compiling foo v0.1.0 (file:///tmp/foo) error: language item required, but not found: `eh_personality` error: aborting due to previous error ``` That is an error that is expected with `no_std`, but the code is supposed to be essentially empty in the `test` configuration. It works when `crate-type` contains only one value (either one).
C-enhancement,T-compiler
low
Critical
301,796,099
pytorch
Redo torch.nn.functional docstring strategy
Now that we are stuffing all of our ATen bindings in `torch.` namespace, docstrings for these functions shouldn't live in `torch.nn.functional` anymore, because then you won't get docs unless you import `torch.nn.functional` first. CC @gchanan cc @jlin27
module: docs,triaged
low
Minor
301,808,852
rust
Add rustdoc option to make old docs "less important" to make search engine results more pertinent
I'm thinking about something like: ```bash rustdoc --robots-options="less-important" # or whatever option it is for search engine robots ``` However, this option will need to be called for the version-specific docs generation (I'm talking about `doc.rust-lang.org/doc/1.x.y`) and not for the "non-specific" docs (`doc.rust-lang.org/doc/stable`, `doc.rust-lang.org/doc/beta` or `doc.rust-lang.org/doc/nightly`). Maybe you know something about this @Mark-Simulacrum? (wild guess) cc @QuietMisdreavus (does the option's name seem OK to you?)
C-enhancement,T-release
low
Minor
301,823,259
rust
Traits and associated types are not properly resolved in trait clauses
https://play.rust-lang.org/?gist=408db89ed821df7b75a2769ae92d3e9b&version=nightly
A-trait-system,A-associated-items,T-lang,C-bug,T-types,S-types-deferred
low
Major
301,838,384
pytorch
TestNN.test_data_parallel takes 10G of memory
Apply the following patch (with `pip install psutil`) to get resident memory information: ``` diff --git a/test/common.py b/test/common.py index 2980cd5..5752af3 100644 --- a/test/common.py +++ b/test/common.py @@ -10,6 +10,7 @@ from functools import wraps from itertools import product from copy import deepcopy from numbers import Number +import psutil import __main__ import errno @@ -148,6 +149,10 @@ class TestCase(unittest.TestCase): def setUp(self): set_rng_seed(SEED) + def tearDown(self): + process = psutil.Process(os.getpid()) + print("\nResident: {}".format(process.memory_info().rss)) + def assertTensorsSlowEqual(self, x, y, prec=None, message=''): max_err = 0 self.assertEqual(x.size(), y.size()) ``` Then run the test: ``` (/home/ezyang/Dev/onnx-fb-universe-env) [[email protected] ~/Dev/onnx-fb-universe/repos/pytorch] python test/test_nn.py TestNN.test_data_parallel Resident: 10068000768 . ---------------------------------------------------------------------- Ran 1 test in 17.433s OK ``` That's 10G!!! This is on an eight GPU box. This number is on a DEBUG build. Commit b181b59d31fd0bf34d098508c6c1ac55821f1630 CC @apaszke cc @mruberry @VitalyFedyunin
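For environments without `psutil`, roughly the same measurement can be taken with the standard library's `resource` module. Note the caveats: it reports peak RSS rather than current RSS, and `ru_maxrss` units differ by platform (kilobytes on Linux, bytes on macOS):

```python
import resource
import sys

def peak_rss_bytes():
    """Peak resident set size of this process, in bytes.

    ru_maxrss is reported in kilobytes on Linux and in bytes on macOS.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        return rss
    return rss * 1024

print("Peak resident: {}".format(peak_rss_bytes()))
```

Dropping this into `tearDown` instead of the `psutil` call would flag the same 10G blow-up without adding a test dependency.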
module: memory usage,module: tests,triaged
low
Critical
301,860,748
vscode
Proxy parameters are ignored and user-settings are missing a proxy-bypass option
### Issue Type Bug ### Description @joaomoreno mentioning you here as discussed in https://github.com/Microsoft/vscode-docs/pull/1069 **Steps to reproduce** 1. Start VSC using `code --proxy-server="myproxy:8080" --proxy-bypass-list="*.someinternaldomain" --ignore-certificate-errors` 2. Make sure in the user settings the options `http.proxy` and `http.proxyStrictSSL` are **not** set. **Expected outcome:** VSC should use the proxy myproxy:8080 for ALL protocols except when connecting so any subdomain on *.someinternaldomain*. Also it should ignore any SSL errors. This should be true for all VSC parts that require networking and all plug-ins that do so. **Actual outcome:** All submitted cmd parameters seem to be ignored atleast for parts of VSC. **Observed behaviour:** Create a JSON-Schema that is referencing both a public URL and an internal URL, e.g. ```JSON { "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "something": { "$ref": "https://repo.someinternaldomain/some-internal-schema.json" } } } ``` where some-internal-schema.json can be an blank JSON such as ```JSON {} ``` VSCode will then try to resolve both: the internal and the public domains. Given the cmd params we used to start VSC both should work. But actually what I get is ```LOG Unable to load schema from 'repo.someinternaldomain'. Error: self signed certificate in certificate chain ``` Testing further, I set the option `"http.proxyStrictSSL" : false` in the VSC user settings, the certificate error now is replaced by the following error: ```LOG Unable to load schema from 'http://json-schema.org/draft-07/schema': Unable to connect to http://json-schema.org/draft-07/schema. Error: connect ECONNREFUSED 127.0.0.1:80 ``` Note that on the same system I am able to visit json-schema.org without any issues using the same proxy configuration. This can be also tested when setting `"http.proxy": "http://myproxy:8080"` in the user settings. Then the error will go away. 
Now public connections are working just fine but the internal connections will still not work because `--proxy-bypass-list`, just like all the other cmd parameters seems to be ignored. So the issue is: 1. chromium proxy parameters as shown above are ignored by some parts of VSC (used for JSON schema validation). Note that this also happens with `code --disable-extensions` so I believe this is a built-in feature. 2. a option to configure a proxy-bypass-list in the user-settings is missing (this would be a great addition not only because the parameters are not working but in general) ### VS Code Info VS Code version: Code 1.20.1 (f88bbf9137d24d36d968ea6b2911786bfe103002, 2018-02-13T15:34:36.336Z) OS version: Windows_NT x64 6.3.9600 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (2 x 2497)| |Memory (System)|8.00GB (6.40GB free)| |Process Argv|C:\Program Files\Microsoft VS Code\Code.exe| |Screen Reader|no| |VM|100%| </details><details><summary>Extensions (18)</summary> Extension|Author (truncated)|Version ---|---|--- rainbow-brackets|2gu|0.0.6 npm-intellisense|chr|1.3.0 vscode-markdownlint|Dav|0.13.0 vscode-eslint|dba|1.4.7 githistory|don|0.4.0 vscode-generate-getter-setter|DSK|0.4.2 gitlens|eam|8.0.2 EditorConfig|Edi|0.12.1 tslint|eg2|1.0.28 LogFileHighlighter|emi|2.1.1 json-tools|eri|1.0.2 docthis|joe|0.6.0 git-indicators|lam|2.1.1 quicktype|qui|8.5.86 linter-xo|sam|2.1.3 json-schema-validator|tbe|0.1.0 better-align|wwm|1.1.6 markdown-all-in-one|yzh|1.0.5 (2 theme extensions excluded) </details> Reproduces without extensions
bug,proxy
low
Critical
301,881,165
rust
Command.spawn posix_spawn support for NetBSD / DragonFlyBSD.
Issue #48624 is adding support for the more efficient `posix_spawn` in some cases of `Command.spawn`. The `posix_spawn` of NetBSD and DragonFlyBSD supports returning _ENOENT_ directly so these platforms can grow support for it. They just need the libc bindings (like in https://github.com/rust-lang/libc/commit/92d50c9c79e7038db097344d8f00e774277ee519) and then an update to libstd's Command target list as done in #48624. OpenBSD does not support this though as their implementation uses `fork` rather than `vfork` and lacks a communication back to the parent about the `exec` failure. This test .c file was used to check for the needed support: ``` #include <assert.h> #include <stdlib.h> #include <errno.h> #include <spawn.h> extern char **environ; int main(void) { pid_t pid; char *pargv[] = {"/nonexistent", NULL}; int status = posix_spawn(&pid, "/nonexistent", NULL, NULL, pargv, environ); assert(status == ENOENT); return 0; } ``` A failing assert means the platform cannot grow `posix_spawn` support.
A-runtime,C-enhancement,O-netbsd,O-dragonfly,T-libs
low
Critical
301,918,072
rust
Query Parallelization Tracking Issue
This issue is a sub-issue of https://github.com/rust-lang/rust/issues/48547: it tracks the in-progress effort to parallelize rustc across queries. This work is being spearheaded by @zoxc. ### Goals Allow rustc to execute queries in parallel with one another. Enable the use of rayon or other tools for intra-query parallelization as well. See [this internals thread](https://internals.rust-lang.org/t/parallelizing-rustc-using-rayon/6606) for more information. ### Overview of the plan - [x] Make all types in the compiler send and sync by landing @Zoxc's existing work - The existing work largely replaces `RefCell` usage with `Mutex` - In many cases, [the interior mutability itself can be refactored away](https://internals.rust-lang.org/t/parallelizing-rustc-using-rayon/6606/24?u=nikomatsakis). The plan is do this as a second step. - [x] Set up CI so this gets some testing (https://github.com/rust-lang/rust/issues/48607) - [x] Implement a thread pool that is integrated with Cargo's jobserver (https://github.com/rust-lang/rust/pull/56946) - [ ] Build list of all sources of shared state in the compiler by auditing PR history https://github.com/rust-lang/rust/issues/63643 - [ ] Audit each source of shared state, leading to either refactoring to reduce use of shared state, or persistent documentation covering invariants, atomicity, and lock orderings ### Pending refactorings - [ ] [ParseSess.included_mod_stack](https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/mod.rs#L51) might be problematic - [ ] Ensure that the err_count() API is not used to discover if errors happen in parallel code (cc #49737) - [ ] Find a way to order error messages deterministically so that if a query depends on another query, its error messages appear after the other query (cc #49737) - [ ] Make `mk_attr_id` use a scoped thread local or make it part of `ParseSess` - [ ] See if `GlobalCtxt.rcache`, `OnDiskCache.file_index_to_file` and `OnDiskCache.synthetic_expansion_infos` are faster 
as thread-locals - [ ] Review usages of `Session.lint_store` and `Session.buffered_lints` - [ ] Fix https://github.com/rust-lang/rust/issues/50507, as it may cause issues with parallel rustc - [ ] Find a way to deal with marking attributes as used - [ ] Ensure Rayon executes all remaining work when we panic inside the thread pool ### Completed refactorings - [x] Remove HIR inlining (https://github.com/rust-lang/rust/issues/49690) - [x] [Make GlobalCtxt implement Sync](#50108) - [x] Use scoped thread locals instead of globals, https://github.com/rust-lang/rust/pull/46193 - [x] Misc fixes required to use a rayon thread pool, https://github.com/rust-lang/rust/pull/46564 - [x] Ensure metadata [loaded by LlvmMetadataLoader](https://github.com/rust-lang/rust/blob/master/src/librustc_trans/metadata.rs#L25) does not get freed on the wrong thread - [x] [CurrentDepGraph.task_stack](https://github.com/rust-lang/rust/blob/master/src/librustc/dep_graph/graph.rs#L764) has to be made a `QueryJob` field. (implemented in https://github.com/rust-lang/rust/pull/49732) - [x] Check that optimization fuel is not used with parallel queries - [x] Review `libproc_macro` for issues, Find out which types should be `Send`, `Sync`. Deal with `Deref` impls for `Symbol`. 
(https://github.com/rust-lang/rust/pull/49219) - [x] [Run CI with cfg(parallel_queries)](https://github.com/rust-lang/rust/issues/48607) - [x] Refactor away `CStore::next_crate_num` - [x] Make `FileMap.lines`, `FileMap.multibyte_chars`, and `FileMap.non_narrow_chars` immutable (implemented in #50997) - [x] When executing queries, instead of keeping around a lock to the query map, create a job object immediately (https://github.com/rust-lang/rust/pull/50102) - [x] Ensure all diagnostics are emitted before returning from `DepGraph.try_mark_green` - [x] Refactor `DepGraphData.previous_work_products` so it becomes immutable (#50501) (implemented in #50524) - [x] Remove `DepGraphData.work_products` by threading the value through `save_trans_partition() -> copy_module_artifacts_into_incr_comp_cache() -> OngoingCrateTranslation::join()` (#50500) (implemented in #50885) - [x] The `TransitiveRelation` doesn't really need to use a `RefCell`. In the future, it won't even be shared, but regardless the caching scheme [could be reworked to avoid `RefCell`](https://github.com/rust-lang/rust/pull/48587#issuecomment-369336651) (implemented in https://github.com/rust-lang/rust/pull/99702). - [x] Refactor `GlobalCtxt::layout_depth` so it does not need global mutable state (https://github.com/rust-lang/rust/issues/49735) (implemented in https://github.com/rust-lang/rust/pull/100748)
C-enhancement,I-compiletime,T-compiler,C-tracking-issue,WG-compiler-performance,A-parallel-queries,C-optimization
medium
Critical
301,924,894
youtube-dl
VEVO: Server returned 403 Forbidden (access denied)- but plays / works with other files.
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.03.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.03.03** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser ### What is the purpose of your *issue*? - [x ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` youtube-dl -v -f5086-2 --output "Gorgon City - Motorola GBUV71800186.mp4" http://hls-video.vevo.com/v5/GBUV71800186/hls/51a3e611-012b-4733-91fc-97c38383a3b2/index.m3u8 [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v', u'-f5086-2', u'--output', u'Gorgon City - Motorola GBUV71800186.mp4', u'http://hls-video.vevo.com/v5/GBUV71800186/hls/51a3e611-012b-4733-91fc-97c38383a3b2/index.m3u8'] [debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2018.03.03 [debug] Python version 2.7.10 (CPython) - Darwin-16.7.0-x86_64-i386-64bit [debug] exe versions: ffmpeg 3.3.2, ffprobe 3.3.2 [debug] Proxy map: {} [generic] index: Requesting header [generic] index: Downloading m3u8 information [debug] Invoking downloader on u'http://hls-fas.vevo.com/v5/GBUV71800186/hls/51a3e611-012b-4733-91fc-97c38383a3b2/5200/GBUV71800186_1920x1080_h264_5200_aac_128.m3u8' [download] Destination: Gorgon City - Motorola GBUV71800186.mp4 [debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers 'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip, deflate Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0 (Chrome) ' -i 'http://hls-fas.vevo.com/v5/GBUV71800186/hls/51a3e611-012b-4733-91fc-97c38383a3b2/5200/GBUV71800186_1920x1080_h264_5200_aac_128.m3u8' -c copy -f mp4 'file:Gorgon City - Motorola GBUV71800186.mp4.part' ffmpeg version 3.3.2 Copyright (c) 2000-2017 the FFmpeg developers built with Apple LLVM version 8.1.0 (clang-802.0.42) configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame 
--enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
  libavutil      55. 58.100 / 55. 58.100
  libavcodec     57. 89.100 / 57. 89.100
  libavformat    57. 71.100 / 57. 71.100
  libavdevice    57.  6.100 / 57.  6.100
  libavfilter     6. 82.100 /  6. 82.100
  libavresample   3.  5.  0 /  3.  5.  0
  libswscale      4.  6.100 /  4.  6.100
  libswresample   2.  7.100 /  2.  7.100
  libpostproc    54.  5.100 / 54.  5.100
[http @ 0x7fb335c0ee40] HTTP error 403 Forbidden
http://hls-fas.vevo.com/v5/GBUV71800186/hls/51a3e611-012b-4733-91fc-97c38383a3b2/5200/GBUV71800186_1920x1080_h264_5200_aac_128.m3u8: Server returned 403 Forbidden (access denied)

ERROR: ffmpeg exited with code 1
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/local/bin/youtube-dl/__main__.py", line 19, in <module>
    youtube_dl.main()
  File "/usr/local/bin/youtube-dl/youtube_dl/__init__.py", line 471, in main
    _real_main(argv)
  File "/usr/local/bin/youtube-dl/youtube_dl/__init__.py", line 461, in _real_main
    retcode = ydl.download(all_urls)
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1989, in download
    url, force_generic_extractor=self.params.get('force_generic_extractor', False))
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 796, in extract_info
    return self.process_ie_result(ie_result, download, extra_info)
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 850, in process_ie_result
    return self.process_video_result(ie_result, download=download)
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1623, in process_video_result
    self.process_info(new_info)
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1896, in process_info
    success = dl(filename, info_dict)
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1835, in dl
    return fd.download(name, info)
  File "/usr/local/bin/youtube-dl/youtube_dl/downloader/common.py", line 364, in download
    return self.real_download(filename, info_dict)
  File "/usr/local/bin/youtube-dl/youtube_dl/downloader/external.py", line 57, in real_download
    self.get_basename(), retval))
  File "/usr/local/bin/youtube-dl/youtube_dl/downloader/common.py", line 166, in report_error
    self.ydl.report_error(*args, **kargs)
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 613, in report_error
    self.trouble(error_message, tb)
  File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 575, in trouble
    tb_data = traceback.format_list(traceback.extract_stack())
... <end of log>
```

---

### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):

- Single video: https://www.vevo.com/watch/gorgon-city/motorola-(official-video)/GBUV71800186

To grab the video listed above, I would get the m3u8, find the format using `-F`, and run:

```
youtube-dl -f5086-2 --output "Gorgon City - Motorola GBUV71800186.mp4" http://hls-video.vevo.com/v5/GBUV71800186/hls/51a3e611-012b-4733-91fc-97c38383a3b2/index.m3u8
```

Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for a site support request to be accepted, all provided example URLs must not violate any copyrights.

---

### Description of your *issue*, suggested solution and other information

The video linked above fails with the error shown in the log, and a few other VEVO videos behave the same way, though not all. The video plays correctly in every browser and every region, so this does not appear to be a geo-restriction issue. I am not sure why this happens with this particular video and a few others.
geo-restricted
low
Critical
301,932,451
flutter
How to do navigation inside tab pages
The home page of my app is a tab bar. From the tab pages I want to navigate to other screens, but the navigated pages should stay within the constraints of the tab page instead of taking up the full screen. Here is the code. Currently, Screen3 takes up the entire space, covering the bottom tab bar.

```dart
import 'package:flutter/material.dart';
import 'package:flutter/cupertino.dart';

void main() {
  runApp(new MyApp());
}

class MyApp extends StatelessWidget {
  static var routes = <String, WidgetBuilder>{
    '/screen3': (BuildContext context) => new Screen3(),
  };

  static Route<BuildContext> _getRoute(RouteSettings settings) {
    var builder = routes[settings.name];
    if (builder != null) {
      return new MaterialPageRoute(
        settings: settings,
        builder: builder,
      );
    }
    return null;
  }

  Widget build(BuildContext context) {
    var app = new MaterialApp(
      home: new HomeTabs(),
      onGenerateRoute: _getRoute,
    );
    return app;
  }
}

class HomeTabs extends StatefulWidget {
  HomeTabs() : super();

  @override
  _HomeTabsState createState() => new _HomeTabsState();
}

class _HomeTabsState extends State<HomeTabs> with SingleTickerProviderStateMixin {
  TabController controller;

  _HomeTabsState() {
    //controller = new TabController(length: 2, vsync: this);
  }

  @override
  Widget build(BuildContext context) {
    if (controller == null) {
      controller = new TabController(length: 2, vsync: this);
    }
    return new Scaffold(
      bottomNavigationBar: new Material(
        child: new TabBar(
          tabs: <Tab>[
            new Tab(icon: new Icon(Icons.person)),
            new Tab(icon: new Icon(Icons.email)),
          ],
          controller: controller,
        ),
        color: Colors.blue,
      ),
      body: new TabBarView(
        children: <Widget>[
          new Screen1(),
          new Screen2(),
        ],
        controller: controller,
      ),
    );
  }
}

class Screen1 extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new Scaffold( // 1
      appBar: new AppBar(
        title: new Text("Screen 1"), // screen title
      ),
      body: new Center(
        child: new RaisedButton(
          onPressed: () {
            Navigator.of(context).pushNamed("/screen3");
          },
          child: new Text("Go to Screen 3"),
        ),
      ),
    );
  }
}

class Screen2 extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new Scaffold( // 1
      appBar: new AppBar(
        title: new Text("Screen 2"), // screen title
      ),
      body: new Center(
        child: new RaisedButton(
          onPressed: () {
            Navigator.of(context).pushNamed("/screen3");
          },
          child: new Text("Go to Screen 3"),
        ),
      ),
    );
  }
}

class Screen3 extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new Scaffold( // 1
      appBar: new AppBar(
        title: new Text("Screen 3"), // screen title
      ),
      body: new Center(
        child: new RaisedButton(
          onPressed: () {
            Navigator.of(context).pop();
          },
          child: new Text("Back"),
        ),
      ),
    );
  }
}
```

<img width="403" alt="home" src="https://user-images.githubusercontent.com/5688107/36925940-1d977314-1e2a-11e8-885b-babab679eb11.png">
<img width="405" alt="screen3" src="https://user-images.githubusercontent.com/5688107/36925941-1daf2838-1e2a-11e8-984a-1225cefa474d.png">
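One common way to keep pushed routes inside a tab is to give each tab its own nested `Navigator`, so that `Navigator.of(context)` calls inside the tab resolve to the nested navigator rather than the root one, and the bottom `TabBar` stays visible. A minimal sketch under that assumption (the `TabNavigator` helper name is hypothetical; `Screen3` refers to the class in the code above):

```dart
import 'package:flutter/material.dart';

// Hypothetical helper: wraps one tab's content in its own nested Navigator,
// so routes pushed from inside the tab are confined to the tab's area.
class TabNavigator extends StatelessWidget {
  TabNavigator({this.rootScreen});

  // Builder for the screen shown when the nested navigator is at its
  // initial ('/') route.
  final WidgetBuilder rootScreen;

  @override
  Widget build(BuildContext context) {
    return new Navigator(
      onGenerateRoute: (RouteSettings settings) {
        return new MaterialPageRoute(
          settings: settings,
          builder: settings.name == '/screen3'
              ? (BuildContext context) => new Screen3()
              : rootScreen,
        );
      },
    );
  }
}
```

Each `TabBarView` child would then be something like `new TabNavigator(rootScreen: (context) => new Screen1())`. With this setup, `Navigator.of(context).pushNamed("/screen3")` from inside a tab pushes onto the nested navigator (leaving the tab bar on screen), and `Navigator.of(context).pop()` in `Screen3` returns to that tab's root.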
framework,d: api docs,f: routes,P3,team-framework,triaged-framework
medium
Major