id: int64 (393k – 2.82B)
repo: stringclasses (68 values)
title: stringlengths (1 – 936)
body: stringlengths (0 – 256k)
labels: stringlengths (2 – 508)
priority: stringclasses (3 values)
severity: stringclasses (3 values)
281,657,067
pytorch
[Proposal] Consistent `batch_first` effect for RNN modules
In the current RNN implementation (including `nn.RNN`, `nn.GRU`, `nn.LSTM`), `batch_first` only affects the dimension order of the **input** and **output** variables. But there are also **hidden states** returned from the RNN, and their *batch* dimension is always placed second regardless of the `batch_first` argument. Could someone tell me the advantages of such odd behaviour? Or I can help to patch this. cc @ezyang @gchanan @zou3519 @bdhirsh @jlin27 @mruberry @albanD
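The asymmetry can be stated concretely. The sketch below is plain Python with no PyTorch dependency; it just encodes the shape conventions from the `nn.LSTM` documentation, and shows that the hidden state keeps batch in the second dimension either way:

```python
def lstm_shapes(seq_len, batch, input_size, hidden_size, num_layers, batch_first):
    """Documented nn.LSTM tensor shapes; h_n/c_n ignore batch_first."""
    if batch_first:
        inp = (batch, seq_len, input_size)
        out = (batch, seq_len, hidden_size)
    else:
        inp = (seq_len, batch, input_size)
        out = (seq_len, batch, hidden_size)
    # h_n / c_n are ALWAYS (num_layers, batch, hidden_size),
    # regardless of the batch_first flag -- the behavior this issue questions
    hidden = (num_layers, batch, hidden_size)
    return inp, out, hidden
```

With `batch_first=True` the input and output move batch to dimension 0, but the hidden-state shape is identical to the `batch_first=False` case.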
module: docs,module: nn,module: rnn,triaged
medium
Major
281,772,867
pytorch
ATen explicitly differentiated native function resolution hazard (call is ambiguous)
Functions in ATen either can have derivatives defined for them, or they can defer to the implementations of derivatives of the methods they invoke. Suppose you write this (assume f and g are functions): ``` #include <ATen/ATen.h> namespace at { namespace native { Tensor f(at::Tensor x) { // some primitive implementation (f will need an explicit derivative) } Tensor g(at::Tensor x) { return f(x); } }} ``` If you have ever written this, you may notice that compilation fails, with the compiler complaining that 'f is ambiguous'. Puzzled, you might slap on the prefix `at::native::` to shut up the compiler. At time of writing, I see `narrow` dispatches to `at::native::slice`. The hazard is this: *if* g was intended to be a non-primitive function whose derivative was to be deferred to f's derivative implementation, resolving the ambiguity to `at::native::` will simply not work, because you will fail to actually go through the variable unpacking/wrapping code that is done by the `at::` version of the function. (In the particular case of `narrow`, `slice` happens to be a non-primitive native function, solving the problem.) This issue will become more pressing as we add more "primitive" native functions to ATen, e.g., functions which must have their derivatives expressed directly. EDIT: there is also a case where this hazard will silently introduce a performance problem to your code: 1. You use `at::native` to call a non-primitive ATen native function, BUT 2. The non-primitive ATen function has a more efficient derivative defined for it. If you call the `at::native` version, you will fall back to the less efficient derivative. cc @ezyang @bhosmer @smessmer @ljk53
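A hypothetical Python analogy may make the hazard concrete. The decorator below is a stand-in for the variable unpacking/wrapping that the `at::` entry point performs; none of these names exist in ATen:

```python
def with_autograd(fn):
    """Stand-in for the at:: wrapper that records ops for differentiation."""
    def wrapped(x):
        wrapped.tape.append(fn.__name__)  # bookkeeping done only by the wrapper
        return fn(x)
    wrapped.tape = []
    return wrapped

def _f_native(x):        # plays the role of at::native::f (the raw kernel)
    return x * 2

f = with_autograd(_f_native)  # plays the role of at::f (the public entry)

def g_via_native(x):
    return _f_native(x)  # compiles fine, but skips the wrapper: nothing recorded

def g_via_public(x):
    return f(x)          # goes through the wrapper as intended
```

Resolving the ambiguity by calling `_f_native` directly produces the right value but silently bypasses the bookkeeping, which is exactly the failure mode described above.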
module: internals,triaged
low
Major
281,791,774
rust
Unsatisfied trait bound for inner type in result leads to imprecise error message
Given a function that returns an instance of a generic expecting an inner type to implement a trait but that bound is not satisfied, the compiler unhelpfully points to the entire function and does not specify which inner type is the root of the problem. Sample code ([full error case on git](https://git.neosmart.net/mqudsi/futuretest/src/branch/rust-46711)): ```rust fn test() -> MapErr<(), ()> { future::ok(()) .map_err(|String| ()) } ``` `MapErr` is a type that expects its first type parameter to implement the `Future` trait, `()` does not do so. The compiler generates the following: ```rust Compiling futuretest v0.1.0 (file:///mnt/c/Users/Mahmoud/git/futuretest) error[E0277]: the trait bound `(): futures::Future` is not satisfied --> src/main.rs:17:1 | 17 | / fn test() -> MapErr<(), ()> { 18 | | future::ok(()) 19 | | .map_err(|String| ()) 20 | | } | |_^ the trait `futures::Future` is not implemented for `()` | = note: required by `futures::MapErr` error: aborting due to previous error error: Could not compile `futuretest`. To learn more, run the command again with --verbose. ``` It is extremely unclear from the resulting error message what the problem is. The text specifies that `MapErr` requires `()` to implement `futures::Future`, but highlights the entire function instead of (ideally) just the first type parameter in the function return type. In this specifically chosen case, the error complains about `()` and `MapErr` - but trickily, `()` is just fine for the second type parameter to `MapErr<(), ()>`, it's the first `()` that needs to implement `futures::Future`. So to recap: * Error should point to the return type declaration instead of the entire function * Error needs to clarify which inner type must satisfy the trait in question
C-enhancement,A-diagnostics,T-compiler
low
Critical
281,840,489
bitcoin
Could the wallet count unconfirmed non-mempool change?
Just writing down some thoughts on this. I find it quite counter-intuitive that if you have a 1 BTC output and make a transaction spending 0.01 BTC and sending 0.99 BTC back to yourself in change, that unless that transaction is in your mempool, your balance drops from 1 to 0. Unconfirmed change in the mempool appears in available balance (assuming we can spend 0-conf change), but if it is not in the mempool it is not reflected at all, but the output spent in the transaction is still spent. This can occur because walletbroadcast=0 and you're delaying broadcast or using another broadcast method or because the initial transaction was evicted. In either case though it doesn't make sense to count the entire input as spent but not credit the change output. I believe it would make sense to include the unconfirmed non-mempool change in the pending balance. Unfortunately this is non-trivial. This can best be seen by the example of two transactions in the wallet that spend the same input. Naively you'd double count the change, which is clearly wrong. It's non-obvious which change you should count, nor how you would go about implementing that if you did have a plan. One idea would be if you had a mempool package acceptance test (à la @sdaftuar) you could try to add all non-mempooled wallet transactions incrementally in time order, and take the resulting putative mempool state as what would count. But it's starting to get a bit cumbersome. Another idea would be to check the unconfirmed non-mempool change for potential conflicts and only count the smallest change. There are other aspects of trickiness such as mixed debit transactions.
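The "count only the smallest change among conflicts" idea can be sketched in a few lines. This is a toy Python model with amounts in satoshis; the data layout (one spent outpoint per transaction) is invented for illustration and sidesteps the mixed-debit trickiness mentioned above:

```python
def naive_pending_change(txs):
    """Naive sum: double-counts change from conflicting transactions."""
    return sum(t["change"] for t in txs)

def conservative_pending_change(txs):
    """Among transactions spending the same outpoint, credit only the
    smallest change, since at most one of the conflicts can confirm."""
    smallest = {}
    for t in txs:
        k = t["input"]  # toy model: each tx spends a single outpoint
        smallest[k] = min(smallest.get(k, t["change"]), t["change"])
    return sum(smallest.values())
```

For two conflicting transactions with 0.99 and 0.98 BTC change, the naive sum credits 1.97 BTC while the conservative version credits only 0.98 BTC.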
Brainstorming,Wallet
low
Major
281,890,693
pytorch
Variable outputs of stochastic functions should never require grad
I don't think we should bother to implement reparametrized gradients for Variable functions (`torch.distributions` will have them + I don't think it's possible for bernoulli), and if we keep setting these outputs to `requires_grad=True` and pushing back zeros some people might be confused. Also, this doesn't match the legacy behavior, and can make some in-place checks that passed before fail. Let's just make them non-differentiable... cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw
module: distributions,triaged
low
Minor
281,909,390
angular
Preload modules based on canLoad guard result
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> Currently, when using a preloading strategy (like `PreloadAllModules`), the preloader does not process routes with a `canLoad` property and therefore does not preload them. There is no indication in the [`CanLoad` interface documentation](https://angular.io/api/router/CanLoad) that this will happen. https://github.com/angular/angular/blob/0f5c70d563b6943623a5940036a52fe077ad3fac/packages/router/src/router_preloader.ts#L112-L114 The [router guide](https://angular.io/guide/router#canload-blocks-preload) does note this behavior and that it is by design. However, it does not give any indication as to why. > The PreloadAllModules strategy does not load feature areas protected by a CanLoad guard. This is by design. The language in the guide also isn't very clear. When I originally read that part of the guide, I misunderstood "areas protected by a CanLoadGuard" as only referring to areas where the `canLoad` guard check returned false. ## Expected behavior <!-- Describe what the desired behavior would be. --> Preloaders should decide whether to preload based on the boolean result of the `canLoad` guard (or the boolean result of the observable/promise). If this is not possible, it should be more clear that `canLoad` negates any preloading strategy for that module. 
This could be through console warnings and/or better documentation. ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> - The `canLoad` guard allows preventing navigation before a lazy loaded module is downloaded/bootstraped - Preloading requires "loading" a module, therefore a `canLoad` option still makes logical sense - The suggested use of `canActivate` over `canLoad` when preloading wastes network/memory/cpu resources - The disabling of preloading by the `canLoad` option happens silently - `canLoad` returns an observable or promise so it can delay loading for an async task (i.e. user login) - custom preloading strategies are slightly harder to implement than a `canLoad` hook - custom preloading can only be configured on a root or lazy loaded module level - `PreloadAllModules` could be used in concert with `canLoad` for a simple selective loading strategy ## Environment <pre><code> Angular version: 5.0.3 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [x] Chrome (desktop) version 63.0.3239.84 (Official Build) (64-bit) - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [x] IE version 11 - [ ] Edge version XX For Tooling issues: - Node version: v8.9.1 <!-- run `node --version` --> - Platform: Windows 7 64-bit <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> - Angular CLI: 1.5.4 </code></pre>
feature,freq2: medium,area: router,router: lazy loading,feature: insufficient votes,feature: votes required
low
Critical
281,942,059
opencv
BOWKMeansTrainer Java API SIGSEGV on cluster()
##### System information (version) - OpenCV => 3.3.1 - Operating System / Platform => Debian Linux 64bit - Compiler => GCC ##### Detailed description I am trying to create a BOW dictionary from a pre-computed set of SIFT descriptors. The SIFT descriptors are limited to max 1750 features. Regardless of whether I use add() to add the descriptor, or whether I create my own Mat and call cluster with this mat both produce the same error. The output from my program is: ```Creating BOVW + Database size = 16296 + Loading IDs + Nominal training image count = 1000 + Actual expected training image count = 1018 + Step size = 16 + Loading Descriptors + Loaded 0 (0.00%) + Loaded 200 (19.65%) + Loaded 400 (39.29%) + Loaded 600 (58.94%) + Loaded 800 (78.59%) + Loaded 1000 (98.23%) + TOTAL Training image descriptors loaded = 1017 (ignored 1) + Dictionary size = 10000 + Clustering # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007f5bf827565c, pid=7558, tid=0x00007f5c585e9700 # # JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build 1.8.0_151-8u151-b12-1-b12) # Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [opencv-3.3.1-linux_x86-64.so+0x3b165c] Java_org_opencv_features2d_BOWKMeansTrainer_cluster_11+0x1c # # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again # # An error report file with more information is saved as: # /home/barrypearce/workspace/imgtestrun/bsip-imgtest-1.0/hs_err_pid7558.log # # If you would like to submit a bug report, please visit: # http://bugreport.java.com/bugreport/crash.jsp # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. 
# real 0m48.952s user 0m7.493s sys 0m0.517s ``` ##### Steps to reproduce I have a set of 16296 images where the keypoints and descriptors have been detected and extracted. They are then stored on disk. I have searches running on these which are successful and I have used the Mat serialisation code over 6 million times and current search code in the same program works without error. The java code is: ```BOWKMeansTrainer trainer = new BOWKMeansTrainer(DICTIONARY_SIZE, new TermCriteria(TermCriteria.COUNT, 100, 0.001), 1, Core.KMEANS_PP_CENTERS); int selector = 0; int descriptorCount; int ignoredCount = 0; Mat features = new Mat(); System.out.println(" + Loading Descriptors"); for (descriptorCount = 0; descriptorCount < trainingCount; descriptorCount++) { ObjectInputStream dataIn = new ObjectInputStream(new GZIPInputStream(new FileInputStream(Paths.get(idxDir.toString(), ids[selector] + ".sift.gz").toString()))); // // Load keypoints and descriptors for index. // MatOfKeyPoint idxKeypoints = ImageOpenCVUtils.deSerialiseKeypointMat(dataIn); if (idxKeypoints.total() != 0) { // Mat descriptors = ImageOpenCVUtils.deSerialiseMat(dataIn); // features.push_back(descriptors); trainer.add(ImageOpenCVUtils.deSerialiseMat(dataIn)); } else { // reset the descriptor count to avoid a hole. descriptorCount--; ignoredCount++; // System.out.println(" - Ignored - no keypoints"); } idxKeypoints.release(); idxKeypoints = null; dataIn.close(); if ((descriptorCount % 200) == 0) { System.out.printf(" + Loaded %d (%.2f%%)\n", descriptorCount, ((double)descriptorCount / (double)trainingCount) * 100.0); System.gc(); System.out.println("[Used:" + ((Runtime.getRuntime().totalMemory()/1024) - (Runtime.getRuntime().freeMemory()/1024)) + "K] [" + "[Free:" + (Runtime.getRuntime().freeMemory()/1024) + "K] [" + "[Total:" + (Runtime.getRuntime().totalMemory()/1024) + "K]"); System.out.flush(); } selector += stepSelector; if (selector >= dbSize) { // the selector doesnt quite match the DB size. 
We have enough regardless. break; } } System.out.printf(" + TOTAL Training image descriptors loaded = %d (ignored %d)\n\n", descriptorCount, ignoredCount); System.out.printf(" + Dictionary size = %d\n", DICTIONARY_SIZE); System.out.println(" + Clustering"); // Mat dictionary = trainer.cluster(features); Mat dictionary = trainer.cluster(); ```
priority: low,category: features2d,category: java bindings
low
Critical
281,946,396
kubernetes
Deployment enters creation hot-loop when rs field is mutated by API server
/kind bug ### What Happened A deployment enters a create-new-replicaset hot-loop. In deployment's spec.template: ```yaml volumes: - emptyDir: sizeLimit: "0" name: foo ``` In its rs's spec.template: ```yaml volumes: - emptyDir: {} name: foo ``` This will happen when you create a Deployment that specifies `volumes.EmptyDir` in 1.7.0 - 1.7.5, and then upgrade the cluster to >= 1.8.0, with LocalStorageCapacityIsolation disabled. ### Root Cause Some background information: 1. In pod spec, a new field `volumes.EmptyDir.sizeLimit` was introduced in 1.7.0, it's an optional field, but incorrectly set as `resource.Quantity` type. 1. To fix the above issue, it's changed to pointer type `*resource.Quantity` later (#50163) in 1.8.0 and backported to 1.7.6. 1. In 1.8.0, this field is set to `nil` if the LocalStorageCapacityIsolation feature isn’t enabled: https://github.com/kubernetes/kubernetes/blob/v1.8.0/pkg/api/pod/util.go#L242 (this fix will soon be cherrypicked to the next 1.7.x release). 1. Deployment creates a new rs by copying its own template to the rs. Deployment finds a new rs by comparing the deployment's template against the replicasets it owns. If you create a Deployment that specifies `volumes.EmptyDir` in 1.7.0 - 1.7.5, it will incorrectly set sizeLimit to "0" by default, because of **1** mentioned above. ```yaml volumes: - emptyDir: sizeLimit: "0" name: foo ``` If you then upgrade the cluster to 1.8.0, the `sizeLimit: "0"` in rs will be cleared, because of **3** mentioned above. Deployment cannot find its new replicaset because of the template change, and continues creating new replicasets, which will still have a different template after creation. ### Solution A possible solution is to implement `Create()` in dry-run mode, and have deployments use the dry-run-created replicaset template (instead of the deployment template) to compare and find the current replicaset. This is a long-term solution. 
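The mismatch can be reproduced with plain dictionaries. This is a toy model of the template comparison; `clear_size_limit` mimics the API-server defaulting in pkg/api/pod/util.go, not real Kubernetes code:

```python
import copy

def clear_size_limit(template):
    """Mimics the API server clearing emptyDir.sizeLimit when the
    LocalStorageCapacityIsolation feature gate is off."""
    t = copy.deepcopy(template)
    for vol in t.get("volumes", []):
        if "emptyDir" in vol:
            vol["emptyDir"].pop("sizeLimit", None)
    return t

# What the Deployment's spec.template still carries after upgrade:
deployment_template = {"volumes": [{"name": "foo", "emptyDir": {"sizeLimit": "0"}}]}
# What the stored ReplicaSet's template looks like after the field was cleared:
stored_rs_template = clear_size_limit(deployment_template)
```

Since `deployment_template != stored_rs_template`, a naive equality check never finds the "new" ReplicaSet; comparing `clear_size_limit(deployment_template)` against the stored template matches again, which is the essence of the dry-run idea.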
A possible short term solution is to implement a hack that clears Deployment's `volumes.EmptyDir.sizeLimit` with ReplicaSets. The code here should do the trick: https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/registry/extensions/deployment/strategy.go#L90-L91 except that the Deployment needs to be updated to trigger this cleanup code. ### Workaround For someone who hit this issue, updating a deployment will trigger https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/registry/extensions/deployment/strategy.go#L90-L91 and thus solve the problem automatically. @kubernetes/sig-apps-bugs @liggitt
kind/bug,priority/important-soon,area/reliability,sig/apps,lifecycle/frozen
medium
Critical
281,966,834
flutter
We should expose the locale's preferred baseline
...once we support multiple baselines, anyway.
c: new feature,framework,a: internationalization,a: typography,P3,team-framework,triaged-framework
low
Minor
281,980,205
youtube-dl
Add support for Adobe Pass Auth TV Provider YouTubeTV
--- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.12.14*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.12.14** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [x] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` you http://watchdisneyxd.go.com/atomic-puppet/video/vdka4074861/01/26-finale-a-finale-b --ap-mso YouTubeTV --ap-username PRIV --ap-password PRIV -v [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['http://watchdisneyxd.go.com/atomic-puppet/video/vdka4074861/01/26-finale-a-finale-b', '--ap-mso', 'YouTubeTV', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '-v'] Usage: you [OPTIONS] URL [URL...] you: error: Unsupported TV Provider, use --ap-list-mso to get a list of supported TV Providers ``` --- Would like to request support for YouTubeTV as an Adobe Pass MSO option. Running --ap-mso does not recognize YouTubeTV as an option that can be used. Going by the providers list https://sp.auth.adobe.com/adobe-services/config/ABC it should be able to use the service. If an account is needed, I do have an invite so access can be had.
tv-provider-account-needed
medium
Critical
282,026,478
kubernetes
Allow hot disabling of liveness probes
**Is this a BUG REPORT or FEATURE REQUEST?**: /kind feature **What happened**: From time to time, liveness probes do not behave as expected by the person that configured them and some conditions external to the container may make them fail (e.g. dependency on a database) and provoke endless restart loops. This makes it hard to debug those containers and/or get the system back to normal. **What you expected to happen**: I'd like to be able to disable a liveness probe on an existing pod without replacing it. Ideally, this would apply to running containers, without waiting for them to be restarted. **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: This was suggested in the past (without the dynamic part): > At some point we'll probably want to add an enable/disable boolean, since it's convenient to be able to disable liveness probes without losing the parameter settings, but that's less critical than the above. cf. https://github.com/kubernetes/kubernetes/issues/12866#issue-101699336
priority/backlog,sig/node,kind/feature
high
Critical
282,229,325
go
net/mail: AddressList doesn't decode rfc2047 encoded words inside quotes
net/mail `AddressList` doesn't decode rfc2047 encoded words if they are inside quotes. The RFC mentions "An 'encoded-word' MUST NOT appear within a 'quoted-string'." Now before you close this bug, saying it is working as intended, let me try to convince you otherwise. A lot of clients break the rule above, and most services/libraries are programmed to work around it. For example **Gmail** will happily decode the string even if it is inside quotes. The way I see it, there are two paths we can follow: 1. Stick to RFC 2. Try to be compatible with the majority of the libraries/services out there. We can be strict and choose 1, in which case the library will not be of much use, since users of the software will complain. Or we can be pragmatic and choose 2. There is not much risk in decoding Q encoded words inside quotes. I hope you choose (2). Repro: https://play.golang.org/p/etkJkTfs3Q ### What version of Go are you using (`go version`)? 1.9 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? darwin/amd64 ### What did you do? https://play.golang.org/p/etkJkTfs3Q ### What did you expect to see? Decoded name ### What did you see instead? Undecoded name
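For comparison, Python's standard library already takes the pragmatic path (option 2): `email.header.decode_header` decodes RFC 2047 encoded words even when they appear inside a quoted-string. A quick sketch:

```python
from email.header import decode_header

def decode_display_name(raw):
    """Decode RFC 2047 encoded words in a display name."""
    return "".join(
        part.decode(charset or "ascii") if isinstance(part, bytes) else part
        for part, charset in decode_header(raw)
    )
```

An unquoted encoded word decodes as expected, and the quoted form `'"=?utf-8?q?...?="'` still yields the decoded name (surrounded by the literal quote characters) rather than the raw encoded word.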
help wanted,NeedsInvestigation
low
Critical
282,248,405
kubernetes
umbrella issue for HA (replicated) API Server
API Server relies on etcd for HA. But it also has internal objects/states that need to be initialized based on API objects in etcd. Example of that is CRDs that need a http handler. This issue is to keep track of all states we have in API server and make sure all of them are HA ready. Note that most of them are already HA. - [ ] CRD http handler: Coordination required between API servers to mark a CRD as ready (#57042) - [ ] Admission Webhooks: All API servers should became aware of webhooks before they mark as ready. Webhooks should be activated when all API servers are aware of them. (need an issue) - [ ] Audit api server code for more issues @kubernetes/sig-api-machinery-misc
sig/api-machinery,kind/feature,lifecycle/frozen
low
Major
282,249,034
go
proposal: cmd/vet: detect values assigned but not used
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go1.9 darwin/amd64 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" on MacOS X Sierra ### What did you do? I wrote the code below and then ran go vet; err is never used or checked. ``` defer resp.Body.Close() body, err := ioutil.ReadAll(resp.Body) return body, resp.StatusCode, nil } ``` ### What did you expect to see? I expect that when I run $> go vet I should see an error notification that this err variable is never used. ### What did you see instead? None.
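The check being proposed is essentially "assigned but never read". A minimal sketch of that analysis, using Python's `ast` module as a stand-in for what vet would do on Go's AST (it ignores scoping and shadowing, which a real checker must handle):

```python
import ast

def assigned_but_unused(source):
    """Report names that are assigned (including tuple unpacking) but never read."""
    tree = ast.parse(source)
    stored, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                stored.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    # "_" is conventionally a deliberate discard, like Go's blank identifier
    return sorted(n for n in stored - loaded if n != "_")
```

On the motivating example, where `body, err := ...` is followed by a use of `body` only, the analysis flags exactly `err`.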
Proposal,NeedsInvestigation,FeatureRequest,Proposal-Hold,Analysis
low
Critical
282,254,136
TypeScript
Unhelpful `--strictFunctionTypes` error when first method is `self(): this;`
**TypeScript Version:** 2.7.0-dev.20171214 **Code** ```ts declare class C<T> { self(): this; covariant(): T; contravariant: (t: T) => void; m(other: C<{}>): void; } new C<number>().m(new C<number>()); ``` **Expected behavior:** Error that `C<number>` isn't `C<{}>` because `C<{}>`'s `contravariant` can accept `(t: {})` but `C<number>` can only accept `(t: number)`. **Actual behavior:** ```ts src/a.ts(8,19): error TS2345: Argument of type 'C<number>' is not assignable to parameter of type 'C<{}>'. Types of property 'self' are incompatible. Type '() => C<number>' is not assignable to type '() => C<{}>'. Type 'C<number>' is not assignable to type 'C<{}>'. ```
Suggestion,In Discussion,Domain: Error Messages
low
Critical
282,257,399
vscode
[Feature] Local Workspace settings
I would like to be able to configure settings that are specific to my user and to a particular workspace. So, 'local' workspace settings? So there would be three locations for settings (for a single-folder workspace): * user settings * `${workspaceRoot}/.vscode/settings.json` * `${workspaceRoot}/.vscode/settings.local.json` This way, I can add `.vscode/settings.json` to git, and share project settings such as "exclude node_modules" but I can gitignore `.vscode/settings.local.json` and add things specific to that project that I don't want to share, such as git autofetch. I originally thought to suggest `.vscode/settings.user.json`, similar to how VS Pro handles similar configuration, but that might be confusing.
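The proposed three-layer lookup is a simple cascade. This is a hypothetical sketch of the merge order; `settings.local.json` is the file name proposed in this issue, not an existing VS Code feature:

```python
def effective_settings(user, workspace, local):
    """Later layers override earlier ones:
    user settings < settings.json < settings.local.json (proposed)."""
    merged = dict(user)
    merged.update(workspace)  # shared, checked-in project settings
    merged.update(local)      # per-user, gitignored overrides
    return merged
```

A gitignored local layer lets `git.autofetch` stay personal while `files.exclude` remains shared with the team.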
feature-request,config
high
Critical
282,264,901
pytorch
Fused RNN refactor plan
This ticket is to track our plan to refactor the fused RNN API (which makes use of the CuDNN implementation of RNNs). **Why is this difficult?** There are a number of factors which make fused RNNs unusual, compared to most of the other differentiable operations in PyTorch * It requires an unusually large, structured series of weights. Most differentiable operators have a fixed number of weights, but an RNN for an entire sequence must have weights for every layer of the RNN. To make matters worse, each layer needs not one but two tensors; the weight and the bias. No other operator in PyTorch behaves like this. * CuDNN requires these weights and inputs to be packed in a particular way. The required packing operations are frequently reported by users as an extremely confusing aspect of PyTorch. * Not only do the weights vary depending on the type of RNN, so do the hidden tensors. If you are an LSTM, you have both hx and cx; for other RNNs, only hx is needed. * RNN with dropout is a stateful API, which requires dropout descriptors to be passed between invocations to handle randomness. **What do we want to do?** Here are the desired goals of the RNN refactor: * Make CuDNN RNN available from ATen **Design ideas.** * The ATen API will take two tensors, `hx` and `cx`, with `cx` being undefined tensor for non-LSTM networks. cc @csarofeen @ptrblck
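The "unusually large, structured series of weights" has a regular shape: per layer (and direction), a weight and a bias each for the input-hidden and hidden-hidden connections. A sketch that enumerates them, mirroring PyTorch's flat parameter naming (`weight_ih_l{k}` etc.; the function itself is illustrative, not ATen code):

```python
def rnn_param_names(num_layers, bias=True, bidirectional=False):
    """Enumerate per-layer RNN parameter names in PyTorch's naming scheme."""
    names = []
    suffixes = [""] + (["_reverse"] if bidirectional else [])
    for layer in range(num_layers):
        for s in suffixes:
            names += [f"weight_ih_l{layer}{s}", f"weight_hh_l{layer}{s}"]
            if bias:
                names += [f"bias_ih_l{layer}{s}", f"bias_hh_l{layer}{s}"]
    return names
```

Even a modest 2-layer RNN carries 8 separate parameter tensors, which is why CuDNN's requirement to pack them into one contiguous buffer is so confusing to users.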
module: cudnn,triaged
low
Major
282,277,859
go
doc: explain what custom pprof profiles are and why they are useful
The [Diagnostics](https://tip.golang.org/doc/diagnostics.html) page briefly mentions custom profiles but fails to give any introduction for those who don't know why custom profiles are useful. Explain custom profiles and share a few sample use cases.
Documentation,help wanted,NeedsFix
low
Minor
282,293,520
vscode
Feature Request: Enable/disable extensions from config file
- VSCode Version: 1.18.1 - OS Version: Windows 10 FU ### Explain: There are certain extensions that play well together, and it would be useful to be able to set a config file to enable and disable certain extensions in that workspace. This would be a config file, like the extensions recommendations, but with a series of parameters that would allow to enable and disable certain extensions. This would be like a config file for the "[Dis]Allow (Workspace)" setting.
feature-request,extensions
high
Critical
282,297,290
pytorch
Feature Request: CPU performance optimization with MKL-DNN
Hi, our team works on DL frameworks performance optimization on CPU. PyTorch CPU performance can be significantly improved with [MKL-DNN](https://github.com/01org/mkl-dnn), an open source deep neural network library on Intel CPU platform. Currently MKL-DNN supports Conv2d, Relu, BatchNorm, Pooling, Concat, etc. The CPU performance improvement will benefit cloud users as well as HPC researchers who have access to Xeon clusters. Will PyTorch be interested in the part? cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @VitalyFedyunin
triaged,module: mkldnn
medium
Major
282,334,490
go
x/tools/cmd/goimports: should not propose imports that are shadowed by vendored copies
When formatting `import` sections, goimports helpfully proposes imports. However, it mistakenly mixes GOPATH-resolved packages with packages under the `vendor` directory. If I use any vendoring tool, I expect it to exclude the GOPATH-resolved copies from the `import` section. (This may be the role of golint instead; if so, could you point that out?)
Tools
low
Major
282,479,306
opencv
cv::VideoCapture::set does not work when using GStreamer
##### System information (version) - OpenCV => 3.3.1 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 ##### Detailed description The end of CvCapture_GStreamer::open contains this code: ``` status_ = gst_element_query_position(pipeline, FORMAT, &value_); if (!status_ || value_ != 0 || duration < 0) { CV_WARN(cv::format("Cannot query video position: status=%d value=%lld duration=%lld\n", (int)status_, (long long int)value_, (long long int)duration).c_str()); isPosFramesSupported = false; isPosFramesEmulated = true; emulatedFrameNumber = 0; } else isPosFramesSupported = true; ``` According to the [GStreamer reference manual](https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstElement.html#gst-element-query-position), _"this query will usually only work once the pipeline is prerolled (i.e. reached PAUSED or PLAYING state)."_ So based on the documentation, OpenCV's check of whether querying the video position is possible will usually fail, as it is done too early: immediately after opening the video, when the pipeline is not yet prerolled. In my tests, it always failed, which makes isPosFramesEmulated = true. If later we want to set the video frame index via `cv::VideoCapture::set(cv::CAP_PROP_POS_FRAMES, frameNumber)` this call has no effect, because CvCapture_GStreamer::setProperty does not do anything if isPosFramesEmulated==true. I tried the same using GStreamer directly. With the same videos, gst_element_query_position works without any problems as soon as the video is playing. Unfortunately, because of the flag isPosFramesEmulated, OpenCV prevents setting the video frame index at any later time, though it would be perfectly possible. ##### Steps to reproduce In tutorial project bg_sub, in function processVideo before code line ` ss << capture.get(CAP_PROP_POS_FRAMES);` add a line `capture.set(CAP_PROP_POS_FRAMES,1000);` Set a breakpoint there and start debugging (passing "-vid testvideo.mpg" at command line). 
You will see that the subsequent get will return 1, which is not what we expect (it is `CvCapture_GStreamer::emulatedFrameNumber`). As explained above, this is caused by the OpenCV's open method that makes isPosFramesEmulated = true. During open, this is written to stdout: ``` (cpp-tutorial-bg_sub.exe:13676): GStreamer-CRITICAL **: gst_query_set_position: assertion 'format == g_value_get_enum (gst_structure_id_get_value (s, GST_QUARK (FORMAT)))' failed warning: Cannot query video position: status=1 value=-1 duration=18795 (C:\opencv\modules\videoio\src\cap_gstreamer.cpp:949) ``` The first warning message is from GStreamer (GST_DEBUG=4), the second the subsequent message from OpenCV (see line 948 of [cap_gstreamer.cpp](https://github.com/opencv/opencv/blob/3.3.1/modules/videoio/src/cap_gstreamer.cpp)).
bug,priority: low,category: videoio
low
Critical
282,492,737
opencv
very slow kdtree
I used `cv::flann::Index` with `cv::flann::KDTreeIndexParams()` and called `knnSearch` for every single point used to build the tree, inside a parallel loop. The performance gets extremely slow as the number of points and CPU cores grows. I did performance profiling, and it seems that all the time is spent calling into the kernel to allocate and free memory. The culprit seems to be [this](https://github.com/opencv/opencv/blob/6fe1898ab6d2a8a81ff2026e381b692cffe00266/modules/flann/include/opencv2/flann/kdtree_index.h#L448), which is called several times for every single point. It allocates an array of the entire data size; if that cannot be avoided, it should at least be done once and reused for every point.
category: flann
low
Major
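The per-query allocation called out above can be lifted out of the search loop. A minimal Python sketch of the idea (names here are illustrative, not the actual FLANN internals): a scratch structure is allocated once per thread and reset in O(1) with an epoch counter, so repeated knnSearch-style calls never touch the allocator.

```python
import numpy as np

class QueryScratch:
    """Per-thread 'visited' bookkeeping allocated once, not per query."""
    def __init__(self, dataset_size):
        # one stamp per data point, sized to the whole dataset, allocated once
        self.stamp = np.zeros(dataset_size, dtype=np.int64)
        self.epoch = 0

    def reset(self):
        self.epoch += 1          # O(1): no malloc/free, not even a memset

    def mark(self, i):
        self.stamp[i] = self.epoch

    def seen(self, i):
        return self.stamp[i] == self.epoch
```

A tree-traversal loop would call `reset()` once per query and `mark()`/`seen()` during traversal; the buffer itself is reused across every query handled by the thread.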
282,527,681
flutter
Be able to create a new project with prepopulated class names instead of boilerplate project names (MyApp, MyHomePage)
This is just a feature request. Using `flutter create myapp` and `File -> New Flutter Project...` in Android Studio/IntelliJ both result in the sample code for MyApp (the increment counter app) being created. Could we please have a way to: * instantiate a blank project? * or instantiate something without so much code already written (e.g. see below)? * or something that can customize the name of the app (i.e. if I named my Flutter Application hello_world, MyApp would be called HelloWorld? Something like: ```dart import 'package:flutter/material.dart'; void main() => runApp(new HelloWorld()); class HelloWorld extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return new Container(); } } ``` Reasoning: * As a regular developer, the stuff in the sample app may not be relevant (classes would be in different files, comments would be removed). There's a bit of a mental overhead to renaming the classes. Chances are, you'd just delete all the code and start from scratch, so it would be great to have this option when creating a new project. * As a very new developer, one might be more interested in the just StatelessWidgets first (e.g. just how to do visual layouts), and worry about StatefulWidgets (the logic to the app) later.
c: new feature,tool,P2,team-tool,triaged-tool
low
Major
282,536,622
angular
preserveWhitespaces:false is too aggressive and not context aware
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior The following snippet should generate "Text1 Text2" but it generates "Text1Text2" when preserveWhitespaces is false. ``` <!-- Original --> <span>Text1</span> <span>Text2</span> <!-- After preserveWhitespace:false --> <span>Text1</span><span>Text2</span> ``` ## Expected behavior `preserveWhitespaces` should be smarter, perhaps it could be like an enum where I could provide what behavior do I want, something like this (assuming the setting is renamed to `spaceCollapsing`: * `preserve` - Behaves like `preserveWhitespaces:true` currently does, i.e. leaves things as is * `remove` - Behaves like `preserveWhitespaces:false` currently does, i.e. removes all spaces * `collapse` - whitespace is reduced to either a single space or a single newline depending on whether the original spaces contained a newline * `normalize` - whitespace is reduced to a single space ``` <!-- Original --> <span>Text1</span> <span>Text2</span> <!-- After preserveWhitespace:false --> <span>Text1</span> <span>Text2</span> ``` ## Minimal reproduction of the problem with instructions https://angular-plbqm5.stackblitz.io Notice `feugiat.Learn More!` when setting `preserveWhitespaces:false`, there is a lack of space and things look incorrect. ## What is the motivation / use case for changing the behavior? 
It causes unexpected behaviors that are hard to notice.
feature,area: core,core: basic template syntax,feature: under consideration
high
Critical
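The four proposed modes amount to simple string transforms over inter-tag whitespace. A sketch of the semantics (Python purely for illustration; this is not Angular's implementation, and the regexes only cover whitespace between adjacent tags):

```python
import re

def collapse_whitespace(text, mode):
    """Whitespace handling between tags, per the proposed modes."""
    if mode == "preserve":
        return text                                   # like preserveWhitespaces:true
    if mode == "remove":
        return re.sub(r">\s+<", "><", text)           # like preserveWhitespaces:false
    if mode == "normalize":
        return re.sub(r">\s+<", "> <", text)          # always a single space
    if mode == "collapse":
        # single newline if the run contained one, else a single space
        return re.sub(r">(\s+)<",
                      lambda m: ">\n<" if "\n" in m.group(1) else "> <",
                      text)
    raise ValueError(mode)
```

With the snippet from the report, `remove` produces the problematic `<span>Text1</span><span>Text2</span>`, while `normalize` and `collapse` keep the visible word break.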
282,537,950
flutter
State object's dartdoc can use a lifecycle diagram
Maybe something like ![image](https://user-images.githubusercontent.com/156888/34059154-a14256d6-e192-11e7-827f-53766eabb427.png)
c: new feature,framework,d: api docs,a: quality,c: proposal,P3,team-framework,triaged-framework
low
Minor
282,556,119
material-ui
[Grid] CSS grid support
CSS-Grid is now newer and better grid system than flexbox, but they can be used together. Please see the link: https://css-tricks.com/snippets/css/complete-guide-grid/ And it's now supported by most browsers, and it's matter of time it will become the standard way of defining the grid layout. Would Material UI implement CSS Grid as well for incoming updates? ### Problems it could solve - Remove the need for negative paddings. - Works better for page layouts when the HTML is streamed in chunks which creates layout shifts ### Benchmark - https://getbootstrap.com/docs/5.2/layout/css-grid/ - https://styled-css-grid.js.org/ - https://github.com/twbs/bootstrap/pull/31813 and the [preview](https://deploy-preview-31813--twbs-bootstrap.netlify.app/docs/5.0/layout/css-grid/). - https://primer.style/components/Grid - https://orbit.kiwi/components/grid/react/ - https://twitter.com/jxnblk/status/1192475197963677698 - https://rsms.me/raster/, the [announcement](https://twitter.com/rsms/status/1096926590972289025) - Do we even need a component for it? https://twitter.com/kyehohenberger/status/1138159733767098368 - https://trello.com/c/xHKY1cHg/1345-grid-css-grid
new feature,waiting for 👍,component: Grid
high
Critical
282,567,646
opencv
Fixing broken cv2.VideoCapture by adding whitelist options for ffmpeg
A change to ffmpeg results in protocols like 'crypto' no longer being in the default whitelist. As a consequence, cv2.VideoCapture('XXX.sdp') often fails (for example, for Parrot Bebop drones). The solution would be to add optional whitelist parameters to cv2.VideoCapture; is this something someone could do? For example, the following command successfully plays the stream from the prompt: "ffplay bebop.sdp -protocol_whitelist file,udp,rtp -fflags nobuffer" but the stream does not get captured by OpenCV.
priority: low,category: videoio
low
Critical
282,592,179
vue
Date Fields appear empty on first load when initialized with Vue
### Version 2.5.11 ### Reproduction link [https://jsfiddle.net/u1wssnwL/3/](https://jsfiddle.net/u1wssnwL/3/) ### Steps to reproduce Create an input with a type date and initialize with any valid date value. View on Safari IOS and you'll see that it will appear as empty although the value is set because when you click the field, you'll see that the IOS datepicker will be set to the value you provided. It only affects date inputs that have been initialized with Vue and in the JSFiddle provided, there are 2 date inputs for easier reproduction ### What is expected? Both date inputs should have the same value ### What is actually happening? The date input that was initialized with Vue appears empty <!-- generated by vue-issues. DO NOT REMOVE -->
browser quirks
low
Major
282,597,065
go
compress/flate: decompression performance
### What version of Go are you using (`go version`)? ``` go version go1.9.2 linux/amd64 ``` ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/fabecassis/go" GORACE="" GOROOT="/usr/local/go" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build767750319=/tmp/go-build -gno-record-gcc-switches" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ``` ### What did you do? I noticed that the performance of `"compress/gzip"` is close to 2x slower than the `gunzip(1)` command-line tool. I've tested multiple methods of performing the decompression in this simple repository: https://github.com/flx42/go-gunzip-bench In summary, on a `333 MB` gzipped file, for the read-decompress-write process: - The idiomatic Go implementation takes 7.8s. - Piping to `gunzip(1)` takes 4.4s. - Using [cgzip](https://github.com/youtube/vitess/tree/master/go/cgzip), a cgo wrapper on top of zlib, it takes 3.4s. Is there any room for improvement here for the standard implementation? The use case is decompression of downloaded Docker layers, in Go.
Performance,help wanted
medium
Critical
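This kind of comparison is easy to reproduce on in-memory data before reaching for large files. A small sketch (in Python purely for illustration — the issue's own benchmarks are the Go programs in the linked repository), timing repeated full decompressions of a buffer compressed once up front:

```python
import gzip
import time

def bench_decompress(payload, repeats=3):
    """Compress once, then return the best wall-clock time of `repeats` full decompressions."""
    blob = gzip.compress(payload)
    best = float("inf")
    out = b""
    for _ in range(repeats):
        t0 = time.perf_counter()
        out = gzip.decompress(blob)
        best = min(best, time.perf_counter() - t0)
    assert out == payload  # sanity check: round-trip is lossless
    return best

# usage: bench_decompress(b"x" * 10_000_000)
```

Taking the best of several runs reduces noise from allocator and cache warm-up, which matters when comparing implementations that differ by only 2x.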
282,603,987
godot
3D handles get smaller when MSAA 16x is enabled (`3.x` only)
**Operating system or device, Godot version, GPU Model and driver (if graphics related):** master bca97e33cefdf2687ea766fd9bd7032aa98cd0e8, Linux 64, Nvidia proprietary drivers 387.34 **Issue description:** <!-- What happened, and what was expected. --> Enabling MSAA 16x makes the editor 3D handles very tiny; it only happens when MSAA is set to 16x. MSAA 16x ![screenshot from 2017-12-16 00-29-17](https://user-images.githubusercontent.com/1103897/34067943-6c0e82da-e1f8-11e7-9e83-981b9355f6a8.png) not MSAA 16x ![screenshot from 2017-12-16 00-29-02](https://user-images.githubusercontent.com/1103897/34067940-68bb80ba-e1f8-11e7-87b2-35bdb1f5046f.png) **Steps to reproduce:** **Link to minimal example project:** <!-- Optional but very welcome. You can drag and drop a zip archive to upload it. -->
bug,topic:rendering,confirmed,topic:3d
low
Major
282,609,406
vscode
[folding] configure initial collapse state
Maybe I missed something ... I recently migrated my javascript project from netbeans to visual studio code ... in netbeans we use ``` //<editor-fold defaultstate="collapsed" desc="MySection"> ... //</editor-fold> ``` for defining regions ... I saw I could modify javascript-language-configuration.json to ``` "folding": { "markers": { "start": "^\\s*//\\s*(?:(?:#?region\\b)|(?:<editor-fold\\b))", "end": "^\\s*//\\s*(?:(?:#?endregion\\b)|(?:</editor-fold>))" } } ``` So my request includes 2 questions: 1. Is it possible to add the editor-fold tag to the javascript language configuration (identical to the configuration for java)? 2. Could a defaultstate be added to regions, so that when a file is opened, regions will be either collapsed or expanded?
feature-request,editor-folding
high
Critical
282,626,712
opencv
HEAD build fails with cuda 9.1, VS15.5.2, W10 16299
##### Summary HEAD (16.12.2017) Build fails with: nvcc error : 'cicc' died with status 0xC0000005 (ACCESS_VIOLATION) [I:\opencv-master\build\modules\core\opencv_core.vcxproj] ##### System information (version) - OpenCV HEAD (16 december 2017) | 3.4RC - Windows 10 enterprise 64bits 1709 (build 16299) - VS2017 15.5.2 - Windows SDK 1709 (build 16299) - Cuda 9.1 - compiler MSVC / 14.12.25827 ##### Detailed description OpenCV build fails at opencv_core, in a such manner: [output[opencv_core]fails.txt](https://github.com/opencv/opencv/files/1564840/output.opencv_core.fails.txt) ##### Steps to reproduce I use this bash / powershell script: $src = "I:/opencv-master/opencv" $build = "I:/opencv-master/build" $target = "Visual Studio 15 2017 Win64" $ipsxe = "I:\IntelSWTools\compilers_and_libraries\windows\bin\ipsxe-comp-vars.bat" $env:Path += ";C:\Program Files\CMake\bin" $env:CUDA_PATH = "I:\Nvidia\Cuda\9.1" $env:GSTREAMER_DIR = "I:/gstreamer/1.0/x86_64/" #cd I:/opencv-master/ #bash update.sh (`cd opencv && git pull && cd ../opencv_contrib && git pull`) cd $build #iex "$ipsxe intel64 vs2017" rm -r CMake* cmake -G $target ' -DBUILD_JAVA=false -DWITH_MATLAB=false -DBUILD_DOC=false -DBUILD_PERF_TESTS=false -DBUILD_TESTS=false ' -DOPENCV_ENABLE_NONFREE=true -DBUILD_opencv-apps=false -DENABLE_PYLINT=false -DENABLE_CXX11=true ' -DCUDA_SDK_ROOT_DIR="$env:CUDA_PATH" -DCUDA_ARCH_BIN="5.2" -DCUDA_VERBOSE_BUILD=true ' -DWITH_OPENCL=true ' -DCUDA_HOST_COMPILER="C:/Program Files (x86)/Microsoft Visual Studio/2017/Enterprise/VC/Tools/MSVC/14.12.25827/bin/Hostx86/x64/cl.exe" ' ..\opencv msbuild.exe OpenCV.sln /verbosity:m /m #msbuild.exe OpenCV.sln /p:Configuration=Release /verbosity:m /m ##### Comments Single quotes ending lines in the scripts should be replaced by backquotes / backticks (I used single quote to match Markdown formatting). I'm available to investigate.
priority: low,category: build/install,category: gpu/cuda (contrib)
low
Critical
282,633,029
vue
Is Vue performing unnecessary re-render when using $listeners?
### Version 2.5.11 (and earlier versions) ### Reproduction link [https://jsfiddle.net/xb4172g8/](https://jsfiddle.net/xb4172g8/) ### Steps to reproduce 1. Open console and observe while typing something into input fields 2. Enter some text into name field -> Vue re-renders all three Textfield components 3. In line 21 replace `{ ...this.$listeners }` with an empty object 4. Once again enter some text into any field -> Vue re-renders only updated Textfield ### What is expected? Vue should re-render only the component whose props has changed. ### What is actually happening? Using `$listeners` in component's render function causes the component to be rendered whenever its parent is updated even though his props hasn't changed. <!-- generated by vue-issues. DO NOT REMOVE -->
improvement
low
Major
282,685,438
flutter
ExpandIcon should use ThemeData instead of Color mapping
Currently, the `ExpandIcon` is hardcoded to a specific black-value from the `Colors` mapping. As `cardColor` can already be defined in `ThemeData`, it would only make sense to allow the `ExpandIcon` to have its color defined in `ThemeData` too as darker background colors can otherwise conflict in user interface design. https://github.com/flutter/flutter/blob/e85d099229283989839422ec1e4cef8fdc2af624/packages/flutter/lib/src/material/expand_icon.dart#L105
framework,f: material design,c: proposal,P2,team-design,triaged-design
low
Minor
282,689,960
go
cmd/vendor/golang.org/x/arch/arm: several issues in ARM disassembler's plan9 syntax
Though the ARM disassembler can decode most instruction binaries into gnu syntax correctly, there are several issues in plan9 syntax decoding. 1. op name 15f715e7| 1 plan9 SDIV R7, R5, R5 It should be "DIV R7, R5, R5", and many others are also incorrect. MLA should be MULA UDIV should be DIVU ...... 2. LDM/STM ed003be9| 1 plan9 LDMDB [R0,R2-R3,R5-R7], R11! 2.1 their op name should be MOVM 2.2 () should be used instead of [] 2.3 The DB/IA/DA/IB suffixes should be decoded to .W/.P suffixes 3. SWP/SWPB 939007e1| 1 plan9 SWP [R7], R3, R9 3.1. () should be used instead of [] 3.2 The parameter order is incorrect 4. VMRS/VMSR 109ae1ee| 1 plan9 VMSR R9, FPSCR The op name is incorrect 5. FP instructions 5.1 op name: VADD.F32->ADDF, VSUB.F64->SUBD, ...... 5.2 register name: S2->F1, S4->F2, ... 5.3 vcvt has incorrect form 5.4 vmov has incorrect form 6. XTB/XTH/XTBU/XTHU should be MOVB/MOVH 7. STREX/LDREX has incorrect parameter order 8. XTAB/XTAH/XTABU/XTAHU has incorrect form
NeedsFix
low
Major
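Several of the points above (1, 2.1, and 6) reduce to a mnemonic lookup table. A sketch of such a table, populated only with the renames stated in this report (any entry beyond these would be an assumption):

```python
# gnu-syntax mnemonic -> expected plan9 mnemonic, per the report above
GNU_TO_PLAN9 = {
    "SDIV": "DIV",    # point 1
    "UDIV": "DIVU",
    "MLA":  "MULA",
    "LDM":  "MOVM",   # point 2.1: LDM/STM both become MOVM
    "STM":  "MOVM",
    "XTB":  "MOVB",   # point 6
    "XTH":  "MOVH",
}

def to_plan9(mnemonic):
    """Return the plan9 spelling for a decoded gnu mnemonic, if it differs."""
    return GNU_TO_PLAN9.get(mnemonic, mnemonic)
```

Operand reordering (points 3 and 7) and the LDM/STM suffix rewriting (point 2.3) would need per-instruction logic on top of the name mapping.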
282,736,322
rust
Issues with crash handling for iOS for Rust 1.22.1
I'm using `rustc 1.22.1 (05e2e1c41 2017-11-22)` and I'm calling a simple rust function from my iOS app that just panics. The rust function is in a static library and is compiled `cargo build` (debug with -O0). The architecture is aarch64-apple-ios. Here's the rust code: ``` extern crate rand; pub struct MyStruct { i: i32, } impl MyStruct { pub fn do_stuff(&mut self) -> String { panic!("that was a fail"); return String::from("asdf"); } } #[no_mangle] pub extern fn panicky_fn() { let mut m = MyStruct { i: 0 }; m.do_stuff(); } ``` When I call it like this, from my root view controller: ``` - (void)viewDidLoad { [super viewDidLoad]; [Fabric with:@[ [Crashlytics class]]]; // start the crash reporter CFTimeInterval start = CACurrentMediaTime(); srand(time(NULL)); if (rand() % 2 == 0) { panicky_fn(); } // more code... } ``` Then when the panic occurs, in the left-hand pane of Xcode, it just shows me this generic trace: ``` #0 0x00000001838e9348 in __pthread_kill () #1 0x00000001839fd354 in pthread_kill$VARIANT$mp () #2 0x0000000183858fd8 in abort () #3 0x00000001832bc068 in abort_message () #4 0x00000001832bc16c in default_terminate_handler() () #5 0x00000001832d454c in std::__terminate(void (*)()) () #6 0x00000001832d45c0 in std::terminate() () #7 0x00000001832e476c in objc_terminate () #8 0x0000000104651470 in _dispatch_client_callout () #9 0x000000010465db74 in _dispatch_block_invoke_direct () #10 0x00000001864a5a04 in __FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK__ () #11 0x00000001864a56a8 in -[FBSSerialQueue _performNext] () #12 0x00000001864a5c44 in -[FBSSerialQueue _performNextFromRunLoopSource] () #13 0x0000000183d78358 in __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ () #14 0x0000000183d782d8 in __CFRunLoopDoSource0 () #15 0x0000000183d77b60 in __CFRunLoopDoSources0 () #16 0x0000000183d75738 in __CFRunLoopRun () #17 0x0000000183c962d8 in CFRunLoopRunSpecific () #18 0x0000000185b27f84 in GSEventRunModal () #19 0x000000018d243880 in UIApplicationMain () 
#20 0x0000000104219b68 in main at /Users/michael/Snapchat/Dev/Rotationization/Rotationization/main.m:15 #21 0x00000001837ba56c in start () ``` And in the crash reporting service, Crashlytics, I just see a stack trace just like it. *However*, when I wrap the call to the rust function like so: ``` dispatch_async(dispatch_get_main_queue(), ^{ panicky_fn(); } ``` Then both the left-hand pane of Xcode as well as the crash reporting service show the correct stack trace. Can anyone help with this? I want to start using Rust in my app, but I don't feel comfortable doing so until I can monitor crashes with it.
A-debuginfo,C-enhancement,O-ios,T-compiler
low
Critical
282,741,848
kubernetes
Make CrashLoopBackoff timing tuneable, or add mechanism to exempt some exits
<!-- This form is for bug reports and feature requests ONLY! If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). --> **Is this a BUG REPORT or FEATURE REQUEST?**: Feature request /kind feature **What happened**: As part of a development workflow, I intentionally killed a container in a pod with `restartPolicy: Always`. The plan was to do this repeatedly, as a quick way to restart the container and clear old state (and, in Minikube, to load image changes). The container went into a crash-loop backoff, making this anything but a quick option. **What you expected to happen**: I expected there so be some configuration allowing me to disable, or at least tune the timing of, the CrashLoopBackoff. **How to reproduce it (as minimally and precisely as possible)**: Create a pod with `restartPolicy: Always`, and intentionally exit a container repeatedly. **Anything else we need to know?**: I see that the backoff timing parameters are hard-coded constants here: https://github.com/kubernetes/kubernetes/blob/5f920426103085a28069a1ba3ec9b5301c19d075/pkg/kubelet/kubelet.go#L121 https://github.com/kubernetes/kubernetes/blob/5f920426103085a28069a1ba3ec9b5301c19d075/pkg/kubelet/kubelet.go#L155 One might reasonably expect these to be configurable at least at the kubelet level - say, by a setting like [these](https://github.com/kubernetes/kubernetes/blob/6b03a43b761a8b6f9eac3648cf499f157f161414/pkg/kubelet/apis/kubeletconfig/v1alpha1/types.go#L47). That would be sufficient for my use-case (local development with fast restarts), and presumably useful as an advanced configuration setting for production workloads. A more aggressive change would allow tuning per-pod. 
There are other options for my target workflow: * Put the pod in a Deployment or similar, kubectl delete the pod, let Kubernetes schedule another, work with the new pod. However, this is much slower than a container restart without backoff (and ironically causes more kubelet load than the backoff avoids). It also relies on using kubectl/the Kubernetes API to do the restart, as opposed to just exiting the container. * Run the server process as a secondary process in the container rather than the primary process. This means the server can be started/stopped without container backoff, but is trickier to implement and doesn't offer the same isolation guarantees as exiting the container and starting fresh. It also means I probably can't use the same image I deploy to production (because I probably don't want this extra restart-support stuff floating around in the production image). **Environment**: - Kubernetes version (use `kubectl version`): v1.8.0 - Cloud provider or hardware configuration: Minikube 0.23.0 with Virtualbox driver on OSX
sig/node,kind/feature,lifecycle/frozen
high
Critical
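For context, the two hard-coded constants linked above implement a capped exponential backoff (a 10-second base doubling up to a 5-minute cap, if I'm reading kubelet.go right — treat those exact values as an assumption of this sketch). Making the timing tuneable would mean exposing the base and cap as kubelet configuration:

```python
def crashloop_delays(base=10, cap=300, restarts=8):
    """Delay in seconds before each restart: doubles per restart, up to the cap."""
    delays, d = [], base
    for _ in range(restarts):
        delays.append(min(d, cap))
        d *= 2
    return delays

# With the assumed defaults: 10, 20, 40, 80, 160, then pinned at 300.
```

For the fast-restart development workflow described above, a small `base` and `cap` (or a per-pod override) would give near-immediate restarts without removing backoff protection for production workloads.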
282,847,262
react
Number input gets cleared when typing period as decimal mark
<!-- Note: if the issue is about documentation or the website, please file it at: https://github.com/reactjs/reactjs.org/issues/new --> **Do you want to request a *feature* or report a *bug*?** Bug **What is the current behavior?** My OS and browser are configured to a locale that uses comma as the decimal mark (Finland for those interested). In the codepen below when I accidentally type a period after some number the whole input gets cleared. This is somehow related to the parent component's state being updated because if the `defaultValue` prop is removed from the `<Input>` component the input doesn't get cleared. **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem via https://jsfiddle.net or similar (template for React 16: https://jsfiddle.net/Luktwrdm/, template for React 15: https://jsfiddle.net/hmbg7e9w/).** https://codepen.io/anon/pen/aEOgNL?editors=0010 **What is the expected behavior?** The input should retain its visible value even if it would be invalid (can't be converted to a number). **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** React 16.2.0 and Chrome 63 on macOS 10.12.6. Tested in Safari 11.0.2 and typing a period kinda works. After typing the first number after the period it gets converted to a comma but the cursor jumps to the beginning. Also tested in Firefox 57.0.1 but it seems to think that period is the correct decimal mark.
Type: Bug,Component: DOM
low
Critical
282,850,589
rust
Confusing error message: the trait `std::convert::Into` cannot be made into an object
I've tried to make a function which will make a new thread, accepting either an `&str` or a `String` for the new thread's name: ```rust /// Test helper to spawn a named thread. /// /// The thread executes the work in the `f` closure. #[cfg(test)] fn named_thread<'s, F>(name: Into<String>, f: F) -> JoinHandle<()> where F: FnOnce(), F: Send + 'static { let thr = thread::Builder::new() .name(String::from(name)) .spawn(f) .unwrap(); thr } ``` This gives: ``` error[E0038]: the trait `std::convert::Into` cannot be made into an object --> src/lib.rs:377:1 | 377 | / fn named_thread<'s, F>(name: Into<String>, f: F) -> JoinHandle<()> 378 | | where F: FnOnce(), F: Send + 'static { 379 | | let name = name.into(); 380 | | let thr = thread::Builder::new() ... | 384 | | thr 385 | | } | |_^ the trait `std::convert::Into` cannot be made into an object | = note: the trait cannot require that `Self : Sized` ``` Which, from a user POV, is confusing on a few levels: * I don't have any mention of `Sized` in my code (must be implicit). * Who is `Self` here? * Even if I knew where `Sized` came from, and who `Self` was, the error message doesn't tell me what I did wrong. I'd be interested to know what I did wrong, even if the error message can't be improved. Thanks
C-enhancement,A-diagnostics,P-low,T-compiler,D-verbose,A-trait-objects
low
Critical
282,901,617
neovim
swap file actions can't be typed with the current keymap
- `nvim --version`: ``` NVIM v0.2.1 Build type: Release LuaJIT 2.0.5 Compilation: /usr/local/Homebrew/Library/Homebrew/shims/super/clang -Wconversion -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -DNDEBUG -DMIN_LOG_LEVEL=3 -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -I/tmp/neovim-20171109-46118-9kky38/neovim-0.2.1/build/config -I/tmp/neovim-20171109-46118-9kky38/neovim-0.2.1/src -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/opt/gettext/include -I/usr/include -I/tmp/neovim-20171109-46118-9kky38/neovim-0.2.1/build/src/nvim/auto -I/tmp/neovim-20171109-46118-9kky38/neovim-0.2.1/build/include Kompilerad av [email protected] ``` - Vim (version: ) behaves differently? N/A - Operating system/version: OSX - Terminal name/version: fish/bash/zsh - `$TERM`: screen-256color ### Steps to reproduce using `nvim -u NORC` Open a file which is already open/has crashed and left the swap file. ### Actual behaviour Neovim complains about a swap file already existing and gives some actions which can resolve it. Unfortunately, some of those keys can't be typed with my current keymap. I guess that vim detects that I use a Swedish keyboard and thus gives me options and instructions in Swedish (which is fine!) but I actually use a US keymap when writing code which means that `Å`/`å` on the physical keyboard is mapped to `[`. Thus I am not able to select this option without changing the keymap (which I have a shortcut for but it still rather inconvenient). ``` Växlingsfil "~/.local/share/nvim/swap/FILE_PATH" existerar redan! Växlingsfil "~/.local/share/nvim/swap//FILE_PATH" existerar redan! 
[Ö]ppna skrivskyddad, (R)edigera ändå, (Å)terskapa, (A)vsluta, A(v)bryt: ``` ### Expected behaviour I'd like not to need to switch keymaps to select options. Whether that should be solved by only using ASCII characters for the options, or by selecting the option language based on the keymap, I can't say.
localization,ux,bug-vim
low
Critical
282,911,084
rust
Negative impls of Auto Traits (OIBIT) don't take into account Trait Bounds
The following code fails to compile: ```rust trait Foo {} auto trait FooAuto {} impl<T> !FooAuto for T where T: Foo {} fn test<T>(t: T) where T: FooAuto {} fn main() { test(1i32); } ``` It produces the following error. ```rust error[E0277]: the trait bound `i32: FooAuto` is not satisfied --> src/main.rs:11:5 | 10 | test(1i32); | ^^^^ the trait `FooAuto` is not implemented for `i32` | = note: required by `test` ``` ([Playpen](https://play.rust-lang.org/?gist=e9beeee97ab1c4f561e6359d2541523f&version=nightly)) Even though I restrict the `!FooAuto` implementation to types implementing `Foo` (which there is none of), the rust compiler does not seem to take that restriction into account. Instead, it removes the `FooAuto` auto-implementation from every type. If I remove the `!FooAuto` line, the code compiles just fine. The same problems happens when I use `impl<T: Foo>` instead of `where T: Foo` ([Playpen](https://play.rust-lang.org/?gist=dce37d3ab4b77579bde848e9809105b5&version=nightly)). Also using `impl FooAuto for .. {}` instead of `auto trait FooAuto` results in the same problem ([Playpen](https://play.rust-lang.org/?gist=403fce7680cb1119bcb11ea4c1a7e209&version=nightly)).
C-enhancement,A-specialization
low
Critical
282,939,505
TypeScript
"Variable is used before being assigned" even though it is assigned
<!-- BUGS: Please use this template. --> <!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript --> <!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.6.1 **Code** ```ts type Patch<T> = { [P in keyof T]?: Patch<T[P]> | ((value: T[P]) => T[P]) } function applyPatch<T extends object, K extends keyof T>(target: T, patch: Patch<T>): T { (Object.keys(patch) as K[]).forEach(key => { const patchValue = patch[key]; target[key] = typeof patchValue == "object" && !Array.isArray(patchValue) ? applyPatch(target[key] || {}, patchValue) : typeof patchValue == "function" // Error: Variable 'patchValue' is used before being assigned. ? patchValue(target[key]) : patchValue; // Error: Variable 'patchValue' is used before being assigned. }); return target; } ``` **Expected behavior:** No error **Actual behavior:** > Error: Variable 'patchValue' is used before being assigned. It goes away if I assert `patchValue` as non-null: ```ts const patchValue = patch[key]! ```
Bug
medium
Critical
282,975,575
react
backspace fails to clear values on input type='email'
<!-- Note: if the issue is about documentation or the website, please file it at: https://github.com/reactjs/reactjs.org/issues/new --> **Do you want to request a *feature* or report a *bug*?** bug **What is the current behavior?** email input doesn't control for whitespace // possible variation on [Issue 6368](https://github.com/facebook/react/issues/6368); however, 6368 shows up at 15.0.0 while this bug shows up at 15.2 **If the current behavior is a bug, demo** no bug in React 15.0.0 no bug in React 15.1.0 bug on React 15.2 [React~15 fiddle](https://jsfiddle.net/cburnett/79z43qxn/9/) bug on React 16 [React~16 fiddle](https://jsfiddle.net/cburnett/q1297t5w/2/) **What is the expected behavior?** When a user presses down the backspace key and holds it, all values in the input are removed, including the whitespaces **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** Works on 15.1.0, bug at >= 15.2.0, testing on Chrome 63, macOS Sierra 10.12
Type: Bug,Component: DOM
low
Critical
283,010,979
flutter
Incorrect handling of Unicode on Windows is causing failures in test_test.dart
Unicode that is solely used for formatting is not being handled correctly when read out of `stderr` or `stdout` from the `ProcessResult`, likely due to how the VM decides whether or not to worry about fixing Unicode issues on Windows. Related dart-sdk issue [here](https://github.com/dart-lang/sdk/issues/28871).
tool,platform-windows,P3,team-tool,triaged-tool
low
Critical
283,023,933
go
doc: mention "purego" build tag convention somewhere
As the number of Go implementations continues to increase, the number of cases where the `unsafe` package is unlikely to work properly also rises. Currently, there are `appengine`, `gopherjs`, and possibly `wasm`, where pointer arithmetic is not allowed. Today, protobuf and other packages special-case [build tags for `appengine` and `js`](https://github.com/golang/protobuf/blob/e51d002/proto/pointer_unsafe.go#L32) and may need to add others in the near future. It does not scale very well to blacklist specific known Go implementations where unsafe does not work. My proposal is to document `safe` as a community-agreed-upon tag meaning that `unsafe` should not be used. It is conceivable that this concept be extended to the compiler rejecting programs that use `unsafe` when the `safe` tag is set, but I'm currently more interested, as a library owner, in knowing whether to avoid `unsafe` in my own packages. \cc @zombiezen @dneil @neelance @shurcooL
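To illustrate the proposed convention, here is a sketch of what an opting-in package might look like. The `safe` tag split and the file names are hypothetical (today packages like protobuf blacklist `appengine` and `js` instead), and the fallback below is just a trivial stand-in for a real unsafe fast path:

```go
// pointer_safe.go — the portable half of the hypothetical split.
// In a real package this file would carry a build constraint such as
//
//	// +build safe appengine js
//
// while a sibling pointer_unsafe.go (constrained with
// `// +build !safe,!appengine,!js`) would hold the unsafe fast path.
package main

import "fmt"

type message struct{ Name string }

// cloneSafe is the fallback every Go implementation can run: a plain value
// copy with no unsafe.Pointer tricks.
func cloneSafe(m *message) *message {
	c := *m
	return &c
}

func main() {
	orig := &message{Name: "gopher"}
	dup := cloneSafe(orig)
	dup.Name = "changed"
	fmt.Println(orig.Name, dup.Name) // the copy does not alias the original
}
```

With such a convention, a build for a restricted platform would only need `go build -tags safe`, rather than each library enumerating every known implementation.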
Documentation,Proposal,Proposal-Accepted
medium
Critical
283,039,896
pytorch
single-gpu works but multi-gpu hangs
My machine has 4 TITAN Xp GPUs, CUDA 9.0, cuDNN 7.5, Ubuntu 16.04. When I run an image classification task with a single GPU, it runs just fine, but it hangs at the line `model = nn.DataParallel(model)` when I try to run with 2 or more GPUs. I ran p2pBandwidthLatencyTest and got the following report:

    [P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
    Device: 0, TITAN Xp, pciBusID: 3b, pciDeviceID: 0, pciDomainID:0
    Device: 1, TITAN Xp, pciBusID: 5e, pciDeviceID: 0, pciDomainID:0
    Device: 2, TITAN Xp, pciBusID: af, pciDeviceID: 0, pciDomainID:0
    Device: 3, TITAN Xp, pciBusID: d8, pciDeviceID: 0, pciDomainID:0
    Device=0 CAN Access Peer Device=1
    Device=0 CAN Access Peer Device=2
    Device=0 CAN Access Peer Device=3
    Device=1 CAN Access Peer Device=0
    Device=1 CAN Access Peer Device=2
    Device=1 CAN Access Peer Device=3
    Device=2 CAN Access Peer Device=0
    Device=2 CAN Access Peer Device=1
    Device=2 CAN Access Peer Device=3
    Device=3 CAN Access Peer Device=0
    Device=3 CAN Access Peer Device=1
    Device=3 CAN Access Peer Device=2

    ***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
    So you can see lesser Bandwidth (GB/s) in those cases.

    P2P Connectivity Matrix
       D\D     0      1      2      3
         0     1      1      1      1
         1     1      1      1      1
         2     1      1      1      1
         3     1      1      1      1
    Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
       D\D     0      1      2      3
         0 423.18  11.44  11.37  11.36
         1  11.42 426.77  11.41  11.36
         2  11.38  11.38 425.52  11.34
         3  11.36  11.40  11.31 427.77
    Unidirectional P2P=Enabled Bandwidth Matrix (GB/s)
       D\D     0      1      2      3
         0 426.11   7.59   8.38   6.35
         1  10.19 426.39   8.36   9.08
         2   8.39   9.08 426.03  10.16
         3   8.39   9.09  10.14 427.85
    Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
       D\D     0      1      2      3
         0 426.50  19.49  18.98  19.07
         1  19.87 430.72  20.02  19.93
         2  17.91  19.95 429.49  18.79
         3  19.98  19.92  18.81 429.73
    Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
       D\D     0      1      2      3
         0 423.94  15.74  14.40  16.00
         1  20.26 429.07  16.00  17.32
         2  16.07  16.02 429.69  20.23
         3  16.01  17.32  20.26 428.26
    P2P=Disabled Latency Matrix (us)
       D\D     0      1      2      3
         0   9.90  17.02  14.31  14.31
         1  14.02   5.55  14.59  14.53
         2  14.50  14.48   5.82  14.31
         3  14.85  14.40  14.57   5.80
    P2P=Enabled Latency Matrix (us)
       D\D     0      1      2      3
         0  11.51  11.45   8.36   8.82
         1   8.86   5.54   9.07   8.38
         2   8.76   9.11   5.76   9.12
         3   8.80   8.80   9.08   5.78

which seemed fine to me. Can anyone please help me out? Thanks!

cc @csarofeen @ptrblck
module: cudnn,triaged,module: data parallel
low
Minor
283,159,668
angular
feature: force reference to element in template
## I'm submitting a...

<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report
[x] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>

## Current behavior

In an Angular template, a reference (without assignment) will refer to 1) the DOM element or 2) the component instance, depending on whether there is a component there, like:

```html
<div #ref1>Content for element</div>
<app-div #ref2>Content for component</app-div>
<p>{{ ref1.prop }}</p><!--DOM Property-->
<p>{{ ref2.prop }}</p><!--Component Property-->
```

That seems fine, but once we have a component without a custom element, say:

```typescript
@Component({
  selector: '[appDiv]',
  template: '<ng-content></ng-content>'
})
class MyComponent {}
```

then the template changes to:

```html
<div #ref1>Content for element</div>
<div #ref2 appDiv>Content for component</div>
<p>{{ ref1.prop }}</p><!--DOM Property-->
<p>{{ ref2.prop }}</p><!--Component Property (not expected)-->
```

But in case we want to refer to the native `<div>` element, that cannot be done in the template.

<sub>Bad example for using `div`, but may be important for other elements.</sub>

One workaround is using `{ read: ElementRef }` in the component, but there's too much boilerplate.

## Expected behavior <!-- Describe what the desired behavior would be. 
-->

Being able to force a reference to the DOM element via some pre-defined `exportAs`, like:

```html
<div #ref1>Content for element</div>
<div #ref2="$element" appDiv>Content for component</div>
<p>{{ ref1.prop }}</p><!--DOM Property-->
<p>{{ ref2.prop }}</p><!--DOM Property-->
```

## Minimal reproduction of the problem with instructions

https://stackblitz.com/edit/angular-9g5zgh?embed=1&file=app/app.component.html

## What is the motivation / use case for changing the behavior?

In Angular component libraries, it's quite common to use an *attribute selector* for components (examples are easy to find in material), but sometimes there are also cases for dealing with the element rather than the component. This should be considered low priority, though, since it's easy to work around.

## Environment

<pre><code>
Not applicable
</code></pre>
feature,area: core,core: local references,feature: under consideration
low
Critical
283,195,729
pytorch
Feature request: sparse matrix max(axis)
The current sparse support is pretty nice! However, there are some important collation operations that cannot be efficiently composed out of matmul or operations on `_values()`. One that is in the way of my current work is axis-wise max. Ideally:

```
vec = sparsemat.max(axis=None, zero=0)
```

where an optional zero argument could be used to specify the interpretation of unstored matrix entries. (E.g., one could pass -Infinity if that is the intent. In my application, I am using a sparse matrix to store only the significant log probabilities in a grid, and max() with a -Infinity default is the key operation needed to implement a stable row-wise logsumexp().)

For reference: scipy.sparse's inclusion of max is here, although they miss the need for a zero override: https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.max.html#scipy.sparse.csr_matrix.max

cc @vincentqb
module: sparse,triaged,enhancement
low
Major
283,220,145
vscode
Tasks with target = TaskScope.Global as the scope do not show up in task picker UI
- VSCode Version: Code 1.18.1 (929bacba01ef658b873545e26034d1a8067445e9, 2017-11-16T18:23:26.125Z)
- OS Version: Darwin x64 17.3.0
- Extensions:

Extension|Author (truncated)|Version
---|---|---
code-runner|for|0.8.4
cpptools|ms-|0.14.2
Ruby|reb|0.15.0

---

Steps to Reproduce:

1. Create a vscode.Task using the new API, passing vscode.TaskScope.Global as the target

Reproduces without extensions: Yes

I am developing an extension that provides tasks for each folder in a multiroot project and a few tasks that are applicable to all folders. Maybe I am missing something, but the tasks that have TaskScope.Global as the target do not show up in the task picker UI. Is this behavior intentional?
bug,tasks
low
Major
283,362,419
rust
AArch64 benchmarking assertion failure
Repro:

```
#![cfg_attr(test, feature(test))]

#[cfg(test)]
extern crate test;

#[cfg(test)]
mod tests {
    use test::{Bencher, black_box};

    #[bench]
    fn bench_map_scalar(b: &mut Bencher) {
        b.iter(|| {
            black_box(1)
        });
    }
}
```

.cargo/config

```
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
ar = "aarch64-linux-gnu-ar"
runner = "qemu-aarch64 -L /usr/aarch64-linux-gnu"
```

Outputs:

```
$ RUST_BACKTRACE=1 cargo bench
   Compiling aarch64-libtest-busted v0.1.0 (file:///home/adam/aarch64-libtest-busted)
    Finished release [optimized] target(s) in 0.94 secs
     Running target/release/deps/aarch64_libtest_busted-1c6093a510eef424

running 1 test
test tests::bench_map_scalar ... bench: 0 ns/iter (+/- 0)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out
```

```
$ RUST_BACKTRACE=1 cargo bench --target=aarch64-unknown-linux-gnu
    Finished release [optimized] target(s) in 0.0 secs
     Running target/aarch64-unknown-linux-gnu/release/deps/aarch64_libtest_busted-51d12e210b730146

running 1 test
test tests::bench_map_scalar ... thread 'main' panicked at 'assertion failed: pct <= hundred', /checkout/src/libtest/stats.rs:293:4
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
         at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
         at /checkout/src/libstd/sys_common/backtrace.rs:68
         at /checkout/src/libstd/sys_common/backtrace.rs:57
   2: std::panicking::default_hook::{{closure}}
         at /checkout/src/libstd/panicking.rs:381
   3: std::panicking::default_hook
         at /checkout/src/libstd/panicking.rs:397
   4: std::panicking::rust_panic_with_hook
         at /checkout/src/libstd/panicking.rs:577
   5: std::panicking::begin_panic
         at /checkout/src/libstd/panicking.rs:538
   6: test::stats::percentile_of_sorted
         at /checkout/src/libtest/stats.rs:293
   7: test::stats::winsorize
         at /checkout/src/libtest/stats.rs:318
   8: aarch64_libtest_busted::tests::bench_map_scalar
   9: test::run_test
         at /checkout/src/libtest/lib.rs:1486
         at /checkout/src/libtest/lib.rs:1617
  10: test::run_tests_console
         at /checkout/src/libtest/lib.rs:1188
         at /checkout/src/libtest/lib.rs:975
  11: test::test_main
         at /checkout/src/libtest/lib.rs:291
  12: test::test_main_static
         at /checkout/src/libtest/lib.rs:327
  13: __rust_maybe_catch_panic
         at /checkout/src/libpanic_unwind/lib.rs:101
  14: std::rt::lang_start
         at /checkout/src/libstd/panicking.rs:459
         at /checkout/src/libstd/rt.rs:58
  15: main
  16: __libc_start_main
error: bench failed
```

```
$ rustc --version
rustc 1.24.0-nightly (dc39c3169 2017-12-17)
$ cargo --version
cargo 0.25.0-nightly (930f9d949 2017-12-05)
```

The offending assertion is [here](https://github.com/rust-lang/rust/blob/master/src/libtest/stats.rs#L293).
O-Arm,T-libs-api,A-libtest,C-bug,O-AArch64
low
Critical
283,396,560
TypeScript
Handle JSDoc @param with hyphen
**TypeScript Version:** 2.7.0-dev.20171214

**Code**

```ts
/**
 * @param x - Bla
 */
function foo(x) {

}
```

Hover over `x`

**Expected behavior:** Hyphen is ignored in any documentation returned. See the JSDoc @param spec: http://usejsdoc.org/tags-param.html

**Actual behavior:** Hyphen is returned as part of documentation:

```
[Trace - 2:58:07 PM] Response received: quickinfo (144). Request took 3 ms.
Success: true
Result: {
    "kind": "parameter",
    "kindModifiers": "",
    "start": {
        "line": 4,
        "offset": 14
    },
    "end": {
        "line": 4,
        "offset": 15
    },
    "displayString": "(parameter) x: any",
    "documentation": "- Bla",
    "tags": []
}
```
Suggestion,VS Code Tracked,Domain: JSDoc,Experience Enhancement
low
Minor
283,569,625
angular
updateValueAndValidity in valueChanges breaks model under certain circumstances
## I'm submitting a...

<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[X] Bug report
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>

## Current behavior

When disabling a FormGroup where a valueChanges subscriber of one control calls updateValueAndValidity on another control, the aggregate value of the FormGroup breaks.

## Expected behavior

The value should stay the same, as it is not changed by disabling the FormGroup.

## Minimal reproduction of the problem with instructions

https://plnkr.co/edit/hC0YcE

The two input fields are correctly represented by the model {"a": "", "b": ""}. Click Disable. The model erroneously gets set to {}. Clicking Disable again recovers the model. Using formGroup.disable({emitEvent: false}) circumvents the issue if #12366 is circumvented/fixed.

## What is the motivation / use case for changing the behavior?

I want to update the validators of field b based on the value of field a. After that, updateValueAndValidity() needs to be called. For styling and other reasons, using a FormGroup validator instead is not a feasible solution.
## Environment <pre><code> Angular version: 5.1.1 (plunkr) <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [X] Chromium (desktop) version 63.0.3239.108 </code></pre>
type: bug/fix,freq1: low,area: forms,state: confirmed,forms: Controls API,P4
low
Critical
283,598,119
nvm
Custom node installation location in .nvmrc
Feature request: Could .nvmrc point to a local path where a custom node version is installed? When nvm is run, it would simply switch the path to that location. This would be useful when using https://github.com/eirslett/frontend-maven-plugin, which downloads and installs a version of node as specified in Maven's pom.xml.
feature requests
low
Major
283,653,451
react
Stop syncing value attribute for controlled inputs
Opening this as a follow up to some quick discussions in https://github.com/facebook/react/issues/11881.

Syncing the `value` attribute has been a consistent source of bugs for us, and the benefits of doing so seem minimal. There's some previous discussion on the topic in https://github.com/facebook/react/pull/7359 and in other issues I can't remember right now 😄

This would be a breaking change, so it would have to be done in a major release.

## Reasons to keep syncing

* It prevents `form.reset()` from putting controlled form inputs into a weird state
* Some browser extensions (not sure which) read from the `value` attribute in some cases (not sure which)
* It can be useful for querying inputs with a specific value using an attribute selector

## Reasons to stop syncing

* It will reduce the complexity of `react-dom` in a non-trivial way
* In turn, it will likely reduce bundle size as well
* We remove a whole class of bugs (fighting with browsers that want to be helpful about input values)
* Syncing the input value to the attribute potentially exposes sensitive data to third party tools ([1](https://www.reddit.com/r/analytics/comments/7ukw4n/mixpanel_js_library_has_been_harvesting_passwords/))

______

What do we think? Are these reasons good enough to keep syncing the `value` attribute? Are there other more critical reasons we should keep doing so?

cc @nhunzaker @jquense @gaearon
Component: DOM,Type: Discussion,Type: Breaking Change
medium
Critical
283,698,690
go
proposal: x/crypto/acme/autocert: Manager should support DNS-01 verification
### What did you do?

I tried to set up autocert behind a firewall.

### What did you expect to see?

https working flawlessly (using letsencrypt infrastructure)

### What did you see instead?

Verification failed due to the firewall.

I believe dns-01 should be built into [Manager](https://godoc.org/golang.org/x/crypto/acme/autocert#Manager). It could have a function field (e.g. SetTXT) which, if set, is used by the Manager to set the TXT records required for the DNS verification.
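A rough sketch of how the suggested hook could slot in. `SetTXT` is the field proposed above and `solveDNS01` is an illustrative name; neither is real x/crypto/acme/autocert API:

```go
// Hypothetical shape for the proposal: if SetTXT is non-nil, the manager
// publishes the dns-01 challenge record instead of serving an http-01 or
// tls-sni response, so no inbound port needs to be reachable.
package main

import "fmt"

type manager struct {
	// SetTXT publishes a TXT record, e.g. via the user's DNS provider API.
	SetTXT func(name, value string) error
}

func (m *manager) solveDNS01(domain, keyAuth string) error {
	if m.SetTXT == nil {
		return fmt.Errorf("dns-01: no SetTXT hook configured")
	}
	// ACME requires the challenge response under _acme-challenge.<domain>.
	return m.SetTXT("_acme-challenge."+domain, keyAuth)
}

func main() {
	published := map[string]string{}
	m := &manager{SetTXT: func(name, value string) error {
		published[name] = value
		return nil
	}}
	if err := m.solveDNS01("example.com", "token-digest"); err != nil {
		panic(err)
	}
	fmt.Println(published["_acme-challenge.example.com"]) // token-digest
}
```

The firewall problem disappears because the only outbound dependency is the DNS provider's update API, which the user supplies.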
Proposal,Proposal-Crypto
medium
Critical
283,701,180
go
sync: Pool example suggests incorrect usage
The operation of `sync.Pool` assumes that the memory cost of each element is approximately the same in order to be efficient. This property can be seen by the fact that `Pool.Get` returns you a random element, and not the one that has "the greatest capacity" or what not. In other words, from the perspective of the `Pool`, all elements are more or less the same. However, [the `Pool` example](https://go-review.googlesource.com/24371) stores `bytes.Buffer` objects, which have an underlying `[]byte` of varying capacity depending on how much of the buffer is actually used.

Dynamically growing unbounded buffers can cause a large amount of memory to be pinned and never be freed in a live-lock situation. Consider the following:

```go
pool := sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

processRequest := func(size int) {
	b := pool.Get().(*bytes.Buffer)
	time.Sleep(500 * time.Millisecond) // Simulate processing time
	b.Grow(size)
	pool.Put(b)
	time.Sleep(1 * time.Millisecond) // Simulate idle time
}

// Simulate a set of initial large writes.
for i := 0; i < 10; i++ {
	go func() {
		processRequest(1 << 28) // 256MiB
	}()
}

time.Sleep(time.Second) // Let the initial set finish

// Simulate an un-ending series of small writes.
for i := 0; i < 10; i++ {
	go func() {
		for {
			processRequest(1 << 10) // 1KiB
		}
	}()
}

// Continually run a GC and track the allocated bytes.
var stats runtime.MemStats
for i := 0; ; i++ {
	runtime.ReadMemStats(&stats)
	fmt.Printf("Cycle %d: %dB\n", i, stats.Alloc)
	time.Sleep(time.Second)
	runtime.GC()
}
```

Depending on timing, the above snippet takes around 35 GC cycles for the initial set of large requests (2.5GiB) to finally be freed, even though each of the subsequent writes only uses around 1KiB. This can happen in a real server handling lots of small requests, where large buffers allocated by some prior request end up being pinned for a long time since they are not in `Pool` long enough to be collected.

The example claims to be based on `fmt` usage, but I'm not convinced that `fmt`'s usage is correct. It is susceptible to the live-lock problem described above. I suspect this hasn't been an issue in most real programs since `fmt.PrintX` is typically not used to write very large strings. However, other applications of `sync.Pool` may certainly have this issue.

I suggest we fix the example to store elements of fixed size and document this.

\cc @kevinburke @LMMilewski @bcmills
Documentation,help wanted,NeedsFix,compiler/runtime
high
Critical
283,710,531
rust
Can't specialize `Drop`
The `Drop` trait has checks to ensure that the `impl` can't add new restrictions on generic parameters.

```rust
struct S<T> { ... }

impl<T: Foo> Drop for S<T> { ... }
```

The above won't compile (as intended?), as `S` would have a Drop that's conditional on the type parameter. However, with specialization we could have something like:

```rust
struct S<T> { ... }

default impl<T> Drop for S<T> { ... }

impl<T: Foo> Drop for S<T> { ... }
```

Both of these examples yield the same error:

```
error[E0367]: The requirement `T: Foo` is added only by the Drop impl.
```

My first instinct was that this had something to do with the type information getting lost on destructors, but that doesn't seem to be the case, as the issue can be worked around by specializing a different trait, to which `drop` delegates:

```rust
struct S<T> { ... }

// Specialized Drop implementation.
trait SpecialDrop { fn drop(&mut self); }

default impl<T> SpecialDrop for S<T> { ... }
impl<T: Foo> SpecialDrop for S<T> { ... }

// Drop delegates to SpecialDrop.
impl<T> Drop for S<T> {
    fn drop(&mut self) {
        (self as &mut SpecialDrop).drop()
    }
}
```

Linking to #31844
C-enhancement,T-compiler,A-specialization,F-specialization
low
Critical
283,717,484
vscode
[json] suggest used values when property name has already been used
- VSCode Version: 1.19.0 and 1.19.1
- OS Version: Windows 10 Home 64 bit

Steps to Reproduce:

vscode used to show an autocomplete option for words that existed in the file. I use vscode to keep track of my notes. I keep my plain texts (*.txt) with language mode html. The same thing is happening in JSON files. I write a property (e.g. BusNumber) and when I try to retype the property it doesn't show up in the autocomplete dropdown when I am typing in JSON, and I have to retype the whole property again. It used to show up as soon as I started typing two letters. In plain text, if a word existed in the file it used to come up as an option for autocomplete; now it does not.

I use vscodevim and Prettify JSON. Even after disabling those I am having the issue.

Reproduces without extensions: Yes
feature-request,json
low
Major
283,732,344
go
cmd/compile: validity of program depends on method declaration order
https://play.golang.org/p/WCktUidLyc_3 is accepted while the equivalent program https://play.golang.org/p/Z-B9rBhYILd fails with an error. The only difference is the order of the method declarations. Esoteric case; recording this so we're aware of it.
compiler/runtime
low
Critical
283,733,204
flutter
Need a way to compute height of a row of baseline-aligned RenderBoxes
This comes up when trying to implement `RenderBox.computeMinIntrinsicHeight()` and `RenderBox.computeMaxIntrinsicHeight()` for a renderer that contains more than one baseline-aligned child. I thought this would work (it should be consistent with https://github.com/flutter/flutter/issues/10014):

```dart
double _lineHeight(double width, List<RenderBox> boxes) {
  double aboveBaseline = 0.0;
  double belowBaseline = 0.0;
  for (RenderBox box in boxes) {
    final double baseline = box.getDistanceToBaseline(TextBaseline.alphabetic);
    aboveBaseline = math.max(baseline, aboveBaseline);
    belowBaseline = math.max(box.getMinIntrinsicHeight(width) - baseline, belowBaseline);
  }
  return aboveBaseline + belowBaseline;
}
```

It fails in a complex assertion due to the call to getDistanceToBaseline():

```
Only call this function after calling layout on this box. You are only allowed to call this from the parent of this box during that parent's performLayout or paint functions. ...
```

@Hixie pointed out the difficulty here:

> The distance to the baseline depends on the dimensions of the box, which aren't known until the box has been laid out. (For example, what's the distance to the baseline of a Center widget?)

So perhaps `getDistanceToBaseline()` should be: `getDistanceToBaseline(baseline, { Size size })` where size defaults to this.size so long as the existing assertion is true.

@Hixie also pointed out (vis-à-vis the size parameter):

> There's some caching logic going on that we'd have to tweak.
c: new feature,framework,P3,team-framework,triaged-framework
low
Minor
283,750,947
vscode
insertSnippet produces trailing spaces
- VSCode Version: Version 1.19.1 (1.19.1)
- OS Version: macOS 10.13.2

The `insertSnippet` function in the vscode api takes a snippet and inserts it at the given position with the indentation of the position prepended to each line. This is great, except that it also adds the indentation to any blank lines in the snippet, causing unnecessary trailing spaces. The logic also doesn't respect carriage return special chars like `\r`.

This problem has been touched upon before in this issue: https://github.com/Microsoft/vscode/issues/20112
feature-request,snippets
low
Minor
283,822,471
go
net: document that Dial/Listen with "ip4:tcp" or "ip6:tcp" fails on Windows
Please answer these questions before submitting your issue. Thanks!

### What version of Go are you using (`go version`)?

go version go1.9.2 windows/amd64

### Does this issue reproduce with the latest release?

yes

### What operating system and processor architecture are you using (`go env`)?

```bash
$ go env
set GOARCH=amd64
set GOBIN=
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\Users\ilyaigpetrov\go
set GORACE=
set GOROOT=C:\Go
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0
set CXX=g++
set CGO_ENABLED=1
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
```

Windows 10

### What did you do?

[net: Dial/Listen with "ip4:tcp" or "ip6:tcp" fails on Windows](https://github.com/golang/go/issues/23193)

I want this to be documented.

### What did you expect to see?

On page https://golang.org/pkg/net/ in the [Bugs](https://golang.org/pkg/net/#pkg-note-BUG) section, I want to see that Dial/Listen with "ip4:tcp" or "ip6:tcp" fails on Windows.

### What did you see instead?

This is not documented.
Documentation,OS-Windows,NeedsFix
low
Critical
283,826,498
youtube-dl
Proxy parameter is not working correctly inside of openload.py (PhantomJS)
### Make sure you are using the *latest* version

- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.12.14**

### Before submitting an *issue* make sure you have:

- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones

### What is the purpose of your *issue*?

- [x] Bug report (encountered problems with youtube-dl)

---

Currently openload.py is using PhantomJS in order to obtain the final URL, but it is not passing the --proxy URL parameter on to PhantomJS correctly. The output is: https://openload.co/stream/None?mime=true

In the proxy server I could see only one https call to the openload servers, missing the rest of them, which originate in openload.py lines 228-231:

    p = subprocess.Popen([
        self.exe, '--ssl-protocol=any',
        self._TMP_FILES['script'].name
    ], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

Here we can see that the only parameter used for PhantomJS is "--ssl-protocol=any".
I have tested it in the latest version (2017.12.14) and the issue is still there.
request,bug
low
Critical
283,834,982
rust
two-phase-borrows need a specification
Spawned off of https://github.com/rust-lang/rust/pull/46887#issuecomment-353156090: While our short-term solution for two-phase borrow support (#46037) is to make it as conservative as we can (e.g. the still-planned #46747), the reality is that it's complex and needs a specification. (It might even be worth the exercise of adding them to the https://github.com/nikomatsakis/borrowck model.)
E-hard,C-enhancement,P-medium,T-lang,T-compiler,A-NLL,NLL-reference
low
Major
283,835,976
opencv
cv::connectedComponentsWithStats crashes
##### System information (version) - OpenCV => 3.3.1 and latest master (3.4rc) - Operating System / Platform => Debian Linux (Sid) - Compiler => gcc 7.2.0 ##### Detailed description Using `cv::connectedComponentsWithStats` crashes in gdb as well as valgrind, and triggers an assertion if OpenCV is compiled with debugging information. valgrind says: ``` ==12872== Invalid read of size 4 ==12872== at 0x7650D7D: cv::connectedcomponents::CCStatsOp::operator()(int, int, int) (connectedcomponents.cpp:137) ==12872== by 0x7669E1F: cv::connectedcomponents::LabelingGrana<unsigned short, unsigned char, cv::connectedcomponents::CCStatsOp>::operator()(cv::Mat const&, cv::Mat&, int, cv::connectedcomponents::CCStatsOp&) (connectedcomponents.cpp:3764) ==12872== by 0x76520D3: int cv::connectedComponents_sub1<cv::connectedcomponents::CCStatsOp>(cv::Mat const&, cv::Mat&, int, int, cv::connectedcomponents::CCStatsOp&) (connectedcomponents.cpp:3977) ==12872== by 0x76516AF: cv::connectedComponentsWithStats(cv::_InputArray const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::_OutputArray const&, int, int, int) (connectedcomponents.cpp:4032) ==12872== by 0x7651563: cv::connectedComponentsWithStats(cv::_InputArray const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::_OutputArray const&, int, int) (connectedcomponents.cpp:4021) ``` The assertion is: ``` OpenCV Error: Assertion failed ((unsigned)i0 < (unsigned)size.p[0]) in at, file modules/core/include/opencv2/core/mat.inl.hpp, line 1096 ``` I then added some more debugging output: ```diff --- a/modules/core/include/opencv2/core/mat.inl.hpp +++ b/modules/core/include/opencv2/core/mat.inl.hpp @@ -54,6 +54,8 @@ #pragma warning( disable: 4127 ) #endif +#include <iostream> + namespace cv { CV__DEBUG_NS_BEGIN @@ -1091,6 +1093,7 @@ _Tp& Mat::at(int i0, int i1) { CV_DbgAssert(dims <= 2); CV_DbgAssert(data); + std::cout << "i0, size: " << (unsigned)i0 << " - " << (unsigned)size.p[0] << std::endl; CV_DbgAssert((unsigned)i0 < 
(unsigned)size.p[0]); CV_DbgAssert((unsigned)(i1 * DataType<_Tp>::channels) < (unsigned)(size.p[1] * channels())); CV_DbgAssert(CV_ELEM_SIZE1(traits::Depth<_Tp>::value) == elemSize1()); @@ -1102,6 +1105,7 @@ const _Tp& Mat::at(int i0, int i1) const { CV_DbgAssert(dims <= 2); CV_DbgAssert(data); + std::cout << "i0, size: " << (unsigned)i0 << " - " << (unsigned)size.p[0] << std::endl; CV_DbgAssert((unsigned)i0 < (unsigned)size.p[0]); CV_DbgAssert((unsigned)(i1 * DataType<_Tp>::channels) < (unsigned)(size.p[1] * channels())); CV_DbgAssert(CV_ELEM_SIZE1(traits::Depth<_Tp>::value) == elemSize1()); diff --git a/modules/imgproc/src/connectedcomponents.cpp b/modules/imgproc/src/connectedcomponents.cpp index 0ad7f6780..6220d276a 100644 --- a/modules/imgproc/src/connectedcomponents.cpp +++ b/modules/imgproc/src/connectedcomponents.cpp @@ -48,6 +48,8 @@ #include "precomp.hpp" #include <vector> +#include <iostream> + namespace cv{ namespace connectedcomponents{ @@ -131,6 +133,7 @@ namespace cv{ } void operator()(int r, int c, int l){ + std::cout << "CV: r=" << r << " c=" << c << " l=" << l << std::endl; int *row =& statsv.at<int>(l, 0); row[CC_STAT_LEFT] = MIN(row[CC_STAT_LEFT], c); row[CC_STAT_WIDTH] = MAX(row[CC_STAT_WIDTH], c); ``` and got: ``` CV: r=226 c=1247 l=2327 i0, size: 2327 - 1692 OpenCV Error: Assertion failed .... ``` Clearly, some access is going quite wrong here. 
##### Steps to reproduce The image I am using to trigger this bug is attached (it is shown all black, but that is not true - it contains 0 and 1 as colors), the code to execute is this: ```.cpp #include <opencv2/core.hpp> #include <opencv2/highgui.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <iostream> int main(int argc, char** argv) { cv::Mat image = cv::imread("bin-8U.png", cv::IMREAD_GRAYSCALE); // after filling with 0 and 1, convert to 8bit for connectedComponentsWithStats image.convertTo(image, CV_8UC1); cv::Mat1w labels; cv::Mat1i stats; cv::Mat centroids; int numLabels = cv::connectedComponentsWithStats(image, labels, stats, centroids, 8, CV_16U); return 0; } ``` ![bin-8u](https://user-images.githubusercontent.com/8768950/34251124-4e2ccd94-e63f-11e7-9660-25e9fe674eca.png)
bug,category: imgproc
low
Critical
283,864,054
angular
EventEmitter: Gets wrongly compiled by Closure Compiler
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [X] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> The EventEmitter class gets wrongly compiled by the latest version of Google Closure Compiler (20171203.0.0), resulting in a `Cannot read property 'prototype' of undefined` error. Code emitted by 20171203.0.0 (does not work): ```js $$jscomp$inherits$$($EventEmitter$$module$node_modules$$angular$core$esm2015$core$$, $module$node_modules$rxjs$Subject$$.$Subject$); ``` The problem itself is that the `$Subject$` property is undefined. Code emitted by 20171112.0.0 (does work): ```js $$jscomp$inherits$$($EventEmitter$$module$node_modules$$angular$core$esm2015$core$$, $module$node_modules$rxjs$Subject$Subject$$); ``` As you can see, the difference is that `Subject` is not a property but the variable itself. https://github.com/angular/angular/blob/master/packages/core/src/event_emitter.ts#L58 I am not 100% sure if this is an Angular, RxJS or Closure Compiler issue, but hopefully someone can shine some light on it. ## Expected behavior <!-- Describe what the desired behavior would be. --> To get compiled correctly. 
## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Create a new project with Angular CLI and change the project to use EventEmitter. Compile with Google Closure Compiler (Advanced and dependency_mode=STRICT). ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> As a Google product, Angular should compile correctly with Google Closure Compiler. ## Environment <pre><code> Angular version: 5.1.2 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [ ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: v8.9.1<!-- run `node --version` --> - Platform: Windows Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
area: packaging,freq2: medium,area: compiler,type: use-case,P3
low
Critical
283,871,408
rust
Tracking issue for future-incompatibility lint `tyvar_behind_raw_pointer`
This is the **summary issue** for the `tyvar_behind_raw_pointer` future-compatibility warning and other related errors. The goal of this page is to describe why this change was made and how you can fix code that is affected by it. It also provides a place to ask questions or register a complaint if you feel the change should not be made. For more information on the policy around future-compatibility warnings, see our [breaking change policy guidelines](https://github.com/rust-lang/rfcs/blob/master/text/1122-language-semver.md). **What is the warning for?** This warning occurs when you call a method on a value whose type is a raw pointer to an unknown type (aka `*const _` or `*mut _`), or a type that dereferences to one of these. The most common case for this is casts, for example: ```Rust let s = libc::getenv(k.as_ptr()) as *const _; s.is_null() ``` This can be fixed by giving a type annotation to the cast: ```Rust let s = libc::getenv(k.as_ptr()) as *const libc::c_char; s.is_null() ``` The reason that this is going to become an error is that we are working on enabling arbitrary self types. Using that, you should be able to write functions such as: ```Rust #![feature(arbitrary_self_types)] struct MyType; impl MyType { fn is_null(self: *mut Self) -> bool { println!("?"); true } } ``` While that case is obviously silly, we can't prevent a sibling crate from implementing it, and such a function would make a call to `s.is_null()` ambiguous when the type of `s` is a `*mut _`. Therefore, to avoid that potential breakage, you have to annotate the type of your raw pointer before the point of the method call. After you fix these warnings, if you are working with raw pointers on nightly, you might want to check out `#![feature(arbitrary_self_types)]` yourself! 
It even works with trait objects: ```Rust #![feature(arbitrary_self_types, dyn_trait)] trait Foo { fn example(self: *mut Self); } struct S; impl Foo for S { fn example(self: *mut Self) { println!("Hello, I'm at {:?}", self); } } fn foo(foo: *mut dyn Foo) { foo.example(); } fn main() { // This is 100% safe and not UB, even though I am calling `foo` // with a null pointer - this is a *raw* null pointer, and these are // always ok. foo(0 as *mut S); } ``` While I'm at it, `arbitrary_self_types` also works for smart pointers, such as `Rc<T>` or `Arc<T>` (however, unfortunately we still have not figured out the best way to make it work with smart pointers to trait objects, so you can't yet create `dyn Bar` trait objects. We *are* planning on making it eventually work, so stay tuned!). ```Rust #![feature(arbitrary_self_types)] use std::rc::Rc; trait Bar { fn example(self: &Rc<Self>); } struct S; impl Bar for S { fn example(self: &Rc<Self>) { println!("Hi I'm called on an Rc."); let _x = self.clone(); // You can call Rc::clone on Self! } } fn main() { Rc::new(S).example(); } ``` **When will this warning become a hard error?** At the beginning of each 6-week release cycle, the Rust compiler team will review the set of outstanding future compatibility warnings and nominate some of them for **Final Comment Period**. Toward the end of the cycle, we will review any comments and make a final determination whether to convert the warning into a hard error or remove it entirely.
A-lints,T-lang,T-compiler,C-future-incompatibility,C-tracking-issue
medium
Critical
283,952,200
angular
property binding syntax: [label] instead of this [label]="label"
## I'm submitting a... <pre><code> [ X] Feature request </code></pre> ## Current behavior ``` @Component({ template: ' <my-other [label]="label"></my-other>' //... }) export class TestComponent { label = 'hello'; } ``` ## Expected behavior Having to only type this ``` <my-other [label]></my-other> ``` Or even better with a self closing tag ``` <my-other [label] /> ``` I assume this is not yet the case because angular wants to be close to custom elements spec. Nevertheless that'd be great.
feature,area: core,core: binding & interpolation,feature: under consideration
medium
Major
283,953,179
TypeScript
Typescript can't infer types when using Proxy
**Code** ```javascript let obj = { prop1: function () { }, prop2: 'hello', } let prox = new Proxy(obj, { get: function (target, name) { return 5; } }); prox.prop1. ``` **Expected behavior:** I would expect that when I type ```prox.prop1.```, I would get typescript suggestions for ```Number.prototype```, but instead, I get suggestions for ```Function.prototype```. ```prox.prop1``` will (according to typescript) still be callable as a function, but at runtime, it will clearly be a number and will throw an exception. TypeScript should statically evaluate the proxy traps and determine the type of the value being returned, in order to offer proper IntelliSense.
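For contrast, a rough Python analogue of the same limitation (the `Prox` class below is ours, not from the issue): a static checker faces the identical problem with a catch-all `__getattr__`, whose return type is invisible in the declared shape of the wrapped object.

```python
class Prox:
    """Rough analogue of the JS Proxy above: a catch-all `get` trap that
    returns 5 for every attribute access, regardless of the wrapped object."""

    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Called for any attribute not found normally -- like the `get` trap.
        return 5


prox = Prox({"prop1": lambda: None, "prop2": "hello"})
# A checker that doesn't evaluate __getattr__ would report prop1 as the
# wrapped function type, but at runtime it is the number 5:
print(prox.prop1)  # → 5
```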
Bug,Help Wanted,Domain: lib.d.ts
high
Critical
284,016,420
rust
Cannot generate unit struct with a macro
It doesn't seem possible to generate a unit struct declaration from a macro in either the stable or nightly version of Rust. [A simplified playground sample](https://play.rust-lang.org/?gist=0dbd7527efdae183f8bfc3fadb28c2d7&version=stable): ```rust macro_rules! def_struct { ($name:ident $($body:tt)*) => { pub struct $name $($body)* }; } def_struct!(X {}); // compiles def_struct!(Y); // doesn't fn main() {} ``` When trying to expand the second macro call, Rust complains: ```rust error: expected `where`, `{`, `(`, or `;` after struct name, found `<eof>` --> src/main.rs:3:16 | 3 | pub struct $name $($body)* | ^^^^^ ... 8 | def_struct!(Y); // doesn't | --------------- in this macro invocation ``` If I add a semicolon right after the struct definition, as it asks, the unit macro call starts compiling fine: ```rust macro_rules! def_struct { ($name:ident $($body:tt)*) => { pub struct $name $($body)*; }; } def_struct!(Y); // compiles def_struct!(X {}); // doesn't fn main() {} ``` but the other one fails instead: ```rust error: macro expansion ignores token `;` and any following --> src/main.rs:3:31 | 3 | pub struct $name $($body)*; | ^ | note: caused by the macro expansion here; the usage of `def_struct!` is likely invalid in item context --> src/main.rs:8:1 | 8 | def_struct!(X {}); // compiles | ^^^^^^^^^^^^^^^^^^ ``` The two errors seem to contradict each other here. The only obvious workaround I could find to make both cases work is to split the macro definition in two, one arm with a semicolon and one without: ```rust macro_rules! def_struct { ($name:ident) => { pub struct $name; }; ($name:ident $body:tt) => { pub struct $name $body } } def_struct!(X {}); // compiles def_struct!(Y); // compiles fn main() {} ``` But this is not very clean and gets even harder in more complicated macros. Is there any reason for this behaviour, or is it a bug? Would it be possible to allow macros to generate unit structs that are not followed by a semicolon, just like any other struct?
C-enhancement,A-macros,T-lang
low
Critical
284,145,165
TypeScript
Inference problem
I only wish I could better describe the problem. I'm trying to properly declare the type for `all()`. Here's how it's used: ``` Promise.resolve('kromid') .then(all(identity, identity)) .then(([a, b]) => a.splita); ``` Problem is that `a` and `b` are inferred as `any` instead of `string`, so TS doesn't fail with the expected: > Property 'splita' does not exist on type 'string'. Did you mean 'split'? Here's the implementation: ``` function all<T1, Param>(a1: Res<Param, T1>): (p: Param) => Promise<[T1]>; function all<T1, T2, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>): (p: Param) => Promise<[T1, T2]>; function all<T1, T2, T3, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>): (p: Param) => Promise<[T1, T2, T3]>; function all<T1, T2, T3, T4, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>, a4: Res<Param, T4>): (p: Param) => Promise<[T1, T2, T3, T4]>; function all<T1, T2, T3, T4, T5, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>, a4: Res<Param, T4>, a5: Res<Param, T5>): (p: Param) => Promise<[T1, T2, T3, T4, T5]>; function all<T1, T2, T3, T4, T5, T6, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>, a4: Res<Param, T4>, a5: Res<Param, T5>, a6: Res<Param, T6>): (p: Param) => Promise<[T1, T2, T3, T4, T5, T6]>; function all<T1, T2, T3, T4, T5, T6, T7, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>, a4: Res<Param, T4>, a5: Res<Param, T5>, a6: Res<Param, T6>, a7: Res<Param, T7>): (p: Param) => Promise<[T1, T2, T3, T4, T5, T6, T7]>; function all<T1, T2, T3, T4, T5, T6, T7, T8, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>, a4: Res<Param, T4>, a5: Res<Param, T5>, a6: Res<Param, T6>, a7: Res<Param, T7>, a8: Res<Param, T8>): (p: Param) => Promise<[T1, T2, T3, T4, T5, T6, T7, T8]>; function all<T1, T2, T3, T4, T5, T6, T7, T8, T9, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>, a4: Res<Param, T4>, a5: Res<Param, T5>, a6: Res<Param, T6>, a7: Res<Param, 
T7>, a8: Res<Param, T8>, a9: Res<Param, T9>): (p: Param) => Promise<[T1, T2, T3, T4, T5, T6, T7, T8, T9]>; function all<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, Param>(a1: Res<Param, T1>, a2: Res<Param, T2>, a3: Res<Param, T3>, a4: Res<Param, T4>, a5: Res<Param, T5>, a6: Res<Param, T6>, a7: Res<Param, T7>, a8: Res<Param, T8>, a9: Res<Param, T9>, a10: Res<Param, T10>): (p: Param) => Promise<[T1, T2, T3, T4, T5, T6, T7, T8, T9, T10]>; function all<Param>(...values: Res<Param, any>[]): (p: Param) => Promise<any[]>; function all<Param>(...values: Res<Param, any>[]): (p: Param) => Promise<any[]> { return param => Promise.all(values.map(obj => obj.apply ? obj(param) : obj)); } type Res<I, O> = ((i: I) => O | Promise<O>) | O | Promise<O> function identity<T>(a: T): T { return a; } ``` There already is an effort here: https://stackoverflow.com/q/47934804/592641. But doesn't cut it.
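Purely as an illustration of the runtime behaviour the overloads are describing, here is a Python/asyncio sketch of the same combinator (the name `all_` and its shape are ours, not from the issue): each argument may be a plain value, an awaitable, or a function of the pipeline input, and the results come back as an ordered tuple.

```python
import asyncio


def all_(*resolvables):
    """Sketch of the `all()` combinator described above: returns a function
    of the pipeline input that resolves every argument and gathers results."""
    async def run(param):
        results = []
        for r in resolvables:
            value = r(param) if callable(r) else r  # Res may be a function...
            if asyncio.iscoroutine(value):          # ...or an awaitable
                value = await value
            results.append(value)
        return tuple(results)
    return run


def identity(x):
    return x


result = asyncio.run(all_(identity, identity)("kromid"))
print(result)  # → ('kromid', 'kromid')
```

In a typed setting, the per-arity overloads exist precisely so that each element of this tuple keeps its own type instead of collapsing to `any`.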
Bug
low
Minor
284,146,629
TypeScript
tsc includes previous output in result when allowJs is enabled and "exclude" is non-empty
**TypeScript Version:** 2.7.0-dev.20171222 **Code** For easy access, clone this gist: https://gist.github.com/WasabiFan/6922d6eb6224945ac809c216fdc37089 **test.ts:** ```ts // Source code here ``` **tsconfig.json:** ```json { "compilerOptions": { "target": "es5", "module": "commonjs", "outDir": "out", "allowJs": true }, "exclude": [ "node_modules" ] } ``` Removing the `exclude` block or disabling `allowJs` avoids the issue. Note that this _is_ an easy state to get into: in my case, I generated a VSCode extension with their official templates and then enabled `allowJs`. I didn't realize what was going on until my PC ran out of RAM, as it automatically ran the watch task and the depth grew every time I saved. **Expected behavior:** I can compile the project with `tsc` as many times as I'd like. **Actual behavior:** ``` PS D:\...> git clone https://gist.github.com/WasabiFan/6922d6eb6224945ac809c216fdc37089 tsc-recursion-demo-gist Cloning into 'tsc-recursion-demo-gist'... remote: Counting objects: 7, done. remote: Compressing objects: 100% (5/5), done. remote: Total 7 (delta 1), reused 0 (delta 0), pack-reused 0 Unpacking objects: 100% (7/7), done. PS D:\...> cd .\tsc-recursion-demo-gist\ PS D:\...\tsc-recursion-demo-gist> tsc -p . PS D:\...\tsc-recursion-demo-gist> tsc -p . error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/test.js' because it would overwrite input file. PS D:\...\tsc-recursion-demo-gist> tsc -p . error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/test.js' because it would overwrite input file. PS D:\...\tsc-recursion-demo-gist> tsc -p . error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/test.js' because it would overwrite input file. 
error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/test.js' because it would overwrite input file. PS D:\...\tsc-recursion-demo-gist> tsc -p . error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/test.js' because it would overwrite input file. PS D:\...\tsc-recursion-demo-gist> tsc -p . error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/out/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/out/test.js' because it would overwrite input file. error TS5055: Cannot write file 'D:/.../tsc-recursion-demo-gist/out/test.js' because it would overwrite input file. ``` <img src="https://user-images.githubusercontent.com/3310349/34297314-513d601a-e6cc-11e7-895c-e158930e2936.png" width="400" /> The fix (and the best option) is to exclude the output directory. But given that existing templates _don't_ do that, I think this is a significant issue. I don't understand why this doesn't happen when the `exclude` block is removed; I assume there's an internal safety to prevent this problem that's overridden.
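The runaway growth is easy to model: with `allowJs` on and `out` not in `exclude`, every `.js` file emitted by one compile becomes an input to the next. A toy simulation (hypothetical paths, not tsc itself) reproduces the one-level-deeper-per-run pattern seen in the errors above:

```python
def compile_once(files):
    """One simulated tsc run: every input (.ts or already-emitted .js) is
    re-emitted under out/, and -- since out/ isn't excluded -- the emitted
    files join the input set for the next run."""
    emitted = {"out/" + f.replace(".ts", ".js") for f in files}
    return files | emitted


files = {"test.ts"}
for _ in range(4):
    files = compile_once(files)

depth = max(f.count("out/") for f in files)
print(depth)  # → 4: nesting grows by one per run, matching the error output
```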
Suggestion,In Discussion
low
Critical
284,165,409
TypeScript
Unions without discriminant properties do not perform excess property checking
<!-- BUGS: Please use this template. --> <!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript --> <!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.7.0-dev.201xxxxx **Code** ```ts // An object to hold all the possible options type AllOptions = { startDate: Date endDate: Date author: number } // Any combination of startDate, endDate can be used type DateOptions = | Pick<AllOptions, 'startDate'> | Pick<AllOptions, 'endDate'> | Pick<AllOptions, 'startDate' | 'endDate'> type AuthorOptions = Pick<AllOptions, 'author'> type AllowedOptions = DateOptions | AuthorOptions const options: AllowedOptions = { startDate: new Date(), author: 1 } ``` **Expected behavior:** An error that `options` cannot be coerced into type `AllowedOptions` because `startDate` and `author` cannot appear in the same object. **Actual behavior:** It compiles fine.
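Until the compiler performs such a check, the constraint can only be enforced at runtime. A hedged Python sketch of the validation the report wants done statically (function name and key sets are ours, chosen to mirror the `DateOptions` / `AuthorOptions` arms):

```python
def is_allowed_options(opts):
    """Runtime version of the desired static check: date options and
    author options are separate union arms and must not be mixed."""
    date_keys = {"startDate", "endDate"} & opts.keys()
    author_keys = {"author"} & opts.keys()
    # Only one arm of the union may contribute keys to a single object.
    return not (date_keys and author_keys)


print(is_allowed_options({"startDate": "2017-12-22", "author": 1}))           # → False
print(is_allowed_options({"startDate": "2017-12-22", "endDate": "2017-12-23"}))  # → True
```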
Suggestion,Needs Proposal
high
Critical
284,165,892
opencv
test_java: org.opencv.test.calib3d sporadic failure
[Linux32](http://pullrequest.opencv.org/buildbot/builders/master_noOCL-lin32/builds/237): ``` testSolvePnPListOfPoint3ListOfPointMatMatMatMat Max difference between expected and actiual Mats is 0.008481956505988819, that bigger than 0.001 junit.framework.AssertionFailedError: Max difference between expected and actiual Mats is 0.008481956505988819, that bigger than 0.0001 at org.opencv.test.OpenCVTestCase.compareMats(Unknown Source) at org.opencv.test.OpenCVTestCase.assertMatEqual(Unknown Source) at org.opencv.test.calib3d.Calib3dTest.testSolvePnPListOfPoint3ListOfPointMatMatMatMat(Unknown Source) at org.opencv.test.OpenCVTestCase.runTest(Unknown Source) at org.opencv.test.OpenCVTestCase.runBare(Unknown Source) ```
bug,test,category: calib3d,category: java bindings
low
Critical
284,174,366
rust
Enforce informal properties of traits such as `PartialEq`
The trait `PartialEq` represents a partial equivalence — that is, a relation that is symmetric and transitive. However, Rust doesn't currently enforce these properties in any way. This means we get issues like this: ```rust struct A; struct B; impl PartialEq<B> for A { fn eq(&self, _other: &B) -> bool { true } } fn main() { let a = A {}; let b = B {}; a == b; // Works b == a; // Error (B does not implement PartialEq<A>) } ``` This is confusing, but it's usually not so much of an issue in user code, because it's easy to flip the operands. However, when attempting to write generic functions over these traits, you run into problems (for example in https://github.com/rust-lang/rust/pull/46934). At the very least there should be a lint warning/error for this. It'd be nice to have a generic solution for properties of traits, though that could come later. It'd be even nicer if the symmetric case for `PartialEq`, for instance, could be automatically implemented by Rust, though this could require quite a bit more machinery.
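For comparison, Python's equality protocol gets the symmetric case for free through reflected dispatch: when the left operand's `__eq__` returns `NotImplemented`, the interpreter retries with the right operand's. A minimal sketch of the `A`/`B` example above in that model (illustration of the contrast, not a proposal):

```python
class B:
    pass


class A:
    def __eq__(self, other):
        # Analogue of `impl PartialEq<B> for A`.
        if isinstance(other, B):
            return True
        return NotImplemented


a, b = A(), B()
# a == b calls A.__eq__ directly; b == a first tries B.__eq__ (the object
# default, which yields NotImplemented) and then falls back to A.__eq__.
print(a == b, b == a)  # → True True
```

This is roughly the automatic symmetric implementation the last paragraph wishes for, at the cost of a runtime dispatch step Rust's static model doesn't have.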
A-lints,A-trait-system,T-lang,C-feature-request
low
Critical
284,208,088
rust
test_mul_add fails in debug mode for arm-unknown-linux-gnueabi
The following test, extracted from libstd: ```rust #[cfg(test)] mod tests { macro_rules! assert_approx_eq { ($a:expr, $b:expr) => ({ let (a, b) = (&$a, &$b); assert!((*a - *b).abs() < 1.0e-6, "{} is not approximately equal to {}", *a, *b); }) } #[test] fn f64_test_mul_add() { assert_approx_eq!((-12.3f64).mul_add(-4.5, -6.7), 48.65); assert_approx_eq!(0.0f64.mul_add(8.9, 1.2), 1.2); } #[test] fn f32_test_mul_add() { assert_approx_eq!((-12.3f32).mul_add(-4.5, -6.7), 48.65); assert_approx_eq!(0.0f32.mul_add(8.9, 1.2), 1.2); } } ``` fails on a Raspberry Pi 3 and in qemu when compiled in debug mode for the target `arm-unknown-linux-gnueabi`. It works if compiled in release mode. `rustc 1.24.0-nightly (250b49205 2017-12-21)` was used. To test using [cross](https://github.com/japaric/cross), create a project and execute `cross test --target arm-unknown-linux-gnueabi`. cross executes the test using qemu-arm, but I don't think it is a problem with qemu because the test also fails when running on a Raspberry Pi 3. Also, the test works if compiled in release mode. qemu error: ``` ---- tests::f32_test_mul_add stdout ---- thread 'tests::f32_test_mul_add' panicked at '38.4 is not approximately equal to 1.2', src/lib.rs:20:8 ---- tests::f64_test_mul_add stdout ---- thread 'tests::f64_test_mul_add' panicked at '38.4 is not approximately equal to 1.2', src/lib.rs:14:8 ``` raspberry pi error: ``` ---- tests::f32_test_mul_add stdout ---- thread 'tests::f32_test_mul_add' panicked at '-12.3 is not approximately equal to 48.65', src/lib.rs:19:8 ---- tests::f64_test_mul_add stdout ---- thread 'tests::f64_test_mul_add' panicked at '0 is not approximately equal to 48.65', src/lib.rs:13:8 ```
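For reference, the values the test expects follow from plain `a*b + c` arithmetic. The Python sketch below is a non-fused reference (it ignores the single-rounding semantics of a real `fma`, but that difference is far below the test's 1e-6 tolerance), which makes clear just how wrong the failing target's results (38.4, -12.3, 0) are:

```python
def mul_add(a, b, c):
    """Plain (non-fused) reference for what mul_add must return."""
    return a * b + c


# The two cases from the failing libstd test:
assert abs(mul_add(-12.3, -4.5, -6.7) - 48.65) < 1e-6  # -12.3 * -4.5 = 55.35
assert abs(mul_add(0.0, 8.9, 1.2) - 1.2) < 1e-6
print("reference values check out")
```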
O-Arm,T-libs-api,C-bug
low
Critical
284,283,767
rust
Highlight when APIs panic in rustdoc
When an API specifically has a "will panic on X condition" guarantee (e.g. indexing APIs, unwrap, expect, perhaps allocating APIs) it would be nice if the stdlib could highlight these the way we do `unsafe` methods. https://twitter.com/0x424c41434b/status/944316751919140865 Worth prototyping this behind a feature flag for now, and then discussing whether we should be stabilizing it (or keeping it forever unstable for the stdlib). cc @rust-lang/docs (I'm mentoring 0x424c41434b on implementing this, even if we eventually decide not to do this)
T-rustdoc,C-enhancement
medium
Major
284,293,168
vue-element-admin
Speaking English
Hi everyone, I would like to say that it would be awesome if issues had an English translation alongside their original (Chinese) descriptions. It has two benefits: we (non-Chinese speakers/readers) can understand the problem and read the solution, and we may also be able to help overcome the issue. So it could be a win-win situation for people from other language communities. I am not a native English speaker either, but these kinds of platforms call for a common language (I am clearly not forcing anyone). The documentation is as important as the code itself. We all know part of Vue's attraction is due to its clear documentation. Thank you. PS: I would like to thank @PanJiaChen and all other contributors of this awesome project.
in plan
medium
Major
284,309,770
pytorch
torch.cuda.device_count() returns 1 using 4 TitanX setup.
nvidia-smi: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 384.66 Driver Version: 384.66 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 108... On | 00000000:02:00.0 Off | N/A | | 20% 27C P8 16W / 250W | 0MiB / 11172MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 GeForce GTX 108... On | 00000000:03:00.0 Off | N/A | | 20% 30C P8 18W / 250W | 0MiB / 11172MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 2 GeForce GTX 108... On | 00000000:82:00.0 Off | N/A | | 20% 25C P8 17W / 250W | 0MiB / 11172MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 3 GeForce GTX 108... 
On | 00000000:83:00.0 Off | N/A | | 20% 26C P8 16W / 250W | 0MiB / 11172MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ``` When I start 3 python instances, each of which selects a different GPU, and first print the number of devices using torch.cuda.device_count(), I get: ``` #devices: 1 File "/home/xxxxx/code/xxxxx/xxxxx/xxxxx/xxxxx.py", line 38, in main File "/home/xxxxx/env/lib/python3.6/site-packages/torch/cuda/__init__.py", line 223, in set_device torch._C._cuda_setDevice(device) RuntimeError: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:88 THCudaCheck FAIL file=torch/csrc/cuda/Module.cpp line=88 error=10 : invalid device ordinal ``` When I run the same code on a node with 4 GTX1080Ti GPUs, pytorch correctly detects all graphics cards and the code runs properly. Is this a setup problem? Or is this an issue related to pytorch? I would like to add that when I select a node with 4x GTX980Ti, pytorch also only detects 1 device. cc @ngimel
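One thing worth ruling out before blaming PyTorch (a diagnostic sketch, not from the issue): cluster schedulers often export `CUDA_VISIBLE_DEVICES` per job, which silently masks devices so that `torch.cuda.device_count()` sees fewer GPUs than `nvidia-smi` does, and makes `set_device(n)` fail with "invalid device ordinal" for the masked indices:

```python
import os


def visible_device_count(env=None):
    """How many devices the CUDA runtime would expose to this process,
    judging only by CUDA_VISIBLE_DEVICES (None means "no mask set")."""
    env = os.environ if env is None else env
    value = env.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # unrestricted: all 4 GPUs should be visible
    value = value.strip()
    return 0 if not value else len(value.split(","))


# A scheduler exporting CUDA_VISIBLE_DEVICES=2 would explain device_count()
# returning 1 while nvidia-smi shows four GPUs:
print(visible_device_count({"CUDA_VISIBLE_DEVICES": "2"}))        # → 1
print(visible_device_count({"CUDA_VISIBLE_DEVICES": "0,1,2,3"}))  # → 4
```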
needs reproduction,module: cuda,triaged
low
Critical
284,311,726
go
net: document that DialIP can only be used with WriteMsgIP of Connection-less protocols
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? `go version go1.9.2 linux/amd64` ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? ```sh-session go env GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/ilyaigpetrov/go" GORACE="" GOROOT="/usr/lib/go-1.9" GOTOOLDIR="/usr/lib/go-1.9/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build418566627=/tmp/go-build -gno-record-gcc-switches" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ``` ### What did you do? `WriteMsgIP` can't be used after `DialIP` to TCP ```go package main import ( "net" "fmt" ) func main() { loIP := net.IPv4(127,0,0,1) // Accept TCP connections on localohst listener, err := net.ListenTCP("tcp", &net.TCPAddr{ IP: loIP , Port: 0}) if err != nil { panic(err) } go func(){ for { _, err := listener.Accept() if err != nil { fmt.Println(err) } } }() // Dial localhost ipconn, err := net.DialIP("ip:tcp", nil, &net.IPAddr{IP: loIP}) if err != nil { panic(err) } _, _, err = ipconn.WriteMsgIP([]byte{11,22,33}, []byte{}, &net.IPAddr{ IP: loIP }) if err != nil { panic(err) } fmt.Println("Heppy exit!") } ``` ### What did you expect to see? It doesn't work. Error appears: `write ip 127.0.0.1->127.0.0.1: use of WriteTo with pre-connected connection`. It seems `DialIP` can't be used with `WriteMsgIP` and __I want this to be documented__. ### What did you see instead? No documentation.
Documentation,NeedsInvestigation
low
Critical
284,337,016
godot
Editable subobjects issue
**Godot version:** <!-- If thirdparty or self-compiled, specify the build date or commit hash. --> Godot 3.0 Beta 2 **OS/device including version:** <!-- If graphics related, specify also GPU model and drivers. --> Windows 10 64bit Intel i7 6700k MSI GeForce GTX 1060 (6GB) 16 GB RAM DDR4 Z170 Gaming Pro (Mainboard) **Issue description:** <!-- What happened, and what was expected. --> If an included (linked) scene in the editor node tree is set to "editable subobjects" and something is changed (for example an AnimatedSprite's animation), and afterwards the included (linked) scene is set back to NOT "editable subobjects", then the editor keeps the changed animation (if not restarted), while in-game the default, unchanged animation is shown. **Steps to reproduce:** 1. Create a new scene and use a "KinematicBody2D" as root node 2. Add an "AnimatedSprite" node as a child of the "KinematicBody2D" 3. Add two animations (simply two different pictures, each with an animation name) 4. Save that scene, for example as "charakter" or something 5. Create another new scene and use a "Node2D" as root node 6. Add the created "charakter" scene twice as a child of the current "Node2D" 7. Activate the "editable subobjects" option on both included charakter scenes (right-click on the charakter scene in the node tree) 8. Change the default animation of the "AnimatedSprite" to the other animation (to see a difference) 9. Deactivate "editable subobjects" on only one included scene 10. The changes are still there on both nodes in the editor, but not in-game when the scene is started **Minimal reproduction project:** <!-- Optional but greatly speeds up debugging. You can drag and drop a zip archive to upload it. --> [EditableSubobjectIssue.zip](https://github.com/godotengine/godot/files/1584559/EditableSubobjectIssue.zip) - [ ] I searched the existing [GitHub issues](https://github.com/godotengine/godot/issues?utf8=%E2%9C%93&q=is%3Aissue+) for potential duplicates.
bug,topic:editor
low
Critical
284,383,756
rust
Function pointer docs may need updating
We may need documentation updated for function pointers when used in enums. I won't pretend I understand half of what's written here, but here's the log from IRC (#rust_beginners) where we debugged this issue: 1. [Rust Playground (`Sharlin`)](https://play.rust-lang.org/?gist=d25cb785bee399efe590293ab2c1a44a&version=stable) 2. [Rust Playground (`spear2`)](https://play.rust-lang.org/?gist=0cfe1a8e458fa50d5223054f7ff7298c&version=stable) ``` Sharlin :: vermiculus: okay... wtf [1] vermiculus :: Sharlin: Right!? Sharlin :: what makes fns with ref arguments not implement Ord or anything vermiculus :: I 100% need the argument to be mutable, but I can't have it be mutable without being a ref, and apparently it's choking on the ref spear2 :: maybe it has to do with the reference having an implicit lifetime? vermiculus :: ...Ok the '100%' is not completely true, but it's the cleanest option Sharlin :: spear2: indeed it seems to be spear2 :: [2] vermiculus :: spear2: now there's a topic I still don't understand half or less Sharlin :: this is definitely something that should be documented in the fn docs :D vermiculus feels useful :D spear2 :: vermiculus: the way i understand it, every reference has an associated lifetime, but most of the time it can be 'inferred' and you don't see it Sharlin :: so this is about that higher ranked trait bound thingie vermiculus :: spear2: that's the extent of my understanding as well; I get tripped up trying to 'infer' the lifetime myself (effectively seeing what the compiler would see) Sharlin :: `Bar(fn(&i32))` is actually `Bar(for<'a> fn(&'a i32))` Sharlin :: which means the fn is not an ordinary function pointer but a higher something vermiculus :: spooky vermiculus :: that makes me wonder if this behavior is intentional or if there's a more explicit means to express this idea vermiculus :: throwing that explicit lifetime parameter is making the rest of the compiler go a little nutty -- wanting lifetime parameters for *everything* :( 
Sharlin :: yeah Sharlin :: so it goes ```
C-enhancement,P-medium,T-compiler
medium
Critical
284,394,195
rust
Avoid path to macro-generated extern crate in error messages
```rust use serde::Deserialize; // Expands to: // // const _IMPL_DESERIALIZE_FOR_S: () = { // extern crate serde as _serde; // impl<'de> _serde::Deserialize<'de> for S { /* ... */ } // } #[derive(Deserialize)] struct S; fn main() { // Requires S: serde::Serialize. serde_json::to_string(&S); } ``` The message as of rustc 1.24.0-nightly (c284f8807 2017-12-24): ``` error[E0277]: the trait bound `S: _IMPL_DESERIALIZE_FOR_S::_serde::Serialize` is not satisfied --> src/main.rs:16:5 | 16 | serde_json::to_string(&S); | ^^^^^^^^^^^^^^^^^^^^^ the trait `_IMPL_DESERIALIZE_FOR_S::_serde::Serialize` is not implemented for `S` | = note: required by `serde_json::to_string` ``` In this case it would be more desirable for the error message to refer to `serde::Serialize` rather than `_IMPL_DESERIALIZE_FOR_S::_serde::Serialize`. The extern crate means that the user's Cargo.toml includes the `serde` crate under `[dependencies]`, so showing `serde::Serialize` as the path seems reasonable. I understand that `serde::Serialize` could be ambiguous if the user's crate includes `mod serde` at the top level. For now we could ignore that case and show `serde::Serialize` anyway, or treat it as a special case and not show `serde::Serialize` if they have a `mod serde`. The special case would go away with https://github.com/rust-lang/rfcs/pull/2126 by showing `crate::serde::Serialize`. Fixing this would be valuable in allowing us to reinstate the lint against unused extern crate by addressing the usability issue reported in #44294. Mentioning @pnkfelix who worked on #46112.
C-enhancement,A-diagnostics,T-compiler,WG-diagnostics
low
Critical
284,431,789
rust
rustdoc-tool does not build with --enable-llvm-link-shared
Seems to be missing a `-L` flag: ``` = note: /usr/bin/ld: cannot find -lLLVMX86Disassembler /usr/bin/ld: cannot find -lLLVMX86AsmParser /usr/bin/ld: cannot find -lLLVMX86CodeGen ... ``` Any ideas on fixing this?
A-linkage,T-rustdoc,C-bug
low
Major
284,435,273
youtube-dl
Support request for thoptv.stream
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.12.23*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.12.23** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [x] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- --- ``` ... 
<end of log> ``` --- Website: http://thoptv.stream/ Single video: http://thoptv.stream/live/travelxp-hd-english/ ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: https://www.youtube.com/watch?v=BaW_jenozKc - Single video: https://youtu.be/BaW_jenozKc - Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights. ---
site-support-request
low
Critical
284,441,373
opencv
Access Violation error in cuvidCtxLockCreate when using cuda 9
<!-- If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses. If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute). This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library. --> ##### System information (version) - OpenCV => 3.4.0 - Operating System / Platform => Windows 10 Pro 64 Bit and Windows 10 Educational 64 Bit - Compiler => Visual Studio 2015 - Cuda => 9.0.176 - Nvidia card => GTX 1080 Ti and GeForce 940MX ##### Detailed description I have built OpenCV with Cuda 9 and now when i try to play video using ` cv::cudacodec::VideoReader` i get a Access violation error inside `VideoReaderImpl` on [this line](https://github.com/opencv/opencv/blob/047764f476e429e0ff63f9e23f7ac96f59539356/modules/cudacodec/src/video_reader.cpp#L99) ![error](https://user-images.githubusercontent.com/3811164/34339350-fe9cff6c-e998-11e7-88a8-2487d3a001dd.jpg) But it works fine when i use OpenCV built with Cuda 8.0 ##### Steps to reproduce ```#include <iostream> #include "opencv2/opencv_modules.hpp" #if defined(HAVE_OPENCV_CUDACODEC) #include <opencv2/core.hpp> #include <opencv2/cudacodec.hpp> #include <opencv2/highgui.hpp> int main(int argc, const char* argv[]) { const std::string fname = "rtsp://admin:[email protected]/media/video2"; cv::namedWindow("GPU", cv::WINDOW_NORMAL); cv::cuda::GpuMat d_frame; cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname); for (;;) { if (!d_reader->nextFrame(d_frame)) break; cv::Mat frame; d_frame.download(frame); cv::imshow("GPU", frame); if (cv::waitKey(3) > 0) break; } return 0; } #else int main() { std::cout << "OpenCV was built without CUDA Video decoding support\n" << std::endl; return 0; } #endif ```
bug,priority: low,category: gpu/cuda (contrib)
low
Critical
284,455,413
javascript
Method chaining lets the receiver nominate a next receiver
Re https://github.com/airbnb/javascript#constructors--chaining Code like ```js luke.jump().setHeight(20); ``` enables luke to provide something other than luke itself as the receiver of the `setHeight` message. When this delegation of control is intended, such method chaining is great. OTOH, if the caller assumes it is equivalent to ```js luke.jump(); luke.setHeight(20); ``` then they depend on a contract that luke might not obey. That doesn't necessarily mean the style should not be used, but the hazard needs to be explained clearly whenever this style is encouraged.
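As an illustration of the hazard, here is a minimal hypothetical sketch (the `Jedi` class and its methods are invented for this example, not taken from the style guide): a `jump()` that returns something other than its receiver silently redirects the chained `setHeight(20)`:

```javascript
class Jedi {
  constructor(name) {
    this.name = name;
    this.height = 0;
  }
  jump() {
    // The receiver nominates a *different* next receiver:
    // a throwaway stand-in that absorbs further chained calls.
    return new Jedi(this.name + ' (proxy)');
  }
  setHeight(height) {
    this.height = height;
    return this;
  }
}

const luke = new Jedi('luke');
luke.jump().setHeight(20);     // mutates the proxy, not luke
console.log(luke.height);      // 0 — the caller's assumption was violated

luke.jump();
luke.setHeight(20);            // the unchained form really does reach luke
console.log(luke.height);      // 20
```

Whether `jump()` returning a proxy is a bug or a feature depends on the contract; the point is that chaining makes the caller trust the receiver's choice.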
pull request wanted,editorial
low
Minor
284,458,401
javascript
Distinguish doc-comments from other comments
https://github.com/airbnb/javascript#comments--multiline recommend `/** ... */` without even mentioning `/* ... */`. The former should only be used for doc-comments. The latter should be used for all comments that are not doc-comments, even if they are multiline.
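A minimal sketch of the distinction (the `toCelsius` function is a made-up example): `/** ... */` marks a doc-comment consumed by tooling such as JSDoc, while `/* ... */` is an ordinary multiline comment:

```javascript
/**
 * Doc-comment: describes the public contract of `toCelsius`
 * and is picked up by documentation tooling (e.g. JSDoc).
 *
 * @param {number} fahrenheit - temperature in degrees Fahrenheit
 * @returns {number} temperature in degrees Celsius
 */
function toCelsius(fahrenheit) {
  /*
    Ordinary multiline comment: implementation notes that are
    not part of the documented API surface.
  */
  return (fahrenheit - 32) * 5 / 9;
}
```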
editorial
low
Major
284,459,694
rust
Unreadable error messages on linking failure
This is a very annoying aspect of rustc - if it fails to link a library, it doesn't tell you the path, it doesn't tell you which library it couldn't find, it just spits out a completely unreadable error message and quits. For example: ``` Compiling deflate v0.7.17 error: could not exec the linker `cc`: No such file or directory (os error 2) | = note: "cc" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" "-L" "/home/felix/.rustup/toolchains/stable- x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "/home/felix/Development/srtmtoimage/target/debug/build/kernel32-sys- 23866d7aeb753806/build_script_build-23866d7aeb753806.build_script_build0.rust-cgu.o" "-o" "/home/felix/Development/srtmtoimage/target/debug/build/kernel32-sys- 23866d7aeb753806/build_script_build-23866d7aeb753806" "/home/felix/Development/srtmtoimage/target/debug/build/kernel32-sys- 23866d7aeb753806/build_script_build-23866d7aeb753806.crate.allocator.rust-cgu.o" "-Wl,--gc- sections" "-pie" "-Wl,-z,relro,-z,now" "-nodefaultlibs" "-L" "/home/felix/Development/srtmtoimage/target/debug/deps" "-L" "/home/felix/.rustup/toolchains/stable- x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,-Bstatic" "/home/felix/Development/srtmtoimage/target/debug/deps/libbuild-055e23f8aa405a7b.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux- gnu/lib/libstd-fe0b1b991511fcaa.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux- gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librand-3d7b10e850a67e89.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux- gnu/lib/liballoc_jemalloc-28484309357fd6f1.rlib" "/home/felix/.rustup/toolchains/stable-x86_64- unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc_system-751808ba756769d5.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux- 
gnu/lib/libpanic_unwind-8cb97051d8238386.rlib" "/home/felix/.rustup/toolchains/stable-x86_64- unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libunwind-25cc9b024a02d330.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux- gnu/lib/liblibc-d42e80cee81b06ce.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux- gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc-78c21267a2dc15c1.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux- gnu/lib/libstd_unicode-0e1b544c94586415.rlib" "/home/felix/.rustup/toolchains/stable-x86_64 -unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore-0c5e3d6c117f8c44.rlib" "/home/felix/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux- gnu/lib/libcompiler_builtins-bd7cc5ada1e908e0.rlib" "-Wl,-Bdynamic" "-l" "dl" "-l" "rt" "-l" "pthread" "-l" "pthread" "-l" "gcc_s" "-l" "c" "-l" "m" "-l" "rt" "-l" "pthread" "-l" "util" error: aborting due to previous error error: Could not compile `rayon-core`. warning: build failed, waiting for other jobs to finish... error: Could not compile `kernel32-sys`. warning: build failed, waiting for other jobs to finish... error: build failed ``` Right, now try and read what the actual error is. In this case, the linker `cc` is missing, but the error message is similar if a system library is missing - ex. you tried to use curl-rs but you don't have libcurl installed. In that case, you have to search for the missing library somewhere in the last arguments. Please: - Don't just quit with "No such file or directory" - it's one of the most useless error messages ever. Please include at least the file name where rustc expected the linker / library to be. - If it is a linker error, please don't output the whole paths to the libraries. 
Just tell me which library wasn't found - Maybe include the **folders** that rust searched for the libraries (helpful for debugging $PATH issues) - Don't output the location of every object file / library, it's not helpful. That would be my suggestion for improving cargo. It's simply annoying to figure out which library is missing (99.99% of the time it's a system dependency). Linked from https://github.com/rust-lang/cargo/issues/4863
A-linkage,C-enhancement,A-diagnostics,P-low,E-mentor,T-compiler,WG-diagnostics
medium
Critical
284,488,035
go
x/text/collate: ignores width even if the flag is not specified
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.9 linux/amd64 ### Does this issue reproduce with the latest release? Did not check. ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="/home/sougou/dev/bin" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/sougou/dev" GORACE="" GOROOT="/home/sougou/go" GOTOOLDIR="/home/sougou/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build590551857=/tmp/go-build -gno-record-gcc-switches" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ### What did you do? I was spot-checking the collation code to see if it matched specifications: if IgnoreWidth was not specified, I was expecting "ae" to not match "Æ". However, it still matched. Here's a simplified reproduction: ``` import ( "testing" "golang.org/x/text/collate" "golang.org/x/text/language" ) func TestUtf8GeneralCI(t *testing.T) { a := "ae" b := "Æ" coll := collate.New(language.Und, collate.IgnoreCase, collate.IgnoreDiacritics) if coll.CompareString(a, b) == 0 { t.Errorf("Compare(%q, %q): 0, want non-0", a, b) } } ``` ### What did you expect to see? Test should pass. ### What did you see instead? Test failed.
NeedsInvestigation
low
Critical
284,606,936
neovim
Multiline completion (also affects LSP textDocument/completion results)
Currently Vim (and by bloodline also NeoVim) allows only single-line completions. This is quite limiting, as there is also no way to determine the beginning and end of the completed text, which leaves us in limbo where we need to do hackish things to achieve features like multiline snippets. The most obvious way to support this would be to make (`:h complete-items`): ```viml { 'word': "aaa\nbbb" } ``` insert 2 lines: ``` aaa bbb ``` Currently it inserts ``` aaa^@bbb ``` where `^@` is the NUL character (`:h NL-used-for-NUL`). The open problem with multiline completion is how to handle indentation. Alternatively, we could allow a `List` as the `'word'` parameter, where each item would be one line.
enhancement,completion
low
Major
284,616,073
go
testing: Helper's highest function call's information should be maintained for any subsequent/nested calls
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.9.2 android/arm64 ### Does this issue reproduce with the latest release? Yes (I think 1.9.2 is the latest release) ### What operating system and processor architecture are you using (`go env`)? GOARCH="arm64" GOHOSTARCH="arm64" GOHOSTOS="android" GOOS="android" ### What did you do? Run the following program with "go test". I would have hoped that the reported error line is in the 2nd call to checkFor3s() in TestHelper(). Instead, the error line is in forEach() (line 9 for me, but I think some blank lines may have been eaten below). I think the problem is that checkFor3s() uses the function forEach, which is not marked with t.Helper(). In a more realistic case, forEach would be a non-testing utility function and so would not have the possibility of calling t.Helper(). Perhaps the semantics of t.Helper should be that the displayed error line is in the stack frame above the highest t.Helper() call? That way this example would be fixed, and also it would not be necessary to sprinkle t.Helper() calls everywhere if there are complex testing utility functions. ``` package helper import( "testing" ) func forEach(data []int, f func(int)) { for _, i := range data { f(i) } } func checkFor3s(tb testing.TB, data []int) { tb.Helper() forEach(data, func(i int) { tb.Helper() if i != 3 { tb.Error("Saw", i, "instead of 3") } }) } func TestHelper(t *testing.T) { checkFor3s(t, []int{3, 3, 3, 3}) checkFor3s(t, []int{3, 3, 2, 3}) checkFor3s(t, []int{3, 3, 3, 3}) } ```
help wanted,NeedsFix
low
Critical
284,618,525
rust
Types in std::os::raw should be same as libc crate
Right now, there's a bit of a discrepancy between these two, even though there shouldn't be. While the libc crate has [very complicated logic](https://github.com/rust-lang/libc/search?q=%22pub+type+c_long%22) to determine how the `c_*` types are defined, the standard library uses a [much simpler logic](https://github.com/rust-lang/rust/blob/81622c6b02536bdcf56145beb317da0d336703c1/src/libstd/os/raw.rs). I'm not sure how much these types are desired outside of libc; the only type that I've really seen used across the standard library is `c_char`, which honestly could just be replaced with an opaque struct with `repr(u8)` if it weren't already stabilised.
C-enhancement,T-libs-api
medium
Major
284,623,181
opencv
stitching: matrix.cpp error and core dumped
Using recent git version. ``` wget -O coredump1.png https://i.imgur.com/eNkIZtU.png wget -O coredump2.png https://i.imgur.com/DfV31L9.png ``` ``` md5sum coredump*.png baed18323666725f7c07b4b2186cc884 *coredump1.png 55417fdf2b1b14999c22eaa35828781b *coredump2.png ``` ``` ./opencv/build/bin/cpp-example-stitching_detailed --output test.png coredump*.png Finding features... [ INFO:0] Initialize OpenCL runtime... Features in image #1: 752 Features in image #2: 430 Finding features, time: 0.27072 sec Pairwise matchingPairwise matching, time: 0.0190275 sec Initial camera intrinsics #1: K: [1614, 0, 290.5; 0, 1614, 516.5; 0, 0, 1] R: [1, 0, 0; 0, 1, 0; 0, 0, 1] Initial camera intrinsics #2: K: [1614, 0, 290.5; 0, 1614, 516.5; 0, 0, 1] R: [1.0084002, -0.0029160879, -0.00041645425; -0.0039012616, 1.0122006, 0.46072242; -0.014846272, 0.019930283, 1.0090584] Camera #1: K: [7099.366127436171, 0, 290.5; 0, 7099.366127436171, 516.5; 0, 0, 1] R: [1, 2.910383e-11, 0; 5.8207661e-11, 1, 0; 0, 0, 1] Camera #2: K: [7087.369852449208, 0, 290.5; 0, 7087.369852449208, 516.5; 0, 0, 1] R: [0.99999684, 0.0024957163, 0.00017471006; -0.0025003864, 0.9946146, 0.10361284; 8.4819272e-05, -0.10361294, 0.9946177] Warping images (auxiliary)... Warping images, time: 13.5285 sec OpenCV Error: Insufficient memory (Failed to allocate 109281814084 bytes) in OutOfMemoryError, file /home/linux/opencv/modules/core/src/alloc.cpp, line 55 OpenCV Error: Assertion failed (u != 0) in create, file /home/linux/opencv/modules/core/src/matrix.cpp, line 436 terminate called after throwing an instance of 'cv::Exception' what(): /home/linux/opencv/modules/core/src/matrix.cpp:436: error: (-215) u != 0 in function create Aborted (core dumped) ```
bug,category: stitching
low
Critical
284,644,976
pytorch
Invoking MKL in multiprocessing with importing torch causes blocking
Using numpy.linalg.svd (invoking functions in MKL may be the trigger) in multiprocessing with torch imported causes blocking. The following stops at "Start svd": ```python #!/usr/bin/env python import multiprocessing as mp import numpy import torch # Just to import, not using anymore def f(x): print('Start svd') numpy.linalg.svd(numpy.ones((100, 100))) print('End svd') def test1(): print('### Start test1 ###') print('Start multiprocessing') # mp.set_start_method('spawn') with mp.Pool(1) as p: p.map(f, [None]) print('End multiprocessing') if __name__ == '__main__': test1() ``` It works if the mp.set_start_method('spawn') call is uncommented, or if the torch import is commented out, which means that merely importing torch affects ordinary multiprocessing in Python or NumPy/MKL. cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @VitalyFedyunin
module: multiprocessing,triaged,module: mkldnn,module: mkl
low
Minor
284,738,223
youtube-dl
Support request for familygo.ca
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.12.23*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.12.23** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [x] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` youtube-dl -v https://familygo.ca/chrgd/episode-details/sonic-boom/256678950939 [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', 'https://familygo.ca/chrgd/episode-details/sonic-boom/256678950939'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2017.12.23 [debug] Python version 3.4.4 - Windows-10-10.0.16299 [debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, rtmpdump 2.4 [debug] Proxy map: {} [generic] 256678950939: Requesting header WARNING: Falling back on generic information extractor. [generic] 256678950939: Downloading webpage [generic] 256678950939: Extracting information ERROR: Unsupported URL: https://familygo.ca/chrgd/episode-details/sonic-boom/256678950939 Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp27__tkx0\build\youtube_dl\YoutubeDL.py", line 784, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp27__tkx0\build\youtube_dl\extractor\common.py", line 438, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp27__tkx0\build\youtube_dl\extractor\generic.py", line 3063, in _real_extract youtube_dl.utils.UnsupportedError: Unsupported URL: https://familygo.ca/chrgd/episode-details/sonic-boom/256678950939 ... <end of log> ``` --- ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: https://familygo.ca/chrgd/episode-details/sonic-boom/256678950939 Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. 
In order for site support request to be accepted all provided example URLs should not violate any copyrights. --- ### Description of your *issue*, suggested solution and other information Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible. If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
tv-provider-account-needed
low
Critical
284,739,300
youtube-dl
Support request for knowledgekids.ca
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.12.23*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.12.23** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [x] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` youtube-dl -v http://www.knowledgekids.ca/videos/shutterbugs/bugathon-wind-blows-s1-e1 [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', 'http://www.knowledgekids.ca/videos/shutterbugs/bugathon-wind-blows-s1-e1'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2017.12.23 [debug] Python version 3.4.4 - Windows-10-10.0.16299 [debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, rtmpdump 2.4 [debug] Proxy map: {} [generic] bugathon-wind-blows-s1-e1: Requesting header WARNING: Falling back on generic information extractor. [generic] bugathon-wind-blows-s1-e1: Downloading webpage [generic] bugathon-wind-blows-s1-e1: Extracting information ERROR: Unsupported URL: http://www.knowledgekids.ca/videos/shutterbugs/bugathon-wind-blows-s1-e1 Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp27__tkx0\build\youtube_dl\YoutubeDL.py", line 784, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp27__tkx0\build\youtube_dl\extractor\common.py", line 438, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp27__tkx0\build\youtube_dl\extractor\generic.py", line 3063, in _real_extract youtube_dl.utils.UnsupportedError: Unsupported URL: http://www.knowledgekids.ca/videos/shutterbugs/bugathon-wind-blows-s1-e1 ... 
<end of log> ``` --- ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: http://www.knowledgekids.ca/videos/shutterbugs/bugathon-wind-blows-s1-e1 Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights. --- ### Description of your *issue*, suggested solution and other information Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible. If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
geo-restricted
low
Critical
284,746,322
opencv
imshow through JNI does not work on Mac OSX
Hello, we are currently working on a Java project in which some image processing is done in C++ native code through JNI. However, we noticed that the cv::imshow() function does not work on Mac OSX when called through JNI. In contrast, it works well in a C++-only program, and it also works on Ubuntu in both cases (Java/JNI and C++ only). For instance, this JNI code:

```cpp
/*!
 * \brief Dumb test.
 */
JNIEXPORT void JNICALL Java_Test_nDumbTest(
    JNIEnv* env,
    jobject obj)
{
    // Create a black image and display it
    cv::Mat blackImg = cv::Mat::zeros(500, 500, CV_8UC3);
    cv::imshow("Black image", blackImg);
    cv::waitKey(0);
}
```

won't show anything on Mac OSX. An icon appears in the Mac Dock suggesting that something should be displayed, but no window is actually shown. Here is my build information:

```
General configuration for OpenCV 3.4.0-dev =====================================
  Version control:             3.4.0-7-g2e33844

  Platform:
    Timestamp:                 2017-12-27T11:11:31Z
    Host:                      Darwin 15.6.0 x86_64
    CMake:                     3.10.0-rc5
    CMake generator:           Unix Makefiles
    CMake build tool:          /usr/bin/make
    Configuration:             Release

  CPU/HW features:
    Baseline:                  SSE SSE2 SSE3
      requested:               SSE3
    Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2
      requested:               SSE4_1 SSE4_2 AVX FP16 AVX2
      SSE4_1 (3 files):        + SSSE3 SSE4_1
      SSE4_2 (1 files):        + SSSE3 SSE4_1 POPCNT SSE4_2
      FP16 (2 files):          + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
      AVX (5 files):           + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
      AVX2 (9 files):          + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2

  C/C++:
    Built as dynamic libs?:    YES
    C++ Compiler:              /Library/Developer/CommandLineTools/usr/bin/c++ (ver 8.0.0.8000042)
    C++ flags (Release):       -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-implicit-fallthrough -fdiagnostics-show-option -Wno-long-long -Qunused-arguments -Wno-semicolon-before-method-body -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
    C++ flags (Debug):         -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-implicit-fallthrough -fdiagnostics-show-option -Wno-long-long -Qunused-arguments -Wno-semicolon-before-method-body -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
    C Compiler:                /Library/Developer/CommandLineTools/usr/bin/cc
    C flags (Release):         -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-implicit-fallthrough -fdiagnostics-show-option -Wno-long-long -Qunused-arguments -Wno-semicolon-before-method-body -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
    C flags (Debug):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-implicit-fallthrough -fdiagnostics-show-option -Wno-long-long -Qunused-arguments -Wno-semicolon-before-method-body -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):
    Linker flags (Debug):
    ccache:                    NO
    Precompiled headers:       NO
    Extra dependencies:
    3rdparty dependencies:

  OpenCV modules:
    To be built:               calib3d core dnn features2d flann highgui imgcodecs imgproc java ml objdetect photo python_bindings_generator shape stitching superres ts video videoio videostab
    Disabled:                  js world
    Disabled by dependency:    -
    Unavailable:               cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev python2 python3 viz
    Applications:              tests perf_tests apps
    Documentation:             NO
    Non-free algorithms:       NO

  GUI:
    Cocoa:                     YES
    VTK support:               NO

  Media I/O:
    ZLib:                      build (ver 1.2.11)
    JPEG:                      build (ver 90)
    WEBP:                      build (ver encoder: 0x020e)
    PNG:                       build (ver 1.6.34)
    TIFF:                      build (ver 42 - 4.0.9)
    JPEG 2000:                 build (ver 1.900.1)
    OpenEXR:                   build (ver 1.7.1)

  Video I/O:
    DC1394:                    NO
    FFMPEG:                    YES
      avcodec:                 YES (ver 57.107.100)
      avformat:                YES (ver 57.83.100)
      avutil:                  YES (ver 55.78.100)
      swscale:                 YES (ver 4.8.100)
      avresample:              YES (ver 3.7.0)
    GStreamer:                 NO
    AVFoundation:              YES
    gPhoto2:                   NO

  Parallel framework:          GCD

  Trace:                       YES (with Intel ITT)

  Other third-party libraries:
    Intel IPP:                 2017.0.3 [2017.0.3]
      at:                      /Users/denis/Library/opencv/build/3rdparty/ippicv/ippicv_mac
    Intel IPP IW:              sources (2017.0.3)
      at:                      /Users/denis/Library/opencv/build/3rdparty/ippicv/ippiw_mac
    Lapack:                    YES (/System/Library/Frameworks/Accelerate.framework /System/Library/Frameworks/Accelerate.framework)
    Eigen:                     NO
    Custom HAL:                NO

  NVIDIA CUDA:                 NO

  OpenCL:                      YES (no extra features)
    Include path:              NO
    Link libraries:            -framework OpenCL

  Python (for build):          /usr/local/bin/python2.7

  Java:
    ant:                       /usr/local/bin/ant (ver 1.10.1)
    JNI:                       /System/Library/Frameworks/JavaVM.framework/Headers /System/Library/Frameworks/JavaVM.framework/Headers /System/Library/Frameworks/JavaVM.framework/Headers
    Java wrappers:             YES
    Java tests:                YES

  Matlab:                      NO

  Install to:                  /usr/local
-----------------------------------------------------------------
```

Thank you,
priority: low,category: java bindings,platform: ios/osx
low
Critical
284,759,151
kubernetes
Allow specifying window over which metrics for HPA are collected
/kind feature

# Use case

The docs say "The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization." without specifying the window over which this utilization is observed. I haven't found where it's defined, but it seems to be roughly 1-5 minutes.

Now imagine a traffic pattern with short dips. If a dip falls within the calculation window, it will cause the HPA to scale down aggressively. The same applies the other way around: if a spike spans the calculation window, the HPA will add lots of replicas even if the spike is already gone. The only tunable is the cooldown period, which doesn't help, since it only limits the number of scale operations, not the amount by which to scale. With a longer cooldown the period of over-/underprovisioning just gets longer, and a shorter cooldown leads to thrashing due to the number of scale operations. Instead, it should be made possible to specify the observation interval in the HPA spec.

# Implementation

From what I can tell, the HPA gets the metrics used to calculate the number of replicas to add or remove from the metrics API, without any way to specify the observation window. Looking at the API here: https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1beta1/types.go it seems that the metrics available via the metrics API are already preprocessed, and the window is determined by the reporter (kubelet/cadvisor?). With this design it seems impossible to get utilization over different periods of time ad hoc. Was this discussed in the metrics API design? If the NodeMetrics contained the raw CPU seconds, the HPA could use them to calculate the utilization over a user-provided window X (by keeping the value from X ago).

It could be argued that such calculations have no place in the HPA either, and that it should defer this to a metrics server, but I don't know how this would work in the 'push model' we have right now, where the kubelets report metrics instead of having the HPA/controller pull them when needed while specifying the window.
sig/autoscaling,kind/api-change,kind/feature,sig/instrumentation,lifecycle/frozen
high
Critical