id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
354,976,762 | pytorch | Where could I see all of the caffe2 operators and their arguments in the Python API? | Thanks for publishing caffe2; it is an excellent deep learning framework. I am wondering how I can see all of the caffe2 operators. I see the caffe2 operators catalogue at https://caffe2.ai/docs/operators-catalogue.html, but it might not cover all of the operators. For example, I can apply an operator called 'SmoothL1Loss' via model.net.SmoothL1Loss(), but it is not on that page.
In addition, how can I find all of the arguments of these operators in the Python API? For example, I can apply a convolutional layer with model.Conv(input, output, 3, 64, 3); how can I learn the other arguments' names?
Thank you very much!
| caffe2 | low | Minor |
354,995,970 | opencv | The parameter types for cv::solvePnP() are misleading |
##### System information (version)
- OpenCV => master
##### Detailed description
See the code
https://github.com/opencv/opencv/blob/ee1e1ce377aa61ddea47a6c2114f99951153bb4f/modules/calib3d/include/opencv2/calib3d.hpp#L692-L695
`rvec` and `tvec` are of type `OutputArray`, indicating that they are used only for output.
But the code
https://github.com/opencv/opencv/blob/ee1e1ce377aa61ddea47a6c2114f99951153bb4f/modules/calib3d/src/solvepnp.cpp#L58
https://github.com/opencv/opencv/blob/ee1e1ce377aa61ddea47a6c2114f99951153bb4f/modules/calib3d/src/solvepnp.cpp#L71-L79
treats them as input when `useExtrinsicGuess==true`.
##### Suggestions
The prototype should be declared as `InputOutputArray`.
| bug,category: calib3d | low | Critical |
354,998,205 | vscode | [html] Support SCSS in HTML with <style type="text/scss"> | I'm using a framework that lets me write SCSS code in HTML files by marking it in a block with `<style type="text/scss">`. While VS Code seems to be perfectly fine with SCSS on its own, it doesn't seem to support it inside an HTML file. As such, I get errors from the CSS parser when using SCSS features, IntelliSense isn't working, etc. | feature-request,html | low | Critical |
355,034,044 | pytorch | [Caffe2] [Question] How to disable the bias parameter? | Is there any way to disable the bias parameter for a specific layer, e.g. a convolution layer? Is there something like **bias_term=[true or false] from Caffe**?
| caffe2 | low | Minor |
355,046,438 | pytorch | Error when building | Hello,
when I build, there is an error after running "FULL_CAFFE2=1 python setup.py install":
```
/usr/local/lib/libopencv_imgcodecs.so.3.3: undefined reference to `TIFFReadEncodedStrip@LIBTIFF_4.0'
/usr/local/lib/libopencv_imgcodecs.so.3.3: undefined reference to `TIFFSetField@LIBTIFF_4.0'
/usr/local/lib/libopencv_imgcodecs.so.3.3: undefined reference to `TIFFSetWarningHandler@LIBTIFF_4.0'
/usr/local/lib/libopencv_imgcodecs.so.3.3: undefined reference to `TIFFSetErrorHandler@LIBTIFF_4.0'
/home/xuran/Detectron/pytorch/build/lib/libcaffe2.so: undefined reference to `leveldb::Status::ToString() const'
/home/xuran/anaconda3/lib/libleveldb.so.1: undefined reference to `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_create(unsigned long&, unsigned long)@GLIBCXX_3.4.21'
collect2: error: ld returned 1 exit status
make[2]: *** [bin/stream_test] Error 1
make[1]: *** [caffe2/CMakeFiles/stream_test.dir/all] Error 2
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack --full-caffe2 caffe2 libshm gloo THD c10d'
```
Can you help me? Thank you. | caffe2 | low | Critical |
355,056,901 | vue | An anti-pattern in computed property may cause performance issue | ### Version
2.5.17
### Reproduction link
[https://codesandbox.io/s/140202wlq](https://codesandbox.io/s/140202wlq)
### Steps to reproduce
Open the CodeSandbox link and click the button to use the computed property 'filteredEntities'; you will find it costs a lot of time.
After checking the call tree, I found this is because the code calls the getter of 'entities' many times, which in turn calls the `dependArray` function.
The code leading to this is:
```js
const len = this.entities.length;
for (let i = 0; i < len; i++) {
const e = this.entities[i];
// do something with e
}
```
If this.entities has a length of n, its getter will be called n + 1 times. And every time the getter is called, it calls dependArray on the value of this.entities, which is the array of length n.
Since dependArray iterates the value, the depend function will be called **(n + 1)^2** times in total.
I found this code while reviewing someone's PR; the original code used `for (let i = 0; i < this.entities.length; i++)`, which makes another n calls to the getter.
It is easy to avoid this problem, for example by using array methods like `filter` so the getter is called only once.
Even caching the value with `const cache = this.entities` solves it.
Although this pattern is easy to avoid, I still think it is dangerous because the original code was not an obvious anti-pattern.
It also shows no performance issue while the array is small, but may cause serious performance issues when the array is large in a production environment.
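The n + 1 counting argument above is language-agnostic; a small Python sketch (a hypothetical `Store` class whose instrumented property stands in for Vue's reactive getter, with `depend_calls` approximating `dependArray`'s per-element work) reproduces the cost of both loop styles:

```python
class Store:
    """Instrumented stand-in for a component with a reactive 'entities' array."""
    def __init__(self, n):
        self._entities = list(range(n))
        self.getter_calls = 0
        self.depend_calls = 0

    @property
    def entities(self):
        self.getter_calls += 1
        # the reactive getter calls dependArray, which walks the whole array
        self.depend_calls += len(self._entities)
        return self._entities

def uncached_loop(store):
    # re-reads the property: once for the length, once per iteration
    for i in range(len(store.entities)):
        _ = store.entities[i]

def cached_loop(store):
    cached = store.entities  # single getter call
    for e in cached:
        _ = e

slow, fast = Store(100), Store(100)
uncached_loop(slow)
cached_loop(fast)
print(slow.getter_calls, slow.depend_calls)  # 101 10100
print(fast.getter_calls, fast.depend_calls)  # 1 100
```

With n = 100 the re-reading loop performs 101 getter calls and 10,100 depend calls, while the cached variant performs exactly one getter call.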
### What is expected?
I've read the related source code and I think this is the expected result of the observer system.
Then I read some chapters of the guide again to make sure there is no mention of this, so I'm not sure whether we need to add it as a NOT-TODO in the computed property chapter.
### What is actually happening?
Described.
<!-- generated by vue-issues. DO NOT REMOVE --> | has PR | low | Major |
355,077,694 | vscode | Git - Linux ssh-agent/ssh-askpass problem |
As already reported in #33814, #32097 and #52137, the bug isn't fixed for me and the workarounds don't work either.
```
VSCode Version: 1.26.1
OS Version: Ubuntu 16.04
Date: 2018-08-16T18:34:20.517Z
Electron: 2.0.5
Chrome: 61.0.3163.100
Node.js: 8.9.3
V8: 6.1.534.41
Architecture: x64
Shell: zsh
```
Steps to Reproduce:
1. Open a Git repo folder
2. Call a Git action like sync, push etc
Does this issue occur when all extensions are disabled?: Yes
I'm using ssh-agent to keep all my keys opened and enter the passwords after I boot (`ssh-add ~/.ssh/id_rsa`). I can use git from the terminal without any problems, but "use the terminal then" is not a valid workaround for me when a whole feature doesn't work.
I've set some configs in my ~/.ssh/config file as well
```
Host *
ServerAliveInterval 60
AddKeysToAgent yes
IdentityFile ~/.ssh/id_rsa
Host gitlab.com
HostName gitlab.com
User git
Host github.com
HostName github.com
User git
```
But I still get this error when using a feature that calls a remote git action:
```
> git pull --tags origin master
ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists
```
I already tried installing ssh-askpass with `sudo apt-get install ssh-askpass-gnome ssh-askpass`, but it asks me for a password every single time, so the ssh-agent is ignored. | bug,git | medium | Critical |
355,088,678 | flutter | Ink reaction of IconButtons in AppBar is too big | Compared to native Android apps and the material spec, `IconButton`s in `AppBar`s should have a smaller ink reaction when pressed.
<img src="https://storage.googleapis.com/spec-host-backup/mio-design%2Fassets%2F0B9msDEx00QXmRVVQQlZBRnRGd2M%2Fpressed-04.png">
(Tile 1)
Not sure if that applies to all `IconButton`s. I also found this image
<img src="https://storage.googleapis.com/spec-host-backup/mio-design%2Fassets%2F0B9msDEx00QXmc0xTUGhCU2tQWE0%2Fpressed-03.png">
(Tile 1)
Source: https://material.io/design/interaction/states.html#pressed
For reference, also see:
https://material-components.github.io/material-components-web-catalog/#/component/icon-button
https://material-components.github.io/material-components-web-catalog/#/component/top-app-bar | framework,f: material design,a: fidelity,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
355,118,615 | flutter | `flutter attach --machine` should report `Waiting for a connection` in JSON for editors to report progress | Currently this status is just written to stdout so the editor can't display a nicer representation (eg. a progress notification):
```
[12:32:30 GMT+0100 (BST)] [FlutterRun] [Info] <== [{"event":"daemon.connected","params":{"version":"0.4.1","pid":13567}}]
[12:32:30 GMT+0100 (BST)] [FlutterRun] [Info] <== Waiting for a connection from Flutter on Android SDK built for x86...
``` | c: new feature,tool,P3,team-tool,triaged-tool | low | Minor |
355,143,392 | pytorch | Multiprocess Deadlock when using np.transpose and torch.stack | Environment:
Ubuntu 16.04.4 LTS
Python 3.5.2
pytorch 0.4.0
numpy 1.14.5
Or just use horovod image `uber/horovod:0.13.10-tf1.9.0-torch0.4.0-py3.5`
I run np.transpose and torch.stack first in the parent process, then run np.transpose and torch.stack in the subprocess; the subprocess hangs at torch.stack.
When I comment out any np.transpose or torch.stack call in the parent process or the subprocess, it runs normally.
The minimal code to reproduce is:
```python
import torch.multiprocessing as t_mp
import time
import numpy as np
import torch
if __name__ == '__main__':
# hangs at torch.stack in the subprocess; commenting out any one of Methods 1-4 makes it run normally
# Python3.5 torch0.4.0
start = time.time()
print("Some ops in main process")
batch = np.random.rand(300, 300, 300)
# Method 1: Comment out the following line
batch = batch.transpose((2, 0, 1))
batch = [torch.from_numpy(b) for b in batch]
# Method 2: Comment out the following line
batch = torch.stack(batch, 0)
print("batch type ", type(batch))
print("Multiprocessing with 1 workers")
def _read_worker():
batch = np.random.rand(300, 300, 300)
print("transpose")
# Method 3: Comment out the following line
batch = batch.transpose((2, 0, 1))
print("numpy to tensor")
batch = [torch.from_numpy(b) for b in batch]
print("torch stack tensor")
# Method 4: Comment out the following line
batch = torch.stack(batch, 0)
print("batch type in worker", type(batch))
w = t_mp.Process(
target=_read_worker,
)
w.daemon = True # ensure that the worker exits on process exit
w.start()
w.join()
print("done")
```
It really confused me. I didn't even know how to debug it. Can anyone shed some light? | module: multiprocessing,triaged | low | Critical |
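A frequent cause of hangs like the one reported above is forking after the parent's first `transpose`/`stack` has initialized OpenMP/MKL thread pools, whose locks the child then inherits in a bad state. A common mitigation is the 'spawn' start method, sketched below with the stdlib (`torch.multiprocessing` exposes the same `get_context` API; that spawn resolves this particular hang is an assumption to verify):

```python
import multiprocessing as mp

def _read_worker(q):
    # the heavy array work would go here; under spawn the child starts from a
    # fresh interpreter, so no forked thread-pool state is inherited
    q.put(sum(range(10)))

def run_once():
    ctx = mp.get_context("spawn")  # instead of the default fork on Linux
    queue = ctx.Queue()
    worker = ctx.Process(target=_read_worker, args=(queue,))
    worker.start()
    result = queue.get()  # read before join to avoid a queue-flush deadlock
    worker.join()
    return result

if __name__ == "__main__":
    print(run_once())  # 45
```

Spawn is slower to start workers than fork, but it sidesteps the entire class of fork-after-threading deadlocks.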
355,198,151 | react | Relax ToString consistency guarantees | We recently chatted about https://github.com/facebook/react/pull/13367 and related work (e.g. https://github.com/facebook/react/pull/13394) with @sebmarkbage, and he raised a good point.
Overall, it seems like treating invalid values consistently adds significant overhead to implementation readability, and there's undoubtedly runtime overhead too. There are two separate issues here:
* **warning** for invalid values
* ensuring that the output for invalid values is **consistent** (e.g. functions are always skipped)
The conclusion we came to is that we should keep **warning** for bad values, but **as long as we warn, consistency is not necessary**. It's fine if we sometimes stringify a function and sometimes skip it, as long as we always warn in those cases.
**Our guiding principle for invalid inputs should be that we handle them with the least amount of overhead** (both at runtime, and in terms of code size), not that they’re always handled the same way.
One exception to this is probably Symbols because they throw when stringified. So it seems like skipping them is actually desirable — unless we're okay with errors. | Type: Enhancement,Component: DOM,React Core Team | medium | Critical |
355,263,332 | go | runtime: use parent goroutine's stack for new goroutines | This is a speculative idea from discussion with @aclements. Jotting it down so it doesn't get lost and to open discussion.
Spinning off many goroutines that grow their stack and then exit can be a source of performance pain. See e.g. #18138. One approach to fixing this is [tracking and using statistics about stack growth behavior per spawn site](https://github.com/golang/go/issues/18138#issuecomment-264228318).
Another possible approach is to:
* switch immediately in the scheduler from parent goroutine to spawned goroutine
* use the (free part of) the parent's stack as the spawn's stack
* relocate the spawn's stack as needed
The idea here is that the parent goroutine's stack is likely to have plenty of free space, so if the spawn is short-lived, we can avoid doing any stack allocation for it at all, much less multiple stack growths.
If the spawn lives long enough that the parent goroutine starts running again, then this is worse. So it might be worth tracking some basic statistics on the fly to decide whether to do this.
| Performance,NeedsInvestigation,compiler/runtime | low | Major |
355,292,842 | rust | Rustc does not warn about `use` with paths incompatible with `uniform_paths` for edition 2018 | ## Updated Description:
[This code generates no warnings when compiled with the 2015 edition](https://play.rust-lang.org/?version=beta&mode=debug&edition=2015):
```rust
#![deny(rust_2018_compatibility)]
#![allow(unused_imports)]
extern crate rayon;
mod foo {
mod rayon {}
use rayon::Scope;
}
```
but [when compiled with the 2018 edition fails to compile](https://play.rust-lang.org/?version=beta&mode=debug&edition=2018&gist=37c27a00bd5b5e5d06e1218be5860d7e):
```
error[E0432]: unresolved import `rayon::Scope`
--> src/lib.rs:9:9
|
9 | use rayon::Scope;
| ^^^^^^^^^^^^ no `Scope` in `foo::rayon`
error[E0659]: `rayon` is ambiguous (name vs any other name during import resolution)
--> src/lib.rs:9:9
|
9 | use rayon::Scope;
| ^^^^^ ambiguous name
|
note: `rayon` could refer to the module defined here
--> src/lib.rs:7:5
|
7 | mod rayon {}
| ^^^^^^^^^^^^
= help: use `self::rayon` to refer to this module unambiguously
note: `rayon` could also refer to the extern crate imported here
--> src/lib.rs:4:1
|
4 | extern crate rayon;
| ^^^^^^^^^^^^^^^^^^^
= help: use `::rayon` to refer to this extern crate unambiguously
error: aborting due to 2 previous errors
Some errors occurred: E0432, E0659.
For more information about an error, try `rustc --explain E0432`.
```
## Old Description
Rustc does not warn about `use` with paths incompatible with `uniform_paths` for edition 2018. This means that rustfix it will not fix your code for edition 2018.
To reproduce create the following project:
`Cargo.toml`:
```toml
[package]
name = "foobar"
version = "0.0.0"
authors = ["Foo <[email protected]>"]
[dependencies]
criterion = "0.2"
```
`src/lib.rs`:
```rust
#![allow(unused)]
#![feature(rust_2018_preview, uniform_paths)]
extern crate criterion;
use criterion::Criterion;
```
If you run `cargo +nightly check` you will not get any warning, which means that when you run `cargo +nightly fix --edition` nothing will change, but the code is not compilable in rust 2018. Change `Cargo.toml` to enable edition 2018 (by adding `cargo-features = ["edition"]` and `package.edition = "2018"`) and run `cargo +nightly build`:
```text
$ cargo +nightly build
[...]
error: `criterion` import is ambiguous
--> src/lib.rs:4:5
|
3 | extern crate criterion;
| ----------------------- can refer to `self::criterion`
4 | use criterion::Criterion;
| ^^^^^^^^^ can refer to external crate `::criterion`
|
= help: write `::criterion` or `self::criterion` explicitly instead
= note: relative `use` paths enabled by `#![feature(uniform_paths)]`
[...]
```
`cargo +nightly fix --edition` should have changed line 4 to `use ::criterion::Criterion`.
Ref: Bug in cargo: rust-lang/cargo#5905 | A-lints,T-compiler,C-future-incompatibility,C-bug,A-edition-2018 | medium | Critical |
355,306,210 | go | cmd/go: allow verifying vendored code | `go mod verify` is extremely useful for validating the integrity of modules in the local cache.
It would be great if projects that choose to vendor their modules (then presumably building with `go build -mod vendor ...`) had a similar command to verify the integrity of modules in that directory.
This would satisfy a major requirement that many projects need to account for in their CI process: ensuring that vendored code hasn't been tampered with. | NeedsInvestigation,FeatureRequest,modules | high | Critical |
355,306,813 | deno | Don't assume paths are always valid UTF8 | This is a longer term goal.
* Don't use String/str for paths on the rust side (we should start doing this today).
* Use flatbuffer byte vectors instead of strings to encode paths in messages.
* On the JavaScript side, have an *optional, low-level API* that reports paths as typed arrays rather than strings.
This was an issue in Node: https://github.com/nodejs/node-v0.x-archive/issues/2387 | suggestion,chore | low | Major |
355,350,493 | TypeScript | In JS, assignment to property of class has no contextual type | **TypeScript Version:** 3.1.0-dev.20180829
**Code**
```ts
class C {
/**
* @param {number} n
* @return {number}
*/
m = n => n * 2;
}
const c = new C();
c.m = n => n * 3;
```
**Expected behavior:**
No error.
**Actual behavior:**
`src/a.js:10:7 - error TS7006: Parameter 'n' implicitly has an 'any' type.`
There is no error for the equivalent TypeScript code (using `m = (n: number): number => n * 2;`).
**Related Issues:**
#25926 | Bug,Domain: JSDoc,checkJs,Domain: JavaScript | low | Critical |
355,354,474 | flutter | Discussion: Separate Scaffold and DrawerLayout | Is there any reason why the `Scaffold` is responsible for the page itself (app bars, floating action button...) and the drawers? It seems like there is no unidirectional connection between the drawer rendering and the page itself.
It would be handy if there was a `DrawerLayout` widget that is only responsible for rendering drawers.
Developers could place the `DrawerLayout` above the `Navigator` so that it does not disappear when the route changes. | framework,f: material design,c: proposal,P2,team-design,triaged-design | low | Minor |
355,375,696 | kubernetes | Scarce Resource Bin Packing Priority Function | /kind feature
/sig scheduling
**What happened**:
Consider a 3-node cluster with the following resource availability:
*Node 1* extended-resource = intel.com/foo : 4
*Node 2* extended-resource = intel.com/foo : 4
*Node 3* extended-resource = intel.com/foo : 2
Run Job spec with following requests for extended resources
*Job 1* extended-resource-requests = intel.com/foo : 2
*Job 2* extended-resource-requests = intel.com/foo : 4
*Job 3* extended-resource-requests = intel.com/foo : 4
Default scheduler
Job 1 ---> Node 1
Job 2 ---> Node 2
Job 3 ---> Pending state (Not scheduled due to resource fragmentation)
**What you expected to happen**:
Job 1 ---> Node 3
Job 2 ---> Node 1 or 2
Job 3 ---> Node 1 or 2
**How to reproduce it (as minimally and precisely as possible)**:
Patch Nodes as follows
For Node 1 and 2
```sh
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/intel.com~1foo", "value": "4"}]' \
http://localhost:8001/api/v1/nodes/<node-name>/status
```
For Node 3
```sh
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/intel.com~1foo", "value": "2"}]' \
http://localhost:8001/api/v1/nodes/<node-name>/status
```
Run Following Pod Spec
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pod-1
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
intel.com/foo: 2
---
apiVersion: v1
kind: Pod
metadata:
name: test-pod-2
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
intel.com/foo: 4
---
apiVersion: v1
kind: Pod
metadata:
name: test-pod-3
spec:
containers:
- name: nginx
image: nginx
resources:
limits:
intel.com/foo: 4
```
```
| sig/scheduling,kind/feature,lifecycle/frozen | medium | Major |
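The expected placement in the bin-packing request above follows from a best-fit score instead of the default spreading priority: score each node by how tightly its free extended resource fits the request. A hypothetical sketch of such a priority function (the node names and the 0-10 score range only mirror scheduler conventions; this is not the actual scheduler code):

```python
def bin_packing_score(requested, free, max_score=10):
    """Best-fit score: highest when the request exactly fills the node's free amount."""
    if requested > free or free == 0:
        return 0  # node cannot fit the request at all
    return max_score * requested // free  # utilization ratio, scaled to 0..max_score

nodes = {"node1": 4, "node2": 4, "node3": 2}  # free intel.com/foo per node

def pick(requested):
    scores = {name: bin_packing_score(requested, free) for name, free in nodes.items()}
    return max(scores, key=scores.get), scores

best, scores = pick(2)  # job 1 requests intel.com/foo: 2
print(best, scores)     # node3 {'node1': 5, 'node2': 5, 'node3': 10}
```

Scoring job 1 this way sends it to node 3, leaving nodes 1 and 2 whole for the two 4-unit jobs, which avoids the fragmentation described in the report.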
355,376,289 | godot | Fullscreen mode makes CPU usage go significantly up - even on an empty scene ! - when running in a display monitor of 3440x1440. | **Godot version:**
3.0.6 (stable.official.8314054)
**OS/device including version:**
Win 10, Intel i7-7700K @ 4.20GHz, NVidia GTX 1080, Monitor 3440x1440
**Issue description:**
- When running godot in full screen mode, the CPU usage goes very high (at least for some monitors). For example, running a near empty scene (only one node in it) in fullscreen, makes my CPU usage go up to 15-20%..! When running this scene in windowed mode (even if maxed), the CPU usage is at 2% (with the same background apps running).
- This happens only when the game is running on my main display (monitor), whose resolution is 3440x1440. When I run the game fullscreen on an extra monitor (a pen display) whose resolution is 1920x1080, CPU usage stays nice and low.
**Extra information:**
- The problem happens only after the fullscreen is toggled on, and goes away after it's toggled off (while running the game).
- When was testing this with a game (a 2D side scroller), tested that the FPS stayed at 60.
- It doesn't seem to matter what the project resolution is set at. For the game I had it 1920x1080 and the window/display/stretch/mode was "2d" and aspect "keep", but as I said, this happens right out of the box as well (with "ignore" and "disabled").
- For the empty scene it made no difference whether V-Sync was enabled or not. However, in my game, when I had VSync disabled in NVidia's Control Panel, the CPU usage did NOT go up when entering fullscreen. (Makes no sense to me at all why disabling vsync makes this problem disappear in a complex scene, but not in a trivial one.)
- The behaviour seems to be the same whether running in editor or as stand alone.
- Haven't had any similar fullscreen / windowed issues with any other game played on this computer.
**Steps to reproduce:**
- Open Task Manager or other performance monitor for testing purposes.
- Create a new Godot project.
- Create an new scene and put a Node in it and save the scene.
- Run the scene in a display monitor whose size is 3440x1440 (very possibly others, too). Watch CPU usage (mine is about 2% with Godot editor and some background apps running). Change window size, no problem.
- Open project settings and set display/window/size/fullscreen to true. Save and run the scene again, and watch CPU usage (mine stays steadily at about 15% or 20% with same background apps running).
**Minimal reproduction project:**
Note. The core issue seems to be about the display monitor resolution. This is just a near empty project with fullscreen toggled on.
[FullscreenProblem.zip](https://github.com/godotengine/godot/files/2334157/FullscreenProblem.zip)
| bug,platform:windows,topic:core,confirmed | low | Major |
355,376,946 | flutter | TextField does not lose input focus when window loses focus. | To reproduce (Android):
- place a flutter app in split screen mode with another app.
- Begin editing a text field.
- Tap into the other app, this will dismiss the keyboard
- Observe that the text field still appears to have focus, the cursor is blinking et cetera.
What should happen:
Upon focusing the other android app, the Flutter input focus should be reset. | a: text input,customer: fuchsia,platform-android,framework,f: focus,has reproducible steps,P2,found in release: 3.3,found in release: 3.4,team-android,triaged-android | low | Major |
355,386,693 | TypeScript | Indexed this types don't narrow when their referenced property narrows | **TypeScript Version:** 3.1.0-dev.20180829
**Search Terms:** this index narrowing generics
**Code**
```ts
type Keys = 'string' | 'number';
type Switch<T extends Keys> =
T extends 'string' ? string :
T extends 'number' ? number :
never;
interface Main<T extends Keys> {
kind: T;
value: Switch<this['kind']>;
}
let x: Main<Keys> = null as any;
x.kind;
x.value;
if (x.kind === 'string') {
x.kind; // Narrowed to "string"
x.value;
} else {
x.kind; // Narrowed to "number"
x.value;
}
```
**Expected behavior:**
In the first if block, `x.value` should be of type `string`. In the second if block, `x.value` should be of type `number`.
**Actual behavior:**
In both cases `x.value` is not narrowed, despite `x.kind` narrowing successfully.
**Related Issues:** [#24085](https://github.com/Microsoft/TypeScript/issues/24085)
| Suggestion,In Discussion | low | Minor |
355,389,308 | rust | Fully qualified path to nested consts complains about ambiguous associated type | I believe the following code should be accepted by the compiler, but currently complains with `error[E0223]: ambiguous associated type`:
```rust
struct C;
impl C {
const X: usize = 42;
}
struct S;
impl S {
const foo: C = C;
}
fn main() {
let _ = S::foo::X;
}
```
Might be needed before https://github.com/rust-lang/rust/issues/26760#issuecomment-305523540 can be done. | A-type-system,A-trait-system,A-associated-items,T-compiler,C-bug,T-types | low | Critical |
355,398,667 | TypeScript | Support using --module and --outDir flags in conjunction with --build |
## Search Terms
--build --module --outDir
## Suggestion
I would like to be able to pass `--module` and `--outDir` in `--build` mode so that I can have multiple npm scripts for different build outputs without needing multiple tsconfig files.
## Use Cases
In [Aurelia vNext](https://github.com/aurelia/aurelia) we've fully adopted the new 3.0 build system and are starting to prepare for publishing. We want to publish the build outputs for all different module systems (for all packages) so that consumers can easily point to the one that's compatible with their project.
We have been able to do this pre-3.0 like so:
```json
"build:commonjs": "tsc -m commonjs --outDir dist/commonjs",
"build:amd": "tsc -m amd --outDir dist/amd",
"build:system": "tsc -m system --outDir dist/system",
"build:umd": "tsc -m umd --outDir dist/umd",
"build:es2015": "tsc -m es2015 --outDir dist/es2015",
```
Now when trying to do this, it doesn't work:
```json
"build:commonjs": "tsc -b -m commonjs --outDir dist/commonjs",
"build:amd": "tsc -b -m amd --outDir dist/amd",
"build:system": "tsc -b -m system --outDir dist/system",
"build:umd": "tsc -b -m umd --outDir dist/umd",
"build:es2015": "tsc -b -m es2015 --outDir dist/es2015",
"build:esnext": "tsc -b -m esnext --outDir dist/esnext",
```
Results in:
```
message TS6096: File (...)/packages/runtime/-m' does not exist
message TS6096: File (...)/packages/runtime/commonjs' does not exist
```
And nothing else happening. I read in the docs that only 4 particular flags are supported in conjunction with `--build` so it seems this is simply not implemented.
## Examples
We'd like to have something like this in the packages package.json:
```
"build:commonjs": "tsc -b -m commonjs --outDir dist/commonjs",
"build:amd": "tsc -b -m amd --outDir dist/amd",
"build:system": "tsc -b -m system --outDir dist/system",
"build:umd": "tsc -b -m umd --outDir dist/umd",
"build:es2015": "tsc -b -m es2015 --outDir dist/es2015",
"build:esnext": "tsc -b -m esnext --outDir dist/esnext",
"build": "run-p build:*",
```
And when calling `npm run build` from the top-level package (using lerna...), that would produce build outputs in all module formats for all packages.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion | low | Critical |
355,399,793 | pytorch | A bug in roi_align.cc? [Caffe2] | There may be a bug in roi_align.cc, at lines 164 and 165:
```cpp
T roi_width = std::max(roi_end_w - roi_start_w, (T)1.);
T roi_height = std::max(roi_end_h - roi_start_h, (T)1.);
```
I think it should be:
```cpp
T roi_width = std::max(roi_end_w - roi_start_w + 1, (T)1.);
T roi_height = std::max(roi_end_h - roi_start_h + 1, (T)1.);
```
| caffe2 | low | Critical |
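Whether the `+ 1` in the roi_align report above belongs there depends on the coordinate convention: with inclusive pixel indices an ROI from column 3 to column 5 covers three pixels, yet `end - start` yields 2. Note, though, that RoIAlign implementations deliberately dropped the legacy RoIPool `+ 1`, so this may be intended behavior rather than a bug. A quick numeric check in plain Python (not the caffe2 source):

```python
def width_exclusive(start, end):
    return max(end - start, 1.0)      # the expression currently in roi_align.cc

def width_inclusive(start, end):
    return max(end - start + 1, 1.0)  # the reporter's proposed fix

# An ROI from column 3 to column 5 covers pixels {3, 4, 5}, i.e. width 3,
# if the endpoints are inclusive pixel indices:
print(width_exclusive(3, 5))  # 2
print(width_inclusive(3, 5))  # 3
```

Under the continuous-coordinate convention RoIAlign uses, the exclusive difference is the correct span; under the inclusive-index convention of classic RoIPool, it is one pixel short.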
355,404,978 | pytorch | non-shuffling data loaders can affect random states, thus the results of shuffling data loaders. | ## Issue description
See the title. While it's arguable this behavior is bug or not, I feel it's not very intuitive. Maybe it's good to doc this.
For an actual example, see below.
In the example, I have two data loaders, one shuffling for training, one non-shuffling for validation. Turns out, the order of data sample points for the shuffling training loader depends on how many times I call the non-shuffling validation loader first.
Overall, I think making **behaviorally deterministic** operations affect random states is certainly unintuitive; for example, you would not expect that running a few matrix multiplications in `numpy` can affect a random number generation operation that follows these matrix multiplications.
## Code example
```python
# In[1]:
from torch.utils.data import DataLoader
from torch.utils.data.dataset import TensorDataset
import torch
print(torch.__version__)
# In[2]:
import numpy as np
np.random.seed(0)
# In[3]:
dataset_train = np.random.randint(100, size=(10,)).astype(np.float32)
dataset_val = np.random.randint(100, size=(10,)).astype(np.float32)
print('train\n',dataset_train)
print('val\n',dataset_val)
# train
# [44. 47. 64. 67. 67. 9. 83. 21. 36. 87.]
# val
# [70. 88. 88. 12. 58. 65. 39. 87. 46. 88.]
# In[4]:
dataloader_train = DataLoader(TensorDataset(torch.tensor(dataset_train)),
batch_size=10, shuffle=True)
dataloader_val = DataLoader(TensorDataset(torch.tensor(dataset_val)),
batch_size=10, shuffle=False)
def loop_one(dataloader_this):
for (x,) in dataloader_this:
print(x)
# In[5]:
# only train
np.random.seed(0)
torch.manual_seed(0)
loop_one(dataloader_train)
# no val, we get the following for dataloader_train
# tensor([67., 21., 9., 64., 44., 36., 47., 83., 87., 67.])
# In[6]:
# val + train
np.random.seed(0)
torch.manual_seed(0)
loop_one(dataloader_val)
loop_one(dataloader_train)
# one val
# tensor([67., 21., 9., 83., 64., 36., 47., 87., 67., 44.])
# In[7]:
# val + val + train
np.random.seed(0)
torch.manual_seed(0)
loop_one(dataloader_val)
loop_one(dataloader_val)
loop_one(dataloader_train)
# two val
# tensor([21., 64., 67., 36., 87., 83., 9., 44., 47., 67.])
# In[8]:
# val + val + reseed + train
np.random.seed(0)
torch.manual_seed(0)
loop_one(dataloader_val)
loop_one(dataloader_val)
np.random.seed(0)
torch.manual_seed(0)
loop_one(dataloader_train)
# reseed
# tensor([67., 21., 9., 64., 44., 36., 47., 83., 87., 67.])
```
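A workaround I have been using (a sketch, not an official PyTorch API; `rng_isolated` is my own helper name): snapshot the RNG state before the deterministic loop and restore it afterwards. The same pattern works with `torch.get_rng_state()`/`torch.set_rng_state()` and `numpy.random.get_state()`/`numpy.random.set_state()`; it is shown here with the stdlib `random` module so the snippet runs anywhere:

```python
import random

def rng_isolated(fn):
    """Run fn, then restore the RNG state so its draws don't leak out."""
    state = random.getstate()
    try:
        return fn()
    finally:
        random.setstate(state)

random.seed(0)
baseline = [random.random() for _ in range(3)]

random.seed(0)
# Stands in for iterating the non-shuffling val loader, which
# consumes random numbers internally even though it never shuffles.
rng_isolated(lambda: [random.random() for _ in range(5)])
after = [random.random() for _ in range(3)]

assert after == baseline  # the "training" draws are unaffected
```

Wrapping each validation epoch this way makes the subsequent training-loader order identical to the no-validation run above.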
## System Info
I ran that `collect_env.py` script under a Singularity container with Ubuntu 14.04 host.
```
Collecting environment information...
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX TITAN Black
GPU 1: Tesla K40c
Nvidia driver version: 384.111
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```
cc @SsnL @VitalyFedyunin @ejguan | todo,module: dataloader,triaged | low | Critical |
355,432,260 | youtube-dl | Youku download failure | C:\BUILD\youtube>c:\Apps\youtube-dl.exe -f bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best https://v.youku.com/v_show/id_XNzM2NTAxNDAw.html --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-f', 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best', 'https://v.youku.com/v_show/id_XNzM2NTAxNDAw.html', '--verbose']
[debug] Encodings: locale cp936, fs mbcs, out cp936, pref cp936
[debug] youtube-dl version 2018.08.28
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg 3.4.2, ffprobe 3.4.2
[debug] Proxy map: {}
[youku] XNzM2NTAxNDAw: Retrieving cna info
[youku] XNzM2NTAxNDAw: Downloading JSON metadata
ERROR: Youku server reported error -6004: 客户端无权播放,201; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpc4srko6s\build\youtube_dl\YoutubeDL.py", line 792, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpc4srko6s\build\youtube_dl\extractor\common.py", line 502, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpc4srko6s\build\youtube_dl\extractor\youku.py", line 189, in _real_extract
youtube_dl.utils.ExtractorError: Youku server reported error -6004: 客户端无权播放,201; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
C:\BUILD\youtube>c:\Apps\youtube-dl.exe --version
2018.08.28 | cant-reproduce | low | Critical |
355,464,775 | TypeScript | convert a property/element access expression to a getter and setter |
## Search Terms
refactor, getter, setter
## Suggestion
convert a property/element access expression to a getter and setter
## Use Cases
`this.a.b.c.d = 1`
to
```ts
get /*RENAME*/ temp1 () {
  return this.a.b.c.d
}
set temp1 (value) {
  this.a.b.c.d = value
}
// ...
this.temp1 = 1
```
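For illustration, this is how the generated pair behaves at runtime (plain JavaScript sketch; the class shape and the placeholder name `temp1` are assumptions mirroring the snippet above):

```javascript
class Demo {
  constructor() {
    this.a = { b: { c: { d: 0 } } };
  }
  // The refactoring would generate this pair and rewrite the original
  // `this.a.b.c.d = 1` into `this.temp1 = 1`.
  get temp1() {
    return this.a.b.c.d;
  }
  set temp1(value) {
    this.a.b.c.d = value;
  }
}

const demo = new Demo();
demo.temp1 = 1;
console.log(demo.a.b.c.d); // 1
```

Reads and writes through the accessor delegate to the original path, so runtime behavior is unchanged after the refactor.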
## Examples
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,Awaiting More Feedback | low | Critical |
355,473,473 | rust | Const qualification | cc @eddyb @RalfJung
Let's figure out what exactly we need, because I'm very confused about it.
So, currently const qualification is doing a few things at once:
1. figure out all the feature gates and forbid things not allowed in constants/statics/const_fn at all or without the feature gates
2. figure out what things are promotable by ruling out values containing `Drop` or `UnsafeCell` types (`None` is always ok, even if `Some(value)` would not be due to its type, and even though `Option<T>` would technically propagate said type)
3. In order to guarantee this for promotion, we also check the bodies of `const fn` and `const` for `Drop` and `UnsafeCell` value creations, even though this is ignored for the constant itself and only checked when the constant is used in a value considered for promotion.
Why I want to stop looking at bodies, and instead just check the final value of constants:
* This analysis is imperfect. E.g. `(UnsafeCell::new(42), 42).1` is treated as if you end up with an `UnsafeCell` value
* If we keep looking at the body, we're essentially saying that a `const fn`'s body is not allowed to change, even for changes which would not change the final value for any input, because such a change might subtly lead to the analysis suddenly thinking there's an `UnsafeCell` in the final value
Why we cannot just look at the final value right now:
* when promoting associated constants inside a generic function we might not have enough information to actually compute the final value. We'd need to wait for monomorphization to tell us whether the value is problematic. This is obviously not something we want; all the analyses should run before monomorphization
Solution brainstorm:
1. don't promote calls to a `const fn` if its return type may contain `UnsafeCell` or `Drop`. So `Option::<String>` is not promoted, even if the actual value is `None`. (Not a breaking change, since there are no stable const fn for which this could break any code.)
2. Always assume the worst with associated constants (already the case https://play.rust-lang.org/?gist=36546b7a589178413e28ba09f1cd0201&version=stable&mode=debug&edition=2015 ) So we don't promote associated constant uses unless monomorphized. | A-const-eval | medium | Critical |
355,478,772 | pytorch | undefined reference to 'caffe2::Caffe2FlagsRegistry[abi:cxx11]()' | I finished compiling caffe2 from source in the pytorch repository, following these instructions: https://caffe2.ai/docs/getting-started.html?platform=ubuntu&configuration=compile
Now I am writing some simple code to test caffe2 from C++, and I am getting this error:
```
g++ -c -std=gnu++0x -msse2 -I. -I/usr/local/cuda-9.0/include -I/home/ficha/Documents/external/pytorch/aten/src -I/usr/local/include -I/usr/include -I/home/ficha/Documents/external/pytorch/build/caffe2/proto -I/home/ficha/Documents/external/pytorch/thrid_party -I/home/ficha/Documents/external/pytorch/caffe2_include demo.cpp -o demo.o
g++ demo.o -I. -I/usr/local/cuda-9.0/include -I/home/ficha/Documents/external/pytorch/aten/src -I/usr/local/include -I/usr/include -I/home/ficha/Documents/external/pytorch/build/caffe2/proto -I/home/ficha/Documents/external/pytorch/thrid_party -I/home/ficha/Documents/external/pytorch/caffe2_include -L/usr/local/lib -L/usr/local/cuda-9.0/lib64 -L/usr/local/cuda-9.0/lib -lcudart -lcublas -lcurand -lglog -lgflags -lprotobuf -lleveldb -lsnappy -llmdb -lboost_system -lm -lopencv_core -lopencv_highgui -lopencv_imgproc -lboost_thread -lstdc++ -lcblas -latlas -L/home/ficha/Documents/external/pytorch/build/lib/ -lcaffe2_gpu -lcaffe2 -lhdf5_hl -lhdf5 -lopencv_imgcodecs -L/home/ficha/Documents/external/pytorch/build/lib -L/usr/lib/x86_64-linux-gnu -L/usr/local/cuda-9.0/lib64 -L/home/ficha/Documents/external/pytorch/thrid_party -o demo
demo.o: In function 'caffe2::(anonymous namespace)::Caffe2FlagParser_init_net::Caffe2FlagParser_init_net(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
demo.cpp:(.text+0xa4): undefined reference to 'bool caffe2::Caffe2FlagParser::Parse<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)'
demo.o: In function 'caffe2::(anonymous namespace)::Caffe2FlagParser_predict_net::Caffe2FlagParser_predict_net(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
demo.cpp:(.text+0xe4): undefined reference to 'bool caffe2::Caffe2FlagParser::Parse<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)'
demo.o: In function 'caffe2::(anonymous namespace)::Caffe2FlagParser_file::Caffe2FlagParser_file(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
demo.cpp:(.text+0x124): undefined reference to 'bool caffe2::Caffe2FlagParser::Parse<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)'
demo.o: In function 'caffe2::(anonymous namespace)::Caffe2FlagParser_classes::Caffe2FlagParser_classes(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
demo.cpp:(.text+0x164): undefined reference to 'bool caffe2::Caffe2FlagParser::Parse<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*)'
demo.o: In function 'caffe2::(anonymous namespace)::Caffe2FlagParser_size::Caffe2FlagParser_size(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
demo.cpp:(.text+0x1a4): undefined reference to 'bool caffe2::Caffe2FlagParser::Parse<int>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int*)'
demo.o: In function '__static_initialization_and_destruction_0(int, int)':
demo.cpp:(.text+0x10d2): undefined reference to 'caffe2::Caffe2FlagsRegistry[abi:cxx11]()'
demo.cpp:(.text+0x11e8): undefined reference to 'caffe2::Caffe2FlagsRegistry[abi:cxx11]()'
demo.cpp:(.text+0x12fe): undefined reference to 'caffe2::Caffe2FlagsRegistry[abi:cxx11]()'
demo.cpp:(.text+0x1414): undefined reference to 'caffe2::Caffe2FlagsRegistry[abi:cxx11]()'
demo.cpp:(.text+0x14df): undefined reference to 'caffe2::Caffe2FlagsRegistry[abi:cxx11]()'
collect2: error: ld returned 1 exit status
makefile:52: recipe for target 'demo' failed
make: *** [demo] Error 1
```
The code I want to compile is this:
```
#include <caffe2/core/init.h>
#include <caffe2/core/net.h>
#include <caffe2/utils/proto_utils.h>
#include <fstream>
using namespace std;
CAFFE2_DEFINE_string(init_net, "res/squeezenet_init_net.pb",
"The given path to the init protobuffer.");
CAFFE2_DEFINE_string(predict_net, "res/squeezenet_predict_net.pb",
"The given path to the predict protobuffer.");
CAFFE2_DEFINE_string(file, "res/image_file.jpg", "The image file.");
CAFFE2_DEFINE_string(classes, "res/imagenet_classes.txt", "The classes file.");
CAFFE2_DEFINE_int(size, 227, "The image file.");
namespace caffe2 {
}
int main(int argc, char** argv) {
  caffe2::GlobalInit(&argc, &argv);
  google::protobuf::ShutdownProtobufLibrary();
  cout << "caffe loaded correctly" << endl;
  return 0;
}
```
This is the result of the cmake
```
-- ******** Summary ********
-- General:
-- CMake version : 3.5.1
-- CMake command : /usr/bin/cmake
-- Git version : v0.1.11-9987-gb885dea-dirty
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.0
-- BLAS : MKL
-- CXX flags : -msse3 -msse4.1 -msse4.2 --std=c++11 -fvisibility-inlines-hidden -rdynamic -DONNX_NAMESPACE=onnx_torch -fopenmp -D__GLIBCXX_USE_CXX11_ABI=0 -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow
-- Build type : Release
-- Compile definitions : USE_GCC_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
-- CMAKE_PREFIX_PATH : /home/ficha/.envs/pytorch0_4/lib/python3.5/site-packages
-- CMAKE_INSTALL_PREFIX : /home/ficha/Documents/external/pytorch/torch/lib/tmp_install
--
-- BUILD_CAFFE2 : 1
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : 1
-- Python version : 3.5.2
-- Python executable : /home/ficha/.envs/pytorch0_4/bin/python
-- Pythonlibs version : 3.5.2
-- Python library : /home/ficha/.envs/pytorch0_4/lib/python3.5
-- Python includes : /usr/include/python3.5m
-- Python site-packages: lib/python3.5/site-packages
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : OFF
-- USE_ASAN : OFF
-- USE_CUDA : 1
-- CUDA static link :
-- USE_CUDNN : ON
-- CUDA version : 9.0
-- cuDNN version : 7.1.3
-- CUDA root directory : /usr/local/cuda-9.0
-- CUDA library : /usr/local/cuda-9.0/lib64/stubs/libcuda.so
-- cudart library : /usr/local/cuda-9.0/lib64/libcudart_static.a;-pthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
-- cublas library : /usr/local/cuda-9.0/lib64/libcublas.so;/usr/local/cuda-9.0/lib64/libcublas_device.a
-- cufft library : /usr/local/cuda-9.0/lib64/libcufft.so
-- curand library : /usr/local/cuda-9.0/lib64/libcurand.so
-- cuDNN library : /usr/local/cuda-9.0/lib64/libcudnn.so.7
-- nvrtc : /usr/local/cuda-9.0/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda-9.0/include
-- NVCC executable : /usr/local/cuda-9.0/bin/nvcc
-- CUDA host compiler : /usr/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- USE_LEVELDB : ON
-- LevelDB version : 1.18
-- Snappy version : 1.1.3
-- USE_LITE_PROTO : OFF
-- USE_LMDB : ON
-- LMDB version : 0.9.17
-- USE_METAL : OFF
-- USE_MKL :
-- USE_MOBILE_OPENGL : OFF
-- USE_MPI : ON
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : 1
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : ON
-- OpenCV version : 3.2.0
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : OFF
-- Public Dependencies : Threads::Threads;gflags;glog::glog
-- Private Dependencies : nnpack;cpuinfo;/usr/lib/x86_64-linux-gnu/liblmdb.so;/usr/lib/x86_64-linux-gnu/libleveldb.so;/usr/lib/x86_64-linux-gnu/libsnappy.so;/usr/lib/x86_64-linux-gnu/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;opencv_videoio;opencv_video;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
-- Generating done
```
Can anyone please help? If any more information is required, let me know. Thanks, community.
cc @malfet @seemethere @walterddr | module: build,triaged | low | Critical |
355,497,202 | vscode | Cancel file operations from opening an editor when closing it | Issue Type: <b>Bug</b>
Found a bug: when you click to open a huge file (e.g. a 1.5 GB txt) and then immediately close it, VS Code tries to open it anyway, RAM overflows, and the application crashes.
VS Code version: Code 1.26.1 (493869ee8e8a846b0855873886fc79d480d342de, 2018-08-16T18:38:57.434Z)
OS version: Windows_NT x64 10.0.17134
<!-- generated by issue reporter --> | feature-request,freeze-slow-crash-leak,file-io,keep | low | Critical |
355,500,541 | opencv | solveLP() produces infeasible results | ##### System information (version)
- OpenCV => 3.0
- Operating System / Platform => Ubuntu 16.04 64-bit
- Compiler => gcc 4.9.4
##### Detailed description
I observed that `solveLP()` might produce infeasible results. When I compute `A*x`, some of the constraints are not satisfied, i.e. `A*x > b` for some rows of `A`. Note that in my example `solveLP()` returns `SOLVELP_MULTI`, but that shouldn't matter: it should still produce a feasible solution.
##### Steps to reproduce
I have attached the matrices to reproduce the issue. I have renamed them as argument names in the `solveLP()` signature. Just pass `Func` and `Constr` to `solveLP()` and obtain `z`. I have also attached `z` and `A*z` (`Constr_prod_z`) for convenience.
You can observe that element 161 of `Constr_prod_z` is `4.376743`, which is clearly greater than `1` when, according to the constraints, it shouldn't be (just print `Constr.row(161)` and look at the last element to see `1`). This is not the only constraint violation.
[mats.zip](https://github.com/opencv/opencv/files/2335467/mats.zip)
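While investigating, the violation can be measured directly from any returned solution. A pure-Python sanity check (the toy numbers stand in for the attached `Constr`/`z`; `max_violation` is an illustrative helper, not an OpenCV function):

```python
def max_violation(A, b, x):
    """Return (worst_row, worst_amount) over A@x - b; amount > 0 means infeasible."""
    worst_row, worst = -1, float("-inf")
    for i, row in enumerate(A):
        lhs = sum(a * xi for a, xi in zip(row, x))
        if lhs - b[i] > worst:
            worst_row, worst = i, lhs - b[i]
    return worst_row, worst

# Feasible point: x = (1, 1) satisfies x0 + x1 <= 3 and x0 <= 2.
A = [[1.0, 1.0], [1.0, 0.0]]
b = [3.0, 2.0]
row, amount = max_violation(A, b, [1.0, 1.0])
assert amount <= 0  # no constraint violated

# Infeasible point: x = (2, 2) violates row 0 by exactly 1.
row, amount = max_violation(A, b, [2.0, 2.0])
assert row == 0 and abs(amount - 1.0) < 1e-9
```

Any positive `amount` means the solver returned an infeasible point, which is what the attached data shows at row 161.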
| bug,category: core,confirmed,effort: few days,Hackathon | low | Major |
355,693,351 | pytorch | [...] operator for masked select does not broadcast anymore | I have the latest pytorch version built from source, pulled this morning; ubuntu 16.04, CUDA 9.1
The `tensor[byte_tensor]` indexing, which I thought was equivalent to `masked_select`, doesn't broadcast, while I am pretty sure it did broadcast before.
Simple example to recreate the problem :
```python
a = torch.randn(10,10,10)
b = (a[:,0] > 0).unsqueeze(1) # size 10 x 1 x 10
a.masked_select(b) # works
a[b] # doesn't work
```
EDIT: I did some tests with pytorch 0.3.0; more specifically, it worked with `Variable` but not with `Tensor`:
```python
a = torch.randn(10,10,10)
b = (a[:,0] > 0).unsqueeze(1) # size 10 x 1 x 10
a[b] # doesn't work
Variable(a)[b] # works
```
cc @heitorschueroff | todo,triaged,module: sorting and selection | low | Minor |
355,500,684 | go | proposal: encoding/json: Support for nested values in JSON tags | I have often run into situations where it would be convenient to translate nested JSON objects into flat structs.
An example would be the AWS dynamodb API that returns nested objects based on the saved type (https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html), i.e.:
```json
{
"Item": {
"somekey": {
"S": "somestr"
}
}
}
```
Let's say we know we saved "somekey" as a string.
Thus it would be convenient if we could avoid the need for nested structs:
```golang
package main
import (
	"encoding/json"
	"fmt"
)
var str = `
{
"Item": {
"somekey": {
"S": "somestr"
},
"otherkey": {
"N": "123"
}
}
}
`
type Data struct {
	Somekey  string `json:"Item.somekey.S"`
	Otherkey uint64 `json:"Item.otherkey.N,string"`
}

func main() {
	d := Data{}
	if err := json.Unmarshal([]byte(str), &d); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Data struct:", d)
}
```
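For comparison, extracting the same field today means either mirroring the nesting with throwaway structs or walking a decoded `map[string]interface{}` by hand, roughly like this (a sketch; `lookup` is an illustrative helper, not part of `encoding/json`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// lookup walks a decoded JSON value along a path of object keys.
func lookup(v interface{}, path ...string) (interface{}, bool) {
	for _, key := range path {
		m, ok := v.(map[string]interface{})
		if !ok {
			return nil, false
		}
		if v, ok = m[key]; !ok {
			return nil, false
		}
	}
	return v, true
}

func main() {
	var raw interface{}
	doc := `{"Item": {"somekey": {"S": "somestr"}, "otherkey": {"N": "123"}}}`
	if err := json.Unmarshal([]byte(doc), &raw); err != nil {
		panic(err)
	}
	v, ok := lookup(raw, "Item", "somekey", "S")
	if !ok || v != "somestr" {
		panic("unexpected value")
	}
	fmt.Println("somekey =", v) // prints: somekey = somestr
}
```

The tag syntax proposed above would collapse this traversal (or the equivalent throwaway structs) into the struct definition itself.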
The proposed tag syntax allows one to get saner data structures with minimal code, where previously one had to translate the structs manually to achieve this. | Proposal,Proposal-Hold | medium | Major |
355,507,785 | flutter | TimePicker can not only set a specific minute! | **My app has a feature that should only allow selecting 0 or 30 minutes!**
**TimePickerDialog allows selecting any minute from 0 to 59!**

**Can you provide an API for this?**
| c: new feature,framework,f: material design,f: date/time picker,P3,team-design,triaged-design | low | Minor |
355,539,061 | flutter | Hiding StatusBar and BottomNavigationBar animation of SystemChrome.setEnabledSystemUIOverlays is not so UI Friendly | A difference in the visibility animation of `StatusBar` and `BottomNavigationBar` is perceptible.
All I did was
```dart
onTap:(){
makeItFullScreen
? SystemChrome.setEnabledSystemUIOverlays([])
: SystemChrome.setEnabledSystemUIOverlays(
[SystemUiOverlay.bottom, SystemUiOverlay.top]);
makeItFullScreen = !makeItFullScreen;
}
```
The flickering of that particular part(bottom) of the screen needs to be fixed I guess.
I tested this on emulator and an actual device.
Most of us might not have noticed this because we usually use the default (white) background of our `Scaffold`.
<img src="https://user-images.githubusercontent.com/19249966/44848531-c240a380-ac74-11e8-85a2-8bbbb84e2b1e.gif" width ="300">
Flutter Doctor Logs:
```
[✓] Flutter (Channel dev, v0.7.2, on Linux, locale en_US.UTF-8)
• Flutter version 0.7.2 at /home/daksh/flutter
• Framework revision f8a2fc7c28 (3 days ago), 2018-08-27 20:58:30 +0200
• Engine revision af42b6dc95
• Dart version 2.1.0-dev.1.0.flutter-ccb16f7282
[✓] Android toolchain - develop for Android devices (Android SDK 27.0.3)
• Android SDK at /home/daksh/Android/Sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-27, build-tools 27.0.3
• ANDROID_HOME = /home/daksh/Android/Sdk
• Java binary at: /home/daksh/android-studio/jre/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
• All Android licenses accepted.
[✓] Android Studio (version 3.1)
• Android Studio at /home/daksh/android-studio
• Flutter plugin version 25.0.1
• Dart plugin version 173.4700
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[✓] IntelliJ IDEA Community Edition (version 2018.1)
• IntelliJ at /home/daksh/Downloads/idea-IC-181.5281.24
• Flutter plugin version 25.0.2
• Dart plugin version 181.4892.1
[✓] VS Code (version 1.26.1)
• VS Code at /usr/share/code
• Flutter extension version 2.17.1
[✓] Connected devices (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 8.1.0 (API 27) (emulator)
• No issues found!
```
Cross-link https://stackoverflow.com/questions/52095813/visibility-animation-of-bottomnavigationbarandroid-is-somehow-not-so-ui-friend | platform-android,framework,f: material design,a: quality,a: layout,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-android,triaged-android | low | Major |
355,588,894 | angular | validatorChange triggers valueChanges emit even if model stays the same | ## I'm submitting a...
<pre><code>
[x] Bug report
</code></pre>
## Current behavior
When a control implements the `Validator` interface and informs the forms infrastructure about validator condition changes (through `registerOnValidatorChange`), `valueChanges` emits each time the validator change callback is invoked, even if the model stays the same.
## Expected behavior
`valueChanges` should emit only if the underlying model value changes, regardless of the validation changes.
## Minimal reproduction of the problem with instructions
https://stackblitz.com/edit/angular-value-change-e79u33?file=app%2Fapp.component.ts
## What is the motivation / use case for changing the behavior?
`valueChanges` should not emit when the model is not changing; otherwise its basic contract is broken.
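Until this is fixed, a consumer-side mitigation is to drop consecutive duplicate emissions; in an Angular app you would typically pipe `valueChanges` through RxJS's `distinctUntilChanged`. A dependency-free sketch of the same idea:

```javascript
// Minimal distinct-until-changed filter over a callback stream (sketch;
// in a real app use RxJS's distinctUntilChanged on valueChanges instead).
function distinctUntilChanged(emit, equals = (a, b) => a === b) {
  let has = false, last;
  return (value) => {
    if (!has || !equals(last, value)) {
      has = true;
      last = value;
      emit(value);
    }
  };
}

const seen = [];
const onValue = distinctUntilChanged(v => seen.push(v));
[1, 1, 2, 2, 2, 3].forEach(onValue);
// seen is now [1, 2, 3]
```

Note that this only masks the extra emissions on the consumer side; the underlying contract issue in forms remains.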
## Environment
<pre><code>
Angular version: ^6.0.0
Browser: ALL | type: bug/fix,breaking changes,freq2: medium,area: forms,state: confirmed,P4 | low | Critical |
355,627,594 | vue | Vue treat every element attribute named like 'v-[something]' as directive | ### Version
2.5.17
### Reproduction link
[http://jsfiddle.net/wf48v9de/3/](http://jsfiddle.net/wf48v9de/3/)
### Steps to reproduce
Run the fiddle and look at console.
### What is expected?
If there is no registered directive named 'v-fake', Vue should ignore this attribute on the element.
### What is actually happening?
Vue treats every 'v-[something]' attribute as a directive, regardless of whether it is registered globally, locally, or not at all. It tries to evaluate the value of the 'v-'-prefixed attribute and, as a result, throws a TypeError because it can't find a property matching the string passed to the attribute.
---
If this is intentional, the Vue docs should mention the behavior. In my opinion, however, Vue should ignore such attributes.
| improvement | low | Critical |
355,637,770 | vue-element-admin | Would this be compatible with Nuxt? | Did any of you guys port it, and do you have a skeleton starter in Nuxt? | 待定需求 | low | Major |
355,639,559 | rust | Stack overflow with Boxed array | This is possibly the same bug as
https://github.com/rust-lang/rust/issues/40862
Using the latest version of Rust
rustc 1.27.2 (58cc626de 2018-07-18)
The following code causes a stack overflow
```rust
#[test]
fn test_boxed() {
    let a = Box::new([-1; 3000000]);
}
```
# Workarounds
## Using `Vec<T>`
This does not have overhead.
```rust
use std::convert::TryInto; // brings `try_into` into scope on pre-2021 editions

let boxed_slice: Box<[u8]> = vec![0; 100_000_000].into_boxed_slice();
let boxed_array: Box<[u8; 100_000_000]> =
    vec![0; 100_000_000].into_boxed_slice().try_into().unwrap();
```
## Unstably using `new_uninit`
This requires an unstable API and unsafe, but is more flexible.
```rust
let mut b = Box::new_uninit();
// `write_bytes` takes its count in units of T = [u8; 100_000_000], so 1 fills the whole array
unsafe { std::ptr::write_bytes(b.as_mut_ptr(), 0, 1) };
let boxed_array: Box<[u8; 100_000_000]> = unsafe { b.assume_init() };
``` | A-LLVM,A-codegen,T-compiler,A-box | high | Critical |
355,673,518 | TypeScript | `@typedef` doesn't give error for bad or missing typename | ```js
/** @typedef {number} 3 */
/** @typedef {number */
```
**Expected behavior:**
Errors on both.
1. "'3' is not a valid type alias name."
2. Missing type alias name.
**Actual behavior:**
Errors on neither, and neither alias can be used.
If you have a declaration following the typedef, you will get the name of that declaration instead. In the second case that's OK, but in the first case the '3' should still be an error:
```js
/** @typedef {number} 3 - this should still be an error */
var Type1;
/** @typedef {number} */
var Type2; // should be allowed
```
Note that the type name is required: even if you provide a following name to use, we'll treat any valid identifier in the comment text as the type name in preference to the following name:
```js
/** @typedef {number} A type */
var Type3;
```
This defines a type `A` not `Type3`. That's why we should issue an error when, for example, the text is '3' instead of 'A'. | Suggestion,Experience Enhancement | low | Critical |
355,677,766 | vscode | Allow changing triple-click (and double-click) behaviour | Currently when I triple-click a word, the whole line gets selected.
I'm finding this annoying, as I seem to be doing it by accident, when intending to double-click to select just one word.
So I want to be able to customize triple-click to behave identically to double-click.
Searching user settings I cannot find anything for "triple". I also cannot find anything for controlling double-click behaviour. Or even how to define the action for "select word" and "select line". (I've also checked the keymap config, in case mouse actions were in there.)
(That is surprising, so if the config options I want already exist, then my feature request is to make them more easily discoverable!)
| feature-request,editor-commands | medium | Critical |
355,693,351 | go | net/http: TimeoutHandler hides panic locations | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
v1.10.3
### Does this issue reproduce with the latest release?
Not sure; I am still looking for a solution with my current version.
### What operating system and processor architecture are you using (`go env`)?
OS: linux, arch: amd64
### What did you do?
This is a demo
```go
package main
import (
	"log"
	"net/http"
	"time"
)

type sampleHandler string

func (s sampleHandler) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
	var mm interface{}
	log.Print("This will be panic")
	log.Print(mm.(string))
}

func main() {
	go func() {
		http.ListenAndServe("localhost:8888", http.TimeoutHandler(sampleHandler("sample"), time.Second*10, "Timeout"))
	}()
	time.Sleep(time.Second * 2)
	http.Get("http://localhost:8888/")
}
```
### What did you expect to see?
I expect the stack trace to point to where the panic originated (`log.Print(mm.(string))`) so that I can debug the app easily.
### What did you see instead?
The stack trace points me to the `panic(p)` statement in `net/http.timeoutHandler` (`go/src/net/http/server.go:3144`). I know that makes sense, because that is where `panic` is called again. Is there any good way to preserve the original stack trace from a handler wrapped in `TimeoutHandler`?
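One workaround (a sketch, not an established `net/http` facility): recover inside the wrapped handler itself, where the goroutine's stack still contains the original panic site, log `runtime/debug.Stack()`, and re-panic so `TimeoutHandler`'s machinery keeps working. Stripped of the HTTP plumbing for brevity:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// withStack captures the stack at the original panic site before re-panicking.
// In ServeHTTP you would install the same defer as the first statement.
func withStack(f func()) {
	defer func() {
		if p := recover(); p != nil {
			// debug.Stack() here still shows the frame of boom(), unlike
			// the trace produced after timeoutHandler re-raises the panic.
			fmt.Printf("panic: %v\n%s", p, debug.Stack())
			panic(p)
		}
	}()
	f()
}

func boom() {
	var mm interface{}
	_ = mm.(string) // same failing type assertion as in the demo above
}

func main() {
	defer func() { recover() }() // swallow the re-panic for this demo
	withStack(boom)
}
```

The trace printed by `withStack` includes `main.boom`, which is exactly the location the default output hides.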
| NeedsInvestigation | medium | Critical |
355,705,732 | flutter | Add new features to Dismissible widget | The Dismissible widget is awesome but has very limited functionality, which makes it unusable in many real-world applications. Let's consider adding the following features (in order of priority):
1. Allow the onResize() callback to receive the direction in which the item was dismissed. This is needed to implement two different actions depending on swipe direction (e.g. bookmark and delete).
2. Add an "undo" function that restores the dismissed item by moving it back to its position. This may be useful in case the user wants to undo the dismiss action.
3. Allow custom animations in the background widgets, for example by adding a new callback that receives the slider's position. The animation can then be a function of the slider position, as in Google Inbox (resizing the icon in the background).
| c: new feature,framework,f: material design,c: proposal,P2,team-design,triaged-design | low | Major |
355,728,130 | rust | 35% performance regression in generated code since 1.24 | ## Summary
[Claxon](https://github.com/ruuda/claxon), when compiled with Rust 1.25.0 or later, takes 1.36 times as long to run as a version compiled with Rust 1.23.0. When compiled with Rust 1.24.0, it takes 1.45 times as long to run.
It looks like there was a severe regression in generated code in 1.24.0. 1.25.0 improved a bit again, but is still significantly worse than 1.23.0.
## Steps to reproduce
Prepare:
```
git clone https://github.com/ruuda/claxon --branch v0.4.1
cd claxon/testsamples
./populate.sh
cp p2.flac p3.flac p4.flac extra
cd ..
mkdir rust_bench
```
You can also copy your own files into `testsamples/extra` if you happen to have some lying around.
Then, with a Rust 1.23.0 toolchain, or after changing [this line](https://github.com/ruuda/claxon/blob/v0.4.1/tools/benchmark.sh#L31) to use `cargo +1.23.0`:
```
tools/benchmark.sh rust_bench/23
```
Then, with a Rust 1.25.0 toolchain, or after updating the script to use `cargo +1.25.0`:
```
tools/benchmark.sh rust_bench/25
tools/compare_benches.r rust_bench/23_*_all.dat rust_bench/25_*_all.dat
```
Output in my case:
```
| p10 | 18.0 ± 0.3 ns | 1.370 ± 0.033 |
| p50 | 18.9 ± 0.4 ns | 1.363 ± 0.043 |
| p90 | 21.1 ± 2.1 ns | 1.388 ± 0.181 |
| μ | 19.4 ± 0.7 ns | 1.359 ± 0.065 |
| τ | 65.9 ± 3.6 MiB/s | 0.736 ± 0.053 |
```
The numbers in the rightmost column show the running time of the benchmark compiled with Rust 1.25.0 relative to the running time of the benchmark compiled with Rust 1.23.0.
For Rust 1.23.0 vs Rust 1.24.0 I get these results:
```
| p10 | 19.8 ± 0.2 ns | 1.501 ± 0.028 |
| p50 | 20.5 ± 0.2 ns | 1.473 ± 0.036 |
| p90 | 21.4 ± 0.4 ns | 1.413 ± 0.120 |
| μ | 20.7 ± 0.2 ns | 1.449 ± 0.048 |
| τ | 61.8 ± 3.3 MiB/s | 0.690 ± 0.049 |
```
For Rust 1.23.0 vs Rust 1.28.0 I get these results:
```
| p10 | 18.3 ± 0.2 ns | 1.392 ± 0.026 |
| p50 | 18.9 ± 0.1 ns | 1.360 ± 0.031 |
| p90 | 19.7 ± 0.4 ns | 1.301 ± 0.111 |
| μ | 19.1 ± 0.2 ns | 1.339 ± 0.043 |
| τ | 66.9 ± 3.8 MiB/s | 0.747 ± 0.055 |
```
For Rust 1.29.0-beta.1 and 1.30.0-nightly (3edb355b7 2018-08-03) I get similar results.
To show that the setup works, this is Rust 1.13.0 vs Rust 1.23.0, which shows no significant difference:
```
| p10 | 13.2 ± 0.2 ns | 0.989 ± 0.022 |
| p50 | 13.9 ± 0.3 ns | 0.993 ± 0.027 |
| p90 | 15.2 ± 1.3 ns | 1.036 ± 0.088 |
| μ | 14.3 ± 0.4 ns | 1.013 ± 0.034 |
| τ | 89.6 ± 4.2 MiB/s | 0.988 ± 0.067 |
```
And Rust 1.25.0 vs Rust 1.28.0 does not show a significant difference either:
```
| p10 | 18.3 ± 0.2 ns | 1.016 ± 0.021 |
| p50 | 18.9 ± 0.1 ns | 0.998 ± 0.024 |
| p90 | 19.7 ± 0.4 ns | 0.937 ± 0.097 |
| μ | 19.1 ± 0.2 ns | 0.985 ± 0.037 |
| τ | 66.9 ± 3.8 MiB/s | 1.015 ± 0.080 |
```
## Details
Claxon is a decoder for the flac audio format; the benchmark decodes every file in `testsamples/extra` 5 times and collects statistics about the duration.
The benchmark is compiled with `-C target_cpu=native`. <details><summary>I use a Skylake i7.</summary>
```
vendor_id : GenuineIntel
cpu family : 6
model : 94
model name : Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
stepping : 3
microcode : 0xc6
cpu MHz : 3394.795
cache size : 6144 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 22
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5186.00
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
```
</details> | I-slow,P-medium,T-compiler,regression-from-stable-to-stable,C-bug | low | Critical |
355,745,154 | TypeScript | Javascript: Object.assign to assign property values for classes is not respected | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
Perhaps TS should be able to identify `Object.assign(this, ...)` then infer the properties that are assigned to `this`? Or is this not possible to do? I am not sure.
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:**
Version 3.0.3
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
- object assign
- class
- does not exist on type
**Code**
A.js
```js
export default class A {
constructor ({ x, y, z }) {
Object.assign(this, {x, y, z});
}
f () {
return this.x;
}
}
```
**Expected behavior:**
TS should be able to identify `x` as a property of `this`.
**Actual behavior:**
Throwing:
```
Property 'x' does not exist on type 'A'.
```
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
http://www.typescriptlang.org/play/#src=export%20default%20class%20A%20%7B%0D%0A%20%20constructor%20(%7B%20x%2C%20y%2C%20z%20%7D)%20%7B%0D%0A%20%20%20%20Object.assign(this%2C%20%7Bx%2C%20y%2C%20z%7D)%3B%0D%0A%20%20%7D%0D%0A%20%20f%20()%20%7B%0D%0A%20%20%20%20return%20this.x%3B%0D%0A%20%20%7D%0D%0A%7D
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
no
**Side Note:**
I am using TS with Javascript because YouCompleteMe switched to using TSServer for semantic completion. I absolutely love TS so was really happy about the switch! Thanks for the great work! | Suggestion,Experience Enhancement | high | Critical |
355,751,785 | flutter | `flutter screenshot --type=skia` can't record images in real devices | engine,dependency: skia,P2,team-engine,triaged-engine | low | Major |
|
355,772,049 | pytorch | use_system_nccl flag does not work? | I try to use nccl lib on my machine when installing pytorch from source.
What I did is
`export USE_SYSTEM_NCCL=1`
`export NCCL_ROOT_DIR="~/nccl_2.1.15-1+cuda9.0_x86_64/"`
`export NCCL_LIB_DIR="~/nccl_2.1.15-1+cuda9.0_x86_64/lib"`
`export NCCL_INCLUDE_DIR="~/nccl_2.1.15-1+cuda9.0_x86_64/include/"`
and then
`python setup.py install`
But the installation log shows that
-- USE_MPI : OFF
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
I don't know how to solve it. Can anyone help with it?
cc @malfet @seemethere @walterddr | module: build,triaged | low | Minor |
355,772,189 | go | net: FileConn() and Dial() only return public net.Conn implementations | In Go 1.11 and 1.10, the documentation does not say anything about which implementations of `net.Conn` are actually returned by `net.Dial()` (and friends) and `net.FileConn()`.
The current implementations of `net.Dial()` and `net.FileConn()` only return public implementations of `net.Conn` (or an error). Taking advantage of this by type-asserting to the actual implementation returned by these functions/methods is extremely useful. As such, it would be nice to have this behavior documented so users can rely on it.
I'll send out a CL to extend the documentation and a test or two shortly. | Documentation,NeedsInvestigation | low | Critical |
355,774,202 | terminal | Bringing back the console graphics screen-buffers? | How hard would it be to bring back the console graphics screen-buffers functionality to conhost, which previously existed in Windows <= 2003 and was used by NTVDM to display the output of graphics DOS applications while being run in windowed mode? (see e.g. https://virtuallyfun.com/wordpress/2009/08/08/softpc-%e2%80%93at-version-3/ ) (hint: done using CreateConsoleScreenBuffer with special parameters.)
This feature could also be useful for those other applications (e.g. some Far-Manager plugins) that attempt through convoluted hacks (aka. giving the illusion) to display graphics contents (e.g. thumbnails of image files) in the console window.
Or for these 3rd-party emulators (DOS, etc...) that can start from the command-line, but are currently forced to create a separate window when starting a graphics app. | Issue-Feature,Product-Conhost,Area-Output,Area-Server | low | Major |
355,776,503 | rust | Improve suggestion for lifetime error in pattern | I ran into [this lifetime issue](https://play.rust-lang.org/?gist=d9584412582bbb07cba97ebae726b86e&version=stable&mode=debug&edition=2015):
```rust
if let Some(v) = var {
a.push(&v);
}
```
which is [easily resolved](https://play.rust-lang.org/?gist=35242df073812059e21e772f084362c9&version=stable&mode=debug&edition=2015) as follows:
```rust
if let Some(ref v) = var {
a.push(&v);
}
```
Perhaps this could be suggested to the user?
/cc @estebank | C-enhancement,A-diagnostics,A-lifetimes,T-compiler | low | Critical |
355,792,055 | pytorch | [Caffe2] Error C2492: data with thread storage duration may not have dll interface | ## Issue description
Getting error C2492 from `caffe2\core\net_async_base.h` when building Caffe2 as DLL on Windows.
```
caffe2\core\net_async_base.h(103): error C2492: 'caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_tracing.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'protected: static std::vector<int,std::allocator<int> > caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_tracing.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_polling.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'protected: static std::vector<int,std::allocator<int> > caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_polling.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_scheduling.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'protected: static std::vector<int,std::allocator<int> > caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_scheduling.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_base.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'protected: static std::vector<int,std::allocator<int> > caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_async_base.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_dag.cc) [build\caffe2\caffe2.vcxproj]
caffe2\core\net_async_base.h(103): error C2492: 'protected: static std::vector<int,std::allocator<int> > caffe2::AsyncNetBase::stream_counters_': data with thread storage duration may not have dll interface (compiling source file caffe2\core\net_dag.cc) [build\caffe2\caffe2.vcxproj]
```
## System Info
- PyTorch or Caffe2: Caffe2
- How you installed PyTorch (conda, pip, source): source
- Build command you used (if compiling from source): cmake
- OS: Win10
- PyTorch version: master
- VS version (if compiling from source): 2017 15.8.1
- CMake version: 3.12.1
| caffe2 | low | Critical |
355,802,153 | go | testing: document best practices for avoiding compiler optimizations in benchmarks | ### What did you do?
1. I wanted to make sure I was creating accurate benchmarks.
2. I found and read [Dave Cheney's 2013 blog post on how to write benchmarks in Go](https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go). In the "A note on compiler optimisations" section he mentions that it is best practice to assign results to local and package level variables to avoid optimizations.
3. I went to https://golang.org/pkg/testing/#hdr-Benchmarks
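For context, the technique from the blog post amounts to something like the following sketch (the `fib` workload, the local `r`, and the package-level sink `result` are illustrative names of mine, not from the testing docs):

```go
package main

import (
	"fmt"
	"testing"
)

// result is a package-level sink; storing the benchmarked value here is
// the technique the post recommends so the compiler cannot eliminate the
// call under test as dead code.
var result int

func fib(n int) int {
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

func BenchmarkFib(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		// Always record the result so the call is not optimized away.
		r = fib(20)
	}
	// Copy the final value to the package-level sink.
	result = r
}

func main() {
	// testing.Benchmark lets this sketch run standalone; in a real
	// _test.go file you would instead run `go test -bench=.`.
	fmt.Println(testing.Benchmark(BenchmarkFib))
}
```

The open question in this issue is whether this sink pattern is still necessary with current compilers — which is exactly what the testing docs could state either way.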
### What did you expect to see?
I expected to see documentation on how to correctly write benchmarks that avoid compiler optimizations and examples that reflect best practices.
If the techniques described in Dave's blog post are no longer necessary, I expected to see explicit documentation to that effect.
### What did you see instead?
Neither of those things. | Documentation,help wanted,NeedsInvestigation | medium | Critical |
355,812,667 | go | crypto/x509/pkix: Name.String() follows obsolete RFC | Please answer these questions before submitting your issue. Thanks!
The format of a certificate Subject (or Issuer) when converted to a String follows RFC 2253 (obsolete) instead of RFC 4514. Using the same format as given by `openssl x509 -in cert.pem -text -noout` would be much more useful.
### What version of Go are you using (`go version`)?
1.11
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
linux, amd64
### What did you do?
Print the issuer (or subject) DN from an x509 certificate.
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
package main
import (
"crypto/x509"
"encoding/pem"
"fmt"
)
func main() {
rawPem :=
`-----BEGIN CERTIFICATE-----
MIIDizCCAnOgAwIBAgIJAOnd3oVtvRW4MA0GCSqGSIb3DQEBCwUAMFwxFzAVBgNV
BAMMDnNvbWVkb21haW4uY29tMRcwFQYDVQQKDA5NeSBDb3Jwb3JhdGlvbjETMBEG
A1UECwwKT3JnIFVuaXQgMTETMBEGA1UECwwKT3JnIFVuaXQgMjAeFw0xODA4MzEw
MTE5MzBaFw0xODA5MzAwMTE5MzBaMFwxFzAVBgNVBAMMDnNvbWVkb21haW4uY29t
MRcwFQYDVQQKDA5NeSBDb3Jwb3JhdGlvbjETMBEGA1UECwwKT3JnIFVuaXQgMTET
MBEGA1UECwwKT3JnIFVuaXQgMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
ggEBAKCmoAGdRmzfb807ZQw1NAIhdqeKeNi7niZeYrzocBavzHd/6WfsdF/FqLjB
NH6BRwB7j86O7hyETJcfcXnrJGsKcbBTspMjDBHp+XDvoWElfx7W7A1F/ZI51MjK
OZkWFgNl2bGJIvFNAe25fQTlRtjW6/OC+SxXssaHufKjTJDqGo3YlEOvVWcMGXiS
cJBOnDTDABt6caAf9QbPFS6SI7Qq7East5xRATkY3Hz9CM5EU5x6j+frO30gvsQs
eyIRU6vHgKEsnO90hxF0TQtXiI4IrYL/ofa6J7Ncpnerj0/0+6Kw2qgV5qse8/BS
H0AUXwNp9gm/WMDI0ehiwLDUFIcCAwEAAaNQME4wHQYDVR0OBBYEFGl1dJ7BFREN
NK2BF5NDqimHP9zfMB8GA1UdIwQYMBaAFGl1dJ7BFRENNK2BF5NDqimHP9zfMAwG
A1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBADjJ34L0Rrz+suetYK+NKiXG
8dQcbMOYxSFg8HjNB7ZtL97lRpeRwoPx4IjpcbuYtvKdxVuDea0753VNe7Q3qowt
0k3IEgzVTN5fTInFisLQG9jCfkhHByeNrOgLs6qrk8O+6SqMcOgVuNmMzZDhlXj+
9drp70xZtLLUN9zFbFESlFoq8GBd4CeerMNn/eU+ukFI/outLU+0+y9lpXwrHglk
9VGJtB40NiSNfhb8MqNTVgPnyEDTOHEStbYddmDTFXtuvGe5b0+j5DBQao87JPr/
rwUus39HqmFbSZtkxEjFI71Dh4Q0HnoDHM8+GyJohYxqk81fQlJJGW0M2XEWT3g=
-----END CERTIFICATE-----`
pemBlock, _ := pem.Decode([]byte(rawPem))
cert, _ := x509.ParseCertificate(pemBlock.Bytes)
fmt.Printf("%s\n", cert.Issuer)
}
### What did you expect to see?
CN=somedomain.com, O=My Corporation, OU=Org Unit 1, OU=Org Unit 2
### What did you see instead?
CN=somedomain.com,OU=Org Unit 1+OU=Org Unit 2,O=My Corporation
| NeedsDecision | low | Critical |
355,842,197 | go | x/build: add a Windows Server, version 1803 (~"Windows 10") builder | GCE supports "Windows Server, version 1803":
https://cloud.google.com/compute/docs/instances/windows/
We should run such a builder.
See:
https://go-review.googlesource.com/c/go/+/131976
https://docs.microsoft.com/en-us/windows-server/get-started/get-started-with-1803
It would've prevented #25722.
/cc @johnsonj (who noted these things on CL 131976), @dmitshur
| OS-Windows,Builders,FeatureRequest,new-builder | low | Minor |
355,885,958 | go | net/http: first byte of second request in hijack mode is lost |
### What version of Go are you using (`go version`)?
go version go1.11 darwin/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/jjz/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/jjz/go"
GOPROXY=""
GORACE=""
GOROOT="/opt/go"
GOTMPDIR=""
GOTOOLDIR="/opt/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/x2/t219h_ks1wl4gjwnz82xc17h0000gn/T/go-build513787949=/tmp/go-build -gno-record-gcc-switches -fno-common"
### What did you do?
Hijack a client HTTP request to a backend server in a Go HTTP server. If the underlying connection is reused, meaning more than one HTTP request is sent over the connection, and the first request has a body size > 0, the first byte of the second request will be eaten.
To reproduce this issue, I wrote a simple test case. A client sends requests to a transparent proxy, which forwards them to a backend server in hijack mode. The backend server is a simple echo server that responds with the request method and body. Below is the code:
#### Proxy & Server
```
package main
import (
"net/http"
"net"
"sync"
"io"
"fmt"
"bytes"
)
func echo(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
var buf bytes.Buffer
io.Copy(&buf, r.Body)
resp := fmt.Sprintf("method: %s, body: %s", r.Method, buf.String())
w.Write([]byte(resp))
}
type Proxy struct {
Backend string
}
func (p *Proxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
fmt.Printf("Receive a request from %v\n", r.RemoteAddr)
hj, ok := w.(http.Hijacker)
if !ok {
http.Error(w, "webserver doesn't support hijacking", http.StatusInternalServerError)
return
}
backendConn, err := net.Dial("tcp", p.Backend)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
defer backendConn.Close()
clientConn, bufrw, err := hj.Hijack()
if err != nil {
return
}
defer clientConn.Close()
if err := r.Write(backendConn); err != nil {
return
}
var wg = sync.WaitGroup{}
wg.Add(2)
copy := func(dst io.Writer, src io.Reader) {
io.Copy(dst, src)
if conn, ok := dst.(interface {
CloseWrite() error
}); ok {
conn.CloseWrite()
}
wg.Done()
}
bufferedReader := io.LimitReader(bufrw, int64(bufrw.Reader.Buffered()))
mr := io.MultiReader(bufferedReader, clientConn)
go copy(clientConn, backendConn)
go copy(backendConn, mr)
wg.Wait()
}
func main() {
go http.ListenAndServe(":8081", http.HandlerFunc(echo))
p := &Proxy{Backend: "localhost:8081"}
http.ListenAndServe(":8080", p)
}
```
#### Client:
```
package main
import (
"net/http"
"bytes"
"io"
"os"
"fmt"
)
func req(client *http.Client, msg string) {
body := bytes.NewBufferString(msg)
r, err := http.NewRequest("POST", "http://localhost:8080/", body)
if err != nil {
panic(err)
}
resp, err := client.Do(r)
if err != nil {
panic(err)
}
defer resp.Body.Close()
if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
panic(err)
}
}
func main() {
c := &http.Client{}
fmt.Println("Send 'Hello'")
req(c, "Hello")
fmt.Println()
fmt.Println("Send 'World'")
req(c, "World")
}
```
1. Run the server in terminal: `go run server.go`
2. Open another terminal, run `go run client.go`
The output shows that the letter P (of the word POST) is lost.
### What did you expect to see?
```
go run client.go
Send 'Hello'
method: POST, body: Hello
Send 'World'
method: POST, body: World
```
### What did you see instead?
```
go run client.go
Send 'Hello'
method: POST, body: Hello
Send 'World'
method: OST, body: World
```
| NeedsInvestigation | low | Critical |
355,913,029 | TypeScript | A way to specify class properties using JSDoc | ## Search Terms
JSDoc class properties any key `@property`
## Suggestion
I would like to be able to extend the type of an ES6 class using JSDoc comments, such that TypeScript understands there are other properties that might be added to the object later.
Alternatively, I would like a way to indicate that a given class acts as a container that can contain any possible key.
## Use Cases
I have a class that is used like a DTO. It has some properties that are guaranteed to be there, but is also used for a lot of optional properties. Take, for example:
```js
class DTO {
constructor(id) {
/**
* @type {string}
*/
this.id = id;
}
}
```
TypeScript now recognizes the object type, but whenever I try to use other properties, it complains. Currently, I'm resorting to something like this as a work-around:
```js
class DTO {
constructor(id) {
/**
* @type {string}
*/
this.id = id;
/**
* @type {Object?}
*/
this.otherProperty;
}
}
```
But it's ugly and verbose, and worse, it includes actual JavaScript code, that serves no purpose other than to provide type information.
## Examples
Rather, what I would like to do is something like this (that I would like to be equivalent to the snippet above):
```js
/**
* @property {Object} [otherProperty]
*/
class DTO {
constructor(id) {
/**
* @type {string}
*/
this.id = id;
}
}
```
Another equivalent alternative could be (but currently doesn't compile because of a "Duplicate identifier" error):
```js
/**
* @typedef {Object} DTO
* @property {string} id
* @property {Object} [otherProperty]
*/
class DTO {
constructor(id) {
this.id = id;
}
}
```
Or, otherwise, some way to indicate this class can be extended with anything. Which means I would like a way to specify the following TypeScript using JSDoc *and* I want it to apply to an existing ES6 class (because I do use the class to be able to do `instanceof` checks):
```ts
interface DTO {
id: string,
[key: string]: any,
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,Domain: JavaScript,Experience Enhancement | medium | Critical |
355,915,406 | flutter | drawercontroller stop at tint screen on both open and close drawer | 1. When dragging or clicking the icon to open the drawer, the drawer didn't open but the screen was tinted. I can then drag the drawer out.
2. When triggering `Navigator.pop(context);`, the drawer closed but the screen remained tinted; an extra tap is needed to clear it.
Tested on an Android device and emulator; `drawerCallback` simply updated a variable.
Code in
```dart
Scaffold(
drawer: DrawerController(child: DrawerOnly(), alignment: DrawerAlignment.start, drawerCallback: drawerCallback),
```
instead of (this works)
```
drawer: DrawerOnly(),
```
| framework,f: material design,P3,team-design,triaged-design | medium | Major |
355,950,549 | vue | 2 transition-groups with different tags v-if / v-else = Cannot read property '$vnode' of null | ### Version
2.5.16
### Reproduction link
[https://codepen.io/anon/pen/BOpJZb](https://codepen.io/anon/pen/BOpJZb)
### Steps to reproduce
1- Open the console
2- Click the button
### What is expected?
In fact I don't know. I would expect not to see the error but I'm thinking maybe I'm doing something wrong.
### What is actually happening?
Cannot read property '$vnode' of null appears in the console
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement,has workaround,transition | low | Critical |
355,969,050 | vscode | Allow to compare 3 files | VS Code allows to easily compare two files by choosing `Select for compare` and `Compare with selected`.
But we can only select two files.
`diff3`, `kdiff3` and `vimdiff` do the job but I would prefer it integrated into VS Code.
This would be helpful!
**Edit**: would also be nice to be able to compare 4 files 😄 [vimdiff](http://vimdoc.sourceforge.net/htmldoc/diff.html) and [diffuse](http://diffuse.sourceforge.net/) do | feature-request,diff-editor | high | Critical |
355,985,125 | opencv | assert fires in imencode for .hdr because the ImageEncoders write into a file instead of buffer | ##### System information (version)
- OpenCV = 3.4.1 (but probably any)
- Operating System / Platform = Android
- Compiler = GCC (probably more)
##### Detailed description
When trying to save HDR data into a buffer via imencode as .hdr, the program stops with
```
Error: Assertion failed (code) in bool cv::imencode(const cv::String&, cv::InputArray, std::vector<unsigned char>&, const std::vector<int>&), file /build/master_pack-android/opencv/modules/imgcodecs/src/loadsave.cpp, line 929
```
The problem seems to be that both ImageEncoder classes internally want to write to files instead of the assigned buffer. I don't know whether the data would be read back into the buffer if the encoder were able to write the file. **EDIT** The fallback for encoders that do not support buffer mode uses a temporary file that is then read back into the buffer. I assume that the temporary file cannot be created, resulting in an error.
##### Steps to reproduce
On Android, run this code:
```
std::vector<uchar> buffer;
cv::Mat image(10, 10, CV_32F);
cv::imencode(".hdr", image, buffer);
```
This is probably reproducible outside Android when you run this code in a directory where the user has no write privileges, as the code will try to create the file ".hdr" there.
| priority: low,platform: android,category: imgcodecs | low | Critical |
356,015,753 | rust | Various FFI run-pass tests probably should be using `repr(C)` as lint is instructing. | The following run-pass tests cause the `improper_ctypes` lint to fire. It would probably be fine (and trivial) to add `#[repr(C)]` to them.
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-pass-TwoU16s.rs#L17
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-pass-TwoU32s.rs#L17
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-pass-TwoU64s.rs#L17
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-pass-TwoU8s.rs#L17
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-return-TwoU16s.rs#L13
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-return-TwoU32s.rs#L13
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-return-TwoU64s.rs#L13
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-return-TwoU8s.rs#L13
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/foreign-fn-with-byval.rs#L14
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/issue-3656.rs#L23
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/issue-5754.rs#L13
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/issue-6470.rs#L14
(I'm just filing this ticket so I have something to point at from a PR for #53764.)
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":null}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-testsuite,T-compiler,C-bug,A-repr | low | Minor |
356,017,337 | rust | run-pass/extern-pass-empty is probably a bogus thing to test | The following code hits the lint that warns about trying to put `#[repr(C)]` on an empty struct.
https://github.com/rust-lang/rust/blob/c2afca36672a85248f4da3e8db8cdfac198ad4ad/src/test/run-pass/extern-pass-empty.rs#L33-L34
(This seems like a bogus test that was caught by working on fixing #53764.) | A-testsuite,A-FFI,P-medium,T-compiler,S-blocked | low | Major |
356,028,986 | godot | Load as Image for Android compatability | **Godot version:**
3.1
**Issue description:**
In 3.1 I have tested the `image.load()` method on the desktop and android and have found that this function does not load a Texture or an Image resource AS an image recognizable on Android. It only works on the desktop.
However, `image = load()` WILL load an Image resource (not a Texture resource) as an image recognized by Android
To make this work, I have to manually re-import the .png as an Image (default is Texture) then use `image = load()` instead of the built-in `Image.load()` method.
`Image.load()` method should load a Texture or Image resource as a true Image recognizable on Android. It should automatically convert a Texture to an image.
| enhancement,platform:android,topic:import | low | Minor |
356,041,292 | rust | Function missing PartialEq when type not explicitly annotated | Suppose we have an extern function foo for use in FFI. For some reason an explicit type annotation allows use of the PartialEq trait, but no type annotation does not (even though the error message states the type is identical: `unsafe extern "C" fn() -> i32 {foo}`):
```rust
#[no_mangle]
pub unsafe extern "C" fn foo() -> i32 { return 1; }
fn main() {
let mut fn_ptr: unsafe extern "C" fn() -> i32 = foo;
let mut fn_ptr2 = foo;
fn_ptr == fn_ptr; // Ok
fn_ptr2 == fn_ptr2; // Err
}
```
Which errors with:
```shell
error[E0369]: binary operation `==` cannot be applied to type `unsafe extern "C" fn() -> i32 {foo}`
--> src/main.rs:8:5
|
8 | fn_ptr2 == fn_ptr2; // Err
| ^^^^^^^^^^^^^^^^^^
|
= note: an implementation of `std::cmp::PartialEq` might be missing for `unsafe extern "C" fn() -> i32 {foo}`
``` | A-type-system,T-compiler,T-types | low | Critical |
356,043,471 | go | os: Readdir swallows partial results if it fails to Lstat any file in the listing |
### What version of Go are you using (`go version`)?
go version go1.10.1 linux/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOHOSTOS="linux"
### What did you do?
Test program here
https://play.golang.org/p/Xgvoi1GQGUv
It is hard to reproduce but when using fuse sometimes the mount point becomes disconnected. In this case when you try to do a lstat() of it you get "transport endpoint is not connected" (Google for that to see that it is a common thing). Anyway the issue is really that when a directory contains a disconnected mount point like this, then Golang's readdir variants (os.Open + Readdir and ioutils.ReadDir()) break out of the loop as soon as they can not Lstat one of the files in the directory and this either returns no results in the case of ioutils or randomly less results in the case of os.Open + Readdir.
This is very surprising and leads to weird program failures because an external event (unclean mount point) suddenly causes the entire directory listing to fail in Go programs. The directory is fine for e.g. ls - you can see how /bin/ls shows it:
```
$ ls -l /
ls: cannot access '/mnt': Transport endpoint is not connected
total 2235132
drwxr-xr-x 2 root root 12288 Aug 24 07:35 bin
drwxr-xr-x 3 root root 4096 Aug 30 06:38 boot
...
drwxr-xr-x 2 root root 4096 May 3 15:48 lib64
drwx------ 2 root root 16384 Sep 26 2017 lost+found
drwxr-xr-x 5 root root 4096 Feb 9 2018 media
d????????? ? ? ? ? ? mnt
drwxr-xr-x 4 root root 4096 Jun 28 12:41 opt
dr-xr-xr-x 362 root root 0 Jun 27 23:02 proc
...
-rw------- 1 root root 2147483648 Dec 17 2017 swapfile
dr-xr-xr-x 13 root root 0 Sep 1 01:28 sys
drwxrwxrwt 89 root root 102400 Sep 1 01:51 tmp
drwxr-xr-x 14 root root 4096 Dec 29 2017 usr
```
The issue is in os/dir_unix.go exiting out of the loop in case of an lstat error.
A more robust implementation is possible by copying out the code from dir_unix.go and modifying it (https://play.golang.org/p/UFvrro7cqqu) so maybe this is a valid workaround but IMHO almost every user of ioutil.ReadDir() expects to get some results back - it is unexpected to just have the entire directory listing fail because someone has put a fuse mount inside it.
Semantically to me at least, when I call ReadDir on a directory (say "/"), then any error that I get back should relate to the directory itself. I was surprised that I was unable to ReadDir("/") when a subdir of "/" was actually un-stat'able. The error does not seem to relate to the thing I was calling the function on.
### What did you expect to see?
I expected partial results with all the results that could be stat'ed. Maybe add an empty FileInfo for the bad mount point or omit it.
### What did you see instead?
ReadDir() returns an error and no results - weird since I can totally do an "ls -l /" and this works fine.
| NeedsInvestigation | low | Critical |
356,108,243 | go | encoding/asn1: error when unmarshalling SEQUENCE OF SET |
### What version of Go are you using (`go version`)?
go1.11 windows/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
```
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\xxx\AppData\Local\go-build
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\dev;C:\gopath
set GOPROXY=
set GORACE=
set GOROOT=C:\Go
set GOTMPDIR=
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\xxx\AppData\Local\Temp\go-build551125774=/tmp/go-build -gno-rec
ord-gcc-switches
```
### What did you do?
```golang
package main
import (
"encoding/asn1"
"fmt"
)
//` schema:
//
// World-Schema DEFINITIONS AUTOMATIC TAGS ::=
// BEGIN
// SomeStruct ::= SEQUENCE
// {
// id INTEGER,
// field1 SEQUENCE OF SomeSet
// }
// SomeSet ::= SET
// {
// field2 INTEGER
// }
// END
//`
// data
// {
// "id":1,
// "field1": [{"field2": 1}, {"field2": 2}]
// }
// encode with http://asn1-playground.oss.com/
const encodedDer = "\x30\x0F\x80\x01\x01\xA1\x0A\x31\x03\x80\x01\x01\x31\x03\x80\x01\x02"
type SomeStruct struct {
ID int `asn1:"tag:0"`
SomeSetSlice []SomeSet `asn1:"tag:1,set"`
}
type SomeSet struct {
Field2 int `asn1:"tag:0"`
}
func main() {
var b SomeStruct
_, err := asn1.Unmarshal([]byte(encodedDer), &b)
if err != nil {
panic(err)
}
fmt.Println(b)
}
```
play link:
https://play.golang.org/p/V-za5Cu1wkr
### What did you expect to see?
SEQUENCE OF SET should be properly unmarshalled into a slice of structs. [Documentation](https://golang.org/pkg/encoding/asn1/#Unmarshal) says that SET can be unmarshalled into a struct. So I believe a SEQUENCE OF SET is expected to be properly unmarshalled into a slice of structs.
### What did you see instead?
asn1: structure error: sequence tag mismatch.
The problem might be in the [getUniversalTag](https://github.com/golang/go/blob/579768e0785f14032e3a971ad03f2deb33427e2d/src/encoding/asn1/common.go#L168) function when it is called from `parseSequenceOf`. E.g.
```golang
case reflect.Struct:
return false, TagSequence, true, true
```
When a struct is an element of a slice, there is no way to tag it as a SET (17); it is always tagged as a SEQUENCE (16).
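Until that is fixed, one possible workaround (a sketch; the type and function names here are made up, and it relies on the top-level `UnmarshalWithParams` honoring the `set` parameter, which the per-element path in `parseSequenceOf` does not) is to capture the SET elements as `asn1.RawValue` first and decode each one separately:

```go
package main

import (
	"encoding/asn1"
	"fmt"
)

// someSet mirrors the inner SET from the report.
type someSet struct {
	Field2 int `asn1:"tag:0"`
}

// rawStruct captures the SEQUENCE OF SET field as raw elements first,
// sidestepping parseSequenceOf's hard-coded SEQUENCE element tag.
type rawStruct struct {
	ID  int             `asn1:"tag:0"`
	Raw []asn1.RawValue `asn1:"tag:1"`
}

func decode(der []byte) (int, []someSet, error) {
	var r rawStruct
	if _, err := asn1.Unmarshal(der, &r); err != nil {
		return 0, nil, err
	}
	sets := make([]someSet, 0, len(r.Raw))
	for _, rv := range r.Raw {
		var s someSet
		// The top-level parse honors the "set" parameter.
		if _, err := asn1.UnmarshalWithParams(rv.FullBytes, &s, "set"); err != nil {
			return 0, nil, err
		}
		sets = append(sets, s)
	}
	return r.ID, sets, nil
}

func main() {
	der := []byte("\x30\x0F\x80\x01\x01\xA1\x0A\x31\x03\x80\x01\x01\x31\x03\x80\x01\x02")
	id, sets, err := decode(der)
	if err != nil {
		panic(err)
	}
	fmt.Println(id, sets)
}
```

This is more verbose than a plain `Unmarshal`, but it keeps the schema from the report intact on the wire.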
| NeedsInvestigation | low | Critical |
356,108,586 | rust | Show a suggestion after the compiler fails when you implement `from` and forget to mark as `pub` | Let's say we have a `main.rs` like this ...
```rust
use std::io::{self, Write};
use std::process::Command;
mod cmd;
mod error;
use cmd::Cmd;
use error::Error;
fn main() -> Result<(), io::Error> {
let stdin = io::stdin();
let mut stdout = io::stdout();
loop {
let mut line = String::new();
print!("> ");
stdout.flush()?;
stdin.read_line(&mut line)?;
match Cmd::from(&line) {
Ok(cmd) => {
let output = Command::new(cmd.binary).args(cmd.args).output()?;
print!("{}", String::from_utf8_lossy(&output.stdout));
}
Err(Error::NoBinary) => {}
}
}
}
```
an `error.rs` like this ...
```rust
#[derive(Debug)]
pub enum Error {
NoBinary,
}
```
and `cmd.rs` like this ...
```rust
use error::Error;
// A command consists of a binary and its arguments
pub struct Cmd<'a> {
pub binary: &'a str,
args: Vec<&'a str>,
}
impl<'a> Cmd<'a> {
// Extract the command and its arguments from the commandline
fn from(line: &'a str) -> Result<Self, Error> {
let mut parts = line.split_whitespace();
let binary = parts.nth(0).ok_or_else(|| Error::NoBinary)?;
let args = parts.collect();
Ok(Cmd { binary, args })
}
}
```
If you try to compile this you get the following error ...
```
error[E0308]: mismatched types
--> src/main.rs:20:25
|
20 | match Cmd::from(&line) {
| ^^^^^ expected struct `cmd::Cmd`, found reference
|
= note: expected type `cmd::Cmd<'_>`
found type `&std::string::String`
error[E0308]: mismatched types
--> src/main.rs:21:13
|
21 | Ok(cmd) => {
| ^^^^^^^ expected struct `cmd::Cmd`, found enum `std::result::Result`
|
= note: expected type `cmd::Cmd<'_>`
found type `std::result::Result<_, _>`
error[E0308]: mismatched types
--> src/main.rs:25:13
|
25 | Err(Error::NoBinary) => {}
| ^^^^^^^^^^^^^^^^^^^^ expected struct `cmd::Cmd`, found enum `std::result::Result`
|
= note: expected type `cmd::Cmd<'_>`
found type `std::result::Result<_, _>`
```
The code is wrong because the `from` fn is not `pub`. As soon as you mark it as `pub` everything works as expected.
The problem is that I'd expect a mention in the error report that the `from` considered by the compiler is not the one you're trying to call. It's just that you forgot to mark the fn as `pub`. | A-diagnostics,A-trait-system,A-visibility,T-compiler,C-feature-request,D-confusing,D-terse | low | Critical |
356,153,131 | flutter | Describe CupertinoColor provenance | For each entry, describe how it was derived.
At the top of the file, have an ordered list of preferred means of getting these values.
1. Apple design Sketch file
2. Third-party design Sketch file
3. Color picker from simulator
4. Eyeballing | framework,f: cupertino,d: api docs,P2,team-design,triaged-design | low | Minor |
356,188,758 | rust | improve error message when returning iterator with reference into stack frame | This example ([playground](https://play.rust-lang.org/?gist=dbf25e623505ebbb9a118b9155107fbc&version=stable&mode=debug&edition=2015)):
```rust
use std::rc::Rc;
fn iterate(data: Rc<Vec<u32>>) -> impl Iterator<Item = u32> {
data.iter().cloned()
}
fn main() {
for x in iterate(Rc::new(vec![1, 2, 3])) {
println!("Hi! {}", x);
}
}
```
gives this error:
```
error[E0597]: `data` would be dropped while still borrowed
--> src/main.rs:4:5
|
4 | data.iter().cloned()
| ^^^^ borrowed value does not live long enough
5 | }
| - borrowed value only lives until here
|
= note: borrowed value must be valid for the static lifetime...
```
I think we could do better. For example, we don't explain anything about how `impl Iterator` disallows lifetimes; and I think we can clarify *why* `data` is being dropped.
I don't yet have a concrete suggestion, though; I have to think about it.
Related to https://github.com/rust-lang/rust/issues/52534 -- in some sense a specific instance of that. | C-enhancement,A-diagnostics,T-compiler | low | Critical |
356,193,720 | opencv | Test: Fix build information from tests on Windows platform (MSBuild) | related #12067
CMake with MSBuild generator doesn't fill "Configuration" value.
Debug/Release build type is defined via compilation parameters.
```
Build type: N/A
WARNING: build value differs from runtime: Release
``` | bug,test | low | Critical |
356,237,623 | rust | Provide a split method which doesn't consume the element used to split | The [split](https://doc.rust-lang.org/std/primitive.slice.html#method.split) method consumes the element used to split, as you can see in the code below. But in some cases it would be nice to move the element to the "first or second slice" like the [split_at](https://doc.rust-lang.org/std/primitive.slice.html#method.split_at) method does, but to **n** slices.
The numpy library has a [method](https://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html) like that.
https://play.rust-lang.org/?gist=94dd413054604e4efffa96a5e8ed72b5&version=stable&mode=debug&edition=2015
**Code**
```rust
#[derive(Clone, Debug)]
struct Delta {
pub val: i32
}
fn main() {
let mut v = vec![Delta{ val: 1 }, Delta{ val: 5 }, Delta{ val: 1 }];
let new_v : Vec<Vec<Delta>> = v.split(|x| x.val > 4)
.into_iter()
.map(|x| x.to_vec())
.collect();
println!("{:?}", new_v);
}
```
**Output**
```
[[Delta { val: 1 }], [Delta { val: 1 }]]
```
**New Split Method1 Output**
```
[[Delta { val: 1 }, Delta { val: 5 }], [Delta { val: 1 }]]
```
**New Split Method2 Output**
```
[[Delta { val: 1 }], [Delta { val: 5 }, Delta { val: 1 }]]
``` | T-libs-api,C-feature-request | low | Critical |
356,238,032 | pytorch | A Caffe2 Implementation of Pose Estimation | Hi, I would like to provide a link to a caffe2 implementation of human pose estimation. The link is: https://github.com/eddieyi/caffe2-pose-estimation .
Thanks. | caffe2 | low | Minor |
356,241,046 | go | crypto/sha1: add native SHA1 instruction implementation for AMD64 | I've transliterated the [Linux kernel](https://github.com/torvalds/linux/blob/master/arch/x86/crypto/sha1_ni_asm.S) version of native SHA1 instructions to [Go's flavor of assembly](https://gist.githubusercontent.com/BenLubar/d11e801907892a6ee8d7ce1e0e4db144/raw/sha1block_amd64.s). The result is a [1.5x to 3x speed-up](https://gist.github.com/BenLubar/d11e801907892a6ee8d7ce1e0e4db144/raw/70a5cee937d0bfff056497718d924613f1f11e32/cmp.txt) on my Ryzen 5 1600. Could this be included in Go, similar to the AVX2 implementation that is already in crypto/sha1? | Performance,NeedsDecision | low | Major |
356,262,379 | vue-element-admin | Abnormal DOM structure in the left sidebar page - use fragment | <img width="1043" alt="2018-09-02 3 53 17" src="https://user-images.githubusercontent.com/23553358/44953799-0a590380-aecb-11e8-9fcc-f9d1b9d820d5.png">
| feature | low | Minor |
356,262,516 | javascript-algorithms | Suggestion: ship production ready build to npm | Hey,
I noticed that the current version deployed on npm (i.e. https://www.npmjs.com/package/javascript-algorithms-and-data-structures) is more or less a mirror of the source code here. `javascript-algorithms` is really awesome and someone might want to use it as an external library on production (e.g. a web app or node app). Unfortunately, the use of ES modules, complicates things a bit.
May I suggest:
1. Transpiling code to ES2015, using CommonJS instead of ES modules;
2. Removing redundant source code (e.g. comments, README, etc) to reduce package size.
I'd be more than willing to provide a PR using rollup + babel.
| enhancement | low | Major |
356,279,939 | TypeScript | Input source maps | ## Suggestion
Support input source maps, for the case where a TS file is generated programmatically from some other source file, in order to permit end-to-end mapping.
Such source maps could either be specified in the `.ts` file using the `//# SourceMappingURL=foo.ts.map` notation, searched for under the name `*.ts.map`, or found in-line in the `.ts` sources, etc., depending on various options.
The effect would be that the `*.js.map` file generated (or inlined) would be the equivalent of what is known as "applying" the input source map to the tsc-generated sourcemap. In other words, the sourcemap would map lines in the `*.js` file all the way back to the original file(s) from which the `*.ts` file was generated.
## Use Cases
This issue has come up in the context of some literate programming hacking I am doing. The programmer works on "literate programming documents" (actually, a flavor of Markdown), and then "tangles" (to use Knuth's terminology) that into a TS file.
My tangler generates the `*.ts.map` file. Currently, what I am doing is to provide an extra utility which the programmer can use to "apply" the map I create to the one created by `tsc` to get lines in the `*.js` all the way back to the literate programming document. However, this is a bit clumsy, and requires some setup. It would be nicer if `tsc` could take in source maps, apply them itself, and output `*.js.map` files reflecting that mapping.
## Examples
Currently, the user does something like the following:
modernlit --sourcemap foo.lit.md # creates foo.ts and foo.ts.map
tsc --sourcemap foo.ts # creates foo.js and foo.js.map
mlsourcemap foo.ts.map foo.js.map # rewrites foo.js.map by "applying" foo.ts.map
("modernlit" is the name I am currently using for my literate programming system.)
With the feature I am proposing, the call to `mlsourcemap` could be eliminated. `tsc` would find `foo.ts.map`, and perform the mapping, generating a `foo.js.map` which reflected it having been "applied" to the JS-to-TS sourcemap. This would avoid the user having to remember to call `mlsourcemap`, and avoid me having to write, maintain, and document it.
For information on "applying" (remapping) sourcemaps, as implemented in the Mozilla source-maps library, see [here](https://github.com/mozilla/source-map#sourcemapgeneratorprototypeapplysourcemapsourcemapconsumer-sourcefile-sourcemappath).
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,Awaiting More Feedback | low | Minor |
356,291,059 | rust | Unhelpful help for E0387 "consider changing this closure to take self by mutable reference" | I just got this error message (on current stable, beta, and the nightly mentioned below):
```plain
error[E0387]: cannot borrow data mutably in a captured outer variable in an `Fn` closure
--> src/main.rs:30:57
|
30 | .for_each(|filename| tally_words(filename, &mut words).unwrap());
| ^^^^^
|
help: consider changing this closure to take self by mutable reference
--> src/main.rs:30:19
|
30 | .for_each(|filename| tally_words(filename, &mut words).unwrap());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
As you can see it suggests to do no change at all:
```diff
- .for_each(|filename| tally_words(filename, &mut words).unwrap());
+ .for_each(|filename| tally_words(filename, &mut words).unwrap());
```
You can reproduce this by checking out <https://github.com/hello-rust/show/pull/45/commits/d495abe606d7f4111c7d2535bb6808c212817865> and running `cd episode/9/rust/pascal/; cargo build`.
<details>
<summary>Rust version</summary>
<pre><code>rustc 1.30.0-nightly (f8d34596f 2018-08-30)
binary: rustc
commit-hash: f8d34596ff74da91e0877212a9979cb9ca13eb7e
commit-date: 2018-08-30
host: x86_64-apple-darwin
release: 1.30.0-nightly
LLVM version: 7.0</code></pre>
</details> | C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-invalid-suggestion | low | Critical |
356,293,307 | go | cmd/compile: autotemps can make stack frame too large | When the order pass introduces temporaries, it always allocates them on the stack, even if they are too big for the stack.
```
package main
import "reflect"
func f() {
g(reflect.TypeOf([2000000000]*byte{}))
}
//go:noescape
func g(p reflect.Type)
```
We should use the same rules as escape analysis does to decide if we should put the temporary on the stack or the heap.
Before order:
```
. . . AS2-list
. . . . NAME-reflect.i a(true) l(1376) x(0) class(PAUTO) tc(1) addrtaken assigned used INTER-interface {}
. . . AS2-rlist
. . . . CONVIFACE l(6) esc(no) tc(1) implicit(true) INTER-interface {}
. . . . . ARRAYLIT l(6) esc(h) tc(1) ARRAY-[2000000000]*byte
```
After order:
```
. AS l(6) tc(1)
. . NAME-main..autotmp_5 a(true) l(6) x(0) class(PAUTO) esc(N) tc(1) assigned used ARRAY-[2000000000]*byte
. . ARRAYLIT l(6) esc(h) tc(1) ARRAY-[2000000000]*byte
. AS2 l(6) tc(1)
. AS2-list
. . NAME-reflect.i a(true) l(1376) x(0) class(PAUTO) tc(1) addrtaken assigned used INTER-interface {}
. AS2-rlist
. . CONVIFACE l(6) esc(no) tc(1) implicit(true) INTER-interface {}
. . . NAME-main..autotmp_5 a(true) l(6) x(0) class(PAUTO) esc(N) tc(1) assigned used ARRAY-[2000000000]*byte
. VARKILL l(6) tc(1)
. . NAME-main..autotmp_5 a(true) l(6) x(0) class(PAUTO) esc(N) tc(1) assigned used ARRAY-[2000000000]*byte
``` | NeedsFix,compiler/runtime | low | Major |
356,309,769 | flutter | FR: add .of() method to RenderBox | Currently we have to do this
`(context.findRenderObject() as RenderBox).size.height`
It would be nice to be able to do this:
`RenderObject.of(context).size.height`
Which matches the semantics of MediaQuery
`MediaQuery.of(context).size.height` | c: new feature,framework,P3,team-framework,triaged-framework | low | Minor |
356,310,070 | TypeScript | Type narrowing not working as expected in else | For the following code with `strictNullChecks` enabled:
```ts
interface Base {
a: {} | null;
b: {} | null;
}
function test(base: Base): void {
if (!base.a && !base.b) {
const result: null = null;
} else if (!base.a && base.b) {
const result: {} = base.b;
} else if (base.a && !base.b) {
const result: {} = base.a;
} else {
const result: {} = base.b;
}
}
```
I'm seeing this error on the last result:
```
Type '{} | null' is not assignable to type '{}'.
Type 'null' is not assignable to type '{}'.
```
I expect it to work as the else implies:
```
!(!base.a && !base.b) && !(!base.a && base.b) && !(base.a && !base.b)
= (base.a || base.b) && (base.a || !base.b) && (!base.a || base.b)
= (base.a || base.b) && (base.a) && (base.b)
note: !base.b and !base.a are canceled out as the expression cannot be true if one
of them is false
= base.a && base.b
```
[TS play link](http://www.typescriptlang.org/play/#src=interface%20Base%20%7B%0D%0A%20%20%20%20a%3A%20%7B%7D%20%7C%20null%3B%0D%0A%20%20%20%20b%3A%20%7B%7D%20%7C%20null%3B%0D%0A%7D%0D%0A%0D%0Afunction%20test(base%3A%20Base)%3A%20void%20%7B%0D%0A%20%20%20%20if%20(!base.a%20%26%26%20!base.b)%20%7B%0D%0A%20%20%20%20%20%20%20%20const%20result%3A%20null%20%3D%20null%3B%0D%0A%20%20%20%20%7D%20else%20if%20(!base.a%20%26%26%20base.b)%20%7B%0D%0A%20%20%20%20%20%20%20%20const%20result%3A%20%7B%7D%20%3D%20base.b%3B%0D%0A%20%20%20%20%7D%20else%20if%20(base.a%20%26%26%20!base.b)%20%7B%0D%0A%20%20%20%20%20%20%20%20const%20result%3A%20%7B%7D%20%3D%20base.a%3B%0D%0A%20%20%20%20%7D%20else%20%7B%0D%0A%20%20%20%20%20%20%20%20const%20result%3A%20%7B%7D%20%3D%20base.b%3B%0D%0A%20%20%20%20%7D%0D%0A%7D) | Suggestion,In Discussion | low | Critical |
356,328,836 | youtube-dl | [Viki] ffmpeg embed subtitles error | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.09.01*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.09.01**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
```
~/Downloads/youtube-dl --write-sub --sub-format srt --sub-lang en --embed-subs 'https://www.viki.com/videos/1026409v-my-love-from-the-star-episode-1' -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--write-sub', u'--sub-format', u'srt', u'--sub-lang', u'en', u'--embed-subs', u'https://www.viki.com/videos/1026409v-my-love-from-the-star-episode-1', u'-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.09.01
[debug] Python version 2.7.13 (CPython) - Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
[debug] exe versions: ffmpeg 3.2.7-1, ffprobe 3.2.7-1
[debug] Proxy map: {}
[viki] 1026409v: Downloading video JSON
[viki] 1026409v: Downloading video streams JSON
[debug] Default format spec: bestvideo+bestaudio/best
[info] Writing video subtitles to: My Love From the Star - Episode 1-1026409v.en.srt
[debug] Invoking downloader on u'https://v4.viki.io/1026409v/1026409v_high_480p_1708160136.mp4?e=1535939464&h=1b543edad20d7f05d2eee1ff5cd5cfec'
[download] Destination: My Love From the Star - Episode 1-1026409v.mp4
[download] 100% of 348.84MiB in 00:01
[ffmpeg] Embedding subtitles in 'My Love From the Star - Episode 1-1026409v.mp4'
[debug] ffmpeg command line: ffmpeg -y -i 'file:My Love From the Star - Episode 1-1026409v.mp4' -i 'file:My Love From the Star - Episode 1-1026409v.en.srt' -map 0 -c copy -map '-0:s' '-c:s' mov_text -map '1:0' '-metadata:s:s:0' 'language=eng' 'file:My Love From the Star - Episode 1-1026409v.temp.mp4'
ERROR: Last message repeated 1 times
Traceback (most recent call last):
File "/home33/megamind/Downloads/youtube-dl/youtube_dl/YoutubeDL.py", line 2047, in post_process
files_to_delete, info = pp.run(info)
File "/home33/megamind/Downloads/youtube-dl/youtube_dl/postprocessor/ffmpeg.py", line 393, in run
self.run_ffmpeg_multiple_files(input_files, temp_filename, opts)
File "/home33/megamind/Downloads/youtube-dl/youtube_dl/postprocessor/ffmpeg.py", line 204, in run_ffmpeg_multiple_files
raise FFmpegPostProcessorError(msg)
FFmpegPostProcessorError: Last message repeated 1 times
...
<end of log>
```
I'm trying to download a video from Viki. The download works perfectly, but when I tell it to embed subs it throws the above error. I tried multiple files and the same issue persists (the issue occurs with both vtt and srt file types).
I also use embed subs with Viu and it works great there; I'm only having issues with Viki.
| geo-restricted | low | Critical |
356,343,552 | bitcoin | rpc: Wrong `gettransaction` info for a coinjoin | Take a look at this transaction:
https://testnet.smartbit.com.au/tx/947ddb7d22b0c7e8edd3a56a92b8ec771aa8b18667ef508270f928b14821e924
I've double checked with `getaddressinfo` and only the first input is mine, and the second output is mine. So the transaction looks like
Inputs: [1.02 BTC mine, 1.1 BTC not-mine]
Outputs: [0.10481803 BTC not-mine, 2.01 BTC mine]
So clearly this transaction makes me 0.99 BTC richer. (-1.02 + 2.01)
So now let's look at what `gettransaction` returns:
```
{
"amount": -0.10481803,
"fee": 1.09481803,
"confirmations": 0,
"trusted": false,
"txid": "947ddb7d22b0c7e8edd3a56a92b8ec771aa8b18667ef508270f928b14821e924",
"walletconflicts": [
],
"time": 1535929790,
"timereceived": 1535929790,
"bip125-replaceable": "no",
"details": [
{
"address": "2N8qCRsKMJ3qNYo8LRb9kLpy3qiigAuedMT",
"category": "send",
"amount": -0.10481803,
"vout": 0,
"fee": 1.09481803,
"abandoned": false
},
{
"address": "2MzFcfPF56Jp6BC3FJk6yXwCY6FzhnbGt3o",
"category": "send",
"amount": -2.01000000,
"label": "",
"vout": 1,
"fee": 1.09481803,
"abandoned": false
},
{
"address": "2MzFcfPF56Jp6BC3FJk6yXwCY6FzhnbGt3o",
"category": "receive",
"amount": 2.01000000,
"label": "",
"vout": 1
}
],
"hex": "020000000001022eb1b5698216f56eea3770c949a5d9555dcd1062571cc3113cae68132a5ba5660100000017160014f67a19d36a915d7e0360d3a66d1f055c7bd34687feffffffe165f072311e71825b47a4797221d7ae56d4b40b7707c540049aee43302448a401000000171600142fa1c410210f8228467e53f450c9e4e56337895ffeffffff028bf09f000000000017a914aaf6ba68cf6a2369445f119596adb84412d28d3a874004fb0b0000000017a9144cdbced31bab78d8a248204a961a4ad475d6b48a87024730440220724dbbdc278315ed367036b163749702c7810730dc2b17ba3226f72c7d0078ab02207945ffb8ffc78bcefa207ac8e172983b5c3d91f64c3cb869cdcd288e6c1d85610121037426b52590172ce72df80829e4a9f7530224306399c399080fc917d19ae44a72024730440220168c025d48f3ee4d626d37a85c92a75e778649e966d8f008caeed688cb6da73902202e00fb70c80e0ba591a12a8feb067cf3216eb054ac5059d0f38ce805bbdd478701210213be1638b4620ccacf4e1bec9748c3fc27561aa7169dbd1639be00bb20236ab800000000"
}
```
Note the "fee" thing it has is pretty confused. The actual fee is: 0.00518197 BTC
Running on the 0.17 branch if it makes a difference. | RPC/REST/ZMQ | low | Major |
356,351,562 | youtube-dl | [Viki] doesn't return episode number or any of the series output template |
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.09.01*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.09.01**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Feature request (request for a new functionality)
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Question
- [ ] Other
---
```
~/Downloads/youtube-dl -o '%(episode_number)s.%(ext)s' 'https://www.viki.com/videos/1033597v-my-love-from-the-star-epilogue-episode-22' -v [debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-o', u'%(episode_number)s.%(ext)s', u'https://www.viki.com/videos/1033597v-my-love-from-the-star-epilogue-episode-22', u'-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.09.01
[debug] Python version 2.7.13 (CPython) - Linux-4.9.0-3-amd64-x86_64-with-debian-9.1
[debug] exe versions: ffmpeg 3.2.7-1, ffprobe 3.2.7-1
[debug] Proxy map: {}
[viki] 1033597v: Downloading video JSON
[viki] 1033597v: Downloading video streams JSON
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on u'https://v4.viki.io/1033597v/1033597v_high_480p_1403051005.mp4?e=1535950501&h=ccaabefd815378cd29ddd877e2ce3cb7'
[download] Destination: NA.mp4
[download] 100% of 12.95MiB in 00:00
```
---
As you can see instead of returning **Episode number** it returns **NA**
I did skim through the code once there was no mention on returning this or any of the filename commands mentioned in the homepage in the output template section
```
Available for the video that is an episode of some series or programme:
series (string): Title of the series or programme the video episode belongs to
season (string): Title of the season the video episode belongs to
season_number (numeric): Number of the season the video episode belongs to
season_id (string): Id of the season the video episode belongs to
episode (string): Title of the video episode
episode_number (numeric): Number of the video episode within a season
episode_id (string): Id of the video episode
```
Though I can read it and understand what's missing, unfortunately I'm not good enough to write the code to add this! | geo-restricted | low | Critical |
356,397,452 | pytorch | [Caffe2] How to correctly add Caffe2 libraries to Visual Studio to write C++ programs? | Hello all,
I successfully built Caffe2 from source on windows 10. Now, I would like to use caffe2 libraries in Visual Studio 2015 to write C++ programs. **How should I add the libraries and from what path in order to be able to include the libraries as below?**
```
#include "caffe2/core/common.h"
#include "caffe2/utils/proto_utils.h"
#include "caffe2/core/workspace.h"
#include "caffe2/core/tensor.h"
#include "caffe2/core/init.h"
```
Should I add only these libraries?

I followed this guide https://caffe2.ai/docs/cplusplus_tutorial.html . I added the following directories in Visual Studio:
D:\Yeverino\git_projects\pytorch\build\lib\Release
D:\Yeverino\git_projects\pytorch\build\caffe2\proto\Caffe2_PROTO.dir\Release
D:\Yeverino\git_projects\pytorch\caffe2\cuda_rtc\core
D:\Yeverino\git_projects\pytorch\caffe2\proto
D:\Yeverino\git_projects\pytorch\build
D:\Yeverino\git_projects\pytorch\build\caffe2

However, I got errors like below when I tried to include #include <blob.h> in a project:
`Error C1083 Cannot open include file: 'caffe2/core/common.h': No such file or directory ConsoleApplication1 D:\Yeverino\git_projects\pytorch\caffe2\cuda_rtc\core\blob.h 11 `
Once I changed `#include "caffe2/core/blob_serializer_base.h"` to `#include "blob_serializer_base.h"`, the error was fixed, and other similar errors were detected due to the prefix "caffe2/$package/" in the includes. I really want to avoid renaming all the includes.
| caffe2 | low | Critical |
356,488,574 | TypeScript | Type the tagName string property of Element derivations to aid overloading and for discriminated unions | ## Search Terms
tagName, nodeName
## Suggestion
Typing the tagName property in lib.dom.d.ts for Element and derivations ( or only concrete derivations and a discriminated type alias of these )
e.g.

```ts
interface SVGGElement extends SVGGraphicsElement {
    // ...existing definition
    tagName: "g";
}
```
## Use Cases
Simpler to write overloads, overloaded by a single argument type (order does not matter)
Discriminated unions
## Examples
```ts
interface SVGCircleElement {
    tagName: "circle";
}
interface SVGGElement {
    tagName: "g";
}
interface SVGRectElement {
    tagName: "rect";
}
```

Without the above, order matters with overloads:

```ts
adopt(node: SVGGElement): G; // issue with this before the circle overload
adopt(node: SVGCircleElement): Circle;
```

With the above and

```ts
type SVGTestUnionType = SVGCircleElement | SVGRectElement;

function test(circleOrG: SVGTestUnionType) {
    switch (circleOrG.tagName) {
        case "circle":
            // circle members available
            break;
        case "rect":
            // rect members available
            break;
    }
}
```
## Checklist
My suggestion meets these guidelines: YES
* [ ] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion | low | Minor |
356,530,949 | pytorch | [pytorch][feature request] Cosine distance / simialrity between samples of own tensor or two tensors | ## Issue description
This issue came about when trying to find the cosine similarity between samples in two different tensors. To my surprise `F.cosine_similarity` performs cosine similarity between pairs of tensors with the same index across a certain dimension. I was expecting something like:
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html
that is, a cosine similarity measure across all pairs of samples.
I can't really see the use cases of the current implementation of `F.cosine_similarity`, since it is the special case of getting the diagonal of what I describe, or of using `F.pairwise_distance` with an extra normalize parameter. Perhaps it would be nice to know what the use cases for the current implementation are.
In order to maintain compatibility, I suggest creating an `F.cosine_distance` function and layer similar to:
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.cosine.html
which operates in tensors, similar to the sklearn implementation
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_distances.html
I don't think this is difficult to implement btw
## Code example
```
def cosine_distance(x1, x2=None, eps=1e-8):
x2 = x1 if x2 is None else x2
w1 = x1.norm(p=2, dim=1, keepdim=True)
w2 = w1 if x2 is x1 else x2.norm(p=2, dim=1, keepdim=True)
return 1 - torch.mm(x1, x2.t()) / (w1 * w2.t()).clamp(min=eps)
```
Example:
```
>>> m1 = torch.tensor([[0, 1],
[0, 1]], dtype=torch.float64)
>>> cosine_distance(m1, m1)
tensor([[0., 0.],
[0., 0.]], dtype=torch.float64)
>>> cosine_distance(m1)
tensor([[0., 0.],
[0., 0.]], dtype=torch.float64)
>>> m2 = torch.tensor([[1, 0],
[0, 1]], dtype=torch.float64)
>>> cosine_distance(m1, m2)
tensor([[1., 0.],
[1., 0.]], dtype=torch.float64)
>>> m3 = torch.tensor([[0, 0],
[0, 0]], dtype=torch.float64)
>>> cosine_distance(m3)
tensor([[1., 1.],
[1., 1.]], dtype=torch.float64)
>>> cosine_distance(m3, m3)
tensor([[1., 1.],
[1., 1.]], dtype=torch.float64)
```
Creating the layer from here is straightforward. If you think this might be a good idea, I would like to make a PR
cc @albanD @mruberry @rgommers @heitorschueroff | module: nn,triaged,module: numpy,function request,module: distance functions | medium | Critical |
356,568,507 | rust | Incremental compilation fails when a generic function uses a private symbol | The way that incremental compilation works breaks code that uses private symbols. For example, the following code will fail to compile when using incremental compilation (e.g. the default `cargo build`), but will build successfully when incremental compilation is disabled (e.g. the default `cargo build --release`):
```rust
// I have reproduced this issue for the following targets:
// mips-unknown-linux-gnu
// x86_64-unknown-linux-gnu
// x86_64-apple-darwin
// aarch64-apple-ios
fn foo<T>() {
// See m:<mangling> at https://llvm.org/docs/LangRef.html#data-layout
// This is only reproducible when using ELF, Mips, or Mach-O mangling.
#[cfg_attr(all(target_os = "linux", not(target_arch = "mips")),
export_name = "\x01.L_this_is_a_private_symbol")]
#[cfg_attr(any(target_os = "macos", target_os = "ios"),
export_name = "\x01L_this_is_a_private_symbol")]
#[cfg_attr(target_arch = "mips", export_name = "\x01$_this_is_a_private_symbol")]
static X: usize = 0;
// Use the static symbol in a way that prevents the optimizer from
// optimizing it away.
unsafe { std::ptr::read_volatile(&X as *const _); }
}
fn main() {
foo::<usize>();
}
```
Output (for both `cargo build` and `cargo build --release`):
```
$ cargo build
Compiling zzz v0.1.0 (file:///private/tmp/zzz)
LLVM ERROR: unsupported relocation of undefined symbol 'L_this_is_a_private_symbol'
error: Could not compile `zzz`.
To learn more, run the command again with --verbose.
$ cargo build --release
Compiling zzz v0.1.0 (file:///private/tmp/zzz)
Finished release [optimized] target(s) in 0.26s
```
I have reproduced this issue for the following targets:
- `mips-unknown-linux-gnu`
- `x86_64-unknown-linux-gnu`
- `x86_64-apple-darwin`
- `aarch64-apple-ios`
I'm interested in fixing this issue, and have been trying to better understand [`librustc_mir/monomorphize/partitioning.rs`](https://github.com/rust-lang/rust/blob/master/src/librustc_mir/monomorphize/partitioning.rs) (which I assume is where the problem is). I'll update this issue if I've made any progress, but I would appreciate any guidance anyone might be able to offer.
These private symbols are useful for low-level FFI work (e.g. the linker on macOS/iOS will deduplicate selector names only if they have a private symbol name). | A-linkage,T-compiler,C-bug | low | Critical |
356,610,294 | godot | Label clip_text clips content | **Godot version:** 3.1alpha 19d5789
**Issue description:** A label with `clip_text` set to true clips its children. I would expect `clip_text` to only clip the text itself, and `Control.clip_content` to be the one responsible for children visibility. Here's an example with a color_rect as the label's child (drawing below parent), `clip_text` enabled on the right.

edit: Just realized `clip_text` is subject to removal as per #16863. Should I close this issue? | bug,confirmed,topic:gui | low | Minor |
356,611,799 | rust | disambiguate between multiple suggestions and a single multi-span suggestion; or, JSON error format is not round-trippable | ## Summary
To solve rust-lang-nursery/rustfix#141 (and one can only assume that RLS faces a similar problem), we need to be able to tell the difference between multiple suggestions (of which we likely only want to apply one), and a single suggestion that happens to touch multiple spans. The `DiagnosticBuilder` API supports this distinction by offering separate [`span_suggestions`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_errors/struct.Diagnostic.html#method.span_suggestions) and [`multipart_suggestion`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_errors/struct.Diagnostic.html#method.multipart_suggestion) methods. However, it looks like the actual JSON output conflates these two cases?! (I _hope_ I've simply misunderstood something; @estebank @oli-obk @killercup @nrc, please tell me I'm just being stupid and wrong and confused here; the embarrassment of that would be less bad than the headache of having to fix this.)
## Diagnosis
The relevant layout of fields is this: `Diagnostic` [has a vec of many](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/librustc_errors/diagnostic.rs#L28) `CodeSuggestion`s, a `CodeSuggestion` [has a vec of many](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/librustc_errors/lib.rs#L99) `Substitution`s, and a `Substitution` [has a vec of many](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/librustc_errors/lib.rs#L113) `SubstitutionPart`s.
[`span_suggestions` pushes](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/librustc_errors/diagnostic.rs#L303-L308) one `CodeSuggestion` with multiple `Substitution`s (each of which has a single `SubstitutionPart`) onto an existing diagnostic-builder. (So, arguably either the method name `span_suggestions` or the field name `suggestions` is a misnomer—it's the "substitutions" that are plural here, not the "suggestions"; you'd have to call `.span_suggestion` multiple times to get multiple elements in `suggestions`. But leave that aside for the moment.)
[`multipart_suggestion` pushes](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/librustc_errors/diagnostic.rs#L286-L292) one `CodeSuggestion` with one `Substitution` with multiple `SubstitutionParts` onto an existing diagnostic-builder.
All this is fine. The problem comes when we serialize diagnostics to JSON for `--error-format json`. The [serialized diagnostic format](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/libsyntax/json.rs#L97-L109) contains a `children` field whose elements are also serialized diagnostics (with no children themselves). The suggestions [are converted](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/libsyntax/json.rs#L175-L184) and [included as "children"](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/libsyntax/json.rs#L212-L214), but in doing so, we [flat-map all the substitution parts together](https://github.com/rust-lang/rust/blob/cd5c26f0eb48c8f32ea86e9f2434d905ff2cfc74/src/libsyntax/json.rs#L323-L338), making it impossible to know with certainty which parts came from the same `Substitution`.
## Concrete examples
The following program (taken [from the rustfix test suite](https://github.com/rust-lang-nursery/rustfix/blob/ef816e0688f31b0613fdded80f474d30cc5ee46e/tests/edge-cases/skip-multi-option-lints.rs), but let's call it `ambiguous-display.rs` here) fails to compile because `Display` is not in scope. (Actually, it'll still fail after fixing that, but that doesn't matter for our purpose here.)
```rust
fn main() {
let xs = vec![String::from("foo")];
let d: &Display = &xs;
println!("{}", d);
}
```
We get two mutually-exclusive suggestions to use `std::fmt::Display` and `std::path::Display`, [issued in librustc_resolve](https://github.com/rust-lang/rust/blob/ee73f80dc963707df3b3da82976556d64cac5752/src/librustc_resolve/lib.rs#L4771-L4785).
The terminal error output is:
```
error[E0412]: cannot find type `Display` in this scope
--> ambiguous-display.rs:3:13
|
3 | let d: &Display = &xs;
| ^^^^^^^ not found in this scope
help: possible candidates are found in other modules, you can import them into scope
|
1 | use std::fmt::Display;
|
1 | use std::path::Display;
```
The JSON error format is:
```json
{
"message": "cannot find type `Display` in this scope",
"code": {
"code": "E0412",
"explanation": "\nThe type name used is not in scope.\n\nErroneous code examples:\n\n```compile_fail,E0412\nimpl Something {} // error: type name `Something` is not in scope\n\n// or:\n\ntrait Foo {\n fn bar(N); // error: type name `N` is not in scope\n}\n\n// or:\n\nfn foo(x: T) {} // type name `T` is not in scope\n```\n\nTo fix this error, please verify you didn't misspell the type name, you did\ndeclare it or imported it into the scope. Examples:\n\n```\nstruct Something;\n\nimpl Something {} // ok!\n\n// or:\n\ntrait Foo {\n type N;\n\n fn bar(_: Self::N); // ok!\n}\n\n// or:\n\nfn foo<T>(x: T) {} // ok!\n```\n\nAnother case that causes this error is when a type is imported into a parent\nmodule. To fix this, you can follow the suggestion and use File directly or\n`use super::File;` which will import the types from the parent namespace. An\nexample that causes this error is below:\n\n```compile_fail,E0412\nuse std::fs::File;\n\nmod foo {\n fn some_function(f: File) {}\n}\n```\n\n```\nuse std::fs::File;\n\nmod foo {\n // either\n use super::File;\n // or\n // use std::fs::File;\n fn foo(f: File) {}\n}\n# fn main() {} // don't insert it for us; that'll break imports\n```\n"
},
"level": "error",
"spans": [
{
"file_name": "ambiguous-display.rs",
"byte_start": 64,
"byte_end": 71,
"line_start": 3,
"line_end": 3,
"column_start": 13,
"column_end": 20,
"is_primary": true,
"text": [
{
"text": " let d: &Display = &xs;",
"highlight_start": 13,
"highlight_end": 20
}
],
"label": "not found in this scope",
"suggested_replacement": null,
"suggestion_applicability": null,
"expansion": null
}
],
"children": [
{
"message": "possible candidates are found in other modules, you can import them into scope",
"code": null,
"level": "help",
"spans": [
{
"file_name": "ambiguous-display.rs",
"byte_start": 0,
"byte_end": 0,
"line_start": 1,
"line_end": 1,
"column_start": 1,
"column_end": 1,
"is_primary": true,
"text": [
{
"text": "fn main() {",
"highlight_start": 1,
"highlight_end": 1
}
],
"label": null,
"suggested_replacement": "use std::fmt::Display;\n\n",
"suggestion_applicability": "Unspecified",
"expansion": null
},
{
"file_name": "ambiguous-display.rs",
"byte_start": 0,
"byte_end": 0,
"line_start": 1,
"line_end": 1,
"column_start": 1,
"column_end": 1,
"is_primary": true,
"text": [
{
"text": "fn main() {",
"highlight_start": 1,
"highlight_end": 1
}
],
"label": null,
"suggested_replacement": "use std::path::Display;\n\n",
"suggestion_applicability": "Unspecified",
"expansion": null
}
],
"children": [],
"rendered": null
}
],
"rendered": "error[E0412]: cannot find type `Display` in this scope\n --> ambiguous-display.rs:3:13\n |\n3 | let d: &Display = &xs;\n | ^^^^^^^ not found in this scope\nhelp: possible candidates are found in other modules, you can import them into scope\n |\n1 | use std::fmt::Display;\n |\n1 | use std::path::Display;\n |\n\n"
}
```
The following program (`dot-dot-not-last.rs`) will fail to compile because `..` can only appear last in a struct pattern.
```rust
struct Point { x: isize, y: isize }
fn main() {
let p = Point { x: 1, y: 2 };
let Point { .., y, } = p;
}
```
We get one unique suggestion that needs to touch multiple spans (removing the `..` from its original location and inserting it at the end), issued [in the parser](https://github.com/rust-lang/rust/blob/ee73f80dc963707df3b3da82976556d64cac5752/src/libsyntax/parse/parser.rs#L3904-L3910).
The terminal error output is:
```
error: expected `}`, found `,`
--> dot-dot-not-last.rs:5:19
|
5 | let Point { .., y, } = p;
| --^
| | |
| | expected `}`
| `..` must be at the end and cannot have a trailing comma
help: move the `..` to the end of the field list
|
5 | let Point { y, .. } = p;
| -- ^^^^
```
The JSON error output is:
```json
{
"message": "expected `}`, found `,`",
"code": null,
"level": "error",
"spans": [
{
"file_name": "dot-dot-not-last.rs",
"byte_start": 101,
"byte_end": 102,
"line_start": 5,
"line_end": 5,
"column_start": 19,
"column_end": 20,
"is_primary": true,
"text": [
{
"text": " let Point { .., y, } = p;",
"highlight_start": 19,
"highlight_end": 20
}
],
"label": "expected `}`",
"suggested_replacement": null,
"suggestion_applicability": null,
"expansion": null
},
{
"file_name": "dot-dot-not-last.rs",
"byte_start": 99,
"byte_end": 102,
"line_start": 5,
"line_end": 5,
"column_start": 17,
"column_end": 20,
"is_primary": false,
"text": [
{
"text": " let Point { .., y, } = p;",
"highlight_start": 17,
"highlight_end": 20
}
],
"label": "`..` must be at the end and cannot have a trailing comma",
"suggested_replacement": null,
"suggestion_applicability": null,
"expansion": null
}
],
"children": [
{
"message": "move the `..` to the end of the field list",
"code": null,
"level": "help",
"spans": [
{
"file_name": "dot-dot-not-last.rs",
"byte_start": 99,
"byte_end": 102,
"line_start": 5,
"line_end": 5,
"column_start": 17,
"column_end": 20,
"is_primary": true,
"text": [
{
"text": " let Point { .., y, } = p;",
"highlight_start": 17,
"highlight_end": 20
}
],
"label": null,
"suggested_replacement": "",
"suggestion_applicability": "Unspecified",
"expansion": null
},
{
"file_name": "dot-dot-not-last.rs",
"byte_start": 106,
"byte_end": 107,
"line_start": 5,
"line_end": 5,
"column_start": 24,
"column_end": 25,
"is_primary": true,
"text": [
{
"text": " let Point { .., y, } = p;",
"highlight_start": 24,
"highlight_end": 25
}
],
"label": null,
"suggested_replacement": ".. }",
"suggestion_applicability": "Unspecified",
"expansion": null
}
],
"children": [],
"rendered": null
}
],
"rendered": "error: expected `}`, found `,`\n --> dot-dot-not-last.rs:5:19\n |\n5 | let Point { .., y, } = p;\n | --^\n | | |\n | | expected `}`\n | `..` must be at the end and cannot have a trailing comma\nhelp: move the `..` to the end of the field list\n |\n5 | let Point { y, .. } = p;\n | -- ^^^^\n\n"
}
```
We'd want Rustfix (and similar tools) to apply _both_ of the suggestions in `children[0].spans` in the case of dot-dot-not-last.rs, but only one of them (offering a choice in an interactive mode) for ambiguous-display.rs. But how is Rustfix supposed to reliably tell the difference? (In the specific case of `span_suggestions`, you can check that the start and end spans are the same in `children[0].spans`, but I'd really rather not _rely_ on that to merely infer something that the format, I argue, _should_ state with certainty.)
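For illustration, a rough Python sketch (not actual rustfix code) of the fragile inference described above: treat a help child as a set of mutually exclusive alternatives only when every span covers the exact same byte range. This is precisely the guesswork the format should make unnecessary.

```python
def classify(spans):
    # Guess whether a "help" child's spans are N alternative suggestions
    # (apply exactly one) or one multi-span suggestion (apply all parts).
    ranges = {(s["byte_start"], s["byte_end"]) for s in spans}
    if len(spans) > 1 and len(ranges) == 1:
        return "alternatives"  # e.g. the two `use ...::Display;` imports
    return "multipart"         # e.g. deleting `..` and re-inserting it

ambiguous_display = [
    {"byte_start": 0, "byte_end": 0, "suggested_replacement": "use std::fmt::Display;\n\n"},
    {"byte_start": 0, "byte_end": 0, "suggested_replacement": "use std::path::Display;\n\n"},
]
dot_dot_not_last = [
    {"byte_start": 99, "byte_end": 102, "suggested_replacement": ""},
    {"byte_start": 106, "byte_end": 107, "suggested_replacement": ".. }"},
]
print(classify(ambiguous_display))  # alternatives
print(classify(dot_dot_not_last))   # multipart
```

The heuristic already misfires in principle: a legitimate multipart suggestion could touch the same byte range more than once, and nothing in the JSON rules that out.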
This issue should likely receive the A-diagnostics and T-dev-tools (and, one might argue, P-high) labels.
## Current proposed resolution
This issue is currently proposed to be left as a future improvement; [see this comment for the current status](https://github.com/rust-lang/rust/issues/53934#issuecomment-831396123)
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"yerke"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | C-cleanup,A-diagnostics,E-mentor,P-high,T-compiler,E-medium,WG-diagnostics,D-diagnostic-infra | medium | Critical |
356,652,934 | godot | Canvas Modulate should use light mask | Canvas modulate can be thought of as a light that applies uniformly to everything in 2D. It would be helpful if you could apply it to some nodes but not others, based on the light mask. | enhancement,topic:rendering | low | Minor |
356,687,181 | go | x/mobile: application freeze after resumed from suspend state on iOS | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.8.1 darwin/amd64
### Does this issue reproduce with the latest release?
YES
### What did you do?
Git clone the [basic](https://godoc.org/golang.org/x/mobile/example/basic) example, change the lifecycle code to:
```
switch e.Crosses(lifecycle.StageAlive) {
case lifecycle.CrossOn:
glctx, _ = e.DrawContext.(gl.Context)
onStart(glctx)
a.Send(paint.Event{})
case lifecycle.CrossOff:
onStop(glctx)
glctx = nil
}
```
The original code uses `switch e.Crosses(lifecycle.StageVisible)` to manage the lifecycle, but that results in creating/destroying GL resources every time the application resumes/pauses. In a practical app we should use `e.Crosses(lifecycle.StageAlive)`, but this causes the application to freeze when it has just resumed from the suspended state on iOS. I have written a native iOS app with Obj-C/GLKView and it doesn't have this problem, so it must be a bug in gomobile.
(PS: on Android the GL context is simply lost; I have created another [issue](https://github.com/golang/go/issues/26962) for Android.)
### What did you expect to see?
The example doesn't freeze after resuming from the suspended state on iOS.
### What did you see instead?
The example froze.
| mobile | low | Critical |
356,705,774 | angular | Pseudo-events do not work properly for different input languages | ## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
I'm creating an event listener using Angular's pseudo-event:
`@HostListener('window:keyup.control.q', ['$event'])`
But it only works when the user's keyboard input language is set to English; in all other cases it does not work.
## Expected behavior
It is expected that pseudo-events would use `event.code`, which does not depend on the user's input language. For example, with the Russian layout selected, `event.key === 'й'` but `event.code === 'KeyQ'`, so I would still be able to handle hotkeys properly.
## Minimal reproduction of the problem with instructions
https://stackblitz.com/edit/angular-gitter-u3nhhd
## What is the motivation / use case for changing the behavior?
I18n support
## Environment
<pre><code>
Angular version: 6.0.2
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [x] Chrome (desktop) version 68
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [x] Firefox version 61
- [x] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: XX <!-- run `node --version` -->
- Platform: <!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| type: bug/fix,freq2: medium,area: core,core: event listeners,P3 | low | Critical |
356,712,219 | go | proposal: crypto/tls: add support for AES-CCM | Hi! I am working on a project that requires the `AES-CCM` cipher suite within TLS. I know that `crypto/tls` aims to support a limited safe subset of TLS. But since TLS 1.3 will only support the following cipher suites:
TLS_AES_128_GCM_SHA256
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_AES_128_CCM_SHA256
TLS_AES_128_CCM_8_SHA256
Reducing that list from 5 to only 3 choices seems pretty unfair.
I've seen some working golang `AES-CCM` implementations around github. Is there any specific reason why this cipher suite is not included?
Thanks!
Update:
Another option could be to port the code from BoringSSL: https://github.com/google/boringssl/blob/master/crypto/cipher_extra/e_aesccm.c
/cc @FiloSottile @agl
| Proposal,Proposal-Hold,Proposal-Crypto | medium | Critical |
356,786,759 | rust | Regression on nightly since LLVM 8 upgrade: `thread` sanitizer doesn't compile anymore | Hi, the [fuzzer](https://crates.io/crates/honggfuzz) I maintain is failing to build on the latest nightlies:
The interesting part of the error log seems to be:
```
note: /usr/bin/ld: __sancov_guards has both ordered [`__sancov_guards' in /home/travis/build/rust-fuzz/honggfuzz-rs/example/hfuzz_target/x86_64-unknown-linux-gnu/release/deps/example-934b6185f6a63e31.example.ajs5tmgw-cgu.0.rcgu.o] and unordered [`__sancov_guards' in /home/travis/build/rust-fuzz/honggfuzz-rs/example/hfuzz_target/x86_64-unknown-linux-gnu/release/deps/example-934b6185f6a63e31.example.ajs5tmgw-cgu.0.rcgu.o] sections
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
```
You can find the full log here:
https://travis-ci.org/rust-fuzz/honggfuzz-rs/jobs/424079778
I bisected on my computer the exact rust version that fails and it seems to be related to the LLVM 8 upgrade.
This version works well:
```
# rustup default nightly-2018-09-01
# rustc -vV
rustc 1.30.0-nightly (aaa170beb 2018-08-31)
binary: rustc
commit-hash: aaa170bebe31d03e2eea14e8cb06dc2e8891216b
commit-date: 2018-08-31
host: x86_64-unknown-linux-gnu
release: 1.30.0-nightly
LLVM version: 7.0
```
This version doesn't:
```
# rustup default nightly-2018-09-02
# rustc -vV Tue 04 Sep 2018 02:25:07 PM CEST
rustc 1.30.0-nightly (28bcffead 2018-09-01)
binary: rustc
commit-hash: 28bcffead74d5e17c6cb1f7de432e37f93a6b50c
commit-date: 2018-09-01
host: x86_64-unknown-linux-gnu
release: 1.30.0-nightly
LLVM version: 8.0
``` | A-LLVM,T-compiler,A-sanitizers | medium | Critical |
356,791,460 | pytorch | Ubuntu 16.04 setup.py error - undefined reference to elfLink_Get_FatBinary_From_Object' /usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to elf32_section_header' | **Editorial note:** Hi! Are you on this issue because of an 'undefined reference' error? Please do the following steps before replying "me too":
1. Check that your missing symbol actually is the same as the one here. If it is, please add your environment configuration to this ticket.
2. Check if your missing symbol shows up on the list here https://github.com/pytorch/pytorch/labels/topic%3A%20undefined%20reference ; if so, please file a relevant bug
3. If it is not, please file a new bug for your issue.
The most common me too's on this issue are:
* Build error: Undefined symbols for architecture x86_64: "_MPI_Comm_rank" on OS X (please go to #7065)
* Cannot build libtorch: SLEEF does not allow in-source builds #17181
----
Hello, I have been trying to install pytorch for roughly a week now, and I created a dual boot just for that purpose. The setup.py tool runs until it gets to 100% and then I get the following error. If anybody has any recommendations, I would be appreciate a ton!
```
$ python setup.py install
...
[100%] Linking CXX executable ../../bin/test_api
/usr/bin/ld: warning: libnvidia-fatbinaryloader.so.396.44, needed by /usr/lib/x86_64-linux-gnu/libcuda.so, not found (try using -rpath or -rpath-link)
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Get_FatBinary_From_Object'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_section_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `gpuInfoRunsOn'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf_end'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_shnum'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `fatBinaryCtl'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_symbol_shndx'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Free_Fatbinary'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_string_at_offset'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_symbol_shndx'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `fatBinaryCtl_Compile'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_file_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `fatBinaryCtl_PickCandidate'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_string_at_offset'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Start'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_typed_section_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Finish'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `gpucompSetLogLine'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Next_Library_Member'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_section_name'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf_is_64bit'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf_size'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `fatBinaryCtl_Delete'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_symbol_name'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_shnum'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Add_Cubin'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_section_contents'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_section_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_named_section_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Delete'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Finish_Reading_Library'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_typed_section_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `gpucompRestoreLogLine'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_file_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Load_Host_Object'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Start_Reading_Library'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elfLink_Free_Host_Object'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf64_named_section_header'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_symbol_name'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `elf32_section_name'
/usr/lib/x86_64-linux-gnu/libcuda.so: undefined reference to `fatBinaryCtl_Create'
collect2: error: ld returned 1 exit status
caffe2/torch/CMakeFiles/test_api.dir/build.make:345: recipe for target 'bin/test_api' failed
make[2]: *** [bin/test_api] Error 1
CMakeFiles/Makefile2:3810: recipe for target 'caffe2/torch/CMakeFiles/test_api.dir/all' failed
make[1]: *** [caffe2/torch/CMakeFiles/test_api.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn nccl caffe2 libshm gloo THD c10d'
``` | module: build,triaged,module: undefined reference | medium | Critical |
356,795,574 | go | x/build/cmd/gopherbot: ignore versions in other paragraphs of backport requests | In https://github.com/golang/go/issues/27486#issuecomment-418350497, I attempted to set up a backport issue for Go 1.11. However, apparently because I mentioned “from 1.9 or 1.10” in the justification, `gopherbot` constructed a backport issue for 1.10 too.
I think we should make `gopherbot` ignore versions that are separated from the first version in the comment by any punctuation other than a comma. (However, we should be careful not to stop after a period that appears as part of a version string.)
So, for example:
`@gopherbot, p—— backport to Go 1.11: this is a regression from 1.9.` should backport to 1.11 only.
`@gopherbot, p—— backport to Go 1.11, and possibly 1.10: this is a bad runtime bug.` should backport to both 1.11 and 1.10.
(CC @FiloSottile ) | Builders,NeedsFix,FeatureRequest,Backport | low | Critical |
356,903,819 | TypeScript | Quick info for parameter redundantly shows @param tag | **TypeScript Version:** 3.1.0-dev.20180830
**Code**
```ts
/**
* @param p
* Once upon a time there was a parameter named p.
* P lived in a function called f.
* Then a big bad wolf came and blew f down.
*/
function f(p: number): void {}
```
**Expected behavior:**
Documentation for `p` is shown only once.
**Actual behavior:**
Documentation is shown twice -- once as the normal documentation (good), once as documentation for the `@param` tag, which is redundant since we took the documentation from there. This doubles the size of the quick info.

| Suggestion,Committed,Domain: Quick Info | low | Minor |
356,909,060 | javascript-algorithms | Add TimSort | Chrome 70's Array.prototype.sort uses the stable TimSort algorithm
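As a quick illustration of what stability buys (using Python, whose built-in sort is also Timsort): elements that compare equal keep their original relative order, so multi-key sorts can be built from successive single-key passes.

```python
records = [("b", 2), ("a", 1), ("b", 1), ("a", 2)]

# A stable sort by the letter only: ties keep their input order.
by_letter = sorted(records, key=lambda r: r[0])
print(by_letter)  # [('a', 1), ('a', 2), ('b', 2), ('b', 1)]

# Stability makes chained single-key passes equivalent to a multi-key sort.
by_number_then_letter = sorted(sorted(records, key=lambda r: r[1]),
                               key=lambda r: r[0])
print(by_number_then_letter)  # [('a', 1), ('a', 2), ('b', 1), ('b', 2)]
```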
https://twitter.com/mathias/status/1036626116654637057 | enhancement | medium | Minor |
356,911,243 | tensorflow | Optimizing slice of variable not possible | Applying the gradient of a variable slice currently results in a `NotImplemented` error of tf.train.Optimizer.
**The following two examples are working:**
```python
### WORKING ###
import tensorflow as tf
X = tf.Variable(2, dtype=tf.float32)
y = tf.constant(10, dtype="float32")
loss = y - (X*X)
variables=[X]
gradient = tf.gradients(loss, variables)
gradient = [(g, v) for g, v in zip(gradient, variables)]
train_op = tf.train.AdamOptimizer().apply_gradients(gradient)
```
```python
### WORKING ###
big_X = tf.Variable([2,3,4], dtype=tf.float32)
X = big_X[0]
y = tf.constant(10, dtype="float32")
loss = y - (X*X)
train_op = tf.train.AdamOptimizer().minimize(loss)
```
**The following example throws an error:**
```python
### NOT WORKING ###
big_X = tf.Variable([2,3,4], dtype=tf.float32)
X = big_X[0]
y = tf.constant(10, dtype="float32")
loss = y - (X*X)
variables=[X]
gradient = tf.gradients(loss, variables)
gradient = [(g, v) for g, v in zip(gradient, variables)]
train_op = tf.train.AdamOptimizer().apply_gradients(gradient)
```
The error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-22-10282dee2005>", line 10, in <module>
train_op = tf.train.AdamOptimizer().apply_gradients(gradient)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/optimizer.py", line 605, in apply_gradients
update_ops.append(processor.update_op(self, grad))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/optimizer.py", line 189, in update_op
raise NotImplementedError("Trying to update a Tensor ", self._v)
NotImplementedError: ('Trying to update a Tensor ', <tf.Tensor 'strided_slice_9:0' shape=() dtype=float32>)
```
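The usual workaround is to differentiate with respect to the whole variable (`big_X`, as the second working example effectively does via `minimize`) and restrict the update to the entry of interest. A framework-agnostic sketch of that idea in plain Python (not TensorFlow API; the numerical gradient is just for illustration):

```python
def loss(x):
    # Toy stand-in for the graph above; depends only on x[0].
    return (x[0] * x[0] - 10.0) ** 2

def grad(x, eps=1e-6):
    # Forward-difference gradient w.r.t. the *whole* variable.
    base = loss(x)
    g = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        g.append((loss(bumped) - base) / eps)
    return g

x = [2.0, 3.0, 4.0]        # the full variable, like big_X
for _ in range(200):
    g = grad(x)
    x[0] -= 0.01 * g[0]    # mask the update: only the slice changes

print(round(x[0] ** 2, 3))  # 10.0; x[1] and x[2] are untouched
```

In graph terms: pass `big_X` (not the sliced tensor) to `tf.gradients`/`apply_gradients`. The optimizer can only update objects that are actually `Variable`s, which is why the sliced `strided_slice` tensor is rejected with "Trying to update a Tensor".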
------------------------
### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: yes
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04
- **Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device**: NA
- **TensorFlow installed from (source or binary)**: binary
- **TensorFlow version (use command below)**: v1.10.1-0-g4dcfddc5d1 1.10.1
- **Python version**: 3.6.5
- **Bazel version (if compiling from source)**: NA
- **GCC/Compiler version (if compiling from source)**: NA
- **CUDA/cuDNN version**: NA
- **GPU model and memory**: NA
- **Exact command to reproduce**: NA | stat:awaiting tensorflower,type:bug,comp:ops,TF 2.11 | low | Critical |
356,956,836 | go | x/build/cmd/gopherbot: backport issues should retain topic labels of original | GopherBot created https://github.com/golang/go/issues/27498 from a source issue that had the labels “Documentation” and “modules”, but did not carry over those labels.
It makes sense for GopherBot to strip out status labels (such as NeedsFix), but I think it should retain the others.
(CC: @FiloSottile) | Builders,NeedsFix | low | Minor |