id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
381,260,075 | go | cmd/go: should help users understand when a new GOOS or GOARCH breaks their build | We ran into an interesting issue today in the Go newbies channel on the Slack workspace, specifically around a user's build breaking between Go v1.10.x and v1.11.x. I'm not sure if there's really a tenable solution here, but I thought it was worth starting a discussion with the Go authors.
This user upgraded from Go v1.10.x to Go v1.11.2 with no other changes to their code, and they were then met with build failures:
```
./swaguidist.go:15:38: undefined: swaggerUiBundleJs
./swaguidist.go:17:38: undefined: swaggerUiStandalonePresetJs
```
When the user tried to figure out what was wrong, they incorrectly asserted that because some of the files in the package were large the compiler was ignoring them. They reached out to the Slack workspace to see whether anyone was aware of such a limitation, but thankfully we happened to spot the files ending in `*_js.go`.
While it would be ideal for everyone to read the release notes in their entirety, I don't think we'll ever get to that place. It makes me wonder whether there are any options available to us that would make these things a little more approachable and actionable (especially to newbies), versus something like a variable-not-defined error. | NeedsDecision | low | Critical |
381,292,048 | go | testing: Benchmark reporting incorrect stats when running in parallel | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11 linux/amd64
</pre>
Applies to earlier versions as well
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build896591388=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Using the following test file:
```
package benchmarkperf
import (
"strconv"
"testing"
"time"
)
func BenchmarkSleepSerial(b *testing.B) {
for i := 0; i < b.N; i++ {
sleepFunc()
}
}
func BenchmarkSleepParallel(b *testing.B) {
for i := 1; i <= 8; i *= 2 {
b.Run(strconv.Itoa(i), func(b *testing.B) {
b.SetParallelism(i)
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
sleepFunc()
}
})
})
}
}
func sleepFunc() {
time.Sleep(1 * time.Millisecond)
}
```
Running in parallel, results were ~1ms / <parallelism> for `ns/op`. This means that doubling the thread count would halve ns/op.
### What did you expect to see?
ns/op to hold steady and not go below 1ms.
### What did you see instead?
When running with 128 goroutines, I got the following output:
```
BenchmarkSleepParallel/8-16 100000 12024 ns/op 0 B/op 0 allocs/op
```
This is reporting that the benchmark process is taking well under 1ms (which is the sleep duration above). | NeedsInvestigation | low | Critical |
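The numbers above are consistent with `RunParallel` dividing total wall-clock time by the total iteration count across all goroutines. A minimal sketch of that arithmetic (an illustrative model in TypeScript, not Go's actual implementation; all names are hypothetical):

```typescript
// Model: `goroutines` workers each run `opsPerGoroutine` iterations of a sleep
// concurrently, so wall time is opsPerGoroutine * sleepNs, while the reported
// ns/op divides that wall time by the *total* number of iterations.
function reportedNsPerOp(sleepNs: number, goroutines: number, opsPerGoroutine: number): number {
  const totalOps = goroutines * opsPerGoroutine;
  const wallTimeNs = opsPerGoroutine * sleepNs; // sleeps overlap across goroutines
  return wallTimeNs / totalOps;
}

// 1 ms sleep, 8 goroutines: reported ns/op is 1e6 / 8 = 125000, well under 1 ms.
console.log(reportedNsPerOp(1_000_000, 8, 1000)); // 125000
```

With 128 goroutines this model predicts roughly 7800 ns/op, in the same ballpark as the 12024 ns/op observed above (the rest being scheduling overhead).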
381,295,543 | godot | Viewport update mode lost when nested under ViewportContainer | **Godot version:**
3.0.6 (issue also present in 3.1 Alpha2)
**OS/device including version:**
macOS 10.13.6
**Issue description:**
When a `Viewport` is used with a `ViewportContainer`, its update mode (`render_target_update_mode` parameter) silently reverts back to "Always" whenever the project/scene is run or the scene tab is changed.
The culprit doesn't seem to be `Viewport` but `ViewportContainer`, as `Viewport`'s update mode sticks when used in other configurations, such as when set up to render to a `ViewportTexture` used by a `TextureRect`.
**Workarounds:**
Replacing ViewportContainer with a simple TextureRect+ViewportTexture combo avoids the issue.
**Why is this important if there's such a simple workaround?:**
This is a big deal, as one wouldn't notice the issue if a scene under a Viewport is set up as a static one that is configured to render only once (e.g. to convert a 3D model/scene to a 2D static image). A scene that is set up to render once would typically not be set up to change at all, and hence provide no visual indication to the developer that it is being re-rendered every frame, wasting CPU & GPU resources. (I personally didn't realize for quite a while that my many instances of a home-made Label3D class -- to overcome Godot's lack of a Label3D class at the moment -- were being rendered every frame even though I set up Label3D to render each instance once and only once whenever either the text changes or the scene that uses it asks the text to be rendered in a different resolution. The only reason I noticed this is because of the parameter change that I kept seeing in version control diffs.)
Considering the impact it can cause on a deployed project, ViewportContainer should either be removed, documented better, or -- in my opinion -- should be left in without any documentation changes, but it should not force Viewport's update mode back to "Always".
**Steps to reproduce:**
Nest a Viewport under a ViewportContainer, set its "Update Mode" to "Once", then run the scene or switch to another scene tab in the editor and then go back to the original scene. Note the change of "Update Mode" back to "Always".
**Minimal reproduction project:**
The attached project below has two scenes in it. Both scenes have a cube that continuously rotates, but is set up to render only once.
- `ViewportTexture.tscn` demonstrates the problem. When run, the expected behaviour is to see a stationary box, as the "Update Mode" was set to "Once" at the time the scene was created. But the cube rotates, and the Update Mode that is loaded is set to "Always". Set "Update Mode" to "Once" and run the scene again; note that it makes no difference, i.e. that you can still see the box rotate.
- `TextureRect.tscn` demonstrates a workaround to the problem where the Viewport is used to render to a ViewportTexture used by a `TextureRect`. In this case, the display doesn't update as the cube continues to rotate, as expected. I also added a little handler for the space bar on the keyboard to trigger a new snapshot (press space bar to take new snapshots of the cube).
[ViewportUpdateModeIssue.zip](https://github.com/godotengine/godot/files/2586540/ViewportUpdateModeIssue.zip)
| bug,topic:rendering,confirmed,topic:gui | medium | Major |
381,334,429 | three.js | skinning vertex shader possible optimization | ##### Description of the problem
https://github.com/KhronosGroup/glTF/issues/1397#issuecomment-408232715
offers an alternative implementation for the skinning vertex shader at
https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/skinning_vertex.glsl :
```glsl
vec4 skinVertex = bindMatrix * vec4( transformed, 1.0 );
vec4 skinned = vec4( 0.0 );
skinned += boneMatX * skinVertex * skinWeight.x;
skinned += boneMatY * skinVertex * skinWeight.y;
skinned += boneMatZ * skinVertex * skinWeight.z;
skinned += boneMatW * skinVertex * skinWeight.w;
transformed = ( bindMatrixInverse * skinned ).xyz;
```
alternative:
```glsl
vec4 skinVertex = bindMatrix * vec4( transformed, 1.0 );
mat4 skinMatrix = mat4( 0.0 );
skinMatrix += skinWeight.x * boneMatX;
skinMatrix += skinWeight.y * boneMatY;
skinMatrix += skinWeight.z * boneMatZ;
skinMatrix += skinWeight.w * boneMatW;
vec4 skinned = skinMatrix * skinVertex;
transformed = ( bindMatrixInverse * skinned ).xyz;
```
The alternative applies the weights to the matrices and then uses a single mat4 * vec4 multiplication to calculate the result of the skinning. It uses fewer GPU operations (9 vs. 12), though in terms of the resolved scalar multiplications and additions it would not offer advantages over the existing shader. But mat4 and vec4 operations are likely highly optimized on the GPU.
An additional advantage of the alternative is that the final skinMatrix could likely be reused by the skin normal shader fragment (or vice versa) (https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/skinnormal_vertex.glsl).
While the alternative shader code is tested successfully with all available Three skinning examples and glTF examples, I am, unfortunately, not in a position to do performance testing, across GPUs.
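The equivalence of the two shader variants rests on linearity: the weighted sum of transformed vertices equals the vertex transformed by the weighted sum of the matrices. A small numeric check of that identity (TypeScript with 2x2 matrices for brevity; purely illustrative, not the GLSL itself):

```typescript
// Checks sum_i(w_i * (M_i * v)) == (sum_i(w_i * M_i)) * v for 2x2 matrices.
type Mat2 = [number, number, number, number]; // row-major
type Vec2 = [number, number];

function mulMatVec(m: Mat2, v: Vec2): Vec2 {
  return [m[0] * v[0] + m[1] * v[1], m[2] * v[0] + m[3] * v[1]];
}
function addVec(a: Vec2, b: Vec2): Vec2 { return [a[0] + b[0], a[1] + b[1]]; }
function addMat(a: Mat2, b: Mat2): Mat2 { return a.map((x, i) => x + b[i]) as Mat2; }
function scaleMat(s: number, m: Mat2): Mat2 { return m.map(x => s * x) as Mat2; }
function scaleVec(s: number, v: Vec2): Vec2 { return [s * v[0], s * v[1]]; }

const v: Vec2 = [1, 2];
const boneMats: Mat2[] = [[1, 2, 3, 4], [0, 1, -1, 0]];
const skinWeights = [0.3, 0.7];

// Existing shader: transform with each bone matrix, weight, then sum.
const perVertex = skinWeights
  .map((w, i) => scaleVec(w, mulMatVec(boneMats[i], v)))
  .reduce(addVec);

// Alternative shader: blend the matrices first, then transform once.
const skinMatrix = skinWeights
  .map((w, i) => scaleMat(w, boneMats[i]))
  .reduce(addMat);
const perMatrix = mulMatVec(skinMatrix, v);

console.log(perVertex, perMatrix); // equal up to floating-point rounding
```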
##### Three.js version
- [x] Dev
##### Browser
- [x] All of them
##### OS
- [x] All of them
| Suggestion | low | Major |
381,354,521 | go | cmd/vet: add compilebench to track 'go vet std' | We should update compilebench to measure the cost of cmd/vet analysis.
```
go install cmd/vet
for i in 1 2 3 4 5
do
go clean -cache
time go vet std >/dev/null 2>&1 # clean build
time go vet std >/dev/null 2>&1 # noop incremental build
done
``` | ToolSpeed,NeedsInvestigation,Analysis | low | Major |
381,362,027 | godot | Viewports do not inherit project default rendering settings | **Godot version:**
Godot 3.1 Alpha 2
**OS/device including version:**
Windows 10 64 Bit
NVIDIA GTX 960M
Driver v416.16
**Issue description:**
Viewports do not inherit project default settings, requiring the user to manually set MSAA, Shadow Atlas, etc
**Minimal reproduction project:**
[GLES3.zip](https://github.com/godotengine/godot/files/2587236/GLES3.zip)
| enhancement,topic:core,topic:rendering,documentation | low | Major |
381,371,217 | TypeScript | instanceof narrowing should preserve generic types from super to child type | ## Search Terms
* instanceof generic
* narrow instanceof
## Suggestion
The following code has a rather unexpected behavior today:
```ts
class Parent<T> {
x: T;
}
class Child<S> extends Parent<S> {
y: S;
}
function example(obj: Parent<number>) {
if (obj instanceof Child) {
const child = obj;
}
}
```
The narrowed type that `child` gets is `Child<any>`.
I'm requesting that the type instead gets narrowed to `Child<number>`, which is what I originally assumed would happen.
My proposal is, in general, to infer the type arguments to the narrowed-to class type whenever they can be determined from the original type.
## Precise Behavior
Consider the following code today:
```ts
const x: P = ...;
if (x instanceof C) {
x;
}
```
Today, if `C` is a generic class, `x` gets the narrowed type `C<any, any, any, ...>`; otherwise it just gets the type `C`.
My proposal is to consider how the compiler handles an analogous piece of code today, and use that to inform the narrowing.
Imagine that `C` has a no-argument constructor. Then the following piece of code is valid today:
```ts
function onlyP(c: P) { ... }
onlyP(new C());
```
In order to make it valid, the compiler infers type arguments for `C` that make it a subtype of `P`. My proposal is to use the same strategy to infer the type arguments for `C` in an `instanceof` narrowing.
Consider the following examples today, and their corresponding `instanceof` narrowings:
```ts
function ex1(x: Parent<string>) { }
ex1(new Child()); // inferred arguments: <string>
function ex2(x: Parent<number | string>) { }
ex2(new Child()); // inferred arguments: <number | string>
function ex3(x: Parent<number> | string) { }
ex3(new Child()); // inferred arguments: <number>
function ex4(x: Parent<number> | Parent<string>) { }
ex4(new Child()); // inferred arguments: Child<number | string>
// Note: the above errors, because Child<number | string> actually fails to be a subtype of
// Parent<number> | Parent<string>.
// We can either choose to infer Child<any> in this case, or use the (incorrect, but more-precise)
// inference Child<number | string>.
```
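The call-position inference the proposal piggybacks on can be seen in isolation. A small sketch (same class names as above; the type-level claims live in comments, since they don't surface at runtime):

```typescript
class Parent<T> { x!: T; }
class Child<S> extends Parent<S> { y!: S; }

function takesParent(p: Parent<number>): void {}

// Here TypeScript already infers the type argument: `new Child()` is checked
// as Child<number> so that it is assignable to Parent<number>.
takesParent(new Child());

// The proposal asks for the same inference after `instanceof`, where today the
// narrowed type is Child<any>. At runtime the subtype relationship holds:
console.log(new Child() instanceof Parent); // true
```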
## Examples
The original use-case I had in mind was roughly the following:
```ts
abstract class Obtainer<T> {
__phantom: T = null as T;
}
abstract class Fetcher<T> extends Obtainer<T> {
public abstract fetch(): Promise<T>;
}
abstract class Dependency<T, D> extends Obtainer<T> {
public abstract dependencies(): Obtainer<D>;
public abstract compute(got: D): T;
}
async function obtain<T>(obtainer: Obtainer<T>): Promise<T> {
if (obtainer instanceof Fetcher) {
// obtainer: Fetcher<T>
// currently, it's a Fetcher<any>
return await obtainer.fetch();
}
if (obtainer instanceof Dependency) {
// obtainer: Dependency<T, any>
// currently, it's a Dependency<any, any>
const dependencies = obtainer.dependencies();
return obtainer.compute(await obtain(dependencies));
}
throw new Error("not implemented");
}
```
(note: there's still one extraneous `any` in the above, since the `D` parameter cannot be inferred)
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* This *is* a breaking change. However, I *think* it could only really break code that was already broken. If really necessary, it could be put behind a new `--strictGenericNarrowing` flag.
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* No change to runtime; just inferred generic types.
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
* I think this is an easy win for a more sound type system without any negative impact
Also, I am interested in contributing this change if it's approved! | Suggestion,In Discussion,Domain: Control Flow | medium | Critical |
381,373,636 | pytorch | Make torch.multiprocessing.SpawnContext usable | See https://github.com/pytorch/pytorch/pull/14039#issuecomment-439219408 | module: multiprocessing,feature,triaged | low | Minor |
381,377,662 | flutter | Second slow tap on the same spot brings up selection controls | As described in the title. Right now the controls only appear on long press. | a: text input,c: new feature,platform-ios,framework,P2,team-ios,triaged-ios | low | Major |
381,415,412 | TypeScript | map through enum | **TypeScript Version:** 3.1.6
**Search Terms:** enum
**Code**
```ts
enum ABC {
A = 'A',
B = 'B',
C = 'C',
}
// No Error
for (const key in ABC) {
console.log(ABC[key]);
}
// Error: Element implicitly has an 'any' type because index expression is not of type 'number'
Object.keys(ABC).map(key => console.log(ABC[key]));
```
**Expected behavior:**
`Object.keys, values, entries` on `enum ABC` infer type of `ABC`'s key and value.
**Actual behavior:**
Cannot get inferred type.
Shouldn't the type of a key from the enum be 'A' | 'B' | 'C'? | Suggestion,Needs Proposal | low | Critical |
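A common workaround for the error above (an assumption about typical usage, not taken from the issue itself) is to assert the key array's element type with `keyof typeof`, so the index expression typechecks:

```typescript
enum ABC {
  A = 'A',
  B = 'B',
  C = 'C',
}

// Assert the key type so indexing into the enum is no longer an implicit any.
const keys = Object.keys(ABC) as Array<keyof typeof ABC>;
const values = keys.map(key => ABC[key]); // values: ABC[]

console.log(values); // ['A', 'B', 'C']
```

Note that string enums have no reverse mapping, so `Object.keys` returns only the member names here.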
381,460,672 | TypeScript | Compiler warning for always true/false strict inequality expression | ## Search Terms
type guard warning if always true false non-identity inequality
## Suggestion
The compiler should emit a warning if it can determine that an expression using a strict inequality check (`!==`) always returns true or false. It already does this for strict equality checks (`===`).
## Use Cases
Like the warnings for the strict equality check, it helps identifying unreachable code blocks or unnecessary if blocks which can get introduced during refactoring of existing code or during carefree creation of new code.
## Examples
The following function definition emits an error ("This condition will always return 'false' since the types '"abc"' and '"xyz"' have no overlap."):
```ts
function fn(p: "abc") {
if (p === "xyz") {
}
}
```
But this function definition compiles just fine (a possible error message could be "This condition will always return 'true' since the types '"abc"' and '"abc"' overlap completely":
```ts
function fn2(p: "abc") {
if (p !== "abc") {
}
}
```
It even correctly type-guards the type of `p` inside the if block (it becomes `never`).
Two other scenarios which trigger this non-warning, but could be harder to detect:
```ts
// typeof checks
function fn3(p: string) {
if (typeof p !== "string") {
}
}
// the length of a tuple
function fn4(p: []) {
if (p.length !== 0) {
}
}
```
The last example doesn't type-guard the value of p correctly to `never` even though `p.length` is correctly of type `0`, maybe this is a separate feature request.
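The `fn2` case can also be restated as a runnable check: for any well-typed argument the `!==` branch is dead code, which is exactly what the proposed warning would surface (a minimal sketch, not from the issue itself):

```typescript
function fn2(p: "abc"): "reachable" | "unreachable" {
  if (p !== "abc") {
    // p is narrowed to never here; for any well-typed caller this is dead code,
    // which is what the proposed compiler warning would point out.
    return "unreachable";
  }
  return "reachable";
}

console.log(fn2("abc")); // "reachable"
```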
## Problems / Discussion Points
* I'm not sure whether this qualifies as a breaking change, since it introduces a warning for code constructs which compile totally fine at the moment.
* This could be extended for other checks as well (e.g. for the `<`/`<=` and `>`/`>=` operators) even though this would be pretty edge-casey.
## Checklist
My suggestion meets these guidelines:
* [ ] (Unsure) This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
381,465,936 | javascript-algorithms | Knapsack has error |
```
var uu = [
new KnapsackItem({ value: 3, weight: 2 }),
new KnapsackItem({ value: 4, weight: 3 }),
new KnapsackItem({ value: 5, weight: 4 }),
new KnapsackItem({ value: 7, weight: 5 }),
];
var b = new Knapsack(uu, 7);
b.solveZeroOneKnapsackProblem()
``` | bug | low | Critical |
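For reference, an independent 0/1 knapsack DP (a sketch, not this repository's `Knapsack` API) gives an optimum of 10 for these items: taking the value-3/weight-2 and value-7/weight-5 items fills capacity 7 exactly.

```typescript
// Classic 1-D dynamic program for 0/1 knapsack: best[w] is the maximum value
// achievable with capacity w; items are iterated with weights descending so
// each item is used at most once.
function knapsack01(items: { value: number; weight: number }[], capacity: number): number {
  const best = new Array(capacity + 1).fill(0);
  for (const { value, weight } of items) {
    for (let w = capacity; w >= weight; w--) {
      best[w] = Math.max(best[w], best[w - weight] + value);
    }
  }
  return best[capacity];
}

const items = [
  { value: 3, weight: 2 },
  { value: 4, weight: 3 },
  { value: 5, weight: 4 },
  { value: 7, weight: 5 },
];
console.log(knapsack01(items, 7)); // 10
```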
381,500,490 | rust | issue-44056.rs was incorrectly moved to compile pass and should not if hardware does not support AVX | See https://github.com/rust-lang/rust/pull/55667.
There are two problems:
* the issue-44056.rs test was incorrectly moved from run-pass (https://github.com/rust-lang/rust/issues/44056) to compile-pass: this test must be executed
* the issue-44056.rs test should not run on non-AVX hardware, and this cannot be checked in the test itself. That is, the test runner has to detect whether the host supports AVX, and then and only then, attempt to run the test.
cc @infinity0 @RalfJung @nikic @alexcrichton | A-testsuite,T-compiler,E-help-wanted | low | Minor |
381,532,086 | rust | Cycle detected when processing existential type | [playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=c35f11c83bb2e0fdbde12749f8a4927c)
```rust
#![feature(existential_type)]
use std::rc::Rc;
existential type Foo: Fn() -> usize;
fn foo() -> Foo {
let rc = Rc::new(5);
move || { *rc.as_ref() }
}
fn assert_send<T: Send>(_: &T) {
}
fn main() {
let f = foo();
assert_send(&f);
println!("{}", f());
}
```
```
error[E0391]: cycle detected when processing `Foo`
--> src/main.rs:5:1
|
5 | existential type Foo: Fn() -> usize;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: ...which requires processing `main`...
--> src/main.rs:15:1
|
15| fn main() {
| ^^^^^^^^^
note: ...which requires evaluating trait selection obligation `Foo: std::marker::Send`...
= note: ...which again requires processing `Foo`, completing the cycle
```
Commenting out the `assert_send(&f);` line compiles and runs fine ([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=ae3e28eec9d7b8c81fea970152a0e466)), moving `Foo` and `foo` into a sub-module correctly identifies that `Foo: !Send` ([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=e6cb72c4424ba8b9b1477301701a5c50)). | A-diagnostics,T-compiler,A-impl-trait,F-type_alias_impl_trait,requires-nightly | medium | Critical |
381,543,737 | create-react-app | Application crashes on IE11 and lower, Syntax error | ### Is this a bug report?
Yes
### Did you try recovering your dependencies?
Yes
### Environment
node version: 8.11.1,
npm version: 5.6.0
This problem is specific for IE11 running on Windows 10
### Steps to Reproduce
1. npx create-react-app my-app
2. cd my-app
3. npm start
4. Open application in IE11 browser
### Expected Behavior
I expected my app to load, like it does in Chrome, Firefox and Safari.
### Actual Behavior
Got a console error - "Syntax error" - after running the npm start command. The debugger points to the ansi-regex dependency in node_modules; the problem is ES6 arrow function syntax. Any ideas how I can fix this? I tried using the react-app-polyfill package but with no results.

It seems related to this old topic: https://github.com/facebook/create-react-app/issues/2691
ansi-regex in version 3.0.0 or higher exports arrow functions instead of regular functions, which causes an error in IE11. Previously the package version was downgraded, but now it seems to be bumped to 3.0.0 again | issue: bug | medium | Critical |
381,543,838 | pytorch | [feature request] bincount along specified dimension(s) | ## 🚀 Feature
`torch.bincount` along specified dimension(s).
cc @mruberry @rgommers @heitorschueroff | triaged,module: numpy,module: sorting and selection,function request | low | Major |
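To make the request concrete, a dimension-aware bincount over the rows of a 2-D array could behave like the following sketch (TypeScript for a self-contained illustration, not PyTorch code; `minLength` plays the role of `torch.bincount`'s `minlength` parameter and would be needed to pad rows to a common width in a real tensor):

```typescript
// One histogram per row: counts[i][v] is how many times value v occurs in row i.
function bincountPerRow(rows: number[][], minLength = 0): number[][] {
  return rows.map(row => {
    const size = Math.max(minLength, row.length ? Math.max(...row) + 1 : 0);
    const counts = new Array(size).fill(0);
    for (const v of row) counts[v] += 1;
    return counts;
  });
}

console.log(bincountPerRow([[0, 1, 1], [2, 2, 0]]));
// [[1, 2], [1, 0, 2]]
```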
381,563,824 | youtube-dl | Safari download JSON metadata: HTTP Error 403: Forbidden | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.11.07*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.11.07**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', '-u', 'PRIVATE', '-p', 'PRIVATE', 'https://www.safaribooksonline.com/videos/red-hat-certified/9780133929171']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.11.07
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: none
[debug] Proxy map: {}
[safari:course] Downloading login form
[safari:course] Logging in
[safari:course] 9780133929171: Downloading course JSON
ERROR: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\extractor\common.py", line 605, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpj6iiia1g\build\youtube_dl\YoutubeDL.py", line 2211, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default
...
<end of log>
```
| cant-reproduce | low | Critical |
381,686,811 | go | proposal: cmd/go: subcommands to add and remove modules from the module cache | For a number of use-cases, it would be helpful to be able to upload modules to the module cache from source code (not just zip files!) in a local directory or repository.
Some examples:
* Testing changes involving cyclic module requirements (#27542, #27899).
* Bootstrapping a server to serve the module containing itself (#28029, but that can be worked around using `replace` directives).
* Storing and distributing private modules using version control systems that don't work well with zip files (related to #27618).
To support those use-cases, I propose the following subcommands:
* `go mod pack [MODULE[@VERSION]] DIR`: construct a module in the module cache from the module source code rooted at `DIR` (at version `VERSION`). If the `MODULE` is omitted, it is inferred from `DIR/go.mod`. If `@VERSION` is provided, it must be a valid semantic version, and `go mod pack` fails if that version already exists with different contents. If `@VERSION` is omitted, `DIR` must be within a supported version control repository, and `go mod pack` will attempt to infer the version from the repo state (commits and tags).
* `go mod unpack MODULE[@VERSION] DIR`: download the contents of `MODULE` to `DIR`. If `@VERSION` is omitted, use the active version from the main module (if any), or `latest` if no version is active. (In contrast to `go mod vendor`, `go mod unpack` would unpack the entire contents of the module — not just the packages in the import graph of the main module.)
* `go clean -m MODULE[@VERSION]`: remove `MODULE@VERSION` from the module cache. If run within a module, also remove the corresponding entry from its `go.sum` file. If `@VERSION` is omitted, remove all versions of `MODULE` from the module cache (and `go.sum` file).
CC @hyangah @jadekler @rsc @myitcv @thepudds @rasky @rogpeppe @FiloSottile | Proposal,NeedsInvestigation,Proposal-Hold,modules | medium | Critical |
381,697,568 | TypeScript | Provide a way to specify inheritance explicitly | **TypeScript Version:** 3.0.0-dev.201xxxxx
It seems there's really no way currently to set up inheritance when working with complex mixins and/or extending prototypes manually. While mixin support landed in TypeScript recently, it only addresses very simple cases. Its usefulness ends when you also want mixins that inherit from different classes on their own. To illustrate it with an example:
```ts
export class Component extends HTMLElement implements ComponentCore {
constructor() {
super();
ComponentCore.init(this);
}
}
```
where `ComponentCore.init` does a mixin-style implementation of `ComponentCore`. I tried the documented mixin approach, but that doesn't seem to work when you want to extend from `HTMLElement` here.
Now, moving on a real-life example with a lot more context: https://github.com/prasannavl/icomponent/blob/v5.1.1/packages/icomponent/src/component.ts#L6
I had to jump through hoops and do something like
```ts
ComponentCore.extend(ComponentImpl);
export interface IComponent extends ComponentImpl, ComponentStatics, ComponentCore, Constructor<IComponent> { }
export const Component: IComponent = ComponentImpl as any;
```
in order to even get inheritance working properly. This is further complicated, when you extend from these classes, since there's currently it's ` const Component`, and as such it is not always interpreted by the compiler as a class -- As as example - a few lines down in the same: https://github.com/prasannavl/icomponent/blob/v5.1.1/packages/icomponent/src/component.ts#L30
You can't pass `Component` into generics parameters, since it's no longer a class.
**Possible solutions**:
- A simple and effective solution would be to allow annotating the class as having different parents, e.g. `// @ts-inherit: IComponentCore, HTMLElement`
- Extend the `extends` syntax to inherit multiple classes with, say, `extends X, !Y, !Z` -- the exclamation mark indicating these special classes
- Allow `implements` to inherit a class without actually implementing it, with the same syntax as above. This is probably the least intrusive way, but unlike `extends`, it forces you to explicitly define interfaces to be able to do so.
| Suggestion,Needs Proposal | low | Minor |
381,732,389 | pytorch | [sparse] add descriptions and examples for methods at torch.sparse doc page | - There is no descriptions / examples for the methods listed at at doc page
- It will be much easier for new comers to learn how to use sparse if we improve doc a little bit
cc @aocsa @nikitaved @pearu @mruberry @IvanYashchuk | module: sparse,triaged | low | Minor |
381,740,049 | godot | Editing dependencies and accidentally linking to the same dependency, will always update the other matching dependency | **Godot version:** 3.0
**OS/device including version:** Windows 10
**Issue description:**
For my project setup I have 3D characters composed of different interchangeable parts contained in separate scenes, which I swap out to create new characters using the edit-dependencies dialog. Occasionally I "corrupt" the scene by choosing a file that's already referenced in the dependencies. By "corrupt" I mean I'm unable to undo my changes or make the dependency unique; it will always update the other matching dependency.
**Steps to reproduce:**
Create 5 scenes, Scene A, Scene B, Scene C, Scene D, Scene E
Have Scene A, contain B and C.
Edit the dependency of B and C both to D
Now edit B OR C to E; this will update both B and C to E, when I expect only one to update
**Suggestion:**
If this is intended, I think a warning dialog should pop up. Otherwise make scene dependencies have unique links. | bug,topic:core,topic:editor,confirmed,usability | low | Major |
381,748,388 | godot | Directory inconsistency when opening an editor FileDialog | Godot 3.1 alpha2
Windows 10 64 bits
When setting up a different default directory for new projects, I found the location at which the File Dialog opens is inconsistent. It doesn't remember where it was, and it also gets the drive letter wrong.
In Godot 3.0.6
------------------
I open a project, I go to Editor Settings, and this is the current directory:

If I open the file dialog to set a new path, it opens in the same directory as my project, with correct drive letter:

If I navigate elsewhere and close the dialog, it remembers where it was if I open it again.
In Godot 3.1 alpha2
-----------------------
I open a project, go to Editor Settings, it starts out the same:

But, if I click on the directory icon to change it, it opens in the directory indicated by the property, which I guess was an intended change? If so, that would also explain why the dialog no longer remembers where it was when opening it again. But more importantly, you will also notice that the drive letter is wrong, which forces me to select `C:`, and then `D:` again to trigger the change and see the correct drive.

Also, if I write just `D:` in the address bar instead of using the dropdown, it doesn't just go to `D:`, but instead warps straight into my project folder (which is in `D:/some/where/in/a/folder`). This did not happen in 3.0.6. | bug,topic:editor,confirmed | low | Minor |
381,800,499 | pytorch | running caffe2 float16 tensors results in aten runtime error | ## 🐛 Bug
I am trying to create an FP16 tensor in caffe2, and I am following the method described in https://github.com/pytorch/pytorch/blob/master/caffe2/proto/caffe2.proto#L86 (set type to FLOAT16, store float16 as unsigned short in int32_data)
However, upon attempting to run the network using workspace.RunNet, I get:
RuntimeError: storage_.IsType<T>() ASSERT FAILED at /home/nathan/pytorch/aten/src/ATen/core/TensorImpl.h:588, please report a bug to PyTorch. Tensor type mismatch, caller expects elements to be float, while tensor contains int. Error from operator:
input: "data" input: "scaled_datascale_w" output: "scaled_data" type: "Mul" arg { name: "axis" i: 1 } arg { name: "broadcast" i: 1 } arg { name: "float16_compute" i: 1 }
## To Reproduce
This comes from converting an existing Float32 model into Float16. The model is loaded and the ops and blobs are copied. To do the Float32->Float16 conversion of the tensors,
I follow the comments at https://github.com/pytorch/pytorch/blob/master/caffe2/proto/caffe2.proto#L86:
// Note about float16: in storage we will basically convert float16 byte-wise
// to unsigned short and then store them in the int32_data field.
which gets done as follows:
tensor.data_type = caffe2_pb2.TensorProto.FLOAT16
float16_list = list(arr.flatten().astype(np.float16))
ushort_list = [ struct.unpack('H', struct.pack('e', f16))[0] for f16 in float16_list]
so the tensor looks like:
[dims: 32
dims: 32
dims: 3
dims: 3
data_type: FLOAT16
int32_data: 10967
int32_data: 10247
... ]
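The byte-wise packing described above can be sanity-checked in isolation. This is a minimal sketch of the float16 to unsigned short round trip, independent of caffe2 (the helper names are mine):

```python
import struct

def f16_to_ushort(value):
    # Pack a Python float into float16 bytes, then reinterpret those bytes
    # as an unsigned short. This is the conversion caffe2.proto describes.
    return struct.unpack('H', struct.pack('e', value))[0]

def ushort_to_f16(bits):
    # Reverse direction: reinterpret the unsigned short bytes as a float16.
    return struct.unpack('e', struct.pack('H', bits))[0]

bits = f16_to_ushort(1.5)
print(hex(bits))            # 0x3e00
print(ushort_to_f16(bits))  # 1.5, the round trip is lossless
```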
For the initialization, since the float16 data went into the int32 data field, I use GivenTensorIntFill like this:
op = core.CreateOperator(
    "GivenTensorIntFill", [], [tensor.name],
    arg=[
        MakeArgument("shape", list(tensor.dims)),
        MakeArgument("values", tensor.int32_data)])
init_net.op.extend([op])
RunNetOnce with the init net seems okay, but workspace.RunNet with the predict net triggers the runtime error from ATen.
## Environment
PyTorch version: 1.0.0a0+5b15a50
Is debug build: No
CUDA used to build PyTorch: 8.0.61
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 8.0.61
GPU models and configuration:
GPU 0: GeForce GTX 1060 6GB
GPU 1: GeForce GTX 1080
Nvidia driver version: 410.48
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.6.0.21
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.4
/usr/lib/x86_64-linux-gnu/libcudnn_static_v6.a
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
| caffe2 | low | Critical |
381,822,224 | TypeScript | Type assertions using Exact, Subset, Superset | I have suffered several times from the limitations of casting like so:
```js
res.json(<Foo>{foo:'bar'});
```
The above cast/assertion will compile even if the anonymous object does not have all the fields of type `Foo`.
For example, this question I had: https://stackoverflow.com/questions/53328459/prevent-compilation-unless-all-fields-exist
This feature request is for something like this:
```js
res.json(Exact<Foo>{foo:'bar'});
res.json(Subset<Foo>{foo:'bar'});
res.json(Superset<Foo>{foo:'bar'});
```
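For reference, something close to `Exact<>` can already be approximated in user land with a generic helper. This is a sketch, not the proposed syntax; the `Exact`/`exact` names below are my own:

```typescript
// Excess keys are rejected by mapping them to `never`; missing keys are
// rejected because U must extend T. Both directions of exactness are checked.
type Exact<T, U extends T> = U & Record<Exclude<keyof U, keyof T>, never>;

function exact<T>() {
  return <U extends T>(value: Exact<T, U>): T => value;
}

interface Foo { foo: string; }

const ok = exact<Foo>()({ foo: "bar" });    // compiles
// exact<Foo>()({ foo: "bar", extra: 1 }); // error: excess key
// exact<Foo>()({});                       // error: missing `foo`
console.log(ok.foo);
```

The double call is needed so `T` can be supplied explicitly while `U` is inferred from the argument.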
Note this is similar to the existing construct `Partial<T>`; hopefully the above are self-explanatory. I am not sure if both `Subset<>` and `Superset<>` make sense, but one of them should. | Suggestion,In Discussion | low | Major |
381,834,270 | opencv | connectedComponentsWithStats Is Not Thread-Safe | ##### System information (version)
- OpenCV =>3.3.1
- Operating System / Platform => Windows 10 - 64 Bit
- Compiler => Visual Studio 2015
##### Detailed description
connectedComponentsWithStats() is not thread-safe. It sometimes hangs (maybe 5% of the time). I'm guessing that it happens in the first few threads.
##### Steps to reproduce
This program should always print "5 5" (we enqueue 5 tasks) but sometimes it prints "5 4" (one hangs). This is easy to miss in a real program because it triggers no errors.
```
std::atomic<int> first{0}, last{0};
{
    ThreadPool pool(thread::hardware_concurrency()); // 2 threads
    for (int i = 0; i < 5; i++) {
        pool.enqueue([i, binaryImg] { // Add the task.
            // code ...
            first++;
            Mat labels, stats, centroids;
            int objCount = connectedComponentsWithStats(binaryImg, labels, stats, centroids, 8); // PROBLEM. Sometimes the following lines inside this block won't be executed.
            last++;
            // code ...
        });
    }
} // The ThreadPool destructor joins all the threads here.
cout << first << ' ' << last << endl;
```
| category: imgproc,incomplete,needs reproducer,needs investigation | low | Critical |
381,851,737 | rust | rustdoc: "Implementations on Foreign Types" sidebar items should link to specific impls | Each of the impl sections in the "Implementations on Foreign Types" section has its own ID. Consider the [foreign impls](https://docs.rs/tokio/0.1.12/tokio/prelude/trait.Stream.html#foreign-impls) for the `Stream` trait from `tokio`. The first impl header is `#impl-Stream`, the second impl header is `#impl-Stream-1`, the third impl header is `#impl-Stream-2`, and so on.
But, the links in the sidebar do not jump to these specific IDs. Instead, they all link to `#impl-Stream`, or the first impl in the list. For a trait like `Stream`, which has many methods and many foreign implementors, this makes it difficult to get to a specific impl (to see the associated types, for example). You have to search the page, collapse the sections, or just scroll.
Since unique IDs are already generated for the impl sections, it would be great if the sidebar items used them to make navigation easier. | T-rustdoc,C-bug,A-rustdoc-ui,S-needs-repro | low | Minor |
381,855,781 | material-ui | [material-ui][docs] Add examples rendering in iframes, popups, and shadow DOM |
Material-UI components are not rendering properly when they are inside an `<iframe>`. I have used many iframe React components, but Material-UI does not work in any of them.
There are already closed threads on this without proper answers.
- [x] This is not a v0.x issue.
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate, and I haven't found any solution.
## Expected Behavior
It should render components in iframe as well
## Current Behavior
The components are not rendered inside the iframe; plain HTML elements are shown without any styles.
## Steps to Reproduce
Link: https://codesandbox.io/s/rjo0r3yooo
1. Go to the sandbox link provided above.
## Context
## Your Environment
| Tech | Version |
|--------------|---------|
| Material-UI | v3.5.1 |
| React | v16.6.3 |
| Browser | chrome |
| docs,priority: important | medium | Critical |
381,865,163 | godot | Nodes magically move to bottom of tree hierarchy |
**Godot version:**
0afdc5c559520204987544d30560745dbf29a390
**OS/device including version:**
Windows 10, 64-bit, GTX 960
**Issue description:**
With an instanced scene and "editable children" selected, add a Particles2D node to the child of that instanced scene, place it somewhere in the hierarchy, and press F6 to preview the scene. Then switch to a different scene, switch back, and you will notice that the Particles2D node is at the bottom.
**This does not happen on 3.0.6**
Example:

| bug,topic:core,confirmed | low | Major |
381,883,720 | rust | Support path prefix which refers to per-user DosDevices ("\??\C:\...") | `std::path::Prefix` does not support Windows prefixes like `\??\C:\...`.
[See more about prefix which refers to per-user DosDevices](https://reverseengineering.stackexchange.com/questions/3798/c-question-marks-in-paths)
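As a point of reference, the `\??\` form is easy to detect manually, which suggests `Prefix` could grow a variant for it. This sketch only illustrates the string shapes involved (the helper name is mine):

```rust
// `\??\` is the per-user DosDevices prefix; `\\?\` is the verbatim prefix
// that std::path::Prefix already understands. They look alike but differ.
fn has_dos_devices_prefix(p: &str) -> bool {
    p.starts_with(r"\??\")
}

fn main() {
    assert!(has_dos_devices_prefix(r"\??\C:\Users\Rust\Pictures\Ferris"));
    assert!(!has_dos_devices_prefix(r"\\?\C:\Users\Rust")); // verbatim, not DosDevices
    println!("ok");
}
```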
```rust
#![allow(unused)]
use std::path::{Component, Path, Prefix};
fn main() {
let path = Path::new(r"\??\C:\Users\Rust\Pictures\Ferris");
assert!(path.is_absolute());
}
```
([Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=7e4c26dd016ce69c73fe076721f62b64))
Errors:
```
Compiling playground v0.0.1 (/playground)
Finished dev [unoptimized + debuginfo] target(s) in 0.61s
Running `target/debug/playground`
thread 'main' panicked at 'assertion failed: path.is_absolute()', src/main.rs:7:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
``` | O-windows,T-libs-api,A-io | low | Critical |
381,899,078 | TypeScript | namespace containing only 'const enum' can be used as value -> causing runtime error |
**TypeScript Version:** 3.2.0-dev.20181110
**Search Terms:** namespace const enum
**Code**
```ts
namespace foo {
export const enum Foo {}
}
console.log(foo);
```
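To make the failure concrete, this is approximately (as I understand the emit) what reaches the runtime: the namespace body is elided because it only contains a const enum, but the value reference survives.

```javascript
// Approximation of tsc's output for the snippet above: no `foo` binding is
// emitted, so the surviving reference throws at runtime.
try {
  console.log(foo);
} catch (e) {
  console.log(e.name); // ReferenceError
}
```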
**Expected behavior:**
Error when using `foo` as a value. It's already an error to use a namespace that contains only types (and is therefore not emitted) as a value.
The Symbol of the ModuleDeclaration actually contains `constEnumOnlyModule: true`; it just needs to be checked.
**Actual behavior:**
The namespace is not emitted in the transpiled output, but it is still referenced by `console.log(foo);`, which fails at runtime.
**Playground Link:** https://agentcooper.github.io/typescript-play/#code/HYQwtgpgzgDiDGEAEAzA9mpBvAUE-SEAHjGgE4AuS8awUVEwArmEgGIbYC+OPNdaADYQAdILQBzABTo0ASgDcQA
**Related Issues:**
| Bug | low | Critical |
381,905,079 | pytorch | Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn --use-qnnpack caffe2' | ## 🐛 Bug
I think [issue 13962](https://github.com/pytorch/pytorch/issues/13962) may have been closed prematurely. I just did a fresh `git clone --recursive https://github.com/pytorch/pytorch`, followed the installation steps, and still have this issue on my machine.
## To Reproduce
Steps to reproduce the behavior:
Source installation on Ubuntu 16.04 with gcc 5.4.0, CUDA 9.0, Python 3.7.
Compiler error log below:
```
NO PROBLEMS TO THIS POINT....
[ 59%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state.dir/python/pybind_state_dlpack.cc.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/caffe2_gpu_generated_THCTensorScatterGather.cu.o
[ 60%] Building CXX object caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/Cuda.cpp.o
[ 60%] Building CXX object modules/observers/CMakeFiles/caffe2_observers.dir/observer_config.cc.o
[ 60%] Linking CXX shared library ../../lib/libcaffe2_module_test_dynamic.so
[ 60%] Built target caffe2_module_test_dynamic
[ 60%] Building CXX object modules/observers/CMakeFiles/caffe2_observers.dir/perf_observer.cc.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/caffe2_gpu_generated_THCTensorTopK.cu.o
[ 60%] Building CXX object caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/DataChannel.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state.dir/python/pybind_state_nomni.cc.o
[ 60%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state.dir/python/pybind_state_registry.cc.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/caffe2_gpu_generated_THCTensorSort.cu.o
[ 60%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state.dir/python/pybind_state_ideep.cc.o
[ 60%] Linking CXX shared library ../../lib/libcaffe2_observers.so
[ 60%] Building CXX object caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/DataChannelRequest.cpp.o
[ 60%] Built target caffe2_observers
[ 60%] Building CXX object caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/RPCType.cpp.o
[ 60%] Building CXX object caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/data_channels/DataChannelGloo.cpp.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/caffe2_gpu_generated_THCSortUtils.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/caffe2_gpu_generated_THCTensorMode.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortByte.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTByte.cu.o
[ 60%] Building CXX object caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/data_channels/DataChannelMPI.cpp.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseByte.cu.o
[ 60%] Building CXX object caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/data_channels/DataChannelNccl.cpp.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareByte.cu.o
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:31:17: error: ‘ncclInt8’ was not declared in this scope
{at::kChar, ncclInt8},
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:32:17: error: ‘ncclUint8’ was not declared in this scope
{at::kByte, ncclUint8},
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:35:16: error: ‘ncclInt32’ was not declared in this scope
{at::kInt, ncclInt32},
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:38:1: error: could not convert ‘{{at::kChar, <expression error>}, {at::kByte, <expression error>}, {at::kFloat, ncclFloat}, {at::kDouble, ncclDouble}, {at::kInt, <expression error>}, {at::kLong, ncclInt64}, {at::kHalf, ncclHalf}}’ from ‘<brace-enclosed initializer list>’ to ‘std::unordered_map<at::ScalarType, ncclDataType_t>’
};
^
In file included from /usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:1:0:
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp: In member function ‘void thd::DataChannelNccl::_destroyNcclResources(THDGroup)’:
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:187:40: error: void value not ignored as it ought to be
NCCL_CHECK(ncclCommDestroy(comm));
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp: In member function ‘thd::NcclResourcePair thd::DataChannelNccl::_getNcclResourcePair(std::vector<at::Tensor>&, THDGroup)’:
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:295:29: error: ‘ncclGroupStart’ was not declared in this scope
NCCL_CHECK(ncclGroupStart());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:302:27: error: ‘ncclGroupEnd’ was not declared in this scope
NCCL_CHECK(ncclGroupEnd());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp: In member function ‘virtual void thd::DataChannelNccl::allReduce(std::vector<at::Tensor>&, THDReduceOp, THDGroup)’:
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:417:29: error: ‘ncclGroupStart’ was not declared in this scope
NCCL_CHECK(ncclGroupStart());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:432:27: error: ‘ncclGroupEnd’ was not declared in this scope
NCCL_CHECK(ncclGroupEnd());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp: In member function ‘virtual void thd::DataChannelNccl::allGather(std::vector<at::Tensor>&, std::vector<at::Tensor>&, THDGroup)’:
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:466:29: error: ‘ncclGroupStart’ was not declared in this scope
NCCL_CHECK(ncclGroupStart());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:473:27: error: invalid conversion from ‘void*’ to ‘int’ [-fpermissive]
output[i].data_ptr(),
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:474:23: error: invalid conversion from ‘int64_t {aka long int}’ to ‘ncclDataType_t’ [-fpermissive]
input[i].numel(),
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:477:15: error: cannot convert ‘ncclDataType_t’ to ‘void*’ for argument ‘4’ to ‘ncclResult_t ncclAllGather(const void*, int, ncclDataType_t, void*, ncclComm_t, cudaStream_t)’
stream));
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:480:27: error: ‘ncclGroupEnd’ was not declared in this scope
NCCL_CHECK(ncclGroupEnd());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp: In member function ‘virtual void thd::DataChannelNccl::reduce(std::vector<at::Tensor>&, THDReduceOp, thd::rank_type, THDGroup)’:
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:516:29: error: ‘ncclGroupStart’ was not declared in this scope
NCCL_CHECK(ncclGroupStart());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:532:27: error: ‘ncclGroupEnd’ was not declared in this scope
NCCL_CHECK(ncclGroupEnd());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp: In member function ‘virtual void thd::DataChannelNccl::broadcast(std::vector<at::Tensor>&, thd::rank_type, THDGroup)’:
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:568:29: error: ‘ncclGroupStart’ was not declared in this scope
NCCL_CHECK(ncclGroupStart());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:582:27: error: ‘ncclGroupEnd’ was not declared in this scope
NCCL_CHECK(ncclGroupEnd());
^
/usr/local/src/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.hpp:16:26: note: in definition of macro ‘NCCL_CHECK’
ncclResult_t error = cmd; \
^
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceByte.cu.o
caffe2/torch/lib/THD/CMakeFiles/THD.dir/build.make:153: recipe for target 'caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/data_channels/DataChannelNccl.cpp.o' failed
make[2]: *** [caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/data_channels/DataChannelNccl.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedByte.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortChar.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTChar.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseChar.cu.o
CMakeFiles/Makefile2:8621: recipe for target 'caffe2/torch/lib/THD/CMakeFiles/THD.dir/all' failed
make[1]: *** [caffe2/torch/lib/THD/CMakeFiles/THD.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareChar.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceChar.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedChar.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortShort.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTShort.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseShort.cu.o
[ 60%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareShort.cu.o
[ 60%] Linking CXX shared module python/caffe2_pybind11_state.cpython-37m-x86_64-linux-gnu.so
[ 60%] Built target caffe2_pybind11_state
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceShort.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedShort.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortInt.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTInt.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseInt.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareInt.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceInt.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedInt.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortLong.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTLong.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseLong.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareLong.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceLong.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedLong.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortHalf.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTHalf.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseHalf.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareHalf.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceHalf.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedHalf.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortFloat.cu.o
[ 61%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTFloat.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseFloat.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareFloat.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceFloat.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedFloat.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorSortDouble.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareTDouble.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathPointwiseDouble.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathCompareDouble.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMathReduceDouble.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/generated/caffe2_gpu_generated_THCTensorMaskedDouble.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_AbsCriterion.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_Abs.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_BCECriterion.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_ClassNLLCriterion.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_Col2Im.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_DistKLDivCriterion.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_ELU.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_FeatureLPPooling.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_GatedLinearUnit.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_HardTanh.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_Im2Col.cu.o
[ 62%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_IndexLinear.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_L1Cost.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_LeakyReLU.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_LogSigmoid.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_LookupTableBag.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_LookupTable.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_MarginCriterion.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_MSECriterion.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_MultiLabelMarginCriterion.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_MultiMarginCriterion.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_RReLU.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_Sigmoid.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SmoothL1Criterion.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SoftMarginCriterion.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SoftPlus.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SoftShrink.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SparseLinear.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialAdaptiveAveragePooling.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialAdaptiveMaxPooling.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialAveragePooling.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialClassNLLCriterion.cu.o
[ 63%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialConvolutionLocal.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialConvolutionMM.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialCrossMapLRN.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialDepthwiseConvolution.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialDilatedConvolution.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialDilatedMaxPooling.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialFractionalMaxPooling.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialFullConvolution.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialFullDilatedConvolution.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialMaxPooling.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialMaxUnpooling.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialReflectionPadding.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialReplicationPadding.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialSubSampling.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialUpSamplingBilinear.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_SpatialUpSamplingNearest.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_Sqrt.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_Square.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_Tanh.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_TemporalConvolution.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_TemporalMaxPooling.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_TemporalReflectionPadding.cu.o
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_TemporalReplicationPadding.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_TemporalRowConvolution.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_TemporalUpSamplingLinear.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_TemporalUpSamplingNearest.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricAdaptiveAveragePooling.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricAdaptiveMaxPooling.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricAveragePooling.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricConvolution.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricDilatedConvolution.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricDilatedMaxPooling.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricFractionalMaxPooling.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricFullConvolution.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricFullDilatedConvolution.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricMaxPooling.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricMaxUnpooling.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricReplicationPadding.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricUpSamplingNearest.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_VolumetricUpSamplingTrilinear.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/detail/caffe2_gpu_generated_IndexUtils.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Activation.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_BatchLinearAlgebra.cu.o
[ 65%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_BinaryOpsKernel.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_CUDAScalar.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_DistanceKernel.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Distributions.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Dropout.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Embedding.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_EmbeddingBag.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_GridSampler.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_LinearAlgebra.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Loss.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_LossCTC.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Normalization.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_RNN.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Reduce.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_ReduceOpsKernel.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Resize.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_RoiPooling.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_SoftMax.cu.o
/usr/local/src/pytorch/aten/src/ATen/native/cuda/DistanceKernel.cu(30): warning: function "at::native::<unnamed>::dists<scalar_t>::sign [with scalar_t=float]" was declared but never referenced
/usr/local/src/pytorch/aten/src/ATen/native/cuda/DistanceKernel.cu(30): warning: function "at::native::<unnamed>::dists<scalar_t>::sign [with scalar_t=double]" was declared but never referenced
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_SparseMM.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_SpectralOps.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_SummaryOps.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_TensorCompare.cu.o
[ 66%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_TensorFactories.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_TensorTransformations.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_Unique.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_WeightNorm.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/sparse/cuda/caffe2_gpu_generated_SparseCUDABlas.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/sparse/cuda/caffe2_gpu_generated_SparseCUDATensor.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/sparse/cuda/caffe2_gpu_generated_SparseCUDATensorMath.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/core/caffe2_gpu_generated_THCCachingAllocator.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/core/caffe2_gpu_generated_context_gpu.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/utils/caffe2_gpu_generated_math_gpu.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_abs_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_accumulate_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_accuracy_op.cu.o
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(24): warning: function "c10::<unnamed>::C10FlagParser_caffe2_cuda_memory_pool::C10FlagParser_caffe2_cuda_memory_pool(const std::__cxx11::string &)" was declared but never referenced
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(32): warning: function "c10::<unnamed>::C10FlagParser_caffe2_cub_bin_growth::C10FlagParser_caffe2_cub_bin_growth(const std::__cxx11::string &)" was declared but never referenced
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(37): warning: function "c10::<unnamed>::C10FlagParser_caffe2_cub_min_bin::C10FlagParser_caffe2_cub_min_bin(const std::__cxx11::string &)" was declared but never referenced
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(42): warning: function "c10::<unnamed>::C10FlagParser_caffe2_cub_max_bin::C10FlagParser_caffe2_cub_max_bin(const std::__cxx11::string &)" was declared but never referenced
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(47): warning: function "c10::<unnamed>::C10FlagParser_caffe2_cub_max_managed_mb::C10FlagParser_caffe2_cub_max_managed_mb(const std::__cxx11::string &)" was declared but never referenced
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(53): warning: function "c10::<unnamed>::C10FlagParser_caffe2_cub_print_allocation_events::C10FlagParser_caffe2_cub_print_allocation_events(const std::__cxx11::string &)" was declared but never referenced
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(59): warning: function "c10::<unnamed>::C10FlagParser_caffe2_gpu_memory_tracking::C10FlagParser_caffe2_gpu_memory_tracking(const std::__cxx11::string &)" was declared but never referenced
/usr/local/src/pytorch/caffe2/core/context_gpu.cu(63): warning: function "c10::<unnamed>::C10FlagParser_caffe2_gpu_memory_report_interval_mb::C10FlagParser_caffe2_gpu_memory_report_interval_mb(const std::__cxx11::string &)" was declared but never referenced
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_acos_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_affine_channel_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_arg_ops.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_asin_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_assert_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_atan_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_batch_gather_ops.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_batch_matmul_op.cu.o
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_batch_moments_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_boolean_mask_ops.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_boolean_unmask_ops.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_cast_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_cbrt_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_ceil_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_channel_backprop_stats_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_channel_shuffle_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_channel_stats_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_clip_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_copy_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_cos_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_cosh_op.cu.o
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(66): warning: function "caffe2::math::<unnamed>::AddFunctor<T>::operator() [with T=double]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(66): warning: function "caffe2::math::<unnamed>::AddFunctor<T>::operator() [with T=float]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(66): warning: function "caffe2::math::<unnamed>::AddFunctor<T>::operator() [with T=int64_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(66): warning: function "caffe2::math::<unnamed>::AddFunctor<T>::operator() [with T=int32_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(66): warning: function "caffe2::math::<unnamed>::AddFunctor<c10::Half>::operator()" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(67): warning: function "caffe2::math::<unnamed>::SubFunctor<T>::operator() [with T=double]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(67): warning: function "caffe2::math::<unnamed>::SubFunctor<T>::operator() [with T=float]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(67): warning: function "caffe2::math::<unnamed>::SubFunctor<T>::operator() [with T=int64_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(67): warning: function "caffe2::math::<unnamed>::SubFunctor<T>::operator() [with T=int32_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(67): warning: function "caffe2::math::<unnamed>::SubFunctor<c10::Half>::operator()" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(68): warning: function "caffe2::math::<unnamed>::MulFunctor<T>::operator() [with T=double]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(68): warning: function "caffe2::math::<unnamed>::MulFunctor<T>::operator() [with T=float]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(68): warning: function "caffe2::math::<unnamed>::MulFunctor<T>::operator() [with T=int64_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(68): warning: function "caffe2::math::<unnamed>::MulFunctor<T>::operator() [with T=int32_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(68): warning: function "caffe2::math::<unnamed>::MulFunctor<c10::Half>::operator()" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(69): warning: function "caffe2::math::<unnamed>::DivFunctor<T>::operator() [with T=double]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(69): warning: function "caffe2::math::<unnamed>::DivFunctor<T>::operator() [with T=float]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(69): warning: function "caffe2::math::<unnamed>::DivFunctor<T>::operator() [with T=int64_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(69): warning: function "caffe2::math::<unnamed>::DivFunctor<T>::operator() [with T=int32_t]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(69): warning: function "caffe2::math::<unnamed>::DivFunctor<c10::Half>::operator()" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(1866): warning: function "caffe2::math::<unnamed>::FloatTransform<T>::operator() [with T=c10::Half]" was declared but never referenced
/usr/local/src/pytorch/caffe2/utils/math_gpu.cu(1899): warning: function "caffe2::math::<unnamed>::SqrTransform<T>::operator() [with T=float]" was declared but never referenced
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_cosine_embedding_criterion_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_cross_entropy_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_cube_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_data_couple_gpu.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_deform_conv_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_distance_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_dropout_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elementwise_div_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elementwise_linear_op.cu.o
[ 68%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elementwise_mul_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elementwise_ops.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_elu_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_enforce_finite_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_ensure_cpu_output_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_filler_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_find_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_floor_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_gather_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_given_tensor_byte_string_to_uint8_fill_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_given_tensor_fill_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_glu_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_group_norm_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_gru_unit_op_gpu.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_half_float_ops.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_hard_sigmoid_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_instance_norm_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_integral_image_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_layer_norm_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_leaky_relu_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_lengths_pad_op.cu.o
[ 69%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_lengths_tile_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_local_response_normalization_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_logit_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_loss_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_lp_pool_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_lstm_unit_op_gpu.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_margin_ranking_criterion_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_max_pool_with_index.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_mean_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_mem_query_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_moments_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_multi_class_accuracy_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_normalize_ops.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_one_hot_ops.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_pack_segments.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_pad_op_gpu.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_perplexity_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_piecewise_linear_transform_op.cu.o
/usr/local/src/pytorch/caffe2/operators/mem_query_op.cu(9): warning: function "caffe2::<unnamed>::GetGPUMemoryUsageOp::GetGPUMemoryUsageOp(const caffe2::OperatorDef &, caffe2::Workspace *)" was declared but never referenced
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_pool_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_pow_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_prelu_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_reciprocal_op.cu.o
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_reduce_front_back_max_ops.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_reduce_front_back_sum_mean_ops.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_reduce_ops.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_reduction_ops.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_relu_n_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_relu_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_replace_nan_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_resize_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_reverse_packed_segs_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_rmac_regions_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_roi_align_gradient_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_roi_align_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_roi_align_rotated_gradient_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_roi_align_rotated_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_roi_pool_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_rsqrt_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_segment_reduction_op_gpu.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_selu_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sequence_ops.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sigmoid_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sin_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sinh_op.cu.o
[ 71%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_slice_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_softmax_ops.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_softplus_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_softsign_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_space_batch_op_gpu.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sparse_normalize_op_gpu.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_sparse_to_dense_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_spatial_batch_norm_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_stump_func_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_summarize_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_swish_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_tan_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_tanh_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_thresholded_relu_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_tile_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_top_k.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_transpose_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_unique_ops.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_upsample_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_utility_ops.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/caffe2_gpu_generated_weighted_sample_op.cu.o
[ 72%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/operators/rnn/caffe2_gpu_generated_recurrent_network_op_gpu.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_adadelta_op_gpu.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_adagrad_op_gpu.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_adam_op_gpu.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_fp16_momentum_sgd_op.cu.o
/usr/local/src/pytorch/caffe2/operators/summarize_op.cu(37): warning: function "caffe2::<unnamed>::summary_stats_unary_op<T>::operator() [with T=float]" was declared but never referenced
/usr/local/src/pytorch/caffe2/operators/summarize_op.cu(57): warning: function "caffe2::<unnamed>::summary_stats_binary_op<T>::operator() [with T=float]" was declared but never referenced
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_fp32_momentum_sgd_op.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_lars_op_gpu.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_momentum_sgd_op_gpu.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_rmsprop_op_gpu.cu.o
[ 73%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/caffe2_gpu_generated_yellowfin_op_gpu.cu.o
Scanning dependencies of target caffe2_gpu
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/AffineGridGenerator.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/BatchNorm.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/RNN.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/LossCTC.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/Conv.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/miopen/BatchNorm_miopen.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/GridSampler.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/miopen/Conv_miopen.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/CUDAContext.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/sparse/cuda/SparseCUDATensor.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/CUDAGenerator.cpp.o
[ 73%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/detail/CUDAGuardImpl.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/CUDATypeDefault.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/CUDAStream.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDAByteType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/PinnedMemoryAllocator.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cuda/detail/CUDAHooks.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/CUDAUnaryOps.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDACharType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDACopy.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDADoubleType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDAFloatType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDAHalfType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDAIntType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDALongType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/CUDAShortType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/RegisterCUDA.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/SparseCUDAByteType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/SparseCUDACharType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/SparseCUDADoubleType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/SparseCUDAFloatType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/SparseCUDAIntType.cpp.o
[ 74%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/SparseCUDALongType.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/SparseCUDAShortType.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCCachingAllocator.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCCachingHostAllocator.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCGeneral.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCStorageCopy.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCStream.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCTensor.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCTensorCopy.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/THCTensorRandom.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cudnn/Descriptors.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cudnn/Handle.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/cudnn/Types.cpp.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/aten/aten_op_cuda.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/gloo/allreduce_ops_gpu.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/gloo/broadcast_ops_gpu.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/gloo/common_world_ops_gpu.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/nccl/cuda_nccl_gpu.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/contrib/nccl/cuda_nccl_op_gpu.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/core/common_cudnn.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/core/blob_serialization_gpu.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/core/common_gpu.cc.o
[ 75%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/core/event_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/core/net_async_dag_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/db/create_db_op_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/distributed/file_store_handler_op_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/mpi/mpi_ops_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/conv_op_cache_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/conv_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/conv_transpose_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/dropout_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/elu_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/local_response_normalization_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/order_switch_ops_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/relu_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/sigmoid_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/softmax_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/tanh_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/transpose_op_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/utility_ops_cudnn.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/communicator_op_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/concat_split_op_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/conv_op_gpu.cc.o
[ 76%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/conv_op_shared_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/conv_transpose_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/counter_ops_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/do_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/elementwise_add_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/elementwise_sub_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/exp_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/expand_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/expand_squeeze_dims_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/free_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/fully_connected_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/if_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/im2col_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/load_save_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/locally_connected_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/log_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/matmul_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/negate_gradient_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/negative_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/order_switch_ops_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/prepend_dim_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/reshape_op_gpu.cc.o
[ 77%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/scale_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/shape_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/sqr_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/sqrt_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/stop_gradient_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/tensor_protos_db_input_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/while_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/zero_gradient_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/rnn/recurrent_op_cudnn.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/rnn/recurrent_network_blob_fetcher_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/rnn/recurrent_network_executor_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/queue/queue_ops_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/iter_op_gpu.cc.o
[ 78%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/sgd/learning_rate_op_gpu.cc.o
[ 78%] Linking CXX shared library ../lib/libcaffe2_gpu.so
[ 78%] Built target caffe2_gpu
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn --use-qnnpack caffe2'
``` | module: build,triaged | medium | Critical |
381,907,484 | create-react-app | Create-react-app test debug not stopping at breakpoint | I am really not sure what I am doing wrong here. I am trying to write unit tests and am using Enzyme and Jest (obviously). Here is what my package.json looks like
```
{
"name": "balena-io-test",
"version": "0.1.0",
"private": true,
"dependencies": {
....
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"test:debug": "react-scripts --inspect-brk test --runInBand",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": [
">0.2%",
"not dead",
"not ie <= 11",
"not op_mini all"
],
"devDependencies": {
"enzyme": "^3.7.0",
"enzyme-adapter-react-16": "^1.7.0",
"jest": "^23.6.0",
"jest-mock": "^23.2.0",
"react-scripts": "^2.1.1",
"redux-mock-store": "^1.5.3"
}
}
```
Now tests are running just fine with `npm test`, and I am trying to debug with `npm run test:debug`.
Initially react-scripts breaks at the first line, which is what I would expect, but after that Jest just runs without stopping at any `debugger` breakpoints. I also cannot see any of the .test files in Chrome. I am not sure if it is my environment that is to blame or something to do with create-react-app.
Here is a test I am trying to debug
```
describe('<LightingDevices>', ()=> {
let wrapper;
beforeAll(()=>{
debugger; <------ should stop here
const updateDevice = jest.fn();
const setUpAllLights = jest.fn();
wrapper = shallow(<LightingDevices updateDevice={updateDevice} setUpAllLights={setUpAllLights}/>);
});
....
```
| issue: bug,issue: needs investigation | low | Critical |
381,918,512 | rust | Lint: spurious unused_parens on assignment of comparison | The following code triggers the `unused_parens` lint, but without the parens the equality operator can be (visually) confused with an assignment.
```rust
fn main()
{
let bar = "";
let _foo = (bar == "baz");
}
``` | C-enhancement,A-lints,A-diagnostics,T-compiler,L-unused_parens | low | Minor |
381,941,998 | rust | 2018 idioms: rustc-internal crates linted to be removed when they can't | Discovered in https://github.com/rust-lang/cargo/issues/6323
```rust
#![feature(extern_crate_item_prelude)]
#![feature(rustc_private)]
#![warn(rust_2018_idioms)]
extern crate syntax;
#[allow(unused_imports)]
use syntax::ast;
fn main() {}
```
yields:
```
warning: unused extern crate
--> src/main.rs:5:1
|
5 | extern crate syntax;
| ^^^^^^^^^^^^^^^^^^^^ help: remove it
|
note: lint level defined here
--> src/main.rs:3:9
|
3 | #![warn(rust_2018_idioms)]
| ^^^^^^^^^^^^^^^^
= note: #[warn(unused_extern_crates)] implied by #[warn(rust_2018_idioms)]
```
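Until that is resolved, one workaround is to silence the lint on the item itself rather than disabling the whole `rust_2018_idioms` group. A minimal stable-Rust sketch of the pattern (using `core` as a stand-in crate, since `syntax`/`rustc_private` require nightly):

```rust
#![warn(rust_2018_idioms)]

// `core` stands in here for a crate that must remain an `extern crate`
// item (such as a rustc-private crate); the allow silences only this item.
#[allow(unused_extern_crates)]
extern crate core;

use core::num::Wrapping;

fn main() {
    // Wrapping arithmetic just to give the crate a real use.
    let x = Wrapping(255u8) + Wrapping(1);
    assert_eq!(x.0, 0);
    println!("wrapped: {}", x.0);
}
```

The same `#[allow(unused_extern_crates)]` attribute on the `extern crate syntax;` item should suppress the warning while keeping the rest of the idiom lints active.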
This may be fixed by https://github.com/rust-lang/rust/pull/55884, but it's not clear to me at least | A-lints,E-needs-test,A-suggestion-diagnostics,A-edition-2018 | low | Minor |
381,952,173 | vue-element-admin | complex-table loading time | complex-table takes a long time to load when there are many records, say 100,000.
You can change `const count = 100000` in /mock/article.js to reproduce this issue.
| not vue-element-admin bug | low | Minor |
381,969,678 | godot | Option to enable key frame diamonds in Animation Editor | I love the new changes/updates in the Animation Editor. The one little quirk that makes it hard for me is the overlay images on keyframes.
For example:

It makes it really hard/confusing to read the timeline when you go back to edit, because you don't really know where the keyframes are. Either you have to zoom in quite a bit, or click on the keyframe image overlay to see the edges. The left side of the rectangular image overlay is where the actual keyframe is at.
It was much easier (IMO) to see them with the little diamonds instead. My suggestion is simply to make this an option, maybe under Editor Settings -> Editors -> Animation. | enhancement,topic:editor,usability,topic:animation | low | Minor |
381,972,164 | angular | fakeAsync does not fail with pending microtasks |
# 📚 Docs or angular.io bug report
### Description
The testing guide states that `flushMicrotasks` should be used when pending microtasks are expected at the end of a test, to prevent an error from being thrown. What `fakeAsync` [actually does](https://github.com/angular/angular/blob/4c2ce4e8ba4c5ac5ce8754d67bc6603eaad4564a/packages/core/testing/src/fake_async_fallback.ts#L78) is call `flushMicrotasks` after running the test body, and then check only for pending timers.
## 🔬 Minimal Reproduction
### What's the affected URL?
https://angular.io/guide/testing#testing-utility-apis
| type: bug/fix,area: testing,freq2: medium,P3,docsarea: testing,area: docs | low | Critical |
381,978,696 | go | cmd/go/internal/modfetch: TestCodeRepo failures due to external repo changes | ```
$ gotip version
go version devel +fe562cebf1 Sun Nov 18 16:13:13 2018 +0000 linux/amd64
```
To reproduce:
```
$ cd src/cmd/go/internal/modfetch
$ gotip test -run=TestCodeRepo
--- FAIL: TestCodeRepo (58.16s)
--- FAIL: TestCodeRepo/gopkg.in_yaml.v2/v2 (3.32s)
coderepo_test.go:368: info.Version = "v2.2.2-0.20181115110504-51d6538a90f8", want "v2.2.1"
coderepo_test.go:371: info.Name = "51d6538a90f86fe93ac480b35f37b2be17fef232", want "5420a8b6744d3b0345ab293f6fcba19c978f1183"
coderepo_test.go:374: info.Short = "51d6538a90f8", want "5420a8b6744d"
coderepo_test.go:377: info.Time = 2018-11-15 11:05:04 +0000 UTC, want 2018-03-28 19:50:20 +0000 UTC
FAIL
exit status 1
FAIL cmd/go/internal/modfetch 60.812s
```
[Spotted on the longtest builder](https://build.golang.org/log/27113d3c49cc2b99a1b440d14504d4831f30c999). | Testing,NeedsFix,GoCommand | medium | Critical |
381,982,989 | vscode | Indent/Outdent with tab key does not honor editor.autoIndent=false | Issue Type: <b>Bug</b>
When I have:
/*
* Comment
*/
If I select it and hit tab, I get:
/*
* Comment
*/
If instead I had hit shift-tab, I get:
/*
* Comment
*/
Same happens with `Ctrl-]` and `Ctrl-[` *(if those are supposed to make a difference)*
I hoped turning off autoIndent would stop this, but no dice. I also turned off C++ formatting in the JSON config:
{
"editor.autoIndent": false,
"editor.detectIndentation": false,
"C_Cpp.formatting": "Disabled"
}
I have done due diligence by asking (and bountying, +100) a question on StackOverflow to get an answer on what to do about this, but no one seems to know:
https://stackoverflow.com/q/53326134/preserve-spacing-on-indent-or-outdent-with-tab-in-vscode
If someone can provide a resolution here, be sure to go there and get the bounty points.
VS Code version: Code 1.29.1 (bc24f98b5f70467bc689abf41cc5550ca637088e, 2018-11-15T19:13:36.375Z)
OS version: Windows_NT x64 10.0.17134
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 x 2808)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: enabled<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled|
|Memory (System)|15.86GB (1.40GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (3)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-github|Kni|0.30.0
python|ms-|2018.10.1
cpptools|ms-|0.20.1
</details>
<!-- generated by issue reporter --> | typescript,javascript,editor-autoindent,under-discussion,on-unit-test | medium | Critical |
381,996,313 | flutter | Support running on ChromeOS emulator (Pixelbook_beta_API_25) | I am hoping the team could consider supporting the new Chrome OS emulator ?
My app is on the Android and IOS play store and i want to get it tested for Chrome OS.
I can launch the ChromeOS emulator, but Flutter refuses to see that its running.
**My Log:**
````
flutter emulators --launch Pixelbook_beta_API_25
cd ./example && flutter run
flutter emulators --launch Pixelbook_beta_API_25
cd /Users/apple/workspace/go/src/github.com/zino-app/graphql-flutter/example && flutter run
No connected devices.
Run 'flutter emulators' to list and start any available device emulators.
If you expected your device to be detected, please run "flutter doctor" to diagnose
potential issues, or visit https://flutter.io/setup/ for troubleshooting tips.
make: *** [ex-chromeos-run] Error 1
````
**My Env:**
````
x-MacBook-Pro:~ apple$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel dev, v0.11.6, on Mac OS X 10.14.1 18B75, locale en-DE)
[✓] Android toolchain - develop for Android devices (Android SDK 27.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
[✓] Android Studio (version 3.2)
[✓] VS Code (version 1.29.1)
[!] Connected device
x-MacBook-Pro:~ apple$ flutter emulators
3 available emulators:
Pixel_2_API_28 • pixel_2 • Google • Pixel 2 API 28
Pixelbook_beta_API_25 • Pixelbook (beta) • Google • Pixelbook (beta) API 25
apple_ios_simulator • iOS Simulator • Apple
````
**To Reproduce:**
1. Install the emulator in Android Studio.
https://developer.android.com/topic/arc/emulator
2. Make a simple Flutter example
3. Fire it up as per my Log:
| tool,platform-chromebook,P2,team-tool,triaged-tool | low | Critical |
382,012,714 | TypeScript | chained generic mixins fail |
**TypeScript Version:** 3.2.0-dev.201xxxxx
**Search Terms:** chained generic mixins
**Code**
```ts
type Constructor<T> = new(...args: any[]) => T;
interface XInterface {
value: string;
}
function make<T extends Constructor<any>>(Base: T) {
return class extends Base implements XInterface {
value = "Hello!"
}
}
// Great. Works.
class _X1 extends make(Function) { hello() { this.value; }}
type _T1 = Constructor<HTMLElement>;
// Oops. But changing `make(Base)`, to `make(Base as _T1)` works perfectly,
// which makes no sense.
function make2<T extends _T1>(Base: T) {
return class extends make(Base) { }
}
```
**Expected behavior:**
Compile.
**Actual behavior:**
Type '{ new (...args: any[]): make<T>.(Anonymous class); prototype: make<any>.(Anonymous class); } & T' is not a constructor function type.
| Suggestion,In Discussion | low | Critical |
382,022,071 | rust | Enable strict HANDLE checking for all Windows Rust programs | To help protect against bugs in unsafe or third-party code, the Rust compiler should emit code to enable strict `HANDLE` checking for all Windows Rust programs. The process will receive a fatal error if it manipulates a `HANDLE` that is not valid, such as using an uninitialized `HANDLE` or calling `CloseHandle` twice.
See MSDN for `SetProcessMitigationPolicy` and `PROCESS_MITIGATION_STRICT_HANDLE_CHECK_POLICY`:
https://docs.microsoft.com/en-us/windows/desktop/api/processthreadsapi/nf-processthreadsapi-setprocessmitigationpolicy
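As a sketch of what compiler-emitted startup code would compute: the bitfield layout below is my reading of the Windows SDK headers (bit 0 is `RaiseExceptionOnInvalidHandleReference`, bit 1 is `HandleExceptionsPermanentlyEnabled`), and the actual `SetProcessMitigationPolicy` call is Windows-only, so it is only described in comments to keep the sketch portable:

```rust
// Assumed layout of PROCESS_MITIGATION_STRICT_HANDLE_CHECK_POLICY: a u32
// bitfield (per the Windows SDK headers; treat as illustrative).
const RAISE_EXCEPTION_ON_INVALID_HANDLE_REFERENCE: u32 = 1 << 0;
const HANDLE_EXCEPTIONS_PERMANENTLY_ENABLED: u32 = 1 << 1;

/// The flag value that startup code would set before `main` runs.
fn strict_handle_check_flags() -> u32 {
    RAISE_EXCEPTION_ON_INVALID_HANDLE_REFERENCE | HANDLE_EXCEPTIONS_PERMANENTLY_ENABLED
}

fn main() {
    // On Windows this value would go into the policy struct passed to
    // SetProcessMitigationPolicy with ProcessStrictHandleCheckPolicy;
    // here we only compute and display it.
    println!("flags = {:#06b}", strict_handle_check_flags());
}
```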
Strict `HANDLE` checking might cause compatibility problems for Rust programs that depends on third-party libraries that misuse `HANDLE`s. As a general rule, strict `HANDLE` checking cannot be turned off once it is turned on, so there would be no backdoor to allow sloppy third-party code to run without raising a `HANDLE` exception. If that compatibility constraint is too severe, strict `HANDLE` checking could be limited to debug builds or disabled with an opt-out compiler flag.
Here is how Firefox enables strict `HANDLE` checking for its sandbox processes:
https://searchfox.org/mozilla-central/rev/5117a4c4e29fcf80a627fecf899a62f117368abf/security/sandbox/chromium/sandbox/win/src/process_mitigations.cc#120-131 | A-runtime,O-windows,C-enhancement,A-security,T-libs-api | low | Critical |
382,023,878 | rust | Restrict Windows DLL search path as a precaution against DLL pre-loading attacks | Windows' standard DLL search path contains directories that can be vulnerable to DLL pre-loading attacks. An application can use the `SetDefaultDllDirectories` API to specify a default DLL search path for the process that eliminates the most vulnerable directories and limits the other directories that are searched.
For example, as a precaution, Firefox removes the current directory from the DLL search path and then restricts the DLL search path to the application's installation directory, the Windows system directory, and any paths explicitly added using the `AddDllDirectory` or `SetDllDirectory` APIs.
https://searchfox.org/mozilla-central/rev/5117a4c4e29fcf80a627fecf899a62f117368abf/toolkit/mozapps/update/updater/loaddlls.cpp#15-30
https://searchfox.org/mozilla-central/rev/5117a4c4e29fcf80a627fecf899a62f117368abf/security/sandbox/chromium/sandbox/win/src/process_mitigations.cc#46-58
To help protect against DLL pre-loading attacks, the Rust compiler could emit similar code to restrict its DLL search path for all Windows Rust programs. Changing the DLL search path could cause compatibility problems for Windows Rust programs that assume they can implicitly load DLLs in the current directory without explicitly configuring their DLL search path. The workaround is for those programs to configure their DLL search path using the the `AddDllDirectory` or `SetDllDirectory` APIs.
See MSDN for `SetDefaultDllDirectories`:
https://docs.microsoft.com/en-us/windows/desktop/api/libloaderapi/nf-libloaderapi-setdefaultdlldirectories
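A sketch of the restricted search-path flags such startup code might combine for `SetDefaultDllDirectories`. The constant values are my assumption from `winbase.h`, and the Windows-only API call itself is left as a comment so the sketch stays portable:

```rust
// Assumed values from winbase.h; illustrative, not authoritative.
const LOAD_LIBRARY_SEARCH_APPLICATION_DIR: u32 = 0x0000_0200;
const LOAD_LIBRARY_SEARCH_USER_DIRS: u32 = 0x0000_0400;
const LOAD_LIBRARY_SEARCH_SYSTEM32: u32 = 0x0000_0800;

/// The restricted search path: the application's directory, System32, and
/// any directories explicitly added via AddDllDirectory/SetDllDirectory.
/// Notably, the current working directory is excluded.
fn restricted_dll_search_flags() -> u32 {
    LOAD_LIBRARY_SEARCH_APPLICATION_DIR
        | LOAD_LIBRARY_SEARCH_SYSTEM32
        | LOAD_LIBRARY_SEARCH_USER_DIRS
}

fn main() {
    // On Windows this value would be passed to SetDefaultDllDirectories;
    // the call is omitted here.
    println!("flags = {:#x}", restricted_dll_search_flags());
}
```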
| O-windows,C-enhancement,A-security | low | Major |
382,028,521 | vscode | Git - Staging when ignoring whitespace, doesn't stage changed whitespace | Issue Type: <b>Bug</b>
1. Indent several lines of code, and save. These should appear as changes in Git.
2. Click on "Open changes".
3. Ensure whitespace changes are hidden (which is the default).
4. Select the indented lines, and stage them with "Stage selected ranges".
5. Nothing is staged. Even though whitespace is hidden, shouldn't changes like indentation be staged if they're within the selected range? Otherwise I end up with lines like `if (true)` staged, since it's a text change, but the block contained within its braces not staged, since it's just an indentation change.
VS Code version: Code 1.29.1 (bc24f98b5f70467bc689abf41cc5550ca637088e, 2018-11-15T19:06:21.742Z)
OS version: Darwin x64 18.0.0
<!-- generated by issue reporter --> | bug,git | low | Critical |
382,031,768 | rust | Missing optimization with signed pointer offset | I am trying to elide the pointer offset of a slice indexing operation.
I tried this code:
```rust
pub fn index(table: &[u128; 4], idx: i32) -> u128 {
table[(idx as usize & 0b11_0000) >> 4]
}
```
with `RUST_BACKTRACE=full RUSTFLAGS='--emit=asm' cargo build --release`.
I expected to see this happen:
```asm
example::index:
and esi, 48
mov rax, qword ptr [rsi + rdi]
mov rdx, qword ptr [rsi + rdi + 8]
ret
```
(selects two bits, already in the pointer offset position)
Instead, this happened:
```asm
example::index:
shr esi, 4
and esi, 3
shl rsi, 4
mov rax, qword ptr [rdi + rsi]
mov rdx, qword ptr [rdi + rsi + 8]
ret
```
A godbolt link for comparison with an unsafe version which does apply the optimization: https://godbolt.org/z/0QsA3z
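For reference, a reconstruction of the kind of unchecked variant being compared against (not necessarily the exact code behind the godbolt link). The mask already restricts the index to `0..=3`, which is precisely why the bounds check and the shift round-trip ought to be optimizable away in the safe version too:

```rust
pub fn index(table: &[u128; 4], idx: i32) -> u128 {
    table[(idx as usize & 0b11_0000) >> 4]
}

/// Unchecked variant: sound here only because the mask limits the
/// computed index to 0..=3, which is always in bounds for a [u128; 4].
pub fn index_unchecked(table: &[u128; 4], idx: i32) -> u128 {
    unsafe { *table.get_unchecked((idx as usize & 0b11_0000) >> 4) }
}

fn main() {
    let t = [10u128, 20, 30, 40];
    // The two variants must agree everywhere, including negative indices
    // (sign extension in `as usize` is neutralized by the mask).
    for idx in -64..64 {
        assert_eq!(index(&t, idx), index_unchecked(&t, idx));
    }
    println!("both variants agree");
}
```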
## Meta
```bash
rustc --version --verbose
rustc 1.32.0-nightly (6b9b97bd9 2018-11-15)
binary: rustc
commit-hash: 6b9b97bd9b704f85f0184f7a213cc4d62bd9654c
commit-date: 2018-11-15
host: x86_64-apple-darwin
release: 1.32.0-nightly
LLVM version: 8.0
```
Backtrace:
none
| A-LLVM,I-slow,T-compiler | low | Major |
382,034,339 | rust | Run proc macro invocations in separate threads. | #49219 introduces `proc_macro::bridge::server::{ExecutionStrategy,SameThread,CrossThread}`.
`SameThread` had to be used because `CrossThread` was a significant performance regression.
Ideally, we'd use `CrossThread`, which spawns a thread for each invocation, to prevent (and discourage) proc macros from using TLS for state between invocations.
But we'd have to figure out how to make it not regress performance too much, if at all possible.
(it could be the use of channels, which are overkill since they never have more than one value)
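To illustrate the "channels are overkill" point: a one-shot slot built from `Mutex` + `Condvar` is enough for a value that is set once and taken once. This is only a sketch of a possible replacement primitive, not the actual bridge code:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// One-shot slot: holds at most one value, matching a "never more than
/// one value in flight" usage pattern.
struct OneShot<T> {
    slot: Mutex<Option<T>>,
    cv: Condvar,
}

impl<T> OneShot<T> {
    fn new() -> Self {
        OneShot { slot: Mutex::new(None), cv: Condvar::new() }
    }

    fn put(&self, value: T) {
        *self.slot.lock().unwrap() = Some(value);
        self.cv.notify_one();
    }

    fn take(&self) -> T {
        let mut guard = self.slot.lock().unwrap();
        // Loop to tolerate spurious wakeups from the condvar.
        loop {
            if let Some(v) = guard.take() {
                return v;
            }
            guard = self.cv.wait(guard).unwrap();
        }
    }
}

fn main() {
    let cell = Arc::new(OneShot::new());
    let producer = Arc::clone(&cell);
    let handle = thread::spawn(move || producer.put(42u32));
    assert_eq!(cell.take(), 42);
    handle.join().unwrap();
    println!("got value across threads");
}
```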
cc @dtolnay @alexcrichton @petrochenkov
<hr/>
(**TODO**: update with regressions, if any, after the cross-thread crater run finishes on #49219) | C-enhancement,I-compiletime,A-macros,T-compiler | medium | Major |
382,045,270 | flutter | Pixel-snapping in font rendering leads to jittery font scaling animations even when using optical sizing | 
This is a very strange issue: the tab labels look like they're shaking/jittering, and I think my code logic is fine.
```dart
class DataConfig {
static int dataTargetIndex = 0;
}
const List<String> _titleList = [
DataScreenType.basic,
DataScreenType.promotion
];
class _DataViewModel {
String targetDataTitle;
_DataViewModel(this.targetDataTitle);
static _DataViewModel fromStore(Store<AppState> store) {
return _DataViewModel(store.state.manager.targetDataTitle);
}
@override
int get hashCode => targetDataTitle.hashCode;
@override
bool operator ==(other) =>
identical(this, other) ||
other is _DataViewModel && targetDataTitle == other.targetDataTitle;
}
class DataScreen extends StatefulWidget {
static const String routeName = '/data/data_screen';
const DataScreen({Key key}) : super(key: key);
@override
State<StatefulWidget> createState() {
return _DataScreenState();
}
}
class _DataScreenState extends State<DataScreen>
with SingleTickerProviderStateMixin {
TabController _tabController;
@override
void dispose() {
_tabController.dispose();
super.dispose();
}
@override
void initState() {
super.initState();
DataConnectApi.promotionData(
intervalType: StoreContainer.access.interval,
isBusinessGroup: StoreContainer.access.isBusinessGroup);
DataConnectApi.basicData(DateTime.now(), StoreContainer.access.interval);
_tabController = TabController(length: _titleList.length, vsync: this);
_tabController.index = DataConfig.dataTargetIndex;
_tabController.addListener(() {
DataConfig.dataTargetIndex = _tabController.index;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: TitleBar(
hasBackward: false,
child: DataCommon.buildBusinessList(context, BusinessListType.data),
action: DataCommon.buildDataTitleBarAction(context,
isDetailScreen: false, onDateChanged: () {
DataConnectApi.promotionData(
intervalType: StoreContainer.access.interval,
isBusinessGroup: StoreContainer.access.isBusinessGroup);
DataConnectApi.basicData(
DateTime.now(), StoreContainer.access.interval);
})),
body: _buildDataScreenBody(context));
}
Widget _buildDataScreenBody(BuildContext context) {
return StoreConnector<AppState, _DataViewModel>(
distinct: true,
converter: (store) => _DataViewModel.fromStore(store),
builder: (context, vm) {
if (vm.targetDataTitle.isNotEmpty) {
_tabController.index =
_titleList.indexWhere((item) => item == vm.targetDataTitle);
StoreContainer.global
.dispatch(SwitchBusinessDataAction(targetDataTitle: ''));
}
return Column(children: <Widget>[
Container(
color: HColors.primary,
height: 30.0,
width: double.infinity,
child: Align(
alignment: Alignment.center,
child: TabBar(
controller: _tabController,
indicatorSize: TabBarIndicatorSize.label,
indicatorColor: Colors.white,
indicatorWeight: HStyles.borderStyle.borderWidth,
isScrollable: true,
labelColor: Colors.white,
labelStyle: TextStyle(fontSize: HFontSizes.medium),
unselectedLabelColor: Colors.white.withOpacity(0.5),
unselectedLabelStyle:
TextStyle(fontSize: HFontSizes.normal),
tabs: _titleList
.map((test) => Tab(text: test))
.toList()))),
Expanded(
child: TabBarView(
controller: _tabController,
physics: NeverScrollableScrollPhysics(),
children: [DataBasic(), DataCampaign()]))
]);
});
}
}
```
flutter doctor -v
```
[✓] Flutter (Channel beta, v0.11.3, on Mac OS X 10.14 18A391, locale en-CN)
• Flutter version 0.11.3 at /Users/dpuntu/flutter
• Framework revision 72bf075e8d (9 days ago), 2018-11-09 20:36:17 -0800
• Engine revision 5646e86a6f
• Dart version 2.1.0 (build 2.1.0-dev.9.3 9c07fb64c4)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
• Android SDK at /Users/dpuntu/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• ANDROID_HOME = /Users/dpuntu/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.1, Build version 10B61
• ios-deploy 1.9.2
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 28.0.1
• Dart plugin version 173.4700
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[!] IntelliJ IDEA Community Edition (version 2017.3)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• For information about installing plugins, see
https://flutter.io/intellij-setup/#installing-the-plugins
[✓] VS Code (version 1.28.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 2.19.0
[✓] Connected device (2 available)
• Redmi 6A • 4e2396217d22 • android-arm • Android 8.1.0 (API 27)
• iPhone XS • 05E5BE17-A9E5-4830-8699-E28E09B7772D • ios • iOS 12.1 (simulator)
```
| engine,a: animation,f: material design,a: typography,has reproducible steps,P2,found in release: 1.22,found in release: 3.1,team-engine,triaged-engine | low | Critical |
382,067,150 | opencv | ‘memcpy’ was not declared in this scope (Ubuntu 16.04, opencv2.4.13) | I'm trying to install OpenCV 2.4.13 on Ubuntu 16.04 but get an error like this when running the `make` command
```
[ 1%] Built target opencv_core_pch_dephelp
[ 1%] Built target pch_Generate_opencv_core
[ 2%] Building NVCC (Device) object modules/core/CMakeFiles/cuda_compile.dir/__/dynamicuda/src/cuda/cuda_compile_generated_matrix_operations.cu.o
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;
^
CMake Error at cuda_compile_generated_matrix_operations.cu.o.cmake:266 (message):
Error generating file
/home/savvycom/opencv-2.4.13.6/release/modules/core/CMakeFiles/cuda_compile.dir/__/dynamicuda/src/cuda/./cuda_compile_generated_matrix_operations.cu.o
modules/core/CMakeFiles/opencv_core.dir/build.make:198: recipe for target 'modules/core/CMakeFiles/cuda_compile.dir/__/dynamicuda/src/cuda/cuda_compile_generated_matrix_operations.cu.o' failed
make[2]: *** [modules/core/CMakeFiles/cuda_compile.dir/__/dynamicuda/src/cuda/cuda_compile_generated_matrix_operations.cu.o] Error 1
CMakeFiles/Makefile2:890: recipe for target 'modules/core/CMakeFiles/opencv_core.dir/all' failed
make[1]: *** [modules/core/CMakeFiles/opencv_core.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
```
My cmake configuration is:
```
cmake \
-D CMAKE_BUILD_TYPE=Release \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D CUDA_GENERATION=Kepler \
-D BUILD_LIBPROTOBUF_FROM_SOURCES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D PYTHON2_EXECUTABLE=/usr/bin/python2.7 \
-D PYTHON_INCLUDE_DIR=/usr/include/python2.7 \
-D PYTHON_INCLUDE_DIR2=/usr/include/x86_64-linux-gnu/python2.7 \
-D PYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython2.7.so \
-D PYTHON2_NUMPY_INCLUDE_DIRS=/usr/lib/python2.7/dist-packages/numpy/core/include/ \
-D BUILD_EXAMPLES=ON ..
```
I've tried the solution suggested in https://github.com/opencv/opencv/issues/6500#issuecomment-221762139 but it doesn't work.
Can anyone help me, please? | bug,priority: low,category: build/install,affected: 2.4,category: gpu/cuda (contrib) | low | Critical |
382,067,403 | kubernetes | Ability to adjust number of inodes of filesystems |
**What would you like to be added**:
I want to add a feature to adjust number of inodes (or customized mkfs options) for block PVs.
**Why is this needed**:
Sometimes, people need to store a large number of small files (typically, images for data mining); the default calculation of the number of inodes in ext3/ext4 may be too small.
For ext3/ext4 filesystems, they allocate inodes at creation:
- `-i bytes-per-inode`: Specify the bytes/inode ratio.
- `-I inode-size`: Specify the size of each inode in bytes.
- `-N number-of-inodes`: Overrides the default calculation of the number of inodes that should be reserved for the filesystem (which is based on the number of blocks and the bytes-per-inode ratio)
For other filesystems this is filesystem-specific: e.g. `btrfs` has dynamic inode allocation and `xfs` has a space limit (a percentage of the filesystem).
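To see how quickly the default ratio runs out, a back-of-the-envelope sketch (16384 bytes-per-inode is the usual ext4 `inode_ratio` from mke2fs.conf, but that is an assumption about your distribution's defaults):

```go
package main

import "fmt"

func main() {
	const (
		fsBytes       = 100 << 30 // a 100 GiB volume
		bytesPerInode = 16384     // assumed default ext4 inode_ratio
	)
	// Inodes are fixed at mkfs time: with the default ratio a 100 GiB
	// ext4 volume caps out at ~6.5M files, no matter how much space is free.
	fmt.Printf("inodes: %d\n", fsBytes/bytesPerInode)
}
```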
Workaround solution:
- Use other filesystems if possible, e.g. btrfs/xfs.
/kind feature | sig/storage,kind/feature,lifecycle/frozen | low | Major |
382,067,977 | go | cmd/compile: fuse constant comparison with division by a constant | ```go
func ssa(a uint32) uint32 {
	if a/60 > 45 {
		return a
	} else {
		return a - 1
	}
}
```
is compiled to
```
00000 (4) TEXT "".ssa(SB)
00001 (4) FUNCDATA $0, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB)
00002 (4) FUNCDATA $1, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB)
00003 (4) FUNCDATA $3, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB)
v7 00004 (5) PCDATA $2, $0
v7 00005 (5) PCDATA $0, $0
v7 00006 (5) MOVL $2290649225, AX
v12 00007 (5) MOVL "".a(SP), CX
v16 00008 (5) IMULQ CX, AX
v13 00009 (5) SHRQ $37, AX
v8 00010 (5) CMPL AX, $45
b1 00011 (5) JLS 14
v15 00012 (6) MOVL CX, "".~r1+8(SP)
b2 00013 (6) RET
v18 00014 (8) LEAL -1(CX), AX
v21 00015 (8) MOVL AX, "".~r1+8(SP)
b4 00016 (8) RET
00017 (?) END
```
should perhaps instead compile to something equivalent to (`a/60 > 45` first holds at `a = 46*60 = 2760`, so the condition is equivalent to `a > 2759`):
```go
func ssa(a uint32) uint32 {
	if a > 2759 {
		return a
	} else {
		return a - 1
	}
}
```
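The equivalence of the two conditions can be double-checked at the boundary values with a quick sketch:

```go
package main

import "fmt"

func main() {
	// a/60 > 45 first holds at a = 46*60 = 2760, i.e. it is
	// equivalent to a > 2759. Both columns must always agree.
	for _, a := range []uint32{2700, 2759, 2760, 2761} {
		fmt.Printf("a=%d  a/60>45=%v  a>2759=%v\n", a, a/60 > 45, a > 2759)
	}
}
```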
| Performance,compiler/runtime | low | Minor |
382,140,392 | TypeScript | Generic derived value type |
Hey guys.
I can't tell whether I'm doing anything wrong here. I expect to be able to get the exact type from the generic type.
Is this just a limitation of TypeScript, or am I doing something wrong?
**TypeScript Version:** 3.1.3
**Search Terms:**
Generic derived value type
**Code**
```ts
export type filterTypeName = 'price' | 'price_range';
export type priceFilterValue = {
from: number;
to: number;
};
export type priceRangeFilterValue = {
fromRange: number;
toRange: number;
};
export type filterValueType<T extends filterTypeName> =
T extends 'price' ? priceFilterValue
: T extends 'price_range' ? priceRangeFilterValue : never;
type productFilterParameters<T extends filterTypeName> = {
filter: T;
value: filterValueType<T>;
};
function addProductFilter<T extends filterTypeName>(params: productFilterParameters<T>) {
switch (params.filter) {
case 'price':
const priceFrom = params.value.from; //Property 'from' does not exist on type 'filterValueType<T>'
break;
default:
break;
}
}
// parameter value is derived based on filter type... not working in function definition
addProductFilter({ filter: 'price', value: { from: 1, to: 2 } });
//expected behavior something like this
type productFilterParametersCorrect = {
filter: 'price';
value: priceFilterValue;
} | {
filter: 'price_range';
value: priceRangeFilterValue;
}
function addProductFilterCorrect(params: productFilterParametersCorrect) {
switch (params.filter) {
case 'price':
const priceFrom = params.value.from; // correct
break;
case 'price_range':
const priceRangeFrom = params.value.fromRange; // correct
default:
break;
}
}
```
**Expected behavior:**
**Code**
```ts
const value: priceFilterValue
```
**Actual behavior:**
**Code**
```ts
const value: filterValueType<T>
```
**Playground Link:**
[Playground](http://www.typescriptlang.org/play/#src=export%20type%20filterTypeName%20%3D%20'price'%20%7C%20'price_range'%3B%0D%0Aexport%20type%20priceFilterValue%20%3D%20%7B%0D%0A%20%20priceRangeFrom%3A%20number%3B%0D%0A%20%20priceRangeTo%3A%20number%3B%0D%0A%7D%3B%0D%0Aexport%20type%20priceRangeFilterValue%20%3D%20%7B%0D%0A%20%20from%3A%20number%3B%0D%0A%20%20to%3A%20number%3B%0D%0A%7D%3B%0D%0A%0D%0Aexport%20type%20filterValueType%3CT%20extends%20filterTypeName%3E%20%3D%20%20%0D%0A%20%20%20%20T%20extends%20'price'%20%3F%20priceFilterValue%0D%0A%20%20%20%20%3A%20T%20extends%20'price_range'%20%3F%20priceRangeFilterValue%20%3A%20never%3B%0D%0A%0D%0A%0D%0Atype%20productFilterParameters%3CT%20extends%20filterTypeName%3E%20%3D%20%7B%0D%0A%20%20filter%3A%20T%3B%0D%0A%20%20value%3A%20filterValueType%3CT%3E%3B%20%20%0D%0A%20%20clearOtherFilters%3A%20boolean%3B%0D%0A%7D%3B%0D%0A%0D%0Afunction%20addProductFilter%3CT%20extends%20filterTypeName%3E(params%3A%20productFilterParameters%3CT%3E)%20%7B%20%0D%0A%20%20%20%20switch%20(params.filter)%20%7B%20%0D%0A%20%20%20%20%20%20%20%20case%20'price'%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20const%20value%20%3D%20params.value%3B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%0D%0A%20%20%20%20%20%20%20%20default%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%20%20%20%20%20%20%20%20%20%20%20%20%0D%0A%20%20%20%20%7D%0D%0A%7D)
**Related Issues:**
| Suggestion,Experience Enhancement,Domain: Conditional Types | low | Critical |
382,148,304 | pytorch | [Caffe2] Non-spatial batchnorm optimization | ## 🚀 Feature
To track the task | caffe2 | low | Minor |
382,175,735 | rust | Trivial dependencies on large crates pull in massive amounts of debuginfo | ```
extern crate rusoto_core;
const REGION: rusoto_core::Region = rusoto_core::Region::UsEast1;
fn main() {
println!("Hello, region {:?}", &REGION);
}
```
`rusoto_core::Region` is a very simple enum type. This doesn't run any interesting code. The resulting Linux debug-build binary is 41MB. If I replace `&REGION` with `0` the binary is 7.5MB.
`readelf -a` shows that `.text` is 370,558 bytes. `.debug_info` is 10,541,677 bytes. The other debug sections account for most of the rest.
Inspecting the debuginfo shows DWARF compilation units for `rusoto_core` and lots of its dependencies that are entirely dead code. For example:
```
COMPILE_UNIT<header overall offset = 0x0034b008>:
< 0><0x0000000b> DW_TAG_compile_unit
DW_AT_producer clang LLVM (rustc version 1.30.1 (1433507eb 2018-11-07))
DW_AT_language DW_LANG_Rust
DW_AT_name /home/roc/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.33/src/lib.rs
DW_AT_stmt_list 0x001534c8
DW_AT_comp_dir /home/roc/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.33
DW_AT_GNU_pubnames yes(1)
DW_AT_low_pc 0x00000000
DW_AT_ranges 0x000a7120
ranges: 89 at .debug_ranges offset 684320 (0x000a7120)
[ 0] <offset pair low-off: 0x00000001 addr 0x00000001 high-off: 0x00000001 addr 0x00000001>
[ 1] <offset pair low-off: 0x00000001 addr 0x00000001 high-off: 0x00000001 addr 0x00000001>
[ 2] <offset pair low-off: 0x00000001 addr 0x00000001 high-off: 0x00000001 addr 0x00000001>
[ 3] <offset pair low-off: 0x00000001 addr 0x00000001 high-off: 0x00000001 addr 0x00000001>
```
... followed by 85 more identical ranges. These all indicate empty code ranges; all code for this CU has been stripped by the linker. However, this CU still has a ton of debuginfo for types and for functions. E.g.:
```
< 4><0x00001802> DW_TAG_subprogram
DW_AT_low_pc 0x00000000
DW_AT_high_pc <offset-from-lowpc>262
DW_AT_frame_base len 0x0001: 57: DW_OP_reg7
DW_AT_linkage_name _ZN91_$LT$core..slice..Iter$LT$$u27$a$C$$u20$T$GT$$u20$as$u20$core..iter..iterator..Iterator$GT$4next1
7hc6f1932985554eebE
DW_AT_name next<serde_json::value::Value>
DW_AT_decl_file 0x00000005 /home/roc/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.33/libcore/slice/m
od.rs
DW_AT_decl_line 0x00000978
DW_AT_type <0x000007b0>
< 5><0x00001820> DW_TAG_formal_parameter
DW_AT_location len 0x0002: 9128: DW_OP_fbreg 40
DW_AT_name self
DW_AT_decl_file 0x00000003 /home/roc/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.33/src/lib.rs
DW_AT_decl_line 0x00000001
DW_AT_type <0x00002f94>
< 5><0x0000182e> DW_TAG_inlined_subroutine
DW_AT_abstract_origin <0x00001a7d>
DW_AT_low_pc 0x00000000
DW_AT_high_pc <offset-from-lowpc>123
DW_AT_call_file 0x00000005 /home/roc/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.33/libcore/slice/mod.rs
DW_AT_call_line 0x00000982
< 6><0x00001842> DW_TAG_formal_parameter
DW_AT_location len 0x0002: 9138: DW_OP_fbreg 56
DW_AT_abstract_origin <0x00001a97>
< 6><0x0000184a> DW_TAG_formal_parameter
DW_AT_location len 0x0003: 91c000: DW_OP_fbreg 64
DW_AT_abstract_origin <0x00001aa2>
< 6><0x00001853> DW_TAG_lexical_block
DW_AT_low_pc 0x00000000
DW_AT_high_pc <offset-from-lowpc>36
< 7><0x00001860> DW_TAG_variable
DW_AT_location len 0x0003: 91d000: DW_OP_fbreg 80
DW_AT_abstract_origin <0x00001aae>
< 5><0x0000186b> DW_TAG_template_type_parameter
DW_AT_type <0x00000064>
DW_AT_name T
```
All `DW_AT_low_pc`s of `DW_TAG_subprogram`, `DW_TAG_lexical_block` and `DW_TAG_inlined_subroutine` in this CU are zero. There are a lot of CUs like this. | A-debuginfo,C-enhancement,I-compiletime,T-compiler,I-heavy | medium | Critical |
382,204,449 | rust | Tracking issue for feature(repr128); enums with 128-bit discriminants | This issue tracks `repr128`, i.e. "enums with 128-bit discriminant" as per https://github.com/rust-lang/rfcs/pull/1504.
Originally this was tracked in https://github.com/rust-lang/rust/issues/35118.
-------------
> Nothing has changed re repr128… It is still the case that at least LLVM’s debuginfo API blocks us from implementing it at all. There’s very little incentive to work on it, though, and I’m not planning to do anything in that direction until a very convincing use-case for 128-bit enum discriminants comes up.
\- @nagisa | A-LLVM,B-RFC-approved,T-lang,C-tracking-issue,I-lang-nominated,S-tracking-unimplemented | medium | Critical |
382,228,153 | go | fmt: behavior of pointer-like format arguments is undocumented | Funcs and channels have one thing in common - they're all pointers underneath. This means one can use `%p` to print them, much like actual pointers.
It's documented that `%b, %d, %o, %x and %X verbs also work with pointers`, so those verbs can be used with funcs and channels as well.
But nowhere is it documented that channels and functions are treated as pointers when formatting. The closest thing is this piece:
```
The default format for %v is:
[...]
chan: %p
```
I think we should extend the documentation to clarify a number of things:
* Funcs and channels are printed exactly like pointers
* The default format for `%v` of a function is also `%p`
* Maps are printed like pointers **only** with `%p`
That last point is especially tricky. For example, `fmt.Printf("%p", map[bool]bool{true: false})` will correctly print an address, but the same with `%b` or `%d` will fail as `fmt` recurses into the key and value elements, which aren't numbers.
I encountered this while fixing some bugs in vet, and I was surprised by how lightly `fmt` documents all this. A few of the behaviors surprised me too. I'll send a docs CL draft soon.
/cc @robpike @alandonovan @martisch | Documentation,NeedsInvestigation | low | Critical |
382,260,566 | go | x/net/http/httproxy: Never uses proxy for localhost | `golang.org/x/net/http/httproxy` has a [hardcoded rule](https://github.com/golang/net/blob/adae6a3d119ae4890b46832a2e88a95adc62b8e7/http/httpproxy/proxy.go#L183) whereby requests to `localhost` never use a proxy.
This may be a sensible default, but there should be a way to override it. There is no intrinsic reason why a proxy cannot or should not be used for `localhost`. My use case is that I have mocks of remote HTTP services running on `localhost`, and I want to use [mitmproxy](https://mitmproxy.org/) to debug the Go program’s traffic to/from these services.
One solution might be to use the current default **unless** there’s a non-empty `NO_PROXY`/`NoProxy` environment variable: then I could use some dummy value like `NO_PROXY=foo.invalid`. But a 100% backwards-compatible solution would require some hack along the lines of `NO_PROXY=but_actually_localhost_ok` or `HTTP_PROXY_LOCALHOST=yes`.
| NeedsInvestigation | medium | Critical |
382,337,658 | rust | rustdoc "expand all docs" button does not expand "hidden undocumented items" | In nightly rustdoc, trait items that aren't overridden are collapsed as "hidden undocumented items." (This is slightly inaccurate, since they may have documentation from the trait declaration.) Clicking the `[+]` ("expand all docs") link at the top of the page expands all docs except for these. This means there is no longer any way to actually expand *all* documentation.
My use case for this is using the browser's "find in page" feature to search for method names, or to answer questions like "does this type have any methods that take `&mut self`?" | T-rustdoc | low | Major |
382,351,631 | go | x/crypto/ssh: knownhosts does not handle multiple keys with same type | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
go version go1.11 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/rom/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/rom/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.11/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.11/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/h0/fll06vqd6wd6mqndvvtfy75r0000gn/T/go-build120797240=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Open an SSH connection to a host that has multiple keys in my known_hosts file.
A load balancer sits in front of two SSH servers that maintain different keys. Although I do not agree that this setup is best practice, OpenSSH allows multiple keys and key types for the same hostname.
I used a simple test application to validate:
<pre>
package main
import (
"flag"
"fmt"
"log"
"net"
"os"
"golang.org/x/crypto/ssh"
"golang.org/x/crypto/ssh/agent"
"os/user"
"path/filepath"
"golang.org/x/crypto/ssh/knownhosts"
)
var (
USER = flag.String("user", os.Getenv("USER"), "ssh username")
HOST = flag.String("host", "localhost", "ssh server hostname")
PORT = flag.Int("port", 22, "ssh server port")
PASS = flag.String("pass", os.Getenv("SOCKSIE_SSH_PASSWORD"), "ssh password")
SIZE = flag.Int("s", 1<<15, "set max packet size")
)
func init() {
flag.Parse()
}
func main() {
var auths []ssh.AuthMethod
if aconn, err := net.Dial("unix", os.Getenv("SSH_AUTH_SOCK")); err == nil {
auths = append(auths, ssh.PublicKeysCallback(agent.NewClient(aconn).Signers))
}
if *PASS != "" {
auths = append(auths, ssh.Password(*PASS))
}
callback, err := GetKnownHostsCallback()
if err != nil {
log.Fatalf("unable to load known_hosts: %v", err)
}
config := ssh.ClientConfig{
User: *USER,
Auth: auths,
HostKeyCallback: callback,
}
addr := fmt.Sprintf("%s:%d", *HOST, *PORT)
conn, err := ssh.Dial("tcp", addr, &config)
if err != nil {
log.Fatalf("unable to connect to [%s]: %v", addr, err)
}
conn.Close()
}
func GetKnownHostsCallback() (ssh.HostKeyCallback, error) {
usr, err := user.Current()
if err != nil {
return nil, err
}
name := filepath.Join(usr.HomeDir, ".ssh", "known_hosts")
log.Printf("Using known hosts file %s", name)
f, err := knownhosts.New(name)
if err != nil {
return nil, err
}
return func(addr string, remote net.Addr, key ssh.PublicKey) error {
log.Printf("Checking known host %s (%v)", addr, remote)
return f(addr, remote, key)
}, nil
}
</pre>
### What did you expect to see?
I expected the host key to be validated in the same manner as OpenSSH.
### What did you see instead?
<pre>
ssh: handshake failed: knownhosts: key mismatch
</pre>
In my test, I added a fake key of the same type and hostname. If the valid key was first, it worked fine. Anything else would fail.
I noticed this from crypto/ssh/knownhosts/knownhosts.go:
<pre>
func (db *hostKeyDB) checkAddr(a addr, remoteKey ssh.PublicKey) error {
// TODO(hanwen): are these the right semantics? What if there
// is just a key for the IP address, but not for the
// hostname?
// Algorithm => key.
knownKeys := map[string]KnownKey{}
for _, l := range db.lines {
if l.match(a) {
typ := l.knownKey.Key.Type()
if _, ok := knownKeys[typ]; !ok {
knownKeys[typ] = l.knownKey
}
}
}
</pre>
This will only consider the first key of a given type. To work around it, I added a key of another type for the other server and it worked fine. However, I think this should handle multiple key/type combinations.
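The first-match-wins behavior of that map can be seen in isolation (a sketch with stand-in key strings mirroring `hostKeyDB.checkAddr`):

```go
package main

import "fmt"

func main() {
	// Two known_hosts entries for the same host and key type.
	lines := []struct{ typ, key string }{
		{"ssh-rsa", "AAAA...server1"},
		{"ssh-rsa", "AAAA...server2"}, // never consulted
	}

	// Algorithm => key, keeping only the first entry per type,
	// exactly as the quoted loop does.
	knownKeys := map[string]string{}
	for _, l := range lines {
		if _, ok := knownKeys[l.typ]; !ok {
			knownKeys[l.typ] = l.key
		}
	}

	remoteKey := "AAAA...server2" // the server we actually reached
	fmt.Println(knownKeys["ssh-rsa"] == remoteKey) // false: key mismatch
}
```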
| NeedsInvestigation | low | Critical |
382,371,816 | go | proposal: encoding/asn1: expose parseTagAndLength | When working with LDAP over the network, the reader does not know in advance the size of the request (i.e. the size of the buffer to read).
To know that the full request was read, you need to parse the tag and length. This is essentially what the unexported function parseTagAndLength does; you only need to add the offset to the length.
To expose this functionality, I suggest adding a public ParseTagAndLength that calculates the total expected buffer length.
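For reference, a minimal hand-rolled version of the proposed computation (a sketch only: single-byte tags and definite lengths, and the `totalLength` helper name is made up):

```go
package main

import (
	"errors"
	"fmt"
)

// totalLength reports how many bytes the complete BER/DER element
// occupies (identifier + length octets + content), given at least the
// header bytes of the buffer.
func totalLength(b []byte) (int, error) {
	if len(b) < 2 {
		return 0, errors.New("need more bytes")
	}
	offset := 1 // skip the (single-byte) tag
	if b[offset] < 0x80 {
		// Short form: one length octet holds the content length.
		return offset + 1 + int(b[offset]), nil
	}
	n := int(b[offset] & 0x7f) // long form: count of length octets
	offset++
	if n == 0 || n > 4 || len(b) < offset+n {
		return 0, errors.New("unsupported or truncated length")
	}
	length := 0
	for i := 0; i < n; i++ {
		length = length<<8 | int(b[offset+i])
	}
	return offset + n + length, nil
}

func main() {
	// SEQUENCE with a 3-byte body: total = 1 tag + 1 length + 3 = 5.
	fmt.Println(totalLength([]byte{0x30, 0x03, 0x02, 0x01, 0x05}))
	// SEQUENCE with a 300-byte body (long form 0x82 0x01 0x2c): total = 304.
	fmt.Println(totalLength([]byte{0x30, 0x82, 0x01, 0x2c}))
}
```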
usage example:
```
func readRequest(conn net.Conn, readBuf []byte) ([]byte, error) {
const minReadBuf = 1024
var err error
readBuf = readBuf[0:cap(readBuf)]
var nRead = 0
var tagSize = 1
for nRead < tagSize {
nBufStart := nRead
if (len(readBuf) - nBufStart) < minReadBuf {
readBuf = append(readBuf, make([]byte, minReadBuf)...)
}
nRead, err = conn.Read(readBuf[nBufStart:])
// no request: return
if err != nil || nRead == 0 {
return readBuf[0:nBufStart], err
}
nRead += nBufStart
var rawVal asn1.RawValue
rawVal, tagSize, err = asn1.ParseTagAndLength(readBuf[0:nRead])
}
return readBuf[0:nRead], err
}
``` | Proposal,Proposal-Crypto | low | Critical |
382,394,339 | pytorch | Failed to run 'bash ../tools/build_pytorch_libs.sh --use-nnpack --use-mkldnn --use-qnnpack caffe2' | ## 🐛 Bug
I've scanned through the other issues and saw similar ones, yet I saw my environment slightly differs so i decided to open a separate report. Feel free to disagree.
After trying to build pytorch inside a virtualenv, I ran into this error:
`Makefile:138: recipe for target 'all' failed`
`make: *** [all] Error 2`
`Failed to run 'bash ../tools/build_pytorch_libs.sh --use-nnpack --use-mkldnn --use-qnnpack caffe2'`
## To Reproduce
Steps to reproduce the behavior:
`git clone --recursive https://github.com/pytorch/pytorch`
`cd pytorch`
`python setup.py install`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
A compilation success
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): https://github.com/pytorch/pytorch , master
- OS (e.g., Linux): Raspbian stretch 4.14.34-v7+ armv7l GNU/Linux
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): see above
- Python version: Python 2.7.13
- CUDA/cuDNN version: NA
- GPU models and configuration: NA
- Any other relevant information: NA
## Additional context
The output from the point where the build started going south:
```
(...) Scanning dependencies of target ATEN_CPU_FILES_GEN_TARGET
[ 12%] Generating ../aten/src/ATen/CPUByteType.cpp, ../aten/src/ATen/CPUByteType.h, ../aten/src/ATen/CPUCharType.cpp, ../aten/src/ATen/CPUCharType.h, ../aten/src/ATen/CPUCopy.cpp, ../aten/src/ATen/CPUDoubleType.cpp, ../aten/src/ATen/CPUDoubleType.h, ../aten/src/ATen/CPUFloatType.cpp, ../aten/src/ATen/CPUFloatType.h, ../aten/src/ATen/CPUGenerator.h, ../aten/src/ATen/CPUHalfType.cpp, ../aten/src/ATen/CPUHalfType.h, ../aten/src/ATen/CPUIntType.cpp, ../aten/src/ATen/CPUIntType.h, ../aten/src/ATen/CPULongType.cpp, ../aten/src/ATen/CPULongType.h, ../aten/src/ATen/CPUShortType.cpp, ../aten/src/ATen/CPUShortType.h, ../aten/src/ATen/Declarations.yaml, ../aten/src/ATen/Functions.h, ../aten/src/ATen/NativeFunctions.h, ../aten/src/ATen/RegisterCPU.cpp, ../aten/src/ATen/RegisterCPU.h, ../aten/src/ATen/SparseCPUByteType.cpp, ../aten/src/ATen/SparseCPUByteType.h, ../aten/src/ATen/SparseCPUCharType.cpp, ../aten/src/ATen/SparseCPUCharType.h, ../aten/src/ATen/SparseCPUDoubleType.cpp, ../aten/src/ATen/SparseCPUDoubleType.h, ../aten/src/ATen/SparseCPUFloatType.cpp, ../aten/src/ATen/SparseCPUFloatType.h, ../aten/src/ATen/SparseCPUIntType.cpp, ../aten/src/ATen/SparseCPUIntType.h, ../aten/src/ATen/SparseCPULongType.cpp, ../aten/src/ATen/SparseCPULongType.h, ../aten/src/ATen/SparseCPUShortType.cpp, ../aten/src/ATen/SparseCPUShortType.h, ../aten/src/ATen/TypeDefault.cpp, ../aten/src/ATen/TypeDefault.h, ../aten/src/ATen/TypeExtendedInterface.h, ../aten/src/ATen/CUDAByteType.cpp, ../aten/src/ATen/CUDAByteType.h, ../aten/src/ATen/CUDACharType.cpp, ../aten/src/ATen/CUDACharType.h, ../aten/src/ATen/CUDACopy.cpp, ../aten/src/ATen/CUDADoubleType.cpp, ../aten/src/ATen/CUDADoubleType.h, ../aten/src/ATen/CUDAFloatType.cpp, ../aten/src/ATen/CUDAFloatType.h, ../aten/src/ATen/CUDAGenerator.h, ../aten/src/ATen/CUDAHalfType.cpp, ../aten/src/ATen/CUDAHalfType.h, ../aten/src/ATen/CUDAIntType.cpp, ../aten/src/ATen/CUDAIntType.h, ../aten/src/ATen/CUDALongType.cpp, ../aten/src/ATen/CUDALongType.h, 
../aten/src/ATen/CUDAShortType.cpp, ../aten/src/ATen/CUDAShortType.h, ../aten/src/ATen/RegisterCUDA.cpp, ../aten/src/ATen/RegisterCUDA.h, ../aten/src/ATen/SparseCUDAByteType.cpp, ../aten/src/ATen/SparseCUDAByteType.h, ../aten/src/ATen/SparseCUDACharType.cpp, ../aten/src/ATen/SparseCUDACharType.h, ../aten/src/ATen/SparseCUDADoubleType.cpp, ../aten/src/ATen/SparseCUDADoubleType.h, ../aten/src/ATen/SparseCUDAFloatType.cpp, ../aten/src/ATen/SparseCUDAFloatType.h, ../aten/src/ATen/SparseCUDAIntType.cpp, ../aten/src/ATen/SparseCUDAIntType.h, ../aten/src/ATen/SparseCUDALongType.cpp, ../aten/src/ATen/SparseCUDALongType.h, ../aten/src/ATen/SparseCUDAShortType.cpp, ../aten/src/ATen/SparseCUDAShortType.h
[ 12%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/batch_normalization.cpp.o
Scanning dependencies of target mkalias
Scanning dependencies of target mkrename
[ 12%] Building C object sleef/src/libm/CMakeFiles/mkalias.dir/mkalias.c.o
[ 12%] Building C object sleef/src/libm/CMakeFiles/mkrename.dir/mkrename.c.o
[ 12%] Linking C executable ../../bin/mkalias
[ 12%] Linking C executable ../../bin/mkrename
[ 12%] Built target mkalias
[ 12%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution.cpp.o
[ 12%] Built target mkrename
[ 12%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution_relu.cpp.o
In file included from /home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/type_helpers.hpp:29:0,
from /home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/batch_normalization.cpp:21:
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp: In function ‘int mkldnn::impl::math::ilog2q(size_t)’:
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp:84:8: error: right shift count >= width of type [-Werror=shift-count-overflow]
CP(32); CP(16); CP(8); CP(4); CP(2); CP(1);
^
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp:83:55: note: in definition of macro ‘CP’
# define CP(pw) do { if (v >= (1ull << pw)) { v >>= pw; p += pw; } } while(0)
^~
cc1plus: all warnings being treated as errors
third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/build.make:62: recipe for target 'third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/batch_normalization.cpp.o' failed
make[2]: *** [third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/batch_normalization.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
Scanning dependencies of target python_copy_files
In file included from /home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/type_helpers.hpp:29:0,
from /home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/convolution.cpp:21:
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp: In function ‘int mkldnn::impl::math::ilog2q(size_t)’:
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp:84:8: error: right shift count >= width of type [-Werror=shift-count-overflow]
CP(32); CP(16); CP(8); CP(4); CP(2); CP(1);
^
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp:83:55: note: in definition of macro 'CP'
# define CP(pw) do { if (v >= (1ull << pw)) { v >>= pw; p += pw; } } while(0)
^~
In file included from /home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/type_helpers.hpp:29:0,
from /home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/convolution_relu.cpp:21:
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp: In function 'int mkldnn::impl::math::ilog2q(size_t)':
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp:84:8: error: right shift count >= width of type [-Werror=shift-count-overflow]
CP(32); CP(16); CP(8); CP(4); CP(2); CP(1);
^
/home/w/mpdj/pytorch/third_party/ideep/mkl-dnn/src/common/math_utils.hpp:83:55: note: in definition of macro 'CP'
# define CP(pw) do { if (v >= (1ull << pw)) { v >>= pw; p += pw; } } while(0)
^~
cc1plus: all warnings being treated as errors
third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/build.make:110: recipe for target 'third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution_relu.cpp.o' failed
make[2]: *** [third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution_relu.cpp.o] Error 1
Scanning dependencies of target mkmasked_gnuabi
[ 12%] Building C object sleef/src/libm/CMakeFiles/mkmasked_gnuabi.dir/mkmasked_gnuabi.c.o
[ 12%] Linking C executable ../../bin/mkmasked_gnuabi
cc1plus: all warnings being treated as errors
third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/build.make:86: recipe for target 'third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution.cpp.o' failed
make[2]: *** [third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution.cpp.o] Error 1
CMakeFiles/Makefile2:1263: recipe for target 'third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/all' failed
make[1]: *** [third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 12%] Built target mkmasked_gnuabi
[ 12%] Built target python_copy_files
[ 12%] Built target ATEN_CPU_FILES_GEN_TARGET
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-nnpack --use-mkldnn --use-qnnpack caffe2'
```
---
Hopefully this report can be helpful, and ideas on how to work around it would be much appreciated :).
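For context, the failing `ilog2q` computes an integer floor(log2) by successive halving. Below is a Python sketch of what the `CP` chain does (my reading of the macro, not code from the repo); the `-Werror=shift-count-overflow` error itself comes from the C side, where shifting by an amount greater than or equal to the operand's width is undefined, e.g. `v >>= 32` when `size_t` is 32 bits wide on the target:

```python
def ilog2q(v):
    """Floor of log2(v) via successive halving, mirroring
    CP(32); CP(16); CP(8); CP(4); CP(2); CP(1); from math_utils.hpp."""
    p = 0
    for pw in (32, 16, 8, 4, 2, 1):
        if v >= (1 << pw):
            v >>= pw
            p += pw
    return p

print(ilog2q(1), ilog2q(3), ilog2q(1024), ilog2q(2**40 + 5))  # 0 1 10 40
```

Python integers are arbitrary-precision, so the sketch cannot reproduce the overflow; it only shows the intended arithmetic.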
cc @malfet | module: build,triaged | medium | Critical |
382,461,084 | flutter | Flutter exposes gnarly message from GitHub about RSA keys | ## Steps to Reproduce
Just ran `flutter upgrade` and got this warning message:
```
$ flutter upgrade
Upgrading Flutter from /Users/timsneath/flutter...
Warning: Permanently added the RSA host key for IP address '140.82.118.3' to the list of known hosts.
From github.com:flutter/flutter
5385132c6..921117f4d master -> origin/master
72bf075e8..7a005e1dc beta -> origin/beta
d44aa57c1..7a005e1dc dev -> origin/dev
* [new tag] v0.11.7 -> v0.11.7
```
Per `whois`, the IP address `140.82.118.3` is associated with GitHub. However:
- this is being presented by a Flutter tool, rather than GitHub;
- it's not clear that we're simply passing on a GitHub message;
- the message itself is a little unusual (why are you telling me this? why now?);
- it's not super-obvious how to verify this unless you know about (or remember) the `whois` command.
One proposal is that since we already do a bunch of filtering of stdout, we could filter such messages and only emit them in `--verbose` mode. Or emit them but when we spot them, add a bit of help text. | tool,a: quality,P2,team-tool,triaged-tool | low | Minor |
382,486,022 | react | Warn when calling dispatch() from useEffect() cleanup function on unmounting |
**Do you want to request a *feature* or report a *bug*?**
bug
**What is the current behavior?**
an action dispatched in the cleanup callback returned from `useEffect` does not seem to work
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
https://codesandbox.io/s/5yqmo128v4
only foo -> baz is logged
```javascript
import React, { useState, useEffect, useReducer } from "react";
import ReactDOM from "react-dom";
import "./styles.css";
function reducer(state, action) {
  console.log("bar", action); // not logged
  // debugger
  return state;
}
function Foo({ value }) {
  const [state, dispatch] = useReducer(reducer, {});
  useEffect(
    () => {
      return () => {
        console.log("foo");
        // debugger
        dispatch({ type: "foo" });
        // debugger
        console.log("baz");
      };
    },
    [state, value]
  );
  return <p>{value}</p>;
}
function App() {
  const [value, set] = useState(0);
  return (
    <div className="App">
      <button onClick={() => set(value + 1)}>INC</button>
      {value % 2 ? <Foo value={value} /> : null}
    </div>
  );
}
const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);
```
**What is the expected behavior?**
bar is logged in console
(foo -> baz -> bar`action`)
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
react: "16.7.0-alpha.2",
react-dom: "16.7.0-alpha.2"
| Type: Discussion,Component: Hooks | medium | Critical |
382,524,687 | pytorch | C++ model load error | I got an error when loading a model saved via TorchScript tracing. The tracing script is simple:
```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image
from time import time
import numpy as np
# An instance of your model.
model = torchvision.models.resnet18(pretrained=True)
model.eval()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("resnet50_typescripts.pt")
# evalute time
batch = torch.rand(64, 3, 224, 224)
start = time()
output = traced_script_module(batch)
stop = time()
print(str(stop - start) + "s")
# read image
image = Image.open('dog.png').convert('RGB')
default_transform = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
image = default_transform(image)
# forward
output = traced_script_module(image.unsqueeze(0))
print(output[0, :10])
# print top-5 predicted labels
labels = np.loadtxt('synset_words.txt', dtype=str, delimiter='\n')
data_out = output[0].data.numpy()
sorted_idxs = np.argsort(-data_out)
for i, idx in enumerate(sorted_idxs[:5]):
    print('top-%d label: %s, score: %f' % (i, labels[idx], data_out[idx]))
```
The C++ program just loads that model, but even the load call fails (the build itself succeeds).
The error I get:
```
terminate called after throwing an instance of 'c10::Error'
what(): file_format_version <= kMaxSupportedFileFormatVersion ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.h:213, please report a bug to PyTorch. Attempted to read a PyTorch file with version %llu, but the maximum supported version for reading is %llu. Your PyTorch installation may be too old.21 (readAndValidateFileHeader at /pytorch/caffe2/serialize/inline_container.h:213)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f7a85d6b0d1 in /pt_codes/pt_cpp/libtorch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f7a85d6a99a in /pt_codes/pt_cpp/libtorch/lib/libc10.so)
frame #2: <unknown function> + 0x5cb0b0 (0x7f7ab9d900b0 in /pt_codes/pt_cpp/libtorch/lib/libtorch.so.1)
frame #3: <unknown function> + 0x5cb681 (0x7f7ab9d90681 in /pt_codes/pt_cpp/libtorch/lib/libtorch.so.1)
frame #4: <unknown function> + 0x5c8cb6 (0x7f7ab9d8dcb6 in /pt_codes/pt_cpp/libtorch/lib/libtorch.so.1)
frame #5: torch::jit::load(std::istream&) + 0xb4 (0x7f7ab9d8eb64 in /pt_codes/pt_cpp/libtorch/lib/libtorch.so.1)
frame #6: torch::jit::load(std::string const&) + 0x27 (0x7f7ab9d8ec37 in /pt_codes/pt_cpp/libtorch/lib/libtorch.so.1)
frame #7: <unknown function> + 0x8e9b (0x559c7be0ce9b in ./build/ptcpp)
frame #8: __libc_start_main + 0xe7 (0x7f7a853edb97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #9: <unknown function> + 0x801a (0x559c7be0c01a in ./build/ptcpp)
```
What is wrong with the current C++ API? This looks like a bug; note that the error message itself is also malformed (the `%llu` placeholders are printed literally rather than substituted).
cc @suo | oncall: jit,triaged | medium | Critical |
382,534,540 | TypeScript | Props requiring casting on complex type (and defaultProps not settable) | **Search Terms:**
defaultProps React.RefForwardingComponent ExoticComponent
**Code**
```typescript
// import { cx } from "emotion";
import * as React from "react";
// import modifiers, { ModifierProps } from "../../modifiers";
interface RenderAsExoticComponent<
TOwnProps,
TDefaultComponent extends
| keyof JSX.IntrinsicElements
| React.ComponentType<any>
>
extends Pick<
React.ForwardRefExoticComponent<any>,
keyof React.ForwardRefExoticComponent<any>
> {
(
props: React.ComponentPropsWithRef<TDefaultComponent> &
TOwnProps & { renderAs?: never },
): JSX.Element | null;
<TAsComponent extends keyof JSX.IntrinsicElements | React.ComponentType<any>>(
props: React.ComponentPropsWithRef<TAsComponent> &
TOwnProps & { renderAs: TAsComponent },
): JSX.Element | null;
}
function renderAsComponent<
TOwnProps,
TDefaultElement extends React.ComponentType<any> | keyof JSX.IntrinsicElements
>(
factory: React.RefForwardingComponent<
any,
TOwnProps & {
renderAs?: React.ComponentType<any> | keyof JSX.IntrinsicElements;
className?: string;
}
>,
defaultElement: TDefaultElement,
) {
const forward = React.forwardRef(factory);
forward.defaultProps = { renderAs: defaultElement };
// todo: apparently a bug, use workaround
// forward.defaultProps = {};
// forward.defaultProps.renderAs = defaultElement;
return forward as RenderAsExoticComponent<TOwnProps, TDefaultElement>;
}
interface ModifierProps {
textColor?: "white" | "black";
pull?: "left" | "right";
}
const Element = renderAsComponent<ModifierProps, "div">(
({ className, renderAs, ...allProps }, ref) => {
const props = {
// className: cx(className, modifiers.classNames(allProps)) || undefined,
ref,
// ...modifiers.clean(allProps),
};
return React.createElement(renderAs!, props);
},
"div",
);
export default Element;
```
```ts
export const Example: React.SFC<{}> = () => (
<Element textColor="white" pull={"left" as "left"}>
Child
</Element>
);
```
**Expected behavior:**
1) `defaultProps` can be set directly
2) `props` work without casting
**Actual behavior:**
1) `defaultProps` cannot be set directly (see workaround in code)
2) `props` do not work without being cast.
**Playground Link:**
**Related Issues:**
No. | Bug | low | Critical |
382,562,771 | pytorch | Caffe2->ONNX conversion issue due to spatial-bn | I have used this reference https://github.com/caffe2/caffe2/issues/703 to implement a ResNet model in Caffe2, because of the mobile export API issues in Caffe2. But after applying these changes I have not been able to convert the Caffe2 ResNet model into an ONNX model, due to the unordered sequence of batch-normalization params (running-mean, scale, offset, running-variance).
**The following error is raised:**
```
WARNING:caffe2.python.workspace:Original python traceback for operator `1` in network `deploy_net` in exception above (most recent call last):
Traceback (most recent call last):
File "temp_model_sqeezenet.py", line 33, in 〈module〉
value_info,
File "/home/parthr/anaconda2/lib/python2.7/site-packages/caffe2/python/onnx/frontend.py", line 331, in caffe2_net_to_onnx_model
model = make_model(cls.caffe2_net_to_onnx_graph(*args, **kwargs),
File "/home/parthr/anaconda2/lib/python2.7/site-packages/caffe2/python/onnx/frontend.py", line 221, in caffe2_net_to_onnx_graph
inputs)
File "/home/parthr/anaconda2/lib/python2.7/site-packages/caffe2/python/onnx/helper.py", line 62, in c2_native_run_net
ws.RunNetOnce(predict_net)
File "/home/parthr/anaconda2/lib/python2.7/site-packages/caffe2/python/onnx/workspace.py", line 63, in f
return getattr(workspace, attr)(*args, **kwargs)
File "/home/parthr/anaconda2/lib/python2.7/site-packages/caffe2/python/workspace.py", line 201, in RunNetOnce
StringifyProto(net),
File "/home/parthr/anaconda2/lib/python2.7/site-packages/caffe2/python/workspace.py", line 180, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at operator.cc:138] schema->Verify(operator_def). Operator def did not pass schema checking: input: "conv1_1" input: "bn1_s_0" input: "bn1_b_0" input: "bn1_rm_0" input: "bn1_riv_0" output: "bn1_1" output: "bn1_rm_1" output: "bn1_riv_1" output: "bn1_sm_1" output: "bn1_siv_1" name: "" type: "SpatialBN" arg { name: "is_test" i: 0 } arg { name: "use_cudnn" i: 1 } arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "order" s: "NCHW" }
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0x68 (0x7fdac2551778 in /home/parthr/anaconda2/lib/python2.7/site-packages/caffe2/python/../../../../libc10.so)
frame #1: <unknown function> + 0x1273197 (0x7fdac3
```
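For what it's worth, the failing `SpatialBN` op above is still in training mode (`is_test` is 0) and carries five outputs, while ONNX export normally runs against an inference net where `is_test` is 1 and only the first output is kept. I am not certain this is the root cause here, but here is a pure-Python sketch of that kind of rewrite, using plain dicts to stand in for `OperatorDef` messages (the real fix would mutate the `predict_net` protobufs; the dict field names are illustrative, not the protobuf schema):

```python
def force_spatial_bn_test_mode(ops):
    """Rewrite SpatialBN ops for inference/export.

    `ops` is a list of plain dicts mimicking caffe2 OperatorDef fields;
    this is a sketch of the transformation, not caffe2 API code.
    """
    for op in ops:
        if op["type"] == "SpatialBN":
            op["args"]["is_test"] = 1
            # In test mode SpatialBN produces a single output tensor.
            op["outputs"] = op["outputs"][:1]
    return ops

net = [{
    "type": "SpatialBN",
    "inputs": ["conv1_1", "bn1_s_0", "bn1_b_0", "bn1_rm_0", "bn1_riv_0"],
    "outputs": ["bn1_1", "bn1_rm_1", "bn1_riv_1", "bn1_sm_1", "bn1_siv_1"],
    "args": {"is_test": 0, "order": "NCHW"},
}]

net = force_spatial_bn_test_mode(net)
print(net[0]["args"]["is_test"], net[0]["outputs"])  # 1 ['bn1_1']
```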
| caffe2 | low | Critical |
382,589,729 | godot | Delete file - Case sensitive - | **Godot version:**

**OS/device including version:**
Win7-64
**Issue description:**
I save a "Ground.tscn" scene as "ground.tscn".
Godot alerts that the old file will be overwritten.
Then in the FileSystem dock I see 2 files: Ground.tscn and ground.tscn.
I delete Ground.tscn and I can no longer access ground.tscn.
**Steps to reproduce:**
See before.
**Minimal reproduction project:**
NA | bug,platform:windows,topic:editor,confirmed | low | Minor |
382,632,053 | go | net: FreeBSD build failed with net.inet.tcp.blackhole=2 | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 freebsd/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/sternix/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="freebsd"
GOOS="freebsd"
GOPATH="/home/sternix/go"
GOPROXY=""
GORACE=""
GOROOT="/opt/go/1_11_2/go"
GOTMPDIR=""
GOTOOLDIR="/opt/go/1_11_2/go/pkg/tool/freebsd_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build323464270=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
I want to build Go (master) from source.
I followed the instructions from https://golang.org/doc/install/source
% git rev-parse --short HEAD
90777a34cf
### What did you expect to see?
ALL TESTS PASSED
### What did you see instead?
Failed: exit status 1
With the default net.inet.tcp.blackhole=0 setting it compiled successfully,
but with
sudo sysctl net.inet.tcp.blackhole=2
failed with errors as attached
net.inet.tcp.blackhole: Do not send RST on segments to closed ports
I tested the build on FreeBSD 11.2 amd64 and FreeBSD 12.0-RC1 amd64 with the same results.
Thanks.
[go_build.txt](https://github.com/golang/go/files/2599596/go_build.txt) | NeedsInvestigation | low | Critical |
382,645,694 | pytorch | [bug] inconsistent behavior of indexing | ## 🐛 Bug
Indexing a tensor with an out-of-range index may lead to either `IndexError` or `RuntimeError`, which looks inconsistent to me. For example, it doesn't match `numpy`'s behavior.
## To Reproduce
Steps to reproduce the behavior:
```
In [197]: tensor = torch.rand(100)
In [198]: tensor[100]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-198-7933b057773b> in <module>()
----> 1 tensor[100]
IndexError: index 100 is out of bounds for dimension 0 with size 100
In [199]: tensor[np.array([0, 100])]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-199-aa44e153a40d> in <module>()
----> 1 tensor[np.array([0, 100])]
RuntimeError: index 100 is out of bounds for dimension 0 with size 100
```
## Expected behavior
I'd expect `IndexError` to be raised in both cases, as `numpy` does.
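For reference, numpy raises `IndexError` in both situations, for the scalar index and for the fancy (array) index alike. A quick check (numpy 1.x assumed):

```python
import numpy as np

arr = np.random.rand(100)

# Scalar out-of-range index raises IndexError.
try:
    arr[100]
    scalar_err = None
except IndexError as e:
    scalar_err = type(e).__name__

# An index array containing an out-of-range value also raises IndexError.
try:
    arr[np.array([0, 100])]
    fancy_err = None
except IndexError as e:
    fancy_err = type(e).__name__

print(scalar_err, fancy_err)  # IndexError IndexError
```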
## Environment
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14
GCC version: Could not collect
CMake version: version 3.8.2
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA | triaged,module: advanced indexing | low | Critical |
382,677,360 | svelte | Await expressions | Just want to capture a thought I had the other day: it might be neat to have inline `await` expressions in templates. We already have `{#await ...}` blocks but they're overkill in some situations — you have to declare a name for the resolved value, which you might only be using once, and you might not need to worry about error states depending on what you're doing.
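For comparison, the existing block form that an inline `await` would shorten looks roughly like this (recalled from the current Svelte docs; details may differ):

```html
{#await fibonacci(n)}
  <p>calculating...</p>
{:then result}
  <p>The {n}th Fibonacci number is {result}</p>
{:catch error}
  <p>{error.message}</p>
{/await}
```

Here `result` must be named even when it is used exactly once, and the `{:catch}` branch is often dead weight.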
Imagine something like this (v3 syntax):
```html
<script>
	async function fibonacci(n) {
		return await fibonacciWorker.calculate(n);
	}
</script>
<input type=number bind:value={n}>
<p>The {n}th Fibonacci number is {await fibonacci(n)}</p>
```
It would integrate with [Suspense](https://github.com/sveltejs/svelte/issues/1736), so it'd be convenient for doing this sort of thing (where `loadImage` resolves to its input, but only after ensuring that the image is loaded):
```html
<script>
	import loadImage from './utils.js';
</script>
{#each thumbnails as thumbnail}
	<img alt={thumbnail.description} src="{await loadImage(thumbnail.src)}">
{/each}
```
Of course, you'd need some way to have placeholder content, for situations where you're not using Suspense. Maybe this?
```html
<p>The {n}th Fibonacci number is {await fibonacci(n) or '...'}</p>
``` | feature request | medium | Critical |
382,690,320 | TypeScript | Compiler hangs with large JS file with --allowJs |
**TypeScript Version:** [email protected]
**Search Terms:**
Compiler hangs large file
**Code**
wget --no-check-certificate https://raw.githubusercontent.com/hyperledger/composer-concerto/4e0a9aa990fb065d9eedb54e7086473054b0015a/lib/introspect/parser.js
tsc --allowJs --outDir out-tsc parser.js
**Expected behavior:**
It completes and writes some output, or some errors
**Actual behavior:**
It hangs indefinitely
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/17033
| Bug,Domain: Control Flow | low | Critical |
382,704,533 | TypeScript | Suggest import nearest current file. | ## Search Terms
vs code, language service, suggestions, auto complete, import, symbol, nearest current file.
## Suggestion
In VS Code the auto complete dropdown for auto importing seems to have a random order. I think the user experience could be improved by ordering the exported symbols in the dropdown in order of how close the file path is to your current file path.
## Use Cases/Examples
See the example directory structure below.
- moduleA/
- constants.ts
- index.ts
- moduleB/
- constants.ts
- index.ts
Say that both moduleA/constants.ts and moduleB/constants.ts export a variable like `foobar`. When I type `foobar` in moduleA/index.ts it's likely that I'm trying to auto import `foobar` from moduleA/constants.ts.
In the current approach the dropdown suggests `foobar` from both moduleA/constants.ts and moduleB/constants.ts, however, the order doesn't seem to be guaranteed so sometimes it will suggest moduleB's `foobar` first and other times moduleA's `foobar`.
I think this could be improved by ordering suggestions in terms of how close their source file path is to the current file path, such that moduleA's `foobar` always appears before moduleB's `foobar` when I'm in moduleA/index.ts. For the sake of completeness, I think it should show moduleB's `foobar` first when I'm in moduleB/index.ts.
I hope I've explained that clearly enough.
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Domain: Completion Lists,Experience Enhancement | low | Major |
382,740,720 | rust | Tracking issue for future-incompatibility lint `coherence_leak_check` | This is the **tracking issue** for the "universe transition". The goal of this page is describe why this change was made and how you can fix code that is affected by it. It also provides a place to ask questions or register a complaint if you feel the change should not be made. For more information on the policy around bug fixes that affect existing code, see our [breaking change policy guidelines][guidelines].
[guidelines]: https://forge.rust-lang.org/rustc-bug-fix-procedure.html
#### What was the change and what code is affected?
This change (introduced in https://github.com/rust-lang/rust/pull/55517) fixed a number of bugs (e.g., https://github.com/rust-lang/rust/issues/32330) in how the Rust compiler handled higher-ranked types (e.g., `fn(&u8)` or -- more explicitly -- `for<'a> fn(&'a u8)`) and trait bounds (e.g., `for<'a> T: Trait<'a>`). One of these changes could however affect existing code. In particular, the compiler in the past would accept trait implementations for identical functions that differed only in where the lifetime binder appeared:
```rust
trait SomeTrait { }
impl SomeTrait for for<'a> fn(&'a u8) { }
impl<'a> SomeTrait for fn(&'a u8) { }
```
We no longer accept both of these impls. Code relying on this pattern would now have to introduce "newtypes", like `struct Foo(for<'a> fn(&'a u8))`.
#### History of this change
This change was first made in #55517 without a "warning period". This was done because (a) it would be challenging to issue precise warnings for affected code and (b) a crater run found zero regressions.
However, the change was subsequently backed out in https://github.com/rust-lang/rust/pull/58592, [owing to concerns about unintended side-effects](https://github.com/rust-lang/rust/issues/56105#issuecomment-465634661).
The change was re-added in https://github.com/rust-lang/rust/pull/65232, but this time with a warning period. We are currently completing the implementation and trying to figure out how to draw the line in terms of what should become a hard error.
#### Affected crates and patterns
This section is for collecting patterns and examples to look into or other related issues.
* wasm-bindgen was reported to get warnings in #65232
* @quark-zju commented that their crate was affected [here](https://github.com/rust-lang/rust/issues/56105#issuecomment-606379619)
* https://github.com/rust-lang/rust/issues/73154 -- order dependent coercion | A-lints,A-lifetimes,T-compiler,C-future-incompatibility,C-tracking-issue,S-tracking-impl-incomplete,T-types | medium | Critical |
382,776,389 | rust | Refactorings for rustc_codegen_ssa | This is a list of things I noticed in `rustc_codegen_{utils,ssa}` which I would like to see changed.
* [x] Move https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_utils/symbol_names/index.html to `rustc_codegen_ssa`.
* [x] Remove https://doc.rust-lang.org/nightly/nightly-rustc/src/rustc_codegen_utils/lib.rs.html#64 (now unused)
* [ ] https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/debuginfo/enum.FunctionDebugContext.html
* [x] Inline `debuginfo_disabled_message` and `should_be_ignored_message` into `as_ref`.
* [ ] https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/debuginfo/struct.FunctionDebugContextData.html
* [x] Remove the `Cell` from the field `source_locations_enabled` and adapt several functions to take a mutable reference to `FunctionDebugContext{,Data}` instead.
---
Edit:
* [x] https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/backend/trait.BackendTypes.html `Context` is not used inside `rustc_codegen_ssa`
* [ ] <strike>https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/type_/trait.BaseTypeMethods.html</strike> https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_llvm/context/struct.CodegenCx.html#method.set_struct_body Why is `set_struct_body` even a thing?
* [ ] https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/consts/trait.ConstMethods.html All methods take &self, but a codegen backend may need to mutate stuff to create values.
* [ ] https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/statics/trait.StaticMethods.html All methods take &self, but a codegen backend may need to mutate stuff to create values.
Edit: update for current status | C-cleanup,A-codegen,T-compiler,A-cranelift,A-gcc | low | Critical |
382,861,363 | go | encoding/asn1: Unmarshalling implicitly tagged GeneralizedTime unmarshalls as UTCTime |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/chris/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/chris/Git/gorims"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build216350556=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Added an implicitly tagged field of type `time.Time` to a struct and tagged it with "generalized". Marshalled the struct and then unmarshalled it. Unmarshalling failed because the time was parsed as a UTCTime.
https://play.golang.org/p/XOKSctdaYl5
### What did you expect to see?
Unmarshalling observe the "generalized" attribute tag and parse the time as a Generalized Time
### What did you see instead?
Unmarshalling parsed the time as a UTCTime which failed and produced the zero val for the time.
Looking at the current test suite, it appears that the implementation of the "generalized" attribute tag is incomplete:
```
func TestImplicitTaggedTime(t *testing.T) {
	// An implicitly tagged time value, that happens to have an implicit
	// tag equal to a GENERALIZEDTIME, should still be parsed as a UTCTime.
	// (There's no "timeType" in fieldParameters to determine what type of
	// time should be expected when implicitly tagged.)
	der := []byte{0x30, 0x0f, 0x80 | 24, 0xd, '9', '1', '0', '5', '0', '6', '1', '6', '4', '5', '4', '0', 'Z'}
	var result implicitTaggedTimeTest
	if _, err := Unmarshal(der, &result); err != nil {
		t.Fatalf("Error while parsing: %s", err)
	}
	if expected := time.Date(1991, 05, 06, 16, 45, 40, 0, time.UTC); !result.Time.Equal(expected) {
		t.Errorf("Wrong result. Got %v, want %v", result.Time, expected)
	}
}
```
But there IS a timeType set by the use of the "generalized" tag.
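For readers less familiar with the two ASN.1 time encodings: UTCTime carries a two-digit year (`YYMMDDHHMMSSZ`) while GeneralizedTime carries four (`YYYYMMDDHHMMSSZ`, per X.690), which is why forcing the UTCTime parser onto a GeneralizedTime value fails. A quick Python illustration of the string layouts (not Go code):

```python
from datetime import datetime

generalized = "19911120164540Z"  # GeneralizedTime: four-digit year

# Parsed with a GeneralizedTime layout, this works.
ok = datetime.strptime(generalized, "%Y%m%d%H%M%SZ")

# Parsed with a UTCTime layout (two-digit year, "YYMMDDHHMMSSZ"),
# the string has two digits too many, so the parse fails.
try:
    datetime.strptime(generalized, "%y%m%d%H%M%SZ")
    utc_result = "parsed"
except ValueError:
    utc_result = "ValueError"

print(ok.year, utc_result)  # 1991 ValueError
```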
| NeedsInvestigation | low | Critical |
382,886,258 | go | net/v2: meta bug for v2 changes | This is a meta/index bug for things we might change in a possible future `net/v2` (~ *"Go 2"*) package, including both links to other issues, or just comments with API problems/cleanups that are too little to warrant their own bugs.
(This is **not** about `net/http`, `net/mail`, `net/rpc`, `net/smtp`, `net/textproto`, or `net/url`. Those should have their own bugs if/when needed.)
| v2 | low | Critical |
382,894,481 | vscode | [snippets] Support for global or project-level variables | I originally posted in #59557, but I realized the OP was asking for a variable *defined within the snippet*.
## This request is for adding support for user-overridable variables defined *outside* of the snippet code, perhaps in:
1. A global/project-level config file
2. Environment variables
NOTE: # 2 [has a precedent in TextMate](https://macromates.com/manual/en/environment_variables).
# JavaScript Use Cases
## Use Case 1: `let` vs `const`
ES2015 introduced `let` and `const`. This comes down to a personal preference for many users.
This could be accommodated by allowing the user to set a variable like `${VARIABLE_DECL_KIND}` to either `const` or `let`.
## Use Case 2: `;`
Ending statements with `;` is also a stylistic preference for most JavaScript code.
This could be accommodated using a `${STATEMENT_TERMINATOR}` that could be defined as either `;` or an empty string.
I know many similar requests are being closed in favor of #10561, but I'm wondering if short of that, this could be implemented by adding another `VariableResolver` implemention, e.g.: `EnvironmentVariableResolver` or `ConfigFileVariableResolver`? | feature-request,snippets | low | Major |
382,917,771 | flutter | RouteAware.didPushNext documentation is incorrect | Internal: b/292548580
"Called when a new route has been pushed, and the current route is no longer visible."
It's called before the route transition is finished; therefore, the current route is still visible, just obscured. This is an important distinction in some scenarios like one I ran into today where I wanted to do something as soon as the route containing the current widget was no longer visible whatsoever. | framework,customer: mulligan (g3),d: api docs,f: routes,P2,team-framework,triaged-framework | low | Minor |
382,924,900 | opencv | dnn: PROTOBUF_USE_DLLS | I'm on Win7 x64, Visual Studio 2015, trying to build opencv v4.0.0.
Since I haven't managed to build OpenCV's dnn module with whatever protobuf comes with it, I've built libprotobuf (and protoc) standalone. It seems CMake automatically configures that build for shared libraries (DLLs).
Now, when I try to configure OpenCV with CMake, it detects all the paths, but when building, I get about half a dozen linker errors ("unresolved external symbol", LNK2001) right when opencv_dnn400 is linked.
I can only resolve them by doing what [they did](https://github.com/Microsoft/vcpkg/issues/1353): add `PROTOBUF_USE_DLLS`. Strangely, simply defining this as a CMake variable within cmake-gui didn't work (for me). I actually have to edit the corresponding CMakeLists.txt.
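For anyone hitting the same linker errors, the edit amounts to defining the macro before the protobuf headers are compiled; roughly this (the exact file and placement in the OpenCV tree may differ):

```cmake
# Somewhere the dnn module's compile definitions are set up, e.g. in
# modules/dnn/CMakeLists.txt (location approximate):
add_definitions(-DPROTOBUF_USE_DLLS)
```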
Maybe this could gain some auto-detection or be resolved some other way? At the very least, I hope to leave this breadcrumb for others who run into this problem. | priority: low,category: build/install,category: dnn | low | Critical |
382,926,953 | vue-element-admin | Breadcrumb navigation: switch between child routes without opening a new tagView | // Activity management
{
path: '/activity',
redirect: '/activity/index',
component: Layout,
children: [{
path: 'index',
component: () => import('@/views/activity/index'),
name: 'activityIndex',
meta: {
title: '活动管理',
icon: 'dashboard',
noCache: true
}
}]
},
{
path: '/activity',
component: Layout,
redirect: '/activity/index',
name: 'activity',
hidden: true,
meta: {
title: '活动管理',
icon: 'dashboard',
noCache: true
},
children: [{
path: 'add',
component: () => import('@/views/activity/components/add'),
name: 'activityAdd',
meta: {
title: '活动添加',
noCache: true
}
},
{
path: 'edit/:id(\\d+)',
component: () => import('@/views/activity/components/edit'),
name: 'activityEdit',
meta: {
title: '活动修改',
noCache: true
},
hidden: true
},
// Detail page
{
path: 'detail/:id(\\d+)',
hidden: true,
component: () => import('@/views/activity/components/detail'),
name: 'activityDetail',
meta: {
title: '活动详情',
icon: 'star',
noCache: true
}
}
]
}
The sidebar navigation only shows a single entry. Switching between different child routes on the right does not add new tagViews: switching from "Home / parent route / child route 1" to "Home / parent route / child route 2" closes child route 1 at the same time. Is it possible to simulate the browser's back button, so that going back returns directly to the previous route? | enhancement :star:,feature | low | Minor |
382,934,460 | pytorch | [Caffe2] How can I use detectron with pytorch? | Hi guys :sweat_smile: :sweat_smile: I have a question but I don't know if this is the right place to ask or not.
I am using pytorch and I wanted to use detectron and they are saying that I have to install caffe2 first and I did so but when type `from caffe2.python import core`, I see this
`WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
CRITICAL:root:Cannot load caffe2.python. Error: /lib64/libc.so.6: version 'GLIBC_2.14' not found (required by /home/qwe123/data/conda_packages/lib/python3.6/site-packages/caffe2/python/caffe2_pybind11_state.cpython-36m-x86_64-linux-gnu.so)`
I tried other solutions I found on here but I could not get this to work. I only installed Caffe2 because I wanted to use Detectron. Can I use it with PyTorch without needing Caffe2? My version of PyTorch is 4.0. Should I download PyTorch version 1.0 to use Detectron with it?
| caffe2 | low | Critical |
383,029,416 | pytorch | Pytorch C++ API with cuda : Expected object of backend CPU but got backend CUDA for sequence element 1 in sequence argument at position #1 'tensors' | ## 🐛 Bug
Thanks for your team's great work! But while using the C++ API of PyTorch on GPU, there are some confusing bugs. When I try to load a .pt file as a module and then do a forward operation, I get an exception.
## To Reproduce
Here are my code and the exception. The .pt file is generated by torch.jit.trace(model, example).cuda()
my code:
```cpp
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("my_model_path.pt");
module->to(torch::kCUDA);
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({model_input_size}).cuda());
auto output = module->forward(inputs).toTensor();
```
The state of the variables:
I have checked that the tensor pushed to the inputs vector is Variable[CUDAFloatType], and that the model.pt was generated on CUDA.
```
Exception:
terminate called after throwing an instance of 'c10::Error'
what(): Expected object of backend CPU but got backend CUDA for sequence element 1 in sequence argument at position #1 'tensors' (checked_tensor_list_unwrap at /pytorch/aten/src/ATen/Utils.h:87)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7fd638cc1ba1 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7fd638cc146a in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libc10.so)
frame #2: <unknown function> + 0x7b22d8 (0x7fd65adf72d8 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libcaffe2.so)
frame #3: <unknown function> + 0x7b29bc (0x7fd65adf79bc in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libcaffe2.so)
frame #4: at::native::cat(c10::ArrayRef<at::Tensor>, long) + 0xa4 (0x7fd65ad09624 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libcaffe2.so)
frame #5: at::TypeDefault::cat(c10::ArrayRef<at::Tensor>, long) const + 0x4f (0x7fd65aed7cff in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libcaffe2.so)
frame #6: torch::autograd::VariableType::cat(c10::ArrayRef<at::Tensor>, long) const + 0x1bc (0x7fd6699d5cdc in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libtorch.so.1)
frame #7: <unknown function> + 0x52b2a8 (0x7fd669b022a8 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libtorch.so.1)
frame #8: torch::jit::ConstantPropagation(torch::jit::Node*, bool) + 0x450 (0x7fd669c06c30 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libtorch.so.1)
frame #9: torch::jit::ConstantPropagation(torch::jit::Block*, bool) + 0x44 (0x7fd669c07a14 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libtorch.so.1)
frame #10: torch::jit::ConstantPropagation(std::shared_ptr<torch::jit::Graph>&) + 0x18 (0x7fd669c07b18 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libtorch.so.1)
frame #11: <unknown function> + 0x5d39a0 (0x7fd669baa9a0 in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libtorch.so.1)
frame #12: torch::jit::GraphExecutor::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x19d (0x7fd669bab2cd in /home/chr/action-sdk/libs/libtorch-latest-gpu/libtorch-gpu/libtorch/lib/libtorch.so.1)
frame #13: torch::jit::script::Method::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0xb4 (0x456096 in ./action)
frame #14: torch::jit::script::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >) + 0x4a (0x4560f8 in ./action)
frame #15: torch::jit::script::Module::forward(std::vector<c10::IValue, std::allocator<c10::IValue> >) + 0x81 (0x456f8f in ./action)
frame #16: main + 0x550 (0x4538df in ./action)
frame #17: __libc_start_main + 0xf0 (0x7fd6380b2830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: _start + 0x29 (0x43e0f9 in ./action)
[1] 35218 abort (core dumped)
```
I have read the source code and found that module->forward() only accepts a vector type, but the vector's contents cannot pass the backend check in the tensor library. Could you give me some advice on how to make the vector contents pass ATen's check? Thank you very much.
## Environment
- PyTorch Version (e.g., 1.0): 1.0
- OS (e.g., Linux): ubuntu 16.04
- How you installed PyTorch (`conda`, `pip`, source): conda
- Build command you used (if compiling from source): following the guide on official page
- Python version:3.6
- CUDA/cuDNN version: cuda 8.0 + cudnn6.0
- GPU models and configuration:
- Any other relevant information:
| oncall: jit | low | Critical |
383,046,750 | flutter | Implement empty table rendering | Implement empty table rendering. Includes dividers that fill viewport. | c: new feature,framework,a: fidelity,f: cupertino,P2,team-design,triaged-design | low | Minor |
383,096,133 | go | cmd/link: shared object constructor functions are not invoked with internal linker and musl libc | ### What version of Go are you using (`go version`)?
1.11.2
<pre>
$ go version
go version go1.11.2 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build960632520=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
- Create a dynamically linked Go application with the internal linker.
- Create a shared object (DLL) which has a constructor function - e.g. a C function attributed with \_\_attribute\_\_((constructor))
- Use LD_PRELOAD to preload the shared object into the Go application process - e.g LD_PRELOAD=libsotest.so ./myapp
This issue could be the root cause for failing alpine tests in https://github.com/golang/go/issues/20120.
I have created docker image with a reproducer for this issue: https://gist.github.com/Hollerberg/5fc64f8abf0f16d4e801c4ad348f21b6
Download the files from the github gist and execute `build-and-run-container.sh`. This script will build the container image. Within the docker build process, `libpreload.so` is built from `preload.c`, and the Go application `gogo.go` is built once to `gogo-int-linker` using the internal and once to `gogo-ext-linker` using the external (system) linker.
Once the docker image is built and started, the two applications can be started with script `runtest.sh`.
The shared object constructor function in `libpreload.so` will write a message to stdout. Preloading the shared object into `gogo-int-linker` will not execute the constructor and therefore no message is printed to console. Repeating this with `gogo-ext-linker`, the constructor message is printed to console.
The root cause for this behavior is a subtle difference between musl and glibc dynamic linker behavior. glibc seems to call shared object constructors in dynamic linker context (`_dl_start_user`), while musl libc executes constructors in `__libc_start_main`, which is called by [crt1.c](https://git.musl-libc.org/cgit/musl/tree/crt/crt1.c). The musl libc dynamic linker loads the application and the preloaded/dependent shared objects into memory. Then it [jumps to the application entry point](https://git.musl-libc.org/cgit/musl/tree/ldso/dynlink.c#n1742). Typically the application entry point is the code defined in [crt1.c](https://git.musl-libc.org/cgit/musl/tree/crt/crt1.c), but not when the application has been linked with the internal Go linker: `__libc_start_main` is never invoked.
See also Rich Felkers comments on [musl libc mailing list](https://www.openwall.com/lists/musl/2018/11/21/3)
Short background info on why we are preloading shared objects into Go processes - my company provides an application monitoring solution and the shared object pulls diverse data from the executing Go application. Thus, the use case is very relevant to us in order to be able to support musl libc based systems without constraints.
### What did you expect to see?
The constructor function in the preloaded shared object should be invoked before Go application main function.
### What did you see instead?
The shared object constructor is not invoked at all.
| help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
383,100,914 | rust | Type mismatch reporting is slow when `gtk` crate is present | This code takes a long time (3-4 seconds) in item-bodies checking before reporting the type mismatch:
```rust
extern crate gtk;
fn bar(param: ()) {}
fn main() {
bar(true);
}
```
It compiles (or fails to compile) nearly instantly when the `extern crate` is removed or the `true` is replaced with the proper type `()`.
One reason this could be slow is that `gtk` contains a *lot* of traits (at least 189, in fact).
I understand that optimizing error paths isn't that important, but 3-4 seconds on this tiny piece of code did seem a bit excessive, so I thought I'd open an issue anyway.
Tested on the current nightly (`rustc 1.32.0-nightly (f1e2fa8f0 2018-11-20)`) and stable 1.30.1 | A-type-system,C-enhancement,A-diagnostics,I-compiletime,T-compiler,T-types | low | Critical |
383,161,905 | pytorch | Provide Protobuf library if libtorch was built with included version | When building pytorch and libtorch with ``python setup.py build``, an internal version of protobuf is used. Our C++ app also uses Protobuf but finds an incompatible version with the one used by libtorch. Would it be possible to add to the install a ProtobufConfig.cmake so that we can link against the same protobuf?
cc @yf225 | module: build,module: protobuf,module: cpp,triaged | low | Minor |
383,192,291 | TypeScript | Inconsistency of mapped type and `Parameters<T>` between TS and JS | **TypeScript Version:** 3.1.4
**Search Terms:** mapped type parameters js
**Code**
```ts
// TypeScript
const Foo = Object.freeze({
a: (arg1: string) => arg1,
b: (arg1: string, arg2: boolean) => arg2,
});
// typeof Foo = Readonly<{
// a: (arg1: string) => string;
// b: (arg1: string, arg2: boolean) => boolean;
//}>
type Bar = {[K in keyof typeof Foo]: (...args: Parameters<(typeof Foo)[K]>) => void}
// typeof Bar = {
// readonly a: (arg1: string) => void;
// readonly b: (arg1: string, arg2: boolean) => void;
// }
```
```js
// JavaScript
const Foo = Object.freeze({
a: (/** @type {string} */arg1) => arg1,
b: (/** @type {string} */arg1, /** @type {boolean} */arg2) => arg2,
});
// typeof Foo = Readonly<{
// a: (arg1: string) => string;
// b: (arg1: string, arg2: boolean) => boolean;
//}>
// ^ is identical to TS
/** @typedef {{[K in keyof typeof Foo]: (...args: Parameters<(typeof Foo)[K]>) => void}} Bar */
// typeof Bar = {
// readonly a: (arg1?: string) => void;
// readonly b: (arg1?: string, arg2?: boolean) => void;
// }
// ^ why optional??
```
**Expected behavior:**
`Bar.{a,b}` having the same signatures in TS and JS
**Actual behavior:**
`Bar.{a,b}` have only optional parameters in JS.
| Bug | low | Minor |
383,230,852 | TypeScript | Generic rest of intersection should be assignable to its type parameter constituents | ## Search Terms
generic rest intersection type parameter
## Suggestion
The rest of an intersection containing type parameters and object types should be assignable to the type parameters if the rest removed all the properties from the object types. For example:
```ts
function ripoff<T>(augmented: T & { a }): T {
const { a, ...original } = augmented;
return original;
}
```
This is technically not safe — the instantiation of the type parameters could overlap with the object types — but it is how higher-order spread currently behaves. I believe the most common use of rest is with disjoint types. (Spread is similar, but identical types is the most common use there.)
One way to allow this is to create a new assignability rule:
Pick<T & U & ... & { a } & { b } & ..., Exclude<keyof T & U & ..., 'a' | 'b' | ...>> ⟹ T & U & ...
In prose, the rule is that the pick of an intersection that consists of only type parameters and object types is assignable to the intersection of the type parameters if the second argument is an Exclude, and the Exclude's first argument is `keyof` the intersection of the type parameters, and the Exclude's second argument is the keys of the intersection of the object types.
Another option is a simpler rule, where the pick of *any* intersection with a type parameter T is assignable to T:
Pick<T & ..., Exclude<keyof T & ..., K>> ⟹ T
Note that these rules don't cover what to do with constrained type parameters. The second rule is inaccurate enough already that it probably doesn't matter, but the first rule should probably have an additional restriction on Exclude's second argument.
## Use Cases
React HOCs basically all do this.
## Examples
From [this comment](https://github.com/Microsoft/TypeScript/pull/28312#issuecomment-440226162):
```ts
import React, { Component } from 'react'
import { Counter } from './counter-render-prop'
import { Subtract } from '../types'
type ExtractFuncArguments<T> = T extends (...args: infer A) => any ? A : never;
// get props that Counter injects via children as a function
type InjectedProps = ExtractFuncArguments<Counter['props']['children']>[0];
// withCounter will enhance returned component by ExtendedProps
type ExtendedProps = { maxCount?: number };
// P is constrained to InjectedProps as we wanna make sure that wrapped component
// implements this props API
const withCounter = <P extends InjectedProps>(Cmp: React.ComponentType<P>) => {
class WithCounter extends Component<
// enhanced component will not include InjectedProps anymore as they are injected within render of this HoC and API surface is gonna be extended by ExtendedProps
Subtract<P, InjectedProps> & ExtendedProps
> {
static displayName = `WithCounter(${Cmp.name})`;
render() {
const { maxCount, ...passThroughProps } = this.props;
return (
// we use Counter which has children as a function API for injecting props
<Counter>
{(injectedProps) =>
maxCount && injectedProps.count >= maxCount ? (
<p className="alert alert-danger">
You've reached maximum count! GO HOME {maxCount}
</p>
) : (
// here cast to as P is needed otherwise compile error will occur
<Cmp {...injectedProps as P} {...passThroughProps} />
)
}
</Counter>
);
}
}
return WithCounter;
};
// CounterWannabe implement InjectedProps on it's props
class CounterWannabe extends Component<
InjectedProps & { colorType?: 'primary' | 'secondary' | 'success' }
> {
render() {
const { count, inc, colorType } = this.props;
const cssClass = `alert alert-${colorType}`;
return (
<div style={{ cursor: 'pointer' }} className={cssClass} onClick={inc}>
{count}
</div>
);
}
}
// if CounterWannabe would not implement InjectedProps this line would get compile error
const ExtendedComponent = withCounter(CounterWannabe);
```
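The `Subtract` helper imported from `'../types'` is not shown in the issue; a common definition (an assumption on my part, not necessarily the author's) is:

```typescript
// Hypothetical definition of the Subtract helper used above:
// keep the keys of T that are not keys of K.
type Subtract<T, K> = Pick<T, Exclude<keyof T, keyof K>>;

type Props = { count: number; inc: () => void; label: string };
type Injected = { count: number; inc: () => void };

// Subtract<Props, Injected> is { label: string }
const remaining: Subtract<Props, Injected> = { label: "hi" };
console.log(remaining.label);
```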
## Checklist
My suggestion meets these guidelines:
* [x] This **probably** wouldn't be a breaking change in existing TypeScript/JavaScript code. Would need to be tested.
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
383,264,765 | flutter | Transform.rotate is super slow | I'm implementing a card swiping tinder like ui. Cards are being moved during onPanUpdate (within GestureDetector).
When applying just regular Transform.translate it is super smooth, but with when I apply also Transform.rotate it is missing a lot of frames (even on iPhone X).
This is the sample code:
```dart
class MemberCard extends StatelessWidget {
final double width;
final double height;
final String photo;
final double rotation;
final Offset offset;
final double scale;
final bool dropShadow;
MemberCard({
this.width,
this.height,
this.photo,
this.rotation = 0.0,
this.offset = const Offset(0.0, 0.0),
this.scale = 1.0,
this.dropShadow = false,
});
@override
Widget build(BuildContext context) {
double angle = rotation * (math.pi / 180.0);
var matrix = Matrix4.translationValues(offset.dx, offset.dy, 0.0);
if (scale != 1.0) {
matrix..scale(scale);
}
if (rotation != 0.0) {
matrix.rotateZ(angle);
}
return Transform(
alignment: Alignment.center,
transform: matrix,
child: Stack(
children: <Widget>[
RepaintBoundary(
child: Container(
width: width,
height: height,
decoration: BoxDecoration(
boxShadow: dropShadow
? [
BoxShadow(
color: Colors.black.withOpacity(0.4),
blurRadius: 5.0,
offset: Offset(0.0, 1.0))
]
: [],
borderRadius: BorderRadius.all(Radius.circular(10.0)),
color: Colors.white,
// image: DecorationImage(
// image: NetworkImage(photo),
// fit: BoxFit.cover,
// ),
),
child: ClipRRect(
borderRadius: BorderRadius.circular(10.0),
child: Image.network(
photo,
fit: BoxFit.cover,
filterQuality: FilterQuality.none,
),
),
),
),
],
),
);
}
}
``` | engine,c: performance,has reproducible steps,P2,team-engine,triaged-engine,found in release: 3.19,found in release: 3.22 | low | Major |
383,266,747 | go | encoding/pem: fix #bytes lineBreaker.Write returns | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/santhosh/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/santhosh/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/xy/4qx1s2zn5qn82yzpx8xhbplr0000gn/T/go-build703305017=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
```go
const sixtyFourCharString = "0123456789012345678901234567890123456789012345678901234567890123"
var breaker lineBreaker
breaker.out = new(bytes.Buffer)
n, err := breaker.Write([]byte(sixtyFourCharString))
fmt.Println(n, err)
```
### What did you expect to see?
should print `64 nil`
### What did you see instead?
this prints `0 nil`
As this API is not exported, it will not be visible to end users, but the io.Writer contract is still broken. | NeedsDecision | low | Critical |
383,402,640 | pytorch | [Caffe2] Cannot get repeated argument in custom operator in CUDA context | ## 🐛 Bug
I am trying to build my own custom operator for Caffe2. My operator has some arguments, some of which are repeated (vectors of numbers). In the operator constructor I read the arguments; however, when I run my operator in the CPU context I am able to get all arguments, while in the CUDA context all repeated arguments are empty.
## To Reproduce
Operator constructor
```
template <typename T,class Context>
class YoloOp final:public Operator<Context> {
public:
USE_OPERATOR_CONTEXT_FUNCTIONS;
YoloOp(const OperatorDef& operator_def, Workspace* ws)
: Operator<Context>(operator_def, ws),
anchor_mask_(this->template GetRepeatedArgument<int>("anchor_mask")),
anchors_(OperatorBase::GetRepeatedArgument<int>("anchors")),
numclass_(OperatorBase::GetSingleArgument<int>("numclass",80)),
...
std::cout << "YOLO num_classes: " << this->numclass_ << std::endl;
std::cout << "Anchors: " << this->anchors_ << std::endl;
```
Example of running the operator:
```
caffe2::DeviceOption deviceOptionCPU;
deviceOptionCPU.set_device_type(PROTO_CPU);
caffe2::OperatorDef op = caffe2::CreateOperatorDef("Yolo","MyYolo",vector<string>{"inputYOLO_"+idx_str},
vector<string>{"out_"+idx_str,"tx_"+idx_str,"ty_"+idx_str,"tw_"+idx_str,"th_"+idx_str,"det_conf_"+idx_str,"class_prob_"+idx_str,"tmp"+idx_str},
deviceOptionCPU);
caffe2::AddArgument("numclass",60, &op);
std::vector<int> anchors({ 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326});
caffe2::AddArgument("anchors",anchors, &op);
```
## Expected behavior
When I run my operator in CPU context I get the following output:
```
YOLO num_classes: 60
Anchors: 10 13 16 30 33 23 30 61 62 45 59 119 116 90 156 198 373 326
```
When I run my operator in GPU context it just outputs empty anchors:
```
YOLO num_classes: 60
Anchors:
```
## Environment
```
Collecting environment information...
PyTorch version: 1.0.0a0+ff608a9
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.12.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: GeForce GTX 1050 Ti
Nvidia driver version: 384.130
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.6.0.21
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
Versions of relevant libraries:
[pip] Could not collect
[conda] torch 1.0.0a0+ff608a9 <pip>
```
| caffe2 | low | Critical |
383,411,004 | rust | no_std panic=abort dev builds require `rust_eh_personality` | ```
$ cat Cargo.toml
[package]
name = "foo"
version = "0.1.0"
authors = ["Mike Hommey <[email protected]>"]
[lib]
crate-type = ["cdylib"]
[profile.dev]
panic = "abort"
[profile.release]
panic = "abort"
```
```
$ cat src/lib.rs
#![no_std]
use core::alloc::Layout;
use core::ffi::c_void;
use core::ptr;
const CHUNK_SIZE: usize = 1 << 20;
#[panic_handler]
#[no_mangle]
pub fn panic_impl(_: &core::panic::PanicInfo) -> ! {
loop {}
}
#[derive(Clone, Copy, Debug)]
pub(crate) struct ChunkLayout(Layout);
#[derive(Debug)]
pub(crate) struct ChunkLayoutErr;
impl ChunkLayout {
fn from_size_align(size: usize, align: usize) -> Result<Self, ChunkLayoutErr> {
if align < CHUNK_SIZE || (size & (CHUNK_SIZE - 1) != 0) {
return Err(ChunkLayoutErr);
}
Layout::from_size_align(size, align).map(ChunkLayout).map_err(|_| ChunkLayoutErr)
}
}
#[no_mangle]
pub unsafe extern "C" fn chunk_alloc_mmap(size: usize, align: usize) -> *mut c_void {
if let Ok(_layout) = ChunkLayout::from_size_align(size, align) {
ptr::null_mut()
} else {
ptr::null_mut()
}
}
```
```
$ cargo build
Compiling foo v0.1.0 (/tmp/foo)
Finished dev [unoptimized + debuginfo] target(s) in 0.24s
$ objdump -T target/debug/libfoo.so | grep rust_eh_personality
0000000000000000 D *UND* 0000000000000000 rust_eh_personality
```
Practically speaking, this means the resulting library (cdylib) can't be loaded because the symbol can't be resolved.
This doesn't happen with `--release` (there is no undefined reference to `rust_eh_personality`). And this doesn't happen when the body of `chunk_alloc_mmap` is further reduced to only `ptr::null_mut()`.
If I add -Wl,-Map,map to the linker arguments, the output `map` file says the symbol reference comes from:
```
.data.DW.ref.rust_eh_personality
0x0000000000004008 0x8 /tmp/foo/target/debug/deps/foo.3sp59mzlmyqssn40.rcgu.o
0x0000000000004008 DW.ref.rust_eh_personality
```
So rust actively creates a pointer to `rust_eh_personality` in `.data`. DW suggests this would have something to do with DWARF, so I tried enabling debug info on the release profile, but that still didn't make it happen with `--release`.
Cc: @japaric, @alexcrichton | A-linkage,T-compiler | low | Critical |
383,484,970 | opencv | Python bindings build fails when source directory named "opencv2" |
##### System information (version)
- OpenCV => master
- Operating System / Platform => Ubuntu 18
- Compiler => default
##### Detailed description
When the source directory is named `opencv2`, the Python bindings cannot be built:
```
FAILED: <compiling> /work/opencv2/modules/python/src2/cv2.cpp
In file included from /work/opencv2/modules/python/src2/cv2.cpp:109:0:
modules/python_bindings_generator/pyopencv_generated_include.h:14:10: fatal error: opencv2/modules/core/misc/python/shadow_umat.hpp: No such file or directory
#include "opencv2/modules/core/misc/python/shadow_umat.hpp"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
```
The directory names `opencv` and `opencv3` work well.
##### Steps to reproduce
```
git clone https://github.com/opencv/opencv opencv2
git -C opencv2 checkout master
mkdir build && pushd build
cmake -GNinja ../opencv2
ninja
```
| bug,category: python bindings,category: build/install | low | Critical |
383,509,875 | pytorch | No code example for AdaptiveLogSoftmaxWithLoss | ## 📚 Documentation
There are no code examples for `torch.nn.AdaptiveLogSoftmaxWithLoss`, so it is not obvious how to use it as a layer, or how to use the methods `predict` and `log_prob`.
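A minimal usage sketch of the kind the docs could include (the sizes and cutoffs below are illustrative assumptions, not values from the documentation):

```python
import torch
import torch.nn as nn

# Adaptive softmax over 100 classes, split into clusters at the
# (illustrative) cutoffs 10 and 50; the first 10 classes form the head.
asm = nn.AdaptiveLogSoftmaxWithLoss(in_features=16, n_classes=100, cutoffs=[10, 50])

hidden = torch.randn(4, 16)           # batch of 4 hidden-state vectors
target = torch.randint(0, 100, (4,))  # gold class index per example

out = asm(hidden, target)    # namedtuple: out.output (per-example log prob), out.loss (scalar)
pred = asm.predict(hidden)   # most likely class per example, shape (4,)
logp = asm.log_prob(hidden)  # full log distribution, shape (4, 100)
```

Note that `forward` takes targets (it computes the training loss), while `predict` and `log_prob` take only the input.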
| module: docs,triaged | low | Minor |
383,530,023 | pytorch | [caffe2] Fails to build with fbgemm enabled | ## 🐛 Bug
Caffe2 git master fails to build with fbgemm enabled (`USE_FBGEMM:BOOL='ON'`), giving the following error:
```
[ 45%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/elementwise_sum_dnnlowp_op_avx2.cc.o
In file included from /storage/linux/abs/caffe2-git/src/pytorch/caffe2/quantization/server/dnnlowp.h:11,
from /storage/linux/abs/caffe2-git/src/pytorch/caffe2/quantization/server/caffe2_dnnlowp_utils.h:4,
from /storage/linux/abs/caffe2-git/src/pytorch/caffe2/quantization/server/utility_dnnlowp_ops.h:4,
from /storage/linux/abs/caffe2-git/src/pytorch/caffe2/quantization/server/elementwise_sum_dnnlowp_op_avx2.cc:1:
/storage/linux/abs/caffe2-git/src/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:9:10: fatal error: cpuinfo.h: No such file or directory
#include <cpuinfo.h>
^~~~~~~~~~~
compilation terminated.
make[2]: *** [caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/build.make:63: caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/elementwise_sum_dnnlowp_op_avx2.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:4460: caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
```
This error seems to be caused by the recent commit https://github.com/pytorch/pytorch/commit/fb8c3d62feacf5c3d39f6fb034db944a89a0bcf4. It was building fine some commits before it.
## To Reproduce
Steps to reproduce the behavior:
1. mkdir build
1. cd build
1. [cmake options](https://bpaste.net/show/ca5a7eb99368)
1. make -j1
## Expected behavior
A succesful build with fbgemm enabled.
## Environment
- **PyTorch Version:** git master (currently at https://github.com/pytorch/pytorch/commit/f79fb58744ba70970de652e46ea039b03e9ce9ff)
- **OS:** Arch Linux x86_64
- **How you installed PyTorch:** source
- **Build command you used (if compiling from source):** already shown above
- **Python version:** 3.7.1
- **CUDA/cuDNN version:** not enabled in this example
- **GPU models and configuration:** not enabled in this example
- **Compiler:** gcc 8.2.1
- **Any other relevant information:** the same error occurs when enabling cuda and using gcc 7.3.1
## Additional context
It builds fine when disabling fbgemm (`USE_FBGEMM:BOOL='OFF'`). | caffe2 | low | Critical |
383,548,729 | flutter | AndroidView is offsetted and clipped on some Xiaomi devices with a notch | When running the [webview_flutter](https://github.com/flutter/plugins/tree/master/packages/webview_flutter) example on a device with a notch, a blank bar appears at the top of the native view, below the app bar. The platform view responds to touches with a vertical offset (one must touch above the point). #19030

| e: device-specific,platform-android,engine,a: platform-views,P3,team-android,triaged-android | medium | Major |
383,570,289 | TypeScript | ts error should better explain that import is a reserved keyword for a function name |
**TypeScript Version:** 3.1.6
**Search Terms:**
import
import keyword
keyword
**Code**
Given the following code with a function named `import` :disappointed::
```ts
export function import(pad_id: number, config: Config, csvRows: string[][], externalCallback: (err?: Error | null, data?: string[]) => void) {
...
```
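Renaming the export sidesteps the parse error, since `import` is a reserved word in statement position. The name `importCsv` and the trimmed-down body below are illustrative assumptions, not the project's real code:

```typescript
// `import` is reserved in ECMAScript/TypeScript, so the parser bails out with
// a misleading "';' expected" instead of flagging the identifier itself.
// Renaming the function is the fix; everything below is a hedged sketch.
export function importCsv(
  padId: number,
  csvRows: string[][],
  callback: (err: Error | null, data?: string[]) => void
): void {
  // Placeholder body: join each row back into a CSV line.
  callback(null, csvRows.map(row => row.join(",")));
}
```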

**Expected behavior:**
A clear error message explaining that import is a reserved keyword
**Actual behavior:**
A cryptic error message saying that there's a missing semicolon:

**Playground Link:**
**Related Issues:**
Didn't find any. | Suggestion,Effort: Moderate,Domain: Error Messages,Experience Enhancement | low | Critical |
383,609,330 | rust | Tracking issue for HashMap::raw_entry | Added in #54043.
----
As of 6ecad338381cc3b8d56e2df22e5971a598eddd6c / 2019-01-09, this feature covers:
```rust
impl<K, V, S> HashMap<K, V, S>
where K: Eq + Hash,
S: BuildHasher
{
pub fn raw_entry(&self) -> RawEntryBuilder<K, V, S> {…}
pub fn raw_entry_mut(&mut self) -> RawEntryBuilderMut<K, V, S> {…}
}
pub struct RawEntryBuilder<'a, K: 'a, V: 'a, S: 'a> {…} // Methods return Option<(&'a K, &'a V)>
pub struct RawEntryBuilderMut<'a, K: 'a, V: 'a, S: 'a> {…} // Methods return RawEntryMut<'a, K, V, S>
pub enum RawEntryMut<'a, K: 'a, V: 'a, S: 'a> {
Occupied(RawOccupiedEntryMut<'a, K, V>),
Vacant(RawVacantEntryMut<'a, K, V, S>),
}
pub struct RawOccupiedEntryMut<'a, K: 'a, V: 'a> {…}
pub struct RawVacantEntryMut<'a, K: 'a, V: 'a, S: 'a> {…}
```
… as well as `Debug` impls for each of the 5 new types, and their inherent methods. | A-collections,T-libs-api,B-unstable,C-tracking-issue,finished-final-comment-period,disposition-close,Libs-Tracked,S-tracking-design-concerns | high | Critical |
383,618,897 | rust | Multiple crates with the same name lead to conflicting `target/doc/crate` directories | The new [`rename-dependency` feature](https://github.com/rust-lang/cargo/pull/6319) in Rust 1.31 makes it possible to depend on multiple versions of the same crate. This mostly works fine, but when building docs, the docs of one version overwrite the docs of the other version.
I'm working on adding support for converting arrays from the `nalgebra` crate to the `ndarray` crate, and I'd like this to work for multiple versions of `nalgebra`. So, I've added this to the `Cargo.toml` of `ndarray`:
```toml
[dependencies]
nalgebra-crate-0_15 = { package = "nalgebra", version = "0.15", optional = true }
nalgebra-crate-0_16 = { package = "nalgebra", version = "0.16", optional = true }
[features]
nalgebra-0_15 = ["nalgebra-crate-0_15"]
nalgebra-0_16 = ["nalgebra-crate-0_16"]
```
In `lib.rs`, the features are used for conditional compilation:
```rust
#[cfg(feature = "nalgebra-0_15")]
extern crate nalgebra_crate_0_15;
#[cfg(feature = "nalgebra-0_16")]
extern crate nalgebra_crate_0_16;
#[cfg(feature = "nalgebra-0_15")]
mod convert_nalgebra_0_15;
#[cfg(feature = "nalgebra-0_16")]
mod convert_nalgebra_0_16;
```
and the `convert_nalgebra_0_15` and `convert_nalgebra_0_16` modules implement the conversions. For example, `convert_nalgebra_0_15` contains
```rust
use imp_prelude::*;
use nalgebra_crate_0_15 as na;
/// **Requires crate feature `"nalgebra-0_15"`**
impl<A, R, S1, S2> From<na::Matrix<A, R, na::U1, S1>> for ArrayBase<S2, Ix1>
where
A: na::Scalar,
R: na::Dim,
S1: na::storage::Storage<A, R, na::U1>,
S2: DataOwned<Elem = A>,
{
fn from(vector: na::Matrix<A, R, na::U1, S1>) -> ArrayBase<S2, Ix1> {
ArrayBase::from_vec(vector.iter().cloned().collect())
}
}
// ...
```
(See the full changes [here](https://github.com/rust-ndarray/ndarray/compare/d1b4027a4ca1fcd11c2b1a417460de3a6f162154..2961c5f08b813647557b8dd6a8083eb332920c2d).)
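The conversion above is the standard `From`-impl pattern. A crate-free sketch of the same shape, with toy types standing in for `na::Matrix` and `ArrayBase` (purely illustrative, not the real ndarray/nalgebra code):

```rust
// Toy stand-ins for the real types (illustrative only):
struct Vector(Vec<f64>); // plays the role of na::Matrix<A, R, U1, S1>
struct Array1(Vec<f64>); // plays the role of ArrayBase<S2, Ix1>

// Same shape as the impl in convert_nalgebra_0_15: iterate, clone, collect.
impl From<Vector> for Array1 {
    fn from(v: Vector) -> Array1 {
        Array1(v.0.iter().cloned().collect())
    }
}

fn main() {
    // `From` also gives us `Into` for free, so either direction works.
    let a: Array1 = Vector(vec![1.0, 2.0, 3.0]).into();
    assert_eq!(a.0, vec![1.0, 2.0, 3.0]);
}
```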
When I build with `cargo +nightly doc --open --features=nalgebra-0_15,nalgebra-0_16`, I see both implementations as expected in docs of the `ndarray` crate:

The problem is that both versions of the docs of `nalgebra` are written to the same location, `target/doc/nalgebra`. So, for example, when I click on `Matrix` (a type defined in the `nalgebra` crate), I get taken to `target/doc/nalgebra/base/struct.Matrix.html` regardless of which version of `nalgebra` it corresponds to.
To fix this, I would suggest naming the documentation directories according to the names the crates were renamed to. So, there would be `target/doc/nalgebra-crate-0_15` and `target/doc/nalgebra-crate-0_16` directories instead of just a single `target/doc/nalgebra` directory.
## Meta
`rustup run nightly rustc --version --verbose`:
```
rustc 1.32.0-nightly (f1e2fa8f0 2018-11-20)
binary: rustc
commit-hash: f1e2fa8f0469aac1ea69dd5b6164e1d198d57934
commit-date: 2018-11-20
host: x86_64-unknown-linux-gnu
release: 1.32.0-nightly
LLVM version: 8.0
``` | T-rustdoc | low | Minor |
383,619,766 | vscode | Centered editor layout: top and bottom padding | I don't like Zen mode taking full screen. I want some padding on top and bottom. Here's what I get in Vim with https://github.com/junegunn/goyo.vim:
<img width="946" alt="image" src="https://user-images.githubusercontent.com/4033249/48916946-41d7ac00-ee39-11e8-9cc9-c51a87e69653.png">
| feature-request,workbench-zen | low | Major |
383,630,617 | ant-design | Form: provide a non-nested version of onFieldsChange() | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
有这么一个需求:
在表单任意表单域控件的 onChange 且 newValue **通过 validator** 后, 收集到最新的数据变更
(且此时我并不知道有哪些 field)
研究了一下 API 后, 我觉得 使用 `onFieldChange` 这个方法最能符合这个需求
但是目前的 form 内部会将符合 nested 规则的 field name nested 化, `onFieldChange` 方法返回的 changedValue 和 allValue 也都是 nested 化的, 这导致我无法通过一个很直接的方法来判断 changedValue 这个对象的每一个 property 的 value 是 Field 对象还是一个 nested object
### What does the proposed API look like?
(想到的一个做法:)新增加一个函数, 相比目前的 `onFieldsChange` 函数, 其返回的 field 全部是未经 nested 的, 来允许用户方便的获取所有 value
(或者 antd 的 Form 如果暴露出 rc-form/createFormField.js 的 `isFormField` 方法的话, 似乎也可以用)
### BTW 这个函数的文档有问题
看了下 rc-form 的代码, 其 `onFieldsChange`函数的参数与文档中介绍的参数、 `Form.tsx` 的对应 interface 声明的参数都是不一致的
rc-form 的代码有三个参数:
`onFieldsChange(this.props, changedFields, this.fieldsStore.getNestedAllFields());`
( https://github.com/react-component/form/blob/master/src/createBaseForm.js#L286 )
而 ant.design 对应的文档只有两个参数:
https://github.com/ant-design/ant-design/blob/master/components/form/index.zh-CN.md#formcreateoptions
而 `Form.tsx` 中的接口定义有四个参数:
https://github.com/ant-design/ant-design/blob/master/components/form/Form.tsx#L22
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,IssueHuntFest | low | Major |