id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
385,884,736 | neovim | smarter filename completion CTRL-x CTRL-f |
- `nvim --version`: 0.3.1
- Vim (version: ) behaves differently? No.
- Operating system/version: Fedora 29
- Terminal name/version: zsh 5.6.2
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NORC`
Try to do file autocomplete on an assignment like this:
`export FILE=/path/to/file`
### Actual behaviour
Pattern not found.
### Expected behaviour
In the pattern search, `isfname` should be overridden so that completion offers file paths that would be valid if the `=` and the characters preceding it were removed, while still completing paths that are valid with those characters left in.
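For reference, a minimal workaround sketch (assuming the default Unix `isfname`, which includes `=`; this does not fix the completion logic itself):
```vim
" Remove '=' from 'isfname' so CTRL-X CTRL-F starts the path after the '='
set isfname-==
```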
I made a post here in the Vim repo:
https://github.com/vim/vim/issues/3645
But it was immediately shot down without any discussion, and I was treated like I didn't know what I was doing and was asking for help. I was told just to make a custom completion script instead of patching the existing one, because this limitation is a "feature" and not a bug.
I thought I would see what the folks at NeoVim think, as I see there are already some existing autocomplete improvement issues, and I wanted to offer something much more limited in scope which could be realized without too much hassle. This could be the deciding factor for me to finally make the switch. | enhancement,has:workaround | low | Critical |
385,921,542 | vscode | Git - "Stage selected ranges" sometimes stages a revert of changes in the previous commit | I am not able to test with the insiders build as I don't have permission to install unauthorized software versions on my workstation here.
- VSCode Version: 1.29.1
- OS Version: Windows 10.0.15063
Steps to Reproduce:
(Since this seems to be kind of finicky, I've added more details about how I select things and so on than I would ordinarily provide.)
1. Change two different locations in a file.
2. Select the changed lines in one location (with the mouse) and choose **Git: Stage Selected Ranges** from the palette (I open it with Ctrl-Shift-P).
3. Fill in a commit message and commit the change by pressing Ctrl-Enter.
4. Click off the previous selection and select the other change. Again choose **Git: Stage Selected Ranges** from the palette. *Do not do anything else in between* (sometimes the issue does not occur if you do other things).
5. View the staged changes. In addition to the newly staged lines, an exact revert of the previous changes is present. This does get committed if you then commit, and it shows up in `git diff --cached`.
This happens fairly reliably, but only about 75% of the times I tried to reproduce it -- I assume I was doing something else in between or selecting something in a slightly different way the times it didn't happen.
Does this issue occur when all extensions are disabled?: **Yes**
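For context on the logs below: staging a selection appears to boil down to writing the patched file content as a git blob and then pointing the index entry at that blob. A rough sketch with an illustrative path (not the exact VSCode invocation):
```
git hash-object --stdin -w --path "file.md" < patched-content
git update-index --cacheinfo 100644 <blob-sha> "file.md"
```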
## Git output
VSCode appears to be doing a lot of extra stuff the second time!
On staging the first change:
```
> git show HEAD:git training/Activities/activities.md
> git hash-object --stdin -w --path git training/Activities/activities.md
> git ls-tree -l HEAD -- git training/Activities/activities.md
> git update-index --cacheinfo 100644 3592ee3adb9657ed0ef318abc491cbc19969d7b5 git training/Activities/activities.md
> git status -z -u
> git symbolic-ref --short HEAD
> git rev-parse pilot/master
> git rev-parse --symbolic-full-name pilot/master@{u}
> git rev-list --left-right pilot/master...refs/remotes/origin/pilot/master
> git for-each-ref --format %(refname) %(objectname) --sort -committerdate
> git remote --verbose
> git show :git training/Activities/activities.md
```
On staging the second change:
```
> git hash-object --stdin -w --path git training/Activities/activities.md
> git ls-tree -l HEAD -- git training/Activities/activities.md
> git update-index --cacheinfo 100644 a4df3ae5acc9dfee9d7b33e8608581ca640715eb git training/Activities/activities.md
> git status -z -u
> git symbolic-ref --short HEAD
> git rev-parse pilot/master
> git rev-parse --symbolic-full-name pilot/master@{u}
> git rev-list --left-right pilot/master...refs/remotes/origin/pilot/master
> git for-each-ref --format %(refname) %(objectname) --sort -committerdate
> git remote --verbose
> git show :git training/Activities/activities.md
> git ls-tree -l HEAD -- c:\Workspaces\Temp\toolslang_tfs_training\git training\Activities\activities.md
> git ls-tree -l HEAD -- c:\Workspaces\Temp\toolslang_tfs_training\git training\Activities\activities.md
> git show 29fdbfdab2573936bf25f93dc9d00e36c3079920
> git ls-files --stage -- c:\Workspaces\Temp\toolslang_tfs_training\git training\Activities\activities.md
> git show 29fdbfdab2573936bf25f93dc9d00e36c3079920
> git ls-files --stage -- c:\Workspaces\Temp\toolslang_tfs_training\git training\Activities\activities.md
> git cat-file -s a4df3ae5acc9dfee9d7b33e8608581ca640715eb
> git cat-file -s a4df3ae5acc9dfee9d7b33e8608581ca640715eb
> git show a4df3ae5acc9dfee9d7b33e8608581ca640715eb
> git show a4df3ae5acc9dfee9d7b33e8608581ca640715eb
> git show HEAD:git training/Activities/activities.md
> git show :git training/Activities/activities.md
``` | bug,help wanted,git | low | Major |
385,942,571 | TypeScript | Inference of parameter fail | **TypeScript Version:** 3.2.1
**Search Terms:** vscode, typescript, parameter, infer, inference
**Code**
Private repository so I cannot share the repro. However, here is the idea:
Works if one parameter is left untyped:
*(screenshot missing)*
Does not work if all of them are left untyped:
*(screenshot missing)*
**Expected behavior:**
All the parameters that can be inferred should be inferred. Those that cannot should be `any`.
**Actual behavior:**
The feature does not appear at all, and no clue is given as to why.
**Playground Link:** Not available.
**Related Issues:** No
| Bug,Domain: JSX/TSX,Domain: Refactorings | low | Minor |
385,943,388 | vscode | Emmet support for custom HTML tags / attributes | Continuation of #62976.
Custom tags / attributes should go into Emmet's registry, so all Emmet features work as expected for them (e.g., `my-component.card` would be expected to expand to `<my-component class="card"></my-component>`). | feature-request,html,emmet | low | Major |
385,946,589 | pytorch | libtorch exports protobuf symbols | ## Problem
libtorch, the amazing C++ interface for PyTorch, exports the symbols of its linked libprotobuf.
This apparently requires me to use the same version of libprotobuf and protoc in my own project when depending on libtorch.
Attempting to link another version of protobuf caused memory corruption [in my case](https://github.com/protocolbuffers/protobuf/issues/5401).
## Proposed Solutions
A) Don't export the symbols, if that's possible.
B) Document the libprotobuf version used by libtorch. This way, I could at least fall back to that version for now. However, I was not able to find the used version. | high priority,module: build,module: protobuf,module: cpp,module: abi,triaged | medium | Critical |
385,991,581 | godot | Multiple overlapping sprites with normal maps blended incorrectly | **Godot version:**
3.1 gles3
**Issue description:**
When you have overlapping sprites with normalmaps like in the gif and the project, the are incorrectly blended together. It looks like the normalmap from sprite below is going through the other one

**Minimal reproduction project:**
https://drive.google.com/open?id=19TOz7OEjTpsPJkOcpVv7IlfFy-Xi2ise
| bug,topic:rendering,confirmed | low | Minor |
386,007,654 | TypeScript | Add type predicate to Object.is |
## Search Terms
object.is
type predicate
## Suggestion
When `Object.is(a, b)` is true, `a` and `b` have the same type, so I suggest adding a few generic overloads with type predicates:
```typescript
interface ObjectConstructor {
is<A, B extends A> (a: A, b: B): a is B
is<A extends B, B> (a: A, b: B): b is A
is (a: any, b: any): boolean
}
```
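A sketch of the narrowing these overloads would enable (illustrative names, not part of the suggestion itself):
```typescript
declare const x: "a" | "b";
if (Object.is(x, "a" as const)) {
    // with the first overload, x is narrowed to "a" here
}
```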
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion,Domain: lib.d.ts | low | Critical |
386,070,590 | TypeScript | AMD Module Names Directives Rewrite CommonJS require's. | **TypeScript Version:** 3.2.1
**Search Terms:** AMD, directive
**Code**: https://github.com/ahawkins/typescript-amd-bug. Run `npm test` to see a failing build.
**Expected behavior:** AMD `module-names` should not change requires when building CommonJS modules.
**Actual behavior:** The CommonJS modules' `require`s are rewritten to use the AMD module name. If I remove the AMD directives from the sample code, then things work as expected.
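For context, a minimal sketch of the setup involved (illustrative file names; the real repro is in the linked repository):
```ts
// a.ts — carries the AMD module name directive
/// <amd-module name="shared/a" />
export const value = 1;

// b.ts — when compiled with --module commonjs, this import should emit
// require("./a"), not be rewritten to require("shared/a")
import { value } from "./a";
```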
**Related Issues:** Uncertain.
| Bug | low | Critical |
386,117,575 | rust | Shipping clang as a rustup component | Sometimes it is beneficial (or even necessary) for mixed-language projects to build C/C++ with a Clang version that matches Rust's LLVM version. [Cross-language LTO](https://github.com/rust-lang/rust/issues/49879) is one of these cases.
Since Rust's LLVM hardly ever matches a specific release and is often ahead of the current stable release, it would be great if we provided an easy way to get ahold of the right Clang binaries. It looks like we are already building Clang for Rust-enabled LLDB anyway. Could we also make it available via rustup? Either as a standalone component, or as part of `lldb` or `llvm-tools`?
cc @rust-lang/infra @rust-lang/release | T-dev-tools | medium | Critical |
386,143,623 | TypeScript | Type inference when combining inheritance, optional properties and union type |
**TypeScript Version:** 3.3.0-dev.20181130
type inference
type inference inheritance
type inference inheritance optional
type inference inheritance optional property
type inference inheritance optional union type
type inference inheritance optional property union type
**Code**
```ts
export class ModelBase {
modelBaseProp: boolean;
constructor() {
}
}
export class Model1 extends ModelBase {
model1Prop?: number;
constructor() {
super();
}
}
export class Model2 extends ModelBase {
model2Prop?: string;
constructor() {
super();
}
}
export class Test {
test: Model1 | Model2;
constructor() {
this.test = new Model1();
this.test.model1Prop = 1;
this.test = new Model2();
this.test.model2Prop = 'test';
}
}
```
**Expected behavior:**
No compilation errors.
**Actual behavior:**
`error TS2339: Property 'model1Prop' does not exist on type 'Model1 | Model2'.
Property 'model1Prop' does not exist on type 'Model2'`
and
`error TS2339: Property 'model2Prop' does not exist on type 'Model1 | Model2'.
Property 'model2Prop' does not exist on type 'Model1'.`
**Playground Link:** [link](http://www.typescriptlang.org/play/#src=export%20class%20ModelBase%20%7B%0D%0A%20%20modelBaseProp%3F%3A%20boolean%3B%0D%0A%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%7D%0D%0A%7D%0D%0A%0D%0Aexport%20class%20Model1%20extends%20ModelBase%20%7B%0D%0A%20%20model1Prop%3F%3A%20number%3B%0D%0A%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%20%20super()%3B%0D%0A%20%20%7D%0D%0A%7D%0D%0A%0D%0Aexport%20class%20Model2%20extends%20ModelBase%20%7B%0D%0A%20%20model2Prop%3F%3A%20string%3B%0D%0A%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%20%20super()%3B%0D%0A%20%20%7D%0D%0A%7D%0D%0A%0D%0Aexport%20class%20Test%20%7B%0D%0A%20%20test%3A%20Model1%20%7C%20Model2%3B%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%20%20this.test%20%3D%20new%20Model1()%3B%0D%0A%20%20%20%20this.test.model1Prop%20%3D%201%3B%0D%0A%0D%0A%20%20%20%20this.test%20%3D%20new%20Model2()%3B%0D%0A%20%20%20%20this.test.model2Prop%20%3D%20'test'%3B%0D%0A%20%20%7D%0D%0A%7D)
**Additional info** Making the properties in the derived classes non-optional works as expected. Note that the following is working properly: [Playground link](http://www.typescriptlang.org/play/#src=export%20class%20Model1%20%7B%0D%0A%20%20model1Prop%3F%3A%20number%3B%0D%0A%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%7D%0D%0A%7D%0D%0A%0D%0Aexport%20class%20Model2%20%7B%0D%0A%20%20model2Prop%3F%3A%20string%3B%0D%0A%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%7D%0D%0A%7D%0D%0A%0D%0Aexport%20class%20Test%20%7B%0D%0A%20%20test%3A%20Model1%20%7C%20Model2%3B%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%20%20this.test%20%3D%20new%20Model1()%3B%0D%0A%20%20%20%20this.test.model1Prop%20%3D%201%3B%0D%0A%0D%0A%20%20%20%20this.test%20%3D%20new%20Model2()%3B%0D%0A%20%20%20%20this.test.model2Prop%20%3D%20'test'%3B%0D%0A%20%20%7D%0D%0A%7D)
```ts
export class Model1 {
model1Prop?: number;
constructor() {
}
}
export class Model2 {
model2Prop?: string;
constructor() {
}
}
export class Test {
test: Model1 | Model2;
constructor() {
this.test = new Model1();
this.test.model1Prop = 1;
this.test = new Model2();
this.test.model2Prop = 'test';
}
}
```
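As a workaround sketch (not from the original report): mutating through a local of the concrete class before assigning to the union-typed field avoids the need to narrow `this.test` at all:
```ts
const m1 = new Model1();
m1.model1Prop = 1; // m1 is statically Model1, so no union narrowing is involved
this.test = m1;
```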
| Bug,Domain: Control Flow | low | Critical |
386,147,048 | TypeScript | Infer parameter types from usage quick fix infers to any from React component props |
**TypeScript Version:** 3.3.0-dev.20181130
**@types/react Version:** 16.7.10
**Search Terms:**
Infer parameter types from usage quick fix react jsx
**Code**
```ts
import React from "react";
function handle(e) {
console.log(e);
}
function Button() {
return <button onClick={handle} />;
}
```
**Expected behavior:**
Trigger "Infer parameter types from usage" quick fix for `e` and it should add `React.MouseEvent<HTMLButtonElement>` type for the `e` parameter
**Actual behavior:**
It adds `any` type
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/22357
https://github.com/Microsoft/TypeScript/issues/28766 | Bug | low | Critical |
386,147,072 | go | proposal: encoding/json: add error var to compare the returned error when using json.Decoder.DisallowUnknownFields() | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version devel +ffc7bc55f3 Tue Oct 9 10:35:08 2018 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/jaswdr/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/jaswdr"
GOPROXY=""
GORACE=""
GOROOT="/home/jaswdr/go"
GOTMPDIR=""
GOTOOLDIR="/home/jaswdr/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build749620831=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
```go
package main
import (
"log"
"bytes"
"encoding/json"
)
type T struct {
Bar string `json:"bar"`
}
func main() {
str := []byte(`{"foo":"bar"}`)
obj := new(T)
dec := json.NewDecoder(bytes.NewReader(str))
dec.DisallowUnknownFields()
err := dec.Decode(&obj)
if err != nil { // Want to check whether the specific unknown-fields error happened
log.Fatal(err)
}
log.Println(obj)
}
```
### What did you expect to see?
Some way to check whether this specific error happened.
```go
...
if err == json.UnknownFieldsError {
...
}
...
```
### What did you see instead?
There is no way to do this other than checking the error string.
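The string check I mean looks roughly like this (fragile, since the message text is not part of any API contract; assumes `strings` is imported):
```go
if err != nil && strings.Contains(err.Error(), "unknown field") {
	// handle the unknown-fields case
}
```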
---
If this is accepted I really want to work on it. | Proposal,Proposal-Hold | medium | Critical |
386,152,453 | TypeScript | Unexpected (unhelpful) autocomplete suggestions inside function call when spreading an array |
**TypeScript Version:** 3.2.1
**Search Terms:** spread autocomplete
**Code**

I'm not 100% sure what causes this, but its reproducible in this project even after closing VSCode.
I don't think it has to do with the error below, as that vanishes when I delete the line:
https://github.com/AlCalzone/create-adapter/blob/65e5f59ce824a9f405c441091d3d7fad13ee70b5/src/lib/questions.ts#L104
**Expected behavior:**
No autocomplete
**Actual behavior:**
This super unhelpful autocomplete

**Playground Link:** nope
**Related Issues:** nope
| Suggestion,In Discussion,Domain: Completion Lists | low | Critical |
386,158,845 | TypeScript | Infer parameter types from usage quick fix does not work for arrow function in class property initializers |
**TypeScript Version:** 3.3.0-dev.20181130
**Search Terms:**
Infer parameter types from usage quick fix class property initializer arrow function
**Code**
```ts
class Foo {
log = msg => {};
do() {
const m = "hello";
this.log(m);
}
}
```
**Expected behavior:**
When opening the quick fix menu for the `msg` argument of the `log` property, the "Infer parameter types from usage" quick fix should be shown (applying it should produce `log = (msg: string) => {};`).
**Actual behavior:**
It's not shown.
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/28764
https://github.com/Microsoft/TypeScript/issues/22357
| Bug | low | Critical |
386,175,955 | go | proposal: spec: make imported symbols predictable | Currently when a package is imported, it's not possible to tell what package name it uses without going to the package itself to see what name it uses.
This can make it slow to analyse code (for example to jump to a symbol definition). For example, say we want to find the definition of the symbol `X` in the expression `m.X`. If m isn't defined in the local package, it's necessary to obtain a copy of every imported package's code to see if it's defined by any of them. For large packages, that can take a long time (many seconds). Even with caching proxies, it can still take a long time if the set of dependencies has recently changed.
There's also the issue of cognitive overhead: just seeing an import statement is not sufficient to know what symbols that import statement brings into scope. For large programs, that overhead can be significant, and is worse when one or more of the dependencies are no longer available - the reader of the code is forced to guess what symbols are imported.
This issue is even worse when "." imports are used to bring all a package's exported symbols into local scope. Even though dot-imports are [frowned upon](https://github.com/golang/go/wiki/CodeReviewComments#import-dot), there is still [no universal consensus](https://groups.google.com/d/msg/golang-nuts/PcMZGicbAFg/llxRhxsbDAAJ) that they should be avoided.
The goimports tool already implicitly acknowledges this issue by adding an explicit package name when the symbol isn't clear from the import path. Although this helps, it doesn't solve the general case, because tools cannot assume that goimports has been run.
I propose the following:
- dot imports should be disallowed
- the imported symbol for a given import should be made predictable
This would mean that we can always look at any symbol in a piece of Go source code with local context only and know definitively whether it is defined in the local package or not, and if not, exactly which import path would need to be investigated to find the symbol.
## Dot imports
The official guidelines suggest that a dot import should only be used "to let the file pretend to be part of package foo even though it is not". This is a stylistic choice and strictly unnecessary. The tests can still use package-qualified names with only minor inconvenience (the same inconvenience that any external user will see).
Other than that, I believe the most common use is to make the imported package feel "close to a DSL". This seems to be actively opposed to the aims of Go, as [it makes the programs much harder to read](https://github.com/golang/go/wiki/CodeReviewComments#import-dot).
## Predictable imported symbols for imports
The simplest possibility here would be to *require* an import symbol for every import path. As many (most?) people use goimports to manage their import statements, this might not be too onerous a requirement.
Another approach would be to state that when lacking an explicit import package name, the package name must match a name derived from the import path. Possible rules might be:
- split the import path on /; the expected name is the last element.
- choose the longest valid Go identifier from the end of the import path.
- whatever rule goimports currently uses to determine whether it should add an explicit package name.
This means that we could carry on importing `"fmt"` without redundantly specifying `fmt "fmt"`, but this has the disadvantage that nothing else in the language specification talks about what's inside an import path.
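To illustrate the last-element rule with a few import paths (illustrative only):
```go
import (
	"fmt"                   // expected name: fmt
	"math/rand"             // expected name: rand (last path element)
	yaml "gopkg.in/yaml.v2" // derived name "yaml.v2" isn't a valid identifier, so an explicit name is required
)
```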
| LanguageChange,Proposal,LanguageChangeReview | high | Major |
386,202,872 | electron | Support z-ordering for BrowserView | **Is your feature request related to a problem? Please describe.**
Maybe? If the goal is to get people to move away from `<webview>`, it would be nice to be able to layer things on top of a BrowserView.
**Describe the solution you'd like**
A BrowserWindow hosting a BrowserView should be able to position an html element above (on top of) the BrowserView
**Describe alternatives you've considered**
I'm mainly using the `<webview>` because of this.
| enhancement :sparkles:,component/BrowserView | high | Critical |
386,206,360 | go | x/build/cmd/gomote: configure instance by an environment variable | I've been doing a fair amount of `gomote`-based debugging for the `cmd/go` file-locking changes, and wrote the following script to improve the ergonomics of gomote commands:
`~/bin/mote`:
```bash
#!/bin/sh
# Run the given gomote subcommand against the instance named in $GOMOTE.
SUBCMD=$1
shift
gomote "${SUBCMD}" "${GOMOTE}" "$@"
```
That allows me to eliminate stutter in `gomote` commands and still keep them repeatable.
Instead of:
```
$ export GOMOTE=$(gomote create darwin-amd64-10_12)
$ gomote push $GOMOTE && gomote run $GOMOTE go/src/make.bash
$ gomote run $GOMOTE go/bin/go test -short cmd/go
```
I can run:
```
$ export GOMOTE=$(gomote create darwin-amd64-10_12)
$ mote push && mote run go/src/make.bash
$ mote run go/bin/go test -short cmd/go
```
However, there is no fundamental reason why this should be a separate bash script.
@bradfitz, @dmitshur, @aclements: what do you think of pulling this into `gomote` proper? | Builders,FeatureRequest | low | Critical |
386,207,672 | go | cmd/internal/xcoff: move to x/debug/xcoff | Hi,
Since CL [149957](https://go-review.googlesource.com/c/go/+/149957), the tests inside gccgoimporter run as long as the gccgo command is available, which is the case on most AIX machines. However, it's only aware of ELF gccgo files; it doesn't recognize XCOFF files. To add that capability, cmd/internal/xcoff must be moved to x/debug/xcoff, as was suggested in issue [28037](https://github.com/golang/go/issues/28037).
Is that possible?
Moreover, the big archive format is only available on AIX, as far as I know. Therefore, is it okay if I implement functions to read it in x/debug/xcoff? Or should I rather do an x/debug/ar, as was suggested here:
https://github.com/golang/go/blob/master/src/cmd/internal/buildid/buildid.go#L121
/cc @ianlancetaylor @bradfitz
| NeedsDecision | low | Critical |
386,215,915 | rust | Borrow error in if statement inside match arm, but not when a match guard is used | https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=10d4259862f36ad0571fb43da5855401
The two functions `remove` and `remove_not_ok` should be equivalent, but the borrow checker only allows the first.
```Rust
// This method compiles
fn remove(&mut self, v: usize) -> Option<usize> {
let mut current = &mut self.head;
loop {
match current {
None => return None,
Some(node) if node.v == v => {
*current = node.next.take();
return Some(v);
},
Some(node) => {
current = &mut node.next;
}
}
}
}
// This method does not compile
fn remove_not_ok(&mut self, v: usize) -> Option<usize> {
let mut current = &mut self.head;
loop {
match current {
None => return None,
Some(node) => {
if node.v == v {
*current = node.next.take();
return Some(v);
} else {
current = &mut node.next;
}
}
}
}
}
```
The error is:
```
error[E0506]: cannot assign to `*current` because it is borrowed
--> src/main.rs:39:25
|
37 | Some(node) => {
| ---- borrow of `*current` occurs here
38 | if node.v == v {
39 | *current = node.next.take();
| ^^^^^^^^
| |
| assignment to borrowed `*current` occurs here
| borrow later used here
``` | T-compiler,A-NLL,NLL-polonius | low | Critical |
386,235,610 | create-react-app | No tests found on Windows with Jenkins | ### Is this a bug report?
Yes
### Did you try recovering your dependencies?
Yes
### Which terms did you search for in User Guide?
test
### Environment
From `npx create-react-app --info`:
`(node:2604) UnhandledPromiseRejectionWarning: Error: The system cannot find the path specified.`
```
System:
OS: Microsoft Windows Version 10.0.14393
CPU: x64 Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
Binaries:
Node: 10.13.0
npm: 6.4.1
npmPackages:
"@coreui/coreui": "^2.1.1",
"@coreui/icons": "0.3.0",
"@coreui/react": "^2.1.0",
"bootstrap": "^4.1.3",
"classnames": "^2.2.6",
"flag-icon-css": "^3.2.1",
"font-awesome": "^4.7.0",
"node-sass": "^4.10.0",
"prop-types": "^15.6.2",
"react": "^16.6.1",
"react-dom": "^16.6.1",
"react-helmet": "5.2.0",
"react-loadable": "^5.5.0",
"react-redux": "^5.1.0",
"react-router-bootstrap": "^0.24.4",
"react-router-dom": "^4.3.1",
"react-router-redux": "^5.0.0-alpha.8",
"react-scripts": "2.1.1",
"reactstrap": "^6.5.0",
"redux": "^4.0.1",
"redux-immutable": "4.0.0",
"redux-thunk": "^2.3.0",
"rimraf": "^2.6.2",
"simple-line-icons": "^2.4.1"
```
### Steps to Reproduce
1. Write test file and save it in `src` folder as `App.test.js`
2. `npm run react-scripts test --env=jsdom --no-watchman`
or
3. `npm react-scripts test --env=jsdom --coverage`
Reproduction repository: https://github.com/Saibamen/react_test
### Expected Behavior
Tests should be executed
### Actual Behavior
No tests found
```
react-scripts test --env=jsdom --no-watchman
00:02:06.736
00:02:11.011 No tests found
00:02:11.011 In C:\Program Files (x86)\Jenkins\workspace\xxx\src\Web\xxx.Web\ClientApp
00:02:11.011 50 files checked.
00:02:11.011 testMatch: C:\Program Files \(x86\)\Jenkins\workspace\xxx\src\Web\xxx.Web\ClientApp\src\**\__tests__\**\*.{js,jsx,ts,tsx},C:\Program Files \(x86\)\Jenkins\workspace\xxx\src\Web\xxx.Web\ClientApp\src\**\?(*.)(spec|test).{js,jsx,ts,tsx} - 0 matches
00:02:11.011 testPathIgnorePatterns: \\node_modules\\ - 50 matches
00:02:11.011 Pattern: - 0 matches
00:02:11.045 npm ERR! code ELIFECYCLE
00:02:11.046 npm ERR! errno 1
00:02:11.048 npm ERR! [email protected] test:ci: `react-scripts test --env=jsdom --no-watchman`
00:02:11.049 npm ERR! Exit status 1
00:02:11.049 npm ERR!
00:02:11.049 npm ERR! Failed at the [email protected] test:ci script.
00:02:11.050 npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
00:02:11.086
00:02:11.087 npm ERR! A complete log of this run can be found in:
00:02:11.087 npm ERR! C:\Users\jenkins\AppData\Roaming\npm-cache\_logs\2018-11-30T12_24_24_901Z-debug.log
00:02:11.109 Build step 'Execute Windows batch command' marked build as failure
``` | issue: needs investigation | low | Critical |
386,242,130 | go | crypto/x509: DN OU ordering |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
linux amd64
</pre></details>
### What did you do?
When getting the subject of a certificate through `cert.Subject.String()`, where the DN has multiple OUs, the OUs come out in reverse order and joined with a '+':
https://play.golang.org/p/hKtL6TEeln5
### What did you expect to see?
`CN=Example User,OU=Upper,OU=Lower,O=My Org,C=GB`
### What did you see instead?
`CN=Example User,OU=Lower+OU=Upper,O=My Org,C=GB`
| NeedsInvestigation | low | Minor |
386,257,715 | create-react-app | Change proxying to work more like hosting providers | ### Is this a bug report?
No
### Well then ... what?!
I want to discuss possibly changing the way proxying works in development.
Most static site hosting providers will run proxies/rewrites **only if a static file does not match the request**. See e.g. the rules about "shadowing" URLs in [Netlify's guide on Redirects](https://www.netlify.com/docs/redirects/#rewrites-and-proxying), or the "Hosting Priorities" section in [Firebase hosting's section about URL rewrites](https://firebase.google.com/docs/hosting/url-redirects-rewrites#section-rewrites). In both of these providers, if a request matches a static file, that file is served first **before any proxies/rewrites**.
This is in contrast to the way that proxying currently works in CRA, [where the proxy runs **before** checking static URLs](https://github.com/facebook/create-react-app/blob/ffb219d84d8bc88dc1b94878b6ae10c528c37a57/packages/react-scripts/config/webpackDevServer.config.js#L93-L97), both when using `pkg.proxy` and `src/setupProxy.js`. So a proxy in CRA is actually able to overwrite valid URLs that may be in your bundle! This can be really confusing. We should be able to **always** assume that requests for static files that are part of the bundle resolve to that file, period.
### How to fix
Now, the thing about webpack-dev-server is that it's like a black box, so you're never quite sure what URLs are in the bundle (i.e. pathnames it is able to serve) because it's hard (using webpack's API) to predict all the filenames. So e.g. if a request comes through for `/scripts/whatever-23abcd.js`, you have to first check with the compiler to see if it has a file with that name. If it does, webpack-dev-middleware (WDM) serves that file. If not, it doesn't.
So what I've typically had to do is to [do my proxying **after** WDM runs](https://github.com/unpkg/unpkg.com/blob/41806b211724093afa8774b74b3912fa7e88c29d/modules/createDevServer.js#L41-L49), not before it. If WDM can serve a URL, we should *always* let it do so. If not, let's see if we can proxy the request somewhere else.
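A minimal sketch of that ordering, assuming an Express server with webpack-dev-middleware and http-proxy-middleware (ports and config are illustrative):
```js
const express = require("express");
const webpack = require("webpack");
const webpackDevMiddleware = require("webpack-dev-middleware");
const { createProxyMiddleware } = require("http-proxy-middleware");

const app = express();
const compiler = webpack(require("./webpack.config"));

// 1. Let the compiler answer first: if the URL matches a bundle asset,
//    webpack-dev-middleware serves it and the request stops here.
app.use(webpackDevMiddleware(compiler));

// 2. Only requests that did NOT match a static asset fall through here.
app.use(createProxyMiddleware({ target: "http://localhost:4000" }));

app.listen(3000);
```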
Does that make sense? | issue: proposal | low | Critical |
386,277,022 | TypeScript | ResolvedModuleWithFailedLookupLocations does not publicly expose failedLookupLocations | **TypeScript Version:** 3.3.0-dev.20181130
**Search Terms:** ResolvedModuleWithFailedLookupLocations, failedLookupLocations
**Expected behavior:** `ts.ResolvedModuleWithFailedLookupLocations` has a `failedLookupLocations` property which reports locations where the lookup was tried and failed.
**Actual behavior:** `failedLookupLocations` is marked `@internal` and thus not available in the public typings. This is in contrast to `ResolvedTypeReferenceDirectiveWithFailedLookupLocations` which does expose the `failedLookupLocations`, which leads me to believe this was an omission and not an intentional hiding of the field.
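A minimal sketch of where this bites (illustrative arguments):
```ts
import * as ts from "typescript";
declare const host: ts.ModuleResolutionHost;
const result = ts.resolveModuleName("lodash", "/src/index.ts", {}, host);
// result.failedLookupLocations  <-- not visible in the public typings
```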
**Related Issues:** #28276 | Bug,API | low | Critical |
386,284,583 | go | x/build/maintner: Add Ability to Add GitHub and Gerrit Repositories while in SyncLoop | Currently if you call `SyncLoop` in a goroutine, and later call `TrackGitHub` or `TrackGerrit`, the corpus will not actually track that repository until the `sync` method returns and `SyncLoop` is called again.
This means that users of `corpus` that wish to add repositories to track on the fly need to maintain a custom written loop and call `Sync`.
| Builders | low | Minor |
386,298,138 | flutter | Doctor validator for minimum Gradle version | Context: https://github.com/flutter/flutter/issues/24757#issuecomment-442144708
> Is there anywhere to find the minimum Gradle version supported by Flutter? Or, if the minimum Gradle version changes, where can I find that info?
The developer was using Gradle 4.4, when flutter_tools was using 4.10.2 in its Gradle wrapper -- and this caused the build to fail with an NPE when Gradle was invoked.
We should add a validator to ensure that the user has a minimum version of Gradle installed. | tool,t: gradle,a: first hour,P2,team-tool,triaged-tool | low | Major |
386,307,853 | go | x/debug/cmd/viewcore: confusing error message when not specifying --exe |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/michael/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/michael/go"
GOPROXY=""
GORACE=""
GOROOT="/home/michael/sdk/go1.11.1"
GOTMPDIR=""
GOTOOLDIR="/home/michael/sdk/go1.11.1/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/tmp/vc/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build665857060=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
```
% viewcore /tmp/core.dcs-package-imp.23131.ex61.1543603411 breakdown
can't read DWARF info from /lib/x86_64-linux-gnu/libc-2.24.so: decoding dwarf section info at offset 0x0: too short
% viewcore /tmp/core.dcs-package-imp.23131.ex61.1543603411 --exe ./bin/dcs-package-importer breakdown
all 13451362304 100.00%
[…]
```
I'm not sure what's happening without --exe, but the error message should suggest I use --exe in that situation, I suppose? | NeedsInvestigation | low | Critical |
386,313,177 | pytorch | [c10d] Check that allgather/gather output tensors point to different storage | This will avoid the type of confusion in this comment: https://github.com/pytorch/pytorch/issues/14536#issuecomment-443257710.
List replication means the tensors refer to the same underlying storage. I can't think of a case where this would be expected behavior, so we can check and throw if two output tensors point to the same memory for these operations. | oncall: distributed,feature,triaged | low | Minor |
386,325,319 | flutter | Adding "copied" feedback to Sample Code UI on API doc | In recent study, on master channel's API doc (e.g. [actions](https://master-docs-flutter-io.firebaseapp.com/flutter/material/AppBar/actions.html)) we observed that participant was unsure if the sample code was successfully copied to clipboard --- after clicking on the copy all sample code button, she pressed ctrl + C again.
To further improve the usability of Sample code UI, we suggest this change: after clicking on it, show the message: "_Successfully copied to clipboard_". A brief messages popping up at the bottom of the screen (i.e. toast/[snackbar](https://material.io/design/components/snackbars.html#usage) in material design) should be enough.
cc: @jayoung-lee @gspencergoog @InMatrix | c: new feature,team,framework,from: study,d: api docs,P3,team-framework,triaged-framework | low | Major |
386,329,614 | TypeScript | USVString type definition |
**TypeScript Version:** 3.2.0-dev.201xxxxx
**Search Terms:**
USVString
**Code**
```ts
function example(uri: USVString) {
// Do some work
}
```
**Expected behavior:**
Up to TypeScript version 3.0.3, `USVString` was a type defined in `lib.dom.d.ts`.
In version 3.1, according to the breaking-changes page, some vendor-specific types were removed from lib.d.ts; a full list of removed types is [included](https://github.com/Microsoft/TypeScript/wiki/Breaking-Changes#typescript-31).
USVString is not on that list, yet its type definition was also removed.
USVString is defined under WebIDL, as can be seen [here](https://www.w3.org/TR/WebIDL/#idl-USVString).
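As a stop-gap, the alias can be restored locally, since WebIDL's USVString maps to a plain string in TypeScript:
```ts
type USVString = string;
```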
**Actual behavior:**
`error TS2304: Cannot find name 'USVString'.` | Suggestion,Breaking Change,Domain: lib.d.ts,Awaiting More Feedback | low | Critical |
386,371,175 | rust | Using absolute paths from std combined with no_implicit_prelude has no edition 2018 compat warning | This code compiles without warning on edition 2015 with `#![warn(rust_2018_compatibility)]` active, but fails to build on edition 2018 ([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=e5c4eae35ba9a23a8c07dd5d1b169dd6)):
```rust
#![no_implicit_prelude]
#![warn(rust_2018_compatibility)]
fn main() {
std::process::exit(0);
}
```
error:
```
error[E0433]: failed to resolve: use of undeclared type or module `std`
--> src/main.rs:6:5
|
6 | std::process::exit(0);
| ^^^ use of undeclared type or module `std`
```
Note: changing the path to instead use `::std::process::exit` compiles on both editions. | A-lints,T-compiler,C-bug,A-edition-2018 | low | Critical |
386,390,876 | pytorch | fc_without_bias / FCWithoutBias | Hello all,
what is the correct way to call `fc_without_bias` / `FCWithoutBias` using the Python API? I tried `model.FCWithoutBias` and `fc_without_bias` directly, but I always get:
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/model_helper.py", line 453, in __getattr__
','.join(workspace.C.nearby_opnames(op_type)) + ']'
AttributeError: Method fc_without_bias is not a registered operator. Did you mean: []
```
| caffe2 | low | Critical |
386,429,171 | TypeScript | Infer Generics Types, from Record<string<...> Aggregation for return type constraints | If you can imagine capturing type information at the same time as runtime information, were the inferred types literals match that those of the runtime types, so that one can implicitly at the same time
capture the typescript types and runtime information for the shape of the typescript type, in one shot.
This I have been doing quite successfully, and is quite powerfully. I find myself using what I term type constraints, to capture information at runtime in specific categories, that typings constraints impose. Which allows me to do advance record mixing.
This allows one to impose constraints on free form runtime information, to ensure
runtime information is correctly categorized for typing system performance of mixing.
## Example of return type constraints for categorization for mixing
```ts
interface IConstraints<Param1 extends 'Param1A' | 'Param1B',
    Param2 extends 'Param2A' | 'Param2B',
    Param3 extends 'Param3A' | 'Param3B'> {
    __Param1 : Param1
    __Param2 : Param2
    __Param3 : Param3
}
// Typically Record<string, IConstraints<...>> is a recursive record capture pattern, with nested records
class RecordMix<
    Dim1 extends Record<string, IConstraints<'Param1A', any, any>>,
    Dim2 extends Record<string, IConstraints<'Param1A', any, any>>,
    Dim3 extends Record<string, IConstraints<never, 'Param2B', 'Param3A'>>,
    Dim4 extends Record<string, IConstraints<never, 'Param2A', 'Param3B'>>>
{
    // Use the constructor to capture the runtime sets of information.
}
```
## Functionality as of current.
```ts
interface Builder<A extends string, B extends string>
{
__A : A,
__B : B
}
type AString = 'A' | 'E' | 'G';
type BString = 'B' | 'F' | 'H';
type RecordBuilder<A extends AString, B extends BString> = Record<string, Builder<A,B>>
const builderRecord = {
a : {} as any as Builder<'A','B'>,
e : {} as any as Builder<'E','F'>,
// g : {} as any as Builder<'G','F'>,
}
const builderRecordInvalid = {
a : {} as any as Builder<'A','B'>,
e : {} as any as Builder<'E','F'>,
g : {} as any as Builder<'G','F'>,
}
function Capture<A extends 'A' |'E', B extends BString, Rec extends RecordBuilder<A, B>>(record : Rec)
{
return {} as any as {Ra:A,Rb:B};
}
const result = {a:Capture(builderRecord)} // OK
const resultInvalid = {a:Capture(builderRecordInvalid)} // Error: input type constraints not met
// Type
const result: {
a: {
Ra: "A" | "E";
Rb: BString;
};
}
```
## Functionality Feature Request, using infer of aggregate
```ts
function Capture<A extends 'A' |'E', B extends BString, Rec extends RecordBuilder<infer A, infer B>>(record : Rec)
{
return {} as any as {Ra:A,Rb:B};
}
const result = {a:Capture(builderRecord)} // OK
const resultInvalid = {a:Capture(builderRecordInvalid)} // Error: input type constraints not met
// Type Results
const result: {
a: {
Ra: "A" | "E";
Rb: "B" | "F";
};
}
```
## Functionality Feature Request current mechanisms
```ts
type ExtractRecordComponent<Shape extends Record<string,any>, T extends Record<string, Shape>, Key extends string> = ({
[K in keyof T] : T[K][Key]
})[keyof T]
function Capture<A extends 'A' |'E', B extends BString, Rec extends RecordBuilder<A, B>>(record : Rec)
: {Ra:ExtractRecordComponent<Builder<any,any>,Rec,'__A'>, Rb:ExtractRecordComponent<Builder<any,any>,Rec,'__B'>}
{
return {} as any;
}
const result = {a:Capture(builderRecord)} // OK
const resultInvalid = {a:Capture(builderRecordInvalid)} // Error: input type constraints not met
// Type Results
const result: {
a: {
Ra: "A" | "E";
Rb: "B" | "F";
};
}
```
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
386,433,181 | TypeScript | Generic TypeSection for specialization of nested Record<string,.. Capturing... mechanics | The ability to define a typeSection, were one can use generics to narrow and customize the functions and statics for record structures.
A union syntax for the Record<string,string | TypeSectionName>
and returnType function() : string |TypeSectionName
Would be used to more specialize the typescript inside a typesections
```ts
typeSection TypeSectionName<Gen1 extends 'Gen1A' | 'Gen1B', Gen2 extends 'Gen2A' | 'Gen2B'>
// whole bunch of functions and statics, which can't be customized, narrowed more specifically
// from a generic pattern to more specifics, when capturing runtime information in shapes.
function ArrayItems<Param1 extends string, Gen1Mod extends Gen1,
Gen2Mod extends Gen2, ReturnTypeResult extends GenBuilder<ArrayItems,Gen1Mod, Gen2Mod>>(arrayItems : Record<string, infer Param1>, gen1 : infer Gen1Mod, gen2 : infer Gen2Mod) :
ReturnTypeResult
{
return new ArrayItems(arrayItems, gen1 : gen2);
}
typeSectionEnd
type GenBuilderNarrowMap<Gen1 extends 'Gen1A' | 'Gen1B', Gen2 extends 'Gen2A' | 'Gen2B'> =
{
'Gen1A' : 'Required'
'Gen1B' : 'Optional'
}[Gen1]
|
{
'Gen2A' : 'Nullable'
'Gen2B' : never
}[Gen2]
type GenBuilder<
Base extends GenBuilder<any,any>,
Gen1 extends 'Gen1A' | 'Gen1B', Gen2 extends 'Gen2A' | 'Gen2B',
Keys = GenBuilderNarrowMap<Gen1, Gen2>>
= Pick<Base<Gen1Mod, Gen2Mod>, Keys> & Remove<Base<Gen1,Gen2>, Keys>
interface GenBuilder<Gen1Mod extends Gen1, Gen2Mod extends Gen2>
{
Required() : any
Optional() : any
Nullable() : any
}
class ArrayItems<Gen1 extends 'Gen1A' | 'Gen1B', Gen2 extends 'Gen2A' | 'Gen2B'>
extends GenBuilder<Gen1,Gen2>
{
constructor(public config : {Required:boolean} )
{}
// Where the typesection woul futher mutate and constraint the type section by narrow it, for
// any instance mentiond after the use of required.
Required<ReturnTypeResults extends GenBuilde<Remove<Gen1,'Gen1A'>,Gen2>() : TypeSectionName<Ge1,Gen2>,ReturnTypeResults // Here return same structure, using the builder pattern to futher constrain the avaliable
{
this.Required = false;
new Arrayitems<'Gen1A', Gen2>(this);
}
Optional() : ArrayItems<'Gen2B',Gen2>
Nullable() : ArrayItems<Gen1, 'Gen2A'>
ArraySpesific() : number;
}
interface IConstraints<Param1 extends 'Param1A' | 'Param1B',
Param2 extends 'Param2A' | 'Param2B',
Param3 extends 'Param3A' | 'Param3B'> = {
__Param1 : Param1
__Param2 : Param2
__Param3 : Param3
}
// Typically Record<string, IConstraints<...>> is a recursive record capture pattern, with nested records
interface RecordContraints<Gen1 extends 'Gen1A' | 'Gen1B', Gen2 extends 'Gen2A' | 'Gen2B'>
// TypeSectionName means that for the next inner nested evaluations of the record patterns,
// all those type signatures will have been made more specific by the TypeSectionName generic specialization.
extends Record<string, TypeSectionName<Ge1,Gen2>, (RecordContraints<Gen1, Gen2>**ByNeastedFunctionInputOtherwise** | IConstraints<Gen1,Gen2,any>)
{
// Use the constructor to capture the runtime sets of information.
}
class Categorize<
Cat1 extends RecordContraints<'Gen1A',any>,
Cat2 extends RecordContraints<'Gen1B','Gen2A'>,
Cat3 extends RecordContraints<'Gen1B','Gen2B'>>
{
constructor(public Cat1 : Cat1, public Cat2 : Cat2, public Cat3 : Cat3)
{
}
// All of this builds up to the following, which allows
// mixing of types to produce the correct well-formed record structure for typically 8/16 different set permutations,
// to ensure a TypeScript project is fully type checked.
newRecord(rec : Ca1 & Cat2) : void
updateRecord(rec : Partial<Cat2> & Cat3) : void
results() : Required<Ca1> & Partial<Cat2> & Cat3
}
const category = new Categorize(
{
// These are now specialized based on this section's instance of the generics.
ArrayItems({})
},
{
// These are now specialized based on this section's instance of the generics.
ArrayItems({})
},
{
ArrayItems({})
}
)
```
See "Infer Generics Types, from Record<string, ...> Aggregation for return type constraints" for how both features work together, to get the bigger picture:
https://github.com/Microsoft/TypeScript/issues/28787
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Minor |
386,443,465 | go | x/text/currency: rouble symbol is only printed with language.Russian | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ainar/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/ainar/go"
GOPROXY=""
GORACE=""
GOROOT="/home/ainar/go/go1.11"
GOTMPDIR=""
GOTOOLDIR="/home/ainar/go/go1.11/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build112907181=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
https://play.golang.org/p/umgrxwND7k4
(Using floats because #15274.)
### What did you expect to see?
<pre>
β½ 100.00 = β¬ 1.33
100.00 β½ = 1.33 β¬
100.00 β½ = 1.33 β¬
</pre>
### What did you see instead?
<pre>
RUB 100.00 = β¬ 1.33
RUB 100.00 = β¬ 1.33
β½ 100.00 = β¬ 1.33
</pre>
There is #14308 about the symbol position, but I don't understand why the rouble symbol is only used when I use `language.Russian`, while the euro symbol is used in all cases. | NeedsInvestigation | low | Critical |
386,472,263 | rust | Known deviations of macro paths and 2018 import paths from the "uniform path" model | "Uniform path" model for macros/imports means that if macro/import path compiles without errors, it's resolved in exactly same way as non-macro/import path.
(Possible differences in resolution of macro/import paths and other paths are caused by the fact that macro/import paths are resolved early, when module structure of the crate is still in flux.)
There are two known deviations from this model.
Import/macro resolution never sees local variables (`let x`) and generic parameters (`T` in `fn f<T>() { ... }`) - we don't have necessary infrastructure to do that during early resolution right now.
There's still some future-proofing in place turning imports referring to local variables and generic parameters into errors - https://github.com/rust-lang/rust/pull/55884/commits/22a13ef1584a06cd5b08c57b285925d5b4ebf0b6, but that future proofing has two holes.
First, macro paths are not future-proofed:
```rust
mod T { // T1
pub macro mac() {}
}
fn f<T>() { // T2
T::mac!(); // Should be an error because T2 shadows T1, but it's not.
}
```
To fix this, expanded macros need to leave better traces for late resolution, so that positions of expanded macros relative to generic parameters are known.
**UPDATE**: Fixed in https://github.com/rust-lang/rust/pull/56759 --> Second, future-proofing for non-renamed single-segment imports referring to generic parameters doesn't work due to an implementation issue:
```rust
fn self_import<T>() {
use T; // FIXME Should be an error, but future-proofing fails due to `use T` being "self-confirming"
}
```
To fix this we need to stop treating the import `use T;` as "self-confirming" during future-proofing resolution (similarly to how it's done for single-segment imports in https://github.com/rust-lang/rust/pull/56392). <-- | A-resolve,T-compiler,C-bug | low | Critical |
386,475,519 | godot | Resource path isn't case sensitive in Godot editor when loaded using preload(), but after exporting it becomes case sensitive. | **Godot version:**
3.0.6
**Issue description:**
When running the game in Godot editor, the resource path loaded using `preload()` **isn't** case sensitive. However, after exporting the game, the resource path loaded using `preload()` becomes case sensitive.
This could cause confusion for developers who make an upper/lower-case typo in scripts: even if the play-test in the Godot editor is error-free, the exported game crashes.
**Steps to reproduce:**
1. Create a new scene, say **test.tscn**.
2. Create a new script, attach it to a random node in the default scene. Add the following line to the script:
```
var test_node = preload("res://Test.tscn")
```
3. Run the default scene. No error occurs. (Meaning `preload()` isn't case sensitive.)
4. Now export the project using the official templates. Run the standalone builds, and an error occurs saying "Test.tscn" can't be loaded. (`SCRIPT ERROR: GDScript::load_byte_code: Parse Error: Can't preload resource at path: res://Test.tscn`). It becomes case sensitive.
**Minimal reproduction project:**
[test.zip](https://github.com/godotengine/godot/files/2636068/test.zip)
| bug,platform:windows,platform:macos,topic:porting,confirmed,topic:export | low | Critical |
386,476,000 | rust | Feature request: String::replace_in_place | Essentially, this can avoid allocating extra memory under the condition that the replacement is smaller than the searched expression. If the replacement is larger, a `VecDeque<u8>` can be kept to minimise the amount of extra bytes allocated. We could also pre-allocate an array on the stack and do some sort of `SmallVecDeque`.
Having a `String::replace_in_place` could also allow a `Cow::replace_in_place` which only allocates a new string if there's a change.
I don't have the time for this but I figured I'd post this here anyway, as a tracking issue will be needed anyway. | T-libs-api,C-feature-request | low | Major |
386,510,515 | go | cmd/link: linking a c-shared archive into a Go program fails on Darwin |
#### What did you do?
I tried to rebuild the latest tip version with all.bash on my Mac.
#### What did you expect to see?
No error.
#### What did you see instead?
```
$ git rev-parse HEAD
b397248168fcb26400ac6afb88bf6080497a819e
```
```
##### ../misc/cgo/testcshared
--- FAIL: TestGo2C2Go (4.85s)
cshared_test.go:616: run: [go build -buildmode=c-shared -o /var/folders/pb/c7_4_d355ng5zfm4r_jg2gg00000gn/T/cshared-TestGo2C2Go380654146/libtestgo2c2go.dylib go2c2go/go]
cshared_test.go:641: run: [go build -o /var/folders/pb/c7_4_d355ng5zfm4r_jg2gg00000gn/T/cshared-TestGo2C2Go380654146/m1 go2c2go/m1]
cshared_test.go:642: command failed: [/var/folders/pb/c7_4_d355ng5zfm4r_jg2gg00000gn/T/cshared-TestGo2C2Go380654146/m1]
signal: abort trap
dyld: Library not loaded: libtestgo2c2go.dylib
Referenced from: /var/folders/pb/c7_4_d355ng5zfm4r_jg2gg00000gn/T/cshared-TestGo2C2Go380654146/m1
Reason: image not found
FAIL
2018/12/02 08:32:51 Failed: exit status 1
```
#### Does this issue reproduce with the latest release (go1.11.2)?
N/A
#### System details
```
go version devel +b397248168 Sat Dec 1 17:31:02 2018 +0000 darwin/amd64
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/yoshiki/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/yoshiki/gocode:/Users/yoshiki/exercises/gpl:/Users/yoshiki/oak"
GOPROXY=""
GORACE=""
GOROOT="/Users/yoshiki/tools/go"
GOTMPDIR=""
GOTOOLDIR="/Users/yoshiki/tools/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
GOROOT/bin/go version: go version devel +b397248168 Sat Dec 1 17:31:02 2018 +0000 darwin/amd64
GOROOT/bin/go tool compile -V: compile version devel +b397248168 Sat Dec 1 17:31:02 2018 +0000
uname -v: Darwin Kernel Version 18.2.0: Fri Oct 5 19:41:49 PDT 2018; root:xnu-4903.221.2~2/RELEASE_X86_64
ProductName: Mac OS X
ProductVersion: 10.14.1
BuildVersion: 18B75
lldb --version: lldb-1000.0.38.2
Swift-4.2
```
| help wanted,OS-Darwin,NeedsInvestigation | low | Critical |
386,510,890 | rust | Unhelpful error message when multiple lifetimes have the same name | I was just looking for some lifetime errors for a project, and I came across the following example:
https://stackoverflow.com/questions/24847331/rust-lifetime-error-expected-concrete-lifetime-but-found-bound-lifetime
The solution in that thread is correct, but the error message is somewhat unhelpful with today's stable and nightly compilers:
```
error[E0308]: method not compatible with trait
--> src/l3.rs:19:5
|
19 | fn to_c(&self, r: &'a Ref) -> Container<'a> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ lifetime mismatch
|
= note: expected type `fn(&l3::ContainerB<'a>, &'a l3::Ref) -> l3::Container<'a>`
found type `fn(&l3::ContainerB<'a>, &'a l3::Ref) -> l3::Container<'a>`
note: the lifetime 'a as defined on the method body at 19:5...
--> src/l3.rs:19:5
|
19 | fn to_c(&self, r: &'a Ref) -> Container<'a> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: ...does not necessarily outlive the lifetime 'a as defined on the impl at 17:6
--> src/l3.rs:17:6
|
17 | impl<'a> ToC for ContainerB<'a> {
| ^^
error[E0308]: method not compatible with trait
--> src/l3.rs:19:5
|
19 | fn to_c(&self, r: &'a Ref) -> Container<'a> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ lifetime mismatch
|
= note: expected type `fn(&l3::ContainerB<'a>, &'a l3::Ref) -> l3::Container<'a>`
found type `fn(&l3::ContainerB<'a>, &'a l3::Ref) -> l3::Container<'a>`
note: the lifetime 'a as defined on the impl at 17:6...
--> src/l3.rs:17:6
|
17 | impl<'a> ToC for ContainerB<'a> {
| ^^
note: ...does not necessarily outlive the lifetime 'a as defined on the method body at 19:5
--> src/l3.rs:19:5
|
19 | fn to_c(&self, r: &'a Ref) -> Container<'a> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to 2 previous errors
```
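For reference, here is a minimal sketch of the shape of code that produces this diagnostic (reconstructed from the error output above, not the exact source from the linked thread):
```rust
struct Ref;
struct Container<'a>(&'a Ref);

trait ToC {
    // The trait declares a fresh lifetime bound on the method itself.
    fn to_c<'a>(&self, r: &'a Ref) -> Container<'a>;
}

struct ContainerB<'a>(&'a Ref);

impl<'a> ToC for ContainerB<'a> {
    // ERROR: this `'a` is the impl's lifetime, not a fresh method-level
    // one, yet both render as `'a` in the diagnostic.
    fn to_c(&self, r: &'a Ref) -> Container<'a> {
        Container(r)
    }
}
```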
Basically the problem is that there are two lifetimes called `'a` (one on the impl, one on the method), and the "expected" and "found" types use different `'a`s, so the two lines in the note look identical. | C-enhancement,A-diagnostics,A-lifetimes,T-compiler,D-confusing,D-newcomer-roadblock | low | Critical |
386,536,907 | opencv | Weird result by using connectedComponentsWithStats | ##### System information (version)
Here is my testing code:
```cpp
#include <opencv2/opencv.hpp>

cv::Mat binTest = cv::Mat(1000, 2000, CV_8UC1, cv::Scalar(255));
// find the connected components
cv::Mat _dstImg, __stat, _centroid;
int cc = cv::connectedComponentsWithStats(binTest, _dstImg, __stat, _centroid,
                                          8, CV_32S, cv::ConnectedComponentsAlgorithmsTypes::CCL_GRANA);
```
The resulting `__stat` is:
```
[2147483647, 2147483647, 2, 2, 0;
 0, 0, 2000, 1000, 2000000]
```
The function reports two components, but there is only one object in `binTest`. | category: java bindings,incomplete,needs investigation | low | Minor |
386,538,037 | TypeScript | Generic Type constraint from base classes, interfaces (Pass through) | This proposal would allow an `extends` type constraint to be taken from an abstract specification on a base class or interface.
This would prevent duplication (having to define the same type constraint twice) and reduce the need for unnecessary intermediate interfaces.
```.ts
type ID = string
interface IShape<TID extends ID, Neasted>{
__ID: TID;
__Neasted : Neasted
}
interface ITSShape<T, ID>
{
__tsType: T;
__ID: ID;
}
interface IShapeType extends IShape<'T', boolean | number | null>
{
}
```
## Not possible
```.ts
interface IShapeTSType<T extends ['__Neasted']> extends IShapeType
{
__tsType : T;
}
interface IShapeTSType<T extends ['__Neasted']> extends IShape<'T', boolean | number | string>
{
__tsType : T;
}
interface IShapeTSType<T extends this['__Neasted']> extends IShapeType
{
__tsType : T;
}
interface IShapeTSType<T extends TShape['__Neasted'], TShape extends IShapeType> extends TShape
{
__tsType : T;
}
```
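For contrast, a sketch of the duplication this currently forces: the nested-type constraint has to be restated as an extra type parameter, repeating what `IShape` already declares.
```.ts
interface IShapeTSType<TNeasted extends boolean | number | null, T extends TNeasted>
    extends IShape<'T', TNeasted>
{
    __tsType: T;
}
```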
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Minor |
386,543,647 | TypeScript | Generic Constraint using implements or equals | Sometimes one wants to capture generic TypeScript types and runtime information in one go, without using experimental decorators, which also have limitations when it comes to categorizing types upfront: they would require extra routines to check that an interface only uses a valid subset of types at the point where the interface is defined.
One doesn't want to evaluate at the level of string literals here, but at the level of primitive types.
What I would like to propose is a new keyword, call it `equals`, which would check that the type equals (or maybe `implements`) the constraint. It would perform the same logic as `extends`, but would not evaluate `string | boolean` primitives at the literal or deconstructed level of the type; it would simply stop there. Almost like named typing, in a way.
## Search Terms
generic type implements instead of extends
generic implements
## Suggestion
```.ts
// Depending on how you read this statement: T equals boolean or string explicitly,
// or T implements boolean | string.
interface TSType<T equals boolean | string | number | null | undefined | readonly>
{
__tsType : T
}
type rr = TSType<true> // invalid type
type rr = TSType<false> // invalid type
type rr = TSType<'sdf'> // invalid type
type rr = TSType<234> // invalid type
type rr = TSType<234 | string> // invalid type
type rr = TSType<'sdf' | string> // invalid type
type rr = TSType<true | boolean> // invalid type
type rr = TSType<boolean> // valid types
type rr = TSType<string> // valid types
type rr = TSType<number> // valid types
type rr = TSType<undefined> // valid types
type rr = TSType<null> // valid types
type rr = TSType<readonly> // valid types
type rr = TSType<number | readonly> // valid types
type rr = TSType<string | number | readonly | undefined> // valid types
```
## Extract routine for ensuring decorators class for type mixing , conform to specific set of constraints
This method, however, like all others for nested records and arrays, makes it a whole lot more complicated to evaluate the typings. I am actually thinking of proposing a symbol type with identifiers for structure, which would make it simpler to iterate over nested record structures.
```.ts
type MeetsConstraints<T extends Record<string, any>,
    TNullable extends null | never,
    TRequired extends undefined | never,
    TReadonly extends readonly | never
    // Defaults, but there is no mechanism for this yet as it is not standard;
    // it would have to use annotation type interrogation, if implemented in the future.
> = {
    [K in keyof T]: T[K] implements TNullable | TRequired | TReadonly ? 'T' : never
}[keyof T]

// typically this would be an extends constraint for some other class or generic
type valid = MeetsConstraints<...> extends 'T' ? 'good' : TError('bad')
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
386,580,631 | TypeScript | cyclic dependency and main/module package.json breaks structural typing | **TypeScript Version:** 3.2.1
**Code**
See this repository reproducing the issue: https://github.com/AlexGalays/repro-ts-bug-module-duck
Just call `yarn` and check `index.ts`.
**Expected behavior:**
It should compile
**Actual behavior:**
With the introduction of a cycle in the lib modules (Option.toResult, and Result.toOption)
an Option from the /es folder is no longer compatible with an Option from the /commonjs folder (which is the reference for typings in package.json) even though they are structurally identical.

| Bug | low | Critical |
386,584,614 | godot | JSON Parser modifies long numbers | **Godot version:**
Version 3.0.6 stable official
**OS/device including version:**
Windows 10 Home 64
NVidia 1060 gtx
**Issue description:**
When a JSON string contains a number with 17 or more digits, the parser modifies the digits after the 16th position and replaces them with 0, except for the 17th digit, which seems to be rounded in a strange way. For instance, the text `{"sessionid":2178736780526359416}` will return the value `2178736780526359300` for sessionid. (This looks like the number being parsed into a 64-bit float, which can only represent about 15-17 significant decimal digits.)
When the number is a string, with quotes, the problem doesn't occur.
**Steps to reproduce:**
run the following code:
`print(JSON.parse('{"sessionid":2178736780526359412}').result.sessionid)`
It will return 2178736780526359300
| discussion,topic:core,documentation | low | Major |
386,592,139 | rust | Enable rlib-only libstd build (no dylib) | For cross-building libstd (for use e.g. with [miri](https://github.com/solson/miri/)), it would be great if one could make an rlib-only libstd build. This is because building a dylib invokes the native linker, and that fails when the target architectures is too foreign and/or not supported by the installed C toolchain. (Currently, we build libstd both as rlib and dylib.)
So my goal was to equip libstd with a feature flag to control whether it is built as a dylib (next to the rlib) or not. To this end, I patched [`collect_crate_types`](https://github.com/rust-lang/rust/blob/8660eba2b9bec5b0fe971b7281f79e79c2df2fae/src/librustc_driver/driver.rs#L1500) such that the crate types passed on the command-line and the ones given as attributes are merged (instead of the command-line taking precedence). That means I can use `#![cfg_attr(dylib, crate_type = "dylib")]` to make a `dylib` feature enable a dylib build. That seems to work, however, there is a problem: During bootstrap, rustbuild uses cargo's stamp file to determine which files to copy into the sysroot, and that stamp file now no longer includes the `libstd.so`, breaking the build.
I wonder if there is maybe a better way to achieve an rlib-only libstd build?
Cc @eddyb @alexcrichton any advice? | A-driver,T-compiler,C-feature-request | low | Major |
386,603,286 | pytorch | [Caffe2] Exception when creating gradient for [Cast] SquaredL2Distance as output layer of CNN network | ## π Bug
I am getting the following error when I use the SquaredL2Distance operator as the output layer of my CNN network **(1st attempt)**:
```
dist = model.net.SquaredL2Distance([label, fc9_], 'dist')
predictions = dist.AveragedLoss([], ['predictions'])
```
```
carlos@carlos-ubuntu:~/Documents/git/Caffe2_scripts/caffe2_torcs_predictor$ python CNNTrainer_dpnet_dpnet.py
GPU mode selected
sgd optimizer selected
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1202 21:31:09.273607 27931 operator.cc:89] Operator Conv does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "data" input: "conv1__w" input: "conv1__b" output: "conv1_" name: "" type: "Conv" arg { name: "kernel" i: 11 } arg { name: "pad_l" i: 4 } arg { name: "pad_b" i: 4 } arg { name: "exhaustive_search" i: 0 } arg { name: "stride" i: 4 } arg { name: "pad_r" i: 3 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 5 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
W1202 21:31:09.273897 27931 operator.cc:89] Operator Conv does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "data" input: "conv1__w" input: "conv1__b" output: "conv1_" name: "" type: "Conv" arg { name: "kernel" i: 11 } arg { name: "pad_l" i: 4 } arg { name: "pad_b" i: 4 } arg { name: "exhaustive_search" i: 0 } arg { name: "stride" i: 4 } arg { name: "pad_r" i: 3 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 5 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
I1202 21:31:09.273910 27931 operator.cc:167] Engine CUDNN is not available for operator Conv.
W1202 21:31:09.275804 27931 operator.cc:89] Operator MaxPool does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "conv1_" output: "pool1_" name: "" type: "MaxPool" arg { name: "kernel" i: 3 } arg { name: "pad_l" i: 1 } arg { name: "pad_b" i: 1 } arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "stride" i: 2 } arg { name: "pad_r" i: 0 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 1 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
W1202 21:31:09.276072 27931 operator.cc:89] Operator MaxPool does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "conv1_" output: "pool1_" name: "" type: "MaxPool" arg { name: "kernel" i: 3 } arg { name: "pad_l" i: 1 } arg { name: "pad_b" i: 1 } arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "stride" i: 2 } arg { name: "pad_r" i: 0 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 1 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
I1202 21:31:09.276083 27931 operator.cc:167] Engine CUDNN is not available for operator MaxPool.
W1202 21:31:09.276993 27931 operator.cc:89] Operator MaxPool does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "conv5_" output: "pool5_" name: "" type: "MaxPool" arg { name: "kernel" i: 3 } arg { name: "pad_l" i: 1 } arg { name: "pad_b" i: 0 } arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "stride" i: 2 } arg { name: "pad_r" i: 1 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 1 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
W1202 21:31:09.277192 27931 operator.cc:89] Operator MaxPool does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "conv5_" output: "pool5_" name: "" type: "MaxPool" arg { name: "kernel" i: 3 } arg { name: "pad_l" i: 1 } arg { name: "pad_b" i: 0 } arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "stride" i: 2 } arg { name: "pad_r" i: 1 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 1 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
I1202 21:31:09.277199 27931 operator.cc:167] Engine CUDNN is not available for operator MaxPool.
W1202 21:31:13.799096 27931 operator.cc:89] Operator ConvGradient does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "data" input: "conv1__w" input: "conv1__grad" output: "conv1__w_grad" output: "conv1__b_grad" output: "data_grad" name: "" type: "ConvGradient" arg { name: "kernel" i: 11 } arg { name: "pad_l" i: 4 } arg { name: "pad_b" i: 4 } arg { name: "exhaustive_search" i: 0 } arg { name: "stride" i: 4 } arg { name: "pad_r" i: 3 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 5 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN" is_gradient_op: true
W1202 21:31:13.799773 27931 operator.cc:89] Operator ConvGradient does not support the requested feature. Msg: The current padding scheme leads to unequal padding on the left and right, which is not supported by cudnn.. Proto is: input: "data" input: "conv1__w" input: "conv1__grad" output: "conv1__w_grad" output: "conv1__b_grad" output: "data_grad" name: "" type: "ConvGradient" arg { name: "kernel" i: 11 } arg { name: "pad_l" i: 4 } arg { name: "pad_b" i: 4 } arg { name: "exhaustive_search" i: 0 } arg { name: "stride" i: 4 } arg { name: "pad_r" i: 3 } arg { name: "order" s: "NHWC" } arg { name: "pad_t" i: 5 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN" is_gradient_op: true
I1202 21:31:13.799808 27931 operator.cc:167] Engine CUDNN is not available for operator ConvGradient.
== Starting Training for 100 epochs ==
WARNING:caffe2.python.workspace:Original python traceback for operator `28` in network `train_net` in exception above (most recent call last):
WARNING:caffe2.python.workspace: File "CNNTrainer_dpnet_dpnet.py", line 24, in <module>
WARNING:caffe2.python.workspace: File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 146, in train
WARNING:caffe2.python.workspace: File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 86, in create_model
Traceback (most recent call last):
File "CNNTrainer_dpnet_dpnet.py", line 24, in <module>
stepsize=8000
File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 159, in train
workspace.RunNet(train_model.net)
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 217, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 178, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at tensor.h:495] IsType<T>(). Tensor type mismatch, caller expects elements to be float while tensor contains double Error from operator:
input: "label" input: "fc9_" output: "dist" name: "" type: "SquaredL2Distance" device_option { device_type: 1 cuda_gpu_id: 0 }
** while accessing input: label
```
Then I tried to fix the error by converting `label` to float using the Cast operator (the label is actually stored as float64; see the dataset description below) **(2nd attempt)**:
```
label_float = model.Cast(label, None, to=core.DataType.FLOAT)
dist = model.net.SquaredL2Distance([label_float, fc9_], 'dist')
predictions = dist.AveragedLoss([], ['predictions'])
```
However, I got the following error:
```
carlos@carlos-ubuntu:~/Documents/git/Caffe2_scripts/caffe2_torcs_predictor$ python CNNTrainer_dpnet_dpnet.py
GPU mode selected
Traceback (most recent call last):
File "CNNTrainer_dpnet_dpnet.py", line 24, in <module>
stepsize=8000
File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 153, in train
self.add_training_operators(train_model, predictions, label, device_opts, opt_type, base_learning_rate, policy, stepsize, epsilon, beta1, beta2, gamma, momentum)
File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 103, in add_training_operators
model.AddGradientOperators([loss])
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/model_helper.py", line 335, in AddGradientOperators
self.grad_map = self.net.AddGradientOperators(*args, **kwargs)
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/core.py", line 1840, in AddGradientOperators
self._net.op[skip:], ys)
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/core.py", line 1107, in GetBackwardPass
return ir.GetBackwardPass(ys)
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/core.py", line 982, in GetBackwardPass
forward_op_idx, all_input_to_grad)
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/core.py", line 932, in _GenerateGradientsForForwardOp
forward_op, g_output)
File "/home/carlos/Documents/git/pytorch/build/caffe2/python/core.py", line 1080, in GetGradientForOp
format(op.type, e, str(op))
Exception: Exception when creating gradient for [Cast]:[enforce fail at cast_op.cc:139] argsHelper.HasSingleArgumentOfType<string>("from_type") || argsHelper.HasSingleArgumentOfType<int>("from_type"). Argument 'from_type' of type int or string is required to get the gradient of CastOp .
Op:
input: "label"
output: "train_net/Cast"
name: ""
type: "Cast"
arg {
name: "to"
i: 1
}
device_option {
device_type: 1
cuda_gpu_id: 0
}
```
Furthermore, I created the LMDB training dataset which stores the image data as uint8 and the label as a multivalue of float64:
```
key: 00000001
image_data: shape: (210, 280, 3) type: uint8
indicators: shape: (14,) type: float64
```
**How can I fix the error?**
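For reference, a possible direction (untested sketch, reusing only calls already present in the code below): cast the label to float once in `add_input` and stop its gradient there, mirroring how `data` is handled, so that `AddGradientOperators` never needs a gradient for the Cast:
```python
# Untested sketch: cast the float64 label to float32 when reading the batch
# and cut the backward path, so no Cast gradient is ever requested.
def add_input(self, model, batch_size, db, db_type, device_opts):
    with core.DeviceScope(device_opts):
        data_uint8, label_raw = brew.db_input(
            model,
            blobs_out=["data_uint8", "label_raw"],
            batch_size=batch_size,
            db=db,
            db_type=db_type,
        )
        data = model.Cast(data_uint8, "data", to=core.DataType.FLOAT)
        data = model.Scale(data, data, scale=float(1. / 256))
        data = model.StopGradient(data, data)
        # the label carries no gradient, so stop it right after the cast
        label = model.Cast(label_raw, "label", to=core.DataType.FLOAT)
        label = model.StopGradient(label, label)
        return data, label
```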
## To Reproduce
Steps to reproduce the behavior:
1. **The project consists of two files: a trainer and a creator. Run the trainer, which calls the creator's train function.**
Trainer.py:
```
import logging
import CNNCreator_dpnet_dpnet
if __name__ == "__main__":
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()
handler = logging.FileHandler("train.log", "w", encoding=None, delay="true")
logger.addHandler(handler)
dpnet_dpnet = CNNCreator_dpnet_dpnet.CNNCreator_dpnet_dpnet()
dpnet_dpnet.train(
num_epoch=100,
batch_size=32,
context='gpu',
opt_type='sgd',
base_learning_rate=0.01,
policy='step',
stepsize=8000
)
```
CNNCreator_dpnet_dpnet.py:
```
from caffe2.python import workspace, core, model_helper, brew, optimizer
from caffe2.python.predictor import mobile_exporter
from caffe2.proto import caffe2_pb2
import numpy as np
import logging
import os
import sys
import lmdb
import leveldb
class CNNCreator_dpnet_dpnet:
module = None
_current_dir_ = os.path.join('./')
_data_dir_ = os.path.join(_current_dir_, 'data', 'dpnet_dpnet')
_model_dir_ = os.path.join(_current_dir_, 'model', 'dpnet_dpnet')
INIT_NET = os.path.join(_model_dir_, 'init_net.pb')
PREDICT_NET = os.path.join(_model_dir_, 'predict_net.pb')
def add_input(self, model, batch_size, db, db_type, device_opts):
with core.DeviceScope(device_opts):
# load the data
data_uint8, label = brew.db_input(
model,
blobs_out=["data_uint8", "label"],
batch_size=batch_size,
db=db,
db_type=db_type,
)
# cast the data to float
data = model.Cast(data_uint8, "data", to=core.DataType.FLOAT)
# scale data from [0,255] down to [0,1]
data = model.Scale(data, data, scale=float(1./256))
# don't need the gradient for the backward pass
data = model.StopGradient(data, data)
return data, label
def create_model(self, model, data, label, device_opts):
with core.DeviceScope(device_opts):
data = data
# data, output shape: {[3,210,280]}
conv1_ = brew.conv(model, data, 'conv1_', dim_in=3, dim_out=96, kernel=11, stride=4, pad_t=5, pad_b=4, pad_l=4, pad_r=3) #legacy_pad=1)
# conv1_, output shape: {[96,53,70]}
relu1_ = brew.relu(model, conv1_, conv1_)
pool1_ = brew.max_pool(model, relu1_, 'pool1_', kernel=3, stride=2, pad_t=1, pad_b=1, pad_l=1, pad_r=0) #legacy_pad=1)
# pool1_, output shape: {[96,27,35]}
conv2_ = brew.conv(model, pool1_, 'conv2_', dim_in=96, dim_out=256, kernel=5, stride=4, pad_t=1, pad_b=1, pad_l=1, pad_r=1) #legacy_pad=1)
# conv2_, output shape: {[256,7,9]}
relu2_ = brew.relu(model, conv2_, conv2_)
pool2_ = brew.max_pool(model, relu2_, 'pool2_', kernel=3, stride=2, pad_t=1, pad_b=1, pad_l=1, pad_r=1) #legacy_pad=1)
# pool2_, output shape: {[256,4,5]}
conv3_ = brew.conv(model, pool2_, 'conv3_', dim_in=256, dim_out=384, kernel=3, stride=1, pad_t=1, pad_b=1, pad_l=1, pad_r=1) #legacy_pad=1)
# conv3_, output shape: {[384,4,5]}
relu3_ = brew.relu(model, conv3_, conv3_)
conv4_ = brew.conv(model, relu3_, 'conv4_', dim_in=384, dim_out=384, kernel=3, stride=1, pad_t=1, pad_b=1, pad_l=1, pad_r=1) #legacy_pad=1)
# conv4_, output shape: {[384,4,5]}
relu4_ = brew.relu(model, conv4_, conv4_)
conv5_ = brew.conv(model, relu4_, 'conv5_', dim_in=384, dim_out=256, kernel=3, stride=1, pad_t=1, pad_b=1, pad_l=1, pad_r=1) #legacy_pad=1)
# conv5_, output shape: {[256,4,5]}
relu5_ = brew.relu(model, conv5_, conv5_)
pool5_ = brew.max_pool(model, relu5_, 'pool5_', kernel=3, stride=2, pad_t=1, pad_b=0, pad_l=1, pad_r=1) #legacy_pad=1)
# pool5_, output shape: {[256,2,3]}
fc5_ = brew.fc(model, pool5_, 'fc5_', dim_in=256 * 2 * 3, dim_out=4096)
# fc5_, output shape: {[4096,1,1]}
relu6_ = brew.relu(model, fc5_, fc5_)
dropout6_ = brew.dropout(model, relu6_, 'dropout6_', ratio=0.5, is_test=False)
fc6_ = brew.fc(model, dropout6_, 'fc6_', dim_in=4096, dim_out=4096)
# fc6_, output shape: {[4096,1,1]}
relu7_ = brew.relu(model, fc6_, fc6_)
dropout7_ = brew.dropout(model, relu7_, 'dropout7_', ratio=0.5, is_test=False)
fc7_ = brew.fc(model, dropout7_, 'fc7_', dim_in=4096, dim_out=256)
# fc7_, output shape: {[256,1,1]}
relu8_ = brew.relu(model, fc7_, fc7_)
dropout8_ = brew.dropout(model, relu8_, 'dropout8_', ratio=0.5, is_test=False)
relu9_ = brew.relu(model, dropout8_, dropout8_)
fc9_ = brew.fc(model, relu9_, 'fc9_', dim_in=256, dim_out=14)
# fc9_, output shape: {[14,1,1]}
# FIRST ATTEMPT. Error got: Tensor type mismatch, caller expects elements to be float while tensor contains double Error from operator
dist = model.net.SquaredL2Distance([label, fc9_], 'dist')
predictions = dist.AveragedLoss([], ['predictions'])
'''
# SECOND ATTEMPT: Error got:
label_float = model.Cast(label, None, to=core.DataType.FLOAT)
dist = model.net.SquaredL2Distance([label_float, fc9_], 'dist')
predictions = dist.AveragedLoss([], ['predictions'])
'''
return predictions
# this adds the loss and optimizer
def add_training_operators(self, model, output, label, device_opts, opt_type, base_learning_rate, policy, stepsize, epsilon, beta1, beta2, gamma, momentum) :
with core.DeviceScope(device_opts):
xent = model.LabelCrossEntropy([output, label], 'xent')
loss = model.AveragedLoss(xent, "loss")
model.AddGradientOperators([loss])
if opt_type == 'adam':
if policy == 'step':
opt = optimizer.build_adam(model, base_learning_rate=base_learning_rate, policy=policy, stepsize=stepsize, beta1=beta1, beta2=beta2, epsilon=epsilon)
elif policy == 'fixed' or policy == 'inv':
opt = optimizer.build_adam(model, base_learning_rate=base_learning_rate, policy=policy, beta1=beta1, beta2=beta2, epsilon=epsilon)
print("adam optimizer selected")
elif opt_type == 'sgd':
if policy == 'step':
opt = optimizer.build_sgd(model, base_learning_rate=base_learning_rate, policy=policy, stepsize=stepsize, gamma=gamma, momentum=momentum)
elif policy == 'fixed' or policy == 'inv':
opt = optimizer.build_sgd(model, base_learning_rate=base_learning_rate, policy=policy, gamma=gamma, momentum=momentum)
print("sgd optimizer selected")
elif opt_type == 'rmsprop':
if policy == 'step':
opt = optimizer.build_rms_prop(model, base_learning_rate=base_learning_rate, policy=policy, stepsize=stepsize, decay=gamma, momentum=momentum, epsilon=epsilon)
elif policy == 'fixed' or policy == 'inv':
opt = optimizer.build_rms_prop(model, base_learning_rate=base_learning_rate, policy=policy, decay=gamma, momentum=momentum, epsilon=epsilon)
print("rmsprop optimizer selected")
elif opt_type == 'adagrad':
if policy == 'step':
opt = optimizer.build_adagrad(model, base_learning_rate=base_learning_rate, policy=policy, stepsize=stepsize, decay=gamma, epsilon=epsilon)
elif policy == 'fixed' or policy == 'inv':
opt = optimizer.build_adagrad(model, base_learning_rate=base_learning_rate, policy=policy, decay=gamma, epsilon=epsilon)
print("adagrad optimizer selected")
def add_accuracy(self, model, output, label, device_opts, eval_metric):
with core.DeviceScope(device_opts):
if eval_metric == 'accuracy':
accuracy = brew.accuracy(model, [output, label], "accuracy")
elif eval_metric == 'top_k_accuracy':
accuracy = brew.accuracy(model, [output, label], "accuracy", top_k=3)
return accuracy
def train(self, num_epoch=1000, batch_size=64, context='gpu', eval_metric='accuracy', opt_type='adam', base_learning_rate=0.001, weight_decay=0.001, policy='fixed', stepsize=1, epsilon=1E-8, beta1=0.9, beta2=0.999, gamma=0.999, momentum=0.9) :
if context == 'cpu':
device_opts = core.DeviceOption(caffe2_pb2.CPU, 0)
print("CPU mode selected")
elif context == 'gpu':
device_opts = core.DeviceOption(caffe2_pb2.CUDA, 0)
print("GPU mode selected")
workspace.ResetWorkspace(self._model_dir_)
arg_scope = {"order": "NHWC"}
# == Training model ==
train_model= model_helper.ModelHelper(name="train_net", arg_scope=arg_scope)
data, label = self.add_input(train_model, batch_size=batch_size, db=os.path.join(self._data_dir_, 'torcs-train-nchw-lmdb'), db_type='lmdb', device_opts=device_opts)
predictions = self.create_model(train_model, data, label, device_opts=device_opts)
self.add_training_operators(train_model, predictions, label, device_opts, opt_type, base_learning_rate, policy, stepsize, epsilon, beta1, beta2, gamma, momentum)
self.add_accuracy(train_model, predictions, label, device_opts, eval_metric)
with core.DeviceScope(device_opts):
brew.add_weight_decay(train_model, weight_decay)
# Initialize and create the training network
workspace.RunNetOnce(train_model.param_init_net)
workspace.CreateNet(train_model.net, overwrite=True)
# Main Training Loop
print("== Starting Training for " + str(num_epoch) + " epochs ==")
for i in range(num_epoch):
workspace.RunNet(train_model.net)
if i % 50 == 0:
print 'Iter ' + str(i) + ': ' + 'Loss ' + str(workspace.FetchBlob("loss")) + ' - ' + 'Accuracy ' + str(workspace.FetchBlob('accuracy'))
print("Training done")
# == Deployment model. ==
# We simply need the main AddModel part.
deploy_model = model_helper.ModelHelper(name="deploy_net", arg_scope=arg_scope, init_params=False)
self.create_model(deploy_model, "data", label, device_opts)
print("Saving deploy model")
self.save_net(self.INIT_NET, self.PREDICT_NET, deploy_model)
def save_net(self, init_net_path, predict_net_path, model):
init_net, predict_net = mobile_exporter.Export(
workspace,
model.net,
model.params
)
try:
os.makedirs(self._model_dir_)
except OSError:
if not os.path.isdir(self._model_dir_):
raise
print("Save the model to init_net.pb and predict_net.pb")
with open(predict_net_path, 'wb') as f:
f.write(model.net._net.SerializeToString())
with open(init_net_path, 'wb') as f:
f.write(init_net.SerializeToString())
print("Save the model to init_net.pbtxt and predict_net.pbtxt")
with open(init_net_path.replace('.pb','.pbtxt'), 'w') as f:
f.write(str(init_net))
with open(predict_net_path.replace('.pb','.pbtxt'), 'w') as f:
f.write(str(predict_net))
print("== Saved init_net and predict_net ==")
def load_net(self, init_net_path, predict_net_path, device_opts):
if not os.path.isfile(init_net_path):
logging.error("Network loading failure. File '" + os.path.abspath(init_net_path) + "' does not exist.")
sys.exit(1)
elif not os.path.isfile(predict_net_path):
logging.error("Network loading failure. File '" + os.path.abspath(predict_net_path) + "' does not exist.")
sys.exit(1)
init_def = caffe2_pb2.NetDef()
with open(init_net_path, 'rb') as f:
init_def.ParseFromString(f.read())
init_def.device_option.CopyFrom(device_opts)
workspace.RunNetOnce(init_def.SerializeToString())
net_def = caffe2_pb2.NetDef()
with open(predict_net_path, 'rb') as f:
net_def.ParseFromString(f.read())
net_def.device_option.CopyFrom(device_opts)
workspace.CreateNet(net_def.SerializeToString(), overwrite=True)
print("== Loaded init_net and predict_net ==")
```
## Expected behavior
Execute SquaredL2Distance as the output layer of the CNN network
## Environment
- PyTorch Version (e.g., 1.0): **Caffer2 tag v0.4.0**
- OS (e.g., Linux): **Ubuntu 16.04**
- How you installed PyTorch (conda, pip, source): **Build from source (tag v0.4.0)**
- Build command you used (if compiling from source):
- Python version: **Python 2.7**
- CUDA/cuDNN version: **8.0/7.0.5**
- GPU models and configuration: **GTX 1050**
- Any other relevant information:
## Additional context
| caffe2 | low | Critical |
386,619,804 | kubernetes | Identify e2e tests that rely on events and thus flaky, rewrite to avoid | This came up during a recent 1.13 release team meeting. Event delivery is not actually guaranteed, so any e2e tests that rely on events are susceptible to flakes. These will become more evident as a cluster is under load.
We should identify which e2e tests are using this anti-pattern, and determine if there is a recipe we can use to rewrite them to be less flaky. For example, we could likely use polling instead of waiting for an event.
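As a sketch of the polling recipe (`client`, the namespace and the pod name are placeholders; `wait` is `k8s.io/apimachinery/pkg/util/wait`): instead of registering an event watch, poll the object's observable state directly:
```go
import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls the API server instead of waiting for an event,
// which is not guaranteed to be delivered.
func waitForPodRunning(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == v1.PodRunning, nil
	})
}
```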
/cc @jberkus for any examples of tests that were already identified / fixed
/area test
/kind cleanup
/priority important-soon
/sig testing | area/test,priority/important-soon,kind/cleanup,kind/flake,sig/testing,lifecycle/frozen,area/deflake | medium | Major |
386,620,547 | pytorch | Advanced indexing slower than numpy | ## π Bug
Advanced indexing is slower than numpy.
## To Reproduce
Steps to reproduce the behavior:
```python
Python 3.7.0 (default, Oct 9 2018, 10:31:47)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.0.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import torch
In [2]: torch.__version__
Out[2]: '1.0.0.dev20181202'
In [3]: x = torch.randn(1024)
In [4]: idx = torch.rand(1024) > 0.5
In [5]: %%timeit
...: x[idx]
...:
35 Β΅s Β± 63.3 ns per loop (mean Β± std. dev. of 7 runs, 10000 loops each)
In [6]: x_np, idx_np = x.numpy(), idx.numpy()
In [7]: %%timeit
...: x_np[idx_np]
...:
2.36 Β΅s Β± 8.62 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each)
```
## Expected behavior
PyTorch should be as fast as numpy when doing advanced indexing of tensors that do not require grad.
## Environment
- PyTorch Version (e.g., 1.0): 1.0.0.dev20181202
- OS (e.g., Linux): Ubuntu 18.04.1 LTS
- How you installed PyTorch (`conda`, `pip`, source): conda install pytorch-nightly -c pytorch
- Python version: 3.7.0
cc @mruberry @rgommers @heitorschueroff @VitalyFedyunin @ngimel | module: performance,triaged,module: numpy,module: advanced indexing | low | Critical |
386,635,771 | go | cmd/compile: minimize morestack calls text footprint | This is more of a small papercut idea than a real issue, but every time I go through the disassembled output of objdump it comes back to mind, so I'm going to drop it here.
Currently the morestack check is laid out as follows:
```
function entry point:
check if enough stack
if not enough stack, jump to morestack block # JMP1
...function body...
morestack block:
call morestack
jump to function entry point # JMP2
```
if it was instead laid out like this:
```
morestack block:
call morestack
function entry point:
check if enough stack
if not enough stack, jump to morestack block
...function body...
```
We have one less jump (JMP2) and, more importantly, JMP1 can always use the more compact rel8 variant. The morestack block, in this case, would be placed in the padding between this function and the previous one (because the morestack block of the previous function would have been moved as well, this should not significantly impact the overall amount of padding used).
For small functions (i.e. functions where rel8 is already used in both jumps) this would save 2 bytes per function, for functions where rel32 is used it would save 10 bytes per function. From a quick check there are ~1800 small functions and ~5000 big functions using morestack in the 1.11.2 windows compiler. This means that text size could decrease by ~86KB. This would help for #6853 and, hopefully, also improve icache hit rates.
Looking forward to when per-function metadata will make it possible to mostly eliminate morestack calls (https://github.com/golang/go/issues/25999#issuecomment-400841088), this could be further extended to have multiple entry points (so that we don't need to store multiple variants of functions for which we need both morestack and no morestack variants):
```
morestack block:
call morestack
function entry point:
check if enough stack
if not enough stack, jump to morestack block
function entry point no-morestack:
...function body...
```
I'm not knowledgeable about the linker, so I don't really know how workable these two ideas are. | NeedsInvestigation,compiler/runtime | low | Minor |
386,648,400 | go | cmd/compile: statictmps can prevent heap objects from being collected | ```
package main
import (
"fmt"
"runtime"
)
var x = [][]int{[]int{1}}
func main() {
s := make([]int, 1e8)
runtime.SetFinalizer(&s[0], func(p *int) { fmt.Println("finalized") })
x[0] = s
x = nil
runtime.GC()
runtime.GC()
runtime.GC()
}
```
When I run this program, the finalizer never executes. The big slice will live in the heap forever.
The original state of the program has `x` pointing to a compiler-allocated global variable `statictmp1`. That variable has one `[]int` slot in it, which in turn points to a second `statictmp2` variable holding a single `1`.
When we do `x[0]=s`, we set `statictmp1` to point to the heap instead. Then when we do `x = nil`, `statictmp1` is now unreachable. But `statictmp1` now points to an object in the heap, and we still scan `statictmp1` at every garbage collection, because it is a global.
If instead we allocated `statictmp1` on the heap, this problem would go away. There's a tradeoff here which I'm not sure how to resolve. The current situation prioritizes fast startup and preferring global data over heap data, but it can result in imprecise retention of objects.
A better fix would be to include a global "object" in the GC marking phase for each global variable. (This is similar to how stack objects work.) Named globals would be in the root set, but unnamed ones like the `statictmp`s would only be live if a heap object or other live global pointed to it.
I'm not sure it is worth fixing this problem. I sort of stumbled on it while working on #29013 but I haven't seen any instances in the wild. Thought it would be worth documenting it here in case someone had a better idea or someone found an actual real-world instance.
Note that a similar situation also applies to `statictmp2`. After`x[0]=s`, `statictmp2` is dead. We can't collect `statictmp2` because it is allocated in the globals area. We could collect it if it was allocated on the heap. But because it doesn't hold a pointer to heap objects, it isn't a big deal either way. (This also has a parallel to stack objects, which we can't collect directly - we can only ignore their contents.)
@aclements @rlh | GarbageCollector,compiler/runtime | low | Minor |
386,651,341 | go | proposal: encoding/asn1: timeParsing functions impose static formats | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/chris/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/chris/Optus/Git/gorims"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build723180761=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Attempted to unmarshal a GeneralizedTime value received from a third-party system that uses a format other than the one hard-coded in the `parseGeneralizedTime([]byte) (time.Time, error)` function.
### What did you expect to see?
A way to nominate the time format to be used
### What did you see instead?
That the time format is hardcoded as a const within the time parsing funcs.
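For completeness, a sketch of the kind of workaround this forces today; `der` stands in for the raw message, and the layout string is a placeholder for whatever the third party actually sends:
```go
import (
	"encoding/asn1"
	"time"
)

// parseTime is a sketch of the workaround: capture the field as a RawValue
// and parse the timestamp with a caller-chosen layout instead of the
// hard-coded one. The layout here is a placeholder.
func parseTime(der []byte) (time.Time, error) {
	var raw asn1.RawValue
	if _, err := asn1.Unmarshal(der, &raw); err != nil {
		return time.Time{}, err
	}
	return time.Parse("20060102150405Z0700", string(raw.Bytes))
}
```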
| Proposal,Proposal-Crypto | low | Critical |
386,680,179 | go | cmd/go: confusing "build constraints exclude all Go files" error when trying to cross-compile a package that uses CGO | `go version`
go version go1.10.4 linux/amd64
`go env`
GOARCH="amd64"
GOBIN="/home/mayurw/go/bin"
GOCACHE="/home/mayurw/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/mayurw/go"
GORACE=""
GOROOT="/usr/lib/go-1.10"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go-1.10/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build555451646=/tmp/go-build -gno-record-gcc-switches"
I executed the command:
```
env GOOS=linux GOARCH=arm64 go build
```
It produced the following output:
```
go build github.com/DataDog/zstd: build constraints exclude all Go files in ~/go/src/github.com/DataDog/zstd
```
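For reference, cross-compiling a package that uses cgo presumably needs cgo explicitly enabled (it defaults to off when GOOS/GOARCH differ from the host) plus a C cross-compiler for the target; roughly (the compiler name is just an example):
```
env GOOS=linux GOARCH=arm64 CGO_ENABLED=1 CC=aarch64-linux-gnu-gcc go build
```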
I really don't understand what this error means. Please provide a solution for Go cross-compilation from Linux to ARM architecture binaries. | NeedsInvestigation,GoCommand | low | Critical |
386,690,551 | flutter | [video_player] Support DRM content | Hello Everyone,
First off, thanks for such a beautiful framework. I have really loved working in Flutter and found it fast and much more reactive compared to native Android and iOS based applications.
I am creating a VOD platform, and am testing how to port the existing application to Flutter.
The main issue I am facing is playing DRM content (.mpd format with a license URL) in Flutter using the video_player widget.
Has anybody tried playing protected DRM content in Flutter? | c: new feature,a: video,customer: crowd,p: video_player,package,team-ecosystem,P2,triaged-ecosystem | low | Critical |
386,764,260 | rust | Allowing different opt-level for different crates of the workspace | I have two crates in a workspace: backend server and wasm frontend (and a shared crate for the API types).
I want to use the following profile only for the `frontend` crate:
```toml
[profile.release]
lto = true
opt-level = "z"
```
I only want to optimize my frontend for size, but the backend server for speed.
But:
> warning: profiles for the non root package will be ignored, specify profiles at the workspace root
Why can't this be allowed?
I'd really like to avoid having to split the frontend off into a separate workspace, is there a solution? | T-cargo,C-feature-request | low | Minor |
386,835,348 | pytorch | [Feature Request] linux distribution friendly build system | ## π Feature
## Motivation
To make it easier for Linux distribution developers to create packages for PyTorch/Caffe2, it would help if some features were added to the build system.
## Pitch
My definition of "friendly" is specific to Debian. That means the requirement is the most strict among the distributions... I expect the buildsystem to provide the following flags:
1. `USE_SYSTEM_XXX` flags to use the XXX library provided by the system (e.g. protobuf) instead of building the vendored source; see the sketch after this list.
2. `NO_DOWNLOAD` flags to prevent downloading anything during the build (if any downloads exist).
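A sketch of what the first flag could look like in CMake; the option name and paths are made up for illustration:
```cmake
# Hypothetical option: let distribution builds link the system protobuf
# instead of the vendored copy under third_party/.
option(USE_SYSTEM_PROTOBUF "Use the protobuf provided by the system" OFF)

if(USE_SYSTEM_PROTOBUF)
  find_package(Protobuf REQUIRED)
else()
  add_subdirectory(third_party/protobuf)
endif()
```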
## Additional context
I'm likely the one who will package Pytorch for Debian/Ubuntu, but I still hesitate to dig into it since the code base is still rapidly changing...
cc @malfet | module: build,triaged,enhancement | low | Major |
386,864,844 | pytorch | CI: Flaky download from download.pytorch.org | Downloads from download.pytorch.org sometimes fail. Sample:
```
Dec 03 07:21:59 Downloading: \"https://download.pytorch.org/models/resnet18-5c106cde.pth\" to /var/lib/jenkins/.torch/models/resnet18-5c106cde.pth
Dec 03 07:21:59 ERROR
Dec 03 07:21:59 test_check_onnx_broadcast (__main__.TestONNXUtils) ... ok
Dec 03 07:21:59 test_prepare_onnx_paddings (__main__.TestONNXUtils) ... ok
Dec 03 07:21:59
Dec 03 07:21:59 ======================================================================
Dec 03 07:21:59 ERROR: setUpClass (__main__.TestHub)
Dec 03 07:21:59 ----------------------------------------------------------------------
Dec 03 07:21:59 Traceback (most recent call last):
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/connection.py\", line 171, in _new_conn
Dec 03 07:21:59 (self._dns_host, self.port), self.timeout, **extra_kw)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/util/connection.py\", line 56, in create_connection
Dec 03 07:21:59 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/socket.py\", line 745, in getaddrinfo
Dec 03 07:21:59 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
Dec 03 07:21:59 socket.gaierror: [Errno -3] Temporary failure in name resolution
Dec 03 07:21:59
Dec 03 07:21:59 During handling of the above exception, another exception occurred:
Dec 03 07:21:59
Dec 03 07:21:59 Traceback (most recent call last):
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 600, in urlopen
Dec 03 07:21:59 chunked=chunked)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, in _make_request
Dec 03 07:21:59 self._validate_conn(conn)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 849, in _validate_conn
Dec 03 07:21:59 conn.connect()
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/connection.py\", line 314, in connect
Dec 03 07:21:59 conn = self._new_conn()
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/connection.py\", line 180, in _new_conn
Dec 03 07:21:59 self, \"Failed to establish a new connection: %s\" % e)
Dec 03 07:21:59 urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7fa6a8042080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
Dec 03 07:21:59
Dec 03 07:21:59 During handling of the above exception, another exception occurred:
Dec 03 07:21:59
Dec 03 07:21:59 Traceback (most recent call last):
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/requests/adapters.py\", line 445, in send
Dec 03 07:21:59 timeout=timeout
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, in urlopen
Dec 03 07:21:59 _stacktrace=sys.exc_info()[2])
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/urllib3/util/retry.py\", line 398, in increment
Dec 03 07:21:59 raise MaxRetryError(_pool, url, error or ResponseError(cause))
Dec 03 07:21:59 urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='download.pytorch.org', port=443): Max retries exceeded with url: /models/resnet18-5c106cde.pth (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fa6a8042080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
Dec 03 07:21:59
Dec 03 07:21:59 During handling of the above exception, another exception occurred:
Dec 03 07:21:59
Dec 03 07:21:59 Traceback (most recent call last):
Dec 03 07:21:59 File \"test_utils.py\", line 449, in setUpClass
Dec 03 07:21:59 cls.resnet18_pretrained = models.__dict__['resnet18'](pretrained=True).state_dict()
Dec 03 07:21:59 File \"/var/lib/jenkins/.local/lib/python3.6/site-packages/torchvision/models/resnet.py\", line 165, in resnet18
Dec 03 07:21:59 model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/torch/utils/model_zoo.py\", line 66, in load_url
Dec 03 07:21:59 _download_url_to_file(url, cached_file, hash_prefix, progress=progress)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/torch/utils/model_zoo.py\", line 72, in _download_url_to_file
Dec 03 07:21:59 u = urlopen(url, stream=True)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/requests/api.py\", line 72, in get
Dec 03 07:21:59 return request('get', url, params=params, **kwargs)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/requests/api.py\", line 58, in request
Dec 03 07:21:59 return session.request(method=method, url=url, **kwargs)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/requests/sessions.py\", line 512, in request
Dec 03 07:21:59 resp = self.send(prep, **send_kwargs)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/requests/sessions.py\", line 622, in send
Dec 03 07:21:59 r = adapter.send(request, **kwargs)
Dec 03 07:21:59 File \"/opt/conda/lib/python3.6/site-packages/requests/adapters.py\", line 513, in send
Dec 03 07:21:59 raise ConnectionError(e, request=request)
Dec 03 07:21:59 requests.exceptions.ConnectionError: HTTPSConnectionPool(host='download.pytorch.org', port=443): Max retries exceeded with url: /models/resnet18-5c106cde.pth (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fa6a8042080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
Dec 03 07:21:59
Dec 03 07:21:59 ----------------------------------------------------------------------
Dec 03 07:21:59 Ran 16 tests in 29.709s
Dec 03 07:21:59
Dec 03 07:21:59 FAILED (errors=1, skipped=3)
Dec 03 07:21:59 Traceback (most recent call last):
Dec 03 07:21:59 File \"test/run_test.py\", line 431, in <module>
Dec 03 07:21:59 main()
Dec 03 07:21:59 File \"test/run_test.py\", line 423, in main
Dec 03 07:21:59 raise RuntimeError(message)
Dec 03 07:21:59 RuntimeError: test_utils failed!
```
Log: https://circleci.com/gh/pytorch/pytorch/347714?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link | module: ci,triaged,module: flaky-tests,better-engineering | low | Critical |
386,867,081 | pytorch | pytorch_doc_push is racing with itself | Sample log: https://circleci.com/gh/pytorch/pytorch/344302?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link/console
```
Dec 01 04:11:40 ++ git status
Dec 01 04:11:40 On branch site
Dec 01 04:11:40 Your branch is ahead of 'origin/site' by 1 commit.
Dec 01 04:11:40 (use "git push" to publish your local commits)
Dec 01 04:11:40 Untracked files:
Dec 01 04:11:40 (use "git add <file>..." to include in what will be committed)
Dec 01 04:11:40
Dec 01 04:11:40 pytorch/
Dec 01 04:11:40
Dec 01 04:11:40 nothing added to commit but untracked files present (use "git add" to track)
Dec 01 04:11:40 ++ git push origin site
Dec 01 04:11:40 To https://yf225:[email protected]/pytorch/pytorch.github.io
Dec 01 04:11:40 ! [rejected] site -> site (fetch first)
Dec 01 04:11:40 error: failed to push some refs to 'https://yf225:[email protected]/pytorch/pytorch.github.io'
Dec 01 04:11:40 hint: Updates were rejected because the remote contains work that you do
Dec 01 04:11:40 hint: not have locally. This is usually caused by another repository pushing
Dec 01 04:11:40 hint: to the same ref. You may want to first integrate the remote changes
Dec 01 04:11:40 hint: (e.g., 'git pull ...') before pushing again.
Dec 01 04:11:40 hint: See the 'Note about fast-forwards' in 'git push --help' for details.
Exited with code 1
``` | triaged | low | Critical |
386,929,969 | rust | document overflow behaviour for integer parse | Hi. I'm new to Rust so please forgive me if I have missed something.
I was not able to conclusively determine from the specification the behaviour of the following snippet (in release builds):
"129".parse::<i8>().expect("parse")
This issue is a docs bug report. The actual Rust behaviour (returning an error result and producing a panic from the expect) seems good to me.
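For reference, a minimal program demonstrating the behaviour in question:
```rust
fn main() {
    // Parsing overflow yields Err (no two's complement wrapping),
    // in both debug and release builds.
    assert!("129".parse::<i8>().is_err());
    assert_eq!("127".parse::<i8>(), Ok(127));
}
```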
The only documentation I could find about the behaviour of the i8 etc. types on overflow is here:
https://doc.rust-lang.org/book/2018-edition/ch03-02-data-types.html#integer-overflow
which is rather vague. It is not clear whether the statement
In release builds, Rust does not check for overflow, and instead will do something called "two's complement wrapping."
applies only to arithmetic operations or also to other things. One obvious reading would have the snippet above give the value -127; this seemed to me to be an undesirable behaviour, but it isn't generally a good idea to assume that undocumented behaviour is as one personally desires, and I'm sure you'll agree that having to write a test program is not ideal.
IMO the overflow behaviour of traits like `Add` and `FromStr` (used by `parse`) ought to be documented in the definition of the `i8` type or something linked from:
https://doc.rust-lang.org/std/primitive.i8.html
Thanks for your attention. | C-enhancement,P-medium,T-libs-api,A-docs | low | Critical |
386,949,446 | rust | proc_macro types Display impls don't respect the input layout | Hey there,
I'm not sure whether it's the expected behavior of the `Display` implementations for token types in `proc_macro`, but the `Display` implementations lose positional information. I [made a blog post about it](https://www.reddit.com/r/rust/comments/a25sg0/a_more_faithful_display_for_procmacro_token_types) and released a crate to fix that problem, [proc-macro-faithful-display](https://crates.io/crates/proc-macro-faithful-display).
I posted that on Discord and @eddyb advised me to open an issue here. So here it is. Not sure whether:
- There's an actual problem, because perhaps the `Display` implementors have never meant to provide positional correctness.
- If there's a problem:
  - Should we assume the current behavior of `Display` is correct and instead provide another type to perform the *faithful* display (like the `.display()` function for `&Path`).
  - Or instead just fix `Display`.
\o/ | A-macros | low | Major |
386,960,334 | rust | run-make-fulldeps/c-link-to-rust-va-list-fn fails on aarch64-linux-gnu | Running `python2.7 ./x.py test --stage 2` on `aarch64-linux-gnu` I end up with the following failure
```
---- [run-make] run-make-fulldeps/c-link-to-rust-va-list-fn stdout ----
error: make failed
status: exit code: 2
command: "make"
stdout:
------------------------------------------
LD_LIBRARY_PATH="/usr/src/myapp/build/aarch64-unknown-linux-gnu/test/run-make-fulldeps/c-link-to-rust-va-list-fn/c-link-to-rust-va-list-fn:/usr/src/myapp/build/aarch64-unknown-linux-gnu/stage2/lib:/usr/src/myapp/build/aarch64-unknown-linux-gnu/stage0-bootstrap-tools/aarch64-unknown-linux-gnu/release/deps:/usr/src/myapp/build/aarch64-unknown-linux-gnu/stage0/lib:" '/usr/src/myapp/build/aarch64-unknown-linux-gnu/stage2/bin/rustc' --out-dir /usr/src/myapp/build/aarch64-unknown-linux-gnu/test/run-make-fulldeps/c-link-to-rust-va-list-fn/c-link-to-rust-va-list-fn -L /usr/src/myapp/build/aarch64-unknown-linux-gnu/test/run-make-fulldeps/c-link-to-rust-va-list-fn/c-link-to-rust-va-list-fn checkrust.rs
cc -ffunction-sections -fdata-sections -fPIC test.c /usr/src/myapp/build/aarch64-unknown-linux-gnu/test/run-make-fulldeps/c-link-to-rust-va-list-fn/c-link-to-rust-va-list-fn/libcheckrust.a -o /usr/src/myapp/build/aarch64-unknown-linux-gnu/test/run-make-fulldeps/c-link-to-rust-va-list-fn/c-link-to-rust-va-list-fn/test -lm -lrt -ldl -lpthread
LD_LIBRARY_PATH="/usr/src/myapp/build/aarch64-unknown-linux-gnu/test/run-make-fulldeps/c-link-to-rust-va-list-fn/c-link-to-rust-va-list-fn:/usr/src/myapp/build/aarch64-unknown-linux-gnu/stage2/lib/rustlib/aarch64-unknown-linux-gnu/lib:/usr/src/myapp/build/aarch64-unknown-linux-gnu/stage0-bootstrap-tools/aarch64-unknown-linux-gnu/release/deps:/usr/src/myapp/build/aarch64-unknown-linux-gnu/stage0/lib:" /usr/src/myapp/build/aarch64-unknown-linux-gnu/test/run-make-fulldeps/c-link-to-rust-va-list-fn/c-link-to-rust-va-list-fn/test
Makefile:4: recipe for target 'all' failed
------------------------------------------
stderr:
------------------------------------------
test: test.c:32: main: Assertion `test_rust(check_list_0, 0x01LL, 0x02, 0x03LL) == 0' failed.
Aborted (core dumped)
make: *** [all] Error 134
------------------------------------------
```
I was building this commit https://github.com/rust-lang/rust/commit/9cd3bef4cfaaac2a608682d4b0834cda344249e0
I haven't had a chance to debug it yet, but I'll take a look when I can.
cc @dlrobertson | A-codegen,A-FFI,T-compiler,C-bug,F-c_variadic,O-AArch64 | low | Critical |
386,966,474 | go | cmd/compile: specialize variadic functions | This is an idea for a compiler optimization to explore. I don't know whether it'll yield worthwhile fruit.
We can generate multiple forms of variadic functions. For example, given:
```go
func f(s ...int) {
// body
}
```
We could also generate and compile:
```go
// Pseudocode: f.0 and f.1 stand for compiler-generated names.
func f.0() {
	s := []int(nil)
	// body
}
func f.1(x int) {
	s := []int{x}
	// body
}
```
This would shrink call sites. And hopefully the compiler would be able to use the info about s to compile f.0 and f.1 more aggressively. (We might need to add an optimization to remove loops when the loop iteration count is known to be 1.)
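To make the call-site point concrete, here is a tiny runnable illustration (`f.0`/`f.1` refer to the hypothetical generated variants above; they are not valid Go names):

```go
package main

import "fmt"

func f(s ...int) {
	fmt.Println(len(s), s)
}

func main() {
	f()            // would map to the hypothetical f.0 variant
	f(7)           // would map to the hypothetical f.1 variant
	f([]int{7}...) // the explicit form of what f(7) lowers to today
}
```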
There are plenty of open questions. For example: how to decide which variants to generate, whether to rewrite .N suffixes away in backtraces, where to record info about which variants got generated, what the impact is on inlining and escape analysis.
This would also serve as a useful experiment for generics, in that we would need to sort out some questions about how to generate specialized variants of a particular function.
| Performance,compiler/runtime | low | Major |
386,977,031 | TypeScript | Allow ignoring certain TS suggestion-level diagnostic codes | Related to what was discussed in https://github.com/Microsoft/vscode/issues/61326.
Would provide more granularity to the settings introduced with https://github.com/Microsoft/vscode/pull/46590.
It might be useful to introduce a new setting to ignore certain suggestion diagnostic codes while still keeping suggestion diagnostics enabled, rather than having to disable them altogether.
What I'm proposing is to introduce 2 new settings, `javascript.suggestionActions.ignoredCodes` and `typescript.suggestionActions.ignoredCodes`. These would take a list of suggestion diagnostic codes that the user wants to ignore, given as a string of either comma- or space-separated code numbers.
Example:
```js
{
// ...
"typescript.suggestionActions.enabled": true,
"typescript.suggestionActions.ignoredCodes": "7043, 80006"
// ...
}
```
This list would only be checked when a suggestion-level diagnostic is received, so including non-suggestion-level diagnostic codes in the list would have no effect (errors and messages would not be ignored).
| Suggestion,In Discussion | high | Critical |
387,022,300 | godot | export var PackedScene crash when having circular references | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
v3.1.alpha.caliou.8dd00ed
8dd00ed1762c4956b3231d709ce0d01ee9b306c8
**OS/device including version:**
MacOS, 2017 Macbook Pro 15 inch
**Issue description:**
I have an object called stairs, and when the player reaches those stairs it should teleport them to a new scene. I exported a var and typed it as a PackedScene. Referencing it and creating an instance inside the script works fine, but if the scene that should be loaded also has an instance of the stairs, and those stairs link back to the first scene, I get an unexplained crash. No console errors or anything; Godot just crashes and I have to re-open it.
**Steps to reproduce:**
I've managed to reproduce it in a new project: https://github.com/ebuchmann/PackedSceneCrashTest
The SceneSwitch scene is instanced in both Scene1 and Scene2, and each instance points at the other scene (see the sketch below).
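Roughly, the setup looks like this (a hypothetical sketch with my own names, not the exact project code):

```gdscript
# SceneSwitch.gd: exports a PackedScene. Scene1's instance points at
# Scene2.tscn and Scene2's instance points back at Scene1.tscn, so each
# scene's resources end up referencing the other.
extends Area2D

export(PackedScene) var target_scene

func _on_SceneSwitch_body_entered(body):
    if body.name == "Player":
        get_tree().change_scene_to(target_scene)
```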
**Minimal reproduction project:**
[PackedSceneCrashTest-master.zip](https://github.com/godotengine/godot/files/2641402/PackedSceneCrashTest-master.zip)
| bug,topic:core,confirmed,crash | medium | Critical |
387,037,899 | create-react-app | Imports of Babel helpers have impossibly long names in DEV | This looks clowny.
<img width="1005" alt="screen shot 2018-12-03 at 10 38 46 pm" src="https://user-images.githubusercontent.com/810438/49406051-57f43f00-f74c-11e8-9fb2-916d8d727059.png">
| tag: enhancement | low | Minor |
387,043,585 | rust | Tracking issue for future-incompatibility lint `order_dependent_trait_objects` | This is the **summary issue** for the `order_dependent_trait_objects`
future-compatibility warning and other related errors. The goal of
this page is describe why this change was made and how you can fix
code that is affected by it. It also provides a place to ask questions
or register a complaint if you feel the change should not be made. For
more information on the policy around future-compatibility warnings,
see our [breaking change policy guidelines][guidelines].
[guidelines]: https://github.com/rust-lang/rfcs/blob/master/text/1122-language-semver.md
#### What is the warning for?
As in issue #33140, rustc sometimes treats "seemingly-identical" trait object types as different. For example, `Send + Sync` and `Sync + Send` are treated as different types.
This occurs because the first trait in a trait object is treated specially in the compiler, which means that `Send + Sync` has its "first trait" being `Send` and `Sync + Send` has its "first trait" being `Sync`. That is a bug that we want to fix.
However, because the compiler made this distinction, it was possible to implement a trait separately for each of these types, for example:
```Rust
trait Foo {
fn xyz();
}
impl Foo for dyn Send + Sync {
fn xyz() {
println!("Hi I'm Send + Sync");
}
}
impl Foo for dyn Sync + Send {
//~^ ERROR conflicting implementations
fn xyz() {
println!("Hi I'm Sync + Send");
}
}
fn main() {
<dyn Send + Sync>::xyz();
<dyn Sync + Send>::xyz();
}
```
This obviously can't work if `Send + Sync` & `Sync + Send` are the same type! Therefore, it is being made into a coherence error.
To fix the warnings, remove all but one of the impls - e.g. the `Sync + Send` impl.
#### When will this warning become a hard error?
At the beginning of each 6-week release cycle, the Rust compiler team
will review the set of outstanding future compatibility warnings and
nominate some of them for **Final Comment Period**. Toward the end of
the cycle, we will review any comments and make a final determination
whether to convert the warning into a hard error or remove it
entirely.
#### Status
- introduced by #56481
- changed to `deny`-by default and report in deps in #102635 | A-lints,A-trait-system,T-lang,T-compiler,C-future-incompatibility,C-tracking-issue,A-auto-traits,T-types,A-trait-objects | low | Critical |
387,047,236 | go | misc/cgo/testcshared: TestGo2C2Go fails on some Android systems | android/arm64: https://build.golang.org/log/09e1a4ff3ef9aa281563ad09ca97456f76947b13:
```
--- FAIL: TestGo2C2Go (3.18s)
cshared_test.go:622: run: [go build -buildmode=c-shared -o /var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/cshared-TestGo2C2Go159080790/libtestgo2c2go.so go2c2go/go]
cshared_test.go:647: command failed: [go build -o /var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/cshared-TestGo2C2Go159080790/m1 go2c2go/m1]
exit status 2
# go2c2go/m1
/Users/elias/android-ndk-standalone-arm64/bin/../lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld: warning: liblog.so, needed by /var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/cshared-TestGo2C2Go159080790/libtestgo2c2go.so, not found (try using -rpath or -rpath-link)
/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/cshared-TestGo2C2Go159080790/libtestgo2c2go.so: undefined reference to `__android_log_vprint'
clang38: error: linker command failed with exit code 1 (use -v to see invocation)
```
android/386: https://build.golang.org/log/4e90ced0da91bfcfa07b3acced63beea3829d137:
```
--- FAIL: TestGo2C2Go (4.11s)
cshared_test.go:622: run: [go build -buildmode=c-shared -o /var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/cshared-TestGo2C2Go737454438/libtestgo2c2go.so go2c2go/go]
cshared_test.go:647: run: [go build -o /var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/cshared-TestGo2C2Go737454438/m1 go2c2go/m1]
cshared_test.go:195: adb command failed: exit status 127
/system/bin/sh: /var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/cshared-TestGo2C2Go737454438/m1: not found
``` | help wanted,NeedsInvestigation,mobile | low | Critical |
387,061,442 | rust | libcore: Implement VaList::arg in pure rust | ### Summary
Implement `VaList::arg` in pure rust, similar to [va_list-rs].
### Details
We currently expose the `va_arg` intrinsic, which should emit the correct LLVM for `va_arg` on the given architecture and OS. We currently use the [LLVM va_arg] instruction, but it doesn't emit the correct code for some common OSes and architectures, which forces us to implement the instruction manually (see [src/librustc_codegen_llvm/va_arg.rs] for details). Since we do not support calling `VaList::arg` on arbitrary types, we might be able to implement something similar to [va_list-rs] in pure Rust for most architectures, falling back to the [LLVM va_arg] instruction only when a pure Rust implementation does not exist.
[va_list-rs]: https://github.com/thepowersgang/va_list-rs
[LLVM va_arg]: https://llvm.org/docs/LangRef.html#va-arg-instruction
[src/librustc_codegen_llvm/va_arg.rs]: https://github.com/rust-lang/rust/blob/master/src/librustc_codegen_llvm/va_arg.rs
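For flavor, a minimal, self-contained toy sketch of the pure-Rust idea. It assumes a fictional, maximally simple ABI in which every argument occupies one promoted `usize` slot; real ABIs (System V x86_64, AAPCS, ...) are far more involved, which is exactly the complexity [va_list-rs] encapsulates:

```rust
struct ToyVaList {
    ptr: *const usize,
}

impl ToyVaList {
    // Read the next promoted argument slot as a T and advance.
    unsafe fn arg<T: Copy>(&mut self) -> T {
        let value = *(self.ptr as *const T);
        self.ptr = self.ptr.add(1);
        value
    }
}

fn main() {
    let slots: [usize; 3] = [1, 2, 3];
    let mut list = ToyVaList { ptr: slots.as_ptr() };
    unsafe {
        assert_eq!(list.arg::<usize>(), 1);
        assert_eq!(list.arg::<usize>(), 2);
        assert_eq!(list.arg::<usize>(), 3);
    }
}
```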
### Original issue
Note this issue has been changed following https://github.com/rust-lang/rust/issues/56489#issuecomment-444142735. The original issue is as follows:
#### codegen: Move custom va_arg logic to librustc_codegen_ssa
The LLVM `va_arg` intrinsic is far from a complete implementation. As a result, we have started to manually implement `va_arg` (like clang does) with the `Builder` in [src/librustc_codegen_llvm/va_arg.rs](https://github.com/rust-lang/rust/blob/master/src/librustc_codegen_llvm/va_arg.rs). This logic should be moved to `librustc_codegen_ssa` in `BuilderMethods::va_arg`.
`BuilderMethods::va_arg` needs to fall back to LLVM's `va_arg` intrinsic when there isn't an custom implementation available, so we'll need to add a new trait method `backend_va_arg` (please suggest a better name :smile:) that exposes the backend specific implementation of `va_arg`. | C-enhancement,A-FFI,T-compiler,F-c_variadic | low | Major |
387,073,385 | pytorch | [caffe2] Corresponding C++ API for prepare_prediction_net | ## 🚀 Feature
Corresponding C++ API for prepare_prediction_net
## Motivation
We have a Python API that is able to load predictor models in the MetaNetDef format: https://github.com/pytorch/pytorch/blob/edb88b5f3af03718b443d015f195faa1832ce95b/caffe2/python/predictor/predictor_exporter.py#L127 However, the corresponding C++ API is missing.
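For reference, the Python-side capability in question looks roughly like this (the file name and db type are placeholders); it is this flow that currently has no C++ analogue:

```python
from caffe2.python.predictor import predictor_exporter

# Load a predictor model stored in MetaNetDef format.
predict_net = predictor_exporter.prepare_prediction_net(
    filename="my_model.minidb", db_type="minidb")
```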
## Pitch
In the Python world, we are able to export and load models in MetaNetDef format; however, when we want to productionize the model and load it in C++, the API to do so is missing.
## Alternatives
Right now, the only alternative is to export the init and predict nets separately as protobufs and load them in C++. | caffe2 | low | Minor |
387,091,149 | pytorch | Caffe2 C++ runs single threaded | I noticed that when I run my Caffe2 model in C++, it only utilizes a single core.
In the TensorFlow C++ API, I could choose the specific number of threads/cores to use when running the network. Is this possible in the Caffe2 C++ API as well?
I'm currently testing my code on a CPU-only platform. | caffe2 | low | Minor |
387,182,332 | TypeScript | Array.length type guard for array spreading | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.3.0-dev.20181204
**Search Terms:** array length type guard spread
**Code**
```ts
const timeString = '09:00';
const date = new Date();
const timeParts = timeString.split(':').map((part) => parseInt(part, 10));
if (timeParts.length > 0) {
date.setHours(...timeParts);
}
```
**Expected behavior:** No errors
**Actual behavior:** TS2556: Expected 1-4 arguments, but got 0 or more.
**Playground Link:** https://www.typescriptlang.org/play/#src=const%20timeString%20%3D%20'09%3A00'%3B%0D%0Aconst%20date%20%3D%20new%20Date()%3B%0D%0Aconst%20timeParts%20%3D%20timeString.split('%3A').map((part)%20%3D%3E%20parseInt(part%2C%2010))%3B%0D%0Aif%20(timeParts.length%20%3E%200)%20%7B%0D%0A%20%20%20%20date.setHours(...timeParts)%3B%0D%0A%7D%0D%0A
**Related Issues:** None
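For completeness, a workaround of my own (not from the issue) that typechecks today by destructuring instead of spreading:

```ts
const timeString = '09:00';
const date = new Date();
const timeParts = timeString.split(':').map((part) => parseInt(part, 10));

// Destructured elements of a number[] are typed as plain number,
// so no tuple narrowing is required for the call below.
const [hours, minutes] = timeParts;
if (timeParts.length >= 2) {
    date.setHours(hours, minutes);
}
```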
| Suggestion,Awaiting More Feedback | medium | Critical |
387,224,742 | vscode | Search: find, replace input boxes miss scrollbar | Testing #64270
When having a tall multi-line input, the search find & replace boxes seem to be missing a scrollbar when their content is scrollable. | feature-request,search | low | Minor |
387,232,820 | vscode | Loaded scripts: session and folder should be a bit more distinguishable | Refs: #64230
In the loaded scripts view if I am debugging multiple sessions how we display sessions and folders is not distuingishable enough. Currently they have these two distinctions:
* sessions do not show a folder icon (which is great, but not visible in the default icon theme)
* sessions have a differetn hover text
I suggest to experiment with prefexing the session title with `Debug Session`
Or to try to prefix the folder label with `../` if that makes sense.
Any other ideas polish that would make them more distungishable would be good imho

| debug,under-discussion | low | Critical |
387,243,131 | pytorch | [caffe2] ConvTranspose with group attribute | ## 🚀 Feature
The conv op has the group attribute, but it's not in convTranspose.
## Motivation
The convTranspose op should have the same functionality as conv, and PyTorch's transposed convolution op already has a group attribute (see the sketch below).
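For reference, a small runnable PyTorch illustration of the attribute in question (shapes are arbitrary):

```python
import torch

# groups=2 splits the 4 input channels into two independent groups.
conv_t = torch.nn.ConvTranspose2d(in_channels=4, out_channels=4,
                                  kernel_size=3, groups=2)
out = conv_t(torch.randn(1, 4, 8, 8))
print(out.shape)  # torch.Size([1, 4, 10, 10])
```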
## Pitch
Support group attribute in convT.
| caffe2,module: op-unification | low | Minor |
387,268,298 | vscode | Wasted space in the multiline search input | Testing #64270
<img width="216" alt="screenshot 2018-12-04 at 13 42 34" src="https://user-images.githubusercontent.com/5047891/49442689-8fa1cc00-f7ca-11e8-9664-e774e7a2f424.png">
| feature-request,ux,search | low | Major |
387,269,723 | rust | syn fails to compile on raspberry pi (ARMv7) | # Environment
Board: Raspberry pi model 3
CPU: ARM v7 CPU
OS: Linux raspberrypi 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l GNU/Linux
Cargo: cargo 1.30.0 (a1a4ad372 2018-11-02)
rustc: rustc 1.30.1 (1433507eb 2018-11-07)
target: armv7-unknown-linux-gnueabihf
# Issue
Running `cargo build --verbose` results in the following (at the `0.15.22` tag cloned from the repo):
```shell
pi@raspberrypi:~/syn $ cargo build --verbose
Fresh unicode-xid v0.1.0
Fresh proc-macro2 v0.4.24
Fresh quote v0.6.10
Compiling syn v0.15.22 (/home/pi/syn)
Running `rustc --crate-name syn src/lib.rs --color always --crate-type lib --emit=dep-info,link -C debuginfo=2 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="proc-macro2"' --cfg 'feature="quote"' -C metadata=3d5b59c8593ca1fb -C extra-filename=-3d5b59c8593ca1fb --out-dir /home/pi/syn/target/debug/deps -C incremental=/home/pi/syn/target/debug/incremental -L dependency=/home/pi/syn/target/debug/deps --extern proc_macro2=/home/pi/syn/target/debug/deps/libproc_macro2-7718d2b1e096cece.rlib --extern quote=/home/pi/syn/target/debug/deps/libquote-9e15765670015a6c.rlib --extern unicode_xid=/home/pi/syn/target/debug/deps/libunicode_xid-5d3b202eba6c006f.rlib --cfg syn_can_use_thread_id --cfg syn_can_call_macro_by_path`
error: Could not compile `syn`.
Caused by:
process didn't exit successfully: `rustc --crate-name syn src/lib.rs --color always --crate-type lib --emit=dep-info,link -C debuginfo=2 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="proc-macro2"' --cfg 'feature="quote"' -C metadata=3d5b59c8593ca1fb -C extra-filename=-3d5b59c8593ca1fb --out-dir /home/pi/syn/target/debug/deps -C incremental=/home/pi/syn/target/debug/incremental -L dependency=/home/pi/syn/target/debug/deps --extern proc_macro2=/home/pi/syn/target/debug/deps/libproc_macro2-7718d2b1e096cece.rlib --extern quote=/home/pi/syn/target/debug/deps/libquote-9e15765670015a6c.rlib --extern unicode_xid=/home/pi/syn/target/debug/deps/libunicode_xid-5d3b202eba6c006f.rlib --cfg syn_can_use_thread_id --cfg syn_can_call_macro_by_path` (signal: 11, SIGSEGV: invalid memory reference)
```
# Notes
It's worth noting that cross-compiling from Linux x64 to ARM works fine using `arm-linux-gnueabihf-gcc`.
387,351,681 | go | cmd/compile: better const-based optimizations handling in compiler frontend | This issue describes a problem, potential solutions and results collected by a prototype implementation.<br>
It will be followed by a go-review CL.
## The current state
Some important Go optimization phases happen at the frontend stage, where we have an annotated AST.
Notable examples:
1. Inlining.
1. Escape analysis.
1. Things like short string comparison optimization.
While the idea of porting everything to the SSA backend sounds exciting:
1. It's not trivial.
1. It's hard to express some things on the SSA level as it is very low-level and sometimes relevant
information is erased during the SSA construction.
1. The SSA backend has its own set of little problems.
My proposal is to extend the frontend a little bit so that there is just a bit more information available and some useful optimizations
that are implemented in the frontend can work better. It won't make a (potential) transition to SSA harder, as the changes are
very focused and affect only a few places.
## The problem
The problem I want to address in this document is the loss of the const nature of values used through local variables.
If you use a constant value directly, the frontend recognizes it. If you use it through a never-modified local variable,
it does not apply the optimizations.
I'll describe two things I was investigating, but there can be a few more places where the proposed solution is applicable.
### Problematic case 1: escape analysis for make-slice
(See `#24577`)
```go
// Can be allocated on stack if "xs" doesn't escape.
xs := make([]int, 10)
// Always escapes (non-const size reason).
n := 10
xs := make([]int, n)
```
One might argue that this is not a useful piece of code.
True, but the code below does essentially the same thing and is more meaningful:
```go
func allocSlice(length int) []int {
return make([]int, length)
}
func example() int {
xs := allocSlice(10)
return len(xs)
}
```
`allocSlice` is inlined. After inlining, the produced code looks like this:
```go
func example() int {
_length := 10
_ret := make([]int, _length)
xs = _ret
return len(xs)
}
```
Note that it introduces an additional assignment,
which makes the escape analysis too conservative there.
If the const info were preserved, there could be no allocation at all.
### Problematic case 2: short string comparison optimizations
Functions like `strings.HasPrefix` and `strings.HasSuffix` also suffer from constness info loss during inlining.
```go
// This is how HasPrefix is defined in stdlib.
// HasPrefix tests whether the string s begins with prefix.
func HasPrefix(s, prefix string) bool {
return len(s) >= len(prefix) && s[0:len(prefix)] == prefix
}
```
Now let's define two routines:
```go
func isComment1(line string) bool {
return strings.HasPrefix(line, "#")
}
func isComment2(line string) bool {
return len(line) >= 1 && line[0] == '#'
}
```
The first version is more readable and idiomatic. It could be optimized into the same code as `isComment2` if we
inlined the `HasPrefix` body manually, even without substituting `len("#")`, since that is a const expression anyway,
but as produced by the inliner it is not as efficient, because the constness is lost through the intermediate variables.
Here is a difference in performance:
```
name old time/op new time/op delta
HasPrefix/#-8 16.5ns Β± 3% 2.8ns Β± 8% -82.98% (p=0.000 n=10+10)
```
The amount of generated code is also different (for AMD64 it's 105 vs 31 bytes).
### Other potentially related issues
* https://github.com/golang/go/issues/16108
## Effectively constant value definition
We can consider local variable as effectively constant if:
1. Its address is never taken.
2. It is never assigned to.
The closure value capturing mechanism uses a somewhat similar definition to figure out whether to capture something
by value or by reference.
## Proposed solution
There are 2 parts. One is very simple and solves the simplest case; the other solves the inlining-related problem.
1. Use `Node.Name.Defn` for effectively constant nodes. So, if they're never assigned to,
just use the constant value in the analysis instead of the `ONAME` node inside the optimizations and escape analysis.
The good news is that this is very trivial to do. The first CL includes this step.
The bad news is that the inliner doesn't assign `Name.Defn` and uses `OAS2` nodes,
so the inlining problem remains.
2. During inlining, assign the `Node.Name.Defn` field for constant initializers.
More below.
```go
// The statements below introduce ONAMEs that have Defn bound to them,
// so we already have the relevant data for the simplest case, (1).
x := 10
var y = 10
```
The first step is solved by introducing `getConstValue(n *Node) *Node`.
When a node is passed to it, it looks into the `n.Name.Defn` field and, if that is an `OAS` whose `Right` field is constant, returns that constant.
Otherwise, it returns `n` itself (it could return `nil` as well; that detail is almost irrelevant).
In the place where a constness-dependent optimization is about to happen, instead of checking the node itself, one
queries the const value of the node with `getConstValue`.
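A self-contained illustration of that shape (the types here are simplified stand-ins for the compiler's `*Node`, not the real declarations):

```go
type Op int

const (
	OLITERAL Op = iota // a constant
	ONAME              // a named variable
	OAS                // an assignment statement
)

type Node struct {
	Op    Op
	Defn  *Node // defining statement, for ONAME nodes
	Right *Node // RHS, for OAS nodes
}

// getConstValue returns the constant initializer of n if n is an
// effectively constant local, and n itself otherwise.
func getConstValue(n *Node) *Node {
	if n.Op == ONAME && n.Defn != nil && n.Defn.Op == OAS {
		if rhs := n.Defn.Right; rhs != nil && rhs.Op == OLITERAL {
			return rhs
		}
	}
	return n
}
```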
The second step implies that `getConstValue` can also return appropriate constant values passed into an inlined function.
Note that it won't help to handle cases like this:
```go
n1 := 10
n2 := n1
xs := make([]int, n2)
```
The proposed mechanism is very trivial (or primitive, even), but it covers useful cases and is deterministic, which means programmers can rely on it. It also makes functions like `strings.HasPrefix` appropriate in hot spots, so you don't have to inline them manually or write a specialization as demonstrated in one of the examples above.
## Extra cases
Another example that can be enhanced by `getConstValue` is `OSTR2BYTES`.
```go
package str2bytes

import "testing"
type object struct {
name string
data []byte
}
func newObject(name, data string) *object {
return &object{
name: name,
data: []byte(data),
}
}
var sink interface{}
func BenchmarkNewObject(b *testing.B) {
var o *object
for i := 0; i < b.N; i++ {
o = newObject("foo", "123")
}
sink = o
}
```
```
name old time/op new time/op delta
NewObject-8 96.6ns Β± 2% 80.0ns Β± 2% -17.20% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
NewObject-8 56.0B Β± 0% 51.0B Β± 0% -8.93% (p=0.000 n=10+10)
```
Any use of `Isconst` inside the compiler frontend is a potential use case for `getConstValue`, but one should
not go too far (it can change compile-time semantics, like reporting an out-of-bounds access to an array through a variable that is effectively constant, which the spec does not permit; and other spots are something the SSA backend can optimize on its own).
## compilebench results
The effect on compilation time is negligible:
```
name old time/op new time/op delta
Template 353ms Β± 1% 354ms Β± 1% ~ (p=0.497 n=9+10)
Unicode 143ms Β± 2% 142ms Β± 1% ~ (p=0.050 n=9+9)
GoTypes 1.40s Β± 2% 1.40s Β± 1% ~ (p=0.780 n=10+9)
Compiler 6.71s Β± 2% 6.69s Β± 0% ~ (p=0.370 n=9+8)
SSA 18.0s Β± 1% 17.8s Β± 1% -1.00% (p=0.004 n=10+10)
Flate 233ms Β± 1% 232ms Β± 1% ~ (p=0.123 n=10+10)
GoParser 287ms Β± 1% 288ms Β± 1% ~ (p=0.400 n=9+10)
Reflect 792ms Β± 2% 787ms Β± 0% ~ (p=0.447 n=10+9)
Tar 324ms Β± 0% 326ms Β± 1% +0.49% (p=0.021 n=8+9)
XML 462ms Β± 2% 459ms Β± 1% ~ (p=0.315 n=10+9)
StdCmd 31.8s Β± 1% 31.8s Β± 1% ~ (p=0.739 n=10+10)
```
With the changes from the initial CL (short string cmp + `isSimpleSliceMake`) there is a slight improvement in binary sizes
due to better optimization opportunities:
```
name old text-bytes new text-bytes delta
HelloSize 753kB Β± 0% 753kB Β± 0% -0.02% (p=0.000 n=10+10)
CmdGoSize 10.2MB Β± 0% 10.2MB Β± 0% -0.19% (p=0.000 n=10+10)
name old exe-bytes new exe-bytes delta
HelloSize 1.09MB Β± 0% 1.09MB Β± 0% ~ (all equal)
CmdGoSize 14.1MB Β± 0% 14.1MB Β± 0% -0.17% (p=0.000 n=10+10)
``` | Performance,compiler/runtime | high | Major |
387,357,367 | vscode | consider to allow for variable substitution in the inputs section | testing #64214:
I assume that variable substitution in the "inputs" section is disabled to avoid recursion...
However, I think it would be helpful to allow "simple" non-interactive variables like ${env:...}, ${config:...}, ${workspaceFolder:...}. This makes it possible to do something like this:
```
{
"label": "echoPrompt",
"description": "Please enter a folder name",
"default": "${env:HOME}",
"type": "prompt",
}
```
| feature-request,tasks,variable-resolving | medium | Critical |
387,391,519 | bitcoin | Bitcoind crashes with Too Many Files error | <!-- This issue tracker is only for technical issues related to Bitcoin Core.
General bitcoin questions and/or support requests are best directed to the Bitcoin StackExchange at https://bitcoin.stackexchange.com.
For reporting security issues, please read instructions at https://bitcoincore.org/en/contact/.
If the node is "stuck" during sync or giving "block checksum mismatch" errors, please ensure your hardware is stable by running memtest and observe CPU temperature with a load-test tool such as linpack before creating an issue! -->
<!-- Describe the issue -->
Stops accepting RPC calls
<!--- What behavior did you expect? -->
<!--- What was the actual behavior (provide screenshots if the issue is GUI-related)? -->
<!--- How reliably can you reproduce the issue, what are the steps to do so? -->
Every time I start bitcoind
<!-- What version of Bitcoin Core are you using, where did you get it (website, self-compiled, etc)? -->
Version 17 downloaded executable
<!-- What type of machine are you observing the error on (OS/CPU and disk type)? -->
Mac OSX
<!-- Any extra information that might be useful in the debugging process. -->
<!--- This is normally the contents of a `debug.log` or `config.log` file. Raw text or a link to a pastebin type site are preferred. -->
Matthews-MacBook-Pro:~ matt$ /Applications/bitcoin-0.17.0/bin/bitcoind -server -testnet -txindex -rest -rpcuser=matt -rpcpassword=plexus -rpcport=8332 -debug=leveldb
2018-12-04T17:07:25Z Bitcoin Core version v0.17.0 (release build)
2018-12-04T17:07:25Z InitParameterInteraction: parameter interaction: -whitelistforcerelay=1 -> setting -whitelistrelay=1
2018-12-04T17:07:25Z Assuming ancestors of block 0000000000000037a8cd3e06cd5edbfe9dd1dbcc5dacab279376ef7cfc2b4c75 have valid signatures.
2018-12-04T17:07:25Z Setting nMinimumChainWork=00000000000000000000000000000000000000000000007dbe94253893cbd463
2018-12-04T17:07:25Z Using the 'sse4(1way),sse41(4way),avx2(8way)' SHA256 implementation
2018-12-04T17:07:25Z Using RdRand as an additional entropy source
2018-12-04T17:07:25Z Default data directory /Users/matt/Library/Application Support/Bitcoin
2018-12-04T17:07:25Z Using data directory /Users/matt/Library/Application Support/Bitcoin/testnet3
2018-12-04T17:07:25Z Using config file /Users/matt/Library/Application Support/Bitcoin/bitcoin.conf
2018-12-04T17:07:25Z Using at most 125 automatic connections (283 file descriptors available)
2018-12-04T17:07:25Z Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements
2018-12-04T17:07:25Z Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements
2018-12-04T17:07:25Z Using 4 threads for script verification
2018-12-04T17:07:25Z scheduler thread start
2018-12-04T17:07:25Z HTTP: creating work queue of depth 16
2018-12-04T17:07:25Z Config options rpcuser and rpcpassword will soon be deprecated. Locally-run instances may remove rpcuser to use cookie-based auth, or may be replaced with rpcauth. Please see share/rpcauth for rpcauth auth generation.
2018-12-04T17:07:25Z HTTP: starting 4 worker threads
2018-12-04T17:07:25Z Using wallet directory /Users/matt/Library/Application Support/Bitcoin/testnet3/wallets
2018-12-04T17:07:25Z init message: Verifying wallet(s)...
2018-12-04T17:07:25Z Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010)
2018-12-04T17:07:25Z Using wallet wallet.dat
2018-12-04T17:07:25Z BerkeleyEnvironment::Open: LogDir=/Users/matt/Library/Application Support/Bitcoin/testnet3/wallets/database ErrorFile=/Users/matt/Library/Application Support/Bitcoin/testnet3/wallets/db.log
2018-12-04T17:07:26Z Cache configuration:
2018-12-04T17:07:26Z * Using 2.0MiB for block index database
2018-12-04T17:07:26Z * Using 56.0MiB for transaction index database
2018-12-04T17:07:26Z * Using 8.0MiB for chain state database
2018-12-04T17:07:26Z * Using 384.0MiB for in-memory UTXO set (plus up to 286.1MiB of unused mempool space)
2018-12-04T17:07:26Z init message: Loading block index...
2018-12-04T17:07:26Z LevelDB using max_open_files=1000 (default=1000)
2018-12-04T17:07:26Z Opening LevelDB in /Users/matt/Library/Application Support/Bitcoin/testnet3/blocks/index
2018-12-04T17:07:26Z leveldb: Recovering log #791
2018-12-04T17:07:26Z leveldb: Level-0 table #801: started
2018-12-04T17:07:26Z leveldb: Level-0 table #801: 1101 bytes OK
2018-12-04T17:07:26Z leveldb: Delete type=2 #788
2018-12-04T17:07:26Z leveldb: Delete type=2 #785
2018-12-04T17:07:26Z leveldb: Delete type=2 #784
2018-12-04T17:07:26Z leveldb: Delete type=2 #790
2018-12-04T17:07:26Z leveldb: Delete type=2 #786
2018-12-04T17:07:26Z leveldb: Delete type=2 #787
2018-12-04T17:07:26Z leveldb: Delete type=2 #783
2018-12-04T17:07:26Z leveldb: Delete type=2 #782
2018-12-04T17:07:26Z leveldb: Delete type=2 #781
2018-12-04T17:07:26Z leveldb: Delete type=0 #791
2018-12-04T17:07:26Z leveldb: Delete type=3 #789
2018-12-04T17:07:26Z Opened LevelDB successfully
2018-12-04T17:07:26Z Using obfuscation key for /Users/matt/Library/Application Support/Bitcoin/testnet3/blocks/index: 0000000000000000
2018-12-04T17:07:27Z leveldb: Compacting 1@0 + 8@1 files
2018-12-04T17:07:27Z leveldb: Generated table #803@0: 14888 keys, 2140910 bytes
2018-12-04T17:07:27Z leveldb: Generated table #804@0: 5178 keys, 744418 bytes
2018-12-04T17:07:28Z leveldb: Generated table #805@0: 7347 keys, 1056062 bytes
2018-12-04T17:07:28Z leveldb: Generated table #806@0: 14938 keys, 2136593 bytes
2018-12-04T17:07:28Z leveldb: Generated table #807@0: 1256 keys, 179294 bytes
2018-12-04T17:07:28Z leveldb: Generated table #808@0: 14970 keys, 2138641 bytes
2018-12-04T17:07:28Z leveldb: Generated table #809@0: 3608 keys, 517536 bytes
2018-12-04T17:07:28Z leveldb: Generated table #810@0: 2 keys, 221 bytes
2018-12-04T17:07:28Z leveldb: Compacted 1@0 + 8@1 files => 8913675 bytes
2018-12-04T17:07:28Z leveldb: compacted to: files[ 0 8 55 91 0 0 0 ]
2018-12-04T17:07:33Z LoadBlockIndexDB: last block file = 154
2018-12-04T17:07:33Z LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=5075, size=108826844, heights=1441407...1446454, time=2018-10-31...2018-12-04)
2018-12-04T17:07:33Z Checking all blk files are present...
2018-12-04T17:07:33Z LevelDB using max_open_files=1000 (default=1000)
2018-12-04T17:07:33Z Opening LevelDB in /Users/matt/Library/Application Support/Bitcoin/testnet3/chainstate
2018-12-04T17:07:33Z leveldb: Recovering log #9327
2018-12-04T17:07:33Z leveldb: Level-0 table #9332: started
2018-12-04T17:07:33Z leveldb: Level-0 table #9332: 308 bytes OK
2018-12-04T17:07:33Z leveldb: Delete type=2 #9326
2018-12-04T17:07:33Z leveldb: Delete type=0 #9327
2018-12-04T17:07:33Z leveldb: Delete type=2 #9311
2018-12-04T17:07:33Z leveldb: Delete type=2 #9310
2018-12-04T17:07:33Z leveldb: Delete type=2 #9312
2018-12-04T17:07:33Z leveldb: Delete type=3 #9325
2018-12-04T17:07:33Z Opened LevelDB successfully
2018-12-04T17:07:33Z Using obfuscation key for /Users/matt/Library/Application Support/Bitcoin/testnet3/chainstate: fcab38c4f274e80a
2018-12-04T17:07:34Z Loaded best chain: hashBestChain=00000000000000063df115997921bf6856c08a08318b592df576c2f863b7d7c5 height=1446453 date=2018-12-04T04:45:18Z progress=0.999422
2018-12-04T17:07:34Z init message: Rewinding blocks...
2018-12-04T17:07:41Z WriteBatch memory usage: db=index, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:41Z WriteBatch memory usage: db=chainstate, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:41Z init message: Verifying blocks...
2018-12-04T17:07:41Z Verifying last 6 blocks at level 3
2018-12-04T17:07:41Z [0%]...leveldb: Compacting 1@0 + 3@1 files
2018-12-04T17:07:41Z leveldb: Generated table #9334@0: 1 keys, 207 bytes
2018-12-04T17:07:41Z [16%]...leveldb: Generated table #9335@0: 3162 keys, 181793 bytes
2018-12-04T17:07:41Z [33%]...leveldb: Generated table #9336@0: 2846 keys, 181033 bytes
2018-12-04T17:07:41Z leveldb: Compacted 1@0 + 3@1 files => 363033 bytes
2018-12-04T17:07:41Z leveldb: compacted to: files[ 0 3 78 484 0 0 0 ]
2018-12-04T17:07:41Z [50%]...[66%]...[83%]...[99%]...[DONE].
2018-12-04T17:07:41Z No coin database inconsistencies in last 6 blocks (444 transactions)
2018-12-04T17:07:41Z block index 15119ms
2018-12-04T17:07:41Z LevelDB using max_open_files=1000 (default=1000)
2018-12-04T17:07:41Z Opening LevelDB in /Users/matt/Library/Application Support/Bitcoin/testnet3/indexes/txindex
2018-12-04T17:07:41Z leveldb: Recovering log #12582
2018-12-04T17:07:41Z leveldb: Delete type=3 #12580
2018-12-04T17:07:41Z leveldb: Delete type=0 #12582
2018-12-04T17:07:41Z Opened LevelDB successfully
2018-12-04T17:07:41Z Using obfuscation key for /Users/matt/Library/Application Support/Bitcoin/testnet3/indexes/txindex: 0000000000000000
2018-12-04T17:07:41Z init message: Loading wallet...
2018-12-04T17:07:41Z txindex thread start
2018-12-04T17:07:41Z txindex is enabled at height 1446453
2018-12-04T17:07:41Z txindex thread exit
2018-12-04T17:07:42Z [default wallet] nFileVersion = 170000
2018-12-04T17:07:42Z [default wallet] Keys: 61977 plaintext, 0 encrypted, 61977 w/ metadata, 61977 total. Unknown wallet records: 1
2018-12-04T17:07:42Z [default wallet] Wallet completed loading in 1474ms
2018-12-04T17:07:42Z [default wallet] setKeyPool.size() = 2000
2018-12-04T17:07:42Z [default wallet] mapWallet.size() = 630
2018-12-04T17:07:42Z [default wallet] mapAddressBook.size() = 70700
2018-12-04T17:07:42Z mapBlockIndex.size() = 1446512
2018-12-04T17:07:42Z nBestHeight = 1446453
2018-12-04T17:07:42Z torcontrol thread start
2018-12-04T17:07:42Z AddLocal([2600:1700:57f0:8050:14a2:afde:1894:925c]:18333,1)
2018-12-04T17:07:42Z Discover: IPv6 en0: 2600:1700:57f0:8050:14a2:afde:1894:925c
2018-12-04T17:07:42Z AddLocal([2600:1700:57f0:8050:2556:3391:9481:b0b]:18333,1)
2018-12-04T17:07:42Z Discover: IPv6 en0: 2600:1700:57f0:8050:2556:3391:9481:b0b
2018-12-04T17:07:42Z Bound to [::]:18333
2018-12-04T17:07:42Z Bound to 0.0.0.0:18333
2018-12-04T17:07:42Z init message: Loading P2P addresses...
2018-12-04T17:07:42Z Leaving InitialBlockDownload (latching to false)
2018-12-04T17:07:42Z Imported mempool transactions from disk: 42 succeeded, 0 failed, 0 expired, 0 already there
2018-12-04T17:07:43Z Loaded 62215 addresses from peers.dat 192ms
2018-12-04T17:07:43Z init message: Loading banlist...
2018-12-04T17:07:43Z init message: Starting network threads...
2018-12-04T17:07:43Z net thread start
2018-12-04T17:07:43Z init message: Done loading
2018-12-04T17:07:43Z dnsseed thread start
2018-12-04T17:07:43Z addcon thread start
2018-12-04T17:07:43Z opencon thread start
2018-12-04T17:07:43Z msghand thread start
2018-12-04T17:07:43Z New outbound peer connected: version: 70015, blocks=1446521, peer=0
2018-12-04T17:07:49Z UpdateTip: new best=000000000000002d5ecbdf5e466a286dcce223661ee85fa5947d53861442565a height=1446454 version=0x20000000 log2_work=71.761827 tx=48217097 date='2018-12-04T04:52:27Z' progress=0.999427 cache=0.0MiB(125txo) warning='11 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000a6a191de2c5c2cdc42b64472579911aedf49607852c26e83c6 height=1446455 version=0x20000000 log2_work=71.761848 tx=48217134 date='2018-12-04T04:59:11Z' progress=0.999433 cache=0.0MiB(185txo) warning='11 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000b927b40c61133cb79e87434b7724c6473e11cab0b3807496e1eb height=1446456 version=0x20000000 log2_work=71.761848 tx=48217234 date='2018-12-04T05:19:19Z' progress=0.999448 cache=0.1MiB(419txo) warning='11 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z Pre-allocating up to position 0xe00000 in rev00154.dat
2018-12-04T17:07:49Z UpdateTip: new best=00000000000001100433c151cc8eca67aa93f56aa6acbbd23d6be471ac33a946 height=1446457 version=0x2000e000 log2_work=71.761869 tx=48217275 date='2018-12-04T05:28:32Z' progress=0.999456 cache=0.1MiB(460txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000017e1ea5199451d3e3541726ed4bd857fcaf9ea3c8edeef105bb689 height=1446458 version=0x20000000 log2_work=71.761869 tx=48217390 date='2018-12-04T05:48:40Z' progress=0.999471 cache=0.1MiB(648txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000ec28bf31cc9fea23314c03b2d46cfac2b3410d7f14d2544962 height=1446459 version=0x20000000 log2_work=71.76189 tx=48217431 date='2018-12-04T05:53:56Z' progress=0.999475 cache=0.1MiB(717txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000dd9d42b543aae4fc1c1feacb0f08784c28d8fff4e481cdfc1c267 height=1446460 version=0x20000000 log2_work=71.76189 tx=48217559 date='2018-12-04T06:14:04Z' progress=0.999491 cache=0.1MiB(1003txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000c02b40c20b543aa0ff681586ad40f5f7b28ed4320d845fb9a9d33 height=1446461 version=0x20000000 log2_work=71.76189 tx=48217683 date='2018-12-04T06:34:08Z' progress=0.999507 cache=0.2MiB(1166txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.0MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000002833d8fee2756457a1e8356afa5674f10d97ab711afd0aa5f4213 height=1446462 version=0x20000000 log2_work=71.76189 tx=48217869 date='2018-12-04T06:54:16Z' progress=0.999522 cache=0.2MiB(1439txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.0MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000953b09524644db82230fbd4939532f34dc8ad5ff4a0a6cbe79 height=1446463 version=0x20000000 log2_work=71.761911 tx=48217939 date='2018-12-04T07:01:34Z' progress=0.999528 cache=0.2MiB(1554txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000011ddc97ba7e679b04aecb5dee23a9895a8c59d6c4fc4fce339c height=1446464 version=0x20000000 log2_work=71.761932 tx=48218016 date='2018-12-04T07:11:21Z' progress=0.999536 cache=0.2MiB(1657txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000010c87a927806b19eef5117088aa6f8b30a0062013a342eae3a3 height=1446465 version=0x20000000 log2_work=71.761953 tx=48218151 date='2018-12-04T07:29:54Z' progress=0.999550 cache=0.3MiB(1874txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000f413b6abf1bb13d47c4caa2b9c68d86861f64c6ea762f96d21 height=1446466 version=0x20000000 log2_work=71.761973 tx=48218226 date='2018-12-04T07:39:24Z' progress=0.999557 cache=0.3MiB(1990txo) warning='11 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000c3876835ae819e6a06b2689784bd5a49dff5a4d04c358bbd9d height=1446467 version=0x2fff8000 log2_work=71.761994 tx=48218249 date='2018-12-04T07:42:16Z' progress=0.999560 cache=0.3MiB(2017txo) warning='12 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000004f7fff32ff0255f533f69c6ed204714585b7e24c7294a8c5af height=1446468 version=0x20400000 log2_work=71.762015 tx=48218351 date='2018-12-04T07:54:48Z' progress=0.999569 cache=0.3MiB(2158txo) warning='13 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000000df9611365cbefce75de3aecceec9a71c484a851f3d00398b8 height=1446469 version=0x20800000 log2_work=71.762036 tx=48218353 date='2018-12-04T07:55:01Z' progress=0.999570 cache=0.3MiB(2160txo) warning='14 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000004a260c26fe6b315d5cc1f0a5490d62fbc08689042029f9af69050 height=1446470 version=0x20000000 log2_work=71.762036 tx=48218514 date='2018-12-04T08:15:12Z' progress=0.999585 cache=0.3MiB(2395txo) warning='14 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000005ff7940b2a1e7f4816e4a664d2238968739bd5fdafd0f4ecd3 height=1446471 version=0x20000000 log2_work=71.762057 tx=48218515 date='2018-12-04T08:15:09Z' progress=0.999585 cache=0.3MiB(2396txo) warning='14 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=0000000000000056df79f565bb235683bd8ecb388e4338f22f5c3e30e2f11d3b height=1446472 version=0x20000000 log2_work=71.762078 tx=48218609 date='2018-12-04T08:26:31Z' progress=0.999594 cache=0.3MiB(2537txo) warning='14 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000005e817358aa64b8f0cb6fa33fd50bef0d163826eee6dd00a68c height=1446473 version=0x20000000 log2_work=71.762099 tx=48218622 date='2018-12-04T08:28:05Z' progress=0.999595 cache=0.3MiB(2548txo) warning='14 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=0000000000000012cd1646b81f12739a74f996edd07b574b852c2a8f8d12ea0f height=1446474 version=0x20000000 log2_work=71.76212 tx=48218774 date='2018-12-04T08:46:45Z' progress=0.999610 cache=0.4MiB(2788txo) warning='14 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000725a7cb479398be3bb12858628fbb263f4a6bf15c4241f9606 height=1446475 version=0x20800000 log2_work=71.762141 tx=48218781 date='2018-12-04T08:47:31Z' progress=0.999610 cache=0.4MiB(2800txo) warning='15 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=0000000000000111c4572babeaf1c9526e8eddaa22beff9c4bf71de473ca3841 height=1446476 version=0x20000000 log2_work=71.762161 tx=48218921 date='2018-12-04T09:06:45Z' progress=0.999625 cache=0.4MiB(2998txo) warning='15 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=000000000000004b73101db3749c7b060916dbc490855a78daefd2bcd008b21b height=1446477 version=0x20000000 log2_work=71.762182 tx=48218984 date='2018-12-04T09:15:17Z' progress=0.999632 cache=0.4MiB(3139txo) warning='15 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000c6bba9e81651ad471b671f3acd7cd5ed9ae79cb2ac1f045061 height=1446478 version=0x20000000 log2_work=71.762203 tx=48219018 date='2018-12-04T09:18:57Z' progress=0.999635 cache=0.4MiB(3206txo) warning='15 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000900505f898e53cbaaaa446ecc2a0aff020a6f5161b60f9063c height=1446479 version=0x20000000 log2_work=71.762224 tx=48219029 date='2018-12-04T09:20:01Z' progress=0.999636 cache=0.4MiB(3222txo) warning='15 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z New outbound peer connected: version: 70015, blocks=1446521, peer=1
2018-12-04T17:07:49Z UpdateTip: new best=000000000000001865349c72ef373b8211c412af115702e6888b0c6772ee82e3 height=1446480 version=0x20000000 log2_work=71.762245 tx=48219087 date='2018-12-04T09:26:42Z' progress=0.999641 cache=0.5MiB(3322txo) warning='15 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.1MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000b433e9fbbd278dc567c90cace86556444e1ae57649741c5612 height=1446481 version=0x20000000 log2_work=71.762266 tx=48219212 date='2018-12-04T09:39:26Z' progress=0.999651 cache=0.5MiB(3521txo) warning='15 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.1MiB, after=0.2MiB
2018-12-04T17:07:49Z UpdateTip: new best=00000000000000d5803219f9474245dcbfdc70a44580f25eba34a7d54b599a96 height=1446482 version=0x3fffe000 log2_work=71.762287 tx=48219310 date='2018-12-04T09:52:36Z' progress=0.999661 cache=0.5MiB(3664txo) warning='16 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:49Z UpdateTip: new best=0000000000000044fe30a71fba844f92b427de27b7f6173d684699356d49f073 height=1446483 version=0x20000000 log2_work=71.762308 tx=48219379 date='2018-12-04T09:57:19Z' progress=0.999665 cache=0.5MiB(3784txo) warning='16 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:49Z UpdateTip: new best=0000000000000041704eaee1703fd8ee63a680b430af3098c1d2e8549c77d86e height=1446484 version=0x20000000 log2_work=71.762328 tx=48219428 date='2018-12-04T10:02:35Z' progress=0.999669 cache=0.5MiB(3851txo) warning='16 of last 100 blocks have unexpected version'
2018-12-04T17:07:49Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000005ec9c970eaa7953d6dd52532a242583f011d2d73fd94b48ca1 height=1446485 version=0x2000e000 log2_work=71.762349 tx=48219503 date='2018-12-04T10:14:32Z' progress=0.999678 cache=0.5MiB(3983txo) warning='17 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000000dfb2d5f5c06f1ea6122a12886204164ac4cfd2f7bdc95daf05 height=1446486 version=0x20000000 log2_work=71.76237 tx=48219551 date='2018-12-04T10:20:13Z' progress=0.999683 cache=0.6MiB(4088txo) warning='17 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=0000000000000003d2037b15bef07123e0e7b8750c5a0188fa8f959bf12d1fb2 height=1446487 version=0x20000000 log2_work=71.762391 tx=48219612 date='2018-12-04T10:30:08Z' progress=0.999690 cache=0.6MiB(4177txo) warning='17 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000013bb1e193cc3e190d1c5683190ffb06a77f39e9fea1ece344b4 height=1446488 version=0x20000000 log2_work=71.762412 tx=48219678 date='2018-12-04T10:44:20Z' progress=0.999701 cache=0.6MiB(4289txo) warning='17 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=0000000000000018435c24b9822bd36fe8372d4afe79add694e53e1df19cbf2c height=1446489 version=0x2000e000 log2_work=71.762433 tx=48219686 date='2018-12-04T10:45:19Z' progress=0.999702 cache=0.6MiB(4307txo) warning='18 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000004fee26b2474777afeae45ccb401bba82a7b31ec7942e60073c height=1446490 version=0x3fffe000 log2_work=71.762454 tx=48219740 date='2018-12-04T10:56:15Z' progress=0.999711 cache=0.6MiB(4383txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000010cffca8bdddbbe64cadb0811f263c09e99e129fb7e3c76dd69c7c height=1446491 version=0x20000000 log2_work=71.762454 tx=48219865 date='2018-12-04T11:16:24Z' progress=0.999726 cache=0.6MiB(4564txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=0000000000127ce377076949417a616cf174d4d921f5c8469f839af6cfecd2a8 height=1446492 version=0x20000000 log2_work=71.762454 tx=48219957 date='2018-12-04T11:36:34Z' progress=0.999742 cache=0.6MiB(4697txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000003265773110b43eebc2cf5d218ffcd37fddd34e944d69b830e9 height=1446493 version=0x20000000 log2_work=71.762475 tx=48219973 date='2018-12-04T11:38:33Z' progress=0.999744 cache=0.6MiB(4717txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000002f92e46079e95d815adc8dc60ebdad1a3700135bfb7fe21ce0 height=1446494 version=0x2000e000 log2_work=71.762496 tx=48220033 date='2018-12-04T11:47:02Z' progress=0.999750 cache=0.6MiB(4819txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000010e437366fc750cd51f763715273813573ee19706bb80fe6feb height=1446495 version=0x20000000 log2_work=71.762516 tx=48220049 date='2018-12-04T11:54:05Z' progress=0.999756 cache=0.6MiB(4862txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000000dc8d0f77531f11e78fb0799ba22da26521f71dca56ae448b3b height=1446496 version=0x20000000 log2_work=71.762537 tx=48220084 date='2018-12-04T11:58:32Z' progress=0.999759 cache=0.7MiB(4907txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000000fdffea60041b4b78b249816aef47c2b11ebb37f389ebcd5f4d height=1446497 version=0x20000000 log2_work=71.762558 tx=48220115 date='2018-12-04T12:02:26Z' progress=0.999762 cache=0.7MiB(4930txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000000aa287e17caa22607fcd3b6d76ea54f7e8a54be831944c9d7bf height=1446498 version=0x20000000 log2_work=71.762579 tx=48220166 date='2018-12-04T12:14:10Z' progress=0.999771 cache=0.7MiB(5006txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000005f7fd1441bf4f3d8884b6d35d6cd9bb6069f5d1413b94bd07e height=1446499 version=0x20000000 log2_work=71.7626 tx=48220188 date='2018-12-04T12:15:56Z' progress=0.999773 cache=0.7MiB(5027txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=000000000000011cd1e6c4c65b781391d5f98afaf5bf583224c9dfd35312cfce height=1446500 version=0x20000000 log2_work=71.762621 tx=48220194 date='2018-12-04T12:17:25Z' progress=0.999774 cache=0.7MiB(5034txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000000b9a20c526ed07ab299b85c952d0d290dfcab5e23d78ce91044 height=1446501 version=0x20000000 log2_work=71.762642 tx=48220197 date='2018-12-04T12:17:41Z' progress=0.999774 cache=0.7MiB(5039txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000b83f71941586cf5c1d61cdb6b1720b5f6c2f089a463f1e7feece25a4 height=1446502 version=0x20000000 log2_work=71.762642 tx=48220206 date='2018-12-04T12:37:45Z' progress=0.999790 cache=0.7MiB(5059txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000057f9ad5833bb285426a625eac6aee4b06582a04a8ffd4661753e height=1446503 version=0x20000000 log2_work=71.762642 tx=48220394 date='2018-12-04T12:57:46Z' progress=0.999805 cache=0.7MiB(5353txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=0000000000004656449376305e090c4cdb5975c3766589270b3c6300a838f4a6 height=1446504 version=0x20000000 log2_work=71.762642 tx=48220479 date='2018-12-04T13:17:48Z' progress=0.999821 cache=0.7MiB(5496txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.2MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000000bc5445dcc5fe48437651d035dd02b4dc7941e1d7aca89b91af height=1446505 version=0x20000000 log2_work=71.762663 tx=48220556 date='2018-12-04T13:35:05Z' progress=0.999834 cache=0.7MiB(5655txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.2MiB, after=0.3MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000000c9ad52284a86ce9b3b99c0f6f5d4e8f41032e9119a7981daa6 height=1446506 version=0x3fffe000 log2_work=71.762683 tx=48220586 date='2018-12-04T13:41:24Z' progress=0.999839 cache=0.8MiB(5698txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:50Z UpdateTip: new best=0000000000000126d77f2721cf0961964de348743d572c9dab201e69a24dc29a height=1446507 version=0x20000000 log2_work=71.762704 tx=48220598 date='2018-12-04T13:42:32Z' progress=0.999840 cache=0.8MiB(5719txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:50Z UpdateTip: new best=00000000000e54acb8b3b06a3e4d1c7717922d028173f2e48987bb1c67479b8f height=1446508 version=0x20000000 log2_work=71.762704 tx=48220700 date='2018-12-04T14:02:42Z' progress=0.999856 cache=0.8MiB(5837txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:50Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=00000000000fe82f8eafe6e998a21897f0541e4a8a44ca57db64289fbd37bf33 height=1446509 version=0x20000000 log2_work=71.762704 tx=48220784 date='2018-12-04T14:22:51Z' progress=0.999871 cache=0.8MiB(6035txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=000000000008874f18591516935c6a9f70ba51e2e595430a45ded60c1243fd64 height=1446510 version=0x20000000 log2_work=71.762704 tx=48220873 date='2018-12-04T14:43:07Z' progress=0.999887 cache=0.8MiB(6169txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=0000000000042659b84a60782a1c3fe122d819553f4f1a1f8d9d6f11e6e69f6e height=1446511 version=0x20000000 log2_work=71.762704 tx=48220954 date='2018-12-04T15:03:18Z' progress=0.999903 cache=0.8MiB(6306txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=0000000000000047b77e6797554d88bed4e552516cf36d07c4f2c0fa0f19f2bc height=1446512 version=0x3fffe000 log2_work=71.762725 tx=48220963 date='2018-12-04T15:05:29Z' progress=0.999905 cache=0.8MiB(6320txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=00000000000c9c09a65437ce5c003b3f4f2cd92571293b9457e24b279e5a57cd height=1446513 version=0x20000000 log2_work=71.762725 tx=48221039 date='2018-12-04T15:25:31Z' progress=0.999920 cache=0.9MiB(6438txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=000000000000011b9057ea5be9bc46491d4482f4de2cbedd76d95e4fab46a5a6 height=1446514 version=0x20000000 log2_work=71.762746 tx=48221071 date='2018-12-04T15:35:22Z' progress=0.999928 cache=0.9MiB(6489txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=0000000000000066d74db224a0ee1b2f68093b1f6bb387ef0a6638178317c6aa height=1446515 version=0x20000000 log2_work=71.762767 tx=48221109 date='2018-12-04T15:40:49Z' progress=0.999932 cache=0.9MiB(6554txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=00000000000000eea4fec653de6e0fefe21f21a22969eac0ae489078b0d84b7c height=1446516 version=0x2000e000 log2_work=71.762788 tx=48221201 date='2018-12-04T15:59:17Z' progress=0.999947 cache=0.9MiB(6686txo) warning='20 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=00000000000000dc81f087c596ab27a61963ba0b614e8cce752bed0fda262363 height=1446517 version=0x20000000 log2_work=71.762809 tx=48221228 date='2018-12-04T16:06:21Z' progress=0.999952 cache=0.9MiB(6713txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z [default wallet] AddToWallet c4031ac9ef08555e73295662fcd34ee3a81ce2e7293c5a0c1825a9d5b661cab9
2018-12-04T17:07:51Z [default wallet] AddToWallet 90604551570b0456450c71c776d4f2d338179b8ea5422d76bf69f64227d2c3c7
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=0000000000000114892185841fca097ae538cff3b4a2e34f5db795d461f2975f height=1446518 version=0x20000000 log2_work=71.76283 tx=48221244 date='2018-12-04T16:12:04Z' progress=0.999957 cache=0.9MiB(6736txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=000000000000002738dc0900171dc153d2f05f4fc7e6e7015a6180f16b90bae5 height=1446519 version=0x20000000 log2_work=71.76285 tx=48221245 date='2018-12-04T16:12:06Z' progress=0.999957 cache=0.9MiB(6737txo) warning='19 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=000000000017ea0106e1ada0f470f8d27b41c7ad7d29dfa957bf3580c4b84249 height=1446520 version=0x20000000 log2_work=71.76285 tx=48221308 date='2018-12-04T16:32:11Z' progress=0.999972 cache=0.9MiB(6823txo) warning='18 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:51Z UpdateTip: new best=0000000000188a9a88f54251a671f18e553376bebf9f247fe7547e93e37ca1bb height=1446521 version=0x20000000 log2_work=71.76285 tx=48221359 date='2018-12-04T16:52:16Z' progress=0.999988 cache=0.9MiB(6877txo) warning='18 of last 100 blocks have unexpected version'
2018-12-04T17:07:51Z WriteBatch memory usage: db=txindex, before=0.3MiB, after=0.3MiB
2018-12-04T17:07:54Z P2P peers available. Skipped DNS seeding.
2018-12-04T17:07:54Z dnsseed thread exit
2018-12-04T17:08:13Z New outbound peer connected: version: 70015, blocks=1414448, peer=2
2018-12-04T17:08:14Z New outbound peer connected: version: 70015, blocks=1446521, peer=3
2018-12-04T17:08:15Z New outbound peer connected: version: 70015, blocks=1446521, peer=4
2018-12-04T17:08:44Z New outbound peer connected: version: 70015, blocks=1446521, peer=5
2018-12-04T17:08:50Z New outbound peer connected: version: 70015, blocks=1446521, peer=6
2018-12-04T17:08:56Z New outbound peer connected: version: 70015, blocks=1446521, peer=7
2018-12-04T17:09:32Z leveldb: Compacting 1@0 + 2@1 files
2018-12-04T17:09:32Z leveldb: Generated table #12585@0: 3897 keys, 205474 bytes
2018-12-04T17:09:32Z leveldb: Generated table #12586@0: 645 keys, 34023 bytes
2018-12-04T17:09:32Z leveldb: Generated table #12587@0: 287 keys, 15201 bytes
2018-12-04T17:09:32Z leveldb: Compacted 1@0 + 2@1 files => 254698 bytes
2018-12-04T17:09:32Z leveldb: compacted to: files[ 0 3 62 548 658 0 0 ]
2018-12-04T17:09:35Z [default wallet] AddToWallet 60ed8bec64160a4f3dbc7c3ba04ff2856114385da37028180a6848271564c800 new
2018-12-04T17:09:42Z leveldb: Compacting 1@1 + 48@2 files
2018-12-04T17:09:42Z libevent: Error from accept() call: Too many open files
2018-12-04T17:09:42Z libevent: Error from accept() call: Too many open files
2018-12-04T17:09:42Z libevent: Error from accept() call: Too many open files
2018-12-04T17:09:42Z libevent: Error from accept() call: Too many open files
2018-12-04T17:09:42Z libevent: Error from accept() call: Too many open files
2018-12-04T17:09:42Z leveldb: Generated table #12588@1: 41636 keys, 2170895 bytes
2018-12-04T17:09:42Z libevent: Error from accept() call: Too many open files
2018-12-04T17:09:42Z leveldb: compacted to: files[ 0 3 62 548 658 0 0 ]
2018-12-04T17:09:42Z leveldb: Compaction error: IO error: /Users/matt/Library/Application Support/Bitcoin/testnet3/indexes/txindex/012589.ldb: Too many open files
2018-12-04T17:09:42Z libevent: Error from accept() call: Too many open files
[... previous line repeated 10 more times ...]
2018-12-04T17:09:47Z libevent: Error from accept() call: Too many open files
[... previous line repeated 9 more times ...]
2018-12-04T17:09:52Z libevent: Error from accept() call: Too many open files
[... previous line repeated 261 more times ...]
| macOS,RPC/REST/ZMQ,Resource usage | low | Critical |
387,461,522 | TypeScript | Badly hoisted variable declaration on tslib import | **TypeScript Version:** 3.2.0-dev.20181204
**Search Terms:**
**Code**
```ts
export default async function foo() {
const out = [];
for await (const item of [1, 2]) {
out.push(item);
}
return out;
}
foo().then(console.log);
```
**Expected behavior:**
TS places variable declaration at correct scope.
**Actual behavior:**
Variable `_b` is used in a sibling scope (the `finally` block), although it was not declared in any shared parent scope. This only happens when `importHelpers` is enabled; when it is disabled, the variable is declared at the correct scope.
```js
import * as tslib_1 from "tslib";
export default async function foo() {
var e_1, _a;
const out = [];
try {
// _b is declared in wrong scope, because it is later used inside
// the finally block
for (var _b = tslib_1.__asyncValues([1, 2]), _c; _c = await _b.next(), !_c.done;) {
const item = _c.value;
out.push(item);
}
}
catch (e_1_1) { e_1 = { error: e_1_1 }; }
finally {
try {
if (_c && !_c.done && (_a = _b.return)) await _a.call(_b);
}
finally { if (e_1) throw e_1.error; }
}
return out;
}
foo().then(console.log);
```
**Playground Link:**
Difficult to do, here is a minimal repo where the issue can be reproduced: https://github.com/marvinhagemeister/ts-helper-bug
**Related Issues:**
no | Bug | low | Critical |
387,514,672 | flutter | Adding options to compress/original the picked videos in image_picker. | I am using image_picker to pick videos from the gallery. Videos picked on iOS are compressed, but on Android they are not. Is there any widget or plugin in Flutter that can compress the picked videos? | c: new feature,a: video,p: image_picker,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | medium | Critical |
387,525,904 | flutter | Ability to Integration Test Marked Text | In Flutter's integration/driver tests, it is not possible to programmatically type marked text into an iOS text field. Because of this, I couldn't write a regression test for https://github.com/flutter/engine/pull/6989.
The crash happened when marked text was cleared from a Cupertino text field. Because of this limitation, I was not able to reproduce the crash using our driver tests, despite trying various hacks in the engine code. I stopped short of trying something ugly like installing a Japanese keyboard and tapping on the screen at the right locations to hit the right onscreen keys. My failed attempts are located in https://github.com/flutter/flutter/pull/24861.
The [docs on UITextInput](https://developer.apple.com/documentation/uikit/uitextinput?language=objc) contain more info on marked text for anyone that's unfamiliar with phonetic input methods like the Japanese Kana keyboard.
<img width="468" alt="screen shot 2018-11-28 at 9 21 44 am" src="https://user-images.githubusercontent.com/389558/49479498-190bcb00-f7d8-11e8-8f40-1edd43316a05.png">
CC @HansMuller | a: text input,c: new feature,tool,t: flutter driver,P2,team-tool,triaged-tool | low | Critical |
387,537,866 | go | cmd/trace: provide aggregate 'go tool trace' goroutine analysis view | The current 'go tool trace' has a 'goroutine analysis' page that shows data similar to this:

and after selecting a goroutine, a detail page similar to this is displayed:

This should be changed so that either 1) the main summary page shows the 'detail' view for each goroutine in a list, or 2) the summary page gains a 'details' option that provides this information.
It is far more useful than having to view the detail for each goroutine individually.
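For what it's worth, `go tool trace` can already export aggregate pprof-style profiles via its `-pprof` flag (net, sync, syscall, sched), but - if I read the tool correctly - those are whole-trace totals, not the per-goroutine breakdown shown above:
```
go tool trace -pprof=sync trace.out > sync.pprof
go tool pprof sync.pprof
```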
| NeedsInvestigation,compiler/runtime | medium | Major |
387,538,166 | go | proposal: runtime/trace: improve Go diagnostics / tracing | **Overview**
Go offers extensive diagnostic [capabilities](https://golang.org/doc/diagnostics.html) out of the box, but they are of limited usefulness in a larger, highly concurrent, real-world application utilizing 100+ goroutines.
For the purposes of this proposal, I am using a real-world [application](https://github.com/robaho/go-trader). This application uses gRPC and has a single RPC stream active. It also supports the FIX financial protocol via standard TCP sockets (no active connections). It emits multicast packets as well. It has a web interface and a web socket interface (both with no active connections), so there are a lot of network endpoints in use. Although it is a real-world application, it is very small compared to most enterprise applications.
The following screenshot shows a profile and the OS CPU utilization - notice the very heavy system CPU usage:

A tree sample provides a bit more context:
<pre>
10ms runtime.libcCall
runtime.pthread_mutex_lock
runtime.semawakeup
runtime.notewakeup
runtime.startm
runtime.wakep
runtime.ready
runtime.goready.func1
runtime.systemstack
runtime.goready
runtime.send
runtime.chansend
runtime.selectnbsend
google.golang.org/grpc/internal/transport.(*controlBuffer).executeAndPut
google.golang.org/grpc/internal/transport.(*controlBuffer).put
google.golang.org/grpc/internal/transport.(*http2Server).handleData
google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams
google.golang.org/grpc.(*Server).serveStreams
google.golang.org/grpc.(*Server).handleRawConn.func1
</pre>
Here is the OS total CPU % breakdown:

Here is the 'web' view for the profile:

Here is a GODEBUG scheduler sampling (e.g. `GODEBUG=schedtrace=1000,scheddetail=1`) for the same process.
<pre>
SCHED 87116ms: gomaxprocs=4 idleprocs=1 threads=14 spinningthreads=1 idlethreads=4 runqueue=0 gcwaiting=0 nmidlelocked=0 stopwait=0 sysmonwait=0
P0: status=0 schedtick=580546 syscalltick=798347 m=-1 runqsize=0 gfreecnt=0
P1: status=2 schedtick=570346 syscalltick=805620 m=-1 runqsize=0 gfreecnt=0
P2: status=1 schedtick=571461 syscalltick=801749 m=4 runqsize=0 gfreecnt=0
P3: status=1 schedtick=567930 syscalltick=814616 m=3 runqsize=0 gfreecnt=0
M13: p=-1 curg=-1 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M12: p=-1 curg=50 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M11: p=-1 curg=19 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M10: p=-1 curg=-1 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M9: p=1 curg=16 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=false lockedg=-1
M8: p=-1 curg=-1 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M7: p=-1 curg=-1 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M6: p=-1 curg=-1 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M5: p=-1 curg=34 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=true lockedg=-1
M4: p=2 curg=-1 mallocing=0 throwing=0 preemptoff= locks=1 dying=0 helpgc=0 spinning=false blocked=false lockedg=-1
M3: p=3 curg=-1 mallocing=0 throwing=0 preemptoff= locks=1 dying=0 helpgc=0 spinning=true blocked=false lockedg=-1
M2: p=-1 curg=-1 mallocing=0 throwing=0 preemptoff= locks=1 dying=0 helpgc=0 spinning=false blocked=false lockedg=-1
M1: p=-1 curg=17 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=false lockedg=17
M0: p=0 curg=1 mallocing=0 throwing=0 preemptoff= locks=0 dying=0 helpgc=0 spinning=false blocked=false lockedg=-1
G1: status=3(chan receive) m=0 lockedm=-1
G17: status=6() m=1 lockedm=1
G2: status=4(force gc (idle)) m=-1 lockedm=-1
G18: status=4(GC sweep wait) m=-1 lockedm=-1
G3: status=4(finalizer wait) m=-1 lockedm=-1
G5: status=4(chan receive) m=-1 lockedm=-1
G6: status=4(IO wait) m=-1 lockedm=-1
G7: status=4(select) m=-1 lockedm=-1
G8: status=4(select) m=-1 lockedm=-1
G9: status=4(select) m=-1 lockedm=-1
G10: status=4(select) m=-1 lockedm=-1
G11: status=4(select) m=-1 lockedm=-1
G12: status=4(select) m=-1 lockedm=-1
... 20 entries exactly like above except Go routine number...
G28: status=4(GC worker (idle)) m=-1 lockedm=-1
G71: status=4(IO wait) m=-1 lockedm=-1
G72: status=4(IO wait) m=-1 lockedm=-1
G73: status=4(sleep) m=-1 lockedm=-1
G74: status=4(timer goroutine (idle)) m=-1 lockedm=-1
G41: status=4(select) m=-1 lockedm=-1
G42: status=4(IO wait) m=-1 lockedm=-1
G75: status=4(GC worker (idle)) m=-1 lockedm=-1
G29: status=4(GC worker (idle)) m=-1 lockedm=-1
G44: status=4(GC worker (idle)) m=-1 lockedm=-1
</pre>
When using the 'trace' the flow it a bit easier to spot, as 'goroutine analysis' shows:

And going into the detail for startMarketData (exhibit A) function, we see:

and examining the sync block time, we see:

but examining the scheduler wait time, we see:

**Dilema**
Although the profile clearly shows lots of OS activity, and the OS monitor shows heavy CPU utilization, there is no easy way to analyze what is causing the heavy system cpu%. The biggest problem is that the profile is NOT partitioned by 'go routine'. Using the trace tool, things are better but there is no aggregate view. Ideally the detail view (exhibit A above), would be shown on the 'analysis page' for all go routines - the only way to get this information is to inspect each routine individually. Something like this (filed as issue #29103)

But more importantly, there needs to be a summary analysis as to the "why" - not reviewing a graph, for example, if I open the detail for the start market data routine above, it should show (sample only):
<pre>
startmarketdata.func1
network wait time 102 ms:
5% send on socket
70% poll
25% read on socket
scheduler wait 1 us
57% wait for GC cycle
43% wait for CPU
sync time 50 ms
67% wait on channel
100% sent by go routine xxxx
23% wait on mutex filename:line#
etc...
</pre>
The user can also use the 'trace viewer', which is 100% manual - it is very powerful, but it is difficult to see the forest for the trees.
**Specific Proposals**
1. The ability to add a "name" to a goroutine, something like:
<pre>
go "myname" func(){...}
go stringexp func(){...}
</pre>
Working with the current naming is more difficult than it needs to be - you need to be an expert in all of the codebases of all of the libraries to even have a basic understanding of what is going on. The name would allow the library creator to provide significant information as to its purpose. This name should be appended to the existing "name" in the diagnostic tools. (See the profiler-labels sketch after this list for the closest workaround available today.)
2. In lieu of 1, at a minimum, the goroutine name as provided now should be changed to include the source file and line number. Often a single function spawns multiple goroutines, and the idiomatic way is anonymous functions, so you end up with func1, func2, etc. The source/line would make tracking these down much easier. This would also be the case if the program did not provide a name as in 1.
3. The trace output should have an option for per-goroutine stack sampling, which would place a 'stack trace event' in the trace file on the sampling interval. It may be possible to correlate a profile capture with a trace capture based on timestamps, but it simplifies a lot to have everything in a single capture file.
4. There should be trace reporting by goroutine instance. Currently with the trace, all of the routines of a single type are lumped together. This does not allow easy detection of processing bottlenecks. For example, if there are 4 routines consuming 15% in total, is it roughly 4% each, or 3 consuming 1% each and 1 consuming 12%?
5. Most importantly, there is no public API to read the trace file to produce custom analysis reports, especially when there exists an API to add 'user events' to the trace. The internal/trace package should be made public. Currently, the information might be obtained in JSON from the 'go tool trace' web server, but this is undocumented, and probably too inefficient for certain analysis needs. The current solution involves copying and pasting the package.
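For reference, the closest thing available today to proposal 1 is profiler labels (`runtime/pprof`, Go 1.9+) - a minimal sketch follows. This is only a partial workaround: the labels appear in CPU profiles, not as goroutine names in `go tool trace`, and the `name` key below is just a convention I made up.
<pre>
package main

import (
	"context"
	"runtime/pprof"
)

func startMarketData(ctx context.Context) { /* ... */ }

func main() {
	// Goroutines started while the labels are set inherit them, so CPU
	// profile samples from startMarketData carry name=marketdata.
	pprof.Do(context.Background(), pprof.Labels("name", "marketdata"),
		func(ctx context.Context) {
			go startMarketData(ctx)
		})
}
</pre>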
| Proposal,Proposal-Hold | low | Critical |
387,539,797 | flutter | flutter_test adds a delayed item to stream after it's closed | ## Steps to Reproduce
1. Put this code somewhere a test will run it.
```dart
final sc = StreamController<String>();
final obs = Observable.just("test").delay(Duration(seconds: 1)).takeUntil(sc.stream).listen((s) => print(s));
// final obs = Observable.just("test").takeUntil(sc.stream).listen((s) => print(s)); <--- This works
sc.add("whatever");
```
2. Make sure the test pumps enough time to trigger the delayed action.
3. Run the test, observe an exception thrown (see logs below).
4. Run on a device/emulator, observe that it works. (A possible workaround sketch follows these steps.)
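A workaround sketch - an untested assumption on my part, since the stray event comes from rxdart's delay transformer running under FakeAsync: cancel the subscription before pumping fake time past the delay, so the pending timer is torn down before it can fire.
```dart
final sc = StreamController<String>();
final sub = Observable.just("test")
    .delay(Duration(seconds: 1))
    .takeUntil(sc.stream)
    .listen(print);
sc.add("whatever");
// Cancel before advancing fake time past the 1-second delay.
await sub.cancel();
await tester.pump(Duration(seconds: 2));
```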
## Logs
```
══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════════════════════════════════════════════
The following StateError was thrown running a test:
Bad state: Cannot add event after closing
When the exception was thrown, this was the stack:
#12 DelayStreamTransformer._buildTransformer.<anonymous closure>.<anonymous closure>.<anonymous closure>.<anonymous closure> (package:rxdart/src/transformers/delay.dart:42:34)
#22 AutomatedTestWidgetsFlutterBinding.pump.<anonymous closure> (package:flutter_test/src/binding.dart:705:27)
#25 TestAsyncUtils.guard (package:flutter_test/src/test_async_utils.dart:69:41)
#26 AutomatedTestWidgetsFlutterBinding.pump (package:flutter_test/src/binding.dart:701:27)
#27 WidgetTester.pump.<anonymous closure> (package:flutter_test/src/widget_tester.dart:247:53)
#30 TestAsyncUtils.guard (package:flutter_test/src/test_async_utils.dart:69:41)
#31 WidgetTester.pump (package:flutter_test/src/widget_tester.dart:247:27)
#32 main.<anonymous closure> (file:///C:/Users/andrey/Documents/Code/hello/hello_world/test/widget_test.dart:30:18)
<asynchronous suspension>
#33 testWidgets.<anonymous closure>.<anonymous closure> (package:flutter_test/src/widget_tester.dart:72:23)
#34 TestWidgetsFlutterBinding._runTestBody (package:flutter_test/src/binding.dart:566:19)
<asynchronous suspension>
#37 TestWidgetsFlutterBinding._runTest (package:flutter_test/src/binding.dart:550:14)
#38 AutomatedTestWidgetsFlutterBinding.runTest.<anonymous closure> (package:flutter_test/src/binding.dart:893:24)
#44 AutomatedTestWidgetsFlutterBinding.runTest (package:flutter_test/src/binding.dart:890:15)
#45 testWidgets.<anonymous closure> (package:flutter_test/src/widget_tester.dart:71:22)
#46 Declarer.test.<anonymous closure>.<anonymous closure>.<anonymous closure> (package:test_api/src/backend/declarer.dart:168:27)
<asynchronous suspension>
#47 Invoker.waitForOutstandingCallbacks.<anonymous closure> (package:test_api/src/backend/invoker.dart:249:15)
<asynchronous suspension>
#52 Invoker.waitForOutstandingCallbacks (package:test_api/src/backend/invoker.dart:246:5)
#53 Declarer.test.<anonymous closure>.<anonymous closure> (package:test_api/src/backend/declarer.dart:166:33)
#58 Declarer.test.<anonymous closure> (package:test_api/src/backend/declarer.dart:165:13)
<asynchronous suspension>
#59 Invoker._onRun.<anonymous closure>.<anonymous closure>.<anonymous closure>.<anonymous closure> (package:test_api/src/backend/invoker.dart:399:25)
<asynchronous suspension>
#73 _Timer._runTimers (dart:isolate/runtime/libtimer_impl.dart:382:19)
#74 _Timer._handleMessage (dart:isolate/runtime/libtimer_impl.dart:416:5)
#75 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:171:12)
(elided 53 frames from class _FakeAsync, package dart:async, and package stack_trace)
The test description was:
Counter increments smoke test
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Test failed. See exception logs above.
The test description was: Counter increments smoke test
β Counter increments smoke test
Exited (1)
```
flutter analyze
```
Analyzing hello_world...
No issues found! (ran in 2.0s)
```
flutter doctor -v
```
[√] Flutter (Channel beta, v0.11.3, on Microsoft Windows [Version 10.0.17134.407], locale en-AU)
• Flutter version 0.11.3 at C:\flutter
• Framework revision 72bf075e8d (4 weeks ago), 2018-11-09 20:36:17 -0800
• Engine revision 5646e86a6f
• Dart version 2.1.0 (build 2.1.0-dev.9.3 9c07fb64c4)
[√] Android toolchain - develop for Android devices (Android SDK 28.0.3)
• Android SDK at C:\Users\andrey\AppData\Local\Android\sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)
• All Android licenses accepted.
[√] Android Studio (version 3.2)
• Android Studio at C:\Program Files\Android\Android Studio
X Flutter plugin not installed; this adds Flutter specific functionality.
X Dart plugin not installed; this adds Dart specific functionality.
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06)
[√] VS Code (version 1.29.1)
• VS Code at C:\Users\andrey\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 2.21.0
[√] Connected device (1 available)
• SM N910C • 4100e4f8f29ba2e5 • android-arm • Android 6.0.1 (API 23)
• No issues found!
```
| a: tests,c: new feature,framework,c: proposal,P2,team-framework,triaged-framework | low | Critical |
387,544,329 | go | expvar: make it possible to remove memstat and cmdline defaults | The `expvar` package contains an `init` function that registers the `memstats` and `cmdline` vars on the `/debug/vars` HTTP handler. The package does not provide a means to remove these defaults, forcing the consumer to deal with them. It would be nice if the user had a means to remove these defaults without resorting to `unsafe` or forking the package.
A benchmark I've added reveals a mild but unnecessary performance impact from the defaults (when the total number of expvar.Vars is expected to be small), but the real problem, I think, is aesthetic: my other vars have nothing to do with memory usage or the command line, so the memory stats are intrusive noise in the output.
We also can't use `memstats` and `cmdline` as named for our own purposes. This does not impact me personally.
```
goos: windows
goarch: amd64
pkg: expvar
BenchmarkExpvarHandler/cmdline.memstats-8 200000 7219 ns/op
BenchmarkExpvarHandler/none-8 5000000 278 ns/op
BenchmarkExpvarHandler/user.10.vars-8 300000 5130 ns/op
PASS
ok expvar 4.822s
```
```go
func BenchmarkExpvarHandler(b *testing.B) {
for _, tc := range []struct {
Name string
Init func()
}{
{"cmdline.memstats", func() {}},
{"none", func() { RemoveAll() }},
{"user.10.vars", func() { for i := 0; i < 10; i++ { NewInt(fmt.Sprintf("i%d", i)) } }},
} {
tc.Init()
b.Run(tc.Name, func(b *testing.B) {
for n := 0; n < b.N; n++ {
expvarHandler(writer{}, nil)
}
})
}
}
type writer http.Header
func (w writer) WriteHeader(_ int) {}
func (w writer) Write(p []byte) (int, error) { return len(p), nil }
func (w writer) Header() http.Header { return http.Header(w) }
```
Interestingly, the package tests contain an unexported `RemoveAll` function. The recently-closed issue #27555 discusses it in detail, but does not mention the potential use for removing the default vars.
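In the meantime, a user-level workaround sketch (my own, not part of the package): register a separate handler that walks the vars with `expvar.Do` and skips the two defaults. The `/debug/myvars` path and the `requests` var are just placeholders.
```go
package main

import (
	"expvar"
	"fmt"
	"net/http"
)

func main() {
	expvar.NewInt("requests") // example user var
	http.HandleFunc("/debug/myvars", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		fmt.Fprint(w, "{")
		first := true
		expvar.Do(func(kv expvar.KeyValue) {
			if kv.Key == "memstats" || kv.Key == "cmdline" {
				return // skip the defaults registered by init
			}
			if !first {
				fmt.Fprint(w, ",")
			}
			first = false
			fmt.Fprintf(w, "%q: %s", kv.Key, kv.Value)
		})
		fmt.Fprint(w, "}")
	})
	http.ListenAndServe(":8080", nil)
}
```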
| NeedsInvestigation,FeatureRequest | medium | Critical |
387,579,260 | pytorch | [caffe2] [NNPACK] how to set thread pool for NNPACK? | I use NNPACK to accelerate my CPU model in Caffe2 by setting the operator engine to NNPACK. How can I configure the thread pool it uses? It defaults to all of my machine's CPU cores, but I don't want it to consume the machine's full capacity. What should I do? | caffe2 | low | Minor |
387,590,580 | pytorch | Tensor.copy_() seems to work improperly with numpy/list indices | ## 🐛 Bug
When copying a tensor into part of another tensor with `copy_()`, I found that the copy has no effect when the destination tensor is indexed by a numpy array.
I'm using PyTorch 0.4.1, installed via pip, on macOS High Sierra.
Below is an example, which can be reproduced easily in a Python environment.
```python
import numpy as np
import torch

a = torch.zeros((2, 2, 1))
x = torch.ones((1, 1))
indices = np.array([0, 1])
a[0, indices].copy_(x)
```
After this, printing `a` still gives a tensor full of zeros.
I also found that the same problem occurred when `indices` is a list. Below is another reproduction of the problem.
```python
a = torch.zeros((2, 2, 1))
x = torch.ones((1, 1))
indices = [0, 1]
a[0, indices].copy_(x)
```
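For context - my understanding, worth confirming: advanced indexing with an array or list returns a copy, so `copy_()` here writes into a temporary and the original tensor is untouched, while plain index assignment takes the in-place path:
```python
import numpy as np
import torch

a = torch.zeros((2, 2, 1))
x = torch.ones((1, 1))
indices = np.array([0, 1])

# a[0, indices] materializes a new tensor (advanced indexing copies),
# so the in-place copy_ modifies the temporary, not a:
a[0, indices].copy_(x)
print(a.sum())  # tensor(0.)

# Index assignment dispatches to an in-place put and does update a:
a[0, indices] = x
print(a.sum())  # tensor(2.)
```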
cc @jlin27 @mruberry | module: docs,triaged | low | Critical |
387,595,493 | flutter | Should the modal barrier of an Alert also change the color of the status bar? | ## Steps to Reproduce
1. Create a custom Status Bar color with
```dart
SystemChrome.setSystemUIOverlayStyle(
SystemUiOverlayStyle.dark.copyWith(
statusBarColor: Colors.red,
));
```
2. Then create an AlertDialog that shows up on an event
```dart
new AlertDialog(
title: new Text("Something went wrong!"),
content:
new Text("Be sure you are connected to the Device"),
actions: [
new FlatButton(
child: const Text("Ok"),
onPressed: () {
Navigator.pop(context);
_alertShown = 0;
}),
])
```
When the AlertDialog is shown, the custom status bar color stays bright and its opacity is not affected by the modal barrier:

| framework,f: material design,f: routes,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Critical |
387,603,420 | go | cmd/link: don't resolve WebAssembly imports at link-time | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 linux/amd64
</pre>
But also tested with `devel`:
<pre>
$ go version
go version devel +be09bdf589 Tue Dec 4 23:01:00 2018 +0000 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/user/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/user/go"
GOPROXY=""
GORACE=""
GOROOT="/home/user/Google/go"
GOTMPDIR=""
GOTOOLDIR="/home/user/Google/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="clang-8"
CXX="clang++-8"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build949670883=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
<pre>
$ cat wasm_log.go
package main
import (
_ "unsafe" // for go:linkname
)
//go:linkname wasm_log wasm_log
func wasm_log(s string)
func main() {
wasm_log("Hello, world!")
}
</pre>
Note: the `wasm_log` function is provided by the host environment and is expected to be resolved at runtime, when the WASM module is instantiated.
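As a point of comparison (not a fix for this issue): on GOOS=js today, host functions are normally reached through syscall/js instead of a direct wasm import. The sketch below, which builds only with `GOOS=js GOARCH=wasm`, assumes the host registers `wasm_log` on the JS global object:
<pre>
package main

import "syscall/js"

func main() {
	// Goes through the js/wasm shim rather than resolving wasm_log
	// as a plain WebAssembly import at instantiation time.
	js.Global().Call("wasm_log", "Hello, world!")
}
</pre>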
### What did you expect to see?
Compiled WASM module that communicates with the host environment.
For comparison, it builds successfully with TinyGo, and the resulting WASM module works as expected:
<pre>
$ tinygo build -target=wasm -no-debug -o wasm_log.wasm wasm_log.go
</pre>
<pre>
$ wavm-disas wasm_log.wasm wasm_log.wat
$ cat wasm_log.wat
(module
(type $0 (func (result i32)))
(type $1 (func (param i32 i32)))
(type $2 (func))
(import "env" "io_get_stdout" (func $io_get_stdout (result i32)))
(import "env" "wasm_log" (func $wasm_log (param i32 i32)))
(export "memory" (memory $4))
(export "__heap_base" (global $6))
(export "__data_end" (global $7))
(export "_start" (func $_start))
(export "cwa_main" (func $cwa_main))
(memory $4 2)
(table $3 1 1 anyfunc)
(global $5 (mut i32) (i32.const 66576))
(global $6 i32 (i32.const 66576))
(global $7 i32 (i32.const 1037))
(data $4 (i32.const 1024)
"Hello, world!")
(func $__wasm_call_ctors (type $2)
)
(func $_start (type $2)
call $io_get_stdout
drop
)
(func $cwa_main (type $2)
call $io_get_stdout
drop
i32.const 1024
i32.const 13
call $wasm_log
))
</pre>
### What did you see instead?
With `go1.11.2`:
<pre>
$ GOOS=js GOARCH=wasm go build -o wasm_log.wasm wasm_log.go
# command-line-arguments
./wasm_log.go:8:6: missing function body
</pre>
With `devel` (thanks to https://golang.org/cl/151318):
<pre>
$ GOOS=js GOARCH=wasm go build -o wasm_log.wasm wasm_log.go
# command-line-arguments
main.main: relocation target wasm_log not defined
</pre> | NeedsDecision,arch-wasm,compiler/runtime | low | Critical |
387,607,310 | pytorch | Cannot build Caffe2 with TensorRT | ## 🐛 Bug
I am trying to build Caffe2 (master branch) with TensorRT using the following commands:
```
cmake .. -DUSE_TENSORRT=ON -DCAFFE2_LINK_LOCAL_PROTOBUF=OFF
sudo make -j8
```
However, I ran into the following errors:
```
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_op_trt.cc: In member function βvirtual bool caffe2::TensorRTOp::RunOnDevice()β:
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_op_trt.cc:202:47: error: invalid initialization of reference of type βconst std::vector<long int>&β from expression of type βconst c10::ArrayRef<long int>β
auto chw = CheckDims(dims, tensor_dims);
^
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_op_trt.cc:16:9: note: in passing argument 2 of βint64_t caffe2::{anonymous}::CheckDims(const nvinfer1::Dims&, const std::vector<long int>&)β
int64_t CheckDims(
^
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc: In function 'void caffe2::{anonymous}::BlobToTensorProto(const string&, caffe2::Workspace*, caffe2::CUDAContext*, onnx_c2::TensorProto*)':
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc:102:59: error: no matching function for call to 'caffe2::Tensor::Tensor(const caffe2::Tensor&, caffe2::CUDAContext*&)'
const auto cpu_tensor = TensorCPU(cuda_tensor, context);
^
In file included from /home/alex/workspace/pytorch_master/caffe2/core/blob.h:14:0,
from /home/alex/workspace/pytorch_master/caffe2/core/operator.h:14,
from /home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.h:9,
from /home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc:1:
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:81:3: note: candidate: caffe2::Tensor::Tensor(const caffe2::Tensor&, caffe2::DeviceType)
Tensor(const Tensor& src, DeviceType type)
^
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:81:3: note: no known conversion for argument 2 from 'caffe2::CUDAContext*' to 'caffe2::DeviceType {aka c10::DeviceType}'
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:72:12: note: candidate: caffe2::Tensor::Tensor(const std::vector<int>&, caffe2::DeviceType)
explicit Tensor(const vector<int>& dims, DeviceType type)
^
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:72:12: note: no known conversion for argument 1 from 'const caffe2::Tensor' to 'const std::vector<int>&'
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:68:12: note: candidate: caffe2::Tensor::Tensor(c10::IntList, c10::Device)
explicit Tensor(at::IntList dims, at::Device device): Tensor(device) {
^
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:68:12: note: no known conversion for argument 1 from 'const caffe2::Tensor' to 'c10::IntList {aka c10::ArrayRef<long int>}'
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:59:12: note: candidate: caffe2::Tensor::Tensor(c10::IntList, caffe2::DeviceType)
explicit Tensor(at::IntList dims, DeviceType type) : Tensor(type) {
^
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:59:12: note: no known conversion for argument 1 from 'const caffe2::Tensor' to 'c10::IntList {aka c10::ArrayRef<long int>}'
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:45:12: note: candidate: caffe2::Tensor::Tensor(c10::Device)
explicit Tensor(at::Device device)
^
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:45:12: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:29:3: note: candidate: caffe2::Tensor::Tensor()
Tensor() : impl_() {}
^
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:29:3: note: candidate expects 0 arguments, 2 provided
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:23:18: note: candidate: caffe2::Tensor::Tensor(const caffe2::Tensor&)
class CAFFE2_API Tensor final {
^
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:23:18: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:23:18: note: candidate: caffe2::Tensor::Tensor(caffe2::Tensor&&)
/home/alex/workspace/pytorch_master/caffe2/core/tensor.h:23:18: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc: In member function 'void caffe2::TensorRTTransformer::Transform(caffe2::Workspace*, caffe2::NetDef*, const std::unordered_map<std::__cxx11::basic_string<char>, caffe2::TensorShape>&)':
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc:476:44: error: no matching function for call to 'caffe2::onnx::OnnxExporter::OnnxExporter(std::nullptr_t, bool)'
onnx::OnnxExporter exporter(nullptr, true);
^
In file included from /home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.h:11:0,
from /home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc:1:
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:42:3: note: candidate: caffe2::onnx::OnnxExporter::OnnxExporter(caffe2::onnx::DummyName*)
OnnxExporter(DummyName* dummy = nullptr) {
^
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:42:3: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate: caffe2::onnx::OnnxExporter::OnnxExporter(const caffe2::onnx::OnnxExporter&)
class CAFFE2_API OnnxExporter {
^
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate: caffe2::onnx::OnnxExporter::OnnxExporter(caffe2::onnx::OnnxExporter&&)
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc:507:45: error: no matching function for call to 'caffe2::onnx::OnnxExporter::OnnxExporter(std::nullptr_t, bool)'
onnx::OnnxExporter exporter2(nullptr, true);
^
In file included from /home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.h:11:0,
from /home/alex/workspace/pytorch_master/caffe2/contrib/tensorrt/tensorrt_tranformer.cc:1:
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:42:3: note: candidate: caffe2::onnx::OnnxExporter::OnnxExporter(caffe2::onnx::DummyName*)
OnnxExporter(DummyName* dummy = nullptr) {
^
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:42:3: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate: caffe2::onnx::OnnxExporter::OnnxExporter(const caffe2::onnx::OnnxExporter&)
class CAFFE2_API OnnxExporter {
^
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate expects 1 argument, 2 provided
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate: caffe2::onnx::OnnxExporter::OnnxExporter(caffe2::onnx::OnnxExporter&&)
/home/alex/workspace/pytorch_master/caffe2/onnx/onnx_exporter.h:36:18: note: candidate expects 1 argument, 2 provided
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:200511: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/contrib/tensorrt/tensorrt_op_trt.cc.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/contrib/tensorrt/tensorrt_op_trt.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:200559: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/contrib/tensorrt/tensorrt_tranformer.cc.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/contrib/tensorrt/tensorrt_tranformer.cc.o] Error 1
CMakeFiles/Makefile2:2058: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
```
It seems that the code in caffe2/contrib/tensorrt has not been updated for the current Tensor and OnnxExporter APIs.
Is there a way to solve this issue?
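Judging from the candidate lists in the errors above, a local patch along these lines should get past the three failures. I am posting it only as an untested sketch, not a proper fix; in particular, I do not know whether dropping the second `OnnxExporter` argument preserves the intended behavior:
```cpp
// tensorrt_op_trt.cc:202: CheckDims() wants const std::vector<int64_t>&, but
// tensor_dims is now a c10::ArrayRef<int64_t>; materialize a vector with .vec().
auto chw = CheckDims(dims, tensor_dims.vec());

// tensorrt_tranformer.cc:102: the (Tensor, CUDAContext*) constructor is gone;
// the candidate list shows Tensor(const Tensor& src, DeviceType type) instead.
const auto cpu_tensor = TensorCPU(cuda_tensor, CPU);

// tensorrt_tranformer.cc:476 and :507: OnnxExporter now takes only an optional
// DummyName*, so drop the bool.
onnx::OnnxExporter exporter(nullptr);
onnx::OnnxExporter exporter2(nullptr);
```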
## Environment
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.12.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: GeForce GTX 1050 Ti
Nvidia driver version: 384.130
TensorRT: 5.0 GA
| caffe2 | low | Critical |
387,607,900 | rust | Nightly not honoring -z stack-size | Hi there! I'm noticing that on nightly using `-z stack-size` doesn't seem to affect the stack size, or at least not in the way I've been using it. Below are some example commands from my project.
Using stable:
```sh
$ cargo +stable --version
cargo 1.30.0 (a1a4ad372 2018-11-02)
$ RUSTFLAGS="-C link-args=-zstack-size=48000" cargo +stable build --release --target=wasm32-unknown-unknown -vv -p hello_bare
Compiling hello_bare v0.1.0 (C:\Users\sagan\Documents\sagan-software\rust-eos\examples\hello_bare)
Running `rustc --crate-name hello_bare 'examples\hello_bare\src\lib.rs' --color always --crate-type cdylib --emit=dep-info,link -C opt-level=s -C panic=abort -C lto -C metadata=673de3ef755438ae --out-dir 'C:\Users\sagan\Documents\sagan-software\rust-eos\target\wasm32-unknown-unknown\release\deps' --target wasm32-unknown-unknown -L 'dependency=C:\Users\sagan\Documents\sagan-software\rust-eos\target\wasm32-unknown-unknown\release\deps' -L 'dependency=C:\Users\sagan\Documents\sagan-software\rust-eos\target\release\deps' -C link-args=-zstack-size=48000`
Finished release [optimized] target(s) in 0.67s
$ wasm2wat target/wasm32-unknown-unknown/release/hello_bare.wasm -o target/wasm32-unknown-unknown/release/hello_bare.wat --generate-names
$ tail target/wasm32-unknown-unknown/release/hello_bare.wat
(global $g0 (mut i32) (i32.const 48000))
(global $__heap_base i32 (i32.const 48004))
(global $__data_end i32 (i32.const 48004))
(export "memory" (memory 0))
(export "__indirect_function_table" (table 0))
(export "__heap_base" (global 1))
(export "__data_end" (global 2))
(export "apply" (func $apply))
(export "rust_eh_personality" (func $rust_eh_personality))
(data (i32.const 48000) "Hi, "))
```
Using nightly:
```sh
$ cargo +nightly --version
cargo 1.32.0-nightly (b3d0b2e54 2018-11-15)
$ RUSTFLAGS="-C link-args=-zstack-size=48000" cargo +nightly build --release --target=wasm32-unknown-unknown -vv -p hello_bare
Compiling hello_bare v0.1.0 (C:\Users\sagan\Documents\sagan-software\rust-eos\examples\hello_bare)
Running `rustc --crate-name hello_bare 'examples\hello_bare\src\lib.rs' --color always --crate-type cdylib --emit=dep-info,link -C opt-level=s -C panic=abort -C lto -C metadata=673de3ef755438ae --out-dir 'C:\Users\sagan\Documents\sagan-software\rust-eos\target\wasm32-unknown-unknown\release\deps' --target wasm32-unknown-unknown -L 'dependency=C:\Users\sagan\Documents\sagan-software\rust-eos\target\wasm32-unknown-unknown\release\deps' -L 'dependency=C:\Users\sagan\Documents\sagan-software\rust-eos\target\release\deps' -C link-args=-zstack-size=48000`
Finished release [optimized] target(s) in 0.59s
$ wasm2wat target/wasm32-unknown-unknown/release/hello_bare.wasm -o target/wasm32-unknown-unknown/release/hello_bare.wat --generate-names
$ tail target/wasm32-unknown-unknown/release/hello_bare.wat
(memory $memory 17)
(global $g0 (mut i32) (i32.const 1048576))
(global $__heap_base i32 (i32.const 1048580))
(global $__data_end i32 (i32.const 1048580))
(export "memory" (memory 0))
(export "__indirect_function_table" (table 0))
(export "__heap_base" (global 1))
(export "__data_end" (global 2))
(export "apply" (func $apply))
(data (i32.const 1048576) "Hi, "))
```
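For completeness, the same flag can also be passed as a single linker argument; I have not verified whether nightly treats this form any differently:
```sh
$ RUSTFLAGS="-C link-arg=-zstack-size=48000" cargo +nightly build --release --target=wasm32-unknown-unknown -p hello_bare
```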
For my application it's crucial that data segments land within the first 64 KiB, which is why I've reduced the stack size to 48,000 bytes. Is there a workaround, or am I missing something? Thanks. | A-LLVM,T-compiler | low | Minor |
387,608,644 | pytorch | Modern interface for Storage | Compared to `Tensor`, `Storage` is missing a number of the newer APIs, including a `.device` object and new-style constructors. With all the hacks in its code (e.g., https://github.com/pytorch/pytorch/blob/ecc17fe3dd03962f36a989659336e42de86a38ca/torch/cuda/__init__.py#L503-L505), it is really easy to introduce bugs (e.g., https://github.com/pytorch/pytorch/issues/14673).
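Concretely, the asymmetry looks like this (a minimal sketch; the missing pieces are exactly the ones described above):
```python
import torch

t = torch.empty(3)
t.device                      # Tensor exposes a torch.device object

s = t.storage()
# Storage has no equivalent: no `.device` object and no new-style
# constructors (e.g. a device= keyword), only the legacy typed classes
# such as torch.FloatStorage / torch.cuda.FloatStorage.
```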
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang | module: internals,triaged | low | Critical |
387,608,871 | vscode | Git - Gracefully handle git hooks | Most projects will have some form of precommit hooks for linting or tests through libraries such as [lint-staged](https://github.com/okonet/lint-staged) and [husky](https://github.com/typicode/husky).
When committing via the source control panel in VSCode, there's no visual indicator that a git hook is running, so a slow hook just looks like a hang. I think this is an opportunity to add some UI feedback, such as a progress notification while the hook runs. | help wanted,feature-request,git | medium | Critical |
387,693,711 | pytorch | which operator of caffe2 have the same function as torch.nn.Parameter | which operator of caffe2 have the same function as torch.nn.ParameterοΌ I want create a tensor as module's parameter and that variables can be backpropagated. How can I implement itοΌ Thanks!
| caffe2 | low | Minor |
387,723,827 | rust | Unix domain sockets on Windows | Seeing https://github.com/Azure/mio-uds-windows .. Is there any reason why UDS are not implemented in the standard library for Windows?
| O-windows,T-libs-api,C-feature-request,A-io | medium | Critical |
387,772,113 | create-react-app | Support/document immutable deploys - aka immutablewebapps | Support/document deployments as described by https://immutablewebapps.org/
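The core idea there is that `index.html` becomes the only mutable deploy artifact, while every asset it references lives at a permanently cacheable, versioned URL. The hostname and version below are made up:
```html
<!-- index.html: the only thing that changes between deploys -->
<script src="https://assets.example.com/app/1.0.2/static/js/main.js"></script>
<link rel="stylesheet" href="https://assets.example.com/app/1.0.2/static/css/main.css">
```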
I know CRA already does most of what's described there, so maybe we should add some documentation about doing immutable deploys? | issue: proposal | low | Minor |
387,827,586 | scrcpy | Having the ability to put commands like "ctrl+g" on the command line | feature request | low | Minor |
|
387,907,745 | go | x/build: add CentOS builder | In particular for #26746.
| Builders,NeedsInvestigation,new-builder | low | Minor |
387,944,616 | flutter | Use a value of -1 in the maxLength arg in TextField to hide the maximum length | ## Steps to Reproduce
This is based on a discussion between @Hixie, @HansMuller, and myself in a different PR thread about my implementation of a "magic" maxLength value that hides the actual maximum while still showing the current entered length.
After some discussion, it was decided that a value of -1, rather than a large int, should be used to indicate this behavior.
This issue is a placeholder for that change.
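For clarity, the intended usage once this lands would be (hypothetical until the change is implemented):
```dart
// Proposed: maxLength of -1 keeps the running character count visible
// but hides the "/maximum" suffix and does not enforce a limit.
TextField(
  decoration: InputDecoration(labelText: 'Notes'),
  maxLength: -1,
)
```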
## Logs
N/A
| a: text input,framework,f: material design,c: proposal,P2,team-design,triaged-design | low | Critical |
387,956,543 | flutter | Positioned widgets should have an option to apply to a Stack widget's size. | Sometimes it is desirable to have a widget within a Stack be positioned at some arbitrary offset like the following:
```dart
return Column(
  children: <Widget>[
    Stack(
      children: <Widget>[
        Container(height: 20, width: 20, color: Colors.green),
        Positioned(
          top: 10,
          child: Text('Does not render because the stack is only 20x20'),
        ),
      ],
    ),
    Container(height: 40, width: 40, color: Colors.red),
  ],
);
```
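For comparison, the padding workaround mentioned below keeps the Stack sized to include the offset child (a sketch):
```dart
Stack(
  children: <Widget>[
    Container(height: 20, width: 20, color: Colors.green),
    Container(
      padding: const EdgeInsets.only(top: 10),
      child: Text('Renders fully: padding counts toward the Stack size'),
    ),
  ],
)
```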
However, Positioned widgets are not considered when a Stack determines its size. As a result, the text in the first example is cropped, and the red square draws too high. This can be worked around by using a Container with its padding set instead of a Positioned widget to wrap the Text, as sketched above. Is this intentional? Perhaps it would be better to give Positioned a bool that marks it for consideration in calculating the Stack's size. | c: new feature,framework,P2,team-framework,triaged-framework | low | Major |