| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
431,039,445 | rust | Build fails when specifying a local source for libc | $ git clone https://github.com/rust-lang/rust.git
$ git clone https://github.com/rust-lang/libc.git
$ cd rust
$ # edit Cargo.toml in rust to contain this line in field [patch.crates-io]:
libc = { path = "…/libc" }
$ ./x.py build
Then these errors appear:
error: the feature cfg_target_vendor has been stable since 1.33.0 and no longer requires an attribute to enable --> /home/bpang/rust_ref/libc/src/lib.rs:22:13 | 22 | feature(cfg_target_vendor, link_cfg, no_core) | ^^^^^^^^^^^^^^^^^ | = note: -D stable-features implied by -D warnings
error: unused attribute --> /home/bpang/rust_ref/libc/src/lib.rs:27:1 | 27 | #![no_std] | ^^^^^^^^^^ | = note: -D unused-attributes implied by -D warnings
error: crate-level attribute should be in the root module --> /home/bpang/rust_ref/libc/src/lib.rs:27:1 | 27 | #![no_std] | ^^^^^^^^^^
error: aborting due to 3 previous errors | T-bootstrap | low | Critical |
431,046,270 | flutter | [image_picker] New feature to allow the original image to be kept when scaling. | After https://github.com/flutter/plugins/pull/1445 the original image is deleted after scaling on Android. This is a breaking change for people who need the original image even after scaling.
We need to provide a way to choose to keep the original image, preferably on both platforms.
| c: new feature,p: image_picker,package,team-ecosystem,P3,triaged-ecosystem | low | Minor |
431,121,827 | go | x/crypto/ssh: implement zlib@openssh.com compression | This is an internal Google request for implementation of zlib compression in the x/crypto/ssh library, as specified at
https://tools.ietf.org/html/draft-miller-secsh-compression-delayed-00
| help wanted,NeedsFix,FeatureRequest | low | Major |
431,145,041 | flutter | [webview_flutter] Accessibility element not focused when trying to select the element directly. | When using a11y on iOS webview, we are able to focus on accessibility elements by going through all the elements one by one.
However, if we try to directly select an element inside the webview by dragging a finger over it, the element does not get focused.
This issue can be reproduced directly using the example app in webview_flutter plugin. | platform-ios,a: accessibility,a: platform-views,p: webview,package,e: OS-version specific,P3,found in release: 2.2,found in release: 2.5,team-ios,triaged-ios | low | Major |
431,181,208 | go | cmd/go: 'mod verify' should not modify the go.mod file | <pre>
$ go version
go version go1.12 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/jsha/.cache/go-build"
GOEXE=""
GOFLAGS="-mod=vendor"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/jsha/gopkg"
GOPROXY=""
GORACE=""
GOROOT="/home/jsha/go1.12"
GOTMPDIR=""
GOTOOLDIR="/home/jsha/go1.12/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/jsha/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build911932552=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
1. Checked out https://github.com/letsencrypt/boulder/
2. Ran `go mod init` to create a go.mod file.
3. Committed the go.mod file (`git commit go.mod -m 'Use modules'`)
4. Ran `go mod verify`
5. Ran `git diff`
### What did you expect to see?
No output.
### What did you see instead?
Several lines changed.
Specifically, what happened here is that Boulder depends on a number of packages. Some of those packages have opted into modules, and have their own dependencies listed in go.mod.
For instance, as of right now, Boulder has `golang.org/x/crypto` [vendored at 0709b304](https://github.com/letsencrypt/boulder/blob/23cb68be4ffa22e78a3258597b009b1dc3b6ac31/Godeps/Godeps.json#L357-L358) using `godep`. `go mod init` correctly picks up that version and puts it in go.mod. Boulder also depends on [`challtestsrv` v1.0.2](https://github.com/letsencrypt/challtestsrv/tree/v1.0.2). `challtestsrv` v1.0.2 [has a go.mod](https://github.com/letsencrypt/challtestsrv/blob/v1.0.2/go.mod) that depends on `golang.org/x/crypto` 505ab145 (which is later than 0709b304). Running `go mod verify` changes Boulder's go.mod to depend on 505ab145.
It seems like `go mod verify` is performing minimal version selection and updating go.mod. Based on [the documentation](https://godoc.org/cmd/go#hdr-Verify_dependencies_have_expected_content), I would expect `go mod verify` not to change anything; the documentation says:
> Verify checks that the dependencies of the current module, which are stored in a local downloaded source cache, have not been modified since being downloaded. If all the modules are unmodified, verify prints "all modules verified." Otherwise it reports which modules have been changed and causes 'go mod' to exit with a non-zero status. | NeedsFix,modules | low | Critical |
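The observed go.mod update is consistent with minimal version selection: the build list carries, for each module, the maximum of all required versions, so challtestsrv's newer x/crypto requirement wins over Boulder's own pin. A toy sketch of that rule (version strings are hypothetical stand-ins; real MVS compares semantic versions, not plain strings):

```python
def mvs(requirements):
    """Toy minimal version selection: per module, keep the maximum required version."""
    build_list = {}
    for module, version in requirements:
        if module not in build_list or version > build_list[module]:
            build_list[module] = version
    return build_list

# hypothetical versions standing in for the 0709b304 and 505ab145 pseudo-versions
reqs = [("golang.org/x/crypto", "v0.0.1"),  # Boulder's direct requirement
        ("golang.org/x/crypto", "v0.0.2")]  # required by challtestsrv v1.0.2
assert mvs(reqs) == {"golang.org/x/crypto": "v0.0.2"}
```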
431,205,758 | vscode | Disable hover but keep Ctrl+Hover | The hover popup can be very frustrating when selecting text, etc. A longer delay (e.g. 1500 ms) solves this, but when you actually need the hover information it is frustrating to wait.
Having the full hover information by pressing Ctrl and hovering the mouse over specific code is very nice. Please keep this functionality while letting us disable the plain hover (the one without pressing Ctrl). In other words, separate the "hover" and "Ctrl+hover" functionalities, with two separate options for turning them on/off.
| feature-request,under-discussion,editor-hover | medium | Major |
431,211,116 | rust | Clarify precedence with ^ in chained comparison | ```rust
fn foo(x: i32, y: i32, z: i32) -> bool {
x < y ^ y < z // error: chained comparison operators require parentheses
}
```
Actual output:
```
error: chained comparison operators require parentheses
--> src/lib.rs:2:7
|
2 | x < y ^ y < z
| ^^^^^^^^^^^
|
= help: use `::<...>` instead of `<...>` if you meant to specify type arguments
= help: or use `(...)` if you meant to specify fn arguments
error[E0308]: mismatched types
--> src/lib.rs:2:17
|
2 | x < y ^ y < z
| ^ expected bool, found i32
error: aborting due to 2 previous errors
```
It would be helpful to clarify that `^` has a higher precedence than `<` and suggest that the user parenthesize the comparisons as `(x < y) ^ (y < z)`, which would result in the correct expected type.
| C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-papercut | low | Critical |
431,227,064 | rust | Rustdoc formatting presents accessibility issues | In a [thread on Reddit](https://www.reddit.com/r/rust/comments/bazy8u/how_do_experienced_rust_developers_read_the/ekg4ipg/) about finding Rust's documentation hard to read, one user pointed out that it's especially bad for people with dyslexia.
Not having dyslexia myself, I did a little bit of research on UX design for dyslexia. One resource I found pretty good was [this article](https://uxplanet.org/designing-for-dyslexia-6d12e8c41cd7), and in particular the presentations at the end.
One of the points about design for dyslexia, which can even help for those who are not dyslexic, is reducing the amount of visual noise and clutter. A quick pass shows that there are a lot of noisy elements. Most of them exist for a reason, but I'm not sure all of those reasons fully justify the amount of noise. I have highlighted a lot of the things that catch my attention which are not content.

There are a lot of different colors used on the page. Blue is used as the standard link color; but links to other items in declarations also use a custom color scheme for different types of items, like primitives vs. structs vs. enums vs. type aliases. I don't know about other experienced Rustaceans, but as an experienced Rustacean myself, I certainly couldn't tell you what color corresponded to what kind of type it is; but rather, as an experienced Rustacean, for most of those types I either know what it is, due to familiarity with the type or familiarity with conventions, or I don't really need to know what it is and can click through to find out if I want to.
Then the example code is syntax highlighted using a different scheme than the declarations. Again, syntax highlighting has some value for readability, though I find some aspects of syntax highlighting to be more helpful for *writing* code than reading it. I would be interested in hearing about usability studies on readability of code based on various syntax highlighting schemes; my guess would be that some aspects of highlighting, like longer spans of text which may be commented or quoted, would be more helpful for reading than others that are already locally obvious enough in the bare syntax. Anyhow, syntax highlighting may or may not be unnecessary noise, but in conjunction with the other highlighting scheme for declarations, there's definitely some room for improvement and simplification.
There are a lot of UI elements all over the page. The search box, which contains a lot of text with instructions, the theme chooser, the settings, all of the little disclosure `[+]` and `[-]` controls. Many of these seem to have good intention, but they add up to a lot of extra clutter. The disclosure boxes in particular are trying to balance between having complete information on the page and having too much extra information due to implementation of common traits with repetitive docs, but in doing so, they add a lot of extra noise to the page themselves, and make it a lot more work under the default settings to dig in and see everything implemented for a type.
There are also a couple of distracting design elements, like all of the horizontal rules. Most of them are in places where there is already some other visual element distinguishing sections, such as headers and spacing; they could probably be dropped with no ill effect. Another is the use of a grey background for `<code>` and `<pre>` elements; while this can help distinguish code from the surrounding text (especially when it is something like a single character which may be hard to distinguish by font), and is fairly common in a lot of places that support embedded code like GitHub, StackOverflow, and various other languages documentation, it does add some more noise and draws attention, so it should be carefully considered whether it is worthwhile.
While a lot of these design elements are there for a reason, the overall effect is somewhat reminiscent of the "chartjunk" which cropped up in the Excel era:

As a contrast, I took a screenshot of [similar information from godoc.org](https://godoc.org/net#IP), a Go documentation hosting site similar to docs.rs. There are many fewer elements I could identify as noise or clutter:

This is not quite fair, as the Go docs have one page per module rather than per type, so we don't see all of the navigation elements, but even if you scroll up to the top, you will see that the navigation elements are much simpler, with less on screen at once.
Beside the visual clutter, there are a few other aspects of the default Rust docs styles which can cause problems for people with dyslexia:
* It uses serif fonts for the body text, while sans serif fonts are generally preferred for readability
* It has very long line lengths; I counted 127 characters on a line, which is above a recommended 100 characters max for online use, 65 for being more comfortable for the average adult, or 45 for being best for those with dyslexia
* The default light theme is high-contrast, black text on a white background, which can be difficult for some people to read. The dark theme is lower contrast, but not everyone prefers a dark theme.
For comparison purposes, a quick survey of similar docs (whether in the standard library or a third party library) from other languages:
* [Python](https://docs.python.org/3/library/ipaddress.html) has a similar overall layout with some search and nav at the top, a nav bar to the left, and the content. The search and header doesn't seem as visually dominating as the search bar in Rustdoc. It has one horizontal rule. It takes an interesting approach to highlighting of inline `<code>`; they only use a grey background box if it is not a link, while the ones which are links (references to various functions and types) are only highlighted in blue. This does a good job of keeping the clutter down while still making all of the text distinguishable. It has one source link at the top, which uses the English phrase "Source code" rather than the abbreviation "src". Has English language compatibility information "New in version 3.3" at end of introduction. It has no disclosure boxes. It uses sans-serif fonts. An arbitrary line I picked was 103 characters long. High contrast black on white.
* [Ruby](https://ruby-doc.org/stdlib-2.5.1/libdoc/ipaddr/rdoc/IPAddr.html) also has a similar layout. Like Python, the header bar and search don't dominate as much as they do in Rustdoc. There is more noise overall than Python; very bright syntax highlighting, heavyweight boxes, but it still feels less cluttered than Rustdoc, without the disclosure controls, horizontal rules, `[src]` links all over, and highlighted method signatures. Hovering over method declaration shows that you can click to reveal source; slightly less discoverable, but also avoids extra noise when just reading. Serif font. Arbitrary line I picked was 77 characters long. High contrast black on white.
* [Golang](https://golang.org/pkg/net/) official site. Just header and content. Index is inline rather than in sidebar, with quick link to jump to index. Disclosure triangles only for overview and index, plus hidden by default examples. No horizontal rules, but headings do have heavyweight backgrounds. No styling for inline code. Headings for types and methods themselves are source links; somewhat hard to discover, but avoids visual noise. Sans-serif font. Arbitrary line I picked was 119 characters. [Godoc site](https://godoc.org/net) is very similar, but slightly different styling; smaller font, but narrower column means the same line wraps at 114 characters. High contrast black on white.
* [Haskell](http://hackage.haskell.org/package/iproute-1.7.7/docs/Data-IP.html) has some similar issues to deal with as Rust, given the similarity between typeclasses and traits. It has a very lightweight header, a table of contents on the right side, a lot of boxes on the screen, a lot of disclosure controls and source links. It is one of the more visually noisy designs, although the disclosure controls and source links don't seem to be as distracting as the ones in Rustdoc as they don't look as much like some piece of syntax (`[+]`, `[-]`, and `[src]` all look like they could be some kind of syntax). Docs that describe something particular about a given implementation of a typeclass are displayed by default, but methods on typeclasses are not displayed without manually exposing them. No background boxes for inline code. Sans serif font. Very long line lengths; none of the docs were even long enough to wrap on my screen. High contrast black on white.
* [Ocaml](https://docs.mirage.io/ipaddr/Ipaddr/index.html) (this is not an official docs site or even a docs aggregator, so styling may be idiosyncratic). Extremely lightweight navigation. Just a couple of horizontal rules at the top. Extremely lightweight styling makes it somewhat hard to visually distinguish declarations from descriptive text. No disclosure controls, no source links. Sans serif fonts. Arbitrary line is 95 characters. Cream colored background provides slightly lower contrast.
* [C#](https://docs.microsoft.com/en-us/dotnet/api/system.net.ipaddress?view=netframework-4.7.2). Large, complex navigation header, and left nav bar, but all elements are visually less prominent than the header of the main content. Simple search box in nav bar. Somewhat visually heavy example code sections. No source links, nothing collapsible. Have to click through for full docs on actual methods. Sans serif fonts. Arbitrary line is 101 characters. High contrast black on white; has a slightly lower contrast dark mode.
* [The Rust Book](https://doc.rust-lang.org/book/ch02-00-guessing-game-tutorial.html), using [mdBook](https://github.com/rust-lang-nursery/mdBook). This is actually used for other official parts of the Rust documentation, so is worth comparing and possibly emulating for consistency sake, though as a book and not automatically generated documentation it's not perfectly comparable. It has a very lightweight header, with a few small low-contrast icons for control, and the title of the book. It uses a single, very light, low-contrast horizontal rule to distinguish the header which is attached to the viewport from the text. You can collapse the navigation sidebar. Search requires clicking an icon, a search box is not shown by default. The heading for a chapter is much larger and more prominent than anything in the navigation bar. It has somewhat high-contrast, prominent icons in code samples, including "Copy", "Run", and "Show Hidden Code" depending on the sample; these can be a bit distracting. It uses grey background boxes for `<code>` and `<pre>`. The only disclosure controls are for hidden lines in sample code, and the sidebar. Trailing whitespace is used well to delimit sections. An arbitrary line is 102 characters. High contrast black on white; has several themes, including several lower contrast darker theme and one lower contrast black on beige theme, with a light on dark sidebar. The lower contrast theme also makes the icons in code samples quite low contrast, which helps make them less distracting, though it actually increases the contrast of the header icons, and the colors in the sidebar are somewhat bold and distracting. Has a print mode that allows loading the whole book at once for easier scrolling or browser search.
A few overall trends I note after looking at a lot of these:
* Every other site with both a header and a sidebar has the header span the whole page, rather than the sidebar spanning the whole page and the header being confined to the content area delimited by the sidebar. This seems to help visually distinguish the header from the actual content of the page. Additionally, the other sites generally have a larger difference in size between the search and settings headers and the main content heading, making the main content heading more prominent; Rust's search bar is quite large, contains a lot of text, and the main heading is not much bigger.
* Only Haskell and Golang do significant amounts of hiding and disclosure. Golang hides examples by default, Haskell hides typeclass details by default.
* No other docs I looked at highlight different kinds of types differently.
* Most use sans-serif fonts and shorter line lengths than Rustdoc.
* None use horizontal rules as heavily as Rustdoc.
Some concrete recommendations based on the above. Most of these should be applied to all styles, but some may be appropriate for a special "readability" style. I'm sure not everyone will agree with all of these, but I think a lot of them are pretty universal usability improvements:
* [ ] #59840 Make the search and settings less visually prominent relative to the main content
* [ ] #59845 rustdoc styling improvements for readability
* [ ] #59851 Reduce visual prominence of controls, source links, and version numbers in rustdoc
* [x] #59853 Hide unstable items by default in stable rustdoc
* [x] #59860 Simplify and unify rustdoc sidebar styles
* [ ] Not everything present in the content is accessible from the side bar index (constants, impls on other types referencing this one). If everything were listed in the side bar index, that would make navigation easier and further reduce the need for disclosure controls in the body.
* [ ] Have fewer or no disclosure controls. Make decisions about what should be displayed vs. what needs to be clicked through to, don't make the user make that decision. This is probably the most complex part of these recommendations and may be appropriate to split into a separate ticket or RFC.
- [x] Any method that could be called on an object of the type or trait in question, or associated type, constant, etc, should probably be displayed with the summary line from the docs
- [ ] Those things which indicate an implementation that references the item in question can be left with just the basic declaration information, since you should be able to click through to the type or trait documentation to get the full details
- [x] For example, in the "Implementors" section of a trait, there's no reason that you would want to expand to see the declarations of each method that you've already read in the trait definition. All you need to see are the declarations, and if you want you can click through to those types to see any details about their particular implementation.
- [ ] In documentation for a type, impls for other types which just reference this type should be listed, but there's no reason to list all methods. For example, `impl PartialOrd<Ipv4Addr> for IpAddr` should be shown on the `Ipv4Addr` page, but there's no need to show its methods, you could click through to `PartialOrd` or `IpAddr` for that. What you want to see when looking at the docs for a given type are "what can I call on this type," while impls that merely reference this type should be linked but don't need methods expanded out here.
- [ ] If there are multiple impls that apply to a type being documented, such as `impl PartialOrd<Ipv4Addr> for Ipv4Addr` and `impl PartialOrd<IpAddr> for Ipv4Addr`, all of the declarations (including type parameters and associated types) can be listed in a block, followed by one set of declarations and docs for the actual methods. This may be somewhat complex to specify and implement, but if done correctly would give just the right amount of detail to see everything you could call on an `Ipv4Addr`, without being repetitive or requiring disclosure controls.
- [x] Show the declaration of items by default. It can be after the summary; for traits, that would essentially provide an index of the methods.
- [x] Either always show attributes, or mark some attributes to always be hidden and let others always be shown. Most items with attributes only have one line worth of attributes, which makes the "Expand attributes" disclosure just as noisy as displaying the attribute itself, and then the `[-]` adds extra noise once you're viewing it.
- [x] I haven't been able to figure out the logic behind the "Show hidden undocumented items" disclosure when looking at trait implementations for types, which seems to show documented items. It seems that maybe it's hiding the mandatory to implement methods from which the default impls for the other methods can be derived? Either way, it seems like this kind of hiding isn't necessary if the other suggestions are implemented.
That came out longer than I originally intended, and some of these suggestions may be more debatable than others. If anyone would prefer I turn some or all of this into an RFC, I'd be happy to do that; but if an issue is an appropriate place for this, then we can leave this here. | T-rustdoc,C-enhancement,A-rustdoc-ui,A-a11y | medium | Critical |
431,234,454 | flutter | Review _TabSwitchingViewState's internal caching behavior | `_TabSwitchingViewState` displays the content for a selected tab. It also caches the content of any previously selected tab offstage. But there are 2 things to consider:
1. Should we really apply caching automatically to every tab, regardless of what the developer wants? Could there be uncontrollable performance impacts as a result of this?
2. Should `_TabSwitchingViewState` really be invoking the `tabBuilder` for every cached page that is offscreen?
We should re-evaluate the API and behavior. At first glance it looks like we should consider using something like `KeepAlive` to signify that a page should be cached offstage. We can offer `KeepAlive` within something like a `CupertinoPageScaffold` automatically.
Related issue: #30790 | team,framework,c: performance,f: cupertino,P2,team-design,triaged-design | low | Major |
431,251,679 | neovim | syntax API | Followup to https://github.com/neovim/neovim/issues/9421#issuecomment-477773593 . The [roadmap](https://neovim.io/roadmap/) mentions "Syntax API" as a long-term goal. Concretely:
- `nvim_buf_attach_hl({buffer}, {send_buffer}, {opts})` - Subscribe to [all buffer "decorations"](https://github.com/neovim/neovim/issues/9421#issuecomment-477931420), where decoration is essentially the state in `screen.c`:
- syntax highlighting
- concealing
- `matchaddpos`/`nvim_buf_add_highlight`
- signs
- virtual text
- Allow clients to _define_ syntax structurally (i.e. text properties / extended marks #5031, not regex)
- Allow clients to _query_ syntax (including legacy Vim-regex syntax)
Notes:
- Whole-buffer syntax is expensive compared to Vim's existing viewport-restricted syntax
- Would a "fake viewport" be useful, e.g. so that clients could force Nvim to highlight the whole buffer (use-case: minimap)?
cc @bryphe @breja @crossr | enhancement,api,syntax | medium | Critical |
431,254,702 | pytorch | [RFC] Memory format (aka layout aka NHWC) support | ## Problem statement
CNN operators utilize a canonical order of tensor dimensions and assign them semantic meaning. For the 2D case in PyTorch today, an input to torch.nn.Conv2d has to be a 4d tensor in NCHW order - <batch, channels, height, width>.
For performance reasons, it's often beneficial to reorder dimensions differently so that memory accessed by particular operations is laid out contiguously and locality is better utilized. Most common option is moving dimensions towards the end - NHWC. There can be even more complex memory formats that tile one dimension into blocks, e.g. <N, C/16, H, W, C16>.
Example libraries utilizing it include:
* cudnn has faster performance on Volta in NHWC
* fbgemm and qnnpack don't support NCHW.
* libxsmm does support NCHW but the performance penalty is something like 50% (IIRC).
The challenge is that transforming the dimension order itself is expensive, so in cases when multiple CNN operations are performed in a row (e.g. `conv(relu(conv(x)))`) it's beneficial **to transform to the different memory format once,** carry out the operations, and reorder back.
Thus it's important to make PyTorch aware of different dimensions orders and be able **to pass tensors with different memory formats** between operations both in eager and JIT mode. Furthermore, it's beneficial to have automatic JIT optimization passes that try to apply heuristics or search techniques to figure out whether changing memory format is beneficial perf-wise and where in the model it makes sense to do it.
We strive to build API capable of representing:
* Tensor with different memory format (at the beginning, just dimension order) present in PyTorch in Eager and JIT. Blocked layouts are lower priority but still nice.
* User-exposed APIs for querying and changing memory format
* Core CNN operations being able to handle input tensors with different memory format and routing to corresponding faster implementation
* Ability to infer and optimize about memory formats in JIT passes
**Terminology**: the problem above is often referred to as “layout” (mxnet), “data_format” (tf), “image_format” (keras), “order” (caffe2). We propose to use the name “memory format” or “memory_format” in PyTorch. The name “layout” is unfortunately already taken in PyTorch with values 'strided' vs 'sparse_coo', so that naming option is not available.
## Affected operators
Following operators at minimum should be memory-format-aware. In addition to producing the correct result, they need to deliver **best performance** from underlying libraries AND **preserve memory format** of outputs in order to propagate explicitly specified user intent.
* convolution
* different kinds of pooling
* batch norm, layer norm, instance norm (generally, whatever norms)
* upsampling/interpolation
* feature dropout
* softmax to a lesser degree - dimension can be manually specified there, but efficient implementations are present only for implicit nchw layout
* padding
* element-wise (unary and binary) operations
* constructors of tensors that inherit memory format, e.g. empty_like.
## API and Behavior Changes
Define concept of memory format in PyTorch:
* Constants like `torch.memory_format.channels_first`. They don't have specified type and can be arbitrary comparable objects (likely start with enum but in future might be other objects to interop with concept of named tensor)
* Alternative: use `torch.channels_first` directly
* Values are `channels_first` and `channels_last` (to allow for fewer constants)
* For 1D images / 3D tensors the values mean NCW, NWC, for 2D images / 4D tensors - NCHW, NHWC, for 3D images / 5D tensors - NCDHW, NDHWC
Add following methods to Tensor:
* `x.is_contiguous(torch.memory_format.channels_first)`
* `x.to(memory_format=torch.memory_format.channels_first)`
**Note**: there's no `x.get_memory_format()` function for now, only explicit checks - it allows a wider range of possible implementations. We might want to add it later, though.
The tensor's semantic layout **always stays the same - NCHW!** `x.size()` always returns `(n,c,h,w)`
Operations preserve memory format behavior:
* convolution, pooling, etc, (see above) return output in the **same memory format** as the input and internally dispatch to the best implementation
* unary element-wise operations preserve same memory format and need to run as fast as on contiguous tensor
* binary element-wise operations provide some reasonable guarantees on preserving memory format - likely can be defined broader but minimum is:
* NHWC + scalar → NHWC
* NHWC + column vector → NHWC
* backward operations for core CNN ops preserve the same memory format as in forward path. (it might be needed to be enforced explicitly because incoming gradients for the output can be in different memory format)
Memory format is a property of a tensor that is preserved through serialization/deserialization (in case the tensor is a parameter).
## Strided implementation
Tensors in PyTorch today have a concept of strides that specifies how the **logical** tensor is laid out in **memory**. Specifically, each tensor has a `strides` vector of the same length as `sizes`. To look up the element at logical index `(i0, i1, ..., ik)`, one takes the dot product of the index with the strides and reads memory at `offset + i0*stride0 + i1*stride1 + ... + ik*stridek`. Contiguous tensors thus have strides which are reversed cumulative products of sizes. For example, a 4D tensor with sizes `(n,c,h,w)` has strides `(c*h*w, h*w, w, 1)`.
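The stride arithmetic described here can be checked with a small NumPy sketch (NumPy reports strides in bytes, so they are divided by the item size; this is an illustration, not PyTorch code):

```python
import numpy as np

n, c, h, w = 2, 3, 4, 5
x = np.arange(n * c * h * w, dtype=np.int64).reshape(n, c, h, w)

# contiguous NCHW strides are reversed cumulative products of the sizes
strides = tuple(s // x.itemsize for s in x.strides)
assert strides == (c * h * w, h * w, w, 1)

# element lookup is a dot product of the index with the strides
idx = (1, 2, 3, 4)
offset = sum(i * s for i, s in zip(idx, strides))
assert x.reshape(-1)[offset] == x[idx]
```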
Strides can be used to represent different memory formats (that are dimension re-ordering) physically while preserving logical default NCHW order. It gives effective definition of memory format transformation as:
```
# implementation of x.to(channels_last)
def to_mem_format_nhwc(x):
return x.permute(0,2,3,1).contiguous().permute(0,3,1,2)
# implementation of x.to(channels_first)
def to_mem_format_nchw(x):
return x.contiguous()
```
In NHWC format the strides vector is `(c*h*w, 1, c*w, c)`. Thus in memory buffer the weights are in contiguous order for NHWC.
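A NumPy analogue of this round-trip (with `transpose`/`ascontiguousarray` standing in for `permute`/`.contiguous()`; an illustration, not the proposed API) confirms both claims: the logical sizes stay in NCHW order while the strides become the NHWC ones:

```python
import numpy as np

n, c, h, w = 2, 3, 4, 5
x = np.zeros((n, c, h, w), dtype=np.float32)

# emulate x.to(channels_last): go to NHWC, materialize, view back as NCHW
y = np.ascontiguousarray(x.transpose(0, 2, 3, 1)).transpose(0, 3, 1, 2)

# the logical sizes stay in NCHW order ...
assert y.shape == (n, c, h, w)
# ... but the strides now match the NHWC layout: (c*h*w, 1, c*w, c)
assert tuple(s // y.itemsize for s in y.strides) == (c * h * w, 1, c * w, c)
```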
Strides can be used for testing:
```
def is_nhwc_contiguous(x):
return x.permute(0,2,3,1).is_contiguous()
# or alternatively
def is_nhwc_contiguous(x):
n,c,h,w = x.size() # in any case the sizes remain in NCHW order
return x.stride() == (c*h*w, 1, c*w, c)
def is_nchw_contiguous(x):
return x.is_contiguous()
# operator implementations can just check contiguity and carry on directly on data pointer
def my_sample_op(x):
if x.is_contiguous(nhwc):
float* p = x.data();
# Do we need to go to c++ here?
# can we have an example in python?
n,c,h,w = x.size()
# operate on `p` as it's guaranteed to be (n,h,w,c) array
y=my_nhwc_op(p)
# Do we need to convert the layout of y?
else:
# Need to convert x to nhwc layout
x = x.permute(0,2,3,1).contiguous()
float *p = x.data();
# Is this needed?
y = my_nhwc_op(p)
return y.permute(0,3,1,2).contiguous()
```
**Pros** of this approach:
* Utilizes existing PyTorch concept of strides without adding new top-level ideas or API parameters
* Preserves logical behavior of tensor in canonical NCHW order
* Works for arbitrary reordering of input dimensions
* Existing serialization routines already preserve the strides of a tensor
* Ability to reuse many operations to work on different memory layout
**Cons**:
* Calling `.contiguous()` is equivalent to switching to NCHW and may happen accidentally, either in user code or inside one of the ops
* Explicit audit of operators is needed to ensure they preserve memory format
* Doesn't work for blocked / tiled formats - a different approach is needed
* One could consider adding them as first-class citizens in PyTorch, but that is a much bigger change
* Alternative is to treat them as opaque handles, e.g. MKLDNN tensors
* Performance characteristics of underlying implementations are less obvious to the end user
**The biggest potential problem is unclear user intent.** There's no way to distinguish whether the user really wanted a different memory format or the input tensor just happened to be strided this way. Specifically, it leads to a **behavior change** for existing operations - today convolution can only produce NCHW-contiguous tensors even if the input is arbitrarily strided; in the new world it might recognize the input as NHWC and thus return NHWC too. That doesn't change semantics but can lead to hard-to-debug performance issues. A possible solution is to tag tensors explicitly with a user-specified memory_format flag and follow only this annotation (in addition to strides).
To solve the above issue, the initial proposal is to introduce a “soft” memory format tag on the tensor that records the last `to(memory_format)` call done on it. Operators would need to propagate this annotation to their outputs. The annotation is “soft”, so we won't hard-error on mismatching annotations but rather produce warnings in profiling mode.
### Operator implementations
Signatures of existing operators don't change. Operators can do hard-coded dispatch inside the operator to route to a faster implementation. If such an implementation is not available, round-tripping through a different memory format is possible; the alternative would be to raise an error.
```
def maxpool(x: Tensor):
if x.is_contiguous(torch.layout.NHWC):
return max_pool_impl_nhwc(x)
return max_pool_impl_default(x.contiguous())
```
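The same dispatch pattern can be sketched end-to-end in plain Python; the `Tensor` class and format tags below are toy stand-ins for illustration, not the real API:

```python
class Tensor:
    """Toy stand-in: only tracks a memory-format tag."""
    def __init__(self, fmt="nchw"):
        self.fmt = fmt
    def is_contiguous(self, fmt="nchw"):
        return self.fmt == fmt
    def contiguous(self):
        return Tensor("nchw")   # fallback: reorder to the default NCHW layout

def max_pool_impl_nhwc(x):
    return "fast-nhwc-kernel"

def max_pool_impl_default(x):
    assert x.is_contiguous("nchw")
    return "default-nchw-kernel"

def maxpool(x):
    # Hard-coded dispatch: fast path if the input is NHWC-contiguous,
    # otherwise round-trip through the default contiguous layout.
    if x.is_contiguous("nhwc"):
        return max_pool_impl_nhwc(x)
    return max_pool_impl_default(x.contiguous())

assert maxpool(Tensor("nhwc")) == "fast-nhwc-kernel"
assert maxpool(Tensor("nchw")) == "default-nchw-kernel"
```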
It's preferred to use a single symbol like 'conv' to refer to these operators in the JIT IR instead of creating separate operators like 'conv_nhwc'. The reason is simplicity and keeping the IR at the level of semantic representation.
### Element-wise operations
We have to ensure that core operations, like element-wise ones, preserve memory format and are efficient.
Unary operations can be handled generically by verifying whether a block of memory is “dense” - i.e. whether its elements span an area without gaps and each memory location is used exactly once. It can be verified with a simple algorithm:
```
def is_dense_format(x):
p = 1
for s, d in sorted(zip(x.stride(), x.size())):
if s != p:
return False
p *= d
return True
def my_unary(x):
if is_dense_format(x):
return contig_memory_impl(x.data(), x.numel())
return default_strided_impl(x)
# is_dense_format can be used in implementations of e.g. empty_like too
```
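The same check can be exercised standalone on plain (strides, sizes) tuples - a sketch using the stride values derived earlier; both NCHW- and NHWC-contiguous layouts are dense, while a gappy (sliced) layout is not:

```python
def is_dense_format(strides, sizes):
    # Elements must tile memory exactly once, in some dimension order.
    p = 1
    for s, d in sorted(zip(strides, sizes)):
        if s != p:
            return False
        p *= d
    return True

n, c, h, w = 2, 3, 4, 5
assert is_dense_format((c * h * w, h * w, w, 1), (n, c, h, w))        # NCHW-contiguous
assert is_dense_format((c * h * w, 1, c * w, c), (n, c, h, w))        # NHWC-contiguous
assert not is_dense_format((2 * c * h * w, h * w, w, 1), (n, c, h, w))  # gaps between batches
```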
### Performance tooling
For debugging performance we should add support to the profiler for:
* seeing where in the program actual memory reorderings occur - i.e. track calls to .contiguous()
* tracking which implementation is invoked
* issue warnings on memory format changes in e.g. binary ops (where “soft” annotation is useful)
This functionality can be built into an on-demand profiling tool.
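The first bullet could be prototyped as a simple counter around a `.contiguous()`-like call; everything below (the counter class, the format strings) is illustrative only:

```python
class ReorderCounter:
    """Toy profiler hook: counts calls that trigger an actual memory reordering."""
    def __init__(self):
        self.count = 0
    def contiguous(self, tensor_fmt, target_fmt="nchw"):
        if tensor_fmt != target_fmt:
            self.count += 1   # an actual copy/reorder happened
        return target_fmt

prof = ReorderCounter()
prof.contiguous("nhwc")   # reorders: NHWC -> NCHW
prof.contiguous("nchw")   # no-op: already in the target format
assert prof.count == 1
```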
### Autograd handling
It's logical to expect the backward pass to run in the same memory format as the forward pass. That won't always happen automatically, as incoming gradients might be arbitrarily strided. Thus the forward pass has to explicitly recognize the memory format, store it in the autograd closure, and convert the grad tensor accordingly before the backward function runs.
Possible implementation:
```
def conv_backward(input, weight, grad_output, grad_weight, grad_input):
if input.is_contiguous(torch.memory_format.channels_last):
grad_output = grad_output.to(torch.memory_format.channels_last)
return conv_backward_nhwc(...)
else:
grad_output = grad_output.contiguous()
return conv_backward_nchw(...)
```
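The closure mechanism can be sketched in plain Python; the toy `Tensor`, the dict-as-closure, and the format tags are all stand-ins for what the real autograd engine would store:

```python
class Tensor:
    def __init__(self, fmt="nchw"):
        self.fmt = fmt
    def to(self, fmt):
        return Tensor(fmt)

def conv_forward(input):
    output = Tensor(input.fmt)
    # Record the forward memory format in the autograd "closure".
    closure = {"fmt": input.fmt}
    return output, closure

def conv_backward(grad_output, closure):
    # Convert the incoming grad to the format the forward pass used,
    # regardless of how it happens to be strided.
    grad_output = grad_output.to(closure["fmt"])
    return grad_output.fmt

_, closure = conv_forward(Tensor("nhwc"))
assert conv_backward(Tensor("nchw"), closure) == "nhwc"
```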
## Representation in JIT
Current proposal is to have:
* No first-class handling of memory format in type annotations just yet. Instead, we can maintain a lookaside map in whatever shape is necessary for passes that manipulate memory format
* Inference pass (similar to shape_inference) that produces per-Value format annotations
* Memory format transformation passes (manual or automatic) that find where necessary `to(memory_format)` calls need to be inserted for optimal performance
For enforcement purposes, we can also utilize statements like `assert x.is_contiguous(channels_last)`.
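A minimal sketch of such a lookaside map and inference pass (the mini-IR, op set, and propagation rules are invented for illustration):

```python
# Tiny "IR": each node is (op, inputs, output); formats live in a side map.
graph = [
    ("conv",    ["x"], "y"),
    ("relu",    ["y"], "z"),    # element-wise: preserves its input's format
    ("flatten", ["z"], "out"),  # assume it forces the default format
]

def infer_formats(graph, input_formats):
    fmt = dict(input_formats)           # lookaside map: Value -> format tag
    for op, inputs, output in graph:
        if op in ("conv", "relu"):      # format-preserving ops
            fmt[output] = fmt[inputs[0]]
        else:                           # conservative default for everything else
            fmt[output] = "nchw"
    return fmt

fmt = infer_formats(graph, {"x": "nhwc"})
assert fmt["y"] == "nhwc" and fmt["z"] == "nhwc" and fmt["out"] == "nchw"
```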
Note: There's a question of where to store information that particular device has a preferred memory format combination (for example qconv on x86 routes to fbgemm that implements NHWC only). One option is to put it in op registration level, however, memory format annotation feels like more of a side information. We can start by maintaining a global map somewhere in JIT pass that denotes preferred memory formats and associated heuristics. If it gets untidy - we can switch to registration-based mechanism.
## Beyond: blocked layouts
As we decide to add more complex packings of tensors, using a first-class PyTorch tensor for them might not be feasible because of the high implementation cost and complexity. Two alternatives are possible:
* Opaque representations like custom C type bindings. This is likely the option of choice for inference packing, where there is more diversity in performance optimizations
* A first-class tensor type like MKLDNNTensor with some (but not all) of the operations bound on this new type
Yet another alternative is to implement native support for blocking/tiling in core PyTorch Tensor class.
## Named tensor relation
Existing proposal for [NamedTensor](https://docs.google.com/document/d/1ynu3wA2hcjwOtEng04N904gJjEbZWcINXO_ardX6hxc/edit?ts=5c9daf5d#heading=h.2gbe5xpga3w9) is structured as a type-checking mechanism on tensors - at the moment it doesn't assign any semantic meaning to dimension names. Thus the only way to infer meaning of the activation tensor is to continue using predetermined NCHW format. It makes NamedTensor and the current proposals orthogonal.
If we're willing to hard-specify meanings of some names (like “channels”, “widths”), operators can utilize this information to route to faster implementation. It'd be a semantic change though as the input tensors would logically have NHWC (not NCHW as today) memory format.
## Prior art
TensorFlow supports both NHWC and NCHW at the operator level, via the `data_format` parameter; acceptable values are (“NHWC”, “NCHW”) for 4-d inputs, (“NDHWC”, “NCDHW”) for 5-d inputs, or `channels_first` / `channels_last` independent of input dimensionality. It is up to the user to handle setting the parameter correctly, i.e. it is not tracked automatically by the tensor.
In Caffe2 this parameter is called `order` rather than `data_format`, but it's still applied explicitly at the individual-operator level.
---
## Appendix: Other options considered
Litmus question: what does the following code print: `tensor_in_nhwc_layout.size(1)` - the number of channels (because the default in PyTorch is NCHW) or the height (because that's what sits at position 1 in NHWC layout)?
Based on this answer several options are possible:
* **Option A - Strides (presented above).** Tensor layout is a completely internal representation. Implementation-wise it's most conveniently done with strides.
* .size(1) returns “channels”, but the internal memory is laid out differently
* pro: doesn't change model code; the model can still do dimension arithmetic directly. In fact, none of the public API changes
* cons: in the strides implementation many operators call .contiguous() and can accidentally revert the layout back
* cons: From a user perspective, understanding the guarantees on what an op returns is paramount. This IMO eliminates strides-only approaches, because it becomes very difficult to understand the format your op's result will be returned in, and there's no API to say “ignore my strides, actually just return the NCHW-contiguous thing.” This is in addition to the limitations above.
* **Option B - Explicit NHWC tensor.** The user explicitly manipulates a tensor that has a different dimension order, but the tensor itself doesn't know anything about it. We'd need some operator-level annotation to figure out what the user expects.
* .size(1) returns “height”
* pro: no magic and very predictable
* cons: changing a model from one layout to another becomes a complex operation that needs to track all accesses to .size() and .reshape() (or you need to make it explicit in the API?)
* **Option B' - Explicit NHWC tensor with layout flag**. Same as above, but we allow attaching an annotation to the tensor to mark its semantic layout, which ops consume in their implementation. There's no need for operator-level annotation then - an operator can dispatch based on the layout flag of its inputs.
* **Option C - Named Tensor**. ([https://docs.google.com/document/d/1ynu3wA2hcjwOtEng04N904gJjEbZWcINXO_ardX6hxc/edit#heading=h.2gbe5xpga3w9](https://docs.google.com/document/d/1ynu3wA2hcjwOtEng04N904gJjEbZWcINXO_ardX6hxc/edit#heading=h.2gbe5xpga3w9))
* .size(1) returns “height” but we ask people to NOT use this API and instead use .size('channel')
* pro: very explicit and what user wants
* con: doesn't solve the transition problem; we'd need to force all code written with layout awareness to use named tensors. If not - the same problems as above apply
* **Option D - Layout is opaque tensor type**. Treat NHWC as we treat MKLDNN or SparseTensor - separate tensor type with different DispatchID. It's like Option A but with different tradeoffs on default behavior - non-implemented ops would fail instead of reverting to NCHW.
* .size(1) still returns “channels”
* pro: no magic and explicit, separate dispatch allows ops to decide what they want
* pro/cons: all necessary operators need to be implemented on different layout, if some op is missing, user would get an explicit error that it's not supported
* cons: we probably would need to ban many operations on it, e.g. views because expected results are hard to predict | module: internals,triaged,module: mkldnn | high | Critical |
431,301,304 | godot | Sometimes mono gets corrupted and project crashes at editor startup | **Godot version:**
3.1 stable
**OS/device including version:**
Win10
**Issue description:**
Sometimes my project, which uses C#, gets corrupted and the editor crashes after loading the first scene
```
ERROR: reload: FATAL: Condition ' native == __null ' is true.
At: modules/mono/csharp_script.cpp:2716
Stacktrace:
Got a SIGILL while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
```
I found a way to fix it by removing the `.mono` folder - after re-running, it recompiles and that fixes the crash
**Steps to reproduce:**
It's happening from time to time, I don't know how to reproduce it
**Minimal reproduction project:**
I can only give a corrupted .mono folder and correct .mono folder, the project itself is open-sourced and can be obtained from https://github.com/Chaosus/ModernShogi but I guess it's useless to fix this bug
[mono_corrupted.zip](https://github.com/godotengine/godot/files/3062017/mono_corrupted.zip)
[mono_ok.zip](https://github.com/godotengine/godot/files/3062022/mono_ok.zip)
| bug,topic:editor,topic:dotnet | low | Critical |
431,332,408 | go | runtime: ARM uClinux crash with -buildmode=c-archive | Please answer these questions before submitting your issue. Thanks!
#### What did you do?
main.go:
```
package main

/*
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
*/
import "C"

//export TestAdd
func TestAdd(a C.int32_t, b C.int32_t) C.int32_t {
	return a + b
}

func main() {}
```
build:
```
CGO_ENABLED=1 GOOS=linux GOARCH=arm GOARM=5 CC=arm-hisiv500-linux-uclibcgnueabi-gcc CXX=arm-hisiv500-linux-uclibcgnueabi-g++ go build -buildmode=c-archive
```
This outputs dlltest.a.
test.c:
```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <inttypes.h>

extern "C" {
int32_t TestAdd(int32_t a, int32_t b);
}

int main(int argc, char* argv[])
{
    int ret = TestAdd(10, 11);
    printf("TestAdd, ret=%d", ret);
}
```
build:
```
export CC=arm-hisiv500-linux-uclibcgnueabi-gcc
export CXX=arm-hisiv500-linux-uclibcgnueabi-g++
$CXX -g test.c -o test dlltest.a -lpthread
```
This outputs the test executable:
```
file test
test: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, with debug_info, not stripped
```
Copy test to the ARM board and run it.
#### What did you expect to see?
output TestAdd, ret=21
#### What did you see instead?
Segmentation fault
#### System details
gdb debug output
Program received signal SIGSEGV, Segmentation fault.
[Switching to LWP 427]
runtime.sysargs (argc=0, argv=0x0) at /usr/local/go-linux-arm-bootstrap/src/runtime/os_linux.go:206
```
go version go1.12.3 linux/amd64
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/csw/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/csw/work/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/work/dlltest/go.mod"
GOROOT/bin/go version: go version go1.12.3 linux/amd64
GOROOT/bin/go tool compile -V: compile version go1.12.3
uname -sr: Linux 4.19.0-041900-generic
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
/lib/x86_64-linux-gnu/libc.so.6: GNU C Library (Ubuntu GLIBC 2.27-3ubuntu1) stable release version 2.27.
gdb --version: GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
```
| help wanted,NeedsFix | low | Critical |
431,390,721 | go | x/net: Gccgo port for aix/ppc64 | I'm currently porting x/net for `aix/ppc64` for GC implementation (https://go-review.googlesource.com/c/net/+/170557). I was aiming to port it for Gccgo implementation as well.
In my understanding, for most OSes a package implementation doesn't need to change depending on which compiler is used.
The only case I can think of where a special file is needed is for OSes (like AIX, Solaris, and maybe 1.12 Darwin) which don't have the same syscall backend. On Linux or the BSDs, `syscall.Syscall` can be called from both Gc and Gccgo. Correct me if I'm wrong.
But on AIX, Gc syscalls need `//go:linkname` and Gccgo ones need `//extern`. Therefore a different syscall file is needed. However, the rest of the package should remain unchanged, right?
Note I've checked with gccgo-8 (go1.10) on Linux/amd64 and x/net package tests seems to pass as is.
@ianlancetaylor could you please confirm that these are the only differences between the Gccgo and Gc compilers at the package-implementation level?
CC @mikioh
| NeedsInvestigation | low | Major |
431,434,568 | pytorch | Performance issue with torch.jit.trace(), slow prediction in C++ (CPU) | Hi,
Scripted CNNs predict much more slowly, especially in C++.
On the Python side, this test shows the decrease in performance:
```
model = Model(128, 128, 19**2, 0.1)
input = torch.tensor(torch.ones(1,1,128,128))
torch.cuda.synchronize()
torch.set_num_threads(1)
start_time1 = time.time()
output1 = model(input)
print('Time for total prediction 1 = {0}'.format(time.time()-start_time1))
# Trace the model and convert the functionality to script
traced_model = torch.jit.trace(model, input)
start_time2 = time.time()
output2 = traced_model(input)
print('Time for total prediction 2 = {0}'.format(time.time()-start_time2))
```
The output is:
```
Time for total prediction 1 = 0.008975505828857422
Time for total prediction 2 = 0.015474081039428711
```
So after tracing the module, prediction is much slower. When I load the model in C++, the _forward_ method actually decreases the speed of real-time prediction from 120Hz to 30Hz. Is there a way this performance could be improved?
cc @ezyang @gchanan @zou3519 @suo @yf225 @VitalyFedyunin @ngimel @mruberry | triage review,needs reproduction,module: performance,oncall: jit,module: cpp | medium | Major |
431,448,856 | pytorch | pytorch/caffe2/onnx/backend.cc:1668:57: error: invalid conversion from ‘google::protobuf::int32 {aka int}’ to ‘onnx_torch::TensorProto::DataType {aka onnx_torch::TensorProto_DataType}’ [-fpermissive] | ## ❓ Questions and Help
When I compile PyTorch 1.0 from source (v1.0rc0, v1.0rc1), I get this error.
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
| caffe2 | low | Critical |
431,537,490 | youtube-dl | Fox.com doesn't let me download a clip (HTTP Error 403: Forbidden) | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2019.04.07**
- [x] At least skimmed through the [README](https://github.com/ytdl-org/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/ytdl-org/youtube-dl#faq) and [BUGS](https://github.com/ytdl-org/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
- [x] Other
---
### Fox.com doesn't let me download a clip
It worked last year in November as far as I can remember but right now it just doesn't let me even after updating youtube-dl. Any help please? This is what happens:
<details>
<summary>Output</summary>
<pre><code>
C:\Users\Name\Desktop>youtube-dl.exe https://www.fox.com/watch/27c41e87d8a275d38c43fe5c969ffaf6
[FOX] Downloading JSON metadata
[FOX] 27c41e87d8a275d38c43fe5c969ffaf6: Downloading JSON metadata
[FOX] 27c41e87d8a275d38c43fe5c969ffaf6: Downloading JSON metadata
[FOX] 27c41e87d8a275d38c43fe5c969ffaf6: Downloading m3u8 information
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 24
[download] Destination: The Stark & Guzman Show Round 2-27c41e87d8a275d38c43fe5c969ffaf6.mp4
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 1 (attempt 10 of 10)...
[download] Skipping fragment 1...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 2 (attempt 10 of 10)...
[download] Skipping fragment 2...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 3 (attempt 10 of 10)...
[download] Skipping fragment 3...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 4 (attempt 10 of 10)...
[download] Skipping fragment 4...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 5 (attempt 10 of 10)...
[download] Skipping fragment 5...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 6 (attempt 10 of 10)...
[download] Skipping fragment 6...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 7 (attempt 10 of 10)...
[download] Skipping fragment 7...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 8 (attempt 10 of 10)...
[download] Skipping fragment 8...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 9 (attempt 10 of 10)...
[download] Skipping fragment 9...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 10 (attempt 10 of 10)...
[download] Skipping fragment 10...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 1 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 2 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 3 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 4 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 5 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 6 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 7 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 8 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 9 of 10)...
[download] Got server HTTP error: HTTP Error 403: Forbidden. Retrying fragment 11 (attempt 10 of 10)...
[download] Skipping fragment 11...
[... identical "HTTP Error 403: Forbidden" retry lines (attempts 1 of 10 through 10 of 10) followed by "Skipping fragment" repeat for fragments 12 through 24; omitted for brevity ...]
[download] 100% of 0.00B in 00:19
WARNING: 27c41e87d8a275d38c43fe5c969ffaf6: malformed AAC bitstream detected. Install ffmpeg or avconv to fix this automatically.
</code></pre></details>
| cant-reproduce | low | Critical |
431,538,348 | godot | Spaces are counted in RichTextLabel but not Label when using `get_total_character_count()` | **Godot version:**
3.1
There are differences in how **RichTextLabel** and **Label** handle spaces in their texts: **RichTextLabel** counts spaces when using _get_total_character_count()_, while **Label** does not:

This isn't much of a problem on its own, because you can just use _text.length()_, BUT it becomes a problem when you use _label.visible_characters += 1_. For example, I wrote a system that shows the same text on both of those nodes, and it only works with one of them at a time because each node handles spaces differently.
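To make the mismatch concrete, here is a plain-Python sketch of the two counting rules (language-agnostic; the space-skipping rule attributed to Label is an assumption inferred from the screenshot above, and the helper names are illustrative):

```python
# RichTextLabel appears to count every character; Label appears to skip
# spaces (assumed rule based on the report above).
def richtextlabel_count(text):
    return len(text)                     # spaces included

def label_count(text):
    return len(text.replace(" ", ""))    # spaces excluded (assumed)

text = "Hello brave new world"
print(richtextlabel_count(text))  # 21
print(label_count(text))          # 18

# A typewriter effect driving `visible_characters += 1` therefore needs a
# per-node total; using text.length() as the stop condition only works for
# whichever node counts spaces.
```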
This is very annoying and messy. It would be great if both of them counted spaces. | bug,confirmed,topic:gui | low | Critical |
431,539,555 | godot | Duplicating Visual Script node loses custom script reference | **Godot version:**
3.1-stable-windows64
**Steps to reproduce:**
1. Create a custom node type using a GDScript script.
2. Add a "Custom Node" to a new Visual Script.
3. Set the type to your custom node via the Script reference in the Inspector.
4. Press Ctrl-D to duplicate your node.
5. Notice that the duplicate's Script field only shows a gear icon, with no script assigned.

(If you then click the blank Script reference in the Inspector, the editor gets into an inconsistent state that breaks some things and sometimes leads to crashes.)
Original Node:

Duplicate Node:

**Reproduction project:**
* [email protected]:youreperfectstudio/catherines-quest.git
* commit ID: 0937d68a43a737565c2a642c86504d2198eca3c1
* script: story/robin_intro.vs | bug,confirmed,topic:visualscript | low | Critical |
431,569,893 | rust | Controls and search in Rustdoc header are visually distracting | This is being broken out of #59829 to provide smaller, actionable items that can be independently discussed and worked on.
A visual inspection of the Rustdoc design reveals that there are a lot of distracting design elements, which can be especially problematic for people with dyslexia or attention disorders.
One of the first encountered is the header containing the search bar and controls. It is visually larger than the main heading of the page, it contains several high contrast icons and controls which also have additional bounding boxes, the whole thing presents a non-rectangular shape making it stand out more, and the search box contains a fairly verbose set of instructions.

I have a few suggestions to improve the situation; some seem like obvious improvements, some are more debatable and I'm open to other suggestions:
* [x] Drop the horizontal rule, or make it span the full width and low contrast like the mdBook style after scrolling down #92797
* [x] Drop the boxes around the icons
* [ ] ~~Make the icons and "All Crates" control lower contrast~~
* [x] Increase the size of the main heading of the page to be larger than the search and controls heading
* [x] Consider removing the "All Crates" dropdown; I am not sure it is pulling its weight. If it is useful, it could instead be an option that appears after doing a search, rather than appearing on every page. #92490
* [x] Consider moving theme and settings into a single menu #92629
* [ ] Consider simplifying text to simply say "Search" and providing help elsewhere, such as in the above single menu
* [ ] Consider making the header span the whole width of the page, rather than appearing to be nested within the content by being inside the area delimited by the sidebar. This would be a significant change to the overall Rust docs style, but seems more consistent with the way many other sites are laid out. | T-rustdoc,A-rustdoc-ui | low | Critical |
431,574,673 | rust | E0623: Incorrect or ambiguous message | I encountered the following error:
```
error[E0623]: lifetime mismatch
--> src/algorithm/simplex/branch_and_bound/logic.rs:47:73
|
34 | tableau: Tableau<'data_provider, NonArtificial, AdaptedMatrixProvider<'original_data, MP>, PR>,
| ------------------------------------------------------------------------------------- these two types are declared with different lifetimes...
...
47 | let lower_provider: AdaptedMatrixProvider<'original_data, MP> = tableau.provider().clone_with_extra_bound(branch_variable,
| _________________________________________________________________________^
48 | | new_lower_bound,
49 | | BoundType::Lower);
| |___________________________________________________________________________________________________________________________________^ ...but data from `tableau` flows into `tableau` here
```
Looking at [line 47](https://github.com/vandenheuvel/rust-lp/blob/892bf3193c826cbcad479179e91d0e92f1aec8f0/src/algorithm/simplex/branch_and_bound/logic.rs#L47), this seems to be incorrect: no data flows into `tableau`. Rather, another reference is created to a value that the existing `tableau` also references.
Moreover, it isn't clear which two types the compiler is pointing at. It might mean the outer and inner type, but if so, the message should say that explicitly; as written it is ambiguous, and I think it needs to change.
I tried to reproduce this issue on the playground, but was [not successful](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=eeb09142773da1cdbd5b7ac6ee3b381e). The full code can be found [here](https://github.com/vandenheuvel/rust-lp/blob/892bf3193c826cbcad479179e91d0e92f1aec8f0/src/algorithm/simplex/branch_and_bound/logic.rs#L47). A slightly different state of the codebase, with a similar error, can be found [here](https://github.com/vandenheuvel/rust-lp/blob/9b3ef9516123f4465a741f855b261fb729e148ba/src/algorithm/simplex/branch_and_bound/logic.rs#L47).
@estebank Looking at related issues, I thought that this might be relevant to you. | C-enhancement,A-diagnostics,A-lifetimes,T-compiler,D-confusing,D-newcomer-roadblock | low | Critical |
431,605,321 | rust | rustdoc styling improvements for readability | This is being broken out of #59829 to provide smaller, actionable items that can be independently discussed and worked on.
Comparing the overall rustdoc styling with recommendations for accessibility, including for dyslexia and attention disorders, there are a number of improvements that could be made.
In addition, the [mdBook](https://github.com/rust-lang-nursery/mdBook) default styling, which is already used for the [Rust Book](https://doc.rust-lang.org/book/) and other official Rust documentation, already meets or comes closer to meeting several of these criteria, so one way to achieve several of these goals would be to unify styling with mdBook.
A few of these recommendations may be more appropriate for a separate theme rather than the default theme, but I think many of these changes could improve readability and usability for all users.
* [x] Remove extraneous horizontal rules; horizontal rules should generally be used sparingly, but are used heavily in rustdoc. Appropriate spacing and sizing of headers can replace most uses of horizontal rules.
* [ ] Use a sans-serif font for the body text
* [x] Change the font size and column width such that lines are at most 100 characters wide, [preferably under 80](https://www.w3.org/TR/WCAG21/#visual-presentation). This probably means also limiting width of example code to 70 or 80 columns or so, but that's probably OK, example code shouldn't need as much room as the standard convention of 100 columns.
* [ ] Consider a lower contrast theme, either as the default or as an option distinct from the dark theme. This can be overdone, some sites go overboard with low-contrast themes, but there's a lot of room for going lower contrast before getting to that point.
* [ ] Reduce use of color, as colored text can catch attention and the more colors used the more distracting it can be. Additionally, relying on color to convey information presents problems for those with colorblindness.
- [x] Consider removing use of different colors for links to different kinds of items (structs, enums, type aliases, macros, etc). This may be worth its own issue as it interacts with the search UI. #91480
- [x] Consider removing colored headings; headings are already distinguished by size, font, spacing, and horizontal rules (though those should probably be also be dropped). In general, only one or two attributes should change to distinguish headings, otherwise they become too prominent and can drown out the text.
- [x] Consider removing the color from the "Run" link in the example code, or only coloring it upon hover. Also consider unifying styling of example code "Run" link with mdBook, though mdBook could also use lower contrast controls by default. #92058
* [ ] Reduce or eliminate the use of background boxes for inline `<code>` spans; the Python approach of not having a background box if it is already highlighted as a link seems like a nice idea.
* [x] Increase leading to 1.5 times the font size, and greater inter-paragraph spacing ([WCAG recommends 250% of the line height](https://www.w3.org/WAI/WCAG21/Understanding/visual-presentation.html), but that looks quite excessive to me; 2em seems like a readability improvement without being excessive) #93694
* [x] Provide a theme with a larger font size, or use a `1 rem` default font size to pick up the browser default font size (increasing the font size across the board doesn't necessarily improve readability; some users actually find smaller fonts more readable) #92448 | T-rustdoc,A-rustdoc-ui | medium | Critical |
431,618,613 | pytorch | [jit] Can't `torch.jit.script` a lambda | Something like `torch.jit.script(lambda x: x + 2)` fails since our source introspection requires a standalone function
cc @suo | oncall: jit,low priority,triaged,jit-backlog | low | Minor |
431,635,610 | pytorch | Embedding layer does not check input range | ## 🐛 Bug
The Embedding layer does not check the input index range when running on CUDA.
## To Reproduce
Code to reproduce the behavior:
```
import torch
import torch.nn as nn
emb = nn.Embedding(10, 20).cuda()
x = torch.zeros(5, 1).long() + 10
x = x.cuda()
y = emb(x)
z = torch.zeros(10).cuda()
```
```
RuntimeError Traceback (most recent call last)
<ipython-input-1-758fb390ae81> in <module>()
8 y = emb(x)
9
---> 10 z = torch.zeros(10).cuda()
RuntimeError: CUDA error: device-side assert triggered
```
## Expected behavior
It should raise a clear out-of-range index error at the `emb(x)` call. Because CUDA kernels execute asynchronously, the device-side assert instead surfaces at the next unrelated CUDA call (`torch.zeros(10).cuda()` here), which makes the failure hard to trace back to the embedding lookup.
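Until the kernel itself validates indices, a host-side guard can fail synchronously, before any kernel launch. A minimal pure-Python sketch of the check (the helper name is illustrative; with real tensors the equivalent idea is verifying `(x >= 0).all()` and `(x < num_embeddings).all()` before calling the embedding):

```python
def check_embedding_indices(indices, num_embeddings):
    """Illustrative guard: raise a readable, synchronous error instead of
    letting an out-of-range index trip a deferred device-side assert."""
    for i in indices:
        if not 0 <= i < num_embeddings:
            raise IndexError(
                f"index {i} is out of range for an embedding of size {num_embeddings}"
            )

check_embedding_indices([0, 3, 9], num_embeddings=10)   # fine
try:
    check_embedding_indices([10], num_embeddings=10)    # matches the repro
except IndexError as e:
    print(e)  # index 10 is out of range for an embedding of size 10
```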
## Environment
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 390.116
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.2
[pip3] torch==1.0.1.post2
[pip3] torchvision==0.2.2.post3
[conda] Could not collect
| module: cuda,module: error checking,triaged | low | Critical |
431,671,359 | rust | Reduce visual prominence of controls, source links, and version numbers in rustdoc | This is being broken out of #59829 to provide smaller, actionable items that can be independently discussed and worked on.
A review of rustdoc for accessibility, especially for dyslexia and attention disorders, finds that there are a number of visually distracting elements in rustdoc which can draw attention away from the main content. These should be considered along with the other issues mentioned in #59829 about the overall effect of a number of distracting elements on the page.
Items in rustdoc can have a number of controls and auxiliary information associated with them, which can be quite prominent and distracting. These should either be eliminated, or made less prominent so that the reader can more easily follow the main content and only notice them when they need to find them.
At the top of a page, we see several controls that are larger and more prominent than the main text. The top-of-page `[-]` and `[src]` controls are larger and bolder than the body text. The version number is a bit smaller and lower contrast, but it is never explained what it means.
Immediately above the main text is a "Show declaration" disclosure which is lower contrast, but larger, than the main text, and the main text itself also has a disclosure control immediately to its left, crowding it.

`impl` blocks have disclosure controls indicating a nesting relationship, even though the headings themselves don't, which leads to some distracting spacing. They also can have a number of closely spaced prominent `[src]` links and version numbers. The `[src]` links are a slightly larger font size (17px) than the content (16px).


There are also a variety of different sizes of these disclosure controls, which don't match the logical hierarchy.

There's an additional "important traits" control that appears on some methods which indicates some important traits implemented by the return value of the method. This allows showing traits that are implemented by the return type for things like iterator adapters, where the user usually cares more about the trait implemented than the concrete type. This adds another UI element which is high contrast, doesn't match in size, and has a different way of disclosing hidden information. Previous discussion of this design is present in the implementation issue, #45039.


There is going to need to be a balance between reducing or eliminating the visual impact of these elements, while still making the same information easily discoverable for those who need it.
Here are a few suggestions about how we might be able to achieve this.
* [x] Avoid having these elements look like syntax. `[+]`, `[-]`, and `[src]` could be mistaken for some kind of source code annotation; the brackets are also very eye-catching. Using more standard disclosure controls (such as `⊞` and `⊟`, or ▶︎ and ▼), and prose like `source` instead of `[src]`, may help.
* [x] Make source links and version information smaller and lower contrast; could make them become full contrast on hover
* [ ] Move version numbers introduced and source link to less prominent location
- perhaps inline with the text of the docs
- perhaps immediately to the right of the first line of the declaration, rather than right-justified
* [ ] Use English for the version number and source link
- "Introduced in Rust 1.12.0"
- "Since 1.12.0"
- "source"
- Something simple like "_(since 1.12.0 / source)_" might be appropriate.
* [ ] Adjust the sizes of disclosure controls and text to follow logical hierarchy and be smaller and less prominent than content
* [ ] Use standard disclosure control for "important traits", or if possible, just include the relevant information inline so you don't have to click through to it. These options were considered but ultimately not chosen in the implementation issue #45039. | T-rustdoc,C-enhancement,A-rustdoc-ui | medium | Critical |
431,674,694 | pytorch | Multi-gpu via torch::nn::parallel::data_parallel | ## 🐛 Bug
I tried to use the multi-GPU capability in `C++`, but when I write `torch::nn::parallel::data_parallel`, I get a `No member named 'parallel' in namespace 'torch::nn'` error.
## To Reproduce
Steps to reproduce the behavior:
Just add `torch::nn::parallel::data_parallel` in code with `torch.h` included.
## Expected behavior
It should at least recognize the `parallel` namespace.
## Environment
```
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Red Hat Enterprise Linux Server 7.6 (Maipo)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
CMake version: version 3.12.0
Python version: 2.7
Is CUDA available: Yes
CUDA runtime version: 7.5.17
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P40
Nvidia driver version: 384.81
cuDNN version: /usr/lib64/libcudnn.so.7.5.0
Versions of relevant libraries:
[pip] numpy==1.16.2
[pip] tinynumpy==1.2.1
[pip] torch==1.0.1.post2
[pip] torchvision==0.2.1
[conda] Could not collect
```
## Additional context
Is `torch::nn::parallel::data_parallel` functional now?
cc @yf225 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar | oncall: distributed,module: cpp,feature,triaged | low | Critical |
431,679,055 | go | net/http: http.Client Timeout incompatible with writable bodies | As part of #26937, Transport started returning writable bodies for 101 Switching Protocol responses. This is great, thank you!
However, if you have a non-zero Timeout on your http.Client, you can't use this, because of the following [lines from client.go](https://github.com/golang/go/blob/fda5e6d6fa7abfe974a58dfeeceb95a8165d1b63/src/net/http/client.go#L266):
```
if !deadline.IsZero() {
	resp.Body = &cancelTimerBody{
		stop:          stopTimer,
		rc:            resp.Body,
		reqDidTimeout: didTimeout,
	}
}
```
This wraps the writable body in a non-writable body.
Go version: 1.12
Observed behavior: `res.Body.(io.ReadWriteCloser)` fails when `client.Timeout > 0`.
Expected behavior: timeouts and writable bodies are compatible.
Although we have some streaming requests, we would like to set a long (eg one hour) timeout on our requests to prevent resource exhaustion. #30876 suggests that we can't even do this manually with contexts. | NeedsInvestigation | low | Minor |
431,689,040 | pytorch | weight_norm doesn't support eta and returns nan for zero weights | ## 🐛 Bug
Backprop through weights generated with `torch._weight_norm` that are zero-filled yields nan gradients. I don't see a way to add an eta to the norm to prevent this.
## To Reproduce
Steps to reproduce the behavior:
```
w = torch.zeros(1, 3)
v = w / w.norm()
g = w.norm()
v.requires_grad = True
g.requires_grad = True
torch._weight_norm(v, g).matmul(torch.randn(3,1)).backward()
v.grad
> tensor([[nan, nan, nan]])
g.grad
> tensor(nan)
```
I'm encountering nan's during backprop while training a network with weight normalization. From this seemingly related thread, it sounds like the advice is to add an eta to the norm, but in this case the norm is computed inside PyTorch's C++ implementation and I don't see an obvious way to do this.
## Expected behavior
I expect there to be a way to generate non-nan gradients for weight-norm weights that are zero-filled.
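For reference, the standard fix is to bound the norm away from zero before dividing. A pure-Python sketch of the idea (the eps value is an arbitrary choice, and the helper is illustrative only; `torch._weight_norm` itself exposes no such parameter, which is the point of this issue):

```python
import math

def safe_normalize(w, eps=1e-12):
    """Split w into (v, g) as weight norm does, but guard the division so
    an all-zero w yields zeros instead of nan."""
    norm = math.sqrt(sum(x * x for x in w))
    g = norm
    v = [x / max(norm, eps) for x in w]   # the guard weight_norm lacks
    return v, g

v, g = safe_normalize([0.0, 0.0, 0.0])
print(v, g)                      # [0.0, 0.0, 0.0] 0.0 -- finite, no nan
assert all(x == x for x in v)    # x == x is False only for nan
```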
## Environment
- PyTorch Version (e.g., 1.0): 1.0.0
- OS (e.g., Linux): Mac and Centos
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.6
- CUDA/cuDNN version: None
- GPU models and configuration: None
- Any other relevant information:
## Additional context
cc @albanD @mruberry | module: nn,triaged,module: NaNs and Infs,module: norms and normalization | low | Critical |
431,691,614 | rust | thumbv7a-pc-windows-msvc cannot find cl.exe when trying to expand c file for openssl-sys | I'm working on Windows 10, building Azure iot-edge for arm32. I was able to compile rust for arm successfully using the command below:
`c:\python27\python.exe x.py build --host x86_64-pc-windows-msvc --build x86_64-pc-windows-msvc --target thumbv7a-pc-windows-msvc`
However, when I use the output to build iotedge, which depends on openssl-sys (0.9.36 is the version we use), the build script at https://github.com/sfackler/rust-openssl/blob/openssl-sys-v0.9.36/openssl-sys/build/main.rs#L420-L428 tries to expand a C file. In the `try_expand` implementation, https://github.com/alexcrichton/cc-rs/blob/1.0.15/src/lib.rs#L2119-L2135, the spawn fails because it does not know where to find an x64-hosted x64 cl.exe for the expansion.
So it looks like cargo does set up the environment correctly for the x64-to-arm compilation; however, crates are having a hard time locating the x64-to-x64 MSVC toolchain. I think this is something in Rust itself that needs to be fixed, and I'd appreciate any pointers to potential locations inside Rust where I can dig further.
p.s. I ended up with adding the folder of x64-x64 cl.exe into PATH, to be specific (%ProgramFiles(x86)%\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\bin\Hostx64\x64), and I was able to workaround this issue and built everything fine.
Thanks
Chandler
| O-Arm,T-compiler,O-windows-msvc,C-bug | low | Minor |
431,710,420 | godot | Remember the value of the animation timeline scale slider for each animation | - Create an animation.
- Drag the timeline scale slider to some non-default value.
- Save the project.
- Close the project.
- Reopen the project. The value that was set with the slider has been forgotten. The expected behaviour is that the value is saved in e.g. the project file or some file specific to the project. | enhancement,topic:editor,usability | low | Minor |
431,717,149 | pytorch | pytorch.version.cuda is None when compiling with CUDA support | Compiling PyTorch from sources (487388d8) ends up with torch.version.cuda = None leading to errors when trying to import apex:
```
from apex import amp
File "/data/Workspace/miniconda3/envs/pytorch/lib/python3.7/site-packages/apex/__init__.py", line 2, in <module>
from . import amp
File "/data/Workspace/miniconda3/envs/pytorch/lib/python3.7/site-packages/apex/amp/__init__.py", line 1, in <module>
from .amp import init, half_function, float_function, promote_function,\
File "/data/Workspace/miniconda3/envs/pytorch/lib/python3.7/site-packages/apex/amp/amp.py", line 3, in <module>
from .lists import functional_overrides, torch_overrides, tensor_overrides
File "/data/Workspace/miniconda3/envs/pytorch/lib/python3.7/site-packages/apex/amp/lists/torch_overrides.py", line 67, in <module>
if utils.get_cuda_version() >= (9, 1, 0):
File "/data/Workspace/miniconda3/envs/pytorch/lib/python3.7/site-packages/apex/amp/utils.py", line 9, in get_cuda_version
return tuple(int(x) for x in torch.version.cuda.split('.'))
AttributeError: 'NoneType' object has no attribute 'split'
``` | needs reproduction,module: build,module: cuda,triaged | low | Critical |
431,746,196 | go | regexp: improve test coverage | There are some areas of the regexp module that are not covered by the tests, as extensive as they already are.
Mostly, they include some prefix stuff (that may have caused #30511 & #30425), some utf8 code paths, some case folding, some paths in onepass and the RuneReader input interface.
Does the idea of trying to run all test regexps through all 3 matchers (onepass when possible, backtrack, nfa) and all 3 input types (string, bytes, reader) sound good? We should aim for each of them being individually correct, and this would help adding new ones in the future (dfa!).
I'm starting work on a CL. I want to throw in some of the fuzzing I did before in #21463, should I add it here or open a separate issue/CL?
Thanks! | Testing,NeedsFix | low | Minor |
431,771,222 | rust | Suggestion for improved nested trait error messages when not implemented. | Hi,
I recently ran into a confusingly communicated error message while developing an actix-web application. The error was:
```
error[E0277]: the trait bound `fn((actix_web::Json<proto::LoginRequest>, actix_web::HttpRequest)) -> actix_web::HttpResponse {login}: actix_web::with::WithFactory<_, AppState<'_>, _>` is not satisfied
--> src/main.rs:117:18
|
117 | .with_config(login, |((cfg, _),)| {
| ^^^^^^^^^^^ the trait `actix_web::with::WithFactory<_, AppState<'_>, _>` is not implemented for `fn((actix_web::Json<proto::LoginRequest>, actix_web::HttpRequest)) -> actix_web::HttpResponse {login}`
```
The relevant code parts are:
```rust
fn login((lgn, state): (Json<LoginRequest>, HttpRequest<AppState>)) -> HttpResponse {
HttpResponse::Ok().json(())
}
#[derive(Debug, Serialize)]
struct Data {
data: u64
}
```
This error was unhelpful for resolving the problem on a number of levels, and it was only when asking for help that I was told how to fix the cause. The underlying error is that `WithFactory` can't be implemented, as `FromRequest` isn't implemented on `(A, B)` where `A: FromRequest, B: FromRequest`. In my case, `A` did not implement `FromRequest`, and this was due to the lack of `DeserializeOwned` on `Data`.
A more constructive error message in this situation would have been:
```
the trait `actix_web::with::WithFactory<_, AppState, _>` is not implemented for `fn((actix_web::Json<Data>, actix_web::State<AppState>)) -> actix_web::HttpResponse {register}`
the trait `FromRequest` is not implemented for `(actix_web::Json<Data>, actix_web::State<AppState>)`
the trait `FromRequest` is not implemented for `actix_web::Json<Data>`
the trait `DeserializeOwned` is not implemented for `Data`
```
This could be found easily by checking the traits on each argument, and then recursing into that type and checking why the outer trait is not implemented. For example, (A, B) can't be FromRequest, because A is not FromRequest (but B is) so you would descend into A to determine why it does not implement FromRequest.
It was posed to me that this is not possible for rustc to solve, and that IDEs would abstract the problem away. First, given the other amazing advances in rustc, I think this kind of error communication can be built :). Second, not everyone uses an IDE, and an IDE may not have the features required to display an error like this. It would also be difficult to expect users who are requesting assistance or just getting started to configure or use IDEs, and automated systems like CI should be able to report detailed errors when they occur.
I would also guess that this style of recursive trait error reporting would be of great use to a library like Futures, and many other applications. It would certainly improve accessibility to traits which often are seen as a deeply complex part of the rust ecosystem.
I hope this helps, thanks, | A-diagnostics,A-trait-system,T-compiler,D-terse,S-needs-repro | low | Critical |
431,776,619 | rust | x.py's naming of stages is confusing | Rust has the standard multi-stage structure that all bootstrapping compilers have.
A newcomer who knows about compiler stages will find this familiar, until they run a command like `./x.py build --stage 1` and get output like this:
```
Building stage0 std artifacts
...
Copying stage0 std from stage0
Building stage0 test artifacts
...
Copying stage0 test from stage0
Building stage0 compiler artifacts
...
Copying stage0 rustc from stage0
Building stage0 codegen artifacts
...
Assembling stage1 compiler
Building stage1 std artifacts
...
Copying stage1 std from stage1
Building stage1 test artifacts
...
Copying stage1 test from stage1
Building stage1 compiler artifacts
...
Copying stage1 rustc from stage1
Building stage1 codegen artifacts
...
Building rustdoc for stage1
...
```
For a newcomer, this is completely bizarre.
- Why is it building stage 0? Isn't stage 0 the compiler you download?
- How does it assemble the stage 1 compiler before building the stage 1 artifacts?
- Why is it building two stages? Shouldn't it only build one stage?
The key to understanding this is that `x.py` uses a surprising naming convention:
- A "stage N artifact" is an artifact that is *produced* by the stage N compiler.
- The "stage (N+1) compiler" is assembled from "stage N artifacts".
- A `--stage N` flag means build *with* stage N.
Somebody had to explain this to me when I started working on Rust. I have since had to explain it to multiple newcomers. Even though I understand it now, I still find it very confusing, and I have to think carefully about it all.
Here's a naming convention that makes more sense to me:
- A "stage N artifact" is an artifact that is produced by the stage (N-1) compiler.
- The "stage N compiler" is assembled from "stage N artifacts".
- A `--stage N` flag means build stage N, using stage (N-1).
That way, a command like `./x.py build --stage 1` would produce output like this:
```
Building stage1 std artifacts
...
Copying stage1 std from stage1
Building stage1 test artifacts
...
Copying stage1 test from stage1
Building stage1 compiler artifacts
...
Copying stage1 rustc from stage1
Building stage1 codegen artifacts
...
Assembling stage1 compiler
```
Is there any appetite for this change? I realize it would be invasive and people would have to update their aliases, build scripts, etc. But it might be worthwhile to avoid the ongoing confusion to both newcomers and experienced contributors. Deprecating the word "stage" in favour of "phase" could help with the transition. | T-bootstrap,A-contributor-roadblock,C-discussion | medium | Critical |
431,779,161 | rust | lifetime error mentions implemented trait instead of the associated type and the mismatched type. | In [this reddit thread](https://www.reddit.com/r/rust/comments/bbswe6/confusing_error_message_cannot_infer_lifetime/) a user commented on their confusion with an error message.
The problem is that the lifetime error for this code mentions `Iterator` twice in the error message instead of the types whose lifetimes don't match.
The code that triggers the bug:
```rust
use std::marker::PhantomData;
struct AnIterator<'a> {
first: String,
_marker:PhantomData<&'a str>
}
impl<'a> Iterator for AnIterator<'a> {
type Item = &'a str;
fn next<'b>(&'b mut self) -> Option<Self::Item> {
Some(self.first.as_str())
}
}
```
The error message:
```
Compiling playground v0.0.1 (/playground)
error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
--> src/lib.rs:12:25
|
12 | Some(self.first.as_str())
| ^^^^^^
|
note: first, the lifetime cannot outlive the lifetime 'b as defined on the method body at 11:13...
--> src/lib.rs:11:13
|
11 | fn next<'b>(&'b mut self) -> Option<Self::Item> {
| ^^
note: ...so that reference does not outlive borrowed content
--> src/lib.rs:12:14
|
12 | Some(self.first.as_str())
| ^^^^^^^^^^
note: but, the lifetime must be valid for the lifetime 'a as defined on the impl at 8:6...
--> src/lib.rs:8:6
|
8 | impl<'a> Iterator for AnIterator<'a> {
| ^^
= note: ...so that the types are compatible:
expected std::iter::Iterator
found std::iter::Iterator
error: aborting due to previous error
For more information about this error, try `rustc --explain E0495`.
error: Could not compile `playground`.
```
Here in the last note, instead of saying `expected std::iter::Iterator` / `found std::iter::Iterator`,
it should say `expected &'a str` / `found &'b str`.
It prints the correct error message if I change ` ->Option<Self::Item>` to `->Option<&'a str>`. | C-enhancement,A-diagnostics,A-lifetimes,T-compiler | low | Critical |
431,784,704 | youtube-dl | Authenticated GDC Vault 2019 content throws exception | ---
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2019.04.07**
- [x] At least skimmed through the [README](https://github.com/ytdl-org/youtube-dl/blob/master/README.md)
- [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### ERROR: Authentication Failure / Redirect
#### "Unsupported URL: https://www.gdcvault.com/login"
I'm able to access the requisite content in a web browser, but youtube-dl appears to fail when authenticating due to a login redirect (?)
As there appear to be known issues with the GDCVault extractor (2019), I'm using the generic extractor as advised in other reports.
#### Log Output: Failure
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--force-generic-extractor', u'--username', u'PRIVATE', u'--password', u'PRIVATE', u'--verbose', u'https://www.gdcvault.com/play/1025986/Creating-a-Deeper-Emotional-Connection']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2019.04.07
[debug] Python version 2.7.15 (CPython) - Windows-10-10.0.17763
[debug] exe versions: none
[debug] Proxy map: {}
[generic] Creating-a-Deeper-Emotional-Connection: Requesting header
[redirect] Following redirect to https://www.gdcvault.com/login
[generic] login: Requesting header
WARNING: Forcing on generic information extractor.
[generic] login: Downloading webpage
[generic] login: Extracting information
ERROR: Unsupported URL: https://www.gdcvault.com/login
Traceback (most recent call last):
File "c:\python27\lib\site-packages\youtube_dl\extractor\generic.py", line 2337, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "c:\python27\lib\site-packages\youtube_dl\compat.py", line 2551, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "c:\python27\lib\site-packages\youtube_dl\compat.py", line 2540, in _XML
parser.feed(text)
File "c:\python27\lib\xml\etree\ElementTree.py", line 1659, in feed
self._raiseerror(v)
File "c:\python27\lib\xml\etree\ElementTree.py", line 1523, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 51, column 22
Traceback (most recent call last):
File "c:\python27\lib\site-packages\youtube_dl\YoutubeDL.py", line 796, in extract_info
ie_result = ie.extract(url)
File "c:\python27\lib\site-packages\youtube_dl\extractor\common.py", line 529, in extract
ie_result = self._real_extract(url)
File "c:\python27\lib\site-packages\youtube_dl\extractor\generic.py", line 3320, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: https://www.gdcvault.com/login
```
As a control case, here by contrast is an example of output that works; the only difference in the passed arguments is the target URL. Unlike the failure above, the content at the URL below does not require authentication, and consequently youtube-dl succeeds.
#### Example Log Output: Success
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--force-generic-extractor', u'--username', u'PRIVATE', u'--password', u'PRIVATE', u'--verbose', u'https://www.gdcvault.com/play/1026496/-Marvel-s-Spider-Man']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2019.04.07
[debug] Python version 2.7.15 (CPython) - Windows-10-10.0.17763
[debug] exe versions: none
[debug] Proxy map: {}
[generic] -Marvel-s-Spider-Man: Requesting header
WARNING: Forcing on generic information extractor.
[generic] -Marvel-s-Spider-Man: Downloading webpage
[generic] -Marvel-s-Spider-Man: Extracting information
[Kaltura] 1_bln945rf: Downloading video info JSON
[Kaltura] 1_bln945rf: Downloading m3u8 information
[debug] Default format spec: best/bestvideo+bestaudio
[debug] Invoking downloader on u'http://cdnapi.kaltura.com/p/1670711/sp/167071100/playManifest/entryId/1_bln945rf/format/url/protocol/http/flavorId/1_tl9ywps0?referrer=aHR0cHM6Ly93d3cuZ2RjdmF1bHQuY29t'
[download] Destination: Marvel's Spider-Man' - A Technical Postmortem-1_bln945rf.mp4
[download] 100% of 1.61GiB in 00:27
```
### Final Note: Failure Testing
A possible clue: when I attempt to provide bogus credentials, I get output identical to the above two cases -- free content works, but secure content fails with the same URL error -- perhaps indicating that authentication is altogether non-functional, since valid and bogus credentials appear to make no difference
*Note that I do have valid credentials, which work as expected when entering them directly onto the website and viewing content in the embedded player*
---
| account-needed | low | Critical |
431,789,547 | rust | rustdoc: module flag to prevent code blocks without a lang item from counting as rust | So, I've got some code being generated by `bindgen`. The C source has `doxygen` doc comments in it, which `bindgen` faithfully converts over into doc comments. However, some of the `doxygen` content is indented with spaces (numbered lists and the like), so when converted into rustdoc it becomes a code block. These code blocks are picked up as "doctests" by `cargo test`, which then of course fail to build because they're not Rust code at all.
I'm told that, internally, this is all just code block elements by the time rustdoc sees it, you can't tell which code block was made via a backtick fence and which was made via indentation. However, you _can_ tell if a lang was declared on the code block.
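To make the failure mode concrete, here is a minimal sketch (the function is made up; the indented list mimics `doxygen` content converted by `bindgen`):

```rust
/// Opens the device. The converted doxygen steps below are indented four
/// spaces, so Markdown turns them into a code block with no language tag,
/// and rustdoc then collects that block as a Rust doctest:
///
///     1. open the device
///     2. read the header
///
/// `cargo test` will try to compile "1. open the device" as Rust and fail,
/// even though this file itself compiles cleanly.
pub fn open_device() {}
```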
So what I need is an opt-in flag that you can declare module-wide (applying to child modules and such) so that any code block _without_ a declared language is not treated as Rust code by default, and thus won't become a doctest. | T-rustdoc,C-feature-request | low | Major |
431,792,665 | TypeScript | Add option to include default typeRoots when overriding typeRoots | ## Search Terms
default typeRoots
## Suggestion
By default, when `typeRoots` is unspecified, TypeScript looks for `node_modules/@types` in the project's directory and all ancestor directories, similar to Node's module resolution algorithm. This is covered in [the tsconfig.json docs](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html) and is implemented [here](https://github.com/Microsoft/TypeScript/blob/b534fb4849eca0a792199fb6c0cb8849fece1cfd/src/compiler/moduleNameResolver.ts#L240-L260).
When specifying `typeRoots` in tsconfig.json, this overrides the default `typeRoots` entirely. You can include `"node_modules/@types"` in your `typeRoots` array, but that does not include ancestor directories.
This is a feature request to provide a way, such as a new option, to include the default type roots in addition to the overridden values.
## Use Cases
The primary use case is within a repository using Yarn workspaces. Yarn hoists npm packages to the workspace root. This means you can end up with a directory hierarchy like this:
```
.
├── node_modules
│ └── @types
│ └── dependency
└── packages
└── example
├── example.ts
├── package.json
└── tsconfig.json
```
If `tsconfig.json` wants to include `node_modules/@types` in the workspace root (in addition to other type declaration directories specified in `typeRoots`), it needs to look like this:
```
{
"compilerOptions": {
"typeRoots": ["../../node_modules/@types", ...]
}
}
```
## Examples
This feature request is a quality-of-life improvement to instead allow for:
```
{
"compilerOptions": {
"typeRoots": [...],
"includeDefaultTypeRoots": true
}
}
```
The default value of this option would be false and would not be a breaking change.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). | Suggestion,Awaiting More Feedback | high | Critical |
431,804,973 | pytorch | Support memoryview() method on torch.Tensor | ## 🚀 Feature
Support .tobytes() method on torch.Tensor
## Motivation
If I need to extract the raw bytes from a Tensor, I need to convert to numpy first and then use `tobytes`. It would be nice to have a tobytes() method for the Tensors themselves. For example, if I have a jpg image stored in a ByteTensor and I would like to open it using PIL, I need to do something like this currently:
```python
data = tensor.numpy().tobytes()
img = Image.open(BytesIO(data), mode='r')
```
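In the meantime, a tiny helper can hide the round-trip. This is my own sketch (`tensor_to_bytes` is not a real PyTorch API):

```python
import torch

def tensor_to_bytes(t: torch.Tensor) -> bytes:
    # Stand-in for the requested Tensor.tobytes(): make the tensor contiguous,
    # move it to CPU, then delegate to numpy's tobytes() for the raw bytes.
    return t.contiguous().cpu().numpy().tobytes()

print(tensor_to_bytes(torch.arange(4, dtype=torch.uint8)))  # b'\x00\x01\x02\x03'
```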
| feature,triaged,module: numpy | low | Major |
431,807,828 | create-react-app | Should the CRA ESLint config warn for unused vars? | I just started playing with React through create-react-app and was surprised when I wasn't warned about unused variables.
I had a look [at the ESLint config](https://github.com/facebook/create-react-app/blob/master/packages/eslint-config-react-app/index.js#L172) and it seems like it should. However, it isn't working for me, and someone else on Reactiflux Discord confirmed it wasn't working for them either.
My setup is VS Code 1.33.0 on Windows 10 with the ESLint 1.8.2 extension on a fresh create-react-app app. I added this code in the `render` method of the `App` component in _App.js_:
```js
const a = 123;
eval('oh yeah');
```
VSCode shows a squiggly green line under the `eval` as an ESLint warning, so I know that's working. Nothing for the unused variable however. Same deal if I run `npx eslint src/`:
```
$ npx eslint src/
C:\Users\alext\src\react-tutorial\src\App.js
9:3 warning eval can be harmful no-eval
✖ 1 problem (0 errors, 1 warning)
```
Should this be causing an ESLint warning with CRA? | issue: needs investigation | low | Critical |
431,839,535 | pytorch | Make operators like logsumexp and cumsum operate over dimension 0 by default (or at least for 1D arrays) | It's pretty ridiculous that, for a 1D input like `x = torch.rand(5)`, `torch.logsumexp` still requires you to specify you're operating over dimension 0.
In particular `torch.logsumexp(x)` will error, saying that it needs a dim argument. `torch.logsumexp(x, 0)` works.
My suggestion is that, at the very least for 1D inputs, the default dim you operate over is set to 0. This is a problem for `logsumexp` and `cumsum`, just to name two.
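To illustrate the ergonomics, here is today's behaviour next to the tiny wrapper one ends up writing (the wrapper is my own, not a proposed API):

```python
import torch

x = torch.rand(5)

try:
    torch.logsumexp(x)  # historically a TypeError: dim is required even in 1-D
except TypeError:
    pass

def logsumexp1d(t):
    # What this issue asks for by default: reduce a 1-D tensor over dim 0.
    return torch.logsumexp(t, 0)

print(logsumexp1d(x))  # same result as torch.logsumexp(x, 0)
```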
cc @jlin27 @mruberry @rgommers | module: docs,triaged,enhancement,module: numpy,module: reductions | low | Critical |
431,841,668 | pytorch | Improve unit test coverage of torch.unique | Currently it only covers `torch.unique(tensor, ...)`, but not `tensor.unique(...)`. We should improve this after the https://github.com/pytorch/pytorch/pull/18649 series gets merged.
cc @mruberry @VitalyFedyunin | module: tests,triaged,module: sorting and selection | low | Minor |
431,847,061 | create-react-app | Need support for Typescript Project references | Recently typescript has added the feature of project references:
https://www.typescriptlang.org/docs/handbook/project-references.html
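For context, consuming another project through references looks roughly like this; the paths, names, and the `composite` setting on the referenced project are illustrative of the feature, not taken from a real setup:

```jsonc
// packages/app/tsconfig.json (sketch)
{
  "compilerOptions": {
    "composite": true,
    "outDir": "build"
  },
  // Each referenced project must itself be built with "composite": true
  "references": [
    { "path": "../shared" }
  ]
}
```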
In projects created with create-react-app (with the --typescript switch), we are not able to use this TypeScript feature because babel complains that files outside the 'src' folder cannot be referenced for compilation.
I know that there are other workarounds such as
- publishing the other projects as modules to the npm repository and then we can install them in the create-react-app
- we can play around with 'npm link', but this approach is not clean. We have to simulate the project references, instead of using the actual references feature of typescript with this approach.
- otherwise, we can eject create-react-app, and then customize the options.
- the last option is to do away with create-react-app and build the tool-chain from scratch, which I tried and got working with ts-loader instead of babel. In this approach, I had to set the 'projectReferences=true' option for ts-loader inside webpack.config.js. This worked smoothly. But then I'm losing out on create-react-app, which I don't want.
But none of these ways are clean, and each involves a trade-off of one kind or another.
Coming from a C# and Java background, I'm very used to the concept of project references. Many will suggest not going this route, but I do need it. If it were not useful enough, TypeScript wouldn't have come up with it in the first place.
Also, I'm new to the web development world (just a month old), so pardon me for my lack of knowledge and if I'm asking for something very trivial and already exists.
Thanks
-Ashish
| issue: proposal | high | Critical |
431,855,217 | neovim | OptionSet should not be fired during 'modeline' (E12) |
## Steps to reproduce
```
nvim --clean +'au OptionSet filetype lua print()'
:h h
```
## Actual behaviour
When vim help is shown (e.g. <F1> or `:help`), the following error message appears, and after that every single keystroke triggers the error again, making neovim totally unusable.
```
Error detected while processing function coc#rpc#notify[2]..<SNR>108_notify:
line 7:
E12: Command not allowed from exrc/vimrc in current dir or tag search
Press ENTER or type command to continue
```

## Expected behaviour
The help page should open without any error.
| bug-vim | medium | Critical |
431,925,681 | pytorch | Suggest model.eval() in torch.no_grad (and vice versa) | ## 📚 Documentation
`model.eval()` and `with torch.no_grad` are both commonly used in evaluating a model.
[Confusion exists about whether setting `model.eval()` also sets `autograd.no_grad`](https://discuss.pytorch.org/t/does-model-eval-with-torch-set-grad-enabled-is-train-have-the-same-effect-for-grad-history/17183/5?u=ataraxy)
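For what it's worth, a minimal sketch of the combined pattern such wording could point at (the model is a stand-in):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2), nn.Dropout(0.5))  # stand-in model

model.eval()                 # switches Dropout/BatchNorm to inference behavior
with torch.no_grad():        # separately disables gradient tracking
    out = model(torch.randn(1, 4))

# Neither call implies the other:
assert not model.training      # flipped by eval()
assert not out.requires_grad   # caused by no_grad(), not by eval()
```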
Would you accept a PR which suggests usage of the other, with words like:
* If evaluating a model's performance, using Module.eval() may also be useful.
* If evaluating a model's performance, using autograd.no_grad may also be useful. | module: docs,triaged | low | Major |
431,943,151 | go | doc: net/http/httputil: add example for reuse of Director | **Not a bug, it's a ~~API change proposal~~ documentation enhancement**:
# Rationale
Since Go 1.12 further enhanced the ReverseProxy implementation, a closed-source library (and many more open-source ones) that provide a reverse proxy for HTTP and WS can be deprecated.
The only difference in the closed-source library was that it offered a more convenient hook to modify a request, which allowed me to focus only on the VHOST logic.
The change here would noticeably reduce boilerplate and save developers from re-implementing logic and private function(s) that `Director` already implements/uses.
**Update:** As per discussion, documentation is going to be enhanced. Ref: https://github.com/golang/go/issues/31406#issuecomment-482354227
# So, in terms of Golang code
## Current state:
```go
forwardProxy := httputil.NewSingleHostReverseProxy(target)
forwardProxy.Director = func(req *http.Request) {
// NOTE: Just a copy of `ReverseProxy`'s default `Director`
req.URL.Scheme = target.Scheme
req.URL.Host = target.Host
// NOTE: `singleJoiningSlash` must also be defined again by developer
req.URL.Path = singleJoiningSlash(target.Path, req.URL.Path)
if targetQuery == "" || req.URL.RawQuery == "" {
req.URL.RawQuery = targetQuery + req.URL.RawQuery
} else {
req.URL.RawQuery = targetQuery + "&" + req.URL.RawQuery
}
if _, ok := req.Header["User-Agent"]; !ok {
// explicitly disable User-Agent so it's not set to default value
req.Header.Set("User-Agent", "")
}
// NOTE: End of copied
// NOTE: Custom developer logic just begins to start here
...
}
forwardProxy.Transport = &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
}
```
## (**Not needed**) With an addition of `ModifyRequest` hook
```go
forwardProxy := httputil.NewSingleHostReverseProxy(target)
forwardProxy.ModifyRequest = func(req *http.Request) {
// NOTE: Custom developer logic just begins to start here
...
}
forwardProxy.Transport = &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
}
```
## Documentation to be updated with
```go
forwardProxy := httputil.NewSingleHostReverseProxy(target)
originalDirector := forwardProxy.Director
forwardProxy.Director = func(req *http.Request) {
originalDirector(req)
// NOTE: Custom developer logic just begins to start here
...
}
...
```
# Pros and cons as I see them (with the caveat that I'm just a Golang-using developer):
## Pros
* Less boilerplate
* No need to re-implement logic and re-define private functions used in `reverse_proxy`.
* Less error-prone since the developer only needs to think what logic to add instead of checking the existing implementation of `Director`.
* Not a breaking-change in API.
## Cons
~~* More than one way to do things.~~
~~* Needs a bit better documentation to signify the difference between `Director` and `ModifyRequest`.~~
ref: #31393
cc: @agnivade | Documentation,NeedsDecision | low | Critical |
431,968,549 | rust | Variance should perhaps take into account 'static bounds. | For example, when trying to build something resembling a pointer w/ optional metadata:
```rust
struct Ptr<T: ?Sized + Pointee> {
addr: usize,
meta: T::Meta,
}
trait Pointee {
type Meta: 'static + Copy;
}
impl<T> Pointee for T {
type Meta = ();
}
impl<T> Pointee for [T] {
type Meta = usize;
}
```
`Ptr<T>` ends up invariant over `T` *even though* lifetimes in `T` *cannot possibly* affect `T::Meta` (which has a `'static` bound), and that results in errors like these:
```
error[E0308]: mismatched types
--> src/lib.rs:16:56
|
16 | fn covariant<'a>(p: Ptr<&'static ()>) -> Ptr<&'a ()> { p }
| ^ lifetime mismatch
|
= note: expected type `Ptr<&'a ()>`
found type `Ptr<&'static ()>`
note: the lifetime 'a as defined on the function body at 16:14...
--> src/lib.rs:16:14
|
16 | fn covariant<'a>(p: Ptr<&'static ()>) -> Ptr<&'a ()> { p }
| ^^
= note: ...does not necessarily outlive the static lifetime
```
cc @rust-lang/wg-traits | T-lang | low | Critical |
431,976,746 | go | cmd/vet: +build comment error is confusingly worded | Please answer these questions before submitting your issue. Thanks!
#### What did you do?
Hello up there. I was updating my [tracing/xruntime](https://godoc.org/lab.nexedi.com/kirr/go123/tracing/internal/xruntime) package for Go1.12 and hit a test error:
```
.../src/lab.nexedi.com/kirr/go123/tracing/internal/xruntime$ go test
# lab.nexedi.com/kirr/go123/tracing/internal/xruntime
./runtime_g_amd64.s:3:1: +build comment must appear before package clause and be followed by a blank line
FAIL lab.nexedi.com/kirr/go123/tracing/internal/xruntime [build failed]
```
The error here complains about the `+build` directive in the assembly file:
---- 8< ---- (`runtime_g_amd64.s`)
```asm
#include "textflag.h"
// +build amd64 amd64p
// func getg() *g
TEXT ·getg(SB),NOSPLIT,$0-8
MOVQ (TLS), R14
MOVQ R14, ret+0(FP)
RET
```
(https://lab.nexedi.com/kirr/go123/blob/7ee2de42/tracing/internal/xruntime/runtime_g_amd64.s)
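For completeness: the new error goes away if the constraint is hoisted above the `#include` and followed by a blank line. A sketch of the rearranged file (the build-tag list is copied verbatim from the snippet above):

```asm
// +build amd64 amd64p

#include "textflag.h"

// func getg() *g
TEXT ·getg(SB),NOSPLIT,$0-8
MOVQ (TLS), R14
MOVQ R14, ret+0(FP)
RET
```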
It was working with Go1.11 and previous releases.
#### What did you expect to see?
Build and test succeed; test pass, as with e.g. Go1.11:
```
.../src/lab.nexedi.com/kirr/go123/tracing/internal/xruntime$ go1.11 test -v
=== RUN TestStartStopTheWorld
--- PASS: TestStartStopTheWorld (1.00s)
PASS
ok lab.nexedi.com/kirr/go123/tracing/internal/xruntime 1.003s
```
#### What did you see instead?
```
.../src/lab.nexedi.com/kirr/go123/tracing/internal/xruntime$ go1.12 test -v
# lab.nexedi.com/kirr/go123/tracing/internal/xruntime
./runtime_g_amd64.s:3:1: +build comment must appear before package clause and be followed by a blank line
FAIL lab.nexedi.com/kirr/go123/tracing/internal/xruntime [build failed]
```
#### System details
```
go version go1.12.3 linux/amd64
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/kirr/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/kirr/go"
GOPROXY=""
GORACE=""
GOROOT="/home/kirr/src/tools/go/go"
GOTMPDIR=""
GOTOOLDIR="/home/kirr/src/tools/go/go/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
GOROOT/bin/go version: go version go1.12.3 linux/amd64
GOROOT/bin/go tool compile -V: compile version go1.12.3
uname -sr: Linux 4.19.0-4-amd64
Distributor ID: Debian
Description: Debian GNU/Linux buster/sid
Release: testing
Codename: buster
/lib/x86_64-linux-gnu/libc.so.6: GNU C Library (Debian GLIBC 2.28-8) stable release version 2.28.
gdb --version: GNU gdb (Debian 8.2.1-2) 8.2.1
```
| help wanted,NeedsFix,Analysis | low | Critical |
432,007,288 | pytorch | LayerNorm is very slow (almost frozen) in CPU of multiprocessing | The context of the use case is doing ES over small network in multiprocessing.
It turns out that with `LayerNorm` it becomes extremely slow on CPU.
To reproduce the effect, the code is attached below:
```python
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import vector_to_parameters
from concurrent.futures import ProcessPoolExecutor
import time
import numpy as np

global use_ln
use_ln = True
global device
device = 'cpu'


class Model(nn.Module):
    def __init__(self, use_ln):
        super().__init__()
        self.use_ln = use_ln
        self.fc1 = nn.Linear(17, 64)
        if use_ln:
            self.ln1 = nn.LayerNorm(64)
        self.fc2 = nn.Linear(64, 64)
        if use_ln:
            self.ln2 = nn.LayerNorm(64)
        self.out = nn.Linear(64, 6)

    def forward(self, x):
        if self.use_ln:
            x = self.ln1(F.relu(self.fc1(x)))
            x = self.ln2(F.relu(self.fc2(x)))
        else:
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
        x = torch.tanh(self.out(x))
        return x


def initializer():
    print(f'Initializing, PID: {os.getpid()}')
    t = time.perf_counter()
    global model
    model = Model(use_ln).to(device)
    print(f'Finish initializing, PID: {os.getpid()}, taken {time.perf_counter() - t} s')


def fitness(solution):
    vector_to_parameters(solution.to(device), model.parameters())
    for i in range(10):
        for t in range(1000):
            observation = np.random.randn(1, 17)
            action = model(torch.from_numpy(observation).float().unsqueeze(0).to(device))
    print(f'PID: {os.getpid()}, finished')


with ProcessPoolExecutor(max_workers=16, initializer=initializer) as executor:
    t = time.perf_counter()
    num_param = sum([param.numel() for param in Model(use_ln).parameters() if param.requires_grad])
    solutions = [torch.randn(num_param) for _ in range(16)]
    list(executor.map(fitness, solutions))
    print(f'Total time: {time.perf_counter() - t}')
```
The benchmark is:
- **`use_ln=False, device='cpu'`:**
```
Initializing, PID: 2009192
Finish initializing, PID: 2009192, taken 0.001776786521077156 s
Initializing, PID: 2009194
Initializing, PID: 2009196
Finish initializing, PID: 2009194, taken 0.001952311024069786 s
Finish initializing, PID: 2009196, taken 0.0018356721848249435 s
Initializing, PID: 2009203
Finish initializing, PID: 2009203, taken 0.0015176795423030853 s
Initializing, PID: 2009210
Finish initializing, PID: 2009210, taken 0.0021672528237104416 s
Initializing, PID: 2009216
Initializing, PID: 2009226
Finish initializing, PID: 2009216, taken 0.0018634404987096786 s
Finish initializing, PID: 2009226, taken 0.001307731494307518 s
Initializing, PID: 2009232
Initializing, PID: 2009237
Finish initializing, PID: 2009232, taken 0.0021293386816978455 s
Finish initializing, PID: 2009237, taken 0.0014764349907636642 s
Initializing, PID: 2009242
Finish initializing, PID: 2009242, taken 0.0015203002840280533 s
Initializing, PID: 2009253
Finish initializing, PID: 2009253, taken 0.0014950130134820938 s
Initializing, PID: 2009259
Initializing, PID: 2009262
Finish initializing, PID: 2009259, taken 0.002156071364879608 s
Initializing, PID: 2009269
Finish initializing, PID: 2009272, taken 0.0021638404577970505 s
Finish initializing, PID: 2009262, taken 0.0012727994471788406 s
Finish initializing, PID: 2009269, taken 0.0021071340888738632 s
Initializing, PID: 2009272
Initializing, PID: 2009280
Finish initializing, PID: 2009280, taken 0.0012570079416036606 s
PID: 2009259, finished
PID: 2009232, finished
PID: 2009192, finished
PID: 2009242, finished
PID: 2009216, finished
PID: 2009196, finished
PID: 2009280, finished
PID: 2009203, finished
PID: 2009210, finished
PID: 2009262, finished
PID: 2009226, finished
PID: 2009253, finished
PID: 2009272, finished
PID: 2009237, finished
PID: 2009194, finished
PID: 2009269, finished
Total time: 3.3697756361216307
```
- **`use_ln=True, device='cpu'`:**
```
Initializing, PID: 2013343
Initializing, PID: 2013346
Finish initializing, PID: 2013343, taken 0.002532508224248886 s
Initializing, PID: 2013354
Initializing, PID: 2013359
Finish initializing, PID: 2013346, taken 0.001596800982952118 s
Finish initializing, PID: 2013354, taken 0.002076219767332077 s
Finish initializing, PID: 2013359, taken 0.0015829745680093765 s
Initializing, PID: 2013364
Finish initializing, PID: 2013364, taken 0.0018675383180379868 s
Initializing, PID: 2013369
Initializing, PID: 2013372
Finish initializing, PID: 2013369, taken 0.00201280415058136 s
Finish initializing, PID: 2013372, taken 0.0018042325973510742 s
Initializing, PID: 2013379
Finish initializing, PID: 2013379, taken 0.0017306245863437653 s
Initializing, PID: 2013384
Finish initializing, PID: 2013384, taken 0.0018676463514566422 s
Initializing, PID: 2013389
Initializing, PID: 2013392
Finish initializing, PID: 2013389, taken 0.0027825143188238144 s
Finish initializing, PID: 2013392, taken 0.0016419757157564163 s
Initializing, PID: 2013397
Finish initializing, PID: 2013397, taken 0.0018332768231630325 s
Initializing, PID: 2013402
Finish initializing, PID: 2013402, taken 0.0018747877329587936 s
Initializing, PID: 2013409
Finish initializing, PID: 2013409, taken 0.0016566626727581024 s
Initializing, PID: 2013414
Finish initializing, PID: 2013414, taken 0.0016422048211097717 s
Initializing, PID: 2013419
Finish initializing, PID: 2013419, taken 0.0017681997269392014 s
...
```
**Completely frozen after waiting a few minutes, and `htop` shows all 80 cores CPU are 100% busy**
- **`use_ln=False, device='cuda'`:**
```
Initializing, PID: 2010102
Initializing, PID: 2010103
Initializing, PID: 2010108
Initializing, PID: 2010111
Initializing, PID: 2010114
Initializing, PID: 2010117
Initializing, PID: 2010119
Initializing, PID: 2010122
Initializing, PID: 2010126
Initializing, PID: 2010129
Initializing, PID: 2010132
Initializing, PID: 2010135
Initializing, PID: 2010138
Initializing, PID: 2010141
Initializing, PID: 2010144
Initializing, PID: 2010147
Finish initializing, PID: 2010111, taken 17.613565219566226 s
Finish initializing, PID: 2010102, taken 17.802420677617192 s
Finish initializing, PID: 2010114, taken 17.798802657052875 s
Finish initializing, PID: 2010103, taken 17.81860247440636 s
Finish initializing, PID: 2010117, taken 17.89303245767951 s
Finish initializing, PID: 2010108, taken 17.97427663579583 s
Finish initializing, PID: 2010132, taken 18.15790667384863 s
Finish initializing, PID: 2010126, taken 18.16597461514175 s
Finish initializing, PID: 2010141, taken 18.14934244006872 s
Finish initializing, PID: 2010122, taken 18.205005967989564 s
Finish initializing, PID: 2010119, taken 18.209256762638688 s
Finish initializing, PID: 2010129, taken 18.243990190327168 s
Finish initializing, PID: 2010147, taken 18.25442030467093 s
Finish initializing, PID: 2010135, taken 18.283051615580916 s
Finish initializing, PID: 2010138, taken 18.324560744687915 s
Finish initializing, PID: 2010144, taken 18.545221898704767 s
PID: 2010111, finished
PID: 2010114, finished
PID: 2010102, finished
PID: 2010103, finished
PID: 2010117, finished
PID: 2010108, finished
PID: 2010126, finished
PID: 2010122, finished
PID: 2010141, finished
PID: 2010119, finished
PID: 2010132, finished
PID: 2010147, finished
PID: 2010129, finished
PID: 2010135, finished
PID: 2010138, finished
PID: 2010144, finished
Total time: 35.38426893763244
```
- **`use_ln=True, device='cuda'`:**
```
Initializing, PID: 2010657
Initializing, PID: 2010658
Initializing, PID: 2010663
Initializing, PID: 2010666
Initializing, PID: 2010669
Initializing, PID: 2010670
Initializing, PID: 2010675
Initializing, PID: 2010678
Initializing, PID: 2010679
Initializing, PID: 2010684
Initializing, PID: 2010685
Initializing, PID: 2010690
Initializing, PID: 2010693
Initializing, PID: 2010694
Initializing, PID: 2010698
Initializing, PID: 2010702
Finish initializing, PID: 2010657, taken 16.785767970606685 s
Finish initializing, PID: 2010658, taken 17.48567765392363 s
Finish initializing, PID: 2010663, taken 17.629254495725036 s
Finish initializing, PID: 2010679, taken 17.669835798442364 s
Finish initializing, PID: 2010684, taken 17.806519005447626 s
Finish initializing, PID: 2010666, taken 17.94813333079219 s
Finish initializing, PID: 2010678, taken 17.9285822045058 s
Finish initializing, PID: 2010675, taken 17.933384394273162 s
Finish initializing, PID: 2010685, taken 17.93839507550001 s
Finish initializing, PID: 2010669, taken 17.989747492596507 s
Finish initializing, PID: 2010670, taken 18.010178688913584 s
Finish initializing, PID: 2010690, taken 18.07465230114758 s
Finish initializing, PID: 2010698, taken 18.061042606830597 s
Finish initializing, PID: 2010693, taken 18.082385893911123 s
Finish initializing, PID: 2010702, taken 18.17208438925445 s
Finish initializing, PID: 2010694, taken 18.283266445621848 s
PID: 2010657, finished
PID: 2010658, finished
PID: 2010663, finished
PID: 2010679, finished
PID: 2010684, finished
PID: 2010678, finished
PID: 2010675, finished
PID: 2010685, finished
PID: 2010670, finished
PID: 2010669, finished
PID: 2010666, finished
PID: 2010693, finished
PID: 2010698, finished
PID: 2010690, finished
PID: 2010702, finished
PID: 2010694, finished
Total time: 39.30256709083915
```
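The `htop` observation of all 80 cores pegged at 100% is consistent with intra-op thread oversubscription across the 16 worker processes; this is an assumption, not a confirmed diagnosis. If that is the cause, a common mitigation is to cap per-process threads, e.g. by calling `torch.set_num_threads(1)` at the top of `initializer()`. A minimal self-contained sketch:

```python
import torch
import torch.nn as nn

# Hypothetical mitigation (assumes the freeze comes from OpenMP thread
# oversubscription): cap each worker process to one intra-op thread,
# e.g. by calling this at the top of initializer().
torch.set_num_threads(1)

ln = nn.LayerNorm(64)
out = ln(torch.relu(torch.randn(1, 64)))
print(out.shape)  # torch.Size([1, 64])
```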
```
- PyTorch Version: 1.0.0.dev20190323
- OS: Ubuntu 18.04
- How you installed PyTorch: pip
- Python version: 3.7
``` | module: cpu,triaged | low | Major |
432,060,588 | rust | meta matches pseudo-identifiers | [This (playground)](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f0a01f8d82a2ce3c19b6ffac01a6050f) fails:
```rust
macro_rules! foo {
    ($($m:meta => $i:item)* _ => $j:item) => {};
}

foo! {
    _ => { fn f() -> i32 { 2 } }
}
fn main() { f(); }
```
```
error: local ambiguity: multiple parsing options: built-in NTs meta ('m') or 1 other option.
--> src/main.rs:6:5
|
6 | _ => { fn f() -> i32 { 2 } }
| ^
```
---
[This also fails (playground)](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=71db79668a74f60dd391cbebcedbc536):
```rust
macro_rules! foo {
    ($m:meta) => {};
    (_ => { $i:item }) => {
        $i
    };
}

foo! {
    _ => { fn f() -> i32 { 2 } }
}
fn main() { f(); }
```
```
error: expected identifier, found reserved identifier `_`
--> src/main.rs:9:5
|
9 | _ => { fn f() -> i32 { 2 } }
| ^ expected identifier, found reserved identifier
```
---
I believe both cases should work because `meta` should not match pseudo-identifiers.
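A possible workaround for the second case (a sketch, relying on the fact that `macro_rules!` arms are tried in order): putting the literal `_` arm first, so the reserved identifier never reaches the `$m:meta` matcher:

```rust
macro_rules! foo {
    // Literal `_` arm first: it is tried before the `meta` arm, so the
    // reserved identifier never reaches the `$m:meta` matcher.
    (_ => { $i:item }) => { $i };
    ($m:meta) => {};
}

foo! {
    _ => { fn f() -> i32 { 2 } }
}

fn main() {
    assert_eq!(f(), 2);
}
```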
cc @petrochenkov @Centril @nrc @alexcrichton | A-macros,T-lang,T-compiler,C-bug,S-has-mcve | low | Critical |
432,164,810 | flutter | Support clipOp for clipRRect | `Canvas.clipRect` accepts `clipOp: ClipOp` argument but `Canvas.clipRRect` does not.
Skia's https://skia.org/user/api/SkCanvas_Reference#SkCanvas_clipRRect does support clipOp argument so in Dart this can be also supported.
Same for `Canvas.clipPath`.
This is really handy for implementing fully transparent window on darkened layer. | c: new feature,engine,dependency: skia,c: proposal,P3,team-engine,triaged-engine | low | Minor |
432,169,934 | pytorch | [Feature Request] Common constants in the torch.* namespace | ## 🚀 Feature
Two feature requests asked for constants from the `math` package to be included in `torch` as well:
- https://github.com/pytorch/pytorch/issues/19123
- https://github.com/pytorch/pytorch/issues/6510
## Motivation
- To avoid adding an `import math` when you want to do it
- Because NumPy and SciPy do it
## Alternatives
Dont do it and tell users to `import math`
| triaged,enhancement,module: numpy | low | Major |
432,213,552 | go | net: TestUnixAndUnixpacketServer flaky on linux | https://build.golang.org/log/09fce21c31d8c4435fb99a4f7c0e8a1bb2bf900f and
https://build.golang.org/log/faa4560674d8e7d406b7031bfe985db237b9860a:
```
--- FAIL: TestUnixAndUnixpacketServer (0.00s)
server_test.go:197: #3: EOF
FAIL
FAIL net 8.796s
```
See previously #13205 (CC @bradfitz).
| Testing,help wanted,NeedsInvestigation | low | Major |
432,293,887 | go | cmd/go/internal/modfetch: module path validation inconsistent between repo and proxy fetch paths | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.4 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
...
</pre></details>
### What did you do?
These are the reproduction steps posted in the gocenter Gophers Slack channel by @thepudds.
```
mkdir /tmp/scratchpad/hashicorp-vault-1
cd /tmp/scratchpad/hashicorp-vault-1
export GOPROXY='https://gocenter.io'
export GOPATH='/tmp/go-path-for-hashicorp-vault-test-1'
go mod init tempmod
cat <<EOF > main.go
package x
import (
	_ "github.com/hashicorp/vault/api"
)
func main() {}
EOF
go mod tidy
go list -m all | wc -l
go list -m all > list.all.out
find $GOPATH/pkg/mod -name "*.mod" | wc -l
du -sh $GOPATH/pkg/mod
```
Repeat this in another window with a different set of directories (replace -1 with -2, for example) but without setting GOPROXY.
You can see that the `du` output in the GOPROXY window is significantly different from the window without the GOPROXY setting, and the list of modules (`list.all.out`) shows a difference in the version number for one of the dependencies (github.com/pierrec/lz4).
### What did you expect to see?
In the both cases with GOPROXY and without GOPROXY, expect to see the dependencies of modules to be resolved to the same version
### What did you see instead?
There is a difference in the du output as well as the content of the list of modules with and without GOPROXY,
With GOPROXY, the modules list shows:
```
github.com/pierrec/lz4 v2.1.1+incompatible // indirect
```
Without GOPROXY, the modules list shows:
```
github.com/pierrec/lz4 v2.0.5+incompatible // indirect
```
Please note that the actual dependency based on vendor.json in the project github.com/hashicorp/vault is actually v0.0.0-20181005164709-635575b42742 (go mod init in that project generates this correctly) but go mod tidy overwrites it with a different version.
| NeedsInvestigation,early-in-cycle,modules | low | Major |
432,302,736 | pytorch | 8 tests in test_c10d fail when running all tests in one command | ## 🐛 Bug
All tests in test_c10d pass if run individually in separate commands, but 8 fail consistently if run together in one pytest command. They all hit a similar error, as shown below:
```
test/test_c10d.py::DistributedDataParallelTest::test_sync_reduction Process process 0:
Traceback (most recent call last):
File "/home/shenli/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/shenli/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 481, in _run
getattr(self, self.id().split(".")[2])()
File "/data/users/shenli/pytorch/test/test_c10d.py", line 439, in wrapper
fn(self)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 66, in wrapper
return func(*args, **kwargs)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 55, in wrapper
return func(*args, **kwargs)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 1802, in test_sync_reduction
for d in devices]
File "/data/users/shenli/pytorch/test/test_c10d.py", line 1802, in <listcomp>
for d in devices]
RuntimeError: CUDA error: initialization error
Process process 1:
Traceback (most recent call last):
File "/home/shenli/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/shenli/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 481, in _run
getattr(self, self.id().split(".")[2])()
File "/data/users/shenli/pytorch/test/test_c10d.py", line 439, in wrapper
fn(self)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 66, in wrapper
return func(*args, **kwargs)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 55, in wrapper
return func(*args, **kwargs)
File "/data/users/shenli/pytorch/test/test_c10d.py", line 1802, in test_sync_reduction
for d in devices]
File "/data/users/shenli/pytorch/test/test_c10d.py", line 1802, in <listcomp>
for d in devices]
RuntimeError: CUDA error: initialization error
```
The failed ones are:
```
1. DistributedDataParallelTest.test_dist_broadcast_coalesced_gloo
2. DistributedDataParallelTest.test_fp16
3. DistributedDataParallelTest.test_gloo_backend
4. DistributedDataParallelTest.test_nccl_backend
5. DistributedDataParallelTest.test_queue_reduction
6. DistributedDataParallelTest.test_sync_params_no_buffers
7. DistributedDataParallelTest.test_sync_params_with_buffers
8. DistributedDataParallelTest.test_sync_reduction
```
## To Reproduce
`py.test test/test_c10d.py `
## Environment
Collecting environment information...
PyTorch version: 1.1.0a0+91a2900
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: CentOS Linux 7 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
CMake version: version 3.14.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
GPU 4: Tesla V100-SXM2-16GB
GPU 5: Tesla V100-SXM2-16GB
GPU 6: Tesla V100-SXM2-16GB
GPU 7: Tesla V100-SXM2-16GB
Nvidia driver version: 396.69
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.2
[pip] torch==1.1.0a0+91a2900
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl-include 2019.3 199
[conda] mkl_fft 1.0.10 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] torch 1.1.0a0+91a2900 dev_0 <develop>
| oncall: distributed,module: tests,triaged | low | Critical |
432,310,433 | godot | HBoxContainer default separation not shown | **Godot version:**
3.1.1 Mono 2fbc421
**Issue description:**
`HBoxContainer` has a default separation of some kind. But the field for it in the editor, when unchecked, displays 0. This is very confusing. Either it should display the correct separation value or the default should be set to 0.
I would prefer the latter. I was looking for a while for some margin or padding setting on some children nodes that were set to fill and expand, because a random gap was between them for no obvious reason and the widths were a bit off. *But changing the default value would change everyone's HBoxContainers without custom separation in existing projects. So just displaying or mentioning it in the field or at least the docs is probably better.*
I thought we could discuss it here first before making a pull. | enhancement,discussion,documentation,topic:gui | low | Major |
432,334,206 | TypeScript | react-redux type inference is broken since v3.1 | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.1.0-dev.20180810
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** react-redux connect type inference
I've noticed this issue while upgrading from v3.0.3. Type inference got broken on some of redux's `connect` calls. I was able to isolate the issue (see the code below). The issue is reproducible on `@next` and `@latest`. I ran some regression tests and was able to narrow it down to `3.1.0-dev.20180810` where the bug was introduced. The last version where the code works as expected was `3.1.0-dev.20180809`.
**Code**
```ts
type MapStateToProps<TStateProps> = () => TStateProps;

type MergeProps<TStateProps, TMergedProps> = (stateProps: TStateProps) => TMergedProps;

function connect<TStateProps = {}, TMergedProps = {}>(
    mapStateToProps: MapStateToProps<TStateProps>,
    mergeProps: MergeProps<TStateProps, TMergedProps>) {
}

connect(() => ({ time: 123 }), ({ time }) => ({
    time,
    doSomething() {
    },
}))
```
**Expected behavior:** compiles without errors
**Actual behavior:** `error TS2459: Type 'TStateProps' has no property 'time' and no string index signature.`
**Playground Link:** [here](https://www.typescriptlang.org/play/#src=type%20MapStateToProps%3CTStateProps%3E%20%3D%20()%20%3D%3E%20TStateProps%3B%0D%0A%0D%0Atype%20MergeProps%3CTStateProps%2C%20TMergedProps%3E%20%3D%20(stateProps%3A%20TStateProps)%20%3D%3E%20TMergedProps%3B%0D%0A%0D%0Afunction%20connect%3CTStateProps%20%3D%20%7B%7D%2C%20TMergedProps%20%3D%20%7B%7D%3E(%0D%0A%20%20%20mapStateToProps%3A%20MapStateToProps%3CTStateProps%3E%2C%0D%0A%20%20%20mergeProps%3A%20MergeProps%3CTStateProps%2C%20TMergedProps%3E)%20%7B%0D%0A%7D%0D%0A%0D%0Aconnect(()%20%3D%3E%20(%7B%20time%3A%20123%20%7D)%2C%20(%7B%20time%20%7D)%20%3D%3E%20%7B%0D%0A%20%20%20return%20%7B%0D%0A%20%20%20%20%20%20time%2C%0D%0A%20%20%20%20%20%20doSomething()%20%7B%0D%0A%20%20%20%20%20%20%7D%2C%0D%0A%20%20%20%7D%0D%0A%7D)%0D%0A%0D%0Aconnect(()%20%3D%3E%20(%7B%20time%3A%20123%20%7D)%2C%20(%7B%20time%20%7D)%20%3D%3E%20(%7B%0D%0A%20%20%20doSomething()%20%7B%0D%0A%20%20%20%7D%2C%0D%0A%7D))%0D%0A%0D%0A%0D%0Aconnect(()%20%3D%3E%20(%7B%20time%3A%20123%20%7D)%2C%20(%7B%20time%20%7D)%20%3D%3E%20(%7B%0D%0A%20%20%20time%2C%0D%0A%20%20%20doSomething%3A%20()%20%3D%3E%20%7B%0D%0A%20%20%20%7D%2C%0D%0A%7D))%0D%0A%0D%0A%0D%0Aconnect(()%20%3D%3E%20(%7B%20time%3A%20123%20%7D)%2C%20(%7B%20time%20%7D)%20%3D%3E%20(%7B%0D%0A%20%20%20time%2C%0D%0A%20%20%20doSomething()%20%7B%0D%0A%20%20%20%7D%2C%0D%0A%7D))%0D%0A)
Somehow these examples compile without problem though:
```ts
// no 'expression body' in C# terms
connect(() => ({ time: 123 }), ({ time }) => {
    return {
        time,
        doSomething() {
        },
    }
})

// no 'time' on TMergedProps
connect(() => ({ time: 123 }), ({ time }) => ({
    doSomething() {
    },
}))

// property instead of a function
connect(() => ({ time: 123 }), ({ time }) => ({
    time,
    doSomething: () => {
    },
}))
``` | Bug | low | Critical |
432,355,793 | flutter | The height of CupertinoTextField will reduce on editing when the placeholder is not in English | I set the placeholder to non-English words when using CupertinoTextField, then I found that the height of the widget will reduce on editing.

I have tried four languages, and here is my test code:
```
class Filter extends StatefulWidget {
  @override
  _FilterState createState() => _FilterState();
}

class _FilterState extends State<Filter> {
  Map languages = {
    'English': 'test',
    'Chinese': '测试',
    'Japanese': 'テスト',
    'Korean': '테스트',
    'Arabic': 'اختبر',
  };

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        leading: CupertinoNavigationBarBackButton(),
        title: Text('CupertinoTextField Test'),
        centerTitle: true,
      ),
      backgroundColor: Colors.white,
      body: Container(
        padding: EdgeInsets.all(8.0),
        child: Center(
          child: CupertinoTextField(
            placeholder: 'Flutter ${languages['Chinese']}',
            decoration: BoxDecoration(border: Border.all(color: Colors.red)),
          ),
        ),
      ),
    );
  }
}
```
Did I do anything wrong? | a: text input,framework,a: internationalization,f: cupertino,has reproducible steps,P3,found in release: 3.3,found in release: 3.7,team-text-input,triaged-text-input | low | Major |
432,386,807 | rust | Implement `CStr` as a pointer rather than a slice | Littered throughout the `CStr` documentation are notes and hints that it should be eventually converted to using a pointer underneath the hood instead of a slice. This change will make `CStr::from_ptr` O(1) instead of O(n). Other constructors may still be O(n) because they have to check to ensure there is only one null terminator. If someone wants to try implementing this, it seems like a pretty simple change.
[Current struct definition](https://github.com/rust-lang/rust/blob/master/src/libstd/ffi/c_str.rs#L116):
```
pub struct CStr {
    inner: [c_char]
}
```
Proposed:
```
pub struct CStr {
    inner: *const c_char
}
``` | C-enhancement,A-FFI,T-libs-api | medium | Major |
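For background, a rough illustration of why the slice-based layout forces `CStr::from_ptr` to be O(n) (this is a sketch, not the actual standard library code): constructing the fat pointer requires the length, which means a strlen-style scan over the bytes first.

```rust
// Sketch only (not std's implementation): with a slice payload,
// `from_ptr` must first find the NUL terminator to build the fat
// pointer, i.e. an O(n) strlen over the string.
unsafe fn c_strlen(mut p: *const u8) -> usize {
    let mut n = 0;
    while *p != 0 {
        n += 1;
        p = p.add(1);
    }
    n
}

fn main() {
    let s = b"hello\0";
    let len = unsafe { c_strlen(s.as_ptr()) };
    assert_eq!(len, 5); // 5 bytes before the terminator
}
```

With a thin `*const c_char` payload, no such scan is needed at construction time.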
432,394,801 | neovim | 'nofsync' may lose data (empty file + swapfile) after system crash | I was using nvim to write a small python script. I had another program watching the python script to run it when I wrote it to disk. In addition, I had Syntastic installed and python support, but syntastic was temporarily set to not check (SyntasticCheckToggle off).
When I updated the file, the script was run, but my memory was exhausted and the system hung. After reboot, I found that not only was the file I wrote empty (size 0 bytes), but nvim offered me the option of recovering the file; when I hit 'r', nvim reported that the recovery file was missing.
Could this be the result of a race condition in the recovery-file management, where old recovery data is deleted before current recovery data is flushed to disk? Would a double-buffer solution help?
- `nvim --version`: NVIM v0.3.3
- Operating system/version: Fedora 29
- Terminal name/version: Gnome Terminal/TMUX
- `$TERM`: tmux-256color
### Steps to reproduce using `nvim -u NORC`
Sorry, I cannot reproduce this event, safely.
### Actual behaviour
Recover data is not available under some conditions that should be covered.
### Expected behaviour
Updating recovery data should be atomic
| bug,robustness,complexity:high,has:workaround,has:plan,system,filesystem | medium | Critical |
432,414,006 | TypeScript | Rename field to a name with dash does not transform the field with quotes. | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.5.0-dev.20190412
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** hyphen refactor, dash rename
**Code**
```ts
interface TestObject {
    test:string
}
const instance: TestObject = { test: "hello" };
console.log(instance.test);
```
Rename "test" to a variable with a hyphen, e.g "te-st"
**Expected behavior:**
Code is refactored to use quotes:
```ts
interface TestObject {
    "te-st":string
}
const instance: TestObject = { "te-st": "hello" };
console.log(instance["te-st"]);
```
**Actual behavior:**
Refactor replaces the variable as given, producing invalid code
```ts
interface TestObject {
    te-st:string
}
const instance: TestObject = { te-st: "hello" };
console.log(instance.te-st);
```
**Playground Link:** Replacing all occurrences in the playground also includes all Test matches, so I tested this in vscode
**Related Issues:** none I could find
| Suggestion,Experience Enhancement | low | Minor |
432,429,997 | go | x/all: gccgo support | In #31382, @mikioh asks an interesting question about the maintenance of golang.org/x packages on gccgo.
How can we be sure that the changes made in golang.org/x will work on gccgo?
For the Go toolchain, there are builders on every GOOS/GOARCH which check that it's working, but what about the gccgo toolchain? Do we assume that if it works on Go, it must work on gccgo?
The core code of each package should indeed work; after all, that's just Go code. But what about special gccgo files, like in x/sys/unix for syscalls, or all the Go assembly files which cannot be built on gccgo, like in x/crypto?
CC @bradfitz @ianlancetaylor | NeedsDecision | low | Minor |
432,457,184 | material-ui | [Tabs] Update to match the specification | Hi, I have to disagree with this change.
We are using scrollable and fullWidth together just fine, it only requires adding a minWidth to the Tab button components but otherwise it works just fine.
This is a breaking change for our layout - could this be reverted / are there other solutions to keep fullWidth and scrollable available at the same time?
_Originally posted by @olee in https://github.com/mui-org/material-ui/pull/13980#issuecomment-482465309_
https://material.io/design/components/tabs.html | design: material,breaking change,component: tabs | medium | Critical |
432,516,427 | create-react-app | Check for port conflict before compilation in npm start | I'd like to suggest that CRA attempts to pre-bind the port it is configured to run on when doing `npm start`, before compilation starts. The compilation is slow and I often forget I have a different CRA running on 3000 (I use the default) already, so it costs some time to find out only after the server attempts to start, at which point I have to kill the hog and restart the compilation.
Obviously doing this check would still leave a window where if it passes pre-compilation, the process can be bound _during_ the compilation, but I believe the current behavior is fine in that case. | issue: proposal,contributions: claimed | low | Major |
432,548,118 | go | crypto/x509: unexpected name mismatch error | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
Sorry, I'm not actually using Go itself, but I'm using etcd (https://github.com/etcd-io/etcd; apparently written in go) and got an error about subject and issuer names not matching in my certificate chain, which I believe can be tracked down to go's crypto/x509 implementation.
### What did you do?
I tried to verify a certificate chain, where the Issuer DN of the sub-CA certificate was specified using the ASN.1 type UTF8String, while the subject DN of the (renewed) CA certificate used PRINTABLESTRING.
### What did you expect to see?
RFC 5280 (dated May 2008), section 7.1 says:
``RFC 3280 required only binary comparison of attribute values encoded in UTF8String, however, this specification requires a more comprehensive handling of comparison.``
And then it goes on to give details on how to compare DN's and then refers to RFC 4518 on how to compare strings of different ASN.1 types. To me, this sounds like at least comparing the exact same ASCII string when encoded as PRINTABLESTRING in one instance and as UTF8STRING in the other still should result in recognizing the two string as equal and thus the certificate chain should be validated successfully (as is the case e.g. for OpenSSL).
### What did you see instead?
I got a name mismatch, just as if the ASN.1 encoding of the DN's would be "blindly" compared byte by byte as described by the outdated RFC 3280 (crypto/x509/verify.go:570 seems to indicate that this actually is the case). Of course, for now, there's the obvious workaround of ensuring the ASN.1 encodings of DN's are identical, but in the long run, that shouldn't be necessary (and if you're switching to a different software (version) for generating certificates, ensuring continuing compatibility might not be trivial).
| Unfortunate,NeedsInvestigation | medium | Critical |
432,559,951 | three.js | Light Probe interpolation using Tetrahedral Tesselations | ##### Description of the problem
Now that we have a LightProbe class (https://github.com/mrdoob/three.js/pull/16191) and the spherical harmonics class (https://github.com/mrdoob/three.js/pull/16187) merged, as well as some shader support (https://github.com/mrdoob/three.js/pull/16152), we should explore adding a tetrahedral tessellation method for interpolating between light probes to the renderer, so that it can set the 4 SH + weights per object.
The best reference is this presentation from Unity itself:
https://gdcvault.com/play/1015312/Light-Probe-Interpolation-Using-Tetrahedral
(Referenced from here: https://docs.unity3d.com/Manual/LightProbes-TechnicalInformation.html)
It seems that given a set of light probes you just do a 3D Delaunay tetrahedralization and use 3D barycentric coordinates for interpolation, and you cache which tetrahedron each object is in to speed up lookups. Pretty simple: we just need a standard Delaunay algorithm implemented, along with a searchable tetrahedral data structure.
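For illustration (a sketch, not a proposed three.js API), the per-tetrahedron interpolation step amounts to solving a 3x3 linear system for the barycentric weights of the object's position:

```javascript
// Sketch only: barycentric weights of point p in the tetrahedron
// (a, b, c, d), i.e. the blend weights for its 4 corner light probes.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

// Determinant of the 3x3 matrix with columns u, v, w (scalar triple product).
function det3(u, v, w) {
  return u[0] * (v[1] * w[2] - v[2] * w[1])
       - v[0] * (u[1] * w[2] - u[2] * w[1])
       + w[0] * (u[1] * v[2] - u[2] * v[1]);
}

// Cramer's rule on [b-a, c-a, d-a] * [w1, w2, w3]^T = p - a.
// All four weights lie in [0, 1] exactly when p is inside the tetrahedron,
// which doubles as the containment test when caching an object's cell.
function barycentric(p, a, b, c, d) {
  const u = sub(b, a), v = sub(c, a), w = sub(d, a), q = sub(p, a);
  const det = det3(u, v, w);
  const w1 = det3(q, v, w) / det;
  const w2 = det3(u, q, w) / det;
  const w3 = det3(u, v, q) / det;
  return [1 - w1 - w2 - w3, w1, w2, w3];
}

// Centroid of the unit tetrahedron blends all four probes equally.
console.log(barycentric([0.25, 0.25, 0.25],
  [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])); // [0.25, 0.25, 0.25, 0.25]
```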
/ping @WestLangley @donmccurdy @richardmonette | Enhancement | medium | Critical |
432,601,060 | go | proposal: spec: improve error handing using `guard` and `must` keywords | I'll start with an example. Here's a version of the `CopyFile` example function form the [proposal for improved error handling in Go 2.0](https://go.googlesource.com/proposal/+/master/design/go2draft-error-handling-overview.md), rewritting using `must` and `guard`:
```
func CopyFile(src, dst string) (err error) {
	defer func() {
		if err != nil {
			err = fmt.Errorf("copy %s to %s: %v", src, dst, err)
		}
	}()
	r := guard os.Open(src)
	defer must r.Close()
	w := guard os.Create(dst)
	defer must w.Close()
	_, err = io.Copy(w, r)
	// here we need to do extra work when a Copy error happens: now we must use
	// the 'normal' error handling method, and cannot use guard or must
	if err != nil {
		_ = os.Remove(dst) // fail silently if errors happen during error handling
	}
	return
}
```
The `must` keyword is syntactic sugar to `panic` on any error returned by a function call:
```golang
w := must os.Open("foo.txt")
```
is conceptually equivalent to
```
w, err := os.Open("foo.txt")
if err != nil {
	panic(err)
}
```
The `must` keyword can only be used on function calls that return an error as their last return value. In functions that do not return an error as the last return value, or where return values are unnamed, `must` is exactly equivalent to the code above. In case a function does return a named error value, when an error is returned from the call, `must` assigns this error to the function's named error return value, then executes all `defer`red functions just like a normal panic, and then uses the error return value when propagating the panic upwards. As a result, a `defer` statement can also be used to augment errors raised in a `panic` caused by the `must` keyword.
The `guard` keyword is syntactic sugar to return from a function when a function call returns an error (as its last return value):
```
func Foo() (int, string, error) {
	w := guard os.Open("foo.txt")
	// ....
}
```
is equivalent to:
```
func Foo() (int, string, error) {
	w, err := os.Open("foo.txt")
	if err != nil {
		return 0, "", err // guard returns zero values for all but the last (error) return value
	}
	// ....
}
```
The `guard` keyword can be used only on function calls that return an `error` as last return value, in functions that also return an `error` as last value.
The `must` and `guard` keywords cannot be used in conjunction with the `go` keyword.
# Benefits
The `if err := ...; err != nil {}` pattern has several drawbacks:
- It is so omnipresent that it quickly becomes noise in the code
- Code like `doFileStuff(os.Open("foobar.txt"))` does not work, because `Open` also returns an (optional) error
Both of these drawbacks would disappear (in the second case either `guard` or `must` can be used). Of course there is the very valid argument that errors should be augmented with more relevant information that only the caller can provide. In general there are three situations:
1. No extra error info is needed / available
2. The extra info is general for all errors that appear in the function body
3. Extra info is added for a specific error
In the first case we can just use `guard` (or `must`) and we're done. In the second case, the `defer` technique can be used to add function-specific information (mostly the call argument values) to all errors that are returned (including `panic`s via `must`).
Finally, there is the case where we want to add information to a specific error only. In that case, as well as in the case where more has to be done than just augmenting the error, the current `if err != nil {}` pattern should be used: especially if more error handling code is needed, there is no real reason to move this important code elsewhere, as it is specific to the code directly above it.
--
As an extra benefit, with the `must` keyword all functions of the `MustXXX()` pattern are no longer needed.
# Benefits over the `2.0` proposal
My main concern with the `handle` keyword is that it adds another code path: the error handling path. While it would be fine for error augmentation (adding relevant information to the error), you need to 'read down, then read up', checking all `handle` clauses to see what happens on an error. This is not ideal. If anything additional must happen on an error, it should really be done right under that function call, using the `if err != nil {}` pattern: it is there that the error happens, and any error handling code should be below it, not somewhere above in any of multiple levels.
The advantage of `must` and `guard` over `handle` and `check`, in my opinion, is that specific error handling code stays where it is now, while all other cases (where we need to add general, function-wide info, or no info at all) can be handled much more easily and cleanly.
| LanguageChange,Proposal,error-handling,LanguageChangeReview | high | Critical |
432,609,501 | pytorch | index_put_ take min when there are repeated indices | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Assume we want to get this:
image[x,y]=p, where x, y, p are lists. For example,
x=[1, 1, 2]
y=[3, 3, 4]
p=[1, 2, 3]
It should be mentioned that image[1][3] may be updated twice, since x[0]=x[1]=1 and y[0]=y[1]=3, but p[0]=1 and p[1]=2. In this case, image[1][3] should end up with the minimum, 1.
But it seems that ~~torch.where~~ does not consider that one element can be updated twice or more.
Is there an efficient way to do this, preferably with a PyTorch op? I can update image in a for loop, but it's too slow, since len(x) is big.
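As a sketch of the desired scatter-min semantics (using NumPy here, since `np.minimum.at` performs an unbuffered elementwise minimum at repeated indices; the same idea would apply to a PyTorch op):

```python
import numpy as np

# Start from a sentinel so untouched pixels are recognizable; real code
# would start from the existing image values instead.
image = np.full((4, 6), np.inf)
x = np.array([1, 1, 2])
y = np.array([3, 3, 4])
p = np.array([1.0, 2.0, 3.0])

# Unbuffered minimum per index: the repeated (1, 3) entry ends up with
# min(1, 2) == 1 regardless of the order the updates are applied in.
np.minimum.at(image, (x, y), p)
# image[1, 3] == 1.0 and image[2, 4] == 3.0
```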
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
## Pitch
<!-- A clear and concise description of what you want to happen. -->
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| low priority,triaged,enhancement,module: advanced indexing | low | Major |
432,626,223 | TypeScript | Request: Infer generics | The request is to let `infer` support generics.
Background:
I recently published a recursive `Pipe` and `Compose`. It offers some key advantages, including the preservation of variable names.
https://github.com/babakness/pipe-and-compose-types
Its shortcoming is in the case of generics. This is because `infer` does not work well with generics. Example:
```ts
/**
* Extracts function arguments
*/
export type ExtractFunctionArguments < Fn > = Fn extends ( ...args: infer P ) => any ? P : never
/**
* Extracts function return values
*/
export type ExtractFunctionReturnValue<Fn> = Fn extends ( ...args: any[] ) => infer P ? P : never
type Foo = ExtractFunctionArguments< <A>(a:A, b:number) => A >
// Foo has type [ {} , number ]
```
TypeScript 3.4 improved how `pipe` and `compose` functions work with generics with higher order type inference from generic functions.
Common implementations of `pipe` and `compose` rely on parameter overloading. However, even here, there is an issue with variadic functions. See example in this issue:
#30727
The broader issue with the parameter overloading version of `pipe` and `compose` is that parameter names are lost, decreasing code clarity. The recursive version, relying on `infer` does maintain parameter names.
Providing support for generics with `infer` would present a major productivity bump and would nearly complete the community's need for the commonplace `pipe` and `compose` in FP patterns without compromise.
The only outlier is a small issue with parameter names on variadic functions with a minimum arity of zero as opposed to at least one. I can provide examples if needed, since it is a separate issue.
| Suggestion,Awaiting More Feedback | low | Major |
432,655,113 | godot | Weird behavior of 3x3 vs 3x3 minimal autotiling | **Godot version:** v3.1.stable.official (installed via steam)
**OS/device including version:** Ubuntu 18.04, GTX 1080ti
**Issue description:**
The same tiling works well with "3x3 (minimal)" but breaks with "3x3", even though all the needed tiles are seemingly present.
This is what it looks like with "3x3 (minimal)"

and the exact same setup only switching the tilemap from "3x3 (minimal)" to "3x3"

**Steps to reproduce:**
1. Create a tilemap with the following tileset

2. Create an autotile region and set the bitmasks as shown in the screenshots above
3. Set the Autotile bitmask to either "3x3" or "3x3 (minimal)"
4. Draw the pattern as shown with the tilemap
**Minimal reproduction project:**
If this is a bug and not just my misunderstanding I can provide it.
| discussion,topic:core,documentation | low | Critical |
432,663,554 | pytorch | Deprecate torch.add(tensor, value, other) | ## 🚀 Feature
It's confusing that add has the scale argument. Could we create a new `torch.axpy` function that does this?
## Motivation
Numpy also makes this separation: it has `numpy.add`, while axpy lives in `scipy.linalg.blas`.
This could also make things nicer: add can be treated as an example of a binary pointwise operation like mul, sub, div and not require any special casing for its `value` argument.
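For reference, axpy is the BLAS-style in-place update y = a*x + y; a minimal NumPy sketch of the semantics that the `value` argument of `torch.add(tensor, value, other)` currently bundles into `add`:

```python
import numpy as np

def axpy(a, x, y):
    """y <- a*x + y, in place; a torch.axpy would carry the scale
    argument so that add can stay a plain binary pointwise op."""
    y += a * x
    return y

y = np.array([1.0, 2.0, 3.0])
x = np.array([10.0, 20.0, 30.0])
axpy(0.5, x, y)  # y is now [6.0, 12.0, 18.0]
```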
cc @ezyang @gchanan @mruberry @rgommers @heitorschueroff @cpuhrsch | module: bc-breaking,triaged,module: numpy,module: deprecation,module: ux | low | Minor |
432,678,866 | flutter | Support High Contrast Text/Fonts | Android has an A11y setting for high contrast text/fonts. Fuchsia would like this as well.
The framework and engine don't currently respect this flag. We should implement it.
On Android, this is documented as forcing all fonts to black or white: https://support.google.com/accessibility/android/answer/6151855?hl=en (on my personal phone, this setting is actually labeled as `High contrast fonts`).
I do see that in some cases, where you would have white text on a white background, the setting forces a black stroke around the font:

| customer: fuchsia,platform-android,framework,engine,a: accessibility,platform-fuchsia,a: typography,P3,team-android,triaged-android | low | Major |
432,687,607 | TypeScript | IntelliSense Parses JSDoc Namepath Part "module" as a Actual Module File Path | _- VSCode Version: 1.34.0-insider_
_- OS Version: 10.14.3 (18D109)_
_- Does this issue occur when all extensions are disabled?: Yes_
## Background
JSDoc uses [namepaths](http://usejsdoc.org/about-namepaths.html) to link variables in documentation. If you use the [@module](http://usejsdoc.org/tags-module.html) block to contain variables, then referencing those variables requires using the `module:` part in your namepaths.
## Problem
When using JSDoc namepaths involving the 'module' part, VSCode displays the module's file path within IntelliSense. I think that, as a result, the signature's return value is also wrong, displaying the module's exported functions instead.
Screenshot of issue:
<img width="721" alt="Screen Shot 2019-04-12 at 12 44 19 PM" src="https://user-images.githubusercontent.com/8714251/56054022-ad0c1700-5d23-11e9-8bfb-10231f9ed294.png">
Is there a way to escape that module keyword? Or some way to write it so it works with VSCode? Using the full module namepath in JSDoc is required to reference types defined within module blocks, as described here https://github.com/jsdoc3/jsdoc/issues/969 and here https://github.com/jsdoc3/jsdoc/issues/1533.
| Needs Investigation | low | Minor |
432,688,038 | rust | Add /SOURCELINK debug support on Windows | It would be useful to add support for the VC++ linker option /SOURCELINK. This option embeds a json file mapping local source file paths to URLs to enable debugging on machines that might not have source. Documentation is [here](https://docs.microsoft.com/en-us/cpp/build/reference/sourcelink?view=vs-2019).
/SOURCELINK requires a somewhat recent version of MSVC, so Rust support probably requires a version test for MSVC or some opt-in mechanism.
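Per the linked documentation, the embedded JSON maps local source path prefixes to URLs. A minimal illustrative example (paths and URL are hypothetical):

```json
{
  "documents": {
    "C:\\src\\myproject\\*": "https://raw.example.com/myproject/<commit>/*"
  }
}
```

The file is passed to the linker as `/SOURCELINK:filename`, so on the Rust side this likely maps to a `-C link-arg` or a dedicated codegen option.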
| A-debuginfo,T-compiler,O-windows-msvc,C-feature-request | low | Critical |
432,689,146 | create-react-app | GENERATE_SERVICEWORKER environment variable | Suggested new feature
**Goal**
- Be able to prevent the generation of service worker related files during a build.
**Justification**
- For the simplest applications (i.e. not using a service worker):
- Avoid loading unnecessary resources (see #6684).
- Remove a bit of clutter / have a more streamlined build result.
- Simplify deployment.
- Allow a more progressive understanding for beginners.
**Proposed approach**
- Add a `GENERATE_SERVICEWORKER` environment variable (following the pattern of `GENERATE_SOURCEMAP`) that may be set via the shell or the `.env` file.
- If this variable is set to `false` then the `service-worker.js` and `precache-manifest.[hash].js` files wouldn't be generated in the `build` directory.
- Comment out by default the following lines in `index.js`:
- `import * as serviceWorker from './serviceWorker';`
- `serviceWorker.unregister();`
- Actually, the second line could be instead:
- `// serviceWorker.register();`
- The explanatory comments in the `index.js` file could tell to uncomment the two lines to allow the use of a service worker (instead of renaming .unregister to .register).
**Note**
- As the service worker is presented as an _opt-in_ feature, it would be more logical that the default behavior was to _not_ generate the service worker files. In this case, the `GENERATE_SERVICEWORKER` environment would have to explicitly be set to `true` to generate service worker files.
- That said, this would cause a somewhat less "backward-compatible" behavior, so choosing what may be the best alternative remains open.
Thanks! | issue: proposal | low | Minor |
432,708,147 | go | cmd/go: implicit go install of std in $GOROOT/src has weird error | I did this by mistake (hit enter too early) and then noted the kinda weird error message:
```
bradfitz@go:~/go/src$ go install
can't load package: package std: unknown import path "std": cannot find module providing package std
bradfitz@go:~/go/src$ go install std
bradfitz@go:~/go/src$ echo $?
0
```
I guess it's correct, as "std" is some alias rather than a package, but maybe it can be made to just work, or have a more helpful error.
Low priority.
/cc @bcmills | NeedsInvestigation,modules | low | Critical |
432,717,582 | TypeScript | Export an interface for JSON `CompilerOptions` | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
- CompilerOptions
- convertCompilerOptionsFromJson
## Suggestion
Currently TypeScript exports the [`CompilerOptions`](https://github.com/Microsoft/TypeScript/blob/97fbc87e9834b9bf12fe01bf442e16749da02d1d/lib/typescript.d.ts#L2489) interface, which represents the compiler options that can be passed in. However, since this interface uses enums, it doesn't match the format of `compilerOptions` in the `tsconfig.json` file. I'd like to be able to use a `JsonCompilerOptions` (or some better name) interface to strictly type options in libraries that wrap TypeScript, without rewriting the interface.
## Use Cases
- Libraries that wrap TypeScript like [rollup-plugin-typescript](https://github.com/rollup/rollup-plugin-typescript) can strictly type overrides for values in `tsconfig` without either requiring the user to import TypeScript enums or rewriting the interface.
- The [`convertCompilerOptionsFromJson`](https://github.com/Microsoft/TypeScript/blob/025d82633915b67003ea38ba40b9239a19721c13/src/compiler/commandLineParser.ts#L2371) function takes in `any` currently, and could instead use this hypothetical interface.
## Examples
#### TypeScript wrapping libraries can let consumers override tsconfig options
```ts
import { Plugin } from 'rollup';
import { JsonCompilerOptions } from 'typescript';
interface Options extends JsonCompilerOptions {
customOptions?: string;
}
export default function rollupPluginTypescript(options?: Options): Plugin;
```
#### Strict typing for `convertCompilerOptionsFromJson`
```ts
import { convertCompilerOptionsFromJson } from 'typescript';
// Type error
const opts = convertCompilerOptionsFromJson({ allowJs: 'true' }, process.cwd());
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
432,728,855 | flutter | flutter create, app_name, AndroidManifest.xml, strings.xml | When I use Android Studio to generate a new Android project, the project contains a generated file `app/src/main/res/values/strings.xml` with the following content
<resources>
<string name="app_name">cheetah</string>
</resources>
and the file `app/src/main/AndroidManifest.xml` contains the line
android:label="@string/app_name"
When I create a new Flutter app, there is no `strings.xml` file generated and `android/app/src/main/AndroidManifest.xml` contains the line
android:label="cheetah"
I suggest it would be better if the `flutter create` command generates Android-specific files that match what Android Studio generates as closely as possible. It makes working with Flutter more familiar and an easier transition for Android developers.
| platform-android,tool,c: proposal,P2,team-android,triaged-android | low | Major |
432,729,828 | godot | Command line export with Linux headless always re-imports PNG resources | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
3.1
<!-- Specify commit hash if non-official. -->
**OS/device including version:**
linux_headless
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->
When run with `--export` the headless version will re-import all resources, causing the export to take 20x longer than before.
```
~/bin/Godot_v3.1-stable_linux_headless.64 --export Linux/X11 Kinematic\ Character\ 3D.x86_64
ERROR: instance: Class 'EditorSettings' can only be instantiated by editor.
At: core/class_db.cpp:518.
ERROR: poll: /home/jpate/.config/godot/editor_settings-3.tres:3 - Parse Error: Can't create sub resource of type: EditorSettings
At: scene/resources/resource_format_text.cpp:561.
ERROR: load: Condition ' err != OK ' is true. returned: RES()
At: core/io/resource_loader.cpp:208.
ERROR: _load: Failed loading resource: /home/jpate/.config/godot/editor_settings-3.tres
At: core/io/resource_loader.cpp:285.
WARNING: create: Could not open config file.
At: editor/editor_settings.cpp:872.
reimport: begin: (Re)Importing Assets steps: 2
reimport: step 0: purple_wood.png
reimport: step 1: white_wood.png
savepack: begin: Packing steps: 102
savepack: step 2: Storing File: res://cubelib.res
savepack: step 14: Storing File: res://cubio.gdc
savepack: step 27: Storing File: res://follow_camera.gdc
savepack: step 39: Storing File: res://.import/icon.png-487276ed1e3a0c39cad0279d744ee560.stex
savepack: step 39: Storing File: res://icon.png.import
savepack: step 52: Storing File: res://level.scn
savepack: step 64: Storing File: res://.import/purple_wood.png-ae65a206e7a59edf759728c3bad04e56.s3tc.stex
savepack: step 64: Storing File: res://purple_wood.png.import
savepack: step 77: Storing File: res://purplecube.scn
savepack: step 89: Storing File: res://.import/white_wood.png-6895acd60ce97b4315494d2be377c357.s3tc.stex
savepack: step 89: Storing File: res://white_wood.png.import
savepack: step 102: Storing File: res://cubio.gd.remap
savepack: step 102: Storing File: res://follow_camera.gd.remap
savepack: step 102: Storing File: res://icon.png
savepack: step 102: Storing File: res://project.binary
savepack: end
reimport: end
ERROR: ~List: Condition ' _first != __null ' is true.
At: ./core/self_list.h:111.
ERROR: cleanup: There are still MemoryPool allocs in use at exit!
At: core/pool_vector.cpp:70.
```
**Steps to reproduce:**
Download attached modified version of *kinematic_character* demo
extract
```
cd kinematic_character_export/kinematic_character
path/to/Godot_v3.1-stable_linux_headless.64 --export Linux/X11 Kinematic\ Character\ 3D.x86_64
```
**Minimal reproduction project:**
[kinematic_character_export.zip](https://github.com/godotengine/godot/files/3075038/kinematic_character_export.zip)
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
| bug,topic:editor,confirmed,topic:import | medium | Critical |
432,754,188 | angular | DatePipe date issue 0001-01-01T00:00:00Z | # 🐞 bug report
### Affected Package
DatePipe
### Is this a regression?
### Description
When passing 0001-01-01T00:00:00Z into DatePipe it is being formatted as "12/31/0001 BC"
{{ledgerEntry.date | date: 'MM/dd/yyyy GGG':'UTC'}}
## 🔬 Minimal Reproduction
https://stackblitz.com/edit/angular-sz4zdy?embed=1&file=src/app/app.component.html
## 🔥 Exception or Error
## 🌍 Your Environment
Windows 10
**Angular Version:**
<pre><code>
@angular-devkit/architect 0.13.6
@angular-devkit/build-angular 0.13.6
@angular-devkit/build-optimizer 0.13.6
@angular-devkit/build-webpack 0.13.6
@angular-devkit/core 7.3.6
@angular-devkit/schematics 0.8.4
@angular/cdk 6.4.7
@angular/cli 6.2.4
@angular/flex-layout 6.0.0-beta.15
@angular/material 6.4.7
@ngtools/webpack 7.3.6
@schematics/angular 0.8.4
@schematics/update 0.8.4
rxjs 6.1.0
typescript 2.7.2
webpack 4.29.0
</code></pre>
**Anything else relevant?**
<!-- ✍️Is this a browser specific issue? If so, please specify the browser and version. -->
Tested in FireFox and Chrome
<!-- ✍️Do any of these matter: operating system, IDE, package manager, HTTP server, ...? If so, please mention it below. -->
Nope
| type: bug/fix,area: common,area: i18n,freq1: low,state: confirmed,P4 | low | Critical |
432,755,553 | TypeScript | Intersection type intellisense duplicates jsdoc, rather than overriding | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.33.1
- OS Version: macOS 10.12.6
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
---
Sorry if the title isn't clear; it's tricky to put into words but obvious with code. The gist is that when a `type` is extended and has the same key, the intellisense combines the documentation rather than letting the "child" override it, like it does with `interface`. It's easier to explain with an example.
When intersecting types with the same property, the intellisense documentation shows all of the comments instead of just the latest:
```typescript
type Duck = {
  /** This is a really great duck */
  id: string
}
type Mallard = Duck & {
  /** I love this duck */
  id: string
}
const duckMan: Mallard = {
  id // intellisense here shows "This is a really great duck I love this duck"
}
```
<img width="346" alt="Pasted_Image_4_12_19__4_05_PM" src="https://user-images.githubusercontent.com/438465/56066268-e00ed480-5d3c-11e9-9653-cf463aef5a60.png">
However, with an interface, the intellisense shows only the direct type comment, which is expected.
```typescript
interface Duck {
  /** This is a really great banana */
  id: string
}
interface Mallard extends Duck {
  /** I love this duck */
  id: string
}
const duckMan: Mallard = {
  id // Intellisense says "I love this duck"
}
```
<img width="356" alt="Pasted_Image_4_12_19__4_05_PM" src="https://user-images.githubusercontent.com/438465/56066238-c53c6000-5d3c-11e9-85df-e9900f921351.png">
I found this out because I'm using `graphql-codegen` to generate types for my GraphQL schema. It generates intersection types, and the intellisense comments in some cases repeat many times.
**Expectation**
I would expect the "closest" doc comment of a `type` to override all of the others, like it does with an `interface`. If there's a reason that can't work, then at least having a uniqueness check would be good (maybe `new Set(typedocs)`). | Suggestion,Experience Enhancement | low | Minor |
432,757,819 | TypeScript | "require" is not an autocomplete option for blank js file | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
Steps to Reproduce:
1. Open a blank JavaScript file
2. type `var x = r`
`require` should be the first value that pops up in the autocomplete options.
At the moment you can type out the entire word `require` and it _still_ wont appear in the autocomplete list.
This is really annoying for node developers that need to use the `require` function all the time and don't have access to an es6 environment.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<details>
<summary>Version details</summary>
Version: 1.28.0 (user setup)
Commit: 431ef9da3cf88a7e164f9d33bf62695e07c6c2a9
Date: 2018-10-05T14:58:53.203Z
Electron: 2.0.9
Chrome: 61.0.3163.100
Node.js: 8.9.3
V8: 6.1.534.41
Architecture: x64
Windows 10
</details> | Suggestion,Experience Enhancement | low | Minor |
432,785,330 | three.js | Editor should have most of THREE's features | ##### Description of the problem
There are several features of `THREE` that aren't accessible from the Editor. When I say accessible, I mean some combination of:
* properties are not visible for editing
* objects not being able to be created
* objects not being able to be imported or exported.
Ideally, the Editor should be capable of creating and editing most objects and features that the API contains. This issue is really more to keep track of what the Editor can and can't do currently. I'm listing features that I think the Editor would be capable of handling.
#### Cameras
- [x] `PerspectiveCamera`
- [x] `OrthographicCamera`
- [ ] ~~`ArrayCamera`~~
- [ ] ~~`CubeCamera`~~
- [ ] ~~`StereoCamera`~~
#### Core
- [x] `BufferAttribute`
- [x] `BufferGeometry`
- [ ] `Layers`
- [x] `InstancedBufferAttribute`
- [x] `InstancedBufferGeometry`
- [ ] `InstancedInterleavedBuffer` * (#16050)
- [x] `InterleavedBufferAttribute`
#### Geometry
- [x] `BoxGeometry`
- [x] `CircleGeometry`
- [x] `ConeGeometry`
- [x] `CylinderGeometry`
- [x] `DodecahedronGeometry`
- [ ] `EdgesGeometry` *
- [ ] `ExtrudeGeometry`
- [x] `IcosahedronGeometry`
- [x] `LatheGeometry`
- [x] `OctahedronGeometry`
- [x] `PlaneGeometry`
- [x] `RingGeometry`
- [ ] `ShapeGeometry`
- [x] `SphereGeometry`
- [x] `TetrahedronGeometry`
- [x] `TorusGeometry`
- [x] `TorusKnotGemetry`
- [x] `TubeGeometry`
- [ ] `WireframeGeometry` **
#### Lights
- [x] `AmbientLight`
- [x] `DirectionalLight`
- [x] `HemisphereLight`
- [x] `PointLight`
- [ ] `RectAreaLight` (#16251)
- [x] `SpotLight`
- [ ] `LightProbe`
#### Materials
- [x] `LineBasicMaterial`
- [x] `LineDashedMaterial`
- [x] `MeshBasicMaterial`
- [x] `MeshDepthMaterial`
- [x] `MeshLambertMaterial`
- [x] `MeshMatcapMaterial`
- [x] `MeshNormalMaterial`
- [x] `MeshPhongMaterial`
- [x] `MeshPhysicalMaterial`
- [x] `MeshStandardMaterial`
- [x] `MeshToonMaterial`
- [x] `PointsMaterial`
- [x] `RawShaderMaterial`
- [x] `ShaderMaterial`
- [x] `ShadowMaterial`
- [x] `SpriteMaterial`
#### Objects
- [x] `Group`
- [ ] `LOD`
- [ ] `Line`
- [ ] `LineLoop`
- [ ] `LineSegments`
- [x] `Mesh`
- [ ] `InstancedMesh`
- [x] `Points`
- [x] `SkinnedMesh`
- [x] `Sprite`
#### Textures
- [ ] `Texture` (properties not editable #13882, #15695)
- [ ] `CubeTexture` (#13880)
- [ ] `VideoTexture`
#### Misc
- [x] Animations
- [ ] Audio
- [ ] Post-Processing
\* = Serializable, but not deserializable
** = Serializable, but not as the original geometry (i.e. `WireframeGeometry` is serialized as `BufferGeometry`)
##### Three.js version
- [x] Dev
- [x] r103
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] macOS
- [ ] Linux
- [ ] Android
- [ ] iOS
| Enhancement,Editor | medium | Major |
432,792,796 | storybook | Add url parameter for collapsing sidebar | **Is your feature request related to a problem? Please describe.**
Currently collapsing the sidebar is only available globally in the options, but not as a URL parameter.
For instance we can embed the story in a buttons section of a docs with all the correct options and knobs selected and present in the URL. However the sidebar remains open - if we collapse it in the config - it ruins the usability of the storybook standalone not embedded in an app.
**Describe the solution you'd like**
Ideally the selected sidebar state could be passed in the url
designsystem.com/?path=/story/breadcrumb&sidebar-isCollapsed=true
**Describe alternatives you've considered**
- Using just the iframe as an embed: lacks the excellent knob and full canvas functionality
- Collapse the sidebar in config: makes the actual app unusable as a standalone
| feature request,core | low | Major |
432,806,746 | pytorch | Different behavior of torch.nn.MultiMarginLoss on CPU/GPU Tensors | ## 🐛 Bug
According to the documentation, the input to torch.nn.MultiMarginLoss() should be:
- x (a 2D mini-batch Tensor)
- y (a 1D tensor of target class indices, **0 <= y < x.size(1)**).
For CPU tensors, it will raise RuntimeError when target (y) is out of range, however that seems not the case for GPU tensors.
## To Reproduce
Steps to reproduce the behavior:
```
>>> a = torch.rand(3, 1)
>>> b = torch.LongTensor(3).random_(0, 5)
>>> criterion = torch.nn.MultiMarginLoss()
>>> criterion(a, b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/afs/cs.pitt.edu/usr0/miz44/opt/miniconda3/envs/pytorch_test/lib/python3.7/site-packages/torch/nn/modules/module.py", line 494, in __call__
result = self.forward(*input, **kwargs)
File "/afs/cs.pitt.edu/usr0/miz44/opt/miniconda3/envs/pytorch_test/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 1138, in forward
weight=self.weight, reduction=self.reduction)
File "/afs/cs.pitt.edu/usr0/miz44/opt/miniconda3/envs/pytorch_test/lib/python3.7/site-packages/torch/nn/functional.py", line 2380, in multi_margin_loss
return torch._C._nn.multi_margin_loss(input, target, p, margin, weight, reduction_enum)
RuntimeError: invalid argument 3: target out of range at /pytorch/aten/src/THNN/generic/MultiMarginCriterion.c:43
```
The above is the expected behavior. However,
```
>>> criterion(a.cuda(), b.cuda())
tensor(0.4437, device='cuda:0')
```
I didn't quite understand how this value is calculated, as I suppose the behavior on CPU and GPU should be consistent?
When giving inputs with correct shape, the results on CPU and GPU tensors seem consistent.
```
>>> c = torch.rand(3, 5)
>>> criterion(c, b)
tensor(0.6158)
>>> criterion(c.cuda(), b.cuda())
tensor(0.6158, device='cuda:0')
```
## Expected behavior
An error message similar to the one for CPU tensors should be given to warn about the unexpected input.
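For reference, a plain-NumPy version of the loss with the explicit range check that the CPU path enforces (illustrative only, not the actual kernel; default margin=1, p=1, no class weights):

```python
import numpy as np

def multi_margin_loss(x, y, margin=1.0, p=1):
    """Mean over the batch of sum_{i != y} max(0, margin - x[y] + x[i])**p / C,
    with the target range check that the CPU path enforces."""
    n, c = x.shape
    if ((y < 0) | (y >= c)).any():
        raise ValueError("target out of range")
    losses = []
    for xi, yi in zip(x, y):
        terms = np.maximum(0.0, margin - xi[yi] + xi) ** p
        terms[yi] = 0.0  # the i == y term is excluded from the sum
        losses.append(terms.sum() / c)
    return float(np.mean(losses))
```

With this check in place, the out-of-range GPU example above would fail loudly instead of silently producing a value.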
## Environment
Collecting environment information...
PyTorch version: 1.1.0.dev20190411
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: CentOS Linux release 7.6.1810 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
CMake version: version 2.8.12.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
GPU 4: GeForce GTX 1080 Ti
GPU 5: GeForce GTX 1080 Ti
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.2
[pip] torch-nightly==1.1.0.dev20190411
[pip] torchvision-nightly==0.2.3
[conda] torch-nightly 1.1.0.dev20190411 pypi_0 pypi
[conda] torchvision-nightly 0.2.3 pypi_0 pypi | module: cuda,module: error checking,triaged | low | Critical |
432,808,360 | rust | Change powerpc64 base CPU | Currently, the following targets use "ppc64" as the base CPU: powerpc64-unknown-linux-gnu, powerpc64-unknown-linux-musl, powerpc64-unknown-freebsd.
In LLVM, "ppc64" implies "+altivec". This is unfortunate, because it means these Rust targets are unusable on devices based on the PowerPC e5500, which is fairly popular. In particular, #43610 is about the QorIQ P5040 SoC using a PowerPC e5500 core, and #59040 is about the QorIQ T1042 SoC using a PowerPC e5500 core.
Thoughts? | C-enhancement,T-compiler,O-PowerPC,A-target-specs | medium | Major |
432,809,206 | puppeteer | Support Electron and other content embedders | ## What's a "content embedder"?
The Chromium codebase is organized into layers:
- `//blink` - HTML rendering engine
- `//content` - an API for browser implementers (think of it as a library that you'd use if you were to write your own browser). Things like process model are handled here; uses `//blink`.
- `//chrome` - Chromium implementation (uses `//content`).
Content embedders are all the products that are based on the `//content` layer. Chromium, Chrome Headless, and ChromeCast are all different `//content` embedders.
Electron is a `//content` embedder as well.
## DevTools Protocol and content embedders
The majority of the DevTools API is implemented in `//blink` and `//content`. However, certain methods are supposed to be implemented by embedders. The DevTools team makes sure all the methods needed for Puppeteer operation are supported by both Chromium and Headless.
Electron is, however, missing out.
## What exactly goes wrong?
- Since version v1.5.0, when Puppeteer connects to a browser it requests all existing browser
contexts with the `Target.getBrowserContexts()` protocol method.
- Method `Target.getBrowserContexts()` is supposed to be implemented by content embedders.
- Electron doesn't implement the method, so Puppeteer's connection to Electron fails (#3793).
## What can be done?
**Option 1: Defer to Embedders.**
Ideally, `//content`-embedders implement all necessary DevTools protocol methods on their end. This way they'll guarantee 100% compatibility with Puppeteer API. This, however, is a major undertaking for embedders; not sure if there's any interest in this.
**Option 2: support clicking/typing/evaluation**
Alternatively, we can aim for a "good-enough" compatibility with `//content` embedders. For example, we can make sure that things like clicking, navigation, typing and evaluation work fine with electron app (and add these electron tests to our CI). These should be enough to drive many default testing and automation scenarios.
# 🚀 Vote!
- If there's an interest in **Option 1**, please file a bug on the repository of the content embedder you'd like to be supported and ask them to implement the `Target.getBrowserContexts()` method. Plz cross-post the link here.
- If there's an interest in **Option 2**, please 👍 this issue and share your usecase.
| feature,chromium | medium | Critical |
432,837,352 | godot | Attach Node Script dialog should be wider in order to show more of script path | The problem can be seen here:
<img width="532" alt="Screen Shot 2019-04-13 at 1 24 07 PM" src="https://user-images.githubusercontent.com/6002340/56079070-85828080-5def-11e9-8d59-1ec24ec76933.png">
Each time I create a new script I want to make sure it's in the right place (path) and has the correct file name. The problem is that I have to scroll the text field to the right each time to see the complete path.
It would be good if the dialog was made wider and/or resizable. | enhancement,topic:editor,usability | low | Minor |
432,843,525 | TypeScript | Additive inverse of number literals | ## Search Terms
Using unary minus on variable that is of number-literal type. Negative number literals.
## Suggestion
This is a specific case of #26382. My proposal is simply that if we apply unary minus on an expression whose type is a union of number literal types, the result should be a union of their additive inverses:
```typescript
declare var a: 1 | 2 | 3;
const b = -a; // should be -1 | -2 | -3
```
Furthermore we could add a unary minus type operator:
```typescript
-(42) === -42
-(A | B) === -A | -B
-number === number
-(not number) // error
```
Both should be very easy and uncontroversial to implement.
## Use Cases
```typescript
function compare(a: C, b: C): -1 | 0 | 1;
function reverseCompare(a: C, b: C): -1 | 0 | 1
{
return -compare(a, b); // shouldn't throw an error
}
```
The type operator would be there just for the sake of completeness.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
432,854,228 | godot | AnimationPlayer: Adding Animations became more tedious in 3.1 instead of easier compared to 3.0.6 because of UI change | **Godot version:**
3.1 stable
**Issue description:**
Unlike the 3D workflow, the 2D workflow has no built-in support to import animations from external sources into the AnimationPlayer. https://github.com/godotengine/godot/issues/18269
While adding 2D animations to the AnimationPlayer has always been very time consuming and tedious for this reason, it has become **even more tedious** with the new AnimationPlayer UI, which hides the "New", "Duplicate", "Rename", etc. functionality in a drop-down menu.

In 3.0.6, these were one-click buttons.

This is especially a problem for games with more than one directional movement.
I request to have these UI buttons back on the panel for easy one-click direct access: these are operations that are repeated hundreds of times, and hiding them in a drop-down does nothing but add complexity, waste time, and make the experience less user-friendly.
(Imagine you are an Animator and you have to do this full time, all day every day. I'd rather die.) | enhancement,discussion,topic:editor,usability | medium | Critical |
432,870,431 | pytorch | [Caffe2] Retraining saved model | https://github.com/caffe2/tutorials/blob/master/CIFAR10_Part2.ipynb
The above documentation is incorrect. Also, the Exporter fails if a model has Batch Norm in it, and following the same steps in the above document for retraining a model gives a segmentation fault.
Please advise. I also posted this on PyTorch Discuss but there has been no response.
At the very least, a good working document on how to load a model (one that has Batch Norm) and retrain it would be really helpful.
432,886,601 | TypeScript | Suggestion: shorthand for templated type to extend and default to the same type | ## Search Terms
default extends equals generics = template types is
## Suggestion
Can we have a way to declare that a templated (generic) type both `extends` and is `=` a type value without repeating the type value?
```typescript
declare function example<T extends Base = Base>(value: T): T;
```
A clean way to do this would be to allow the `is` keyword following a generic type:
```typescript
declare function example<T is Base>(value: T): T;
```
Failing that, how about omitting the first type value?
```typescript
declare function example<T extends = Base>(value: T): T;
```
## Use Cases
Generic parameters with extended and default values set to the same are a reasonable pattern:
```typescript
declare function querySelector<E extends Element = Element>(selectors: string): E | null;
```
In larger type definitions with many generics, these `extends`+`=` definitions can get a little cumbersome:
```typescript
type BitAddThree<
A extends Bit = Bit,
B extends Bit = Bit,
C extends Bit = Bit
> =
[A, B, C] extends [0, 0, 0] ? [0, 0] :
// etc. etc.
[Bit, Bit, Bit];
```
## Examples
```typescript
declare function querySelector<E is Element>(selectors: string): E | null;
declare function querySelector<E extends = Element>(selectors: string): E | null;
```
```typescript
type BitAddThree<A is Bit, B is Bit, C is Bit> = // ...
type BitAddThree<A extends = Bit, B extends = Bit, C extends = Bit> = // ...
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
432,891,038 | opencv | Memory corruption in computing optical flow | ##### System information (version)
- OpenCV => 3.1.0
- Operating System / Platform => Ubuntu 16.04
- Compiler => cmake (3.14.0) , g++ (5.4.0), C++14
##### Detailed description
Looks like there is a problem with `cv::calcOpticalFlowPyrLK`. I checked and made sure that the inputs (prev image, next image, prev pts, next pts) are in fact valid.
This is in fact the line that invokes the above function.
```c++
const double win_size = 30.0;
const int klt_max_iter = 30;
const double klt_eps = 0.001;
vector<uchar> status;
vector<float> error;
cv::TermCriteria termcrit(cv::TermCriteria::COUNT+cv::TermCriteria::EPS,
klt_max_iter, klt_eps);
cv::calcOpticalFlowPyrLK(img_ref, img_cur,
points_ref, points_cur,
status, error,
cv::Size2i(win_size, win_size),
4, termcrit, cv::OPTFLOW_USE_INITIAL_FLOW);
```
The exact backtrace from gdb is
```bash
(gdb) bt
#0 0x00007ffff6112428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007ffff611402a in __GI_abort () at abort.c:89
#2 0x00007ffff61547ea in __libc_message (do_abort=do_abort@entry=2, fmt=fmt@entry=0x7ffff626ded8 "*** Error in `%s': %s: 0x%s ***\n") at ../sysdeps/posix/libc_fatal.c:175
#3 0x00007ffff615d37a in malloc_printerr (ar_ptr=<optimized out>, ptr=<optimized out>, str=0x7ffff626e030 "free(): invalid next size (normal)", action=3) at malloc.c:5006
#4 _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:3867
#5 0x00007ffff616153c in __GI___libc_free (mem=<optimized out>) at malloc.c:2968
#6 0x00007ffff4d33faa in void cv::pyrDown_<cv::FixPtCast<unsigned char, 8>, cv::PyrDownVec_32s8u>(cv::Mat const&, cv::Mat&, int) () from /usr/local/lib/libopencv_imgproc.so.3.1
#7 0x00007ffff4d3bbd3 in cv::pyrDown(cv::_InputArray const&, cv::_OutputArray const&, cv::Size_<int> const&, int) () from /usr/local/lib/libopencv_imgproc.so.3.1
#8 0x00007ffff5426d72 in cv::buildOpticalFlowPyramid(cv::_InputArray const&, cv::_OutputArray const&, cv::Size_<int>, int, bool, int, int, bool) () from /usr/local/lib/libopencv_video.so.3.1
#9 0x00007ffff542b2b6 in cv::calcOpticalFlowPyrLK(cv::_InputArray const&, cv::_InputArray const&, cv::_InputArray const&, cv::_InputOutputArray const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::Size_<int>, int, cv::Ter
mCriteria, int, double) () from /usr/local/lib/libopencv_video.so.3.1
#10 0x00007ffff7b9717c in svo::initialization::trackKlt(boost::shared_ptr<svo::Frame>, boost::shared_ptr<svo::Frame>, std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > >&, std::vector<cv::Point_<float>, std::allocat
or<cv::Point_<float> > >&, std::vector<Eigen::Matrix<double, 3, 1, 0, 3, 1>, std::allocator<Eigen::Matrix<double, 3, 1, 0, 3, 1> > >&, std::vector<Eigen::Matrix<double, 3, 1, 0, 3, 1>, std::allocator<Eigen::Matrix<double, 3, 1, 0,
3, 1> > >&, std::vector<double, std::allocator<double> >&) () from /home/kv/slam/svo/svvo/lib/libsvo.so
#11 0x00007ffff7b9801d in svo::initialization::KltHomographyInit::addSecondFrame(boost::shared_ptr<svo::Frame>) () from /home/kv/slam/svo/svvo/lib/libsvo.so
#12 0x00007ffff7b67849 in svo::FrameHandlerMono::processSecondFrame() () from /home/kv/slam/svo/svvo/lib/libsvo.so
#13 0x00007ffff7b69d63 in svo::FrameHandlerMono::addImage(cv::Mat const&, double) () from /home/kv/slam/svo/svvo/lib/libsvo.so
#14 0x0000000000408709 in svo::BenchmarkNode::runFromFolder() ()
#15 0x0000000000406a85 in main ()
(gdb)
```
Also, to note:
This problem arises when I try to compute optical flow at the 3rd time step, i.e., between `frame 0` and `frame 2`. Since there was very little disparity between `frame 0` and `frame 1`, I had to compute optical flow between `frame 0` and `frame 2` to ensure there is enough disparity.
Also, here is the [pastebin link](https://pastebin.com/92B00TkB) for a more detailed backtrace. | bug,category: video,incomplete | low | Critical |
432,897,902 | rust | Add CLI argument to set favicon URL | T-rustdoc,C-feature-request,A-rustdoc-ui,A-CLI | low | Minor |
432,910,773 | godot | AnimationPlayer: adding keys to Method Track has become unintuitive and complicated |
**Godot version:**
3.1 stable
**Issue description:**
When you want to add a keyframe to an empty Call Method track, you get a window saying "select method".
This is not what I want. I want to add a keyframe, not select a method. Even if you have already written a function in your script, you cannot select it this way. Searching for your function won't work unless you have saved your script.
Inserting a key should be one click, not something that forces me to follow 5 very specific steps first, such as having to write the function into the script beforehand. Maybe I want to think about that later. Maybe I want to change it on the fly and, for now, think about timing the placement of keyframes.
It's a massive step backwards from the 3.0.6 Call Func track in terms of usability. There I could just press the small + at the end of the track and it would create a keyframe for me. The same functionality worked on other tracks, so I did not need to learn anything new. I would like to have that plus button back.

**Steps to reproduce:**
New Animation > New Method track > RMB click > add key | enhancement,discussion,topic:editor,usability | low | Major |
432,944,849 | godot | Autocompletion doesn't work if other scene is opened | **Godot version:**
3.1 stable
**OS/device including version:**
Windows 7 x64
**Steps to reproduce:**
**_Prerequisite_**: 2 scenes, one of them should have at least one node
Ex.:
```
- Scene 1
- Node
- Scene 2
```
1. Create a script for Scene 1,
2. While the Scene 1 tab is open, start typing `$` / `get_node("` in `_ready`
**_Intermediate result_**: autocompletion suggests Node. Fine.
3. Change scene tab to Scene 2,
4. Repeat step 2
**_Result_**: no autocompletion list
**_Expected_**: autocompletion list with Node
**Minimal reproduction project:**
[test14042019.zip](https://github.com/godotengine/godot/files/3077141/test14042019.zip)

| bug,topic:editor,confirmed | low | Critical |
432,958,506 | pytorch | add stable distribution in torch.distributions | ## 🚀 Feature
Add stable distribution in torch.distributions
## Motivation
I would like to evaluate the pdf of a stable distribution on the GPU. So far, I have to transfer the tensor to NumPy, use SciPy to compute the pdf, and transfer the result back to a GPU tensor. This round trip has become a bottleneck in my code.
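For context on why this needs a dedicated kernel: the density of a general stable law has no closed form and must be recovered numerically from its characteristic function, which is the work SciPy currently does on the CPU. A minimal pure-Python sketch for the symmetric (beta = 0) case, for illustration only (the function name is made up, and a real implementation would vectorize this on the GPU):

```python
import math

def sym_stable_pdf(x, alpha, n=100_000, t_max=50.0):
    """Density of a symmetric alpha-stable law at x, obtained by inverting
    its characteristic function exp(-|t|**alpha):
        pdf(x) = (1/pi) * integral_0^inf exp(-t**alpha) * cos(t*x) dt
    (composite trapezoidal rule on [0, t_max]; a sanity check, not a kernel)."""
    f = lambda t: math.exp(-t ** alpha) * math.cos(t * x)
    dt = t_max / n
    total = 0.5 * (f(0.0) + f(t_max))
    for i in range(1, n):
        total += f(i * dt)
    return total * dt / math.pi

# Sanity checks against known special cases:
print(sym_stable_pdf(0.0, alpha=1.0))  # Cauchy: 1/pi            ~ 0.3183
print(sym_stable_pdf(0.0, alpha=2.0))  # N(0, 2): 1/(2*sqrt(pi)) ~ 0.2821
```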
## Pitch
Add stable distribution in torch.distributions
## Alternatives
No
## Additional context
N/A
| module: distributions,feature,low priority,triaged | low | Major |
433,005,422 | opencv | LineIterator pos() overflows when coordinates and/or base data type is large |
##### System information (version)
- OpenCV => 3.4
- Operating System / Platform => Windows 64 Bit
- Compiler => gcc 5.4.0
##### Detailed description
The `pos()` function of `LineIterator` (from the `imgproc` module) returns incorrect result under certain conditions due to integer overflow.
Specifically, `p.y * step` can overflow `int` when either or both of the operands are large:
https://github.com/opencv/opencv/blob/46353564353e0e6c6d82a47f52283957ddd37e8b/modules/imgproc/include/opencv2/imgproc.hpp#L4746
##### Steps to reproduce
```.cpp
// Pardon the weird example; it's from a specific use case at work, which is how I discovered this bug
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main() {
    Mat testImg = Mat::zeros(21481, 72320, CV_8UC2);
    LineIterator testIt(testImg, Point(63894, 21163), Point(63895, 21162));
    for (int i = 0; i < testIt.count; ++testIt, i++) {
        Point p = testIt.pos();
        cout << i << ": " << p << endl;
    }
    return 0;
}
```
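To make the failure mode concrete: for this matrix, `step` is 72320 × 2 = 144640 bytes (CV_8UC2 has a 2-byte element), so the row index 21163 from the reproduction pushes `p.y * step` past `INT_MAX`. A quick pure-Python simulation of the 32-bit product (for illustration; this is not OpenCV code):

```python
import ctypes

cols, elem_size = 72320, 2             # CV_8UC2: 2 channels x 1 byte each
step = cols * elem_size                # bytes per row = 144640
y = 21163                              # row index from the reproduction above

exact = y * step                       # Python ints never overflow
wrapped = ctypes.c_int32(exact).value  # what a 32-bit `p.y * step` produces

print(exact)    # 3061016320 (> INT_MAX = 2147483647)
print(wrapped)  # -1233950976, so pos() computes a bogus point
```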
| bug,category: imgproc,affected: 3.4 | low | Critical |
433,024,695 | rust | async/await: cannot move out of captured variable in an `Fn` closure | Hello,
I recently ran into some strange behavior; please comment on whether this is a bug or not.
Compiler version: 1.35.0-nightly 2019-04-12 99da733f7f38ce8fe684
```rust
#![feature(futures_api, async_await, await_macro)]
use std::future::Future;
struct Task<T> {
task: T,
}
impl<T> Task<T>
where
T: Fn(),
{
fn new(task: T) -> Self {
Self { task }
}
fn execute(&self) {
(self.task)();
}
}
struct AsyncTask<T> {
task: T,
}
impl<T, F> AsyncTask<T>
where
T: Fn() -> F,
F: Future<Output = ()>,
{
fn new(task: T) -> Self {
Self { task }
}
async fn execute(&self) {
await!((self.task)());
}
}
fn main() {
let string = "Hello, World!".to_string();
let _task = Task::new(move || {
println!("{}", string);
});
let string = "Hello, World!".to_string();
let _async_task = AsyncTask::new(async move || {
println!("{}", string);
});
}
```
~~([Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=28a27aa660e7ed576b8893ab389026e5))~~
([Updated Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=8d1f9f3ac5ece51ed6ead2da96e9aa44))
Errors:
```
Compiling playground v0.0.1 (/playground)
error[E0507]: cannot move out of captured variable in an `Fn` closure
--> src/main.rs:47:52
|
46 | let string = "Hello, World!".to_string();
| ------ captured outer variable
47 | let _async_task = AsyncTask::new(async move || {
| ____________________________________________________^
48 | | println!("{}", string);
49 | | });
| |_____^ cannot move out of captured variable in an `Fn` closure
error: aborting due to previous errors
For more information about this error, try `rustc --explain E0507`.
error: Could not compile `playground`.
To learn more, run the command again with --verbose.
``` | A-closures,T-compiler,A-async-await,AsyncAwait-Triaged,requires-nightly,fixed-by-async-closures | low | Critical |
433,025,222 | TypeScript | Type Math.min & Math.max using generic |
## Search Terms
- Math.min generic
- Math.max generic
## Suggestion
Currently, Math.min is typed as:
```js
min(...values: number[]): number;
```
However, I would like to change the definition to:
```js
min<T extends number>(...values: [T, ...T[]]): T;
min(): number; // Because this will return Infinity
```
Because Math.min should never return things that aren't its input. (Except when non-number is passed in, then it'll return `NaN`. But that's impossible with this typing.)
(Okay, there's another case: if no value is passed in, it'll return `Infinity`. Unfortunately, there's no literal type for `Infinity` so we can't type that correctly. ~And AFAIK there's no way to force at least 1 parameter with rest args.~ **Updated**: there is, using tuple type: `[T, ...T[]]`. Proposal updated. Still, it would be nice to have literal `Infinity` type.)
(The same applies with Math.max, except `Infinity` is now `-Infinity`)
## Use Cases
Let's say I have this type:
```js
type TBankNoteValues = 1 | 5 | 10 | 20 | 50 | 100;
```
And I want to know what the highest-value banknote in the array is. I could use Math.max to find that out. But with the current typing, the return value isn't guaranteed to be TBankNoteValues.
```js
let values: TBankNoteValues[] = [50, 100, 20];
let maxValues = Math.max(...values); // number
```
And now I can't pass maxValues to a function that expects TBankNoteValues anymore.
## Examples
```js
type TBankNoteValues = 1 | 5 | 10 | 20 | 50 | 100;
let values: TBankNoteValues[] = [50, 100, 20];
let maxValues = Math.max(...values);
// Current type: number
// Expected type: TBankNoteValues
```
[Playground link](https://www.typescriptlang.org/play/index.html#src=type%20TBankNoteValues%20%3D%201%20%7C%205%20%7C%2010%20%7C%2020%20%7C%2050%20%7C%20100%3B%0A%0Alet%20values%3A%20TBankNoteValues%5B%5D%20%3D%20%5B50%2C%20100%2C%2020%5D%3B%0Alet%20maxValues%20%3D%20Math.max(...values)%3B)
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- ~~Shouldn't be any. Anything that expects the old signature, TypeScript should just infer `T` to `number`.~~
- Now that I think about it, this makes the return type of the function narrower. So, it could make type inference on a variable declaration change unexpectedly. An explicit type annotation should fix this.
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | medium | Critical |