| column | dtype | values / length |
| --- | --- | --- |
| id | int64 | 393k – 2.82B |
| repo | stringclasses | 68 values |
| title | stringlengths | 1 – 936 |
| body | stringlengths | 0 – 256k |
| labels | stringlengths | 2 – 508 |
| priority | stringclasses | 3 values |
| severity | stringclasses | 3 values |
275,529,099
rust
vtable addresses differ based on the number of codegen units
I'm seeing the ptr::eq function from the standard library sometimes return false when it should return true. This is happening in debug mode in nightly and beta, but not stable, and not in any version in release mode. Here's the simplest I could get a reproduction of the issue: use std::ptr; use container::Container; trait Trait {} struct Struct {} impl Trait for Struct {} mod container { use Trait; pub(crate) struct Container<'a> { pub(crate) item: &'a Trait, } } impl<'a> Container<'a> { fn new(item: &'a Struct) -> Container<'a> { Container { item } } } fn main() { let item = Struct {}; let container = Container::new(&item); assert!(ptr::eq(container.item, &item)); } Expected the assertion to pass, and it fails on beta and nightly, in debug mode only. ## Meta `rustc +beta --version --verbose`: rustc 1.22.0-beta.3 (cc6ed0640 2017-11-13) binary: rustc commit-hash: cc6ed0640fbcd2dff95b4532fd12aa0d6c545f28 commit-date: 2017-11-13 host: x86_64-pc-windows-msvc release: 1.22.0-beta.3 LLVM version: 4.0 `rustc +nightly --version --verbose`: rustc 1.23.0-nightly (5041b3bb3 2017-11-19) binary: rustc commit-hash: 5041b3bb3d953a14f32b15d1e41341c629acae12 commit-date: 2017-11-19 host: x86_64-pc-windows-msvc release: 1.23.0-nightly LLVM version: 4.0
A-codegen,A-trait-system,T-lang,T-compiler,C-bug
medium
Critical
275,539,148
go
cmd/compile: cannot compile valid (but esoteric) self-recursive interface
https://play.golang.org/p/YSkuYfVQHI Esoteric case. Just for reference.
compiler/runtime
low
Minor
275,578,325
vscode
ANSI color support in edit buffer
Now that ANSI colors are supported in the debug console, I'd like to see the same in the editor, something like [SublimeANSI](https://github.com/aziz/SublimeANSI). - VSCode Version: 1.19 - OS Version: macOS Steps to Reproduce: 1. open a file which contains ANSI color escape sequences, or simply `ls --color | code` 2. the colorless raw ANSI codes are displayed, rather than colored text. Reproduces without extensions: Yes
feature-request,editor-rendering
high
Critical
275,609,945
go
proposal: net: add support for "let localhost be localhost"
The I-D https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost became an IETF dnsop-wg draft. Once the I-D is published as an RFC, it would be better, for the sake of convenience, to support the feature rather than saying "sorry, there's no direct relationship between the IPv4 loopback address prefix ```127.0.0.0/8``` or the IPv6 loopback address ```::1``` and the name ```localhost```; that's just convention". That said, from a security perspective the resolution of ```localhost``` still remains a burden on applications.
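As a rough illustration only (this helper and its placement are assumptions, not a proposed `net` API), "let localhost be localhost" boils down to special-casing the name before ever consulting DNS:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// lookupHost treats "localhost" (and any subdomain of it) as loopback
// unconditionally, instead of trusting whatever the resolver returns.
// Purely a sketch of the I-D's intent.
func lookupHost(host string) ([]string, error) {
	h := strings.TrimSuffix(strings.ToLower(host), ".")
	if h == "localhost" || strings.HasSuffix(h, ".localhost") {
		return []string{"127.0.0.1", "::1"}, nil
	}
	return net.LookupHost(host)
}

func main() {
	addrs, err := lookupHost("localhost")
	fmt.Println(addrs, err) // [127.0.0.1 ::1] <nil>
}
```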
Proposal,Proposal-Hold
low
Major
275,612,605
TypeScript
Sort jsdoc parameter suggestions by argument position
From https://github.com/Microsoft/vscode/issues/35124 **TypeScript Version:** 2.7.0-dev.20171116 **Code** For the code: ```ts /** * @param | */ function foo(z, a) {} ``` Trigger completions at the `|` after `@param` **Expected behavior:** `z` is returned first, followed by `a`. The `sortText` can be used for this. **Actual behavior:** `a` and `z` are returned with identical sort weights: ``` [Trace - 11:33:07 PM] Sending request: completions (176). Response expected: yes. Current queue length: 0 Arguments: { "file": "/Users/matb/projects/san/y.ts", "line": 2, "offset": 11, "includeExternalModuleExports": true } [Trace - 11:33:07 PM] Response received: completions (176). Request took 2 ms. Success: true Result: [ { "name": "a", "kind": "parameter", "kindModifiers": "", "sortText": "0" }, { "name": "z", "kind": "parameter", "kindModifiers": "", "sortText": "0" } ] ```
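As a minimal sketch of the requested behaviour (plain TypeScript, not the actual services code; the entry shape is assumed from the trace above):

```ts
// Weight each parameter completion by its position in the signature, so
// completions for `function foo(z, a)` sort as z ("0") before a ("1").
const parameterNames = ["z", "a"]; // in declaration order

const entries = parameterNames.map((name, index) => ({
  name,
  kind: "parameter",
  kindModifiers: "",
  sortText: String(index),
}));

console.log(entries.map(e => `${e.name}:${e.sortText}`).join(", ")); // "z:0, a:1"
```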
Suggestion,Help Wanted,Good First Issue,Domain: Completion Lists,VS Code Tracked,PursuitFellowship
low
Critical
275,614,822
TypeScript
TypeScript: aggregate errors for file from all projects
_From @OliverJAsh on September 30, 2017 16:20_ I have a project with three folders, two of which are TS projects and the third of which is shared between them: - shared - service-worker (with `tsconfig.json`) - browser (with `tsconfig.json`) Both "browser" and "service-worker" depend on "shared". However, when I open a file in "shared", I only see errors related to one of the `tsconfig.json`s—usually this will be the `tsconfig.json ` used for the file I navigated from previously. It would be great if VSCode could aggregate all errors for the current file for all matching TS projects. _Copied from original issue: Microsoft/vscode#35432_
Bug,Domain: TSServer
medium
Critical
275,621,320
go
net: clarification on the use of 0.0.0.0/8, ::/128 and ::ffff:0.0.0.0/128 as destination
The fixes for #6290 and #18806 revealed that the use of special IP addresses, 0.0.0.0/8 or ::/128, as the destination is confusing, especially when working with datagram transport protocols as described in #22811. It would be better to clarify and to implement consistent API semantics to avoid unnecessary confusion (some of the special addresses are prohibited from being used as a destination IP address, so the existing behavior is a Go-specific extended interpretation). Just FYI: - 0.0.0.0/8 is defined in section 3.2.1.3 of RFC 1122 as "this host on this network." For historical reasons, some IP stack implementations allow using 0.0.0.0/32 as the destination address of IP packets, but in general it is prohibited as a destination address. - ::/128 is defined in section 2.5.2 of RFC 4291 as "the unspecified address." It is clearly stated that the unspecified address must not be used as the destination address, and I've never seen an IP stack implementation that allows using the unspecified address as a destination (I guess the IPv6 protocol conformance tools did a great job.)
help wanted,NeedsInvestigation
low
Minor
275,689,141
godot
Implement InputEventGesture (incl. Pan and Magnify) for all platforms
**Operating system or device, Godot version, GPU Model and driver (if graphics related):** Current master (5a23136). **Issue description:** `InputEventGesture` was added to core in #12573, together with `InputEventPanGesture` and `InputEventMagnifyGesture`, but they're only implemented for macOS so far. They should also be implemented for all platforms where they are relevant. I *guess* they would make sense on all platforms since gestures can be done on touchscreens as well as touchpads, so the list would be: - [ ] Android (#25474 - reverted) - [ ] iOS - [ ] Javascript - [ ] Wayland Linux and (if possible) *BSDs - [ ] X11 Linux and (if possible) *BSDs - [x] macOS (#12573) - [ ] UWP - [ ] Windows See #12573 for the base API and its implementation on macOS. If more types of gestures are wanted, they should likely be discussed in their own issue, let's keep this one focused on pan and magnify.
enhancement,platform:windows,platform:linuxbsd,platform:web,platform:ios,platform:android,topic:core,topic:porting,platform:uwp,topic:input
medium
Critical
275,708,423
rust
Infer if type through indirection.
It surprised me that the first example works, while the second doesn't. https://play.rust-lang.org/?gist=a1bee92067f87a67240b4ccd2bc9a6a4&version=stable ```rust trait Foo {} struct A; impl Foo for A {} struct B; impl Foo for B {} pub fn test() { let a = A; let b = B; // Works. let x: &Foo = if true { &a } else { &b }; // Doesn't work. let y = if true { &a } else { &b }; let z: &Foo = y; } ``` ``` error[E0308]: if and else have incompatible types --> src/main.rs:17:13 | 17 | let y = if true { &a } else { &b }; | ^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `A`, found struct `B` | = note: expected type `&A` found type `&B` ``` It seems like adding this `let` shouldn't change anything but it seems the required type of the if statement isn't propagated unless it is directly used. The same behaviour can be found when using `func(if ...)` vs `let x = if ...; func(x)` (sorry if this is known, I couldn't find anything)
C-enhancement,A-inference
low
Critical
275,731,948
angular
request| feat(form): Ability to programmatically submit an AbstractControl, NgForm or a FormGroupDirective
## I'm submitting a... <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior In standard HTML/TypeScript, without Angular, it's possible to get an HTMLFormElement and call either `.submit()` or `.reset()` on it to submit it programmatically. If we want to do this in an Angular2 Component, we will need to either: - have a `@ViewChild('myForm') formElementRef`, cast the `formElementRef.nativeElement` to `HTMLFormElement` and then call `.submit()` on it (sketched below) - have a hidden `input type="submit"` (or similar), again use a `@ViewChild` on it and trigger it programmatically. Note that in the case of `.reset()` this is very well supported and documented. ## Expected behavior Being able to trigger the `.submit()` operation directly from a `FormGroup`/`FormArray` or even an `NgForm` would be really helpful. There are cases where, for practical UI/UX reasons, the submit button cannot be in the `<form>` element itself. Although this is far from ideal, it's still something that can be and is done. ## Minimal reproduction of the problem with instructions `<button #submit>Submit!</button> <form [formGroup]="form" (submit)="submitted()">[...]</form>` Then try to submit the form without using an `ElementRef` or a hidden submit inside the `<form>`. ## What is the motivation / use case for changing the behavior? Some UIs will require us to have the submit element outside of the `<form>`, or to submit a form relying on different mechanisms/conditions (e.g., time-triggered for a game-like UI). ## Environment <pre><code> Angular version: 5.1.0-beta.1 </code></pre>
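For reference, a minimal sketch of the `@ViewChild` workaround described above (component and template names are made up for illustration):

```ts
import { Component, ElementRef, ViewChild } from '@angular/core';
import { FormGroup } from '@angular/forms';

@Component({
  selector: 'app-example-form',
  template: `
    <form #myForm [formGroup]="form" (submit)="submitted()">...</form>
    <!-- the submit button lives outside the <form> -->
    <button (click)="submitProgrammatically()">Submit!</button>
  `,
})
export class ExampleFormComponent {
  @ViewChild('myForm') formElementRef: ElementRef;

  form = new FormGroup({});

  submitted() {}

  submitProgrammatically() {
    // Reach around the forms API and poke the native element directly,
    // which is exactly the workaround the feature request wants to avoid.
    // Note: the native submit() does not dispatch a submit event, so the
    // (submit) handler above won't run.
    (this.formElementRef.nativeElement as HTMLFormElement).submit();
  }
}
```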
feature,state: Needs Design,freq2: medium,area: forms,feature: under consideration
medium
Critical
275,765,024
rust
We should lint when !-fallback results in questionable code
An example is the following code: ```rust #![feature(never_type)] use std::error::Error; pub trait Deserialize: Sized { fn deserialize() -> Result<Self, Box<Error>>; } impl Deserialize for ! { fn deserialize() -> Result<!, Box<Error>> { Err("oh geez")?; panic!() } } fn foo() -> Result<usize, Box<Error>> { Deserialize::deserialize()?; Ok(22) } ``` Here, the type of `Deserialize::deserialize()?` defaults to `!`, making the `Ok(22)` unreachable. It might be unclear to users why the `Ok(22)` is unreachable since the `!`-fallback happens implicitly, so it would be good to point at it with an auxiliary note on the warning. eg. something like `note: unconstrained type variable fell back to !`
C-enhancement,A-lints,A-diagnostics,T-compiler,F-never_type
low
Critical
275,770,703
angular
by default not send undefined params in url
## I'm submitting a... - [x] Feature request ## CURRENT To keep it simple, I will take a GET request as an example. Here is a simple method making an HTTP call with an optional parameter: ``` public getCity(cityId?: number){ this.get('city', { params: {city: cityId}}) } ``` `cityId?` is optional, so you can call the method like `this.getCity()`; if you do so, Angular will make a request to this URL: `http://localhost/city?city=undefined` ## EXPECTED I think not sending undefined params (not including them in HttpParams if they are undefined) should be the default and most common use case, because the backend can already treat a missing param as undefined. ## MOTIVATION The advantages are: URLs have length limits, so sending only the essential/useful data we really need matters. I don't understand why undefined params are sent by default. Something like [Include.NON_NULL](https://github.com/FasterXML/jackson-annotations/blob/master/src/main/java/com/fasterxml/jackson/annotation/JsonInclude.java#L107) in Jackson. What do you think about not sending them by default? Or adding an option to the [options{}](https://angular.io/api/common/http/HttpClient#get) object, just to avoid this big chunk of code: ``` let cleanedParams = new HttpParams(); req.params.keys().forEach(x => { if(req.params.get(x) != undefined) cleanedParams = cleanedParams.append(x, req.params.get(x)); }) const clonedRequest = req.clone({ params:cleanedParams }); return next.handle(clonedRequest) ``` Otherwise, yes, we can fortunately work around it with an interceptor or with: ``` public getCity(cityId?: number){ let params = {}; if(cityId) params.city=cityId; this.get('city', params) } ``` But when you have 10/20 params or 500/1k HTTP requests, you are happy if you can avoid this condition :-D I haven't thought through other backend endpoints, so I'm reasoning from my own use case. The idea should only apply to undefined params, not to null params. For instance, in my Java endpoint, if I expect to receive `@PathVariable Long cityId`, I can receive null but not undefined. ## Environment Angular 5.0.2 Browser tested: - [x] Chrome console (version 21.11.2017)
feature,breaking changes,area: router,feature: under consideration
high
Critical
275,814,519
youtube-dl
VEVO not working
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.11.15** - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [x] Bug report (encountered problems with youtube-dl) ### Full verbose output ``` Microsoft Windows [Version 10.0.16299.64] (c) 2017 Microsoft Corporation. Alle Rechte vorbehalten.
C:\...\youtube-dl>youtube-dl -F -v https://www.vevo.com/watch/elena-paparizoy/(official-music-video)/GRUV71700172 [debug] System config: [] [debug] User config: ['--no-check-certificate', '--ignore-errors', '--no-continue', '--no-overwrites', '--proxy', ''] [debug] Custom config: [] [debug] Command-line args: ['-F', '-v', 'https://www.vevo.com/watch/elena-paparizoy/(official-music-video)/GRUV71700172'] [debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252 [debug] youtube-dl version 2017.11.15 [debug] Python version 3.4.4 - Windows-10-10.0.16299 [debug] exe versions: none [debug] Proxy map: {} [Vevo] Retrieving oauth token [Vevo] GRUV71700172: Downloading api video info ERROR: string indices must be integers Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\common.py", line 506, in _request_webpage File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\YoutubeDL.py", line 2196, in urlopen File "C:\Python\Python34\lib\urllib\request.py", line 470, in open File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response File "C:\Python\Python34\lib\urllib\request.py", line 508, in error File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default urllib.error.HTTPError: HTTP Error 400: Bad Request During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\vevo.py", line 178, in _call_api File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\common.py", line 684, in _download_json File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\common.py", line 638, in _download_webpage File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\common.py", line 535, in _download_webpage_handle File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\common.py", line 515, in _request_webpage youtube_dl.utils.ExtractorError: Failed to download video info: HTTP Error 400: Bad Request (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\YoutubeDL.py", line 784, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\common.py", line 437, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\vevo.py", line 194, in _real_extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\vevo.py", line 182, in _call_api File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp10f7prtw\build\youtube_dl\extractor\vevo.py", line 182, in <listcomp> TypeError: string indices must be integers ```
geo-restricted
low
Critical
275,849,413
rust
Rustdoc: replace Self with concrete types
It's pretty common to write code like: ```rust pub struct Foo { field: usize, } impl Foo { pub fn new() -> Self { Self { field: 0 } } } impl Default for Foo { fn default() -> Self { Self::new() } } ``` Generated docs: ![](https://screenshots.firefoxusercontent.com/images/702054a3-6243-4055-87ef-28bf5bee98d8.png) This is a simple example, but generally I find documentation much easier to browse when `Self` is replaced with concrete types (i.e. `Foo`). Because of that, I tend to write concrete types in public interfaces, but use `Self` everywhere else because it's so easy to type. It'd be great if rustdoc could automatically replace all occurrences of `Self` in the interface with concrete types. cc @jeehoonkang
T-rustdoc,C-feature-request
low
Major
275,852,483
TypeScript
`+=` on `string | undefined` should narrow its operand to `string`
**TypeScript Version:** 2.6.1 **Code** ```ts let a: string | undefined; a = a + "a"; a; // string let b: string | undefined; b += "a"; b; // should be string, but is still string | undefined ``` **Expected behavior:** Both notations should be equivalent.
Bug
low
Critical
275,855,466
pytorch
Memory leak when doing backward with grad as yourself
@zdevito You said you wanted to make a python repro? ``` void backward(std::vector<Variable> outputs) { variable_list varlst; function_list funclst; for (auto v : outputs) { funclst.emplace_back(v.grad_fn(), v.output_nr()); varlst.emplace_back(v); } detail::engine.execute(funclst, varlst, false); } ``` This will leak a bunch of Tensors and in general be very sad for whatever Variables used. cc @ezyang @gchanan @zou3519 @bdhirsh @heitorschueroff @albanD @gqchen @pearu @nikitaved
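As a rough, untested guess at the Python-level equivalent (this mirrors the C++ above, where each output Variable is passed as its own gradient; the exact repro is an assumption):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.randn(64, 64), requires_grad=True)
out = (x * 2).sum(0)  # some non-leaf output

# Equivalent of detail::engine.execute(funclst, varlst, False):
# backward through `out` using `out` itself as the incoming gradient.
torch.autograd.backward([out], [out])

# Repeating this in a loop should show memory growing if the leak is real.
```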
module: autograd,module: memory usage,triaged,quansight-nack
medium
Major
275,889,725
opencv
Absent documentation for sub-pixel coordinate system
##### System information (version) - OpenCV => all - Operating System / Platform => all - Compiler => all ##### Detailed description There are a number of methods in OpenCV that accept or generate pixel coordinates as floating point numbers, `remap` is one of those methods I am using. I googled for a while trying to find out whether this method places centre of the top-left pixel at `0,0` or at `0.5,0.5`, some conflicting information came up: http://answers.opencv.org/question/87923/sub-pixel-coordinate-origin/ http://answers.opencv.org/question/35111/origin-pixel-in-the-image-coordinate-system-in-opencv/ It seems that the correct answer is **`0,0` is the centre of the top left pixel**, judging by this commit: https://github.com/opencv/opencv/commit/e646f9d2f1b276991a59edf01bc87dcdf28e2b8f There should be a place somewhere in the documentation where the pixel coordinate system is properly defined, not just for the integer case. One paragraph stating what the coordinates of the centre and the four corners of the top left pixel are would suffice.
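For what it's worth, here is a small probe (not from the report, just a sketch) that makes the convention observable with `remap`:

```cpp
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    // A 2x2 float image with values 0, 10, 20, 30.
    cv::Mat src = (cv::Mat_<float>(2, 2) << 0, 10, 20, 30);

    // Sample the single destination pixel at source coordinate (0.5, 0.5).
    cv::Mat mapx = (cv::Mat_<float>(1, 1) << 0.5f);
    cv::Mat mapy = (cv::Mat_<float>(1, 1) << 0.5f);
    cv::Mat dst;
    cv::remap(src, dst, mapx, mapy, cv::INTER_LINEAR);

    // If (0,0) is the centre of the top-left pixel, (0.5,0.5) lies exactly
    // between the four pixels and bilinear interpolation prints 15;
    // if pixel centres were at (0.5,0.5), it would print 0 instead.
    std::cout << dst.at<float>(0, 0) << std::endl;
    return 0;
}
```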
category: imgproc,category: documentation,RFC
low
Major
275,904,316
go
proposal: net/http: add Client.Put, Client.Patch
# PROPOSAL I am proposing the addition of the following functions and methods to the net/http package: `func Put()`, `func Patch()`, `func (*Client) Put()`, `func (*Client) Patch()`. These would be similar to the Post functions that already exist. ## REASONS I believe all the reasons the Post functions were added would also apply to these functions. Furthermore, the fact that the Post functions already exist would make these a natural extension of an approach that has already been started but is currently incomplete. ## WORK TO BE DONE BY sjtwiv
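As a hedged sketch only (modeled on the existing `http.Client.Post`; this is not an agreed-upon signature), the method version could look roughly like:

```go
package httpextra

import (
	"io"
	"net/http"
)

// Put issues a PUT to the given URL, mirroring what http.Client.Post does
// for POST. Written as a helper here because *http.Client has no Put method.
func Put(c *http.Client, url, contentType string, body io.Reader) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodPut, url, body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", contentType)
	return c.Do(req)
}
```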
Proposal,Proposal-Hold
low
Major
275,913,059
pytorch
Fuse bias to CuDNN convolution
CuDNN provides a fused cudnnConvolutionBiasActivationForward, but we don't use it. We should. NB: Please don't work on this until #3666 merges.
triaged,enhancement
low
Minor
276,025,466
TypeScript
Possible to narrow type of object literal property with strictNullChecks?
**TypeScript Version:** Current playground (2.6?) **Code** ```ts type Foo = { bar?: number[]; } let f1: Foo = {}; f1.bar = []; f1.bar.push(0); let f2: Foo = { bar: [] }; f2.bar.push(0); ``` **Expected behavior:** Both references to `.bar` compile successfully. **Actual behavior:** Second reference (`f2.bar.push(0)`) fails with "object is possibly undefined" -- the type of `f2.bar` is still treated as `number[] | undefined` even though it obviously can't be undefined. ---- This might not be an actual *bug* as such but whatever magic allows the compiler to narrow the type of `f1.bar` after assigning to it, should also be able to narrow the type when it was assigned in the literal.
Suggestion,Committed
low
Critical
276,036,156
TypeScript
Feature Request: add TypeChecker API to query if identifier is definitely assigned
Currently it's not possible to query CFA. I need to find out if an identifier is definitely assigned. This logic is part of `checkIdentifier` and is not accessible outside of that function. My use case is the TSLint rule `no-unnecessary-type-assertion` where we need to identify false positives. The current implementation compares the type before and after the non-null-assertion. That doesn't work well for identifiers that are not definitely assigned, because the assertion suppresses TS2454 and can therefore not be removed.
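To make the false positive concrete, a small sketch (the rule's internals are paraphrased, not quoted):

```ts
let value: number; // not definitely assigned at the use below

function init(cb: () => void) { cb(); }
init(() => { value = 1; });

// Without the assertion this is error TS2454 ("used before being assigned");
// with it, the error is suppressed. The declared type is `number` either way,
// so a before/after type comparison makes the assertion look removable,
// which is exactly the false positive described above.
const copy = value!;
```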
Suggestion,In Discussion,API
low
Minor
276,036,848
electron
Can't set VisibleOnAllWorkspaces for window.open windows
Here is what I'm trying to achieve: ``` var win = window.open(window.location.pathname + '#/my-subwindow', 'Lovely Title'); win.focus(); win.setVisibleOnAllWorkspaces(true); ``` But of course, the window that is returned from the window.open() is not a BrowserWindow but instead, it's a [BrowserWindowProxy](https://electronjs.org/docs/api/browser-window-proxy). The documentation of [window.open](https://electronjs.org/docs/api/window-open#windowopenurl-framename-features) suggests that you can define features where: "each feature has to be a field of BrowserWindow's options." but `visibleOnAllWorkspaces` is not available as an option from what I understand from the documentation. Even if I had a way to get a hold of the window *through* the BrowserWindowProxy and mess with it directly that would be great 😂
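One possible way around it today (an assumption, not something the docs spell out for this case) is to catch the real `BrowserWindow` in the main process when the renderer calls `window.open`:

```js
// main process
const { app, BrowserWindow } = require('electron');

app.on('browser-window-created', (_event, win) => {
  // Windows created via window.open() also pass through here, and `win`
  // is a full BrowserWindow, so the API missing on BrowserWindowProxy
  // is available.
  win.setVisibleOnAllWorkspaces(true);
});
```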
enhancement :sparkles:,platform/macOS
low
Major
276,093,057
opencv
cv::cudacodec::createVideoReader segfault
##### System information (version) - OpenCV => 3.1 - Operating System / Platform => Ubuntu 16.04 - Compiler => gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609 ##### Detailed description A description of this problem and its solution can be found here: http://answers.opencv.org/question/178227/cvcudacodeccreatevideoreader-segfault My opinion is that the symbol-loading code should be included in OpenCV's sources. It would also be great if OpenCV made it possible to disable linking against libnvcuvid at compile time. As of now, it is hard to build OpenCV with CUDA stubs while leaving LIBNVCUVID enabled (e.g. on systems without CUDA or in Docker), because NVIDIA provides no stubs for libnvcuvid.
feature,priority: low,category: gpu/cuda (contrib)
low
Critical
276,152,885
kubernetes
API validation code is hard to review
I do a fair number of code reviews. I try to pay attention to API validation. It's gotten very hard to review. * There are at least 2 util libraries, and the core API validation is effectively a 3rd. * There are poor conventions across those libraries. * Commonly used functions might be in any of the 3. * They are all called "validation". * Every API version has a "validation" pkg, too. I'd like to chip away at this and establish better conventions, leading towards a more declarative validation process. It's not urgent, but I need some code to work on :) @kubernetes/sig-api-machinery-feature-requests
sig/api-machinery,kind/feature,lifecycle/frozen
low
Major
276,166,465
rust
ill-typed unused FFI declarations can cause UB
This compiles and prints "p is not null and 0x0": ```Rust pub mod bad { #[allow(improper_ctypes)] extern { pub fn malloc(x: usize) -> &'static mut (); } #[no_mangle] pub fn bar() { let _m = malloc as unsafe extern "C" fn(usize) -> &'static mut (); } } pub mod good { extern { fn malloc(x: usize) -> *const u8; } pub fn foo() { unsafe { let p = malloc(0x13371337deadbeef); // your computer doesn't have enough memory if p.is_null() { panic!("p is null"); } else { panic!("p is not null and {:?}", p); } } } } fn main() { bad::bar(); good::foo(); } ``` The problem is that we have two declarations of the "malloc" symbol, but LLVM uses a global namespace for these. So during codegen, the 2nd declaration we generate overwrites the first. In this case, the "ill-typed" malloc declaration (`bad::malloc`) comes last, up putting a `nonnull` attribute on `malloc`, which causes `mod good` to be miscompiled. Here's [another example](https://play.rust-lang.org/?version=stable&mode=release&edition=2021&gist=ed2b1299a7d8dad50f7b4c911eefee44) that does not involve `malloc`. It does not get miscompiled currently, but it demonstrates the issue. See [here](https://github.com/rust-lang/rust/issues/46188#issuecomment-2348194976) for a document that summarizes why this happens, and what our options are.
A-LLVM,A-codegen,A-FFI,P-medium,T-lang,T-compiler,I-unsound,C-bug
high
Critical
276,201,000
node
Debugger crashes with assertion failure while using Chrome DevTools and remote debug
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: 8.9.1 LTS * **Platform**: macOS 10.13.1 (17B48) (Darwin Kernel Version 17.2.0) Webstorm 2017.2.5 Chrome: 62.0.3202.94 No crash using Webstorm 2017.1.4 While using the Chrome DevTools and node --inspect, the node process crashes right after Chrome auto-detected the debug session as a network target. ``` /usr/local/bin/node[28426]: ../src/node.cc:1449:void node::InternalCallbackScope::Close(): Assertion `(env_->trigger_async_id()) == (0)' failed. 1: node::Abort() [/usr/local/bin/node] 2: node::(anonymous namespace)::DomainEnter(node::Environment*, v8::Local<v8::Object>) [/usr/local/bin/node] 3: node::InternalCallbackScope::Close() [/usr/local/bin/node] 4: node::InternalCallbackScope::~InternalCallbackScope() [/usr/local/bin/node] 5: node::RunForegroundTask(v8::Task*) [/usr/local/bin/node] 6: node::NodePlatform::FlushForegroundTasksInternal() [/usr/local/bin/node] 7: node::inspector::NodeInspectorClient::runMessageLoopOnPause(int) [/usr/local/bin/node] 8: v8_inspector::V8Debugger::handleProgramBreak(v8::Local<v8::Context>, v8::Local<v8::Value>, std::__1::vector<int, std::__1::allocator<int> > const&, bool, bool) [/usr/local/bin/node] 9: v8::internal::Debug::OnDebugBreak(v8::internal::Handle<v8::internal::FixedArray>) [/usr/local/bin/node] 10: v8::internal::Debug::Break(v8::internal::JavaScriptFrame*) [/usr/local/bin/node] 11: v8::internal::Runtime_DebugBreakOnBytecode(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node] 12: 0x2fb5b0f842fd 13: 0x2fb5b108876c 14: 0x2fb5b103d1e0 ``` <img width="1060" alt="nodejscrash" src="https://user-images.githubusercontent.com/11752441/33149203-2ea63770-cfcf-11e7-80ea-512d7c7321e7.png"> Reproduce: - Clone Demo Project: https://github.com/Microsoft/TypeScript-Node-Starter/ - Install mongodb via homebrew - Start mongodb via `mongod --config /usr/local/etc/mongod.conf` - Open demo project in Webstorm, `npm run build` and configure a run configuration by adding `Node.js` Run Configuration and change `JavaScript file` to `dist/server.js` - Set breakpoint in `src/controllers/user.ts` Line 30 - Start debugger with the Bug icon, right next to the run icon in Webstorm - Open client site in Chrome `http://localhost:3000` - Use the client site to trigger breakpoint (logged in as user) - Resume by clicking `Step Out` - Use the client site to trigger breakpoint again (logged in as user) - Chrome: Right-click, Inspect - Debugger crashes as shown For me, this crash is only reproducible on one of three Mac systems with identical software versions.
inspector,async_hooks
medium
Critical
276,205,475
go
cmd/vet: know what builtin functions have no side effects
Reminder issue to continue the work in https://go-review.googlesource.com/c/go/+/79536. For example, `len` and `cap` never have any side effects, and it would be useful to know that `i == len(x) || i == len(x)` is a suspicious expression. More broadly, this could be extended to automatically detect what functions are free of side effects. But this would require having access to the full source with full type information, and may be complex and costly, so I'm not entirely sure it's a right fit for vet. A simpler version of the above would be to also add standard library functions, such as `strings.Contains` or `path.Join`. I don't know if it is OK for vet to treat standard library packages differently, though.
help wanted,NeedsFix,Analysis
low
Minor
276,223,882
puppeteer
Properly handle target crashes
Crashed targets should be handled gracefully: 1. All pending commands should be rejected. This includes protocol messages and watchdogs, such as NavigatorWatcher / WaitTask 2. All subsequent commands to the crashed target should reject right away Neither (1) or (2) happen with Puppeteer v0.13.0.
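A small sketch of the desired behaviour (using `chrome://crash` to provoke a target crash; the exact error wording is not specified by the issue):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // (1) A command pending when the target crashes should reject.
  await page.goto('chrome://crash').catch(err =>
    console.error('pending command rejected:', err.message));

  // (2) Any later command against the crashed target should reject right away.
  await page.evaluate(() => 1).catch(err =>
    console.error('subsequent command rejected:', err.message));

  await browser.close();
})();
```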
feature,chromium
medium
Critical
276,229,768
godot
Double-clicking on a .gdns or .gdnlib opens the script editor
Godot 3.0 master Windows 10 64 bits I noticed that double-clicking on .gdns or .gdnlib resources in the file browser not only opens them in the inspector, but also opens an empty script editor and adds them to the open list as if they were scripts. I guess this is not intended, as these don't contain source code.
bug,topic:editor,confirmed,topic:gdextension
low
Minor
276,246,454
bitcoin
Multiwallet rescan sequentially scans multiple wallets instead of in parallel
If you start up with a collection of N out of sync wallets bitcoin will perform N interdependent rescans; this can be rather slow e.g. if they're all at height 200k. This should be fixed, or we should offer suicide counseling at a minimum.
Wallet
low
Major
276,248,607
flutter
iOS mid-fling scroll simulation inaccurate
The scroll motion during the ~50ms in the middle of a fast fling on iOS is not accurate. In the middle of the fling (and generally), the scrollable's position offset follows the finger exactly, whereas on native iOS, when the finger moves too fast, the scrollable has inertia and follows the finger with a spring motion the entire time, instead of only at the end when it goes ballistic. https://streamable.com/ki7an
platform-ios,framework,a: fidelity,f: scrolling,f: cupertino,customer: crowd,has reproducible steps,P2,found in release: 3.6,team-ios,triaged-ios
low
Major
276,271,023
neovim
msgpack & shada functional tests fail with TZ=GMT-14
### Steps to reproduce ``` env TZ=GMT-14 make functionaltest ``` ### Actual behaviour ``` [1m[ ERROR ] 17 errors, listed below: [ ERROR ] ...neovim-0.2.2/2nd/test/functional/plugin/msgpack_spec.lua @ 284: In autoload/msgpack.vim function msgpack#strptime works ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'nvim_eval' ...neovim-0.2.2/2nd/test/functional/plugin/msgpack_spec.lua:287: in function <...neovim-0.2.2/2nd/test/functional/plugin/msgpack_spec.lua:284> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1362: In autoload/shada.vim function shada#strings_to_sd works with multiple items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1363: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1362> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1424: In autoload/shada.vim function shada#strings_to_sd works with header items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1425: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1424> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1487: In autoload/shada.vim function shada#strings_to_sd works with search pattern items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1488: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1487> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1602: In autoload/shada.vim function shada#strings_to_sd works with replacement string items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1603: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1602> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1641: In autoload/shada.vim function shada#strings_to_sd works with history entry items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' 
...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1642: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1641> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1738: In autoload/shada.vim function shada#strings_to_sd works with register items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1739: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1738> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1837: In autoload/shada.vim function shada#strings_to_sd works with variable items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1838: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1837> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1878: In autoload/shada.vim function shada#strings_to_sd works with global mark items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1879: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1878> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1900: In autoload/shada.vim function shada#strings_to_sd works with jump items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1901: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1900> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 1922: In autoload/shada.vim function shada#strings_to_sd works with buffer list items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1923: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1922> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 2023: In autoload/shada.vim function shada#strings_to_sd works with local mark items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' 
./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2024: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2023> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 2045: In autoload/shada.vim function shada#strings_to_sd works with change items ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:1341: in function 'strings2sd_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2046: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2045> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 2078: In autoload/shada.vim function shada#get_binstrings works ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'shada#get_binstrings' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2070: in function 'getbstrings_eq' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2084: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2078> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 2249: In plugin/shada.vim event BufWriteCmd works ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2266: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2249> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 2305: In plugin/shada.vim event FileWriteCmd works ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2322: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2305> [ ERROR ] ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua @ 2344: In plugin/shada.vim event FileAppendCmd works ./test/functional/helpers.lua:93: internal-start-string:Internal error: start > string stack traceback: ./test/functional/helpers.lua:93: in function 'request' ./test/functional/helpers.lua:151: in function 'nvim_command' ...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2364: in function <...d/neovim-0.2.2/2nd/test/functional/plugin/shada_spec.lua:2344> ``` ### Expected behaviour Passing tests
test,runtime
low
Critical
276,368,703
opencv
Error in resize method when using nearest neighbour interpolation
##### System information (version) - OpenCV => master - Operating System / Platform => all - Compiler => all ##### Detailed description When resizing an image with nearest-neighbour interpolation, source image pixel coordinates are computed incorrectly. For example, shrink a 5x5 image down to 1x1: you should get the center pixel value in the result, but instead the top-left pixel value ends up in the result. The incorrect code is here: https://github.com/opencv/opencv/blob/981009ac1f06106244ac52b16a20a4dc337ad816/modules/imgproc/src/resize.cpp#L235 It should instead be: ```.cpp int sx = cvFloor(x*ifx + (ifx-1)*0.5); ``` and here: https://github.com/opencv/opencv/blob/981009ac1f06106244ac52b16a20a4dc337ad816/modules/imgproc/src/resize.cpp#L144 It should instead be: ```.cpp int sy = std::min(cvFloor(y*ify + (ify-1)*0.5), ssize.height-1); ``` There are probably more optimized versions (maybe an AVX version) that also use the wrong math; I didn't look for them. I have a more detailed explanation with equations here: https://gist.github.com/Kirill888/6b187e65f088971a48d078df72d27d32 I'm aware of issue #9096, which is the same problem, but I thought I'd create a new issue anyway to discuss the solution.
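A quick repro sketch of the 5x5 to 1x1 case described above (not from the report; the expected/actual values follow from the formulas discussed):

```cpp
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    // 5x5 image holding the values 0..24, so src(2,2) == 12 is the centre.
    cv::Mat src(5, 5, CV_8U);
    for (int i = 0; i < 25; ++i) src.at<uchar>(i / 5, i % 5) = (uchar)i;

    cv::Mat dst;
    cv::resize(src, dst, cv::Size(1, 1), 0, 0, cv::INTER_NEAREST);

    // With the current rounding the single output pixel comes from the
    // top-left of the source (0); with the proposed formula it would be
    // the centre pixel (12).
    std::cout << (int)dst.at<uchar>(0, 0) << std::endl;
    return 0;
}
```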
category: imgproc,RFC
low
Critical
276,425,621
flutter
Text editing handles go over, not under, appbars
Cursor is not respecting the app z-index (it should be under the AppBar) ## Steps to Reproduce As you can see in the image bellow, the cursor is not Hidden by the AppBar. ![z-index](https://user-images.githubusercontent.com/1667961/33181696-573f334c-d058-11e7-8195-6f9628c4706f.png) ## Flutter Doctor ``` [✓] Flutter (on Linux, locale pt_BR.UTF-8, channel master) • Flutter at /home/bispo/workspace/flutter • Framework revision 167382480a (7 horas atrás), 2017-11-23 10:24:16 +0100 • Engine revision 93b2179597 • Tools Dart version 1.25.0-dev.11.0 [✓] Android toolchain - develop for Android devices (Android SDK 26.0.3) • Android SDK at /home/bispo/Android/Sdk • Android NDK at /home/bispo/Android/Sdk/ndk-bundle • Compiler in Android NDK at /home/bispo/Android/Sdk/ndk-bundle/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gcc • Platform android-26, build-tools 26.0.3 • Java binary at: /usr/local/android-studio/jre/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_112-release-b06) [✓] Android Studio (version 2.3) • Android Studio at /usr/local/android-studio • Java version OpenJDK Runtime Environment (build 1.8.0_112-release-b06) [✓] IntelliJ IDEA Community Edition (version 2017.2) • Flutter plugin version 19.1 • Dart plugin version 172.4343.25 [✓] Connected devices • Android SDK built for x86 • emulator-5554 • android-x86 • Android 7.1.1 (API 25) (emulator) ```
a: text input,framework,f: material design,a: fidelity,a: quality,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-text-input,triaged-text-input
low
Major
276,462,546
godot
The Remote Scene Tree causes jerky movement everytime it updates
**Operating system or device, Godot version, GPU Model and driver (if graphics related):** Godot 3.0 master https://github.com/godotengine/godot/commit/3a33725014169ce51d6ac581a4dce3b97a60da00 **Issue description:** In the example, every time the Remote Scene Tree updates it causes some choppy movement. If I disable the auto-switching to it under Editor settings / Debugger, the issue disappears. **Steps to reproduce:** 1. Open the example and make sure the Remote Scene Tree is opened. **Link to minimal example project:** [level_test.zip](https://github.com/godotengine/godot/files/1500106/level_test.zip)
enhancement,topic:editor,confirmed,performance
medium
Critical
276,474,214
flutter
Hot reloading messaging is confusing
I ❤️ the new IDE "reload-on-save" feature(s), but I think the output emitted is pretty confusing: > Launching lib/main.dart on iPhone SE in debug mode... > Reloaded 0 of 466 libraries in 410ms. > Reloaded 0 of 466 libraries in 303ms. I ask myself the following questions when reading this output: * Does it mean the reload _failed_? * Does it mean that my save didn't require reloading any libraries? * Why did it take 300ms to do nothing? I think it might be more friendly to either say: > "No libraries required reloading (0 of 466 in 303ms)." > "Reload successful but no libraries reload (...)" Upon further investigation, I managed to find out it was because I was editing files that were not yet imported from `lib/main.dart`. That seems unfortunate, because it means if I'm creating new files or editing files in `test/` I'm still reloading. Other suggestions: * Maybe don't include libraries that are dependencies? For example I only have "1" library. ## Steps to Reproduce * Create a brand new repository `flutter create hello` * Navigate to the new repository and run it (`cd hello`, `flutter run`) * Add a new file (i.e. `lib/src/data.dart`) that is not imported yet * Reload (in most IDEs, this is on save) Originally moved from https://github.com/Dart-Code/Dart-Code/issues/450. ## Logs > Launching lib/main.dart on iPhone SE in debug mode... > Reloaded 0 of 466 libraries in 410ms. > Reloaded 0 of 466 libraries in 303ms. > Reloaded 0 of 466 libraries in 485ms. ## Flutter Doctor ``` [✓] Flutter (on Mac OS X 10.12.6 16G29, locale en-US, channel master) • Flutter at /Users/matan/git/flutter • Framework revision 167382480a (12 hours ago), 2017-11-23 10:24:16 +0100 • Engine revision 93b2179597 • Tools Dart version 1.25.0-dev.11.0 [✓] Android toolchain - develop for Android devices (Android SDK 25.0.2) • Android SDK at /Users/matan/Library/Android/sdk • Unable to locate Android NDK. • Unable to locate compiler in Android NDK. • Platform android-25, build-tools 25.0.2 • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_112-release-b05) [✓] iOS toolchain - develop for iOS devices (Xcode 9.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Xcode 9.1, Build version 9B55 • ios-deploy 1.9.2 • CocoaPods version 1.3.1 [✓] Android Studio (version 2.2) • Android Studio at /Applications/Android Studio.app/Contents • Java version OpenJDK Runtime Environment (build 1.8.0_112-release-b05) [✓] IntelliJ IDEA Ultimate Edition (version 2017.2.5) • Flutter plugin version 17.0 • Dart plugin version 172.4343.25 [✓] Connected devices • iPhone SE • 9F33391B-D2D7-4E44-95C4-9A912FDCD824 • ios • iOS 11.1 (simulator) ```
tool,t: hot reload,a: quality,P3,team-tool,triaged-tool
low
Critical
276,482,428
go
path/filepath: Clean definition is not idempotent
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? version 1.9 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? windows/amd64 ### Issue The algorithm defined for the `Clean` function is faulty and, in addition, is not faithfully implemented. In particular, the definition is faulty because the idempotent condition Clean(Clean(x)) == Clean(x) could fail, if `Separator` is not `'/'`, as on Windows systems. In fact, by the algorithm documentation, on Windows systems `'/'` would not be processed as `Separator`, until the final translation to `Separator`. Thus, applying `Clean` once or twice (with `'/'` translated to `Separator` at the end of the first cleaning step) could return different results, contrary to the idempotent condition. In addition, the algorithm implementation slightly deviates from the documentation. In fact, every character that satisfies os.IsPathSeparator(c) == true is processed as `Separator`, not only `Separator`, as stated in the algorithm definition. On Windows systems, this is true for both: `'\\'` and `'/'`. Please note that, if `os.IsPathSeparator('/')` is true for all available implementations, `'/'` is always processed as `Separator`. Thus, idempotence can be magically preserved, despite the errors in the algorithm definition and implementation. ### Proposal I propose to clarify the algorithm documentation and adapt the implementation as follows. #### Documentation 1. change the first sentence to > Clean returns the shortest path name equivalent to path by purely lexical processing. First, it replaces any occurrence of slash by Separator. Then, it applies the following rules iteratively until no further processing can be done: 2. replace `slash` with `Separator` in the sentence > The returned path ends in a slash ... 3. remove the sentence > Finally, any occurrences of slash are replaced by Separator. #### Implementation 1. add the follwing statement at the first line path = FromSlash(path) 2. change any expression like `os.IsPathSeparator(x)` to x == Separator 2. change the last line to: return out.string()
Documentation,NeedsFix
low
Critical
276,544,627
youtube-dl
Alphatv.gr site support
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.11.15** - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [x] Site support request (request for adding support for a new site)
site-support-request
low
Critical
276,567,342
go
fmt: "%%" placeholder accepts incorrect index values
What version of Go are you using (go version)? go1.9.2 Does this issue reproduce with the latest release? yes What operating system and processor architecture are you using (go env)? macOS Sierra 10.12.6, darwin/amd64 ### What did you do? Run the following code ``` package main import ( "fmt" ) func main() { fmt.Printf("%[100]%") fmt.Printf("%[-100]%") fmt.Printf("%[not a mumber]%") } ``` ### What did you expect to see? In all cases, errors about incorrect index is printed. ### What did you see instead? In all cases, '%' is printed. This behaviour is unexpected and, at least, not documented here: https://golang.org/pkg/fmt/
NeedsFix
low
Critical
276,643,954
angular
Possibility to have a loop animation, using triggers. [Feature Request]
Hello i`m trying to make a loop animation with full control using triggers, right now theres no easy way to make it, only a few possibles work arounds. ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [X] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior Not possible to make a loop inside a trigger config, right now i have to check animation done to swap the trigger back and create the loop. edit: sharing the example of how i do now: https://angular-animations-demo.firebaseapp.com/loop-demo source here: https://github.com/leonardopaiva/angular-animation-basic-project/tree/master/src/app/loop-demo ## Expected behavior Something like a parameter to group, query or animate, setting loop true, or the number of repetitions, animate(300, boolLoopTrue) or animate(300, {boolRepetitions: 3}), example: <pre><code> transition('default => standby', [ group([ query('.character', [ animate(300, style({ opacity: 0 })), animate(300, style({ opacity: 1 })) ], boolLoopTrue) ]) ]) </code></pre> ## What is the motivation / use case for changing the behavior? I`m trying to make complex animations and need it to make things right, stop using work arounds. ## Environment Angular <pre><code> Angular version: 5.0.0 "@angular/animations": "^5.0.1" </code></pre>
feature,area: animations,freq1: low,P4,feature: under consideration
medium
Critical
276,673,423
rust
Test coverage for double trailing commas is poor
#### Theory Every place in the grammar which supports trailing commas should be tested that it fails for double commas, lest Rust be locked into supporting it forever. A particularly notable case of this is for `macro_rules!` macros, many of which must manually implement their own trailing comma support (leading to more chances for mistakes). #### Reality ```sh [ lampam @ 12:22:45 ] (master •2) ~/asd/clone/rust/src/test $ ls **/*.rs -1 > files $ python3 ``` ```python >>> import re >>> contents = [open(x.strip()).read() for x in open('files')] >>> r = re.compile(',\\s*,\\s*[\\])}]', re.MULTILINE) >>> [x for x in map(r.findall, contents) if x] [] ```
C-cleanup,A-parser,E-needs-test
low
Major
276,697,414
pytorch
Considerable slowdown in Adam.step after a number of epochs with multiple losses
I have a model with multiple outputs and, therefore, multiple losses. When training I accumulate the losses using retain_graph. Something along the lines of: ```python self.zero_grad() for output_label, output in self(input, target).items(): loss = self.loss(output, target[output_label]) loss.backward(retain_graph=True) self.optimizer.step() ``` where input, output and target are dictionaries with the respective data for the different inputs and losses. I am using Adam for the optimization. I've noticed that after a number of epochs, the running time of an epoch goes suddenly up from 7sec to 34sec. I also noticed a slowdown of CPU usage in my computer (I haven't test this yet on the GPU). Memory usage doesn't seem to increase. I profiled the code and I saw this (output from cProfile): Normal epoch: ``` 52 0.012 0.000 3.538 0.068 adam.py:30(step) 624 0.646 0.001 0.646 0.001 {method 'addcdiv_' of 'torch._C.FloatTensorBase' objects} ``` Slow epoch: ``` 52 0.013 0.000 24.576 0.473 adam.py:30(step) 624 21.469 0.034 21.469 0.034 {method 'addcdiv_' of 'torch._C.FloatTensorBase' objects} ``` I've tested with other adaptive losses like Adagrad, and there I can't see the issue. It seems to be related to this line of code in Adam.step(): ``` p.data.addcdiv_(-step_size, exp_avg, denom) ``` Any ideas about why this is happening? It seems like suddenly the size of the accumulated gradient explodes, but I can't see why. cc @vincentqb @VitalyFedyunin @ngimel
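A minimal sketch of an alternative accumulation pattern that may help isolate the slowdown, using a toy two-head model as a stand-in for the reporter's network (the model, loss and optimizer choices here are assumptions, not the original code): summing the per-output losses and calling `backward()` once yields the same gradients, since the gradient of a sum is the sum of the gradients, and it avoids `retain_graph` entirely.

```python
import torch
import torch.nn as nn

# Toy stand-in for the multi-output model in the report: two heads, two losses.
class TwoHead(nn.Module):
    def __init__(self):
        super(TwoHead, self).__init__()
        self.backbone = nn.Linear(10, 16)
        self.head_a = nn.Linear(16, 1)
        self.head_b = nn.Linear(16, 1)

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        return {"a": self.head_a(h), "b": self.head_b(h)}

model = TwoHead()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())

x = torch.randn(32, 10)
target = {"a": torch.randn(32, 1), "b": torch.randn(32, 1)}

optimizer.zero_grad()
# Sum the per-output losses and call backward() once: no retain_graph needed.
total_loss = sum(criterion(out, target[name]) for name, out in model(x).items())
total_loss.backward()
optimizer.step()
```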
awaiting response (this tag is deprecated),needs reproduction,module: performance,module: optimizer,triaged
low
Major
276,698,767
opencv
Open CV Segmentation Violation Java
##### System information (version) OpenCV 3.3.1, Linux ARM Debian, Oracle JDK 8 and OpenJDK 8 ##### Detailed description Core dump when loading the native library, same as closed issue #10080, which I believe needs to be reopened. I've investigated at length and confirmed that with FFMPEG disabled at compile time, the problem doesn't exist. The problem appears to occur when the libopenmpt library is loaded; at least, the stack trace from gdb points there. ``` #0 0x5e21b8c4 in ?? () from /usr/lib/arm-linux-gnueabihf/libopenmpt.so.0 #1 0x76f15c5c in ?? () from /lib/ld-linux-armhf.so.3 ``` ##### Steps to reproduce Compile with FFMPEG on ARM / Debian or Raspberry Pi and load the native OpenCV library.
priority: low,incomplete
low
Major
276,732,195
TypeScript
tsc --watch should listen for stdin requests to transpile on demand
nodemon listens for "rs" sent to `process.stdin`, which allows the user to manually restart a process. In the same way, there are times when it would be nice to manually restart an incremental build with `tsc --watch`. It might not be much faster, since it will rebuild everything, but it should make the workflow a little smoother.
Suggestion,Needs Proposal
low
Minor
276,733,103
opencv
Increasing efficiency of the Keypoint and DMatch Python wrappers
##### Detailed description The Python wrappers for `KeyPoint` and `DMatch` are inefficient: we need to access each object separately in order to extract its attributes (center, radius, trainIdx and queryIdx) instead of getting them as Numpy arrays. Is it possible to change that behavior to something similar to the `connectedComponentsWithStats` wrapper for the `stats` variable? That wrapper is a Numpy array, used smoothly in conjunction with an exported enum, which allows vectorized operations without any need for loops.
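For reference, a sketch of the loop-based workaround the report is describing, gathering the per-object attributes into Numpy arrays by hand (the image path and the choice of ORB/BFMatcher are arbitrary assumptions); a vectorized wrapper would make these comprehensions unnecessary.

```python
import cv2
import numpy as np

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(img, None)

# Today each attribute has to be pulled out object by object:
centers = np.array([kp.pt for kp in keypoints], dtype=np.float32)   # (N, 2)
sizes = np.array([kp.size for kp in keypoints], dtype=np.float32)   # (N,)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.match(descriptors, descriptors)
query_idx = np.array([m.queryIdx for m in matches], dtype=np.int64)
train_idx = np.array([m.trainIdx for m in matches], dtype=np.int64)
```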
feature,category: python bindings,category: features2d
low
Minor
276,733,541
pytorch
[Feature Request] Implement "same" padding for convolution operations?
The implementation would be easy, but it could help many people who suffer from the headache of calculating how much padding they need. cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @albanD @mruberry @walterddr
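A minimal sketch of the calculation being requested, limited to the common stride-1, odd-kernel case (the function name and interface are illustrative, not a proposed API):

```python
import torch
import torch.nn.functional as F

def conv2d_same(x, weight, bias=None, dilation=1):
    # "Same" padding for stride 1 and odd kernel sizes: pad so the output
    # spatial size equals the input spatial size.
    kh, kw = weight.shape[-2:]
    pad_h = dilation * (kh - 1) // 2
    pad_w = dilation * (kw - 1) // 2
    return F.conv2d(x, weight, bias, stride=1, padding=(pad_h, pad_w), dilation=dilation)

x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
print(conv2d_same(x, w).shape)  # torch.Size([1, 8, 32, 32])
```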
high priority,module: nn,module: convolution,triaged,enhancement,needs design
high
Critical
276,748,397
go
proposal: spec: read-only types
I propose adding read-only types to Go. Read-only types have two related benefits: 1. The compiler guarantees that values of read-only type cannot be changed, eliminating unintended modifications that can cause subtle bugs. 2. Copying as a defense against modification can be reduced, improving efficiency. An additional minor benefit is the ability to take the address of constants. This proposal makes significant changes to the language, so it is intended for Go 2. All new syntax in this proposal is provisional and subject to bikeshedding. **Basics** All types have one of two _permissions_: read-only or read-write. Permission is a property of types, but I sometimes write "read-only value" to mean a value of read-only type. A type preceded by `ro` is a read-only type. The identifier `ro` is pronounced _row_. It is a keyword. There is no notation for the read-write permission; any type not marked with `ro` is read-write. The `ro` modifier can be applied to slices, arrays, maps, pointers, structs, channels and interfaces. It cannot be applied to any other type, including a read-only type: `ro ro T` is illegal. It is a compile-time error to * modify a value of read-only type, * pass a read-only slice as the first argument of `append`, * use slicing to extend the length of a read-only slice, * or send to or receive from a read-only channel. A value of read-only type may not be immutable, because it may be referenced through another type that is not read-only. Examples: 1. A function can assert that it will not modify its argument. ``` func transmit(data ro []byte) { ... } ``` The compiler guarantees that the bytes of `data` will not be altered by `transmit`. 2. A method can return an unexported field of its type without fear that it will be changed by the caller. ``` type BufferedReader struct { buf []byte } func (b *BufferedReader) Buffer() ro []byte { return buf } ``` This proposal is concerned exclusively with avoiding modifications to _values_, not _variables_. Thus it allows assignment to variables of read-only type. ``` var EOF ro error = errors.New("EOF") ... EOF = nil ``` One could imagine a companion proposal that also used `ro`, but to restrict assignment: ``` ro var EOF = ... // cannot assign to EOF ``` I don't pursue that idea here. **Conversions** There is an automatic conversion from `T` to `ro T`. For instance, an actual parameter of type `[]int` can be passed to a formal parameter of type `ro []int`. This conversion operates at any level: a `[][]int` can be converted to a `[]ro []int` for example. There is an automatic conversion from `string` to `ro []byte`. It does not apply to nested occurrences: there is no conversion from `[][]string` to `[]ro []byte`, for example. (Rationale: `ro` does not change the representation of a type, so there is no cost to adding `ro` to any type, at any depth. A constant-time change in representation is required to convert from `string` to `ro []byte` because the latter is one word larger. Applying this change to every element of a slice, array or map would require a complete copy.) **Transitivity** Permissions are transitive: a component retrieved from a read-only value is treated as read-only. For example, consider `var a ro []*int`. It is not only illegal to assign to `a[i]`; it is also illegal to assign to `*a[i]`. Transitivity increases safety, and it can also simplify reasoning about read-only types. For example, what is the difference between `ro *int` and `*ro int`? 
With transitivity, the first is equivalent to `ro *ro int`, so the difference is just the permission of the full type. **The Address Operator** If `v` has type `ro T`, then `&v` has type `*ro T`. If `v` has type `T`, then `ro &v` has type `ro *T`. This bit of syntax simplifies constructing read-only pointers to struct literals, like `ro &S{a: 1, b: 2}`. Taking the address of constants is permitted, including constant literals. If `c` is a constant of type `T`, then `&c` is of type `ro *T` and is equivalent to ``` func() ro *T { v := c; return &v }() ``` **Read-Only Interfaces** Any method of an interface may be preceded by `ro`. This indicates that the receiver of the method must have read-only type. ``` type S interface { ro Marshal() ([]byte, error) Unmarshal(ro []byte) error } ``` If `I` is an interface type, then `ro I` is effectively the sub-interface that contains just the read-only methods of `I`. If type `T` implements `I`, then type `ro T` implements `ro I`. Read-only interfaces can prevent code duplication that might otherwise result from the combination of read-only types and interfaces. Consider the following code from the `sort` package: ``` type Interface interface { Less(i, j int) bool Len() int Swap(i, j int) } func Sort(data Interface) bool { … code using Less, Len, and Swap … } func IsSorted(data Interface) bool { … code using only Less and Len … } type IntSlice []int func (x IntSlice) Less(i, j int) bool { return x[i] < x[j] } func (x IntSlice) Len() int { return len(x) } func (x IntSlice) Swap(i, j int) { x[i], x[j] = x[j], x[i] } func Ints(a []int) { // invoked as sort.Ints Sort(IntSlice(a)) } func IntsAreSorted(a []int) bool { return IsSorted(IntSlice(a)) } ``` We would like to allow `IntsAreSorted` to accept a read-only slice, since it does not change its argument. But we cannot cast `ro []int` to `IntSlice`, because the `Swap` method modifies its receiver. It seems we must copy code somewhere. The solution is to mark the first two methods of the interface as read-only: ``` type Interface interface { ro Less(i, j int) bool ro Len() int Swap(i, j int) } func (x ro IntSlice) Less(i, j int) bool { return x[i] < x[j] } func (x ro IntSlice) Len() int { return len(x) } ``` Now we can write `IsSorted` in terms of the read-only sub-interface: ``` func IsSorted(data ro Interface) bool { … code using only Less and Len … } ``` and call it on a read-only slice: ``` func IntsAreSorted(a ro []int) bool { return IsSorted(ro IntSlice(a)) } ``` **Permission Genericity** One of the problems with read-only types is that they lead to duplicate functions. For example, consider this trivial function, ignoring its obvious problem with zero-length slices: ``` func tail1(x []int) []int { return x[1:] } ``` We cannot call `tail1` on values of type `ro []int`, but we can take advantage of the automatic conversion to write ``` func tail2(x ro []int) ro []int { return x[1:] } ``` Thanks to the conversion from read-write to read-only types, `tail2` can be passed an `[]int`. But it loses type information, because the return type is always `ro []int`. So the first of these calls is legal but the second is not: ``` var a = []int{1,2,3} a = tail1(a) a = tail2(a) // illegal: attempt to assign ro []int to []int ``` If we had to write two variants of every function like this, the benefits of read-only types would be outweighed by the pain they cause. To deal with this problem, most programming languages rely on overloading. 
If Go had overloading, we would name both of the above functions `tail`, and the compiler would choose which to call based on the argument type. But we do not want to add overloading to Go. Instead, we can add generics to Go&mdash;but just for permissions. Hence _permission genericity_. Any type inside a function, including a return type, may be preceded by `ro?` instead of `ro`. If `ro?` appears in a function, it must appear in the function's argument list. A function with an `ro?` argument `a` must type-check in two ways: * `a` has type `ro T` and `ro?` is treated as `ro`. * `a` has type `T` and `ro?` is treated as absent. In calls to a function with a return type `ro? T`, the effective return type is `T` if the `ro?` argument `a` is a read-write type, and `ro T` if `a` is a read-only type. Here is `tail` using this feature: ``` func tail(x ro? []int) ro? []int { return x[1:] } ``` `tail` type-checks because: * With `x` declared as `ro []int`, the slice expression can be assigned to the effective return type `ro []int`. * With `x` declared as `[]int`, the slice expression can be assigned to the effective return type `[]int`. This call succeeds because the effective return type of `tail` is `ro []int` when the argument is `ro []int`: ``` var a = ro []int{1,2,3} a = tail(a) ``` This call also succeeds, because `tail` returns `[]int` when its argument is `[]int`: ``` var b = []int{1,2,3} b = tail(b) ``` Multiple, independent permissions can be expressed by using `ro?`, `ro??`, etc. (If the only feasible type-checking algorithm is exponential, implementations may restrict the number of distinct `ro?...` forms in the same function to a reasonable maximum, like ten.) In an interface declaration, `ro?` may be used before the method name to refer to the receiver. ``` type I interface { ro? Tail() ro? I } ``` There are no automatic conversions from function signatures using `ro?` to signatures that do not use `ro?`. Such conversions can be written explicitly. Examples: ``` func tail(x ro? []int) ro? []int { return x[1:] } var ( f1 func(x ro? []int) ro? []int = tail // legal: same type f2 func(ro []int) ro []int = tail // illegal: attempted automatic conversion f3 = (func(ro []int) ro []int)(tail) // legal: explicit conversion ) ``` Permission genericity can be implemented completely within the compiler. It requires no run-time support. A function annotated with `ro?` requires only a single implementation. **Strengths of This Proposal** ***Fewer Bugs*** The use of `ro` should reduce the number of bugs where memory is inadvertently modified. There will be fewer race conditions where two goroutines modify the same memory. One goroutine can still modify the memory that another goroutine reads, so not all race conditions will be eliminated. ***Less Copying*** Returning a reference to a value's unexported state can safely be done without copying the state, as shown in Example 2 above. Many functions take `[]byte` arguments. Passing a string to such a function requires a copy. If the argument can be changed to `ro []byte`, the copy won't be necessary. ***Clearer Documentation*** Function documentation often states conditions that promise that the function doesn't modify its argument, or that extracts a promise from the caller not to modify a return value. If `ro` arguments and return types are used, those conditions are enforced by the compiler, so they can be deleted from the documentation. 
Furthermore, readers know that in a well-designed function, a non-`ro` argument will be written along at least one code path. ***Better Static Analysis Tools*** Read-only annotations will make it easier for some tools to do their job. For example, consider a tool that checks whether a piece of memory is modified by a goroutine after it sends it on a channel, which may indicate a race condition. Of course if the value is itself read-only, there is nothing to do. But even if it isn't, the tool can do its job by checking for writes locally, and also observing that the value is passed to other functions only via read-only argument. Without `ro` annotations, the check would be difficult (requiring examining the code of functions not in the current package) or impossible (if the call was through an interface). ***Less Duplication in the Standard Library*** Many functions in the standard library can be removed, or implemented as wrappers over other functions. Many of these involve the `string` and `[]byte` types. If the `io.Writer.Write` method's argument becomes read-only, then `io.WriteString` is no longer necessary. Functions in the `strings` package that do not return strings can be eliminated if the corresponding `bytes` method uses `ro`. For example, `strings.Index(string, string) int` can be eliminated in favor of (or can trivially wrap) `bytes.Index(ro []byte, ro []byte) int`. This amounts to 18 functions (including `Replacer.WriteString`). Also, the `strings.Reader` type can be eliminated. Functions that return `string` cannot be eliminated, but they can be implemented as wrappers around the corresponding `bytes` function. For example, `bytes.ToLower` would have the signature `func ToLower(s ro? []byte) ro? []byte`, and the `strings` version could look like ``` func ToLower(s string) string { return string(bytes.ToLower(s)) } ``` The conversion to `string` involves a copy, but `ToLower` already contains a conversion from `[]byte` to `string`, so there is no change in efficiency. Not all `strings` functions can wrap a `bytes` function with no loss of efficiency. For instance, `strings.TrimSpace` currently does not copy, but wrapping it around `bytes.TrimSpace` would require a conversion from `[]byte` to `string`. Adding `ro` to the language without some sort of permission genericity would result in additional duplication in the `bytes` package, since functions that returned a `[]byte` would need a corresponding function returning `ro []byte`. Permission genericity avoids this additional duplication, as described above. ***Pointers to Literals*** Sometimes it's useful to distinguish the absence of a value from the zero value. For example, in the original Google protobuf implementation (still used widely within Google), a primitive-typed field of a message may contain its default value, or may be absent. The best translation of this feature into Go is to use pointers, so that, for example, an integer protobuf field maps to the Go type `*int`. That works well except for initialization: without pointers to literals, one must write ``` i := 3 m := &Message{I: &i} ``` or use a helper function. In Go as it currently stands, an expression like `&3` cannot be permitted because assignment through the resulting pointer would be problematic. But if we stipulate that `&3` has type `ro *int`, then assignment is impossible and the problem goes away. **Weaknesses of This Proposal** ***Loss of Generality*** Having both `T` and `ro T` in the language reduces the opportunities for writing general code. 
For example, an interface method with a `[]int` parameter cannot be satisfied by a concrete method that takes `ro []int`. A function variable of type `func() ro []int` cannot be assigned a function of type `func() []int`. Supporting these cases would start Go down the road of covariance/contravariance, which would be another large change to the language. ***Problems Going from `string` to `ro []byte`*** When we change an argument from `string` to `ro []byte`, we may eliminate copying at the call site, but it can reappear elsewhere because the guarantee is weaker: the argument is no longer immutable, so it is subject to change by code outside the function. For example, `os.Open` returns an error that contains the filename. If the filename were not immutable, it would have to be copied into the error message. Data structures like caches that need to remember their methods' arguments would also have to copy. Also, replacing `string` with `ro []byte` would mean that implementers could no longer compare via operators, range over Unicode runes, or use values as map keys. ***Subsumed by Generics*** Permission genericity could be subsumed by a suitably general design for generics. No such design for Go exists today. All known constraints on generic types use interfaces to express that satisfying types must provide all the interface's methods. The only other form of constraint is syntactic: for instance, one can write `[]T`, where `T` is a generic type variable, enforcing that only slice types can match. What is needed is a constraint of the form "`T` is either `[]S` or `ro []S`", that is, permission genericity. A generics proposal that included permissions would probably drop the syntax of this proposal and use identifiers for permissions, e.g. ``` gen <T, perm Ro> func tail(x Ro []T) Ro []T { return x[1:] } ``` ***Missing Immutability*** This proposal lacks a permission for immutability. Such a permission has obvious charms: immutable values are goroutine-safe, and conversion between strings and immutable byte slices would work in both directions. The problem is how to construct immutable values. Literals of immutable type would only get one so far. For example, how could a program construct an immutable slice of the first N primes, where N is a parameter? The two easy answers&mdash;deep copying, or [letting the programmer assert immutability](https://dlang.org/spec/const3.html#creating_immutable_data)&mdash;are both unpalatable. Other solutions exist, but they would require additional features on top of this proposal. Simply adding an `im` keyword would not be enough. ***Does Not Prevent Data Races*** A value cannot be modified through a read-only reference, but there may be other references to it that can be modified concurrently. So this proposal prevents some but not all data races. Modern languages like [Rust](https://www.rust-lang.org), [Pony](https://www.ponylang.org) and [Midori](http://joeduffyblog.com/2015/11/03/blogging-about-midori) have shown that it is possible to eliminate all data races at compile time. But the cost in complexity is high, and the value unclear&mdash;there would still be many opportunities for race conditions. If Go wanted to explore this route, I would argue that the current proposal is a good starting point. 
**References** [Brad Fitzpatrick's read-only slice proposal](https://docs.google.com/document/d/1UKu_do3FRvfeN5Bb1RxLohV-zBOJWTzX0E8ZU1bkqX0/edit#heading=h.2wzvdd6vdi83) [Russ Cox's evaluation of the proposal](https://docs.google.com/document/d/1-NzIYu0qnnsshMBpMPmuO21qd8unlimHgKjRD9qwp2A/edit). This document identifies the problem with the `sort` package discussed above, and raises the problem of loss of generality as well as the issues that arise in moving from `string` to `ro []byte`. [Discussion on golang-dev](https://groups.google.com/d/topic/golang-dev/Y7j4B2r_eDw/discussion)
LanguageChange,Proposal,LanguageChangeReview
high
Critical
276,754,529
youtube-dl
Unsupported URL :- cricket.com.au
Cricket.com.au yields Unspported URL. <br> Here is the log. <br> youtube-dl -v http://www.cricket.com.au/news/match-report/australia-england-first-ashes-test-day-three-gabba-live-stream-scores-highlights-scores-smith-marsh/2017-11-25 > log.txt [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'--prefer-ffmpeg', u'-v', u'http://www.cricket.com.au/news/match-report/australia-england-first-ashes-test-day-three-gabba-live-stream-scores-highlights-scores-smith-marsh/2017-11-25'] [debug] Encodings: locale UTF-8, fs UTF-8, out None, pref UTF-8 [debug] youtube-dl version 2017.11.15 [debug] Python version 2.7.12 - Linux-4.4.0-101-generic-x86_64-with-Ubuntu-16.04-xenial [debug] exe versions: avconv 10.7, avprobe 10.7, ffmpeg 3.4-1, ffprobe 3.4-1 [debug] Proxy map: {} WARNING: Falling back on generic information extractor. ERROR: Unsupported URL: http://www.cricket.com.au/news/match-report/australia-england-first-ashes-test-day-three-gabba-live-stream-scores-highlights-scores-smith-marsh/2017-11-25 Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/generic.py", line 2170, in _real_extract doc = compat_etree_fromstring(webpage.encode('utf-8')) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/compat.py", line 2539, in compat_etree_fromstring doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory))) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/compat.py", line 2528, in _XML parser.feed(text) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1653, in feed self._raiseerror(v) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1517, in _raiseerror raise err ParseError: not well-formed (invalid token): line 19, column 26 Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 784, in extract_info ie_result = ie.extract(url) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 437, in extract ie_result = self._real_extract(url) File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/generic.py", line 3075, in _real_extract raise UnsupportedError(url) UnsupportedError: Unsupported URL: http://www.cricket.com.au/news/match-report/australia-england-first-ashes-test-day-three-gabba-live-stream-scores-highlights-scores-smith-marsh/2017-11-25
site-support-request
low
Critical
276,755,238
rust
Adding serde_json as an unused external crate causes a compiler error in unrelated code
This code compiles: ```rust use std::cell::Cell; use std::default::Default; fn main() { let cell = Cell::new(0u64); cell.get() == Default::default(); } ``` This code does not: ```rust extern crate serde_json; use std::cell::Cell; use std::default::Default; fn main() { let cell = Cell::new(0u64); cell.get() == Default::default(); } ``` ``` error[E0283]: type annotations required: cannot resolve `u64: std::cmp::PartialEq<_>` --> src/lib.rs:7:16 | 7 | cell.get() == Default::default(); | ^^ error: aborting due to previous error ```
C-enhancement,A-diagnostics,T-compiler
low
Critical
276,769,195
pytorch
Wrap Cephes library for mathematical special functions
The Cephes library provides many useful special functions and is used inside SciPy. The DeepMind folks wrapped Cephes for use in Torch ([site](http://deepmind.github.io/torch-cephes/), [code](https://github.com/deepmind/torch-cephes)). Is it feasible to wrap Cephes for similar use in PyTorch? **EDIT**: Requests for the Cephes functions should be directed to https://github.com/pytorch/pytorch/issues/50345 (the tracking issue for torch.special), as most of these functions are exposed in SciPy via the special module. cc @mruberry @rgommers
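For context, a small sketch of what is reachable today: a few special functions are already native tensor ops (the exact set depends on the PyTorch version, so treat the list as an assumption), and SciPy can serve as a CPU-only fallback for functions that are not wrapped yet.

```python
import torch
from scipy import special

x = torch.linspace(0.1, 5.0, 5, dtype=torch.float64)

# A few special functions already exposed as native tensor ops:
print(torch.lgamma(x))    # log-gamma
print(torch.digamma(x))   # psi / digamma
print(torch.erf(x))       # error function

# Fallback through SciPy (CPU only, no autograd) for functions not yet wrapped:
bessel_i0 = torch.from_numpy(special.iv(0, x.numpy()))
print(bessel_i0)
```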
feature,triaged,module: numpy
medium
Major
276,782,411
svelte
await once
Follow-up to #952 / https://github.com/sveltejs/svelte/issues/654#issuecomment-345490875. It would modify the behaviour of `await` blocks such that you'd only see the 'pending' state once — thereafter, whenever a new promise value was set, the old one would be preserved until the promise resolved. An additional argument would be passed to the `then` block, allowing the UI to indicate that the currently displayed data was out of date: ```html <!-- autocomplete suggestion list — we don't want to blow away the previous set of suggestions while we're waiting for the server to send us some new ones --> {{#await once suggestions}} <span>loading...</span> {{then value, pending}} <datalist id='suggestions' style='opacity: {{pending ? 0.5 : 1}}'> {{#each value as suggestion}} <option>{{suggestion}}</option> {{/each}} </datalist> {{#if pending}} <span>updating...</span> {{/if}} {{catch err}} <span class='error'>could not get suggestions!</span> {{/await}} ```
feature request,stale-bot,temp-stale
low
Critical
276,783,193
svelte
Async/streaming SSR renderer
Now that we have an `await` template primitive, it makes sense to have a streaming renderer: ```js require('svelte/ssr/register'); const app = express(); const SomeRoute = require('./components/SomeRoute.html'); app.get('/some-route', (req, res) => { SomeRoute.renderToStream({ foo: getPromiseSomehow(req.params.foo) }).pipe(res); }); ``` It would write all the markup to the stream until it encountered an `await` block (at which point it would await the promise, and render the `then` or `catch` block as appropriate) or a component (at which point it would pipe the result of `childComponent.renderToStream(...)` into the main stream). We'd get `renderAsync` for free, just by buffering the stream: ```js const markup = await MyComponent.renderAsync({...}); ``` Doubtless this is slightly more complicated than I'm making it sound.
feature request
high
Critical
276,797,602
kubernetes
Kubernetes should configure the ambient capability set
> /kind bug **What happened**: The following takes place on a k8s 1.8.2 cluster. I have a Docker container image that wants to listen on :80, and specifies a non-root USER. To get this running, in my pod spec the container has the following security context: ```yaml securityContext: capabilities: drop: - all add: - NET_BIND_SERVICE allowPrivilegeEscalation: false ``` When I schedule this pod on the cluster, the container fails to bind to :80 (permission denied), and goes into a crashloop. Note that Kubernetes did not complain that this configuration is in any way infeasible. The reason for this is that Linux capabilities interact in surprising ways with other security mechanisms. In this case, the problem is that I'm also running the container as a non-root user, and Kubernetes/Docker are only setting the inherited, permitted, effective and bounding capability sets. The catch is: the effective and permitted sets get _cleared_ when you transition from UID 0 to UID !0, so my container ends up with: ``` CapInh: 0000000000000400 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 0000000000000400 CapAmb: 0000000000000000 NoNewPrivs: 1 ``` 0x400 is CAP_NET_BIND_SERVICE, and as you can see my effective capabilities do not have this bit set. The linux kernel corrected this very confusing behavior by introducing the ambient capability set, which does not have surprising behaviors when you transition from UID 0 to !0. If you've set a capability as ambient, you keep it unless you explicitly revoke it. **What you expected to happen**: I expect the capabilities I assign in my podspec to still exist when my main binary `exec`s, regardless of other security context configuration (assuming k8s accepted my manifest as valid). To me, that translates to: k8s should be writing the caps described by `securityContext.capabilities` into the ambient capability set, as well as the other capability sets. Alternatively, if you believe the current behavior of `securityContext.capabilities` is working as intended, there should be another knob somewhere that I can use to populate the ambient capability set. However, I would strongly encourage you to instead consider the current behavior of `securityContext.capabilities` combined with non-root users as a bug, because it will likely trip up ~everyone using it unless they know a lot about the linux capability implementation. **How to reproduce it (as minimally and precisely as possible)**: Deploy this pod to a cluster using the default container runtime. You should see it crashlooping, with `kubectl logs bug-demo` showing that netcat is not allowed to bind to :80. If you comment out `runAsUser` and let the container binary run as root, it'll work fine. Similarly, if you modify the container to have a binary that has been altered with `setcap net_bind_service=+ep`, the contianer will run correctly as !root, because the setcap'd binary allows the container to regain the privileges it lost when transitioning out of UID 0. ```yaml apiVersion: v1 kind: Pod metadata: name: bug-demo spec: containers: - name: netcat image: danderson/bug-demo:latest args: - /bin/sh - -c - "netcat -l -p 80" securityContext: runAsUser: 65534 capabilities: drop: - all add: - NET_BIND_SERVICE allowPrivilegeEscalation: false ``` **Anything else we need to know?**: **Environment**: - Kubernetes version (use `kubectl version`): 1.8.2 server, 1.8.1 kubectl - Cloud provider or hardware configuration: bare metal cluster, single node (master taint removed), set up with `kubeadm`. - OS (e.g. 
from /etc/os-release): Debian Testing - Kernel (e.g. `uname -a`): Linux pandora 4.12.0-2-amd64 #1 SMP Debian 4.12.13-1 (2017-09-19) x86_64 GNU/Linux - Install tools: `kubeadm` - Others:
sig/node,kind/feature,lifecycle/frozen,sig/security
high
Critical
276,826,196
vscode
Git - Support git subtree
- VSCode Version: Code - Insiders 1.19.0-insider (89b158e11cb1c3fe94a3876222478ed2d0549fc8, 2017-11-24T05:14:03.606Z) - OS Version: Windows_NT x64 10.0.15063 - Extensions: Extension|Author (truncated)|Version ---|---|--- markdown-mermaid|bie|0.1.1 team|ms-|1.122.0 debugger-for-chrome|msj|3.5.0 --- Steps to Reproduce: 1. Ctrl-shift-P 2. type "Git:" <!-- Launch with `code --disable-extensions` to check. --> Reproduces without extensions: Yes Git [git subtree](http://alistra.ghost.io/2014/11/30/git-subtree-a-better-alternative-to-git-submodule/) commands is currently missing in the git command list and wasn't included in the git sync button. Subtree is great for maintaining a monorepo for large projects where there is a single source of truth for all the code. Currently this issue impacts developers experience as it is tedious to do manual sync of all the changes in the subtree with the original git repo of the project for the subtree. Proposal: 1. Add all the git subtree command (add, push, pull)
feature-request,git
high
Critical
276,829,365
neovim
VimL heredoc (<<HERE) for user commands
Currently if one wants to emulate `:python` with user commands he will be stuck at implementing `<< EOF`: there is no way for user commands to have multiline input, except for not adding `-bar` and using `:execute 'MyCmd' "multiline\nstring"`. On the other hand it is impossible to just add `command -herestring` and expect it to work fine: not unless somebody wants to state that user commands must be defined not prior to its *usage*, but prior to its *appearance* (i.e. prior to the first function definition which happens to have that command inside) and can’t switch definition after that (or give up on having proper parser). Thus the suggestion: knowing that `python << EOF\n{multiline string}\nEOF` yields just the same result as `execute 'python' "{multiline string}"` I am suggesting to add `:here` command used like that: here MyCmd << EOF foo EOF and if 1 here! MyCmd << EOF First non-empty line appears as having zero indent when bang is used. And this appears as an error for it being less indented then the first one. EOF endif . The only thing which `:here` command will do is calling `MyCmd` with multiline string in `<args>`, universally and without requiring any additional support in `MyCmd` definition. Additionally `:here` may accept range and forward it to `MyCmd`. Also `:here!` is good for built-in `:python` itself.
enhancement,vimscript
low
Critical
276,864,227
react
RFC: Drop isAttributeNameSafe() check
We currently validate DOM attributes on the client and ignore the ones with invalid names: https://github.com/facebook/react/blob/0c164bb4851e78e5f789dd8619f17ffcfee0221f/packages/react-dom/src/client/DOMPropertyOperations.js#L202-L204 This check used to be important for safety when we did `innerHTML` rendering on the client side, but it's not anymore. If we just let it call `setAttribute`, the browser would throw on a bad attribute name. This check used to run very infrequently (only for data attributes and custom elements), but now more attributes follow this code path (since any "simple" attributes with the same names are effectively treated as unknown attributes). So even though we cache the result, it seems unfortunate to do the work that the browser is already doing for us. While this would be a breaking change (so it has to go in 17), I think we should just remove this check, and let the browser throw. This does make spreading props blindly a bit more dangerous, but we have a warning so it should be visible.
Type: Enhancement,Type: Breaking Change,React Core Team
low
Major
276,942,659
go
cmd/compile: implement powerpc 32-bit backend (ppc32)
Tracking bug for missing PowerPC 32-bit backend. People interested in this can put a 👍 reaction here so that we can track interest.
FeatureRequest,compiler/runtime
high
Critical
276,986,874
pytorch
Sparse matrices in dataloader error
I have noticed that using sparse matrices with a DataLoader works only if num_processes = 0. Example: ``` import torch from torch.utils.data import DataLoader print('torch version: ', torch.__version__) i = torch.LongTensor([[0, 1, 1], [2, 0, 2]]) v = torch.FloatTensor([3, 4, 5]) sparse_tensor = torch.sparse.FloatTensor(i, v, torch.Size([2, 3])) dataset_sp = 2 * [sparse_tensor] def collate_fn(batch): return batch[0] loader = DataLoader(dataset_sp, batch_size=1, collate_fn=collate_fn) print('num_workers=0: ') for i, b in enumerate(loader): print(i, b.size()) loader = DataLoader(dataset_sp, batch_size=1, collate_fn=collate_fn, num_workers=1) print('num_workers=1: ') for i, b in enumerate(loader): print(i, b.size()) ``` Gives the result: ``` torch version: 0.2.0+5989b05 num_workers=0: 0 torch.Size([2, 3]) 1 torch.Size([2, 3]) num_workers=1: Process Process-1: Traceback (most recent call last): File "/Users/liam/anaconda/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap self.run() File "/Users/liam/anaconda/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/Users/liam/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 45, in _worker_loop data_queue.put((idx, samples)) File "/Users/liam/anaconda/lib/python3.6/multiprocessing/queues.py", line 348, in put obj = _ForkingPickler.dumps(obj) File "/Users/liam/anaconda/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) File "/Users/liam/anaconda/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 46, in reduce_tensor metadata = (tensor.storage_offset(), tensor.size(), tensor.stride()) AttributeError: 'torch.sparse.FloatTensor' object has no attribute 'storage_offset' ``` cc @vincentqb
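A possible workaround sketch, reusing the legacy `torch.sparse.FloatTensor` constructor from the snippet above: ship the dense components (indices, values, shape) through the worker processes, which pickle fine, and rebuild the sparse tensor in the main process.

```python
import torch
from torch.utils.data import DataLoader

i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
shape = (2, 3)

# The dataset returns plain dense tensors plus the shape, not a sparse tensor.
dataset = 2 * [(i, v, shape)]

def collate_fn(batch):
    return batch[0]

loader = DataLoader(dataset, batch_size=1, collate_fn=collate_fn, num_workers=1)
for idx, (indices, values, size) in enumerate(loader):
    # Reassemble the sparse tensor on the main-process side.
    sparse = torch.sparse.FloatTensor(indices, values, torch.Size(size))
    print(idx, sparse.size())
```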
module: sparse,triaged
medium
Critical
277,026,283
vue
Issues with v-model.number
### Version 2.5.8 ### Reproduction link [https://jsfiddle.net/50wL7mdz/79189/](https://jsfiddle.net/50wL7mdz/79189/) ### Steps to reproduce Included in fiddle ### What is expected? Input type='number' should not clear values, accept stings formatted in different locales. v-model.number should not return string. ### What is actually happening? Input value is cleared sometimes, v-model.number returns "" for "" and partially typed numbers. --- This issue started with topic on forum https://forum.vuejs.org/t/extra-directive-for-number-inputs/22438, it was suggested to open an issue for that. Here is original post: Hi! I've found that `v-model.number` has limitations and requires some boilerplate in some cases. This is mostly because `input[type="number"]` returns `''` for partially input numbers (like `1.`). Here are some problems with it: - App has no difference when input is either empty or invalid/partial. - Bound attribute has to be of `[String, Number]` type. - `''`, `undefined`, and `0` are falsy values. This leads to `val !== ''` checks in all places this attribute is used. 2nd and 3d issues can be solved with computed property, but it's hard to implement for nested ones and array of values. I came to using separate field for casted values: ``` <input type='number' v-model='obj.val' @input='$set(obj, "valCasted", _nOrNull($event.target.value))' /> ``` I wanted to it implement with custom directive (like `v-model-number='obj.valCasted'`), but I see that `v-model` is handled differently by compiler. This way it can automatically use `$set` when property is not defined on object. But I have not found how this can be implemented with custom directives. So here are questions :) : - Is there a better way to work with `input[type="number"]`? If not: - Can this be implemented with custom directives as convenient as `v-model` is? - Should this be added to vue? ----- After that post I've tried to implement custom component, it's included in fiddle, but it also has some issues. I've also added different types of inputs to fiddle to check their behaviour and checked against different locales. Thank you! <!-- generated by vue-issues. DO NOT REMOVE -->
discussion
medium
Major
277,049,699
opencv
features2d wrapper without flann module
##### System information (version) - OpenCV => 3.3.1 - Operating System / Platform => Ubuntu 16.04 - Compiler => /usr/bin/c++ (ver 5.4.0) ##### Detailed description [FlannBasedMatcher](https://github.com/opencv/opencv/blob/master/modules/features2d/include/opencv2/features2d.hpp#L1115) has two methods, the constructor `FlannBasedMatcher` and `create()`, that can't be wrapped without the optional flann module, but `CV_WRAP` ignores `#ifdef HAVE_OPENCV_FLANN`. ##### Steps to reproduce ``` cmake -DBUILD_LIST=features2d,python2 .. && make -j8 ```
bug,category: python bindings,category: build/install,category: features2d,category: flann
low
Critical
277,066,843
rust
trait implementation hidden by where clause
In both cases in the code below, the trait implementation `impl Op<()> for Single` seems to be hidden by the clause `where Single: Op<T>`. Using universal function call syntax works as a workaround. ```rust trait Op<T> { fn op(&self, t: T); } struct Single; impl Op<()> for Single { fn op(&self, _t: ()) {} } struct Pair { a: Single, b: Single, } // case 1 fn foo<T>(pair: Pair, t: T) where Single: Op<T>, { pair.a.op(t); // this fails to compile: pair.b.op(()); // this works: <Single as Op<()>>::op(&pair.b, ()); } // case 2 impl<T> Op<T> for Pair where Single: Op<T>, { fn op(&self, t: T) { self.a.op(t); // this fails to compile: self.b.op(()); // this works: <Single as Op<()>>::op(&self.b, ()); } } ``` The error messages: ``` error[E0308]: mismatched types --> lib.rs:23:15 | 23 | pair.b.op(()); | ^^ expected type parameter, found () | = note: expected type `T` found type `()` error[E0308]: mismatched types --> lib.rs:35:19 | 35 | self.b.op(()); | ^^ expected type parameter, found () | = note: expected type `T` found type `()` ```
A-trait-system,T-compiler,A-inference,C-bug,T-types
low
Critical
277,085,054
vue
shouldPrefetch enhancement
### What problem does this feature solve? Currently, I use `import(/* webpackChunkName: "lang-[request]" */ json!yaml!./myForm.lang.${currentLocale}.yaml)` to load the appropriate translation for my components. This generates `numberOfForms * numberOfLanguages` chunks that are all prefetched when the application starts. I would like to load only the translations for the current language. `shouldPrefetch(file, type)` gives ``` 0.js script 1.js script 2.js script ... ``` There is not enough information to filter out files that should not be prefetched. ### What does the proposed API look like? I don't know what information is available inside `shouldPrefetch`, but the API should probably look like: `shouldPrefetch(file, type, originalFilename)` <!-- generated by vue-issues. DO NOT REMOVE -->
feature request
low
Minor
277,114,443
pytorch
Implement DE in pytorch.optim
Hi, I'd like to use PyTorch to run evolutionary algorithms on the GPU. I noticed that DE (differential evolution) is implemented in Torch (https://github.com/torch/optim/blob/master/de.lua), so I was thinking of implementing DE in PyTorch. As DE is not (widely) used in deep learning, I wanted to check with you whether it is in your interest to have DE implemented in PyTorch. I'm happy to discuss the implementation and create a PR when ready. Once I learn more about PyTorch (by implementing DE), I'd also like to implement a genetic algorithm (GA) in the near future. Best, Dejan cc @vincentqb
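Not the proposed `optim` interface, just a minimal DE/rand/1/bin sketch on plain tensors (population size, F, CR and the sphere objective are arbitrary choices) to show how little machinery the core loop needs.

```python
import torch

def sphere(pop):
    # Stand-in objective: f(x) = sum(x_i^2), minimum at the origin.
    return (pop ** 2).sum(dim=1)

NP, D, F, CR = 20, 5, 0.8, 0.9
pop = torch.rand(NP, D) * 10 - 5
fitness = sphere(pop)

for generation in range(200):
    for i in range(NP):
        # Pick three distinct donors different from i.
        idxs = torch.randperm(NP)[:4]
        idxs = idxs[idxs != i][:3]
        a, b, c = pop[idxs[0]], pop[idxs[1]], pop[idxs[2]]
        mutant = a + F * (b - c)
        # Binomial crossover with at least one guaranteed component.
        cross = torch.rand(D) < CR
        cross[torch.randint(0, D, (1,)).item()] = True
        trial = torch.where(cross, mutant, pop[i])
        trial_fit = sphere(trial.unsqueeze(0))[0]
        if trial_fit <= fitness[i]:
            pop[i], fitness[i] = trial, trial_fit

print(float(fitness.min()))  # should shrink toward 0 over the generations
```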
module: optimizer,triaged,enhancement,needs research
low
Minor
277,135,891
flutter
Enable the Flutter tool to change app id after project creation
The app id can be set when a project is created using --org. It'd be great if the flutter tool could also update the app id after the project's been created, so that the Android and Xcode projects don't have to be manually edited.
c: new feature,tool,P3,team-tool,triaged-tool
low
Major
277,142,963
react-native
[iOS] TextInput clear button is not accessible
### Is this a bug report? Yes ### Have you read the [Contributing Guidelines](https://facebook.github.io/react-native/docs/contributing.html)? Yes ### Environment Environment: OS: macOS High Sierra 10.13.1 Node: 6.12.0 Yarn: 1.3.2 npm: 5.5.1 Watchman: 4.7.0 Xcode: Xcode 9.1 Build version 9B55 Android Studio: 3.0 AI-171.4408382 Packages: (wanted => installed) react: 16.0.0-beta.5 => 16.0.0-beta.5 react-native: ^0.49.5 => 0.49.5 Target Platform: iOS (9) ### Steps to Reproduce https://snack.expo.io/S1EkFy5gG 1. Input any text for the clear button to appear. 2. Turn on Voice Over 3. Try to select the clear button. ### Expected Behavior "Clear" button can be selected and activated. iOS native text input control supports it. ### Actual Behavior Voice Over cannot focus on the "clear" button. ### Reproducible Demo https://snack.expo.io/S1EkFy5gG
Component: TextInput,Accessibility,Accessibility Team - Evaluated
low
Critical
277,173,908
rust
API convention for blocking-, timeout-, and/or deadline-related functions
The standard library currently exposes several blocking- and/or timeout-related functions: Function \ Versions | Blocking | Timeout (ms) | Timeout --- | --- | --- | --- `std::sync::Condvar::wait*` | `wait` | `wait_timeout_ms` | `wait_timeout` `std::sync::mpsc::Receiver::recv*` | `recv` | none | `recv_timeout` `std::thread::park*` | `park` | `park_timeout_ms` | `park_timeout` `std::thread::sleep*` | none | `sleep_ms` | `sleep` The timeout versions taking a `u32` in milliseconds are actually deprecated for the version taking a `Duration` since `1.6.0`. This issue tracks the possibility to extend these APIs and provide a convention for blocking-, timeout-, and/or deadline-related functions. The current suggestion is: Function \ Versions | Blocking | Timeout | Deadline --- | --- | --- | --- `std::sync::Condvar::wait*` | `wait` | `wait_timeout` | `wait_deadline` `std::sync::mpsc::Receiver::recv*` | `recv` | `recv_timeout` | `recv_deadline` `std::thread::park*` | `park` | `park_timeout` | `park_deadline` `std::thread::sleep*` | none | `sleep_for` | `sleep_until` The blocking versions do not take any extra argument and are not suffixed. The timeout versions take a timeout as a `Duration` and return if this timeout is reached (the timeout starts when the function is called with best-effort precision). They are suffixed by `_timeout`. The deadline versions take a deadline as an `Instant` and return if this deadline is reached (the deadline precision is best-effort). They are suffixed by `_deadline`. For functions that do not have a meaningful blocking version (like sleep which would essentially block until the program ends), the timeout version would be suffixed by `_for` and the deadline version would be suffixed by `_until`. We don't have enough data-points to see if this rule is actually applicable. In a first iteration, we could leave aside those functions that do not have a meaningful blocking version.
C-enhancement,T-libs-api,B-unstable,A-time
medium
Major
277,175,370
react
Resetting a form containing a focused controlled number input puts it out of step with state
Here's a fixture demonstrating the issue (first test case): http://react-number-input-form-reset-bug.surge.sh/number-inputs If you have a controlled number input within a form containing a reset button, hitting Enter can trigger that reset event. This causes the focused input to be reset to the `defaultValue`, which won't be in sync with the tracked value because we do that work on blur for number inputs to avoid triggering validation warnings. This doesn't affect other input types, since the tracked value is updated immediately. This might be a viable tradeoff for avoiding those validation warnings, so I'm not sure if this is actionable, but I wanted to at least document it for future reference. cc @nhunzaker @gaearon
Type: Bug,Component: DOM
medium
Critical
277,199,339
go
runtime: gdb tests fail on NetBSD
The netbsd-amd64-8branch (NetBSD 8.0+) builder is back and NetBSD is kinda working for the first time in ages. But only kinda. The runtime tests fail with a timeout about two thirds of the time. Examples: https://build.golang.org/log/9ff11d9a995eb3b2555ce8f88bb3280c91639386 https://build.golang.org/log/d9a762c61874b62e34cec30f79ddc21a190922f7 https://build.golang.org/log/7ab9f1d000be47604a25b1ebf17fc55f5799d0ec https://build.golang.org/log/6678e6c8dfb7533054836eb140f9eff8416e3a30 Many more at https://build.golang.org. It'd be nice to have NetBSD happy for Go 1.10, considering the Go 1.9 news that NetBSD support was dead (https://golang.org/doc/go1.9#known_issues) which prompted a number of people to help fix things up. /cc @aclements @ianlancetaylor @bsiegert
help wanted,OS-NetBSD,NeedsInvestigation,compiler/runtime
low
Major
277,256,379
go
x/text: cgo changes broke golang.org/x/text/collate/tools/colcmp on Darwin
The golang.org/x/text/collate/tools/colcmp binary no longer compiles at Go tip. (It works at Go 1.9 and Go 1.8) https://build.golang.org/log/6a0073f83d8b33849ca6b8f57120b1161e849c32 ``` # golang.org/x/text/collate/tools/colcmp /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:41: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithBytes /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:48: cannot use nil as type _Ctype_CFAllocatorRef in argument to func literal /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:76: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithCharactersNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:79: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithCharactersNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:82: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithCharactersNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:85: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithCharactersNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:93: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithBytesNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:98: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithBytesNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:101: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithBytesNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:106: cannot use nil as type _Ctype_CFAllocatorRef in argument to _Cfunc_CFStringCreateWithBytesNoCopy /var/folders/dx/k53rs1s93538b4x20g46cj_w0000gn/T/workdir/gopath/src/golang.org/x/text/collate/tools/colcmp/darwin.go:106: too many errors ```
OS-Darwin,NeedsFix
low
Critical
277,281,907
neovim
tnoremap <Esc> <C-\><C-n>
Can we please make this default? Personally I believe it would add quite a bit to the popularity of :terminal if people knew that you can use normal mode in a terminal buffer.
enhancement,defaults,terminal
medium
Major
277,302,249
go
net: LookupHost shows different results between GODEBUG=netdns=cgo and go
### What version of Go are you using (`go version`)? go version go1.9.2 darwin/amd64 ### What did you do? ``` package main import ( "net" "fmt" ) func main() { fmt.Println(net.LookupHost("10.256")) fmt.Println(net.LookupHost("10.256.1")) fmt.Println(net.LookupHost("10.10.256")) fmt.Println(net.LookupHost("10.10.256.1")) fmt.Println(net.LookupHost("10.10.10.256")) } ``` ### What did you expect to see? This program works fine on both Linux & Windows and does show below output, but not on Mac ``` [] lookup 10.256: no such host [] lookup 10.256.1: no such host [] lookup 10.10.256: no such host [] lookup 10.10.256.1: no such host [] lookup 10.10.10.256: no such host ``` ### What did you see instead? ``` [10.0.1.0] <nil> [] lookup 10.256.1: no such host [10.10.1.0] <nil> [] lookup 10.10.256.1: no such host [] lookup 10.10.10.256: no such host ```
help wanted,NeedsFix
low
Critical
277,356,805
pytorch
[docs] Tensor.new is not documented
The master docs at http://pytorch.org/docs/master/tensors.html currently say only: new(*args, **kwargs) Constructs a new tensor of the same data type. The arguments are not documented. From some other pages it can be figured out that `new` preserves the storage type and even the device number, and that it can take a size argument. It would be good to have a complete specification.
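For reference, the behaviour that the one-line description seems to imply, based on the legacy constructor semantics of the time; the exact forms are an assumption, and newer releases document `new_empty`/`new_tensor` as replacements.

```python
import torch

x = torch.randn(2, 3)   # CPU float tensor

a = x.new(4, 5)         # presumably an uninitialized 4x5 tensor, same dtype/device as x
b = x.new([1, 2, 3])    # presumably built from data, same dtype/device as x

print(a.size(), a.type())
print(b)
```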
module: docs,triaged
low
Major
277,385,432
rust
[docs] unclear how to create a Box<[T]> from a pointer and a length
I wanted to create a `Box<[T]>` from a pointer and a length, and neither the docs of `slice` nor those of `Box` were helpful. It wasn't hard (`Box::from_raw(slice::from_raw_parts_mut(ptr, len) as *mut [u8])`), but the experience could have been better.
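A self-contained round trip built around that same one-liner, sketched here for `u8` (the `mem::forget` step is just one way to obtain an owned pointer/length pair to start from):

```rust
use std::slice;

fn main() {
    // Obtain a raw pointer + length that we own.
    let mut original: Box<[u8]> = vec![1, 2, 3, 4].into_boxed_slice();
    let len = original.len();
    let ptr = original.as_mut_ptr();
    std::mem::forget(original); // we now manage this allocation manually

    // Rebuild the Box<[u8]> from the pointer and length.
    // Safety: ptr/len came from a Box<[u8]> that we leaked above, so the
    // allocation layout matches what Box::from_raw expects.
    let rebuilt: Box<[u8]> = unsafe {
        Box::from_raw(slice::from_raw_parts_mut(ptr, len) as *mut [u8])
    };
    assert_eq!(&rebuilt[..], &[1, 2, 3, 4][..]);
}
```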
C-enhancement,P-medium,T-libs-api,E-medium,A-docs
low
Minor
277,427,882
go
archive/tar: Header.FileInfo.Mode.IsRegular reports true for non-regular files
### Background ### A tar file is a format for archiving a filesystem and is able to represent anything including regular files, directories, block and character devices, symlinks, and hardlinks. Over the years, many incompatible extensions to tar have become common practice. Specifically, one them is the use of pseudo-files to encode additional information about a file or a set of files. A pseudo-file is encoded in the same way as a normal file, but the `Header.Typeflag` is set to some special value and the contents of the file are parsed according to some format determined by the typeflag. Both the GNU and PAX formats are heavy users of this approach. The Go implementation of tar provides automatic parsing of the metadata contained in these pseudo-files for the real file it describes. For example, Go1.1 added support for parsing long filenames for the GNU format; Go1.3 added support for parsing local PAX headers; and Go1.3 added support for parsing GNU sparse headers. ### The problem ### Parsing the metadata for a single file is easy to represent in the API since we can just represent the additional information in the Header struct. However, the PAX format adds a feature called "global PAX headers" which encodes metadata intended to modify all subsequent files (#22748 indicates that this feature is actually used; e.g., `git archive --format=tgz` produces them). In order to provide the metadata contained within a global PAX header, `Reader.Next` returns a single `Header` representing the pseudo-file (in go1.10, we actually parse the PAX records into `Header.PAXRecords`, but that is orthogonal to this issue). The problem with global headers (and really any typeflag that the Go implementation does not recognize) is that the following reports true: ```go Header{Typeflag: TypeXGlobalHeader}.FileInfo().Mode().IsRegular() // true ``` This is the behavior on all versions of Go thus far. The problem with `IsRegular` reporting true is: 1. It lies. This is not a regular file. 2. A user relying only on `Header.FileInfo` alone is unable to handle pseudo-files specially (e.g., ignoring them). However, it is not trivial to have `Header.FileInfo.Mode.IsRegular` report false since the logic for `os.FileMode.IsRegular` only reports false for a [very narrow set of types](https://github.com/golang/go/blob/92ad8df5d10b08ae73e8104f3202f458616853f1/src/os/types.go#L59), none of which are an obvious choice for "special" files like metadata or unrecognized typeflags. Thus, we need to make decision whether to keep or change the current behavior of `Header.FileInfo.Mode.IsRegular` for special files like `TypeXGlobalHeader`. If we change it, how do we make it report false? The only approach I see is to define a new constant (e.g., `ModeSpecial`) in the `os` package. Thoughts? \cc @rasky @bradfitz
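For illustration only (a consumer-side workaround sketch, not the proposed fix): callers that need to ignore pseudo-files today can key off `Header.Typeflag` rather than `Header.FileInfo().Mode().IsRegular()`:

```go
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
)

func main() {
	tr := tar.NewReader(os.Stdin)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		if hdr.Typeflag == tar.TypeXGlobalHeader {
			// PAX global header: metadata for subsequent entries, not a
			// regular file, even though IsRegular() currently reports true.
			continue
		}
		log.Printf("member %q (typeflag %q)", hdr.Name, hdr.Typeflag)
	}
}
```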
NeedsDecision
low
Minor
277,510,074
TypeScript
Wrong overload selected on function when passed a callback annotated with a type with multiple optional properties
**TypeScript Version:** 2.7.0-dev.20171128 **Code** This issue was first noticed in code that uses LoDash with the typings from DefinitelyTyped. It seems to manifest only in a very specific set of circumstances. The following code sample is as minimal a reproduction as I could find. ```ts let _: LoDashStatic; interface LoDashStatic { mapValues<T extends object, TResult>( obj: T, callback: (value: T[keyof T]) => TResult ): { [P in keyof T]: TResult }; mapValues<T extends object>(obj: T, iteratee: object): { [P in keyof T]: boolean }; } interface Item { key?: string; bar?: number; } interface Collection { [key: string]: Item; } const source = { foo: { bar: 42 } }; let items: Collection = _.mapValues( source, (v: Item) => v ); ``` Note that the correct overload is selected when any of the following changes are made to the above code: - Remove the type annotation on the callback: `(v) => v` instead of `(v: Item) => v` - Remove the nested optional property: `{ foo: {} }` instead of `{ foo: { bar: 42 } }` - Remove the unused optional property: `interface Item { bar?: number; }` instead of `interface Item { key?: string; bar?: number; }` - Annotate the source collection: `const source: Collection = { foo: { bar: 42 } };` instead of `const source = { foo: { bar: 42 } };` **Expected behavior:** Compiles without issue. **Actual behavior:** ``` error TS2322: Type '{ foo: boolean; }' is not assignable to type 'Collection'. Property 'foo' is incompatible with index signature. Type 'boolean' is not assignable to type 'Item'. ```
Bug
low
Critical
277,510,169
go
cmd/cgo: provide some mechanism for treating certain C pointer types as uintptr
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.9.2 darwin/amd64 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/tschinke/go" GORACE="" GOROOT="/usr/local/go" GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/bv/6rt5ck194ls704dz9p4c0hlh0000gn/T/go-build985720357=/tmp/go-build -gno-record-gcc-switches -fno-common" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ### What did you do? I'm not using goMobile but using jni + cgo directly to get maximum control. The code works without any problems on ARM but fails on x86 randomly. At first it looks like the usual cgo check, however I'm quite confident that this is a bug. One variant is this: E/Go: panic: runtime error: cgo argument has Go pointer to Go pointer E/Go: goroutine 17 [running, locked to thread]: E/Go: main.GetPrimitiveArrayCritical.func1(0x5d12ef60, 0x72d00005, 0x72c68a80) E/Go: /Users/tschinke/repos/libs_wdy/go_snappy/go/src/wdy/jni.go:1852 +0x79 E/Go: main.GetPrimitiveArrayCritical(0x5d12ef60, 0x72d00005, 0x62811b64) E/Go: /Users/tschinke/repos/libs_wdy/go_snappy/go/src/wdy/jni.go:1852 +0x23 E/Go: main.Java_de_worldiety_snappy_go_GoSnappy_uncompress2(0x5d12ef60, 0x7c700001, 0x72d00005, 0x0, 0x8, 0x18) E/Go: /Users/tschinke/repos/libs_wdy/go_snappy/go/src/wdy/math.go:79 +0x27 E/Go: main._cgoexpwrap_3ee6d16a6a1f_Java_de_worldiety_snappy_go_GoSnappy_uncompress2(0x5d12ef60, 0x7c700001, 0x72d00005, 0x0, 0x8, 0x0) E/Go: command-line-arguments/_obj/_cgo_gotypes.go:3055 +0x6a But the cgo statement is wrong, math.go:79 looks like this: 77: //export Java_de_worldiety_snappy_go_GoSnappy_uncompress2 78: func Java_de_worldiety_snappy_go_GoSnappy_uncompress2(env *C.JNIEnv, clazz C.jclass, compressed C.jbyteArray, compressedOffset C.jint, compressedSize C.jint) C.jbyteArray { 79: ptrComp := GetPrimitiveArrayCritical(env, C.jarray(compressed)) To be complete, the GetPrimitiveArrayCritical method is declared like this: // jni.h: // void * (JNICALL *GetPrimitiveArrayCritical)(JNIEnv *env, jarray array, jboolean *isCopy); func GetPrimitiveArrayCritical(env *C.JNIEnv, array C.jarray) unsafe.Pointer { return C._GoJniGetPrimitiveArrayCritical(env, array) } And this: static void* _GoJniGetPrimitiveArrayCritical(JNIEnv* env, jarray array) { return (*env)->GetPrimitiveArrayCritical(env, array, JNI_FALSE); } Please correct me, but I can't see any Go-Pointer at all. ### What did you expect to see? a) cgo check should not randomly fail b) cgo check should be consistent across targets (x86 and arm) c) a possibility to disable the cgo check at runtime, which works for an android library. Setting the environment variable through Android (Os.setenv or libcore) has no effect, and to set it from Go itself is to late. Give us some public variable like GOGC. ### What did you see instead? Random panics on Dell Venue 8, running Android 4.4 on x86 (32bit), but works on ARMs without problems.
help wanted,NeedsInvestigation,FeatureRequest
medium
Critical
277,528,163
go
net/url: URL allows malformed query round trip
#### What did you do? ```go package main import ( "fmt" "log" "net/url" ) func main() { u, err := url.Parse("http://example.com/bad path/?bad query#bad fragment") if err != nil { log.Fatal(err) } fmt.Println(u.String()) } ``` https://play.golang.org/p/hdX1zpv3BN #### What did you expect to see? I expect either url.Parse return a non-nil error or URL.String method return fully escaped url representation — `http://example.com/bad%20path/?bad%20query#bad%20fragment` — with query being escaped the same way as path or fragment. #### What did you see instead? http://example.com/bad%20path/?bad query#bad%20fragment For the reference, such url is rejected by net/http.Server: https://play.golang.org/p/2gujmbXZlu #### Does this issue reproduce with the latest release (go1.9.2)? Yes #### System details ``` go version devel +9a13f8e11c Tue Nov 28 06:47:50 2017 +0000 darwin/amd64 GOARCH="amd64" GOBIN="" GOCACHE="/Users/artyom/Library/Caches/go-build" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/tmp/go:/Users/artyom/go" GORACE="" GOROOT="/Users/artyom/Repositories/go" GOTMPDIR="" GOTOOLDIR="/Users/artyom/Repositories/go/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/lb/3rk8rqs53czgb4v35w_342xc0000gn/T/go-build624293827=/tmp/go-build -gno-record-gcc-switches -fno-common" GOROOT/bin/go version: go version devel +9a13f8e11c Tue Nov 28 06:47:50 2017 +0000 darwin/amd64 GOROOT/bin/go tool compile -V: compile version devel +9a13f8e11c Tue Nov 28 06:47:50 2017 +0000 uname -v: Darwin Kernel Version 17.2.0: Fri Sep 29 18:27:05 PDT 2017; root:xnu-4570.20.62~3/RELEASE_X86_64 ProductName: Mac OS X ProductVersion: 10.13.1 BuildVersion: 17B48 lldb --version: lldb-900.0.57 Swift-4.0 ``` https://tools.ietf.org/html/rfc3986#section-3.4 states that query component should be defined as ([appendix A](https://tools.ietf.org/html/rfc3986#appendix-A)): query = *( pchar / "/" / "?" ) pchar = unreserved / pct-encoded / sub-delims / ":" / "@" unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~" pct-encoded = "%" HEXDIG HEXDIG sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "=" There's no whitespace character in this list. whatwg [agrees on that](https://url.spec.whatwg.org/#url-query-string): > A URL-query string must be zero or more URL units. > > [...] > > The URL units are [URL code points](https://url.spec.whatwg.org/#url-code-points) and percent-encoded bytes. > > [...] > > The URL code points are ASCII alphanumeric, U+0021 (!), U+0024 ($), U+0026 (&), U+0027 ('), U+0028 LEFT PARENTHESIS, U+0029 RIGHT PARENTHESIS, U+002A (*), U+002B (+), U+002C (,), U+002D (-), U+002E (.), U+002F (/), U+003A (:), U+003B (;), U+003D (=), U+003F (?), U+0040 (@), U+005F (_), U+007E (~), and code points in the range U+00A0 to U+10FFFD, inclusive, excluding surrogates and noncharacters.
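As a side note (an illustrative workaround, not part of the report): a caller who wants the query escaped consistently with the path and fragment can build `RawQuery` through `url.Values` instead of passing the raw string to `Parse`:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, err := url.Parse("http://example.com/bad path/")
	if err != nil {
		panic(err)
	}
	// Encode the query explicitly so it is percent-encoded, like the
	// path and fragment are on output.
	q := url.Values{}
	q.Set("key", "bad query")
	u.RawQuery = q.Encode()
	u.Fragment = "bad fragment"
	fmt.Println(u.String())
	// http://example.com/bad%20path/?key=bad+query#bad%20fragment
}
```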
NeedsDecision
medium
Critical
277,533,673
rust
`thread::Builder::spawn` returns WouldBlock for EAGAIN
When trying to launch a thread and the thread limit is reached or there is not enough virtual address space available for another thread, `thread::Builder::spawn` returns an `io::Error` of kind `WouldBlock`. ```rust extern crate libc; fn main() { unsafe { libc::setrlimit(libc::RLIMIT_NPROC, &libc::rlimit { rlim_cur: 0, rlim_max: 0 }); } let error = std::thread::Builder::new().spawn(|| unreachable!()).unwrap_err(); println!("I/O error kind {:?}: {:?}", error.kind(), error); } ``` This prints (on Linux): ``` I/O error kind WouldBlock: Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } } ``` [WouldBlock means](https://doc.rust-lang.org/std/io/enum.ErrorKind.html#variant.WouldBlock): > The operation needs to block to complete, but the blocking operation was requested to not occur. This doesn't make a lot of sense in the context of thread creation. Yes, if the create call were to block until the thread/virtual address space limit is no longer reached, this error interpretation would be correct, but I know of no threading API (Windows or Linux) with these semantics. The source of the problem is that the POSIX errors `EAGAIN` and `EWOULDBLOCK` may be defined as the same error value, and Rust chose to always interpret that as `EWOULDBLOCK`. I'm not sure what course of action I'd suggest to clear up the confusion. (NB. On Windows, AFAICT there is no way to limit the number of threads, but when running out of virtual address space, `CreateThread` returns `ERROR_NOT_ENOUGH_MEMORY`, which gets decoded as kind `Other`)
O-linux,P-low,T-libs-api,C-bug
low
Critical
277,564,321
kubernetes
start migrating controllers to use the new Event API group
Following #49112 The kubernetes/features#383 intends to migrate the different controllers towards the event API group described. This issue intends to track the PRs related to this migration. cc @gmarek
sig/scalability,area/controller-manager,lifecycle/frozen
low
Major
277,580,638
pytorch
Suppress hidden state output of RNNs?
Currently the forward method of RNN modules (LSTM, GRU, etc.) returns the output as well as the hidden states of all layers. When an RNN module contains many layers, the second output can take up a significant amount of memory, even though in many cases it is not needed. E.g. ``` lstm = nn.LSTM(40, 320, 5) y, _ = lstm(x) # The '_' variable takes lots of memory even though it's discarded right away ``` On the other hand, if I break down a multilayer RNN module into many modules representing single layers, I can discard the second output of the forward method layer by layer. With a 5-layer LSTM, I found doing so saved so much memory that I could double the batch size. ``` lstm1 = nn.LSTM(40, 320, 1) lstm2 = nn.LSTM(320, 320, 1) lstm3 = nn.LSTM(320, 320, 1) lstm4 = nn.LSTM(320, 320, 1) lstm5 = nn.LSTM(320, 320, 1) x, _ = lstm1(x) # The unwanted second output is discarded layer by layer x, _ = lstm2(x) # and doesn't consume too much memory x, _ = lstm3(x) x, _ = lstm4(x) x, _ = lstm5(x) ``` Is it possible to add a switch to the forward method to specify whether the second output should be computed? E.g. ``` lstm = nn.LSTM(40, 320, 5) y = lstm(x, requires_hidden = False) # Do not store the hidden states of each layer ```
module: memory usage,triaged,enhancement
low
Major
277,620,971
opencv
build warning: QuartzCore.framework/QuartzCore.tbd out of sync
``` [ 28%] Linking CXX shared library ../../lib/libopencv_videoio.dylib ld: warning: text-based stub file /System/Library/Frameworks//QuartzCore.framework/QuartzCore.tbd and library file /System/Library/Frameworks//QuartzCore.framework/QuartzCore are out of sync. Falling back to library file for linking. ``` Install Xcode 9.1 Run: `sudo xcode-select --install`
category: build/install,platform: ios/osx
low
Minor
277,659,827
rust
Program compiles even though a type cannot be inferred
The code below compiles, even though the type of the elements of the `Vec` is unknown. ```rust use std::mem; use std::os::raw::c_void; #[no_mangle] pub extern "C" fn alloc(size: usize) -> *mut c_void { let mut buf = Vec::with_capacity(size); let ptr = buf.as_mut_ptr(); mem::forget(buf); return ptr as *mut c_void; } fn main() { } ``` Somehow, Rust is picking a type and compiling anyway. It is unclear to me which type that may be and whether this behavior is specified somewhere. The type could be anything, from `()` to `String` or something else. You can test this by adding a type annotation to the declaration of `buf`. The code compiles regardless of the type you choose.
T-compiler,A-inference,C-bug
low
Major
277,707,779
go
x/text: Support UnicodeSet as per UTR35
Feature request: Support the [UnicodeSet](http://unicode.org/reports/tr35/#Unicode_Sets) syntax as defined in [Unicode Technical Report 35](http://unicode.org/reports/tr35/). This would be needed to implement [CLDR transliteration rules](http://unicode.org/repos/cldr/trunk/common/transforms/) which use UnicodeSets for filtering and matching; to support [CLDR exemplar characters](http://unicode.org/reports/tr35/tr35-6.html#Character_Elements) which are also defined in terms of UTR35 UnicodeSets; and other Unicode stuff such as [UTR39 Unicode Security Mechanisms](http://www.unicode.org/reports/tr39/) that make use of UnicodeSets. See Unicode’s [list-unicodeset tool](http://unicode.org/cldr/utility/list-unicodeset.jsp) for an online demo (and its [documentation](http://cldr.unicode.org/unicode-utilities/list-unicodeset)); and the [ICU documentation](http://userguide.icu-project.org/strings/unicodeset) for the ICU API to UnicodeSets. For reference, you might want to have a look at the [C++ implementation](http://source.icu-project.org/repos/icu/icu/tags/release-58-rc/source/common/uniset.cpp) and the [Java implementation](http://source.icu-project.org/repos/icu/icu4j/tags/release-58-rc/main/classes/core/src/com/ibm/icu/text/UnicodeSet.java) inside the ICU sources. Not sure if this could be implemented by rewriting the string syntax to Go regular expressions, or if this would need more work.
NeedsDecision
low
Minor
277,728,074
rust
Adding a specialized impl can break inference.
Relevant to #31844. This is how it happens: ```rust #![feature(specialization)] trait B { fn b(&self) -> Self; } // Impl 1 impl<T> B for Option<T> where T: Default { default fn b(&self) -> Option<T> { Some(T::default()) } } // Removing one of the two concrete impls makes inference succeed. // Impl 2 impl B for Option<String> { fn b(&self) -> Self { Some("special".into()) } } // Impl 3 impl B for Option<i32> { fn b(&self) -> Self { Some(0) } } fn main() { // Cannot infer `T` in `Option<T>`. None.b(); } ``` This issue does not originate from specialization, since if we removed `Impl 1` the same problem would occur. But with specialization, if those impls were added in order, the story would be a bit confusing: 1. With `Impl 1`, inference fails. 2. Added `Impl 2`, yay, inference succeeds. 3. Added `Impl 3`, inference is back to failing. The only fix would be to make inference fail in step 2. Even if it's not something we want to fix, it still seems worth noting somewhere that we are ok with specialization breaking inference.
T-compiler,A-inference,C-bug,F-specialization
low
Minor
277,864,373
rust
Poor error message for attempt to make doubly-fat pointers
Compiling the following code yields the given error: ```rust #![allow(unused_variables)] trait T1 { fn f(&self); } trait T2 { fn g(&self); } struct S; impl T1 for S { fn f(&self) { println!("<S as T1>::f"); } } impl T2 for T1 { fn g(&self) { println!("<T1 as T2>::g"); self.f(); } } fn main() { let s: S = S; let bs: Box<S> = Box::new(s); let t1_object: Box<T1> = bs; let t2_object: Box<T2> = t1_object; } ``` ```plain error[E0308]: mismatched types --> src/main.rs:26:30 | 26 | let t2_object: Box<T2> = t1_object; | ^^^^^^^^^ expected trait `T2`, found trait `T1` | = note: expected type `std::boxed::Box<T2>` found type `std::boxed::Box<T1>` ``` This is confusing when the intent is to create a trait object. I think it's common to internalize the rule that one can coerce `Box<T>` to `Box<Trait>` if `T: Trait`, without noticing the side requirement that `Box<T>` must not be a fat pointer, lest we create a doubly-fat pointer. I would speculate that what's going on in the compiler is that the coercion code declines to perform any coercion, leaving Rust to report the mismatch. It would be helpful to issue a note to the effect that trait object creation was considered, but rejected because the incoming pointer was already fat.
C-enhancement,A-diagnostics,T-compiler
low
Critical
277,879,455
rust
cargo test incorrectly warns for dead code
Hi! I got unexpected dead code warnings when executing `cargo test` with multiple test files. I use a very simple example below that reproduces the issue. I have two files that contain tests, `tests/test_one.rs` and `tests/test_two.rs`. Both contain exactly the same content (except for the unique test function name they contain): ```rust mod routines; #[test] fn test_one() { routines::my_routine(); } ``` And another file called `tests/routines.rs` that simply contains: ```rust pub fn my_routine() { } ``` When I execute `cargo test`, the two tests are executed successfully and no warning is raised. But if I remove the `my_routine()` call from one of the two tests, `cargo test` still succeeds but raises a warning on `pub fn my_routine()` saying `function is never used`. However, one test still calls the function, so there is no dead code as stated. I got the same issue with both rust stable (1.22.1) and rust nightly (1.24.0). Thanks.
C-enhancement,T-cargo
medium
Critical
277,892,640
go
go/types: TestStdFixed test results are cached across $GOROOT/test changes
"go test go/types -run TestStdFixed" results may be cached by cmd/go even after changing files in $GOROOT/test. I noticed this by changing "ignored" to "package ignored" in test/fixedbugs/issue22877.go (in CL 80759), and repeated runs of go test still output "(cached)" pass results. /cc @rsc @griesemer
NeedsInvestigation
low
Critical
277,899,032
TypeScript
Suggest specifying generic as union if candidates are different
<!-- BUGS: Please use this template. --> <!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript --> <!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.6.1. **Code** > If a generic type can't be formed by picking one of the inference candidates, you'll get the error you posted. – https://stackoverflow.com/questions/39905523/why-isnt-the-type-argument-inferred-as-a-union-type/39905723#39905723 This makes sense, however I would like to question whether we can improve the user experience around this, so errors due to this constraint are easier to understand and fix. For example: ```ts { function compare<T>(x: T, y: T): number { return 1; } compare( 'oops', /* Argument of type '42' is not assignable to parameter of type 'string'. */ 42, ); } { function match<T>(cases: { foo: T; bar: T }): T { return cases.foo; } /* Argument of type '{ foo: number; bar: string; }' is not assignable to parameter of type '{ foo: number; bar: number; }'. Types of property 'bar' are incompatible. Type 'string' is not assignable to type 'number'. */ match({ foo: 1, bar: 'foo', }); } ``` As a TypeScript user, I have struggled with these errors many times, and I've only recently realised the specific constraint on the type system which is the root cause of these errors: generics are picked from the first candidate and are not widened to include all candidates. I have also seen other people struggle with this when learning TypeScript. We can fix this error by specifying the generic as a union: ``` ts match<string | number>({ foo: 1, bar: 'foo', }); ``` However, this fix is really not obvious from the error message, especially if the user is not aware of this constraint on the type system (that generics will not be inferred as unions). I'm wondering if there's any way we can better surface this constraint to the user, to make it clearer to users how they can fix these type errors, such as by specifying the generic as a union (if that is what they intend).
Suggestion,Needs Proposal,Domain: Error Messages
low
Critical
277,908,758
go
runtime: unsafe pointer maps
There are several places in the code where pointers are stored into slots that the pointer maps do not note as holding pointers. While the occurrences found so far appear benign with the current GC, they do prevent adding diagnostics to the write barrier intended to detect errant writes. At the very least, they should be documented to explain why they are benign.
NeedsInvestigation,compiler/runtime
low
Minor
277,976,276
opencv
'Segmentation fault' with gpu video decoding
- OpenCV => 3.3.1-dev (github trunk: 29e4a4940dd81b22b26cbb51a2673d8c17f9f57d) - Operating System / Platform => CentOS 7.3 64Bit - Compiler => Default compiler on OS, gcc 4.8.5 20150623 (Red Hat 4.8.5-11) - Cuda => 9.0 - GPU => 1080 Ti I am trying 'samples/gpu/video_reader.cpp' to decode video using NVIDIA's GPU, but I get a 'Segmentation fault'. CUDA 9 now uses the 'dynlink_nvcuvid.h' headers, whereas earlier versions used 'nvcuvid.h', so I guess this must be something related to 'dynlink_nvcuvid.h'.
bug,priority: low,category: gpu/cuda (contrib)
low
Major
278,179,354
bitcoin
Bitcoin is returning higher fees for 36 block window than 2 block window (on testnet)
In testnet, right now, `estimatesmartfee` is returning *higher* estimates for 36 block window than 2 block window. The results: ``` estimatesmartfee 2 -> { "feerate": 0.01254483, "blocks": 2 } estimatesmartfee 36 -> { "feerate": 0.03863968, "blocks": 36 } estimaterawfee 2 -> { "short": { "decay": 0.962, "scale": 1, "fail": { "startrange": 490954, "endrange": 1e+99, "withintarget": 35, "totalconfirmed": 35.41, "inmempool": 165, "leftmempool": 0 }, "errors": [ "Insufficient data or no feerate found which meets threshold" ] }, "medium": { "feerate": 0.01254483, "decay": 0.9952, "scale": 2, "pass": { "startrange": 626596, "endrange": 1e+99, "withintarget": 38, "totalconfirmed": 38.96, "inmempool": 1, "leftmempool": 0 }, "fail": { "startrange": 596758, "endrange": 626596, "withintarget": 15.9, "totalconfirmed": 50.78, "inmempool": 14, "leftmempool": 0 } }, "long": { "decay": 0.99931, "scale": 24, "fail": { "startrange": 799713, "endrange": 1e+99, "withintarget": 138.78, "totalconfirmed": 147.56, "inmempool": 1, "leftmempool": 0 }, "errors": [ "Insufficient data or no feerate found which meets threshold" ] } } estimaterawfee 36 -> { "medium": { "feerate": 0.01254483, "decay": 0.9952, "scale": 2, "pass": { "startrange": 690822, "endrange": 1e+99, "withintarget": 37.33, "totalconfirmed": 38.19, "inmempool": 1, "leftmempool": 0 }, "fail": { "startrange": 596758, "endrange": 690822, "withintarget": 47.22, "totalconfirmed": 51.99, "inmempool": 12, "leftmempool": 0 } }, "long": { "decay": 0.99931, "scale": 24, "fail": { "startrange": 799713, "endrange": 1e+99, "withintarget": 139.47, "totalconfirmed": 147.66, "inmempool": 1, "leftmempool": 0 }, "errors": [ "Insufficient data or no feerate found which meets threshold" ] } } ``` I am also attaching fee_estimates.dat [fee_estimates.dat.zip](https://github.com/bitcoin/bitcoin/files/1518545/fee_estimates.dat.zip) Of course testnet is always "weird", but 36 block estimate being higher than 2 block estimate doesn't make sense, even on testnet. The version of bitcoind is v0.15.0.0
TX fees and policy
low
Critical
278,232,674
go
sync: TestWaitGroupMisuse2 takes 45-90 seconds on netbsd, AIX
TestWaitGroupMisuse2 on Linux with 8 cores takes 0.22s. On NetBSD it does pass but takes 45-90 seconds. Why? /cc @bsiegert
Testing,help wanted,OS-NetBSD,NeedsInvestigation,OS-AIX,compiler/runtime
low
Major
278,323,630
react
value|defaultValue={Symbol|Function} should be ignored, not stringified
Regression in master from https://github.com/facebook/react/pull/11534. Found it thanks to the attribute fixture snapshots.
Type: Bug,Difficulty: starter,Component: DOM,good first issue,React Core Team
high
Critical
278,375,010
go
net: a better builtin DNS stub resolver
This is a meta bug for tracking the status of the issues regarding builtin DNS stub resolver. - Refabrication - #16218 - Transport - #21160 - #23866 - #23873 - #27552 - Labels - #7122 - #9391 - #10631 - #10622 - #17659 - #22782 - #22826 - RRs - #27546 - Error handling - #18588 - Options - #6464 - #13279 - Testing - #13295
NeedsFix,umbrella
low
Critical
278,473,347
go
x/net/publicsuffix: ICANN flag returns true for "za" domains not in the list
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.9.2 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? goos: darwin goarch: amd64 ### What did you do? I tested domains such as "gli.za" not present in the public suffix list. https://play.golang.org/p/DaJICtp00J ### What did you expect to see? I expected to have icann flag set to false, since the domain is not in the public suffix list. // za : http://www.zadna.org.za/content/page/domain-information ac.za agric.za alt.za co.za edu.za gov.za grondar.za law.za mil.za net.za ngo.za nis.za nom.za org.za school.za tm.za web.za ### What did you see instead? icann flag set to true I added debugging lines locally to check why the flag was being set and to see if there was a rule found. I saw that the code was going through the following steps (https://github.com/golang/net/blob/master/publicsuffix/list.go#L47): 1. Before the for loop icann flag is set to false 2. The for loop evaluates based on the nodes: - find za -> found -> icann is set to true - find gli -> not found -> break 3. Return the last subdomain since no rules match (https://github.com/golang/net/blob/master/publicsuffix/list.go#L89) Is this the expected behaviour? It seems like a corner case, given "za" is one of the rare domains in the list that does not include the base level ("za"), but only 2 level domains.
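A minimal sketch of the reported behaviour (not the reporter's exact playground program), using the package's exported API:

```go
package main

import (
	"fmt"

	"golang.org/x/net/publicsuffix"
)

func main() {
	// "gli.za" is not in the public suffix list; only second-level
	// za domains such as "co.za" are listed.
	suffix, icann := publicsuffix.PublicSuffix("gli.za")
	// The reporter observed icann == true here and expected false.
	fmt.Println(suffix, icann)
}
```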
NeedsInvestigation
low
Critical
278,546,180
godot
Display a progress bar in the background of numerical properties in the Inspector and Project Settings
A progress bar could be displayed in the background of numerical properties in the Inspector and Project Settings. This would add some visual feedback (making visual grepping faster), as well as make adjustments by dragging the mouse more intuitive for users. The latter may require changing the dragging direction for adjustments (since bars are horizontal), though. As seen in Blender: ![blender_slider_values](https://user-images.githubusercontent.com/180032/33496026-ec0d9a92-d6c8-11e7-9618-6c3bc89b9970.png)
enhancement,topic:editor,usability
low
Major
278,582,769
go
cmd/go: add test -coverhtml
(maybe this shouldn't be a proposal, it seems trivial enough that we should just do it?) _Inspired by rakyll's suggestion in #16768._ I propose a small change to go test's "coverprofile" flag to output both the binary coverage format as well as HTML. The file extension passed in by the user would switch the output format (".html" for HTML, anything else for the existing behaviour). This change is simple and highly unlikely to break existing users, unlike #16768, which suggests to add additional flags that overlap with "go tool cover" and can quickly descend into supporting more and more flags. Before: ``` go test -coverprofile foo.out && go tool cover -html foo.out -o foo.html && rm foo.out ``` After: ``` go test -coverprofile foo.html ```
help wanted,Proposal,Proposal-Accepted,DevExp
low
Major
278,586,839
go
x/build: create mips32 soft-float builder
Commit 6be1c09 finished support for mips32 soft-float. We should have a builder configuration to test this.
Builders,NeedsFix,new-builder
low
Minor
278,601,567
go
encoding/json: bad encoding of field with MarshalJSON method
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? version 1.9 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? windows/amd64 ### What did you do? ```golang package main import ( "encoding/json" "os" ) type T bool func (v *T) MarshalJSON() ([]byte, error) { return []byte{'1'}, nil } type S struct { X T } func main() { v := S{true} e := json.NewEncoder(os.Stderr) e.Encode(v) // should print {"X":true} e.Encode(&v) // should print the same value } ``` ### What did you expect to see? ```json {"X":true} {"X":true} ``` ### What did you see instead? ```json {"X":true} {"X":1} ``` ### Issue The `json.Marshal` documentation states that: > Pointer values encode as the value pointed to. A nil pointer encodes as the null JSON value. Thus, `&v` should be marshaled to the same JSON value as `v`: ```json {"X":true} ``` Moreover, it states that: > If an encountered value implements the Marshaler interface and is not a nil pointer, Marshal calls its MarshalJSON method to produce JSON. Therefore, Marshal should not call the `MarshalJSON` method to produce JSON for the `X` field, because its `T` type does not implement the `json.Marshaler` interface. In fact, `MarshalJSON` has `*T` receiver and the Go documentation states: > The method set of an interface type is its interface. The method set of any other type `T` consists of all methods declared with receiver type `T`. ... The method set of a type determines the interfaces that the type implements ... As a final remark: 1. from the source code, the most relevant difference between cases `v` and `&v` is that `X` field becomes addressable in case `&v`, changing the encoding generated by `condAddrEncoder`; 1. implementing `MarshalJSON` with `T` value receiver makes `T` a `json.Marshaler` interface, with `X` values properly encoded by MarshalJSON: ```json {"X":1} {"X":1} ``` 1. if the intended behavior is that Marshal should anyway call the `MarshalJSON` method on encoding `T` values, whenever `*T` implements the `json.Marshaler` interface, that should be clearly documented.
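For reference, a sketch of the workaround mentioned in final remark 2 of the report: declaring `MarshalJSON` with a value receiver makes `T` itself satisfy `json.Marshaler`, so `v` and `&v` encode the field the same way:

```go
package main

import (
	"encoding/json"
	"os"
)

type T bool

// Value receiver: T (not just *T) implements json.Marshaler, so the X
// field is encoded via MarshalJSON whether or not it is addressable.
func (v T) MarshalJSON() ([]byte, error) { return []byte{'1'}, nil }

type S struct {
	X T
}

func main() {
	v := S{true}
	e := json.NewEncoder(os.Stderr)
	e.Encode(v)  // {"X":1}
	e.Encode(&v) // {"X":1}
}
```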
NeedsDecision,early-in-cycle
low
Critical
278,623,621
rust
Move more of rustc_llvm to upstream LLVM
In general we try to use the LLVM C API whenever we can as it's generally nice and stable. It also has the great benefit of being maintained by LLVM so it tends to never be a pain point when upgrading LLVM! Unfortunately though LLVM's C API isn't 100% comprehensive and we often need functionality above and beyond what you can do with just C. For this custom functionality we typically use the C++ API of LLVM and compile our own shims which then in turn have a C API. At the time of this writing all of the C++ to C shims are located in the [`src/rustllvm` directory](https://github.com/rust-lang/rust/tree/bb42071f63830a984c4983f6fbdf982916857f72/src/rustllvm) across three main files: [`ArchiveWrapper.cpp`](https://github.com/rust-lang/rust/blob/bb42071f63830a984c4983f6fbdf982916857f72/src/rustllvm/ArchiveWrapper.cpp), [`PassWrapper.cpp`](https://github.com/rust-lang/rust/blob/bb42071f63830a984c4983f6fbdf982916857f72/src/rustllvm/PassWrapper.cpp), and [`RustWrapper.cpp`](https://github.com/rust-lang/rust/blob/bb42071f63830a984c4983f6fbdf982916857f72/src/rustllvm/RustWrapper.cpp). These files are all compiled via `build.rs` [around here](https://github.com/rust-lang/rust/blob/bb42071f63830a984c4983f6fbdf982916857f72/src/librustc_llvm/build.rs#L167-L172) where basically use `llvm-config` to guide us in how to compile those files. The downside of these shims that we have, however, is that they're difficult for us to maintain over time. They impose problems whenever we upgrade LLVM (we have to get them compiling again as the C++ APIs change quite regularly). Additionally it also makes consumers of Rust have a more difficult time using custom LLVM versions. For example right now our shims compile on LLVM 5 but probably not LLVM trunk. Additionally for users that like to follow LLVM trunk then keeping up with the breakage of our shims can be quite difficult! To help solve this problem it seems the ideal solution is to try to upstream at least a big chunk of the C++ APIs that we're using. This way we can much more closely stick to LLVM's C API which is far more stable. It makes it that much easier for us to eventually upgrade LLVM and it makes users using a custom LLVM not need to worry about using an LLVM beyond the one that we're using (aka LLVM trunk). I'll try to have a checklist here we can maintain over time which also is a good listing of what each of the APIs does! ## `ArchiveWrapper.cpp` In general this is functionality for reading archive `*.a` files in the Rust compiler. This makes reading rlibs (which are archive files) extra speedy. The functions here are: * [ ] `LLVMRustOpenArchive` * [ ] `LLVMRustDestroyArchive` * [ ] `LLVMRustArchiveIteratorNew` * [ ] `LLVMRustArchiveIteratorFree` * [ ] `LLVMRustArchiveIteratorNext` * [ ] `LLVMRustArchiveChildName` * [ ] `LLVMRustArchiveChildData` * [ ] `LLVMRustArchiveChildFree` * [ ] `LLVMRustArchiveMemberNew` * [ ] `LLVMRustArchiveMemberFree` * [ ] `LLVMRustWriteArchive` These functions are basically just reading and writing archives, using iterators for reading and providing a list of structs for writing. ## `PassWrapper.cpp` This file is when we get into a bit more of a smorgasboard of random functions rather than a consistent theme, so I'll comment more of them inline below. A general theme here I've found as I wrote these down is that it's not critical that all of these are implemented. 
I could imagine that it would be possible to have a mode where we as rustc still compile shims sometimes (like the ones below) but many of the shims are stubbed out to not actually use LLVM at all if we're in "non-Rust-LLVM mode" (aka custom LLVM mode). In other words, we don't necessarily need to upstream 100% of these functions. * [ ] `LLVMInitializePasses` - not entirely sure why we can't use the upstream versions. Someone more knowledgeable with LLVM may know how to replace this! * [ ] `LLVMRustFindAndCreatePass` - this is how we add custom passes to a pass manager by their string name * [ ] `LLVMRustPassKind` - categorizes whether a pass is a function or module pass * [ ] `LLVMRustAddPass` - add a custom pass to a pass manager * [ ] `LLVMRustPassManagerBuilderPopulateThinLTOPassManager` - thin wrapper around the C++ API to populate a ThinLTO pass manager * [ ] `LLVMRustHasFeature` - this is actually a pretty tricky one. It has to do with https://github.com/rust-lang/rust/issues/46181 and is I think the only function which actually only works with our fork. I can provide more information for this if necessary. * [ ] `LLVMRustPrintTargetCPUs` - mostly just a debugging helper we could stub out in the custom LLVM case. * [ ] `LLVMRustPrintTargetFeatures` - same as above * [ ] `LLVMRustCreateTargetMachine` - this is one we have to create a `TargetMachineRef` ourselves but also giving us full access to all the fields, would probably just involve exposing more field accessors and setters and such. * [ ] `LLVMRustDisposeTargetMachine` - complement to the above * [ ] `LLVMRustAddAnalysisPasses` - I think this is just adding "standard" passes to the pass manager IIRC, we're just trying to mirror what clang is doing here. * [ ] `LLVMRustConfigurePassManagerBuilder` - just configuring some fields, again also aimed at mirroring clang. * [ ] `LLVMRustAddBuilderLibraryInfo` - again, attempting to mirror clang by configuring all the fields * [ ] `LLVMRustAddLibraryInfo` - mirroring clang * [ ] `LLVMRustRunFunctionPassManager` - seems ripe to add upstream! * [ ] `LLVMRustSetLLVMOptions` - I think this is for one-time configuration of LLVM at startup * [ ] `LLVMRustWriteOutputFile` - there's a whole bunch of ways to write outupt files with LLVM, if we had something that just wrote it out to memory or a file that'd be good enough for us * [ ] `LLVMRustPrintModule` - I'm pretty sure this is mainly just generating IR, but I'm not personally too familiar with the need for a custom class here * [ ] `LLVMRustPrintPasses` - AFAIK a debugging helper, could be stubbed out with a custom LLVM * [ ] `LLVMRustAddAlwaysInlinePass` - may just be missing upstream? * [ ] `LLVMRustRunRestrictionPass` - I think this is part of our LTO bindings, internalizing lots of stuff * [ ] `LLVMRustMarkAllFunctionsNounwind` - definitely part of our LTO bindings, for when you're compiling with `-C lto` and `-C panic=abort` * [ ] `LLVMRustSetDataLayoutFromTargetMachine` - not entirely sure what this is... * [ ] `LLVMRustGetModuleDataLayout` - also not entirely sure what this is... 
* [ ] `LLVMRustSetModulePIELevel` - I think just configuring more properties * [ ] `LLVMRustThinLTOAvailable` - for us just testing the LLVM version right now * [ ] `LLVMRustWriteThinBitcodeToFile` - mostly just what it says on the tin * [ ] `LLVMRustThinLTOBufferCreate` - same as abvoe but in memory * [ ] `LLVMRustThinLTOBufferFree` - freeing the above * [ ] `LLVMRustThinLTOBufferPtr` - reading the above * [ ] `LLVMRustThinLTOBufferLen` - reading the above * [ ] `LLVMRustParseBitcodeForThinLTO` - mostly what it says on the tin These APIs are all related to ThinLTO are are still somewhat in flux, there may not be a great C API just yet. * [ ] `LLVMRustCreateThinLTOData` * [ ] `LLVMRustFreeThinLTOData` * [ ] `LLVMRustPrepareThinLTORename` * [ ] `LLVMRustPrepareThinLTOResolveWeak` * [ ] `LLVMRustPrepareThinLTOInternalize` * [ ] `LLVMRustPrepareThinLTOImport` ## `RustWrapper.cpp` Sort of even a bigger smorgasboard than `PassWrapper.cpp`! Note that many of these functions are very old and may have actually made their way into the C API of LLVM by now, in which case that'd be awesome! * [ ] `LLVMRustCreateMemoryBufferWithContentsOfFile` - this is something we can and probably should write ourselves rather than relying on LLVM * [ ] `LLVMRustGetLastError` - this is a Rust-specific API for getting out an error message, I'd imagine that whenever it's set we'd have something analagous in LLVM. * [ ] `LLVMRustSetLastError` - used by the C++ code to set the error that rustc will retrieve later * [ ] `LLVMRustSetNormalizedTarget` - I think this is just exposing something that wasn't already there. * [ ] `LLVMRustPrintPassTimings` - debugging on our end. * [ ] `LLVMRustGetNamedValue` - I think this is just fun dealing with metadata * [ ] `LLVMRustGetOrInsertFunction` - needed that C++ function most likely. * [ ] `LLVMRustGetOrInsertGlobal` - again, probably just needed the function * [ ] `LLVMRustMetadataTypeInContext` - more constructors for more types * [ ] `LLVMRustAddCallSiteAttribute` - just a "fluff" thing we needed to do that wasn't possible in C IIRC * [ ] `LLVMRustAddAlignmentCallSiteAttr` - same as above * [ ] `LLVMRustAddDereferenceableCallSiteAttr` - same as above * [ ] `LLVMRustAddDereferenceableOrNullCallSiteAttr` - same as above * [ ] `LLVMRustAddFunctionAttribute` - same as above * [ ] `LLVMRustAddAlignmentAttr` - same as above * [ ] `LLVMRustAddDereferenceableAttr` - same as above * [ ] `LLVMRustAddDereferenceableOrNullAttr` - same as above * [ ] `LLVMRustAddFunctionAttrStringValue` - same as above * [ ] `LLVMRustRemoveFunctionAttributes` - same as above * [ ] `LLVMRustSetHasUnsafeAlgebra` - not entirely sure what this is doing... * [ ] `LLVMRustBuildAtomicLoad` - I think at the time the C API didn't exist? 
* [ ] `LLVMRustBuildAtomicStore` - same as above * [ ] `LLVMRustBuildAtomicCmpXchg` - same as above * [ ] `LLVMRustBuildAtomicFence` - same as above * [x] `LLVMRustSetDebug` - I think one-time configuration of LLVM * [ ] `LLVMRustInlineAsm` - I think the C API didn't exist (or wasn't full-featured enough) * [ ] `LLVMRustAppendModuleInlineAsm` - that function probably wasn't exposed in C * [ ] `LLVMRustVersionMinor` - just exposing a constant * [ ] `LLVMRustVersionMajor` - same as above * [ ] `LLVMRustDebugMetadataVersion` - this and most debug functions below I think just aren't in the C API * [ ] `LLVMRustAddModuleFlag` - same as above * [ ] `LLVMRustMetadataAsValue` - same as above * [ ] `LLVMRustDI*` - same as above (there's a whole bunch of these) * [ ] `LLVMRustWriteValueToString` - IIRC this is mostly debugging * [ ] `LLVMRustLinkInExternalBitcode` - used during normal LTO * [x] `LLVMRustLinkInParsedExternalBitcode` - used during normal LTO * [ ] `LLVMRustGetSectionName` - not sure where this came from... * [ ] `LLVMRustArrayType` - missing C API? * [ ] `LLVMRustWriteTwineToString` - I think more debugging/diagnostics * [ ] `LLVMRustUnpackOptimizationDiagnostic` - diagnostics * [ ] `LLVMRustUnpackInlineAsmDiagnostic` - diagnostics * [ ] `LLVMRustWriteDiagnosticInfoToString` - diagnostics * [ ] `LLVMRustGetDiagInfoKind` - custom for us I think? * [ ] `LLVMRustGetTypeKind` - missing C API? * [x] `LLVMRustWriteDebugLocToString` - debugging API I think * [ ] `LLVMRustSetInlineAsmDiagnosticHandler` - dealing with inline asm diagnostics * [ ] `LLVMRustWriteSMDiagnosticToString` - diagnostics * [x] `LLVMRustBuildLandingPad` - missing C API? * [ ] `LLVMRustBuildCleanupPad` - same as above * [ ] `LLVMRustBuildCleanupRet` - same as above * [ ] `LLVMRustBuildCatchPad` - same as above * [ ] `LLVMRustBuildCatchRet` - same as above * [ ] `LLVMRustBuildCatchSwitch` - same as above * [ ] `LLVMRustAddHandler` - same as above * [ ] `LLVMRustBuildOperandBundleDef` - same as above * [ ] `LLVMRustBuildCall` - same as above * [ ] `LLVMRustBuildInvoke` - same as above * [ ] `LLVMRustPositionBuilderAtStart` - same as above I think? * [ ] `LLVMRustSetComdat` - same as above * [ ] `LLVMRustUnsetComdat` - same as above * [ ] `LLVMRustGetLinkage` - same as above * [ ] `LLVMRustSetLinkage` - same as above * [ ] `LLVMRustConstInt128Get` - same as above * [x] `LLVMRustGetValueContext` - same as above * [ ] `LLVMRustGetVisibility` - same as above * [ ] `LLVMRustSetVisibility` - same as above * [ ] `LLVMRustModuleBufferCreate` - serializing a module to memory * [ ] `LLVMRustModuleBufferFree` - freeing above * [ ] `LLVMRustModuleBufferPtr` - reading above * [ ] `LLVMRustModuleBufferLen` - reading above * [ ] `LLVMRustModuleCost` - mostly a debugging helper
C-cleanup,A-LLVM,T-compiler,E-help-wanted,E-medium,C-tracking-issue,S-tracking-impl-incomplete
low
Critical