id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
58,408,710 | nvm | Install node 32bits in Ubuntu 64bits | Is there a way to install a specific node arch (32|64) with nvm, on a 64-bit OS?
| installing node,feature requests,pull request wanted | low | Major |
58,448,555 | nvm | ls-remote output is deceiving when one of nodejs.org and iojs.org is not accessible | I recently installed nvm on a new server that did not have the Thawte root certificate used by nodejs.org configured as a trusted root certificate in OpenSSL. The AddTrust root certificate used by iojs.org was configured as a trusted root cert. The result of this was that when I ran `nvm ls-remote` it only displayed iojs versions with no other output. This was very confusing as there was no indication that the requests to nodejs.org were failing.
| installing node,SSL issue,pull request wanted | low | Major |
58,478,303 | rust | Deref coercion from `String` to `&str` doesn't seem to always work | This code compiles fine:
``` rust
fn main() {
let x = "a".to_string();
let y: String = ["b", &x[..], "c"].concat();
println!("{}", y);
}
```
but this code doesn't
``` rust
fn main() {
let x = "a".to_string();
// Doesn't work
let y: String = ["b", &x, "c"].concat();
println!("{}", y);
}
```
error output:
```
<anon>:4:25: 4:27 error: mismatched types:
expected `&str`,
found `&collections::string::String`
(expected str,
found struct `collections::string::String`) [E0308]
<anon>:4 let y: String = ["b", &x, "c"].concat();
^~
```
I'd expect to not have to sometimes write `&foo[..]` and sometimes `&foo` to get a `&str` out of a `String`. The longer form is too verbose, and the inconsistency of having to sometimes use one over the other seems like a needless user mental model cost, especially because it's not obvious to me at all when I should use one form over the other.
rustc version: `rustc 1.0.0-nightly (522d09dfe 2015-02-19) (built 2015-02-21)`
| A-type-system,T-compiler,C-bug,T-types | medium | Critical |
58,488,339 | react | Optimizing Compiler: Tagging ReactElements | We can make more optimized reconciliation by tagging ReactElements with the "hidden class" of their props.
For example, this is guaranteed to always have three props: `className`, `style`, `children`.
``` javascript
<div className="foo" style={{ width: w, height: 100 }}>{c}</div>
```
If we could tag every element with these properties with a unique ID:
``` javascript
{ __t: 7, type: 'div', props: { className: 'foo', style: { width: w, height: 5 }, children: c } }
```
Then we could use the hidden class to generate an optimized diffing algorithm for these instead of iterating over the properties. Presumably, we would only need to do this for `type: <string>` since we only diff native components.
Bonus points if we can determine which properties are constant. Perhaps using a property descriptor object:
``` javascript
// Constant properties are annotated as 1, other properties are excluded and inferred by props.
var t = { className: 1, style: { height: 1 } };
{ __t: t, type: 'div', props: { className: 'foo', style: { width: w, height: 5 }, children: c } }
```
We would use a heuristic inside React to determine when to create an optimized differ. For example, after 10+ updates to the same component. Just like a JIT would do.
``` javascript
if (oldElement.__t === newElement.__t) {
numberOfUpdates++;
} else {
numberOfUpdates = 0;
}
if (numberOfUpdates === 10) {
optimizedDiffer = generateOptimizedDiffer(newElement);
optimizedDiffer(oldElement, newElement);
} else if (numberOfUpdates > 10) {
optimizedDiffer(oldElement, newElement);
} else {
manualDiffing(oldElement, newElement);
}
```
| Component: Optimizing Compiler,Resolution: Backlog,React Core Team | low | Major |
58,499,987 | rust | Extend #[bench] with memory usage | It would be nice to be able to see memory usage (and memory leak) in `#[bench]` tests to detect regressions. I guess jemalloc can do that with something like `je_malloc_stats_print`.
cc #14119
cc #14875
cc #19776
| T-libs-api,C-feature-accepted | medium | Major |
58,630,551 | neovim | :! (with no arguments) should pipe the buffer to &shell | Most (all?) shells can [execute stdin](http://stackoverflow.com/a/9966150/152142). E.g.:
```
echo ls | bash
```
Yet in Vim, if I have a buffer with these contents:
```
ls
```
and then execute any of these:
```
:%!
:%w !
:%r !
```
nothing happens. The buffer contents should pipe to `&shell` if `:!` has no arguments.
Also, `:!` does not work with a new, unsaved buffer. It should.
| enhancement,terminal | low | Major |
58,655,738 | youtube-dl | Custom filename transformations (was: Filename: I want to restrict characters, but permit spaces) | Right now, restricting filenames to reasonable characters also prevents spaces.
Yea, I know, most unix scripts/etc. can't handle spaces, but most modern GUI tools can.
| request | medium | Major |
58,775,371 | youtube-dl | add support for 'app passwd' | My understanding is that 2-step authentication is not supported, but I couldn't find anything about app-specific passwords here. This should work with netrc just like a regular password, but it doesn't.
| request | low | Major |
58,785,281 | go | x/mobile/cmd/gomobile: icon support in gomobile | There are some useful defaults we could have for app icons. Place an icon in <pkg>/assets/icon.png, and it gets used.
But it turns out there are many icons. Android has an arbitrary number of potential sizes, depending on the DPI of the screen. iOS has a series of fixed sizes. Both also have much larger store icons (1024x1024 and 512x512).
http://developer.android.com/design/style/iconography.html
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html
Given the large number of android combinations, automatic scaling might make sense. However users may need the ability to provide multiple icons, as small images often require artistic adjustment to the number of pixels. One possibility is <pkg>/assets/iconUxV.png.
| mobile | low | Major |
58,920,223 | TypeScript | SourceFileObject.getNamedDeclarations is missing declarations before methods | I'm doing a SourceFileObject.getNamedDeclarations on the following code snippet:
``` typescript
module Mankala {
export class Rectangle {
x: number;
y: number;
width: number;
height: number;
square() {
}
}
}
```
This returns an array that is missing the property 'height'.
The problem is in
SourceFileObject.prototype.getNamedDeclarations
``` javascript
case 125 /* Method */:
var functionDeclaration = node;
if (functionDeclaration.name && functionDeclaration.name.getFullWidth() > 0) {
var lastDeclaration = namedDeclarations.length > 0 ? namedDeclarations[namedDeclarations.length - 1] : undefined;
if (lastDeclaration && functionDeclaration.symbol === lastDeclaration.symbol) {
if (functionDeclaration.body && !lastDeclaration.body) {
namedDeclarations[namedDeclarations.length - 1] = functionDeclaration;
}
}
```
functionDeclaration.symbol === lastDeclaration.symbol is true as both symbols are undefined.
The property 'height' is then replaced by method 'square'
| Bug,Help Wanted,API | low | Major |
58,943,086 | You-Dont-Know-JS | "async & performance": Document unhandled rejection hooks | In async & performance you mention promise rejections and debuggability with error handling. A patch has landed in io.js 1.4 (released tomorrow) that, with native promises, lets you do something similar to your example:
Your example syntax:
```
var p = Promise.reject( "Oops" ).defer();
var p2 = Promise.reject( "Oops" ); // will be reported as an unhandled rejection to the console
```
Can now be coded with
```
var p = Promise.reject( "Oops" ).catch(function(){});
var p2 = Promise.reject( "Oops" ); // will be reported as an unhandled rejection to the console
```
When doing:
```
process.on("unhandledRejection", function(promise, reason) {
console.log("Unhandled Error", reason);
});
```
This will achieve semantics very close to your `.defer` semantics using native ES6 promises in io.js, I think it's worth mentioning :) How do you feel about this?
| for second edition | low | Critical |
58,944,257 | kubernetes | Efficient lookup by label selection, reverse label selection, uid, and IP addresses | In a discussion I had yesterday with @bgrant0607 and @lavalamp, the need to index labels and probably IP addresses came up. Reference: https://github.com/GoogleCloudPlatform/kubernetes/pull/4482
> We'll still want to do the label indexing, reverse label indexing, lookup by IP address, specification of query URLs in service and controller statuses, field selection for more efficient listing of just metadata, etc.
On top of that, I would also like to expose the entity relationship somehow in the API (e.g. /api/v1beta3/relations/{services,replicationcontroller,...})
| sig/scalability,area/api,area/usability,priority/awaiting-more-evidence,sig/api-machinery,kind/feature,lifecycle/frozen | medium | Major |
58,996,043 | go | proposal: spec: reconsider rule disallowing div-0 by constants | http://play.golang.org/p/flvr-MFRgR
is a perfectly reasonable program that doesn't compile. Perhaps the spec rule (which admittedly, I advocated) was a mistake after all.
We could reconsider this since permitting it would be backward-compatible.
(See also #10004 for context.)
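(The linked playground program isn't reproduced above, so the following is only a guess at its shape -- a minimal sketch, with illustrative names, of the kind of guarded constant division that the current rule rejects at compile time even though the branch can never run.)
``` go
package main

import "fmt"

const divisor = 0

func main() {
	x := 10
	if divisor != 0 {
		// Statically unreachable, yet today this line is still a
		// compile-time "division by zero" error, because the divisor
		// is a constant.
		fmt.Println(x / divisor)
	}
	fmt.Println(x)
}
```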
| LanguageChange,Proposal,LanguageChangeReview | low | Major |
59,077,545 | TypeScript | Contextual intellisense in Visual Studio | Hi,
The most obvious case is for enums:
``` typescript
enum Foo {
bar,
allBarOne
}
function f(foo: Foo) {
foo === // list Foo members here
}
```
Should also work for `boolean` (although such comparisons are not very common).
Another case is for `typeof` in the context of union types:
``` typescript
function f(foo: string|number) {
typeof foo === // 'string', 'number'
}
```
One problem with this case is that an option for single vs double quotes would need to be provided.
And also `instanceof`
``` typescript
class Foo {}
function f(foo: Foo|Array<number>) {
foo instanceof // Foo, Array<number>
}
```
| Suggestion,Help Wanted,API | low | Minor |
59,105,523 | neovim | configure ";" and "," to repeat more motions | The ";" key is real nice to repeat f,t and T,F motions and save keystrokes.
### Problematic
In real life, US English is not the only spoken language, and in many keyboard layouts you have to leave aside keys like "[", "]", "`" and even sometimes ";" in favor of direct access to dead keys or accented characters such as "é", "ç", "à" and so on.
Part of the solution to this is to reclaim the accented characters in normal mode by remapping them to the much more useful bracket operators.
After doing this, you are still missing direct-one-keystroke-action keys because of the dead keys (vim registers nothing, as nothing is sent to vim until you type a second key).
### Solutions
Here comes the idea of key overloading. We could add a configuration option to overload the ";" and "," with more movements.
### What i have tried before
I tried a few plugins before, but I found there are no elegant solutions; as soon as the user remaps one of the motion keys, everything is screwed up. As far as I know, you have to implement a kind of proxy to register the last pressed motion key.
### Benefits and usage scenario
A user would add in his .vimrc something like:
_set motion_repeat_overload_options+=<C-D>,<C-U>,<C-E>,<C-Y>,"[[","]]", <PageUp>,<PageDown>,...._
and then any of those motions could be repeated with ";" and reversed with ","
### How does this help with the above problematic and why is it cool?
- More two-keystroke motions are replaced by a single keystroke.
- You can remap <Space> to ";" (IMHO the space key is the biggest and dumbest by default in normal mode. Maybe you use it in some clever way I have not thought of). Anyway, with this scheme you can even simulate a browser behavior. Say you hit the distant <PageDown> or <C-D> once and then you can read down by hitting <Space> again, assuming 3+ pages.
- You can remap the ";" to something else (if you had direct access to it) or simply make good use of the extra power of that key.
### Going a step further
Let's say you remap "," to <S-Space>; one could reverse the "cruising" direction by hitting <S-Space> once, and then continue moving in that direction by simply hitting <Space> again. Here you would save a second key on a normal layout. This direction toggling would of course be the subject of another vim option.
Any thoughts on this?
Best regards,
| enhancement,needs:design | low | Major |
59,136,942 | You-Dont-Know-JS | "async & performance": keep an eye on SharedArrayBuffer proposals | May solidify into something that the second edition of "async & performance" should cover.
https://blog.mozilla.org/javascript/2015/02/26/the-path-to-parallel-javascript/
+@dherman
| for second edition | medium | Major |
59,308,054 | rust | Comparison operators have higher precedence than range operator `..` | This has the effect of causing parse errors and other weirdness when trying to use a range literal in a comparison.
``` rust
1..2 == 1..2
```
gives us
> ```
> <input>:1:10: 1:12 error: expected one of `.`, `;`, `<eof>`, or an operator, found `..`
> <input>:1 1..2 == 1..2
> ```
Huh? Okay, what about this?
``` rust
1..2 == 1
```
> ```
> <anon>:13:20: 13:31 error: start and end of range have incompatible types: expected `_`, found `bool` (expected integral variable, found bool) [E0308]
> <anon>:13 println!("{:?}", { 1..2 == 1 });
> ^~~~~~~~~~~
> ```
So it looks as though the comparison is being reduced before the range.
More fun:
``` rust
1..2 < (3..4)
```
> ```
> <anon>:13:27: 13:35 error: mismatched types:
> expected `_`,
> found `core::ops::Range<_>`
> (expected integral variable,
> found struct `core::ops::Range`) [E0308]
> <anon>:13 println!("{:?}", { 1..2 < (3..4) });
> ^~~~~~~~
> ```
Never mind the fact that ranges don't implement (Partial)Ord -- the precedence is still wrong here, too. Placing the parentheses on the first range instead of the second properly tells us
> ```
> <anon>:13:20: 13:30 error: binary operation `<` cannot be applied to type `core::ops::Range<_>`
> <anon>:13 println!("{:?}", { (1..2) < 3..4 });
> ^~~~~~~~~~
> ```
(although I'd argue that it should _also_ complain about mismatched types here. ;)
| E-hard,A-parser,P-low,I-needs-decision,T-lang,C-feature-request | low | Critical |
59,419,745 | You-Dont-Know-JS | "this & object prototypes": editorial review (about chapter 6) | Per Kyle's request, I'm posting what I find to be the serious issues with chapter 6 as they now stand; of course I'm open to being persuaded that I may have missed something. This is mostly from an Amazon review. I'll do my best to translate to second person, but forgive me if a few third person references to you survive.
The tl;dr is that this chapter does not adequately discuss how constructor functions are used idiomatically, nor does it honestly compare them to OLOO. Specifically, there are many, many situations where OLOO is far worse than well designed constructor functions, and an unsuspecting new developer reading this book may take away some simply awful habits.
Chapter 5 was good. You explain how prototype chains work, and how to create an object from a prototype: `Object.create`. From here I assumed you would show the limitations of `Object.create`: that to create a "family" (I'm avoiding using the word class for obvious reasons) of objects you'll need to repetitively code up factory functions of some sort that create the object using `Object.create`, then set the relevant properties, then return the object. Something like this (contrived of course):
```
function createPerson(name, birthday){
var result = Object.create(personPrototype);
result.name = name;
result.birthday = birthday;
return result;
}
```
Which would then be a perfect time to explain why constructor invocation exists, and how it very cleanly solves this very common problem, albeit with some syntactic baggage unfortunately borrowed from Java.
Nope. The chapter ends, and into 6.
Chapter 6 would have been a tremendous opportunity to thoroughly explain how OO development in JavaScript is typically done--constructor functions--with alternatives that might occasionally be appropriate, namely your OLOO. Instead it seemed to devolve into a tirade about how standard JavaScript idioms aren't to your liking, and why developers should instead use OLOO, which actively hampers many common use cases (use cases you do not bring up). Any non-expert JS developer who reads this chapter may pick up some bad advice which is very easy to misuse in a professional environment.
Many of your examples seemed like gross misrepresentations and sleights of hand picked to make (typical) prototypal inheritance (constructor functions with chained prototypes) look unnecessarily bad. On page 132 you present the following, as part of the "traditional class design pattern"
```
LoginController.prototype.failure = function(err){
//super call
Controller.prototype.failure.call(this, "Login invalid: " + err);
}
```
Overriding a base method just to lock in a string prefix? Fine; book examples can be contrived. The problem is that the alternative you present with your OLOO is this
```
AuthController.rejected = function(err){
this.failure("Auth Failed: " + err);
}
```
You just made a brand new method with a new name that calls this.failure, with this.failure resolving up to the prototype (just like before). The reader may be left wondering why this possibility was never brought up for the "traditional" way
```
LoginController.prototype.rejected = function(err){
this.failure("Login invalid: " + err);
}
```
or a new reader may be left with the impression that this way does not exist in the traditional idiom.
The rest of your comparison seemed to involve similar sleights of hand; it almost seemed apples-to-oranges. On page 134, when implementing your OLOO pattern you completely cut up the inheritance hierarchy to make it appear simpler. Couldn't this have been done with the "traditional" pattern? You also don't point out that the errors array had to be re-initialized, and that any developer following this pattern would have to also re-initialize every. single. other. property a base object may have (yours had only the one, and so hid this complexity). And of course you fail to consider the difficulty this code would have if you ever needed to create a second or third of these objects. These are all things that constructor functions handle seamlessly for you.
Why not discuss these issues so the reader has a fuller picture? Was there a word limit I'm unaware of?
Finally, is your mental model for the traditional way really honest? Surely this suffices for the vast majority of developers
obj.foo();
- a box is shown for obj, listing obj's own properties
- foo is not there, so an arrow takes you up to obj's prototype
- check there
- repeat step 2 until found, or prototype chain exhausted.
While—if they're a senior dev—also knowing (in the back of their heads) that there are also constructor properties at each level pointing to the relevant constructor function.
You _did_ explain how prototype chains work earlier in the book—and well. Why not tie it together with a more thorough exploration of constructor functions?
Note, I took the time to write all this up, albeit originally excessively angrily on Amazon, because many young developers learn their trade from books like this. Years ago Crockford's text was extremely influential to me—in a bad way. I accepted his opinions as fact, and frankly wound up looking foolish. I don't know if Crockford's reasons for (ostensibly at least) hating constructor functions are the same as yours, but it can be a (somewhat) harmful thing to put out there when presented like this.
| for second edition,editorial | low | Critical |
59,458,421 | go | liblink, cmd/ld: don't encode the instruction being relocated in Reloc.add | When I first implemented cgo for ARM, I set the wrong precedent of encoding
the instruction itself in the relocation's addend field. It makes everything harder:
liblink must stuff the real addend into the instruction, put it in reloc.add, and then
in the linker, it must get the real addend out from the instruction, calculate the
real value, and stuff the value back into the instruction.
Additionally, it makes -S output unreadable.
```
rel 20+4 t=6 runtime.morestack_noctxt+9bfffffe
rel 40+4 t=6 runtime.duffzero+eb00005e
```
(Note: the 2nd line is calling into the middle of duffzero, but you can't tell
anything from the addend without some shifts and masks to figure out the
offset)
Compare this to the 6g -S output:
```
rel 16+4 t=4 runtime.morestack_noctxt+0
rel 37+4 t=4 runtime.duffzero+228
```
I suspect this is because in cmd/ld/lib.c, we always pass a zero val (the variable o)
to archreloc, instead, it really should pass the original value read from the symbol.
Should we correct this, so that archreloc is passed the real instructions/data being relocated in the *val argument, and we place the real addend in the Reloc.add field?
| NeedsInvestigation | low | Major |
59,589,304 | rust | Consider using `llvm.invariant.*` intrinsics for immutable memory | Non-mutable memory that doesn't contain `UnsafeCell` can be marked `invariant`: http://llvm.org/docs/LangRef.html#llvm-invariant-start-intrinsic
http://llvm.org/docs/Frontend/PerformanceTips.html
| A-LLVM,I-slow,C-enhancement,A-codegen,T-compiler | low | Major |
59,601,605 | youtube-dl | Add support for yahoo sub domains | Please add support for the following Yahoo subdomains.
https://hk.movies.yahoo.net/
https://tw.movies.yahoo.com
http://special.movies.yahoo.co.jp/detail/20150303380294/
http://movies.yahoo.co.jp/movie/%E3%82%B7%E3%82%A7%E3%83%95%E3%80%80%E4%B8%89%E3%83%84%E6%98%9F%E3%83%95%E3%83%BC%E3%83%89%E3%83%88%E3%83%A9%E3%83%83%E3%82%AF%E5%A7%8B%E3%82%81%E3%81%BE%E3%81%97%E3%81%9F/350453/trailer/?vid=26280
| site-support-request | low | Minor |
59,605,273 | go | runtime: GC behavior in non-steady mode | Currently GC allows heap to grow to 2x of live memory during the previous GC. This can play badly with spiky memory usage. Consider that in steady state program has live set X. GC will allow heap to grow to 2X and then collect it back to X, and so on. Now consider that there is a 1.5X spike in memory usage. If GC happens _after_ the spike (when live set is again X), then GC will collect X memory (garbage generated during the spike) and set heap limit to 2X as before. Now if GC happens to happen _during_ the spike (when live set is 1.5X), then GC will collect only 0.5X and set heap limit to 3X.
Basically bad timing can increase maximum heap (RSS) by up to 2x.
Memory-constrained environments, like browsers, pay a great deal of attention to this problem. The idea is to set smarter GC threshold when heap grows/shrinks.
I did not work out a solution. But what I have in mind is: if heap grows, and especially if the next threshold (next_gc) will be larger than the current RSS (heap_sys - heap_released), then set next_gc to, say, heap_inuse \* (1 + GOGC/100) \* 0.75.
Since the heap cannot grow all the time, this throttling is only temporary.
@rsc @RLH @aclements @randall77
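(A rough sketch of the adjustment proposed above, using stand-in names rather than the runtime's actual fields -- purely illustrative, not a patch.)
``` go
// nextGCTarget throttles the next GC threshold when the heap is about to
// grow past the current RSS, per the idea sketched above. All names here
// are stand-ins for illustration only.
func nextGCTarget(nextGC, heapSys, heapReleased, heapInuse uint64, gogc int) uint64 {
	rss := heapSys - heapReleased
	if nextGC > rss {
		// Heap would grow past current RSS: use a reduced target.
		nextGC = uint64(float64(heapInuse) * (1 + float64(gogc)/100) * 0.75)
	}
	return nextGC
}
```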
| compiler/runtime | low | Major |
59,692,531 | nvm | sha1sum surprise | Unmet dependency on OSX.
```
-bash: sha1sum: command not found
```
Easy solve in the usual way.
```
brew install md5sha1sum
```
Perhaps a note in the readme?
| installing node: checksums,shell alias clobbering,pull request wanted | low | Major |
59,737,940 | go | syscall: Syscall9 for darwin/{arm,arm64} actually grabs up to 7 arguments | But a few system calls require 8 or 9 arguments.
```
// Actually Syscall7.
TEXT ·Syscall9(SB),NOSPLIT,$0-52
```
| OS-Darwin | low | Minor |
59,741,263 | rust | Incorrect / Inconsistent behavior of deref coercion with {} | In this code, the commented-out lines fail to compile.
``` rust
struct Foo;
fn f(_: &Foo) {}
fn g(_: &&Foo) {}
fn main() {
let x: &Foo = &Box::new(Foo);
f(x);
f(&Box::new(Foo));
f(&(Box::new(Foo)));
//f(&{Box::new(Foo)});
let y: &Foo = &Box::new(Foo);
g(&y);
//g(&&Box::new(Foo));
//g(&(&Box::new(Foo)));
g(&{&Box::new(Foo)});
}
```
[playpen](http://is.gd/x2qdNo)
I believe that, among other things, `f(&{Box::new(Foo)});` should compile since `&{Box::new(Foo)}` has type `&Box<Foo>`, which is deref-coercible to `&Foo`.
Additionally, the behavior is so inconsistent that it is basically impossible to predict whether the other lines will compile without trying them.
| A-type-system,T-compiler,C-bug,T-types | low | Major |
59,817,082 | go | x/build: trybot status shouldn't show FAIL for temp failures | The trybot status page shows "FAIL" even for temporary failures, like going over the quota.
It should have a different state, or just show idle instead.
Example screenshot:
/cc @adg
| Builders | low | Critical |
60,021,414 | TypeScript | Assignment of string literal indexed enum member passes compilation but results in invalid javascript | The following TypeScript:
```
const enum MyEnum {This,That};
MyEnum["That"] = 1;
```
compiles down to the following javascript:
```
;
1 /* "That" */ = 1;
```
This holds true for standard `enum`s as well, a `const enum` was just a more succinct example.
While I'm not sure why anyone would want to do this, it does seem that valid TypeScript should compile to valid javascript, and this will obviously cause a runtime error.
| Bug,Help Wanted | low | Critical |
60,112,796 | TypeScript | Allow different syntactic elements have different indentation | Original issue: [[st3] keeps indenting by 4 ws when typing](https://github.com/Microsoft/ngconf2015demo/issues/2)
| Suggestion,Help Wanted | low | Minor |
60,188,037 | go | cmd/pprof: duplicate listings in weblist report | When using the interactive "weblist" command, pprof generates multiple reports, one per source file, even though each report contains the same data accumulated across all sources.
E.g.,
```
go build -gcflags="-cpuprofile=$PWD/cpuprofile.1" -a runtime
cp cpuprofile.1 cpuprofile.2
go tool pprof $(go env GOTOOLDIR)/6g cpuprofile.1 cpuprofile.2
```
Then try running "list main.main" and "weblist main.main". Notice that weblist shows the same data, but twice because of the two source files.
When you have a lot of source files (e.g., 2000+ from repeatedly building the entire standard library), this makes using weblist impractical, because it constructs the entire report in memory first. (Currently I'm resorting to an ad hoc tool to concatenate CPU profile data.)
| compiler/runtime | low | Minor |
60,247,213 | youtube-dl | Ustream not download | C:\Distribute\Soft\youtube-d>youtube-dl -o test.flv "http://www.ustream.tv/embed/18742830?v=3&wmode=direct" --verbose
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-o', 'test.flv', 'http://www.ustream.tv/embed/18742830?v=3&wmode=direct', '--verbose']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2015.03.03.1
[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1
[debug] exe versions: none
[debug] Proxy map: {}
[ustream] 18742830: Downloading webpage
ERROR: Unable to extract desktop_video_id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 648, in extract_info
File "youtube_dl\extractor\common.pyo", line 275, in extract
File "youtube_dl\extractor\ustream.pyo", line 38, in _real_extract
File "youtube_dl\extractor\common.pyo", line 557, in _html_search_regex
File "youtube_dl\extractor\common.pyo", line 547, in _search_regex
RegexNotFoundError: Unable to extract desktop_video_id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| bug | low | Critical |
60,263,802 | go | cmd/internal/obj/arm64: moves that use C_BITCON and C_ADDCON don't work | Replaces 4ad/go#68.
Issue #10108 is related.
| NeedsInvestigation | low | Minor |
60,264,039 | go | cmd/internal/obj/arm64: floating point immediates are loaded through indirection | ARM64 is 64-bit; we gain nothing by coalescing float immediates, as addresses are 64-bit.
Related to issue #10108 and issue #10112.
Replaces 4ad/go#49.
| NeedsInvestigation | low | Minor |
60,568,519 | neovim | suspend/resume: check changed files | In vim, when I suspend with ctrl+z, change something, and then come back, vim will offer me options about reloading files. Neovim simply does not; I have to do :e to reload a file. Is this a bug?
| bug | medium | Critical |
60,667,040 | go | x/mobile: font.Default cannot be parsed by freetype-go | font.Default loads kCTFontSystemFontType, which does not parse with the error:
```
freetype: unsupported TrueType feature: cmap encoding
```
The Monospace font (kCTFontUserFixedPitchFontType) works.
| mobile | low | Critical |
60,737,791 | TypeScript | Support for tabs in language service formatter. | The formatter described in the link below currently doesn't seem to offer a way to indent by using tabs.
https://github.com/Microsoft/TypeScript/wiki/Using-the-Compiler-API#pretty-printer-using-the-ls-formatter
Could that be added?
Thanks!
| Suggestion,Help Wanted,API | low | Major |
60,741,498 | You-Dont-Know-JS | "async & performance": slim down general explanations of iterators/generators | Now that "ES6 & Beyond" is going to fully cover iterators and generators, perhaps it'd be appropriate to slim down ch4 a fair bit.
For example, we could strip out some of the nuances around `yield *`, error propagation, etc. These are all interesting and useful details, but they're not entirely needed to understand using generators for async flow control. I can just point to the "ES6 & Beyond" title for more info.
| for second edition | medium | Critical |
60,761,118 | go | x/tools/cmd/eg: support ... package expansion | ``` bash
$ eg -w -t t.go cmd/...
cannot find package "cmd/..." in any of:
```
It'd be nice if eg supported `...` the way that the `go` tool does.
I'm happy to do the legwork if desired.
| NeedsInvestigation | low | Minor |
60,761,657 | go | x/tools/cmd/eg: allow multiple refactorings in a single template | While working on `Node` refactoring leading up to [CL 7360](https://go.dev/cl/7360), I found myself applying 8 different templates (one per relevant field) sequentially. Applying them all in a single pass would be nicer to use--and I think faster to execute.
The template could look like a sequence of alternating pairs of `before` and `after`:
```go
package P
func before(s string) error { return fmt.Errorf("%s", s) }
func after(s string) error { return errors.New(s) }
func before(msg string) { log.Fatalf("%s", msg) }
func after(msg string) { log.Fatal(msg) }
```
The semantics would be that the output would be equivalent to applying each of the pairs sequentially as a template.
Thoughts? I'm happy to help with implementation.
| NeedsInvestigation | low | Critical |
60,788,560 | youtube-dl | https://www.tvnz.co.nz/worlds-worst/s1-ep1-video-6251831 fails to download | I'm happy to run more tests if need be. Would be nice to have tvnz.co.nz videos work with ytdl :)
$ TZ="NZST" youtube-dl 'https://www.tvnz.co.nz/worlds-worst/s1-ep1-video-6251831' --verbose
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['https://www.tvnz.co.nz/worlds-worst/s1-ep1-video-6251831', '--verbose']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.03.09
[debug] Python version 2.7.9 - Linux-3.16.0-4-amd64-x86_64-with-debian-8.0
[debug] exe versions: rtmpdump 2.4
[debug] Proxy map: {}
[generic] s1-ep1-video-6251831: Requesting header
WARNING: Falling back on generic information extractor.
[generic] s1-ep1-video-6251831: Downloading webpage
[generic] s1-ep1-video-6251831: Extracting information
ERROR: Unsupported URL: https://www.tvnz.co.nz/worlds-worst/s1-ep1-video-6251831
Traceback (most recent call last):
File "/home/justa/bin/youtube-dl/youtube_dl/extractor/generic.py", line 814, in _real_extract
doc = parse_xml(webpage)
File "/home/justa/bin/youtube-dl/youtube_dl/utils.py", line 1521, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: mismatched tag: line 96, column 2
Traceback (most recent call last):
File "/home/justa/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 649, in extract_info
ie_result = ie.extract(url)
File "/home/justa/bin/youtube-dl/youtube_dl/extractor/common.py", line 275, in extract
return self._real_extract(url)
File "/home/justa/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1283, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: https://www.tvnz.co.nz/worlds-worst/s1-ep1-video-6251831
In Firefox, this site requires the timezone to be correctly set, or an error along the lines of "incorrect timezone" is displayed where the video player would normally appear.
| site-support-request | low | Critical |
61,556,815 | youtube-dl | Please add support for godvine.com | Can not download this video:
http://www.godvine.com/Return-Of-Lost-Love-Letter-Brings-Tears-For-WWII-Vet-6931.html
| site-support-request | low | Minor |
61,640,625 | youtube-dl | Unable to download gamingcx.com's streaming videos. | From http://www.gamingcx.com/ ...
Example:
$ youtube-dl -v http://www.gamingcx.com/2015/03/gamecenter-cx-episode-126-super-spy.html
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.gamingcx.com/2015/03/gamecenter-cx-episode-126-super-spy.html']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.03.09
[debug] Python version 2.7.3 - Linux-3.2.0-4-amd64-x86_64-with-debian-7.8
[debug] exe versions: avconv 1.0.10, avprobe 1.0.10, ffmpeg 1.0.10, ffprobe 1.0.10, rtmpdump 2.4
[debug] Proxy map: {}
[generic] gamecenter-cx-episode-126-super-spy: Requesting header
WARNING: Falling back on generic information extractor.
[generic] gamecenter-cx-episode-126-super-spy: Downloading webpage
[generic] gamecenter-cx-episode-126-super-spy: Extracting information
ERROR: Unsupported URL: http://www.gamingcx.com/2015/03/gamecenter-cx-episode-126-super-spy.html
Traceback (most recent call last):
File "/usr/bin/youtube-dl/youtube_dl/extractor/generic.py", line 814, in _real_extract
doc = parse_xml(webpage)
File "/usr/bin/youtube-dl/youtube_dl/utils.py", line 1521, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1301, in XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1643, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1507, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 13, column 130
Traceback (most recent call last):
File "/usr/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 649, in extract_info
ie_result = ie.extract(url)
File "/usr/bin/youtube-dl/youtube_dl/extractor/common.py", line 275, in extract
return self._real_extract(url)
File "/usr/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1283, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: http://www.gamingcx.com/2015/03/gamecenter-cx-episode-126-super-spy.html
Thank you in advance. :)
| site-support-request | low | Critical |
61,654,715 | rust | Move metadata out of dylibs | Crate metadata constitute [a significant proportion](https://github.com/rust-lang/rust/issues/21482) of Rust dylibs. It's only needed for compilation; otherwise it just bloats up the size of Rust programs, and, unlike debug info, can't even be easily stripped.
In the case of dylibs, we could move metadata out into a companion file (say, `<library>.rsmd`), which developers may choose to not distribute if linking to the library is not expected (for example, Rust's own stage0 binaries).
I think this approach would be very congruent with Rust's philosophy of zero cost abstractions. The only downside I can see is that distribution of dylibs as _libraries_ would become slightly less convenient, as there would be two files instead of one.
| C-enhancement,A-metadata | low | Critical |
61,668,175 | TypeScript | Allow wrapped values to be used in place of primitives. | ``` ts
class NumberWrapper {
constructor(private value: number) {}
valueOf(): number { return this.value; }
}
var x = new NumberWrapper(1);
// The right-hand side of an arithmetic operation
// must be of type 'any', 'number' or an enum type.
console.log(2 + x);
```
It would be nice if an arithmetic operation could allow using the wrapped number because its `valueOf()` method returns the expected primitive type.
This can be generalized to: if type `T` is expected, then a value that implements the following interface can be used.
``` ts
interface Wrapped<T> {
valueOf(): T;
}
```
| Suggestion,Needs Proposal | high | Critical |
61,916,455 | neovim | Multihead: multiple linked clients (like Emacs "frames", multitenancy) | Hi Everyone, I've got a few use-cases that I'm wondering if Neovim can solve. If I'm missing something just let me know.
The main thing I'd like to do is to be able to have two or more **linked** Windows (using OS terminology, NOT vim windows, see emacs **frames** for a comparison) of Vim. The reason for this is that it would facilitate multi-monitor set-ups. Each monitor would have its own OS window (I'll call them **widgets** from now on to avoid confusion) and focus could be switched between the two (or more) using vim-bindings.
This is a necessary addition to existing structures like tabs, windows, splits, etc. because often different monitors have different resolutions and arrangements, and Vim doesn't always adapt well to these situations (stretching a single widget across multiple screens doesn't work well, and MANY programmers are using multi-monitor setups).
Some features this would require for optimal use would include:
- If a buffer is present in each widget, they should update themselves with changes made in other widgets (preferably in real-time)
- It is possible for a buffer to be open in one widget, while not open in another.
- It is possible for a buffer to be open in multiple widgets at the same time.
- All splits, windows, and tabs are managed on a per widget basis (making a split in one widget doesn't affect any others).
- There is only one active cursor between all widgets
- The Vim instance is aware of all windows and can switch between them using keyboard bindings (similar to switching between splits using <ctrl-w>[key] bindings)
As far as implementation goes, I don't believe something like this is currently possible. Since Vim does not traditionally deal with widgets itself (leaving that to gVim and MacVim), it would perhaps be more dependent on GUI clients to implement; however, it would still be important for Neovim to define an interface which allows this behaviour. These features will come in handy when third parties begin to look at embedding Vim, and will allow them greater flexibility in doing so.
I don't believe that Vim's server modes allow this behaviour as they are, though it would be possible to extend its behaviour to accommodate this sort of thing.
If there is any confusion, please ask. For reference, simply look over how "frames" are handled in Emacs.
I'm not sure whether it would be most beneficial to share a buffer list between widgets or not; that's up for debate, I suppose.
Cheers! Thanks for your time.
| enhancement,ui,ui-extensibility | high | Critical |
63,006,517 | go | test/fixedbugs: bug248 and bug369 fail when the absolute path contains a space | Go version:
```
go version devel +6ffed30 Wed Mar 18 15:14:37 2015 +0000 darwin/amd64
```
I did:
```
cd ~/"Google Drive"
git clone https://go.googlesource.com/go
cd go/src
./all.bash
```
I expected to see a successful build and test.
Instead I saw:
```
...
##### ../test
# go run run.go -- fixedbugs/bug248.go
exit status 1
bug2.go:8: import path contains space character: "/Users/drchase/Google Drive/work/go/test/fixedbugs/bug248.dir/bug0"
bug2.go:9: import path contains space character: "/Users/drchase/Google Drive/work/go/test/fixedbugs/bug248.dir/bug1"
```
and a similar failure with bug369.go .
My workaround is to rename "Google Drive" to "GoogleDrive" (and correct external references to the original name).
Small reproducer (can copy and paste into shell):
```
cd; mkdir -p tmp ; cd tmp ; mkdir "t m p" ; cd "t m p"
cat > main.go <<EOF
package main
import (
p0 "./x"
"fmt"
)
func main() {
fmt.Println(p0.Id("Hello, playground"))
}
EOF
mkdir x
cat > x/bug0.go <<EOF
package x
func Id (s string) string {
return s
}
EOF
go tool 6g main.go
# end of copy and paste
```
This produces the error:
```
main.go:3: import path contains space character: "/Users/drchase/tmp/t m p/x"
main.go:7: undefined: p0 in p0.Id
```
Ordinary compilation works fine:
```
$ go run main.go
Hello, playground
```
| Testing | low | Critical |
63,088,055 | rust | unused-features only warns about unused non-language features | For the following snippet, rustc only warns about `rustc_private` being unused:
```
#![feature(rustc_private)]
#![feature(box_syntax)]
#![feature(trace_macros)]
#![feature(slicing_syntax)]
#![feature(log_syntax)]
fn main() {}
```
According to the documentation, [check_unused_or_stable_features](https://github.com/rust-lang/rust/blob/master/src/librustc/middle/stability.rs#L570) only checks unused non-language features, but it is the only place that emits `UNUSED_FEATURES`.
| C-enhancement,A-diagnostics,T-compiler | low | Minor |
63,162,672 | rust | Poor interaction between int fallback and other flow of type information | A reduced example:
``` rust
use std::ops::Shl;
struct Foo<T>(T);
impl<T> Shl<usize> for Foo<T> {
type Output = Foo<T>;
fn shl(self, _: usize) -> Foo<T> { self }
}
impl<T> Shl<i32> for Foo<T> {
type Output = Foo<T>;
fn shl(self, _: i32) -> Foo<T> { self }
}
fn main() {
let _ = Foo(0u32) << 2; // works fine
let _ = (Foo(0u32) << 2).0; // does not work
let x = Foo(0u32) << 2; // does not work
let _ = x.0;
let x: Foo<u32> = Foo(0u32) << 2; // works
let _ = x.0;
}
```
generates the following error:
```
<anon>:18:13: 18:31 error: the type of this value must be known in this context
<anon>:18 let _ = (Foo(0u32) << 2).0; // does not work
^~~~~~~~~~~~~~~~~~
<anon>:21:13: 21:16 error: the type of this value must be known in this context
<anon>:21 let _ = x.0;
^~~
```
I suspect what is happening here is that the fallback isn't being triggered early enough -- in particular, before the projection is generating the error. Note that the same problem occurs with a normal struct.
(This may be one reason that `Shl`/`Shr` are only implemented on `usize` for `Wrapping<T>`.)
| A-type-system,T-compiler,A-inference,C-bug,T-types | low | Critical |
63,302,490 | neovim | `set <key>={sequence}` is not implemented | Sorry if this has been addressed in docs, but I searched and couldn't find anything.
in my vimrc I have
```
set <k0>=^[Op
set <k1>=^[Oq
set <k2>=^[Or
set <k3>=^[Os
set <k4>=^[Ot
set <k5>=^[Ou
set <k6>=^[Ov
set <k7>=^[Ow
set <k8>=^[Ox
set <k9>=^[Oy
" These '^[' are all \x1b ("escape") chars
```
This is mainly being used with vim-mark now (which already has binds for the numpad `<k0-9>` keys).
However I am finding that in neovim, the `set <k0>=` isn't actually successfully setting a keycode (`E846: Key code not set: <k0>`) like it does in Vim.
It also appears that the numpad numbers are being interpreted the same as the actual numbers, so it types into the ephemeral number count buffer. Which is otherwise perfectly reasonable, but not the behavior I am looking for.
So I am wondering what the proper way to do this is. Has setting keycodes with `set` been abolished? I hope not, but if so, do I then need to come up with some sort of new-style keycode like change whatever is mapped to `<k0>` to use `<m-O-p>` or something?
| bug,compatibility,tui,core,mappings | medium | Critical |
63,386,995 | rust | region-outlives obligations lead to uninformative/undecipherable region inference failures | Consider this program:
``` rust
use std::cell::Cell;
trait Foo { fn foo(&mut self); }
struct Pair<'a,'b> { x: &'a Cell<u8>, y: &'b Cell<u8> }
// This impl says that callers of `foo` on `Pair<'a,'b>` must ensure
// that 'b outlives 'a.
impl<'a, 'b:'a> Foo for Pair<'a, 'b> {
fn foo(&mut self) {
println!("pre x: {} y: {}", self.x.get(), self.y.get());
// 'b outlives 'a, so `&'b Cell<u8> <: &'a Cell<u8>`
self.x = self.y;
println!("post x: {} y: {}", self.x.get(), self.y.get());
}
}
impl<'a,'b> Pair<'a,'b> {
fn bar(&mut self) {
self.foo();
}
}
fn baz<'a,'b>(pa: &'a Cell<u8>, pb: &'b Cell<u8>) {
let mut p = Pair { x: pa, y: pb };
p.bar();
}
fn main() {
let a = Cell::new(1);
let b = Cell::new(2);
let pa = &a;
let pb = &b;
baz(pa, pb);
}
```
This yields the following error ([playpen](http://is.gd/6ukHwR)):
```
<anon>:18:14: 18:19 error: cannot infer an appropriate lifetime for lifetime parameter `'b` due to conflicting requirements
<anon>:18 self.foo();
^~~~~
<anon>:17:5: 19:6 help: consider using an explicit lifetime parameter as shown: fn bar(&mut self)
<anon>:17 fn bar(&mut self) {
<anon>:18 self.foo();
<anon>:19 }
error: aborting due to previous error
playpen: application terminated with error code 101
```
There is no mention of the region constraint on the impl providing `foo` that is causing calling `foo` from `bar` to fail.
cc @nikomatsakis
| C-enhancement,A-diagnostics,A-borrow-checker,T-compiler,D-confusing | low | Critical |
63,442,176 | youtube-dl | Read config file from current directory for portable Windows installation | On Windows, [the config file is currently read](https://github.com/rg3/youtube-dl/blob/master/README.md#configuration) from locations outside where youtube-dl.exe resides.
| request | low | Minor |
63,548,933 | rust | Variables bound with different modes in patterns | The following doesn't compile:
``` rust
enum Test<'a> {
A(&'a u64),
B(u64),
}
fn foo(test: Test) {
match test {
Test::A(r) | Test::B(ref r) => println!("{}", r)
}
}
fn main() {
foo(Test::A(&0));
foo(Test::B(1));
}
```
failing with the following error:
``` text
test.rs:7:30: 7:35 error: variable `r` is bound with different mode in pattern #2 than in pattern #1
test.rs:7 Test::A(r) | Test::B(ref r) => println!("{}", r)
^~~~~
error: aborting due to previous error
```
However, the following does work:
``` rust
enum Test<'a> {
A(&'a u64),
B(u64),
}
fn foo(test: Test) {
match test {
Test::A(&ref r) | Test::B(ref r) => println!("{}", r)
}
}
fn main() {
foo(Test::A(&0));
foo(Test::B(1));
}
```
Is there any reason rust can't just perform this reborrow automatically? `&ref` is a _very_ cryptic pattern.
| A-borrow-checker,T-compiler,C-bug | low | Critical |
64,023,384 | neovim | Nvim calls many getcwd and epoll_wait syscall | @tarruda please watch this video (download it to your local machine because the preview quality is crap).
https://www.dropbox.com/s/96ky8qb8xpkdc82/nvim_strace-2015-03-24_12.14.04.mkv?dl=0
That slowness is noticeable using `unite.vim` plugin.
| performance | low | Major |
64,332,900 | rust | error: overflow evaluating the requirement `<_ as core::iter::Iterator>::Item` | ```
src/lib.rs:1:1: 1:1 error: overflow evaluating the requirement `<_ as core::iter::Iterator>::Item` [E0275
src/lib.rs:1 //!
^
```
The error suggests adding a recursion requirement to my crate root
```
note: consider adding a `#![recursion_limit="128"]` attribute to your crate
```
It continues to recommend doubling the recursion limit until I've raised it so high that rustc itself overflows its stack
```
thread 'rustc' has overflowed its stack
Could not compile `mindtree_utils`.
```
[Here's a link](https://github.com/mitchmindtree/utils-rs) to the repository that is failing to build in case you would like to give it a go.
The error only appeared in the latest nightly for rustc (I'm fairly sure the crate was building fine before I updated). Here's the version of the nightly I've just downloaded which seems to have caused the failure:
```
rustc 1.0.0-nightly (123a754cb 2015-03-24) (built 2015-03-25)
```
| C-enhancement,T-compiler | medium | Critical |
64,396,329 | go | x/tools/cmd/eg: embedded fields are mishandled | $GOPATH/src/egbug/x.go:
``` go
package egbug
type T struct {
*E
E2 *E
}
type E struct {
Next *T
}
func main() {
var t *T
t.Next = new(T)
}
```
Template:
``` go
package p
import "egbug"
func before(t *egbug.T) *egbug.T { return t.Next }
func after(t *egbug.T) *egbug.T { return t.E2.Next }
```
eg output:
``` go
package egbug
type T struct {
*E
E2 *E
}
type E struct {
Next *T
}
func main() {
var t *T
t.E2.E2.Next = new(T)
}
```
The second statement in main should be `t.E2.Next`, not `t.E2.E2.Next`. The output does not compile.
| NeedsInvestigation | low | Critical |
64,451,549 | rust | Type inferencer probably could figure this case out (associated types) | ``` rust
trait Async {
type Value;
type Error;
}
enum Result<T, E> {
Ok(T),
Err(E),
}
impl<T, E> Result<T, E> {
fn reduce<F, U>(self, init: U::Value, action: F) -> U::Value
where F: Fn(U::Value, T) -> U,
U: Async<Error=E> {
unimplemented!();
}
}
impl Async for i32 {
type Value = i32;
type Error = ();
}
pub fn main() {
let res: Result<&'static str, ()> = Result::Ok("hello");
let val: i32 = 123;
res.reduce(val, |val /* i32 */, curr| val);
// Compiles if annotated ^^
}
```
cc @nikomatsakis
| A-type-system,C-enhancement,T-compiler,A-inference,T-types | low | Critical |
64,505,138 | go | x/tools/cmd/eg: needs package documentation | I pointed someone to https://golang.org/x/tools/cmd/eg, which unhelpfully says "For documentation, run the command , or see Help in golang.org/x/tools/refactor/eg". The command cannot be run on mobile, so they followed the unclickable link (with some effort) to discover a Help that is not helpful:
`const Help = "" /* 4235 byte string literal not displayed */`
Please consider using the technique adopted by the Go tool for package documentation: it is generated from help output.
https://github.com/golang/go/blob/master/src/cmd/go/mkdoc.sh
| Refactoring | low | Minor |
64,652,293 | rust | Order of operands to equality expression matters when inferring an AsRef implementation | This may be related to or the same as #23673, but I wanted to file it anyway as it seems to have a slightly different flavor.
``` rust
#![feature(convert)]
struct Container {
i: u8,
b: bool,
}
impl AsRef<u8> for Container {
fn as_ref(&self) -> &u8 {
&self.i
}
}
// A second implementation to make the `as_ref` ambiguous without further information
impl AsRef<bool> for Container {
fn as_ref(&self) -> &bool {
&self.b
}
}
fn main() {
let c = Container { i: 42, b: true };
// Succeeds
&42u8 == c.as_ref();
// Fails
c.as_ref() == &42u8;
}
```
Fails with
```
type annotations required: cannot resolve `Container : core::convert::AsRef<_>` [E0283]
```
However, if you flip the order of the arguments to the equality operator, then the code compiles and the correct type is inferred. It seems as if both forms should work the same.
Originally from this [Stack Overflow question](http://stackoverflow.com/q/29278940/155423)
| A-type-system,P-low,I-needs-decision,T-compiler,A-inference,C-bug,T-types | low | Minor |
64,747,765 | go | runtime/cgo: test signal from foreign thread before cgo call | A C++ program can trigger the Go signal handler from a global constructor before any cgo calls are made. As badsignal uses cgocallback, this requires an extra M be initialized. This was done for #10207, as it appears to sometimes happen with the os/signal tests on darwin/arm.
It needs a robust test.
There also needs to be some decision made about how this interacts with the deadlock detector. (Perhaps another issue.)
/cc @dvyukov
| compiler/runtime | low | Major |
64,993,609 | You-Dont-Know-JS | "async & performance": revise "TCO" discussion | - Move TCO from Chapter 6 to Chapter 5
- Slim it down
- Refer to TCO in "ES6 & Beyond"
| for second edition | medium | Major |
65,085,737 | neovim | :terminal should not use &shell | I use fish as a default shell, but sh as my shell for vim scripts because many of the plugins I use assume the standard sh syntax. fish does work, actually very well, with `:terminal`.
According to [one SO answer](http://stackoverflow.com/questions/11059067/what-is-the-nix-command-to-view-a-users-default-login-shell), the login shell can be obtained by running
``` bash
$ getent passwd $LOGNAME | cut -d: -f7
```
...and for Mac, [another SO answer](http://stackoverflow.com/questions/16375519/how-to-get-default-shell) recommended the same, but using this:
``` bash
$ grep ^$(id -un): /etc/passwd | cut -d : -f 7-
```
although the `^$(id -un)` syntax isn't fish-compatible and `grep` may output more than one username.
| terminal | medium | Critical |
65,337,979 | electron | Support click-through of transparency | As commented here, https://github.com/atom/atom-shell/pull/949#issuecomment-87087841 - transparent windows unfortunately are of limited use unless you can click through the transparent area.
I noticed that in the [documentation](https://github.com/atom/atom-shell/blob/master/docs/api/frameless-window.md#limitations) it is stated that this is blocked by an upstream bug. The nw.js project DOES have [support for this](https://github.com/nwjs/nw.js/wiki/Transparency#click-through-transparency), but I'm not sure how different their implementation is...
I think this is worth an issue to track discussion and implementation, so here it is.
| enhancement :sparkles:,component/transparent | high | Critical |
65,338,263 | go | cmd/compile: inline function calls that return a closure | https://go-review.googlesource.com/#/c/8200/ added a benchmark to the strings package that shows strings.Trim and its variants allocate memory. The allocation is caused by the call to makeCutsetFunc [here](https://github.com/golang/go/blob/master/src/strings/strings.go#L625). Inlining this call removes the allocation and provides a nice performance boost to Trim. You can see the effect on the benchmark using Go Tip (commit 6262192cd0fb98d6bb80752de70ae33fc10dc33e) below:
```
benchmark old ns/op new ns/op delta
BenchmarkTrim 3204 2323 -27.50%

benchmark old allocs new allocs delta
BenchmarkTrim 11 0 -100.00%

benchmark old bytes new bytes delta
BenchmarkTrim 352 0 -100.00%
```
makeCutsetFunc is a pretty simple function (all it does is return a closure), and it was pointed out in review that it might be nice to inline functions like it in the compiler rather than changing strings.Trim explicitly.
| Performance,NeedsInvestigation,compiler/runtime | low | Major |
65,410,337 | javascript | Please include the "Why" in your styleguide | Hi there
As your styleguide is getting popular, I just wanted to express some concerns I have with it. I'm generally for styleguides; however, I believe that things always depend on a context, and if you're classifying something as bad / good you should always include a "why" and reason about your decision. There is nothing more frustrating than seeing developers who were trained to accept as fact that something is good or bad without knowing the reasoning behind it. Things can change from one day to the next, and they will have no background to re-validate their decisions.
From an educational standpoint, but also for future maintenance of your styleguide (as things are changing and you need to adapt), I strongly recommend you include a short explanation in your examples of why something is considered good or bad at that given time and context. One example would be your type coercion entries. Why is it bad to use the unary + operator to convert a string to a number? In some contexts this makes absolute sense (both float and int can be converted, hex and exponent strings can be converted) and it's also faster than parseInt / parseFloat. If you include the "why" in your comments you can clarify your assumptions and the context.
There are also plenty of recommendations that are based on pure preference. Although I prefer single quote over double quote string notation, and I have my personal functional explanation why I prefer them, there is no general explanation why they should be preferred. This is simply personal preference with some vague functional reasoning. I can imagine a lot of young developers following your styleguide will think that there is something wrong with double quotes and they will probably think JSON is ridiculously flawed and the jQuery source code (they prefer double quotes) is a pile of crap... I'd really put some reasoning behind it and also explain that it's not necessarily a bad thing; what's really bad is not being consistent.
Thanks, and I hope you agree with my motivation.
Cheers
Gion
| enhancement | medium | Critical |
65,419,677 | TypeScript | Suggestion: allow instantiation of type aliases | Currently a type aliased class [cannot be instantiated](https://github.com/Microsoft/TypeScript/issues/2552). Also there is no way to alias a [type with type parameters](https://github.com/Microsoft/TypeScript/issues/1616). Please consider making type aliases as capable as the types that they represent, basically capturing the type expression under a different name.
| Suggestion,Needs Proposal | medium | Critical |
65,606,788 | neovim | clipboard=autoselect | It will be convenient if neovim can automatically send visual selection to clipboard (without yanking).
| clipboard,has:workaround | medium | Critical |
65,778,995 | youtube-dl | Can not download video | Hello.
There is a university site here in Vienna.
It has an embedded flash player. I can watch the video in the browser (firefox) fine.
But I also want to save it locally, because I sometimes want to watch it offline when
my internet connection is not up.
Here is what I did - keep in mind that --password xxx is not my real password; I use my real password but I get something strange.
> youtube-dl --verbose --username shevegen --password xxxxx https://multimedia.boku.ac.at/multimedia-intern/2011SS/agrarmaerkte_schiebel/20110621/agrarmaerkte20110621.html
Here is what I get:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'--username', u'PRIVATE', u'--password', u'PRIVATE', u'https://multimedia.boku.ac.at/multimedia-intern/2011SS/agrarmaerkte_schiebel/20110621/agrarmaerkte20110621.html']
[debug] Encodings: locale ISO-8859-1, fs ISO-8859-1, out ISO-8859-1, pref ISO-8859-1
[debug] youtube-dl version 2015.03.28
[debug] Python version 2.7.9 - Linux-3.2.29-i686-Intel-R-_Celeron-R-_CPU_G1630_@_2.80GHz-with-slackware-14.0
[debug] exe versions: ffmpeg 2.6.1, ffprobe 2.6.1
[debug] Proxy map: {}
[generic] agrarmaerkte20110621: Requesting header
WARNING: Could not send HEAD request to https://multimedia.boku.ac.at/multimedia-intern/2011SS/agrarmaerkte_schiebel/20110621/agrarmaerkte20110621.html: HTTP Error 401: Unauthorized
[generic] agrarmaerkte20110621: Downloading webpage
ERROR: Unable to download webpage: HTTP Error 401: Unauthorized (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/bin/youtube-dl/youtube_dl/extractor/common.py", line 314, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1673, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
I do not know what this error means. Perhaps it could become more specific?
The HTML code in question is this here:
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Agrarmärkte - 21.06.2011</title>
</head>
<body style="background-color:#000;">
<div align="center">
<object id="player" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="player" width="720" height="404">
<param name="movie" value="player.swf" />
<param name="allowfullscreen" value="true" />
<param name="allowscriptaccess" value="always" />
<param name="flashvars" value="agrarmaerkte20110517.mp4%26autostart=true" />
<embed
type="application/x-shockwave-flash"
id="player2"
name="player2"
src="player.swf"
width="720"
height="423"
allowscriptaccess="always"
allowfullscreen="true"
flashvars="file=agrarmaerkte20110621.mp4&autostart=true"
/>
</object> <!-- beim zweiten "height": Videovertikale+20 (für Steuerleiste im Player) -->
</div>
</body>
</html>
```
| site-support-request,account-needed | low | Critical |
65,949,492 | nvm | Test failure | On master, `npm run test/fast` fails for me:
```
$ npm run test/fast
> [email protected] test/fast /home/ubuntu/src/nvm
> shell=$(basename -- $(ps -o comm= $(ps -o ppid= -p $PPID)) | sed 's/^-//'); make TEST_SUITE=fast test-$shell
Running tests in bash
Running tests at 2015-04-02T15:15:03
test/fast
...
Listing versions
...
✓ Running "nvm ls stable" and "nvm ls unstable" should return the appropriate implicit alias
✗ Running "nvm ls system" should include "system" when appropriate
"nvm ls system" contained "system" when system node is not present
✓ Running "nvm ls" should display all installed versions.
✓ Running "nvm ls" should filter out ".nvm"
✓ Running "nvm ls" should filter out "versions"
✗ Running "nvm ls" should include "system" when appropriate
"nvm ls" contained "system" when system node is not present
✓ Running "nvm ls" should list versions in the "versions" directory
...
✓ Running "nvm use iojs" uses latest io.js version
✗ Running "nvm use system" should work as expected
Did not report error, system node not found
✓ Running "nvm use x" should create and change the "current" symlink
...
Done, took 42 seconds.
69 tests passed.
3 tests failed.
make: *** [test-bash] Error 3
npm ERR! Linux 3.13.0-48-generic
npm ERR! argv "/home/ubuntu/.nvm/versions/io.js/v1.6.3/bin/iojs" "/home/ubuntu/.nvm/versions/io.js/v1.6.3/bin/npm" "run" "test/fast"
npm ERR! node v1.6.3
npm ERR! npm v2.7.4
npm ERR! code ELIFECYCLE
npm ERR! [email protected] test/fast: `shell=$(basename -- $(ps -o comm= $(ps -o ppid= -p $PPID)) | sed 's/^-//'); make TEST_SUITE=fast test-$shell`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] test/fast script 'shell=$(basename -- $(ps -o comm= $(ps -o ppid= -p $PPID)) | sed 's/^-//'); make TEST_SUITE=fast test-$shell'.
npm ERR! This is most likely a problem with the nvm package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! shell=$(basename -- $(ps -o comm= $(ps -o ppid= -p $PPID)) | sed 's/^-//'); make TEST_SUITE=fast test-$shell
npm ERR! You can get their info via:
npm ERR! npm owner ls nvm
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /home/ubuntu/src/nvm/npm-debug.log
```
I do have a system Node at `/usr/bin/nodejs` [sic], but no system-wide `node` (without `js` suffix); running Ubuntu 14.04.2.
| testing,pull request wanted | low | Critical |
65,970,808 | go | debug/dwarf: No method to lookup type by type signature | The dwarf package internally has support for finding type entries by type signature, but it does not expose this to users of the dwarf package. (*Data).Type takes an "info" offset of a type entry and understands internally how to follow AttrType fields within that type that reference other types by signature. But AttrType can also be used on program entities to reference the type of that entity (e.g., the type of a variable), and any occurrence of AttrType can be either an "info" offset, which can be resolved by the package user; or a type signature, which cannot currently be resolved by the package user.
In practice this often isn't an issue because compilers that emit type signatures typically do so for typedef'd types (since there's little value in doing this for a single instance type), so the AttrType of the program entity will be an "info" offset of the typedef and that, in turn, will reference the type signature. However, compilers are free to do what they want and may emit type signatures in program entities, making the current API fragile. There's also no way for a caller to do its own decoding of the type structure if it references types by signature.
Consider adding an API to fetch a type by its signature. This could be accomplished by simply exposing (*Data).sigToType, but this puts the burden of distinguishing "info" offsets and type signatures on the user. We could instead add a variant of (*Data).Type that takes a Val (really an interface{}) and does the appropriate kind of lookup.
Orthogonally, this new method could return either a dwarf.Type or a dwarf.Entry. Returning dwarf.Type forces the user to accept the dwarf package's decoding of the type structure. On the other hand, a method to resolve a type offset or signature into an Entry and another method to decode an Entry into a Type would be more flexible and would expose the ability to read the types section without forcing the caller to use dwarf.Type.
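To make the user-facing gap concrete, here is a small sketch of the lookup a caller would like to perform on an AttrType value; the TypeBySignature call below is a hypothetical name for the requested API, not something the package exposes today.
```go
package dwarfutil

import (
	"debug/dwarf"
	"fmt"
)

// resolveAttrType resolves the value of an AttrType field, which DWARF allows
// to be either an "info" offset or an 8-byte type signature.
func resolveAttrType(d *dwarf.Data, v interface{}) (dwarf.Type, error) {
	switch x := v.(type) {
	case dwarf.Offset:
		return d.Type(x) // supported today
	case uint64:
		// Hypothetical: return d.TypeBySignature(x)
		return nil, fmt.Errorf("type signature %#x: no exported lookup in debug/dwarf", x)
	default:
		return nil, fmt.Errorf("unexpected AttrType value of type %T", v)
	}
}
```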
See related discussion at https://go-review.googlesource.com/#/c/7280/
| compiler/runtime | low | Critical |
66,008,931 | rust | Coherence error messages can be inscrutable | In particular, when the overlap check fails, it doesn't give much clarity, particularly when the overlap is between impls in different crates. It should tell you:
1. Precisely what type overlaps
2. For each case of negative reasoning it was not able to make, why it was not able to make it.
#2 might be a bit tricky to do, not sure, but we can certainly improve on the current state of affairs.
| E-hard,A-diagnostics,P-medium,T-compiler,C-bug | low | Critical |
66,058,140 | go | x/tools/cmd/eg: take better care with pointers, maintain addressability of expressions | Given this package:
``` go
package egbug
var Imap = map[*T]int{}
type T struct {
I int
}
func main() {
var t T
p := &t.I
_ = p
}
```
and this template:
``` go
package p
import "egbug"
func before(t *egbug.T) int { return t.I }
func after(t *egbug.T) int { return egbug.Imap[t] }
```
applying eg generates uncompilable code:
``` go
package egbug
var Imap = map[*T]int{}
type T struct {
I int
}
func main() {
var t T
p := &Imap[t]
_ = p
}
```
``` bash
$ go build
# egbug
./x.go:11: cannot use t (type T) as type *T in map index
./x.go:11: cannot take the address of Imap[t]
```
There are two separate bugs here--T vs *T and breaking addressability.
This is a real life example, from attempting to use `eg` to instrument the gc compiler to find correlations in Node field usage.
| NeedsInvestigation | low | Critical |
66,064,332 | go | x/mobile/app: detect device orientation | A function that returns the device's current orientation would be handy to switch between portrait- and landscape-optimized user interfaces.
``` go
const (
OrientationLandscape = iota
OrientationPortrait
// ...
)
func Orientation() int
```
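For context, a sketch of how the proposed call might be used to switch layouts; app.Orientation and the Orientation* constants come from the proposal above and do not exist yet, so this will not compile against the current package, and the draw helpers are placeholders.
```go
package main

import "golang.org/x/mobile/app"

func drawPortrait()  {} // portrait-optimized UI (placeholder)
func drawLandscape() {} // landscape-optimized UI (placeholder)

// draw switches layouts with the proposed API (hypothetical calls).
func draw() {
	switch app.Orientation() {
	case app.OrientationPortrait:
		drawPortrait()
	default:
		drawLandscape()
	}
}

func main() { draw() }
```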
Thoughts? cc/ @crawshaw @hyangah
| mobile | low | Major |
66,087,891 | go | x/mobile/exp/audio: support audio streaming | Currently, the audio player has a naive OpenAL backend that buffers the entire source into the memory to prepare the media. We currently can only recommend the audio player to play small audio files such as sound effects.
In order to relax this restriction we should limit the generation of OpenAL buffers and reuse them as the source is done processing. The number of buffers to be generated are limited on some platforms, such as iOS, to certain number. The rate of processing, and the host device's resources are both critical in the determination of the number of active buffers.
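A rough sketch of the buffer-recycling loop being described, written against a made-up decoder/queue interface rather than the real OpenAL bindings (whose exact Go signatures are not assumed here); the point is only the fixed pool of buffers that get refilled as they finish playing.
```go
package main

// queue abstracts the small slice of player functionality the loop needs; in
// the real backend these would be OpenAL source/buffer operations.
type queue interface {
	processed() [][]byte // buffers the player has finished playing
	enqueue(buf []byte)  // hand a refilled buffer back to the player
}

// decoder produces the next chunk of PCM data; ok is false at end of stream.
type decoder interface {
	decodeInto(buf []byte) (n int, ok bool)
}

const numBuffers = 4 // a small, fixed pool instead of buffering the whole source

// stream keeps a bounded amount of audio in flight: it seeds the player with
// numBuffers chunks, then refills each buffer as soon as it has been processed.
func stream(q queue, d decoder, bufSize int) {
	for i := 0; i < numBuffers; i++ {
		buf := make([]byte, bufSize)
		if n, ok := d.decodeInto(buf); ok {
			q.enqueue(buf[:n])
		}
	}
	for {
		done := q.processed()
		if len(done) == 0 {
			continue // a real backend would sleep or wait on a callback here
		}
		for _, buf := range done {
			n, ok := d.decodeInto(buf[:cap(buf)])
			if !ok {
				return // source exhausted
			}
			q.enqueue(buf[:n])
		}
	}
}

func main() {} // stream is only a sketch of the control flow
```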
| mobile | low | Minor |
66,139,313 | TypeScript | Generic decorators - could they receive some default type arguments? | A decorator can receive arguments, by being defined as a function that accepts parameters and then returns a normal decorator function:
```
function computed(evaluator: (obj: any) => any) {
return (obj: any, key: string) => {
Object.defineProperty(obj, key, {
get() { return evaluator(obj); }
});
};
}
```
I could use the above example like this:
```
class C {
firstName = "Homer";
lastName = "Simpson";
@computed(c => c.firstName + " " + c.lastName)
fullName: string;
}
```
Not a realistic example, in that it's just another way of defining a property getter, but it's enough to demonstrate the general problem: `c` is of type `any`, so the access to `firstName` and `lastName` is not type-checked, and neither is the result of the expression.
We can attempt to address this manually, because decorators can be generic:
```
function computed<O, R>(evaluator: (obj: O) => R) {
return (obj: O, key: string) => {
Object.defineProperty(obj, key, {
get() { return evaluator(obj); }
});
};
}
class C {
firstName = "Homer";
lastName = "Simpson";
@computed<C, string>(c => c.firstName + " " + c.lastName)
fullName: string;
}
```
But this still doesn't close the loop. The compiler is happy with the following abuse, in which `c` is supposedly a `string` but is actually a `C`, and `fullName` is declared a `string` but will actually become a `number` (albeit `NaN` I guess):
```
@computed<string, number>(c => parseInt(c, 10))
fullName: string;
```
In summary: decorators are passed the values of the meta-information in standard parameters (`target`, `key`, `value`), but they are not passed the compile-time types of those values in a way that can be used to create fully type-safe decorators.
So it would be helpful if in decorator-defining functions with generic parameters, those parameters could somehow map automatically to the types of `target` and `value` (as given meaning in the `__decorate` helper).
For big yucks, a straw man in which the compiler recognises a couple of built-in decorators that can appear on the type parameters:
```
function computed<@decoratorTarget O, @decoratorValue R>(evaluator: (obj: O) => R) {
...
}
```
Being decorated in this way, it would not be possible to manually specify type arguments for those type parameters when using the decorator. They would effectively be invisible to the user. If the decorator has its own custom type parameters, they would appear from the outside to be the only parameters (and for clarity should be at the front of the list).
| Suggestion,In Discussion,Domain: Decorators | medium | Critical |
66,322,347 | rust | Trait impls from where clauses (`ParamEnv`) take precedence over freestanding trait impls | I stumbled upon this trying to write a function that generically takes a range (or something else with which `str` can be indexed), and slices a str with that range. I got that to work, this bug comes into play when the str is produced by slicing another string with a `RangeFrom` or another concrete type (in the same function).
Here's a simpler (no lifetimes, no associated types) and self-contained example ([playpen](http://is.gd/f5auhw)):
``` rust
trait Frobnicate<I> {
fn frob(&self, i: I) -> Self;
}
struct Thing;
struct IA;
struct IB;
impl Frobnicate<IA> for Thing {
fn frob(&self, _i: IA) -> Thing { Thing }
}
impl Frobnicate<IB> for Thing {
fn frob(&self, _i: IB) -> Thing { Thing }
}
fn delegate_frob<I>(t: Thing, i: I) -> Thing
where Thing : Frobnicate<I> {
let t2: Thing = t.frob(IA);
t2.frob(i)
}
fn main() {}
```
This seems innocent enough. There's an `impl Frobnicate<IA> for Thing`, so the `.frob(IA)` call should work, and it's known to return `Thing` again, so via the `where` clause the `.frob(i)` call is good as well. However, I get this error message:
```
<anon>:20:28: 20:30 error: mismatched types:
expected `I`,
found `IA`
(expected type parameter,
found struct `IA`) [E0308]
<anon>:20 let t2: Thing = t.frob(IA);
^~
```
It appears that the where clause makes the compiler forget that there are other impls.
Adding a `Thing : Frobnicate<IA>` bound only makes the compiler (rightly) complain that that's not a bound at all, since it doesn't involve any type parameters.
UFCS makes it compile, though:
``` rust
let t2: Thing = <Thing as Frobnicate<IA>>::frob(&t, IA);
```
Edit: Besides playpen, I can also reproduce this locally:
```
rustc 1.0.0-beta (9854143cb 2015-04-02) (built 2015-04-02)
binary: rustc
commit-hash: 9854143cba679834bc4ef932858cd5303f015a0e
commit-date: 2015-04-02
build-date: 2015-04-02
host: x86_64-pc-windows-gnu
release: 1.0.0-beta
```
But I already noticed it a couple of weeks ago, so it can't be a recent regression.
| A-trait-system,T-compiler,C-bug,T-types | low | Critical |
66,382,673 | go | net: File method of Conn on Windows | I am writing something that requires access to the file descriptor of a socket created by net.Dial (which returns a net.conn) and then recast as net.TCPConn so that I can use net.TCPConn.File().Fd().
However, (as the error message says), "dup is not supported on windows". for my purposes, I don't "need" the dup() and would love to settle for just getting the raw file descriptor, but sadly the code within conn.File() appears as follows.
https://golang.org/src/net/net.go?s=5510:5555#L195
``` golang
func (c *conn) File() (f *os.File, err error) { return c.fd.dup() }
```
Is there any way I could ask for an "if windows then just return c.fd" option? Hopefully this request isn't a waste of time, I am far from an expert on go.
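For reference, this is roughly the pattern being described; it compiles and works on Unix (where File() dups the descriptor), and the error branch is what fires on Windows today.
```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Reach the descriptor via File().Fd(). On Unix, File() returns a dup of
	// the socket; on Windows this call is where "dup is not supported on
	// windows" is reported.
	f, err := conn.(*net.TCPConn).File()
	if err != nil {
		log.Fatal(err) // the path hit on Windows
	}
	defer f.Close()
	fmt.Println("raw descriptor:", f.Fd())
}
```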
| help wanted,OS-Windows | medium | Critical |
66,488,441 | nvm | Write tests for bare `nvm run X` | In order to test this, I'd need to figure out some way to intercept the call to `nvm-exec`, or, to intercept its call to `exec`.
From #625.
| testing,pull request wanted | low | Minor |
66,677,821 | nvm | Dash tests fail on my system | ```
$ npm run test/fast
> [email protected] test/fast /home/ubuntu/src/nvm
> shell=$(basename -- $(ps -o comm= $(ps -o ppid= -p $PPID)) | sed 's/^-//'); make TEST_SUITE=fast test-$shell
Running tests in dash
Running tests at 2015-04-06T18:49:59
test/fast
Aliases
✗ Running "nvm alias <aliasname> <target>" again should change the target
./Running "nvm alias <aliasname> <target>" again should change the target: 73: cd: can't cd to test
mkdir: cannot create directory ‘/alias’: Permission denied
! WARNING: Version '0.0.2' does not exist.
tee: /alias/test-stable-1: No such file or directory
test-stable-1 -> 0.0.2 (-> N/A)
mkdir: cannot create directory ‘/alias’: Permission denied
nvm alias test-stable-1 0.0.2 did not set test-stable-1 to 0.0.2
✗ Running "nvm alias <aliasname>" should list but one alias.
./Running "nvm alias <aliasname>" should list but one alias.: 73: cd: can't cd to test
mkdir: cannot create directory ‘/alias’: Permission denied
✗ Running "nvm alias" lists implicit aliases when they do not exist
./Running "nvm alias" lists implicit aliases when they do not exist: 73: cd: can't cd to test
mkdir: cannot create directory ‘/alias’: Permission denied
both the tree and the node path are required
nvm alias did not contain the default local stable node version
✗ Running "nvm alias" lists manual aliases instead of implicit aliases when present
./Running "nvm alias" lists manual aliases instead of implicit aliases when present: 73: cd: can't cd to test
both the tree and the node path are required
both the tree and the node path are required
stable and unstable versions are the same!
✗ Running "nvm alias" should list all aliases.
./Running "nvm alias" should list all aliases.: 73: cd: can't cd to test
mkdir: cannot create directory ‘/alias’: Permission denied
did not find test-stable-1 alias
alias
circular
✗ nvm_resolve_alias
./nvm_resolve_alias: 73: cd: can't cd to test
nvm_resolve_alias loopback was not ∞; got
✗ nvm_resolve_local_alias
./nvm_resolve_local_alias: 73: cd: can't cd to test
nvm_resolve_local_alias loopback was not ∞; got
✗ nvm_resolve_alias
./nvm_resolve_alias: 73: cd: can't cd to test
'nvm_resolve_alias test-stable-1' was not v0.0.1; got
✗ nvm_resolve_local_alias
./nvm_resolve_local_alias: 73: cd: can't cd to test
'nvm_resolve_local_alias test-stable-1' was not v0.0.1; got
Listing
alias
Listing paths
✗ Running "nvm which 0.0.2" should display only version 0.0.2.
./Running "nvm which 0.0.2" should display only version 0.0.2.: 73: cd: can't cd to test
N/A: version "v0.0.2" is not yet installed
v0.0.2 not found
✓ Running "nvm which foo" should return a nonzero exit code when not found
Listing versions
✓ Running "nvm ls 0.0.2" should display only version 0.0.2.
✗ Running "nvm ls 0.2" should display only 0.2.x versions.
./Running "nvm ls 0.2" should display only 0.2.x versions.: 73: cd: can't cd to test
both the tree and the node path are required
both the tree and the node path are required
both the tree and the node path are required
"nvm ls 0.1" did not contain v0.1.3
✓ Running "nvm ls foo" should return a nonzero exit code when not found
✓ Running "nvm ls io" should return NA
...
```
| testing,pull request wanted | low | Major |
66,679,535 | rust | Numeric fallbacks don't work when inherent methods are involved | I would expect `2.0.sqrt()` to return sqrt(2) as a f64. Instead,
```
<anon>:9:13: 9:19 error: type `_` does not implement any method in scope named `sqrt`
<anon>:9 2.0.sqrt()
<anon>:9:19: 9:19 help: methods from traits can only be called if the trait is in scope; the following traits are implemented but not in scope, perhaps add a `use` for one of them:
<anon>:9:19: 9:19 help: candidate #1: use `std::num::Float`
<anon>:9:19: 9:19 help: candidate #2: use `core::num::Float`
```
... and the Float trait is deprecated!
This also occurs with integer literals.
I ran into this in the wild while doing an exponential decay:
``` rust
let k = c * 2.0.powf(-1.0 * scale * dt);
```
A workaround is to annotate the literal's type explicitly.
| A-type-system,T-lang,T-compiler,A-inference,C-bug,T-types | low | Critical |
66,871,436 | neovim | Terminal buffers don't respect `bufhidden` when not `hide` or `delete` | With a master build from a few days ago, terminal buffers seem to be deleted when the windows showing them is closed. This seems to contradict the usual usage patterns where buffers live until they are explicitly deleted by the user. Moreover, I'd like to be able to use a terminal session for some time, put it into the background, an reload it into a split once I need it back.
| terminal | low | Major |
66,884,431 | go | cmd/compile: assigning large values does not use memmove | Consider this piece of code
```
package main
import "fmt"
func main() {
f()
}
func f() {
var a [200]int
var b [200]int
a = b
b = a
fmt.Println(&a, &b)
}
```
The assignments of `a = b` or `b = a` where the size of `a` or `b` is above the DUFFCOPY limit of 128 words produce some very simplistic code
```
a = b
10c60: e1a01004 mov r1, r4
10c64: e1a00005 mov r0, r5
10c68: e2843e32 add r3, r4, #800 ; 0x320
10c6c: e4912004 ldr r2, [r1], #4
10c70: e4802004 str r2, [r0], #4
10c74: e1530001 cmp r3, r1
10c78: 1afffffb bne 10c6c <main.f+0x4c>
b = a
10c7c: e1a01005 mov r1, r5
10c80: e1a00004 mov r0, r4
10c84: e2853e32 add r3, r5, #800 ; 0x320
10c88: e4912004 ldr r2, [r1], #4
10c8c: e4802004 str r2, [r0], #4
10c90: e1530001 cmp r3, r1
10c94: 1afffffb bne 10c88 <main.f+0x68>
```
Should `sgen/stackcopy` take the opportunity to set up a call to `runtime.memmove` for values larger than 128 words?
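As a rough way to measure whether this matters, a benchmark sketch like the following compares the plain array assignment (the inline load/store loop above) against copy() over slices, which does go through runtime.memmove; the exact numbers will obviously depend on the machine.
```go
// copybench_test.go
package copybench

import "testing"

var sink int

func BenchmarkArrayAssign(b *testing.B) {
	var a, c [200]int
	c[42] = 1
	for i := 0; i < b.N; i++ {
		a = c // compiled to the word-at-a-time loop today
	}
	sink = a[42]
}

func BenchmarkSliceCopy(b *testing.B) {
	var a, c [200]int
	c[42] = 1
	for i := 0; i < b.N; i++ {
		copy(a[:], c[:]) // lowered to runtime.memmove
	}
	sink = a[42]
}
```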
| compiler/runtime | low | Major |
67,149,514 | rust | regression: dead heap allocations aren't optimized out anymore | #22159 was closed after a llvm update (#22526). It used to work (I remember I tried it out). Now it doesn't work anymore.
``` rust
fn main() {
let _ = Box::new(42);
}
```
http://is.gd/Wekr7w
LLVM-IR:
``` llvm
; ModuleID = 'rust_out.0.rs'
target datalayout = "e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-unknown-linux-gnu"
; Function Attrs: uwtable
define internal void @_ZN4main20h657e6a8d1dc11120eaaE() unnamed_addr #0 {
entry-block:
%0 = tail call i8* @je_mallocx(i64 4, i32 0)
%1 = icmp eq i8* %0, null
br i1 %1, label %then-block-57-.i.i, label %"_ZN5boxed12Box$LT$T$GT$3new19h823444268625886800E.exit"
then-block-57-.i.i: ; preds = %entry-block
tail call void @_ZN3oom20he7076b57c17ed7c6HYaE()
unreachable
"_ZN5boxed12Box$LT$T$GT$3new19h823444268625886800E.exit": ; preds = %entry-block
%2 = bitcast i8* %0 to i32*
store i32 42, i32* %2, align 4
%3 = icmp eq i8* %0, inttoptr (i64 2097865012304223517 to i8*)
br i1 %3, label %"_ZN14Box$LT$i32$GT$8drop.86517h1e7c6ecb62969b35E.exit", label %cond.i
cond.i: ; preds = %"_ZN5boxed12Box$LT$T$GT$3new19h823444268625886800E.exit"
tail call void @je_sdallocx(i8* %0, i64 4, i32 0)
br label %"_ZN14Box$LT$i32$GT$8drop.86517h1e7c6ecb62969b35E.exit"
"_ZN14Box$LT$i32$GT$8drop.86517h1e7c6ecb62969b35E.exit": ; preds = %"_ZN5boxed12Box$LT$T$GT$3new19h823444268625886800E.exit", %cond.i
ret void
}
define i64 @main(i64, i8**) unnamed_addr #1 {
top:
%2 = tail call i64 @_ZN2rt10lang_start20he050f8de3bcc02b7VRIE(i8* bitcast (void ()* @_ZN4main20h657e6a8d1dc11120eaaE to i8*), i64 %0, i8** %1)
ret i64 %2
}
declare i64 @_ZN2rt10lang_start20he050f8de3bcc02b7VRIE(i8*, i64, i8**) unnamed_addr #1
declare noalias i8* @je_mallocx(i64, i32) unnamed_addr #1
; Function Attrs: cold noinline noreturn
declare void @_ZN3oom20he7076b57c17ed7c6HYaE() unnamed_addr #2
declare void @je_sdallocx(i8*, i64, i32) unnamed_addr #1
attributes #0 = { uwtable "split-stack" }
attributes #1 = { "split-stack" }
attributes #2 = { cold noinline noreturn "split-stack" }
```
| A-LLVM,I-slow,C-enhancement,A-codegen,P-medium,T-compiler,C-optimization | medium | Critical |
67,182,736 | go | cmd/compile: use conditional execution in place of branches for small blocks on arm | For example, consider max:
``` go
func max(a, b int) int {
if a > b {
return a
}
return b
}
```
This compiles roughly to:
```
MOVW "".a(FP), R3
MOVW "".b+4(FP), R2
CMP R2, R3
BLE a
MOVW R3, "".~r2+8(FP)
RET
a:
MOVW R2, "".~r2+8(FP)
RET
```
But it could be:
```
MOVW "".a(FP), R3
MOVW "".b+4(FP), R2
CMP R2, R3
MOVW.LE R2, "".~r2+8(FP)
RET.LE
MOVW R3, "".~r2+8(FP)
RET
```
The general guidance from ARM is that it is worth branching instead of using conditional instructions when you reach 4-6 conditional instructions.
It's unclear to me when this should be done:
- during codegen (would require significant state to be passed around)
- as a peephole optimization
- at instruction selection time (since it already moves branch sequences around)
- as part of the 1.6 SSA effort (probably the best option)
Not urgent.
/cc @davecheney @4ad @minux
| NeedsInvestigation | low | Major |
67,193,607 | go | x/review/git-codereview: give an error if the Change-Id: line is deleted | I think this is the reason why some CLs have more than one Gerrit
CL.
Given that using the same change to open a fresh new gerrit CL is
rarely needed, I think git-codereview should error out if the user
deletes the Change-Id: line (with git change).
And we can add a -f option to override the error (I'm also fine with
letting the user `git commit --amend` to manually remove the Change-Id.)
Of course, git-codereview probably can't detect the case where
the user has used `git commit --amend` to remove the Change-Id,
but our contribution guideline uses git change, so the proposed
check should catch most problems.
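A minimal sketch (assuming nothing about git-codereview's internals) of the check being proposed: scan the pending commit's message for a Change-Id trailer and refuse to proceed without one, with a hypothetical -f flag as the escape hatch.
```go
package main

import (
	"fmt"
	"strings"
)

// hasChangeID reports whether a commit message carries a Gerrit Change-Id
// trailer; these look like "Change-Id: I<40 hex chars>".
func hasChangeID(msg string) bool {
	for _, line := range strings.Split(msg, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "Change-Id: I") {
			return true
		}
	}
	return false
}

// checkChangeID is the proposed guard: error out when the Change-Id line was
// deleted, unless force (-f) says a fresh Gerrit CL is really wanted.
func checkChangeID(msg string, force bool) error {
	if hasChangeID(msg) || force {
		return nil
	}
	return fmt.Errorf("commit message has no Change-Id: line; use -f if you really want a new Gerrit CL")
}

func main() {
	msg := "runtime: fix frob\n\nChange-Id: I0123456789abcdef0123456789abcdef01234567\n"
	fmt.Println(checkChangeID(msg, false)) // <nil>
	fmt.Println(checkChangeID("runtime: fix frob\n", false))
}
```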
| NeedsInvestigation | low | Critical |
67,361,885 | youtube-dl | add bbc archive | hi,
could you add support for bbc archive? thanks
example page: http://www.bbc.co.uk/archive/whatwewore/5607.shtml
```
$ youtube-dl http://www.bbc.co.uk/archive/whatwewore/5607.shtml
[generic] 5607: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 5607: Downloading webpage
[generic] 5607: Extracting information
ERROR: Unsupported URL: http://www.bbc.co.uk/archive/whatwewore/5607.shtml
$
```
| site-support-request | low | Critical |
67,382,060 | youtube-dl | Display speed and percentage when updating | request,build/update | low | Minor |
67,462,206 | TypeScript | Define assignability relation for primitive-constrained type parameters | The spec does not define any case in Assignment Compatibility where `S` is a type parameter constrained to a primitive type and `T` is that same primitive type.
This leads to some weird errors, e.g. `T` is not assignable to `number` even though `T extends number`:
``` ts
enum E { A, B, C }
enum F { X, Y, Z }
function f<T extends number>(x: T[]): T {
var y: number = x[0]; // Error, T is not assignable to number...??
}
// f<T extends number> is useful:
var x = f([E.A, E.B]); // ok, x: E
var y = f([F.X, F.Y]); // ok, y: F
var z = f(['foo', 'bar']); // Error
```
See also http://stackoverflow.com/questions/29545216/inconsistent-behavior-of-enum-as-constrained-parameter-in-generic
| Suggestion,Help Wanted | low | Critical |
67,490,520 | go | gccgo: "incompatible type in initialization" is too vague | The error "incompatible type in initialization" is not very informative, here is a simple patch that of course could be better but shows the error.
https://github.com/h4ck3rm1k3/gcc-1/commit/ba7845236966facf2a3a20e6a2c1817b59b9a913
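For reference, a small example of the kind of initializer mismatch that (as far as I can tell) produces this message with gccgo; gc, by contrast, names both types in its diagnostic. This snippet deliberately does not compile, since its only purpose is to trigger the error.
```go
package p

type T struct{ x int }

// gccgo reports only "incompatible type in initialization" for the line
// below, without saying which two types are involved; that vagueness is what
// this issue is about. (This is my reading of the frontend; the exact
// trigger may differ.)
var v *T = T{x: 1}
```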
| NeedsInvestigation | low | Critical |
67,564,470 | javascript | Suggestion to set up Github Page | Maybe you want to set up a [GitHub Page](https://pages.github.com/) and get the custom domain [styleguide.js.org](http://dns.js.org) for free.
(renaming the repo to `styleguide` would be required)
| enhancement,question | low | Major |
67,596,561 | TypeScript | Flow type helpers | Flow offer some nice type helpers in his recent versions, while some of them are already addressed by typescript, some could perhaps be helpful.
Here are extracted comments from flow source just for inspiration.
- $Either<...T> is the union of types ...T
- $All<...T> is the intersection of types ...T
- $Supertype<T> acts as any over supertypes of T
- $Subtype<T> acts as any over subtypes of T
- $Shape<T> matches the shape of T
- $Diff<T,S>
- $Enum<T> is the set of keys of T
- $Record<T> is the type of objects whose keys are those of T
| Suggestion,Needs Proposal | low | Major |
67,613,789 | neovim | Dynamically change `undofile()` / multiple undo trees per file | I often see that undofiles are not being taken into account anymore, which is likely to be caused by changing branches in Git etc, where the file contents then does not match anymore the expected contents - which then causes the undo information to be discarded.
A good workaround for this might be to use additional information in the undo filename, like the current Git branch. This could be done by allowing to override the `undofile()` function used by Vim.
A better fix might be if Neovim could store multiple branches of the file in the default undo file by itself. I.e., if there is no entry for the current file's hash, it would create a new entry / root in the undo file.
When you then change back to another branch, the previous undo file entry would be used again.
| enhancement | low | Minor |
67,621,596 | rust | name-based comparison in new label-shadowing check has likely hygiene issues. | Spawned off of #24162. Our lifetimes only carry a `Name`, not an `Ident`, which means the comparisons for shadowing are only doing name based comparisons.
But macros should be free to introduce labels, and have them be treated as independent due to hygiene.
This bug is believed to only introduce issues where code will cause a warning to be emitted by the new shadowing check when it should be accepted; i.e. there are not any known soundness issues associated with this problem.
(Note: While loop labels themselves are `Ident`s, much of the syntax system does not treat them the same way it treats "normal" variables with respect to e.g. hygiene. For example the `syntax::visit` system does not invoke `visit_ident` on loop labels.)
| P-low,A-macros,T-compiler,C-bug,A-hygiene | low | Critical |
67,641,964 | youtube-dl | add support for spoilertv.com | URL: http://www.spoilertv.com/2015/04/mr-robot-first-look-full-promo.html
| site-support-request | low | Minor |
67,653,448 | neovim | perf: in_id_list() | When setting `cursorline` or `relativenumber` and opening a LaTeX file, the UI become sluggish and slow. This is a known problem in vim as well but I figured out maybe the UI refactoring that is going on makes it a good time to check it out. If it's not in its place, please let me know and I'll close.
| performance,bug-vim,syntax | low | Major |
67,690,282 | rust | rustdoc: Pain points of reexports | There's a lot of pain points that come up over time about rustdoc and reexports, and there's a lot of open issues as well, so this is going to serve as a metabug connecting all of them:
- [x] When a name is reexported across crates, the source crate does not show where it was reexported as (often the more canonical location). #13414
- [x] Lifetime parameters on functions can be lost. #14462
- [x] Private implementors are showing up. #14586
- [ ] Search results return more than one instance. #15723
- [ ] Information about type aliases is lost across crates. #15823
- [x] Methods can be missing from the search index. #20246
- [x] Macro source looks... interesting. #21311
- [ ] Incoming hyperlinks go to the source crate, not the "canonical location". #22083
- [x] Some `Self: Sized` bounds are lost. #24183
- [x] Sometimes reexports just don't show at all #24296
- [ ] Inclusion changes order of impl blocks #32290
- [x] Only type namespace shows up when processing re-export of private module #34843
- [x] Re-exported enum variants don't show up in documentation #35488
- [x] Many times re-exported type shows wrong documentation page #37608
This is just a list of the _current set_ of bugs; there have been countless others that have been fixed over time. Some of these are fundamental limitations, some are just bug fixes. I think that a huge part of "truly fixing this" will be tracking where the "canonical location" for a type is.
I'll try to keep this updated over time!
| T-rustdoc,metabug,C-tracking-issue,A-cross-crate-reexports | low | Critical |
67,706,794 | go | x/build: automate testing "toolstash -cmp" with trybots | For cleanup CLs like https://go-review.googlesource.com/#/c/8762, it would be nice if the trybots could take care of building a toolchain at HEAD, toolstash saving it, rebuilding at HEAD+CL, and then running an appropriate "toolstash -cmp" build.
Using toolstash/buildall, this only needs to be run once on a fast/cheap trybot type (e.g., linux/amd64) to avoid wasting more expensive trybot resources (e.g., OS X and/or ARM).
Obviously this would only be on an opt-in basis. Perhaps just a new dedicated trybot type that doesn't run by default, is configured with a different testing script than running all.bash, and is selected via whatever +trybot mechanism is implemented.
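For illustration only, a small Go driver of the kind of sequence such a trybot step could run; the specific commands (make.bash, toolstash save, and the -toolexec "toolstash -cmp" rebuild) and the assumption that the CL is the single top commit are my guesses at the usual workflow, not an existing script.
```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one command in the given directory and stops on the first failure.
func run(dir, name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Dir = dir
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	src := os.Getenv("GOROOT") + "/src"

	// Build and stash the reference toolchain at HEAD (the CL's parent).
	run(src, "git", "checkout", "HEAD^")
	run(src, "./make.bash")
	run(src, "toolstash", "save")

	// Rebuild at HEAD+CL, then recompile std while comparing each tool's
	// output against the stashed reference tools.
	run(src, "git", "checkout", "-")
	run(src, "./make.bash")
	run(src, "go", "build", "-toolexec", "toolstash -cmp", "-a", "std")
}
```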
Maybe too niche a use case, but some rationale:
- Now that the toolchain is in Go, it seems like cleanup CLs to make the code more idiomatic are going to be increasingly common.
- It's easy to misuse toolstash; e.g., accidentally build the stashed toolchain at HEAD+CL rather than HEAD, negating the testing benefits.
- As a reviewer, you need to trust that developers who say "Passes toolstash -cmp" actually tested it correctly.
| Builders,FeatureRequest | low | Minor |
67,799,215 | youtube-dl | Progress for --dump-json | Would it be possible to add a progress bar when fetching the data for a channel- or a playlist-json-dump (--dump-json)?
| request | low | Major |
67,914,531 | youtube-dl | Automatically embed subtitles in mkv files | Since it's possible to embed subtitles in mp4 (in a hard coded way probably), is there a way to embed subtitles files with mkvmerge or with ffmpeg in the post-processing option ?
(Maybe use the `--exec` command)
| request,postprocessors | low | Minor |
67,943,811 | neovim | TagSearchPre/TagSearchPost Auto Commands | Commands and maps that require the presence of a tag file should emit TagSearchPre/TagSearchPost events. This would include:
```
:[count]ta[g][!] {ident}
g<LeftMouse>
<C-LeftMouse>
CTRL-]
{Visual}CTRL-]
:ts[elect][!] [ident]
:sts[elect][!] [ident]
g]
{Visual}g]
:tj[ump][!] [ident]
:stj[ump][!] [ident]
g CTRL-]
{Visual}g CTRL-]
:[count]tn[ext][!]
:[count]tp[revious][!]
:[count]tN[ext][!]
:[count]tr[ewind][!]
:[count]tf[irst][!]
:tl[ast][!]
:lt[ag][!] [ident]
:pts[elect][!] [ident]
:ptj[ump][!] [ident]
:[count]ptn[ext][!]
:[count]ptp[revious][!]
:[count]ptN[ext][!]
:[count]ptr[ewind][!]
:[count]ptf[irst][!]
:ptl[ast][!]
```
But not:
```
g<RightMouse>
<C-RightMouse>
CTRL-T
:[count]po[p][!]
:[count]ta[g][!]
:tags
```
Plug-ins can hook into these events to (for example):
- Set the 'tags' option by
- searching recursively upwards for matching files
- searching for current file in vim-specific tag-file database
- Automatically generate the tag file if none found by
- Prompting the user for the location of the tag file (the "project root")
This complements plug-ins that use the BufWritePost/FileWritePost events to update tags.
| enhancement,events | low | Minor |
67,962,340 | youtube-dl | [Request] Would a %if(playlist)s tag be possible? | It would be very handy when used in a config.txt file:
-o "D:\%(uploader)s-%(upload_date)s-(%if(playlist)s_%(playlist_index)s) - %(title)s.%(ext)s"
Or even simpler, YDL could detect whether the video is part of a playlist and automatically add the video's number within the playlist. Like:
-o "D:\%(uploader)s-%(upload_date)s>-(%(playlist)s_%(playlist_index)s)< - %(title)s.%(ext)s"
> < indicating that those file name tags should only be used if the tags are applicable to the video/URL. I'm guessing URL would be easier to do.
| request | low | Minor |
67,966,709 | youtube-dl | Variable line-break spacing in Vimeo descriptions not being detected | As seen in the description of https://vimeo.com/38644453, Vimeo appears to allow different amounts of spacing between paragraphs and line-breaks. (Some sentences have a larger vertical gap between the lines.) Looking at the source, a combination of `<p>` and `<br>` HTML tags are used to achieve this effect. These are not detected by youtube-dl’s --write-description setting and are rendered only as regular line breaks. The .info.json file uses `/n` at the end of each line. Could there be a way to detect presentation and formatting elements of the description and an option to preserve them in the .description file? If necessary, descriptions with formatting could be saved in an HTML, XML or RTF format depending on which one is felt to be most suited.
What follows is the output of `youtube-dl https://vimeo.com/38644453 -v`. I’m using the latest Homebrew build.
[debug] System config: []
[debug] User config: ['-o', '~/Downloads/[%(upload_date)s]%(title)s[%(uploader_id)s][%(id)s]/[%(upload_date)s]%(title)s[%(uploader_id)s][%(id)s][%(format)s].%(ext)s', '--write-description', '--write-info-json', '--write-annotations', '--write-thumbnail', '--no-playlist', '--all-subs', '--add-metadata']
[debug] Command-line args: [u'https://vimeo.com/38644453', u'-v']
WARNING: Parameter outtmpl is bytes, but should be a unicode string. Put from __future__ import unicode_literals at the top of your code file or consider switching to Python 3.x.
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.04.09
[debug] Python version 2.7.6 - Darwin-14.3.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 2.6.1, ffprobe 2.6.1
[debug] Proxy map: {}
[vimeo] 38644453: Downloading webpage
[vimeo] 38644453: Extracting information
[vimeo] 38644453: Downloading webpage
[info] Writing video description to: /Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].mp4.description
[info] Writing video annotations to: /Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].mp4.annotations.xml
WARNING: There are no annotations to write.
[info] Writing video description metadata as JSON to: /Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].info.json
[vimeo] 38644453: Downloading thumbnail ...
[vimeo] 38644453: Writing thumbnail to: /Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].jpg
[debug] Invoking downloader on u'https://avvimeo-a.akamaihd.net/37523/607/89062448.mp4?token2=1428876495_d659290c6f9b3b1227b3fd6005433ce9&aksessionid=af840de64915851f&ns=4'
[download] Destination: /Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].mp4
[download] 100% of 23.22MiB in 00:10
[ffmpeg] Adding metadata to '/Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].mp4'
[debug] ffmpeg command line: ffmpeg -y -i '/Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].mp4' -c copy -metadata 'comment=The mission was simple, but for this driver, nothing is ever easy.
Driver
Danny Morgan
Voice of Mum
Benjamin Barfoot
Written by
Danny Morgan
Filmed, Directed & Edited by
Benjamin Barfoot
Produced by
2entertain' -metadata 'description=The mission was simple, but for this driver, nothing is ever easy.
Driver
Danny Morgan
Voice of Mum
Benjamin Barfoot
Written by
Danny Morgan
Filmed, Directed & Edited by
Benjamin Barfoot
Produced by
2entertain' -metadata 'artist=Benjamin Barfoot' -metadata 'title=Drive Too' -metadata date=20120316 -metadata purl=https://vimeo.com/38644453 '/Users/username/Downloads/[20120316]Drive Too[benjaminbarfoot][38644453]/[20120316]Drive Too[benjaminbarfoot][38644453][h264-sd - 640x360].temp.mp4'
| request | low | Critical |