id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
78,854,591 | react | Change event fires extra times before IME composition ends | ### Extra details
* Similar discussion with extra details and reproducing analysis: https://github.com/facebook/react/issues/8683
* Previous attempt to fix it: https://github.com/facebook/react/pull/8438 (includes some unit tests, though not enough to be confident in the fix)
------
### Original Issue
When I was trying this [example](https://jsfiddle.net/reactjs/n47gckhr/light/) from https://facebook.github.io/react/blog/2013/11/05/thinking-in-react.html, any Chinese characters entered with the Chinese pinyin input method would fire too many renders, like:

Actually I would expect those not to fire before I confirm the Chinese character.
Then I tried another kind of input method, the wubi input method, and I got this:

It's weird too. So I did a test [in jQuery](http://jsbin.com/yepogahobo/1/edit?html,js,console,output):

Only after I press the space bar to confirm the character does the `keyup` event fire.
I know the implementations of jQuery's `keyup` and React's `onChange` may differ, but I would expect React's `onChange` to handle Chinese characters the way jQuery's `keyup` does.
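A common userland workaround (and roughly the direction the linked discussion explores) is to gate change handling on the DOM composition events, so nothing is committed mid-composition. This is a framework-free sketch with a made-up helper name, not React's implementation:

``` javascript
// Hypothetical helper: suppress change commits while an IME composition
// is in progress, and commit once on compositionend.
function createCompositionGate(onCommit) {
  let composing = false;
  return {
    compositionstart() { composing = true; },
    compositionend(value) { composing = false; onCommit(value); },
    change(value) { if (!composing) onCommit(value); },
  };
}

// Wiring (illustrative): attach these handlers to the <input>'s
// 'compositionstart', 'compositionend', and 'input'/'change' events.
```

In a React component, the same gating can be done by setting a flag in `onCompositionStart`/`onCompositionEnd` and checking it in `onChange`.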
| Type: Bug,Component: DOM | high | Critical |
79,217,934 | go | testing: autodetect appropriate benchtime | For discussion:
There is tension in how long to run benchmarks for. You want to run long, in order to make any overhead irrelevant and to reduce run-to-run variance. You want to run short, so that it takes less time; if you have a fixed amount of computing time, it'd be better to run multiple short tests, so you can do better analysis than taking the mean, perhaps by using [benchstat](https://rsc.io/benchstat).
Right now we use a fixed duration, which is ok, but we could do better. For example, many of the microbenchmarks in strconv appear stable at 10ms, which is 100x faster than the default of 1s.
Rough idea, input welcomed:
The time to run a benchmark is V+C*b.N, where b.N is the number of iterations and V and C are random variables -- V for overhead, C for code execution time. We can take measurements using different b.N (starting small and growing) and estimate V and C. Based on that, we can calculate what b.N value is required to reduce the contribution of V to the sum to some fixed limit, say 1%.
This should allow stable, fast benchmarks to execute very quickly. Slower benchmarks would get slower (you have to execute with b.N=1 and 2 at a bare minimum), but that's better than accidentally misleading the user into thinking that they have a meaningful performance number, which is what can currently happen.
We would probably want to change benchtime to be a cap on running time and increase the default value substantially. If stable numbers are not achievable within the provided benchtime, we would warn the user, who could increase the benchtime or change the benchmark.
I put together a quick-and-dirty version of this using linear regression to estimate V and C. It almost immediately caught a badly behaved benchmark (fixed in [CL 10053](https://go.dev/cl/10053)), when it estimated that the benchmark would take hours to run in order to be reliable. I haven't run it outside the `encoding/json` package; I imagine that there are other benchmarks that need fixing.
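For reference, the least-squares step can be sketched in a few lines (illustrative names, not the code from the CL): fit t ≈ V + C*n over (b.N, elapsed time) pairs, then solve V <= 0.01*(V + C*n) for the smallest adequate b.N.

``` go
package main

import "fmt"

// estimateVC fits t ≈ V + C*n by ordinary least squares over
// (iteration count, total time) measurements.
func estimateVC(ns, ts []float64) (v, c float64) {
	var sn, st, snn, snt float64
	for i := range ns {
		sn += ns[i]
		st += ts[i]
		snn += ns[i] * ns[i]
		snt += ns[i] * ts[i]
	}
	k := float64(len(ns))
	c = (k*snt - sn*st) / (k*snn - sn*sn)
	v = (st - c*sn) / k
	return v, c
}

func main() {
	// Synthetic measurements generated from exactly V=5, C=2.
	ns := []float64{1, 2, 4, 8}
	ts := []float64{7, 9, 13, 21}
	v, c := estimateVC(ns, ts)
	fmt.Println(v, c) // 5 2
	// Smallest b.N such that V contributes at most 1% of the total:
	// V <= 0.01*(V + C*n)  =>  n >= 99*V/C
	fmt.Println(99 * v / c) // 247.5
}
```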
Again, input welcomed. I'm not a statistician; I don't even play one on TV.
| NeedsInvestigation | low | Major |
79,585,626 | go | cmd/compile: some large-temp-heap-allocations need to be moved before escape analysis | This follows https://golang.org/cl/10268/ and includes more convoluted (harder to trigger, harder to fix) versions of the problems addressed there. "The problem" is that any time a stack allocation of pointer-containing data is converted to a heap allocation (this happens if the stack allocation is very large), the pointers stored in that data must be noted as escaping to heap. This works fine if the stack-to-heap conversion takes place before escape analysis, but not-fine if it occurs after escape analysis. In most cases conversion occurs during typechecking (before escape analysis) but in some cases walk (after escape analysis) performs transformations that are then type-checked. From josharian's comments on the CL above:
---
Here are some large (but not arbitrarily large) temps introduced during walk. Don't know whether they are of concern.
Up to 1024 bytes in convT2E (walk.go:1064)
Up to 2048 bytes in OMAKEMAP (walk.go:1441)
Up to 128 bytes in OSTRARRAYRUNE (walk.go:1559)
Here are what look to me like arbitrarily large temps introduced during walk, although I might be wrong:
In ascompatet in walk.go:1758. See also the todo at order.go:30. Code to reproduce:
``` go
package p

func g() ([10000]byte, error) {
	switch { // prevent inlining
	}
	var x [10000]byte
	return x, nil
}

func f() {
	var e interface{}
	var err error
	e, err = g()
	_, _ = e, err
}
```
Note that f uses 20048 bytes of stack.
reorder3save, at walk.go:2461. Code to reproduce:
``` go
package p

func g() [10000]byte {
	switch {
	} // prevent inlining
	var x [10000]byte
	return x
}

func f() {
	var m map[[10000]byte]bool
	m[g()] = true
}
```
Note that f uses 20016 bytes of stack.
| compiler/runtime | low | Critical |
79,620,106 | kubernetes | Generalize container image representation | Forked from #7018 and #7203.
We need to generalize the representation of container images so that we can support more image formats, pinning to specific image hashes, references to image API objects, and other features in the future. Generalizing this representation breaks ALL API versions, so we should do that now.
The proposed format is here: https://github.com/GoogleCloudPlatform/kubernetes/issues/7203#issuecomment-104463241
It follows the approach of VolumeSource.
```
ContainerImage `json:",inline"`
...
type ContainerImage struct {
	DockerImage *DockerImage
}
type DockerImage struct {
	Name string // "repository" in v1 registry speak
	Tag  string
}
```
@bakins @philips @smarterclayton @yifan-gu
| priority/backlog,sig/node,sig/api-machinery,kind/feature,lifecycle/frozen | medium | Critical |
79,826,757 | youtube-dl | Display whether YouTube video has encrypted signature in JSON dump (--get-json) | As talked about in https://github.com/rg3/youtube-dl/issues/5787 and https://github.com/rg3/youtube-dl/issues/5781 it would be very useful to display whether the original YouTube signature was encrypted or not in the JSON dump (`--get-json`/`-j`)
Something as simple as `"encryptedSignature": 1` added to the JSON dump would do the trick.
Would this be difficult to add?
| request | low | Minor |
79,891,141 | youtube-dl | site support request: pinkbike.com | ```
$ youtube-dl --version
2015.05.15
```
```
$ youtube-dl -v http://www.pinkbike.com/video/408888/
[debug] System config: []
[debug] User config: ['--console-title']
[debug] Command-line args: ['-v', 'http://www.pinkbike.com/video/408888/']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.05.15
[debug] Python version 3.4.3 - Linux-4.0.4-2-ARCH-x86_64-with-arch
[debug] exe versions: ffmpeg 2.6.3, ffprobe 2.6.3, rtmpdump 2.4
[debug] Proxy map: {}
[generic] 408888: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 408888: Downloading webpage
[generic] 408888: Extracting information
ERROR: Unsupported URL: http://www.pinkbike.com/video/408888/
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 649, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/common.py", line 273, in extract
return self._real_extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/generic.py", line 1467, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: http://www.pinkbike.com/video/408888/
```
| site-support-request | low | Critical |
80,245,845 | rust | Mutable reference not re-borrowed by binary operator | When implementing a binary operator on `&mut Something`, the binary operator consumes the mutable reference instead of borrowing it:
`````` rust
use std::ops::Shl;

struct Test;

impl<'a> Shl<()> for &'a mut Test {
    type Output = ();
    fn shl(self, rhs: ()) {}
}

// Compiles
fn test_direct() {
    let mut test = Test;
    let test_ref = &mut test;
    test_ref.shl(());
    test_ref.shl(());
}

// Doesn't compile
//
// ```
// error: use of moved value: `test_ref`
//     test_ref << ();
//     ^~~~~~~~
// note: `test_ref` moved here because it has type `&mut Test`, which is non-copyable
//     test_ref << ();
// ```
fn test_shift() {
    let mut test = Test;
    let test_ref = &mut test;
    test_ref << (); // Should be equivalent to test_ref.shl(())
    test_ref << ();
}

fn main() {}
| A-type-system,T-compiler,C-bug,T-types | low | Critical |
80,507,260 | youtube-dl | [Cosmetics] Updating youtube-dl.exe | While updating youtube-dl the temporary _youtube-dl-updater.bat_ performs this task:
``` bat
@echo off
echo Waiting for file handle to be closed ...
ping 127.0.0.1 -n 5 -w 1000 > NUL
move /Y "D:\Storage\Media\Binaries\youtube-dl.exe.new" "D:\Storage\Media\Binaries\youtube-dl.exe" > NUL
echo Updated youtube-dl to version 2015.01.10.2.
start /b "" cmd /c del "%~f0"&exit /b"
```
Now for some reason the _move_ command (or something else?) causes a new command prompt to appear, followed by the final echo + a new linefeed:
``` bat
D:\Storage\Media\Binaries>youtube-dl.exe -U
Updating to version 2015.05.20 ...
Waiting for file handle to be closed ...
D:\Storage\Media\Binaries>Updated youtube-dl to version 2015.05.20.
D:\Storage\Media\Binaries>
```
You'd always have to press ENTER for the command prompt to appear again.
When I save the temporary bat-file and run it locally on a dummy _youtube-dl.exe.new_ and _youtube-dl.exe_, it performs as expected:
``` bat
D:\Storage\Media\Binaries>youtube-dl-updater.bat
Waiting for file handle to be closed ...
Updated youtube-dl to version 2015.01.10.2.
D:\Storage\Media\Binaries>
```
The final echo is displayed and the commandprompt appears again.
This behaviour is a bit confusing, especially in batch scripts where I use youtube-dl.exe. Is this normal?
| build/update | low | Minor |
80,740,375 | youtube-dl | Use JSON-RPC interface to download the video through aria2 | Currently, download through aria2 is implemented in external downloader. I do not like the interface in the terminal, and aria2 has a pretty good JSON-RPC interface, which can be managed from web interface. I would like to be able to see the download list and speed through web browser instead of terminal window, since they are easier to read and looks prettier.
| request | medium | Major |
81,081,346 | go | spec: for struct types, clarify field "name" and "scope" of field names/selectors | Field names are identifiers "declared" with a struct type definition. The section of struct types could be clearer with respect to what a "field name" is. E.g.: struct{T} vs struct {T T} vs struct {t T} .
Also, the "scope" of field names/selectors is not explicitly defined. Determine if there's any clarification needed.
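A small example of the distinction (written against current Go; names are illustrative): in `struct{ T }` the embedded field is itself named `T`, so both declarations below have a field named `T`, but only the embedded one promotes `T`'s fields.

``` go
package main

import "fmt"

type T struct{ N int }

type A struct{ T }   // embedded field; its name is "T"
type B struct{ T T } // ordinary field explicitly named "T"

func main() {
	a := A{T: T{N: 1}}
	b := B{T: T{N: 2}}
	fmt.Println(a.T.N, b.T.N) // 1 2
	fmt.Println(a.N)          // 1; promoted through the embedded field
	// b.N does not compile: B's field T is not embedded.
}
```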
| Documentation,NeedsInvestigation | low | Minor |
81,083,180 | go | spec: evaluation order of statements unspecified | The order is obvious to everybody but there should probably be a sentence somewhere.
| Documentation,NeedsInvestigation | low | Minor |
81,085,998 | go | spec: clarify receiver value passed in method invocations | The section on Selectors ( http://tip.golang.org/ref/spec#Selectors ) has some examples of selectors and what (expanded) expression they stand for, but the section could be expanded or have more explanations, especially for receiver values passed to method invocations.
This is one of the most complex aspects of the language, both to explain and to implement, and could use a bit more (or clearer) explanation.
| Documentation,NeedsInvestigation | low | Minor |
81,362,920 | rust | Tools for dumping information about .rlibs | It would be great if there was a set of tools for dumping information about a given .rlib. This could be used to debug linkage and export issues.
Something like `nm` and `ldd`.
I note that `nm` does appear to partly work on a .rlib, but the output is a bit raw.
Where is the best place to start for understanding the .rlib format as it stands? Is it a good idea to build this on `rustc::metadata`?
See also http://stackoverflow.com/questions/24603672/rust-library-for-inspecting-rlib-binaries
| T-compiler,T-dev-tools,C-feature-request | medium | Critical |
81,424,043 | react | Support for reparenting | When writing a component that contains a set of large subtrees that stay relatively the same, but are simply moved around such that React's virtual DOM diffing can't detect the movement, React will end up recreating huge trees it should simply be moving.
For example, pretend `blockA` and `blockB` are very large structures. They may be made of several levels of children and components. For example one could be the entire page contents and the other the sidebar, while this `render()` is the page root.
``` jsx
render() {
var blockA = <div>AAA</div>,
blockB = <div>BBB</div>;
if ( this.props.layoutA ) {
return <div>
<div className="something">{blockB}</div>
<div className="something">{blockA}</div>
</div>;
} else {
return <div>
{blockA}
{blockB}
</div>;
}
}
```
Because the blocks aren't at the same level, React cannot see the relation between these blocks, and `key` cannot be used to give React any hints. As a result, when `layoutA` is changed, instead of the two blocks being moved to their new location, the entire page is essentially completely unrendered and then re-rendered from scratch.
I understand why this is the case. It would be far too expensive for React to detect movement of nodes like this.
But I do believe we need a pattern to hint to React that this component has large blocks that may be moved around at different levels.
Note that there may be a component in between the rendering component root and the block. So parent semantics scoped to the nearest component won't work. This'll need owner scoping.
I understand that React is trying to eliminate the need for React.createElement to be used and owner scoping within special attributes interferes with that. So instead of a component scoped `key=""` variant I think a method/object style interface kind of like `React.addons.createFragment` might work.
| Type: Feature Request,Component: Component API,Resolution: Backlog | high | Critical |
81,474,242 | TypeScript | Feature request: Decorators on enum members | It would be nice to be able to decorate enum members (in the same way as properties?).
| Suggestion,Needs Proposal,Domain: Decorators | high | Critical |
81,611,022 | javascript | Distributing ES6 | section on how to publish es6 files to npm
| enhancement,pull request wanted,editorial | low | Minor |
81,742,134 | youtube-dl | [Escapist] download ads instead of video | It downloads a 4.5MB, 45-second mp4 video which is an ad, instead of the normal video
```
% youtube-dl -v http://www.escapistmagazine.com/videos/view/zero-punctuation/10119-Grand-Theft-Auto-Online-Review
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.escapistmagazine.com/videos/view/zero-punctuation/10119-Grand-Theft-Auto-Online-Review']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.05.15
[debug] Python version 3.4.3 - Linux-4.0.4-2-ARCH-armv7l-with-arch
[debug] exe versions: ffmpeg present, ffprobe present
[debug] Proxy map: {}
[Escapist] 10119: Downloading webpage
[Escapist] 10119: Downloading video config
[debug] Invoking downloader on 'http://video.escapistmagazine.com/links/316860e78107a05e6f1c1e9646ac716c/mp4_hq/escapist/zero-punctuation/3ea8c080b4a98ef9e0d3eb197fb8ea29.mp4'
[download] Destination: Grand Theft Auto Online-10119.mp4
[download] 15.5% of 4.48MiB at 11.04KiB/s ETA 05:51
```
From viewing the network console in-browser, I think Escapist sends these ads first, then does an HTTP 206 request to get the real video.
| cant-reproduce | low | Critical |
82,044,592 | rust | Implied bounds on nested references + variance = soundness hole | The combination of variance and implied bounds for nested references opens a hole in the current type system:
``` rust
static UNIT: &'static &'static () = &&();
fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T { v }
fn bad<'a, T>(x: &'a T) -> &'static T {
    let f: fn(&'static &'a (), &'a T) -> &'static T = foo;
    f(UNIT, x)
}
```
This hole has been fixed in #129021 for non-higher-ranked function pointers. The underlying issue still persists.
```rust
static UNIT: &'static &'static () = &&();
fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T, _: &()) -> &'a T { v }
fn bad<'a, T>(x: &'a T) -> &'static T {
    let f: fn(_, &'a T, &()) -> &'static T = foo;
    f(UNIT, x, &())
}
fn main() {}
```
---
Update from @pnkfelix :
While the test as written above is rejected by Rust today (with the error message for line 6 saying "in type `&'static &'a ()`, reference has a longer lifetime than the data it references"), that is just an artifact of the original source code (with its explicit type signature) running up against _one_ new WF-check.
The fundamental issue persists, since one can today write instead:
``` rust
static UNIT: &'static &'static () = &&();
fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T { v }
fn bad<'a, T>(x: &'a T) -> &'static T {
    let f: fn(_, &'a T) -> &'static T = foo;
    f(UNIT, x)
}
```
(and this way, still get the bad behaving `fn bad`, by just side-stepping one of the explicit type declarations.)
| A-type-system,P-medium,I-unsound,C-bug,A-variance,S-bug-has-test,T-types | high | Critical |
82,196,115 | rust | Enable segfault / bus error handlers on more UNIX platforms | src/libstd/sys/unix/stack_overflow.rs is enabled on Linux, OS X, Bitrig, and OpenBSD targets only, because the implementation used to have its own signal-handling bindings that were only known to be correct on those targets. In #25784 I refactored the bindings and verified them for all current ports, I think we can turn this on on all current ports. This would enable handlers on Android, iOS, FreeBSD, and Dragonfly.
I don't have easy build infrastructure for these myself, so I'm probably not going to do this immediately. But if someone wants to test one out and make sure that it works, I think it's just a matter of adding an OS to the `#[cfg]` and writing a program that segfaults.
| A-runtime,O-android,O-netbsd,C-bug,T-libs | low | Critical |
82,237,298 | youtube-dl | Expand Niconico Video support | Niconico Video has two quality levels for a video, "normal" and "[economy](http://dic.nicovideo.jp/a/%E4%BD%8E%E7%94%BB%E8%B3%AA%E3%83%A2%E3%83%BC%E3%83%89)" (Japlish for "low"). Currently, using `youtube-dl -F` on a Niconico Video link only brings up one video quality:
```
format code extension resolution note
0 unknown_video unknown
```
Expanding youtube-dl's functionality to include video quality would be nice. The URL for the video file changes by having "low" added as a suffix (http://smile-fnl41.nicovideo.jp/smile?m=1097445.24697low vs. http://smile-fnl41.nicovideo.jp/smile?v=1097445.24697). (Economy mode can be forced by appending `?eco=1` to the end of a Niconico URL, for what it's worth.)
It should be noted that normal users who don't pay up for a premium account may get the economy quality automatically from JST 6:00 PM to 2:00 AM on normal days, and JST 12:00 PM to 2:00 AM on Saturdays, Sundays, and holidays. If this affects youtube-dl's behavior of automatically getting the highest quality, the file size in bytes can be found in "size_high" (for normal quality) and "size_low" (for economy) at http://ext.nicovideo.jp/api/getthumbinfo/$VIDEO-ID (for example, http://ext.nicovideo.jp/api/getthumbinfo/sm1097445 for http://www.nicovideo.jp/watch/sm1097445)
| request | low | Minor |
82,253,140 | youtube-dl | Add support for 5sing | Example URLs:
http://5sing.kugou.com/yc/1794876.html (yc: yuanchang; original song)
http://5sing.kugou.com/fc/4238226.html (fc: fanchang; cover song)
http://5sing.kugou.com/bz/1146744.html (bz: banzou; instrumental)
Downloading an MP3 is enabled on most songs but requires you to log in; click on 下载 and then the first large rectangular button to the left (http://5sing.kugou.com/fc/4238226.html 's download page is http://5sing.kugou.com/fc/down/4238226)
Sometimes a song cannot be downloaded (禁止下载 [appears on the download page in red](http://5sing.kugou.com/yc/down/1794876))
All songs can be played without logging in as far as I know. The Firefox extension FlashGot is able to detect the files that 5sing serves while streaming, last time I checked. And regardless of whether a song can be officially "download"ed or not, FlashGot succeeds every time ( ̄▽ ̄;) So, I suppose that there are most often two audio qualities available (streaming and download), sometimes only one (streaming only).
I also think that it would be a good idea to support the obsolete style 5sing link (5sing recently moved domains). Example: http://yc.5sing.com/1794876.html ({fc|yc|bz}.5sing.com/ becomes 5sing.kugou.com/{fc|yc|bz}/)
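The old-to-new URL mapping is mechanical; a hypothetical sketch (not actual youtube-dl code):

``` python
import re

# Illustrative only: map an obsolete-style 5sing URL to the current style,
# i.e. {fc|yc|bz}.5sing.com/<id>.html -> 5sing.kugou.com/{fc|yc|bz}/<id>.html
def modernize_5sing_url(url):
    m = re.match(r'https?://(fc|yc|bz)\.5sing\.com/(\d+)\.html', url)
    if not m:
        return url  # already new-style (or not a 5sing URL)
    return 'http://5sing.kugou.com/%s/%s.html' % (m.group(1), m.group(2))
```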
| site-support-request | low | Minor |
82,472,081 | rust | Rustdoc: pages are being overwritten on the file systems with case-insensitive names | Not sure if we can work around the case-insensitivity but when generating docs for the identifiers with the same name, but different cases(well it is ok for Rust after all :))
Like:
https://github.com/DoumanAsh/trace-macro/blob/master/src/lib.rs#L29
https://github.com/DoumanAsh/trace-macro/blob/master/src/lib.rs#L150
We will have only one page, instead of two. I suspect that the last generated page will overwrite the first one since from Win file-system point of view it will be the same file.(And NTFS is also case-insensitive?)
I'm not sure what would be a good work around for that.
I suppose it would be good idea to track names to avoid such issues and if there are several identifiers with the same names(different case) it should warn and generate pages with some suffix(like [-num]).
Or even no need to warn, after all docs will be generated successfully.
Rust stable 1.0
`rustdoc 1.0.0 (a59de37e9 2015-05-13) (built 2015-05-14)`
Win8 x64
| T-rustdoc,E-mentor,C-bug | low | Major |
82,498,765 | go | x/tools/refactor/eg: when matching struct literals, abstract over the tagged/tagless forms | Equivalent struct literals may be written in several ways:
T{1, 2}
T{a: 1, b: 2}
T{b: 2, a: 1}
The 'eg' tool should allow a pattern using any of these forms to match any of these expressions. That means internally converting to the named form and doing name-based (not index-based) matching of subtrees. The output should be emitted in the same form as the original.
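The equivalence is easy to confirm for any comparable struct type; all three spellings denote the same value:

``` go
package main

import "fmt"

type T struct{ a, b int }

func main() {
	x := T{1, 2}       // positional
	y := T{a: 1, b: 2} // named, declaration order
	z := T{b: 2, a: 1} // named, reordered
	fmt.Println(x == y, y == z) // true true
}
```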
| Tools,Refactoring | low | Minor |
82,586,582 | TypeScript | Invalid combinations of --out, --outDir, and --rootDir should cause a compiler error. | When using invalid combinations of `--out`, `--outDir`, and `--rootDir`, the compiler currently doesn't report any errors, but emits files in 'random' locations. Further when you specify `--rootDir` without `--outDir` this is ignored as well.
**Expected** to get errors/warnings from the compiler in all those cases.
| Bug,Help Wanted | low | Critical |
82,948,629 | youtube-dl | Add support for piapro | piapro is a Japanese website made for hosting VOCALOID-based media, mainly images, audio, and text.
Example URLs for audio:
http://piapro.jp/content/es7uj48x6bvcbtgy (old-style URL, still functional)
http://piapro.jp/t/KToM (current URL style)
Two types of audio quality, download and streaming. Very similar to #5839: the download requires one to be logged in, while streaming doesn't.
Download: http://piapro.jp/download/?id=es7uj48x6bvcbtgy&view=content
Streaming: http://c1.piapro.jp/amp3/es7uj48x6bvcbtgy_20100116020522_audition.mp3
The "es7uj48x6bvcbtgy" and "20100116020522" parts can both be derived solely from the HTML source:
```
<input type="hidden" name="id" value="es7uj48x6bvcbtgy">
~~~~~~~~~~~~~~~~
```
```
<a class="link_songle" href="http://songle.jp/songs/piapro.jp%2Ft%2FKToM%2F20100116020522" target="_blank">http://songle.jp/songs/piapro.jp%2Ft%2FKToM%2F20100116020522</a>
~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
```
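A hypothetical extraction sketch (not actual youtube-dl extractor code) based on the two snippets above:

``` python
import re

# Illustrative only: pull the content id and the timestamp token out of the
# page source; the streaming URL could then be assembled as, e.g.,
# http://c1.piapro.jp/amp3/<content_id>_<timestamp>_audition.mp3
def extract_piapro_tokens(html):
    content_id = re.search(r'name="id" value="([0-9a-z]+)"', html).group(1)
    timestamp = re.search(r'songle\.jp/songs/.*?%2F(\d{14})', html).group(1)
    return content_id, timestamp
```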
It might be ethical for youtube-dl to inform the user of the license agreement for a song, which is contained in `<ul class="plc">`.
| site-support-request | low | Minor |
83,051,474 | youtube-dl | Add support for authentication on tvcast.naver.com (age restricted videos) | How to download video from this url? http://tvcast.naver.com/v/406354
The video is age restricted. Is there any way to download this video?
```
C:\Users\Jaimin>youtube-dl http://tvcast.naver.com/v/406354 --verbose
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'http://tvcast.naver.com/v/406354', u'--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2015.04.09
[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-64626-gca671be, ffprobe N-64626-gca671be
[debug] Proxy map: {}
[Naver] 406354: Downloading webpage
ERROR: couldn't extract vid and key; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
  File "youtube_dl\YoutubeDL.pyo", line 651, in extract_info
  File "youtube_dl\extractor\common.pyo", line 275, in extract
  File "youtube_dl\extractor\naver.pyo", line 42, in _real_extract
ExtractorError: couldn't extract vid and key; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
| request | low | Critical |
83,121,556 | youtube-dl | MusiXmatch support | Hello, i was looking at downloading some music videos and noticed that most of those with lyrics were not "subtitled" on the official video, then found I out the MusiXmatch chrome extension which put those lyrics as subtitles under the CC option on youtube, it makes a request with the youtube video ID, can this be implemented so i can embed those subtitles?
| request | low | Minor |
83,194,033 | go | tour: Traversal order | Context: /concurrency/8
It should be made clear that the Walk function should traverse the tree in-order.
| NeedsInvestigation | low | Minor |
83,417,529 | youtube-dl | twofactor code input request | If i provide only -u parameter with username, app asks to input password, but if then youtube required two factor authentication- there no request for user to input key generated by Google Authenticator app.
It will be great to have a possibility to input TFA-code on-demand.
THANKS
| request | low | Minor |
83,673,499 | youtube-dl | Please add support for artycok.tv | It hosts art-related videos mainly but not only somehow connected to the Czech Republic. It would be great if it were supported.
| site-support-request | low | Minor |
83,698,111 | kubernetes | Kubelet doesn't attempt to re-register | In the process of setting up HA, I blew away my master state, which means that node information was lost. The kubelet doesn't attempt to re-register itself. It probably should.
This isn't a v1.0 feature, though.
@dchen1107
| priority/backlog,area/reliability,sig/node,kind/feature,lifecycle/frozen,needs-triage | medium | Major |
83,921,697 | youtube-dl | Fails to rip Vimeo search results | Sometimes I want everything in the search results...
```
youtube-dl https://vimeo.com/search?q=tanya+dakin
[vimeo:user] search?q=tanya+dakin: Downloading page 1
ERROR: Unable to extract list title; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
| site-support-request | low | Critical |
84,117,030 | nvm | When https://nodejs.org/dist/ is unreachable, nvm should indicate this to user | Currently, if https://nodejs.org/dist/ is unreachable, `nvm install` fails with a misleading message:
``` sh-session
$ nvm install 0.10
Version '0.10' not found - try `nvm ls-remote` to browse available versions.
```
The reason for this failure should be properly identified and relayed to the user.
| feature requests | low | Critical |
84,706,129 | neovim | line editor (readline) functionality | When a program which uses a line editor (e.g. bash, which is readline-based) is running inside of neovim's :terminal mode, neovim should take over the line editor's responsabilities.
I believe this would greatly increase the convenience of combining neovim, terminal mode, and any program that uses a line editor, including pretty much all shells.
As an example, let's say you're in a terminal window. You've typed a command, but want to prefix it with e.g. `time` or `sudo` or something. With this proposal, you would just go to neovim's normal mode (either by `<c-\><c-n>` or some other mapped key), and press `I` and you'd be at the beginning of the line. No need for all the hassle with a "neovim normal" mode and a "bash vi-mode normal mode" and all possible confusion resulting from that.
Another example: you've entered part of a command, but you need to paste a few extra arguments in the middle somewhere. You go to another screen to copy whatever you want, switch to the terminal screen again and, still in normal mode, move the cursor to the correct position on the command line, and press `p`. Whatever you copied is put right where Vim's cursor is (as opposed to the current situation, where it is inserted where the cursor was when you were last in terminal-mode, regardless of how you moved the vim-cursor in the mean time).
We will have to make sure that we can still communicate with whatever program is on the other side of the terminal, for stuff like tab-completion. In fact, possible completions should be fed into neovim's completion system, so they can be handled by whatever setting the user wants (e.g. if the the user uses autocomplpop/youcompleteme, he can also get that functionality for bash-commands, or for python variables when running IPython).
Note that this will involve modifications of the line editors as well as neovim.
Line editors we need to take into account :
- readline (for a.o. bash)
- zsh line editor (zsh)
- ? (feel free to suggest additions)
###### Side Notes
Related work and issues, though I believe they are different :
- https://github.com/ardagnir/athame
- #1777
They are different because they do it the other way around. With these, the idea is that e.g. bash (or python or whatever) has more (neo)vim capabilities. A `set -o vi` on steroids if you like.
My proposal is having a full nvim, which has a terminal buffer, and operating directly on the line that's normally handled by the line editor.
Also note the discussion at #2676 regarding what should be the default way to go from terminal-mode to normal mode. I am a strong advocate of a single `<Esc>`, but others want a single `<Esc>` to get passed straight to the terminal. The main reason that's been given for this, is vi-style line editor applications. If the line editor accessed the full power of neovim when running in a terminal, the need for sending `<Esc>` directly to the terminal would be far smaller, possibly giving us more sensible defaults.
| enhancement,tui | medium | Critical |
84,732,835 | go | encoding/json: Decoder internally buffers full input | When using the JSON package, if I encode a struct like
``` go
type Data struct {
Count int
Names []string
}
```
and then decode it into
``` go
type SmallData struct {
Count int
}
```
it will still allocate memory for the list of names, even though it just gets thrown away. This becomes an annoyance when I have several multigigabyte JSON files like this. It would be neat if the JSON parser could identify what fields it cares about, or somehow be told what fields to ignore, and chuck them.
| Performance | low | Major |
84,911,783 | rust | MSVC: support LTCG native libraries in rlibs | MSVC supports the option to compile native libraries with link-time code generation enabled via the `/GL` switch to the compiler. This causes the compiler to change the format of the `*.obj` file emitted to no longer be a COFF object file. When MSVC's `lib.exe` tool is used to assemble these object files into a `.lib`, it will correctly detect that these are LTCG-enabled files, adding an appropriate symbol table to the archive being generated. This means that when the linker comes along to link the library, the internal symbol table of the archive points at the LTCG files, allowing the linker to find where symbols are and also produce object files on-the-fly.
With rustc, however, the symbol tables for LTCG files can be lost. Whenever a native library is included statically into an rlib, the objects are extracted by the compiler and then re-inserted into an rlib. This operation loses the entries in the symbol table of the rlib (LLVM doesn't understand the format of the file being inserted), causing the linker to later on ignore all LTCG enabled files.
Long story short, if a LTCG enabled C library is linked statically into an rlib, the linker will not resolve any of the symbols in the C library. This means that all native code linked statically to rlibs **cannot have LTCG enabled**, and this is occasionally somewhat difficult to retrofit on existing build systems.
Fixing this will involve improving the merging process of two archives, probably just blindly carrying over the symbol table from one archive to another (at least on MSVC). I'm not sure what the best route to do this is, but our options are:
- Add support to `llvm-ar.exe` to do this.
- Split apart `llvm-ar.cpp` into a library-like interface, and then rewrite the "driver" in rustc itself
- Port `llvm-ar.exe` to Rust, and do everything in Rust
The latter two options have the great benefit of lifting the compiler's dependence on an `ar` tool, which would be nice!
| E-hard,O-windows-msvc,C-feature-request | low | Minor |
85,202,616 | go | math/big: printing is non-linear and very slow for large Floats | The Text method of big.Float produces a textual representation of the floating point number. I have a program that can create very large numbers and they are slow to print. Moreover, the time to print them does not scale sensibly with the size of the number:
```
% time ivy -e '1e10000' > /dev/null
real 0m0.005s
user 0m0.002s
sys 0m0.003s
% time ivy -e '1e100000' > /dev/null
real 0m0.032s
user 0m0.027s
sys 0m0.004s
% time ivy -e '1e1000000' > /dev/null
real 0m2.729s
user 0m2.702s
sys 0m0.017s
% time ivy -e '1e10000000' > /dev/null
real 7m11.246s
user 7m7.678s
sys 0m1.834s
%
```
I have investigated to some extent and the time is definitely being spent inside Text.
| NeedsInvestigation | low | Major |
85,237,393 | go | misc/ios: make it easier to run benchmarks on the device | I have some hacks I carry around locally to enable benchmarking on an iOS device:
- Make tests print "DONE" after running benchmarks, similarly to the way they currently print "PASS" after running tests.
- Add an envvar to go_darwin_arm_exec.go to control what string it watches for to detect that a test has completed running, and set that envvar to "DONE" when running benchmarks.
- Add an envvar to go_darwin_arm_exec.go to enable debug mode, rather than it being a constant, and add that envvar when running benchmarks, in order to see incremental output.
This is obviously not a good solution. I don't know what a good solution looks like, but I'd like one. :)
/cc @crawshaw
| NeedsInvestigation | low | Critical |
85,391,936 | kubernetes | By default, merge more .kubeconfig files | By default, kubectl should have access to all clusters, users and contexts defined in any .kubeconfig files in any of the following locations:
- Any files or directories specified by KUBECONFIG
- /etc/kubernetes/config/*
- ~/.kube/*
(Today kubectl only loads from ~/.kube/config, or only KUBECONFIG or --kubeconfig if specified.)
This would allow me to do a few things that I can't easily do today:
- /etc/kubernetes/config/*: Use puppet/ansible/chef/etc. to drop separate credential files for each of our clusters into /etc/kubernetes/config/\* to make it really easy for developers to run kubectl --context=<some-cluster> on any server and always get to the right cluster without any configuration on the developer's part. (And if there's only one cluster, just run kubectl <cmd> and have it work automatically with no configuration.)
- ~/.kube/*: Download a .kubeconfig file and have access to it immediately just by moving it into ~/.kube/. Today I have to either manually merge the files or specify KUBECONFIG or --kubeconfig (which would then ignore all Preferences in my ~/.kube/config file).
- KUBECONFIG/--kubeconfig: I can download a zip of kubeconfigs, unzip it, and point at it, without losing access to my kubeconfig preferences/current context.
The convention I would follow is that files in any of the above locations would not contain Preferences or CurrentContext to prevent any merging headaches (which is what caused much anger in #4428); those two .kubeconfig fields should only be specified in ~/.kube/config. I would prefer to move them to a different file altogether, but that is a separate discussion.
@jlowdermilk @bgrant0607 @deads2k
| priority/backlog,area/kubectl,kind/feature,sig/cli,lifecycle/frozen | medium | Major |
85,579,794 | go | x/mobile: apps crash on darwin/arm64 at draw_arrays(__GLIContextRec*, unsigned int, int, int) | Apps crash on darwin/arm64 at draw_arrays(__GLIContextRec*, unsigned int, int, int) shortly after launch. It usually takes 3-4 seconds for me to see the EXC_BAD_ACCESS.
I am packing the arm64 binary in a lipo file, my build steps are available for inspection at https://github.com/rakyll/go-xcode/blob/master/Makefile.
```
#0 0x00000001829ef064 in ___lldb_unnamed_function235$$AGXGLDriver ()
#1 0x00000001829eecf0 in agxuBeginRenderCommand(AGXContextRec*, unsigned int, bool, bool) ()
#2 0x00000001829e5bbc in glrAGXRenderVertexArray(GLDContextRec*, unsigned int, unsigned int, int, int, unsigned int, void const*, int, void const*) ()
#3 0x0000000187f90e58 in glDrawArrays_ACC_ES2Exec ()
#4 0x0000000004509b64 in draw_arrays(__GLIContextRec*, unsigned int, int, int) ()
#5 0x00000000040dce9c in _cgo_80207b16a855_Cfunc_process ()
#6 0x0000000004138dd0 in asmcgocall ()
#7 0x00000000040de178 in threadentry ()
#8 0x0000000196317dc8 in _pthread_body ()
#9 0x0000000196317d24 in _pthread_start ()
```
/cc @crawshaw @hyangah @nigeltao @minux
| mobile | low | Critical |
85,756,027 | youtube-dl | [Errno 36] File name too long on eCryptfs | Filesystem: EXT4
OS: Ubuntu Linux 14.04 LTS
`youtube-dl --version`: 2015.06.04.1
When I try to download a Vimeo video:
`youtube-dl --ignore-errors --restrict-filenames https://vimeo.com/80352108`
I got the following error:
`ERROR: unable to open for writing: [Errno 36] File name too long: ...`
| bug | medium | Critical |
85,767,283 | go | runtime: support dlclose with -buildmode=c-shared | ``` go
package main
import (
"C"
"fmt"
)
var (
c chan string
)
func init() {
c = make(chan string)
go func() {
n := 1
for {
switch {
case n%15 == 0:
c <- "FizzBuzz"
case n%3 == 0:
c <- "Fizz"
case n%5 == 0:
c <- "Buzz"
default:
c <- fmt.Sprint(n)
}
n++
}
}()
}
//export fizzbuzz
func fizzbuzz() *C.char {
return C.CString(<-c)
}
func main() {
}
```
build this with
```
$ go build -buildmode=c-shared -o libfizzbuzz.so libfizzbuzz.go
```
then go
``` python
from ctypes import *
import _ctypes
lib = CDLL("./libfizzbuzz.so")
lib.fizzbuzz.restype = c_char_p
print lib.fizzbuzz()
print lib.fizzbuzz()
print lib.fizzbuzz()
print lib.fizzbuzz()
print lib.fizzbuzz()
print lib.fizzbuzz()
_ctypes.dlclose(lib._handle)
```
```
1
2
Fizz
4
Buzz
Fizz
Segmentation fault
```
| compiler/runtime | high | Critical |
85,799,062 | TypeScript | 'Content-Length' value returned by TSServer is one off on Windows | The 'Content-Length' value returned by TSServer does not take into account that newlines on Windows are two characters `\r\n` instead of `\n`. The issue was found in PR Valloric/ycmd#156.
Here's a minimal example. Create a dummy `test.ts` file and a Python script with the following lines:
``` python
#!/usr/bin/env python
import json
import subprocess
class TypeScript():
_tsserver_handle = subprocess.Popen(
'tsserver.cmd',
stdout = subprocess.PIPE,
stdin = subprocess.PIPE
)
def SendRequest(self, command, arguments=None):
request = {
'seq': 0,
'type': 'request',
'command': command
}
if arguments:
request['arguments'] = arguments
self._tsserver_handle.stdin.write(json.dumps(request) + '\n')
def ReadResponse(self):
headers = {}
while True:
headerline = self._tsserver_handle.stdout.readline().strip()
if not headerline:
break
key, value = headerline.split( ':', 1 )
headers[ key.strip() ] = value.strip()
if 'Content-Length' not in headers:
raise RuntimeError( "Missing 'Content-Length' header" )
contentlength = int( headers[ 'Content-Length' ] )
output = self._tsserver_handle.stdout.read( contentlength )
print( repr( output ) )
def Main():
typescript = TypeScript()
test_file = 'test.ts'
command = 'open'
arguments = { 'file': test_file }
typescript.SendRequest(command, arguments)
command = 'reload'
arguments = { 'tmpfile': test_file, 'file': test_file }
typescript.SendRequest(command, arguments)
typescript.ReadResponse()
command = 'reload'
arguments = { 'tmpfile': test_file, 'file': test_file }
typescript.SendRequest(command, arguments)
typescript.ReadResponse()
if __name__ == '__main__':
Main()
```
This script sends requests to TSServer and tries to read them. An exception is raised if no `Content-Length` key in header is found.
On Linux, the output is:
```
'{"seq":0,"type":"response","command":"reload","request_seq":0,"success":true}\n'
'{"seq":0,"type":"response","command":"reload","request_seq":0,"success":true}\n'
```
On Windows:
```
'{"seq":0,"type":"response","command":"reload","request_seq":0,"success":true}\r'
Traceback (most recent call last):
File "./typescript.py", line 68, in <module>
Main()
File "./typescript.py", line 64, in Main
typescript.ReadResponse()
File "./typescript.py", line 36, in ReadResponse
raise RuntimeError( "Missing 'Content-Length' header" )
RuntimeError: Missing 'Content-Length' header
```
If `contentlength` is incremented by one:
```
'{"seq":0,"type":"response","command":"reload","request_seq":0,"success":true}\r\n'
'{"seq":0,"type":"response","command":"reload","request_seq":0,"success":true}\r\n'
```
In all these cases, TSServer returns a length of 78 characters (`{"seq":0,"type":"response","command":"reload","request_seq":0,"success":true}` is 77 long). It should return a length of 79 characters on Windows.
| Bug,Help Wanted,API | low | Critical |
85,804,734 | TypeScript | Provide an xclangspec file for TypeScript syntax highlighting in Xcode | This is a feature request for the TypeScript team to ship an xclangspec file with the TypeScript sources in order to provide support for syntax highlighting of the language in the Xcode IDE.
Google has done the same for their Go language; see here for an example of the xclangspec file format: https://code.google.com/p/go/source/browse/misc/xcode/go.xclangspec?r=30b0c392132645259e053a2ba8904383a55bab03
| Suggestion,Help Wanted | low | Major |
85,829,235 | go | x/build: coordinator should keep a small pool of ready-to-go GCE buildlets | The coordinator's GCE buildlet pool should always have a few of each type of buildlet ready to go to avoid the start-up latency.
e.g. an idle Plan 9, OpenBSD 32/64, FreeBSD 64 (at least 2x), Linux (probably at least 4x), etc.
Then once we start building them, we also start spinning up the helpers for test sharding.
This will raise costs, but probably shouldn't be too bad. I'll post some math on the cost here later.
/cc @adg
| Performance,Builders,FeatureRequest | low | Minor |
85,866,285 | nvm | Error with linked packages when `reinstall-packages` | I've got an error at the end of `reinstall-packages` command
```
Linking global packages from iojs-v2.0.0...
nvm:cd:646: no such file or directory: /Users/roadhump/Projects/eslint\n/Users/roadhump/Projects/yeoman-generators/generator-linters
```
| shell: zsh: oh-my-zsh | low | Critical |
85,952,737 | rust | No error despite of multiple applicable methods in scope | Given the following code:
``` rust
extern crate mio;
use mio::buf::RingBuf;
use mio::buf::Buf;
use std::io::Read;
fn main() {
let buf = RingBuf::new(10);
let bytes = buf.bytes();
println!("{:?}", bytes);
}
```
`buf` is of type `RingBuf`. `RingBuf` does not provide `.bytes()`, but it implements both `Buf` and `Read`, which both provide a `.bytes()` implementation.
According to https://doc.rust-lang.org/book/ufcs.html the compiler should complain. But it simply chooses to take the implementation of `Read`, which returns the "wrong" result type.
(Without the "use Read" line Rust chooses the implementation of `Buf`, which returns the "correct" type.)
(I am using Rust 1.0.0.)
| T-lang,C-bug | low | Critical |
85,958,883 | rust | Closures should always implement all applicable Fn* traits. | If a closure is passed into a function call as a temporary, it only implements the `Fn*` traits needed to satisfy the constraints specified by the function. However, if it is declared first and then moved into the function, it correctly implements all applicable `Fn*` traits.
``` rust
fn strip<F: FnOnce()>(f: F) -> F { f }
fn fail<F: Fn()>(f: F) { f() }
// Works.
fn test_works() {
let f = || {};
let f = strip(f);
fail(f);
}
// Doesn't
fn test_broken() {
// f only implements FnOnce().
let f = strip(|| {});
fail(f);
}
fn main() {}
```
| C-enhancement,P-medium,A-closures,T-compiler | low | Critical |
86,472,052 | youtube-dl | [site-support-request] rat-porn.com | Example URL:
http://www.rat-porn.com/video/312/pretty-18yo-blonde-fucked-on-cam
```
$ youtube-dl -v 'http://www.rat-porn.com/video/312/pretty-18yo-blonde-fucked-on-cam'
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://www.rat-porn.com/video/312/pretty-18yo-blonde-fucked-on-cam']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.05.29
[debug] Python version 3.4.3 - Linux-4.0.5-1-ARCH-x86_64-with-arch
[debug] exe versions: ffmpeg 2.6.3, ffprobe 2.6.3, rtmpdump 2.4
[debug] Proxy map: {}
[generic] pretty-18yo-blonde-fucked-on-cam: Requesting header
WARNING: Falling back on generic information extractor.
[generic] pretty-18yo-blonde-fucked-on-cam: Downloading webpage
[generic] pretty-18yo-blonde-fucked-on-cam: Extracting information
ERROR: Unsupported URL: http://www.rat-porn.com/video/312/pretty-18yo-blonde-fucked-on-cam
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 649, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/common.py", line 273, in extract
return self._real_extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/generic.py", line 1504, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: http://www.rat-porn.com/video/312/pretty-18yo-blonde-fucked-on-cam
```
| site-support-request,nsfw | low | Critical |
86,472,638 | youtube-dl | [site-support-request] luxuretv.com | Example URL:
http://en.luxuretv.com/videos/slut-spreads-her-big-breasts-in-front-of-a-webcam-16982.html
```
$ youtube-dl -v 'http://en.luxuretv.com/videos/slut-spreads-her-big-breasts-in-front-of-a-webcam-16982.html'
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://en.luxuretv.com/videos/slut-spreads-her-big-breasts-in-front-of-a-webcam-16982.html']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.05.29
[debug] Python version 3.4.3 - Linux-4.0.5-1-ARCH-x86_64-with-arch
[debug] exe versions: ffmpeg 2.6.3, ffprobe 2.6.3, rtmpdump 2.4
[debug] Proxy map: {}
[generic] slut-spreads-her-big-breasts-in-front-of-a-webcam-16982: Requesting header
WARNING: Falling back on generic information extractor.
[generic] slut-spreads-her-big-breasts-in-front-of-a-webcam-16982: Downloading webpage
[generic] slut-spreads-her-big-breasts-in-front-of-a-webcam-16982: Extracting information
ERROR: Unsupported URL: http://en.luxuretv.com/videos/slut-spreads-her-big-breasts-in-front-of-a-webcam-16982.html
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/youtube_dl/YoutubeDL.py", line 649, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/common.py", line 273, in extract
return self._real_extract(url)
File "/usr/lib/python3.4/site-packages/youtube_dl/extractor/generic.py", line 1504, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: http://en.luxuretv.com/videos/slut-spreads-her-big-breasts-in-front-of-a-webcam-16982.html
```
| site-support-request | low | Critical |
86,509,849 | youtube-dl | [site-support-request] https://couragefound.org/ | https://couragefound.org/2015/06/courage-event-the-digital-surveillance-state-quo-vadis-democracy/
Current youtube-dl e1b9322b091122b6f6832c70e3a845a84ee1764e cannot find the video on this page.
| site-support-request | low | Minor |
86,629,533 | youtube-dl | youtube-dl downloads the DASH manifest before noticing that the video doesn't match the regex given by the --match-title argument | This slows it down quite a bit, especially on longer playlists.
| request | low | Major |
87,006,608 | rust | Implement likely/unlikely intrinsic (tracking issue for RFC 1131) | Tracking issue for rust-lang/rfcs#1131:
- [x] Implement the likely/unlikely intrinsic
- [ ] Re-export in `std::hint`
- [ ] Document
- [ ] Stabilize
| metabug,B-RFC-approved,T-lang,B-unstable,E-help-wanted,C-tracking-issue,A-intrinsics,S-tracking-perma-unstable | high | Critical |
87,107,430 | youtube-dl | Request to support box.com | I would like youtube-dl to be able to download from box.com, please.
Example: https://app.box.com/s/yrvxrv91fenex2acvo0zdpbogjjb9jrf
| site-support-request | low | Major |
87,128,036 | go | syscall: forkAndExecInChild assumes clone doesn't block, but it sometimes does | syscall.forkAndExecInChild [uses](https://github.com/golang/go/blob/go1.4.2/src/syscall/exec_linux.go#L85) RawSyscall6 to call clone. This means the goroutine doesn't give up its scheduler slot, making the assumption that clone can't block. But in the case of a program with open files on a fuse file system, clone may block indefinitely. So if the program execs GOMAXPROCS times in parallel, it may lock up and fail to schedule any other runnable goroutines. If the fuse file system is unresponsive or if the fuse server is the same process that has the files open, the program will never make progress.
Here is an example of the kernel stack for a thread that is blocking in sys_clone on behalf of an os/exec.Cmd on Linux 3.19.0-16-generic (Ubuntu 15.04):
```
[<ffffffff812e708d>] __fuse_request_send+0xcd/0x290
[<ffffffff812e7262>] fuse_request_send+0x12/0x20
[<ffffffff812f0d6d>] fuse_flush+0x12d/0x170
[<ffffffff811f0ac3>] filp_close+0x33/0x80
[<ffffffff81212168>] put_files_struct+0x78/0xd0
[<ffffffff8121226b>] exit_files+0x4b/0x60
[<ffffffff810736c4>] copy_process.part.27+0x8e4/0x1b10
[<ffffffff81074aa1>] do_fork+0xd1/0x350
[<ffffffff81074da6>] SyS_clone+0x16/0x20
[<ffffffff817c9699>] stub_clone+0x69/0x90
[<ffffffffffffffff>] 0xffffffffffffffff
```
I previously filed #10202 about a similar issue with syscall.Dup2, which Ian fixed in 28074d5baad961f931df9895c57a82d164641f06. Can a similar fix be made here? I know there are tricky issues with the fork lock, but I don't pretend to be familiar enough to say.
| compiler/runtime | medium | Major |
87,133,875 | kubernetes | Bulk creation API | I have a list of services defined in a JSON file having `"kind": "list"`
I'm using the v1beta3 API version.
However, I don't see any endpoint to launch a list resource.
I'm following the Swagger spec on http://kubernetes.io/third_party/swagger-ui/#!/v1beta3/listService to see the endpoints.
Is there any way to use the API to launch multiple services using a single json/yaml with kind=list?
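For reference, the manifest being described might look like the sketch below (v1beta3 field names per the version mentioned above; treat the exact layout as illustrative). As far as I can tell there is no single bulk endpoint at this point; `kubectl create -f` walks the items client-side and POSTs each one individually.

```json
{
  "kind": "List",
  "apiVersion": "v1beta3",
  "items": [
    {
      "kind": "Service",
      "apiVersion": "v1beta3",
      "metadata": { "name": "svc-a" },
      "spec": { "ports": [ { "port": 80 } ] }
    },
    {
      "kind": "Service",
      "apiVersion": "v1beta3",
      "metadata": { "name": "svc-b" },
      "spec": { "ports": [ { "port": 81 } ] }
    }
  ]
}
```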
| priority/backlog,area/api,area/app-lifecycle,sig/api-machinery,kind/feature,lifecycle/frozen | medium | Major |
87,399,769 | go | os/exec: provide better support for creating pipelines | Creating pipelines using os/exec is somewhat tedious. There are a couple of good external packages that provide higher-level interfaces:
http://godoc.org/gopkg.in/pipe.v2
http://godoc.org/github.com/ghemawat/stream
Brad asked me to file an issue to think about making this better for Go 1.6.
Related: it's common to want the text of stderr on failure, but not as CombinedOutput. The pipe package calls this DividedOutput: http://godoc.org/gopkg.in/pipe.v2#DividedOutput
| help wanted,FeatureRequest | low | Critical |
87,579,361 | youtube-dl | Add ID3 tags to MP3 files (was: Thumbnail shows in VLC, but not Windows Media Player?) | Running the following creates an mp3 file and the corresponding thumbnail:
> youtube-dl --write-thumbnail --embed-thumbnail -x --audio-format mp3 https://www.youtube.com/watch?v=v2AC41dglnM
If I open the mp3 with VLC, the thumbnail shows, but if I open it with Windows Media Player it doesn't. What am I doing wrong? How can I make it show in Windows Media Player as well?
| request,postprocessors | low | Minor |
87,812,374 | go | cmd/compile/ssa: duplicate block elim | ``` go
package p
func g_ssa(a, b int) int {
if a < 5 {
return 1
}
if b < 5 {
return 1
}
return 0
}
```
At the end of the layout pass (chosen for readability--this remains true at the end of compilation), the SSA looks like:
```
g_ssa <nil>
b1:
v1 = Arg <mem> [.mem]
v2 = FP <uint64>
v18 = MOVQconst <int> [1]
v22 = MOVQconst <int> [0]
v28 = MOVQload <int> [0] v2 v1
v32 = MOVQload <int> [8] v2 v1
v24 = CMPQconst <flags> [5] v28
v7 = MOVQstore <mem> [16] v2 v22 v1
LT v24 -> b3 b4
b3:
v14 = MOVQstore <mem> [16] v2 v18 v7
Plain -> b2
b2:
v29 = Phi <mem> v14 v21 v25
Exit v29
b4:
v20 = CMPQconst <flags> [5] v32
LT v20 -> b5 b6
b5:
v21 = MOVQstore <mem> [16] v2 v18 v7
Plain -> b2
b6:
v25 = MOVQstore <mem> [16] v2 v22 v7
Plain -> b2
```
Note that blocks `b3` and `b5` are effectively identical. One of them could be eliminated.
This happens fairly often in practice. For example, our generated equality code looks like:
``` go
func eq(a, b T) bool {
if a.X != b.X {
return false
}
if a.Y != b.Y {
return false
}
return true
}
```
This ought to produce code that is just as efficient as:
``` go
func eq(a, b T) bool {
return a.X == b.X && a.Y == b.Y
}
```
Right now it doesn't, but we can do better. There's also a lot of code in the compiler that looks like this. Also complex Less methods for sort.Interface.
Is this worth doing? Ought this to occur as part of an existing pass, or as a new one?
| NeedsInvestigation | low | Major |
87,852,728 | rust | Indicate which version of MSVC Rust was built with | When linking to the CRT on Windows, all code that is statically linked together needs to use the same version of the CRT. Different versions of the CRT (aka they have different DLL names) are not ABI compatible, so `msvcrt120.dll` and `msvcrt110.dll` are not ABI compatible. In particular, almost all Rust code links statically to `libstd`, so the version of the CRT that `libstd` uses needs to be same version of the CRT that Rust code using that `libstd` is using.
For the gnu version of Rust, it relies on `msvcrt.dll` and since all versions of MinGW will link to `msvcrt.dll` everything is fine and dandy.
However, for the MSVC version of Rust, it is linked against a specific versioned CRT such as `msvcrt120.dll`. Since the version of the CRT depends on which version of MSVC you build your code with, there is a possibility that you use a different version of the CRT than `libstd` was built with, leaving room for potential ABI issues. There are even separate debug/release versions of the CRT for each version, as well as static/dynamically linked versions of the CRT.
Therefore, to ensure that there are no ABI compatibility issues, distributions of the MSVC version of Rust should indicate exactly which version of MSVC and the CRT they were built with.
Note that this is purely an issue for developers using Rust to build their code with the msvc toolchain. End-users of binaries built by Rust merely need to have the appropriate redistributable installed, and there are no ABI concerns there.
| O-windows-msvc,C-feature-request | low | Critical |
87,878,000 | go | x/review/git-codereview: mail must update the reviewer list even if there are no staged changes | mail rejects to update the reviewer list if there are no staged changes. This behaviour is breaking a common use-case where the contributor creates a CL with no reviewers, review his/her changes on Gerrit and add reviewers. Allow mail subcommand to update the reviewers list even if there are no changes.
| NeedsInvestigation | low | Minor |
87,994,819 | TypeScript | Explicit any type in for..in loop | For the reason of consistency, it should be possible to explicitly annotate the iterator variable at least with the "any" type in a for..in loop.
*Edit from @sandersn: Fixes should apply to JSDoc too: https://github.com/microsoft/TypeScript/issues/43756, and perhaps allow annotations on `for .. of` too*
| Suggestion,Awaiting More Feedback | high | Critical |
88,488,946 | youtube-dl | Glomdalen.no (site-support-request) | Hello! Thank you for a great service. I would be really happy if you would add support for Glomdalen.no. Here is an example:
http://www.glomdalen.no/tv/er-dette-offside-5-19-69221.html
I think this is similar to other newspapers, such as ba.no, which I believe you support, but I couldn't find a way to make it work for Glomdalen.no ...
Thank you very much! :-)
| site-support-request | low | Minor |
88,657,568 | rust | Add optional iterations output to libtest benchmark | The `ns / iter` output of libtest's benchmarking harness can hide subtle differences for fast operations (also for some programs you just want to measure throughput). Could we have an option to output the number of iterations per second instead?
I note that `src/libtest/lib.rs` has a `fn ns_per_iter(&mut self)` which calculates this and should be easy enough to extend.
| C-enhancement,A-libtest | low | Minor |
88,745,082 | TypeScript | Intellisense behaviour of functions and parameters | Some odd/unintuitive Intellisense behaviour:
``` typescript
/**
* Function description A
* @param name Parameter description A
*/
function uniqueName(name: string): string {
}
export const Uid = {
/**
* Function description B
* @param name Parameter description B
*/
uniqueName
};
```
The intellisense description for the 'Uid.uniqueName' property is now description B; however, the intellisense for the parameter is description A. There isn't anything overriding here - the description of the function is always taken from the export, and the parameter description is always taken from the function definition. As such, I need to do the following to get full intellisense support for this pattern:
``` typescript
/**
* @param name Parameter description
*/
function uniqueName(name: string): string {
}
export const Uid = {
/**
* Function description
*/
uniqueName
};
```
i.e. function descriptions in one place, parameter descriptions in another place. I know I can work around this by restructuring the code, but it still feels like I should be able to achieve it in the above format.
| Suggestion,Help Wanted,VS Code Tracked,Experience Enhancement | low | Minor |
88,854,146 | rust | Clarify story on libm bindings | Currently the f32 and f64 types provide a number of functions that are just bindings to libm (e.g. trigonometric functions). There are many unbound functions, however (see https://github.com/rust-lang/rust/pull/25780 for some), which can in theory be added over time. It's unclear, however, whether we want to continue this trend and bind all functions or instead try to move away from libm (one possibility being deprecating existing functionality in favor of an external crate).
It would be nice to have a comprehensive understanding of the functions libm provides (beyond those we bind today) and what our story here should be!
| C-tracking-issue,T-libs,A-floating-point,Libs-Tracked | low | Major |
89,075,655 | go | cmd/go: "go list" option to display accumulated cgo LDFLAGS for -buildmode=c-archive/c-shared | I'm trying to build a simple Go library that spins up an `http.Server` from a call in C, but compiling the C against the produced library object fails. Similar examples that don't involve `http` work just fine.
```
$ go version
go version devel +a2aaede Wed Jun 17 14:55:39 2015 +0000 darwin/amd64
$ ld -v
@(#)PROGRAM:ld PROJECT:ld64-241.9
configured to support archs: armv6 armv7 armv7s arm64 i386 x86_64 x86_64h armv6m armv7m armv7em
LTO support using: LLVM version 3.5svn
$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin13.4.0
Thread model: posix
```
Error:
```
$ gcc -o gohttp-c examples/c/main.c gohttplib.a -lpthread
Undefined symbols for architecture x86_64:
"_CFArrayGetCount", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
"_CFArrayGetValueAtIndex", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
"_CFDataAppendBytes", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
"_CFDataCreateMutable", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
"_CFDataGetBytePtr", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
__cgo_6dbb806e9976_Cfunc_CFDataGetBytePtr in gohttplib.a(000003.o)
(maybe you meant: __cgo_6dbb806e9976_Cfunc_CFDataGetBytePtr)
"_CFDataGetLength", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
__cgo_6dbb806e9976_Cfunc_CFDataGetLength in gohttplib.a(000003.o)
(maybe you meant: __cgo_6dbb806e9976_Cfunc_CFDataGetLength)
"_CFRelease", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
__cgo_6dbb806e9976_Cfunc_CFRelease in gohttplib.a(000003.o)
(maybe you meant: __cgo_6dbb806e9976_Cfunc_CFRelease)
"_SecKeychainItemExport", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
"_SecTrustCopyAnchorCertificates", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
"_kCFAllocatorDefault", referenced from:
_FetchPEMRoots in gohttplib.a(000003.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [example-c] Error 1
```
More thorough description and full code examples are [available in this StackOverflow question](https://stackoverflow.com/questions/30896892/using-go-1-5-buildmode-c-archive-with-net-http-server-linked-from-c).
This might be related to the OSX/darwin arch as a commenter was able to get it working on Ubuntu.
| help wanted,NeedsFix,GoCommand | low | Critical |
89,079,880 | TypeScript | Bug with resolveClassOrInterfaceMembers | https://github.com/Microsoft/TypeScript/blob/6db4faf4883636e70ef7a99401b9dbc5d1a58631/src/compiler/checker.ts#L2894
```
interface monster {
(a: "a"): void;
(b: "b"): void;
(c: "c"): void;
(d: "d"): void;
(e: "e"): void;
(x: string): void;
}
interface m1 extends monster {
a();
}
interface m2 extends monster {
b();
}
interface m3 extends monster { }
interface m4 extends monster { }
interface m5 extends m1, m2, m3, m4 { }
var waa: m5;
waa("a")
```
The compiler finds 24 call signatures because it appends the signatures of 'monster' once from each base type.
| Suggestion,Help Wanted | low | Critical |
89,172,088 | go | os: TestStartProcess and TestHostname fail on android | See http://build.golang.org/log/8f8c15bb002fa2ba25799e34e155095462700a47
```
go_android_exec: adb shell mkdir -p /data/local/tmp/os.test-7203
go_android_exec: adb push /tmp/go-build611040176/os/_test/os.test /data/local/tmp/os.test-7203/os.test-7203-tmp
199 KB/s (2565204 bytes in 12.544s)
go_android_exec: adb shell cp '/data/local/tmp/os.test-7203/os.test-7203-tmp' '/data/local/tmp/os.test-7203/os.test-7203'
go_android_exec: adb shell rm '/data/local/tmp/os.test-7203/os.test-7203-tmp'
go_android_exec: adb shell export TMPDIR="/data/local/tmp/os.test-7203"; export GOROOT="/data/local/tmp/goroot"; export GOPATH="/data/local/tmp/gopath"; cd "/data/local/tmp/goroot/src/os"; '/data/local/tmp/os.test-7203/os.test-7203' -test.short=true -test.timeout=4m0s; echo -n exitcode=$?
--- FAIL: TestStartProcess (0.00s)
os_test.go:780: StartProcess: fork/exec /bin/pwd: no such file or directory
--- FAIL: TestHostname (0.01s)
os_test.go:1210: fork/exec /bin/hostname: no such file or directory
FAIL
exitcode=1go_android_exec: adb shell rm -rf /data/local/tmp/os.test-7203
FAIL os 13.867s
```
There need to be proper access methods for Android-specific system properties.
| OS-Android | low | Minor |
89,188,732 | You-Dont-Know-JS | "this & object prototypes": ch2 issue about polyfill of Function.prototype.bind() | The snippet in the polyfill of Function.prototype.bind() :
```
this instanceof fNOP && oThis
? this : oThis
```
I cannot really understand why we still need `&& oThis` even though `this instanceof fNOP` already indicates that the function was invoked with the `new` operator.
I tried this polyfill with the snippet below:
```
function foo(p1,p2) {
this.val = p1 + p2;
}
var bar = foo.bind( null, "p1" );
var baz = new bar( "p2" );
baz.val; // p1p2
```
but the result is `undefined` :(
The correct result appears once I comment out `&& oThis`.
| for second edition | low | Minor |
89,199,826 | react | Use Inline Event Handlers for trapBubbledEventsLocal and the iOS Safari Click Hack | We currently do a lot of work at the end of mount to find all the nodes and attach listeners after the fact. This is severely impacting initial rendering performance of `<form />`, `<img />` and click handlers.
Instead we can just use inline event handlers in the innerHTML string. For the iOS Safari hack it should be trivial. The handler doesn't even have to do anything.
The inline event handler would need to either redispatch the event, or call into some other event handler system. Probably a global listener.
``` js
window._handleReactEvent = ...;
```
``` html
<img onload="_handleReactEvent(event)">
```
Since there could potentially be multiple Reacts, they should probably chain the handler if there already is one registered. (Although multiple Reacts in the same document is already pretty broken in this regard.)
It doesn't have to be a global. Since inline event handlers get the element added as a `with(element)` scope around themselves, it is equivalent to adding the handler to the prototype:
``` js
Element.prototype._handleReactEvent = ...;
```
or
``` js
HTMLImageElement.prototype._handleReactEvent = ...;
```
This makes them a bit more hidden, unobtrusive.
We still need to render this string for server-side rendering to avoid needing to change the HTML or wire up handlers after-the-fact on the client.
Since these events can fire before React has loaded, we need to check for the existence of the handler before it is used.
``` html
<img onload="this._handleReactEvent&&_handleReactEvent(event)">
```
It is critical that this string is short - for innerHTML string concat performance and network performance. Yet it needs to be unlikely to collide with anything else.
Is there a unicode character we could use?
| Type: Enhancement,Component: DOM,React Core Team | medium | Critical |
89,261,661 | youtube-dl | RSS feed parser doesn't respect --dateafter | I'm running:
`youtube-dl --dateafter now-5day http://www.escapistmagazine.com/rss/videos/list/1.xml`
This correctly gets the most recent video; however, it then proceeds to download the next 50 videos from the RSS feed.
Would it be possible to implement the --dateafter switch to look at the posted date of the RSS feed entries?
| request | low | Minor |
89,274,284 | youtube-dl | Brightcove empty playlist | ./youtube-dl --verbose http://aptn.ca/blackstone/video/season-1/
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'http://aptn.ca/blackstone/video/season-1/']
[debug] Encodings: locale US-ASCII, fs US-ASCII, out US-ASCII, pref US-ASCII
[debug] youtube-dl version 2015.06.15
[debug] Python version 2.7.9 - FreeBSD-8.2-STABLE-amd64-64bit-ELF
[debug] exe versions: ffmpeg 0.11.3, ffprobe 0.11.3
[debug] Proxy map: {}
[generic] season-1: Requesting header
WARNING: Falling back on generic information extractor.
[generic] season-1: Downloading webpage
[generic] season-1: Extracting information
[generic] Brightcove video detected.
[download] Downloading playlist: Blackstone | SEASON 1
[generic] playlist Blackstone | SEASON 1: Collected 1 video ids (downloading 1 of them)
[download] Downloading video 1 of 1
[Brightcove] AQ~~,AAAAEQEb72E~,MYPTzvrWinzoYROX3AWqzRir9E-Up5Qk: Downloading playlist information
ERROR: Empty playlist; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 650, in extract_info
ie_result = ie.extract(url)
File "./youtube-dl/youtube_dl/extractor/common.py", line 273, in extract
return self._real_extract(url)
File "./youtube-dl/youtube_dl/extractor/brightcove.py", line 239, in _real_extract
return self._get_playlist_info(player_key[0])
File "./youtube-dl/youtube_dl/extractor/brightcove.py", line 277, in _get_playlist_info
raise ExtractorError('Empty playlist')
ExtractorError: Empty playlist; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| geo-restricted | low | Critical |
89,722,137 | youtube-dl | Add option for checking if youtube-dl is outdated | Please include an auto-update option, like Google Chrome's, that will notify users that an update is available instead of requiring a manual check.
This can be achieved by checking the version information whenever youtube-dl is run to download a video.
| request | low | Major |
89,859,769 | neovim | External programs called from :r cannot accept input | Let's say I want to use :r with a command that requires input.
For example:
```
:r !gpg -d ~/test.gpg 2>/dev/null
```
On vim, this would decrypt the file `test.gpg` and put its contents into the buffer. The [vim-gnupg](https://github.com/jamessan/vim-gnupg) plugin uses this behaviour to allow transparent editing of GPG encrypted files.
When you run the above command, Vim's :r will show something like this at the bottom of the window:
```
You need a passphrase to unlock the secret key for
user: "User <[email protected]>"
4096-bit RSA key, ID 12345678, created 1970-01-01 (main key ID 12345678)
Enter passphrase:
```
Naturally, the cursor is placed after "Enter passphrase: " so that you can enter your passphrase and decrypt the file.
In Neovim however, the message is printed but the cursor just disappears, and I have to kill the process to get back to a terminal.
| enhancement | low | Minor |
90,106,326 | kubernetes | Generalize prevention of accidental deletion / mutation | PR #9975 added hardcoded protection of the default namespace to the Lifecycle admission controller. This mechanism at least needs to be configurable, since Openshift and Kubernetes have additional infrastructure namespaces they'd like to protect. However, it also needs to be possible to remove the protection to shut down a cluster #4630, without making it susceptible to accidents. One solution would be to add a `protected` field to metadata of any object. `protected: true` would prevent deletion until the object was updated to set `protected: false`. This would also make it straightforward to protect the special Kubernetes services, addons, and other self-hosted components, while still making it possible to update them and to delete all resources during shutdown.
cc @derekwaynecarr @lavalamp
| priority/important-soon,area/api,sig/api-machinery,kind/feature,area/teardown,lifecycle/frozen | medium | Critical |
90,143,098 | youtube-dl | "enseignemoi.com" support | Hi, can you please add support for "enseignemoi.com" :
$ youtube-dl -v http://www.enseignemoi.com/mamadou-karambiri/video/l-adoration-5571.html
[debug] System config: []
[debug] User config: [u'--ignore-errors', u'--continue', u'-o', u'%(title)s__%(format_id)s__%(id)s.%(ext)s', u'-f', u'mp4/flv', u'--add-metadata', u'--write-description', u'--embed-subs', u'--embed-thumbnail']
[debug] Command-line args: [u'-v', u'http://www.enseignemoi.com/mamadou-karambiri/video/l-adoration-5571.html']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.06.15
[debug] Python version 2.7.8 - Linux-3.16.0-41-generic-x86_64-with-Ubuntu-14.10-utopic
[debug] exe versions: avconv 11-6, avprobe 11-6, ffmpeg 2.6.2, ffprobe 2.6.2, rtmpdump 2.4
[debug] Proxy map: {}
[generic] l-adoration-5571: Requesting header
WARNING: Falling back on generic information extractor.
[generic] l-adoration-5571: Downloading webpage
[generic] l-adoration-5571: Extracting information
ERROR: Unsupported URL: http://www.enseignemoi.com/mamadou-karambiri/video/l-adoration-5571.html
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/generic.py", line 1020, in _real_extract
doc = parse_xml(webpage)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/utils.py", line 1558, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: mismatched tag: line 46, column 3
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 650, in extract_info
ie_result = ie.extract(url)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 273, in extract
return self._real_extract(url)
File "/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/generic.py", line 1571, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: http://www.enseignemoi.com/mamadou-karambiri/video/l-adoration-5571.html
| site-support-request | low | Critical |
90,145,389 | rust | Installer on OS X displays wrong install sizes | 
Installing on OS X goes well, in general, but there is a minor problem: the sizes quoted for installation of the compiler, Cargo, and of the documentation are all Zero KB.
| O-macos,E-help-wanted,C-bug | low | Minor |
90,170,536 | go | cmd/vet, go/types: make it easier to handle architecture-dependent constant values correctly | CL 11252 introduces a vet check for integer comparisons. It has false positives when checking code like `u > uint64(^uintptr(0))` where u is a `uint64`. This expression is always false when compiled for 64 bit machines but not for 32 bit machines. IIRC, something similar occurred for the suspicious shift vet check.
It would be nice if vet checks could inquire about types and values on a per-int-size basis.
go/types uses the host machine's integer size as a default. We could override that in vet and typecheck twice, once with 32 bit ints and once with 64 bit ints, but that would be expensive. Maybe there's a better way?
/cc @robpike @griesemer
| Analysis | low | Minor |
90,352,102 | youtube-dl | Feature request: use ffprobe/avprobe to fetch extra metadata for use in _sort_formats | Hi, I have noticed that the "-f bestaudio" does not really download the best audio quality :
$ youtube-dl --ignore-config -o "%(title)s__%(format_id)s__%(id)s.%(ext)s" -f bestaudio --restrict-filenames https://www.youtube.com/watch?v=NLYggpNyQuo
[youtube] NLYggpNyQuo: Downloading webpage
[youtube] NLYggpNyQuo: Extracting video information
[youtube] NLYggpNyQuo: Downloading DASH manifest
[download] Kacou_Severin_-Le_secret_de_l_onction_1__140__NLYggpNyQuo.m4a has already been downloaded
[download] 100% of 136.33MiB
[ffmpeg] Correcting container in "Kacou_Severin_-Le_secret_de_l_onction_1__140__NLYggpNyQuo.m4a"
Using the options "-f best -x" gets a file with better audio quality :
$ youtube-dl --ignore-config -o "%(title)s__%(format_id)s__%(id)s.%(ext)s" -f best -x -k --restrict-filenames https://www.youtube.com/watch?v=NLYggpNyQuo
[youtube] NLYggpNyQuo: Downloading webpage
[youtube] NLYggpNyQuo: Extracting video information
[youtube] NLYggpNyQuo: Downloading DASH manifest
[download] Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.mp4 has already been downloaded
[download] 100% of 375.25MiB
[avconv] Destination: Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.m4a
$ avprobe Kacou_Severin_-Le_secret_de_l_onction_1__140__NLYggpNyQuo.m4a 2>&1 | grep Audio:
Stream #0.0(und): Audio: aac, 44100 Hz, stereo, fltp, 125 kb/s (default)
$ avprobe Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.mp4 2>&1 | grep Audio:
Stream #0.1(und): Audio: aac, 44100 Hz, stereo, fltp, 191 kb/s (default)
| request | low | Major |
90,355,339 | youtube-dl | Feature : Please add a command line option (--no-OPTION) to disable one parameter already set in the "youtube-dl.conf" file (--OPTION value). | Hi,
I have the "--embed-thumbnail" option set in my "youtube-dl.conf".
The problem is that when I download the audio of a YouTube stream, youtube-dl embeds the thumbnail, but I don't want thumbnails in my audio files, otherwise my hardware won't be able to play them; see the last line of this output:
$ youtube-dl -f best -x -k https://www.youtube.com/watch?v=NLYggpNyQuo
[youtube] NLYggpNyQuo: Downloading webpage
[youtube] NLYggpNyQuo: Extracting video information
[youtube] NLYggpNyQuo: Downloading DASH manifest
[info] Writing video description to: Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.description
[youtube] NLYggpNyQuo: Downloading thumbnail ...
[youtube] NLYggpNyQuo: Writing thumbnail to: Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.jpg
[download] Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.mp4 has already been downloaded
[download] 100% of 377.53MiB
[ffmpeg] Adding metadata to 'Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.mp4'
[avconv] Destination: Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.m4a
[ffmpeg] Subtitles can only be embedded in mp4 or mkv files
[atomicparsley] Adding thumbnail to "Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.m4a"
$ avprobe Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.m4a 2>&1 | egrep "Input|Duration:|Stream"
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Kacou_Severin_-Le_secret_de_l_onction_1__22__NLYggpNyQuo.m4a':
Duration: 02:28:30.14, start: 0.000000, bitrate: 194 kb/s
Stream #0.0(und): Audio: aac, 44100 Hz, stereo, fltp, 191 kb/s (default)
Stream #0.1: Video: mjpeg, yuvj420p, 1440x1080 [PAR 1:1 DAR 4:3], 90k tbn
A workaround would be to specify another configuration file dedicated to audio downloads, such as:
--config=FILE Specify the location of a startup file you wish to use.
Can you help?
| request | low | Minor |
90,403,555 | youtube-dl | [Suggestion] Please add an option to skip starting segments together with --hls-prefer-native | Downloading is segment-by-segment with the experimental --hls-prefer-native option; it would be very convenient if there were an additional option to control the starting segment number, so that one can choose to record partial footage from a lengthy YouTube video.
| request | low | Minor |
90,462,920 | java-design-patterns | Blackboard architectural pattern |
### Description:
Implement the Blackboard design pattern in the project. The Blackboard pattern is an architectural pattern used in situations where multiple subsystems need to collaborate to solve a problem that is beyond the individual capabilities or knowledge of each subsystem. The main elements of the Blackboard pattern include:
- **Blackboard**: A shared global memory structure that holds the data or state of the solution space. All subsystems read from and write to the blackboard.
- **Knowledge Sources**: Independent subsystems or modules that have specialized knowledge and can operate on the data in the blackboard. They can contribute partial solutions or refine existing data.
- **Control Component**: Manages the flow of control among the knowledge sources. It determines which knowledge source will get access to the blackboard at any given time based on certain criteria or heuristics.
### References:
- https://en.wikipedia.org/wiki/Blackboard_system
- Pattern-Oriented Software Architecture, Volume 1: A System of Patterns
- Pattern-Oriented Software Architecture: A System of Patterns
### Acceptance Criteria:
1. A blackboard component that allows multiple knowledge sources to read and write data.
2. At least three independent knowledge sources that interact with the blackboard to contribute to or refine the solution.
3. A control component that manages the interaction between the knowledge sources and ensures orderly access to the blackboard.
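A minimal sketch of how these three elements could interact (the knowledge sources here cooperate to decode a toy Caesar-shifted string; all class and method names are illustrative, not a prescribed design):

```java
import java.util.ArrayList;
import java.util.List;

// Blackboard sketch: each knowledge source contributes one partial solution
// when the shared state allows it; the control component drives the loop.
public class BlackboardDemo {
    // Shared global state that every knowledge source reads and writes.
    static class Blackboard {
        final String problem;
        final StringBuilder solution = new StringBuilder();
        Blackboard(String problem) { this.problem = problem; }
    }

    // A knowledge source acts only when it recognizes the current state.
    interface KnowledgeSource {
        boolean canContribute(Blackboard b);
        void contribute(Blackboard b);
    }

    // Specialized knowledge: knows how to decode exactly one shifted letter.
    static class LetterExpert implements KnowledgeSource {
        final char encoded;
        LetterExpert(char encoded) { this.encoded = encoded; }
        public boolean canContribute(Blackboard b) {
            int i = b.solution.length();
            return i < b.problem.length() && b.problem.charAt(i) == encoded;
        }
        public void contribute(Blackboard b) {
            // Shift the letter back by one (Caesar shift of 1).
            b.solution.append((char) ('a' + (encoded - 'a' + 25) % 26));
        }
    }

    // Control component: repeatedly grants blackboard access to the first
    // knowledge source able to make progress.
    static String solve(String problem) {
        Blackboard board = new Blackboard(problem);
        List<KnowledgeSource> sources = new ArrayList<>();
        for (char c = 'a'; c <= 'z'; c++) sources.add(new LetterExpert(c));
        boolean progress = true;
        while (progress && board.solution.length() < problem.length()) {
            progress = false;
            for (KnowledgeSource ks : sources) {
                if (ks.canContribute(board)) { ks.contribute(board); progress = true; break; }
            }
        }
        return board.solution.toString();
    }

    public static void main(String[] args) {
        System.out.println(solve("ifmmp")); // decodes to "hello"
    }
}
```

No single expert can decode the whole string; the solution emerges from their combined contributions, which is the essence of the pattern.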
For more details on contributing to this project, please refer to our [contribution guidelines](https://github.com/iluwatar/java-design-patterns/wiki). | info: help wanted,epic: pattern,type: feature | low | Major |
90,463,980 | java-design-patterns | Binding Properties pattern | ### Description
The Binding Properties design pattern is used to synchronize state between different components or objects in a system, ensuring that changes in one part are automatically reflected in another. This pattern is particularly useful in scenarios involving UI components and their underlying data models, where maintaining consistent state across various elements is crucial.
**Main Elements of Binding Properties Pattern:**
1. **Observable Property**: A property whose changes can be observed by other objects.
2. **Observer**: An object that subscribes to changes in the observable property and updates itself accordingly.
3. **Binding Mechanism**: The infrastructure that connects the observable properties with their observers, facilitating the automatic update process.
4. **Two-Way Binding**: An optional feature where changes in either the observable property or the observer can propagate in both directions.
### References
1. [Java Design Patterns Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
2. [Binding Properties Design Pattern - Wikipedia](https://en.wikipedia.org/wiki/Observer_pattern#Binding_properties)
3. [Binding Properties in WikiWikiWeb](http://c2.com/cgi/wiki?BindingProperties)
### Acceptance Criteria
1. **Implementation of Observable Property**: Create a class representing an observable property that allows observers to register and unregister, and notifies them of changes.
2. **Observer Interface**: Define an interface for observers that requires an update method, which will be called when the observable property changes.
3. **Binding Mechanism**: Develop the infrastructure to link observable properties with their observers, supporting at least one-way binding. Optionally, implement two-way binding functionality.
Please ensure that the implementation adheres to the project contribution guidelines and includes appropriate documentation and test cases.
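A minimal one-way binding sketch of the elements above (all names are illustrative; a real implementation would also handle unregistering and, optionally, two-way propagation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// One-way binding sketch: observers subscribe to an observable property and
// are notified through the binding mechanism whenever the value changes.
public class BindingDemo {
    static class ObservableProperty<T> {
        private T value;
        private final List<Consumer<T>> observers = new ArrayList<>();

        ObservableProperty(T initial) { value = initial; }
        void observe(Consumer<T> observer) { observers.add(observer); }
        T get() { return value; }
        void set(T newValue) {
            value = newValue;
            for (Consumer<T> o : observers) o.accept(newValue); // notify all observers
        }
    }

    // A bound "view" mirrors the model: Celsius changes propagate as Fahrenheit.
    static String demo() {
        ObservableProperty<Integer> celsius = new ObservableProperty<>(0);
        StringBuilder view = new StringBuilder();
        celsius.observe(c -> view.append(c * 9 / 5 + 32).append(";"));
        celsius.set(100);
        celsius.set(0);
        return view.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 212;32;
    }
}
```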
| epic: pattern,type: feature | low | Major |
90,464,655 | java-design-patterns | Join pattern | **Description:**
The Join design pattern allows multiple concurrent processes or threads to be synchronized such that they all must complete before any subsequent tasks can proceed. This pattern is particularly useful in scenarios where tasks can be executed in parallel but the subsequent tasks must wait for the completion of these parallel tasks.
**Main Elements of Join Pattern:**
1. **Task Execution:** Multiple tasks are executed concurrently.
2. **Synchronization Point:** A join point where all tasks must converge and synchronize.
3. **Completion Handling:** After all tasks reach the join point, subsequent tasks can proceed.
**References:**
- [Join pattern on Wikipedia](https://en.wikipedia.org/wiki/Join-pattern)
**Acceptance Criteria:**
1. Implement a `JoinPattern` class demonstrating the synchronization of multiple threads using the join pattern.
2. Include a comprehensive unit test that validates the correct synchronization and execution order of threads.
3. Ensure the implementation follows the project contribution guidelines as outlined in the [wiki](https://github.com/iluwatar/java-design-patterns/wiki).
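One way the three elements could be sketched in Java is with a `CountDownLatch` as the synchronization point (the names here are illustrative, not a proposed API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Join sketch: the latch is the join point; the code after await() (the
// "subsequent task") runs only once every concurrent task has arrived.
public class JoinDemo {
    static int runAndJoin(int tasks) {
        CountDownLatch join = new CountDownLatch(tasks);
        AtomicInteger sum = new AtomicInteger();
        for (int i = 1; i <= tasks; i++) {
            final int n = i;
            new Thread(() -> {
                sum.addAndGet(n); // concurrent work
                join.countDown(); // this thread reaches the join point
            }).start();
        }
        try {
            join.await(); // block until all tasks have completed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum.get(); // safe to read: every contribution is in
    }

    public static void main(String[] args) {
        System.out.println(runAndJoin(4)); // 1 + 2 + 3 + 4 = 10
    }
}
```

`CountDownLatch.await()` also establishes the happens-before edge that makes the workers' writes visible after the join.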
| epic: pattern,type: feature | low | Major |
90,465,618 | java-design-patterns | Scheduler pattern | ### Description
The Scheduler design pattern is a behavioral pattern used to manage the execution of tasks. This pattern is particularly useful in scenarios where tasks need to be executed at specific intervals or under specific conditions. The Scheduler pattern typically involves a scheduler component that manages task execution, a task component that defines the executable units of work, and a trigger or condition that initiates task execution.
**Main elements of the Scheduler pattern:**
1. **Scheduler:** Manages the execution of tasks, determining when and how often they should run.
2. **Task:** Represents the work to be performed, encapsulating the logic for execution.
3. **Trigger/Condition:** Specifies the criteria or schedule for task execution, such as time intervals or specific events.
4. **Execution Context:** Provides the environment in which tasks are executed, handling necessary resources and states.
### References
- [Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
- [Scheduler Pattern - Wikipedia](https://en.wikipedia.org/wiki/Scheduling_(computing))
### Acceptance Criteria
1. Implement the Scheduler component to manage and execute tasks based on defined triggers or conditions.
2. Define and implement a Task interface that encapsulates the logic for various tasks.
3. Create examples demonstrating the usage of the Scheduler pattern with different types of triggers/conditions and tasks.
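A minimal tick-driven sketch of these elements (virtual time instead of wall-clock time keeps the example deterministic; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Scheduler sketch: tasks fire whenever their interval trigger matches
// the current tick of the execution context's virtual clock.
public class SchedulerDemo {
    interface Task { void run(long tick); }

    static class ScheduledTask {
        final Task task;
        final long interval; // trigger: run every `interval` ticks
        ScheduledTask(Task t, long interval) { this.task = t; this.interval = interval; }
    }

    static class Scheduler {
        private final List<ScheduledTask> tasks = new ArrayList<>();
        void schedule(Task t, long everyTicks) { tasks.add(new ScheduledTask(t, everyTicks)); }
        // Execution context: advance virtual time and dispatch due tasks.
        void runFor(long ticks) {
            for (long tick = 1; tick <= ticks; tick++)
                for (ScheduledTask st : tasks)
                    if (tick % st.interval == 0) st.task.run(tick);
        }
    }

    static String demo() {
        StringBuilder out = new StringBuilder();
        Scheduler s = new Scheduler();
        s.schedule(t -> out.append("A").append(t), 2); // every 2 ticks
        s.schedule(t -> out.append("B").append(t), 3); // every 3 ticks
        s.runFor(6);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // A2B3A4A6B6
    }
}
```

A production version would likely delegate to `java.util.concurrent.ScheduledExecutorService` rather than a hand-rolled loop; the sketch only shows how the roles fit together.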
| epic: pattern,type: feature | medium | Major |
90,466,561 | java-design-patterns | Proactor pattern | **Description:**
The Proactor design pattern is an asynchronous programming pattern used to efficiently handle multiple concurrent operations. In this pattern, the application initiates asynchronous operations and a separate handler (the Proactor) deals with the completion of these operations. The Proactor pattern is useful in scenarios where the system needs to handle a high number of I/O operations, such as in network servers and high-performance computing applications.
**Main Elements of Proactor Pattern:**
1. **Initiator:** The component that initiates asynchronous operations.
2. **Asynchronous Operation:** The operations that are performed asynchronously, often related to I/O tasks.
3. **Completion Handler:** A callback or handler that is invoked when the asynchronous operation is complete.
4. **Proactor:** The dispatcher that manages the completion of asynchronous operations and invokes the appropriate handlers.
5. **Operation Result:** The data or result obtained after the completion of the asynchronous operation.
**References:**
1. [Proactor Design Pattern - Wikipedia](https://en.wikipedia.org/wiki/Proactor_pattern)
2. [Asynchronous I/O - Proactor Pattern](https://www.dre.vanderbilt.edu/~schmidt/PDF/proactor.pdf)
**Acceptance Criteria:**
1. The implementation should include a clear separation of the main elements: Initiator, Asynchronous Operation, Completion Handler, and Proactor.
2. A working example demonstrating the Proactor pattern in a real-world scenario, such as a network server handling multiple simultaneous connections.
3. Comprehensive unit tests to verify the correct behavior of each component in the pattern, ensuring robustness and reliability. | epic: pattern,type: feature | low | Major |
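A minimal sketch of the Proactor elements described above, using `CompletableFuture` to stand in for the asynchronous-operation machinery (all names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Proactor sketch: the initiator starts an asynchronous operation and returns
// immediately; the dispatcher invokes the completion handler with the result.
public class ProactorDemo {
    interface CompletionHandler { void onComplete(String result); }

    // Initiator: start the async operation; the executor plays the proactor's
    // role of running the operation and dispatching its completion.
    static CompletableFuture<String> initiate(String request, ExecutorService proactor,
                                              CompletionHandler handler) {
        return CompletableFuture
                .supplyAsync(() -> "response:" + request, proactor) // async operation
                .thenApply(result -> { handler.onComplete(result); return result; });
    }

    static String demo() {
        ExecutorService proactor = Executors.newSingleThreadExecutor();
        StringBuilder log = new StringBuilder();
        CompletableFuture<String> op = initiate("read", proactor, log::append);
        op.join(); // wait only so the demo is deterministic
        proactor.shutdown();
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // response:read
    }
}
```

The key property is that `initiate` does not block: the completion handler is called later, on the proactor's thread, when the operation result is ready.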
90,466,794 | java-design-patterns | Active Record pattern | ### Description
The Active Record design pattern is a common architectural pattern used to manage database records. It simplifies data access by encapsulating the database logic within a model class. Each instance of the model corresponds to a row in the database, and the model class includes methods for CRUD (Create, Read, Update, Delete) operations.
#### Main Elements of Active Record Pattern:
1. **Model Class**: Represents a table in the database. Each instance represents a single row.
2. **CRUD Operations**: Methods for Create, Read, Update, and Delete operations are defined within the model.
3. **Database Connection**: The model class manages the database connection.
4. **Simple Queries**: Basic querying capabilities are encapsulated within the model.
### References
- [Active Record Pattern - Martin Fowler](https://martinfowler.com/eaaCatalog/activeRecord.html)
- [Java Design Patterns Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
- [Active Record Wikipedia](https://en.wikipedia.org/wiki/Active_record_pattern)
### Acceptance Criteria
1. A new model class should be created, representing a database table.
2. The model class should include methods for basic CRUD operations.
3. The implementation should adhere to the project’s contribution guidelines and include relevant documentation and unit tests. | info: help wanted,epic: pattern,type: feature,status: stale | medium | Major |
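A minimal Active Record sketch of the elements above. An in-memory map stands in for the database table so the example stays self-contained; a real implementation would manage a JDBC connection instead (all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Active Record sketch: the model class owns its own CRUD methods.
// The static map plays the role of the "customers" table.
public class CustomerRecord {
    private static final Map<Long, String> TABLE = new HashMap<>();
    private static final AtomicLong SEQUENCE = new AtomicLong();

    Long id;     // row identity
    String name; // column

    CustomerRecord(String name) { this.name = name; }

    // Create / Update: insert a new row or overwrite the existing one.
    void save() {
        if (id == null) id = SEQUENCE.incrementAndGet();
        TABLE.put(id, name);
    }

    // Read: load one row by primary key, or null if absent.
    static CustomerRecord find(long id) {
        String name = TABLE.get(id);
        if (name == null) return null;
        CustomerRecord r = new CustomerRecord(name);
        r.id = id;
        return r;
    }

    // Delete: remove this instance's row.
    void delete() { TABLE.remove(id); }

    public static void main(String[] args) {
        CustomerRecord alice = new CustomerRecord("Alice");
        alice.save();                            // INSERT
        alice.name = "Alice B."; alice.save();   // UPDATE
        System.out.println(find(alice.id).name); // Alice B.
        alice.delete();                          // DELETE
        System.out.println(find(alice.id));      // null
    }
}
```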
90,471,713 | java-design-patterns | Facet pattern | ### Description:
The Facet design pattern is used to represent different aspects or views of an object, allowing an object to expose multiple interfaces that reflect different facets of its functionality. This pattern is useful in situations where an object needs to be interacted with in various ways depending on the context or role.
**Main Elements of the Facet Design Pattern:**
1. **Core Object**: The primary object that contains the core data and functionality.
2. **Facet Interfaces**: Interfaces that represent different aspects or views of the core object. Each interface provides a specific subset of the core object's functionality.
3. **Facet Implementations**: Concrete classes that implement the facet interfaces, often acting as wrappers around the core object to provide the necessary views or functionalities.
### References:
- [Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
- [Facet - A Pattern for Dynamic Interfaces](https://www.hillside.net/plop/plop2002/final/plop2002_ecrahen0_0.pdf)
### Acceptance Criteria:
1. A core object should be identified or created that contains the primary data and functionality.
2. Multiple facet interfaces should be defined, each representing a different view or aspect of the core object.
3. Concrete classes implementing the facet interfaces should be created, providing the necessary views or functionalities by interacting with the core object.
4. Unit tests should be written to ensure that the facet interfaces and their implementations correctly interact with the core object and provide the expected functionalities.
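A minimal sketch of the three elements above: one core object, two facet interfaces, and facet implementations that wrap the core (all names are illustrative, not a prescribed design):

```java
// Facet sketch: a single core object exposes narrow, role-specific
// interfaces so each client sees only the aspect it needs.
public class FacetDemo {
    interface ReadFacet { String read(); }             // viewer's aspect
    interface WriteFacet { void write(String value); } // editor's aspect

    // Core object: holds the real state and the full functionality.
    static class Document {
        private String content = "";
        String content() { return content; }
        void setContent(String c) { content = c; }

        // Facet implementations wrap the core and expose a single aspect each.
        ReadFacet asReader() { return this::content; }
        WriteFacet asWriter() { return this::setContent; }
    }

    static String demo() {
        Document doc = new Document();
        WriteFacet editor = doc.asWriter(); // editor role: can only write
        ReadFacet viewer = doc.asReader();  // viewer role: can only read
        editor.write("draft v1");
        return viewer.read();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // draft v1
    }
}
```

Handing a client a `ReadFacet` rather than the `Document` itself is what keeps the facets honest: the restricted view is enforced by the type system, not by convention.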
| epic: pattern,type: feature | low | Major |
90,557,984 | go | tour: add a thorough discussion of the io interfaces | We ask the reader to implement an `io.Reader`, but it doesn't actually read from anything. It can be hard to see why this is a "Reader", or indeed what a "Reader" is. I think we need more detail on this topic.
| help wanted | low | Minor |
90,644,323 | youtube-dl | f4m and hls using the external downloader | I'm asking if it is possible to make the f4m and hls downloaders fetch their segments using the external downloader when the --external-downloader option is used.
As I see in the code, F4mFD uses HttpFD, and NativeHlsFD uses compat_urllib_request.
It would even be possible to share the downloading part between HlsFD and NativeHlsFD; the only difference is how the segments are combined.
I made a simple script to download m3u8 segments and create an m3u8 video file that works with the local parts (you can play it without combining the parts).
It is also simple to combine the files: just pass the resulting m3u8 file as the input to ffmpeg.
It depends on the m3u8 module and aria2, but it's possible to make it work with other downloaders.
[M3U8 downloader](https://gist.github.com/remitamine/2e0cbef7d50e8e7ef2d2)
| request | low | Major |
90,750,723 | go | x/review/git-codereview: provide better error message when upstream is not set | When a branch's upstream is not set, the error message is cryptic. This just happened to me as a consequence of switching branches before mailing a CL. Here's a sample bash transcript to illustrate the issue and to provide a fix for any issue spelunkers who find this in the meantime.
``` bash
$ # do work
$ git change branch1
$ # decide branch2 is a better name
$ git checkout -b branch2
$ git branch -D branch1
$ git mail
git rev-parse --abbrev-ref branch2@{u}
fatal: ambiguous argument 'branch2@{u}': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
branch2@{u}
git-codereview: exit status 128
$ # fix the problem
$ git branch --set-upstream master branch2
$ git mail
remote: Processing changes: new: 1, done
...
```
| NeedsInvestigation | low | Critical |
90,780,088 | go | x/image/tiff: index out of range | The following program:
``` go
package main
import (
"bytes"
"golang.org/x/image/tiff"
"io/ioutil"
"os"
)
func main() {
data, _ := ioutil.ReadFile(os.Args[1])
tiff.Decode(bytes.NewReader(data))
}
```
on this file:
https://drive.google.com/file/d/0B20Uwp8Hs1oCdDhyRzAwbE5qc2M/view?usp=sharing
crashes as follows:
```
panic: runtime error: index out of range
goroutine 1 [running]:
io/ioutil.readAll.func1(0xc8200436c0)
src/io/ioutil/ioutil.go:30 +0x11e
golang.org/x/image/tiff/lzw.(*decoder).decode(0xc8200b1300)
src/golang.org/x/image/tiff/lzw/reader.go:187 +0x75b
golang.org/x/image/tiff/lzw.(*decoder).Read(0xc8200b1300, 0xc8200cb001, 0xdff, 0xdff, 0x201, 0x0, 0x0)
src/golang.org/x/image/tiff/lzw/reader.go:141 +0x19e
bytes.(*Buffer).ReadFrom(0xc820043618, 0x7f23a2e99518, 0xc8200b1300, 0x1001, 0x0, 0x0)
src/bytes/buffer.go:173 +0x23f
io/ioutil.readAll(0x7f23a2e99518, 0xc8200b1300, 0x200, 0x0, 0x0, 0x0, 0x0, 0x0)
src/io/ioutil/ioutil.go:33 +0x154
io/ioutil.ReadAll(0x7f23a2e99518, 0xc8200b1300, 0x0, 0x0, 0x0, 0x0, 0x0)
src/io/ioutil/ioutil.go:42 +0x51
golang.org/x/image/tiff.Decode(0x7f23a2e992d8, 0xc820018420, 0x7f23a2e99438, 0xc820012480, 0x0, 0x0)
src/golang.org/x/image/tiff/reader.go:646 +0xf2a
main.main()
tiff.go:12 +0xf2
```
on commit eb11b45157c1b71f30b3cec66306f1cd779a689e
go version devel +3cab476 Sun Jun 21 03:11:01 2015 +0000 linux/amd64
| help wanted,NeedsInvestigation | low | Critical |
91,159,894 | go | x/image/tiff: invalid format: wrong number of samples for RGB | [This tiff file](https://drive.google.com/file/d/0BwZPvI8DfSonTi00TVVQa3J3TFU/view?usp=sharing) can't be decoded using "golang.org/x/image/tiff". It gives the following error message:
```
tiff: invalid format: wrong number of samples for RGB
```
Test program:
``` go
package main

import (
	"bytes"
	"io/ioutil"
	"os"

	"golang.org/x/image/tiff"
)

func main() {
	data, _ := ioutil.ReadFile(os.Args[1])
	_, err := tiff.Decode(bytes.NewReader(data))
	if err != nil {
		println(err.Error())
	}
}
```
A cgo binding to libtiff decodes this file without problems.
go version
```
go version go1.4.2 linux/amd64
```
tiffinfo
```
TIFF Directory at offset 0x83d68 (540008)
Image Width: 300 Image Length: 450
Bits/Sample: 8
Sample Format: unsigned integer
Compression Scheme: None
Photometric Interpretation: RGB color
Samples/Pixel: 4
Rows/Strip: 1
Planar Configuration: separate image planes
```
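The tiffinfo output shows Photometric Interpretation RGB with Samples/Pixel: 4. Per the TIFF 6.0 spec that combination is legal when the fourth component is declared via the ExtraSamples tag (typically alpha), so a strict "RGB means exactly 3 samples" check rejects valid files like this one. A minimal sketch of the more permissive check (`rgbSamplesOK` is a hypothetical helper, not the library's API):

``` go
package main

import "fmt"

// rgbSamplesOK sketches the check implied by the report: baseline RGB is
// 3 samples per pixel, but the TIFF 6.0 ExtraSamples tag legitimately
// raises that count (e.g. to 4 for RGB + alpha), so "== 3" is too strict.
func rgbSamplesOK(samplesPerPixel, extraSamples int) bool {
	return samplesPerPixel == 3+extraSamples
}

func main() {
	fmt.Println(rgbSamplesOK(3, 0)) // plain RGB: ok
	fmt.Println(rgbSamplesOK(4, 1)) // RGB + one declared extra (alpha) sample: ok
	fmt.Println(rgbSamplesOK(4, 0)) // 4 samples, none declared extra: invalid
}
```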
| NeedsInvestigation | low | Critical |
91,278,509 | go | x/image/draw: color space-correct interpolation | The BiLinear and CatmullRom interpolators assume the luminance of the source image's color space is linear. But the vast majority of images are in the sRGB color space, which has a highly non-linear luminance curve. As a result, scaled images can look quite different from the source image.
For example, on a reasonably calibrated monitor (and at 100% scaling in the browser), the left and right columns of the following image have the same average luminance:

But scaling this down by a factor of 2 using x/image/draw yields the following:

Here, the two columns appear very different.
For an in-depth (possibly too in-depth) discussion of this problem and many test images, including the one I used above, see http://www.4p8.com/eric.brasseur/gamma.html. My test program is here: https://gist.github.com/aclements/599107a2e3f187f8a2c0.
A lot of software (including browsers) gets this wrong, and that's really unfortunate. It would be fantastic if Go software using x/image got this right out of the box.
Probably the best solution to this would be to thread color space information throughout the image library. At the other extreme, given the general recommendation to assume sRGB in the absence of other information (since virtually every image created in the past two decades is sRGB), it may make sense to simply assume sRGB when interpolating. We could also do the latter first and then later add more complete color space support, with the default being sRGB. Another option is to add this information to the x/image/draw.Options structure, though I fear that may interfere with later adding proper color space support to image.
/cc @nigeltao. (We discussed this a few weeks ago in person, but I figured I should open an issue so it doesn't get lost.)
| Suggested | medium | Major |
91,319,948 | go | runtime: document P states | P states follow complex and largely undocumented rules. https://go-review.googlesource.com/10158 took a stab at documenting one of these rules, but it was just a band-aid on a bigger problem. @rsc wrote a big explanation, but Gerrit ate it. We should document these properly.
| Documentation,NeedsInvestigation,compiler/runtime | low | Minor |
91,429,687 | youtube-dl | Add support for HLS WebVTT subtitles | JSON dumps of CICGC URLs already include a link to sliced English subtitles in WebVTT format that could easily be downloaded using ffmpeg. It would be nice if the ComCarCoff extractor were able to detect these.
| subtitles | low | Major |
91,444,575 | youtube-dl | PMC Add Support | example URL:
http://pmc.tv/pouya-nadidanet-sakhte/
| site-support-request | low | Minor |
91,516,092 | youtube-dl | Request: Full Character Support under Compliant Filesystems | When downloading videos whose titles contain characters unsupported under NTFS and similarly restrictive filesystems — **? < > \ : \* | "** — those characters are replaced with safe substitutes. On many non-Windows systems this handling can be disregarded, since only the fringe cases of **NUL** and **/** are genuinely reserved, and skipping it may be desirable when cataloging videos under filenames true to their titles. Such a change could be implemented via a flag, proposedly _--original-title_.
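A minimal sketch of the proposed behaviour (`sanitizeFilename` and the `preserveTitle` parameter are hypothetical, illustrating the request rather than youtube-dl's actual implementation): with the flag set, only characters reserved on every filesystem are replaced; without it, the full NTFS-unsafe set is substituted as today.

``` go
package main

import (
	"fmt"
	"strings"
)

// sanitizeFilename sketches the proposed --original-title behaviour.
// With preserveTitle, only NUL and '/' (reserved everywhere) are
// replaced; otherwise the full NTFS-unsafe set is substituted.
func sanitizeFilename(title string, preserveTitle bool) string {
	const ntfsUnsafe = `\/:*?"<>|`
	return strings.Map(func(r rune) rune {
		if r == 0 || r == '/' {
			return '_' // reserved on every filesystem
		}
		if !preserveTitle && strings.ContainsRune(ntfsUnsafe, r) {
			return '_' // reserved on NTFS and similar
		}
		return r
	}, title)
}

func main() {
	fmt.Println(sanitizeFilename(`Who? What: When | Where`, false))
	fmt.Println(sanitizeFilename(`Who? What: When | Where`, true))
}
```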
| request | low | Major |
91,665,192 | go | x/net/webdav: TestDir fails on Plan 9 | See http://build.golang.org/log/86c5f54b2e864b4a89f8756c4c069739fb314cc9
```
--- FAIL: TestDir (0.00s)
file_test.go:501: test case #7 "mk-dir /a want errExist": got "ok" (<nil>), want "errExist"
FAIL
FAIL golang.org/x/net/webdav 1.019s
```
| Testing,OS-Plan9,NeedsInvestigation | low | Major |