id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
106,562,223 | neovim | wrapscan could remember from where it started | I work with logs of over 1,000,000 lines, and when doing a search I sometimes get lost, not knowing whether I am looking at new results or ones I have already seen.
It would be useful if wrapscan could somehow warn when the search wraps around and starts again, as discussed in http://vim.1045645.n5.nabble.com/How-to-make-search-with-wrapscan-more-convenient-td5713480.html.
(Also, with big files the search takes so long that sometimes you don't know whether it failed or is still looking; it would be great if the statusbar could display a "SEARCHING" token, but I saw that big-file handling will be revamped in later versions.)
| enhancement | low | Critical |
106,567,343 | go | encoding/xml: brittle support for matching a namespace by identifier or url | The issue is that I believe a struct tag's namespace should be matchable by either the xmlns identifier (prefix) or the URL.
To shed some light on the issue, consider an RSS feed parser that deals with namespaces from a variety of definitions. You can expect a few different kinds of xmlns definitions for the same type of structure; e.g., for mRSS feeds in the wild that use the "media" namespace, you will find:
1. Xmlns wasn't defined, but the namespace was used (i.e., for mRSS with the media namespace)
2. Xmlns was defined as `xmlns:media="http://search.yahoo.com/mrss/"`
3. Xmlns was defined as `xmlns:media="http://search.yahoo.com/mrss"`
I noticed that encoding/xml tracks the xmlns declarations in a map keyed by URL and matches the struct tags against the URL. The issue, of course, is with 2 and 3, where the difference of a single "/" throws off the parser.
I wrote a fix (including tests) using Go 1.5.1's encoding/xml code: https://github.com/pkieltyka/xml/commit/7ad1fab466ec10f0fe7e47a36050b1956ac8bedb
Consider a partial parser for the media rss module:
``` go
type Media struct {
Title Title `xml:"media title"`
Description Description `xml:"media description"`
Thumbnails []Thumbnail `xml:"media thumbnail"`
Contents []Content `xml:"media content"`
MediaGroups []Group `xml:"media group"`
}
```
Notice the use of the namespace prefix in the struct tag instead of the namespace URL. If xmlns:media="URL" was defined in the original document, the parser would expect to match it by the URL; IMO it should check both the prefix and the URL of the namespace. I'm reporting this issue and will submit the fix separately; thanks for the consideration.
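For illustration only (this sketch is mine, not part of the original report), the mismatch can be reproduced with stock encoding/xml, which compares the tag's "media" against the resolved namespace URL rather than treating it as a prefix:
``` go
package main

import (
	"encoding/xml"
	"fmt"
)

// Item uses the prefix form of the tag; stock encoding/xml compares it
// against the resolved namespace URL, so it never matches.
type Item struct {
	Title string `xml:"media title"`
}

func main() {
	const doc = `<item xmlns:media="http://search.yahoo.com/mrss/"><media:title>hello</media:title></item>`
	var it Item
	if err := xml.Unmarshal([]byte(doc), &it); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Printf("Title=%q\n", it.Title) // prints "", since the prefix never matches the URL
}
```
Changing the tag to `xml:"http://search.yahoo.com/mrss/ title"` fixes case 2 but still leaves case 3 (no trailing slash) broken, which is exactly the brittleness being reported.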
| NeedsInvestigation | low | Major |
106,613,549 | go | cmd/compile: turn nilcheckelim into branchelim | nilcheckelim is effective and fast, but only eliminates nil checks. There are other kinds of duplicate branches that it would be nice to also elide.
For example, consider code like:
``` go
func f_ssa(e interface{}) int {
if e.(int) < 0 {
return -1
}
return e.(int) * 2
}
```
Each of the type assertions will generate a branch to check e's dynamic type, but the subsequent checks are unnecessary, and could be eliminated with very similar logic to nilcheckelim. Code like this, in which an interface is type asserted repeatedly, shows up in the compiler, so it is not theoretical.
I'd expect the souped-up optimization to also handle (silly) code like:
``` go
func f_ssa(i int) int {
var x int
if i < 3 {
x += 1
}
if i < 3 {
x += 2
}
return x
}
```
(The compiled code should only have one branch.)
/cc @randall77 @tzneal @dr2chase
| Performance,NeedsFix,compiler/runtime | low | Minor |
106,638,182 | opencv | cv::CascadeClassifier::detectMultiScale detection differences from OpenCV 2.4.11 => 3.0.0 | I see a large difference in detection output between OpenCV versions 2.4.11 and 3.0.0 on Android using a custom CascadeClassifier (Haar features) with the same input image and parameters -- I see 29 and 50 detections respectively. This is using the outputRejectLevel variant of cv::CascadeClassifier::detectMultiScale. I assume this is a bug and can investigate the cause, but I wanted to check here for known issues before starting my search.
Update: I believe the difference is due to the `if( classifier->data.stages.size() + result == 0 )` vs `if( classifier->data.stages.size() + result < 4 )` change below:
opencv 3.0.0
``` cpp
int result = classifier->runAt(evaluator, Point(x, y), scaleIdx, gypWeight);
if( rejectLevels )
{
if( result == 1 )
result = -(int)classifier->data.stages.size();
if( classifier->data.stages.size() + result == 0 ) // <====== HERE
{
mtx->lock();
rectangles->push_back(Rect(cvRound(x*scalingFactor),
cvRound(y*scalingFactor),
winSize.width, winSize.height));
rejectLevels->push_back(-result);
levelWeights->push_back(gypWeight);
mtx->unlock();
}
}
else if( result > 0 )
{
mtx->lock();
rectangles->push_back(Rect(cvRound(x*scalingFactor),
cvRound(y*scalingFactor),
winSize.width, winSize.height));
mtx->unlock();
}
if( result == 0 )
x += yStep;
}
```
opencv 2.4.X:
``` cpp
if( classifier->data.stages.size() + result < 4 ) // <====== HERE
{
mtx->lock();
rectangles->push_back(Rect(cvRound(x*scalingFactor), cvRound(y*scalingFactor), winSize.width, winSize.height));
rejectLevels->push_back(-result);
levelWeights->push_back(gypWeight);
mtx->unlock();
}
```
@Dikay900 : Any thoughts on this?
| bug,priority: normal,category: objdetect,affected: 3.4 | low | Critical |
106,680,844 | go | x/tools/cmd/present: browser fullscreen mode wish | On large displays present shows almost [half of the next slide](http://s.natalian.org/2015-09-16/1442368126_1918x1058.png), which can be distracting. Could there be a fullscreen mode, please?
| NeedsFix,Tools | medium | Major |
106,681,002 | go | x/tools/cmd/present: html output command | I know you can "Save as..." the presentation from the browser, but it would be nice to have a flag that outputs HTML to a directory for archival purposes.
| Tools | low | Minor |
106,718,926 | go | runtime: memory and performance degradation | Below is the program; I am running it with 1.4 and current tip. There are significant regressions in binary size, execution time, and memory consumption.
Binary size on 1.4 is 3581368 bytes; on tip it is 4096280 bytes.
Below are results of running it with `TIME="%e %M" time`:
```
1.4
4.04 3035492
4.38 3035496
4.66 3035500
4.51 3035500
4.42 3035504
4.34 3035500
4.20 3035496
3.87 3035496
4.07 3035496
4.15 3035504
4.28 3035492
tip
4.93 3009044
5.30 4910668
5.49 2978652
5.05 3929244
5.86 2980032
5.91 3929368
5.24 2980052
5.26 3929196
5.64 2980976
5.60 2979944
5.15 3929228
5.36 3929224
4.97 2980096
6.36 3929472
4.77 3929172
```
1.4 reliably consumes 3GB, while 1.5 can consume 3GB or 4GB or 5GB.
There also seems to be a performance regression of about 30%.
Memory consumption instability and variance seem to be the most troublesome.
tip should not consume significantly more than 1.4.
``` go
package gob
import (
"bytes"
"encoding/gob"
"fmt"
"io"
"reflect"
"testing"
"github.com/dvyukov/go-fuzz/examples/fuzz"
)
type X struct {
A int
B string
C float64
D []byte
E interface{}
F complex128
G []interface{}
H *int
I **int
J *X
K map[string]int
}
func init() {
gob.Register(X{})
}
func TestT(t *testing.T) {
data := "#\xff\x99\x03\x01\x01\x03RT\x19\x01\xff\x9a\x00\x01\xfb\x00\aA\x01" +
"\x04\x00\x01\x01B\x01\f\x00\x01\x01C\x01\b\x00\x00\x00\x16\xff\x9a\x01" +
"\"\x01\x05hello\x01\u007f\xff\xff\xff\xf0\xf9!\t@\x00"
Fuzz([]byte(data))
}
func Fuzz(data []byte) int {
score := 0
for _, ctor := range []func() interface{}{
func() interface{} { return nil },
func() interface{} { return new(int) },
func() interface{} { return new(string) },
func() interface{} { return new(float64) },
func() interface{} { return new([]byte) },
func() interface{} { return new(interface{}) },
func() interface{} { return new(complex128) },
func() interface{} { m := make(map[int]int); return &m },
func() interface{} { m := make(map[string]interface{}); return &m },
func() interface{} { return new(X) },
} {
v := ctor()
dec := gob.NewDecoder(bytes.NewReader(data))
if dec.Decode(v) != nil {
continue
}
dec.Decode(ctor())
score = 1
if ctor() == nil {
continue
}
b1 := new(bytes.Buffer)
if err := gob.NewEncoder(b1).Encode(v); err != nil {
panic(err)
}
v1 := reflect.ValueOf(ctor())
err := gob.NewDecoder(bytes.NewReader(data)).DecodeValue(v1)
if err != nil {
panic(err)
}
if !fuzz.DeepEqual(v, v1.Interface()) {
fmt.Printf("v0: %#v\n", reflect.ValueOf(v).Elem().Interface())
fmt.Printf("v1: %#v\n", v1.Elem().Interface())
panic(fmt.Sprintf("values not equal %T", v))
}
b2 := new(bytes.Buffer)
err = gob.NewEncoder(b2).EncodeValue(v1)
if err != nil {
panic(err)
}
v2 := ctor()
dec1 := gob.NewDecoder(b1)
if err := dec1.Decode(v2); err != nil {
panic(err)
}
if err := dec1.Decode(ctor()); err != io.EOF {
panic(err)
}
if vv, ok := v.(*X); ok {
fix(vv)
}
if !fuzz.DeepEqual(v, v2) {
fmt.Printf("v0: %#v\n", reflect.ValueOf(v).Elem().Interface())
fmt.Printf("v2: %#v\n", reflect.ValueOf(v2).Elem().Interface())
panic(fmt.Sprintf("values not equal 2 %T", v))
}
}
return score
}
func fix(vv *X) {
// See https://github.com/golang/go/issues/11119
if vv.I != nil && (*vv.I == nil || **vv.I == 0) {
// If input contains "I:42 I:null", then I will be in this weird state.
// It is effectively nil, but DeepEqual does not handle such case.
vv.I = nil
}
if vv.H != nil && *vv.H == 0 {
vv.H = nil
}
if vv.J != nil {
fix(vv.J)
}
}
```
go version devel +a1aafdb Tue Sep 15 16:12:59 2015 +0000 linux/amd64
| compiler/runtime | low | Major |
106,760,565 | opencv | `OutputArray` and `Mat` interpret `std::vector` differently | The following code
``` cpp
std::vector<cv::Point2f> points;
for (int i = 0; i < 10; ++i) points.push_back(cv::Point2f(i, i));
cv::Mat a(points);
cv::OutputArray b(points), c(a);
std::cerr << a.size() << b.size() << c.size() << std::endl;
```
This outputs `[1 x 10][10 x 1][1 x 10]`, which is inconsistent and has caused some potential bugs.
| category: core,RFC,future | low | Critical |
106,860,640 | go | x/mobile: Cannot use GL calls inside Go library | Hi,
I have a Java Android application that uses a Go library to do some OpenGL rendering. I have configured something similar to the [bind](https://github.com/golang/mobile/tree/master/example/bind/android) example.
I have a Go library that contains functions that execute some GL calls. In the Java app, I have a GLSurfaceView, and inside the renderer I call into the Go library; however, nothing gets rendered.
Furthermore, my phone would freeze up almost totally at times, at which point I need to power off the screen, turn it on, unlock the app, and shut it down through the recent-apps option.
My Go code looks as follows:
``` go
package lib
import (
"golang.org/x/mobile/gl"
)
func OnCreate() {
gl.ClearColor(0.2, 0.2, 0.6, 1.0)
}
func OnChanged(width, height int) {
gl.Viewport(0, 0, width, height)
}
func OnDraw() {
gl.Clear(gl.COLOR_BUFFER_BIT)
}
func GetString() string {
return "Hello World"
}
```
and my Renderer class looks as follows:
``` java
package com.demotexbg.thugpigeon;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import go.lib.Lib;
public class GameActivityRenderer implements GLSurfaceView.Renderer {
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
Lib.OnCreate();
System.out.println("Initializing: " + Lib.GetString());
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
Lib.OnChanged(width, height);
}
@Override
public void onDrawFrame(GL10 gl) {
Lib.OnDraw();
}
}
```
The "Initializing: Hello World" is printed to LogCat but rendering does not occur. If I replace all the `Lib` calls with the code they represent, then I get a proper render to the screen.
I am using the following tools:
- Min / Compile / Target SDK Version: 16
- Java: 1.7
- Android Studio: 1.3.2
- Go: go1.5.1 darwin/amd64
- Mobile Gradle plugin: org.golang.mobile.bind:0.2.3 (I tried 0.2.2 as well)
- Android Build Tools: 23.0.1
Is there something special that I need to do to have all my OpenGL calls be managed by the Go library? I feel that something thread-related has changed since Go 1.5 Dev, where the same scenario would work.
Regards,
Momchil Atanasov
| mobile | low | Minor |
106,932,375 | go | cmd/asm: unactionable "invalid local variable type 0" | cmd/asm produces the following error message on the program:
```
TEXT T(SB),$0
TYPE
```
```
asm: T: invalid local variable type 0
```
The error message should contain a file name and line number; otherwise it is unactionable.
go version devel +5512ac2 Wed Sep 16 17:56:14 2015 +0000 linux/amd64
| compiler/runtime | low | Critical |
106,959,046 | youtube-dl | URL should be normalized before trying to match with regex | e.g.
http://tp.srgssr.ch/p/portal?urn=urn%3Asrf%3Aais%3Avideo%3A3fd90c9f-507a-4a79-b6b6-ac55f746b4d4&autoplay=true&legacy=true&width=640&height=360&playerType=
doesn't match, because
urn%3Asrf%3Aais%3Avideo%3A3fd90c9f-507a-4a79-b6b6-ac55f746b4d4
isn't normalized to
urn:srf:ais:video:3fd90c9f-507a-4a79-b6b6-ac55f746b4d4
which would match.
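As an illustrative sketch (in Go, not youtube-dl's actual Python code), the normalization step amounts to percent-decoding the query value before the regex is applied:
``` go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	raw := "urn%3Asrf%3Aais%3Avideo%3A3fd90c9f-507a-4a79-b6b6-ac55f746b4d4"
	// QueryUnescape turns %3A back into ":" so the urn matches the extractor regex.
	decoded, err := url.QueryUnescape(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // urn:srf:ais:video:3fd90c9f-507a-4a79-b6b6-ac55f746b4d4
}
```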
| geo-restricted | low | Minor |
106,994,220 | kubernetes | Event recorder should enforce API conventions for Event Reason | As code produces events, we are not always enforcing our own API conventions for EventNames.
An example PR where we had to make a change:
https://github.com/kubernetes/kubernetes/pull/14110
We should enforce proper event naming conventions as defined here:
https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#events
The best way to stop this in the future is for the Event recorder to error/warn when an invalid event name is provided.
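A minimal sketch of such a guard (illustrative only; the regex, package, and function name are assumptions rather than actual Kubernetes code):
``` go
package record

import (
	"fmt"
	"regexp"
)

// reasonRE approximates the documented convention that an event reason is a
// short, machine-understandable UpperCamelCase string.
var reasonRE = regexp.MustCompile(`^[A-Z][A-Za-z0-9]*$`)

// validateReason lets the recorder error/warn at the call site when a reason
// violates the convention.
func validateReason(reason string) error {
	if !reasonRE.MatchString(reason) {
		return fmt.Errorf("event reason %q is not UpperCamelCase", reason)
	}
	return nil
}
```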
| priority/backlog,sig/api-machinery,lifecycle/frozen | medium | Critical |
107,051,318 | youtube-dl | running make introduces diff | Running make introduces a diff in files that are in the repository:
```
README.md | 88 ++++++++++++++++++++++++++----------------------
docs/supportedsites.md | 17 ++++++----
2 files changed, 56 insertions(+), 49 deletions(-)
```
I think this should not happen: either the diff should be committed to the repo, or make is doing something wrong.
| bug,documentation | low | Minor |
107,089,485 | go | x/build: collect stats, logs | The coordinator should record per-build logs, and also keep statistics about timing information in BigQuery or something.
Then we can graph build times, test times, individual test times, machine bootup times, etc., all over time.
Related: #12668 (shootout tests take 30 mins on arm)
| Builders | low | Major |
107,181,009 | opencv | Tests OCL_Filter/Erode.Mat (Dilate) fail on Android x86 | OS: Android 4.4 x86
Failed tests:
OCL_Filter/Dilate.Mat/100
OCL_Filter/Dilate.Mat/101
OCL_Filter/Dilate.Mat/124
OCL_Filter/Dilate.Mat/125
OCL_Filter/Erode.Mat/100
OCL_Filter/Erode.Mat/101
OCL_Filter/Erode.Mat/124
OCL_Filter/Erode.Mat/125
Common output ("actual" value differs from test to test):
[ RUN ] OCL_Filter/Erode.Mat/125
/localdisk/j/workspace/cv/src/modules/imgproc/test/ocl/test_filters.cpp:121: Failure
Expected: (TestUtils::checkNormRelative(dst_roi, udst_roi)) <= (threshold), actual: 0.806538 vs 1e-06
Size: [21 x 15]
[ FAILED ] OCL_Filter/Erode.Mat/125, where GetParam() = (32FC4, 0, 0x0, BORDER_CONSTANT, 3, true, 1) (336 ms)
These tests pass on an x64 Android 5.1 emulator.
| bug,priority: normal,platform: android,affected: 3.4 | low | Critical |
107,185,606 | opencv | Tests in UMat group fail on Haswell | OS: Win7 x86/x64, Win8.1 x86/x64, Win10 x64, CentOS x64
Processor: Intel Core i7 5th generation
Failed tests:
UMat/UMatBasicTests.copyTo/19
UMat/UMatBasicTests.copyTo/9
UMat/UMatTestReshape.reshape/149
UMat/UMatTestReshape.reshape/205
UMat/UMatTestReshape.reshape/37
UMat/UMatTestReshape.reshape/93
UMatBasicTests.copyTo common output:
[ RUN ] UMat/UMatBasicTests.copyTo/9
..\..\..\..\..\src\modules\core\test\test_umat.cpp(190): error: Expected: (TestUtils::checkNorm2(roi_ua, ua)) <= (0), actual: 99 vs 0
Size: [601 x 371]
[ FAILED ] UMat/UMatBasicTests.copyTo/9, where GetParam() = (0, 1, 640x480, true) (17 ms)
UMatTestReshape.reshape common output:
[ RUN ] UMat/UMatTestReshape.reshape/149
..\..\..\..\..\src\modules\core\test\test_umat.cpp(317): error: Expected: (TestUtils::checkNorm2(ua.reshape(nChannels), a.reshape(nChannels))) <= (0), actual: 99 vs 0
Size: [601 x 371]
..\..\..\..\..\src\modules\core\test\test_umat.cpp(350): error: Expected: (TestUtils::checkNorm2(ua.reshape(nChannels, ua.dims, sz), a.reshape(nChannels, a.dims, sz))) <= (0), actual: 99 vs 0
Size: [222971 x 1]
[ FAILED ] UMat/UMatTestReshape.reshape/149, where GetParam() = (2, 3, 640x480, true) (22 ms)
| bug,priority: normal,affected: 3.4,category: t-api | low | Critical |
107,186,908 | opencv | UMatBasicTests.GetUMat tests fail on Win10 | Win10 x64 on a 5th-generation Intel Core i7
Failed tests:
UMat/UMatBasicTests.GetUMat/6
UMat/UMatBasicTests.GetUMat/7
UMat/UMatBasicTests.GetUMat/8
UMat/UMatBasicTests.GetUMat/9
UMat/UMatBasicTests.GetUMat/16
UMat/UMatBasicTests.GetUMat/17
UMat/UMatBasicTests.GetUMat/18
UMat/UMatBasicTests.GetUMat/19
UMat/UMatBasicTests.GetUMat/26
UMat/UMatBasicTests.GetUMat/27
UMat/UMatBasicTests.GetUMat/28
UMat/UMatBasicTests.GetUMat/29
UMat/UMatBasicTests.GetUMat/36
UMat/UMatBasicTests.GetUMat/37
UMat/UMatBasicTests.GetUMat/38
UMat/UMatBasicTests.GetUMat/39
Common output for all failed tests:
[ RUN ] UMat/UMatBasicTests.GetUMat/28
unknown file: error: C++ exception with description "..\..\..\..\..\src\modules\core\src\ocl.cpp:4520: error: (-215) u->mapcount == 0 in function cv::ocl::OpenCLAllocator::deallocate" thrown in the test body.
[ FAILED ] UMat/UMatBasicTests.GetUMat/28, where GetParam() = (6, 1, 640x480, false) (19 ms)
| bug,priority: normal,affected: 3.4,category: t-api | low | Critical |
107,238,391 | kubernetes | Possible future improvements for Job object | 1. (COMPLETE) set a deadline and kill the job after that deadline. This was called ActiveDeadlineSeconds in the original proposal. I argued for removing it for now and it was dropped during review. @bgrant0607 argues that we have tried (on Borg) offering users many options for managing/throttling failures, and none of them worked well for users except deadlines for non-serving jobs.
2. automatically throttle creation of new pods when failures are frequent.
3. distinguish between "application failures" and "cluster failures" (e.g. machine-level OOM, node failure) in reporting failures, and in throttling.
4. consider interaction of Pods of a Job with Vertical Autoscaler. That is, I should be able to start a job with an initial prediction of memory usage, and then if it OOMs, modify the request and retry.
5. ensure that successful pods are not automatically deleted before the Job finishes (because then the Job Manager will forget about the success). Either mark the pod as do-not-garbage-collect, or else record the success somewhere else.
6. Show relevant pod events in Job describe output.
7. Ensure graceful deletion works when reducing parallelism
8. Indexed Job feature: when each Pod is created, manager decides which "completion index" it is trying to do. Ensures one completion for each index from 0 to `completions - 1`. Expose that index via the downward API. Pods can pick up pod-specific options by e.g. indexing into bash-array env var. #14188
| priority/backlog,sig/apps,help wanted,area/workload-api/job,lifecycle/frozen | medium | Critical |
107,259,448 | rust | Nonparametric dropck (tracking issue for RFC 1238) | Tracking issue for rust-lang/rfcs#1238.
- [x] subtask: `#[may_dangle]` attribute #34761
- [ ] can't close this until we have _stable_ support for 3rd party collection types
cc @pnkfelix
| P-medium,T-lang,B-unstable,B-RFC-implemented,C-tracking-issue,S-tracking-design-concerns | low | Major |
107,300,730 | kubernetes | Refactor Volume Code | The `Volume` interface defined in `pkg/volume/volume.go` defines methods that are used for mounting/unmounting/attaching/detaching but also for volume creation/deletion. The former are used by the kubelet binary on the node, and the latter by the persistent volume controller on the master.
While we should keep all volume-related code in the same directory, we should consider refactoring to separate kubelet-only/master-only code into different packages and move common code into a shared dependent package.
| priority/backlog,kind/cleanup,sig/storage,help wanted,lifecycle/frozen | medium | Major |
107,317,276 | youtube-dl | site support requested for Muenchen.tv's live webcams | please add support for Muenchen.tv Oktoberfest webcams
their m3u8 Wowza URL keeps changing, so it would be great to be able to watch / record this via YTDL:
(this live stream is already working with YTDL https://www.muenchen.tv/livestream/ )
they have two additional live webcam streams for the next two weeks for Oktoberfest, one pointing towards the band, hopefully with sound:
https://www.muenchen.tv/mediathek/videolivestream/wiesn-webcam-im-hofbraeu-festzelt/
and one to the street in between beer tents and joy rides:
https://www.muenchen.tv/mediathek/videolivestream/wiesn-webcam-auf-die-wirtsbudenstrasse/
cheers
| site-support-request | low | Minor |
107,375,551 | TypeScript | Request: Class Decorator Mutation | If we can get this to type check properly, we would have perfect support for boilerplate-free mixins:
``` typescript
declare function Blah<T>(target: T): T & {foo: number}
@Blah
class Foo {
bar() {
return this.foo; // Property 'foo' does not exist on type 'Foo'
}
}
new Foo().foo; // Property 'foo' does not exist on type 'Foo'
```
| Suggestion,Needs Proposal | high | Critical |
107,477,907 | TypeScript | Tag types | ### Problem
- There is no straightforward way to encode a predicate (some checking) about a value in its type.
### Details
There are situations when a value has to pass some sort of check/validation prior to being used. For example, min/max functions can only operate on a non-empty array, so there must be a check that a given array has any elements. If we pass a plain array that might well be empty, then we need to account for that case inside the min/max functions by doing one of the following:
- crashing
- returning undefined
- returning a given default value
This way the calling side has to deal with the consequences of min/max being called yet not being able to deliver.
``` typescript
function min<a>(values: a[], isGreater: (one: a, another: a) => boolean) : a {
if (values.length < 1) { throw Error('Array is empty.'); }
// ...
}
var found;
try {
found = min([], (one, another) => one > another);
} catch (e) {
found = undefined;
}
```
### Solution
A better idea is to leverage the type system to rule out the possibility of the min function being called with an empty array. In order to do so we might consider so-called tag types.
A **tag type** is a qualifier type that indicates that some predicate holds true for the value it is associated with.
``` typescript
const enum AsNonEmpty {} // a tag type that encodes that a check for being non-empty was passed
function min<a>(values: a[] & AsNonEmpty) : a {
// only values that are tagged with AsNonEmpty can be passed to this function
// the type system is responsible for enforcing this constraint
// a guarantee by the type system that only non-empty array will be passed makes
// implementation of this function simpler because only one main case needs be considered
// leaving empty-case situations outside of this function
}
min([]); // <-- compile error, tag is missing, the argument is not known to be non-empty
min([1, 2]); // <-- compile error, tag is missing again, the argument is not known to be non-empty
```
It's up to the developer to decide in what circumstances an array gets its AsNonEmpty tag, which can be something like:
``` typescript
// guarntee is given by the server, so we always trust the data that comes from it
interface GuaranteedByServer {
values: number[] & AsNonEmpty;
}
```
Also tags can be assigned at runtime:
``` typescript
function asNonEmpty<a>(values: a[]) : (a[] & AsNonEmpty) | void {
return values.length > 0 ? <a[] & AsNonEmpty> values : undefined;
}
function isVoid<a>(value: a | void) : value is void {
return value == null;
}
var values = asNonEmpty(Math.random() > 0.5 ? [1, 2, 3] : []);
if (isVoid(values)) {
// there are no elements in the array, so we can't call min
var niceTry = min(values); // <-- compile error;
} else {
var found = min(values); // <-- ok, tag is there, safe to call
}
```
As was shown in the current version (1.6) an empty const enum type can be used as a marker type (AsNonEmpty in the above example), because
- enums might not have any members and yet be different from the empty type
- enums are branded (not assignable to one another)
However enums have their limitations:
- enum is assignable by numbers
- enum cannot hold a type parameter
- enum cannot have members
A few more examples of what tag type can encode:
- `string & AsTrimmed & AsLowerCased & AsAtLeast3CharLong`
- `number & AsNonNegative & AsEven`
- `date & AsInWinter & AsFirstDayOfMonth`
Custom types can also be augmented with tags. This is especially useful when the types are defined outside of the project and developers can't alter them.
- `User & AsHavingClearance`
**ALSO NOTE**: In a way tag types are similar to boolean properties (flags), BUT they get type-erased and carry no runtime overhead whatsoever, being a good example of a zero-cost abstraction.
**UPDATED:**
Also tag types can be used as units of measure in a way:
- `string & AsEmail`, `string & AsFirstName`:
``` typescript
var email = <string & AsEmail> '[email protected]';
var firstName = <string & AsFirstName> 'Aleksey';
firstName = email; // <-- compile error
```
- `number & In<Mhz>`, `number & In<Px>`:
``` typescript
var freq = <number & In<Mhz>> 12000;
var width = <number & In<Px>> 768;
freq = width; // <-- compile error
function divide<a, b>(left: number & In<a>, right: number & In<b> & AsNonZero) : number & In<Ratio<a, b>> {
return <number & In<Ratio<a, b>>> left / right;
}
var ratio = divide(freq, width); // <-- number & In<Ratio<Mhz, Px>>
```
| Discussion | high | Critical |
107,487,826 | You-Dont-Know-JS | "es6 & beyond": ch2, "tagged template literals", cover explicit multiline string literals | Illustrate a tagged template literal function that makes multiline string literals require explicit linebreaks instead of assuming them.
``` js
function multiline(strings,...values) {
	return Function(`return "${strings.raw.join("").replace(/\n/g,"") }"`)();
}
// TODO: fix this so it adds in the interpolated values
// (but does not strip linebreaks in those values!)
// TODO: also, don't use the `Function(..)` option; just find
// actual "\n" occurrences in the raw strings and make them
// into linebreaks
// WIP:
// function multiline(strings,...values) {
//     var strs = strings.raw.map(function mapper(v){
//         return v.replace(/\n/g,"")
//             .replace(/(^|[^\\]|(?:\\\\)+)((?:\\n)+)/g,function(match,p1,p2){
//                 return `${p1}${p2.replace(/\\n/g,"\n")}`;
//             });
//     });
//     return strs.join("");
// }
```
``` js
console.log( `hello
world` );
// hello
//
// world
```
``` js
console.log( multiline`hello
world` );
// hello world
```
``` js
console.log( multiline`hello\n
\n
world` );
// hello
//
// world
```
| for second edition | medium | Minor |
107,607,575 | TypeScript | "design:paramnames" metadata key | From https://github.com/Microsoft/TypeScript/pull/2589
> A few notes on metadata:
> - Type metadata uses the metadata key "design:type".
> - Parameter type metadata uses the metadata key "design:paramtypes".
> - Return type metadata uses the metadata key "design:returntype".
Can you please add support for a `"design:paramnames"` metadata key? It would return the design-time names of the arguments of a function.
I can convert a function to a string at runtime using [Function.prototype.toString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/toString) and then use a couple of regexes to get the parameter names:
``` js
STRIP_COMMENTS = /((\/\/.*$)|(\/\*[\s\S]*?\*\/))/mg;
ARGUMENT_NAMES = /([^\s,]+)/g;
```
But when a minifier/compressor is used the param names become `"a"`, `"b"` ...
I would like to have a way to access the design-time names of the arguments of a function at run-time even after compression.
| Suggestion,In Discussion,Domain: Decorators | medium | Major |
107,765,434 | youtube-dl | Avoid bandwidth throttling | Hello Ricardo,
I wondered if there's a way of implementing an option for circumventing a specific kind of bandwidth throttling.
Let me explain: I noticed quite a few providers, for example SoundCloud and Mixcloud, will serve the first few megabytes of a stream at maximum speed, but after this the speed drops to just the amount needed for streaming the served media at its bitrate.
This way it takes a long time to download certain media with youtube-dl. I used to circumvent this by stopping and resuming the download, for example 20 times for a 100 MB file, in case they serve the first 5 MB unthrottled.
Technically this works very well, but it's not very handy in practice. So I created my little helper script where I use youtube-dl to generate the download URL and let curl download the actual file, fetching the first XX-kB block, then the second XX-kB block, etc., using the Range header start and end values and a while loop.
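For illustration, a chunked range download along those lines might look like the following Go sketch (my own sketch with an assumed 5 MB unthrottled window; it presumes the server honors HTTP Range requests, and error handling for 416 responses is omitted):
``` go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchChunk downloads the byte range [start, end] with a fresh request, so
// each chunk gets the server's "first part fast" treatment.
func fetchChunk(rawurl string, start, end int64) ([]byte, error) {
	req, err := http.NewRequest("GET", rawurl, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	const chunk = 5 << 20 // assumption: roughly the first 5 MB of each request are unthrottled
	out, err := os.Create("out.bin")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	for off := int64(0); ; off += chunk {
		data, err := fetchChunk(os.Args[1], off, off+chunk-1)
		if err != nil || len(data) == 0 {
			break
		}
		out.Write(data)
		if int64(len(data)) < chunk {
			break // short read means the end of the file was reached
		}
	}
}
```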
That script-based approach works out great for me, but I wonder if it would be very difficult to implement this in the downloader classes of youtube-dl. An optional switch to activate this 'chunk-downloading', or even the ability for contributors to enable it per extractor when the specific service is known for these 'first part fast' throttling mechanisms, would really benefit the download speed for a lot of users, in my humble opinion :)
So, my questions are: do you think it's a good idea to implement this kind of functionality, and how difficult would it be to do so? If you consider implementing this in the future, I'd like to help in any way I can to make it possible.
Thanks for creating this great 'little' tool, I use it regularly and am very happy it exists!
Regards,
Erik
| request | medium | Major |
107,856,262 | opencv | Fisheye calibrate failure with specific arguments | The fisheye `calibrate()` function fails with a segfault when `std::vector<cv::Mat>` is used instead of `std::vector<cv::Vec3d>` as the `OutputArrayOfArrays` parameters `rvecs`, `tvecs`.
| bug,priority: normal,category: calib3d,affected: 3.4,Hackathon | low | Critical |
107,883,793 | go | x/mobile: android bind/java/Seq.java forces application context binding on library load | ## Problem
1) It is not required to bind an application context for Go <-> JNI code to work. For native activities that use a context this might be the case, but not all Go library code needs this.
2) Compatibility issues arise if the context is auto-bound. In the wild we have experienced random crashes related to the order of webview native library loading before/after golib plus other JNI libraries. Our fix required us to load each library in a specific order and/or not bind to the application context.
## Source
At line 29 of bind/java/Seq.java, the application context is passed/set to golib via JNI.
``` java
static {
// Look for the shim class auto-generated by gomobile bind.
// Its only purpose is to call System.loadLibrary.
try {
Class.forName("go.LoadJNI");
} catch (ClassNotFoundException e) {
// Ignore, assume the user will load JNI for it.
Log.w("GoSeq", "LoadJNI class not found");
}
try {
// TODO(hyangah): check proguard rule.
Application appl = (Application)Class.forName("android.app.AppGlobals").getMethod("getInitialApplication").invoke(null);
Context ctx = appl.getApplicationContext();
setContext(ctx);
} catch (Exception e) {
Log.w("GoSeq", "Global context not found:" + e);
}
initSeq();
new Thread("GoSeq") {
public void run() { Seq.receive(); }
}.start();
}
```
## Proposal
Do not auto-bind the application context on library load. Let the user decide.
``` java
static {
// Look for the shim class auto-generated by gomobile bind.
// Its only purpose is to call System.loadLibrary.
try {
Class.forName("go.LoadJNI");
} catch (ClassNotFoundException e) {
// Ignore, assume the user will load JNI for it.
Log.w("GoSeq", "LoadJNI class not found");
}
initSeq();
new Thread("GoSeq") {
public void run() { Seq.receive(); }
}.start();
}
```
| mobile | low | Critical |
107,917,031 | nvm | Add tests: `nvm run --silent` and `nvm exec --silent` | I added support for `--silent` to `nvm run` and `nvm exec` in eb81fba8f7fda7342472256040597a8e59261552 for #842.
However, I was lazy, and didn't write any tests.
This issue is a reminder for me, or a request to anyone else, to write these tests. They should ensure that the output is correct without `--silent`, with `--silent`, and also that `--silent` is only swallowed and respected when it's in the third argument position (immediately after `run` or `exec`).
| testing,pull request wanted | low | Minor |
107,953,383 | opencv | OutputArray::create fails for vector<Mat> | example:
``` cpp
vector<Mat> test;
OutputArray(test).create(3, 1, CV_32FC3);
cout << test.size() << endl;
cout << test[2].size() << endl;
test[2].at<Vec3f>(0)[2] = 1;
```
output:
```
3
[0 x 0]
SEGFAULT
```
The issue seems to be that OutputArray::create only resizes the vector without allocating the individual matrices.
| priority: normal,feature,category: core,affected: 3.4 | low | Minor |
108,078,082 | You-Dont-Know-JS | Async & Performance - Ch1 - process.nextTick() | Great book series BTW
A word of caution about using process.nextTick to 'chunk' your work. Consider the following node app...
``` javascript
var a = 0;
var count = parseInt(process.argv[2]);
setTimeout(function() {
var b = a; // careful to get the instant 'now' version of a
console.log(b);
}, 0);
function work() {
if (++a < count) {
process.nextTick(work);
}
}
work();
```
In this code, I've chunked the work (incrementing a up to count) by calling process.nextTick, and you might expect the setTimeout callback to happen sometime before the work finishes, but it doesn't, even for gigantic values of count. The problem is that process.nextTick adds the function to the event queue without giving setTimeout (or any other IO, for that matter) a chance to get a look in. It's like a DoS attack inside your JS program.
A slightly better function to use is setImmediate, which lets IO et al. have a go before scheduling the callback.
Guyon
| for second edition | low | Major |
108,164,600 | youtube-dl | youtube-dl repeats thumbnail embedding if file already exists | Using the latest version (2015-09-22) as of this post. The thumbnail is downloaded before youtube-dl checks the video and sees it has already been downloaded. Then the thumbnail logic embeds the thumbnail file since it is there. This results in a video with duplicate thumbnails.

If the video already exists maybe youtube-dl should assume a thumbnail was already embedded and discard the downloaded one.
| request | low | Minor |
108,188,467 | kubernetes | RFC: Support user defined and extensible deployment strategies | The new deployment API (#1743) specifies some out-of-the-box strategies (recreate, rolling) which will be implemented inside the deployment controller. It's possible these strategies may not cover all deployment needs for end users, or for downstream users (such as OpenShift). The deployment system should enable:
1. Users to define their own deployment strategy logic in a custom Docker image.
2. Downstream systems to define deployment strategies in a delegate controller.
It's possible these two capabilities are related but orthogonal in their design/implementation. I'm starting the discussion here so we can collect initial thoughts and start linking any existing relevant issues.
| priority/backlog,area/app-lifecycle,kind/feature,area/workload-api/deployment,sig/apps,lifecycle/frozen | medium | Major |
108,190,643 | kubernetes | RFC: Deployment hooks | Users often need a means to inject custom behavior into the lifecycle of a deployment process. The deployment API (#1743) could be expanded to support the execution of user-specified Docker images, which are given an opportunity to complete at various points during the reconciliation process for a deployment.
Use cases and various design approaches were discussed previously in an [OpenShift deployment hooks proposal](https://github.com/openshift/origin/blob/master/docs/proposals/post-deployment-hooks.md).
This RFC is to capture initial thoughts on the topic and to link any existing related issues.
| area/app-lifecycle,priority/awaiting-more-evidence,kind/feature,area/workload-api/deployment,sig/apps,lifecycle/frozen | high | Critical |
108,213,051 | rust | rustc OSX LC_LOAD_DYLIB paths are broken | I've noticed that `/usr/local/bin/rustc` has several dylib load commands for various Rust libraries which have incorrect/nonexistent paths prefixed to them.
There are two points I'd like to bring up, the first being much more serious.
## Bad Paths
In my setup various tools all report that `/usr/local/bin/rustc` loads/requires the following dynamic libraries:
```
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_driver-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_trans-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_privacy-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_borrowck-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_resolve-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_lint-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_typeck-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libflate-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_data_structures-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libarena-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libgraphviz-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libgetopts-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librbml-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_back-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libsyntax-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libserialize-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libterm-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/liblog-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libfmt_macros-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_llvm-198068b3.dylib
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libstd-198068b3.dylib
/usr/lib/libSystem.B.dylib
/usr/lib/libedit.3.dylib
/usr/lib/libc++.1.dylib
```
However, most of these (the Rust libs) appear to be an artifact of the initial compilation build (on a side note, non-absolute paths on OSX typically should have an `@rpath` or `@install_path` prefixed to the path, etc.). If you run:
`DYLD_PRINT_LIBRARIES=true /usr/local/bin/rustc --version`
you will notice all of the libraries actually get bound to `/usr/local/lib/<name of dylib>`. This only accidentally works, because `dyld`'s default library search path(s) are:
```
$(HOME)/lib:/usr/local/lib:/lib:/usr/lib
```
and when interpreting a binary, if `dyld` fails to find a library at the specified `LC_LOAD_DYLIB` path, it takes the basename of the library with the bad path, e.g.:
```
x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libstd-198068b3.dylib -> libstd-198068b3.dylib
```
and looks for that dynamic library in the default path list; in our case it just so happens the libraries were installed to `/usr/local/lib`, so it finds them and runs as normal. If they were installed to a nonstandard path, `/usr/local/bin/rustc` would fail to run with something like:
```
dyld: Library not loaded: x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libstd-198068b3.dylib
Referenced from: /usr/local/bin/rustc
Reason: image not found
Trace/BPT trap: 5
```
You can verify this by moving `/usr/local/lib/libstd-198068b3.dylib` somewhere else (don't do this unless you know how to undo it), or by manually editing the basename of the imported libraries in the `rustc` binary (don't do this either unless you know how to undo it); either case will fail with an error similar to the above.
Therefore `/usr/local/bin/rustc` is only accidentally running correctly on OSX as of this writing.
If `/usr/local/lib/` is the preferred location for rust libraries (and why not), then I _highly_ suggest outputting `LC_LOAD_DYLIB` paths like `/usr/local/lib/libstd-198068b3.dylib`, etc.
If you want this dynamic or configurable, then I suggest using `@rpath` although this adds more complexity; more information can be found [at the Apple official documentation](https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/RunpathDependentLibraries.html).
## Too Many Libraries
This is a less serious issue, but I believe most of the libraries printed above are actually not required to run `rustc` on OSX. Please correct me if I'm wrong, but it looks like the only imports in `rustc` are:
```
2038 __ZN4main20h62bf81b281987584efdE (8) ~> x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/librustc_driver-198068b3.dylib
2040 __ZN2rt10lang_start20hd654f015947477d622wE (8) ~> x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libstd-198068b3.dylib
2048 _rust_stack_exhausted (8) ~> x86_64-apple-darwin/stage1/lib/rustlib/x86_64-apple-darwin/lib/libstd-198068b3.dylib
2050 _exit (8) ~> /usr/lib/libSystem.B.dylib
2028 dyld_stub_binder (8) -> /usr/lib/libSystem.B.dylib
```
Hence, the minimal set of dynamic libraries required is:
```
/usr/local/lib/librustc_driver-198068b3.dylib
/usr/local/lib/libstd-198068b3.dylib
/usr/lib/libSystem.B.dylib
```
which I believe closely matches the library dependencies for the GNU/Linux `rustc` distribution.
| A-linkage,O-macos,C-bug | medium | Critical |
108,233,505 | youtube-dl | Extractor for NPR music | Here's an example: http://www.npr.org/player/embed/439491238/440274224
I can look into adding an extractor for it but thought I'd first post a bug in case anyone had any pointers, experience with npr, etc. BTW, it looks like it's jwplayer underneath...
| site-support-request | low | Critical |
108,339,898 | go | x/text: localization support | This issue is intended as an umbrella tracking issue for localization support.
Localization support includes:
- formatting values, such as numbers, currencies, and dates in a language- and region-specific way (see the sketch below).
- marking text to be translated in fmt-ed text and templates.
- integration into translation pipeline
Details to be covered in design docs.
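As a small illustration of the first bullet, a sketch using the x/text/message API (the package shape shown here is an assumption about where this work lands, not a committed design):
``` go
package main

import (
	"golang.org/x/text/language"
	"golang.org/x/text/message"
)

func main() {
	// The same value is rendered with locale-specific digit grouping.
	message.NewPrinter(language.English).Printf("%d\n", 1234567) // 1,234,567
	message.NewPrinter(language.German).Printf("%d\n", 1234567)  // 1.234.567
}
```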
| umbrella | high | Critical |
108,340,868 | youtube-dl | Problem with downloading videos from mirror.co.uk/video | Hey guys, first of all thank you for this amazing app; it's really useful. Unfortunately I can't download videos from http://www.mirror.co.uk/video/. I wish you could fix this problem, and also add a new option that can give me the download link directly, for example site.com/video.mp4.
Thank you
| site-support-request | low | Major |
108,444,386 | opencv | Bug: cv::CascadeClassifier window advances prematurely for no obvious reason | In the code below:
``` C++
class CascadeClassifierInvoker : public ParallelLoopBody
{
public:
CascadeClassifierInvoker( CascadeClassifierImpl& _cc, int _nscales, int _nstripes,
const FeatureEvaluator::ScaleData* _scaleData,
const int* _stripeSizes, std::vector<Rect>& _vec,
std::vector<int>& _levels, std::vector<double>& _weights,
bool outputLevels, const Mat& _mask, Mutex* _mtx)
{
...
}
void operator()(const Range& range) const
{
...
for( int scaleIdx = 0; scaleIdx < nscales; scaleIdx++ )
{
...
for( int y = y0; y < y1; y += yStep )
{
for( int x = 0; x < szw.width; x += yStep ) // <---------- Increment
{
int result = classifier->runAt(evaluator, Point(x, y), scaleIdx, gypWeight);
if( rejectLevels )
{
if( result == 1 )
...
if( classifier->data.stages.size() + result == 0 )
{
...
}
}
else if( result > 0 )
{
...
}
if( result == 0 )
x += yStep; // // <---------- Increment Again!
}
}
}
}
```
When result is 0, which I assume means classification has failed at the 0th step, the code manually changes the step for no obvious reason. One may argue that when the first weak classifier fails, there is very little chance that the next window passes all weak classifiers, but this argument is invalid for multiple reasons:
1. there is no mathematical reason behind the statement above
2. we cannot assume any limit on the number of weak classifiers. If a model has two weak classifiers, then the next window could probably be a positive candidate
3. it is not consistent in all conditions of the inner `if` statement
4. this behavior interferes with correct grouping
It appears that this behavior only happens when rejectLevels **is not** set, which covers almost 99% of the common usage of this method.
Simplest Fix: Remove the following block:
``` cpp
if( result == 0 )
x += yStep;
```
Warning: the patched code may cause regression.
| bug,priority: normal,category: objdetect,affected: 3.4 | low | Critical |
108,500,846 | youtube-dl | option for log file | Hi,
I know that the output from youtube-dl can be redirected to a log file.
I am looking for a log option of youtube-dl, so that I can configure it in youtube-dl.conf permanently and not bother with providing a log redirection for each download.
In fact, if the log file could also have options like the -o parameter, then we could have a log file with the id in its filename ($id.log).
regards
Chumma
| request | low | Major |
108,521,369 | youtube-dl | Add muxing option; allow --embed-subs to automatically remux if needed | #### Problem
A video from Crunchyroll is downloaded with `--write-sub`, `--embed-subs` and `--merge-output-format mkv` enabled. The expected outcome is that the downloaded FLV file will be losslessly muxed with the subtitles into an MKV file. What actually happens is that the two files are downloaded as normal without merging; the message `[ffmpeg] Subtitles can only be embedded in mp4 or mkv files` is returned.
#### Possible Solution 1
Add an option to remux the video to a given container format without re-encoding. (i.e. `--mux mkv`, `--mux mp4` etc.)
#### Possible Solution 2
Change the behaviour of `--embed-subs` so that it automatically remuxes if the file format doesn't allow embedded subtitles. Allow the container format (MKV, MP4) to be specified if possible.
#### Possible Solution 3
Change the behaviour of `--merge-output-format` to always remux regardless of whether or not a merge is required.
Verbose output follows.
```
$ youtube-dl --ignore-config -v -n --merge-output-format mkv --write-sub --sub-lang enUS --embed-subs "http://www.crunchyroll.com/symphogear/episode-1-awakening-heartbeat-685195"
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--ignore-config', u'-v', u'-n', u'--merge-output-format', u'mkv', u'--write-sub', u'--sub-lang', u'enUS', u'--embed-subs', u'http://www.crunchyroll.com/symphogear/episode-1-awakening-heartbeat-685195']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.09.22
[debug] Python version 2.7.10 - Darwin-14.5.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 2.8, ffprobe 2.8, rtmpdump 2.4
[debug] Proxy map: {}
[Crunchyroll] Logging in
[Crunchyroll] 685195: Downloading webpage
[Crunchyroll] 685195: Downloading media info
[Crunchyroll] 685195: Downloading media info for 360p
[Crunchyroll] 685195: Downloading media info for 480p
[Crunchyroll] 685195: Downloading media info for 720p
[Crunchyroll] 685195: Downloading media info for 1080p
[Crunchyroll] 685195: Downloading subtitles for English (US)
[Crunchyroll] 685195: Downloading subtitles for Español
[Crunchyroll] 685195: Downloading subtitles for Español (España)
[Crunchyroll] 685195: Downloading subtitles for Français (France)
[Crunchyroll] 685195: Downloading subtitles for العربية
[info] Writing video subtitles to: Symphogear Episode 1 – Awakening Heartbeat-685195.enUS.ass
[debug] Invoking downloader on 'rtmpe://cp150757.edgefcs.net/ondemand/?auth=daEa.cxcIdqdYaucYc1ciaRc7b7c9dqdLd4-bwb6oB-dHa-nCGyptNDxuu&aifp=0009&slist=c20/s/ve1953099/video.mp4'
[download] Symphogear Episode 1 – Awakening Heartbeat-685195.flv has already been downloaded
[download] 100% of 541.18MiB
[ffmpeg] Subtitles can only be embedded in mp4 or mkv files
```
| request | low | Critical |
108,537,834 | youtube-dl | Terminate download after command-line supplied duration | Looking through the documentation, I can't see a way to download a stream for a specific number of seconds.
This would be useful when downloading from audio services like tunein where the stream does not end at the end of the programme.
At the moment, I'm working around this by calling youtube-dl from a script and killing the background PID after a certain amount of time:
```
(/usr/local/bin/youtube-dl -q --no-part -o "${OUTFILE}" ${URL}) & DLPID=$!
sleep $DURATION
kill ${DLPID}
```
Having a --duration flag and terminating after this time would be quite useful for cases like this.
| request | low | Minor |
108,560,391 | go | x/mobile: Fullscreen mode, app.SetFullscreen(fullscreen bool) ? | If it is okay to add this to the app package, I can send a CL implementing this for Android and iOS.
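A sketch of how the proposed method might sit in a typical event loop (`SetFullscreen` is the hypothetical addition from this issue; `app.Main` and `Events` are the existing x/mobile/app API):
``` go
package main

import (
	"golang.org/x/mobile/app"
)

func main() {
	app.Main(func(a app.App) {
		// Hypothetical call: the method proposed in this issue.
		// a.SetFullscreen(true)
		for e := range a.Events() {
			_ = e // handle lifecycle, size, and paint events as usual
		}
	})
}
```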
| mobile | low | Major |
108,569,101 | opencv | A quest for better docs for MatExpr and Mat | Hi devs,
After using OpenCV for my project, I feel that we need a quest to improve the docs of MatExpr. There are potential bugs if we do not use the library correctly.
For example, MatExpr
``` cpp
// assuming matrix B of type CV_8U, I just want to normalize it into [0, 1]
Mat A = B/255.0f; // potiential bug
```
The results are completely different from what we want. The matrix `A` is still CV_8U, so the floating-point values are lost and each element converts to just _0_ or _1_.
My opinion is that we should explicitly document that the type of the resultant matrix is the same as that of the most compatible operand matrix.
| category: documentation,affected: 3.4,RFC | low | Critical |
108,727,397 | rust | Teach make-tidy to detect unused *.rs files | As an example, the file https://github.com/rust-lang/rust/blob/master/src/librustc_front/attr.rs is currently dead and can be removed, but before realizing it I spent several minutes trying to understand what was wrong with my text editor and why it couldn't find the definition of [`unlower_attribute`](https://github.com/rust-lang/rust/blob/master/src/librustc_front/attr.rs#L57).
| T-bootstrap,E-medium,C-feature-request | low | Major |
109,036,783 | youtube-dl | Cannot download multiple formats with archive | I have an issue where downloading multiple file formats from a YouTube video will only download the first set of videos. For example, if the format selection is "-f 140+136,171+247" and the archive is enabled with "--download-archive archive.txt", youtube-dl will only download formats 140+136. It will say that the video is already in the archive and not download formats 171+247.
Example log
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'--ignore-errors', u'--no-continue', u'--no-overwrites', u'--keep-video', u'--no-post-overwrites', u'--download-archive', u'archive.txt', u'--write-description', u'--write-info-json', u'--write-annotations', u'--write-thumbnail', u'--all-subs', u'--output', u'%(uploader)s (%(uploader_id)s)/%(id)s/%(title)s - %(upload_date)s.%(ext)s', u'-f', u'bestvideo[ext=mp4]+bestaudio[ext=m4a],bestvideo[ext=webm]+bestaudio[ext=webm]', u'https://www.youtube.com/watch?v=Mk9yk7B-cNA']
[debug] Encodings: locale UTF-8, fs UTF-8, out None, pref UTF-8
[debug] youtube-dl version 2015.09.28
[debug] Python version 2.7.3 - Linux-3.2.0-4-686-pae-i686-with-debian-stretch-sid
[debug] exe versions: none
[debug] Proxy map: {}
[youtube] Mk9yk7B-cNA: Downloading webpage
[youtube] Mk9yk7B-cNA: Downloading video info webpage
[youtube] Mk9yk7B-cNA: Extracting video information
WARNING: video doesn't have subtitles
[youtube] Mk9yk7B-cNA: Searching for annotations.
[youtube] Mk9yk7B-cNA: Downloading DASH manifest
[youtube] Mk9yk7B-cNA: Downloading DASH manifest
[info] Mk9yk7B-cNA: downloading video in 2 formats
[info] Writing video description to: Smosh Olivia (UCeaMWfo8kwdbLIl0RyuwVoA)/Mk9yk7B-cNA/First Live Stream with me i am a sandwich - 20150921.description
[info] Writing video annotations to: Smosh Olivia (UCeaMWfo8kwdbLIl0RyuwVoA)/Mk9yk7B-cNA/First Live Stream with me i am a sandwich - 20150921.annotations.xml
[info] Writing video description metadata as JSON to: Smosh Olivia (UCeaMWfo8kwdbLIl0RyuwVoA)/Mk9yk7B-cNA/First Live Stream with me i am a sandwich - 20150921.info.json
[youtube] Mk9yk7B-cNA: Downloading thumbnail ...
[youtube] Mk9yk7B-cNA: Writing thumbnail to: Smosh Olivia (UCeaMWfo8kwdbLIl0RyuwVoA)/Mk9yk7B-cNA/First Live Stream with me i am a sandwich - 20150921.jpg
WARNING: You have requested multiple formats but ffmpeg or avconv are not installed. The formats won't be merged.
[debug] Invoking downloader on 'https://r1---sn-hp57knls.googlevideo.com/videoplayback?id=324f7293b07e70d0&itag=136&source=youtube&requiressl=yes&mn=sn-hp57knls&pl=47&mv=m&ms=au&mm=31&gcr=us&nh=IgpwcjA0Lm1pYTA0KgkxMjcuMC4wLjE&ratebypass=yes&mime=video/mp4&gir=yes&clen=286166677&lmt=1443080099230244&dur=1892.833&mt=1443597092&upn=35XrcrsV40Y&key=dg_yt0&sver=3&fexp=9405975,9408710,9409069,9410705,9412927,9413140,9415365,9415435,9415485,9416023,9416126,9416729,9417098,9417707,9418094,9418153,9418400,9418411,9418438,9418448,9418802,9419444,9419488,9420348,9421013,9421196,9421501,9421890&signature=3B7FE0E3FB97C3EDF14CDC6B141A02319068560C.7B303E7F49D21C7C1F05634A78379EBA1316006B&ip=2602:306:bdc6:98e0:e0a3:581e:1e92:b31d&ipbits=0&expire=1443618762&sparams=ip,ipbits,expire,id,itag,source,requiressl,mn,pl,mv,ms,mm,gcr,nh,ratebypass,mime,gir,clen,lmt,dur'
[download] Destination: Smosh Olivia (UCeaMWfo8kwdbLIl0RyuwVoA)/Mk9yk7B-cNA/First Live Stream with me i am a sandwich - 20150921.f136.mp4
[download] 100% of 272.91MiB in 04:19
[debug] Invoking downloader on 'https://r1---sn-hp57knls.googlevideo.com/videoplayback?id=324f7293b07e70d0&itag=140&source=youtube&requiressl=yes&mn=sn-hp57knls&pl=47&mv=m&ms=au&mm=31&gcr=us&nh=IgpwcjA0Lm1pYTA0KgkxMjcuMC4wLjE&ratebypass=yes&mime=audio/mp4&gir=yes&clen=30063595&lmt=1443078187650282&dur=1892.890&mt=1443597092&upn=35XrcrsV40Y&key=dg_yt0&sver=3&fexp=9405975,9408710,9409069,9410705,9412927,9413140,9415365,9415435,9415485,9416023,9416126,9416729,9417098,9417707,9418094,9418153,9418400,9418411,9418438,9418448,9418802,9419444,9419488,9420348,9421013,9421196,9421501,9421890&signature=3D0FB255F932920EE3C2A7604C992E2ADC0B2938.04B0BE30B6BC871184F0496A96E1296DB4E4807A&ip=2602:306:bdc6:98e0:e0a3:581e:1e92:b31d&ipbits=0&expire=1443618762&sparams=ip,ipbits,expire,id,itag,source,requiressl,mn,pl,mv,ms,mm,gcr,nh,ratebypass,mime,gir,clen,lmt,dur'
[download] Destination: Smosh Olivia (UCeaMWfo8kwdbLIl0RyuwVoA)/Mk9yk7B-cNA/First Live Stream with me i am a sandwich - 20150921.f140.m4a
[download] 100% of 28.67MiB in 00:25
[download] First Live Stream with me i am a sandwich has already been recorded in archive
```
| request | low | Critical |
109,102,433 | go | go/doc: Examples not executable if pkg.name != basename(pkg.path) | % export GOPATH=$(pwd)
% cat src/foo1/foo.go
package foo
func Foo() {
}
% cat src/foo1/foo_test.go
package foo_test
import "foo1" // defines foo
func ExampleFoo() {
foo.Foo()
}
% godoc -play -http :9999 &
% open http://localhost:9999/pkg/foo1/#pkg-examples
The example is shown but not executable (grey not yellow); renaming foo1 to foo makes it executable, as does using an explicit (redundant) renaming import:
import foo "foo1"
Seems like some code in godoc is assuming pkg.name == basename(pkg.path) instead of finding the actual package name.
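A sketch of the lookup godoc could perform instead (illustrative; go/build already reports the declared package name):
``` go
package main

import (
	"fmt"
	"go/build"
)

func main() {
	// Resolve the import path "foo1" and use the declared package name
	// ("foo") rather than assuming it equals the path basename.
	pkg, err := build.Import("foo1", "", 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(pkg.Name) // foo
}
```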
| help wanted,Tools | low | Minor |
109,185,146 | javascript | Best practice for wrapping function arguments | The style guide should advise wrapping function arguments that go over the maximum line length.
``` js
function someVeryLongFunctionName(someVeryLongArgumentName1, someVeryLongArgumentName2, someVeryLongArgumentName3) {
// Function body
}
```
or:
``` js
function myFunction(someArg1, someArg2, someArg3, someArg4, someArg5, someArg6, someArg7, someArg8) {
// Function body
}
```
Wrapped:
``` js
function myFunction(someArg1, someArg2, someArg3, someArg4,
someArg5, someArg6, someArg7, someArg8) {
// Function body
}
```
| question | low | Major |
109,234,711 | rust | Range expressions: discrepancies between rustc and parser-lalr | I'd guess that the last test case here is a `rustc` bug, but I don't know about the others.
``` rust
fn match_rangefull() {
match .. { _ => () } // rustc compiles it, parser-lalr rejects
}
fn field_of_range() {
123.. .start; // rustc rejects, parser-lalr accepts
}
fn eq_rangefull() {
.. == ..; // rustc rejects, parser-lalr accepts
}
fn return_rangeto() -> std::ops::RangeTo<i32> {
return ..5; // rustc compiles it, parser-lalr parses it as (return)..5
}
fn assign_range() {
let x;
x = 4..5; // rustc compiles it, parser-lalr parses it as (x = 4)..5
}
fn nonblock_expr_range() {
// rustc compiles this, and I don't think it should. I think parser-lalr
// gets it right.
//
// rustc parses it like: let _one = (1..).start;
// parser-lalr parses it like: let _one = {1; ..}.start;
//
let _one = { {1} .. }.start;
}
fn main() {}
```
Also see https://github.com/rust-lang/rust/issues/25119.
| A-grammar,P-medium,I-needs-decision,T-lang,C-bug | low | Critical |
109,294,291 | youtube-dl | Default options (bestvideo+bestaudio) does not consider downloading a higher-resolution video | ```
$ youtube-dl --get-format https://www.youtube.com/watch?v=X7pzGyfRx9A
WARNING: Could not download DASH manifest: HTTP Error 403: Forbidden
136 - 900x720 (DASH video)+141 - audio only (DASH audio)
```
```
$ youtube-dl -F https://www.youtube.com/watch?v=X7pzGyfRx9A
[youtube] X7pzGyfRx9A: Downloading webpage
[youtube] X7pzGyfRx9A: Downloading video info webpage
[youtube] X7pzGyfRx9A: Extracting video information
[youtube] X7pzGyfRx9A: Downloading DASH manifest
[youtube] X7pzGyfRx9A: Downloading js player new-en_US-vflJLt_ns
[youtube] X7pzGyfRx9A: Downloading DASH manifest
WARNING: Could not download DASH manifest: HTTP Error 403: Forbidden
[info] Available formats for X7pzGyfRx9A:
format code extension resolution note
171 webm audio only DASH audio 94k , vorbis@128k (44100Hz), 7.20MiB
140 m4a audio only DASH audio 129k , m4a_dash container, aac @128k (44100Hz), 10.15MiB
141 m4a audio only DASH audio 256k , m4a_dash container, aac @256k (44100Hz), 20.16MiB
160 mp4 180x144 DASH video 114k , avc1.4d400c, 15fps, video only, 8.63MiB
242 webm 300x240 DASH video 148k , vp9, 1fps, video only, 6.92MiB
243 webm 450x360 DASH video 255k , vp9, 1fps, video only, 12.07MiB
133 mp4 300x240 DASH video 262k , avc1.4d400d, 15fps, video only, 19.29MiB
134 mp4 450x360 DASH video 403k , avc1.4d4015, 15fps, video only, 17.30MiB
244 webm 600x480 DASH video 467k , vp9, 1fps, video only, 22.22MiB
135 mp4 600x480 DASH video 698k , avc1.4d4016, 15fps, video only, 36.07MiB
247 webm 900x720 DASH video 786k , vp9, 1fps, video only, 40.29MiB
136 mp4 900x720 DASH video 1183k , avc1.4d401f, 15fps, video only, 71.93MiB
17 3gp 176x144
36 3gp 320x240
5 flv 400x240
43 webm 640x360
18 mp4 640x360
22 mp4 1280x720 (best)
```
The format specification `mp4+bestaudio` downloads the higher quality combination:
```
$ youtube-dl --get-format --format mp4+bestaudio https://www.youtube.com/watch?v=X7pzGyfRx9A
WARNING: Could not download DASH manifest: HTTP Error 403: Forbidden
22 - 1280x720+141 - audio only (DASH audio)
```
| request | low | Critical |
109,302,397 | go | x/build: add builders with different GODEBUG=cpu capabilities | Our builders for the most part only run on Haswell CPUs on Google Compute Engine.
A number of Go packages in the standard library and runtime use assembly to choose alternate paths depending on the capabilities of the processor.
I feel like we're not getting good test coverage, only testing the Haswell paths.
We do have a GOARCH=386 builder, though, which is at one extreme, but there are a number of steps in the middle untested.
Can we put an abstraction over CPUID etc. somewhere (unexported) so that the builders can fake the reported capabilities before any assembly code consults them?
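A minimal sketch of what such an unexported seam could look like — the package name, the fields, and the `GOCPUFAKE` knob are all assumptions for discussion, not an existing API:
``` go
// Hypothetical sketch of an unexported CPU-capability seam.
package cpu

import "os"

// X86 holds features detected at startup; assembly fast paths would
// branch on these flags instead of issuing CPUID themselves.
var X86 struct {
	HasSSE41  bool
	HasAVX2   bool
	HasPOPCNT bool
}

func init() {
	// The real implementation would fill these from CPUID in assembly;
	// the stub below stands in for that here.
	X86.HasSSE41, X86.HasAVX2, X86.HasPOPCNT = cpuid()
	// A builder could then mask features off to exercise the fallback
	// code paths even on Haswell hardware.
	if os.Getenv("GOCPUFAKE") == "baseline" {
		X86.HasSSE41, X86.HasAVX2, X86.HasPOPCNT = false, false, false
	}
}

func cpuid() (sse41, avx2, popcnt bool) { return true, true, true }
```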
/cc @ianlancetaylor @minux @adg
| Builders,NeedsFix,FeatureRequest,new-builder | medium | Critical |
109,395,398 | TypeScript | Give an error indicating that the triple slash has syntax error | I think it will be very nice to give a syntax error when there is a syntax error in triple slash
For example
``` ts
/// <reference patj="file1.ts"/> => a syntax error that triple slash syntax
```
| Suggestion,Help Wanted | low | Critical |
109,424,589 | kubernetes | Write proposal for controller pod management: adoption, orphaning, ownership, etc. (aka controllers v2) | Write a comprehensive proposal for how controllers should manage sets of pods. The main goal is to make controller APIs more usable and less error-prone.
We've discussed a number of changes:
- unique label generation: #12298, #34292
- controllerRef to manage overlap: https://github.com/kubernetes/kubernetes/issues/2210#issuecomment-135588169
- server-side cascading deletion via existence dependences: https://github.com/kubernetes/kubernetes/issues/1535#issuecomment-91413501
- GC prevention via finalizers: #3585
- deletion of non-conforming pods: https://github.com/kubernetes/kubernetes/blob/master/docs/design/daemon.md#cluster-mutations
- generation/observedGeneration: #7328
- status field containing selector query: #3676
- conditions: #14181
- aggregate status: #7483
- policy for naming generated resources: #15776
- reverse label lookup: #1348
We may want to split the following into separate issues:
Changes that would facilitate static work/role assignment:
- identity assignment: #260, #14188
- PVC replication: #12450
Long-standing idea to improve security and reusability around templates:
- separate template: #170
Reusability could also be addressed by:
- template proposal: #18215
Also need to make it easier to update existing pods:
- For labels, currently need to update pod template, then pods, then selector: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/rolling_updater.go#L616 . The loop to update pods after that wouldn't be necessary if it watched observedGeneration (#7328).
- Inline updates: #9043
- vertical auto-scaling
| priority/important-soon,area/app-lifecycle,sig/api-machinery,sig/apps,lifecycle/frozen | medium | Critical |
109,485,542 | opencv | Shape Transformers and Interfaces - Thin Plate Spline Warping | Hey,
Something in this Shape Transformer interface seems to be buggy.
Maybe also see my former question at the OpenCV forum: http://answers.opencv.org/question/69384/shape-transformers-and-interfaces/
The following code will only produce a completely gray image. The error is reproducible by giving the last point the same x/y value, e.g. 2/2, but it will work perfectly fine with 2/3.
But that's only one possibility to cause this error. I found other pictures with "real" landmark points which also cause this error, but I couldn't figure out a pattern. (Let me know if you need the data.)
I would be grateful for any help :)
```
int main(void)
{
cv::Mat sImg = cv::imread("D:\\Opencv3\\sources\\samples\\data\\graf1.png", cv::IMREAD_GRAYSCALE);
std::vector<cv::Point2f> points;
std::vector<cv::DMatch> good_matches;
cv::Mat tImg;
points.push_back(cv::Point(0, 0)); //Just to have a few points
points.push_back(cv::Point(1, 1));
points.push_back(cv::Point(2, 2)); // points.push_back(cv::Point(2, 3)) -> works fine
good_matches.push_back(cv::DMatch(0, 0, 0));
good_matches.push_back(cv::DMatch(1, 1, 0));
good_matches.push_back(cv::DMatch(2, 2, 0));
// Apply TPS
cv::Ptr<cv::ThinPlateSplineShapeTransformer> mytps = cv::createThinPlateSplineShapeTransformer(0);
mytps->estimateTransformation(points, points, good_matches); // Using same points nothing should change in tImg
imshow("sImg", sImg); // Just to see if I have a good picture
mytps->warpImage(sImg, tImg);
imshow("tImg", tImg); //Always completley gray ?
cv::waitKey(0);
}
```
| bug,priority: normal,affected: 3.4,category: shape | low | Critical |
109,528,763 | opencv | Bug: Dependencies between detector runs on LBP | There is some bug with cv::CascadeClassifier in case of lbpcascade_frontalface:
We have 3 photos (attached files, arranged as 02463d131, 02463d139, 02463d142 respectively). We run the detector on them consecutively. For the last picture we get the wrong result. Remove, for example, the second photo: now the result for the third picture changes and becomes correct.
This observation holds only for LBP cascades; with the Haar cascade everything is OK.
See attached files for more info.
Build info:
openCV 3.0.0, vc12, static
Code:
``` cpp
#include <opencv2\opencv.hpp>
int main() {
std::string f1 = "C:/test/detection/imgs/02463d131.jpg";
std::string f2 = "C:/test/detection/imgs/02463d139.jpg";
std::string f3 = "C:/test/detection/imgs/02463d142.jpg";
cv::Mat im1 = cv::imread(f1, CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat im2 = cv::imread(f2, CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat im3 = cv::imread(f3, CV_LOAD_IMAGE_GRAYSCALE);
cv::CascadeClassifier faceCascade;
bool isok = faceCascade.load("C:/test/detection/cv_bug/lbpcascade_frontalface.xml");
//bool isok = faceCascade.load("C:/test/detection/cv_bug/haarcascade_frontalface_alt.xml");
std::vector<cv::Rect> output1, output2, output3;
faceCascade.detectMultiScale(im1, output1, 1.16, 1, 0, cv::Size(100, 100), cv::Size(0.99 * im1.cols, 0.99 * im1.cols));
//faceCascade.detectMultiScale(im2, output2, 1.16, 1, 0, cv::Size(100, 100), cv::Size(0.99 * im2.cols, 0.99 * im2.cols));
faceCascade.detectMultiScale(im3, output3, 1.16, 1, 0, cv::Size(100, 100), cv::Size(0.99 * im3.cols, 0.99 * im3.cols));
std::cout << "02463d131.jpg: " << output1[0] << " Face detected: " << output1.size() << '\n';
//std::cout << "02463d139.jpg: " << output2[0] << " Face detected: " << output2.size() << '\n';
std::cout << "02463d142.jpg: " << output3[0] << " Face detected: " << output3.size() << '\n';
std::cin.get();
return 0;
}
```




| bug,priority: normal,category: objdetect,affected: 3.4,category: ocl | low | Critical |
109,548,010 | go | go/printer: unexpected handling of indented comment | The comment was indented one extra tab stop beyond where it should be by gofmt.
- Deleting one of the tabs and running gofmt again results in the proper alignment.
- Removing the statement below the comment and re-formatting will remove the extra tab stop.
- Changing the statement above the comment so the "func" line is on the same line as the assignment causes the extra tab to be removed as well.
Also unexpected, and the reason the bug was noticed: gofmt converts two spaces into two tabs here, which causes an unfortunate interaction with xxx (when not in tab mode) and probably other editors -- if you attempt to remove the extra indent and then re-run gofmt, you may be led to believe it's impossible to get the comment to line up as desired/expected.
```
package p
func f() {
a =
func(T1) T2 {
return q
}
// Comment
b =
func(T1) T2 {
return q
}
}
```
| NeedsInvestigation | low | Critical |
109,572,206 | neovim | RPC request (or other K_EVENT) causes redraw | I have noticed that the intro message is a bit too short to read. I was looking through the code to try and find where this timeout would be set, but came up empty handed. Can someone push me in the right direction so I can make my first Neovim contribution? :smile:
| needs:design,job-control,event-loop | low | Major |
109,573,630 | youtube-dl | Site Request: Huffingtonpost.CA | .....I am hoping this is just a matter of adding the .ca URL to the existing huffpost extractor.
URL: http://www.huffingtonpost.ca/2015/09/28/entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232.html#_methods=onPlusOne2C_ready2C_close2C_open2C_resizeMe2C_renderstart2Concircled2Cdrefresh2Cerefresh2Conloadid=I0_1443544219927parent=http3A2F2Fwwwhuffingtonpostcapfname=rpctoken=47705081
Output:
c:\Transmogrifier>youtube-dl.py -v "http://www.huffingtonpost.ca/2015/09/28/entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232.html#_methods=onPlusOne
open2C_resizeMe2C_renderstart2Concircled2Cdrefresh2Cerefresh2Conloadid=I0_1443544219927parent=http3A2F2Fwwwhuffingtonpostcapfname=rpctoken=47705081"
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.huffingtonpost.ca/2015/09/28/entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232.html#_methods=onPlusOn
_open2C_resizeMe2C_renderstart2Concircled2Cdrefresh2Cerefresh2Conloadid=I0_1443544219927parent=http3A2F2Fwwwhuffingtonpostcapfname=rpctoken=47705081']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2015.09.28
[debug] Python version 2.7.5 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-71727-g46778ab, rtmpdump 2.4
[debug] Proxy map: {}
[generic] entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232: Requesting header
WARNING: Falling back on generic information extractor.
[generic] entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232: Downloading webpage
[generic] entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232: Extracting information
ERROR: Unsupported URL: http://www.huffingtonpost.ca/2015/09/28/entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232.html#_methods=onPlusOne2C_ready2C_c
eMe2C_renderstart2Concircled2Cdrefresh2Cerefresh2Conloadid=I0_1443544219927parent=http3A2F2Fwwwhuffingtonpostcapfname=rpctoken=47705081
Traceback (most recent call last):
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\extractor\generic.py", line 1240, in _real_extract
doc = parse_xml(webpage)
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\utils.py", line 1656, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1300, in XML
parser.feed(text)
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "C:\Python27\lib\xml\etree\ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: undefined entity: line 47, column 0
Traceback (most recent call last):
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\YoutubeDL.py", line 660, in extract_info
ie_result = ie.extract(url)
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\extractor\common.py", line 288, in extract
return self._real_extract(url)
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\extractor\generic.py", line 1838, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: http://www.huffingtonpost.ca/2015/09/28/entrepreneur-enters-hands-free-hoverboard-market-engulfed-in-patent-war_n_8207232.html#_methods=onPlusOne2
pen2C_resizeMe2C_renderstart2Concircled2Cdrefresh2Cerefresh2Conloadid=I0_1443544219927parent=http3A2F2Fwwwhuffingtonpostcapfname=rpctoken=47705081
....and now that I look at it, there seems to be a number international versions:
http://www.huffingtonpost.com.au/
http://www.huffpostarabi.com/
http://www.brasilpost.com.br/
http://www.huffingtonpost.ca/
http://www.huffingtonpost.de/
http://www.huffingtonpost.es/
http://www.huffingtonpost.fr/
http://www.huffingtonpost.gr/
http://www.huffingtonpost.in/
http://www.huffingtonpost.it/
http://www.huffingtonpost.jp/
http://www.huffingtonpost.kr/
http://www.huffpostmaghreb.com/
http://www.huffingtonpost.co.uk/
http://www.huffingtonpost.com/
Thanks
Ringo
| site-support-request,geo-restricted | low | Critical |
109,623,697 | go | runtime: runtime-gdb_test mapvar failure on Fedora 20 | I tried to build Go from the sources. I followed the instructions from
https://golang.org/doc/install/source
**What version of Go are you using (go version)?**
go version go1.4.2 linux/amd64
and
go1.4
**What operating system and processor architecture are you using?**
fedora 20 (Linux 3.19.8-100.fc20.x86_64 )
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 42
Model name: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2103.320
CPU max MHz: 3200.0000
CPU min MHz: 800.0000
BogoMIPS: 4988.73
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3
**What did you do?**
I did the steps as described by the installation-from-source webpage.
- clone the go-language repository
- execute all.bash in subdir src.
**What did you see?**
The tests failed, which means the installation failed. The following steps
went seemingly ok
##### Building Go toolchain using/ home/msteffen/go1.4.
##### Building go_bootstrap for host, linux/amd64.
##### Building packages and commands for linux/amd64.
The following section of the installation failed, namely the testing
##### Testing packages.
in particular
--- FAIL:
runtime-gdb_test.go:42: gdb version 7.7
runtime-gdb_test.go:146: print mapvar failed: $1 = map[string]string = {[<error reading variable: Cannot access memory at address 0xb5cc>] = <error reading variable: Cannot access memory at address 0xb5cc>, [<error reading variable: Cannot access memory at address 0x3>] = <error reading variable: Cannot access memory at address 0x3>}
FAIL
FAIL runtime 41.748s
which resulted in
2015/10/03 14:54:11 Failed: exit status 1
**Go Versions:**
I made the above attempt for
```
git checkout master
git checkout go1.5
```
both gave the same error
The compilation of go1.5 was using go1.4.2 (which is shipped with my fedora version) and I also tried to use go1.4 (where I downloaded the binaries and moved them to ~/go1.4). In
all combinations, the result was the same.
However, when I did
```
git checkout go1.4
```
that one went through, i.e., I was able to build the go1.4 version from the sources.
| compiler/runtime | low | Critical |
109,626,701 | youtube-dl | Site support request: JizzHut.com | This website is glorious. It is also unsafe for work.
| site-support-request | low | Major |
109,671,774 | three.js | Specular color map? | Hi everyone.
Currently the `Specular map` only controls the intensity of the specular highlights; it would be great to have a `Specular color map` (http://wiki.polycount.com/wiki/Specular_color_map), one more step toward realistic materials.
> Specular Color Map
>
> A texture that controls the intensity/color of the specular highlights from real-time lights.
>
> Usually a specular map simply controls the brightness of the highlights, at a per-pixel level. If the shader supports RGB the specular map can be used to colorize specular highlights, useful for surfaces that have more complex reflective properties like metals, beetle shells, etc.
| Enhancement | medium | Major |
110,104,883 | go | proposal: spec: type inferred composite literals | Composite literals construct values for structs, arrays, slices, and maps. They consist of a type followed by a brace-bound list of elements. e.g.,
``` go
x := []string{"a", "b", "c"}
```
I propose adding untyped composite literals, which omit the type. Untyped composite literals are assignable to any composite type. They do not have a default type, and it is an error to use one as the right-hand-side of an assignment where the left-hand-side does not have an explicit type specified.
``` go
var x []string = {"a", "b", "c"}
var m map[string]int = {"a": 1}
type T struct {
V int
}
var s []*T = {{0}, {1}, {2}}
a := {1, 2, 3} // error: left-hand side has no type specified
```
Go already allows the elision of the type of a composite literal under certain circumstances. This proposal extends that permission to all occasions when the literal type can be derived.
This proposal allows more concise code. Succinctness is a double-edged sword; it may increase or decrease clarity. I believe that the benefits in well-written code outweigh the harm in poorly-written code. We cannot prevent poor programmers from producing unclear code, and should not hamper good programmers in an attempt to do so.
This proposal may slightly simplify the language by removing the rules on when composite literal types may be elided.
## Examples
Functions with large parameter lists are frequently written to take a single struct parameter instead. Untyped composite literals allow this pattern without introducing a single-purpose type or repetition.
``` go
// Without untyped composite literals...
type FooArgs struct {
A, B, C int
}
func Foo(args FooArgs) { ... }
Foo(FooArgs{A: 1, B: 2, C:3})
// ...or with.
func Foo(args struct {
A, B, C int
}) { ... }
Foo({A: 1, B: 2, C: 3})
```
In general, untyped composite literals can serve as lightweight tuples in a variety of situations:
``` go
ch := make(chan struct{
value string
err error
})
ch <- {value: "result"}
```
They also simplify code that returns a zero-valued struct and an error:
``` go
return time.Time{}, err
return {}, err // untyped composite literal
```
Code working with protocol buffers frequently constructs large, deeply nested composite literal values. These values frequently have types with long names dictated by the protobuf compiler. Eliding types will make code of this nature easier to write (and, arguably, read).
``` go
p.Options = append(p.Options, &foopb.Foo_FrotzOptions_Option{...}
p.Options = append(p.Options, {...}) // untyped composite literal
```
| LanguageChange,Proposal,LanguageChangeReview | high | Critical |
110,111,650 | go | x/net/trace: no way to know whether Finish has been called | @Sajmani
The current trace package does not provide a way for the caller of trace.FromContext to know whether trace.Trace.Finish has been called on this Trace (usually by another goroutine), so the caller may unknowingly operate on a finished Trace.
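Until the package grows such an API, one workaround sketch is to wrap the Trace before storing it in the Context; the wrapper type and its Finished method below are our own invention, not part of x/net/trace:
``` go
package tracewrap

import (
	"sync/atomic"

	"golang.org/x/net/context"
	"golang.org/x/net/trace"
)

// finishedTrace embeds a trace.Trace and records whether Finish ran.
type finishedTrace struct {
	trace.Trace
	done int32
}

func (t *finishedTrace) Finish() {
	atomic.StoreInt32(&t.done, 1)
	t.Trace.Finish()
}

// Finished reports whether Finish has been called.
func (t *finishedTrace) Finished() bool { return atomic.LoadInt32(&t.done) == 1 }

// NewContext stores the wrapper, so trace.FromContext hands it back and
// callers can type-assert before operating on a possibly finished Trace.
func NewContext(ctx context.Context, tr trace.Trace) context.Context {
	return trace.NewContext(ctx, &finishedTrace{Trace: tr})
}
```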
| NeedsInvestigation | low | Minor |
110,122,519 | kubernetes | kubelet should treat each pod source independently | Right now kubelet waits until all sources are ready before deleting pods, so that it wouldn't accidentally kill desired pods from not ready sources (bug to-be-fixed by PR: #15167).
What kubelet should do is to treat each source independently, i.e., if a source is ready, we can sync _and_ clean up pods from that source.
In order to do this, kubelet needs to be able to recognize which source the pod (container) came from even after restarting. There are two options:
- checkpointing: see https://github.com/kubernetes/kubernetes/issues/15122#issuecomment-145926227 and the original checkpointing issue #489
- write it as a docker label (#15089); we'll need to do similar thing for rkt too.
/cc @dchen1107 @thockin
| priority/backlog,sig/node,kind/feature,lifecycle/frozen,area/pod-lifecycle | low | Critical |
110,140,315 | TypeScript | "Cannot find module" message should be more explicit, saying where it's looking | With all the vagaries around module loading, it'd be great to give a more verbose, helpful error message for "Cannot find module 'foo'", saying where the compiler looked for the module. That can help clue the user into (1) how TypeScript works and (2) how to fix their issue.
The message text could be semi-static, maybe something like: IF using commonjs loading AND path isn't relative THEN say "Couldn't find module Foo; searched node_modules directory off directory XXXX and its ancestors and the package.json Typings path YYYY."
Better doc around the module loading algorithm helps, but just putting some of that info in the error message makes it very easily discovered. It also helps clue the user into what isn't the problem (e.g. the message above confirms that using commonjs module loading & found Typings in package.json, but something else seems to be the problem).
| Suggestion,Help Wanted | low | Critical |
110,170,133 | material-ui | [Snackbar] Add stacking support/display several | ## Summary
Place multiple snackbars on the page.
## Examples
<img width="310" alt="Capture d'écran 2020-09-04 à 15 41 59" src="https://user-images.githubusercontent.com/3165635/92245839-326a0b80-eec5-11ea-87f5-fbcbc3f808f1.png">
## Motivation
While the Material Design specification discourages displaying multiple snackbars, and that could stay the default, displaying several at once is a common use case:
> Only one snackbar may be displayed at a time.
https://material.io/components/snackbars#usage
## Benchmark
- https://trello.com/c/tMdlZIb6/1989-snackbar-more-features
- https://react-hot-toast.com/
- https://mantine.dev/others/notifications/
- https://blueprintjs.com/docs/#core/components/toast
- https://seek-oss.github.io/braid-design-system/components/useToast
- https://github.com/fkhadra/react-toastify
- https://chakra-ui.com/toast
- https://ant.design/components/message/
- https://github.com/iamhosseindhv/notistack/
- https://sancho-ui.com/components/toast/
- https://evergreen.segment.com/components/toaster
- https://twitter.com/railto/status/1232997843020996610
- https://baseweb.design/components/toast/
- https://toasted-notes.netlify.com/
- https://github.com/kylecesmat/react-cheers
- https://jossmac.github.io/react-toast-notifications/
- https://github.com/mui-org/material-ui/issues/21053#issuecomment-630119128
- https://twitter.com/listenMrUtkarsh/status/1283480376904568833 | new feature,component: snackbar,priority: important | medium | Critical |
110,188,575 | TypeScript | typescript checking frontend template files | The greatest feature of TypeScript, in terms of big enterprise apps, is type checking. When modifying/extending something low-level, I'll get all errors that arise beneath - just like I would in Java and .Net. But this relates to TS code.
My question is: **is it possible to use TypeScript to check types used in frontend templating engines, such as Handlebars, Underscore or [your favourite templating engine]**?
I've read that [TypeScript 1.6 is gonna support react's JSX](https://github.com/Microsoft/TypeScript/wiki/What's-new-in-TypeScript), but it's a big limitation to one specific engine only. I guess there is no built-in platform-supported solution, but maybe there is someone who made it work somehow?
Or, in other words, what would you do if you wanted to use TypeScript to check types of variables in handlebars/underscore templates?
| Suggestion,Needs Proposal | medium | Critical |
110,285,638 | go | x/mobile: dedicated publish.Event type? | Moving a TODO from example/basic to an issue.
Should app code running a fast draw loop use a.Send(paint.Event{})? Should we have a dedicated publish.Event type? Should App.Publish take an optional event sender and send a publish.Event?
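For discussion, a sketch of what a dedicated type could look like; the package path, name, and delivery semantics here are assumptions drawn from the TODO, not a settled design:
``` go
// Hypothetical publish package, sketched only to anchor the discussion.
package publish

// Event would be delivered on the app's event channel once a frame
// requested by App.Publish has reached the screen, so a fast draw loop
// can pace itself on publication instead of re-sending paint.Event.
type Event struct{}
```
App code could then start the next frame on receiving a publish.Event rather than calling a.Send(paint.Event{}) after every draw.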
| mobile | low | Major |
110,318,123 | angular | Proposal: A directive should not receive events that it fired | E.g.
```
export class Checkbox {
@Output() change: EventEmitter = new EventEmitter();
@HostListener('change', [
'$event.target.value'
])
onChange(event) { ... }
}
```
| type: bug/fix,effort1: hours,freq1: low,area: core,state: confirmed,core: event listeners,core: inputs / outputs,design complexity: major,P3 | low | Minor |
110,318,233 | rust | Rust tries to be too helpful with impl suggestions, sometimes being unhelpful | [This code example](http://is.gd/riFsRe) is a simplified extraction of part of the hierarchy that I'm working on for https://github.com/sgrif/yaqb. It's still a bit contrived, but I think it really gets the point across about how unhelpful/nonsensical the resulting error message is.
The type structure goes like this: `AsExpression` represents a type which can be converted to an `Expression` which would return a given type. `Expression` represents... well, an expression. `AsExpression` gets a blanket impl for any type which implements `Expression`. `Column` represents a concrete column (which in the real code has other things associated to it). Ultimately the column has a type, and can be used as an expression of that type. Any type that implements `Column` gets a blanket impl for `Expression`.
In `main`, we're trying to compare a column representing an `i32`, to a `str`. This is incorrect, and should fail to compile. However, the error message we get out of this is:
> error: the trait `Column` is not implemented for the type `&str` [E0277]
In the context of a SQL database, and what `Column` means, this is a nonsense error, and would not be helpful to the user in seeing why their code is failing to compile. Rust is just being super clever and realizing that the blanket impls would apply if we just implement that instead. I'd like to propose changing the error message to something with a bit more information, such as:
> error: the trait `AsExpression<i32>` is not implemented for the type `&str` [E0277]
>
> A blanket implementation of `AsExpression`, located at <file_path>:<line_number> could apply if `Column` were implemented for the type `&str`
| C-enhancement,A-diagnostics,T-compiler,F-on_unimplemented,D-confusing | medium | Critical |
110,327,289 | rust | associated type not normalized when a where-clause is present | ## Affected Versions
At least 1.3, 1.4, `rustc 1.5.0-nightly (11a612795 2015-10-04)`
## STR
``` Rust
trait Foo { type Bar; }
impl<T> Foo for T { type Bar = u64; }
fn foo<T>() -> <T as Foo>::Bar
where T: Foo // <- the code compiles if this is removed
{ 1 }
fn main() {}
```
## Expected Result
the code should compile
## Actual Result
```
<anon>:7:14: 7:15 error: mismatched types:
expected `<T as Foo>::Bar`,
found `_`
(expected associated type,
found integral variable) [E0308]
<anon>:7 x = Some(1);
^
```
cc @nikomatsakis
| A-trait-system,P-low,A-associated-items,T-lang,C-bug | low | Critical |
110,331,728 | youtube-dl | Failing to download PBS video | When trying to download http://www.pbs.org/wgbh/nova/tech/making-stuff.html#making-stuff-stronger I get an error. Here's the debug output:
```
--verbose http://www.pbs.org/wgbh/nova/tech/making-stuff.html#making-stuff-stronger
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'http://www.pbs.org/wgbh/nova/tech/making-stuff.html#making-stuff-stronger']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.10.06.2
[debug] Python version 2.7.10 - Darwin-14.5.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 2.8, ffprobe 2.8, rtmpdump 2.4
[debug] Proxy map: {}
[PBS] making-stuff: Downloading webpage
[PBS] making-stuff: Downloading JSON metadata
ERROR: Unable to download JSON metadata: HTTP Error 404: NOT FOUND (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 329, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1872, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 469, in error
result = self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 656, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
```
| geo-restricted | low | Critical |
110,452,948 | go | x/crypto/ssh: TerminalModes have no impact on Session.RequestPty | ``` golang
modes := ssh.TerminalModes{
ssh.ECHO: 0,
ssh.ECHOCTL: 0,
ssh.TTY_OP_ISPEED: 14400,
ssh.TTY_OP_OSPEED: 14400,
}
session.RequestPty("xterm", 80, 40, modes)
```
Control characters are still echoed and tab does not appear to trigger autocompletion in bash.
| NeedsInvestigation | low | Minor |
110,484,697 | go | tour: rot13Reader hint | Context: http://tour.golang.org/methods/12
Great tutorial!
If I may humbly suggest, it would have helped me to be told that all bytes returned by a Read should be processed even if there is an error. For efficiency, I had put in a guard statement that skipped the rot13 whenever the enclosed reader returned an error. Although obvious after the fact, I didn't discover the source of my problem until I read the docs on Reader.
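Concretely, the hint could point toward a shape like the following — one possible sketch, assuming the tour's rot13Reader{r io.Reader} struct and a rot13 byte helper, not the tour's official solution:
``` go
func (r rot13Reader) Read(p []byte) (int, error) {
	n, err := r.r.Read(p)
	// Transform the n bytes that were read even when err != nil:
	// an io.Reader may return both data and io.EOF from the same call.
	for i := 0; i < n; i++ {
		p[i] = rot13(p[i])
	}
	return n, err
}
```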
| NeedsInvestigation | low | Critical |
110,492,789 | go | go/types: request for NewSelection constructor | Consider (*ssa.Program).MethodValue(sel *types.Selection) *ssa.Function. It requires a *types.Selection as its argument. If you have the receiver type T and the selector name (pkg, name), you can obtain a Selection by calling types.NewMethodSet(T).Lookup(pkg, name), but this is asymptotically more expensive than calling types.LookupFieldOrMethod(T, false, pkg, name).
Ideally the client would be able to call LookupFieldOrMethod and then construct a Selection (field or method) from the result, but there is no way for clients to construct a Selection.
Could we expose a NewSelection constructor?
(see #10091 for background)
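To make the asymmetry concrete — both calls below resolve the same member, but only the method-set route yields the *types.Selection that MethodValue needs:
``` go
package p

import "go/types"

func lookups(T types.Type, pkg *types.Package, name string) {
	// Cheap lookup, but the result is not a *types.Selection:
	obj, index, indirect := types.LookupFieldOrMethod(T, false, pkg, name)
	_, _, _ = obj, index, indirect

	// Builds the entire method set just to recover one Selection; this
	// is the detour a NewSelection constructor would make unnecessary:
	sel := types.NewMethodSet(T).Lookup(pkg, name)
	_ = sel
}
```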
| NeedsInvestigation | low | Minor |
110,535,073 | neovim | restore :shell command | Now, with `:terminal` merged, I think it makes sense to re-add the `:shell` command, which would basically do the following:
1. Open new tab.
2. Save `&showtabline` and `&laststatus` variables and set them to zero.
3. Run `:terminal`.
4. Add an autocmd which restores options when terminal exits.
5. Add something which makes it exit immediately instead of showing "[Program exited, press any key to close]".
Simple and easy (except that I do not know what to do with 5), and it makes NeoVim more compatible.
| compatibility | low | Major |
110,542,994 | youtube-dl | Possibility of ignoring videos without subtitle files | Hey there youtube-dl devs! I am doing some mass information farming to hopefully generate an accurate acoustic model using audio and subtitle information gathered from youtube (or otherwise). I've found your application to do almost exactly as I need it to and am thoroughly pleased with the results! Now, my questions to you are as follows: Is there any way to tell youtube-dl to ignore videos that do not have an accompanying subtitle file? If not, is there any way that youtube-dl will have this functionality in the near future? I've looked through the man-pages and couldn't find anything, so I figured I'd ask you guys! So, yeah, no bugs, issues or otherwise, just a question. I appreciate any and all responses!
Cheers,
Jay.
| request | low | Critical |
110,566,494 | go | tour: introduce collapsible hint sections on page | Some recent CLs to the tour added hints to certain problems
where the user used to have to think for themselves.
I suggest that we add collapsible hint sections to hide these
spoiler contents by default.
| NeedsInvestigation | low | Minor |
110,600,293 | rust | #[repr(C)] enum bit width determination algo does not match that of the C compiler | In C, support for enum discriminants that cannot be represented by an `int` is implementation-defined. Many compilers choose a different basic type for representation, based on the types or the values of the discriminants. This is where the algorithm currently used by Rust for `#[repr(C)]` enums may differ from the one used by the C compiler, although the difference can be observed in fairly marginal cases.
GCC uses the types of the discriminant expressions to find the best fit, so this is a 32-bit type on 64-bit Linux:
``` c
typedef enum {
A = 0x80000001,
B = -0x80000000,
} C;
```
Yet this is 64-bit:
``` c
typedef enum {
A = 0x80000001,
B = -2147483648 /* -0x80000000 */
} C;
```
This is because the type of the [integer constant](http://en.cppreference.com/w/c/language/integer_constant) `0x80000000` in the first example is determined as `unsigned int` due to its hexadecimal notation, and the negation preserves the type; the decimal constant `2147483648` in the second example instead gets a signed 64-bit type.
Rust first coerces all discriminants to `isize` and then apparently works out the best fitting representation type from the value range, where fitting means that negative values are preserved as such. So this ends up being 64-bit:
``` rust
#[repr(C)]
enum C {
A = 0x80000001,
B = -0x80000000,
}
fn main() {
println!("{}", std::mem::size_of::<C>());
}
```
I'm uncertain as to which approach is the best for fixing this. Trying to match the behavior of C compilers quirk-for-quirk does not seem to be feasible: there may be more than one compiler per target, and in fact the behavior even changes with compiler options, as e.g. the type of an integer constant is determined differently pre-C99 and post-C99. Perhaps a more conservative solution would be to lint on discriminant values that are out of the `libc::c_int` domain and suggest using fixed-width representations such as `#[repr(u32)]`.
| A-repr | low | Major |
110,674,564 | thefuck | Strange suggestions for `cd ,,` | I often type `cd ,,` when I meant to type `cd ..`. thefuck suggests the following things for that command.
- `mkdir -p ,, && cd ,,`
- `mkdir -p ,, && cd ,,` (yes, the same twice)
- `cd .`
It would be great to get rid of the duplicate and to add `cd ..` as well
| help wanted | low | Minor |
110,684,867 | TypeScript | Performance of getSemanticDiagnostics in compiler API | The setup is somewhat similar to the one in the compiler API example (https://github.com/Microsoft/TypeScript/wiki/Using-the-Compiler-API#incremental-build-support-using-the-language-services)
The problem, as in the example, is that errors are reported only for files that were directly changed. This doesn't cover the case in which the changed file is a dependency of another file: even though the dependent file was not changed, it may now have errors, and those won't be reported.
To be more specific, the setup I am talking about is ts-loader + webpack.
What happens is that `getSemanticDiagnostics(file)` has to be called for every file in the project, which ends up increasing the incremental build times considerably. Even `program.getSemanticDiagnostics()` iterates through all of the source files as it is seen in the typescript source code.
I can only see this problem fixed from the compiler API. It could expose a method which finds all files dependent on a given file. Because in an incremental build only a single file changed, this should be faster than calling `getSemanticDiagnostics()` for each and every project file.
The alternative would be some sort of caching on `getSemanticDiagnostics()` itself, but the compiler would probably have to do internally the same work as above to invalidate the cache on file change. This seems to be equivalent to a `getSemanticDiagnosticsForFileAndDependents(file)`-like function.
| Suggestion,Help Wanted,API | low | Critical |
110,806,522 | go | cmd/link: Incorrect DWARF scope representation | The DWARF information emitted does not correctly represent the scoping of variables within a procedure.
To illustrate, given the following Go source code:
``` go
package main
import "fmt"
func main() {
mystr := "foo"
{
mystr := "bar"
fmt.Println(mystr)
}
fmt.Println(mystr)
}
```
The relevant information produced in .debug_info is as follows:
```
<1><58>: Abbrev Number: 2 (DW_TAG_subprogram)
<59> DW_AT_name : main.main
<63> DW_AT_low_pc : 0x401000
<6b> DW_AT_high_pc : 0x401240
<73> DW_AT_external : 1
<2><74>: Abbrev Number: 4 (DW_TAG_variable)
<75> DW_AT_name : mystr
<7b> DW_AT_location : 5 byte block: 9c 11 80 7f 22 > (DW_OP_call_frame_cfa; DW_OP_consts: -128; DW_OP_plus)
<81> DW_AT_type : <0x34e72>
<2><89>: Abbrev Number: 4 (DW_TAG_variable)
<8a> DW_AT_name : mystr
<90> DW_AT_location : 5 byte block: 9c 11 90 7f 22 > (DW_OP_call_frame_cfa; DW_OP_consts: -112; DW_OP_plus)
<96> DW_AT_type : <0x34e72>
```
As you can see, both variables are represented at depth <2> in the tree, however, one should be at depth <3>, following a depth <2> entry of a `DW_AT_lexical_block` with low/high pc values.
The first fix is to produce a tree that properly reflects the lexical blocks in the source code. Second, it would be great for `DW_TAG_variable`, `DW_TAG_formal_parameter` and family to include `DW_AT_decl_file` and `DW_AT_decl_line` attributes within their entries.
| help wanted,NeedsFix,Debugging,compiler/runtime | medium | Critical |
110,814,561 | rust | Have rustdoc create hyperlinks on items in compiled code examples | Currently we just generate HTML for the code examples by compiling markdown.
However, rustdoc could compile these examples with a plugin that emits resolved path information for each `ExprPath` and import being used. This part isn't too hard. Might actually be useful as a plugin to be run on independent rust programs.
Then, we need to tweak the syntax highlighting bits so that these paths (those with a known span, that is), get linked appropriately.
cc @alexcrichton @steveklabnik
| T-rustdoc,E-hard,C-feature-request | low | Major |
110,821,971 | rust | Error when casting a function is not informative | ``` rust
fn heap_size(_: &usize) -> usize {
0
}
const f: *const fn(*const usize) -> usize = (&heap_size as *const fn(&usize) -> usize) as *const fn(*const usize) -> usize;
fn main() {
}
```
yields
```
<anon>:5:46: 5:86 error: casting `&fn(&usize) -> usize {heap_size}` as `*const fn(&usize) -> usize` is invalid
<anon>:5 const f: *const fn(*const usize) -> usize = (&heap_size as *const fn(&usize) -> usize) as *const fn(*const usize) -> usize;
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: aborting due to previous error
```
This error tells me nothing about why it's invalid, and there's no extended explanation.
| A-diagnostics,T-compiler,C-bug | low | Critical |
110,879,513 | go | tour: automatic save of the last accessed tour page | I would like to suggest adding a new feature: automatically saving the last accessed tour page, so users can resume from the page they left. I don't think newcomers will read everything in one shot.
Thx
| NeedsInvestigation | low | Minor |
110,953,351 | TypeScript | Namespace elements cannot export without declaration | ES6 modules can export module elements without a declaration.
``` ts
import foo from 'foo';
export {foo}
```
Namespaces are not.
``` ts
import foo from 'foo';
namespace NS {
export {foo} // not support syntax
export var foo = foo; // invalid
var bar = foo;
export var foo = bar; // too redundant
}
```
Namespaces should support non-declaration export.
related: https://github.com/Microsoft/TypeScript/issues/5175
| Suggestion,In Discussion | low | Major |
111,006,598 | nvm | .bash_profile vs. .bashrc | On Mac OS X the `.bash_profile` is loaded by default, but the `.bashrc` file is not. When installing nvm, it will write to the `.bashrc` file if it exists, but the nvm commands then won't be accessible at the command line.
Adding `source ~/.bashrc` to the end of the `.bash_profile` makes the commands available but seems a bit awkward.
Moving
``` sh
export NVM_DIR="/Users/greg/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
```
to the `.bash_profile` works too, and seems a more sensible way to go.
I'm thinking installing NVM should write to the `.bash_profile` over the `.bashrc` if both exist. At least I think that would play nicer with Mac OS X
Note: tried this on both OS X 10.10 & 10.11
| installing nvm: profile detection,pull request wanted | low | Major |
111,028,950 | TypeScript | Proposal: The internal modifier for classes | # The `internal` modifier
Often there is a need to share information on types within a program or package that should not be
accessed from outside of the program or package. While the `public` accessibility modifier allows
types to share information, is insufficient for this case as consumers of the package have access
to the information. While the `private` accessibility modifier prevents consumers of the package
from accessing information from the type, it is insufficient for this case as types within the
package also cannot access the information. To satisfy this case, we propose the addition of the
`internal` modifier to class members.
## Goals
This proposal aims to describe the static semantics of the `internal` modifier as it applies to members of a class (methods, accessors, and properties).
## Non-goals
This proposal does not cover any other use of the `internal` modifier on other declarations.
# Static Semantics
## Visibility
Within a _non-declaration_ file, a class member marked `internal` is treated as if it had `public`
visibility for any property access:
**source.ts**:
``` ts
class C {
internal x: number;
}
let c = new C();
c.x = 1; // ok
```
When consuming a class from a _declaration_ file, a class member marked `internal` is treated as
if it had `private` visibility for any property access:
**declaration.d.ts**
``` ts
class C {
internal x: number;
}
```
**source.ts**
``` ts
/// <reference path="declaration.d.ts" />
let c = new C();
c.x = 1; // error
```
## Assignability
When checking assignability of types within a _non-declaration_ file, a class member marked
`internal` is treated as if it had `public` visibility:
**source.ts**:
``` ts
class C {
internal x: number;
}
interface X {
x: number;
}
let x: X = new C(); // ok
```
If one of the types is imported or referenced from a _declaration_ file, but the other is defined
inside of a _non-declaration_ file, a class member marked `internal` is treated as if it had
`private` visibility:
**declaration.d.ts**
``` ts
class C {
internal x: number;
}
```
**source.ts**
``` ts
/// <reference path="declaration.d.ts" />
interface X {
x: number;
}
let x: X = new C(); // error
```
It is important to allow assignability between super- and subclasses from a declaration file
with overridden members marked `internal`. When both types are imported or referenced from a
_declaration_ file, a class member marked `internal` is treated as if it had `protected` visibility:
**declaration.d.ts**
``` ts
class C {
internal x(): void;
}
class D extends C {
internal x(): void;
}
```
**source.ts**
``` ts
/// <reference path="declaration.d.ts" />
let c: C = new D(); // ok
```
However, this does not carry over to subclasses that are defined in a _non-declaration_ file:
**declaration.d.ts**
``` ts
class C {
internal x(): void;
}
```
**source.ts**
``` ts
/// <reference path="declaration.d.ts" />
class D extends C {
internal x(): void; // error
}
```
| Suggestion,In Discussion | high | Critical |
111,043,439 | youtube-dl | [archive.org] Add support for collections | ```
css80@allied80 /cygdrive/c/Users/css80/Desktop
$ ./youtube-dl.exe --verbose https://archive.org/details/attentionkmartshoppers
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'https://archive.org/details/attentionkmartshoppers']
[debug] Encodings: locale cp1252, fs mbcs, out None, pref cp1252
[debug] youtube-dl version 2015.10.12
[debug] Python version 2.7.8 - Windows-8-6.2.9200
[debug] exe versions: ffmpeg N-75841-g5911eeb, ffprobe N-75841-g5911eeb
[debug] Proxy map: {}
[archive.org] attentionkmartshoppers: Downloading JSON metadata
ERROR: attentionkmartshoppers: Failed to parse JSON (caused by ValueError('No JSON object could be decoded',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "youtube_dl\extractor\common.pyo", line 483, in _parse_json
File "json\__init__.pyo", line 338, in loads
File "json\decoder.pyo", line 366, in decode
File "json\decoder.pyo", line 384, in raw_decode
ValueError: No JSON object could be decoded
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 660, in extract_info
File "youtube_dl\extractor\common.pyo", line 290, in extract
File "youtube_dl\extractor\archiveorg.pyo", line 37, in _real_extract
File "youtube_dl\extractor\common.pyo", line 477, in _download_json
File "youtube_dl\extractor\common.pyo", line 487, in _parse_json
ExtractorError: attentionkmartshoppers: Failed to parse JSON (caused by ValueError('No JSON object could be decoded',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
| request | low | Critical |
111,048,579 | kubernetes | Label selector design doc | Capture design discussion from #15350, #341, #951, other PRs, and internal docs.
| priority/backlog,kind/documentation,area/api,help wanted,sig/architecture,lifecycle/frozen | low | Major |
111,229,130 | go | time: Parse too greedy when parsing some formats | 1. What version of Go are you using (`go version`)?
`$ go version go version devel +2d823fd Tue Oct 13 01:04:42 2015 +0000 linux/amd64`
2. What operating system and processor architecture are you using?
linux/amd64
3. What did you do?
Attempted to parse this time: `"618 AM MDT TUE OCT 13 2015"` with this format string: `"304 PM MST Mon Jan 02 2006`"
4. What did you expect to see?
No parsing error, and a time.Time representing that moment.
5. What did you see instead?
The parsing fails, and the error reported: `parsing time "618 AM MDT TUE OCT 13 2015": hour out of range`. The hour is interpreted as `61`.
The format works properly with a two digit hour. It seems `getnum()` in time/format.go takes up to two characters whenever the second character is valid which leads to the issue.
6. Playground example with examples that fail and pass for the given format:
http://play.golang.org/p/mZ-m7gJlvy
Apologies if my code is messy...
The format is used as a time/version stamp for the US National Weather Service in their forecast discussion products, for a real world example, http://www.wrh.noaa.gov/total_forecast/getprod.php?wfo=boi&pil=AFD&sid=BOI&version=1
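For convenience, a self-contained reproduction distilled from the report above:
``` go
package main

import (
	"fmt"
	"time"
)

func main() {
	const format = "304 PM MST Mon Jan 02 2006"
	// A two-digit hour parses as expected.
	if _, err := time.Parse(format, "1018 AM MDT TUE OCT 13 2015"); err != nil {
		fmt.Println("unexpected:", err)
	}
	// A one-digit hour does not: getnum greedily consumes "61" as the
	// hour, so Parse reports "hour out of range".
	_, err := time.Parse(format, "618 AM MDT TUE OCT 13 2015")
	fmt.Println(err)
}
```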
Thank you.
| NeedsDecision | low | Critical |
111,295,172 | rust | Can't import super | I'm not sure if this falls into the realm of something needing an RFC or not, but I expected this to work:
``` rust
mod a {
use super as foo;
}
```
but it gives `error: expected identifier, found keyword `super``.
If I do it in an import list I get a different error:
``` rust
mod a {
use super::{self as foo};
}
```
`error: unresolved import `super`. There is no `super` in `???``
(I suppose this also applies to `self`, but I can't think of a reason to do that.)
Edit: This seems to work as a substitute:
``` rust
mod a {
mod foo { pub use super::super::*; }
}
```
| A-resolve,T-lang,C-bug | low | Critical |
111,361,577 | go | x/mobile/cmd/gomobile: CGO_CFLAGS is overwritten during go bind -target=ios | hi,
I faced an issue when I run gomobile bind with CGO_CFLAGS and CGO_LDFLAGS like
```
CGO_CFLAGS="-I/usr/local/include" \
CGO_LDFLAGS="-L/usr/local/lib" gomobile build -target=ios -v
```
but the output told me that a header file was not found.
It turns out that when building for iOS, CGO_CFLAGS and CGO_LDFLAGS are overwritten.
https://github.com/golang/mobile/blob/master/cmd/gomobile/env.go#L132
I have a way to workaround this:
```
index 5458bc2..00cb78c 100644
--- a/cmd/gomobile/env.go
+++ b/cmd/gomobile/env.go
@@ -119,8 +119,8 @@ func envInit() (err error) {
"GOARM=7",
"CC=" + clang,
"CXX=" + clang,
- "CGO_CFLAGS=" + cflags + " -arch " + archClang("arm"),
- "CGO_LDFLAGS=" + cflags + " -arch " + archClang("arm"),
+ "CGO_CFLAGS=" + os.Getenv("CGO_CFLAGS") + " " + cflags + " -arch " + archClang("arm"),
+ "CGO_LDFLAGS=" + os.Getenv("CGO_LDFLAGS") + " " + cflags + " -arch " + archClang("arm"),
"CGO_ENABLED=1",
}
darwinArm64Env = []string{
@@ -128,8 +128,8 @@ func envInit() (err error) {
"GOARCH=arm64",
"CC=" + clang,
"CXX=" + clang,
- "CGO_CFLAGS=" + cflags + " -arch " + archClang("arm64"),
- "CGO_LDFLAGS=" + cflags + " -arch " + archClang("arm64"),
+ "CGO_CFLAGS=" + os.Getenv("CGO_CFLAGS") + " " + cflags + " -arch " + archClang("arm64"),
+ "CGO_LDFLAGS=" + os.Getenv("CGO_LDFLAGS") + " " + cflags + " -arch " + archClang("arm64"),
"CGO_ENABLED=1",
}
@@ -142,8 +142,8 @@ func envInit() (err error) {
"GOARCH=386",
"CC=" + clang,
"CXX=" + clang,
- "CGO_CFLAGS=" + cflags + " -mios-simulator-version-min=6.1 -arch " + archClang("386"),
- "CGO_LDFLAGS=" + cflags + " -mios-simulator-version-min=6.1 -arch " + archClang("386"),
+ "CGO_CFLAGS=" + os.Getenv("CGO_CFLAGS") + " " + cflags + " -mios-simulator-version-min=6.1 -arch " + archClang("386"),
+ "CGO_LDFLAGS=" + os.Getenv("CGO_LDFLAGS") + " " + cflags + " -mios-simulator-version-min=6.1 -arch " + archClang("386"),
"CGO_ENABLED=1",
}
darwinAmd64Env = []string{
```
Let me know if you think this is an issue.
| mobile | low | Critical |
111,498,795 | go | cmd/link: DWARF line tables seem to be off by 1 instruction | go version go1.5.1 darwin/amd64
I was noticing that values printed in the debugger often seem to be uninitialized when stepping past a line that initializes them. It appears that the DWARF line tables are advancing the line number one instruction too early. For example, with this go code
``` go
23 func main() {
24 lf, err := os.OpenFile("test", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
25 if err != nil {
26 log.Fatalf("error opening file: %v", err)
27 }
28 ....
```
dwarfdump shows this for the line table:
```
0x000063e4: DW_LNE_set_address( 0x0000000004002e90 )
0x000063ef: DW_LNS_advance_line( 22 )
0x000063f1: address += 0, line += 0
0x0000000004002e90 1 23 0 is_stmt
0x000063f2: DW_LNS_advance_pc( 100 )
0x000063f4: address += 0, line += 1
0x0000000004002ef4 1 24 0 is_stmt
0x000063f5: DW_LNS_advance_pc( 70 )
0x000063f7: address += 0, line += 1
0x0000000004002f3a 1 25 0 is_stmt
```
It shows line 25 starting at 0x4002f3a. However, from the disassembly this appears to be the last instruction of line 24. Line 25 starts at 0x4002f42:
``` assembly
test`main.main:
0x4002e90 <+0>: movq %gs:0x8a0, %rcx
0x4002e99 <+9>: leaq -0x160(%rsp), %rax
0x4002ea1 <+17>: cmpq 0x10(%rcx), %rax
0x4002ea5 <+21>: jbe 0x4003d32 ; <+3746> at test.go:23
0x4002eab <+27>: subq $0x1e0, %rsp
0x4002eb2 <+34>: xorl %eax, %eax
0x4002eb4 <+36>: movq %rax, 0x1c0(%rsp)
0x4002ebc <+44>: movq %rax, 0x1c8(%rsp)
0x4002ec4 <+52>: movq %rax, 0x1d0(%rsp)
0x4002ecc <+60>: movq %rax, 0x1d8(%rsp)
0x4002ed4 <+68>: movq %rax, 0x170(%rsp)
0x4002edc <+76>: movq %rax, 0x178(%rsp)
0x4002ee4 <+84>: movq %rax, 0x130(%rsp)
0x4002eec <+92>: movq %rax, 0x138(%rsp)
0x4002ef4 <+100>: leaq 0x3b6865(%rip), %rbx ; go.string.* + 93856
0x4002efb <+107>: movq %rbx, (%rsp)
0x4002eff <+111>: movq $0xb, 0x8(%rsp)
0x4002f08 <+120>: movq $0x20a, 0x10(%rsp)
0x4002f11 <+129>: movl $0x1b6, 0x18(%rsp)
0x4002f19 <+137>: callq 0x40b8cb0 ; os.OpenFile at file_unix.go:85
0x4002f1e <+142>: movq 0x20(%rsp), %rbx
0x4002f23 <+147>: movq %rbx, 0x70(%rsp)
0x4002f28 <+152>: movq 0x28(%rsp), %rcx
0x4002f2d <+157>: movq 0x30(%rsp), %rdx
0x4002f32 <+162>: movq %rdx, 0xe8(%rsp)
-> 0x4002f3a <+170>: movq %rcx, 0xe0(%rsp)
0x4002f42 <+178>: cmpq $0x0, %rcx
```
| compiler/runtime | low | Critical |
111,512,248 | kubernetes | Write resource management overview | We need an overall design for how all our resource-management features should interact:
- horizontal pod autoscaling
- vertical pod autoscaling (initial and reactive)
- resource metrics APIs
- oversubscription
- scheduling (including taking resource usage into account)
- QoS
- out-of-resource killing
- rescheduling
cc @davidopp @fgrzadkowski @dchen1107 @vishh @derekwaynecarr
| priority/backlog,sig/scheduling,area/isolation,sig/node,kind/feature,lifecycle/frozen,needs-triage | medium | Major |
111,512,436 | rust | Higher-ranked lifetime bounds give confusing errors | ``` rust
#![allow(dead_code)]
fn x(_: &()) {}
trait HR {}
impl HR for fn(&()) {}
fn hr<T: HR>(_: T) {}
trait NotHR {}
impl<'a> NotHR for fn(&'a ()) {}
fn not_hr<T: NotHR>(_: T) {}
fn a<'a>() {
    let not_hr_func: fn(&'a ()) = x;
    let hr_func: fn(&()) = x;
    let hr_func2: for<'b> fn(&'b ()) = x;
    hr(not_hr_func);
    not_hr(hr_func);
    not_hr(hr_func2);
}
fn main() {}
```
gives
```
<anon>:17:5: 17:7 error: the trait `HR` is not implemented for the type `fn(&())` [E0277]
<anon>:17 hr(not_hr_func);
^~
<anon>:17:5: 17:7 help: see the detailed explanation for E0277
<anon>:17:5: 17:7 note: required by `hr`
<anon>:18:5: 18:11 error: the trait `NotHR` is not implemented for the type `fn(&())` [E0277]
<anon>:18 not_hr(hr_func);
^~~~~~
<anon>:18:5: 18:11 help: see the detailed explanation for E0277
<anon>:18:5: 18:11 note: required by `not_hr`
<anon>:19:5: 19:11 error: the trait `NotHR` is not implemented for the type `fn(&'b ())` [E0277]
<anon>:19 not_hr(hr_func2);
^~~~~~
<anon>:19:5: 19:11 help: see the detailed explanation for E0277
<anon>:19:5: 19:11 note: required by `not_hr`
error: aborting due to 3 previous errors
playpen: application terminated with error code 101
```
Summarizing, given the type declaration, we have the printed form:
| Declaration | Display |
| --- | --- |
| `fn(&'a ())` | `fn(&())` |
| `fn(&())` | `fn(&())` |
| `for<'b> fn(&'b ())` | `fn(&'b ())` |
So the non-higher-ranked type has its lifetime elided and is printed like a higher-ranked type, while higher-ranked types are sometimes printed without the quantifier but with the lifetime, making them look like non-higher-ranked types.
| C-enhancement,A-diagnostics,A-lifetimes,T-compiler,A-higher-ranked | low | Critical |
111,529,445 | go | spec: document that Alignof, Offsetof, and Sizeof do not evaluate their arguments | At least in cmd/compile, unsafe.Alignof, unsafe.Offsetof, and unsafe.Sizeof do not evaluate their arguments: http://play.golang.org/p/4QE3mVrFaS
Although this is unsurprising and consistent with C/C++, the Go spec doesn't mention it. It does say
> Calls to Alignof, Offsetof, and Sizeof are compile-time constant expressions of type uintptr.
but I don't think that's necessarily mutually exclusive with evaluating the argument expression.
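For concreteness, here is a minimal sketch of the behavior (standing in for the playground link above, whose contents aren't reproduced here): with cmd/compile, the side effect in `f` never fires, because `Sizeof` only inspects the operand's type.
``` go
package main

import (
	"fmt"
	"unsafe"
)

func f() int {
	fmt.Println("evaluated") // never printed by cmd/compile
	return 0
}

func main() {
	// The argument expression f() is not evaluated; only its type matters.
	fmt.Println(unsafe.Sizeof(f())) // 8 on 64-bit platforms
}
```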
| Documentation,NeedsInvestigation | low | Minor |
111,530,913 | go | spec: ambiguity in definition of alignment | The spec (September 24, 2015) says:
> The following minimal alignment properties are guaranteed:
> 1. For a variable x of any type: unsafe.Alignof(x) is at least 1.
> 2. For a variable x of struct type: unsafe.Alignof(x) is the largest of all the values unsafe.Alignof(x.f) for each field f of x, but at least 1.
> 3. For a variable x of array type: unsafe.Alignof(x) is the same as unsafe.Alignof(x[0]), but at least 1.
While intuitive, this definition doesn't address structs with blank fields (because there's no valid x.f expression to access those fields) or zero-element arrays (because x[0] is an illegal expression, as 0 is a constant that's out of the array's bounds).
It seems like these can be addressed by changing the wording to something like:
1. The alignment of any type is at least 1.
2. The alignment of a struct type is at least as large as the alignment of each of its fields.
3. The alignment of an array type is the same as the alignment of its element type.
(The original definition repeats the "but at least 1" wording each time, but that seems redundant: rule 1 already applies to every type, and nothing suggests rules 2 and 3 override it.)
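For reference, a small program exercising the two cases the wording misses (a struct whose only field is blank, and a zero-element array); the type and variable names are just for illustration:
``` go
package main

import (
	"fmt"
	"unsafe"
)

type onlyBlank struct {
	_ int64 // no selector expression s.f can reach this field
}

func main() {
	var s onlyBlank
	var a [0]int64
	// Rule 2 can't be read literally: there is no valid s.f expression.
	fmt.Println(unsafe.Alignof(s))
	// Rule 3 can't be read literally: a[0] is out of bounds for [0]int64.
	fmt.Println(unsafe.Alignof(a))
}
```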
| Documentation,NeedsInvestigation | low | Minor |
111,612,193 | neovim | Uses `set_a_foreground` and `set_a_background` for invalid colour indexes | I'm trying to debug a problem a user has relating to terminal rendering, and I'm looking over a `script` log of neovim's output. One thing that initially troubles me is there are places in the log where it sends
```
SGR 310
SGR 312
SGR 314
```
These aren't known to `libvterm`, nor can I find reference to them in any terminal docs I have to hand.
Looking more closely at the places they appear, it seems they're intended as colour formatting; e.g.
```
{ESC (B}{SGR *}{SGR 36}{SGR 48}0
{ESC (B}{SGR *}{SGR 312}{SGR 48}]
```
A careful reading of the `set_a_foreground` terminfo entry reveals a likely candidate:
```
set_a_foreground=\E[3%p1%dm
```
The parameter gets printed verbatim, as if in a call to
```
printf("\e[3%dm", param);
```
at which point we can surmise that these are coming from `set_a_foreground` called with a parameter value of 10, 12 or 14.
This is outside the range that should be allowed, because the terminfo starts by declaring
```
max_colors#8
```
It is important that `set_a_foreground` and `set_a_background` are not invoked with a number outside the range allowed by `max_colors`; otherwise the emitted escape sequence is malformed, as seen above.
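To make the failure mode concrete, here is a hypothetical sketch (in Go, not neovim's actual C code) of the substitution at fault: with `\E[3%p1%dm`, any index of 10 or more runs its digits together with the literal `3`, producing exactly the bogus SGR numbers in the log.
``` go
package main

import "fmt"

// setAForeground mimics the terminfo string \E[3%p1%dm: the parameter
// is printed verbatim after the literal "3".
func setAForeground(idx int) string {
	return fmt.Sprintf("\x1b[3%dm", idx)
}

func main() {
	for _, idx := range []int{6, 10, 12, 14} {
		fmt.Printf("index %2d -> %q\n", idx, setAForeground(idx))
	}
	// index  6 -> "\x1b[36m"  (valid: SGR 36)
	// index 10 -> "\x1b[310m" (bogus: SGR 310)
}
```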
| tui | low | Critical |
111,743,443 | go | runtime: merge .go files produced by cgo into their counterparts | Merge
- `panic1.go` -> `panic.go`
- `defs1_darwin.go` + `defs2_darwin.go` -> `defs_darwin.go`
- etc
Discussed in https://groups.google.com/forum/#!topic/golang-nuts/yuh2fzooEpM
I'd like to work on this.
| compiler/runtime | low | Major |