id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
148,968,992 | TypeScript | Use function type shorthands when displaying signature help | _From @unional on April 14, 2016 23:32_

It would be great if, when typing `tape((<cursor>))`, it could drill into the `tape.TestCase` and show the signature of the callback. :tulip:
This is probably on `tsc`, but let's start here. :smile:
_Copied from original issue: Microsoft/vscode#5284_
| Suggestion,Help Wanted,API,VS Code Tracked | low | Major |
148,974,532 | TypeScript | Cannot get symbol inside class decorator (Cannot read property 'members' of un...) | **TypeScript Version:**
1.8.10
**Code**
``` ts
@Component({
    selector: SELECTOR
})
class SampleComponent {}
const SELECTOR = 'ng-demo';
```
**Expected behavior:**
With a sample [`SyntaxWalker`](https://github.com/palantir/tslint/blob/master/src/language/walker/syntaxWalker.ts) based on `tslint`, once I visit the `PropertyAssignment` `selector: SELECTOR` and invoke `typeChecker.getSymbolAtLocation(prop.initializer);`, I should get the `SELECTOR` symbol.
**Actual behavior:**
```
TypeError: Cannot read property 'members' of undefined
at resolveName (node_modules/typescript/lib/typescript.js:15307:73)
at resolveEntityName (node_modules/typescript/lib/typescript.js:15725:26)
at getSymbolOfEntityNameOrPropertyAccessExpression (node_modules/typescript/lib/typescript.js:28708:28)
at Object.getSymbolAtLocation (node_modules/typescript/lib/typescript.js:28770:28)
```
| Bug,Help Wanted,API | low | Critical |
149,011,192 | youtube-dl | Site request: h-a.no | ## Please follow the guide below
- You will be asked some questions and requested to provide some information; please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x])
- Use the _Preview_ tab to see how your issue will actually look
---
### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.04.13_. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with an outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.13**
### Before submitting an _issue_ make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this _issue_ is a _site support request_ please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.h-a.no/sport/se-hoydepunktene-her?AutoPlay=true
---
### Description of your _issue_, suggested solution and other information
Hi, I don't know if I understood this new guide thing, but I'll try :-) I just wanted to request support for h-a.no, as in the above URL. :-) Thanks!
| site-support-request,geo-restricted | low | Critical |
149,037,249 | go | git-codereview: allow disabling of "foo.mailed" tag creation | Every time you run "git codereview mail" it creates a new tag (as per [its doc](https://godoc.org/golang.org/x/review/git-codereview)).
I've never actually used these tags for their advertised purpose (if I did need to diff against my last mailed commit, I would just use the advertised commit hash from within gerrit), so I don't benefit from them.
OTOH, they do interrupt my workflow. For example, the [common git bash completion script](https://github.com/git/git/blob/master/contrib/completion/git-completion.bash) prints all pending tags when auto-completing `git commit <TAB>`, and the sheer number of mailed tags obfuscates any useful information.
So, this is a feature request to disable their automatic creation.
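Until such an option exists, a stopgap is to delete the mailed tags in bulk (a sketch; the `*.mailed` pattern follows the tag naming described in the git-codereview doc, and `xargs -r` assumes GNU xargs):

```shell
# Delete all "<branch>.mailed" tags created by `git codereview mail`.
# -r keeps xargs from running `git tag -d` when no tags match.
git tag -l '*.mailed' | xargs -r git tag -d
```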
| NeedsInvestigation | low | Major |
149,040,950 | go | debug/pe: extend package so it can be used by cmd/link | CC: @minux @ianlancetaylor @crawshaw
cmd/link and debug/pe share no common code, but they should - they do the same thing. If package debug/pe is worth its weight, it should be used by cmd/link. I accept that this was impossible to do when cmd/link was a C program, but now cmd/link is written in Go.
I also tried to use debug/pe in github.com/alexbrainman/goissue10776/pedump, and it is not very useful. I ended up copying and changing some debug/pe code.
I also think there is some lack of PE format knowledge among us, so improving debug/pe's structure and documentation should help with that.
I tried to rewrite src/cmd/link/internal/ld/ldpe.go by using debug/pe (see CL 14289). I had to extend debug/pe for that task. Here is the list of externally visible changes I had to make:
- PE relocations have SymbolTableIndex field that is an index into symbols table. But File.Symbols slice has Aux lines removed as it is built, so SymbolTableIndex cannot be used to index into File.Symbols. We cannot change File.Symbols behavior. So I propose we introduce File.COFFSymbols slice that is like File.Symbols, but with Aux lines left untouched.
- I have introduced StringTable that was used to convert long names in PE symbol table and PE section table into Go strings.
- I have also introduced Section.Relocs to access PE relocations.
I propose we add the above things to debug/pe and use new debug/pe to rewrite ldpe.go and pe.go in cmd/link/internal/ld.
Alex
PS: You can google for pecoff.doc for PE details.
| help wanted,NeedsFix | medium | Critical |
149,072,274 | opencv | minMaxIdx not available in the JNI binding | Hi,
I noticed that the function minMaxIdx (core) is not available through the JNI binding. Only minMaxLoc is available, which is a problem given that minMaxLoc is limited to 2-dimensional arrays.
- OpenCV version: 3.1.0
- Host OS: Linux (Mint 17.3)
This issue was discussed here: http://answers.opencv.org/question/92676/minmaxidx-missing-in-the-jni-interface
| feature,affected: 3.4,category: java bindings | low | Minor |
149,072,396 | three.js | Proposal: Making flattened math classes that operate directly on views the default | ##### Description of the problem
Clara.io has been struggling with the non-memory efficient Three.JS math classes for four years now. Basically Three.JS math classes use object members to represent their data, e.g.:
``` js
THREE.Vector3 = function ( x, y, z ) {
	this.x = x;
	this.y = y;
	this.z = z;
};
```
The pattern above leads to a lot of inefficiencies. Each value is a mutable object, with a large per-object overhead in allocation time and extra memory usage. There are also costs to copy from a BufferGeometry array to a Vector3 and vice versa, and to flatten these arrays when using them as uniforms.
As we are looking to speed up Clara.io, I think we need to move away from object-based math classes and instead make the hard switch to math classes that are based around internal arrays. This would be the primary representation of the Three.JS math types, although we could still emulate the object-based design we have by building on top of these. Thus I want to see this design being the primary design for all of Three.JS math:
``` js
Vector3.set = function ( array, x, y, z ) {
	array[ 0 ] = x;
	array[ 1 ] = y;
	array[ 2 ] = z;
};

Vector3.add = function ( target, lhs, rhs ) {
	target[ 0 ] = lhs[ 0 ] + rhs[ 0 ];
	target[ 1 ] = lhs[ 1 ] + rhs[ 1 ];
	target[ 2 ] = lhs[ 2 ] + rhs[ 2 ];
};
```
I think the above pattern is the only way that Clara.io can achieve the speed it needs.
Important point: I was talking to one of the JavaScript spec designers and he said that we should use large arraybuffer allocations and then try to use views for each individual math element in a large array (e.g. var usernameView = new Float32Array(buffer, 4, 16) ). He said that views were incredibly cheap as now you do not even pay an array allocation cost per element.
I think we could keep around the Vector3 object (as opposed to the new functional interface), but it could be built by creating a BufferArray (or alternatively initialized by a passed in view) and remembering it, and then using all of the pure array buffer functions I am advocating as the primary interface. It could have accessors for x, y, and z that map onto the BufferArray. This would give us backwards compatibility.
But going forward it would likely be best to nearly always use the versions of the math classes that can operate directly on a preallocated array. This design would be ultra fast -- no allocations for the most part, and ultra cheap and easy conversions to uniforms and BufferGeometry attributes.
I think this is necessary for Clara.io to move to the next level and I think Three.JS wants to get there as well. We, ThreeJS, have been moving to this solution slowly but we've never made the full break to transform the primary math-objects. I think it is time to do this.
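A minimal sketch of the view-based idea (the names here are illustrative, not an actual three.js API): one ArrayBuffer backs many vectors, each vector is a cheap Float32Array view over it, and the static functions operate directly on the views:

```javascript
// One backing buffer for 1000 vec3s (3 floats * 4 bytes each).
var buffer = new ArrayBuffer( 1000 * 3 * 4 );

// vec3 view over element i of the buffer -- no per-element copy,
// just a view object pointing into the shared allocation.
function vec3View( buffer, i ) {
	return new Float32Array( buffer, i * 3 * 4, 3 );
}

// Static, allocation-free math operating directly on views/arrays.
var Vec3 = {
	set: function ( array, x, y, z ) {
		array[ 0 ] = x; array[ 1 ] = y; array[ 2 ] = z;
	},
	add: function ( target, lhs, rhs ) {
		target[ 0 ] = lhs[ 0 ] + rhs[ 0 ];
		target[ 1 ] = lhs[ 1 ] + rhs[ 1 ];
		target[ 2 ] = lhs[ 2 ] + rhs[ 2 ];
	}
};

var a = vec3View( buffer, 0 );
var b = vec3View( buffer, 1 );
Vec3.set( a, 1, 2, 3 );
Vec3.set( b, 4, 5, 6 );
Vec3.add( a, a, b ); // a is now [5, 7, 9], written straight into the buffer
```

Because the result lands directly in the backing buffer, uploading it as a uniform or BufferGeometry attribute needs no flattening step.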
/ping @WestLangley
##### Three.js version
- [x] Dev
- [ ] r75
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] Linux
- [ ] Android
- [ ] IOS
##### Hardware Requirements (graphics card, VR Device, ...)
| Suggestion | medium | Major |
149,168,960 | react | Iframe load event not firing in Chrome and Safari when src is 'about:blank' | See: https://jsfiddle.net/pnct6b7r/
It will not trigger the alert in Chrome and Safari, but it will work in Firefox and even IE8.
Is this a React issue or Webkit issue? If it is a Webkit issue, should we "fix it" in React given that we want [consistent events across browsers](http://facebook.github.io/react/docs/events.html)?
PS: The JSFiddle was based on issue #5332.
| Type: Bug,Component: DOM | low | Major |
149,183,640 | opencv | Mask is ignored in cv::CascadeClassifier | Output of mask generated by maskGenerator of cv::CascadeClassifier (see https://github.com/Itseez/opencv/blob/master/modules/objdetect/src/cascadedetect.cpp#L996) is ignored in detection step. It is assigned, but never actually used.
### In which part of the OpenCV library you got the issue?
- objdetect
### Expected behaviour
When my program provides a custom (subclass of) cv::BaseCascadeClassifier::MaskGenerator object to cv::CascadeClassifier, it should use the masks it produces to run the detection algorithm only on the areas defined by the mask (and therefore speed up the process when the mask is "sparse").
### Actual behaviour
The detection is run on the whole area of the image anyway, and it is very slow for huge images with a "sparse" detection mask.
### Additional description
I have trouble building and testing OpenCV myself, but I have managed to edit "source\opencv\modules\objdetect\src\cascadedetect.cpp" and add a mask check by adding the following lines at line 996:
``` c++
if (mask.empty() ||
    ((x >= 0) && (y >= 0) &&
     (x < mask.cols) && (y < mask.rows) &&
     (mask.at<uchar>(y, x) != 0)))
{
```
and putting lines 996 to 1022 into the if block.
| feature,category: objdetect | low | Major |
149,251,959 | go | flag: handles unknown arguments in an unexpected, less-helpful way | If the flag library encounters an unknown parameter, it discards that parameter and dumps everything that follows into the positional arguments bucket.
Instead, the flag library should (optionally, probably when ContinueOnError is specified) put all unknown parameters in the positional arguments bucket (returned by Args) and keep attempting to parse the following arguments (unless it encounters something that clearly indicates the beginning of positional arguments, such as the naked double-dash "--").
1. What version of Go are you using (`go version`)?
go version go1.3.3 linux/amd64
2. What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOCHAR="6"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/azani/go"
GORACE=""
GOROOT="/usr/lib/go"
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
3. What did you do?
https://play.golang.org/p/44mXvwTfLD
4. What did you expect to see?
list == "first", "second", "third"
flagSet.Args() == "--something", "blah"
5. What did you see instead?
list == "first", "second"
flagSet.Args() == "-I", "third", "blah"
also, Parse printed an error.
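The current behavior can be reproduced without the playground (a self-contained sketch; note how the unknown flag is itself consumed, so only the arguments after it land in `Args()`):

```go
package main

import (
	"flag"
	"fmt"
	"io"
)

// parseDemo runs the problematic parse and reports what the flag
// package did with the unknown "--something" argument.
func parseDemo() (errOccurred bool, v string, rest []string) {
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	fs.SetOutput(io.Discard) // silence the message Parse prints itself
	vf := fs.String("v", "", "a known flag")

	// Parse stops at the first unknown flag: "--something" is consumed
	// and discarded, and only what follows it ends up in Args().
	err := fs.Parse([]string{"-v", "x", "--something", "blah"})
	return err != nil, *vf, fs.Args()
}

func main() {
	errOccurred, v, rest := parseDemo()
	fmt.Println(errOccurred, v, rest) // true x [blah]
}
```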
| NeedsDecision | low | Critical |
149,255,868 | youtube-dl | Option to combine -o and -g | - [X] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.13**
- [X] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [X] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### Description of your _issue_, suggested solution and other information
I use Kodi Media Center for all my entertainment and media management. This program is capable of playing streaming video in many formats from a URL written in a .strm file. I would like to use youtube-dl to lay out a playlist of music videos in a specific format (say, `'%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s'`, which puts playlist files into a common folder), but instead of actually downloading the video, just echo the URL (as given by the `-g` option) into the file. I have actually accomplished this using a bash script which parses the output of `youtube-dl --get-filename --get-url`, but it is a very hacky solution. Any chance this could become a feature?
EDIT: To give an example, I would like to be able to run `youtube-dl -f best -g -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' https://www.youtube.com/playlist?list=PLY5f8vtstsfjedMi6VHRAeFV85Xc_VoKG` and see afterwards a subdirectory in the working dir of youtube-dl, which is named by the title of the playlist (in this case, NateWantsToBattle Songs) and in that subfolder would be a number of files named according to their order in the playlist and title (in this case, one would be named "NateWantsToBattle Songs/01 - twenty one pilots - Stressed Out [NateWantsToBattle feat. ShueTube].mp4") and each file would contain only the plain-text url of the video as shown by -g (the example above would contain the url string shown [here](http://pastebin.com/raw/60uTA3Nb)).
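For reference, the bash workaround can be sketched roughly like this. It assumes `--get-url --get-filename` prints two lines per video (URL first, then filename); the `printf` below merely simulates that output so the pairing logic is visible without running youtube-dl:

```shell
# Simulated output of something like:
#   youtube-dl -f best --get-url --get-filename -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' URL
printf '%s\n' \
  'http://example.com/video1' \
  'My Playlist/01 - First Song.mp4' |
while IFS= read -r url && IFS= read -r name; do
  mkdir -p "$(dirname "$name")"
  printf '%s\n' "$url" > "${name%.*}.strm"   # write the URL into a .strm file
done
```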
| request | low | Critical |
149,541,041 | go | encoding/asn1: better error message when unmarshaling non-pointer | Please answer these questions before submitting your issue. Thanks!
1. What version of Go are you using (`go version`)?
NaCL / Playground
2. What operating system and processor architecture are you using (`go env`)?
Playground
3. What did you do?
https://play.golang.org/p/TnLoso9len
4. What did you expect to see?
```
1.2.840.113549.1.9.16.1.4
[6 11 42 134 72 134 247 13 1 9 16 1 4]
1.2.840.113549.1.9.16.1.4
```
5. What did you see instead?
```
1.2.840.113549.1.9.16.1.4
[6 11 42 134 72 134 247 13 1 9 16 1 4]
panic: reflect: call of reflect.Value.Elem on slice Value
goroutine 1 [running]:
panic(0x184720, 0x10434260)
/usr/local/go/src/runtime/panic.go:464 +0x700
reflect.Value.Elem(0x188f40, 0x10434250, 0x97, 0x10434250, 0x0, 0x0, 0x0, 0x188f40)
/usr/local/go/src/reflect/value.go:735 +0x2a0
encoding/asn1.UnmarshalWithParams(0x10444074, 0xd, 0x40, 0x188f40, 0x10434250, 0x0, 0x0, 0x10434220, 0x0, 0x0, ...)
/usr/local/go/src/encoding/asn1/asn1.go:989 +0xc0
encoding/asn1.Unmarshal(0x10444074, 0xd, 0x40, 0x188f40, 0x10434250, 0x10434250, 0x0, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/encoding/asn1/asn1.go:983 +0x80
main.main()
/tmp/sandbox412831617/main.go:24 +0x480
```
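For comparison, the non-panicking call passes a pointer (a minimal round-trip of the OID from the report):

```go
package main

import (
	"encoding/asn1"
	"fmt"
)

// roundTrip marshals the OID from the report and unmarshals it again,
// passing a *pointer* -- the thing the original playground code forgot.
func roundTrip() (der []byte, decoded asn1.ObjectIdentifier, err error) {
	oid := asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 9, 16, 1, 4}
	der, err = asn1.Marshal(oid)
	if err != nil {
		return nil, nil, err
	}
	// asn1.Unmarshal(der, decoded) -- without the & -- is what hits
	// "reflect: call of reflect.Value.Elem on slice Value"; that panic
	// is what this issue asks to turn into a descriptive error.
	_, err = asn1.Unmarshal(der, &decoded)
	return der, decoded, err
}

func main() {
	der, decoded, err := roundTrip()
	if err != nil {
		panic(err)
	}
	fmt.Println(der)              // [6 11 42 134 72 134 247 13 1 9 16 1 4]
	fmt.Println(decoded.String()) // 1.2.840.113549.1.9.16.1.4
}
```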
| NeedsInvestigation | low | Critical |
149,548,258 | go | cmd/link: move away from *LSym linked lists | The linked lists come from the original C code. Slices make the code a bit easier to follow.
| compiler/runtime | low | Minor |
149,560,385 | go | cmd/compile: eliminate all (some?) convT2{I,E} calls | For pointer-shaped types, convT2{I,E} are already done by the compiler. We currently call into the runtime only for non-pointer-shaped types.
There are two cases, where the interface escapes and where it doesn't.
```
type T struct {
a, b, c int
}
func f(t T) interface{} {
return t
}
func g(t T) {
h(t)
}
//go:noescape
func h(interface{})
```
In both cases, I think, it would be easier to inline the work that convT2{I,E} does. f does this:
```
LEAQ type."".T(SB), AX
MOVQ AX, (SP)
LEAQ "".autotmp_0+40(SP), AX
MOVQ AX, 8(SP)
MOVQ $0, 16(SP)
CALL runtime.convT2E(SB)
MOVQ 24(SP), CX
MOVQ 32(SP), DX
```
instead it could do:
```
LEAQ type."".T(SB), AX
MOVQ AX, (SP)
CALL runtime.newobject(SB)
MOVQ 8(SP), DX
MOVQ "".autotmp_0+40(SP), AX
MOVQ "".autotmp_0+48(SP), BX
MOVQ "".autotmp_0+56(SP), CX
MOVQ AX, (DX)
MOVQ BX, 8(DX)
MOVQ CX, 16(DX)
LEAQ type."".T(SB), CX
```
11 instructions instead of 8, but several runtime calls bypassed (convT2E, plus the typedmemmove it calls, and everything that calls, ...).
The expanded code gets larger if T has a pointer in it. Maybe we use typedmemmove instead of explicit copy instructions in that case.
`g` is even easier because there is no runtime call required at all. Old code:
```
LEAQ type."".T(SB), AX
MOVQ AX, (SP)
LEAQ "".autotmp_2+64(SP), AX
MOVQ AX, 8(SP)
LEAQ "".autotmp_3+40(SP), AX
MOVQ AX, 16(SP)
CALL runtime.convT2E(SB)
MOVQ 24(SP), CX
MOVQ 32(SP), DX
```
new code:
```
MOVQ "".autotmp_2+64(SP), AX
MOVQ "".autotmp_2+72(SP), BX
MOVQ "".autotmp_2+80(SP), CX
MOVQ AX, "".autotmp_3+40(SP)
MOVQ BX, "".autotmp_3+48(SP)
MOVQ CX, "".autotmp_3+56(SP)
LEAQ type."".T(SB), CX
LEAQ "".autotmp_3+40(SP), DX
```
one less instruction, and if we can avoid the copy (if autotmp_2 is never modified afterwards), it could be even better. And no write barriers are required on the copy even if T has pointers.
Maybe we do the no-escape optimization first. It seems like an obvious win. That would allow removing the third arg to convT2{I,E}. Then we could think about the escape optimization.
@walken-google
| Performance,NeedsFix,early-in-cycle | low | Major |
149,560,456 | opencv | Delaunay Triangulation, possible wrong calculation for huge graphs? | Hi there,
I was working with Delaunay triangulation using OpenCV, trying to create a Gabriel graph (https://en.wikipedia.org/wiki/Gabriel_graph) from the Delaunay result, but the output is not what I expected. I also compared the results with an R library (spdep).
thanks
### Please state the information for your system
- OpenCV version: 3.1.0
- Host OS: Windows 10
### In which part of the OpenCV library you got the issue?
- imgproc
### Expected behaviour
Whole complete graph.
### Actual behaviour
In the Gabriel graph, some nodes are not connected to the main graph (they stay isolated), which should not happen. I also drew the Delaunay graph; in some places it is most probably calculating the angles wrongly and connecting nodes that should not be connected.
### Additional description
There are 3 pictures and the data: Delaunay, Gabriel (isolated nodes are in different colors), and the R library's Gabriel result. For testing I used the 2D points from cluto-t8.8k (https://github.com/deric/clustering-benchmark/tree/master/src/main/resources/datasets/artificial)
[result.txt](https://github.com/Itseez/opencv/files/226689/result.txt)
delaunay:

gabriel:

r library's gabriel

| bug,category: imgproc,affected: 3.4 | low | Minor |
149,561,444 | TypeScript | Suggestion: Better error message when unable to resolve modules | Related to our woes with #8189, if the compilation errors gave some better hinting when it was unable to resolve a module, we might not have taken so long to find the solution (and we wouldn't have filed an issue!).
The idea would be that if there is a compilation error of `cannot find module "whatever"`, the compiler could make some intelligent checks against the parameters the compilation is running under and suggest using `--moduleResolution node` if the user is not already doing so.
| Docs | low | Critical |
149,574,952 | opencv | OpenCV build fails on Mac 10.10.5 | I manage to run cmake successfully and build the configuration. However, when I run make -j5, there is an error on "lib/libopencv_videoio.3.1.0.dylib" that has stopped me from building.
### Please state the information for your system
- OpenCV version: latest master branch for opencv and opencv_contrib
- Host OS: Mac OS X 10.10.5
- CMake 3.5.2
### In which part of the OpenCV library you got the issue?
- videoio
### Actual behaviour
Undefined symbols for architecture x86_64:
```
"_CMBlockBufferCreateWithMemoryBlock", referenced from:
_videotoolbox_common_end_frame in libavcodec.a(videotoolbox.o)
"_CMSampleBufferCreate", referenced from:
_videotoolbox_common_end_frame in libavcodec.a(videotoolbox.o)
"_CMVideoFormatDescriptionCreate", referenced from:
_av_videotoolbox_default_init2 in libavcodec.a(videotoolbox.o)
"_SSLClose", referenced from:
_tls_close in libavformat.a(tls_securetransport.o)
"_SSLCopyPeerTrust", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLCreateContext", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLHandshake", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLRead", referenced from:
_tls_read in libavformat.a(tls_securetransport.o)
"_SSLSetCertificate", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLSetConnection", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLSetIOFuncs", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLSetPeerDomainName", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLSetSessionOption", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SSLWrite", referenced from:
_tls_write in libavformat.a(tls_securetransport.o)
"_SecIdentityCreate", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SecItemImport", referenced from:
_import_pem in libavformat.a(tls_securetransport.o)
"_SecTrustEvaluate", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_SecTrustSetAnchorCertificates", referenced from:
_tls_open in libavformat.a(tls_securetransport.o)
"_VTDecompressionSessionCreate", referenced from:
_av_videotoolbox_default_init2 in libavcodec.a(videotoolbox.o)
"_VTDecompressionSessionDecodeFrame", referenced from:
_videotoolbox_common_end_frame in libavcodec.a(videotoolbox.o)
"_VTDecompressionSessionInvalidate", referenced from:
_av_videotoolbox_default_free in libavcodec.a(videotoolbox.o)
"_VTDecompressionSessionWaitForAsynchronousFrames", referenced from:
_videotoolbox_common_end_frame in libavcodec.a(videotoolbox.o)
"_iconv", referenced from:
_avcodec_decode_subtitle2 in libavcodec.a(utils.o)
"_iconv_close", referenced from:
_avcodec_open2 in libavcodec.a(utils.o)
_avcodec_decode_subtitle2 in libavcodec.a(utils.o)
"_iconv_open", referenced from:
_avcodec_open2 in libavcodec.a(utils.o)
_avcodec_decode_subtitle2 in libavcodec.a(utils.o)
"_kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms", referenced from:
_av_videotoolbox_default_init2 in libavcodec.a(videotoolbox.o)
"_lame_close", referenced from:
_mp3lame_encode_close in libavcodec.a(libmp3lame.o)
"_lame_encode_buffer", referenced from:
_mp3lame_encode_frame in libavcodec.a(libmp3lame.o)
"_lame_encode_buffer_float", referenced from:
_mp3lame_encode_frame in libavcodec.a(libmp3lame.o)
"_lame_encode_buffer_int", referenced from:
_mp3lame_encode_frame in libavcodec.a(libmp3lame.o)
"_lame_encode_flush", referenced from:
_mp3lame_encode_frame in libavcodec.a(libmp3lame.o)
"_lame_get_encoder_delay", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_get_framesize", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_init", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_init_params", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_VBR", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_VBR_mean_bitrate_kbps", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_VBR_quality", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_bWriteVbrTag", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_brate", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_disable_reservoir", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_in_samplerate", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_mode", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_num_channels", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_out_samplerate", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_lame_set_quality", referenced from:
_mp3lame_encode_init in libavcodec.a(libmp3lame.o)
"_swr_alloc", referenced from:
_opus_decode_init in libavcodec.a(opusdec.o)
"_swr_close", referenced from:
_opus_decode_packet in libavcodec.a(opusdec.o)
_opus_decode_flush in libavcodec.a(opusdec.o)
"_swr_convert", referenced from:
_opus_decode_packet in libavcodec.a(opusdec.o)
"_swr_free", referenced from:
_opus_decode_close in libavcodec.a(opusdec.o)
"_swr_init", referenced from:
_opus_decode_packet in libavcodec.a(opusdec.o)
"_swr_is_initialized", referenced from:
_opus_decode_packet in libavcodec.a(opusdec.o)
"_x264_bit_depth", referenced from:
_X264_init_static in libavcodec.a(libx264.o)
_X264_frame in libavcodec.a(libx264.o)
"_x264_encoder_close", referenced from:
_X264_close in libavcodec.a(libx264.o)
"_x264_encoder_delayed_frames", referenced from:
_X264_frame in libavcodec.a(libx264.o)
"_x264_encoder_encode", referenced from:
_X264_frame in libavcodec.a(libx264.o)
"_x264_encoder_headers", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_encoder_open_148", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_encoder_reconfig", referenced from:
_X264_frame in libavcodec.a(libx264.o)
"_x264_levels", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_param_apply_fastfirstpass", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_param_apply_profile", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_param_default", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_param_default_preset", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_param_parse", referenced from:
_X264_init in libavcodec.a(libx264.o)
"_x264_picture_init", referenced from:
_X264_frame in libavcodec.a(libx264.o)
"_xvid_encore", referenced from:
_xvid_encode_init in libavcodec.a(libxvid.o)
_xvid_encode_frame in libavcodec.a(libxvid.o)
_xvid_encode_close in libavcodec.a(libxvid.o)
"_xvid_global", referenced from:
_xvid_encode_init in libavcodec.a(libxvid.o)
"_xvid_plugin_2pass2", referenced from:
_xvid_encode_init in libavcodec.a(libxvid.o)
_ff_xvid_rate_control_init in libavcodec.a(libxvid_rc.o)
_ff_xvid_rate_estimate_qscale in libavcodec.a(libxvid_rc.o)
_ff_xvid_rate_control_uninit in libavcodec.a(libxvid_rc.o)
"_xvid_plugin_lumimasking", referenced from:
_xvid_encode_init in libavcodec.a(libxvid.o)
"_xvid_plugin_single", referenced from:
_xvid_encode_init in libavcodec.a(libxvid.o)
"_xvid_plugin_ssim", referenced from:
_xvid_encode_init in libavcodec.a(libxvid.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [lib/libopencv_videoio.3.1.0.dylib] Error 1
make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
make: *** [all] Error 2
```
| bug,category: build/install,affected: 3.4 | low | Critical |
149,591,140 | flutter | expose the navigator route as a service extension | Something like:
- [ ] get the current value for the route
- [ ] set a new value for the route
- [ ] enable and disable receiving events on route changes
@Hixie, I very briefly looked at the flutter sources, but didn't see how to get this info.
| c: new feature,tool,framework,f: routes,P3,team-framework,triaged-framework | low | Minor |
149,604,687 | TypeScript | Preselect completion list entries based on contextual type | It should be cool if tsserver could provide a new command `guesstypes` to give the capability to guess parameter types when completion is applied
In other word, I would like to support the [same feature that I have done with tern ](https://github.com/angelozerr/tern-guess-types)
For instance if you open completion for `document` and select `addEventListener` :

When you apply the completion, I would like to call a tsserver command `guesstypes` to retrieve suitable variables and functions for each function parameter. Here is a screenshot showing a list of variables with string type for the `addEventListener` type argument:

| Suggestion,Help Wanted,API | low | Minor |
149,780,545 | go | os: Stdin is broken in some cases on windows | Please answer these questions before submitting your issue. Thanks!
1. What version of Go are you using (`go version`)?
go version go1.6 windows/amd64
2. What operating system and processor architecture are you using (`go env`)?
set GOARCH=amd64
set GOOS=windows
3. What did you do?
I tried to make a Squid (a proxy server) auth helper with Go. It's a simple program that needs to read its stdin, parse it and send the response to stdout. See http://wiki.squid-cache.org/Features/AddonHelpers for details.
Here is the link to my code: https://play.golang.org/p/NR9PkX0fu7
4. What did you expect to see?
My Squid instance should ask for credentials upon an HTTP request, and no error messages related to my program should appear in the Squid log.
5. What did you see instead?
Everything went fine when I tested the program from the console: I tested it with echo, sending login and pass by pipe, like 'echo vasya 1 | sqauth', and no errors were detected. Then I configured Squid to run the program as an auth helper. Squid didn't start. In the log file I saw this record: 'read /dev/stdin: The parameter is incorrect.', the record from line 26 of my code. I spent a few hours investigating the problem, and here is what I found. It seems that Squid starts its child processes with their stdin set to async socket handles. It's not well documented in the WINAPI docs, but when ReadFile is run on an async I/O handle with a NULL lpOverlapped parameter (the last parameter of the function), error 87 happens (The parameter is incorrect). I looked at the Go source code, and I saw that the 'File.read' function calls syscall.ReadFile with lpOverlapped == nil in every case.
So in the case where a Go program is started as a child process with stdin set to an async socket, os.Stdin is always broken and there is no normal way to get data from stdin; instead I need to use platform-dependent syscalls. It would be good if the standard os.Stdin worked in every case. Interestingly, C's fgets works fine with the same stdin.
The working C program: [fake.cc.zip](https://github.com/golang/go/files/228057/fake.cc.zip).
| OS-Windows | low | Critical |
149,828,463 | youtube-dl | Command-line credentials are applied to all extractors in the extraction process | ---
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.19**
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'http://vk.com/video356065542_456239041', u'-p', u'PRIVATE', u'-u', u'PRIVATE', u'-v']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2016.04.19
[debug] Git HEAD: 494ab6d
[debug] Python version 2.6.6 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg 3.0, ffprobe N-77883-gd7c75a5, rtmpdump 2.4
[debug] Proxy map: {}
[vk] Downloading login page
[vk] Logging in as <snip>
[vk] 356065542_456239041: Downloading webpage
[youtube] Downloading login page
[youtube] Logging in
ERROR: Unable to login: The email and password you entered don't match.
Traceback (most recent call last):
File "C:\Dev\git\youtube-dl\master\youtube_dl\YoutubeDL.py", line 671, in extract_info
ie_result = ie.extract(url)
File "C:\Dev\git\youtube-dl\master\youtube_dl\extractor\common.py", line 340, in extract
self.initialize()
File "C:\Dev\git\youtube-dl\master\youtube_dl\extractor\common.py", line 334, in initialize
self._real_initialize()
File "C:\Dev\git\youtube-dl\master\youtube_dl\extractor\youtube.py", line 187, in _real_initialize
if not self._login():
File "C:\Dev\git\youtube-dl\master\youtube_dl\extractor\youtube.py", line 132, in _login
raise ExtractorError('Unable to login: %s' % error_msg, expected=True)
ExtractorError: Unable to login: The email and password you entered don't match.
```
---
Account credentials specified via the command line are used by all the extractors involved in the extraction process. For example, in the aforementioned log, VK authentication happens successfully first, then a YouTube embed is detected and the extraction process is delegated to the youtube extractor, which tries to log in with the same credentials, obviously resulting in the expected error.
In general, this authentication error can be worked around with `.netrc` authentication.
Since it's not feasible to add exclusive `--username/--password` options for each extractor that supports authentication, command line credentials probably should not be considered for any subsequent extractor involved, apart from the first one.
| bug | low | Critical |
149,831,208 | youtube-dl | Add support for hbonordic.com | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.19**
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'https://no.hbonordic.com/series/into-the-badlands/season-1/episode-4/1f10ced-009986b7dd3']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2016.04.19
[debug] Python version 2.7.11 - Darwin-15.3.0-x86_64-i386-64bit
[debug] exe versions: avconv 11.4, avprobe 11.4, ffmpeg 3.0.1, ffprobe 3.0.1, rtmpdump 2.4
[debug] Proxy map: {}
[generic] 1f10ced-009986b7dd3: Requesting header
WARNING: Could not send HEAD request to https://no.hbonordic.com/series/into-the-badlands/season-1/episode-4/1f10ced-009986b7dd3: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
[generic] 1f10ced-009986b7dd3: Downloading webpage
ERROR: Unable to download webpage: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)> (caused by URLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)'),))
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 388, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1940, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 859, in https_open
req, **kwargs)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1197, in do_open
raise URLError(err)
```
---
### If the purpose of this _issue_ is a _site support request_ please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://no.hbonordic.com/series/into-the-badlands/season-1/episode-4/1f10ced-009986b7dd3
- Playlist: https://no.hbonordic.com/series/into-the-badlands/season-1/ea163cc3-31bf-4845-9542-b107e39e093f
---
### Description of your _issue_, suggested solution and other information
Only available in Sweden, Norway, Denmark and Finland.
| site-support-request,geo-restricted | low | Critical |
149,846,384 | three.js | BoundingBoxHelper for buffer geometry does not use drawrange neither groups | ##### Description of the problem
BoundingBoxHelper.update (setFromObject) for buffer geometry does not use drawRange.count (nor groups), so the limits can include points at 0,0,0 (it currently uses positions.length).
If you have a typed array whose unused entries are still 0,0,0, the computed box dimensions are not correct (0,0,0 is included...). (I have a pre-dimensioned positions array that I fill as needed.)
For now I'm using this to create point clouds, but I can imagine problems in the future if the BufferGeometry is used to draw lines or triangles (using groups).
So I think this is a mix of bug & enhancement?
##### Three.js version
- [ ] Dev
- [x] r76
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] Linux
- [ ] Android
- [ ] IOS
##### Hardware Requirements (graphics card, VR Device, ...)
| Suggestion | low | Critical |
149,887,783 | TypeScript | Do not rename imports from ambient modules | Issue explained by @bbgone in https://github.com/Microsoft/TypeScript/issues/8118#issuecomment-212218038
Renaming an import alias for a module coming from a .d.ts file renames all instances; this is obviously wrong.
Options: 1. error (obviously not helpful); 2. only rename the local symbol (better, but leaves the code in an invalid state); 3. rename the local import and add an `as oldname` clause to the import declaration as needed (this looks like the best solution).
This is similar to the issue related to https://github.com/Microsoft/TypeScript/issues/7458, except that this issue adds renaming the import alias.
| Suggestion,Help Wanted,Effort: Moderate | low | Critical |
149,937,239 | go | runtime: significant performance improvement on single core machines | **1. What version of Go are you using (`go version`)?**
```
go version go1.5.4 linux/amd64
```
**2. What operating system and processor architecture are you using (`go env`)?**
```
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH=""
GORACE=""
GOROOT="/root/go"
GOTOOLDIR="/root/go/pkg/tool/linux_amd64"
GO15VENDOREXPERIMENT=""
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
```
Hosts are running stock Ubuntu 15.10:
```
# uname -r
4.2.0-27-generic
```
**3. What did you do?**
**4. What did you expect to see?**
**5. What did you see instead?**
Looking into the performance of running containers on [Docker](https://github.com/docker/docker), we realized there's a significant difference when running on a single core machine versus a multi core machine. The results seem independent of the `GOMAXPROCS` value (using `GOMAXPROCS=1` on the multi core machine remains significantly slower).
Single core:
```
# time ./docker run --rm busybox true
real 0m0.255s
```
Multi core:
```
# time ./docker run --rm busybox true
real 0m0.449s
```
We cannot attribute that difference to Go with 100% certainty, but we could use your help explaining some of the profiling results we've obtained so far.
##### Profiling / trace
We instrumented the code to find out where that difference materialized, and what comes out is that it is quite evenly distributed. However, syscalls are consistently taking much more time on the multi core machine (as made obvious with slower syscalls such as `syscall.Unmount`).
Using `go tool trace` to dig further, it appears that we're seeing discontinuities in goroutine execution on the multi core machine that the single core one doesn't expose, even with `GOMAXPROCS=1`.
Single core ([link to the trace file](https://www.dropbox.com/s/onxw64prot30kzo/layerStore-singleCore-1?dl=1))

Multi core with `GOMAXPROCS=1` ([link to the trace file](https://www.dropbox.com/s/unkchqgegmrey5u/layerStore-multiCore-gomaxprocs1?dl=1))

[Link to the binary which produced the trace files](https://www.dropbox.com/s/kni7ngcqqfutx1x/docker?dl=1).
##### Host information
The two hosts are virtual machines running on the same Digital Ocean zone, with the exact same CPU (`Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz`). The issue was also reproduced locally with Virtual Box VM with different core numbers.
Please let us know if there's any more information we can provide, or if you need us to test with different builds of Go. Thanks for any help you can provide!
Cc @crosbymichael @tonistiigi.
| compiler/runtime | low | Major |
149,941,579 | go | go/build: allow Import to ignore build constraints | There's no way at the moment to use `Import` or `ImportDir` to select all the files in a package while ignoring build constraints. There is the `Package.IgnoredGoFiles` field, but that does not extend to finding the dependencies of the ignored files, and so requires extra parsing steps.
One use case for this is in situations like App Engine: we want to select all the source files for an app, but not exclude source file that would be needed to compile a package in other/future versions of Go. So, we want to select files ignoring any go1.x tags.
/cc @dsymonds @adg
| NeedsInvestigation | low | Minor |
149,943,919 | rust | DWARF does not describe closure type | I wrote a simple test case that makes a closure. The closure is just:
```
let f2 = || println!("lambda f2");
```
The resulting DWARF doesn't describe the closure type at all:
```
<7><11e>: Abbrev Number: 6 (DW_TAG_variable)
<11f> DW_AT_location : 2 byte block: 91 78 (DW_OP_fbreg: -8)
<122> DW_AT_name : (indirect string, offset: 0x92): f2
<126> DW_AT_decl_file : 2
<127> DW_AT_decl_line : 28
<128> DW_AT_type : <0x636>
...
<1><636>: Abbrev Number: 19 (DW_TAG_structure_type)
<637> DW_AT_name : (indirect string, offset: 0x4c5): closure
<63b> DW_AT_byte_size : 0
```
I was planning to make it so the user can invoke a closure from gdb, but I think this bug prevents that.
| A-debuginfo,C-enhancement,P-medium,T-compiler | low | Critical |
149,951,109 | rust | DWARF doesn't describe use declarations | I wrote a small test program of `use` declarations:
```
pub mod mod_x {
pub fn f() {
use ::mod_y::g;
g();
}
pub fn g() {
println!("X");
}
}
pub mod mod_y {
pub fn g() {
println!("Y");
}
}
pub fn main() {
mod_x::f();
}
```
Examining the DWARF, I don't see anything in the body of `f` that reflect the `use` declaration.
What this means is that when stopped in `f` in gdb, name lookup will not work properly -- `call g()` will call the wrong `g`.
`use` declarations can be represented with something like `DW_TAG_imported_declaration` or `DW_TAG_imported_module`.
| A-debuginfo,C-enhancement,P-medium,T-compiler | low | Minor |
149,961,932 | go | cmd/compile: ephemeral slicing doesn't need protection against next object pointers | ```
func f(b []byte) byte {
b = b[3:]
return b[4]
}
```
We compile this to something like (bounds checks omitted):
```
p = b.ptr
inc = 3
if b.cap == 3 {
inc = 0
}
p += inc
return *(p+4)
```
The `if` in the middle is there to make sure we don't manufacture a pointer to the next object in memory. But the resulting pointer is never exposed to the garbage collector, so that `if` is unnecessary. Manufacturing a pointer to the next object in memory is ok if that pointer is never spilled at a safe point. (Bounds checks will make sure such a pointer is never actually used.)
Unfortunately, I don't see an easy way to do this optimization in the current compiler. Marked as unplanned.
See #14849
| Performance,compiler/runtime | low | Major |
150,014,693 | vscode | Allow to scope settings by platform | Hi
I develop on 3 different platforms. When synchronizing settings, snippets and so on, I often must change paths, adjust font sizes, etc...
So, it would be great if we had a per-platform settings set (Windows, Mac, Unix).
| feature-request,config | high | Critical |
150,092,855 | TypeScript | `Result value must be used` check | Hello.
Quite often while working with `ImmutableJS` data structures and other immutable libraries people forget that values are immutable:
They can accidentally write:
```
obj.set('name', 'newName');
```
Instead of:
```
let newObj = obj.set('name', 'newName');
```
These errors are hard to discover and they are very annoying. I think that this problem can't be solved by `tslint`, so I propose to add some sort of `"you must use the result"` check into the compiler. But right now I have no idea how I want it to be expressed in the language, so I create this issue mainly to start a discussion.
Rust, for example, solves a similar problem with the `#[must_use]` annotation. This is not exactly what we want, but it's a good example and shows that the problem is quite common.
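The same class of bug exists in any language with immutable values; the defect is exactly a discarded return value. A minimal Python illustration of the pattern the proposed check would flag (using an immutable `str` in place of an ImmutableJS map):

```python
# With an immutable value, "mutating" methods return a NEW value;
# discarding that return value silently loses the update.
name = "alice"
name.upper()              # result discarded -- this line is a no-op bug
assert name == "alice"    # unchanged: exactly the ImmutableJS pitfall

# The fix is to bind the result, as with `let newObj = obj.set(...)`:
new_name = name.upper()
assert new_name == "ALICE"
```

A "result value must be used" check would flag the bare `name.upper()` call while leaving the bound one alone.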
| Suggestion,Awaiting More Feedback | medium | Critical |
150,184,770 | nvm | Chakra | Any plans to support versions of node running on [Chakra from Microsoft](https://github.com/nodejs/node-chakracore/releases)?
| OS: windows,feature requests,pull request wanted | low | Major |
150,194,510 | rust | trait objects with late-bound regions and existential bounds are resolved badly | ## STR
``` Rust
fn assert_sync<T: Sync>(_t: T) {}
fn main() {
let f: &(for<'a> Sync + Fn(&'a u32)->&'a u32) = &|x| x;
assert_sync(f);
}
```
## Expected Result
Code should compile and run
## Actual result
```
<anon>:4:33: 4:35 error: use of undeclared lifetime name `'a` [E0261]
<anon>:4 let f: &(for<'a> Sync + Fn(&'a u32)->&'a u32) = &|x| x;
^~
<anon>:4:33: 4:35 help: see the detailed explanation for E0261
<anon>:4:43: 4:45 error: use of undeclared lifetime name `'a` [E0261]
<anon>:4 let f: &(for<'a> Sync + Fn(&'a u32)->&'a u32) = &|x| x;
^~
<anon>:4:43: 4:45 help: see the detailed explanation for E0261
error: aborting due to 2 previous errors
```
| A-resolve,A-lifetimes,T-compiler,C-bug,T-types,A-trait-objects | low | Critical |
150,222,463 | youtube-dl | Soundcloud Go Track Fetching | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x])
- Use _Preview_ tab to see how your issue will actually look like
---
### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.04.19_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.19**
### Before submitting an _issue_ make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
Add support for logging into soundcloud, and having the ability to fetch full length Soundcloud Go tracks.
| geo-restricted,account-needed | low | Critical |
150,226,679 | youtube-dl | --verify-archive | ---
### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.04.19_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.19**
### Before submitting an _issue_ make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
### Description of your _issue_, suggested solution and other information
1. I have a very large archive of YouTube videos that I downloaded by loading a list of channel URLs. I used the `--download-archive` flag so I did not have to re-download any videos when I went over the archive again to update it.
2. I currently do not have a way to "re-run" the archive so I can use `--write-info-json` and/or `--write-descriptions` without redownloading everything, because I used `--download-archive`.
3. I am proposing a new flag `--verify-archive` which would allow youtube-dl to check and make sure that it does indeed have all content associated with a video. If and only if the user used the appropriate flags to download such content.
4. Using `--verify-archive` with `--write-info-json` and/or `--write-descriptions`, while having used `--download-archive` in the past, will check the specified download directory for each such video to verify that the entire set of files exists.
5. `--verify-archive` will not affect new video downloads as they happen. `--verify-archive` should also work with all other flags that specify downloading a file and storing it in a directory.
6. I have 74 thousand videos totaling 5TB. It's not economically feasible to re-download them in full again.
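A sketch of what the verification pass in step 4 could do (the `<extractor> <id>` archive-line format is what `--download-archive` writes; the filename matching is a simplifying assumption for this sketch, since real names come from the output template):

```python
import os

def missing_companions(archive_path, download_dir,
                       suffixes=(".info.json", ".description")):
    """Yield (video_id, suffix) for archive entries whose companion
    file is absent from download_dir."""
    files = os.listdir(download_dir)
    with open(archive_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip malformed archive lines
            _extractor, video_id = parts
            for suffix in suffixes:
                # Default output templates put the id right before the
                # extension, e.g. "Title-<id>.info.json" (an assumption).
                if not any(name.endswith(video_id + suffix) for name in files):
                    yield video_id, suffix
```

`--verify-archive` could then re-fetch only the `(id, suffix)` pairs this reports, instead of redownloading the videos themselves.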
| request | low | Critical |
150,267,915 | rust | --target should ignore the machine part of the triplet in most cases | Currently, rustc accepts a limited set of --target values. Those match _some_ of the values one can get out of the typical `config.guess`, but for most platforms, the machine/manufacturer part of the target triplet should be ignored, or allowed to be omitted. (the output of config-guess is of the form `CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM` or `CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM`)
For example, the linux x86_64 target for rustc is `--target=x86_64-unknown-linux-gnu`.
Some time ago, `config.guess` would have returned that. Nowadays, it returns `x86_64-pc-linux-gnu`.
clang accepts both, as well as the shorter form: `--target=x86_64-linux-gnu` (in fact, it's very lax, `--target=x86_64-foo` works too).
A typical cross GCC toolchain will come as `x86_64-linux-gnu-gcc` too.
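For the common spellings, the requested normalization is mostly string surgery on the triple's second field; a Python sketch of the idea (the vendor and OS token sets here are illustrative assumptions, not clang's actual logic, and unrecognized triples are deliberately left untouched):

```python
def normalize_triple(triple):
    """Treat the vendor field of a target triple as insignificant:
    CPU[-VENDOR]-SYSTEM... collapses to CPU-unknown-SYSTEM..."""
    parts = triple.split("-")
    vendors = {"unknown", "pc", "none"}           # interchangeable spellings
    systems = {"linux", "windows", "darwin", "freebsd"}
    if len(parts) >= 2 and parts[1] in vendors:
        parts[1] = "unknown"        # x86_64-pc-linux-gnu == x86_64-unknown-linux-gnu
    elif len(parts) >= 2 and parts[1] in systems:
        parts.insert(1, "unknown")  # vendor omitted: x86_64-linux-gnu
    return "-".join(parts)

print(normalize_triple("x86_64-pc-linux-gnu"))  # x86_64-unknown-linux-gnu
print(normalize_triple("x86_64-linux-gnu"))     # x86_64-unknown-linux-gnu
```

The hard part, as the clang behavior shows, is deciding how lax to be when the second field is neither a known vendor nor a known system.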
| T-compiler,T-dev-tools,C-feature-request | low | Major |
150,304,574 | opencv | opencv cv2.waitKey() do not work well with python idle or ipython | ### my OS
- OpenCV version: 2.4.5
- Host OS: Linux (CentOS 7)
### Description of the problem
After loading an image and then showing it, cv2.waitKey() does not work properly when I use OpenCV in the Python IDLE or Jupyter console. For example, if I use `cv2.waitKey(3000)` after `cv2.imshow('test', img)`, the image window should close automatically after 3 seconds, but it won't!
And I can't close it by clicking the 'close' button in the window either; it just gets stuck. I have to exit IDLE or shut down the Jupyter console to close the window.
But if I run the whole script in the command line, it works well.
### Code example to reproduce the issue / Steps to reproduce the issue
```
import cv2
img = cv2.imread('cat.jpg')
cv2.imshow('test', img)
cv2.waitKey(5000)
```
| bug,category: python bindings,priority: low,affected: 2.4 | low | Major |
150,415,526 | flutter | Would be nice to have `flutter screenrecord` | Even if it only worked on Android for now.
I looked at writing it briefly this morning, but unlike a screen capture, which is once and done, a screen recording is likely long-running and waits for a keyboard interrupt to stop?
I suspect other CPTs have implemented something like this.
@devoncarew
| c: new feature,tool,P3,team-tool,triaged-tool | medium | Major |
150,440,734 | youtube-dl | Make these errors optional | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.19**
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your _issue_
---
### Description of your _issue_, suggested solution and other information
"ERROR: YouTube said: This video is available in Andorra, United Arab Emirates, A
fghanistan, Antigua and Barbuda, Anguilla, Albania, Armenia, Angola, Antarctica,
Argentina, American Samoa, Austria, Australia, Aruba, Åland Islands, Azerbaijan
, Bosnia and Herzegovina, Barbados, Bangladesh, Belgium, Burkina Faso, Bulgaria,
Bahrain, Burundi, Benin, Saint Barthélemy, Bermuda, Brunei Darussalam, Bolivia,
Plurinational State of, Bonaire, Sint Eustatius and Saba, Brazil, Bahamas, Bhut
an, Bouvet Island, Botswana, Belarus, Belize, Cocos (Keeling) Islands, Congo, th
e Democratic Republic of the, Central African Republic, Congo, Switzerland, Côte
d'Ivoire, Cook Islands, Chile, Cameroon, China, Colombia, Costa Rica, Cuba, Cap
e Verde, Curaçao, Christmas Island, Cyprus, Czech Republic, Djibouti, Denmark, D
ominica, Dominican Republic, Algeria, Ecuador, Estonia, Egypt, Western Sahara, E
ritrea, Spain, Ethiopia, Finland, Fiji, Falkland Islands (Malvinas), Micronesia,
Federated States of, Faroe Islands, France, Gabon, United Kingdom, Grenada, Geo
rgia, French Guiana, Guernsey, Ghana, Gibraltar, Greenland, Gambia, Guinea, Guad
eloupe, Equatorial Guinea, Greece, South Georgia and the South Sandwich Islands,
Guatemala, Guam, Guinea-Bissau, Guyana, Hong Kong, Heard Island and McDonald Is
lands, Honduras, Croatia, Haiti, Hungary, Indonesia, Ireland, Israel, Isle of Ma
n, India, British Indian Ocean Territory, Iraq, Iran, Islamic Republic of, Icela
nd, Italy, Jersey, Jamaica, Jordan, Japan, Kenya, Kyrgyzstan, Cambodia, Kiribati
, Comoros, Saint Kitts and Nevis, Korea, Democratic People's Republic of, Korea,
Republic of, Kuwait, Cayman Islands, Kazakhstan, Lao People's Democratic Republ
ic, Lebanon, Saint Lucia, Liechtenstein, Sri Lanka, Liberia, Lesotho, Lithuania,
Luxembourg, Latvia, Libya, Morocco, Monaco, Moldova, Republic of, Montenegro, S
aint Martin (French part), Madagascar, Marshall Islands, Macedonia, the Former Y
ugoslav Republic of, Mali, Myanmar, Mongolia, Macao, Northern Mariana Islands, M
artinique, Mauritania, Montserrat, Malta, Mauritius, Maldives, Malawi, Mexico, M
alaysia, Mozambique, Namibia, New Caledonia, Niger, Norfolk Island, Nigeria, Nic
aragua, Netherlands, Norway, Nepal, Nauru, Niue, New Zealand, Oman, Panama, Peru
, French Polynesia, Papua New Guinea, Philippines, Pakistan, Poland, Saint Pierr
e and Miquelon, Pitcairn, Palestine, State of, Portugal, Palau, Paraguay, Qatar,
Réunion, Romania, Serbia, Russian Federation, Rwanda, Saudi Arabia, Solomon Isl
ands, Seychelles, Sudan, Sweden, Singapore, Saint Helena, Ascension and Tristan
da Cunha, Slovenia, Svalbard and Jan Mayen, Slovakia, Sierra Leone, San Marino,
Senegal, Somalia, Suriname, South Sudan, Sao Tome and Principe, El Salvador, Sin
t Maarten (Dutch part), Syrian Arab Republic, Swaziland, Turks and Caicos Island
s, Chad, French Southern Territories, Togo, Thailand, Tajikistan, Tokelau, Timor
-Leste, Turkmenistan, Tunisia, Tonga, Turkey, Trinidad and Tobago, Tuvalu, Taiwa
n, Province of China, Tanzania, United Republic of, Ukraine, Uganda, United Stat
es Minor Outlying Islands, Uruguay, Uzbekistan, Holy See (Vatican City State), S
aint Vincent and the Grenadines, Venezuela, Bolivarian Republic of, Virgin Islan
ds, British, Virgin Islands, U.S., Viet Nam, Vanuatu, Wallis and Futuna, Samoa,
Yemen, Mayotte, South Africa, Zambia, Zimbabwe only"
This is printed when some videos can't be accessed and YouTube supplies a message. I want an option to turn off the printing of that message, as my options dict currently does not suppress it even though I have `quiet` set to `True`.
my dict is this.
```
ytdlo = {'quiet': True, 'no_warnings': True, 'ignoreerrors': True}
```
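A workaround that exists today (assuming youtube-dl's documented `logger` option, which accepts any object with `debug`/`warning`/`error` methods) is to filter the message in a custom logger instead of relying on `quiet`; a sketch, where the matched substring is just the start of the error quoted above:

```python
class FilteringLogger(object):
    """Logger for youtube-dl's 'logger' option that drops the noisy
    geo-restriction error while keeping everything else."""
    def __init__(self):
        self.lines = []

    def debug(self, msg):
        self.lines.append(msg)

    def warning(self, msg):
        self.lines.append(msg)

    def error(self, msg):
        if "This video is available in" in msg:
            return  # swallow the country-list error
        self.lines.append(msg)

# Would be passed as: YoutubeDL({'logger': FilteringLogger(), 'ignoreerrors': True})
log = FilteringLogger()
log.error("ERROR: YouTube said: This video is available in Andorra, ...")
log.error("ERROR: something else went wrong")
print(log.lines)  # ['ERROR: something else went wrong']
```

A dedicated option would still be nicer, since this requires using youtube-dl as a library rather than from the command line.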
| request | low | Critical |
150,496,561 | rust | Missing auto-load script in gdb | When compiled with `-g`, we produce binaries which contain a `gdb_load_rust_pretty_printers.py` in `.debug_gdb_scripts` section. This makes gdb complain about missing scripts when run under plain gdb as opposed to the rust-gdb wrapper script.
Distribution puts this script into `$INSTALL_ROOT/lib/rustlib/etc`, which is someplace gdb wouldn’t ever look at by default. Some experimentation suggests that placing the scripts at gdb’s `DATA-DIRECTORY/python/gdb/printer/` will at least make gdb detect the presence of the script, but then it fails to load the module due to import errors.
| A-debuginfo,T-compiler,C-bug | medium | Critical |
150,572,725 | youtube-dl | Add an option to ignore PostProcessingError (was: --embed-thumbnail should warn, not error, on non-mp3/mp4) | - [ x ] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.19**
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [x] Other
```
> youtube-dl -x --embed-thumbnail http://youtube.com/watch?v=Jj1KTPOqd_A
[youtube] Jj1KTPOqd_A: Downloading webpage
[youtube] Jj1KTPOqd_A: Downloading video info webpage
[youtube] Jj1KTPOqd_A: Extracting video information
[youtube] Jj1KTPOqd_A: Downloading thumbnail ...
[youtube] Jj1KTPOqd_A: Writing thumbnail to: 01-Kenny Wayne Shepherd Band - Deja Voodoo (Video)-Jj1KTPOqd_A.jpg
[download] Destination: 01-Kenny Wayne Shepherd Band - Deja Voodoo (Video)-Jj1KTPOqd_A.webm
[download] 100% of 4.31MiB in 01:11
[ffmpeg] Destination: 01-Kenny Wayne Shepherd Band - Deja Voodoo (Video)-Jj1KTPOqd_A.ogg
Deleting original file 01-Kenny Wayne Shepherd Band - Deja Voodoo (Video)-Jj1KTPOqd_A.webm (pass -k to keep)
ERROR: Only mp3 and m4a/mp4 are supported for thumbnail embedding for now.
```
A failure to embed a thumbnail should probably be treated as a nonfatal error, don’t you think?
| request | low | Critical |
150,603,403 | angular | Input type='date' with ngModel bind to a Date() js object. | Hi, here it's the plunker. http://plnkr.co/edit/3ZDyWnabgp4S3m2rxJaX?p=preview
**Current behavior**
The input doesn't show the date at the beginning (12-12-1900).
Only when changing the year in the input value does the `myDate` variable get refreshed.
**Expected/desired behavior**
First, I would expect the input to show the correct date (12-12-1900).
And then the `myDate` variable to get refreshed with any change in the input.
**Question**
That's the way it worked in Angular 1, isn't it?
| feature,workaround1: obvious,freq3: high,area: forms,feature: under consideration | medium | Major |
150,624,847 | rust | DWARF does not mention Self type | I wrote a simple test program using the `Self` type:
```
struct Something(i32);
impl Something {
fn x(self: &Self) -> i32 { self.0 }
}
fn main() {
let y = Something(32);
let z = y.x();
()
}
```
When I examine the resulting DWARF, I don't see any mention of `Self`. I think it should be emitted as a typedef pointing to `Something`.
| A-debuginfo,C-enhancement,P-low,T-compiler | low | Minor |
150,643,388 | rust | rustc::lint could use some helper functions for working with macros | (See #22451 and possibly others)
Currently we have a few functions in [clippy](https://github.com/Manishearth/rust-clippy)`/src/util/mod.rs` to deal with macros. Namely we can check if some span stems from a macro expansion (though that one could need some work), or if some span was expanded by a given macro (by name) somewhere or directly, or if two spans are in the same macro expansion. This is often useful for readability lints which should not be invoked within macro-expanded code (unless perhaps the code before expansion had the same issue, but that is as of yet hard to detect).
I think those functions belong in the rustc::lint crate. This would both benefit internal lints and all other lints outside of clippy which could then make use of those helpers.
However, clippy is currently licensed under MPL, so we'd need agreement from the authors (of the respective code lines) to move this code into rustc proper. Also we'll want to look first if those functions are ready to be moved and clean up any technical debt we may have incurred so far.
cc @Manishearth
| A-lints,T-compiler,C-feature-request | low | Major |
150,677,226 | neovim | Remove spaces from formerly-indented blank lines | - Neovim version: 0.1.3
- Vim behaves differently?: no
- Operating system/version: Linux Fedora 23
- Terminal name/version: Gnome-terminal
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NONE`
1. `nvim -u NONE`
2. Type:
```
This is a test.
This is indented.
```
3. Backspace over all contents of the second line, including the spaces.
4. All the text, including the spaces, is gone, but there still appear to be spaces rendered on the terminal. Typing new text will insert it at the beginning of the line, as it should with all spaces gone. But arrowing right/left does not reveal the presence of these spaces. They appear to be entirely gone from the file but are still rendered on the screen; they likely aren't visible and aren't used for indentation when new text is added on the line.
### Background
I am a blind screen reader user. I use the arrow keys rather than hjkl to navigate because screen readers don't know to present entire lines when hjkl are pressed, but they generally assume that the cursor moves in response to an arrow press so speak the updated change then. When I arrow to the second line in the above test data, after it is blank, my screen reader speaks "2 spaces", which is what it would speak before a line of text/whitespace that is indented. But, when typing text on that line, text is inserted at the beginning. As such, it is impossible to distinguish a line that actually has 2 spaces vs. one that had 2 and is now entirely blank. Vim does this too, which is very annoying, and I hope Neovim might fix it.
To be very clear, these 2 spaces aren't in the file or editing buffer. They appear to be artifacts that aren't removed from the rendering, as if instead of rendering the blank line as "\n\n" it's being rendered as "\n \n". It probably doesn't affect anyone other than screen reader users, so you may not notice it unless you dig into the rendering code.
| enhancement,tui,core | low | Minor |
150,703,506 | youtube-dl | Is there an option that yields the best available audio quality among all formats? | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.24**
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
- Single video: https://www.youtube.com/watch?v=J9bjJEjK2dQ
---
I want to be able to download the best quality audio available. `-f bestaudio` doesn't always yield the best available audio, as sometimes the best audio only version has worse audio than the best video+audio version. See the example video: `-f bestaudio` yields 125 kbps m4a, and `-x -f best` yields 192 kbps m4a. I think I've seen an example of `-f best` yielding worse quality than `-f bestaudio` too, but I'm not sure.
Does `-f best` in fact always yield the best audio?
If not, is there an option that always yields the best audio?
If not, I would like to request such a feature. In fact, the way it's described, it sounds like the default option is intended to yield the best quality audio and video, and that's certainly what I expected. Also, is it guaranteed that either `-f best` or `-f bestaudio` yields the best audio? It's conceivable that there could be a video where the best audio-only version has medium quality, the best overall version has medium quality audio and high quality video, and there exists a version with high quality audio and low quality video, and then the answer would be 'no'.
| request | medium | Critical |
150,779,779 | neovim | defaults: "-u NORC" with invalid runtime shows E484 | If syntax.vim is missing, it should be quiet, not raise E484 _unless_ the user explicitly did `:syntax on` (as opposed to merely allowing the default).
https://github.com/neovim/neovim/pull/4252#issuecomment-183994718
https://twitter.com/ds26gte/status/724496755602128896
- Counterargument: if users normally allow the default, will this cause confusion (no syntax + no error)?
- Maybe, but as soon as user does explicit `:syntax on` interactively, E484 will show.
- Alternative: do a "passive" message which only appears in `:messages` history and avoids "Hit Enter"
| bug,ux,startup | low | Critical |
150,952,640 | kubernetes | Declarative update of configmaps and secrets from the contents of files | Currently AFAIK you can only create a secret from a file using:
```
kubectl create secret generic <name> --from-file=<key>=<file>
```
The drawback is this: if a change is made to the file, you need to delete and then recreate the secret:
```
kubectl delete secret <name>
kubectl create secret generic <name> --from-file=<key>=<file>
```
This is awkward because to update the other items (service, deployment, pv) in the same app you just run:
```
kubectl apply -f <filename>
```
I realize that I can create a script to base64 encode the contents of the file and inject it into a secret spec file and then use create/apply/delete as with other API objects but this just feels awkward because the create/apply/delete workflow is so clean.
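A sketch of that script (file and secret names here are illustrative, and the `kubectl apply` step is left as a comment since it needs a cluster): render the Secret manifest from the file's base64-encoded contents, so it can then be managed declaratively like the other objects.

```shell
# Illustrative sketch: generate a Secret manifest from a file so it can be
# updated with `kubectl apply` like everything else. Names are made up.
printf 'requirepass hunter2\n' > redis.conf          # sample config file
DATA=$(base64 < redis.conf | tr -d '\n')             # base64-encode contents
cat > secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: redis-conf-secret
type: Opaque
data:
  redis.conf: ${DATA}
EOF
cat secret.yaml
# kubectl apply -f secret.yaml   # create-or-update, no delete/recreate
```

Re-running the script after editing `redis.conf` regenerates the manifest, so updates become a single `apply` instead of a delete/create pair.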
For context this is a redis config file that contains a password and I'm mounting it with a volumeMount in a deployment:
```
- name: redis-conf-secret
  readOnly: true
  mountPath: /etc/redis
```
| priority/important-soon,area/app-lifecycle,area/kubectl,kind/feature,sig/cli,area/secret-api,area/declarative-configuration,lifecycle/frozen | high | Critical |
151,094,471 | TypeScript | In JS, use JSDoc to specify a call's type arguments | In typescript we can specified template at function call
```
function A<T>(callBack: (item: T) => void) { }
A<{ name: string }>(function (item) { /** item is {name:string} */ })
```
But I don't know how to force the type of its template when calling it from a JS file, and I can't find any clue. It seems like it's not possible, so I want to make a feature request if one doesn't exist already.
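For illustration, a workaround sketch (not the requested feature): in a `.js` file there is no `A<{name:string}>(...)` call-site syntax, but an inline JSDoc type cast on the callback pins the parameter type for the checker. The names and shapes below are made up for the example.

```javascript
// Hypothetical workaround in a .js file checked by TypeScript: cast the
// callback expression so `item` gets the intended shape.
function A(callBack) {
  return callBack({ name: "demo" });
}

const result = A(
  /** @type {function({name: string}): string} */ (
    function (item) {
      return item.name; // item is checked as {name: string}
    }
  )
);

console.log(result); // prints "demo"
```

This only annotates the argument rather than supplying the type argument itself, which is why an explicit way to pass type arguments from JS would still be useful.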
| Suggestion,Needs Proposal,Domain: JSDoc,Domain: JavaScript | low | Minor |
151,105,027 | youtube-dl | Unable to extract subtitles from TV3Play.no | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x])
- Use _Preview_ tab to see how your issue will actually look like
---
### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.04.24_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.24**
### Before submitting an _issue_ make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your _issue_
---
### If the purpose of this _issue_ is a _bug report_, _site support request_ or you are not completely sure provide the full verbose output as follows:
Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
$ youtube-dl -v --skip-download --write-sub http://www.tv3play.no/programmer/the-daily-show/725244
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'--skip-download', u'--write-sub', u'http://www.tv3play.no/programmer/the-daily-show/725244']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2016.04.19
[debug] Python version 2.7.11 - Darwin-15.3.0-x86_64-i386-64bit
[debug] exe versions: avconv 11.4, avprobe 11.4, ffmpeg 3.0.1, ffprobe 3.0.1, rtmpdump 2.4
[debug] Proxy map: {}
[TVPlay] 725244: Downloading video JSON
WARNING: [TVPlay] This content might not be available in your country due to copyright reasons
[TVPlay] 725244: Downloading streams JSON
[TVPlay] 725244: Downloading f4m manifest
```
---
### Description of your _issue_, suggested solution and other information
When I try to only download the subtitles of a video, it fails, even though I am able to watch the video with subtitles from a web browser.
| geo-restricted | low | Critical |
151,167,981 | rust | linking staticlib files into shared libraries exports all of std:: | Consider this toy example:
``` rust
#[no_mangle]
pub fn hello() {
println!("hello world")
}
```
``` c++
extern "C" {
void hello();
}
void
really_hello()
{
hello();
}
```
Compile and link:
``` sh
$ rustc --crate-type staticlib --emit link=sl.a sl.rs
$ g++ -o hello.so -fPIC -shared driver.cpp sl.a
```
With rust 1.8.0, we have:
``` sh
$ ls -l hello.so
-rwxr-xr-x 1 froydnj froydnj 2141544 Apr 26 11:02 hello.so
```
which is quite large (2MB!) for such a simple program. Despite all of `std` being compiled with the moral equivalent of `-ffunction-sections`, adding `-Wl,--gc-sections` does very little to slim down the binary:
``` sh
$ g++ -o hello.so -fPIC -shared driver.cpp sl.a -Wl,--gc-sections
$ ls -l hello.so
-rwxr-xr-x 1 froydnj froydnj 2141544 Apr 26 11:02 hello.so
```
That's only about 400 bytes eliminated, which seems suboptimal.
The problem is that all of the public functions in `libstd.rlib` are marked as global symbols. When `sl.a` is linked into a shared library, all of those global symbols from `libstd.rlib` are now treated as symbols that the newly-created shared library should export as publicly visible symbols. This creates bloat in terms of a large PLT the shared library must tote around as well as rendering `-Wl,--gc-sections` ineffective, as virtually everything is transitively reachable from these public functions from `libstd.rlib`. `hello.so` has ~5000 visible functions, when it should really only have a handful. `hello.so` contains code for parsing floating-point numbers, even though it really shouldn't, according to the functions shown above.
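One common mitigation — my suggestion, untested against this report — is a GNU ld version script that localizes everything except the intended entry points, which also gives `-Wl,--gc-sections` something to discard:

```
/* hello.map -- hypothetical version script; note that C++ symbols like
   really_hello are mangled and would need an extern "C++" pattern. */
{
  global: hello;
  local: *;
};
```

linked with something like `g++ -o hello.so -fPIC -shared driver.cpp sl.a -Wl,--version-script=hello.map -Wl,--gc-sections` (flag names per the GNU ld documentation).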
This example is admittedly contrived, but Firefox's use of Rust is not terribly dissimilar from this: we compile all the crates we use into rlibs, link all of the rlibs together into a staticlib, and then link the staticlib into our enormous shared library, libxul. We're pretty careful with symbol visibility; we have hundreds of thousands of symbols in libxul, but fewer than 500 exported symbols. We would very much like it if:
1. libxul didn't suddenly grow thousands of newly-exported symbols overnight.
2. libxul didn't contain Rust code from `std` (or otherwise) that it doesn't use.
We didn't think terribly hard about this when we enabled Rust on our desktop platforms (though we should have), but our Android team cares quite a bit about binary size, and Rust support taking up this much space would be a hard blocker on our ability to ship Rust on Android. It would be somewhat less than the above because we'd be compiling for ARM, but it'd still be significant. (I assume the situation is similar on Mac and Windows, though I haven't checked.)
cc @alexcrichton @rillian @glandium
| A-linkage,T-compiler,C-bug | low | Major |
151,170,595 | opencv | Opencv with Qt cmd remains after actual opencv window is closed when it should be closed also | - OpenCV version: 2.4 / 3
- Host OS: Windows 7/10
I have various OpenCV versions with Qt 4.x or 5.x, built with Visual Studio 2010.
When I run my exe, with simple OpenCV code loading an image, waitKey and release, Qt seems to keep the cmd window open if I close the OpenCV window. (It only closes the cmd window if I press a button to release and destroy the window from code.) This cmd window also has the output ''init done''.
Without the Qt build, the cmd window closes automatically when I close the OpenCV window, as expected.
Moreover, when I use the Qt-built OpenCV, the exe seems to have a memory leak and doesn't let me delete it.
| category: highgui-gui,RFC | low | Minor |
151,194,391 | go | cmd/compile: unexplained allocation for convT2I | Please answer these questions before submitting your issue. Thanks!
1. What version of Go are you using (`go version`)?
`go version devel +2bf7034 Mon Apr 25 16:18:10 2016 +0000 darwin/amd64`
2. What operating system and processor architecture are you using (`go env`)?
`darwin/amd64`
3. What did you do?
See attached typescript. I do not understand why line 28, the call to sort.Sort, is allocating. It is in a call to convT2I but a, a.r, and the slice a.r[i] points to are already on the heap. It seems like escape analysis and/or optimizations are missing a chance to avoid an allocation.
```
dunnart=% go test -gcflags=-m
# _/Users/r/bug
./x_test.go:35: can inline sortable.Len
./x_test.go:36: can inline sortable.Less
./x_test.go:37: can inline sortable.Swap
./x_test.go:25: make([]int, 5) escapes to heap
./x_test.go:28: sortable(r) escapes to heap
./x_test.go:23: new(A) escapes to heap
./x_test.go:12: t.common escapes to heap
./x_test.go:8: leaking param: t
./x_test.go:12: "allocs:" escapes to heap
./x_test.go:12: allocs escapes to heap
./x_test.go:14: t.common escapes to heap
./x_test.go:14: "expected 23 allocations, got " escapes to heap
./x_test.go:14: allocs escapes to heap
./x_test.go:9: TestParseAllocs func literal does not escape
./x_test.go:12: TestParseAllocs ... argument does not escape
./x_test.go:14: TestParseAllocs ... argument does not escape
./x_test.go:35: sortable.Len s does not escape
./x_test.go:36: sortable.Less s does not escape
./x_test.go:37: sortable.Swap s does not escape
<autogenerated>:1: inlining call to sortable.Len
<autogenerated>:1: (*sortable).Len .this does not escape
<autogenerated>:2: inlining call to sortable.Less
<autogenerated>:2: (*sortable).Less .this does not escape
<autogenerated>:3: inlining call to sortable.Swap
<autogenerated>:3: (*sortable).Swap .this does not escape
<autogenerated>:4: leaking param: .this
<autogenerated>:5: leaking param: .this
<autogenerated>:6: leaking param: .this
# testmain
/var/folders/g9/s3yf6f1n54bgn16vwpxnbxvm0004fc/T/go-build913880170/_/Users/r/bug/_test/_testmain.go:52: inlining call to testing.MainStart
/var/folders/g9/s3yf6f1n54bgn16vwpxnbxvm0004fc/T/go-build913880170/_/Users/r/bug/_test/_testmain.go:37: leaking param: pat
/var/folders/g9/s3yf6f1n54bgn16vwpxnbxvm0004fc/T/go-build913880170/_/Users/r/bug/_test/_testmain.go:37: leaking param: str
/var/folders/g9/s3yf6f1n54bgn16vwpxnbxvm0004fc/T/go-build913880170/_/Users/r/bug/_test/_testmain.go:52: main &testing.M literal does not escape
--- FAIL: TestParseAllocs (0.00s)
x_test.go:12: allocs: 11
x_test.go:14: expected 23 allocations, got 11
FAIL
exit status 1
FAIL _/Users/r/bug 0.031s
dunnart=% go test -memprofile=foo -memprofilerate=1
PASS
ok _/Users/r/bug 0.034s
dunnart=% go tool pprof --alloc_objects bug.test foo
Entering interactive mode (type "help" for commands)
(pprof) list foo
Total: 1145
ROUTINE ======================== _/Users/r/bug.foo in /Users/r/bug/x_test.go
1111 1111 (flat, cum) 97.03% of Total
. . 18:type A struct {
. . 19: r [5][]int
. . 20:}
. . 21:
. . 22:func foo() *A {
101 101 23: a := new(A)
. . 24: for i := 0; i < 5; i++ {
505 505 25: a.r[i] = make([]int, 5)
. . 26: }
. . 27: for _, r := range a.r {
505 505 28: sort.Sort(sortable(r))
. . 29: }
. . 30: return a
. . 31:}
. . 32:
. . 33:type sortable []int
(pprof) disasm foo
Total: 1145
ROUTINE ======================== _/Users/r/bug.foo
1111 1111 (flat, cum) 97.03% of Total
. . 66740: GS MOVQ GS:0x8a0, CX
. . 66749: LEAQ -0x60(SP), AX
. . 6674e: CMPQ 0x10(CX), AX
. . 66752: JBE 0x668d0
. . 66758: SUBQ $0xe0, SP
. . 6675f: LEAQ 0x7cb9a(IP), AX
. . 66766: MOVQ AX, 0(SP)
101 101 6676a: CALL runtime.newobject(SB)
. . 6676f: MOVQ 0x8(SP), AX
. . 66774: MOVQ AX, 0x48(SP)
. . 66779: XORL CX, CX
. . 6677b: MOVQ CX, 0x30(SP)
. . 66780: CMPQ $0x5, CX
. . 66784: JGE 0x667fa
. . 66786: LEAQ 0x70333(IP), DX
. . 6678d: MOVQ DX, 0(SP)
. . 66791: MOVQ $0x5, 0x8(SP)
. . 6679a: MOVQ $0x5, 0x10(SP)
505 505 667a3: CALL runtime.makeslice(SB)
. . 667a8: MOVQ 0x20(SP), AX
. . 667ad: MOVQ 0x28(SP), CX
. . 667b2: MOVQ 0x18(SP), DX
. . 667b7: MOVQ 0x48(SP), BX
. . 667bc: TESTL AL, 0(BX)
. . 667be: MOVQ 0x30(SP), BP
. . 667c3: LEAQ 0(BP)(BP*2), R8
. . 667c8: MOVQ AX, 0x8(BX)(R8*8)
. . 667cd: MOVQ CX, 0x10(BX)(R8*8)
. . 667d2: LEAQ 0(BX)(R8*8), AX
. . 667d6: MOVL 0x13a034(IP), CX
. . 667dc: TESTL CL, CL
. . 667de: JNE 0x668b3
. . 667e4: MOVQ DX, 0(BX)(R8*8)
. . 667e8: LEAQ 0x1(BP), CX
. . 667ec: MOVQ BX, AX
. . 667ef: MOVQ CX, 0x30(SP)
. . 667f4: CMPQ $0x5, CX
. . 667f8: JL 0x66786
. . 667fa: MOVQ 0(AX), CX
. . 667fd: MOVQ CX, 0x68(SP)
. . 66802: LEAQ 0x8(AX), SI
. . 66806: LEAQ 0x70(SP), DI
. . 6680b: CALL 0x507ae
. . 66810: XORL CX, CX
. . 66812: LEAQ 0x68(SP), DX
. . 66817: MOVQ CX, 0x38(SP)
. . 6681c: MOVQ DX, 0x40(SP)
. . 66821: CMPQ $0x5, CX
. . 66825: JGE 0x668a3
. . 66827: MOVQ 0x10(DX), BX
. . 6682b: MOVQ 0x8(DX), BP
. . 6682f: MOVQ 0(DX), SI
. . 66832: MOVQ SI, 0x50(SP)
. . 66837: MOVQ BP, 0x58(SP)
. . 6683c: MOVQ BX, 0x60(SP)
. . 66841: LEAQ 0x10d518(IP), BX
. . 66848: MOVQ BX, 0(SP)
. . 6684c: LEAQ 0x50(SP), BX
. . 66851: MOVQ BX, 0x8(SP)
. . 66856: MOVQ $0x0, 0x10(SP)
505 505 6685f: CALL runtime.convT2I(SB)
. . 66864: MOVQ 0x20(SP), AX
. . 66869: MOVQ 0x18(SP), CX
. . 6686e: MOVQ CX, 0(SP)
. . 66872: MOVQ AX, 0x8(SP)
. . 66877: CALL sort.Sort(SB)
. . 6687c: MOVQ 0x40(SP), BX
. . 66881: LEAQ 0x18(BX), DX
. . 66885: MOVQ 0x38(SP), BX
. . 6688a: LEAQ 0x1(BX), CX
. . 6688e: MOVQ 0x48(SP), AX
. . 66893: MOVQ CX, 0x38(SP)
. . 66898: MOVQ DX, 0x40(SP)
. . 6689d: CMPQ $0x5, CX
. . 668a1: JL 0x66827
. . 668a3: MOVQ AX, 0xe8(SP)
. . 668ab: ADDQ $0xe0, SP
. . 668b2: RET
. . 668b3: MOVQ AX, 0(SP)
. . 668b7: MOVQ DX, 0x8(SP)
. . 668bc: CALL runtime.writebarrierptr(SB)
. . 668c1: MOVQ 0x48(SP), BX
. . 668c6: MOVQ 0x30(SP), BP
. . 668cb: JMP 0x667e8
. . 668d0: CALL runtime.morestack_noctxt(SB)
. . 668d5: JMP _/Users/r/bug.foo(SB)
. . 668da: INT $0x3
. . 668db: INT $0x3
. . 668dc: INT $0x3
. . 668dd: INT $0x3
. . 668de: INT $0x3
(pprof) quit
dunnart=% cat x_test.go
package foo
import (
"sort"
"testing"
)
func TestParseAllocs(t *testing.T) {
allocs := testing.AllocsPerRun(100, func() {
foo()
})
t.Log("allocs:", allocs)
if allocs != 11 {
t.Fatal("expected 11 allocations, got ", allocs)
}
}
type A struct {
r [5][]int
}
func foo() *A {
a := new(A)
for i := 0; i < 5; i++ {
a.r[i] = make([]int, 5)
}
for _, r := range a.r {
sort.Sort(sortable(r))
}
return a
}
type sortable []int
func (s sortable) Len() int { return len(s) }
func (s sortable) Less(i, j int) bool { return s[i] < s[j] }
func (s sortable) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
dunnart=%
```
| Performance,compiler/runtime | low | Critical |
151,206,372 | three.js | Can character-based caching remove the memory manager from AnimationMixer? | ##### Description of the problem
Is there a way to remove the memory manager from the AnimationMixer? The memory manager is responsible for a large part of that code base. If we just made people pre-assign Clips/Actions to their characters, and then just not enable those Clips/Actions, the characters could act as their own caches -- which is generally the main use case. This would simplify the AnimationMixer and remove the main need for the memory manager as I understand it.
##### Three.js version
- [x] Dev
- [x] r76
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] Linux
- [ ] Android
- [ ] IOS
##### Hardware Requirements (graphics card, VR Device, ...)
| Suggestion | low | Minor |
151,207,259 | three.js | Request to refactor THREE.Interpolant to make it more flexible, less tightly coupled. | ##### Description of the problem
I would like to refactor THREE.Interpolant to be a bit simplier base class that has less assumptions about how it will be used. This would make it more flexible.
I would like to make the following changes:
- Remove the assumption in the base class that there is one sampleValues buffer, it has a sampleSize and it has a resultsBuffer. Rather I would prefer if the base class made no assumptions on the storage structure. We could then have a THREE.BufferInterpolant derived class that has these assumptions. This is a coupling reduction change.
- I would like to create a separate findIndex( time, optionalSuggestedIndex ) function isolated finding the current index given the current time and an optional suggested start index for the search. It will return a single integer, which is -1 if the time is before the range and T (where T is the size of the time array) if the time is after the range. This will remove the idea that different types of interpolators are called based on the results of the search -- you can still do that based on the returned integer value but it won't be tightly coupled as it is now.
- I would also like to remove the loop type/extrapolator awareness from the findIndex() function - if someone is ping/pong or repeat, that can be determined before going into the findIndex() function as it is fully separable.
- I would like to move _cachedIndex out of the base class and have it as an optional parameter to evaluate() -- just as a means of speeding up the search (it would be passed to findIndex.) The reason is I would like to share these Interpolants between multiple animations that may not be synchronized and thus I would be accessing the same Interpolant class in two or more ways, and thus I could use a different cachedIndex for each usage. This is reducing the coupling. THREE.BufferInterpolant could still have it if desired.
- Rename "ending" to "extrapolation" to be more consistent with other programs. (The other term that is used is "wrap" type but "extrapolation" I feel is better.)
- I would like to synchronize the extrapolation names with Loop names as much as possible as that is generally what they support. Right now we do not have an Extrapolation mode that matches up with LoopPingPong and the other two Wrap/Extrapolation names that do align with Loop modes are not that suggestive. I am unsure what is the right solution here. Here is what unity uses for both AnimationClips and AnimationCurves: http://docs.unity3d.com/ScriptReference/WrapMode.html
- Rename parameterPosition to time as that is the majority use case? (This is just a personal preference.)
The reason is I would like to replicate the behavior of Unity3D's AnimationCurve more closely:
http://docs.unity3d.com/ScriptReference/AnimationCurve.html
The main features of Unity3D's Animation Curve is that each keyframe has a value, and an inTangent and an outTangent. If you set inTangent and outTangent to 0 you get linear interpolation but if you set them to 1 each you get cubic interpolation. This is really useful for precise control of keyframe animations.
There is another pattern that is often used in animation it is the TCB pattern used by FBX, Maya, 3DS Max and Softimage. It uses on a per key basis a value (vec3, quat, etc.) + bias + continuity + tension - which is again is multiple bits of data per key that I would like to be able to support.
But I guess if we have baked all these flat animation primitives so deeply into mixer, can I do this decoupling so I can use more expressive animation curves and custom ones?
/ping @tschw
##### Three.js version
- [x] Dev
- [x] r76
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] Linux
- [ ] Android
- [ ] IOS
| Suggestion | low | Minor |
151,209,325 | neovim | Special marks change on diffupdate | - Neovim version: 0.1.4-dev
- Vim behaves differently? Vim version:
- Operating system/version: Ubuntu 15.10
- Terminal name/version: rxvt-unicode 9.21
- `$TERM`: tmux-256color
### Problem behaviour
When editing buffers in "diff mode", a `:diffupdate` causes the `'[` and `']` marks to be moved to the first and last lines of the buffer. (Vim apparently has this bug, too)
### Expected behaviour
`:diffupdate` should not change these marks.
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
:e file1
:vsp file2
:windo diffthis
yy
`[
`]
:diffupdate
`[
`]
```
| bug-vim,diff | low | Critical |
151,219,639 | opencv | /MT flag while building OpenCV | I am having problems while building OpenCV with /MT flags using Intel Compiler for Windows.
### Please state the information for your system
- OpenCV version: 3.1
- Host OS: Windows 7
- Compiler: Intel Compiler
### In which part of the OpenCV library you got the issue?
Building OpenCV
### Expected behaviour
To be able to build with /MT flag
### Actual behaviour
The /nodefaultlib:libcmt.lib flag causes my links to fail with unresolved C runtime symbols.
### Additional description
If I remove the /nodefaultlib:libcmt.lib flags from the CMakeLists.txt files, then my build is successful. Why is this flag present, and can it be removed to allow static-runtime builds without requiring modifications?
| bug,priority: low,category: build/install | low | Minor |
151,504,925 | TypeScript | Issue an error if tsconfig.json results in no files to compile | **TypeScript Version:** 1.8.9
I'm trying to run `tsc -p .` to use the tsconfig.json file and compile app.ts but it gives no warnings and no output.
The same tsconfig and app.ts in a Visual Studio project correctly gives app.js and app.d.ts files. This is happening on two different computers.
By the way, is there no tsc --verbose or tsc --debug? The first is "unrecognized" and the second is "unsupported".
```
C:\demo>dir /b
app.ts
tsconfig.json
C:\demo>type app.ts
var data = 'hello';
C:\demo>type tsconfig.json
{
"compilerOptions": {
"module": "es6",
"target": "es6",
"noImplicitAny": false,
"sourceMap": true,
"declaration": true
},
"exclude": [
"node_modules"
]
}
C:\demo>tsc -v
Version 1.8.9
C:\demo>tsc -p .
C:\demo>dir /b
app.ts
tsconfig.json
C:\demo>wtf
```

| Bug,Help Wanted | low | Critical |
151,511,386 | kubernetes | Clarify security philosophy around update & delete | We had some vigorous discussion in the API machinery SIG earlier today about what a system administrator should expect when granting a user update rights on a given object or class of objects.
My own position is that update is strictly more powerful than delete and you shouldn't give anyone update powers over an object you don't want them to do destructive things to.
The opposing position is, as @derekwaynecarr put it, update : delete :: paint house : burn house down.
We discussed whether it's reasonable to expect system admins to understand the exact power update gives you over any particular sort of object; some objects allow more destructive updates than others. We discussed extracting "harmless" field updates into subresources which would be safe, with source code that is easily checkably safe.
We discussed how our current update validation path appears to be a fertile source of CVEs, and whether or not we should expect that to change in the future.
We didn't come to clear conclusions on these issues, and whatever we do decide the system's philosophy should be here, we need committers and reviewers to understand it. So I'm opening this issue so we can hopefully come to some agreement.
The issue that triggered this is the garbage collector: #23656 and also see discussion on its API PR: #23928
| area/security,area/api,sig/api-machinery,lifecycle/frozen,sig/security | medium | Major |
151,612,670 | vscode | Handle pasting code from column selection specially | https://github.com/Microsoft/vscode/issues/1515#issuecomment-210166838
From @seva0stapenko
---
Tried column selection in VSCode 1.0. Well, rectangular selection does work, cutting and pasting back also works, but that's about it.
Cutting and pasting into a different part of the code does not work as expected. Both Notepad++ and VS remember that copied text was a rectangular block and paste it as a rectangular block.
VSCode treats copied block just like a regular text, so pasting it anywhere outside of the current multi-cursor context would create a mess.
It's good that the feature is being worked on, but it's not fully baked yet.
---
*Addition from @hediet*
This is (still) the current behavior of VS Code. The copied/pasted text matches the selection:

However, the copied/pasted text should match this selection:

See also #43145 for how other editors handle it. | feature-request,editor-columnselect | high | Critical |
151,671,632 | vscode | Support to print the editor contents | - VSCode Version: 1.0
- OS Version: Windows 10.0.10586.218
Steps to Reproduce:
1. Open a File in VSCode or enter some code
2. Try to print it to a printer
Sometimes paper is unbeatable.
| feature-request,editor-rendering | high | Critical |
151,759,934 | TypeScript | Allow `this` in static initializers | The following is an error:
``` ts
function factory() {
return class {
static x = 1;
static y = this.x * 2;
}
}
```
I can see some potential for confusion here, though I'm not sure what that confusion might be. On the flip side, it seems useful to be able to refer to an anonymous class in a static initializer. It is also easy to teach that "this in instance initializers = the instance, this in static initializers = the class, same as methods vs. static methods."
| Suggestion,ES Next,Awaiting More Feedback | low | Critical |
151,777,021 | kubernetes | Feature request: A way to signal pods | This has come up a bunch of times in conversations. The idea is that some external orchestration is being performed and the user needs to signal (SIGHUP usually, but arbitrary) pods. Sometimes this is related to ConfigMap (#22368) and sometimes not. This also can be used to "bounce" pods.
We can't currently signal across containers in a pod, so any sort of sidecar is out for now.
We can do it by `docker kill`, but something like `kubectl signal` is clumsy - it's not clearly an API operation (unless we revisit 'operation' constructs for managing imperative async actions).
Another issue is that a pod doesn't really exist - do we signal one container? Which one? Maybe all containers? Or do we nominate a signalee in the pod spec?
| priority/backlog,area/api,sig/node,kind/feature,lifecycle/frozen | high | Critical |
151,853,106 | youtube-dl | Implement --hls-prefer-livestreamer | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.04.24**
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your _issue_?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### Description of your _issue_, suggested solution and other information
FFmpeg seems to be very inefficient on HLS streams, especially slow ones. For example, it drops a lot of segments on vlive.tv streams with middle/high quality (low-res streams are recorded fine). On the other hand `livestreamer "hls://stream-url.m3u8" best -o out.ts` works perfectly fine. So I think it would be great to have support for an external livestreamer downloader. Note that `HlsFD` currently can't handle live streams.
(This is just a proposal; I currently don't have any code for this.)
| request | low | Critical |
151,933,883 | go | runtime/metrics: add goroutine state counts, total goroutines created, total threads | **Update**, Jun 7 2023: runtime/metrics now exists, but there are a few metrics in the draft CL here that aren't yet exposed. See https://github.com/golang/go/issues/15490#issuecomment-1564615291.
- - -
MemStats provides a way to monitor allocation and garbage collection.
We need a similar facility to monitor the Scheduler.
Briefly:
- Total goroutines created
- Current number of goroutines
- Total number of goroutines scheduled
- Current number of goroutines scheduled
- Total thread starts
- Current number of threads.
- Metrics on the delay between a goroutine being ready and running on a proc.
| help wanted,Proposal,Proposal-Accepted,FeatureRequest,early-in-cycle | medium | Major |
152,053,357 | rust | Add additional LoopUnswitch and LICM passes | This is [recommended by LLVM](http://llvm.org/docs/Frontend/PerformanceTips.html#pass-ordering) for languages with lots of guards that are rarely executed. Rust certainly does (mostly array bounds checks, but also `Option::unwrap()` and `Result::unwrap()`).
| A-LLVM,I-slow,C-enhancement,T-compiler | low | Major |
152,117,856 | electron | Add workspace API | All major platforms now support workspaces (virtual desktops), we should probably add a set of workspace related API, so it can:
- get the workspace of window;
- list available workspaces;
- choose which workspace to put the new window;
- move windows between workspaces.
| enhancement :sparkles: | high | Critical |
152,194,850 | rust | Suggest move closures when a Sync bound is not satisfied through a closure | ``` rust
use std::thread;
use std::sync::mpsc;
fn main() {
let (send, recv) = mpsc::channel();
let t = thread::spawn(|| {
recv.recv().unwrap();
});
send.send(());
t.join().unwrap();
}
```
[playpen](https://play.rust-lang.org/?gist=2fa73177cd7f0935cbf9883c8348715d&version=stable&backtrace=0)
This currently gives the error
```
<anon>:6:13: 6:26 error: the trait `core::marker::Sync` is not implemented for the type `core::cell::UnsafeCell<std::sync::mpsc::Flavor<()>>` [E0277]
<anon>:6 let t = thread::spawn(|| {
^~~~~~~~~~~~~
<anon>:6:13: 6:26 help: see the detailed explanation for E0277
<anon>:6:13: 6:26 note: `core::cell::UnsafeCell<std::sync::mpsc::Flavor<()>>` cannot be shared between threads safely
<anon>:6:13: 6:26 note: required because it appears within the type `std::sync::mpsc::Receiver<()>`
<anon>:6:13: 6:26 note: required because it appears within the type `[closure@<anon>:6:27: 8:6 recv:&std::sync::mpsc::Receiver<()>]`
<anon>:6:13: 6:26 note: required by `std::thread::spawn`
error: aborting due to previous error
playpen: application terminated with error code 101
```
If a closure needs to be Sync, we should perhaps suggest that it be made a move closure.
Perhaps we should verify that the closure _can_ be made move (i.e. it is `: 'static` when made move), but I'm not sure how easy that is to do.
We already suggest move closures when you do the same thing with a `Vec`, because the error reaches borrowck. Typeck sync issues don't suggest move.
cc @nikomatsakis
| C-enhancement,A-diagnostics,A-closures,T-compiler | low | Critical |
152,468,790 | opencv | Compiling Opencv with cuda fails Mac OS 10.9.5 | Opencv3.1.0
- Host OS:Mac 10.9.5
- Compiler & CMake: GCC 4.5 or NVCC from cuda 7.5 & CMake 3.5.2
cmake completes configuring and writes the build files.
(Although the error log is unhappy with a variety of inscrutable errors)
make gives
```
.
.
[ 5%] Building C object 3rdparty/libjasper/CMakeFiles/libjasper.dir/jp2_cod.c.o
nvcc fatal : GNU C/C++ compiler is no longer supported as a host compiler on Mac OS X.
CMake Error at cuda_compile_generated_gpu_mat.cu.o.cmake:208 (message):
Error generating
/Users/me/Downloads/opencv-master/build/modules/core/CMakeFiles/cuda_compile.dir/src/cuda/./cuda_compile_generated_gpu_mat.cu.o
make[2]: *** [modules/core/CMakeFiles/cuda_compile.dir/src/cuda/cuda_compile_generated_gpu_mat.cu.o] Error 1
make[1]: *** [modules/core/CMakeFiles/opencv_core.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
```
This error occurs with either compiler when configuring cmake
`-D WITH_CUDA=ON -D CMAKE_CXX_FLAGS=-fno-inline -D CUDA_HOST_COMPILER=/usr/local/cuda/bin/nvcc`
or
`-D WITH_CUDA=ON -D CMAKE_CXX_FLAGS=-fno-inline -D CUDA_HOST_COMPILER=/usr/local/bin/gcc-4.5`
Does anyone have insight on the problem??
| bug,category: build/install,category: gpu/cuda (contrib),platform: ios/osx | low | Critical |
152,493,425 | opencv | Please consider supporting RelWithDebInfo | I don't see the purpose of the following line in the top level CMake script:
`set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "Configs" FORCE)`
What are the reasons for disabling RelWithDebInfo?
| RFC | low | Critical |
152,584,753 | go | net/http: send and understand the hop-by-hop Keep-Alive header | Now that we have `net/http.Transport.IdleConnTimeout` we could also advertise our timeout to the peer:
https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#rfc.section.2
It's a little useless for the client to send it, but perhaps the server could take care not to close the connection within roughly a second of the client's announced close time, in case the client was about to send something just as the server would otherwise have closed.
Independently, and more usefully, the server could announce its keep-alive idle time (once it exists: #14204) and the client could respect it.
| NeedsInvestigation | low | Minor |
152,596,246 | youtube-dl | [SR] Mediathek Site support request | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.05.01**
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Site support request (request for adding support for a new site)
- Single video: http://sr-mediathek.de/index.php?seite=7&id=1795
- Single audio: http://sr-mediathek.de/index.php?seite=7&id=7480
**description**
Can you please add support for SR Mediathek (Saarländischer Rundfunk)?
Log for audio example is attached.
**log**
```
youtube-dl.exe -v "http://sr-mediathek.de/index.php?seite=7&id=7480"
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://sr-mediathek.de/index.php?seite=7&id=7480']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2016.05.01
[debug] Python version 2.7.10 - Windows-XP-5.1.2600-SP3
[debug] exe versions: ffmpeg N-67100-g6dc99fd, ffprobe N-67100-g6dc99fd, rtmpdump 2.4
[debug] Proxy map: {}
[generic] index: Requesting header
WARNING: Falling back on generic information extractor.
[generic] index: Downloading webpage
[generic] index: Extracting information
ERROR: Unsupported URL: http://sr-mediathek.de/index.php?seite=7&id=7480
Traceback (most recent call last):
File "youtube_dl\extractor\generic.pyo", line 1393, in _real_extract
File "youtube_dl\compat.pyo", line 279, in compat_etree_fromstring
File "youtube_dl\compat.pyo", line 268, in _XML
File "xml\etree\ElementTree.pyo", line 1642, in feed
File "xml\etree\ElementTree.pyo", line 1506, in _raiseerror
ParseError: syntax error: line 1, column 1
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 673, in extract_info
File "youtube_dl\extractor\common.pyo", line 341, in extract
File "youtube_dl\extractor\generic.pyo", line 2063, in _real_extract
UnsupportedError: Unsupported URL: http://sr-mediathek.de/index.php?seite=7&id=7480
<end of log>
```
| site-support-request | low | Critical |
152,667,138 | react | RFC: Configure Warning Levels Using ENV Variable | Would it be feasible to specify the version of warnings you want through a static environment variable?
That would silence new warnings so that you can safely update minor versions without worrying about warning spew.
Dynamic configuration creates stateful shared module dependencies which we're very close to getting rid of for the isomorphic package (ReactCurrentOwner being the last one).
| Type: Enhancement,React Core Team | low | Minor |
152,669,545 | nvm | Option to disable curl-like download progress | ERROR: type should be string, got "https://github.com/creationix/nvm/blob/1c3f8da6c38bdfecf3dbf01c6753a6fd27032b9d/nvm.sh#L45-L51\n\nThat section of code looks pretty messy when nvm is used in a CI system like Jenkins. Here's a small snippet of what that looks like to Jenkins' streaming console log.\n\n```\n...\n### 5.3%\n### 5.3%\n### 5.3%\n### 5.4%\n### 5.4%\n### 5.4%\n### 5.4%\n### 5.5%\n### 5.5%\n### 5.5%\n#### 5.6%\n#### 5.7%\n#### 5.8%\n#### 5.9%\n#### 6.0%\n#### 6.1%\n#### 6.2%\n#### 6.3%\n#### 6.4%\n#### 6.4%\n#### 6.4%\n#### 6.5%\n#### 6.5%\n#### 6.5%\n#### 6.6%\n#### 6.6%\n#### 6.7%\n#### 6.7%\n#### 6.7%\n#### 6.7%\n#### 6.8%\n#### 6.8%\n#### 6.8%\n#### 6.9%\n##### 7.0%\n##### 7.0%\n##### 7.0%\n##### 7.1%\n##### 7.2%\n##### 7.2%\n##### 7.3%\n##### 7.3%\n##### 7.4%\n...\n```\n\nIt would be great if there was an option to disable the progress bar. While it's true I can discard output (e.g. `nvm install v4.2.3 &> /dev/null`) I fear discarding potentially valuable troubleshooting output from stderr.\n" | installing node,feature requests,pull request wanted | medium | Critical |
152,678,242 | go | cmd/compile: evaluate the need for shortcircuit | For Go1.8
1. What version of Go are you using (`go version`)?
go version devel +097e2c0 Mon May 2 21:02:54 2016 +0000 linux/amd64
2. What operating system and processor architecture are you using (`go env`)?
linux/amd64
3. What did you do?
./make.bash & check the size of binaries in pkg/tools/linux_amd64
4. What did you expect to see?
5. What did you see instead?
The shortcircuit pass increases the size of the binary by approx 0.04%, but this is not consistent.
with shortcircuit
3244469 addr2line
4909014 api
3982946 asm
3988940 cgo
10647306 compile
4914435 cover
3424419 dist
3825731 doc
2916445 fix
4070497 link
3207010 nm
3532598 objdump
1996235 pack
9308102 pprof
8435425 trace
6246651 vet
2879901 yacc
81530124 total
without shortcircuit
**3236277** addr2line
**4904918** api
**3978850** asm
_3993036_ cgo // higher
**10638988** compile
**4906243** cover
3424419 dist
3825731 doc
2916445 fix
4070497 link
3207010 nm
_3536694_ objdump // higher
1996235 pack
**9304006** pprof
8435425 trace
**6242555** vet
**2875805** yacc
81493134 total
| Performance,NeedsInvestigation,early-in-cycle | low | Minor |
152,739,187 | go | x/net/publicsuffix: automate periodic regeneration | Please answer these questions before submitting your issue. Thanks!
1. What version of Go are you using (`go version`)?
1.6.1
2. What operating system and processor architecture are you using (`go env`)?
linux/amd64
3. What did you do?
I used golang.org/x/net/publicsuffix and it doesn't contain the last updated list
4. What did you expect to see?
An updated list
5. What did you see instead?
A list from 2016-03-01
The list is changing more than before. The trigger was LetsEncrypt but people begin to understand the security implications beside Letsencrypt.
| Builders,NeedsInvestigation | low | Major |
152,773,104 | TypeScript | umd module compiler option doesn't have a fallback for global namespace. | Most umd patterns have a third fallback that allows exporting to the window.namesapace = export; As such the current umd module export is pretty broken when a huge number of users / library developers need to support all three.
``` js
(function(root, factory) {
if (typeof define === 'function' && define.amd) {
define(factory);
} else if (typeof exports === 'object') {
module.exports = factory(require, exports, module);
} else {
root.exceptionless = factory();
}
}(this, function(require, exports, module) {}
```
| Suggestion,Needs Proposal | high | Critical |
152,872,036 | bitcoin | Script disassembly for short byte arrays is confusing | ScriptToAsmString will convert any push shorter than 4 bytes to a number before display. This can lead to some confusion if you were expecting a hex-encoded blob as it does for longer-than-4-byte chunks. I'm not sure this is even documented anywhere.
| Docs | low | Minor |
152,882,370 | go | cmd/compile: SSA performance regression on polygon code | 1. What version of Go are you using (`go version`)?
go 1.6.2 and devel +15f7a66
2. What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/nkovacs/progs/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GO15VENDOREXPERIMENT="1"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
```
3. What did you do?
clone this repo: https://github.com/nkovacs/polygonperf
and run `go test -cpu x -bench .` with go 1.6.2 and devel +15f7a66
Results for BenchmarkContains (ns/op) on a Core 2 Q6600:
| master 4 cpu | 1.6.2 4 cpu | master 1 cpu | 1.6.2 1 cpu | master 2 cpu | 1.6.2 2 cpu |
| --- | --- | --- | --- | --- | --- |
| 370 | 285 | 297 | 282 | 334 | 264 |
| 378 | 295 | 297 | 284 | 325 | 264 |
| 374 | 304 | 299 | 283 | 367 | 235 |
| 364 | 295 | 298 | 411 | 316 | 251 |
| 386 | 285 | 298 | 278 | 353 | 301 |
| 375 | 272 | 298 | 277 | 328 | 254 |
| 374 | 305 | 297 | 279 | 351 | 260 |
| 379 | 318 | 298 | 394 | 333 | 241 |
| 374 | 289 | 298 | 310 | 361 | 247 |
| 380 | 296 | 298 | 272 | 346 | 247 |
| 375.4 | 294.4 | 297.8 | 307 | 341.4 | 256.4 |
Results for BenchmarkStructContains (ns/op) on a Core 2 Q6600:
| master 4 cpu | 1.6.2 4 cpu | master 1 cpu | 1.6.2 1 cpu | master 2 cpu | 1.6.2 2 cpu |
| --- | --- | --- | --- | --- | --- |
| 390 | 280 | 297 | 408 | 353 | 327 |
| 367 | 299 | 441 | 283 | 357 | 279 |
| 387 | 282 | 297 | 283 | 339 | 277 |
| 372 | 314 | 444 | 413 | 347 | 250 |
| 371 | 307 | 296 | 298 | 339 | 264 |
| 381 | 284 | 296 | 278 | 340 | 281 |
| 371 | 291 | 297 | 288 | 359 | 280 |
| 348 | 298 | 435 | 278 | 351 | 288 |
| 343 | 295 | 300 | 411 | 338 | 274 |
| 394 | 304 | 297 | 397 | 364 | 302 |
| 372.4 | 295.4 | 340 | 333.7 | 348.7 | 282.2 |
(last line is average)
I've seen a similar 30% increase in ns/op on an AMD Athlon II X2 270, but on that CPU the 1 cpu benchmark had the same result as the 2 cpu benchmark.
On the two more modern Intel CPUs I briefly tested, this simple polygon does not show a difference between master and 1.6.2. I added a second polygon (BenchmarkContains2 and BenchmarkStructContains2) that does show a difference, with 1.6.2 again being faster. On the Q6600, go 1.6.2 performs twice as fast in these benchmarks, on a Xeon server, go 1.6.2 is about 100-200 ns/op faster.
| Performance,NeedsFix,early-in-cycle | low | Major |
152,926,906 | nvm | Should io.js be removed from nvm ls? Now that it is merged with node again | Since io.js isn't going to be developed anymore.
| feature requests,io.js | medium | Critical |
152,930,596 | go | x/exp/shiny: widgets | Tracking bug for Shiny Widgets.
| Widget | Assignee | Notes |
| --- | --- | --- |
| [Button](https://www.google.com/design/spec/components/buttons.html) | @nigeltao | |
| [Cards](https://www.google.com/design/spec/components/cards.html) | | |
| [Chips](https://www.google.com/design/spec/components/chips.html) | | |
| [Dialogs](https://www.google.com/design/spec/components/dialogs.html) | | |
| [Dividers](https://www.google.com/design/spec/components/dividers.html) | | |
| [Grid lists](https://www.google.com/design/spec/components/grid-lists.html) | | |
| [Lists](https://www.google.com/design/spec/components/lists.html) | | See also [Lists: Controls](https://www.google.com/design/spec/components/lists-controls.html). |
| [Data tables](https://www.google.com/design/spec/components/data-tables.html) | @crawshaw | Ideally share a data model with lists. |
| [Menus](https://www.google.com/design/spec/components/menus.html) | | |
| [Pickers](https://www.google.com/design/spec/components/pickers.html) | | |
| [Progress Bar](https://www.google.com/design/spec/components/progress-activity.html) | | |
| [Selection controls](https://www.google.com/design/spec/components/selection-controls.html) | | |
| [Slider](https://www.google.com/design/spec/components/sliders.html) | | |
| [Snackbar](https://www.google.com/design/spec/components/snackbars-toasts.html) | | |
| [Stepper](https://www.google.com/design/spec/components/steppers.html) | | |
| [Subheader](https://www.google.com/design/spec/components/subheaders.html) | | |
| [Tabs](https://www.google.com/design/spec/components/tabs.html) | | |
| [Text field](https://www.google.com/design/spec/components/text-fields.html) | @nigeltao | |
| [Toolbars](https://www.google.com/design/spec/components/toolbars.html) | | |
| [Tooltips](https://www.google.com/design/spec/components/tooltips.html) | | |
| Scrolling | @nigeltao | |
| [Flex Layout](https://www.w3.org/TR/css-flexbox-1/) | @crawshaw | [shiny/widget/flex](https://golang.org/x/exp/shiny/widget/flex) |
| GL-in-Shiny | @crawshaw | A widget which contains OpenGL, like [NSOpenGLView](https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSOpenGLView_Class/). |
| Shiny-in-GL | | Project a screen into GL scene |
| NeedsInvestigation | low | Critical |
152,993,933 | opencv | Traincascade tool: reported training time invalid on multicore systems | ### Please state the information for your system
- OpenCV version: 3.x --> current master branch
- Host OS: Linux (Ubuntu 14.04)
### In which part of the OpenCV library you got the issue?
OpenCV's embedded train_cascade tool for building cascade classifiers using the AdaBoost process
### Expected behaviour
When running the tool, the generated output should specify how long the training of each stage took, by printing the following line after each finished stage:
> Training until now has taken 0 days 0 hours 14 minutes 34 seconds.
### Actual behaviour
Because part of the training can run on multiple cores, on systems with more cores (24 in my case) the time reported by the tool differs from the actual elapsed time; it instead tracks the user time, as the Linux time tool shows:
> real 11m46.084s
> user 14m18.198s
> sys 0m18.812s
The difference gets bigger as more data is used and processing takes longer, on the order of hours and days.
I will look into a possible solution, so that the real (wall-clock) time is reported instead of the user time obtained by counting clock ticks (which accumulates across all the cores used).
| feature,category: apps | low | Minor |
153,069,303 | kubernetes | Warn or error when a workload object (eg Deployment) would breach a quota | The quota system, in its current form, doesn't play well with high-level resources such as replication controllers/replica sets/deployments. While quota will correctly not allow my replicaset to scale up to the N pods I have asked for -because N pods may exceed {cpu,memory,pods} quota- the replicaset is blocked in limbo as its desired state has been updated but may never be fulfilled. This has implications for deployments too. A deployment that manages such a replicaset will never complete successfully because the size of the replicaset is not reflecting the reality. It's probably a problem for the user to solve but do we want to do something about it? Can we do something about it?
If this is a dupe of discussion elsewhere, please link it and close this.
@derekwaynecarr @kubernetes/rh-platform-management @bgrant0607
| priority/backlog,sig/scheduling,sig/architecture,lifecycle/frozen | medium | Critical |
153,083,507 | opencv | Unable to retrieve depth or points from occipital structure sensor using OpenNI2 | ### Please state the information for your system
- OpenCV version: 3.1 master from git
- Host OS: Windows 7
- MSVC 2013 & CMake 3.5
### In which part of the OpenCV library you got the issue?
Examples:
- videoio
### Expected behaviour
Acquisition of depth map and point cloud from sensor
### Actual behaviour
```
Attempted to access nullptr in m_pFrame
opencv_videoio310d.dll!openni::VideoFrameRef::getWidth() Line 525 C++
> opencv_videoio310d.dll!getDepthMapFromMetaData(const openni::VideoFrameRef & depthMetaData={...}, cv::Mat & depthMap={...}, int noSampleValue=0, int shadowValue=0) Line 720 C++
opencv_videoio310d.dll!CvCapture_OpenNI2::retrieveDepthMap() Line 736 C++
opencv_videoio310d.dll!CvCapture_OpenNI2::retrieveFrame(int outputType=0) Line 876 C++
opencv_videoio310d.dll!cvRetrieveFrame(CvCapture * capture=0x00000000225427b0, int idx=0) Line 102 C++
opencv_videoio310d.dll!cv::VideoCapture::retrieve(const cv::_OutputArray & image={...}, int channel=0) Line 630 C++
```
### Additional description
Using an occipital structure sensor (http://structure.io/) with the recommended configuration modification [http://com.occipital.openni.s3.amazonaws.com/Structure%20Sensor%20OpenNI2%20Quick%20Start%20Guide.pdf] I can view the point cloud via the NiViewer, however accessing the camera through the OpenCV interface results in a nullptr access violation.
### Code example to reproduce the issue / Steps to reproduce the issue
``` .cpp
#include "opencv2/videoio.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
cv::VideoCapture cam(1610);
cv::Mat depth, point_cloud;
cam.retrieve(depth, CAP_OPENNI_DEPTH_MAP);
cam.retrieve(point_cloud, CAP_OPENNI_POINT_CLOUD_MAP);
}
```
| bug,category: videoio,affected: 3.4 | low | Minor |
153,102,299 | kubernetes | pod status does not reflect 'Reason' codes for failures that one sees when doing kubectl describe pod <podname> | When a pod (or rep controller, or service) fails for reason like 'no corresponding docker image' or PodExceedsFreeCPU we see this when we do a "pod describe". I have included the describe output for one of our test runs below. Notice that the Status for the replication controller is Pending -- however an event occurred whose corresponding 'Reason' code is FailedScheduling.
I would argue this is an API usability bug. The suggested fix would be to add a new enum value to the 'Status' attribute of resources (like pods, rc's, etc) -- namely "Failed".
This would make it much easier to manage K8s clusters via REST API's like fabric8.
In fact, I posted a stack overflow question asking how -- using the fabric8 API -- we could find out that a pod (or other resource) failed to provision correctly. [ http://stackoverflow.com/questions/37020369/best-way-in-kubernetes-to-receive-notifications-of-errors-when-instantiating-res ]. If Kubernetes provided a "Failed" status, the current implementation of fabric8 API would automatically work better. And there would be less confusion for users who used kubectl to find out the status of provisioning a resource.
[ PS: we are using k8s version 1.1 ]
**Describe Output**
```
kubectl describe pod spark-worker-rc-bxkyo
Name: spark-worker-rc-bxkyo
Namespace: mervin-mervin-1544803905
Image(s): ecr.vip.acmec3.com/mousetown/spark-1.5.1:dev
Node: /
Labels: name=spark-worker,uses=spark-master
Status: Pending
Reason:
Message:
IP:
Replication Controllers: spark-worker-rc (4/4 replicas created)
Containers:
spark-worker:
Image: ecr.vip.acmec3.com/mousetown/spark-1.5.1:dev
Limits:
cpu: 4
memory: 16Gi
State: Waiting
Ready: False
Restart Count: 0
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Tue, 03 May 2016 12:43:38 -0700 Tue, 03 May 2016 13:38:54 -0700 25 {scheduler } FailedScheduling Failed for reason PodFitsHostPorts and possibly others
Tue, 03 May 2016 12:43:23 -0700 Tue, 03 May 2016 13:40:55 -0700 173 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
```
| priority/backlog,sig/scheduling,area/usability,area/kubectl,area/reliability,sig/cli,lifecycle/frozen | low | Critical |
153,115,780 | youtube-dl | TheVideo.me site support request | I think that it's a [XFileShare](https://github.com/rg3/youtube-dl/blob/57d8e32a3ec7fe70522edad6fd0c2847b4e00944/youtube_dl/extractor/xfileshare.py) extractor which you can see because it's similar HTML and you can see the form fields (op=download1). I tried it but I couldn't get it work - seems like the `file url` regex does not match.
Example URL: http://thevideo.me/zo5jqio9my56
| site-support-request | low | Minor |
153,242,372 | TypeScript | Overriden class member annotated with a subtype could be a recipe for disaster (by design.. ;) ) | **TypeScript Version:**
`1.9.0-dev.20160505` with both `strictNullChecks` and `noImplicitAny` enabled.
**Code:**
``` ts
class Animal {
name: string = "Any animal";
}
class Dog extends Animal {
name: string = "Dog";
bark() {
console.log("bark");
}
}
class AnimalCage {
member: Animal = new Animal();
}
class DogCage extends AnimalCage {
member: Dog;
}
let dogCage = new DogCage();
if (dogCage.member != null) { // Yup, it is even checked for null or undefined,
// but it turned out that wasn't enough..
dogCage.member.bark(); // Bang!
}
```
**Expected behavior:**
Design a language where inheritance actually works.
(_Edit: I was feeling very 'passionate' when I wrote this.. sorry :) nothing personal_)
**Actual behavior:**
By design:
```
dogCage.member.bark();
^
TypeError: dogCage.member.bark is not a function
at Object.<anonymous> (C:\X\Documents\Drafts\TypeScript\Code\a.js:38:20)
at Module._compile (module.js:541:32)
at Object.Module._extensions..js (module.js:550:10)
at Module.load (module.js:456:32)
at tryModuleLoad (module.js:415:12)
at Function.Module._load (module.js:407:3)
at Function.Module.runMain (module.js:575:10)
at startup (node.js:159:18)
at node.js:444:3
```
**Resolution:**
`tslint` volunteers, for the rescue!
| Suggestion,Needs Proposal | medium | Critical |
153,297,034 | go | cmd/compile: support inlining non-escaping closures | Split out from #9337 and #15537.
| Performance,compiler/runtime | low | Major |
153,338,284 | kubernetes | Kubelet: We should separate status update with SyncPod | As is discussed in https://github.com/kubernetes/kubernetes/issues/19077#issuecomment-216686379.
Now we only update pod status at the beginning of `SyncPod`, so the pod status could never be able to reflect the changes happening in `SyncPod`, for example image pulling progress.
We should separate status update with `SyncPod`, so that we could update pod status whenever we need, and possibly driven by events.
/cc @kubernetes/sig-node
| priority/backlog,area/kubelet,sig/node,kind/feature,lifecycle/frozen | low | Major |
153,352,314 | kubernetes | Expected behavior for Deployment replicas with HPA during update | I am seeing unexpected behavior from a user standpoint when `kubectl apply -f mydeployment.yml`
The existing Deployment has an HPA associated with it and the current replica count is 3. When the deployment is accepted, the number of replicas jumps to 9, then scales down to 6, the value in the deployment. Eventually, the new pods info starts flowing in and the HPA does its thing and scales back to satisfaction.
The behavior I expected is that the rollout would use the current replicas from the HPA and the strategy parameters from the Deployment to keep replica count in line with the current target set by the HPA.
k8s version: 1.2.3
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mydeployment
spec:
replicas: 6
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: "40%"
maxSurge: "50%"
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: "someimage"
```
```
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
name: mydeployment
namespace: default
spec:
scaleRef:
kind: Deployment
name: mydeployment
subresource: scale
minReplicas: 3
maxReplicas: 12
cpuUtilization:
targetPercentage: 50
```
| kind/bug,priority/backlog,kind/documentation,area/app-lifecycle,sig/autoscaling,area/workload-api/deployment,sig/cli,lifecycle/frozen | high | Critical |
153,357,729 | go | syscall: exec_windows.go: arguments should not be escaped to work with msiexec | Problem: When I try to run a command in Go on Windows using exec.Command() and then exec.Run(), the arguments are escaped using this logic: https://github.com/golang/go/blob/master/src/syscall/exec_windows.go#L26. That logic escapes the quotes around TARGETDIR="%v" which need to be there. I currently am assigning to Cmd.SysProcAttr manually to get around the escaping.
1. What version of Go are you using (`go version`)?
Go 1.6.2
2. What operating system and processor architecture are you using (`go env`)?
Windows 10 Pro x64
3. What did you do?
- The code is available here, but needs to be run on Windows: https://play.golang.org/p/aU1PlbNTqM
- Download any MSI. I used https://fastdl.mongodb.org/win32/mongodb-win32-x86_64-2008plus-ssl-3.2.6-signed.msi and renamed it to package.msi.
- You can run Error() and NoError() to see the difference.
4. What did you expect to see?
I expect to see the MSI say: Completed the ... Setup Wizard.
5. What did you see instead?
I see the Windows Installer help window, and when I click it, I get: exit status 1639, which is:
ERROR_INVALID_COMMAND_LINE 1639 Invalid command line argument. Consult the Windows Installer SDK for detailed command line help.
| help wanted,OS-Windows,NeedsFix,early-in-cycle | medium | Critical |
153,363,395 | rust | Lint against leading 0 in integer literals | A literal like `0111` looks like an octal literal in C, but is actually decimal in Rust (octal would be`0o111`). This is [a footgun](http://stackoverflow.com/q/37062143/1256624), and so seems like something that could be trivially checked for.
(I suppose this could be a "clippy" lint, but that seems like it would lose most of the benefit: it seems to me that most people who encounter this will be just starting out, not invested in running external, third-party commands using nightly.)
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"GrigorenkoPV"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-lints,C-feature-request | low | Major |
153,390,508 | flutter | Provide demo of heroes being driven by the transition animation | 
| c: new feature,framework,f: material design,d: api docs,P3,team-design,triaged-design | low | Minor |
153,499,064 | go | cmd/compile: better error message for scoped type error | 1. What version of Go are you using (`go version`)?
`go version go1.6.2 darwin/amd64`
2. What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
```
3. What did you do?
https://play.golang.org/p/pJBsVvubBw
4. What did you expect to see?
An error message that was clear
5. What did you see instead?
Spent over an hour figuring out why `type a` was not taking `type a`
I think there should be a better error message for this case, as debugging it was super hard.
| NeedsFix,compiler/runtime | low | Critical |
153,517,698 | go | x/build: find new cloud provider for Solaris, Illumos builders? | Now that we have SmartOS builders on Joyent in a custom image using the buildlet, let's use the Joyent API and dynamically create the containers as needed. This should be done by creating a new pool type in x/build/cmd/coordinator, similar to the GCE, reverse, and Kubernetes pool types.
This will be cheaper (run zero when we need zero), and will also let us scale from 0 to dozens as needed, do sharded builds, and make SmartOS a trybot. (Currently we just run 2 containers all the time.)
I see lots of joyent stuff at https://godoc.org/?q=joyent
/cc @davecheney @4ad
| help wanted,OS-Solaris,Builders,new-builder | medium | Critical |
153,552,844 | go | cmd/vet: check for duplicate input to some binary ops | [moved from #15570]
@dominikh's [staticcheck](https://github.com/dominikh/go-staticcheck) found some bugs in the standard library -- see #15570. This issue is to consider whether it's worth adding a vet check along the same lines.
The check would be to look for expressions of the form (x BOP x), where:
- x is not of type float
- BOP is one of: & && | || == != - / % ^ &^
- x is a side-effect free expression (see the boolean conditions check)
These expressions are either redundant or have a constant value (with some very rare exceptions, like division and the smallest negative integer), which indicates that they are probably a mistake, and in any case would be better written in another way.
cc @robpike for opinions
cc @valyala in case you are interested in playing with more vet checks :)
| NeedsDecision,Analysis | low | Critical |
153,598,561 | rust | Unhelpful error message for specialization-related type projection failures. | Playground: http://is.gd/QMg2a8
| C-enhancement,A-diagnostics,T-compiler,F-specialization | low | Critical |
153,600,746 | go | x/net/http2: toggle HPACK dynamic table indexing for header | Often a service will have certain header fields that have no discernible repeatable pattern (or the possible combinations far outnumber the HPACK table slots). It would be good to be able to turn off HPACK for those headers so that these non-repeating values don't use up slots in the dynamic table.
Here's a quote from [Apple's APNs documentation](https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html) touching on this:
> APNs requires the use of HPACK (header compression for HTTP/2), which prevents repeated header keys and values. APNs maintains a small dynamic table for HPACK. To help avoid filling up the APNs HPACK table and necessitating the discarding of table data, encode headers in the following way—especially when sending a large number of streams:
> - The `:path` header should be encoded as a literal header field without indexing,
> - The `apns-id` and `apns-expiration` headers should be encoded differently depending on initial or subsequent POST operation, as follows:
> - The first time you send these headers, encode them with incremental indexing to allow the header names to be added to the dynamic table
> - Subsequent times you send these headers, encode them as literal header fields without indexing
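One hypothetical shape for such a knob is a per-request policy naming the header fields that must never enter the dynamic table. All identifiers below (`IndexPolicy`, `ShouldIndex`) are invented for illustration; they are not part of the `x/net/http2` API:

```go
// Sketch of a per-header indexing policy for an HPACK encoder.
package main

import (
	"fmt"
	"strings"
)

// IndexPolicy lists header names whose values should never be added to
// the HPACK dynamic table (e.g. unique request IDs, per-push :path values).
type IndexPolicy struct {
	noIndex map[string]bool
}

// NewIndexPolicy builds a policy from a set of header names.
func NewIndexPolicy(names ...string) *IndexPolicy {
	p := &IndexPolicy{noIndex: make(map[string]bool)}
	for _, n := range names {
		p.noIndex[strings.ToLower(n)] = true
	}
	return p
}

// ShouldIndex reports whether a field may be added to the dynamic table;
// fields it rejects would be emitted as literals without indexing.
func (p *IndexPolicy) ShouldIndex(name string) bool {
	return !p.noIndex[strings.ToLower(name)]
}

func main() {
	// Mirrors the APNs advice above: never index :path or apns-id values.
	p := NewIndexPolicy(":path", "apns-id")
	fmt.Println(p.ShouldIndex(":path"))        // false
	fmt.Println(p.ShouldIndex("content-type")) // true
}
```

For what it's worth, the `hpack` package's `HeaderField` already carries a per-field `Sensitive` bit that forces never-indexed literal encoding, which is close to this but also signals intermediaries not to index the field.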
| FeatureRequest | low | Major |
153,644,786 | kubernetes | Requirements for multi-scheduler to graduate to Beta and then v1 | We'd like to graduate multi-scheduler to Beta in 1.4.
Beta
- [ ] Change annotation key from `scheduler.alpha.kubernetes.io/name` to `scheduler.beta.kubernetes.io/name`
- [ ] Make sure we have a rollback story or are sure we won't need to rollback earlier than when feature was first introduced
v1
- [ ] Change annotation key from `scheduler.beta.kubernetes.io/name` to `scheduler.kubernetes.io/name`
The design doc has some suggestions for "next steps", and we should incorporate any of those that we actually want to do into the Beta list, though I'm not sure any of them are necessary.
ETA: Previously we had "Implement [MetadataPolicy](https://github.com/kubernetes/kubernetes/blob/master/docs/design/metadata-policy.md)" for Beta and "Use initializer pattern in MetadataPolicy" but I think we can separate that from this feature.
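For context, the annotation-based selection being renamed above boils down to each scheduler checking whether a pod names it. This is an illustrative sketch only; the helper name and fallback are invented, and only the alpha key string comes from the checklist:

```go
// Sketch: decide which scheduler is responsible for a pod, based on the
// scheduler-name annotation discussed in the Beta checklist.
package main

import "fmt"

// Alpha key from the checklist; the Beta item renames it to
// scheduler.beta.kubernetes.io/name.
const schedulerAnnotationKey = "scheduler.alpha.kubernetes.io/name"

// responsibleScheduler returns the scheduler a pod is annotated for,
// falling back to a default when the annotation is absent or empty.
func responsibleScheduler(annotations map[string]string) string {
	if name, ok := annotations[schedulerAnnotationKey]; ok && name != "" {
		return name
	}
	return "default-scheduler"
}

func main() {
	fmt.Println(responsibleScheduler(map[string]string{
		schedulerAnnotationKey: "my-scheduler",
	})) // my-scheduler
	fmt.Println(responsibleScheduler(nil)) // default-scheduler
}
```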
cc/ @HaiyangDING @bgrant0607
ref/ #17865 #18262
| sig/scheduling,lifecycle/frozen | medium | Major |
153,644,794 | kubernetes | Requirements for Affinity to graduate to Beta and then v1 | We'd like to graduate node and pod affinity to Beta in 1.4.
Beta
- [x] Change annotation key from `scheduler.alpha.kubernetes.io/affinity` to `scheduler.beta.kubernetes.io/affinity`
- [x] Make performance of pod affinity acceptable (#26144) (and verify that performance of node affinity is acceptable)
- [x] Better solution for "first pod problem"
- [ ] Last bullet point here https://github.com/kubernetes/kubernetes/pull/22985#issuecomment-216020283
- [ ] Before allowing users to use it, be sure we won't need to roll back to binary version that doesn't support it
- [x] Re-enable/fix all tests (see e.g. #26697, #26698)
v1
- [x] Move Affinity to be a field in PodSpec
- [ ] Implement RequiredDuringExecution
- [ ] Incorporate feedback from users of Beta
Better solution for DoS issues? (or just rely on priority/preemption when we have it?) (see https://github.com/kubernetes/kubernetes/pull/18265#issuecomment-172142482)
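Since the Beta form above keeps affinity in a JSON-valued annotation, each consumer has to parse it; a rough sketch (the struct here is a drastically trimmed stand-in, not the real API type):

```go
// Sketch: read the JSON-valued affinity annotation from a pod's metadata.
package main

import (
	"encoding/json"
	"fmt"
)

// Alpha key from the checklist; the Beta item renames it to
// scheduler.beta.kubernetes.io/affinity.
const affinityAnnotationKey = "scheduler.alpha.kubernetes.io/affinity"

// Affinity is a simplified stand-in; the real type has structured
// node/pod affinity terms rather than raw JSON.
type Affinity struct {
	NodeAffinity json.RawMessage `json:"nodeAffinity,omitempty"`
	PodAffinity  json.RawMessage `json:"podAffinity,omitempty"`
}

// affinityFromAnnotations parses the annotation if present; a nil result
// with nil error means the pod requested no affinity.
func affinityFromAnnotations(annotations map[string]string) (*Affinity, error) {
	raw, ok := annotations[affinityAnnotationKey]
	if !ok {
		return nil, nil
	}
	var a Affinity
	if err := json.Unmarshal([]byte(raw), &a); err != nil {
		return nil, err
	}
	return &a, nil
}

func main() {
	a, err := affinityFromAnnotations(map[string]string{
		affinityAnnotationKey: `{"nodeAffinity":{}}`,
	})
	fmt.Println(a != nil, err) // true <nil>
}
```

Promoting Affinity to a real field in PodSpec (the v1 item above) removes this parse-and-validate step entirely, which is part of the argument for the move.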
cc/ @kevin-wangzefeng @bgrant0607 @wojtek-t
ref/ #18261 #18265 #24853 #22985 #19758
| priority/important-soon,sig/scalability,sig/scheduling,lifecycle/frozen | medium | Major |