Columns: id (int64, 393k to 2.82B), repo (string, 68 classes), title (string, 1 to 936 chars), body (string, 0 to 256k chars), labels (string, 2 to 508 chars), priority (string, 3 classes), severity (string, 3 classes)
182,788,179
opencv
Traincascade: grabbing negative window samples is seriously broken
### Please state the information for your system - OpenCV version: 3.x --> latest master branch - Host OS: Linux (Ubuntu 14.04 & 16.04) ### In which part of the OpenCV library you got the issue? apps --> opencv_traincascade process ### Expected behaviour Providing more than enough negative samples is a common approach in computer vision. It makes sure that, through rescaling and subsampling of windows, the traincascade application succeeds in grabbing many millions of windows from a set of larger images. However, this is not the case if you supply a limited set of data. ### Actual behaviour & problem description Imagine the following setup (just confirmed on a detector setup I have here) - Given 2500 positive windows of 19x14 pixels - Given 1 negative image of 66x28 pixels, from which negative windows can be grabbed - Training an LBP-feature cascade classifier with `-numStages 25 -w 19 -h 14` ### Generated output ``` spu@TOBCAT:/data/zonnepanelen_test$ opencv_traincascade -data cascade/ -vec data.vec -bg negatives.txt -numPos 2500 -numNeg 10000 -numStages 25 -precalcValBufSize 8000 -precalcIdxBufSize 8000 -w 19 -h 14 -featureType LBP -minHitRate 0.999 PARAMETERS: cascadeDirName: cascade/ vecFileName: data.vec bgFileName: negatives.txt numPos: 2500 numNeg: 10000 numStages: 25 precalcValBufSize[Mb] : 8000 precalcIdxBufSize[Mb] : 8000 acceptanceRatioBreakValue : -1 stageType: BOOST featureType: LBP sampleWidth: 19 sampleHeight: 14 boostType: GAB minHitRate: 0.999 maxFalseAlarmRate: 0.5 weightTrimRate: 0.95 maxDepth: 1 maxWeakCount: 100 Number of unique features given windowSize [19,14] : 1710 ===== TRAINING 0-stage ===== <BEGIN POS count : consumed 2500 : 2500 NEG count : acceptanceRatio 10000 : 1 Precalculation time: 0 +----+---------+---------+ | N | HR | FA | +----+---------+---------+ | 1| 1| 1| +----+---------+---------+ | 2| 1| 1| +----+---------+---------+ | 3| 1| 1| +----+---------+---------+ | 4| 0.9996| 0.1268| +----+---------+---------+ END> Training 
until now has taken 0 days 0 hours 0 minutes 3 seconds. ===== TRAINING 1-stage ===== <BEGIN POS count : consumed 2500 : 2501 NEG count : acceptanceRatio 10000 : 0.127781 Precalculation time: 0 +----+---------+---------+ | N | HR | FA | +----+---------+---------+ | 1| 1| 1| +----+---------+---------+ | 2| 1| 1| +----+---------+---------+ | 3| 1| 1| +----+---------+---------+ | 4| 0.9996| 0.0994| +----+---------+---------+ END> Training until now has taken 0 days 0 hours 0 minutes 8 seconds. ===== TRAINING 2-stage ===== <BEGIN POS count : consumed 2500 : 2502 NEG count : acceptanceRatio 10000 : 0.0126893 Precalculation time: 0 +----+---------+---------+ | N | HR | FA | +----+---------+---------+ | 1| 1| 1| +----+---------+---------+ | 2| 0.9992| 0| +----+---------+---------+ END> Training until now has taken 0 days 0 hours 0 minutes 33 seconds. ===== TRAINING 3-stage ===== <BEGIN POS count : consumed 2500 : 2504 ``` ### Remarks on the generated output 1. How is it possible that, from a single negative image of 66x28 pixels, more than 3 stages of 10,000 negatives each can be grabbed? Given that each subsequent stage only takes negatives that were not classified as negatives by the previous cascade, this can easily grow to 40,000 to 50,000 samples just for those 3 stages. 2. A known problem is that the training hangs once it cannot find new negative samples. But in my opinion, in this case it hangs way too late, and should instead emit an error message saying, `oh hello, I cannot find any more negative windows, do provide me more if you want to continue training`
bug,affected: 3.4,category: apps
low
Critical
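Remark 1 above can be sanity-checked with simple arithmetic: counting every stride-1 position of a 19x14 window across downscales of the single 66x28 negative image gives nowhere near 10,000 unique samples per stage. A minimal sketch (Rust for brevity; the 1.2 scale step is an assumption for illustration, not traincascade's actual schedule):

```rust
/// Number of distinct stride-1 positions of a `win_w` x `win_h` window
/// inside an image of size `img_w` x `img_h`.
fn windows(img_w: u32, img_h: u32, win_w: u32, win_h: u32) -> u64 {
    if img_w < win_w || img_h < win_h {
        return 0;
    }
    u64::from(img_w - win_w + 1) * u64::from(img_h - win_h + 1)
}

/// Total unique windows over a pyramid of downscales (hypothetical 1.2 step).
fn total_windows(mut w: f64, mut h: f64, win_w: u32, win_h: u32) -> u64 {
    let mut total = 0;
    while w >= f64::from(win_w) && h >= f64::from(win_h) {
        total += windows(w as u32, h as u32, win_w, win_h);
        w /= 1.2;
        h /= 1.2;
    }
    total
}

fn main() {
    // At full resolution: (66-19+1) * (28-14+1) = 48 * 15 = 720 positions.
    assert_eq!(windows(66, 28, 19, 14), 720);
    let total = total_windows(66.0, 28.0, 19, 14);
    // Even counting every position at every scale, one 66x28 image yields
    // far fewer unique windows than the 10,000 negatives logged per stage.
    assert!(total < 10_000);
    println!("unique windows across scales: {}", total);
}
```

Whatever the exact scale schedule, the count stays in the low thousands, which supports the report's claim that the sampler must be re-serving (near-)duplicate windows.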
182,837,879
opencv
Mouse callback gets wrong coordinates with small images
##### System information (version) - OpenCV => 3.1.0 (installed with brew install opencv3) - Operating System / Platform => Mac OSX El Capitan 10.11.6 - Compiler => Apple LLVM version 8.0.0 (clang-800.0.38) ##### Detailed description I load an image with imread() and display it with imshow(). On the window I set a mouse callback that prints the coordinates when the image is clicked with the left mouse button. But when the image is too small (for example 100x100px) there is a gray "filling" area in the window where the image is displayed. The problem is that this results in wrong coordinates when the image is clicked: - The window seems to be 200x100px, with 100x100px of the real image and 100x100px of gray filling area - When I click on the bottom right of the image I get the coordinates (49,99), and when I click on the bottom right of the whole window I get the coordinates (99,99) My teacher compiled the same program on Linux and the image is displayed correctly in a 100x100px window, with correct coordinates in the mouse callback. ##### Steps to reproduce I attached a zip file which contains: - source code - an input image of 100x100px which triggers the error (color_test.jpg) - an input image of 300x300px which works correctly - a screenshot of how the 100x100px image is displayed with the gray filling area (output.jpg) [onmouse_bug.zip](https://github.com/opencv/opencv/files/527658/onmouse_bug.zip)
bug,category: highgui-gui,platform: ios/osx
medium
Critical
182,858,343
go
cmd/compile: -bench should correct for GC
Currently -bench output is very sensitive to GC effects. For example: 1. Changing allocations in phase A might cause a GC cycle to shift from phase B to phase C, which can look like an improvement to B and a regression for phase C. 2. Reducing long-lived memory pressure from earlier phases gets credited to later phases, as the later phases benefit from reduced GC costs. This makes it hard to isolate performance improvements from frontend vs backend changes. I'm considering a few possible improvements to -bench: 1. Record GC pause times, and subtract them from phase times. 2. Record allocation stats for each phase. 3. Explicit GC cycle between FE and BE so we can measure how much live memory the FE has left for the BE to work with. Any other suggestions and/or implementation advice? /cc @griesemer @rsc @aclements
NeedsFix,compiler/runtime
low
Major
182,920,750
rust
Large array literal causes slow compilation
I have a large array of the form ``` static mut ARRAY: [AtomicU64; LARGE_NUMBER] = [ ATOMIC_U64_INIT, ATOMIC_U64_INIT, ... ]; ``` where `LARGE_NUMBER = 512 * 1024`. `ATOMIC_U64_INIT` is not `Copy`, so it's not obvious how to express this differently. I don't think I can even legally use a `[u64; LARGE_NUMBER]` and transmute on use, because `u64` has aliasing guarantees that `AtomicU64` breaks. This ends up adding more than 10s to my compile times. `time-passes`, filtering out passes that take <0.1s above their baseline, shows ``` time: 0.594; rss: 322MB parsing time: 0.295; rss: 349MB expansion time: 0.244; rss: 404MB name resolution time: 0.126; rss: 549MB lowering ast -> hir time: 0.104; rss: 358MB region resolution time: 0.148; rss: 358MB static item recursion checking time: 0.497; rss: 359MB compute_incremental_hashes_map time: 1.643; rss: 437MB item-types checking time: 0.562; rss: 436MB const checking time: 0.115; rss: 436MB privacy checking time: 0.568; rss: 492MB MIR dump time: 0.136; rss: 475MB death checking time: 0.164; rss: 475MB stability checking time: 0.559; rss: 475MB lint checking time: 3.098; rss: 523MB translation item collection time: 5.724; rss: 447MB translation ``` [There's a full version available elsewhere](https://gist.github.com/Veedrac/d53a6c2795890f94fa66e0bafb0bf73b), but it's not really more interesting.
C-enhancement,I-compiletime,T-compiler
low
Major
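On Rust releases well after this report, the workaround the author was missing exists: a `const` item may be used as an array-repeat operand even when the element type is not `Copy`, so the huge literal (and the slow compile it triggers) is unnecessary. A minimal sketch, using a plain `static` with interior mutability instead of the report's `static mut`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const LARGE_NUMBER: usize = 512 * 1024;

// A `const` item can be repeated in an array expression even though
// `AtomicU64` is not `Copy` (each element is a fresh evaluation of INIT).
const INIT: AtomicU64 = AtomicU64::new(0);
static ARRAY: [AtomicU64; LARGE_NUMBER] = [INIT; LARGE_NUMBER];

fn main() {
    // AtomicU64 gives interior mutability, so no `static mut` is needed.
    ARRAY[3].store(42, Ordering::Relaxed);
    assert_eq!(ARRAY[3].load(Ordering::Relaxed), 42);
    assert_eq!(ARRAY[0].load(Ordering::Relaxed), 0);
}
```

This compiles essentially instantly, since the initializer is a single repeat expression rather than 512k literal elements.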
182,938,563
rust
Should `use super::{self, ...}` work?
Obviously `use super::self` would be a bit redundant, but what about if you're importing lots of items from `super`? We can already do something like: ``` use foo::{self, a, b, c}; ``` So why not something like this? ``` use super::{self, a, b, c}; ``` For reference, the specific error message on nightly is: ``` error[E0432]: unresolved import `super` --> ... | 1 | use super::{self, a, b, c}; | ^^^^ no `super` in the root ```
A-resolve,T-lang,C-feature-request
low
Critical
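For contrast with the rejected `use super::{self, ...}`, here is a minimal sketch of the pattern the issue cites as already working: a `{self, ...}` group on a *named* module path, including one that goes through `super`. The module and function names are illustrative.

```rust
mod foo {
    pub fn a() -> u32 {
        1
    }
    pub fn b() -> u32 {
        2
    }
}

mod consumer {
    // `{self, ...}` works when the path ends in a named module:
    // this imports the module `foo` itself plus items from it.
    use super::foo::{self, a, b};

    pub fn sum() -> u32 {
        // Both the direct imports and the module path are usable.
        a() + b() + foo::a()
    }
}

fn main() {
    assert_eq!(consumer::sum(), 4);
}
```

The asymmetry the issue points out is that `self` in a use-group binds the last path segment's *name*, and `super` has no name to bind, hence the "no `super` in the root" error.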
182,940,569
angular
Feature: more robust module loader for router
**I'm submitting a ...** (check one with "x") ``` [ ] bug report [x] feature request [ ] support request ``` **Current behavior** The router's current default module loader tries once and crashes the app if it fails to retrieve the module. That's not good in a mobile environment where **latency and connectivity may be intermittent or poor.** We are always more robust when designing for data requests. **We should ship a default loader that is more robust in the face of connectivity failures** A counter argument is that _we should do what the browser does with scripts ... we should fail_. The argument continues: _loading data is different from loading a module_. I disagree. If a script fails to load, the app never even starts. When lost connectivity is the cause, the HTML fails to load too and the user sees the familiar 404. The user cannot have accumulated unsaved work. Lazy loading and even preloading of modules can occur after the app has been running for some time. The user may be deep into a work flow when connectivity drops. If it were a data operation, the developer would have prepared for this possibility and found a way to keep the app alive and preserve unsaved work. The reasoning is the same for failed module loading. One might argue that the developer who cares about this should create a custom loader. That's a fine response if we were talking about unusual circumstances. I think we are talking about an environment that is so common we should handle it ourselves out-of-the-box. **Expected behavior** We should ship a default loader that is more robust. Here are some of the things it MIGHT do: - Distinguish among failures, perhaps giving special treatment to a connectivity failure and possibly to an authorization failure as both are potentially curable by the user. We can fail fast for a module 404. 
- After connectivity failure could retry 'n' times with progressive delay - Loader activity (loading/loaded/load failed) should be distinctive events that a Dev can handle - App should not fail if loader fails. At least it should be an option to not fail. Of course it doesn't navigate. But it's up to the dev to detect and handle the failure. - Dev should be able to handle event and establish a "try again when connectivity is restored" dialog - Dev should be able to take some kind of corrective action after retries to run offline until the user re-establishes the connection. - Dev should be able to hook into load/loaded events in order to show a spinner while the router is loading ... just as one would while loading data. **Minimal reproduction of the problem with instructions** Victor confirmed behavior and does not request a repro **What is the motivation / use case for changing the behavior?** A top Angular priority is running in mobile environments. Mobile environments are plagued by connectivity issues. - **Angular version:** 2.1.0 - **Browser:** [all ] - **Language:** [all ]
feature,area: router,feature: under consideration
high
Critical
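The "retry 'n' times with progressive delay" behaviour proposed above is framework-agnostic, so it can be sketched outside Angular. A hypothetical sketch in Rust (the real proposal targets Angular's TypeScript module loader; function name and delays are illustrative):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a fallible load up to `retries` times with progressive delay,
/// surfacing the final error to the caller instead of crashing the app.
fn load_with_retry<T, E>(mut load: impl FnMut() -> Result<T, E>, retries: u32) -> Result<T, E> {
    let mut delay = Duration::from_millis(1); // tiny for the sketch; a real loader would start higher
    let mut attempt = 0;
    loop {
        match load() {
            Ok(module) => return Ok(module),
            Err(err) => {
                attempt += 1;
                if attempt > retries {
                    // Let the caller decide how to recover (dialog, offline mode, ...).
                    return Err(err);
                }
                sleep(delay);
                delay *= 2; // progressive (exponential) backoff
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    let loaded = load_with_retry(
        || {
            calls += 1;
            if calls < 3 { Err("offline") } else { Ok("module") }
        },
        5,
    );
    assert_eq!(loaded, Ok("module"));
    assert_eq!(calls, 3);
}
```

The key design point, matching the issue's argument: the failure is returned as a value the developer can handle, rather than an unrecoverable crash of a long-running app.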
183,032,016
rust
Inference fails because it doesn't consider deref coercions
Maybe this is intended, but it's not super-clear. [Playground](https://is.gd/v2SFxF) ``` rust use std::collections::HashMap; fn check(_: &str) -> bool { false } fn chain<'a>(some_key: &'a str) -> HashMap<&'a str, Vec<usize>> { let mut map = HashMap::new(); map.get(&some_key); { // uncomment type annotation to fix let key_ref /*: &&str*/ = map.keys().next().unwrap(); if check(key_ref) {} // due to autoderef `key_ref` could be `&&str` and // this would work. but `key_ref` is inferred to be // `&str` due to the `check` call, causing `map` to // be `HashMap<str, _>`. } map } fn main() {} ``` fails with <details> ``` error[E0277]: the trait bound `str: std::marker::Sized` is not satisfied --> min.rs:6:19 | 6 | let mut map = HashMap::new(); | ^^^^^^^^^^^^ | = note: `str` does not have a constant size known at compile-time = note: required by `<std::collections::HashMap<K, V>>::new` error[E0277]: the trait bound `str: std::marker::Sized` is not satisfied --> min.rs:7:9 | 7 | map.get(&some_key); | ^^^ | = note: `str` does not have a constant size known at compile-time error[E0277]: the trait bound `str: std::borrow::Borrow<&str>` is not satisfied --> min.rs:7:9 | 7 | map.get(&some_key); | ^^^ error[E0277]: the trait bound `str: std::marker::Sized` is not satisfied --> min.rs:8:18 | 8 | if check(map.keys().next().unwrap()) {} | ^^^^ | = note: `str` does not have a constant size known at compile-time error[E0277]: the trait bound `str: std::marker::Sized` is not satisfied --> min.rs:8:25 | 8 | if check(map.keys().next().unwrap()) {} | ^^^^ | = note: `str` does not have a constant size known at compile-time = note: required because of the requirements on the impl of `std::iter::Iterator` for `std::collections::hash_map::Keys<'_, str, _>` error[E0277]: the trait bound `str: std::marker::Sized` is not satisfied --> min.rs:8:14 | 8 | if check(map.keys().next().unwrap()) {} | ^^^^^^^^^^^^^^^^^ | = note: `str` does not have a constant size known at compile-time = note: required by 
`std::collections::hash_map::Keys` error[E0308]: mismatched types --> min.rs:9:5 | 9 | map | ^^^ expected reference, found str | = note: expected type `std::collections::HashMap<&'a str, std::vec::Vec<usize>>` = note: found type `std::collections::HashMap<str, _>` error: aborting due to 7 previous errors ``` </details>
T-lang,A-inference,C-feature-request
low
Critical
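The workaround mentioned in the report's comment (pinning `key_ref` to `&&str` so a deref coercion handles the `check` call) can be checked with a minimal compiling variant. This sketch uses `insert` instead of the report's `get` so the result is observable; the data is illustrative:

```rust
use std::collections::HashMap;

fn check(_: &str) -> bool {
    false
}

// With `key_ref` annotated as `&&str`, the `check(key_ref)` call goes
// through an auto-deref (&&str -> &str) instead of forcing the key type
// to unsized `str`, so `map` keeps its intended type.
fn chain<'a>(some_key: &'a str) -> HashMap<&'a str, Vec<usize>> {
    let mut map = HashMap::new();
    map.insert(some_key, vec![1, 2]);
    {
        let key_ref: &&str = map.keys().next().unwrap();
        if check(key_ref) {}
    }
    map
}

fn main() {
    let map = chain("key");
    assert_eq!(map["key"], vec![1, 2]);
}
```

Without the annotation, inference unifies `check`'s `&str` parameter with `&K` itself, concluding `K = str` and producing the `str: Sized` errors shown above.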
183,075,426
TypeScript
Incorrect (potentially) indentation of function arguments spanning multiple lines
_From @nomaed on October 14, 2016 10:40_ - VSCode Version: 1.6.1 - OS Version: macOS Sierra 10.12 Steps to Reproduce: - Write a function (or a method) with several arguments (I did it with JavaScript and TypeScript files, but I believe any would work). ``` javascript function myFunc(arg1, arg2, arg3, ...args) { } ``` - Split arguments to several lines by hitting Enter before arguments ``` javascript function myFunc(arg1, arg2, arg3, ...args) { } ``` Auto-formatting produces this result: ``` javascript function myFunc(arg1, arg2, arg3, ...args) { } ``` I would expect to see this result instead though: ``` javascript function myFunc(arg1, arg2, arg3, ...args) { } ``` Also, when manually formatting the arguments to appear in the same column (note: this is also the default/recommended setting in `tslint` and maybe other linters), then further lines will start with wrong indentation: ``` javascript function myFunc(arg1, arg2, arg3, ...args) { console.log('huh...'); } ``` ![vscode-multi-line-args-indent](https://cloud.githubusercontent.com/assets/9551921/19384766/a89ed4d8-9213-11e6-9890-ec41080e8843.gif) _Copied from original issue: Microsoft/vscode#13748_
Suggestion,Domain: Formatter,Awaiting More Feedback,VS Code Tracked
medium
Critical
183,105,915
go
net: add Sys field to Interface for retrieving platform-specific information
### What version of Go are you using (`go version`)? `go version go1.7 linux/amd64` ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/vagrant/vicsmb" GORACE="" GOROOT="/usr/local/go" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build018341538=/tmp/go-build -gno-record-gcc-switches" CXX="g++" CGO_ENABLED="1" ``` ### What did you do? ``` package main import ( "fmt" "net" "os" "os/exec" ) func main() { alias := "testalias" ipalias := exec.Command("/sbin/ip", "link", "set", "dev", "lo", "alias", alias) _, err := ipalias.CombinedOutput() if err != nil { fmt.Printf("failed to invoke /sbin/ip to set alias on loopback interface: %s", err) os.Exit(1) } intf, err := net.InterfaceByName(alias) if intf == nil { fmt.Printf("failed to locate interface by alias %s: %s\n", alias, err) os.Exit(1) } fmt.Println("success!!") } ``` ### What did you expect to see? ``` Aliasing link lo success!! ``` ### What did you see instead? ``` Aliasing link lo failed to locate interface by alias testalias: route ip+net: no such network interface # ip link 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 alias testalias ``` If `InterfaceByName` doesn't check aliases on systems that support them, then interface aliases become problematic. If you have to know whether the name is an alias, it defeats the point.
NeedsDecision,FeatureRequest
medium
Critical
183,149,585
rust
MSVC profilers attribute most time to std::panicking::try::do_call<AssertUnwindSafe<closure>,()>
While building servo with a locally-built msvc nightly rust with debug symbols (same behaviour with regular nightly binaries, the debug symbols just give a lot more useful stacks/etc. overall since names are resolved to proper demangled ones), profilers end up attributing >70% of CPU time to `std::panicking::try::do_call<AssertUnwindSafe<closure>,()>`. They both see the call stack from `thread_start -> thread::start_thread -> __rust_maybe_catch_panic -> std::panicking:...`. If I break in the (Visual Studio) debugger, basically every Rust-created thread has a stack that looks like this: ``` servo.exe!net::image_cache_thread::Receivers::recv() Line 277 Unknown servo.exe!net::image_cache_thread::ImageCache::run(core::option::Option<webrender_traits::api::RenderApi> core_resource_thread, ipc_channel::ipc::IpcReceiver<net_traits::image_cache_thread::ImageCacheCommand>) Line 369 Unknown servo.exe!00007ff7b1b93ae9() Line 352 Unknown servo.exe!00007ff7b2dfa6e0() Line 97 Unknown servo.exe!alloc::boxed::{{impl}}::call_box<(),closure>(closure * self, ...) Line 595 Unknown servo.exe!std::sys_common::thread::start_thread(libc::c_void * main) Line 21 Unknown servo.exe!std::sys::thread::{{impl}}::new::thread_start(libc::c_void * main) Line 50 Unknown ``` or ``` win32u.dll!NtUserGetMessage() Unknown user32.dll!GetMessageW() Unknown > servo.exe!00007ff7b2bca4b0() Line 352 Unknown servo.exe!00007ff7b2dfa6e0() Line 97 Unknown servo.exe!alloc::boxed::{{impl}}::call_box<(),closure>(closure * self, ...) Line 595 Unknown servo.exe!std::sys_common::thread::start_thread(libc::c_void * main) Line 21 Unknown servo.exe!std::sys::thread::{{impl}}::new::thread_start(libc::c_void * main) Line 50 Unknown ``` I have a whole bunch of worker threads that are showing ``` > servo.exe!00007ff7b2009fca() Line 352 Unknown servo.exe!00007ff7b2dfa6e0() Line 97 Unknown servo.exe!alloc::boxed::{{impl}}::call_box<(),closure>(closure * self, ...) 
Line 595 Unknown servo.exe!std::sys_common::thread::start_thread(libc::c_void * main) Line 21 Unknown servo.exe!std::sys::thread::{{impl}}::new::thread_start(libc::c_void * main) Line 50 Unknown kernel32.dll!BaseThreadInitThunk() Unknown ntdll.dll!RtlUserThreadStart() Unknown ``` as their entire stack. The "Line 352", on each of those unnamed functions, takes me to `panicking.rs:352`, which is the `do_call` mentioned above. But they're also very clearly different addresses, all of which are in the `servo.exe` module (which is at 00007FF7B1A20000-00007FF7B3EEB000). Manually dumping the pdb for servo.exe and looking up the relevant offset (`0x5e9fca`) gives me: ``` 00000000005e7bb0 72712 SymTagFunction net::bluetooth_thread::BluetoothManager::start C:\proj\r\servo\components\net\bluetooth_thread.rs(181) 00000000005f9890 254 SymTagFunction net::bluetooth_thread::BluetoothManager::get_or_create_adapter C:\proj\r\servo\components\net\bluetooth_thread.rs(232) ``` Which is totally reasonable and correct -- that last thread is somewhere inside the BluetoothManager's thread func. The size -- 72712 -- also looks correct, and our address should be in the middle of it. The source file and line number are _also_ correct! But the tools are merging all of these things down to line 352. I'm wondering if they're not using the PDB info for this, and the compiler/llvm is generating incorrect info in another layer?
A-debuginfo,C-enhancement,T-compiler,O-windows-msvc
medium
Critical
183,167,032
rust
"cannot declare a new module at this location" should specify good locations
The error has two notes full of weasel words (2x "maybe", 1x "possibly") and it should probably say something straightforward like "modules are declared at the crate root, or mod.rs at the root of a source directory".
C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics
low
Critical
183,174,505
TypeScript
Give better error messages when reserved names are used for binding identifiers
``` typescript const data = { in: 1 }; const { in } = data; // <-- [ts] ':' expected. ```
Suggestion,Help Wanted,Effort: Moderate,Domain: Error Messages
low
Critical
183,201,944
rust
Inferring type succeeds with too little information
The following code fails to compile: ``` rust pub trait Bar { fn new() -> Self; } pub struct Foo {} impl Bar for Foo { fn new() -> Self { Foo {} } } pub struct Qux<B=Foo> where B: Bar { foo: B, } impl<B> Qux<B> where B: Bar { fn new() -> Self { Qux { foo: B::new() } } } fn main() { let q = Qux::new(); } ``` The error is: ``` rustc 1.14.0-nightly (6e8f92f11 2016-10-07) error[E0282]: unable to infer enough type information about `_` --> <anon>:24:13 | 24 | let q = Qux::new(); | ^^^^^^^^ cannot infer type for `_` | = note: type annotations or generic parameter binding required error: aborting due to previous error ``` (from playpen; same with stable) Strangely, replacing that failing line with: ``` rust let q: Qux = Qux::new(); ``` makes it compile.
C-enhancement,T-compiler,A-inference
low
Critical
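The "strange" behaviour at the end of the report is that the default type parameter `B = Foo` is applied when `Qux` is written as a *type annotation*, but not when inferring the receiver of `Qux::new()`. A minimal compiling sketch of the annotated workaround (the `x` field is added here purely to make the result observable):

```rust
pub trait Bar {
    fn new() -> Self;
}

pub struct Foo {
    x: u8,
}

impl Bar for Foo {
    fn new() -> Self {
        Foo { x: 7 }
    }
}

pub struct Qux<B = Foo>
where
    B: Bar,
{
    foo: B,
}

impl<B> Qux<B>
where
    B: Bar,
{
    fn new() -> Self {
        Qux { foo: B::new() }
    }
}

pub fn make_default_qux() -> u8 {
    // Writing bare `Qux` in the annotation applies the `B = Foo` default;
    // a plain `let q = Qux::new();` leaves `B` uninferred (E0282).
    let q: Qux = Qux::new();
    q.foo.x
}

fn main() {
    assert_eq!(make_default_qux(), 7);
}
```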
183,226,916
rust
[testing infra] Pretty-printing tests fail when need to find an out-of-line module
Example, `run-pass\mod_file_with_path_attr.rs`: ``` #[path = "mod_file_aux.rs"] // <- can't find this file when compiling pretty-printing tests mod m; pub fn main() { assert_eq!(m::foo(), 10); } ``` Test runner needs to somehow provide a source directory when compiling tests, so out-of-line modules could be successfully resolved.
A-pretty,A-testsuite,C-bug
low
Minor
183,232,808
kubernetes
Restart controller-manager (and apiserver/scheduler) during our e2e tests
From time to time we hit serious issues caused by restarting the controller-manager. In the last few days we were e.g. forced to build a 1.4 patch release because of a bug in restarting the node-controller (and IIRC this isn't the first bug of this type). I think that we should introduce a simple e2e test that will just restart the controller manager. If we start running this as part of the parallel suites, this will: - enable testing of controller-manager restarts - not affect any other e2e tests (if they are not poorly written) - expose other e2e tests to controller-manager restarts while they run, which will hopefully reduce the risk of such situations in the future. Thoughts? @lavalamp @davidopp @gmarek @fabioy @ncdc @liggitt @kubernetes/sig-api-machinery @kubernetes/goog-control-plane
sig/api-machinery,sig/testing,lifecycle/frozen
low
Critical
183,235,904
rust
Pretty-printer inserts an extra newline before the first match arm if it has a comment
Example: ``` fn main() { match 0u8 { // Comment 0 => {} _ => {} } } ``` => ``` fn main() { match 0u8 { // Comment 0 => { } _ => { } } } ``` Affected tests: [pretty] pretty\borrowck\borrowck-pat-enum.rs [pretty] pretty\issue-11709.rs [pretty] pretty\issue-28839.rs
A-pretty,C-bug
low
Minor
183,249,727
rust
--emit=obj for Windows msvc producing .o instead of .obj
``` rustc 1.14.0-nightly (19ac57926 2016-10-08) binary: rustc commit-hash: 19ac57926abb749a93e2eb84502048d9c57f2d7b commit-date: 2016-10-08 host: x86_64-pc-windows-msvc release: 1.14.0-nightly ``` Doing `rustc --emit=obj foo.rs` results in `foo.o` instead of the expected `foo.obj`. TODO: * [ ] Investigate what automation or tools out there rely on `--emit=obj` using the `.o` extension **and** work with the `pc-windows-msvc` targets. * [ ] Investigate the feasibility of adjusting such automation or tools to support the `.obj` extension instead for `pc-windows-msvc`. * [ ] Add a `obj_suffix` to the target options and set it to `obj` only for `pc-windows-msvc` targets. * [ ] Adjust the aforementioned automation and tools and post something in the release notes. Note that because MSVC is such a *different* target than GNU targets, any sort of automation or tooling that specifically needs object files would already be quite conditional on MSVC being different to invoke the linker correctly and such. Therefore I am of the opinion that the fallout from this change should be minimal and very easy to fix. If anyone can find any examples that prove me wrong, please do so. Places to search for potential automation or tools: https://github.com/search?utf8=%E2%9C%93&q=--emit+obj&type=Code&ref=searchresults
O-windows,T-compiler,O-windows-msvc,C-bug
low
Minor
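The third TODO item above (a per-target `obj_suffix`) amounts to a one-line decision keyed on the target triple. A hypothetical sketch; the function name and the triple test are illustrative, not rustc's actual implementation:

```rust
// Hypothetical `obj_suffix` target option: MSVC toolchains expect `.obj`,
// everything else keeps the traditional `.o`.
fn obj_suffix(target_triple: &str) -> &'static str {
    if target_triple.ends_with("-pc-windows-msvc") {
        "obj"
    } else {
        "o"
    }
}

fn main() {
    assert_eq!(obj_suffix("x86_64-pc-windows-msvc"), "obj");
    assert_eq!(obj_suffix("x86_64-unknown-linux-gnu"), "o");
    // Note the GNU Windows target intentionally stays on `.o`.
    assert_eq!(obj_suffix("x86_64-pc-windows-gnu"), "o");
}
```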
183,251,631
opencv
OpenCV VideoCapture API causes pi3 to freeze when graphic driver is enabled on pi3
##### System information (version) - OpenCV => 3.1 - Operating System / Platform => Raspberry Pi 3, Pixel OS, 64 bit - Compiler => gcc/g++ - Camera => raspicam - Enable Graphics Driver => using raspi-config ##### Detailed description OpenCV VideoCapture causes the Pi 3 to hang after some time (within a couple of minutes). I am pasting the sample code which I am using. ``` #include "opencv2/opencv.hpp" using namespace cv; int main(int, char**) { VideoCapture cap(0); // open the default camera if(!cap.isOpened()) // check if we succeeded return -1; for(;;) { Mat frame; cap >> frame; // get a new frame from camera waitKey(10); } // the camera will be deinitialized automatically in VideoCapture destructor return 0; } ``` ##### Steps to reproduce Run the above code on a Raspberry Pi (Pixel OS) with the graphics driver enabled. Note: you can enable the graphics driver via raspi-config. After the graphics driver is enabled, go to /boot/config.txt and edit the following line: dtoverlay=vc4-kms-v3d to dtoverlay=vc4-fkms-v3d Now run the sample code pasted above; I used the following script to compile the code. export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib g++ -std=c++0x `pkg-config --cflags opencv` -o `basename $1 .cpp` $1 `pkg-config --libs opencv` -I/usr/local/include/ -lrt -lm -pthread
bug,priority: low,category: videoio
low
Critical
183,255,385
youtube-dl
Download problem using youtube-dl at www.svtplay.se
Possible youtube-dl bug or svtplay database problem Problems retrieving videos (files) from svtplay (www.svtplay.se). After contacting the site and downloading initial info it only retrieves a couple of MB (should be around GB) for the following videos Millenium part 5 and 6 as well as 56orna part 2, 3, etc. Then the merge process fails. Output from youtube-dl with --verbose, namely > [debug] System config: [] > [debug] User config: [] > [debug] Command-line args: [u'--verbose', u'http://www.svtplay.se/video/2279502/millennium'] > [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 > [debug] youtube-dl version 2016.10.16 > [debug] Python version 2.7.12 - Linux-3.16.7-42-desktop-x86_64-with-SuSE-13.2-x86_64 > [debug] exe versions: ffmpeg 3.0.2, ffprobe 3.0.2 > [debug] Proxy map: {} > [SVTPlay] 2279502: Downloading webpage > [SVTPlay] 1122946-006A: Downloading JSON metadata > [SVTPlay] 1122946-006A: Downloading m3u8 information > [SVTPlay] 1122946-006A: Downloading m3u8 information > [SVTPlay] 1122946-006A: Downloading MPD manifest > [SVTPlay] 1122946-006A: Downloading f4m manifest > [debug] Invoking downloader on u'http://svtplay3r-f.akamaihd.net/d/se/open/delivery/20161007/1122946-006A/dash-live/PG-1122946-006A-MILLENNIUMTVSERI-02-1a894f3d-e7aa-336d-472e-56993a27cd04-live.mpd?alt=http://switcher.cdn.svt.se/./' > [dashsegments] Total fragments: 896 > [download] Destination: Avsnitt 6-1122946-006A.fdashhbbtv-PG-1122946-006A-MILLENNIUMTVSERI-02_2796_1.mp4 > [download] 100% of 2.66MiB in 00:06 > [debug] Invoking downloader on u'http://svtplay3r-f.akamaihd.net/d/se/open/delivery/20161007/1122946-006A/dash-live/PG-1122946-006A-MILLENNIUMTVSERI-02-1a894f3d-e7aa-336d-472e-56993a27cd04-live.mpd?alt=http://switcher.cdn.svt.se/./' > [dashsegments] Total fragments: 894 > [download] Destination: Avsnitt 6-1122946-006A.fdashhbbtv-PG-1122946-006A-MILLENNIUMTVSERI-02_988_2.m4a > [download] 100% of 2.65MiB in 00:07 > [ffmpeg] Merging formats into "Avsnitt 
6-1122946-006A.mp4" > [debug] ffmpeg command line: ffmpeg -y -i 'file:Avsnitt 6-1122946-006A.fdashhbbtv-PG-1122946-006A-MILLENNIUMTVSERI-02_2796_1.mp4' -i 'file:Avsnitt 6-1122946-006A.fdashhbbtv-PG-1122946-006A-MILLENNIUMTVSERI-02_988_2.m4a' -c copy -map 0:v:0 -map 1:a:0 'file:Avsnitt 6-1122946-006A.temp.mp4' > ERROR: file:Avsnitt 6-1122946-006A.fdashhbbtv-PG-1122946-006A-MILLENNIUMTVSERI-02_2796_1.mp4: Invalid data found when processing input > Traceback (most recent call last): > File "/usr/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1837, in post_process > files_to_delete, info = pp.run(info) > File "/usr/bin/youtube-dl/youtube_dl/postprocessor/ffmpeg.py", line 444, in run > self.run_ffmpeg_multiple_files(info['__files_to_merge'], temp_filename, args) > File "/usr/bin/youtube-dl/youtube_dl/postprocessor/ffmpeg.py", line 192, in run_ffmpeg_multiple_files > raise FFmpegPostProcessorError(msg) > FFmpegPostProcessorError
geo-restricted
low
Critical
183,255,411
You-Dont-Know-JS
this & prototypes, chapter 5 #Setting & Shadowing Properties
> If the `myObject` object already has a normal data accessor property called `foo` directly present on it, the assignment is as simple as changing the value of the existing property. and > 1. If a normal data accessor (see Chapter 3) property named `foo` is found anywhere higher on the `[[Prototype]]` chain, **and it's not marked as read-only (`writable:false`)** then a new property called `foo` is added directly to `myObject`, resulting in a **shadowed property**. Is there really such a thing as a "data accessor property"? I think there are only data properties and accessor properties. And in Chapter 3 I can't find any precise definition of "data accessor property". You even say: > For accessor-descriptors, the `value` and `writable` characteristics of the descriptor are moot and ignored Why not change "data accessor property" to "data property"?
for second edition
medium
Minor
183,265,104
rust
Make std::io::Take<R> an instance of std::io::Seek when R:std::io::Seek
This would be particularly useful for reading file formats that contain other formats. Examples include: - OpenType font files, where CFF is an embedded format. - GRIB (a format used by weather forecasting organizations), containing grayscale JPEG to describe values. (and probably many others) In such cases, it would also be nice to have a "drain" method on std::io::Take that would move the read cursor to the end of the Take.
T-libs-api,C-feature-accepted,A-io
low
Minor
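The proposed "drain" can already be expressed on stable Rust, which also illustrates the embedded-format use case: after draining, the outer reader is positioned just past the embedded section. A minimal sketch; the `drain` helper itself is hypothetical (it is what the issue asks `Take` to provide), built here from `std::io::copy` into `io::sink`:

```rust
use std::io::{self, Cursor, Read};

/// Emulate the proposed `Take::drain`: consume whatever remains of the
/// `Take`'s limit so the underlying reader ends up just past the section.
fn drain<R: Read>(section: &mut io::Take<R>) -> io::Result<u64> {
    io::copy(section, &mut io::sink())
}

fn main() -> io::Result<()> {
    let mut file = Cursor::new(vec![1u8, 2, 3, 4, 5]);
    {
        // e.g. a 3-byte embedded table inside a larger container format
        let mut section = (&mut file).take(3);
        let mut first = [0u8; 1];
        section.read_exact(&mut first)?;
        assert_eq!(first[0], 1);
        // Two of the section's three bytes were left unread.
        assert_eq!(drain(&mut section)?, 2);
    }
    // The outer reader now sits at the end of the embedded section.
    assert_eq!(file.position(), 3);
    Ok(())
}
```

A `Seek` impl on `Take<R: Seek>` would make this cheaper (a seek instead of a read), which is exactly the issue's request.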
183,300,527
rust
rust build on ARMv8 fails in test macro-stepping.rs, other debuginfo-gdb tests.
I built rust on a 96-core ARMv8 machine, and "make -j check" failed in one test: ``` test [debuginfo-gdb] debuginfo-gdb/macro-stepping.rs ... FAILED ``` I expected to see everything pass. A second time through, I got this, with four failures: ``` thread 'main' panicked at 'Some tests failed', /home/emv/src/rust-lang/rust/src/tools/compiletest/src/main.rs:296 make: *** [tmp/check-stage2-T-aarch64-unknown-linux-gnu-H-aarch64-unknown-linux-gnu-debuginfo-gdb.ok] Error 101 ``` ``` test [debuginfo-gdb] debuginfo-gdb/issue13213.rs ... FAILED test [debuginfo-gdb] debuginfo-gdb/cross-crate-type-uniquing.rs ... FAILED test [debuginfo-gdb] debuginfo-gdb/cross-crate-spans.rs ... FAILED test [debuginfo-gdb] debuginfo-gdb/macro-stepping.rs ... FAILED test result: FAILED. 95 passed; 4 failed; 6 ignored; 0 measured ``` ## Meta ``` $ ./aarch64-unknown-linux-gnu/stage2/bin/rustc --version --verbose rustc 1.14.0-dev (6dc035ed9 2016-10-15) binary: rustc commit-hash: 6dc035ed911672c6a1f7afc9eed15fb08e574e5b commit-date: 2016-10-15 host: aarch64-unknown-linux-gnu release: 1.14.0-dev ``` ``` % uname -a Linux armv8hello.local.lan 4.4.0-38-generic #57-Ubuntu SMP Wed Sep 7 10:19:14 UTC 2016 aarch64 aarch64 aarch64 GNU/Linux ``` ``` [ 0.000000] Linux version 4.4.0-38-generic (buildd@bos01-arm64-006) (gcc version 5.4.0 20160609 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.2) ) #57-Ubuntu SMP Wed Sep 7 10:19:14 UTC 2016 (Ubuntu 4.4.0-38.57-generic 4.4.19) ```
A-debuginfo,T-compiler,C-bug,O-AArch64
medium
Critical
183,357,531
go
x/build: set up builders simulating old processors
We occasionally get a bug report from users that some assembly code (often in runtime or crypto or compression) causes a SIGILL. We should run builders (perhaps in qemu?) simulating older processors so we can exercise all the assembly fallback paths and CPU detection code.
help wanted,Builders,new-builder
low
Critical
183,494,018
angular
Optional Parameters for angular2 routes
<!-- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING --> **I'm submitting a ...** (check one with "x") ``` [ ] bug report => search github for a similar issue or PR before submitting [X ] feature request [X ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question ``` I don't see a option for supporting Optional Parameters, don't want to use query params **Current behavior** right now, adding multiple routes for the same component { path: 'test/:p1', component:component }, { path: 'test/:p1/:p2', component:component }, { path: 'test/:p1/:p2/:p3', component:component }, **Expected behavior** Here i want to specify the optional parameters something like { path: 'test/:p1/:p2?/:p3?', component:component }, (optional ? ) I don't want to use query params _Minimal reproduction of the problem with instructions_* // Angular2 Module import { NgModule, ModuleWithProviders} from '@angular/core'; import { Route, RouterModule } from '@angular/router'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule } from '@angular/forms'; // Components and pipes import { TestComponent } from './landing/landing.component'; // Setup routing export const routes: Route[] = [ { path: 'develop/:p1', component: TestComponent }, { path: 'develop/:p1/:p2', component: TestComponent }, ]; @NgModule({ imports: [ BrowserModule, FormsModule, RouterModule.forRoot(routes), ], providers: [ ], declarations: [ TestComponent, ] }) export class TestModule { static forRoot(): ModuleWithProviders { return { ngModule: TestModule }; } } I would like my Route Configuration to Change export const routes: Route[] = [ { path: 'develop/:p1', component: TestComponent }, { path: 'develop/:p1/:p2', component: TestComponent }, ]; **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> its 
annoying to create multiple routes for the same component with different parameters; if there were an option for optional parameters, I would make it dynamic **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> - **Angular version:** 2.0.1 - **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] <!-- All browsers where this could be reproduced --> - **Language:** [TypeScript 2.0.1 | ES5] - **Node (for AoT issues):** `node --version` = 6.7.0
feature,area: router,feature: under consideration
medium
Critical
183,587,525
opencv
Regression issue with Calib3dTest.findFundamentalMat
<!-- If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses. If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute). This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library. --> ##### System information (version) <!-- Example - OpenCV => 3.1 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 --> - OpenCV => 3.1 upstream/master (505c19bc20b971288f321f6092f9dfc479001e2b) - Operating System / Platform => Mac OSX 10.11.5 (15F34) - Compiler => Xcode Version 8.0 (8A218a) ##### Detailed description <!-- your description --> Java unit test is failing the test case **Calib3dTest.testFindFundamentalMatListOfPointListOfPoint** with the following message: ``` Max difference between expected and actiual Mats is 0.6962482904638616, that bigger than 0.001 junit.framework.AssertionFailedError: Max difference between expected and actiual Mats is 0.6962482904638616, that bigger than 0.0001 at org.opencv.test.OpenCVTestCase.compareMats(Unknown Source) at org.opencv..test.OpenCVTestCase.assertMatEqual(Unknown Source) at org.opencv.test.calib3d.Calib3dTest.testFindFundamentalMatListOfPointListOfPPoint(Unknown Source) at org.opencv.test.OpenCVTestCase.runTest(Unknown Source) ``` The test is running the following code: ``` public void testFindFundamentalMatListOfPointListOfPoint() { int minFundamentalMatPoints = 8; MatOfPoint2f pts = new MatOfPoint2f(); pts.alloc(minFundamentalMatPoints); for (int i = 0; i < minFundamentalMatPoints; i++) { double x = Math.random() * 100 - 50; double y = Math.random() * 100 - 50; pts.put(i, 0, x, y); //add(new Point(x, y)); } Mat fm = Calib3d.findFundamentalMat(pts, pts); truth = new Mat(3, 3, CvType.CV_64F); truth.put(0, 0, 0, -0.577, 0.288, 0.577, 0, 0.288, -0.288, -0.288, 0); 
assertMatEqual(truth, fm, EPS); } ``` ##### Steps to reproduce <!-- to add code example fence it with triple backticks and optional file extension ```.cpp // C++ code example ``` or attach as .txt or .zip file --> Compile OpenCV with java enabled and run the autotests from the build folder using the following command: ``` python ../modules/ts/misc/run.py . -t java ``` Results of autotests can be seen on this page: opencv/build/modules/java/pure_test/.build/testResults/junit-noframes.html
bug,test,category: calib3d,category: java bindings
low
Critical
183,632,406
youtube-dl
Support for #EXT-X-BYTERANGE in native hls extractor
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x]) - Use _Preview_ tab to see how your issue will actually look like --- ### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.10.16_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [ x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.16** ### Before submitting an _issue_ make sure you have: - [x ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [x] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your _issue_ --- ### If the purpose of this _issue_ is a _bug report_, _site support request_ or you are not completely sure provide the full verbose output as follows: Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` youtube-dl -v https://s3.amazonaws.com/1d4326f61a9a4ed596de9e1a41d48413/encryption/bad/bad.m3u8 [debug] System config: [] [debug] User config: [] [debug] Command-line args: ['-v', 'https://s3.amazonaws.com/1d4326f61a9a4ed596de9e1a41d48413/encryption/bad/bad.m3u8'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2016.10.16 [debug] Python version 3.4.4 - Windows-10-10.0.10240 [debug] exe versions: ffmpeg N-82003-g9082603, ffprobe N-82003-g9082603 [debug] Proxy map: {} [generic] bad: Requesting header [generic] bad: Downloading m3u8 information [debug] Invoking downloader on 'https://s3.amazonaws.com/1d4326f61a9a4ed596de9e1a41d48413/encryption/bad/bad.m3u8' [download] Destination: bad-bad.mp4 [debug] ffmpeg command line: ffmpeg -y -headers 'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20150101 Firefox/47.0 (Chrome) Accept-Language: en-us,en;q=0.5 ' -i https://s3.amazonaws.com/1d4326f61a9a4ed596de9e1a41d48413/encryption/bad/bad.m3u8 -c copy -f mp4 -bsf:a aac_adtstoasc file:bad-bad.mp4.part ffmpeg version N-82003-g9082603 Copyright (c) 2000-2016 the FFmpeg developers built with gcc 5.4.0 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger 
--enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib libavutil 55. 32.100 / 55. 32.100 libavcodec 57. 61.103 / 57. 61.103 libavformat 57. 52.100 / 57. 52.100 libavdevice 57. 0.102 / 57. 0.102 libavfilter 6. 64.100 / 6. 64.100 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 2.100 / 2. 2.100 libpostproc 54. 0.100 / 54. 0.100 [NULL @ 0000000002488620] non-existing SPS 0 referenced in buffering period [NULL @ 0000000002488620] SPS unavailable in decode_picture_timing [h264 @ 000000000248a2a0] non-existing SPS 0 referenced in buffering period [h264 @ 000000000248a2a0] SPS unavailable in decode_picture_timing Input #0, hls,applehttp, from 'https://s3.amazonaws.com/1d4326f61a9a4ed596de9e1a41d48413/encryption/bad/bad.m3u8': Duration: 00:00:29.95, start: 1.407333, bitrate: 0 kb/s Program 0 Metadata: variant_bitrate : 0 Stream #0:0: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc Metadata: variant_bitrate : 0 Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp Metadata: variant_bitrate : 0 Output #0, mp4, to 'file:bad-bad.mp4.part': Metadata: encoder : Lavf57.52.100 Stream #0:0: Video: h264 (Main) ([33][0][0][0] / 0x0021), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 23.98 fps, 23.98 tbr, 90k tbn, 90k tbc Metadata: variant_bitrate : 0 Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, stereo Metadata: variant_bitrate : 0 Stream mapping: Stream #0:0 -> #0:0 (copy) Stream #0:1 -> #0:1 (copy) Press [q] to stop, [?] 
for help frame= 270 fps=3.7 q=-1.0 Lsize= 4835kB time=00:00:22.50 bitrate=1759.8kbits/s speed=0.308x video:4656kB audio:173kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.110669% [ffmpeg] Downloaded 4950862 bytes [download] 100% of 4.72MiB ... <end of log> ``` m3u8 enc https://s3.amazonaws.com/1d4326f61a9a4ed596de9e1a41d48413/encryption/bad/bad.m3u8 unencrypted video : https://s3.amazonaws.com/1d4326f61a9a4ed596de9e1a41d48413/input.ts ### Description of your _issue_, suggested solution and other information Can you add support for encrypted single-file HLS (m3u8) in the native HLS extractor? ffmpeg seems to have a bug reading the byte range, resulting in corrupted / skipped video: https://tools.ietf.org/html/draft-pantos-http-live-streaming-20#section-4.3.2.2 https://developer.apple.com/library/content/technotes/tn2288/_index.html#//apple_ref/doc/uid/DTS40012238-CH1-BYTE_RANGE_ https://trac.ffmpeg.org/ticket/5858 Also, please add support for local m3u8 files, like `youtube-dl.exe "C:\file.m3u8"`, since sometimes it's much faster to download the encrypted file locally first (to avoid dropped frames). Thanks
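For reference, resolving `#EXT-X-BYTERANGE` tags is straightforward. This is a minimal Python sketch of the rule in section 4.3.2.2 of the draft — the tag is `<n>[@<o>]`, and when the `@<o>` offset is omitted the sub-range starts where the previous one ended. The function and names are illustrative, not youtube-dl code:

```python
def parse_byteranges(playlist_lines):
    """Resolve #EXT-X-BYTERANGE tags into (uri, (offset, length)) pairs.
    Minimal sketch -- a real playlist parser needs full tag handling."""
    ranges = []
    pending = None
    next_offset = 0
    for line in playlist_lines:
        line = line.strip()
        if line.startswith('#EXT-X-BYTERANGE:'):
            value = line.split(':', 1)[1]
            if '@' in value:
                length, offset = (int(x) for x in value.split('@'))
            else:
                # no explicit offset: continue after the previous sub-range
                length, offset = int(value), next_offset
            pending = (offset, length)
            next_offset = offset + length
        elif line and not line.startswith('#'):
            ranges.append((line, pending))
            pending = None
    return ranges
```

Each resolved pair maps to an HTTP request with `Range: bytes=offset-(offset+length-1)`, which is how a native downloader could sidestep the ffmpeg bug.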
request
low
Critical
183,655,184
youtube-dl
ERROR: lynda returned error: Session has expired (with cookies)
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x]) - Use _Preview_ tab to see how your issue will actually look like --- ### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.10.16_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.16** ### Before submitting an _issue_ make sure you have: - [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? - [x] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your _issue_ --- ### If the purpose of this _issue_ is a _bug report_, _site support request_ or you are not completely sure provide the full verbose output as follows: Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` $ youtube-dl --cookies lyndacookie1.txt https://www.lynda.com/Python-tutorials/Distinguishing-mutable-immutable-objects/62226/70945-4.html?autoplay=true -v [debug] System config: [] [debug] User config: [] [debug] Command-line args: [u'--cookies', u'lyndacookie1.txt', u'https://www.lynda.com/Python-tutorials/Distinguishing-mutable-immutable-objects/62226/70945-4.html?autoplay=true', u'-v'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2016.10.16 [debug] Python version 2.7.12 - Linux-4.4.0-24-generic-x86_64-with-Ubuntu-16.04-xenial [debug] exe versions: avconv 2.8.6-1ubuntu2, avprobe 2.8.6-1ubuntu2, ffmpeg 2.8.6-1ubuntu2, ffprobe 2.8.6-1ubuntu2, rtmpdump 2.4 [debug] Proxy map: {} [lynda] 70945: Downloading video JSON ERROR: lynda returned error: Session has expired Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 694, in extract_info ie_result = ie.extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 356, in extract return self._real_extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/lynda.py", line 176, in _real_extract 'lynda returned error: %s' % video['Message'], expected=True) ExtractorError: lynda returned error: Session has expired ``` --- ### If the purpose of this _issue_ is a _site support request_ please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: https://www.youtube.com/watch?v=BaW_jenozKc - Single video: https://youtu.be/BaW_jenozKc - Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc --- ### Description of your _issue_, suggested solution and other information I am using an organization login, and hence the cookies. 
I extracted cookies using Cookie Exporter 1.5.1-signed and added # Netscape HTTP Cookie File at the top of the file. I also replaced all `\r\n` with `\n` using a hex editor. I first tried downloading the whole course, but it only downloaded the free ones. Then I tried an individual locked video and got this error. And yes, I can play the video fine in the browser. Is it a bug or a problem in my command?
account-needed
low
Critical
183,658,784
nvm
Run Travis tests on OS X
Should be a fairly simple tweak to `.travis.yml` and would test in what I would expect to be a pretty common (if not the most common) environment that `nvm` normally runs in. It also has the benefit of testing against BSD tools instead of GNU tools. The downside is it'll run all the current tests twice, effectively doubling the build time; not sure if that would be a deal breaker, as I know the builds already take a long time. Travis seems to be having issues with OS X builds at the moment, where it takes a while for them to start. They're aware of this and are working on it. If you're interested I'm happy to put a PR together.
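For reference, the tweak could be as small as adding an `os` matrix to `.travis.yml` (a sketch — how it interacts with the project's existing build matrix would need checking):

```yaml
# .travis.yml -- run every job on both Linux and OS X workers
os:
  - linux
  - osx
```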
testing,pull request wanted
low
Minor
183,742,091
opencv
VideoCapture() doesn't work (blocks) when running in a separate thread
##### System information (version) - OpenCV => 3.1.0_4 (via `brew install opencv3 --with-python3`) - OS => OSX Yosemite - Platform => python3.5.2 ##### Detailed description When using `cv2.VideoCapture(0)` in a thread, it doesn't work, but it does work when running in the main thread. It doesn't raise an Exception, merely blocks until the programme exits. ##### Steps to reproduce Using this script as a minimum viable example: ``` python from threading import Thread import cv2 from time import sleep from PIL import Image def main(): print("Running in main thread") cam = cv2.VideoCapture(0) print("Starting capture") sleep(2) _, img = cam.read() print("Acquired image") sleep(2) Image.fromarray(img).save("tmp.jpg", "JPEG") cam.release() print("End process") th = Thread(target=main, name="Testing Thread") th.daemon = True th.start() th.join() ``` When this is run on OSX Yosemite, the only thing that gets printed out is "Running in main thread". The rest of the `main()` function does not get executed. Stepping into `VideoCapture()` with `pdb` just hangs as well. If the `main()` function is run normally (without threading) then it works fine. N.B. I originally had this working on macOS Sierra, which I had to install via `brew install opencv3 --with-python3 --HEAD` (note the extra `--HEAD` option) as installing it without `--HEAD` wouldn't work on Sierra. Similarly, installing _with_ `--HEAD` does _not_ work on OSX Yosemite.
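A common workaround pattern (a sketch of the pattern, not a fix for the underlying bug) is to keep the capture object on the main thread and hand frames to workers through a queue. `FakeCamera` below is a stand-in for `cv2.VideoCapture(0)` so the example is runnable anywhere:

```python
import queue
import threading

class FakeCamera:
    """Stand-in for cv2.VideoCapture(0); read() returns (ok, frame)."""
    def __init__(self, frames):
        self._frames = iter(frames)
    def read(self):
        try:
            return True, next(self._frames)
        except StopIteration:
            return False, None

def worker(frames, results):
    # Only the processing runs off the main thread; capture stays on it.
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(frame.upper())  # placeholder for real processing

frames = queue.Queue()
results = []
th = threading.Thread(target=worker, args=(frames, results))
th.start()

cam = FakeCamera(["frame1", "frame2"])  # would be cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()              # main thread: avoids the hang on OSX
    if not ok:
        break
    frames.put(frame)
frames.put(None)                        # sentinel: tell the worker to stop
th.join()
```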
priority: low,category: videoio,platform: ios/osx,category: 3rdparty
low
Major
183,764,064
vscode
Feature Request: Show all errors and warnings in project for all JavaScript and TypeScript files, not just opened ones
I am using VS Code on a project at work that has hundreds of files and many layers of nested sub-directories. I frequently make changes that break many files, such as changing the call signature of a commonly used method. Since the project is entirely typescript, it is useful to be able to open the "Problems" view and see the errors and warnings my change caused in the files I have open. However, because of the size of the project, I still need to go to my terminal and run our own make commands to see the list of problems. I then need to perform the fun dance between my terminal and VS Code, searching for the right line in the right file before I can fix the problem. What I need is a way to tell the "Problems" view to show me all errors and warnings across all open and closed files in my project. This would allow me to see all problems in a single list and quickly click through and fix them.
feature-request,typescript
high
Critical
183,780,829
flutter
DropDownButton.style is busted
DropDownButton.style is intended to handle the special font size (and style?) of drop down buttons in data tables: https://material.google.com/components/data-tables.html#data-tables-specs However, right now it breaks if you change the theme at the same time.
framework,f: material design,a: quality,P3,team-design,triaged-design
low
Minor
183,793,696
vscode
Virtual Space is not implemented.
https://blogs.msdn.microsoft.com/zainnab/2010/02/28/understanding-virtual-space/ This is a much needed productivity option that has been available in Visual Studio and other editors for many years. See also the column select issue that requires it: https://github.com/Microsoft/vscode/issues/5402 --- *Addition from @hediet:* This is (still) the current behavior of VS Code. Only text that exists in the text buffer is selected: ![recording](https://user-images.githubusercontent.com/2931520/137472294-6333109f-44bb-4053-9d32-7e589fa68112.gif) However, the column selection mode should support rectangular selection like this (adding whitespace on demand): ![recording](https://user-images.githubusercontent.com/2931520/137472471-d06f55b3-0db1-4700-b4b1-e206e82cec07.gif) This would fix #5940 (which is just about copy&pasting such blocks) and #115559.
feature-request,editor-core,editor-columnselect
high
Critical
183,794,397
flutter
Automated testing infra doesn't test Flutter in landscape
I could be wrong, but I believe we're only doing automated infra testing in portrait. Flutter is designed to support both landscape and portrait orientations, and its widgets should work in both.
a: tests,team,framework,t: flutter driver,P3,team-framework,triaged-framework
low
Minor
183,867,278
react
[Fiber] Formalize States
In Fiber there are a number of states that a component can be in. However, they are not formalized in the code right now. Instead, the state is inferred. This leads to hard-to-follow code. Instead we can organize the code in terms of explicit states - which is what the original prototype did. Before componentDidMount (i.e. `current === null`): - Never begun. - Has been begun before but never completed. - Has been completed before, not committed, but hasn't begun this time around. - Begun but not yet completed. - Completed but not yet committed. After componentDidMount (i.e. `current !== null`): - Hasn't begun an update yet. - Has been begun before but never completed. - An update has been completed before, not committed, but hasn't begun this update. - Begun an update but not yet completed. - Completed an update but not yet committed. The "children" set of a component also has some states: - Never reconciled. - The current set last committed. - A previously reconciled set that hasn't committed yet.
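To illustrate the proposal (a Python sketch of the idea only — the actual Fiber source is JavaScript, and these names are invented, not taken from the codebase): the inferred states could become an explicit enumeration that the code branches on directly:

```python
from enum import Enum, auto

class WorkState(Enum):
    """Explicit lifecycle states, rather than inferring them from fields."""
    NEVER_BEGUN = auto()
    BEGUN_NOT_COMPLETED = auto()
    COMPLETED_NOT_COMMITTED = auto()
    COMMITTED = auto()

def describe(state, current):
    # `current is None` distinguishes pre-mount from post-mount, as in the issue.
    phase = "initial mount" if current is None else "update"
    return "%s during %s" % (state.name, phase)
```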
Type: Enhancement,Component: Reconciler,React Core Team
medium
Minor
183,950,264
opencv
runtime asserts could be compile-time
<!-- If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses. If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute). This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library. --> ##### System information (version) - OpenCV => 3.1, git - Operating System / Platform => any - Compiler => any ##### Detailed description OpenCV throws a lot of run-time asserts that could as well be detected at compile time. In particular the `InputArray` type strips down all information regarding array element type, size, even if it's known. Examples encourage use of `Mat`, and `Mat` has no element type information either. While bounds checking can't feasibly be done at compile-time in most cases, enumerating `Mat` element types without having the information available at compile-time prevents any early checks. I understand that calls such as `getMat` happen all the time in the codebase. But this can be factored out incrementally. Even lack of C++11 support in the compiler doesn't preclude early checks. Partial specialization and the "classic" form of `static_assert` are enough. <!-- your description --> ##### Steps to reproduce <!-- to add code example fence it with triple backticks and optional file extension ```.cpp // C++ code example ``` or attach as .txt or .zip file -->
feature,RFC
low
Critical
183,950,746
rust
LLVM pointer range loop / autovectorization regression part two
This is a follow up to #35662. The optimizer regression that resulted in that bug is still not fixed. That bug created a simplified test case, and fixed it for that case. That's good. However, I have not been able to remove my workaround, so the issue still persists in the original code. The issue appears for an 8x8 kernel and disappears if the kernel is shrunk to 4x4, so it's somehow related to the sheer size of the function, or the length it goes to in loop unrolling. Preamble for the code that produces the desired codegen: ``` rust let mut ab: [[f32; 8]; 8]; ab = ::std::mem::uninitialized(); loop8!(i, loop8!(j, ab[i][j] = 0.)); ``` What the loop8 macros do is that they expand the expression statically, so it corresponds to 64 assignments. Initialization part for code which is not optimizing well: ``` rust let mut ab: [[f32; 8]; 8]; ab = [[0.; 8]; 8]; ``` Another example which is not optimizing well: ``` rust let mut ab: [[f32; 8]; 8]; ab = ::std::mem::uninitialized(); for i in 0..8 { for j in 0..8 { ab[i][j] = 0.; } } ``` Full reproducer in the next comment.
A-LLVM,I-slow,C-enhancement,T-compiler,A-autovectorization
low
Critical
184,084,350
go
net, internal/poll, runtime: remove mutex from UDP reads/writes
Reads and writes on net.UDPConns are guarded by a mutex. Contention on the mutex makes it difficult to efficiently handle UDP requests concurrently. Or perhaps I'm overlooking the right way to do this. The attached benchmark attempts to demonstrate the problem: [socks_test.go.txt](https://github.com/golang/go/files/540322/socks_test.go.txt) Annotated benchmark results from my desktop: ``` # All tests are of a server reading 64-byte UDP messages and responding to them. # # /echo tests are of an echo server--no processing is done of messages, so # the test time is entirely spent in socket operations. # # /sha tests compute a SHA-256 sum of the input message 50 times, to simulate # doing a small amount of real work per message. # The read_1 tests process messages in serial on a single goroutine. # # Increasing GOMAXPROCS introduces a minor inefficiency for some reason, # but these results are largely what you would expect from a non-concurrent server. BenchmarkUDP/read_1/echo 1000000 8698 ns/op BenchmarkUDP/read_1/echo-2 1000000 11229 ns/op BenchmarkUDP/read_1/echo-4 1000000 11873 ns/op BenchmarkUDP/read_1/sha 200000 29676 ns/op BenchmarkUDP/read_1/sha-2 200000 30997 ns/op BenchmarkUDP/read_1/sha-4 200000 35817 ns/op # The read_n tests start multiple goroutines, each of which reads from # and writes to a shared UDP socket. # # Increasing the number of goroutines causes the server to become slower, # presumably due to lock contention on the socket. BenchmarkUDP/read_n/echo 1000000 10201 ns/op BenchmarkUDP/read_n/echo-2 500000 19274 ns/op BenchmarkUDP/read_n/echo-4 300000 24263 ns/op BenchmarkUDP/read_n/sha 200000 29522 ns/op BenchmarkUDP/read_n/sha-2 200000 41015 ns/op BenchmarkUDP/read_n/sha-4 200000 58748 ns/op # The read_1n1 tests start one reader, one writer, and multiple worker goroutines # connected by channels. # # Increasing the number of worker goroutines does not improve performance here either, # presumably due to lock contention on the channels. 
BenchmarkUDP/read_1n1/echo 1000000 11194 ns/op BenchmarkUDP/read_1n1/echo-2 500000 20991 ns/op BenchmarkUDP/read_1n1/echo-4 300000 28297 ns/op BenchmarkUDP/read_1n1/sha 200000 39178 ns/op BenchmarkUDP/read_1n1/sha-2 200000 45770 ns/op BenchmarkUDP/read_1n1/sha-4 200000 38197 ns/op # The read_fake tests just run the work function in a loop without network operations. # Performance scales mostly linearly with the number of worker goroutines. BenchmarkUDP/read_fake/echo 2000000000 4.05 ns/op BenchmarkUDP/read_fake/echo-2 3000000000 2.00 ns/op BenchmarkUDP/read_fake/echo-4 10000000000 1.02 ns/op BenchmarkUDP/read_fake/sha 300000 21178 ns/op BenchmarkUDP/read_fake/sha-2 500000 10691 ns/op BenchmarkUDP/read_fake/sha-4 1000000 5609 ns/op ```
Performance,NeedsInvestigation
low
Major
184,099,915
rust
Specify and possibly reconsider the precise semantics of #[no_mangle]
This internals post describes how people are exploiting the underspecified aspects of `#[no_mangle]` to practical effect: https://internals.rust-lang.org/t/precise-semantics-of-no-mangle/4098 . We may want to specify what guarantees the attribute actually has, and possibly introduce new attributes to represent the fine-grained roles that `#[no_mangle]` is currently being used for.
A-linkage,A-stability,T-lang
low
Minor
184,144,494
rust
Poor error message related to higher rank lifetimes
https://bitbucket.org/iopq/fizzbuzz-in-rust/src/6d739f4781c90be95ac47e067562471b0c52f9f8/src/lib.rs?at=error&fileviewer=file-view-default After I try to use the tool.rs implementation of `second` I get this error message: ``` Compiling fizzbuzz v0.0.1 (file:///C:/Users/Igor/Documents/rust/fizzbuzz) error[E0281]: type mismatch: the type `fn(_) -> _ {tool::second::<_>}` implements the trait `std::ops::FnMut<(_,)>`, but the trait `for<'r> std::ops::FnMut<(&'r _,)>` is required (expected concrete lifetime, found bound lifetime parameter ) --> src\lib.rs:52:11 | 52 | .filter(apply(second, i)) | ^^^^^ | = note: required by `apply` error[E0271]: type mismatch resolving `for<'r> <fn(_) -> _ {tool::second::<_>} as std::ops::FnOnce<(&'r _,)>>::Output == _` --> src\lib.rs:52:11 | 52 | .filter(apply(second, i)) | ^^^^^ expected bound lifetime parameter , found concrete lifetime | = note: concrete lifetime that was found is lifetime '_#11r = note: required by `apply` ``` I still don't quite understand this error message. The lifetimes are anonymous, so I don't know what it's talking about. The `'r` lifetime is elided, I assume? There's no error code to help me understand the issue with concrete lifetimes vs. bound lifetime parameters.
C-enhancement,A-diagnostics,A-closures,T-compiler,A-higher-ranked
low
Critical
184,171,545
go
tour: clarify when to use pointer receivers
There appears to be some confusion (at least for me) about what the appropriate receiver type ought to be by default. Looking at https://tour.golang.org/methods/8 That tutorial tells new Go users: `There are two reasons to use a pointer receiver.` `The first is so that the method can modify the value that its receiver points to.` `The second is to avoid copying the value on each method call.` This seems to imply that you should default to pointer receivers in only those two cases. Whereas looking at; https://github.com/golang/go/wiki/CodeReviewComments#receiver-type and a few other credible Go sources, e.g. https://twitter.com/rob_pike/status/788743046280077313 we see that actually, pointer receivers should be the default while value receivers are the exception. So to summarize: the tour.golang notes make it appear as if pointer receivers are the exception (that was my impression when I went through those notes as a new Go user some months back), whereas they actually are (or ought to be) the default.
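The tour's two reasons can be illustrated with a small runnable sketch (the `Vertex` type here is hypothetical, chosen only for illustration):

```go
package main

import "fmt"

// Vertex is a hypothetical type used only for illustration.
type Vertex struct {
	X, Y float64
}

// Scale uses a pointer receiver: it can modify the value it is
// called on, and no copy of the struct is made per call.
func (v *Vertex) Scale(f float64) {
	v.X *= f
	v.Y *= f
}

// Abs2 uses a value receiver: it operates on a copy and cannot
// modify the original Vertex.
func (v Vertex) Abs2() float64 {
	return v.X*v.X + v.Y*v.Y
}

func main() {
	v := Vertex{3, 4}
	v.Scale(10)
	fmt.Println(v.X, v.Y) // the pointer receiver modified v in place
}
```

Both camps agree on what each receiver kind does; the disagreement above is only about which one to reach for first.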
Documentation,NeedsFix
low
Major
184,271,502
go
x/tools/cmd/eg: does not support variadic functions
### What version of Go are you using (`go version`)? Go 1.7.1 eg 69f6f5b782e1f090edb33f68be67d96673a8059e ### What operating system and processor architecture are you using (`go env`)? MacOSX 10.12 Sierra (Darwin) x64 ### What did you do? Given the template ``` package egtemplates import ( "fmt" "github.com/Sirupsen/logrus" ) func before(vlog logrus.Logger, format string, args ...interface{}) { vlog.Errorf(fmt.Sprintf(format, args...)) } func after(vlog logrus.Logger, format string, args ...interface{}) { vlog.Errorf(format, args...) } ``` and the test file ``` package test import ( "fmt" "github.com/Sirupsen/logrus" ) func articleCIPQ(vlog logrus.Logger) { vlog.Errorf(fmt.Sprintf("Table %s: %s", "something", "err")) } ``` I executed `eg -t egtemplates/logrus-errorf.go -v ./egtemplates/test/myfile.go`. ### What did you expect to see? ``` === egtemplates/test/myfile.go (1 matches) package test import ( "fmt" "github.com/Sirupsen/logrus" ) func articleCIPQ(vlog logrus.Logger) { vlog.Errorf("Table %s: %s", "something", "err") } ``` ### What did you see instead? No substitutions, but no errors either.
NeedsFix,Tools
low
Critical
184,281,553
kubernetes
kubectl exec escape sequence (tilde-dot)
As a big user of `kubectl exec` _and_ a big commuter to and from work, I often am in situations where I have some `exec` instance open, but the connection is flaky. When that happens, I _know_ that the connection will eventually timeout, but until then, that terminal window has become useless. ssh has a nice `~.` (tilde-dot) escape sequence that instantly terminates the connection in those situations. Perhaps `kubectl` can use the same technique? For someone like me, that would be a great UX improvement.
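For context, ssh only recognizes `~.` when the tilde is the first character on a line, so ordinary tildes in input don't trigger it. A minimal sketch of that style of escape detection (not kubectl's or ssh's actual code) might look like:

```go
package main

import "fmt"

// escapeScanner consumes user input one byte at a time and reports
// when an ssh-style "~." escape has been completed. The escape is
// only recognized when '~' is the first character on a line, which
// is how ssh avoids firing on tildes in ordinary input.
type escapeScanner struct {
	atLineStart bool // last byte was \r or \n (or stream start)
	sawTilde    bool // previous byte was a line-leading '~'
}

func newEscapeScanner() *escapeScanner {
	return &escapeScanner{atLineStart: true}
}

// feed consumes one input byte and returns true when "~." has just
// been typed at the start of a line.
func (s *escapeScanner) feed(b byte) bool {
	switch {
	case s.sawTilde && b == '.':
		s.sawTilde = false
		return true
	case s.atLineStart && b == '~':
		s.sawTilde = true
		s.atLineStart = false
	default:
		s.sawTilde = false
		s.atLineStart = b == '\n' || b == '\r'
	}
	return false
}

func main() {
	sc := newEscapeScanner()
	for _, b := range []byte("ls\n~.") {
		if sc.feed(b) {
			fmt.Println("escape detected: terminating session")
		}
	}
}
```

The client would run such a scanner over stdin before forwarding bytes to the remote shell, tearing down the connection locally when the escape fires.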
area/kubectl,kind/feature,sig/cli,lifecycle/frozen
medium
Major
184,298,228
flutter
Test Android back button is functional
We should add a regression test that verifies that we don't break the back button on Android.
a: tests,c: new feature,team,platform-android,framework,P3,team-android,triaged-android
low
Minor
184,344,609
kubernetes
kubectl apply umbrella issue tracker
We need to write kubectl apply user documentation, fix bugs, and complete the functionality. # Work Items ## TBD - [ ] #35345 Fix server side defaulting of Union Types - [ ] #34292 Fix server side defaulting for Union types and derived types - [x] #35345 Add apply support that work with Unions - [ ] #24198 Deployment strategy needs to be a Union - [ ] #34413 Apply Deployment and HPA in a single config sometimes fails (bug) - _Under triage_ - Known issue caused in csi libs - *out of scope* - [ ] #35991 Make apply --prune work on old versions of objects - _Stretch_ - [ ] #35149 Apply fails to patch ScheduledJob - [ ] Prevent user error from deleting unintended resources with --prune - [ ] Specify `apply` options in configuration file that is checked in so the invocation is reproducible - [ ] #20508 kubectl --record should avoid capturing secrets on the command line - [ ] #26234 Create fuzz testing for kubectl apply - [ ] #25238 Detect issues with HPA instead of causing unexpected behavior ## 1.9 - [ ] Api Open-Api: Include merge information in open-api spec so the client can merge values without statically linking the types (breaks version skew) - [ ] Don't erase new fields in kubectl apply when running an old client against a new server - [ ] #34292 Values defaulted by the server must remain correct and valid if their sources are changed. Includes but not limited to mutually exclusive fields. - [ ] Fix issue preventing changing the deployment rollout strategy to "Replace" - [ ] Provide a diff / dry-run option for apply to see what the changes will be before running them. 
When I run apply - [ ] List resource names that will be updated / created / deleted and prompt for confirmation when running apply, ## DONE - [x] #35999 Patch using info.VersionedObject - [x] #13576 kubectl apply conflict detection (feature) - [x] Audit Api Resources: Identify the set of defaulted values that will break apply and resolve - Requires full audit of all api objects ~days-~weeks - [x] #16569 kubectl apply --force (feature) - [x] #34274 Document apply --prune semantics - [x] kubernetes/kubernetes.github.io#1513 Apply documentation - [x] #35220 Fix pruning resource types that are no longer referenced - [x] #29542 ThirdPartyResource support - [x] Audit Api Resources: Verify merge keys are set correctly for objects and unique within an array (e.g. VolumeMount used a non-unique merge key and it broke patch) - Requires full audit of all api objects - [x] Audit Api Resources: Distinguish optional fields should be able to distinguish unset from set to default (e.g. optional fields must be pointers) - Requires full audit of all api objects - [x] #23564 kubectl apply leaks secret data (bug) - **Merged** - [x] #35163 Strategic merge should support remove or replace items in a list of primitive type ( bug) - _Unforseen complexity as it requires changing the API server PATCH method_ - **Merged** - [x] Verify kubectl is able to delete fields and set them to nil (potential bug) - _If fixes for anything that is broken are complex, this would be at risk_ - Fix in #35496 - [x] Make sure imperative operations (set / scale / annotate) don't get overridden by apply. (potential bug) - _If fixes for anything that is broken are complex, this would be at risk_
priority/important-soon,area/app-lifecycle,area/kubectl,sig/cli,area/declarative-configuration,lifecycle/frozen
low
Critical
184,373,284
youtube-dl
Add support spuul.com
Please add site support for free videos at spuul.com. Example: https://spuul.com/videos/1420-ladies-vs-ricky-bahl
site-support-request,account-needed
low
Minor
184,386,326
go
x/text/currency: TestLinking fails on ppc64le, s390x
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? 1.7.3 ### What operating system and processor architecture are you using (`go env`)? It's a ppc64le system running Ubuntu (various versions, version not important) ### What did you do? go test -v -p 1 golang.org/x/text/... ### What did you expect to see? Tests passing ### What did you see instead? --- FAIL: TestLinking (1.68s) currency_test.go:155: size(symbols)-size(base) was 772; want > 2K This was part of building for the distribution, you can see the full log at: https://launchpadlibrarian.net/290258937/buildlog_ubuntu-zesty-ppc64el.golang-x-text_0.0~git20161013.0.c745997-1ubuntu1~ppa1_BUILDING.txt.gz This passes with Go 1.6. But in general this test seems to be a bit of a hostage to the future. I think I'm going to add a distro patch disabling this (and the one in dict_test.go) for now.
NeedsFix
low
Major
184,392,313
youtube-dl
Access to info_dict before/after download/post-processing when embedding
- [X] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.21.1** ### Before submitting an _issue_ make sure you have: - [X] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [X] Question - [ ] Other I'm trying to get the exact filename that'll be written(merged) by youtube-dl. It's the same filename sent to debug when setting **forcefilename** to true on the options. I want to access this: https://github.com/rg3/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L1531 With youtube_dl.prepare_filename(result) I can get the merged filename without extension.. how do I get the extension?
request
low
Critical
184,468,704
TypeScript
Unoptimized destructuring compilation
**TypeScript Version:** 2.0 **Code** ``` ts // source ts code let i = 1000; while (i--) { let [a, b, c] = [1, 2, 3]; } ``` **Expected behavior:** ``` js // compiled by babel var i = 1000; while (i--) { var a = 1; var b = 2; var c = 3; } ``` **Actual behavior:** ``` js // compiled by tsc var i = 1000; while (i--) { var _a = [1, 2, 3], a = _a[0], b = _a[1], c = _a[2]; } ``` On each iteration, tsc creates a new, unnecessary array `_a`.
Suggestion,Help Wanted
low
Major
184,484,953
TypeScript
Service Worker typings
<!-- BUGS: Please use this template. --> <!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript --> <!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> **TypeScript Version:** 2.0.3 Types for Service Workers should be built into the default definitions for TypeScript. [There are some good examples](https://www.npmjs.com/package/@types/service_worker_api) out there of Service Worker typings from the community.
Suggestion,Help Wanted,Domain: lib.d.ts
high
Critical
184,529,305
TypeScript
Go to implementation misses some implementations
From tests/cases/fourslash/goToImplementationInterfaceMethod_01.ts ``` ts interface BaseFoo { hello(): void; } interface Foo extends BaseFoo { foo(): void; } interface Bar { hello(): void; bar(): void; } class FooImpl implements Foo { hello() { } foo() { } } class FooBaseImpl implements BaseFoo { hello() { } // missing this one! } class BarImpl implements Bar { hello() { } bar() { } } class FooBarImpl implements Foo, Bar { hello() { } foo() { } bar() { } } let foorbar: Foo | Bar; foorbar.hello/*1*/(); let foonbar: Foo & Bar; foonbar.hello/*2*/(); ``` Running find-all-implementations at both 1 and 2 should find the `hello` method in all of these classes. But `FooBaseImpl.hello` doesn't show up. The test incorrectly says that it should not, but it should.
Bug
low
Minor
184,533,263
youtube-dl
Use /tmp directory for frag and .vtt files
I sometimes see leftover `--Frag17` and `-.ru.vtt` files in my home directory. I assume it's from me interrupting youtube-dl during a download. Can you use the /tmp directory (`TMPDIR`) for those files instead?
request
low
Major
184,546,535
You-Dont-Know-JS
Types & Grammar - Chapter 2 - Small Decimal Values
Probably just me, but I feel like this line in the Small Decimal Values section is worded a little oddly: `As of ES6, Number.EPSILON is predefined with this tolerance value, so you'd want to use it, but you can safely polyfill the definition for pre-ES6:`
for second edition
medium
Minor
184,554,251
kubernetes
Reasonable defaults with eviction and PodDisruptionBudget
We are moving to have kubectl drain use the eviction subresource (https://github.com/kubernetes/kubernetes/issues/34277), and honor `PodDisruptionBudget`. From a [comment by @erictune](https://github.com/kubernetes/kubernetes/issues/35318#issuecomment-255501443) which better captures this requirement: There should be a default behavior for disruption budgets. The default should optimize for these things: 1. Users should not need to create, view, or even know about the PodDisruptionBudget resource, and still get "reasonable" default behaviors. 2. The default behavior should be reasonable for a wide variety of app use cases. 3. The default behavior could be expressed succinctly to users. 4. Cluster admins should be able to upgrade nodes quickly in clusters that have typical apps with default budgets, without interacting with app owners. Some options: 1. All pods in the cluster that match no user-created disruption budget have a budget of 1 at a time. - could be too conservative to allow for parallel node drains. 2. All pods in a namespace that match no user-created disruption budget have a budget of 1 at a time. - less conservative than 1 - probably not too conservative for systems with many namespaces relative to number of nodes. 3. Each set of pods with the same controller and that match no user-created disruption budget, have a disruption budget of 1 at a time. - assuming spreading of collections by scheduler, parallel node drains likely to be possible - the disruption controller already computes this grouping of pods, sort of. 4. Each set of pods with the same service, that match no user-created disruption budget, have a disruption budget of 1 at a time. - matches a default spreading behavior by the scheduler, so unlikely to have two on the same machine from same service, so parallel node drains likely to be possible. 
The actual implementation could be to have a controller that makes PDBs for things without them and then cleans them up when not needed anymore, or for the eviction logic to just compute these sets, but not expose them. cc @erictune @davidopp @kow3ns @janetkuo @caesarxuchao @ymqytw @kubernetes/sig-apps
area/stateful-apps,sig/apps,area/node-lifecycle,lifecycle/frozen
high
Critical
184,566,580
angular
ng-content component in router-outlet
<!-- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING --> ``` [x ] bug report => search github for a similar issue or PR before submitting [ ] feature request [ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question ``` **Current behavior** <!-- Describe how the bug manifests. --> I have a router-outlet inside a component like this ``` <app-leftnavbar> <router-outlet></router-outlet> </app-leftnavbar> ``` What I want to do is use `<ng-content select=".something">` inside app-leftnavbar in order to split the html of the loaded component into the router in two things, but no matter how I try, ng-content select doesnt work inside. But if I leave it without selectors it displays the content properly (the whole component (not-split of course)). Even if its a normal component and not a router-outlet its the same problem **Expected behavior** <!-- Describe what the behavior would be without the bug. --> I should be able to use <ng-content select=""> since the loaded component into the router-outlet will be transformed to html.. and since normal <ng-content> (without select) works, I expect to be able to select as well. **Minimal reproduction of the problem with instructions** <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Create an angular component inside of another component and try to use `<ng-content select=".something">` in on the parent ``` <parent> <child></child> </parent> ``` and inside parent use ``` <ng-content select=".something"></ng-content> ``` and ``` <ng-content select=".somethingelse"></ng-content> ``` and nothing will be shown. 
(Of course I have defined elements with the appropriate classes in `child`.) If used without a selector (just `<ng-content></ng-content>`), the component is shown normally. **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> - **Angular version:** 2.0.0
type: bug/fix,freq1: low,area: router,area: core,state: confirmed,core: content projection,P3
medium
Critical
184,569,041
go
cmd/vet: move asm checks into cmd/asm
Currently cmd/vet does not expand preprocessor macros in assembly code. As a result, it missed several issues in the runtime (a73d68e) and x/crypto (golang/crypto@5953a47, golang/crypto@e67f5ec, golang/crypto@3c0d69f). It would be nice if it could preprocess the assembly code before vetting it.
Analysis
medium
Major
184,595,060
kubernetes
Add discriminators to all unions/oneof in the API
Forked from #6979. We have a number of cases in the API where only one of a set of fields is allowed to be specified -- an undiscriminated union / oneof. `VolumeSource` is the canonical example: `emptyDir` or `gcePersistentDisk` or `awsElasticBlockStore` or ... Currently we require that exactly one be set, but don't explicitly represent which one we expect to be set. Adding discriminators to all unions/oneof cases would have multiple advantages: 1. clients could effectively implement a switch instead of if-else trees to inspect the resource -- look at discriminator and lookup the corresponding field in a map (though differences in capitalization of the first letter in the API convention currently prevents the discriminator value from exactly matching the field name). 2. The apiserver could automatically clear non-selected fields, which would be convenient for kubectl apply #15894 and other cases. For backward compatibility, any request that succeeds today needs to work even after we implemented (2). That would mean that newly added discriminators would have to be optional. If the discriminator were set, we'd require that the field corresponding to its value were set and the apiserver (registry) could automatically unset the other fields. If the discriminator were unset, behavior would be as before -- exactly one of the fields in the union/oneof would be required to be set and the operation would otherwise fail validation. The main question: should the discriminator be set by default? If it weren't set by default, then it couldn't be relied upon for purpose (1), and if-else trees would still be required forever (for the current apiversions). OTOH, if it were set by default, we'd need to change it accordingly when the corresponding union/oneof fields were set and unset. This would be different than all other defaulting in the API and contradicts the letter of an existing convention, though not the spirit. 
We don't change populated fields in the apiserver, because we don't want to change the user's expressed intent. However, in this case the intent would be inferred from the union/oneof fields instead of the discriminator field. The client would otherwise receive an error. We could potentially do stronger consistency validation in some cases. For instance, during creation, we could require the discriminator matched the union/oneof fields if specified. On updates, we potentially could check whether the discriminator changed from its prior value and require consistency if it did, since inconsistency would suggest a mistake or a conflict, though it would force the client's retry logic to reason about whether a validation error could be due to a conflict. cc @pwittrock
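Advantage (1) above can be sketched with hypothetical, heavily simplified stand-ins for the real API types (these are not the actual Kubernetes definitions):

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for API union members.
type EmptyDir struct{}
type GCEPersistentDisk struct{ PDName string }

// VolumeSource sketches a discriminated union: exactly one member
// field is set, and Type names which one.
type VolumeSource struct {
	Type              string // discriminator, e.g. "emptyDir"
	EmptyDir          *EmptyDir
	GCEPersistentDisk *GCEPersistentDisk
}

// describe switches on the discriminator instead of probing every
// pointer field with an if-else tree.
func describe(v VolumeSource) string {
	switch v.Type {
	case "emptyDir":
		return "emptyDir volume"
	case "gcePersistentDisk":
		return "GCE PD " + v.GCEPersistentDisk.PDName
	default:
		return "unknown volume source"
	}
}

func main() {
	v := VolumeSource{
		Type:              "gcePersistentDisk",
		GCEPersistentDisk: &GCEPersistentDisk{PDName: "disk-1"},
	}
	fmt.Println(describe(v))
}
```

The server-side clearing described above would amount to nulling every member field whose name does not match `Type` before persisting the object.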
priority/important-soon,area/api,sig/api-machinery,sig/apps,priority/important-longterm,sig/architecture,lifecycle/frozen
low
Critical
184,620,552
youtube-dl
Slide.ly Feature Request
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.21.1** - [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Feature request (request for a new functionality) ### Hi, I wanted to request a feature: the ability to download videos from [Slide.ly](http://slide.ly). Not in a hurry, but is it possible? We can publicly share videos anywhere, but we are unable to download them. Thank you.
site-support-request
low
Critical
184,623,473
youtube-dl
Vimeo will download only 1st episode
asd
account-needed
medium
Major
184,674,106
java-design-patterns
Component Object pattern
**Description:** The Component Object pattern aims to encapsulate individual parts of a system into reusable components that can be dynamically composed. This pattern enhances modularity, reduces coupling, and promotes reusability of code. Each component represents a distinct piece of functionality and can be combined with other components to form complex behaviors without tightly binding them together. Main elements of the Component Object pattern include: - **Component Interface**: Defines the behavior and properties that all components must implement. - **Concrete Component**: Implements the component interface with specific functionality. - **Component Container**: Manages the lifecycle and interactions of components, allowing dynamic composition. - **Component Factory**: Responsible for creating component instances and assembling them as needed. **References:** - [Component Object Pattern Example](https://lkrnac.net/blog/2016/10/component-object-pattern-example/) - [Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki) **Acceptance Criteria:** 1. Implement a basic component interface that defines common behaviors. 2. Create at least two concrete components implementing the component interface with distinct functionalities. 3. Develop a component container that supports the dynamic composition and management of these components.
epic: pattern,type: feature
low
Major
184,698,540
java-design-patterns
State-Action-Model pattern
**Description:** The State-Action-Model (SAM) pattern is a modern approach to state management in JavaScript applications. Unlike traditional MVC frameworks, SAM separates the state, actions, and model, providing a clear and manageable structure for application logic. This pattern enhances testability and maintainability by isolating state transitions and business logic from the view. **Main Elements of the SAM Pattern:** 1. **State:** Represents the application's state and is managed separately from the business logic and UI. The state is immutable, meaning any change results in a new state rather than modifying the existing state. 2. **Actions:** Encapsulate the business logic and are responsible for transitioning the state. Actions receive the current state and an event, process them, and return the new state. 3. **Model:** The model serves as an intermediary that controls the interaction between the state and actions. It ensures that actions are executed correctly and that state transitions are valid. **References:** 1. [SAM.js Official Documentation](https://sam.js.org/) 2. [InfoQ Article on SAM Pattern](https://www.infoq.com/articles/no-more-mvc-frameworks/) 3. [Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki) **Acceptance Criteria:** 1. Implement a basic example of the SAM pattern in the project, demonstrating the separation of state, actions, and model. 2. Ensure that the implementation is well-documented, with clear comments and explanations for each component of the pattern. 3. Provide unit tests for the state transitions and actions to validate the correctness of the implementation.
epic: pattern,type: feature
low
Major
184,702,118
neovim
RPC/API: Use Cap'n Proto instead of MessagePack
For the last 2 years, as I have followed the development of NeoVim, every single time I open the main NeoVim GitHub page and see _MessagePack_, I ask myself why I haven't already proposed [Cap'n Proto](https://capnproto.org/) as a better alternative. Long story short, Cap'n Proto is basically the fastest possible binary serialization format. It's also quite easy to implement in [any language](https://capnproto.org/otherlang.html). I think it's worth switching to, as it would make NeoVim frontends faster and lower-latency (imagine interpreted languages like JavaScript working with MessagePack) and allow more efficient CPU utilization for larger buffer changes.
enhancement,api,channels-rpc
medium
Critical
184,709,506
youtube-dl
please support camwhores.tv
example vids http://www.camwhores.tv/tags/kelseysbedroom/ ``` $ youtube-dl -v 'http://www.camwhores.tv/videos/28542/kelseysbedroom-webcam-show-2015-november-24-21-11/' [debug] System config: [u'--prefer-ffmpeg', u'--hls-prefer-native', u'-n'] [debug] User config: [] [debug] Command-line args: [u'-v', u'http://www.camwhores.tv/videos/28542/kelseysbedroom-webcam-show-2015-november-24-21-11/'] [debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2016.10.21.1 [debug] Python version 2.7.10 - Darwin-16.0.0-x86_64-i386-64bit [debug] exe versions: avconv 11.4, avprobe 11.4, ffmpeg 3.1.4, ffprobe 3.1.4 [debug] Proxy map: {'http': 'http://localhost:8118', 'https': 'http://localhost:8118'} [generic] kelseysbedroom-webcam-show-2015-november-24-21-11: Requesting header WARNING: Falling back on generic information extractor. [generic] kelseysbedroom-webcam-show-2015-november-24-21-11: Downloading webpage [generic] kelseysbedroom-webcam-show-2015-november-24-21-11: Extracting information ERROR: Unsupported URL: http://www.camwhores.tv/videos/28542/kelseysbedroom-webcam-show-2015-november-24-21-11/ Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1648, in _real_extract doc = compat_etree_fromstring(webpage.encode('utf-8')) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2525, in compat_etree_fromstring doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory))) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2514, in _XML parser.feed(text) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed self._raiseerror(v) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror raise err ParseError: mismatched tag: line 23, column 3 Traceback (most recent call last): File 
"/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 694, in extract_info ie_result = ie.extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 356, in extract return self._real_extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2431, in _real_extract raise UnsupportedError(url) UnsupportedError: Unsupported URL: http://www.camwhores.tv/videos/28542/kelseysbedroom-webcam-show-2015-november-24-21-11/ ``` thanks for your awesome job
site-support-request,nsfw
low
Critical
184,809,858
nvm
ls-remote and install take many tries to succeed
I am using the latest version of nvm. When I run `nvm ls-remote`, I must run the command 4 times: the first 3 attempts return 'N/A' and only then do I get a valid result. The same happens with `nvm install <version>` (4 tries, 3 failures, 1 success), but it takes too long to finish. Please check this and resolve it. Thank you!
installing node,needs followup
low
Minor
184,809,859
opencv
remap bug when original image is larger than 32767 in size
##### System information (version) <!-- Example - OpenCV => 3.1 - Operating System / Platform => Windows 64 Bit - Compiler => Visual Studio 2015 --> - OpenCV => 3.1: - Operating System / Platform => Windows 64 Bit: - Compiler => Visual Studio 2013: ##### Detailed description <!-- your description --> When trying to do remap with a quite large image, whose size is larger than 32767 (max size of int16), remap function will not act normally. This is because that in implementation, remap uses int16 as the coordinate data, as `Mat _bufxy(brows0, bcols0, CV_16SC2), _bufa;`. So when using it, the rows larger than 32767 will be clamped to 32767 and generate the following striped results. PS: I have tried the ipp remap function and it will work well. ![image](https://cloud.githubusercontent.com/assets/3850191/19643270/13eda790-9a1c-11e6-98c9-39680c5c39b9.png) ##### Steps to reproduce <!-- to add code example fence it with triple backticks and optional file extension ```.cpp // C++ code example ``` or attach as .txt or .zip file --> The following code should produce some black part on the bottom, however because of this bug, the result is all white. ``` #include <opencv2/core.hpp> #include <opencv2/imgproc.hpp> #include <opencv2/highgui.hpp> #include <stdint.h> int main(int argc, char **argv) { // rows should be larger than max of int16 int rows = 40000, cols = 1000; cv::Mat img = cv::Mat::zeros(rows, cols, CV_8U); uint8_t *idata = (uint8_t *)img.data; cv::Mat mapxy = cv::Mat::zeros(rows, cols, CV_32FC2); cv::Vec2f *mapdata = (cv::Vec2f *)mapxy.data; for (int r = 0; r < rows; ++r) { // ensure larger than 32767 uint8_t value = r > 38000 ? 0 : 255; for (int c = 0; c < cols; ++c) { *idata++ = value; (*mapdata)(0) = c; (*mapdata)(1) = r; mapdata++; } } cv::Mat result; cv::remap(img, result, mapxy, cv::noArray(), cv::INTER_LINEAR, cv::BORDER_CONSTANT); cv::imwrite("result.tif", result); return 0; } ```
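The core limitation is representability: int16 tops out at 32767, so a saturating conversion (what OpenCV's `saturate_cast<short>` does) pins every larger row coordinate to the same value. A minimal Go sketch of that behavior (not OpenCV code):

```go
package main

import (
	"fmt"
	"math"
)

// saturateInt16 mimics a saturating cast to int16: values outside
// the representable range are clamped rather than wrapped.
func saturateInt16(v int) int16 {
	if v > math.MaxInt16 {
		return math.MaxInt16
	}
	if v < math.MinInt16 {
		return math.MinInt16
	}
	return int16(v)
}

func main() {
	// Every row index above 32767 collapses to the same clamped
	// value, so all later output rows sample from row 32767.
	fmt.Println(saturateInt16(38000)) // clamped to 32767
	fmt.Println(saturateInt16(20000)) // representable, unchanged
}
```

This is why the reproduction above produces an all-white result: the rows that should be black live above index 32767 and are never actually addressed.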
feature,category: imgproc,priority: low,RFC
low
Critical
184,828,639
flutter
Would be nice if `flutter run` only downloaded the necessary artifacts
I was on a train with bad wifi. It was painful. I was trying to run the gallery on an iOS Simulator. ``` %flutter run Building flutter tool... Downloading package sky_engine... 8101ms Running 'pub get' in sky_engine... 450ms Downloading package sky_services... 596ms Running 'pub get' in sky_services... 3139ms Downloading package flutter_services... 1929ms Running 'pub get' in flutter_services... 2294ms Building Dart SDK summary... 1753ms Downloading engine artifacts android-arm... 177027ms Downloading engine artifacts android-arm-profile... 164117ms Downloading engine artifacts android-arm-release... 115925ms Downloading engine artifacts android-x64... 78964ms Downloading engine artifacts android-x86... 78600ms Downloading engine artifacts ios... 178979ms Downloading engine artifacts ios-profile... 189778ms Downloading engine artifacts ios-release... 152586ms Downloading darwin-x64 tools... 232022ms Downloading android-arm-profile/darwin-x64 tools... 34656ms Downloading android-arm-release/darwin-x64 tools... 139862ms Running 'pub get' in flutter_gallery... 870ms ``` FYI @danrubel
c: new feature,tool,P2,team-tool,triaged-tool
low
Major
184,877,974
go
cmd/compile: improve inlining cost model
The current inlining cost model is simplistic. Every gc.Node in a function has a cost of one. However, the actual impact of each node varies. Some nodes (OKEY) are placeholders that never generate any code. Some nodes (OAPPEND) generate lots of code. In addition to leading to bad inlining decisions, this design means that any refactoring that changes the AST structure can have unexpected and significant impact on compiled code. See [CL 31674](golang.org/cl/31674) for an example. Inlining occurs near the beginning of compilation, which makes good predictions hard. For example, `new` or `make` or `&` might allocate (large runtime call, much code generated) or not (near zero code generated). As another example, code guarded by `if false` still gets counted. As another example, we don't know whether bounds checks (which generate lots of code) will be eliminated or not. One approach is to hand-write a better cost model: append is very expensive, things that might end up in a runtime call are moderately expensive, pure structure and variable/const declarations are cheap or free. Another approach is to compile lots of code and generate a simple machine-built model (e.g. linear regression) from it. I have tried both of these approaches, and believe both of them to be improvements, but did not mail either of them, for two reasons: - Inlining significantly impacts binary size, runtime performance, and compilation speed. Minor changes to the cost model or to the budget can have big impacts on all three. I hoped to find a model and budget that was clearly Pareto optimal, but I did not. In order to make forward progress, we need to make a decision about what metric(s) we want to optimize for, and which code to measure those metrics on. This is to my mind the single biggest blocker for improving inlining. - Changing inlining decisions impacts the runtime as well as other code and minor inlining changes to the runtime can have outsized performance impacts. 
I see several possible ways to address this. (1) We could add a //go:inline annotation for use in the runtime only, to allow runtime authors to force the compiler to inline performance-critical functions. If a non-inlinable function was marked //go:inline, compilation would fail. (2) We could add a //go:mustinline annotation for use in the runtime only (see [CL 22785](https://go-review.googlesource.com/c/22785/)), to allow runtime authors to protect currently-inlined functions against becoming non-inlined. (3) We could tune runtime inlining (cost model, budget, etc.) independently. Three other related ideas: - We might want to take into account the number and size of parameters and return values of a function when deciding whether to inline it, since those determine the cost in binary size of setting up the function call. - We might want to have separate budgets for expressions and control flow, since branches end up being more expensive on most metrics. - We could treat intra-package inlining differently from inter-package inlining. When inlining across packages, we don't actually need to decide early on whether to allow inlining, since the actual inlining will occur in an entirely different compiler invocation. We could thus wait until function compilation is complete, and we know exactly how large the fully optimized code is, and then make our decision about whether other packages should inline that function. Downsides to this otherwise appealing idea are: (1) unexpected performance impact of moving a function from one package to another, (2) it introduces a significant dependency between full compilation and writing export data, which e.g. would prevent #15734. cc @dr2chase @randall77 @ianlancetaylor @mdempsky
Performance,ToolSpeed,NeedsInvestigation,compiler/runtime
high
Major
184,884,798
youtube-dl
[downloader/hlsnative] Resuming encrypted HLS streams leads to broken files
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.21.1** ### Before submitting an _issue_ make sure you have: - [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? - [x] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other ``` [$] youtube-dl --hls-prefer-native http://www.hotstar.com/tv/savdhaan-india/363/the-promiscuous-wives/1000153444 [HotStar] 1000153444: Downloading JSON metadata [HotStar] 1000153444: Downloading TABLET JSON metadata [HotStar] 1000153444: Downloading m3u8 information [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 416 [download] Destination: The Promiscuous Wives-1000153444.mp4 ``` I have downloaded the video twice now, but each time the finished file would not play in either mpv or vlc. While the download is still a .part file I can play it; it is only once it is complete that it stops playing. I suspect some sort of corruption is happening.
This is the output I get while viewing the .part file via ffprobe - ``` [$] ffprobe The\ Promiscuous\ Wives-1000153444.mp4.part ffprobe version 3.1.4-1 Copyright (c) 2007-2016 the FFmpeg developers built with gcc 6.2.0 (Debian 6.2.0-6) 20161010 configuration: --prefix=/usr --extra-version=1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --enable-shared --disable-libtesseract --disable-stripping --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-frei0r --enable-chromaprint --enable-libopencv --enable-libx264 libavutil 55. 28.100 / 55. 28.100 libavcodec 57. 48.101 / 57. 48.101 libavformat 57. 41.100 / 57. 41.100 libavdevice 57. 0.101 / 57. 0.101 libavfilter 6. 47.100 / 6. 47.100 libavresample 3. 0. 0 / 3. 0. 0 libswscale 4. 1.100 / 4. 1.100 libswresample 2. 1.100 / 2. 1.100 libpostproc 54. 0.100 / 54. 
0.100 [h264 @ 0x1693e00] non-existing SPS 0 referenced in buffering period [h264 @ 0x1693e00] SPS unavailable in decode_picture_timing [h264 @ 0x1693e00] non-existing SPS 0 referenced in buffering period [h264 @ 0x1693e00] SPS unavailable in decode_picture_timing [mpegts @ 0x168f3e0] PES packet size mismatch Input #0, mpegts, from 'The Promiscuous Wives-1000153444.mp4.part': Duration: 00:11:12.00, start: 0.100667, bitrate: 2144 kb/s Program 1 Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 60 kb/s ``` Look forward to know more.
bug
medium
Critical
184,967,915
flutter
Engine does not handle OOM well
When we run out of memory, we just crash. There's no clear indication of what the problem is. `flutter run` just says the application finished after printing `W/ActivityManager: Force finishing activity com.yourcompany.foo/org.domokit.sky.shell.SkyActivity` a couple of times, but doesn't quit. On the device, Android shows a material dialog that says "foo has stopped" with a single button "Open app again". Here's an app that demonstrates the problem: ``` dart import 'package:flutter/widgets.dart'; void main() { runApp(new CustomPaint(painter: new Painter())); } class Painter extends CustomPainter { @override void paint(Canvas canvas, Size size) { final Paint paint = new Paint() ..color = const Color(0xFFEE9944); while (true) canvas.drawRect(new Rect.fromLTWH(10.0, 10.0, 10.0, 10.0), paint); } } ``` Ideally we'd dump a stack trace of the currently running Dart code, at least.
c: crash,engine,c: performance,perf: memory,P3,team-engine,triaged-engine
low
Critical
185,008,912
go
runtime: gdb command "goroutine 1 bt" fails on core file
### What version of Go are you using (`go version`)? go version go1.5.3 linux/amd64 ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/ldata/comp/project/go" GORACE="" GOROOT="/ldata/bin/go" GOTOOLDIR="/ldata/bin/go/pkg/tool/linux_amd64" GO15VENDOREXPERIMENT="" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0" CXX="g++" CGO_ENABLED="1" ### What did you do? 1. export GOTRACEBACK=crash 2. ulimit -c unlimited 3. write a buggy program and run it to generate a core file 4. use gdb to load core file 5. source runtime-gdb.py 6. run `goroutine 1 bt` ### What did you expect to see? print stack trace of goroutine 1 ### What did you see instead? print error msg: Python Exception <class 'gdb.error'> You can't do that without a process to debug.: Error occurred in Python command: You can't do that without a process to debug.
NeedsFix,Debugging,compiler/runtime
medium
Critical
185,036,045
flutter
`flutter doctor` should check for incompatible adb in path
I used the Android Virtual Device Manager to create a new x86 + 7.1.1 instance, with API 25. ``` ~/tmp/testpixel $ flutter --version Flutter • channel master • [email protected]:flutter/flutter.git Framework • revision 80d578d93e (32 hours ago) • 2016-10-23 17:20:38 Engine • revision db12c5e621 Tools • Dart 1.21.0-dev.0.0 ``` I've tried a few times, I can't get the starter app to start... ``` ~/tmp/testpixel $ flutter -d An run Launching loader on Android SDK built for x86... --------- beginning of main W/ActivityManager: Force removing ActivityRecord{9558fc1 u0 com.yourcompany.testpixel/org.domokit.sky.shell.SkyActivity t7}: app died, no saved state Observatory listening on http://127.0.0.1:8100 Diagnostic server listening on http://127.0.0.1:8101 Syncing files to device... Exception from flutter run: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:8100 dart:io _HttpClient.putUrl package:flutter_tools/src/devfs.dart 225 _DevFSHttpWriter._scheduleWrite.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter._scheduleWrite package:flutter_tools/src/devfs.dart 218 _DevFSHttpWriter._scheduleWrites package:flutter_tools/src/devfs.dart 206 _DevFSHttpWriter.write.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter.write package:flutter_tools/src/devfs.dart 387 DevFS.update.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 540 DevFS._scanDirectory.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/devfs.dart DevFS.update package:flutter_tools/src/hot.dart 357 HotRunner._updateDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 145 AssetBundle.build.<async> dart:async _SyncCompleter.complete 
package:flutter_tools/src/asset.dart 291 _obtainLicenses.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._updateDevFS package:flutter_tools/src/hot.dart 263 HotRunner._run.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 280 DevFS.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 133 ServiceProtocolDevFSOperations.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 582 VM.createDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 561 VM.invokeRpcRaw.<async> dart:async _SyncCompleter.complete package:json_rpc_2/src/client.dart 175 Client._handleSingleResponse package:json_rpc_2/src/client.dart 165 Client._handleResponse dart:async _StreamController.add package:json_rpc_2/src/peer.dart 92 Peer.listen.<fn> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._run package:flutter_tools/src/hot.dart 150 HotRunner.run.<fn> package:stack_trace Chain.capture package:flutter_tools/src/hot.dart 149 HotRunner.run package:flutter_tools/src/commands/run.dart 221 RunCommand.runCommand.<async> Exception from flutter run: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:8100 dart:io _HttpClient.putUrl package:flutter_tools/src/devfs.dart 225 _DevFSHttpWriter._scheduleWrite.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter._scheduleWrite package:flutter_tools/src/devfs.dart 218 _DevFSHttpWriter._scheduleWrites package:flutter_tools/src/devfs.dart 206 _DevFSHttpWriter.write.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter.write 
package:flutter_tools/src/devfs.dart 387 DevFS.update.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 540 DevFS._scanDirectory.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/devfs.dart DevFS.update package:flutter_tools/src/hot.dart 357 HotRunner._updateDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 145 AssetBundle.build.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 291 _obtainLicenses.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._updateDevFS package:flutter_tools/src/hot.dart 263 HotRunner._run.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 280 DevFS.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 133 ServiceProtocolDevFSOperations.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 582 VM.createDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 561 VM.invokeRpcRaw.<async> dart:async _SyncCompleter.complete package:json_rpc_2/src/client.dart 175 Client._handleSingleResponse package:json_rpc_2/src/client.dart 165 Client._handleResponse dart:async _StreamController.add package:json_rpc_2/src/peer.dart 92 Peer.listen.<fn> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._run package:flutter_tools/src/hot.dart 150 HotRunner.run.<fn> package:stack_trace Chain.capture package:flutter_tools/src/hot.dart 149 HotRunner.run package:flutter_tools/src/commands/run.dart 221 RunCommand.runCommand.<async> Exception from flutter run: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:8100 dart:io _HttpClient.putUrl 
package:flutter_tools/src/devfs.dart 225 _DevFSHttpWriter._scheduleWrite.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter._scheduleWrite package:flutter_tools/src/devfs.dart 218 _DevFSHttpWriter._scheduleWrites package:flutter_tools/src/devfs.dart 206 _DevFSHttpWriter.write.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter.write package:flutter_tools/src/devfs.dart 387 DevFS.update.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 540 DevFS._scanDirectory.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/devfs.dart DevFS.update package:flutter_tools/src/hot.dart 357 HotRunner._updateDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 145 AssetBundle.build.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 291 _obtainLicenses.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._updateDevFS package:flutter_tools/src/hot.dart 263 HotRunner._run.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 280 DevFS.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 133 ServiceProtocolDevFSOperations.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 582 VM.createDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 561 VM.invokeRpcRaw.<async> dart:async _SyncCompleter.complete package:json_rpc_2/src/client.dart 175 Client._handleSingleResponse package:json_rpc_2/src/client.dart 165 Client._handleResponse dart:async _StreamController.add package:json_rpc_2/src/peer.dart 92 Peer.listen.<fn> ===== 
asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._run package:flutter_tools/src/hot.dart 150 HotRunner.run.<fn> package:stack_trace Chain.capture package:flutter_tools/src/hot.dart 149 HotRunner.run package:flutter_tools/src/commands/run.dart 221 RunCommand.runCommand.<async> Exception from flutter run: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:8100 dart:io _HttpClient.putUrl package:flutter_tools/src/devfs.dart 225 _DevFSHttpWriter._scheduleWrite.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter._scheduleWrite package:flutter_tools/src/devfs.dart 218 _DevFSHttpWriter._scheduleWrites package:flutter_tools/src/devfs.dart 206 _DevFSHttpWriter.write.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter.write package:flutter_tools/src/devfs.dart 387 DevFS.update.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 540 DevFS._scanDirectory.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/devfs.dart DevFS.update package:flutter_tools/src/hot.dart 357 HotRunner._updateDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 145 AssetBundle.build.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 291 _obtainLicenses.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._updateDevFS package:flutter_tools/src/hot.dart 263 HotRunner._run.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 280 DevFS.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 133 
ServiceProtocolDevFSOperations.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 582 VM.createDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 561 VM.invokeRpcRaw.<async> dart:async _SyncCompleter.complete package:json_rpc_2/src/client.dart 175 Client._handleSingleResponse package:json_rpc_2/src/client.dart 165 Client._handleResponse dart:async _StreamController.add package:json_rpc_2/src/peer.dart 92 Peer.listen.<fn> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._run package:flutter_tools/src/hot.dart 150 HotRunner.run.<fn> package:stack_trace Chain.capture package:flutter_tools/src/hot.dart 149 HotRunner.run package:flutter_tools/src/commands/run.dart 221 RunCommand.runCommand.<async> Exception from flutter run: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:8100 dart:io _HttpClient.putUrl package:flutter_tools/src/devfs.dart 225 _DevFSHttpWriter._scheduleWrite.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter._scheduleWrite package:flutter_tools/src/devfs.dart 218 _DevFSHttpWriter._scheduleWrites package:flutter_tools/src/devfs.dart 206 _DevFSHttpWriter.write.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter.write package:flutter_tools/src/devfs.dart 387 DevFS.update.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 540 DevFS._scanDirectory.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/devfs.dart DevFS.update package:flutter_tools/src/hot.dart 357 HotRunner._updateDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 145 
AssetBundle.build.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 291 _obtainLicenses.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._updateDevFS package:flutter_tools/src/hot.dart 263 HotRunner._run.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 280 DevFS.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 133 ServiceProtocolDevFSOperations.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 582 VM.createDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 561 VM.invokeRpcRaw.<async> dart:async _SyncCompleter.complete package:json_rpc_2/src/client.dart 175 Client._handleSingleResponse package:json_rpc_2/src/client.dart 165 Client._handleResponse dart:async _StreamController.add package:json_rpc_2/src/peer.dart 92 Peer.listen.<fn> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._run package:flutter_tools/src/hot.dart 150 HotRunner.run.<fn> package:stack_trace Chain.capture package:flutter_tools/src/hot.dart 149 HotRunner.run package:flutter_tools/src/commands/run.dart 221 RunCommand.runCommand.<async> Exception from flutter run: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:8100 dart:io _HttpClient.putUrl package:flutter_tools/src/devfs.dart 225 _DevFSHttpWriter._scheduleWrite.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask package:flutter_tools/src/devfs.dart _DevFSHttpWriter._scheduleWrite package:flutter_tools/src/devfs.dart 218 _DevFSHttpWriter._scheduleWrites package:flutter_tools/src/devfs.dart 206 _DevFSHttpWriter.write.<async> ===== asynchronous gap =========================== dart:async Future.Future.microtask 
package:flutter_tools/src/devfs.dart _DevFSHttpWriter.write package:flutter_tools/src/devfs.dart 387 DevFS.update.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 540 DevFS._scanDirectory.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/devfs.dart DevFS.update package:flutter_tools/src/hot.dart 357 HotRunner._updateDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 145 AssetBundle.build.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/asset.dart 291 _obtainLicenses.<async> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._updateDevFS package:flutter_tools/src/hot.dart 263 HotRunner._run.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 280 DevFS.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/devfs.dart 133 ServiceProtocolDevFSOperations.create.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 582 VM.createDevFS.<async> dart:async _SyncCompleter.complete package:flutter_tools/src/vmservice.dart 561 VM.invokeRpcRaw.<async> dart:async _SyncCompleter.complete package:json_rpc_2/src/client.dart 175 Client._handleSingleResponse package:json_rpc_2/src/client.dart 165 Client._handleResponse dart:async _StreamController.add package:json_rpc_2/src/peer.dart 92 Peer.listen.<fn> ===== asynchronous gap =========================== dart:async _asyncThenWrapperHelper package:flutter_tools/src/hot.dart HotRunner._run package:flutter_tools/src/hot.dart 150 HotRunner.run.<fn> package:stack_trace Chain.capture package:flutter_tools/src/hot.dart 149 HotRunner.run package:flutter_tools/src/commands/run.dart 221 RunCommand.runCommand.<async> Application finished. ```
tool,t: flutter doctor,P3,team-tool,triaged-tool
low
Major
185,063,869
rust
Compiler switch to list bound checks
This is my first Rust enhancement request. With the Go language compiler version 1.7, you can build the code using -gcflags="-d=ssa/check_bce/debug=1" to show which code lines still carry bounds checks: http://www.tapirgames.com/blog/golang-1.7-bce Finding where the compiler is not able to optimize away run-time bound checks is important for some kinds of code where performance matters: https://github.com/rust-lang/rust/pull/30917 Currently to do that in Rust I have to generate the asm with --emit asm, search for the function(s) (this sometimes is not so easy because of inlining) and then read the asm to understand if and where it performs bound checks. This is laborious and not handy. So is it possible to add to Rustc a compilation switch, similar to Go's, that outputs all the lines and columns where LLVM isn't able to remove array bound tests? ``` In-bound test(s): 16 | foo[i] -= 1; ^ 32 | foo[i] += bar[j]; ^ ^ ``` (Having this compiler switch is only about one third of this performance battle).
I-slow,C-enhancement,T-compiler,C-feature-request
low
Critical
185,158,832
go
net/http/cookiejar: add way to access all cookies in CookieJar
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? 1.6.3 ### What operating system and processor architecture are you using (`go env`)? darwin / amd64 ### What did you do? Tried to access _all_ cookies stored in a [http.CookieJar](https://golang.org/pkg/net/http/cookiejar/#Jar.Cookies). ### What did you expect to see? A function for accessing the `[]*http.Cookie` data. The cookiejar struct is missing a way to access all the cookie structs saved in it. You have to know all the matching URLs (as url.URL structs) in order to pull cookies out of it. ### What did you see instead? Only a method for [reading cookies](https://golang.org/pkg/net/http/cookiejar/#Jar.Cookies) from pre-known url.URL structs.
Suggested,help wanted,FeatureRequest
medium
Major
185,239,814
youtube-dl
Piping to stdout writes to a file instead of stdout with some format specifier
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x]) - Use _Preview_ tab to see how your issue will actually look like --- ### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.10.25_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.25** ### Before submitting an _issue_ make sure you have: - [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? - [x] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- Piping to stdout does not work when using format `bestvideo+bestaudio/best`. It downloads to a file named `-.ext`. For example : `$ youtube-dl -v -f bestvideo+bestaudio/best -o - 'https://www.youtube.com/watch?v=0YSEagC7tsI' | mpv` It works without the above format specifier although ironically it is documented to be the default. I noticed this because I have `-f bestvideo[width<=?1920]+bestaudio/best` in my youtube-dl config file, but I expected it to use the format which does not require merging when piping.
bug
low
Critical
185,243,373
youtube-dl
[bellmedia] MTV Broken due to wrong video ID and more
**MTV.com & MTV.ca are still broken even after updating to the 2016.10.25 version; I am getting Unsupported Site on both domains.** ``` yt http://www.mtv.ca/shows/the-challenge/video/the-duel/the-duel-ep3/1546068/0/3 [9c9media] 154606: Downloading JSON metadata ERROR: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. ``` ## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x]) - Use _Preview_ tab to see how your issue will actually look like --- ### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.10.25_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.25** ### Before submitting an _issue_ make sure you have: - [ ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? 
- [x] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other
bug,geo-restricted
low
Critical
185,244,655
youtube-dl
Set constant bitrate for audio only
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x]) - Use _Preview_ tab to see how your issue will actually look like --- ### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.10.25_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.25** ### Before submitting an _issue_ make sure you have: - [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [x] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your _issue_ --- ### If the purpose of this _issue_ is a _bug report_, _site support request_ or you are not completely sure provide the full verbose output as follows: Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` $ youtube-dl -v --audio-format mp3 --audio-quality 0 -x https://www.youtube.com/watch?v=Zej2OYAYDKY [debug] System config: [] [debug] User config: [] [debug] Command-line args: [u'-v', u'--audio-format', u'mp3', u'--audio-quality', u'0', u'-x', u'https://www.youtube.com/watch?v=Zej2OYAYDKY'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2016.10.25 [debug] Python version 2.7.12+ - Linux-4.8.0-26-generic-x86_64-with-Ubuntu-16.10-yakkety [debug] exe versions: avconv 3.0.2-1ubuntu3, avprobe 3.0.2-1ubuntu3, ffmpeg 3.0.2-1ubuntu3, ffprobe 3.0.2-1ubuntu3, rtmpdump 2.4 [debug] Proxy map: {} [youtube] Zej2OYAYDKY: Downloading webpage [youtube] Zej2OYAYDKY: Downloading video info webpage [youtube] Zej2OYAYDKY: Extracting video information [youtube] {43} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {18} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {36} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {17} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {135} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {244} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {134} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {243} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {133} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {242} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {160} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {278} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {140} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {171} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {249} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] {250} signature length 40.41, html5 player en_US-vflEz7zqU [youtube] 
{251} signature length 40.41, html5 player en_US-vflEz7zqU [debug] Invoking downloader on u'https://r3---sn-5hnednlr.googlevideo.com/videoplayback?gir=yes&nh=IgpwcjAzLmFtczE2Kg03Mi4xNC4yMDMuMTcx&expire=1477453920&requiressl=yes&keepalive=yes&pl=14&itag=251&key=yt6&mime=audio%2Fwebm&clen=8257189&ipbits=0&lmt=1410678477611640&mn=sn-5hnednlr&source=youtube&mm=31&upn=-hkb07XNYT8&dur=408.801&ms=au&mv=m&mt=1477432256&ip=87.212.130.235&ei=ANQPWMvzA4KeWbnxgOgD&sparams=clen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cnh%2Cpl%2Crequiressl%2Csource%2Cupn%2Cexpire&id=o-AFeb2NMRbtc7nN0U5EEAhMfbUgpQ2NPFrnfjU4lKkRqe&initcwndbps=961250&signature=E3A9BB14B3495241E815FF78AFEF3EC48380D3D5.867C238DA23F4074A854ED34F1AF384F4E0EC8A5&ratebypass=yes' ``` --- ### If the purpose of this _issue_ is a _site support request_ please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: https://www.youtube.com/watch?v=BaW_jenozKc - Single video: https://youtu.be/BaW_jenozKc - Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc --- ### Description of your _issue_, suggested solution and other information Explanation of your _issue_ in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible. If work on your _issue_ requires account credentials please provide them or explain how one can obtain them. Please offer options to force constant bitrate for audio only download. Some dj applicatoins work better with CBR than VBR.
request
low
Critical
185,265,773
flutter
Some of our controls can't handle being built in 0x0 environments
We should create tests that put each UI control into a 0x0 container and make sure they don't crash. See https://github.com/flutter/flutter/pull/6535.
a: tests,framework,f: material design,good first issue,P2,team-design,triaged-design
low
Critical
185,430,251
go
net: add Buffers.ReadFrom and Buffers.Write?
Go 1.8 is adding net.Buffers, with Read and WriteTo methods. These both move data from the buffers to a target, either a []byte (for Read) or a Writer (for WriteTo). The implementation of WriteTo on one of the package net-implemented net.Conns uses writev. So far so good. What about readv? What is that going to look like? I ask because it might influence some of the finer details of Buffers. For example, if we can't support readv with the same data structure, we may want to call it WriteBuffers and have a separate ReadBuffers. I do think we can support readv, but we should make sure. For example you could imagine that readv works by setting up a Buffers with space in the []bytes between len and cap, like: ``` b := Buffers{make([]byte, 0, 16), make([]byte, 0, 1024)} ``` to read the first 16 bytes into the first slice and the next 1k into the second, for both `b.Write(data)` and `b.ReadFrom(conn)`. In order to allow repeating the reads until you get all the data you want, you would need to define that Write and ReadFrom (unlike Read and WriteTo) do not remove full slices from Buffers: they just top off what's available. Probably only top off the final non-empty buffer. That is, if you have ``` b := Buffers{make([]byte, 4, 10), make([]byte, 15, 20), make([]byte, 0, 1024)} ``` and you read 5 bytes, where do they go? It seems wrong for them to go into the first slice, since that would stick them in the middle of the 4+15 bytes already semantically in the buffer. So probably they go into the middle slice, which has 5 (20-15) spare bytes of capacity. This implies that Write/ReadFrom needs to scan backward from the end to find the first non-empty buffer. 
It would be nice if Read/WriteTo, as they pull data out of the slices, didn't throw away the slices entirely, so that you could imagine using a Buffers not unlike a bytes.Buffer, where once allocated to a particular size you could repeatedly Write into it and then Read out from it (or ReadFrom into it and WriteTo out of it). Unfortunately even if the writing operations did avoid cutting slices out of the buffer, they need some way to track what is left to be written from a particular slice, and the way to do that is to advance the base pointer, reducing len and cap. At the end of a complete write the lens of the buffers are necessarily 0, and there's no way to back them back up. Concretely, if I have: ``` b := Buffers{make([]byte, 100)} ``` and I write 99 bytes, I'm left now with Buffers{slice(len=1,cap=1)}. If I write 100 bytes, I'm left with Buffers{} (no slices). The desire to reuse a Buffer would suggest that maybe Read/WriteTo shouldn't pull slices out, but it's not terribly useful to leave them in when they'd have len (and likely cap) 0 so they are basically useless anyway. This implies that reuse of a Buffers for new reading into the buffers after writing from the buffers is probably just hard. You'd need to do something like: ``` allBuffers := Buffers{make([]byte, 0, 100), make([]byte, 0, 100)} b := make(Buffers, len(allBuffers)) for { copy(b, allBuffers) // restore full capacity b.ReadFrom(conn1) b.WriteTo(conn2) } ``` That seems OK to me. I just want to make sure we've thought about all this. @bradfitz, what do you think? Does this seem like a reasonable basic plan for readv? Should we do it for Go 1.8 so that there's never a net.Buffers that only works for writev?
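To make the slice-consuming behavior described above concrete, here is a minimal sketch against the `net.Buffers` API as it shipped for Go 1.8: after a complete `WriteTo`, every slice has been removed from the Buffers, which is exactly why reuse requires re-populating it from a saved copy as in the loop above.

```go
package main

import (
	"bytes"
	"fmt"
	"net"
)

// drainBuffers writes a two-slice net.Buffers to an in-memory sink and
// reports what is left of the Buffers afterwards.
func drainBuffers() (written int64, payload string, remaining int) {
	b := net.Buffers{[]byte("hello, "), []byte("world")}
	var sink bytes.Buffer
	n, err := b.WriteTo(&sink) // consumes the slices as it writes
	if err != nil {
		panic(err)
	}
	return n, sink.String(), len(b)
}

func main() {
	n, s, left := drainBuffers()
	// After a full write the Buffers holds no slices at all.
	fmt.Println(n, s, left)
}
```

Running this shows the Buffers left with zero slices, matching the observation that the lens (and the slices themselves) are gone after writing, so there is nothing useful to reuse in place.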
help wanted,NeedsFix,FeatureRequest
medium
Major
185,436,327
TypeScript
JSDoc support for destructured parameters
**TypeScript Version:** 2.0.3 **Code** ``` ts interface Foo { /** * A bar value */ bar?: string; } /** * A function * * @param foo A param * @param { bar } Another param * @param bar Another param */ function foo(foo: string, { bar }: Foo): void { bar; foo; } foo('bar', { bar: 'play' }); ``` **Expected behavior:** Intellisense for the second argument, or the second argument properties. **Actual behavior:** No way of providing a description for the destructured elements of a destructured parameter. ![destructured](https://cloud.githubusercontent.com/assets/1282577/19733834/2adf8086-9b9d-11e6-945c-63136ded02d2.gif)
Suggestion,In Discussion,Help Wanted,VS Code Tracked,Domain: JSDoc
medium
Critical
185,437,822
go
syscall: add support for Windows job objects
I would like to reopen #6720. I excitedly read the whole thread, ending with the greatest facepalm I have had in years. Look, this is not a solution! If `nodejs` knows how to pass these signals between processes on Windows, then there is no reason that `golang` should not. No magic involved. Full story below. Mind that if the `go` shim is out of the loop it works as expected, so I deduce it has to be something that `golang` does wrong. I would appreciate your input on this... ### What version of Go are you using (`go version`)? Don't know; the one that `[email protected]` uses. ### What operating system and processor architecture are you using (`go env`)? Windows 10, x64 ### What did you do? I'm using `nodist`, which allows me to run different versions of nodejs side by side. `nodist` uses a shim layer of `go` to look at the env vars and local folder to detect the desired node version and call the relevant executable accordingly. See here: https://github.com/marcelklehr/nodist/issues/179 ### What did you expect to see? https://github.com/marcelklehr/nodist/issues/179 ### What did you see instead? https://github.com/marcelklehr/nodist/issues/179
help wanted,OS-Windows,NeedsInvestigation,FeatureRequest
low
Major
185,439,260
go
encoding/json: ambiguous fields are marshalled
Embedded fields work, even when there are selectors that are the same: ex: https://play.golang.org/p/fJTGL-HWEk You get a compile-time error, though, when you select ambiguously: ex: https://play.golang.org/p/McjVYbnAhT I understand that it would be hard to achieve, but I expect a compile-time error from the following code, and I don't get one: https://play.golang.org/p/2oOXH2uzWy I understand that others might rely on this inconsistency now, but if a compile-time error is achievable (I understand that this would be difficult), it would be the least offensive fix possible.
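Since the playground snippets are linked rather than inlined, here is a hypothetical minimal reproduction of one facet of the inconsistency (the names `A`, `B`, `C`, `X` are illustrative only, not taken from the report): selecting `c.X` directly would be a compile-time error, yet `json.Marshal` accepts the value without complaint and silently drops the ambiguous field.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type A struct{ X int }
type B struct{ X int }

// C embeds A and B, so c.X is ambiguous and selecting it is a compile
// error; encoding/json instead resolves the conflict silently, by
// excluding the conflicting field from the output.
type C struct {
	A
	B
}

func marshalAmbiguous() string {
	out, err := json.Marshal(C{A{1}, B{2}})
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	// No error at compile time or run time; X simply vanishes.
	fmt.Println(marshalAmbiguous())
}
```

This prints `{}`: the marshaller succeeds where the language would have rejected the equivalent field selection, which is the kind of inconsistency the report is about.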
NeedsDecision
low
Critical
185,475,548
go
all: spelling inconsistency for networking terms
Some packages use `hostname` and others `host name`. The worst is the net package which uses both. It might be better to align to `host name` defined in https://tools.ietf.org/html/rfc7719. ``` Host name: This term and its equivalent, "hostname", have been widely used but are not defined in [RFC1034], [RFC1035], [RFC1123], or [RFC2181]. ```
Documentation,help wanted,NeedsFix
low
Major
185,479,899
TypeScript
auto align colons in object literals, variables, and equals like WebStorm
_From @kinergy on October 17, 2016 2:56_ - VSCode Version: 1.6.1 - OS Version: macOS 10.12 Using Format Code on a js file: ``` var abcde = 'abcde', def = { a : 'aaa', bb: 'b' }; ``` turns into: ``` var abcde = 'abcde', def = { a: 'aaa', bb: 'b' }; ``` It would be really nice to have a setting to align the beginning of var names, align the equals, and align the colons in object literals like WebStorm. I tried looking for an extension, but wasn't able to identify one. _Copied from original issue: Microsoft/vscode#13841_
Suggestion,Awaiting More Feedback,VS Code Tracked
medium
Critical
185,519,656
bitcoin
Unbounded reorg memory usage
#9014 brought to light an issue introduced in #7946 - when we do a reorg, we now keep the full contents of each block we connect in memory until the reorg is complete. Luckily, #9014 makes this a bit easier to fix - don't keep more than a few blocks around in shared_ptrs, just re-read them from disk after the reorg is complete when you need to call signalers. This may be complicated by pruning, but it should be doable.
Resource usage
low
Major
185,529,898
go
cmd/compile: optimize functions into boolean expressions
Using 587b80322c6ce34ab115d7a837a56d7450aa913d In http://golang.org/cl/32122, I optimized a switch-based implementation to be a boolean expression. The two implementations below are semantically equivalent: ``` go func ValidRune1(r rune) bool { switch { case r < 0: return false case surrogateMin <= r && r <= surrogateMax: return false case r > MaxRune: return false } return true } func ValidRune2(r rune) bool { return (0 <= r && r < surrogateMin) || (surrogateMax < r && r <= MaxRune) } ``` Testable benchmark: https://play.golang.org/p/KMlv43EVQn The benchmark results: ``` benchmark old ns/op new ns/op delta BenchmarkA-4 16.3 11.9 -26.99% ``` We could consider having the compiler recognize the pattern of switch (with boolean expression as cases) and all return values are either boolean literals or expressions and have the compiler automatically optimize that into a single larger boolean expression. \cc @bradfitz @minux @martisch
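For reference, the two forms from the CL can be checked against each other directly. This sketch inlines the relevant constants (the Unicode surrogate range 0xD800-0xDFFF and the maximum rune), which are assumptions standing in for the unexported values in the original package:

```go
package main

import "fmt"

const (
	surrogateMin = 0xD800        // first UTF-16 surrogate code point
	surrogateMax = 0xDFFF        // last UTF-16 surrogate code point
	maxRune      = '\U0010FFFF'  // maximum valid Unicode code point
)

// validRune1 is the switch-based form; validRune2 is the single boolean
// expression it was hand-optimized into.
func validRune1(r rune) bool {
	switch {
	case r < 0:
		return false
	case surrogateMin <= r && r <= surrogateMax:
		return false
	case r > maxRune:
		return false
	}
	return true
}

func validRune2(r rune) bool {
	return (0 <= r && r < surrogateMin) || (surrogateMax < r && r <= maxRune)
}

func main() {
	// Exercise every boundary value where the two forms could diverge.
	for _, r := range []rune{-1, 0, 'A', surrogateMin - 1, surrogateMin,
		surrogateMax, surrogateMax + 1, maxRune, maxRune + 1} {
		if validRune1(r) != validRune2(r) {
			panic("implementations disagree")
		}
	}
	fmt.Println("equivalent on all boundary values")
}
```

A compiler pass recognizing this pattern would need to prove exactly this kind of boundary-for-boundary equivalence between the switch arms and the combined expression.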
Performance,compiler/runtime
low
Minor
185,634,914
youtube-dl
Multiple connection support in internal dash downloader
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your _issue_ (like that [x]) - Use _Preview_ tab to see how your issue will actually look like --- ### Make sure you are using the _latest_ version: run `youtube-dl --version` and ensure your version is _2016.10.26_. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.26** ### Before submitting an _issue_ make sure you have: - [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your _issue_? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [x] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your _issue_ --- ### If the purpose of this _issue_ is a _bug report_, _site support request_ or you are not completely sure provide the full verbose output as follows: Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```): ``` C:\Video>youtube-dl --external-downloader axel http://genflix.orangetv.asia.swiftserve.com/US/videoContent-1530-iqn62l7c-v1/videoContent-1530-iqn62l7c-m1-4-NoneUS/1487-1530-m1-iqn62l7c-default.ism/a.mpd -v [debug] System config: [] [debug] User config: [] [debug] Command-line args: ['--external-downloader', 'axel', 'http://genflix.orangetv.asia.swiftserve.com/US/videoContent-1530-iqn62l7c-v1/videoContent-1530-iqn62l7c-m1-4-NoneUS/1487-1530-m1-iqn62l7c-default.ism/a.mpd', '-v'] [debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252 [debug] youtube-dl version 2016.10.26 [debug] Python version 3.4.4 - Windows-10-10.0.10240 [debug] exe versions: ffmpeg N-82117-gc117343, ffprobe N-82117-gc117343, rtmpdump 2.3 [debug] Proxy map: {} [generic] a: Requesting header WARNING: Falling back on generic information extractor. [generic] a: Downloading webpage [generic] a: Extracting information [debug] Invoking downloader on 'http://genflix.orangetv.asia.swiftserve.com/US/videoContent-1530-iqn62l7c-v1/videoContent-1530-iqn62l7c-m1-4-NoneUS/1487-1530-m1-iqn62l7c-default.ism/dash/' [dashsegments] Total fragments: 3222 ``` --- ### If the purpose of this _issue_ is a _site support request_ please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: https://www.youtube.com/watch?v=BaW_jenozKc - Single video: https://youtu.be/BaW_jenozKc - Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc https://bitdash-a.akamaihd.net/content/sintel/sintel.mpd http://genflix.orangetv.asia.swiftserve.com/US/videoContent-1633-istumdrs-v1/videoContent-1633-istumdrs-m1-4-NoneUS/1584-1633-m1-istumdrs-default.ism/a.mpd --- ### Description of your _issue_, suggested solution and other information Explanation of your _issue_ in arbitrary form goes here. 
Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible. If work on your _issue_ requires account credentials please provide them or explain how one can obtain them. Please add support for multiple connections / threads when downloading MPD video streams, as youtube-dl can sometimes be slow even on a fast internet connection. For example, when downloading from the second example URL above I mostly get only 20 - 200 KB/sec, and youtube-dl seems to ignore --external-downloader when downloading an mpd file.
request
low
Critical
185,716,192
opencv
solve CV_SVD very slow in the latest opencv
I'm on one of the most recent revisions of opencv (compiled from source): 1ae27eb6967f98c78c741539afb1d2de6a0bb322 (I compiled it with opencv_contrib if that matters). Once I moved from official release to the one I mentioned earlier, matrix decomposition became really slow (10^3 slower!). ``` solve(X, Y, res, CV_SVD); ``` Please find X and Y matrices in attachment, hope it helps to reproduce the issue. [data.txt](https://github.com/opencv/opencv/files/556422/data.txt)
incomplete
low
Major
185,765,165
go
cmd/compile: use better code for slicing
``` func f(s []int, i int) []int { return s[i:] } ``` The assembly starts out with ``` MOVQ "".i+32(FP), AX MOVQ "".s+16(FP), CX CMPQ AX, CX JHI $0, 60 SUBQ AX, CX ``` The CMP and SUB are almost redundant. Instead, we could do ``` MOVQ "".i+32(FP), AX MOVQ "".s+16(FP), CX SUBQ AX, CX JHI $0, 60 ``` We just need a subtract that also generates flags. (We have a 32-bit one for synthesizing 64-bit ops. We need a 64-bit one also.)
Performance,compiler/runtime
low
Minor
185,914,339
TypeScript
disallow comparing to null and undefined unless they are valid cases in strict null mode
the rationale is the same as for `--noUnusedLocals` which in this case would be `--noUselessNullChecks` (i am not proposing a new flag, the existing `--strictNullChecks` flag should be used, this is for illustration only) ``` typescript declare const value: number; if (value !== null) { // <-- somehow valid, expected to be invalid, since number doesn't have null as a possible value } if (value !== '') { // <-- invalid as expected } ```
Suggestion,Awaiting More Feedback,Has Repro
medium
Critical
185,924,188
opencv
conflicting typedef for int64
##### System information (version) - OpenCV => 2.4.9 (2.4.9.1+dfsg-1.5ubuntu1) - Operating System / Platform => Ubuntu 16.04, 64 bit - Compiler => GNU GCC 5.2 ##### Detailed description OpenCV has typedef in global namespace that will prevent compilation if other library has conflicting typedef. Currently, we are aware of one library (libgeos) that has conflicting typedef. We are going to submit similar bug report to libgeos. ``` .cpp // libgeos typedef long long int int64; // opencv typedef int64_t int64; ``` I am aware that this problem could be solved by compiling code in different compilation unit, but currently we have shared code that can't be separated (or at least not easily) in different compilation units. The only option in that case is to either mess with typedef, i.e.: ``` .cpp #define int64 opencv_broken_int #include <opencv2/imgproc/imgproc.hpp> #undef int64 #include <geos/geom/Coordinate.h> int main(int, argc**) { return 0; } ``` or make wrapper interface and compile it different compilation units. Nevertheless, the nicest solution would be if opencv could stop using common typename like int64 in global namespace. I am aware that using C++ namespace might not be option due to C interface, but at least prefixed typedef could be used, i.e. opencv_int64 / cv_int64 or similar. Such typedef would not break ABI, although it would change API. We are ready to commit resources to this issue and provide a patch once we agree that this should be fixed and agree what is best approach to this issue. 
##### Steps to reproduce ``` .cpp // sample.cpp #include <geos/geom/Coordinate.h> #include <opencv2/imgproc/imgproc.hpp> int main(int, argc**) { return 0; } ``` ``` $ g++ -c sample.cpp In file included from /usr/include/opencv2/core/core.hpp:49:0, from /usr/include/opencv2/imgproc/imgproc.hpp:50, from sample.cpp:2: /usr/include/opencv2/core/types_c.h:163:20: error: conflicting declaration ‘typedef int64_t int64’ typedef int64_t int64; ^ In file included from /usr/include/geos/geom/Coordinate.h:19:0, from sample.cpp:1: /usr/include/geos/platform.h:66:26: note: previous declaration as ‘typedef long long int int64’ typedef long long int int64; ^ sample.cpp:4:15: error: ‘argc’ has not been declared int main(int, argc**) ^ sample.cpp:4:5: warning: second argument of ‘int main(int, int**)’ should be ‘char **’ [-Wmain] int main(int, argc**) ```
category: build/install,RFC,future
medium
Critical
185,961,353
go
cmd/go: wrong import stack in error message
This is not true: ``` $ go get -u rsc.io/github/issue package rsc.io/github/issue imports 9fans.net/go/acme imports 9fans.net/go/plan9 imports 9fans.net/go/plan9/client imports 9fans.net/go/draw imports 9fans.net/go/draw/drawfcall imports github.com/google/go-github/github imports github.com/google/go-querystring/query imports golang.org/x/net/context: exit status 1 ... error about x/net checkout being in a bad state ... ``` It looks like imports are being pushed on the import stack and not being popped correctly. rsc.io/github/issue imports github.com/google/go-github/github directly. 9fans.net/go/draw/drawfcall does not.
NeedsFix
low
Critical
185,967,321
youtube-dl
Forbes video download (hosted on Brigthcove)
Is it possible to add support for this site? Example: http://www.forbes.com/video/5118388570001/
site-support-request
low
Major
186,029,164
youtube-dl
site-support in youtube-dl for ozee.com site- streaming video only in flash though.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2016.10.26** - [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Site support request (request for adding support for a new site) --- ## I would like to have support for ozee.com. This is what I get when I tried using youtube-dl: ``` [$] youtube-dl --verbose -F "http://www.ozee.com/shows/little-lord/video/little-lord-episode-23-october-28-2016-full-episode.html" [3:13:25] [debug] System config: [] [debug] User config: [] [debug] Command-line args: [u'--verbose', u'-F', u'http://www.ozee.com/shows/little-lord/video/little-lord-episode-23-october-28-2016-full-episode.html'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2016.10.26 [debug] Python version 2.7.12+ - Linux-4.7.0-1-amd64-x86_64-with-debian-stretch-sid [debug] exe versions: ffmpeg 3.1.4-1, ffprobe 3.1.4-1, rtmpdump 2.4 [debug] Proxy map: {} [generic] little-lord-episode-23-october-28-2016-full-episode: Requesting header WARNING: Falling back on generic information extractor.
[generic] little-lord-episode-23-october-28-2016-full-episode: Downloading webpage [generic] little-lord-episode-23-october-28-2016-full-episode: Extracting information ERROR: Unsupported URL: http://www.ozee.com/shows/little-lord/video/little-lord-episode-23-october-28-2016-full-episode.html Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1634, in _real_extract doc = compat_etree_fromstring(webpage.encode('utf-8')) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2525, in compat_etree_fromstring doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory))) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2514, in _XML parser.feed(text) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1653, in feed self._raiseerror(v) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1517, in _raiseerror raise err ParseError: not well-formed (invalid token): line 76, column 24 Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 694, in extract_info ie_result = ie.extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 356, in extract return self._real_extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2417, in _real_extract raise UnsupportedError(url) UnsupportedError: Unsupported URL: http://www.ozee.com/shows/little-lord/video/little-lord-episode-23-october-28-2016-full-episode.html ```
site-support-request,geo-restricted
low
Critical
186,058,391
go
archive/tar: add Reader.NextRaw method to read only one raw header
The current master implementation [elides records](https://github.com/golang/go/blob/ee457118cd7b11264719647fa6f7422bac2a4431/src/archive/tar/reader.go#L117-L121), for example, pax `x` typeflag records are collapsed into the following record with the extended headers being [merged into the normal file's headers](https://github.com/golang/go/blob/ee457118cd7b11264719647fa6f7422bac2a4431/src/archive/tar/reader.go#L168-L170). This is useful for most consumers, who are only interested in consuming the tar content. But two use cases would benefit from more direct access to the lower-level tar details: - Accessing information that is not yet represented in the tar structure. For example, `tar.Header` does not currently expose pax extended headers outside of [these names and prefixes](https://github.com/golang/go/blob/ee457118cd7b11264719647fa6f7422bac2a4431/src/archive/tar/common.go#L175-L191). There are plans to rectify this limitation (#14472), but until the tar structures expose all of these sorts of details it will be useful to have a way to directly access the header and file data. This will allow users to add their own parsers for any extension they need which is not yet supported by the stdlib, without requiring them to fork archive/tar or replace it with a completely novel parser. - Validating tar files against a particular standard. If a protocol places requirements on the tar elements, users will need more direct access to the tar details to check the tar file against those requirements. For an example along these lines, see opencontainers/image-spec#342 and the extremely naive stub [here](https://github.com/wking/image-tools/commit/4f0b3c1bae97d212680dc73ee8b35b6e47b8a8c6#diff-201d416d61af62a246e5b552b6d42c92R60) (which currently doesn't work because `Next()` never returns the `x` typeflag headers). I think we can address both of these cases without breaking the existing API with two changes: 1.
Adding a `tar.Reader.NextRaw()` that works like `tar.Reader.Next()`, but always returns the next record regardless of its typeflag. 2. Adding `tar.Header.Bytes` exposing a `[]byte` slice of the raw header data, to allow clients to access header fields that are not yet mapped to other `tar.Header` attributes. Alternatively, this could be a method `tar.Header.Bytes()` (or an attribute exposing a `string` or whatever) to make the data clearly read-only. Folks who wish to control writing at a low level may want similar changes on the `tar.Writer` side, but I think that is orthogonal enough to punt to a separate issue (if anyone wants it). CC @dsnet.
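The eliding behavior that motivates this proposal can be seen with a short round trip using only the existing API (`NextRaw` itself is just a proposal at this point, so the sketch cannot demonstrate it): a file name too long for a plain ustar header forces the writer to emit a pax `x` extended-header record, yet the reader surfaces only a single merged header.

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
	"strings"
)

// roundTrip writes one file whose 124-byte, slash-free name cannot fit a
// plain ustar header, forcing the writer to emit a pax 'x' record, then
// reads the archive back with the high-level API.
func roundTrip() (name string, records int) {
	longName := strings.Repeat("a", 120) + ".txt"
	buf := new(bytes.Buffer)
	tw := tar.NewWriter(buf)
	if err := tw.WriteHeader(&tar.Header{Name: longName, Mode: 0600, Size: 4}); err != nil {
		panic(err)
	}
	if _, err := tw.Write([]byte("data")); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}

	tr := tar.NewReader(buf)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		records++
		name = hdr.Name // pax headers already merged; the 'x' record is hidden
	}
	return name, records
}

func main() {
	name, n := roundTrip()
	fmt.Printf("visible records: %d, name length: %d\n", n, len(name))
}
```

Even though the archive physically contains two header records, `Next()` reports exactly one, with the long name already restored; a `NextRaw()` would instead surface both, letting callers inspect the raw `x` record themselves.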
NeedsInvestigation,FeatureRequest
low
Major
186,062,357
go
net: isDomainName rejects valid domains
`isDomainName` is limited to hostname-compatible "preferred name" LDH labels, with an exception made for underscores per #1167. But [RFC 2181](https://tools.ietf.org/html/rfc2181#section-11) is clear about _any_ octet being valid in a DNS label: > Those [length] restrictions aside, any binary string whatever can be used as the label of any resource record. ### What version of Go are you using (`go version`)? go1.7.1 ### What operating system and processor architecture are you using (`go env`)? <details><summary>linux/amd64</summary> GOARCH="amd64" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build693750050=/tmp/go-build -gno-record-gcc-switches" CXX="g++" CGO_ENABLED="1" </details> ### What did you do? <details><summary>query uncommon domain names</summary> ```go package main import ( "fmt" "net" ) func main() { domains := []string{ `*.golang.org`, `\.\.\..golang.org`, `\065.golang.org`, `\000.golang.org`, } for _, domain := range domains { fmt.Printf("\n%s\n", domain) fmt.Println(net.LookupHost(domain)) } } ``` </details> ### What did you expect to see? <details><summary>successful DNS queries</summary> ``` *.golang.org [216.58.219.209 2607:f8b0:4006:80e::2011] <nil> \.\.\..golang.org [216.58.219.209 2607:f8b0:4006:80e::2011] <nil> \065.golang.org [216.58.219.209 2607:f8b0:4006:80e::2011] <nil> \000.golang.org [216.58.219.209 2607:f8b0:4006:80e::2011] <nil> ``` </details> ### What did you see instead? <details><summary>valid names were rejected</summary> ``` *.golang.org [] lookup *.golang.org: invalid domain name \.\.\..golang.org [] lookup \.\.\..golang.org: no such host \065.golang.org [216.58.219.209 2607:f8b0:4006:80e::2011] <nil> \000.golang.org [] lookup \000.golang.org: no such host ``` </details>
NeedsDecision
medium
Critical
186,067,478
rust
Generic Fn wrapper breaks without type annotations
I have a generic `FnWrapper` struct for closures that take a `&i32` as their only parameter, and a generic `apply` function that consumes such wrappers (not directly, but via `Foo` trait). ``` rust trait Foo { fn foo(self, value: &i32); } struct FnWrapper<F>(F); impl<F: FnOnce(&i32)> Foo for FnWrapper<F> { fn foo(self, value: &i32) { (self.0)(value); } } fn apply<F: Foo>(_: F) {} fn main() { // Error. apply(FnWrapper(|_| {})); // Ok with type annotations. apply(FnWrapper(|_: &i32| {})); } ``` [Run in playground](https://play.rust-lang.org/?gist=c3b691fefa7325d69fdfc3b54eaa3c19&version=nightly&backtrace=0). When invoking `apply` without any annotations, compiler complains about: ``` error[E0271]: type mismatch resolving `for<'r> <[closure@<anon>:17:21: 17:27] as std::ops::FnOnce<(&'r i32,)>>::Output == ()` --> <anon>:17:5 | 17 | apply(FnWrapper(|_| {})); | ^^^^^ expected bound lifetime parameter , found concrete lifetime | = note: concrete lifetime that was found is lifetime '_#0r = note: required because of the requirements on the impl of `Foo` for `FnWrapper<[closure@<anon>:17:21: 17:27]>` = note: required by `apply` error[E0281]: type mismatch: the type `[closure@<anon>:17:21: 17:27]` implements the trait `std::ops::FnOnce<(_,)>`, but the trait `for<'r> std::ops::FnOnce<(&'r i32,)>` is required (expected concrete lifetime, found bound lifetime parameter ) --> <anon>:17:5 | 17 | apply(FnWrapper(|_| {})); | ^^^^^ | = note: required because of the requirements on the impl of `Foo` for `FnWrapper<[closure@<anon>:17:21: 17:27]>` = note: required by `apply` ``` Simply adding type annotation to the closure parameter will make this compile, so it looks like a bug to me.
C-enhancement,A-closures,T-compiler,A-inference
low
Critical
186,103,688
go
x/net/html: Allow getting HTML attribute values without unescape
### What version of Go are you using?

go1.7.3

### What operating system and processor architecture are you using?

linux/amd64

### What did you do?

I'm working on an HTML sanitizer, where I want to access the original escaped HTML attribute values during the parse.

### What did you expect to see?

An ideal solution to me would be a flag on the `Tokenizer` indicating whether unescaping should be applied when calling `TagAttr()`. If you find this useful, I can implement it; if you have a better idea, please reply.

### What did you see instead?

As I see it, currently the only option is `Tokenizer.TagAttr()`, which automatically unescapes attribute values: https://github.com/golang/net/blob/master/html/token.go#L1158 . Escaping the unescaped attribute values again can be a workaround, but that re-escaping adds computational and structural complexity to the program, which I want to avoid if possible.

thanks, a
FeatureRequest
low
Minor
186,127,144
TypeScript
TypeScript refuses to emit files from node_modules with getEmitOutput()
Hi. In our company we use a workflow where all packages are written in pure TypeScript and we publish and consume them as-is, without any pre-compilation. I know that it's slower, but it's very convenient for developers, because they don't need to run several parallel `tsc --watch` processes.

We used `webpack` and `awesome-typescript-loader` for this and everything worked fine. But I see that TypeScript now refuses to emit files from the `node_modules` directory via `getEmitOutput()`, because of the `isSourceFileFromExternalLibrary` check [here](https://github.com/Microsoft/TypeScript/blob/master/src/compiler/utilities.ts#L2622).

Can we make this configurable?
Bug
low
Major
186,267,482
angular
Allow CanActivate[Child] Guards to be "blocking"
**I'm submitting a ...** (check one with "x")

```
[ ] bug report => search github for a similar issue or PR before submitting
[x] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```

**Current behavior**

CanActivate[Child] guards run, parent-first, in parallel, meaning there is no way to resolve data in a parent that may be needed in child guards, unless that data is managed explicitly elsewhere.

**Expected behavior**

A CanActivate[Child] guard's behavior could optionally be set to block child guards from running until it resolves.

**Minimal reproduction of the problem with instructions**

```javascript
{
  path: 'parent',
  canActivateChild: [FiveSecondObservableGuard], // Takes a long time
  children: [
    {
      path: 'child',
      component: ChildComponent, // waits a long time, as expected
      canActivate: [AuthGuard] // Runs immediately
    }
  ]
}
```

**What is the motivation / use case for changing the behavior?**

Loading server information/flags that are required in all guards can currently be done in two ways:

- in `APP_INITIALIZER`, which severely hinders UX choices/flexibility
- by making all requests for server data asynchronous, which makes the whole codebase needlessly complex

When a certain branch of routes is behind an AuthGuard that may need asynchronous loading, the components respect the guard, but the child guards themselves (which may require user identification to be complete) won't necessarily have access to that data (unless all data is made asynchronous, as mentioned above).

* **Angular version:** 2.0.X
* **Browser:** all
* **Language:** all
feature,area: router,feature: under consideration
medium
Critical
186,298,454
three.js
Proposed: Order Independent Transparency
##### Description of the problem

We (@Jozain and myself) are currently implementing order-independent transparency for ThreeJS for a project of ours. We'd like to contribute it back.

We are implementing the method of Morgan McGuire, Weighted-Blended Order Independent Transparency: http://casual-effects.blogspot.ca/2014/03/weighted-blended-order-independent.html

We are helped a lot by @arose's example OIT code, on which our work will be partially based, except that ours will be designed into the core of Three.JS: https://github.com/mrdoob/three.js/issues/4814

Here is a proposed example of how to enable this mode:

```javascript
var renderer = new THREE.WebGLRenderer( .... );

// add new "transparency" variable to renderer to select the transparency rendering mode
//renderer.transparency = THREE.PaintersTransparency; // the current ThreeJS method, the painter's algorithm
renderer.transparency = THREE.OrderIndependentTransparency; // the new OIT method
```

This mode will be implemented by a new WebGLOrderIndependentTransparency class that will be responsible for managing the buffers. It will create two additional RenderTargets that track the size of the current render buffer. The first will be the accumulation buffer, to which we will render all transparent objects; we will also render the objects to an alpha product buffer — thus two separate renders. This will be followed by a "resolve" stage that renders the transparent objects over top of the existing beauty render of the opaque objects.

This workflow will work with base WebGL and with multi-sample buffers as well. It can obviously be sped up using multiple render targets, but it shouldn't be that slow for non-huge numbers of transparent objects.

We are going to implement this now; we just wanted to give a heads up as to our design so that if you have feedback on it, we can incorporate it now.

/ping @WestLangley @arose @spidersharma03
Suggestion,Bounty
medium
Major
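The two-buffer design in the proposal above maps directly onto McGuire and Bavoil's resolve step. As a sketch of the math (my reading of the linked paper, not the proposers' exact implementation): the accumulation buffer holds weighted, alpha-scaled color sums, the "alpha product" buffer holds the total transmittance, and the resolve composites the two over the opaque beauty pass.

```latex
% Weighted, blended OIT resolve (after McGuire & Bavoil, 2013).
% C_i, a_i : color and alpha of the i-th transparent fragment
% w(z_i, a_i) : depth- and alpha-based weight function
% C_0 : the opaque (beauty) background color
C_{\mathrm{avg}} = \frac{\sum_i C_i\, a_i\, w(z_i, a_i)}{\sum_i a_i\, w(z_i, a_i)}
\qquad
T = \prod_i \left(1 - a_i\right)

% Resolve stage: composite the averaged transparent color over the beauty render.
C_{\mathrm{final}} = C_{\mathrm{avg}}\,(1 - T) + C_0\, T
```

Both sums and the product are order-independent, which is what lets the transparent objects be rendered in any order without sorting.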
186,302,884
go
x/tools/refactor/rename: doesn't work on Windows
Go 1.7, Windows 64-bit.

I hope I fixed the problem with moving packages (x/tools/refactor/rename) under Windows. bradfitz had commented out the tests for Windows; now they work. There was a tiny replace oversight — a [unix] package path was compared against [windows] file paths.

https://play.golang.org/p/76oMBOgedE

Here is the patch:

```patch
diff --git a/refactor/rename/mvpkg.go b/refactor/rename/mvpkg.go
index cd416c5..eed338a 100644
--- a/refactor/rename/mvpkg.go
+++ b/refactor/rename/mvpkg.go
@@ -80,7 +80,8 @@ func Move(ctxt *build.Context, from, to, moveTmpl string) error {
 		affectedPackages[r] = true
 	}
 	// Ensure directories have a trailing separator.
-	dest := strings.Replace(pkg,
+	dest := strings.Replace(
+		filepath.Join(pkg, ""),
 		filepath.Join(from, ""),
 		filepath.Join(to, ""), 1)
diff --git a/refactor/rename/mvpkg_test.go b/refactor/rename/mvpkg_test.go
index 674fe6c..ba2536a 100644
--- a/refactor/rename/mvpkg_test.go
+++ b/refactor/rename/mvpkg_test.go
@@ -12,7 +12,6 @@ import (
 	"path/filepath"
 	"reflect"
 	"regexp"
-	"runtime"
 	"strings"
 	"testing"
@@ -114,9 +113,9 @@ var _ foo.T
 }
 
 func TestMoves(t *testing.T) {
-	if runtime.GOOS == "windows" {
-		t.Skip("broken on Windows; see golang.org/issue/16384")
-	}
+	// if runtime.GOOS == "windows" {
+	// 	t.Skip("broken on Windows; see golang.org/issue/16384")
+	// }
 	tests := []struct {
 		ctxt     *build.Context
 		from, to string
```
OS-Windows,Tools,Refactoring
low
Critical