id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
393,061 | youtube-dl | Output file size with -s or -g | Was: http://bitbucket.org/rg3/youtube-dl/issue/141/
If the file size were output, it would be possible to script youtube-dl to test whether the current video on the hard drive is in the best possible quality.
It happens that YouTube re-encodes videos, sometimes at a higher resolution, so being able to extract the file size from youtube-dl would be very useful.
File size in bytes would be preferred.
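A minimal sketch of the check this issue asks for, assuming youtube-dl's `-j`/`--dump-json` output (added in later versions) exposes a `filesize` field; the field name and workflow are assumptions, not confirmed behaviour:

```python
import json

def needs_redownload(dump_json_line, local_size):
    """Compare the size reported in a youtube-dl JSON dump against a local file.

    Returns True when the remote file is larger, i.e. the video was likely
    re-encoded at higher quality. 'filesize' may be absent, in which case we
    conservatively keep the local copy.
    """
    info = json.loads(dump_json_line)
    remote_size = info.get("filesize")
    if remote_size is None:
        return False
    return remote_size > local_size

# Typical use: pipe the output of `youtube-dl -j URL` into this check.
line = '{"id": "dQw4w9WgXcQ", "filesize": 1048576}'
print(needs_redownload(line, 900000))
```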
| request | medium | Critical |
1,637,737 | youtube-dl | Create a php API and demo page | youtube-dl is often embedded by PHP applications. There should be an example PHP page included, with the following features:
- Enter URLs and execute youtube-dl (in the background)
- Show progress of youtube-dl instances
- Offer a way to abort youtube-dl
- Allow downloading files from the webhost's temporary directory
- Offer conversion to mp3/aac/ogg vorbis
This may require an additional output option in youtube-dl, for example the ability to output JSON sentences.
| php | low | Major |
1,639,054 | youtube-dl | integrate template "special sequences" in help output | like in http://rg3.github.com/youtube-dl/documentation.html#d7
| request | low | Minor |
1,789,251 | youtube-dl | Add a path option to --keep-video | Hey there,
I think it would be a great idea to have the ability to save the video and audio in a different path when using --keep-video and --extract-audio.
I know the video gets deleted and you added the --keep-video option, which is awesome. But wouldn't it be great to be able to add a path after it, like: --keep-video "../videos"
What do you think?
Cheers!
| request | low | Minor |
1,789,512 | youtube-dl | add support for picasaweb.google.com video clips | > /opt/local/bin/youtube-dl -t https://picasaweb.google.com/109059916371131820727/SommerhusTur08#5298873674471311698
> WARNING: Falling back on generic information extractor.
> [generic] SommerhusTur08#5298873674471311698: Downloading webpage
> [generic] SommerhusTur08#5298873674471311698: Extracting information
| site-support-request | low | Minor |
2,892,339 | rust | Add debug representation of trait objects | ### Updated description
Trait objects (`~T` and `@T` where `T` is a trait) are objects that hide their implementation and carry a virtual method dispatch table (i.e. vtable).
So, two things:
1. Debuggers will want to be able to bypass the abstraction barrier and see the hidden implementation.
2. They are also likely to want to be able to see the vtable.
- I am not sure how flexible the debug format is for gdb, but newer versions of gdb do support printing the vtable for C++ objects (via `info vtbl` or perhaps `info vtable`). It would be cool if we could massage our debug info so that gdb can just print out our vtables too, the same way.
### Original description
There is none.
| A-debuginfo,P-low,T-compiler,C-feature-request,A-trait-objects | medium | Critical |
3,134,637 | youtube-dl | --extract-audio progress | The following patch will send the encoding progress output from FFmpeg to youtube-dl's stdout (code taken from http://stackoverflow.com/a/2525493 )
``` diff
--- youtube-dl 2012-01-16 03:23:22.000000000 +0000
+++ youtube-dl.new 2012-02-08 01:02:28.366720567 +0000
@@ -4075,7 +4075,13 @@
cmd = ['ffmpeg', '-y', '-i', _encodeFilename(path), '-vn'] + acodec_opts + more_opts + ['--', _encodeFilename(out_path)]
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- stdout,stderr = p.communicate()
+ while True:
+ out = p.stderr.read(1)
+ if out == '' and p.poll() != None:
+ break
+ if out != '':
+ sys.stdout.write(out)
+ sys.stdout.flush()
except (IOError, OSError):
e = sys.exc_info()[1]
if isinstance(e, OSError) and e.errno == 2:
```
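For comparison, a hedged Python 3 rendering of the same idea, reading any binary stream byte by byte (a sketch, not the patch above; the stream and destination names are illustrative):

```python
import io
import sys

def relay_stream(src, dst=sys.stdout):
    """Copy a binary stream to a text destination one byte at a time.

    Reading single bytes keeps ffmpeg's carriage-return progress updates
    flowing instead of buffering until the process exits.
    """
    while True:
        chunk = src.read(1)
        if not chunk:  # b'' signals EOF (or a finished process)
            break
        dst.write(chunk.decode("utf-8", errors="replace"))
        dst.flush()

# Demo with an in-memory stream standing in for p.stderr:
relay_stream(io.BytesIO(b"frame=  10 fps= 25\r"))
```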
| request | low | Critical |
3,675,895 | rust | Rustdoc needs to handle conditional compilation somehow | We have some functions that are reimplemented per architecture, with docs only on one implementation. Rustdoc should ideally be able to merge these docs.
More seriously, though, a bunch of our libc hierarchy can't be viewed because it's conditionally compiled and the docs are built on Linux.
Probably rustdoc should extract its documentation before pruning the AST for conditional compilation, collapse things with the same name into a single document, then run resolve.
| T-rustdoc,E-hard,P-low,C-feature-request | medium | Critical |
3,974,663 | youtube-dl | Add option for start-point | Hi, it would be wonderful to have an option to download a video from a given starting point.
For example this http://www.youtube.com/watch?v=d-fQJHi6A-U video is about 9h long, but i only want to have the last hour.
Of course it would be nice to be able to just pass a time code, but start-byte or something similar would be sufficient.
Is something like that possible?
| request | low | Major |
4,069,734 | youtube-dl | Coursera site download support? | Is it possible to support downloads from www.coursera.org courses? Assuming of course, someone has a valid userid and login to the courses?
| site-support-request | low | Major |
4,118,030 | youtube-dl | --extract-audio and skipping video file download | Hello,
By default --extract-audio erases the video file. That is, the next time youtube-dl is run with --extract-audio, it downloads the video file again to convert it, although the audio file already exists.
Can you think of a way to implement a check that skips downloading the video file if the corresponding audio file already exists, without actually keeping the video file?
Regards
Robert
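Pending a built-in check, a wrapper script could do the existence test itself. A sketch, assuming the extensions listed are the audio formats being extracted to:

```python
import os

# Assumption: the audio formats --extract-audio may have produced.
AUDIO_EXTS = ("mp3", "m4a", "ogg", "wav")

def audio_already_extracted(base_path, exts=AUDIO_EXTS):
    """Return True if any audio sibling of `base_path` (without extension) exists.

    e.g. base_path='downloads/abc123' matches 'downloads/abc123.mp3'.
    A wrapper would call this before invoking youtube-dl for the video.
    """
    return any(os.path.exists("%s.%s" % (base_path, ext)) for ext in exts)
```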
| request | low | Minor |
4,119,425 | rust | rustdoc should be able to build documentation from compiled crates | rustdocs are built from attributes which are (should be) stored in the crate metadata. rustdoc should be able to construct the crate documentation from the crate metadata with nearly the same fidelity as from the original source.
**Update(2024, fmease)**: In today's terms, this essentially asks for the option[^1] to build docs based on `rustc_middle::ty` instead of the HIR in *all* cases, not just for inlined cross-crate re-exports and synthetic impls.
[^1]: Some t-rustdoc members (incl. myself, @fmease) are even inclined to drop "HIR cleaning" entirely once the important https://github.com/rust-lang/rust/labels/A-cross-crate-reexports issues have been addressed (which is non-trivial I have to add). In the long term, that move would greatly improve the correctness of rustdoc and make it easier to develop certain features. Namely anything that depends on more complex "type-based reasoning" (for which `rustc_middle::ty` is a lot better suited compared to the HIR). | T-rustdoc,C-enhancement,A-metadata,P-low | low | Major |
4,757,025 | youtube-dl | Multi-Threaded downloading for playlists | When downloading playlists youtube-dl could use multiple connections to download videos at the same time, right now my dl speed averages at 50k/s, and the full playlist is about 15G (defcon19).
I've been running 4 terminals with --playlist-start=N so far, and spaced them out.
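A wrapper can get the same effect without multiple terminals. A sketch using a thread pool, with the per-URL worker injectable (the `youtube-dl` invocation shown is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

def download_one(url):
    """One worker: shell out to youtube-dl for a single video URL."""
    return subprocess.call(["youtube-dl", url])

def download_many(urls, worker=download_one, parallelism=4):
    """Download several URLs concurrently; returns per-URL exit codes.

    The thread pool caps simultaneous connections at `parallelism`, which is
    what the manual multi-terminal approach approximates.
    """
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(worker, urls))
```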
| request | high | Critical |
5,086,030 | youtube-dl | Support "rental" type videos (was: ERROR: unable to extract uploader nickname) | I attempted to download bnAsVApebL8, and got the error "ERROR: unable to extract uploader nickname".
| request | medium | Critical |
7,068,674 | youtube-dl | Ability to download a part of the video on youtube | YouTube allows you to start/end the video where you want without having to download the full video
example:
http://youtube.com/embed/r3CYgOK7n1k?start=83&end=86
Only a part of the video is downloaded, not the whole. It would be nice if youtube-dl could only download a part of the video too.
| request | low | Minor |
8,082,113 | youtube-dl | Include video ID in ERROR: messages | Hi,
I was wondering if it is possible to include the video ID in the error messages, eg. turning lines like
_ERROR: YouTube said: This video contains content from UMG, who has blocked it in your country on copyright grounds._
to
_ERROR [videoIDhere]: YouTube said: This video contains content from UMG, who has blocked it in your country on copyright grounds._
to be able to know which video failed when letting youtube-dl work on a playlist.
I wrote a little script that utilizes youtube-dl and mplayer to playback videos, and if this was changed I could unblock a few videos on the fly (until then I would have to download the whole playlist again via a proxy).
Regards
| request | low | Critical |
9,760,364 | youtube-dl | Download range | Hi,
Is there any way to implement a download range? To download starting from 1m 30 to 1m 50?
Thanks
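A common workaround is to fetch the direct stream URL with `youtube-dl -g` and let ffmpeg cut the range. A sketch that only builds the command line (`-ss` and `-t` are standard ffmpeg options; paths are illustrative):

```python
def clip_command(stream_url, start, duration, out_path):
    """Build an ffmpeg invocation that grabs only part of a stream.

    `start` and `duration` are ffmpeg time specs such as '00:01:30' and
    '00:00:20'. Putting -ss before -i seeks the input rather than decoding
    from the beginning; -c copy avoids re-encoding.
    """
    return ["ffmpeg", "-ss", start, "-i", stream_url,
            "-t", duration, "-c", "copy", out_path]

# e.g. after `url=$(youtube-dl -g VIDEO_URL)`:
print(" ".join(clip_command("https://example.invalid/v.mp4",
                            "00:01:30", "00:00:20", "clip.mp4")))
```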
| request | high | Critical |
9,887,446 | youtube-dl | Add a --play option | A common case is just wanting to watch the downloaded video. For that, we should add an option to play the video.
For starters, it can be a simple post-processing action. However, I think the user experience would be better if we had an option to autoplay once a certain buffer (say, 2MB) has been written to disk. We may also tee the output to a pipe and pass _that_ to the video player, so that the player does not stop playback once it reaches the current number of downloaded bytes.
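youtube-dl can already write to stdout with `-o -`, so the tee-to-player idea can be sketched as a two-process pipeline (the choice of `mpv` as the player is an assumption):

```python
import subprocess

def player_pipeline(url, player="mpv"):
    """Command lines for streaming a download straight into a player,
    equivalent to: youtube-dl -o - URL | mpv -
    """
    return (["youtube-dl", "-o", "-", url], [player, "-"])

def play(url, player="mpv"):
    """Wire the two processes together and wait for the player to exit."""
    dl_cmd, play_cmd = player_pipeline(url, player)
    dl = subprocess.Popen(dl_cmd, stdout=subprocess.PIPE)
    viewer = subprocess.Popen(play_cmd, stdin=dl.stdout)
    dl.stdout.close()  # let the player own the pipe; SIGPIPE stops the download
    return viewer.wait()
```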
| postprocessors | low | Major |
10,173,479 | youtube-dl | Request rate limits | Another minor issue with the Justin.tv/Twitch.tv support. It seems the Justin.tv API has a rate limit, but it's not limited to a certain bandwidth, rather to a certain number of requests.
This creates a problem with huge channels. E.g., if I add a line to print the requested URLs:
```
$ youtube-dl -v http://www.twitch.tv/manvsgame
http://api.justin.tv/channel/archives/manvsgame.json?offset=0&limit=100
http://api.justin.tv/channel/archives/manvsgame.json?offset=100&limit=100
[snip]
http://api.justin.tv/channel/archives/manvsgame.json?offset=1600&limit=100
http://api.justin.tv/channel/archives/manvsgame.json?offset=1700&limit=100
ERROR: unable to download video info JSON: HTTP Error 400: Bad Request
```
Here's a simple sort-of-fix, that at least allows youtube-dl's --ignore-errors option to work: https://github.com/vasi/youtube-dl/compare/justintv-ratelimit . Ideally though, there should be some functionality within youtube-dl to allow limiting the request rate.
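Client-side throttling along those lines could be sketched as a minimal rate limiter enforcing a floor between requests (the interval value is illustrative):

```python
import time

class RateLimiter:
    """Enforce a minimum interval between successive API requests."""

    def __init__(self, min_interval):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self):
        """Sleep just long enough that calls are at least min_interval apart."""
        now = time.monotonic()
        delta = now - self._last
        if delta < self.min_interval:
            time.sleep(self.min_interval - delta)
        self._last = time.monotonic()

# e.g. call limiter.wait() before each archives.json request:
limiter = RateLimiter(1.0)  # at most one request per second
```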
| request,yt-dlp | low | Critical |
13,122,965 | three.js | Fog gradients / Different colors | Hello,
Is there any way to have gradient color for the fog or have the fog color change in certain areas of the fog, instead of fog being a single color?
Thanks,
Reece
| Enhancement | low | Major |
15,376,196 | youtube-dl | Return codes/ Exit statuses -- are they documented ? | Hello there!
I use youtube-dl regularly on my laptop, and I've seen it's also used in shk3/edx-downloader (a tool to download videos from the edx online university), which I forked tonight and started hacking on.
Now: one of the bugs I noticed is that it invokes youtube-dl via os.system, but if something bad happens it just lets the download fail and won't re-download the video.
I was then wondering if there are particular exit codes to check for.
Or can I just assume that anything different from zero is a bad return value?
Thanks in advance,
Emanuele Santoro
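Assuming the usual Unix convention (0 on success, nonzero on failure), a caller can treat any nonzero status as retryable. A sketch with the invocation injectable so the retry logic stands on its own:

```python
def run_with_retries(invoke, max_attempts=3):
    """Re-run a download until it exits 0, up to max_attempts times.

    `invoke` is any callable returning a process exit status, e.g.
    lambda: subprocess.call(["youtube-dl", url]).
    Returns 0 on success, or the last nonzero status on repeated failure.
    """
    status = None
    for _ in range(max_attempts):
        status = invoke()
        if status == 0:
            return 0
    return status
```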
| request | medium | Critical |
16,393,197 | youtube-dl | Feature request: save audio as mono | This option is lacking and I think it is very important for those who download lectures or talk shows from youtube.
Stereo is useless there, and saving a file as mono makes the file size two times smaller.
Also, some lecture videos have good sound on only one channel, while the other remains nearly silent, which is quite annoying when listening. This function would get rid of that, too.
I think it is definitely worth implementing.
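ffmpeg can already do the downmix with its standard `-ac 1` option, so a post-processing step could be sketched as a command builder (paths are illustrative):

```python
def mono_command(in_path, out_path):
    """Build an ffmpeg invocation that downmixes audio to a single channel.

    At the same per-channel bitrate this roughly halves the file size, and
    merges a one-sided lecture recording into an audible mono track.
    """
    return ["ffmpeg", "-i", in_path, "-ac", "1", out_path]

# e.g. run after extraction:
print(" ".join(mono_command("lecture.m4a", "lecture-mono.m4a")))
```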
| request,postprocessors | low | Major |
16,622,939 | youtube-dl | Renovate the current youtube-dl website. | I know it's not really an issue but the speed with which youtube-dl is growing i think we should also think about renovating the old website by adding new and better graphics. I am not that good with graphics and have little experience making websites otherwise i would have done this myself . However i can help anyone who goes forward to renovate the website.
| gh-pages | low | Major |
18,382,292 | react | Declarative API for installing global DOM event handlers | #284 reminded me that one thing I've sometimes wanted is to install a handler on window for `keypress` (for keyboard shortcuts) or `scroll`. Right now I can just do `window.addEventListener` in `componentDidMount` but since React is listening already, it would be nice if there were some way for me to intercept those events. (In addition, receiving normalized synthetic events is generally more useful.)
| Type: Feature Request,Component: DOM,Resolution: Backlog,Partner | high | Critical |
18,492,689 | youtube-dl | Show ffmpeg progress | Hi there,
It would be nice to show progress of ffmpeg while extracting audio from video files.
| request,postprocessors | low | Major |
18,638,747 | youtube-dl | request: merge downloaded parts into one file | I use youtube-dl to download youku (www.youku.com) videos, and it downloads all parts of one video from xxx.part00 to xxx.partXX. Every part is 3 minutes long, so I have to open every part in order to watch the whole video.
Could it merge all downloaded parts into one file?
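Until this is built in, the parts can be merged externally with ffmpeg's concat demuxer. A sketch that writes the list file the demuxer expects and builds the command (file names are illustrative):

```python
import glob
import os

def write_concat_list(part_glob, list_path):
    """Write the file list the ffmpeg concat demuxer expects, in part order.

    Sorting the glob results puts xxx.part00 before xxx.part01, etc.
    """
    parts = sorted(glob.glob(part_glob))
    with open(list_path, "w") as fh:
        for p in parts:
            fh.write("file '%s'\n" % os.path.abspath(p))
    return parts

def merge_command(list_path, out_path):
    """Concatenate without re-encoding; -safe 0 permits absolute paths."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", out_path]
```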
| request | low | Major |
18,666,770 | youtube-dl | 2-way auth to google | Does youtube-dl work with 2-way authentication to google, or application-specific passwords?
I tried both my account password and an app-specific generated password; both give 'unable to log in'.
| request | medium | Major |
19,036,506 | rust | Tracking issue for inherent associated types | This is a tracking issue for the ["inherent associated type" part of the "inherent associated items" part of the RFC 195 "associated items"](https://github.com/rust-lang/rfcs/blob/master/text/0195-associated-items.md#inherent-associated-items)
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however not meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Original description
(This code is written in ancient version of Rust, and is not meant to be used directly.)
When developing a type-parametric impl, I found myself writing code somewhat like this:
``` rust
struct Grammar<T, NT> { ... }
impl<T:Clone,NT:Eq+Hash+Clone+Primable> Grammar<T,NT> {
fn to_nt_map(&self) -> HashMap<NT, ~[Prod<T,NT>]>
{
type Rules = ~[Prod<T,NT>];
type NTMap = HashMap<NT, Rules>;
...
}
fn from_nt_map(start: &NT, rules: &HashMap<NT, ~[Prod<T,NT>]>)
-> Grammar<T,NT>
{
type Rules = ~[Prod<T,NT>];
type NTMap = HashMap<NT, Rules>;
...
}
pub fn eliminate_left_recursion(&self) -> Grammar<T,NT>
{
...
}
...
}
```
I cannot make a `type` definition for `HashMap<NT, ~[Prod<T,NT>]>` outside of the `impl`, because those free references to `NT` and `T` would then be unbound.
And I cannot put a `type` definition within the impl, even an impl for a struct such as this (it simply does not parse with the current grammar).
(Being able to put type definitions within impls will probably arise naturally from #5033, when we get around to that. But that is nonetheless a separate issue from this: Associated Types (and Associated Items) are about enabling certain patterns for the _user_ of a trait, while this ticket describes a convenience for the _implementor_ of a struct.)
### Steps
<!--
Include each step required to complete the feature. Typically this is a PR
implementing a feature, followed by a PR that stabilises the feature. However
for larger features an implementation could be broken up into multiple PRs.
-->
- [ ] Implement the RFC
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised.
-->
### Implementation history
* #82516
* #103621
* #104348
* #105224
* #105315
* #105768
* #105961
* #109410
* #110945
* #111486
* #114594
* #118118
* #118262 | B-RFC-approved,A-associated-items,T-lang,C-tracking-issue,F-inherent_associated_types,S-tracking-impl-incomplete | high | Critical |
19,160,832 | youtube-dl | feature request: download all videos of Xvideos user | On YouTube it's possible to download all videos of $USER. I wish for the same for Xvideos.
http://www.xvideos.com/profiles/$USER#_tabVideos for example http://www.xvideos.com/profiles/riyadutta#_tabVideos would download all 6 uploaded videos.
| request | low | Minor |
19,720,967 | rust | type-check lang items | These aren't currently well-defined by the language. If you define anything with a somewhat matching LLVM signature, it will compile. Not only does it not care about the type signatures matching what it expects, there is no handling for these being defined on the wrong types of AST nodes.
| A-frontend,A-type-system,I-crash,E-hard,C-enhancement,P-low,T-compiler,A-lang-item,glacier,T-types,requires-internal-features | medium | Major |
20,009,907 | youtube-dl | Faster Downloads | I love youtube-dl! It's amazing... but when trying to download larger files (1 hour or longer) it takes a decent amount of time. I've heard of aria2 and some wrappers you can use, but they don't seem to have been updated in quite some time. Is there anything available to make the downloads faster without losing quality?
| request | low | Major |
20,075,796 | youtube-dl | Stop operation after "dateafter" videos are downloaded | When I use the "dateafter" command to download videos after a certain date, it starts to download the most recent video and then progresses backwards in time to download the first video after the "dateafter" date.
However, it then continues to go through all the videos before the "dateafter" date (although it doesn't download them). I'd like the operation just to stop after downloading the first video after the "dateafter" date.
Would this be possible?
| request | low | Major |
20,577,764 | youtube-dl | Netflix | Does this utility work with Netflix downloading?
I got the error: "ERROR: Invalid URL"
Thanks
| site-support-request | low | Critical |
20,630,703 | youtube-dl | Add support for yle | https://github.com/aajanki/yle-dl by @aajanki may provide what is needed
```
$ ./youtube-dl --verbose http://areena.yle.fi/radio/2048905
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'http://areena.yle.fi/radio/2048905']
[debug] youtube-dl version 2013.10.07
[debug] Python version 2.6.6 - Linux-3.2.46-grbfs-kapsi-x86_64-with-debian-6.0.7
[debug] Proxy map: {}
WARNING: Falling back on generic information extractor.
[generic] 2048905: Downloading webpage
[generic] 2048905: Extracting information
ERROR: Unsupported URL: http://areena.yle.fi/radio/2048905; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
  File "./youtube-dl/youtube_dl/YoutubeDL.py", line 348, in extract_info
    ie_result = ie.extract(url)
  File "./youtube-dl/youtube_dl/extractor/common.py", line 117, in extract
    return self._real_extract(url)
  File "./youtube-dl/youtube_dl/extractor/generic.py", line 152, in _real_extract
    raise ExtractorError(u'Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: http://areena.yle.fi/radio/2048905; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
| site-support-request | low | Critical |
20,786,644 | nvm | NVM in fish | How do I install NVM for the fish shell?
| shell: fish | high | Critical |
21,338,885 | youtube-dl | Error report on "https://www.culturall.com/ticket/ists/static/test_stream/1.mc" | Hi,
I got the following error output, Win8 Pro 64Bit:
```
E:\>youtube-dl --verbose https://www.culturall.com/ticket/ists/static/test_stream/1.mc
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'https://www.culturall.com/ticket/ists/static/test_stream/1.mc']
[debug] youtube-dl version 2013.10.18.2
[debug] Python version 2.7.5 - Windows-8-6.2.9200
[debug] Proxy map: {}
WARNING: Falling back on generic information extractor.
[generic] 1.mc: Downloading webpage
[generic] 1.mc: Extracting information
ERROR: Unsupported URL: https://www.culturall.com/ticket/ists/static/test_stream/1.mc; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
  File "youtube_dl\YoutubeDL.pyo", line 353, in extract_info
  File "youtube_dl\extractor\common.pyo", line 117, in extract
  File "youtube_dl\extractor\generic.pyo", line 180, in _real_extract
ExtractorError: Unsupported URL: https://www.culturall.com/ticket/ists/static/test_stream/1.mc; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
Kind regards, Hans
| site-support-request | low | Critical |
21,424,635 | youtube-dl | Download music stream from Amazon | It would be nice to be able to download the music here:
```
pierre@Rudloff:~$ youtube-dl "http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1001428681#" -v
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1001428681#', '-v']
[debug] youtube-dl version 2013.10.18.2
[debug] Python version 2.7.3 - Linux-3.2.0-4-amd64-x86_64-with-debian-7.2
[debug] Proxy map: {}
[redirect] Following redirect to http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1001428681
WARNING: Falling back on generic information extractor.
[generic] feature.html?ie=UTF8&docId=1001428681: Downloading webpage
[generic] feature.html?ie=UTF8&docId=1001428681: Extracting information
ERROR: Unsupported URL: http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1001428681; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "/usr/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 353, in extract_info
ie_result = ie.extract(url)
File "/usr/bin/youtube-dl/youtube_dl/extractor/common.py", line 117, in extract
return self._real_extract(url)
File "/usr/bin/youtube-dl/youtube_dl/extractor/generic.py", line 180, in _real_extract
raise ExtractorError(u'Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1001428681; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
| site-support-request,account-needed | low | Critical |
21,631,798 | rust | Audit for binary IEEE 754-2008 compliance on relevant platforms | @thestinger filed https://github.com/mozilla/rust/issues/9987, and I thought we should get the ball rolling on documenting and testing IEEE 754-2008 compliance for Rust on non-embedded platforms.
I don't think there is any formal decision that we should implement IEEE 754-2008, but it would _really_ make sense for many applications of Rust.
- Scientific applications
- Games with synchronized or reproducible simulations (multiplayer, recordings)
- JS interpreters
- Anything that needs to be portable
The (binary part of the) standard covers
- Floating-point data format (for interchange and for computation)
- Basic operations (add, sub, mul, div, fma, sqrt, compare, &c.)
- Integer to floating-point conversion
- Floating-point to floating-point conversion
- Floating-point to string conversion
- Floating-point exceptions and handling (NaNs, exceptions, flags, &c.)
So all of this would need tests and documentation. My plan is to compile a suite of tests by slowly going through the standard page by page and writing the basic conformance tests, then do another (even slower) pass that compiles another list of accuracy tests.
@bjz, @thestinger, @pcwalton, @catamorphism, @graydon and anyone interested: Any inputs on this? Is it a good idea at this point in time?
| E-hard,C-enhancement,T-libs-api,A-floating-point | low | Major |
21,815,401 | youtube-dl | Create --max-actual-downloads | Hi
The --max-downloads option does not take into account whether the video has actually been downloaded now or not.
For backward compatibility it is better to create a new option rather than change the existing one.
In YoutubeDL.py, line 546 should be moved to line 694 if the new option is active.
https://github.com/rg3/youtube-dl/blob/7193498811cb17a66ca57569a8588adb28ba2b27/youtube_dl/YoutubeDL.py#L546
https://github.com/rg3/youtube-dl/blob/7193498811cb17a66ca57569a8588adb28ba2b27/youtube_dl/YoutubeDL.py#L694
with the same indentation as line 695
| request | low | Major |
22,381,373 | rust | Re-enable debuginfo tests on Android | Some debuginfo tests were disabled when running on Android when we added support for running tests on the Android bot (#9120). We should try to enable all debuginfo on Android. (See the results of https://cs.github.com/rust-lang/rust?q=ignore-android%20path%3A%2F%5Esrc%5C%2Ftest%5C%2Fdebuginfo%5C%2F%2F for which tests are disabled)
Debugging with gdb on android target works differently from debugging with gdb on linux.
There are two options however it is not high priority
- we can modify android gdb to work like linux gdb
- we can modify debug-info to work with android gdb
| A-testsuite,A-debuginfo,C-enhancement,O-android,T-compiler | low | Critical |
22,764,098 | youtube-dl | Unable to download purchased movie | Attempted to download a purchased movie to see if I could download it via youtube-dl
Unfortunately, the following trace occurred with the run args:
```
$ youtube-dl --verbose -ct -u '[email protected]' -p 'password' 'z4l7OZLIh9w'
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', '-ct', '-u', '<PRIVATE>', '-p', '<PRIVATE>', 'z4l7OZLIh9w']
[debug] youtube-dl version 2013.11.15.1
[debug] Python version 2.7.3 - Linux-3.2.0-55-generic-x86_64-with-Ubuntu-12.04-precise
[debug] Proxy map: {}
[youtube] Setting language
[youtube] Logging in
[youtube] Confirming age
[youtube] z4l7OZLIh9w: Downloading video webpage
[youtube] z4l7OZLIh9w: Downloading video info webpage
[youtube] z4l7OZLIh9w: Extracting video information
[youtube] z4l7OZLIh9w: Encrypted signatures detected.
[youtube] encrypted signature length 85 (40.44), itag 60, html5 player vflqSl9GX
[youtube] encrypted signature length 85 (40.44), itag 52, html5 player vflqSl9GX
ERROR: no known formats available for video; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "/usr/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 348, in extract_info
ie_result = ie.extract(url)
File "/usr/bin/youtube-dl/youtube_dl/extractor/common.py", line 125, in extract
return self._real_extract(url)
File "/usr/bin/youtube-dl/youtube_dl/extractor/youtube.py", line 1452, in _real_extract
video_url_list = self._get_video_url_list(url_map)
File "/usr/bin/youtube-dl/youtube_dl/extractor/youtube.py", line 1189, in _get_video_url_list
raise ExtractorError(u'no known formats available for video')
ExtractorError: no known formats available for video; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
Also tried specifying mp4. I'll try other formats.
| account-needed | medium | Critical |
23,171,384 | youtube-dl | Use the m4v extension for Dash video files | When downloading files I usually add a bunch of types, so I'm sure I'm at least downloading something.
youtube-dl --no-overwrites --restrict-filenames --ignore-errors -o "./%(uploader_id)s/%(upload_date)s %(uploader)s - %(title)s.%(ext)s" -f 137/136/135/134/133/38/37/22/18/17
youtube-dl --no-overwrites --restrict-filenames --ignore-errors -o "./%(uploader_id)s/%(upload_date)s %(uploader)s - %(title)s.%(ext)s" -f 141/140/139
The problem I'm running into is that I have to join the mp4/m4a Dash files to get a complete video; in most cases this will produce a video-only mp4 and an audio m4a.
In some cases this produces a lower-quality "normal" mp4 file (when Dash isn't available), and I have no easy way to recognize Dash files.
So a simple change would be to change the extension of the Dash video files to mp4v.
P.S. My original suggestion would be m4v, since we already have m4a, but it seems Apple uses this format.
| request | low | Critical |
23,193,319 | youtube-dl | Optimize other video selection flags (--date, --datebefore, --dateafter, etc) | Issue #1745 was solved the other day. But to go along with it, it would be nice to optimize these video selections as well:
--match-title
--reject-title
--date
--datebefore
--dateafter
And maybe all the video selection options. ( https://github.com/rg3/youtube-dl#video-selection )
At the moment, if a youtube video from a playlist is outside a date range or has rejected title regex, the video and info pages still load for it anyway, and then afterward skip over the video since it doesn't match the filters. By adding the filter check before the info page loading, the info page loading could be skipped entirely. This is assuming that the data required for these filters is available prior to loading the individual video info pages.
| request | low | Minor |
23,219,991 | bitcoin | Exposed / Compromised (must sweep) / Used / Imported (?) flags on addresses | It was pointed out to me that some people have made the mistake of sending coins to addresses which were generated by their wallet prior to encryption, simply because they're at the top of their address list and they were engaging in the bad practice of reusing addresses.
In the git version the receive interface does a much better job of discouraging reuse, but users still can, and when they do they receive no guidance about which of their prior addresses were exposed by being left unencrypted.
Might it be reasonable to have a set of flags that can show up in the list signifying addresses which are "exposed" meaning that they date from a prior encryption key?
Likewise, I know of at least one incident where a user randomly picked a key they'd imported from a third party as a key to receive funds and the funds were stolen, a flag indicating that the key was imported would have been helpful.
Along these lines, if we ever get around to having a auto-sweeping feature, might it make sense to have a "this wallet is compromised" button that rekeys, forces a backup, and then marks all existing addresses as compromised which flags them for automatic sweeping?
| Feature,GUI | low | Major |
23,288,315 | youtube-dl | Doesn't download Aparat user videos | youtube-dl http://www.aparat.com/zoomit
WARNING: Falling back on generic information extractor.
[generic] zoomit: Downloading webpage
[generic] zoomit: Extracting information
[download] Destination: آپارات - ویدیو های زومیت - دنیای فناوری-32b8ab97320012d424786286a12683d2836225.apt
[download] 100% of 2.07MiB in 00:43
| request | low | Minor |
23,306,098 | rust | `-Zunpretty=identified` runs too early | ```rs
fn main() { println!("hello world"); }
```
`rustc -Zunpretty=identified`:
```
fn main() { println!("hello world"); } /* block 4294967040 */ /* 4294967040 */
```
Clearly it's useless if the `NodeId`s are all `NodeId::MAX_AS_U32` (`4294967040`, aka `-256_i32 as u32`). I think the flag may just have to be removed in favour of `-Zunpretty=expanded,identified`, since the `NodeId`s are only assigned after macro expansion, and only correspond to elements of that AST, not the pre-expansion one.
| A-pretty,C-enhancement,T-compiler,requires-nightly | low | Major |
23,551,050 | rust | write!(wr,"foo") is 10% to 72% slower than wr.write("foo".as_bytes()) | This example demonstrates that the trivial case of `write!(wr, "foo")` is much slower than calling `wr.write("foo".as_bytes())`:
```
extern mod extra;
use std::io::mem::MemWriter;
use extra::test::BenchHarness;
#[bench]
fn bench_write_value(bh: &mut BenchHarness) {
bh.iter(|| {
let mut mem = MemWriter::new();
for _ in range(0, 1000) {
mem.write("abc".as_bytes());
}
});
}
#[bench]
fn bench_write_ref(bh: &mut BenchHarness) {
bh.iter(|| {
let mut mem = MemWriter::new();
let wr = &mut mem as &mut Writer;
for _ in range(0, 1000) {
wr.write("abc".as_bytes());
}
});
}
#[bench]
fn bench_write_macro1(bh: &mut BenchHarness) {
bh.iter(|| {
let mut mem = MemWriter::new();
let wr = &mut mem as &mut Writer;
for _ in range(0, 1000) {
write!(wr, "abc");
}
});
}
#[bench]
fn bench_write_macro2(bh: &mut BenchHarness) {
bh.iter(|| {
let mut mem = MemWriter::new();
let wr = &mut mem as &mut Writer;
for _ in range(0, 1000) {
write!(wr, "{}", "abc");
}
});
}
```
With no optimizations:
```
running 4 tests
test bench_write_macro1 ... bench: 280153 ns/iter (+/- 73615)
test bench_write_macro2 ... bench: 322462 ns/iter (+/- 24886)
test bench_write_ref ... bench: 79974 ns/iter (+/- 3850)
test bench_write_value ... bench: 78709 ns/iter (+/- 4003)
test result: ok. 0 passed; 0 failed; 0 ignored; 4 measured
```
With `--opt-level=3`:
```
running 4 tests
test bench_write_macro1 ... bench: 62397 ns/iter (+/- 5485)
test bench_write_macro2 ... bench: 80203 ns/iter (+/- 3355)
test bench_write_ref ... bench: 55275 ns/iter (+/- 5156)
test bench_write_value ... bench: 56273 ns/iter (+/- 7591)
test result: ok. 0 passed; 0 failed; 0 ignored; 4 measured
```
Is there anything we can do to improve this? I can think of a couple options, but I bet there are more:
- Special-case no-argument `write!` to compile down into `wr.write("foo".as_bytes())`. If we go this route, it'd be nice to also handle a series of string arguments, e.g. `write!(wr, "foo {} {}", "bar", "baz")`.
- Revive `wr.write_str("foo")`. From what I understand, that's being blocked on #6164.
- Figure out why llvm isn't able to optimize away the `write!` overhead. Are there functions that should be getting inlined but are not? My scattershot attempt didn't get any results.
| I-slow,C-enhancement,T-libs-api,T-compiler,A-fmt | medium | Critical |
23,636,864 | youtube-dl | Skills Matter Support | 1) Single:
http://skillsmatter.com/podcast/scala/david-pollak
2) Conference
http://skillsmatter.com/event/scala/scala-exchange-2013
| site-support-request,account-needed | low | Minor |
24,364,506 | youtube-dl | Lanyrd Support | 1) Speaker
http://lanyrd.com/profile/leaverou/
2) Conference
http://lanyrd.com/2013/lrug-december/
Video - http://lanyrd.com/2013/lrug-december/video/
3) Topic
http://lanyrd.com/speakers/web-development/
NB: Hosting is elsewhere: YouTube, Vimeo, etc.
| site-support-request | low | Minor |
24,397,121 | youtube-dl | OCW Sites | media on - http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/ax-b-and-the-four-subspaces/the-geometry-of-linear-equations/
Course on - http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/
Above example is for MIT. Many other university sites have the same format.
| site-support-request | low | Minor |
24,668,251 | nvm | Move most logic out of sourced bash function | I fear putting a lot of functionality into a bash function. It's a very tricky environment and people like to use nvm in places like zsh that are mostly compatible.
The only thing we _need_ a bash function for is modifying the current $PATH. Everything else has permanent side effects (like downloading and installing node, copying packages, creating symlinks, etc) and can be done in a subshell where you can choose bash (or node or python) in your shebang line.
For maintainability, I think it would be nice to move as much logic as possible out of the main nvm.sh into standalone scripts that are called from the main function.
Comments?
| feature requests | low | Major |
24,796,798 | nvm | reinstall-packages doesn't work with non-npm modules | I installed a module `jakl/rbwhat`. When copying packages, I got this error:
```
npm http GET https://registry.npmjs.org/rbwhat/1.0.3
npm http 404 https://registry.npmjs.org/rbwhat/1.0.3
npm ERR! 404 'rbwhat' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it
```
It looks like the `copy-packages` code might be assuming everything comes from npm. It should be smarter and actually check each package.json: for one, if there are no install scripts, the module can just be copied directly to the new location. In addition, it should ideally be able to figure out where the module was installed from.
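The per-module check described above could look roughly like this (sketched in Python for brevity, although nvm itself is shell; assumes npm's standard `preinstall`/`install`/`postinstall` lifecycle script names):

```python
import json

def can_copy_directly(pkg_json_path):
    """A module without npm install lifecycle scripts can be copied
    verbatim between node versions instead of reinstalled from a registry."""
    with open(pkg_json_path) as f:
        pkg = json.load(f)
    scripts = pkg.get('scripts', {})
    return not any(k in scripts
                   for k in ('preinstall', 'install', 'postinstall'))
```

Modules failing the check would fall back to a real `npm install` from their recorded source.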
| bugs,pull request wanted | low | Critical |
24,898,118 | youtube-dl | [espace.mu] podcast | Would it be possible to download podcasts from espace.mu?
http://www.espace.mu/console/reecoute/medianet/6980484/kflorence/iciflorence
| site-support-request | low | Minor |
25,063,162 | youtube-dl | Add support for iTunes U (was: ERROR: Unsupported URL: https://itunes.apple.com/...) | ```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'https://itunes.apple.com/us/itunes-u/uc-davis-symphony-orchestra/id403834767']
[debug] Encodings: locale 'UTF-8', fs 'UTF-8', out 'UTF-8', pref: 'UTF-8'
[debug] youtube-dl version 2014.01.03
[debug] Python version 2.6.6 - Linux-2.6.32-5-xen-686-i686-with-debian-6.0.4
[debug] Proxy map: {}
[generic] id403834767: Requesting header
WARNING: Falling back on generic information extractor.
[generic] id403834767: Downloading webpage
[generic] id403834767: Extracting information
ERROR: Unsupported URL: https://itunes.apple.com/us/itunes-u/uc-davis-symphony-orchestra/id403834767; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "/home/vi/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 487, in extract_info
ie_result = ie.extract(url)
File "/home/vi/bin/youtube-dl/youtube_dl/extractor/common.py", line 150, in extract
return self._real_extract(url)
File "/home/vi/bin/youtube-dl/youtube_dl/extractor/generic.py", line 332, in _real_extract
raise ExtractorError(u'Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: https://itunes.apple.com/us/itunes-u/uc-davis-symphony-orchestra/id403834767; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
| site-support-request | low | Critical |
25,145,593 | rust | Weird borrowck issues with &mut &mut [u8] | I ran into some bizarre borrowck issues when trying to write a function that would modify a `&mut &mut [u8]`. The goal here was to make it simple to shove data into a `[u8, ..512]` by virtue of copying data into the corresponding `&mut [u8]` and then modifying the slice. I tried three different approaches, and they all had issues. The various errors are commented into the source.
(All 3 approaches require `use std::vec::MutableCloneableVector`)
The first approach tried to define a function `append(buf: &mut &mut [u8], v: &[u8])` that would copy the data into `buf` and then modify `*buf` to contain the new slice. This ran into an odd lifetime error where it thinks that I can't say `*buf = buf.slice_from_mut(len)`. It claims the lifetime of `buf` is too short, but I don't see why that matters. I'm actually re-slicing `*buf` and putting the result back into the same location that held the original slice, so I would expect it to have the same lifetime and, therefore, be valid.
``` Rust
fn one() {
let mut line = [0u8, ..512];
let mut buf = line.as_mut_slice();
fn append(buf: &mut &mut [u8], v: &[u8]) {
let len = buf.clone_from_slice(v);
*buf = buf.slice_from_mut(len);
// error: lifetime of `buf` is too short to guarantee its contents can be safely reborrowed
// ^~~
// note: `buf` would have to be valid for the anonymous lifetime #2 defined on the block at 7:45...
// note: ...but `buf` is only valid for the anonymous lifetime #1 defined on the block at 7:45
}
append(&mut buf, b"test");
append(&mut buf, b"foo");
}
```
The second approach was to give both levels of indirection the same lifetime, e.g. `append<'a>(&'a mut &'a mut [u8], v: &[u8])` to try and squelch the error. This didn't work because I wasn't allowed to reassign back to `*buf`, as it considered `buf.mut_slice_from(len)` to borrow it. I assume the borrow check is tied to the lifetime, which is shared at both levels, so borrowck thinks `buf` is borrowed when it's really `*buf` that's borrowed.
Curiously, it also decided I couldn't use `&mut buf` twice in the calling code, as it seemed to think it was already borrowed.
``` Rust
fn two() {
let mut line = [0u8, ..512];
let mut buf = line.as_mut_slice();
fn append<'a>(buf: &'a mut &'a mut [u8], v: &[u8]) {
let len = buf.copy_from(v);
*buf = buf.mut_slice_from(len);
// error: cannot assign to `*buf` because it is borrowed
// ^~~~
// note: borrow of `*buf` occurs here
// ^~~
}
append(&mut buf, bytes!("test"));
append(&mut buf, bytes!("foo"))
// error: cannot borrow `buf` as mutable more than once at a time
// ^~~~~~~~
// note: previous borrow of `buf` as mutable occurs here
// ^~~~~~~~
}
```
The third approach was to ditch `&mut &mut [u8]` entirely and try capturing `buf` in a closure instead. This gave some odd errors. First off, it kept referencing `(*buf)[]`, and I don't know what it meant by that. Also, the first error here indicates that a borrow on a _later_ line was blocking a borrow on an _earlier_ line, which is quite bizarre. How can `buf` have been borrowed already when the later line is, well, later? It also considered the same reference to `buf` to consist of multiple mutable borrows.
``` Rust
fn three() {
let mut line = [0u8, ..512];
let mut buf = line.as_mut_slice();
let append = |v: &[u8]| {
let len = buf.copy_from(v);
// error: cannot borrow `(*buf)[]` as mutable more than once at a time
// ^~~
buf = buf.mut_slice_from(len);
// note: previous borrow of `(*buf)[]` as mutable occurs here
// ^~~
// error: cannot borrow `(*buf)[]` as mutable more than once at a time
// ^~~
// note: previous borrow of `(*buf)[]` as mutable occurs here
// ^~~
// error: cannot assign to `buf` because it is borrowed
// ^~~
// note: borrow of `buf` occurs here
// ^~~
};
append(bytes!("test"));
append(bytes!("foo"));
}
```
In the end, I couldn't figure out any way to accomplish what I wanted. It seems to me the first approach should have worked.
/cc @nikomatsakis
| A-borrow-checker,T-compiler,C-bug | low | Critical |
25,807,391 | youtube-dl | Support catchall for sub language without country | When the `--sub-lang` switch is specified with a value of `en` it doesn't download subs labelled with a country code like `en-US`. See this session log:
```
$ youtube-dl -U
Updating to version 2014.01.17.2...
Updated youtube-dl. Restart youtube-dl to use the new version.
$ youtube-dl --skip-download --write-sub --sub-lang en WR9-GRQfEkU
[youtube] Setting language
[youtube] WR9-GRQfEkU: Downloading webpage
[youtube] WR9-GRQfEkU: Downloading video info webpage
[youtube] WR9-GRQfEkU: Extracting video information
WARNING: no closed captions found in the specified language "en"
[youtube] WR9-GRQfEkU: Encrypted signatures detected.
$ youtube-dl --skip-download --list-subs WR9-GRQfEkU
[youtube] Setting language
[youtube] WR9-GRQfEkU: Downloading webpage
[youtube] WR9-GRQfEkU: Downloading video info webpage
[youtube] WR9-GRQfEkU: Extracting video information
[youtube] WR9-GRQfEkU: Looking for automatic captions
[youtube] WR9-GRQfEkU: Downloading XML
WARNING: Video doesn't have automatic captions
[youtube] WR9-GRQfEkU: Available subtitles for video: en-US
[youtube] WR9-GRQfEkU: Available automatic captions for video:
$
```
It would be great if, failing a perfect match, the program downloaded the first entry that partially matches the user-specified value, changing the warning to:
```
WARNING: no closed captions found in the specified language "en", found partial match for "en-US"
```
Or maybe add the partial-match behaviour as an additional switch if it breaks behaviour people depend on (i.e. specifying a value and failing even when partial matches are found).
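The proposed fallback rule could be sketched as (illustrative only, not youtube-dl's actual code):

```python
def pick_subtitle_lang(requested, available):
    """Exact match first; otherwise the first available language whose
    primary subtag matches, so --sub-lang en can pick up en-US."""
    if requested in available:
        return requested
    prefix = requested.split('-')[0]
    for lang in available:
        if lang.split('-')[0] == prefix:
            return lang
    return None
```

With `available = ['en-US']`, a request for `en` would resolve to `en-US` and trigger the suggested warning instead of failing.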
| request | low | Minor |
25,876,528 | three.js | CompressedTexture: Support `flipY` in shaders. | I convert a simple sheep model use convert_obj_three.py to .json ,
load it use THREE.JSONLoader(),
and use THREE.ImageUtils.loadCompressedTexture(texturepath) to add the texture,
but not reach my results,
here is:
![image](https://f.cloud.github.com/assets/6129694/1949074/30cad3ca-80dd-11e3-94ad-458e0113bb6a.png)
![image](https://f.cloud.github.com/assets/6129694/1949075/3990ce38-80dd-11e3-8fa5-dd62c70867ba.png)
here is the code : https://github.com/pikaya/lei ,
I would be very grateful if someone could help me!
here is the result of DDS converted to PNG :
![image](https://f.cloud.github.com/assets/6129694/1951125/e1e79828-816e-11e3-8484-4128a9704870.png)
| Enhancement | low | Major |
26,243,575 | youtube-dl | [ustream] add support for live streams (was: ustream doesn't work) | When trying to download a live video from ustream youtube-dl doesn't get past "Downloading webpage" (it keeps repeating it). Adding a common user-agent didn't help.
| request,site-support-request | low | Major |
26,575,524 | rust | Should types that contain zero-length vectors of themselves be allowed? | In the discussion of pull request #11839, which aims to check the representability of structs and enums properly in `typeck`, @huonw pointed out that the semantics for types that directly contain zero-length vectors of themselves are potentially still undecided.
Consider for example ([play](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=6e390923e5e7c97fcc6c5757953ee64b)):
```rust
struct Foo {
recur: [Foo; 0],
}
fn main() { }
```
If `typeck` allows `enum Foo { A([Foo; 0]) }` or
`struct Bar { x: [Bar; 0] }` then there is an infinite recursion + stack overflow in `trans::adt::represent_type`, so I will amend #11839 to disallow these cases and add a FIXME referencing this issue.
To me, it seems more consistent to allow any zero-length vector than to allow only some, but the only use case I can think of is that it may make some macros simpler to write.
| A-type-system,T-lang,C-bug,A-zst,A-array,T-types | low | Critical |
26,588,588 | youtube-dl | Add support for http://www.streamago.tv/ | Thank you very much!
| site-support-request | low | Minor |
26,622,032 | rust | Link using the linker directly | Right now we link through a C compiler, but Rust should not depend on having a C compiler available nor expose details of cc linking semantics.
I know there have been other issues on this but I can't find them.
| A-linkage,T-compiler,C-feature-request | medium | Major |
26,735,096 | youtube-dl | How to check when merging of formats gets completed? | When importing `youtube-dl` we have `progress_hook` to get download status, but how to know when format merging has finished?
| request | low | Minor |
26,856,070 | youtube-dl | mtime not being set on audio files in certain cases | ### Description
I used
``` sh
youtube-dl -ci -o "%(title)s - %(id)s.%(ext)s" --extract-audio --audio-format "mp3" --audio-quality 0 --keep-video "some playlist"
```
to download videos from a playlist + extract mp3 audios from those videos.
While it was processing, I noticed that one video errored for no particular reason. After everything was downloaded and converted, I decided to rerun the command, since youtube-dl claims to be able to skip already-downloaded files. Already-downloaded video files, in fact, weren't re-downloaded, but **every single audio file** was re-created. I guess there is nothing one can do: with a video you at least know its size from YouTube, so you can easily check whether it's already downloaded, but you don't know how big an extracted audio file would be, so you can't tell whether the audio file you have is complete. To be safe, youtube-dl just re-converts every single audio file. This is a little annoying (try re-encoding 10 GB of audio), but I don't think this is a bug; it looks intentional, like a feature.
The actual issue here is that the Last-Modified time of re-created audio files is set to **now** instead of to the time of the corresponding YouTube video. That is a bug. youtube-dl sets Last-Modified correctly the first time you run the command, but then forgets to set it when the command is run again and the audio files are re-created.
I'm sorry if that was already reported, there are a lot of issues open here.
##
### Steps to reproduce
1. Run
``` sh
youtube-dl -ci -o "%(title)s - %(id)s.%(ext)s" --extract-audio --audio-format "mp3" --audio-quality 0 --keep-video "http://www.youtube.com/watch?v=qn4jgmmub20"
```
2. Repeat 1
##
### What I see
| File name | Last-Modified (yyyy-mm-dd) |
| --- | --- |
| 【IA】 アウターサイエンス 「オリジナルMV」 - qn4jgmmub20.mp3 | **_2014-02-03 (current date)_** |
| 【IA】 アウターサイエンス 「オリジナルMV」 - qn4jgmmub20.mp4 | 2014-01-07 |
##
### What I expected to see
| File name | Last-Modified (yyyy-mm-dd) |
| --- | --- |
| 【IA】 アウターサイエンス 「オリジナルMV」 - qn4jgmmub20.mp3 | **_2014-01-07_** |
| 【IA】 アウターサイエンス 「オリジナルMV」 - qn4jgmmub20.mp4 | 2014-01-07 |
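Until this is fixed, the expected behaviour can be restored manually by copying the video's mtime onto the re-created audio file; a minimal sketch:

```python
import os

def copy_mtime(src, dst):
    """Give dst the same access/modification times as src, e.g. copy the
    video's server-provided mtime onto the extracted audio file."""
    st = os.stat(src)
    os.utime(dst, (st.st_atime, st.st_mtime))
```

Running it as `copy_mtime(video_path, audio_path)` after the second youtube-dl pass restores the table above to the expected state.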
| request | low | Critical |
27,027,653 | youtube-dl | Is it possible to list all videos in a VAST and VPAID asset | youtube-dl -g --verbose http://www.longtailvideo.com/support/jw-player/31403/static-vast-xml-tag/ --include-ads
this lists out only one video.
http://content.bitsontherun.com/videos/bkaovAYt-kNspJqnJ.mp4
Is it possible to list out the ads too?
Thanks
Renu
| site-support-request | low | Minor |
27,440,748 | youtube-dl | about max file size and formats | Hi guys,
I want to do the following:
I add the --max-filesize option, but if the best-quality file is bigger than my limit, I'd like youtube-dl to automatically fall back from the best format to a worse or normal one.
Is this possible?
Sorry for my language.
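Until such a fallback exists, it can be scripted against the info dict; a sketch (assumes `formats` is ordered worst-to-best as youtube-dl returns it, and that the extractor reports `filesize`, which not all do):

```python
def best_format_under(formats, max_filesize):
    """Pick the best format whose reported size fits the limit.
    youtube-dl lists `formats` from worst to best, so scan backwards."""
    for f in reversed(formats):
        size = f.get('filesize')
        if size is not None and size <= max_filesize:
            return f
    return None
```

The chosen format's `format_id` could then be passed back via the `-f` option.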
| request | low | Major |
27,474,762 | youtube-dl | Ability to set a different output format for playlists | Using "-o, --output TEMPLATE" it's possible to set a default output template in the youtube-dl.conf file but this is applied to all downloaded files including videos form playlists.
Is it possible to add an option "--playlist-output TEMPLATE" to set a output template for playlists only.
| request | low | Minor |
27,792,132 | youtube-dl | Please add --sort-by option | Please add a new option, --sort-by. For example --sort-by views
That way I can find videos with a significantly high number of like/dislike ratings
YouTube API "orderby" can be found here: https://developers.google.com/youtube/2.0/developers_guide_protocol?hl=en
| request | low | Major |
28,086,808 | react | Provide a way to handle browser-autocompleted form values on controlled components | When there's a controlled component for form names that the user has saved in their browser (common with username/password fields), the browser will sometimes render the page with values in those fields without firing onChange events. If the user submits the form, the component state does not reflect what is showing to the user.
In experimenting with this, it appears that the data is there on load (tested by logging this.refs.myinput.getDOMNode().value)
| Type: Bug,Component: DOM | high | Critical |
28,110,488 | youtube-dl | Show video title in Console Title? | Dear youtube-dl:
In addition to showing the time remaining, is there a way (or could one be added) to also display the title of the video being downloaded in the console title?
| request | low | Minor |
28,574,005 | youtube-dl | Option to prevent --extract-audio from extracting if such audio file already exists | I want to run youtube-dl with `--extract-audio` and `--keep-video` periodically.
It downloads **only new videos**, but extracts audio from **ALL videos**, even if such audio files were already created by previous runs of youtube-dl.
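A wrapper-side sketch of the desired check (hypothetical helper, not an existing youtube-dl option):

```python
import os

def needs_audio_extraction(video_path, audio_ext='mp3'):
    """Return False when the sibling audio file already exists, so a
    periodic run can skip re-invoking ffmpeg for that video."""
    base, _ = os.path.splitext(video_path)
    return not os.path.exists(base + '.' + audio_ext)
```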
| request | low | Minor |
28,685,635 | rust | Type inference fails to determine closure argument type | The following fails to compile:
``` rust
pub fn main() {
let mut xs = HashMap::<(u32, u32), u32>::new();
let new_el = |&_| 30; // error: the type of this value must be known in this context
xs.insert((1,1), 10);
xs.insert((2,2), 20);
xs.find_or_insert_with((3,3), new_el);
println!("{}", xs);
}
```
The argument type must be explicitly spelled out:
``` rust
let new_el = |_: &(u32, u32)| 30;
```
Should the type inference work here?
| C-enhancement,A-closures,T-compiler,A-inference,T-types | low | Critical |
28,753,314 | rust | floating point intrinsics may clobber errno in unrelated code | These intrinsics assume `errno` is not set, but it may be set. In theory, one of these intrinsics could be reordered between a call to a C function setting `errno` and a check of the `errno` value in Rust. Rust could ignore this problem, as it will rarely (if ever) occur. However, safety mandates that we either supply our own math library known to not set errno on math errors (glibc is a black sheep in this regard) or stop using these unless something like `-fno-math-errno` is passed (as `clang` handles it on Linux).
| P-low,T-libs-api,T-compiler,C-bug | low | Critical |
28,770,218 | rust | Disallow duplicated extern declarations | ``` Rust
mod a {
extern {
fn func();
}
}
mod b {
extern {
fn func(i: i8); // different signature
}
}
```
Currently rustc accepts this, but I think it should be disallowed to reduce potential mistakes.
| A-FFI,T-compiler,C-feature-request | low | Major |
29,009,870 | neovim | :help viewer should support markdown (or markdoc) | Hey Team !
**TL;DR** :
Started porting helpfiles to rst and md formats. Repo is at : https://github.com/Pychimp/neovim-docs
---
#### Better Explanation:
So, I have started to port the help files to both rst and md formats.
I have cloned the main neovim/docs [repo](https://github.com/neovim/docs/commit/dc9d7ee48861a7102c50e4a34e2c7d39d5f20ade) (after @rjw57 latest PR was merged in)
As @tarruda had mentioned in #106, he wanted to keep the original format. So what I plan to do is convert the help files to rst and md on separate branches, and then send a PR to the main neovim/docs repo.
(Meaning, even after my PR is merged (Hopefully ! :smile:), the original *.txt **will stay** as they are on the master branch, while the converted ones will live on their **own and respective** branches)
##
Reasons:
- The original help format used currently in vim (with `*<tag>*` and `|<tag>|`) is great, and _we_ can stick with those...
- Alternatively, as @Gaelan [mentioned](https://github.com/neovim/neovim/issues/106#issuecomment-35960969), neovim might be able to render out the docs easily (just the way, vim seems to do with the current help format).
> Why two formats (rst and md)? Can't you simply make one and then, once it is approved, merge it into main?
Well ... The thing is we had a (long) bit of discussion (like this: #207), and I didn't want to waste the team's time by having a flame war like that, which is why I'll make them into rst and md formats. After some time, Should the team decide to choose one format over the other, the rejected one can be dropped off, and the selected one will continue to develop.
(Plus, I might have some time to kill, and I'm bad at C :sweat_smile: (the C code in this repo does look scary), So, I thought I might help out like this.)
Please feel free to discuss and let me know, what you guys think ! :smile:
Link to my repo : https://github.com/Pychimp/neovim-docs
| enhancement,documentation | medium | Critical |
29,015,891 | neovim | :source CRLF (fileformat=dos / ^M line endings) VimL on non-unix | I copied over my `.vimrc` to `.neovimrc` then git cloned @Shougo's NeoBundle for plugin management, which has files in it that are `fileformat=dos` and neovim throws these errors on startup:
http://hastebin.com/rixaxefofe.lua
For reference, here's the neobundle.vim file it mentions:
https://github.com/Shougo/neobundle.vim/blob/master/plugin/neobundle.vim
Basically, this is my `.vimrc` file:
https://gist.github.com/trusktr/6956110
The only change I made to it when copying to `.neovimrc` is to change .vim to .neovim on line 9.
The rc automatically creates the needed directories if they don't exist (e.g. `~/.neovim`, `~/.neovim/swap`, etc.), clones NeoBundle, lets NeoBundle install/initialize all the listed plugins, then sets some custom mappings and settings.
Works perfectly in Vim, but not NeoVim.
@Shougo, does NeoBundle work for you in NeoVim?
| enhancement,vimscript | low | Critical |
29,027,200 | bitcoin | Clients leak IPs if they are recipients of a transaction | So, this paper http://ifca.ai/fc14/papers/fc14_submission_71.pdf got me thinking about the current rules for transaction rebroadcasting:
Once a transaction has been broadcast, you stop rebroadcasting. Unless you own txins or txouts in the transaction.
So, you use the paper's techniques. But you can be much more speculative than they are, and get a low-likelihood but possible IP match for an address, connect your client up, and issue a transaction paying yourself and a small amount to the address you're interested in, just over the dust amount.
The transaction should be constructed so it's unlikely to be mined.
The transaction traverses the network, then it stops being rebroadcast, _except_ by the recipient and you. If your client is connected to the wallet that owns the address, it will see rebroadcasting for some time, providing a very strong link between the two.
This seems like a bad outcome.
I speculate that the 1Sochi transactions may have this motivation -- mapping addresses to determine IPs at large in Bitcoin.
It seems like the simplest thing would be to not re-re-broadcast to clients you've already spoken to, but I'll wait for smarter people than me to work out the right fix.
| Wallet,Privacy | medium | Major |
29,177,732 | youtube-dl | Save source link in extended attribute (Mac) | Browsers on the Mac set a "Where From" attribute on downloaded files. Files also have a comments field, where some people put a URL.
I implemented the latter as an external tool, and would integrate it into youtube-dl _if it has any chance to be accepted_. The former wouldn't be a problem, either.
| request | low | Major |
29,305,204 | react | iframe contents cause invariant violation | When using server rendering, putting an `<img>` in an `<iframe>` seems to invariably cause an invariant violation (it can't find the image).
This is related to #1252, but not identical. In both cases, the browser isn't aware of the inner elements however, in this case, it's because browsers that support iframes are actually mutating the DOM (by replacing the contents with the document specified in the `src` attribute).
| Type: Bug,Component: DOM | low | Major |
29,467,710 | react | Stop doing data-*, aria-*, start using dataSet | The DOM already exposes `data-*` as `dataset` but it's doing transformation from hyphenated to camelCase. [From MDN](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement.dataset):
```
<div id="user" data-id="1234567890" data-user="johndoe" data-date-of-birth>John Doe
</div>
var el = document.querySelector('#user');
// el.id == 'user'
// el.dataset.id === '1234567890'
// el.dataset.user === 'johndoe'
// el.dataset.dateOfBirth === ''
el.dataset.dateOfBirth = '1960-10-03'; // set the DOB.
// 'someDataAttr' in el.dataset === false
el.dataset.someDataAttr = 'mydata';
// 'someDataAttr' in el.dataset === true
```
We should just start supporting `dataSet` (because camelCase). This will allow a couple things:
- easier reasoning about data attributes (`Object.keys(this.props.dataSet)`)
- easier merging (`<div dataSet={merge(this.props.dataSet, {extra: 'value', override: 'value'})} />`)
- easier (potentially faster?) updates (just modify `node.dataset`)
We'll want to do the reverse of what the DOM is doing. eg `<div dataSet={{dateOfBirth: 'val', foo: 'bar'}} />` becomes `<div data-date-of-birth="val" data-foo="bar"></div>`.
To the best of my knowledge, `aria-*` doesn't have a corresponding API, but we should make it work the same way. I think `ariaSet` makes sense.
| Type: Feature Request,Component: DOM,Resolution: Backlog,Partner | high | Critical |
29,951,301 | rust | Private items with a public reexport could suggest the public path in privacy error message | E.g. `use std::io::buffered::BufferedReader;` says `struct BufferedReader is private` but could say
```
struct BufferedReader is private, but available publicly as `std::io::BufferedReader`
```
(This probably requires some significant work, for not a _huge_ benefit.)
| C-enhancement,A-diagnostics,A-resolve,T-compiler | low | Critical |
29,957,087 | youtube-dl | Suggestion option to dump only result, viz id, artist-title or error | While downloading a playlist one may encounter several issues such:
- one's artist-title does not correspond to the ouput
- errors
Those are very insteresting information, but a matching between the returned information from youtube-dl and your own can be lenghty process.
I propose to have an option (say -d summaryLog.txt) to dump a summary log unicode or UTF-8 text file with the following structure, for each videoID a single line as:
video_ID; either output name or occurred error
So instead of for instance:
F:\Youtube Extract\Russian>youtube-dl nG647q4Obds
[youtube] Setting language
[youtube] nG647q4Obds: Downloading webpage
[youtube] nG647q4Obds: Downloading video info webpage
[youtube] nG647q4Obds: Extracting video information
[youtube] nG647q4Obds: Encrypted signatures detected.
[download] Destination: - -nG647q4Obds.mp4
[download] 100% of 5.08MiB in 00:05
One would have:
nG647q4Obds; Инфинити - Там-nG647q4Obds
Thanks
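A rough wrapper showing how such a summary could be produced with youtube-dl's Python API today (the `-d` flag itself is hypothetical):

```python
def summary_line(video_id, title=None, error=None):
    """One 'video_ID; output name or error' line per video."""
    return '%s; %s' % (video_id, error if error is not None else title)

def summarize(video_ids, log_path='summaryLog.txt'):
    # Deferred import so summary_line stays dependency-free.
    import youtube_dl
    lines = []
    with youtube_dl.YoutubeDL({'quiet': True}) as ydl:
        for vid in video_ids:
            try:
                info = ydl.extract_info(vid, download=True)
                lines.append(summary_line(vid, title=info.get('title')))
            except youtube_dl.utils.DownloadError as e:
                lines.append(summary_line(vid, error=str(e)))
    with open(log_path, 'w') as f:
        f.write('\n'.join(lines))
```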
| request | low | Critical |
30,288,130 | youtube-dl | [FranceTV] france5.fr playlists | Hello,
Youtube-dl doesn't know what to do on a playlist/show page, because the URL does not contain an ID. We could add a way to download all the videos of the show.
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['http://www.france5.fr/emissions/l-oeil-et-la-main', '-v']
[debug] Encodings: locale 'UTF-8', fs 'UTF-8', out 'UTF-8', pref: 'UTF-8'
[debug] youtube-dl version 2014.03.27.1
[debug] Python version 2.7.3 - Linux-3.2.0-4-amd64-x86_64-with-debian-7.4
[debug] Proxy map: {}
[francetv] l-oeil-et-la-main: Downloading webpage
ERROR: Unable to extract video ID; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 509, in extract_info
ie_result = ie.extract(url)
File "./youtube-dl/youtube_dl/extractor/common.py", line 161, in extract
return self._real_extract(url)
File "./youtube-dl/youtube_dl/extractor/francetv.py", line 166, in _real_extract
video_id = self._html_search_regex(id_res, webpage, 'video ID')
File "./youtube-dl/youtube_dl/extractor/common.py", line 369, in _html_search_regex
res = self._search_regex(pattern, string, name, default, fatal, flags)
File "./youtube-dl/youtube_dl/extractor/common.py", line 359, in _search_regex
raise RegexNotFoundError(u'Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract video ID; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
```
Regards,
| request | low | Critical |
30,479,636 | youtube-dl | Please support triart.se | $ youtube-dl --verbose http://www.triart.se/bio/article90562.ece
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['--verbose', 'http://www.triart.se/bio/article90562.ece']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2014.03.30.1
[debug] Python version 2.7.6 - Linux-3.13-1-amd64-x86_64-with-debian-jessie-sid
[debug] Proxy map: {}
[generic] article90562: Requesting header
WARNING: Falling back on generic information extractor.
[generic] article90562: Downloading webpage
[generic] article90562: Extracting information
ERROR: Unsupported URL: http://www.triart.se/bio/article90562.ece; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 379, in _real_extract
doc = parse_xml(webpage)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 1315, in parse_xml
return xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: mismatched tag: line 34, column 119
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 511, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 161, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 612, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: http://www.triart.se/bio/article90562.ece; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
| site-support-request | low | Critical |
30,546,865 | rust | Tracking Issue for auto traits (auto_traits) -- formerly called opt-in built-in traits (optin_builtin_traits) | This is the tracking issue for [**RFC 19**](https://github.com/rust-lang/rfcs/blob/master/text/0019-opt-in-builtin-traits.md).
## Checklist
Here is a check-list of code to write and tricky scenarios to be sure we handle:
- [ ] forbid conditional negative impls [as described here](https://gist.github.com/nikomatsakis/bbe6821b9e79dd3eb477#negative-reasoning-from-oibit-and-rfc-586)
- [x] conditional negative impls not properly enforced https://github.com/rust-lang/rust/issues/23072
- [x] `impl !Pod for ..` should not be legal https://github.com/rust-lang/rust/issues/28475
- [x] fix feature-gate for `impl Foo for ..` https://github.com/rust-lang/rust/issues/23225
- [x] defaulted traits ought to be have more restrictive coherence rules https://github.com/rust-lang/rust/issues/22978
- [x] defaulted traits should have no methods #23080 / https://github.com/rust-lang/rust/pull/23117
- [x] Add parser support for `impl Foo for ..`
- [x] Port `Send/Sync` to use new infrastructure internally
- [x] Add `unsafe impl Send for ..` / `unsafe impl Sync for ..`
- [x] Object types should never match against a defaulted trait (though the object type itself may add a candidate for traits that appear in the object type)
- [x] When a defaulted trait matches, it should impose any supertrait bounds on the matched type as nested obligations
- [x] Be wary of type inference -- if we haven't resolved a type yet, we have to be ambiguous
- [ ] Systematic testing for `constituent_types`
- [x] Coherence interaction: a defaulted trait can only be implemented for structs/enums
- [x] Coherence interaction: a trait can only be defaulted in the crate where it is defined
- [x] Coherence interaction: a trait can only be defaulted once
- [x] Coherence interaction: an auto-trait can be redundantly implemented for an object that has it - #56934
- [x] Defaulted impls cannot be generic
- [x] Fix the interaction with PhantomData. OIBIT should treat PhantomData<T> as if there were an instance of T reachable rather than breaking it down like it would a different struct. https://github.com/rust-lang/rust/pull/23091
- [x] Allow negative implementations for traits that have a default implementation (besides Send/Sync).
- [x] https://github.com/rust-lang/rust/issues/104808 lifetimes on auto traits are buggy
- [x] Coherence rules: is `impl AutoTrait for dyn Trait` legal? https://github.com/rust-lang/rust/issues/13231#issuecomment-1397480267
- [x] Interaction with dyn safety. https://github.com/rust-lang/rust/pull/107082
- [x] Do `[u8]` negative impls affect `str`. https://github.com/rust-lang/rust/issues/13231#issuecomment-1399386472 | A-type-system,A-trait-system,P-medium,B-RFC-approved,T-lang,B-unstable,B-RFC-implemented,C-tracking-issue,F-auto_traits,S-tracking-perma-unstable,T-types | high | Critical |
30,900,007 | react | touchmove doesn't fire on removed element | If you have
```
{this.state.show &&
<div onTouchStart={this.hideTheDiv} onTouchMove={...} />}
```
such that the onTouchStart handler removes the div (and maybe replaces it with another one in the same place, useful in certain draggable interactions), the onTouchMove handler doesn't fire because the events of a detached element no longer bubble to document. We should probably bind the touchmove handler when the element receives touchstart instead of delegating to document.
Sort of related to #1254.
cc @merbs @eater
| Type: Enhancement,Component: DOM,Partner | low | Major |
31,188,736 | electron | Headless version for testing | @zcbenz how much work do you think it would be to create a headless version of atom-shell that could be used as a replacement for [phantomjs](http://phantomjs.org/)?
phantomjs is lagging further and further behind what actual web browsers do today, and it would be great to have something more up to date to use for headless testing.
| enhancement :sparkles: | high | Critical |
31,206,047 | nvm | reference $HOME in $PROFILE addition rather than hard-coding $NVM_DIR | I recently had to make this change when `$HOME` changed.
``` diff
-[ -s "/Users/michael/.nvm/nvm.sh" ] && . "/Users/michael/.nvm/nvm.sh" # This loads nvm
+[ -s "$HOME/.nvm/nvm.sh" ] && . "$HOME/.nvm/nvm.sh" # This loads nvm
```
Trying to reinstall nvm failed to fix the path differences because it simply looks for `nvm.sh` in `$PROFILE`.
| feature requests,installing nvm: profile detection | low | Critical |
31,255,298 | rust | Line-based breakpoints in inline functions don't show correct source | ``` rust
#[inline(always)]
fn bar() -> int {
5
}
fn main() {
let _ = bar();
}
```
```
(gdb) break inline.rs:3
Breakpoint 1 at 0x404e80: file inline.rs, line 3.
(gdb) r
Starting program: /tmp/inline
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Breakpoint 1, inline::main () at inline.rs:6
6 fn main() {
(gdb)
```
| A-debuginfo,P-medium,T-compiler,C-bug | low | Critical |
31,706,728 | rust | `-Zunpretty=expanded` does not preserve hygiene | rustc internally converts `for val in iter { ... }` to:
``` Rust
match &mut iter {
i =>
loop { match i.next() { None => break , Some(val) => { ... } } }
}
```
where `i` is created by `let local_ident = token::gensym_ident("i");`, so `i` does not clash with other local variables.
However, pprust just prints it as plain `i`. So the following code:
``` Rust
let mut i: int = 0;
for _ in iter {
i += 1;
}
```
is prettified by rustc as:
``` Rust
let mut i: int = 0;
match &mut iter {
i =>
loop { match i.next() { None => break , Some(_) => { i += 1; } } }
}
```
which is not correct anymore.
| A-pretty,P-low,T-compiler,C-feature-request,requires-nightly,A-hygiene | low | Major |
31,727,050 | neovim | Bidi language support | The biggest problem with Vim for right-to-left languages like Persian(Farsi), Arabic, Hebrew and Urdu is that Vim doesn't support them! :( Please add some bidi to #Neovim and save us!
| enhancement,localization | high | Critical |
32,026,911 | nvm | How to update nvm | It would be more awesome if nvm had a command to update itself. The only method I know of now is to remove nvm and reinstall it, but that means reinstalling all of my Node versions as well
| feature requests | high | Critical |
32,078,016 | rust | Consider aggressively avoiding generating memcpys for moves if we can avoid it | Given that our assembler output tends to have a lot of moves in it and LLVM is bad at optimizing them out, I'm beginning to wonder if we shouldn't take more aggressive steps. For example:
- Convert (non-POD?) by-value move parameters in the Rust ABI to be passed by reference.
- Non-immediate values that go dead by dint of being moved into a new place should not be memcpy'd there.
Any others?
| I-slow,C-enhancement,A-codegen,T-compiler,A-mir-opt,A-mir-opt-nrvo | low | Major |
32,218,612 | neovim | Improve large files support | Currently vim loads the whole file into memory, which is a bad idea for some files. I think we should consider reworking memfile, the regexp engine, and all related code to allow editing files that do not fit into the available memory.
At this time it means that all API functions should be designed as if files are all impossible to load into memory (e.g. you cannot request a piece of file without specifying maximum length you can accept). On VimL side this will result in adding string-pretending lua objects that may be returned by getline()/@c/... and that actually use file positions + binary diffs (API must have FileChangeWritten hooks providing all necessary information) with a warning that editing file (but _not_ appending to it) outside of vim may result in changing some "strings". It is better to create infrastructure for handling this on new VimL side right now.
An idea for API: getline() C implementation should look like
```
typedef struct {
fpos_t start; // First byte position
fpos_t end; // Last byte position
size_t len; // Length of the next field
char string[1];
} FileString;
```
| enhancement,performance,needs:design,gsoc | medium | Critical |
32,222,374 | rust | Compile ignored tests rather than skipping | Ignored tests:
Lack of support in codegen for closures/generics are always inline:
- [ ] `src/test/codegen-units/item-collection/cross-crate-closures.rs`
- [ ] `src/test/codegen-units/item-collection/non-generic-closures.rs`
- [ ] `src/test/codegen-units/partitioning/methods-are-with-self-type.rs`
This compiles successfully, but it should fail:
- [ ] `src/test/ui/lint/dead-code/closure-bang.rs` | A-testsuite,E-hard,C-enhancement,T-compiler,A-compiletest,E-tedious,E-needs-design | low | Major |
32,447,498 | neovim | Is the nfa regex engine slow? | I'm not sure what's causing this, almost surely a plugin, but neovim freezes a lot for me. So I'm opening this issue here to get to the bottom of it when I have time.
I tried looking at it through Xcode instruments. In these periods of heavy activity (i.e.: freezing), the nfa regex engine manages to allocate and free no less than 134GB of memory. $(deity) vim, have you never heard of a scratch buffer?!
I'm thinking the offending plugin is [vim-easytags](https://github.com/xolox/vim-easytags), which periodically runs ctags and highlights recognized symbols. This is some nice functionality that I don't want to give up, but it's triggering some pathological behaviour. Perhaps @xolox could chime in (for reference @xolox, neovim doesn't support python yet, that's going to come later). The freezing only happens whenever I move around in the buffer (which is one of the most useful things I do in vim).
Here's some screenshots of instruments measuring both CPU usage and memory, where we can see that one CPU side it's more `vim_free` having difficulty keeping up with the massive amount of free'ing going on:
![screen shot 2014-04-29 at 15 58 04](https://cloud.githubusercontent.com/assets/189413/2829704/7c0271b4-cfa6-11e3-805c-402380563d5f.png)
![screen shot 2014-04-29 at 15 51 44](https://cloud.githubusercontent.com/assets/189413/2829710/89fc0d34-cfa6-11e3-9dad-903484d52532.png)
<bountysource-plugin>
---
Want to back this issue? **[Place a bounty on it!](https://www.bountysource.com/issues/1892448-is-the-nfa-regex-engine-slow?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| performance | medium | Major |
32,459,195 | youtube-dl | Write to --download-archive if date is not in range or matched reject pattern | My proposal is to add an option like --write-anyway that would write the ID to the download archive file. That way, future operations would be processed faster, because youtube-dl would not need to open a connection and could skip the video locally.
Thank you for your great job
| request | low | Major |
32,578,298 | neovim | list-processing UI, "frecency" + fuzzy match selector UI | related: https://github.com/neovim/neovim/issues/396
[I'm just getting this out of my system after reading #396.]
Consuming list-sources should be a built-in feature that is easily accessible to end-users. The usefulness of this is indicated by the popularity of general "fuzzy finder" tools such as:
- [unite.vim](https://github.com/Shougo/unite.vim/tree/master/autoload/unite/filters)
- https://github.com/tpope/vim-haystack
- [ctrlp.vim](https://github.com/kien/ctrlp.vim)
- [YouCompleteMe](https://github.com/Valloric/YouCompleteMe)
- [fzf](https://github.com/junegunn/fzf) (for bash and zsh)
- emacs [helm](https://github.com/emacs-helm/helm) consumes numerous list sources and is easily extensible, like unite.vim
- emacs [ido-flx](https://github.com/lewang/flx) and [grizzl](https://github.com/d11wtq/grizzl)
- Sublime Text's ctrl-p, et. al. (I don't know if it works with generic list sources)
unite.vim (and ctrlp, helm, ...) has a very nice "composable" approach that allows the end-user to choose (and combine/intersect) from a library of _matchers_ (subsequence/fuzzy, glob, frequency) and _sorters_ (alphabetical, ordinal, rank). I suggest looking at this composable approach as a general model for providing list-searching facilities.
Plugins should not have to keep reinventing the "consume-a-list" wheel, because the problem domain is arguably arriving at well-defined boundaries with many implementations of equivalent solutions.
An end-user (or plugin) should be able to say "select from [list] using [fuzzy-matcher]", and the editor should have a high-quality fuzzy matching algorithm available. The user (or plugin) should not have to implement a fuzzy-matching algorithm.
_Consuming_ a list and fuzzy-matching against that list using a high-quality algorithm should be available out-of-the-box.
_Providing_ the list is what plugins (and/or external tools) should focus on. E.g., [tmuxcomplete.vim](https://github.com/wellle/tmux-complete.vim/) provides a list of tmux strings; [pt](https://github.com/monochromegane/the_platinum_searcher) provides a list of matched lines; [Omnisharp](https://github.com/nosami/Omnisharp) provides a list of C# symbols or types; etc, etc.
Plugins like unite.vim already make very good use of Vim's buffers to provide a very featureful UI. But they shouldn't need to re-implement fuzzy matching or ranking, just like they don't need to implement regex.
# Proposal
- builtin FZF-like "frecency" selector for commands, files, etc. (see also vscode's command palette)
- primitives (framework) to make it easier for plugins to build this kind of thing, in the spirit of `vim.lsp`
- LSP mapping: `gr/` ([ref](https://github.com/neovim/neovim/pull/30781#discussion_r1797719176)) | enhancement,ui,plugin,ux,lua,cmdline-mode,editor | medium | Major |
32,875,296 | youtube-dl | Unsupported URL: http://theync.com | > youtube-dl.exe -v http://theync.com/fight/girl-hit-in-the-face-with-a-shovel.htm
> [debug] System config: []
> [debug] User config: []
> [debug] Command-line args: ['-v', 'http://theync.com/fight/girl-hit-in-the-face-
> with-a-shovel.htm']
> [debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
> [debug] youtube-dl version 2014.05.05
> [debug] Python version 2.7.5 - Windows-8-6.2.9200
> [debug] Proxy map: {}
> [generic] girl-hit-in-the-face-with-a-shovel: Requesting header
> WARNING: Falling back on generic information extractor.
> [generic] girl-hit-in-the-face-with-a-shovel: Downloading webpage
> [generic] girl-hit-in-the-face-with-a-shovel: Extracting information
> ERROR: Unsupported URL: http://theync.com/fight/girl-hit-in-the-face-with-a-shov
> el.htm; please report this issue on https://yt-dl.org/bug . Be sure to call yout
> ube-dl with the --verbose flag and include its complete output. Make sure you ar
> e using the latest version; type youtube-dl -U to update.
> Traceback (most recent call last):
> File "youtube_dl\extractor\generic.pyo", line 418, in _real_extract
> File "youtube_dl\utils.pyo", line 1396, in parse_xml
> File "xml\etree\ElementTree.pyo", line 1300, in XML
> File "xml\etree\ElementTree.pyo", line 1642, in feed
> File "xml\etree\ElementTree.pyo", line 1506, in _raiseerror
> ParseError: not well-formed (invalid token): line 118, column 32
> Traceback (most recent call last):
> File "youtube_dl\YoutubeDL.pyo", line 516, in extract_info
> File "youtube_dl\extractor\common.pyo", line 161, in extract
> File "youtube_dl\extractor\generic.pyo", line 687, in _real_extract
> ExtractorError: Unsupported URL: http://theync.com/fight/girl-hit-in-the-face-wi
> th-a-shovel.htm; please report this issue on https://yt-dl.org/bug . Be sure to
> call youtube-dl with the --verbose flag and include its complete output. Make su
> re you are using the latest version; type youtube-dl -U to update.
| site-support-request | low | Critical |
33,034,271 | neovim | alternative to legacy 'cryptmethod' | Vim's encryption code was removed. This ticket explores the idea of providing a basic alternative to enable the user to decrypt, edit, and re-encrypt a file using Neovim in a way that is reasonably secure.
#694 suggests:
- avoiding any sort of random access mode and focusing entirely on something that decrypts the entire file in one fell swoop when it's opened, and rewrites a brand new ciphertext with a new random nonce each time the file is written.
- Using XSalsa20+Poly1305 (from NaCl) or ChaCha20+Poly1305 with a random nonce from /dev/urandom (or CryptoGenRandom on Windows) should be OK
See also: http://tonyarcieri.com/all-the-crypto-code-youve-ever-written-is-probably-broken
Potential solutions:
- [libsodium](https://download.libsodium.org/doc/)
- [aws-encryption-sdk-c](https://github.com/aws/aws-encryption-sdk-c/)
- [google/tink](https://github.com/google/tink)
# Plan
* [google/tink](https://github.com/tink-crypto) has a very good design ([more info](https://news.ycombinator.com/item?id=17880789)) which minimizes room for error. | enhancement,security,compatibility,has:plan | medium | Critical |
33,073,633 | youtube-dl | YouTube playlist metadata is downloaded one video at a time | If I run:
`youtube-dl --get-title https://www.youtube.com/playlist?list=PL71798B725200FA81`
it results in one title at a time being printed. But if I run:
`curl http://gdata.youtube.com/feeds/api/playlists/PL71798B725200FA81?alt=json`
I can get the entire playlist and all its metadata in one request.
I have a script that uses youtube-dl to download and play YouTube playlists in VLC, but since youtube-dl downloads playlist data one video at a time, it means that the initial download of the playlist metadata with youtube-dl takes a very long time, especially for large playlists (e.g. Yogscast Minecraft video playlists can take minutes just to get the list of titles and video IDs).
It would be good if youtube-dl used this public JSON interface to download the entire playlist metadata in one request. It would save literally minutes of waiting.
| request | low | Minor |
GitHub Issues Dataset

- Dataset Name: github-issues-dataset
- Total Issues: 114,073
- Format: Parquet (`.parquet`)
- Source: GitHub repositories (top 100 repos)
Overview

This dataset contains 114,073 GitHub issues collected from the top 100 repositories on GitHub.
It is designed for issue classification, severity/priority prediction, and AI/ML training.

This dataset is useful for:
- AI/ML Training: Fine-tune models for issue classification & prioritization.
- Natural Language Processing (NLP): Analyze software development discussions.
- Bug Severity Prediction: Train models to classify issues as Critical, Major, or Minor.
Dataset Structure

The dataset is stored in Parquet format (`github_issues_dataset.parquet`) for efficient storage and fast retrieval.

Columns in the Dataset:
| Column   | Type | Description                                        |
|----------|------|----------------------------------------------------|
| id       | int  | GitHub issue ID                                    |
| repo     | str  | Repository name                                    |
| title    | str  | Issue title                                        |
| body     | str  | Issue description                                  |
| labels   | list | Assigned GitHub labels                             |
| priority | str  | Estimated priority (`high`, `medium`, `low`)       |
| severity | str  | Estimated severity (`Critical`, `Major`, `Minor`)  |
Download & Use

Using the `datasets` Library

You can easily load this dataset using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("sharjeelyunus/github-issues-dataset")
```
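Once loaded, a typical first step is filtering by the priority/severity columns. The sketch below uses plain Python dicts standing in for dataset records so it runs without a download; with the real dataset you would call `dataset["train"].filter(...)` instead:

```python
# Illustrative only: `rows` mimics records from this dataset; with the real
# dataset you would use dataset["train"].filter(lambda r: ...) instead.
rows = [
    {"id": 101, "repo": "pytorch/pytorch", "priority": "high", "severity": "Critical"},
    {"id": 102, "repo": "tensorflow/tensorflow", "priority": "medium", "severity": "Major"},
    {"id": 103, "repo": "microsoft/vscode", "priority": "low", "severity": "Minor"},
]

def urgent_issues(records):
    """Keep only records flagged both high-priority and Critical."""
    return [r for r in records
            if r["priority"] == "high" and r["severity"] == "Critical"]
```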
Sample Data

| id  | repo                  | title                                | labels          | priority | severity |
|-----|-----------------------|--------------------------------------|-----------------|----------|----------|
| 101 | pytorch/pytorch       | "RuntimeError: CUDA out of memory"   | ["bug", "cuda"] | high     | Critical |
| 102 | tensorflow/tensorflow | "Performance degradation in v2.9"    | ["performance"] | medium   | Major    |
| 103 | microsoft/vscode      | "UI freeze when opening large files" | ["ui", "bug"]   | low      | Minor    |
How This Dataset Was Created

- Collected open issues from the top 100 repositories on GitHub.
- Filtered to keep only English issues with assigned labels.
- Processed priority and severity:
  - Used labels to determine priority & severity.
  - Used ML models to predict missing priority/severity values.
- Stored the dataset in Parquet format for ML processing.
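The label-based step can be pictured as a keyword lookup over each issue's labels. The actual rules used to build this dataset are not published, so the keyword lists below are purely illustrative assumptions:

```python
# Hypothetical reconstruction of the label-to-severity mapping described
# above. The real keyword lists used by the dataset authors are not
# published; these are illustrative guesses only.
SEVERITY_KEYWORDS = {
    "Critical": ("crash", "security", "data-loss", "regression"),
    "Major": ("bug", "performance"),
}

def severity_from_labels(labels):
    """Map a list of GitHub label strings to a coarse severity bucket."""
    joined = " ".join(labels).lower()
    for severity, keywords in SEVERITY_KEYWORDS.items():
        if any(k in joined for k in keywords):
            return severity
    return "Minor"  # default when no severity-bearing label is present
```

Issues whose labels carry no severity signal would fall through to an ML model in the pipeline described above.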
Use Cases
- AI-Powered Bug Triage: Train AI models to predict priority & severity.
- NLP Research: Analyze software engineering discussions.
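To give a flavor of the bug-triage use case, here is a deliberately tiny bag-of-words scorer trained on (text, severity) pairs. It is a toy stand-in for the real models (e.g. fine-tuned transformers) you would train on this dataset:

```python
from collections import Counter

def train_keyword_model(examples):
    """examples: iterable of (text, severity). Returns per-severity word counts."""
    model = {}
    for text, severity in examples:
        model.setdefault(severity, Counter()).update(text.lower().split())
    return model

def predict_severity(model, text):
    """Score each severity by normalized word overlap and return the best one."""
    words = text.lower().split()
    def score(severity):
        counts = model[severity]
        total = sum(counts.values()) + 1  # +1 avoids division by zero
        return sum(counts[w] / total for w in words)
    return max(model, key=score)
```

In practice you would train on the dataset's `title`/`body` columns and evaluate against the `severity` column.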
License
This dataset is open-source and publicly available under the MIT License.
Please cite this dataset if you use it in research.
Feedback & Contributions
- Found an issue? Open an issue.
- Want to contribute? Feel free to submit a PR.
- For any questions, reach out on Hugging Face Discussions.
Support

If you find this dataset useful, please like ❤️ the repository!

Happy Coding!