id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
221,405,549 | rust | Unclear error with type inference failure | ```rust
struct S {
v: Vec<(u32, Vec<u32>)>,
}
impl S {
pub fn remove(&mut self, i: u32) -> Option<std::vec::Drain<u32>> {
self.v.get_mut(i as _).map(|&mut (_, ref mut v2)| {
v2.drain(..)
})
}
}
```
Errors with:
```
error: the type of this value must be known in this context
--> src/main.rs:8:16
|
8 | v2.drain(..)
| ^^^^^
```
I'm not sure which value the error is pointing at here. The fix here is to explicitly specify the type we want to cast `i` into. | C-enhancement,A-diagnostics,T-compiler,A-inference,D-terse | low | Critical |
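For reference, a minimal sketch of that fix (assuming `usize` is the intended index type):

```rust
struct S {
    v: Vec<(u32, Vec<u32>)>,
}

impl S {
    // Spelling out the cast target (`i as usize` instead of `i as _`)
    // gives the compiler the index type it needs, so the element type of
    // `self.v`, and hence the type of `v2`, can be inferred.
    pub fn remove(&mut self, i: u32) -> Option<std::vec::Drain<u32>> {
        self.v.get_mut(i as usize).map(|&mut (_, ref mut v2)| v2.drain(..))
    }
}
```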
221,418,822 | youtube-dl | Feature request: Have youtube-dl treat search-result page (on C-Span.org) as a playlist | ---
- [X] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.11**
### Before submitting an *issue* make sure you have:
- [X] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [X] Feature request (request for a new functionality)
---
```
youtube-dl -v https://www.c-span.org/search/?searchtype=Videos&sort=Newest&seriesid[]=37
[1] 5661
[2] 5662
bash: seriesid[]: bad array subscript
me@laptop ~/Videos/youtube-dl/kagan $ [debug] System config: [u'-o', u'~/Videos/youtube-dl/%(title)s_%(id)s.%(ext)s', u'--netrc', u'--restrict-filenames', u'--write-description', u'--write-sub', u'--yes-playlist', u'--ignore-errors']
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'https://www.c-span.org/search/?searchtype=Videos']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.04.11
[debug] Python version 2.7.12 - Linux-4.4.0-71-generic-x86_64-with-LinuxMint-18.1-serena
[debug] exe versions: ffmpeg 2.8.11-0ubuntu0.16.04.1, ffprobe 2.8.11-0ubuntu0.16.04.1
[debug] Proxy map: {}
[generic] ?searchtype=Videos: Requesting header
WARNING: Falling back on generic information extractor.
[generic] ?searchtype=Videos: Downloading webpage
[generic] ?searchtype=Videos: Extracting information
ERROR: Unsupported URL: https://www.c-span.org/search/?searchtype=Videos
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1835, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2526, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2515, in _XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1653, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1517, in _raiseerror
raise err
ParseError: mismatched tag: line 49, column 116
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 761, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 429, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2698, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: https://www.c-span.org/search/?searchtype=Videos
```
### Description of your *issue*, suggested solution and other information
I wish that supplying a search-result page would lead youtube-dl to treat the list of "found" videos as a playlist.
| request | low | Critical |
221,483,470 | TypeScript | Iterator interface for LanguageService | Currently, functions like `LanguageService.getReferencesAtPosition()` or `getNavigateToItems()` return an array. That means they have to collect all references/navigation items before returning, blocking the CPU until then.
If you invoke such a function in repos like angular/angular, the returned array can easily have 50k elements and the function can take a very long time to return. That means a UI cannot show any results until all items have been aggregated. (Of course, you can pass a limit, but in the case of `getNavigateToItems()` that can mean the relevant items are never found: matching is fuzzy and the function returns early once `limit` fuzzy matches have accumulated. For example, in angular/angular, a query for _Http_ with a limit will return a lot of irrelevant `http` variables, while a query without a limit will return the `Http` class.) If you then need to filter or transform the items, you end up with an unneeded extra iteration.
I would like to propose to add or change the API to return an `Iterable` instead. This can be achieved with generators. This would allow the consumer to _pull_ items lazily from the iterator.
That means
- results (e.g. references, symbols) can be streamed and shown in the UI as soon as they are found in the source
- requests can be easily cancelled by aborting iteration (e.g. `break` in a `for of`)
- a limit parameter is not needed anymore, you can just abort the iteration after your limit is reached
- if you need to perform filtering, transformation etc on the result, you can do that in the same iteration "pipeline"
There is no dependency needed to make this work; it's all in the language.
The returned Iterable can be consumed in a variety of ways: with `for of`, generator delegation, iteration libraries or Observables, or easily converted to an Array with `Array.from()`.
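A minimal sketch of the lazy-consumption pattern (the names here are illustrative, not the actual LanguageService API):

```typescript
// Hypothetical generator standing in for an iterable LanguageService
// method; each yield corresponds to one result found in the source.
function* findReferences(files: string[]): IterableIterator<string> {
  for (const file of files) {
    yield file; // real code would yield ReferenceEntry objects
  }
}

// The consumer pulls lazily and cancels by breaking out of the loop;
// files after the break are never scanned.
const seen: string[] = [];
for (const ref of findReferences(["a.ts", "b.ts", "c.ts"])) {
  seen.push(ref);
  if (seen.length === 2) break;
}
```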
This API could be added in a backwards-compatible way by adding a new variant and changing the old one to delegate and convert to array:
```ts
getReferencesAtPosition(fileName: string, position: number): ReferenceEntry[] {
  return Array.from(this.getReferencesIterableAtPosition(fileName, position));
}

getReferencesIterableAtPosition(fileName: string, position: number): IterableIterator<ReferenceEntry> {
  // ...
}
```
| Suggestion,API,Awaiting More Feedback | low | Minor |
221,509,585 | youtube-dl | [youtube] Youtube Red videos fail with '403: Forbidden' | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
- Use *Preview* tab to see how your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.04.11*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.11**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Question
- [ ] Feature request (request for a new functionality)
- [ ] Other
---
### Description of your *issue*, suggested solution and other information
Downloading of Youtube Red videos appears to be broken. The following is a log of an attempt with a valid Youtube Red account with 2FA disabled to rule out that possibility. It seems like it attempts to get it, fails, and then falls back to downloading the trailer. I can provide credentials via PM in the IRC channel.
Tested on: OS X 10.12.3
```
bash-4.4$ ./youtube-dl https://www.youtube.com/watch?v=XD3M5JN1uhg --username (username) --password (password) --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'https://www.youtube.com/watch?v=XD3M5JN1uhg', u'--username', u'PRIVATE', u'--password', u'PRIVATE', u'--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.04.11
[debug] Python version 2.7.12 - Darwin-16.4.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.2.4, ffprobe 3.2.4
[debug] Proxy map: {}
[youtube] Downloading login page
[youtube] Logging in
[youtube] XD3M5JN1uhg: Downloading webpage
[youtube] XkVczL-rBSY: Downloading webpage
[youtube] XkVczL-rBSY: Downloading video info webpage
[youtube] XkVczL-rBSY: Extracting video information
WARNING: unable to extract uploader nickname
[youtube] {22} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {43} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {18} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {36} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {17} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {137} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {248} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {136} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {247} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {135} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {244} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {134} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {243} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {133} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {242} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {160} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {278} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {140} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {171} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {249} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {250} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] {251} signature length 46.40, html5 player en_US-vflaxXRn1
[youtube] XkVczL-rBSY: Downloading player https://www.youtube.com/yts/jsbin/player-en_US-vflaxXRn1/base.js
[youtube] XkVczL-rBSY: Downloading MPD manifest
WARNING: [youtube] XkVczL-rBSY: Skipping DASH manifest: ExtractorError(u'Failed to download MPD manifest: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.',)
[debug] Invoking downloader on u'https://r15---sn-5hnedn7r.googlevideo.com/videoplayback?requiressl=yes&ipbits=0&gir=yes&gcr=nl&id=o-AKa1twzBgQ_Lug1oKQtZarJ0c0kmaaHwwkAETExl9Mmk&keepalive=yes&mn=sn-5hnedn7r&mm=31&mv=m&mt=1492075284&itag=248&ms=au&initcwndbps=3908750&sparams=clen%2Cdur%2Cei%2Cgcr%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cupn%2Cexpire&upn=xtp-bvMdZpY&ip=145.74.162.103&ei=eUPvWIyQBcrL1wKh5Y7ACg&pl=15&mime=video%2Fwebm&expire=1492096985&clen=22141700&dur=141.224&source=youtube&lmt=1476124593692380&key=yt6&signature=5A111F2878A8C40E5E04BBA6B432FDFD05BC304E.D97160F8A2CB5EDEF75F162D43459EA04C37C09A&ratebypass=yes'
[download] Destination: The Thinning - Free Preview-XkVczL-rBSY.f248.webm
[download] 100% of 21.12MiB in 00:08
[debug] Invoking downloader on u'https://r15---sn-5hnedn7r.googlevideo.com/videoplayback?requiressl=yes&ipbits=0&gir=yes&gcr=nl&id=o-AKa1twzBgQ_Lug1oKQtZarJ0c0kmaaHwwkAETExl9Mmk&keepalive=yes&mn=sn-5hnedn7r&mm=31&mv=m&mt=1492075284&itag=251&ms=au&initcwndbps=3908750&sparams=clen%2Cdur%2Cei%2Cgcr%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cupn%2Cexpire&upn=xtp-bvMdZpY&ip=145.74.162.103&ei=eUPvWIyQBcrL1wKh5Y7ACg&pl=15&mime=audio%2Fwebm&expire=1492096985&clen=2274202&dur=141.261&source=youtube&lmt=1476124337350623&key=yt6&signature=DF8DBA25A3275280E2F26A41A49C41F8283A6B9C.A895FEBB7B5F6A1753702F3BDD84341060005E64&ratebypass=yes'
[download] Destination: The Thinning - Free Preview-XkVczL-rBSY.f251.webm
[download] 100% of 2.17MiB in 00:00
[ffmpeg] Merging formats into "The Thinning - Free Preview-XkVczL-rBSY.webm"
[debug] ffmpeg command line: ffmpeg -y -i 'file:The Thinning - Free Preview-XkVczL-rBSY.f248.webm' -i 'file:The Thinning - Free Preview-XkVczL-rBSY.f251.webm' -c copy -map 0:v:0 -map 1:a:0 'file:The Thinning - Free Preview-XkVczL-rBSY.temp.webm'
Deleting original file The Thinning - Free Preview-XkVczL-rBSY.f248.webm (pass -k to keep)
Deleting original file The Thinning - Free Preview-XkVczL-rBSY.f251.webm (pass -k to keep)
bash-4.4$
```
| account-needed | low | Critical |
221,568,626 | go | all: check build reproducibility | It'd be nice to have the builder (and ideally also a trybot) check reproducibility of builds. This is not an good thing to add to all.bash, because it is slow and computationally expensive.
The implementation is fairly straightforward. I usually do it by running something like:
```
$ ./make.bash
$ toolstash save
$ for f in `seq 15`; do go build -a -toolexec='toolstash -cmp' std cmd; done
```
However, a simpler (if slower) approach that should also work is to run make.bash many times (n=5? 10?) and check that all the resulting .a and executable files are identical. In fact, I think all the filesystem contents should be identical.
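The filesystem-comparison variant can be sketched like this (a toy illustration of the checksum-and-diff step only; the real check would re-run make.bash between snapshots):

```shell
# Checksum every file under the output tree, take a second snapshot, and
# diff the sums; any non-reproducible output shows up as a differing line.
mkdir -p /tmp/repro/pkg
printf 'object code\n' > /tmp/repro/pkg/a.a   # stand-in for build output
snapshot() { (cd /tmp/repro && find pkg -type f | sort | xargs sha256sum); }
snapshot > /tmp/repro/run1.sum
# ... the real check would re-run make.bash here ...
snapshot > /tmp/repro/run2.sum
diff /tmp/repro/run1.sum /tmp/repro/run2.sum && echo reproducible
```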
This is about to become more important because the introduction of a concurrent compiler backend provides lots more opportunity to mess up reproducibility. Detection is currently manual (e.g. #19872).
@bradfitz @adams-sarah
| Builders,NeedsInvestigation | low | Major |
221,579,456 | go | build: run race-enabled toolchain | The compiler already has a concurrent lexer and parser, and it will soon have a concurrent backend. We should have a fast builder running a race-enabled version of the toolchain to catch data races.
I have in mind something like this (very rough sketch):
```
$ ./all.bash # instead of doing 'go install cmd' after bootstrapping, do 'go install -race cmd'
$ for all-platforms-like-misc-compile; do go build -a std cmd; done
```
This would provide race detector coverage for normal and error conditions, plus all the assemblers. Ideally "all platforms" would ultimately also end up including some variations like -shared, -dynlink, -race, -msan, etc.
There might also be some extra flags to add somewhere to ramp up the concurrency, to try to flush out races.
This is clearly not fully thought out yet. :) Step one is to get a basic builder going and make it easy to improve.
I'm happy to do the legwork on this, but I'll need some help and direction, I fear.
@bradfitz @adams-sarah
| Builders,NeedsFix | low | Critical |
221,647,272 | opencv | Unable to stop the stream: Inappropriate ioctl for device - ARM | Hi,
I noticed an error when I try to open a video file (in this case an **avi** file), but only on the ARM arch (both 32- and 64-bit).
Once I try to open the file, the following error is fired:
**Unable to stop the stream: Inappropriate ioctl for device**
Looking into the source code, it seems to be a problem related to **libv4l**, since the error message is emitted from at least one of these file:line locations:
**modules/videoio/src/cap_v4l.cpp:1833**
**modules/videoio/src/cap_libv4l.cpp:1879**
The error is fired only on ARM systems (verified on BeagleBone Black and DragonBoard 410c); I understand this is a problem related to libv4l and not to the OpenCV libs.
Perhaps the interaction between libopencv and libv4l should be reviewed for the ARM architecture (maybe some macro is not properly defined)?
Regards,
simon
| priority: low,category: videoio(camera),incomplete | low | Critical |
221,657,505 | flutter | Support keyboard events in flutter_driver | Add an API for sending keyboard events to devices in Flutter Driver tests.
Useful for writing comprehensive tests of text input, input from physical keyboards, IME compose region etc. | a: tests,a: text input,tool,t: flutter driver,customer: crowd,c: proposal,P3,team-tool,triaged-tool | high | Critical |
221,688,677 | flutter | Flutter driver finder/scrollIntoView does not scroll lazily-created items into view | ## Steps to Reproduce
1. Edit `examples/flutter_gallery/test_driver/transitions_perf_test.dart`
1. From the `demos` list, remove `profiled: true` on any demos that include them.
1. Add `profiled: true` to the 'Sliders' demo or other demo that's initially offscreen.
1. `flutter drive test_driver/transitions_perf.dart`
## Flutter Doctor
```
[✓] Flutter (on Mac OS X 10.12.4 16E195, channel master)
• Flutter at /Users/cbracken/src/flutter/flutter
• Framework revision 00dfa224d1 (34 minutes ago), 2017-04-13 12:31:04 -0700
• Engine revision 5c4e20c4c5
• Tools Dart version 1.23.0-dev.11.6
``` | a: tests,c: new feature,tool,framework,t: flutter driver,P3,team-framework,triaged-framework | low | Minor |
221,717,029 | kubernetes | Make fsType specifiable in volumeClaimTemplates | <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
**Is this a request for help?** (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No.
**What keywords did you search in Kubernetes issues before filing this one?** (If you have found any duplicates, you should instead reply there.): volumeClaimTemplates fstype
---
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one): Feature Request
Currently (at least following the API documentation), there is no way to specify an fsType in volumeClaimTemplates. Some applications (mongodb) that fit the use case of a StatefulSet should use a specific file system for some reason (performance). It is therefore desirable to make this specifiable. | kind/feature,area/stateful-apps,sig/apps,lifecycle/frozen | low | Critical |
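For illustration, a sketch of the shape such a field might take (hypothetical, not the current API):

```yaml
# Hypothetical: an fsType field alongside the existing claim-template spec fields.
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    fsType: xfs            # proposed field
    resources:
      requests:
        storage: 10Gi
```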
221,759,401 | youtube-dl | [sabaq.pk] Site support request | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
- Use *Preview* tab to see how your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.04.14*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.14**
### Before submitting an *issue* make sure you have:
- [ ] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
$ youtube-dl -v <your command line>
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2017.04.14
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
<end of log>
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
I would like a new site to be added: http://www.sabaq.pk
It is very useful for students like me who need courses in our own language as well as in English.
The site is well maintained: courses are categorized well, each subject has its own topics, and each topic has its own chapters. Downloading each video manually takes a lot of time.
It should not be hard to fetch all lessons under each chapter of every topic and save them chapter-wise.
For example, this is a topic page; it lists many chapters, categorized by titles on the page itself rather than on the videos: http://www.sabaq.pk/chapter-page.php?vg=pakistan-physics-ECAT&vsg=pakistan-physics-ECAT-1
Each lesson has its own URL, e.g. http://www.sabaq.pk/video-page.php?sid=pakistan-physics-ECAT-1.1&v=p-9-10-phy-quan-1; this is a sub-page of the site and contains the video for that chapter.
I want to download the video inside that page; it is hosted on YouTube or Metacafe.
Thanks if anyone can help.
| site-support-request | low | Critical |
221,807,586 | rust | document what "thin pointers" are | I got the following error message:
```
error: casting `*const T` as `usize` is invalid
  --> src/refset.rs:17:9
   |
17 |     (self.0 as *const T as usize).hash(state);
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = help: cast through a thin pointer first
```
which was quite clear, except that there seems to be no documentation as to what a "thin" pointer is. Through guesswork I ended up casting to a `*const usize` in between, which made this work for me.
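For what it's worth, a minimal sketch of what the help message seems to mean (names are illustrative; `*const ()` is one possible thin intermediate, `*const u8` is another):

```rust
// "Fat" pointers (to slices or trait objects) carry extra metadata
// (a length or a vtable) alongside the address, so they can't be cast
// straight to usize. A "thin" pointer is address-only. Casting to a
// thin pointer type such as `*const ()` first discards the metadata.
fn addr_of<T: ?Sized>(p: *const T) -> usize {
    p as *const () as usize
}
```

Casting to `*const usize` works for the same reason: any sized pointee makes the pointer thin.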
| C-enhancement,A-diagnostics,T-compiler,D-confusing | low | Critical |
221,815,421 | go | doc: explain how to debug performance problems in Go programs | We should probably have a doc explaining how to debug performance problems in Go programs. There is the profiling blog post, but a larger survey of the field might be good.
/cc @aclements @alandonovan | Documentation,help wanted,NeedsFix | medium | Critical |
221,825,598 | vscode | vscode reads AltGr key as cursor-left move through X11 connections | <!-- Do you have a question? Please ask it on http://stackoverflow.com/questions/tagged/vscode -->
- VSCode Version: 1.11.2 (Commit 6eaebe3b9c70406d67c97779468c324a7a95db0e)
- OS Version: Ubuntu 16.04.2 / Windows 10
Steps to Reproduce:
1. install/start X server on Windows system (Xming, MobaXterm or VcXsrv)
2. start code on the Linux box (make sure your SSH client has X11 forwarding enabled); use AltGr to compose "{" or similar, and the cursor jumps left. Interestingly, both the "{" character and the cursor-left move are produced. This started with VSCode 1.11. Same with code-insiders. Don't get me started.
3. Tested on 4 different computers.
| bug,help wanted,keybindings,linux | medium | Critical |
221,845,953 | kubernetes | CRI: Clearly define the `Privileged` field in the security context | CRI inherits the `Privileged` field from the kubernetes (and Docker) API. It's there to improve the user experience.
Since CRI is not intended to be directly user-facing, it should translate the field into a set of fields (e.g., capabilities).
At the least, we should clearly define what `Privileged` means in the CRI.
/cc @kubernetes/sig-node-api-reviews @Random-Liu @mrunalp @feiskyer @timstclair | sig/node,kind/feature,lifecycle/frozen | medium | Major |
221,846,784 | youtube-dl | Some M3U8 Downloads frozen indefinitely | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
- Use *Preview* tab to see how your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.04.14*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [X] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.14**
### Before submitting an *issue* make sure you have:
- [X] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [X] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
youtube-dl -v http://1253467418.vod2.myqcloud.com/2e2c2ce8vodgzp1253467418/b8840dca9031868222906143597/playlist.m3u8
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'http://1253467418.vod2.myqcloud.com/2e2c2ce8vodgzp1253467418/b8840dca9031868222906143597/playlist.m3u8']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.04.14
[debug] Python version 3.4.4 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-85091-g23ae3cc, ffprobe N-85091-g23ae3cc, rtmpdump 2.4
[debug] Proxy map: {}
[generic] playlist: Requesting header
WARNING: Falling back on generic information extractor.
[generic] playlist: Downloading webpage
[generic] playlist: Downloading m3u8 information
[debug] Invoking downloader on 'http://1253467418.vod2.myqcloud.com/2e2c2ce8vodgzp1253467418/b8840dca9031868222906143597/playlist.m3u8'
[download] Destination: playlist-playlist.mp4
[debug] ffmpeg command line: ffmpeg -y -headers 'Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20150101 Firefox/47.0 (Chrome)
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Language: en-us,en;q=0.5
' -i http://1253467418.vod2.myqcloud.com/2e2c2ce8vodgzp1253467418/b8840dca9031868222906143597/playlist.m3u8 -c copy -f mp4 -bsf:a aac_adtstoasc file:playlist-playlist.mp4.part
ffmpeg version N-85091-g23ae3cc Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 57.100 / 55. 57.100
libavcodec 57. 88.100 / 57. 88.100
libavformat 57. 70.100 / 57. 70.100
libavdevice 57. 5.100 / 57. 5.100
libavfilter 6. 81.100 / 6. 81.100
libswscale 4. 5.100 / 4. 5.100
libswresample 2. 6.100 / 2. 6.100
libpostproc 54. 4.100 / 54. 4.100
```
---
### Description of your *issue*, suggested solution and other information
The download in this case doesn't start and waits indefinitely. I was hoping for a successful download, or at least an error after some minutes of trying.
With some other links from the same domain I have problems like this, but in most cases the download starts and then gets stuck at some point, never returning an error.
When I try to download the m3u8 itself it works fine, as do the .ts files linked inside. I can watch them with an HLS live player too.
Another example with this URL: *http://1253467418.vod2.myqcloud.com/2e2c2ce8vodgzp1253467418/c681f12a9031868222906741782/playlist.m3u8*
| cant-reproduce | low | Critical |
221,867,309 | go | database/sql: add option to customize Begin statement | Currently, the `*DB.BeginTx` method looks like this:
```go
func (db *DB) BeginTx(ctx context.Context, opts *TxOptions) (*Tx, error)
type TxOptions struct {
// Isolation is the transaction isolation level.
// If zero, the driver or database's default level is used.
Isolation IsolationLevel
ReadOnly bool
}
```
However, there are transaction options across drivers that don't fit into these fields. For instance:
* In SQLite, [all transactions are serializable](https://www.sqlite.org/isolation.html), but transactions can [specify when to grab the global lock](https://www.sqlite.org/lang_transaction.html) as part of the `BEGIN` statement. There's no way to get at that from the Go SQLite driver (mattn/go-sqlite3#400)
* In PostgreSQL, the [`DEFERRABLE` option](https://www.postgresql.org/docs/9.6/static/sql-begin.html) can be specified when starting the transaction, which does not fit into either of these options. However, Go users can work around this by using the [`SET TRANSACTION`](https://www.postgresql.org/docs/9.6/static/sql-set-transaction.html) statement.
* In MySQL, the [`WITH CONSISTENT SNAPSHOT`](https://dev.mysql.com/doc/refman/5.7/en/commit.html) option can only be specified when starting the transaction and cannot be specified during [`SET TRANSACTION`](https://dev.mysql.com/doc/refman/5.7/en/set-transaction.html).
It would be nice to have another option in `TxOptions` that allows you to change which statement is used. As an idea:
```go
type TxOptions struct {
// Isolation is the transaction isolation level.
// If zero, the driver or database's default level is used.
Isolation IsolationLevel
ReadOnly bool
// Modifiers is a partial SQL statement that specifies database-specific transaction options.
// The database will prefix "BEGIN " or "START TRANSACTION " to form a full SQL statement, whichever is appropriate.
// The string may or may not have a trailing semicolon.
// If a database does not support transaction options, then Modifiers is ignored.
// If not empty, Isolation and ReadOnly are ignored.
Modifiers string
}
```
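For illustration, here is how a driver might assemble the final statement from the proposed `Modifiers` field. This is a sketch only; `beginStmt` is a hypothetical helper, not part of `database/sql` or any driver:

```go
package main

import (
	"fmt"
	"strings"
)

// beginStmt assembles the statement a driver might issue for the proposed
// Modifiers field. Hypothetical helper for illustration only.
func beginStmt(modifiers string) string {
	stmt := "BEGIN"
	// Per the proposed doc comment, a trailing semicolon is tolerated.
	if m := strings.TrimSuffix(strings.TrimSpace(modifiers), ";"); m != "" {
		stmt += " " + m
	}
	return stmt + ";"
}

func main() {
	fmt.Println(beginStmt(""))            // BEGIN;
	fmt.Println(beginStmt("IMMEDIATE"))   // BEGIN IMMEDIATE; (SQLite: take the write lock up front)
	fmt.Println(beginStmt("DEFERRABLE;")) // BEGIN DEFERRABLE; (PostgreSQL; semicolon trimmed)
}
```

A driver that starts transactions with `START TRANSACTION` instead of `BEGIN` would use that prefix, per the proposed doc comment.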
While this approach is convenient and ensures that the statement is a `BEGIN` or `START TRANSACTION` statement, it's not particularly performant, as it requires the statement to be prepared on each `BeginTx`. However, you could imagine adding a method to `DB` like:
```go
// PrepareBegin prepares a SQL statement to begin a transaction.
// modifiers has the same meaning as in TxOptions.
func (db *DB) PrepareBegin(ctx context.Context, modifiers string) (*Stmt, error)
type TxOptions struct {
// ...
// Stmt is used to start the transaction if non-nil.
// It must be the result of a call to PrepareBegin.
// If non-nil, all other fields are ignored.
Stmt *Stmt
}
``` | NeedsInvestigation,FeatureRequest | medium | Critical |
221,879,424 | flutter | Installation needs to be simpler | Our current installation process is multi-step, involves the command line, etc. We have data showing that this results in failed installations and abandoned installations.
I think we need to have a single-step install.
I suggest:
- [ ] On Mac OS, have a single DMG that you can drag onto your machine (Apps directory? Home directory?) that includes our git repo, cached binaries, and all the dependencies we can legally bundle. This gets you to the same state you have today, and leaves our current upgrade path intact.
- [ ] On Windows, have an installation app that you can download and run that bundles our git repo, cached binaries, and all the dependencies we can legally bundle, and leaves our current upgrade path intact.
- [ ] On Linux, have a bootstrap script that does the same thing into your home directory.
- [ ] We should pin and bundle all our Dart dependencies. https://github.com/flutter/flutter/issues/6767
- [ ] We should try to shed as many dependencies as we can.
| tool,a: first hour,customer: crowd,P3,team-tool,triaged-tool | medium | Critical |
221,940,929 | vscode | Allow to reset “Don't Show Again” preference | VSCode’s “top banner” often includes a “Don't Show Again” button. However it's not clear how a user should undo this action.
Would it be possible to save these preferences to the user's workspace settings, perhaps? Then it is easy for the user to undo this action.
An example of a user needing this: https://github.com/Microsoft/vscode/issues/23314 | feature-request,workbench-notifications | medium | Critical |
221,945,520 | opencv | Issue with BestOf2NearestMatcher on Opencv 3.x.x VS15 | ##### System information (version)
- OpenCV => 3.2
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
##### Detailed description
BestOf2NearestMatcher does not work on Opencv 3 compiled with Visual Studio.
See the code:
##### Steps to reproduce
```cpp
vector<cv::DMatch> vDMatchesLR;
vector<cv::detail::MatchesInfo> vPairwiseMatchesLR;
cv::detail::BestOf2NearestMatcher mMatcherLR(true, match_conf);
mMatcherLR(vImageFeatruesLR, vPairwiseMatchesLR);
mMatcherLR.collectGarbage();
vPairwiseMatchesLR.clear(); // <----- crash
```
| incomplete | low | Critical |
221,958,461 | vscode | More font options | A lot of people use fonts like Fira Code, which doesn't have italics. Wouldn't it be great to have options to force no_italics, no_bold, etc.? | feature-request,editor-core | low | Major |
221,961,281 | youtube-dl | Curiositystream HTTP Error 422 | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.04.15*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.15**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
Problem with fetching video from curiosity stream. This worked some hours ago... Now I get this message:
```
C:\Users\User\Desktop>youtube-dl https://app.curiositystream.com/video/1348 --username [email protected] --password PaSsWoRd --verbose --write-sub --all-subs
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['https://app.curiositystream.com/video/1348', '--username', 'PRIVATE', '--password', 'PRIVATE', '--verbose', '--write-sub', '--all-subs']
[debug] Encodings: locale cp1252, fs utf-8, out utf-8, pref cp1252
[debug] youtube-dl version 2017.04.15
[debug] Python version 3.6.1 - Windows-10-10.0.14393-SP0
[debug] exe versions: ffmpeg N-85091-g23ae3cc, ffprobe N-85091-g23ae3cc
[debug] Proxy map: {}
[curiositystream] Downloading JSON metadata
ERROR: Unable to download JSON metadata: HTTP Error 422: Unprocessable Entity (caused by <HTTPError 422: 'Unprocessable Entity'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "c:\program files\python36\lib\site-packages\youtube_dl\extractor\common.py", line 498, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "c:\program files\python36\lib\site-packages\youtube_dl\YoutubeDL.py", line 2100, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "c:\program files\python36\lib\urllib\request.py", line 532, in open
response = meth(req, response)
File "c:\program files\python36\lib\urllib\request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "c:\program files\python36\lib\urllib\request.py", line 570, in error
return self._call_chain(*args)
File "c:\program files\python36\lib\urllib\request.py", line 504, in _call_chain
result = func(*args)
File "c:\program files\python36\lib\urllib\request.py", line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
```
| account-needed | low | Critical |
221,978,024 | go | runtime: reuse evacuated map overflow buckets | runtime/hashmap.go contains this TODO:
```go
// TODO: reuse overflow buckets instead of using new ones, if there
// is no iterator using the old buckets. (If !oldIterator.)
```
This issue is to track this TODO, and make it easier to refer to in commit messages, etc., since I am looking into some related optimizations. | Performance,compiler/runtime | low | Major |
222,036,719 | vscode | [json] option to format code with leading commas | Can we get an option in VS Code to have it format all lists by using leading commas instead of trailing commas?
This would greatly simplify editing code, and it would also simplify the IDE's automatic code completion (e.g. the VS Code Settings editor).
With the suggested option set to `true`, comma separated lists would proposedly be formatted like this:
```json
{ "workbench.iconTheme": "vs-seti"
, "editor.lineNumbers": "off"
, "editor.tabSize": 2
, "editor.insertSpaces": false
, "editor.autoClosingBrackets": false
}
```
Function calls using line wrapping for their arguments would look like this:
```js
call( 1
, "test"
, new Console()
);
```
a nested sample would look like this:
```json
{ "version": "0.2.0"
, "configurations":
[ { "type": "node"
, "request": "launch"
, "name": "Programm starten"
, "program": "${file}"
}
, { "type": "node"
, "request": "attach"
, "name": "An den Port anfügen"
, "address": "localhost"
, "port": 5858
}
]
}
```
<hr/>
This formatting style would result in much cleaner, tidier code.
<hr/>
### Moreover, adding a new property to an object would be a snap:
Either copy/paste from an existing property (=> no more syntax errors due to redundant trailing commas, see screenshot below):

... or create a new property and you might immediately get IntelliSense right after typing the leading comma of the property to create.
<hr/>
### The suggested formatting style is applicable to all languages
E.g. T-SQL:
```sql
CREATE TABLE
( id INT PRIMARY KEY IDENTITY
, name NVARCHAR(100) NOT NULL UNIQUE CHECK(LEN(name) > 0)
, created DATETIME NOT NULL DEFAULT(GETDATE())
, updated DATETIME NOT NULL DEFAULT(GETDATE())
)
``` | help wanted,feature-request,json,formatting | medium | Critical |
222,038,139 | go | crypto/x509: AKI is left blank even for non-self-signed cert when subject matches CA's subject field | ### What version of Go are you using (`go version`)?
1.8.1
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN="/home/dev/go/bin"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/dev/go"
GORACE=""
GOROOT="/usr/lib/golang"
GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build983119777=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
### What did you do?
Signing any TLS certificates with a matching Subject hits the self-signed cert AKI logic, ignoring the key pairs used, serial number, extensions, and SAN fields. This potentially leaves the AKI field blank even on non-self-signed certs.
### What did you expect to see?
The AKI field on a signed certificate should match the CA's SKI
### What did you see instead?
The AKI is left blank:
https://play.golang.org/p/MPSqzlITG7 | NeedsInvestigation | low | Critical |
222,040,247 | rust | Long trait bounds don't get broken up in rustdoc | Here's an example of some long bounds: [num_traits::NumAssignOps](https://docs.rs/num-traits/latest/num_traits/trait.NumAssignOps.html)
Copy-pasted here:
```rust
pub trait NumAssignOps<Rhs = Self>: AddAssign<Rhs> + SubAssign<Rhs> + MulAssign<Rhs> + DivAssign<Rhs> + RemAssign<Rhs> { }
```
Rustfmt, when given an adequately small width limit, will turn this into:
```rust
pub trait NumAssignOps<Rhs = Self>:
AddAssign<Rhs>
+ SubAssign<Rhs>
+ MulAssign<Rhs>
+ DivAssign<Rhs>
+ RemAssign<Rhs>
{
}
```
It'd be nice if there were some way to make the rustdoc for these a bit less… awful, by default. Preferably without having to require that the user format their code accordingly. | T-rustdoc,C-enhancement | low | Critical |
222,056,962 | go | cmd/go: unify DWARF knowledge | Knowledge about what platforms we should generate DWARF for is currently spread between the linker and cmd/go. We should put it all in one place, probably cmd/go. See CLs 40859 and 40865 for discussion.
| NeedsInvestigation | low | Minor |
222,058,108 | go | runtime: add per-G shadows of writeBarrier.enabled | 1. A large fraction of static instructions are used to implement the write barrier enabled check, which currently always uses an absolute memory reference.
2. On supported RISC architectures accessing data at a small offset from a pointer takes fewer instructions than accessing data at an absolute offset. On all(?) supported architectures, it takes fewer bytes.
3. Reserving a register to point at the `runtime.writeBarrier` struct is possible, but would be a difficult tradeoff. However, many architectures already reserve a register to point at the executing G. If we could check write barrier status relative to G, it would save instructions on all RISC architectures.
4. There are many Gs. Since the enabled flag is updated very rarely, it would be possible to update some or all Gs whenever the master flag is updated. There is a tradeoff of which Gs to update: all of them, those that have Ms, or those that have Ps.
5. I have a proof of concept patch against the riscv tree (https://review.gerrithub.io/#/c/357282/ or sorear/riscv-go@dab0f89). I can rebase it against master if there is interest. This patch takes the approach of keeping Gs updated if they are referenced by any M; thus the additional STW latency scales as the number of Ms, but there is less potential for a race with `exitsyscall`.
6. It is far from clear that I have accounted for all possible races, especially with regards to asynchronous cgo callbacks that could(?) create new Ms at any time.
7. Initial results measured by `.text` size on `cmd/compile`:
| arch | before | after | % |
| --- | ---: | ---: | ---: |
| 386 | 5730855 | 5731751 | +0.016 |
| amd64 | 6764675 | 6765155 | +0.007 |
| arm | 6155060 | 6081080 | -1.202 |
| arm64 | 5850320 | 5725184 | -2.139 |
| mips64 | 7297336 | 7173880 | -1.692 |
| mips | 7159648 | 7097940 | -0.862 |
| ppc64 | 6120800 | 6058392 | -1.020 |
| riscv | 3986656 | 3924656 | -1.555 |
| s390x | 8253200 | 8343808 | +1.098 |
ppc64 and s390x do not benefit from this patch alone as the current backends for those architectures are unable to use the G register as a base register for memory accesses. For ppc64 I did the measurement with a [one-line change](https://github.com/sorear/riscv-go/commit/f2e59211b2941837be657a2a4cbd1dbe5e286001#diff-008717913872fea9b232df7cf2ca820dR140) that enables G as a base register, for s390x I tried to make a similar change but was not able to get it to work.
It may make sense to exclude s390x from the code generation change since s390x can fetch from an absolute address in one instruction; currently the code generation is conditionalized exclusively on `hasGReg`.
The demonstration patch keeps the per-G shadows updated even on 386 and amd64 where they are not used. There could be conditionals added to the runtime to avoid that overhead.
8. I do not have any physical user-programmable hardware of the most affected architectures. While a 2.1% reduction in static instructions for arm64 looks nice on paper, it's moot if it turns out to make things slower for whatever reason.
9. Is this strategically desirable? It makes moving away from STW at GC phase changes marginally more difficult, increases `g` size, and might cause other problems I'm not considering.
cc @josharian @aclements @randall77 | Performance,Proposal,Proposal-Accepted,compiler/runtime | low | Major |
222,071,589 | youtube-dl | Site-support request: 100huntley.com | - [X] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.17**
- [X] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [X] Site support request (request for adding support for a new site)
```
$ youtube-dl -v http://www.100huntley.com/watch?id=223812&title=life-stories-julia-bayer
[1] 5611
me@laptop ~ $ [debug] System config: [u'-o', u'~/Videos/youtube-dl/%(title)s_%(id)s.%(ext)s', u'--netrc', u'--restrict-filenames', u'--write-description', u'--write-sub', u'--yes-playlist', u'--ignore-errors']
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'http://www.100huntley.com/watch?id=223812']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.04.17
[debug] Python version 2.7.12 - Linux-4.4.0-71-generic-x86_64-with-LinuxMint-18.1-serena
[debug] exe versions: ffmpeg 2.8.11-0ubuntu0.16.04.1, ffprobe 2.8.11-0ubuntu0.16.04.1
[debug] Proxy map: {}
[generic] watch?id=223812: Requesting header
WARNING: Falling back on generic information extractor.
[generic] watch?id=223812: Downloading webpage
[generic] watch?id=223812: Extracting information
ERROR: Unsupported URL: http://www.100huntley.com/watch?id=223812
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1898, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2526, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2515, in _XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1653, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1517, in _raiseerror
raise err
ParseError: syntax error: line 1, column 0
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 760, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 429, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2765, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: http://www.100huntley.com/watch?id=223812
[1]+ Exit 1 youtube-dl -v http://www.100huntley.com/watch?id=223812
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.100huntley.com/watch?id=223812&title=life-stories-julia-bayer
---
### Description of your *issue*, suggested solution and other information
When I right-click on the video, I see a pop-up box that says "Powered by JW Player 7.10.5".
It looks like youtube-dl is set up to download "JW Player" videos, as evidenced by https://github.com/rg3/youtube-dl/issues/685. I guess this means that youtube-dl just has to add 100huntley.com to the list of supported URLs.
| site-support-request | low | Critical |
222,121,812 | kubernetes | Downward API does not support gpu resource | I cannot fetch the GPU limit resource via the Downward API using YAML like:
```yaml
valueFrom:
resourceFieldRef:
resource: limits.alpha.kubernetes.io/nvidia-gpu
```
The error message:
```
The Job "gpu-job" is invalid: spec.template.spec.containers[0].env[4].valueFrom.resourceFieldRef.resource: Unsupported value: "limits.alpha.kubernetes.io/nvidia-gpu": supported values: limits.cpu, limits.memory, requests.cpu, requests.memory
``` | sig/node,kind/feature,needs-triage | medium | Major |
222,149,127 | youtube-dl | [Swivl] Site support request | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.04.17*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.17**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
$ youtube-dl -v https://cloud.swivl.com/v/a504a4448d88a4479af8480b25f301dd
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://cloud.swivl.com/v/a504a4448d88a4479af
8480b25f301dd']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.04.17
[debug] Python version 3.5.3 - FreeBSD-11.0-RELEASE-p8-amd64-64bit-ELF
[debug] exe versions: ffmpeg 3.2.4, ffprobe 3.2.4, rtmpdump 2.4
[debug] Proxy map: {}
[generic] a504a4448d88a4479af8480b25f301dd: Requesting header
WARNING: Falling back on generic information extractor.
[generic] a504a4448d88a4479af8480b25f301dd: Downloading webpage
[generic] a504a4448d88a4479af8480b25f301dd: Extracting information
ERROR: Unsupported URL: https://cloud.swivl.com/v/a504a4448d88a4479af8480b25f301dd
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/youtube_dl/YoutubeDL.py", line 760, in extract_info
ie_result = ie.extract(url)
File "/usr/local/lib/python3.5/site-packages/youtube_dl/extractor/common.py", line 429, in extract
ie_result = self._real_extract(url)
File "/usr/local/lib/python3.5/site-packages/youtube_dl/extractor/generic.py", line 2765, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: https://cloud.swivl.com/v/a504a4448d88a4479af8480b25f301dd
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single regular video: https://cloud.swivl.com/v/e4a0e92223672c37a21ff2788007f823
- Single video with slides attached: https://cloud.swivl.com/v/a504a4448d88a4479af8480b25f301dd
---
### Description of your *issue*, suggested solution and other information
youtube-dl is able to download a lot of videos from Swivl just fine but it seems that when the videos have slides attached, it doesn't work at all.
youtube-dl is able to download the following regular video fine: https://cloud.swivl.com/v/e4a0e92223672c37a21ff2788007f823
youtube-dl is unable to download the following video which has slides attached: https://cloud.swivl.com/v/a504a4448d88a4479af8480b25f301dd
Ideally youtube-dl might even download the slides that go with such videos, but the most important bit is that the videos themselves are downloaded. | site-support-request | low | Critical |
222,150,579 | angular | Routing: Preserve trailing slash | <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING
-->
**I'm submitting a ...** (check one with "x")
```
[ ] bug report => search github for a similar issue or PR before submitting
[x] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```
**Current behavior**
<!-- Describe how the bug manifests. -->
A trailing slash at the end of a route is automatically removed.
1) when you load the initial url e.g. example.com/sub/ into the browser (As part of the normalization in the router the function stripTrailingSlash in packages/common/src/location/location.ts seems to remove the trailing slash. The routing works as long as you do not configure the trailing slash in the corresponding route. The address bar will change from the initial example.com/sub/ to example.com/sub)
2) when we use `<a [routerLink]="'/sub/'">go to sub component</a>` we end up with the resulting HTML/DOM that looks like `<a href="/sub">go to sub component</a>` (The trailing slash gets removed (or maybe ignored is the better word) in the function computeNavigation in packages/router/src/create_url_tree.ts.)
**Expected behavior**
<!-- Describe what the behavior would be without the bug. -->
to 1) The trailing slash in the address bar is not removed and still shows example.com/sub/. We can use a trailing slash in our routes.
to 2) when we use `<a [routerLink]="'/sub/'">go to sub component</a>` we should end up with `<a href="/sub/">go to sub component</a>`
Our intent is to implement the feature request ourselves. Looking at the code, however, the change doesn't seem trivial, so it would be nice to get some guidance and feedback on the issue before starting.
**Minimal reproduction of the problem with instructions**
<!--
If the current behavior is a bug or you can illustrate your feature request better with an example,
please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5).
-->
We made a repo just to test the behaviour https://github.com/antonkarsten/trailingslash
This issue on stack overflow describes the same (or a similar) requirement http://stackoverflow.com/questions/40840444/angular2-keep-add-trailing-slash
The following issue seems related (same part of the code) but the actual requirement/issue is different (just linking it here for awareness)
#14905
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
Some of the tooling (tracking & analytics) we are using only support URLs with a trailing slash.
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **Angular version:** 4.0.0
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** [all ]
<!-- All browsers where this could be reproduced -->
* **Language:** [all]
* **Node (for AoT issues):** `node --version` = | feature,freq2: medium,area: router,hotlist: devrel,state: confirmed,router: URL parsing/generation,P3,feature: under consideration | high | Critical |
222,167,031 | rust | Misleading error when a trait method implementation has explicit lifetime parameters but the trait signature does not. | Consider this example ([playground](https://play.rust-lang.org/?gist=f6e6314f04cbe19a3f30d1e03208a281&version=nightly&backtrace=0)):
```
use std::vec::Vec;
use std::option::Option;
trait MyTrait {
fn bar(&self, vec: &Vec<u32>) -> Option<&u32>;
}
struct Foo;
impl MyTrait for Foo {
fn bar<'a>(&self, vec: &'a Vec<u32>) -> Option<&'a u32> {
vec.get(0)
}
}
```
The error message is:
```
rustc 1.18.0-nightly (7627e3d31 2017-04-16)
error[E0495]: cannot infer an appropriate lifetime for lifetime parameter 'a in generic type due to conflicting requirements
--> <anon>:11:5
|
11 | fn bar<'a>(&self, vec: &'a Vec<u32>) -> Option<&'a u32> {
| _____^ starting here...
12 | | vec.get(0)
13 | | }
| |_____^ ...ending here
|
note: first, the lifetime cannot outlive the anonymous lifetime #2 defined on the body at 11:60...
--> <anon>:11:61
|
11 | fn bar<'a>(&self, vec: &'a Vec<u32>) -> Option<&'a u32> {
| _____________________________________________________________^ starting here...
12 | | vec.get(0)
13 | | }
| |_____^ ...ending here
note: ...so that method type is compatible with trait (expected fn(&Foo, &std::vec::Vec<u32>) -> std::option::Option<&u32>, found fn(&Foo, &std::vec::Vec<u32>) -> std::option::Option<&u32>)
--> <anon>:11:5
|
11 | fn bar<'a>(&self, vec: &'a Vec<u32>) -> Option<&'a u32> {
| _____^ starting here...
12 | | vec.get(0)
13 | | }
| |_____^ ...ending here
note: but, the lifetime must be valid for the anonymous lifetime #1 defined on the body at 11:60...
--> <anon>:11:61
|
11 | fn bar<'a>(&self, vec: &'a Vec<u32>) -> Option<&'a u32> {
| _____________________________________________________________^ starting here...
12 | | vec.get(0)
13 | | }
| |_____^ ...ending here
note: ...so that method type is compatible with trait (expected fn(&Foo, &std::vec::Vec<u32>) -> std::option::Option<&u32>, found fn(&Foo, &std::vec::Vec<u32>) -> std::option::Option<&u32>)
--> <anon>:11:5
|
11 | fn bar<'a>(&self, vec: &'a Vec<u32>) -> Option<&'a u32> {
| _____^ starting here...
12 | | vec.get(0)
13 | | }
| |_____^ ...ending here
error: aborting due to previous error
```
All of the text of the error is focussed on the body of the function, even though there are no lifetime problems in the body with respect to the signature of the surrounding function. The actual error is that the signatures of the trait method and its implementation do not match.
| C-enhancement,A-diagnostics,A-lifetimes,A-trait-system,T-compiler,D-papercut | low | Critical |
222,207,728 | flutter | Is there a way to dynamically collapse / control SliverAppBar? | Hi,
I am just starting to learn Flutter and would like to know, for example, how to collapse a SliverAppBar when I click on a button.
Best regards
Matej
| c: new feature,framework,f: material design,f: scrolling,P3,team-design,triaged-design | low | Major |
222,209,689 | angular | Async Router Guard Navigation Issue | **I'm submitting a ...** (check one with "x")
```
[x] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```
**Current behavior**
When the URL is changed manually to a parent route that redirects to a child route, and that child route navigates to a different route with an async guard, the last child route does not always load.
**Expected behavior**
The last child route should load as expected.
**Minimal reproduction of the problem with instructions**
Plunker: http://plnkr.co/edit/PNubvwfYsOTkwPJHrsPb?p=preview
If you remove the guard, everything works as expected.
**Steps to reproduce**
1. Open the plunker
2. Open the execution window in it's own window
3. Copy the URL without the hash (ex. http://run.plnkr.co/mOTJamm7zWdPs0N2/)
4. Paste the URL in a new tab of your browser. This is necessary so you have access to change the hash.
5. Let the page load normally once. You will be directed to `#/test => #/test/testStart => #/test/123`
6. Delete the ID portion of the URL leaving `#/test`. Press enter.
7. You should see in the console the router guard resolves but the route never changes.
** Additional Info**
These are some screenshots from an actual application. I added some debug code to the router. I hope this helps. The first image is a result of me manually modifying the hash in the URL to a parent route. The second is the first child route navigating to a second child route which contains an async guard. Not sure what the third one is or where it came from. There are no additional calls to navigate or navigateByUrl.
<img width="744" alt="screen shot 2017-04-17 at 12 53 44" src="https://cloud.githubusercontent.com/assets/3984979/25101618/9289671e-237a-11e7-9f6f-1119a78f5883.png">
<img width="749" alt="screen shot 2017-04-17 at 12 53 58" src="https://cloud.githubusercontent.com/assets/3984979/25101623/94939bf6-237a-11e7-9219-5872792b0d70.png">
<img width="745" alt="screen shot 2017-04-17 at 12 54 06" src="https://cloud.githubusercontent.com/assets/3984979/25101629/981c50d8-237a-11e7-9ff7-ffc61625d5a8.png">
My routes configuration (this is a subset of the entire app):
```
export const CHANGE_SERVICES_PLANS_ROUTES: Routes = [
{
path: 'change-services',
children: [
{
path: '',
redirectTo: 'select-device',
pathMatch: 'full'
},
{
path: 'select-device',
component: ChangeServicesSelectDeviceComponent,
canActivate: [
KMSISessionGuard
]
},
{
path: 'error',
component: ChangeServicesPlansErrorPageComponent
},
{
path: ':subscriptionId',
component: MakeChangesComponent,
canActivate: [
KMSISessionGuard,
ChangeServicesPlansGuard
]
},
{
path: ':subscriptionId/review-changes',
component: ReviewChangesComponent,
canActivate: [
KMSISessionGuard,
ChangeServicesPlansGuard
]
}
]
},
{
path: 'change-plans',
children: [
{
path: '',
redirectTo: 'select-plan',
pathMatch: 'full'
},
{
path: 'select-plan',
component: ChangePlansComponent
}
]
}
];
```
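For context, the guards referenced above (e.g. `ChangeServicesPlansGuard`) resolve asynchronously, which is when the lost navigation occurs. Below is a stripped-down sketch of that shape; the class name is hypothetical and Angular's `CanActivate` types are replaced by a local interface so the snippet stands alone:

```typescript
// Hypothetical stand-in for an async route guard like the ones above.
interface AsyncGuard {
  canActivate(): Promise<boolean>;
}

class ChangeServicesPlansGuardSketch implements AsyncGuard {
  canActivate(): Promise<boolean> {
    // e.g. fetch subscription data before allowing activation; the route
    // change described above is lost while this promise is pending.
    return new Promise<boolean>((resolve) => setTimeout(() => resolve(true), 0));
  }
}
```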
**Please tell us about your environment:**
Mac OS X 10.11.6
Webstorm 2017.1
* **Angular version:**
4.0.2
* **Browser:**
Chrome 57.0.2987.133
* **Language:**
Typescript 2.2.2
* **Node (for AoT issues):**
6.9.2 | type: bug/fix,freq4: critical,area: router,state: confirmed,router: guards/resolvers,P3 | medium | Critical |
222,233,307 | vscode | SCM - Support keyboard shortcuts |
- VSCode Version: 1.11.2
- OS Version: macOS Sierra 10.12.4
Steps to Reproduce:
1. Open a project under git, with some changed files
2. Ctrl-Shift-G to open the SCM view. It has a commit message at the top and a list of Changes below.
3. Tab to go to the list of Changes.
4. Cursor down until an interesting file is selected.
Now I can do Return to see a diff. After that, the file is highlighted and there are icons in the SCM view for reverting this file (curved arrow) or for staging it (plus sign).
But I can't find a keyboard shortcut for the "stage" icon. I tried Return, Ctrl-Return, Space, Ctrl-Space, S, Shift-S, the Plus key, Cmd-Down (because that opens a file in Explorer view), Cmd-Up, Cmd-Right, Cmd-Left.
I also tried Cmd-Shift-P to find the command, but it doesn't have a keyboard shortcut printed. I also tried to right-click the file, which gave me a context menu (with "stage" in it), again without keyboard shortcut.
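In the meantime, a custom keybinding can be added in `keybindings.json`. The sketch below assumes the stage command's ID is `git.stage` and uses `sideBarFocus` as the context key; both are assumptions that should be checked in the Keyboard Shortcuts editor:

```json
// keybindings.json (sketch; command ID and "when" clause are assumptions)
[
  {
    "key": "cmd+shift+s",
    "command": "git.stage",
    "when": "sideBarFocus"
  }
]
```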
| feature-request,scm | high | Critical |
222,242,132 | go | reflect: StructOf should support embedding types with non-exported methods | There's this code in reflect/type.go:
```go
if ift.nameOff(m.name).pkgPath() != "" {
	// TODO(sbinet)
	panic("reflect: embedded interface with unexported method(s) not implemented")
}
```
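A minimal reproduction of the behavior that check produces (the interface name is made up, and the call is guarded with recover so it reports rather than crashes):

```go
package main

import (
	"fmt"
	"reflect"
)

// hidden has an unexported method, which is what trips the check above.
type hidden interface {
	secret()
}

// tryEmbed attempts to build a struct type embedding the interface and
// reports either the panic message or "no panic".
func tryEmbed() (msg string) {
	defer func() {
		if r := recover(); r != nil {
			msg = fmt.Sprint(r)
		}
	}()
	reflect.StructOf([]reflect.StructField{{
		Name:      "Hidden",
		Type:      reflect.TypeOf((*hidden)(nil)).Elem(),
		Anonymous: true,
	}})
	return "no panic"
}

func main() {
	fmt.Println(tryEmbed())
}
```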
See also #5748. | help wanted,NeedsFix,compiler/runtime | low | Minor |
222,334,283 | go | cmd/compile: go:linkname prevents inlining | go version devel +33c3477039 Tue Apr 18 03:56:16 2017 +0000 linux/amd64
`sync_runtime_procPin` is marked with `//go:linkname sync_runtime_procPin sync.runtime_procPin`. Runtime marks it as inlinable (which is good). However, sync fails to inline it with `cannot inline runtime_canSpin: no function body`.
There are more functions exposed from runtime to sync, sync/atomic, reflect, time, os.
Can we inline them? | Performance,compiler/runtime | medium | Major |
222,356,680 | You-Dont-Know-JS | Add some 'why' explanation to the 4 ways that `this` is set | You briefly introduce this here https://github.com/getify/You-Dont-Know-JS/blob/master/up%20%26%20going/ch2.md#this-identifier and here https://github.com/getify/You-Dont-Know-JS/blob/master/up%20%26%20going/ch3.md#this--object-prototypes.
I know that you don't want to get into details about these four methods, and that you'll do so in a later title, but I think the reader would benefit from a sentence that explains at a high level, what the thinking behind it being set in four different ways is. Why it is useful or necessary to understand it.
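For the record, the four rules can be compressed into one small example (the wording here is mine, not the book's):

```javascript
"use strict";

function whoAmI() { return this; }

// 1. Default binding: a plain call; `this` is undefined in strict mode.
const defaultThis = whoAmI();

// 2. Implicit binding: called as a method, `this` is the owning object.
const obj = { whoAmI };
const implicitThis = obj.whoAmI();

// 3. Explicit binding: call/apply/bind choose `this` directly.
const other = {};
const explicitThis = whoAmI.call(other);

// 4. `new` binding: `this` is the freshly constructed object.
function Person(name) { this.name = name; }
const constructed = new Person("alice");
```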
I'm thinking about something similar to what you have written about closure and modules (`One important application of closure is the module pattern, as we briefly introduced in this book in Chapter 2. The module pattern is perhaps the most prevalent code organization pattern in all of JavaScript; deep understanding of it should be one of your highest priorities.`). Something that sparks the readers imagination and helps fire some neurons before they get to the chapter.
I haven't read the rest of the book yet so there may be a good reason why you don't do this, but I wanted to feed back now to give you the perspective of a new reader (it might be that when I know more about this, I don't find the lack of context so significant).
PS. Loving the book so far :) | for second edition | medium | Minor |
222,428,421 | go | cmd/compile: big array not allocated on heap, which makes the program panic | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.8.1 linux/amd64
### What did you do?
```golang
package main
const N = 1000 * 1000 * 537
var a [N]byte
func main() {
// this will stack overflow
for _, v := range a {
_ = v
}
    // edit: the code below on its own also overflows the stack
var x [N]byte
var y [N]byte
_, _ = x, y
}
```
[edit]: now (Go SDK 1.12.2) the above program doesn't crash, but below @agnivade shows [one case which will still crash](https://github.com/golang/go/issues/20021#issuecomment-481574765).
### What did you expect to see?
runs ok
### What did you see instead?
crash
```
runtime: goroutine stack exceeds 1000000000-byte limit
fatal error: stack overflow
runtime stack:
runtime.throw(0x469cdc, 0xe)
/sdks/go/src/runtime/panic.go:596 +0x95
runtime.newstack(0x0)
/sdks/go/src/runtime/stack.go:1089 +0x3f2
runtime.morestack()
/sdks/go/src/runtime/asm_amd64.s:398 +0x86
goroutine 1 [running]:
main.main()
/tmp/main.go:6 +0x88 fp=0xc460057f88 sp=0xc460057f80
runtime.main()
/sdks/go/src/runtime/proc.go:185 +0x20a fp=0xc460057fe0 sp=0xc460057f88
runtime.goexit()
/sdks/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc460057fe8 sp=0xc460057fe0
exit status 2
```
| NeedsFix,compiler/runtime | medium | Critical |
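Until the compiler handles this, the usual workarounds are to force the data onto the heap explicitly (or to range over a slice such as `a[:]` instead of the array, which avoids the stack copy). A sketch, with `n` reduced here so it runs anywhere:

```go
package main

import "fmt"

const n = 1 << 20 // the report above uses ~537 MB; kept small here

// heapArray avoids the stack copy: new returns a pointer and the array
// itself is heap-allocated.
func heapArray() *[n]byte {
	return new([n]byte)
}

// heapSlice is the more idiomatic form: only the slice header lives on
// the stack, while the backing array is on the heap.
func heapSlice() []byte {
	return make([]byte, n)
}

func main() {
	fmt.Println(len(heapArray()), len(heapSlice()))
}
```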
222,431,052 | go | cmd/compile: optimization problems when dealing with arrays in structs | As @ianlancetaylor and @ALTree mentioned, I changed the version from 1.7.4 to 1.8.1 to show the bug clearly.
### What version of Go are you using (`go version`)?
```go
go version go1.8.1 linux/amd64
```
### What operating system and processor architecture are you using (`go env`)?
```go
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH=""
GORACE=""
GOROOT="/home/zhaozq/src/go"
GOTOOLDIR="/home/zhaozq/src/go/pkg/tool/linux_amd64"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build701526462=/tmp/go-build"
CXX="g++"
CGO_ENABLED="1"
```
### What did you do?
I tested several examples to measure the performance of iterating over an array. All of the versions are compiled with `go build`.
#### version I
I put the array into a struct
```go
package main
const NUM = 100
type T struct {
arr [NUM]int
}
func sub(t *T) {
for i := 0; i < NUM; i++ {
t.arr[i] = 3
}
}
func main() {
var t T
sub(&t)
}
```
The assemble of function sub is
```go
000000000044d5b0 <main.sub>:
44d5b0: 48 8b 44 24 08 mov 0x8(%rsp),%rax
44d5b5: 31 c9 xor %ecx,%ecx
44d5b7: 48 83 f9 64 cmp $0x64,%rcx
44d5bb: 7d 13 jge 44d5d0 <main.sub+0x20>
44d5bd: 84 00 test %al,(%rax)
44d5bf: 48 c7 04 c8 03 00 00 movq $0x3,(%rax,%rcx,8)
44d5c6: 00
44d5c7: 48 ff c1 inc %rcx
44d5ca: 48 83 f9 64 cmp $0x64,%rcx
44d5ce: 7c ed jl 44d5bd <main.sub+0xd>
44d5d0: c3 retq
```
And the loop is
```go
44d5bd: 84 00 test %al,(%rax)
44d5bf: 48 c7 04 c8 03 00 00 movq $0x3,(%rax,%rcx,8)
44d5c6: 00
44d5c7: 48 ff c1 inc %rcx
44d5ca: 48 83 f9 64 cmp $0x64,%rcx
44d5ce: 7c ed jl 44d5bd <main.sub+0xd>
```
The line at `44d5bd` is quite strange. What is the purpose of `test` here? I think it's useless, and it wastes a cycle. We care about this because we use this kind of function a lot.
#### version II
We use a trick so that the compiler does not know what we are passing.
```go
package main
import "unsafe"
const NUM = 100
type T struct {
arr [NUM]int
}
func sub(p unsafe.Pointer) {
for i := 0; i < NUM; i++ {
*(*int)(p) = 3
p = unsafe.Pointer(uintptr(p) + 8)
}
}
func main() {
var t T
sub(unsafe.Pointer(&t))
}
```
The assembler of function sub
```go
000000000044d5b0 <main.sub>:
44d5b0: 48 8b 44 24 08 mov 0x8(%rsp),%rax
44d5b5: 31 c9 xor %ecx,%ecx
44d5b7: 48 83 f9 64 cmp $0x64,%rcx
44d5bb: 7d 14 jge 44d5d1 <main.sub+0x21>
44d5bd: 48 c7 00 03 00 00 00 movq $0x3,(%rax)
44d5c4: 48 ff c1 inc %rcx
44d5c7: 48 83 c0 08 add $0x8,%rax
44d5cb: 48 83 f9 64 cmp $0x64,%rcx
44d5cf: 7c ec jl 44d5bd <main.sub+0xd>
44d5d1: c3 retq
```
The loop is
```go
44d5bd: 48 c7 00 03 00 00 00 movq $0x3,(%rax)
44d5c4: 48 ff c1 inc %rcx
44d5c7: 48 83 c0 08 add $0x8,%rax
44d5cb: 48 83 f9 64 cmp $0x64,%rcx
44d5cf: 7c ec jl 44d5bd <main.sub+0xd>
```
The number of operations in version II is the same as in version I, but the number of memory reads is reduced, so this version is faster.
#### version III
To reduce the number of operations from version II, I changed the code again.
```go
package main
import "unsafe"
const NUM = 100
type T struct {
arr [NUM]int
}
func sub(p unsafe.Pointer) {
maxp := uintptr(p) + 8 * NUM
for ; uintptr(p) < maxp; {
*(*int)(p) = 3
p = unsafe.Pointer(uintptr(p) + 8)
}
}
func main() {
var t T
sub(unsafe.Pointer(&t))
}
```
The assemble of function sub
```go
000000000044d5b0 <main.sub>:
44d5b0: 48 8b 44 24 08 mov 0x8(%rsp),%rax
44d5b5: 48 89 c1 mov %rax,%rcx
44d5b8: 48 81 c1 20 03 00 00 add $0x320,%rcx
44d5bf: 48 89 c2 mov %rax,%rdx
44d5c2: 48 39 ca cmp %rcx,%rdx
44d5c5: 73 13 jae 44d5da <main.sub+0x2a>
44d5c7: 48 c7 00 03 00 00 00 movq $0x3,(%rax)
44d5ce: 48 83 c0 08 add $0x8,%rax
44d5d2: 48 89 c2 mov %rax,%rdx
44d5d5: 48 39 ca cmp %rcx,%rdx
44d5d8: 72 ed jb 44d5c7 <main.sub+0x17>
44d5da: c3 retq
```
The loop is
```go
44d5c7: 48 c7 00 03 00 00 00 movq $0x3,(%rax)
44d5ce: 48 83 c0 08 add $0x8,%rax
44d5d2: 48 89 c2 mov %rax,%rdx
44d5d5: 48 39 ca cmp %rcx,%rdx
44d5d8: 72 ed jb 44d5c7 <main.sub+0x17>
```
The number of operations is not reduced. The reason is the pair at `44d5d2` and `44d5d5`:
```go
  44d5d2:	48 89 c2             	mov    %rax,%rdx
  44d5d5:	48 39 ca             	cmp    %rcx,%rdx
```
Why not `cmp %rcx,%rax`? Why introduce `%rdx`?
### What did you expect to see?
Please focus on versions I and III.
| Performance,help wanted,NeedsFix,compiler/runtime | low | Critical |
222,443,933 | opencv | CV_CAP_GSTREAMER_QUEUE_LENGTH fails |
I have a scenario in which I am processing an mp4 file in batch mode using VideoCapture via gstreamer. In this case appsink requires the sync=false property so that execution runs at the fastest speed without delays.
The problem is that the appsink is configured by cap_gstreamer.cpp as drop with buffer 1. Using the property CV_CAP_GSTREAMER_QUEUE_LENGTH to modify the buffering makes the pipeline crash probably because it is not
Example (OSX with OpenCV 3.2.0) with the following input
filesrc location=x.mp4 ! qtdemux ! h264parse ! vtdec_hw ! videoconvert ! appsink sync=false
Performance motivation: the same Full HD video plays back in 100-200 us using gstreamer and in 3-4 ms using the hardware decoding provided by OSX VideoToolbox and made accessible by gstreamer via vtdec_hw.
The specific error is with qtdemux. | bug,category: videoio | low | Critical |
222,521,773 | TypeScript | The check "Parameter cannot be referenced in its initializer" for default parameters is too strict | <!-- BUGS: Please use this template. -->
<!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript -->
<!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md -->
**TypeScript Version:** 2.2.1 / nightly (2.2.0-dev.201xxxxx)
**Code**
```ts
(function(x = () => x) {
console.log(x());
})();
```
**Expected behavior:**
This is valid code according to ES2015 specs and it should compile correctly.
**Actual behavior:**
Getting "Parameter 'x' cannot be referenced in its initializer". | Bug | low | Critical |
222,524,094 | go | net/http: Server hangs and cannot receive any more requests when using ReverseProxy |
### What version of Go are you using (`go version`)?
$ go version
go version go1.7.5 linux/amd64
### What operating system and processor architecture are you using (`go env`)?
$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/user/tmp/github/reverse-proxy-test"
GORACE=""
GOROOT="/opt/emc/apps/golang/go"
GOTOOLDIR="/opt/emc/apps/golang/go/pkg/tool/linux_amd64"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build252819066=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
### What did you do?
Source: [https://github.com/hleong25/reverse-proxy-test](https://github.com/hleong25/reverse-proxy-test)
I cross compiled from linux to windows: `GOOS=windows go build henryleong.com/...`
The binary will start an HTTP server and will act as a reverse proxy. I will refer to it as the agent. The agent will spawn the same binary but with a new port (the agent's port + 1), reading its stdout and stderr through separate pipes. The spawned binary will be referred to as the plugin. The plugin will handle the actual HTTP request from the agent through a ReverseProxy request.
On a separate machine, I run a loop against the agent's endpoint: `while true; do curl http://<host>:<port>/greetings; done;`
### What did you expect to see?
I expect the curl command will be processed very quickly.
### What did you see instead?
Intermittently, after a short period, curl will hang waiting for the agents server to process the HTTP request.
I believe it has to do with the stdout/stderr pipes, which are not being flushed and create some sort of deadlock/race condition.
### Workaround 1
If it gets into this situation, I can get out of it by making an HTTP request to the plugin directly, after which the agent starts processing HTTP requests again. After a while, the agent gets into this state again and I need to make another HTTP request to the plugin directly.
### Workaround 2
If I modify the plugin to output something to stdout or to stderr periodically, the agent's HTTP server will start responding to HTTP requests. | NeedsInvestigation | medium | Critical |
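If the stdout/stderr pipes are indeed the culprit, the standard fix is to drain them on dedicated goroutines so the child process can never block on a full pipe buffer. A minimal sketch, using `echo` as a stand-in for the plugin (function names are mine, not from the linked repo):

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os/exec"
)

// drain consumes a child pipe line by line so the child process never
// blocks writing into a full pipe buffer.
func drain(prefix string, r io.Reader, out chan<- string) {
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		out <- prefix + ": " + sc.Text()
	}
	close(out)
}

// runAndCollect starts a child, drains its stdout concurrently, and
// returns the first line read (empty if none).
func runAndCollect() string {
	cmd := exec.Command("echo", "hello")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return ""
	}
	lines := make(chan string)
	go drain("plugin", stdout, lines)
	if err := cmd.Start(); err != nil {
		return ""
	}
	first := <-lines
	for range lines { // keep draining until EOF so the child never stalls
	}
	cmd.Wait()
	return first
}

func main() {
	fmt.Println(runAndCollect())
}
```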
222,587,432 | opencv | imdecode should allow more control over decode process | Presently, imdecode is all-or-nothing. If the user's data decodes correctly into some type of image, then the image is returned, and otherwise it isn't.
It would be really helpful if the user had more control over this process. For example, it would be handy if the user could decide if they want to abort the decode process after inspecting the type of image or its width/height. imdecode presently does not allow this. | feature,category: imgcodecs | low | Minor |
222,587,621 | opencv | imdecode does not reuse dst if size differs | imdecode allows users to reuse a pixel buffer by passing it as dst. Unfortunately, this only works if the buffer already has exactly the size needed by the decoded image; otherwise, OpenCV allocates a new buffer. It seems buffer reuse is not really the intent of this argument. It would be really useful if users could make sure that their buffer can always be reused (even if it has to be wrapped in a new Mat structure) | feature,category: imgcodecs | low | Minor |
222,790,685 | opencv | Rigid body transformations using estimateAffine3D | ##### System information (version)
- OpenCV => 3.0+
- Operating System / Platform => ALL
- Compiler => ALL
##### Detailed description
In case of rigid body transforms, the determinant of the rotation part returned should be positive and equal to one. However, `estimateAffine3D` seems to estimate non-rigid scale as well. The current behavior can produce reflected and scaled rotations. The documentation does not show any details, and it seems that this function performs (or shall perform) rigid transformation of physically rotated and translated bodies (orthogonal rotation and translation). However, when some outliers exist, it handles them as scale rather than searching for different candidates.
Hence, there should be a way to restrict the current method to rigid transforms only, in which case it should search RANSAC-style for non-scaled pairs, or fail if no proper points are found.
##### Current signature
```.c
int estimateAffine3D(InputArray src, InputArray dst,
OutputArray out, OutputArray inliers,
double ransacThreshold = 3, double confidence = 0.99);
```
##### Suggested signature
```.c
int estimateAffine3D(InputArray src, InputArray dst,
OutputArray out, OutputArray inliers,
double ransacThreshold = 3, double confidence = 0.99,
bool rigid = false);
```
##### Steps to reproduce
```py
import numpy as np
import cv2
src = np.array([
[10, 10, 10],
[10, 10, 20],
[10, 20, 10],
[10, 20, 20],
])
dst = np.array([
[-20, 20, 20],
[-20, 20, 40],
[20, 40, 20],
[20, 40, 40],
])
retval, Rt, inliers = cv2.estimateAffine3D(src, dst) #, rigid=True
if retval:
print("det: %.3f" % np.linalg.det(Rt[:,:3]))
else:
print("failed")
```
##### Expected output
`det: 1.000` OR `failed`
##### Actual output
`det: -23.762`
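For reference, a strictly rigid fit (det(R) = +1) can be computed with the Kabsch algorithm using only NumPy. This is a sketch of what a `rigid=True` mode could do internally; it is a plain least-squares fit with no RANSAC or outlier handling:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R (det(R) = +1) and translation t with dst ~ R @ src + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Flip the smallest singular direction if needed to avoid a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Unlike the current `estimateAffine3D` behavior shown above, this construction always yields a proper rotation with determinant +1.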
##### Additional
It would be great if the returned transform is a 4x4 matrix. Changing `modules/calib3d/src/ptsetreg.cpp` like below will do the trick:
```diff
solve(A, B, X, DECOMP_SVD);
- X.reshape(1, 3).copyTo(_model);
+ Mat last_row = Mat::zeros(4, 1, CV_64F);
+ last_row.at<double>(3) = 1;
+ X.push_back(last_row);
+ X.reshape(1, 4).copyTo(_model);
``` | feature,category: calib3d | low | Critical |
222,792,162 | vscode | full width tab glyph for editor.renderWhitespace | When viewing invisible characters (editor.renderWhitespace), I would like the character/glyph used for tabs to extend across the full width of the tab (N spaces / to the next tab stop). Sublime Text 3 uses what looks like an emdash(?), or essentially a line that runs to the next tab stop. VSCode uses a single-character-width arrow.
In descending order of preference:
- a line that extends the full tab width (to the next tab stop) like ST3
- an arrow that extends the full tab width (to the next tab stop)
| feature-request,editor-render-whitespace | medium | Major |
222,941,303 | opencv | Python wrapper: Incorrectly wrapped members of cv::ml::TrainData | ##### System information (version)
- OpenCV => 3.0+
- Operating System / Platform => ALL
- Compiler => ALL
##### Detailed description
- `cv::ml::TrainData::getSample` has a `NULL` pointer as a default argument for its `buf` parameter.
However, this is incorrectly wrapped in the generated code as a pointer to a zero-valued float:
```c
float buf=0.f;
//...
ERRWRAP2(_self_->getSample(varIdx, sidx, &buf));
```
- Similarly, `cv::ml::TrainData::getValues` has a NULL pointer as default argument for its `values` parameter, yet it is wrapped as a pointer to a zero-valued float.
```c
float values=0.f;
//...
ERRWRAP2(_self_->getValues(vi, sidx, &values));
```
This will cause undesired behavior, generate garbage data, and possibly cause crashes.
##### Steps to reproduce
- Build opencv_python2
- Check the generated code at `{BUILD_DIR}/modules/python2/pyopencv_generated_types.h` and see how the NULL pointers are changed into pointers to zero-valued floats.
##### How to solve
- `cv::ml::TrainData::getSample` wrapper should be corrected to something similar to:
```diff
- float buf=0.f;
+ PyObject* pyobj_buf = NULL;
+ Ptr<float> buf;
const char* keywords[] = { "varIdx", "sidx", "buf", NULL };
- if( PyArg_ParseTupleAndKeywords(args, kw, "Oif:ml_TrainData.getSample", (char**)keywords, &pyobj_varIdx, &sidx, &buf) &&
- pyopencv_to(pyobj_varIdx, varIdx, ArgInfo("varIdx", 0)) )
+ if( PyArg_ParseTupleAndKeywords(args, kw, "OiO:ml_TrainData.getSample", (char**)keywords, &pyobj_varIdx, &sidx, &pyobj_buf) &&
+ pyopencv_to(pyobj_varIdx, varIdx, ArgInfo("varIdx", 0)) &&
+ pyopencv_to(pyobj_buf, buf, ArgInfo("buf", 0)) )
{
- ERRWRAP2(_self_->getSample(varIdx, sidx, &buf));
+ ERRWRAP2(_self_->getSample(varIdx, sidx, buf));
```
- `cv::ml::TrainData::getValues` wrapper should be corrected to something similar to:
```diff
- float values=0.f;
+ PyObject* pyobj_values = NULL;
+ Ptr<float> values;
const char* keywords[] = { "vi", "sidx", "values", NULL };
- if( PyArg_ParseTupleAndKeywords(args, kw, "iOf:ml_TrainData.getValues", (char**)keywords, &vi, &pyobj_sidx, &values) &&
- pyopencv_to(pyobj_sidx, sidx, ArgInfo("sidx", 0)) )
+ if( PyArg_ParseTupleAndKeywords(args, kw, "iOO:ml_TrainData.getValues", (char**)keywords, &vi, &pyobj_sidx, &pyobj_values) &&
+ pyopencv_to(pyobj_sidx, sidx, ArgInfo("sidx", 0)) &&
+ pyopencv_to(pyobj_values, values, ArgInfo("values", 0)) )
{
- ERRWRAP2(_self_->getValues(vi, sidx, &values));
+ ERRWRAP2(_self_->getValues(vi, sidx, values));
``` | bug,category: python bindings | low | Critical |
222,969,430 | kubernetes | Understand why resource usage of master components is noticeably different in Kubemark | We see a pretty big discrepancy between what we observe in kubemark clusters and in real ones; it needs to be understood. The difference is mostly in API server usage.
E.g. the 99th percentile in kubemark (both results taken after Density, which is run as the first test):
```
container cpu(cores) memory(MB)
"apiserver/apiserver" 0.606 407.89
"controller-manager/controller-manager" 0.120 218.27
"etcd/etcd/data" 0.120 160.75
"etcd/etcd/data-events" 0.034 46.22
"scheduler/scheduler" 0.080 114.16
```
and real cluster:
```json
{
"Name": "etcd-server-e2e-scalability-master/etcd-container",
"Cpu": 0.167254853,
"Mem": 371601408
},
{
"Name": "etcd-server-events-e2e-scalability-master/etcd-container",
"Cpu": 0.14851711,
"Mem": 159068160
},
{
"Name": "kube-apiserver-e2e-scalability-master/kube-apiserver",
"Cpu": 0.972463425,
"Mem": 787206144
},
{
"Name": "kube-controller-manager-e2e-scalability-master/kube-controller-manager",
"Cpu": 0.180771427,
"Mem": 235991040
},
{
"Name": "kube-scheduler-e2e-scalability-master/kube-scheduler",
"Cpu": 0.169906087,
"Mem": 113385472
},
```
@wojtek-t @shyamjvs @kubernetes/sig-scalability-misc | sig/scalability,lifecycle/frozen,lifecycle/stale | medium | Critical |
222,997,448 | TypeScript | Casting of a union type variable doesn't work if interface has all optional properties |
**TypeScript Version:** at least 2.0.0 - 2.3.0
**Code**
```ts
interface ITest1 { IsTest: boolean; }
interface ITest2 { IsTest?: boolean; }
let case1: ITest1 | string;
case1 = case1 as string;
case1.indexOf("_"); // <-- no error
let case2: ITest2 | string;
case2 = case2 as string;
case2.indexOf("_"); // <-- ERROR
```
**Expected behavior:**
In the second case there should be no error reported, same as in the first one. `case2` variable should have type `string` after casting `case2 = case2 as string`
**Actual behavior:**
When a union of types contains an interface with all optional properties, casting to one of the member types does not change the type of the variable to the more specific one. | Suggestion,In Discussion | low | Critical |
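As a workaround, a `typeof` type guard narrows reliably even when all of the interface's properties are optional; a sketch:

```typescript
interface ITest2 { IsTest?: boolean; }

function indexOfUnderscore(x: ITest2 | string): number {
  if (typeof x === "string") {
    return x.indexOf("_"); // x is narrowed to string here
  }
  return x.IsTest ? 1 : 0; // and to ITest2 here
}
```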
223,120,273 | thefuck | Use type annotations and mypy | It would be nice to have type annotations and check code with [mypy](http://mypy-lang.org/).
It can be done in a few steps:
- [X] write [type eraser](https://github.com/nvbn/thefuck/blob/types/type_eraser.py) that will remove annotations from code before release (for python 2 support);
- [ ] annotate sources;
- [ ] make sources pass mypy check.
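A toy version of such an eraser, handling function parameter and return annotations only (the real `type_eraser.py` is more thorough, and `ast.unparse` used here requires Python 3.9+, so this is purely illustrative):

```python
import ast

def erase_annotations(source: str) -> str:
    """Return `source` with function parameter/return annotations removed."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node.returns = None
            for arg in node.args.args + node.args.kwonlyargs:
                arg.annotation = None
    return ast.unparse(tree)
```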
| enhancement,WIP | low | Minor |
223,139,106 | go | net/http: Transport doesn't support NTLM challenge authentication |
### What version of Go are you using (`go version`)?
Go 1.8
### What operating system and processor architecture are you using (`go env`)?
OSX darwin-amd64
### What did you do?
I have sent an HTTPS request to a proxy (NTLM); below are the request and the initial response
(via wireshark)
**Request:**
```
CONNECT www.endpoint.com:443 HTTP/1.1
Host: www.endpoint.com:443
User-Agent: Go-http-client/1.1
Location: https://www.endpoint.com
Proxy-Authorization: NTLM TlRMTVNTUAABAAAAB4IAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAAAAAAMAA=
```
**Response:**
```
HTTP/1.1 407 Proxy Authentication Required
Server: FreeProxy/4.50
Date: Thu, 20 Apr 2017 15:20:10 GMT
Content-Type: text/html
Transfer-Encoding: Chunked
Proxy-Authenticate: NTLM
TlRMTVNTUAACAAAADAAMADgAAAAFgoECloLVra5EaVAAAAAAAAAAA
A9KAEYAUgBPAEcAMAACAAwASgBGAFIATwBHADAAAQAOAFcASQBOA
ZgByAG8AZwAuAGwAbwBjAGEAbAADACYAdwBpAG4AMgAwADEAMgAu
wAbwBjAGEAbAAFABYAagBmAHIAbwBnAC4AbABvAGMAYQBsAAcACAD
Proxy-Connection: Keep-Alive
```
The response above never reaches the client. In **transport.dialConn**, the response comes back with status code 407 for the challenge; because the status code != 200, the persistent connection becomes nil:
```go
br := bufio.NewReader(conn)
resp, err := ReadResponse(br, connectReq) // resp.StatusCode == 407
if err != nil {
	conn.Close()
	return nil, err
}
if resp.StatusCode != 200 {
	f := strings.SplitN(resp.Status, " ", 2)
	conn.Close()
	return nil, errors.New(f[1]) // persistent connection becomes nil
}
```
Since the persistent connection is nil, the request is cancelled and the response is returned as nil with the error **Proxy Authentication Required**. See **transport.RoundTrip**:
```go
pconn, err := t.getConn(treq, cm) // pconn == nil
if err != nil {
	t.setReqCanceler(req, nil)
	req.closeBody()
	return nil, err
}
```
### What did you expect to see?
I expect the returned response to be the one sent from the proxy, with status code 407.
### What did you see instead?
I got nil response with error: Proxy Authentication Required
Note: if I use http instead of https it works OK
This issue is blocking us from developing support for NTLM proxies, as requests to HTTPS endpoints do not return the challenge from the proxy.
| help wanted,FeatureRequest | low | Critical |
223,139,536 | kubernetes | Proportional scaling in Deployments is not fully re-entrant | The fix is to avoid scaling replica sets with replica annotations synced to the actual size of the deployment. While doing that, we need to gracefully handle the case where all replica sets have their annotations synced but the sum of their spec.replicas does not match the spec.replicas of the deployment.
@kubernetes/sig-apps-bugs | kind/bug,area/workload-api/deployment,sig/apps,lifecycle/frozen | low | Critical |
223,171,991 | angular | canActivate=>false can result in blank screen/dead link |
**I'm submitting a ...** (check one with "x")
```
[ X ] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```
**Current behavior**
canActivate, canActivateChild will result in NavigationCancel event if 'false'/reject() happens in the Guard. Additionally the URL is reset to the base path before the navigation attempt (as though to leave the user on the page they navigated from).
This current behavior means that if you just go directly to a location in your browser (e.g. localhost:4200/some-guarded-route) and it returns false the URL will read "localhost:4200/" with a blank page as navigation is canceled.
**Expected behavior**
The right behavior, to me, is that the router should check whether there are further matches in the routing table (i.e. the guard 'false' case would cover the use-case scenario of #14515). By continuing down the route table, the behavior would be to land on the '**' route in the worst case, while also enabling conditional routing.
Minimally NavigationCancel event should determine if the resetUrlToCurrentUrlTree is an already displayed route (either rendering it if necessary or doing nothing if it's already rendered).
I will see if I can issue a PR to fix this for review. Will need to determine what the side effects would be for cases where people would expect just to stay on the currently rendered page (considering it was already rendered). However, to me I think the correct behavior for a dead link should be to go down the route table and hit '**' for a 404 or something vs doing nothing when the user clicks on it and rendering nothing if the user hits the route directly before any route has been rendered.
**Minimal reproduction of the problem with instructions**
Plunker won't work for this because you would need to be able to hit the route as though it was bookmarked to reproduce.
Effectively, just create a project using the CLI, create a route guard, and return false in its methods. Then attempt to hit the route directly without loading a root non-guarded route first.
**What is the motivation / use case for changing the behavior?**
Motivation is that you won't have the user on a blank screen if they happen to hit a guarded route without meeting the conditions.
Second part is that by resolving these scenarios by continuing down the route table you would enable conditional routing for scenarios such as:
Creating routes like:
localhost:4200/:organization
localhost:4200/:user
If the route guard found a valid organization it could allow the request to move forward vs user. This is all gravy. A start is to not give the user a blank page when resetting the route via resetUrlToCurrentUrlTree when the current Url wasn't rendered in the first place.
**Please tell us about your environment:**
Mac OS Sierra (10.12.4), WebStorm, NPM, ng serve
* **Angular version:** Angular 4.0.2
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** Chrome 57
<!-- All browsers where this could be reproduced -->
* **Language:** TypeScript 2.2.2
* **Node (for AoT issues):** 7.9 | type: bug/fix,freq3: high,area: router,state: has PR,state: confirmed,P3 | high | Critical |
223,181,479 | go | crypto/tls: slow server-side handshake performance for RSA certificates without client session cache | Both go 1.8 and go tip provide very slow server-side handshake performance for RSA certificates if the client doesn't use a TLS session cache:
```
$ go get -u github.com/valyala/fasthttp/fasthttputil
$ GOMAXPROCS=1 go test github.com/valyala/fasthttp/fasthttputil -bench=TLSHandshake
goos: linux
goarch: amd64
pkg: github.com/valyala/fasthttp/fasthttputil
BenchmarkPlainHandshake 300000 3953 ns/op
BenchmarkTLSHandshakeWithClientSessionCache 20000 81960 ns/op
BenchmarkTLSHandshakeWithoutClientSessionCache 500 3493016 ns/op
BenchmarkTLSHandshakeWithCurvesWithClientSessionCache 20000 80307 ns/op
BenchmarkTLSHandshakeWithCurvesWithoutClientSessionCache 500 3518508 ns/op
PASS
ok github.com/valyala/fasthttp/fasthttputil 11.683s
```
The results show that a single amd64 core can perform only 300 handshakes per second for new clients without session tickets. This is very discouraging performance compared to `openssl`, as described on https://istlsfastyet.com/ :
```
$ openssl version
OpenSSL 1.0.2g 1 Mar 2016
$ openssl speed ecdh
...
op op/s
256 bit ecdh (nistp256) 0.0001s 12797.0
384 bit ecdh (nistp384) 0.0007s 1416.8
521 bit ecdh (nistp521) 0.0005s 1968.0
```
Note that `openssl` performs 12797 256-bit ecdh operations per second on a single CPU core. This is 40x higher than the results from the comparable `BenchmarkTLSHandshakeWithCurvesWithoutClientSessionCache` above. Below are cpu profiles for this benchmark:
Mixed client and server profile:
```
(pprof) top20
Showing nodes accounting for 154.20ms, 87.56% of 176.10ms total
Dropped 200 nodes (cum <= 0.88ms)
Showing top 20 nodes out of 103
flat flat% sum% cum cum%
82.50ms 46.85% 46.85% 82.50ms 46.85% math/big.addMulVVW /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
19.50ms 11.07% 57.92% 113.20ms 64.28% math/big.nat.montgomery /home/aliaksandr/work/go-tip/src/math/big/nat.go
13.70ms 7.78% 65.70% 13.70ms 7.78% runtime.memmove /home/aliaksandr/work/go-tip/src/runtime/memmove_amd64.s
7.40ms 4.20% 69.90% 21.10ms 11.98% math/big.nat.divLarge /home/aliaksandr/work/go-tip/src/math/big/nat.go
5ms 2.84% 72.74% 5ms 2.84% math/big.mulAddVWW /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
3.40ms 1.93% 74.67% 3.40ms 1.93% crypto/sha256.block /home/aliaksandr/work/go-tip/src/crypto/sha256/sha256block_amd64.s
2.70ms 1.53% 76.21% 2.70ms 1.53% math/big.subVV /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
2.70ms 1.53% 77.74% 2.70ms 1.53% p256MulInternal /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
2.60ms 1.48% 79.22% 2.60ms 1.48% math/big.addVV /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
2.60ms 1.48% 80.69% 2.60ms 1.48% p256SqrInternal /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
2.10ms 1.19% 81.89% 2.10ms 1.19% math/big.shlVU /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
1.80ms 1.02% 82.91% 1.80ms 1.02% syscall.Syscall /home/aliaksandr/work/go-tip/src/syscall/asm_linux_amd64.s
1.30ms 0.74% 83.65% 1.30ms 0.74% crypto/elliptic.p256Sqr /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
1.10ms 0.62% 84.27% 2.30ms 1.31% sync.(*Pool).Put /home/aliaksandr/work/go-tip/src/sync/pool.go
1ms 0.57% 84.84% 4.20ms 2.39% math/big.nat.add /home/aliaksandr/work/go-tip/src/math/big/nat.go
1ms 0.57% 85.41% 3.70ms 2.10% math/big.nat.mulAddWW /home/aliaksandr/work/go-tip/src/math/big/nat.go
1ms 0.57% 85.97% 1ms 0.57% sync.(*Mutex).Lock /home/aliaksandr/work/go-tip/src/sync/mutex.go
1ms 0.57% 86.54% 1ms 0.57% sync.(*Mutex).Unlock /home/aliaksandr/work/go-tip/src/sync/mutex.go
0.90ms 0.51% 87.05% 24.90ms 14.14% math/big.(*Int).GCD /home/aliaksandr/work/go-tip/src/math/big/int.go
0.90ms 0.51% 87.56% 5.40ms 3.07% math/big.(*Int).Mul /home/aliaksandr/work/go-tip/src/math/big/int.go
```
Server profile:
```
(pprof) top20 Server
Showing nodes accounting for 149.20ms, 84.72% of 176.10ms total
Dropped 110 nodes (cum <= 0.88ms)
Showing top 20 nodes out of 86
flat flat% sum% cum cum%
82.50ms 46.85% 46.85% 82.50ms 46.85% math/big.addMulVVW /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
19.50ms 11.07% 57.92% 113.20ms 64.28% math/big.nat.montgomery /home/aliaksandr/work/go-tip/src/math/big/nat.go
13.40ms 7.61% 65.53% 13.40ms 7.61% runtime.memmove /home/aliaksandr/work/go-tip/src/runtime/memmove_amd64.s
7.40ms 4.20% 69.73% 21.10ms 11.98% math/big.nat.divLarge /home/aliaksandr/work/go-tip/src/math/big/nat.go
5ms 2.84% 72.57% 5ms 2.84% math/big.mulAddVWW /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
2.70ms 1.53% 74.11% 2.70ms 1.53% math/big.subVV /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
2.60ms 1.48% 75.58% 2.60ms 1.48% math/big.addVV /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
2.30ms 1.31% 76.89% 2.30ms 1.31% crypto/sha256.block /home/aliaksandr/work/go-tip/src/crypto/sha256/sha256block_amd64.s
2.10ms 1.19% 78.08% 2.10ms 1.19% math/big.shlVU /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
1.40ms 0.8% 78.88% 1.40ms 0.8% p256MulInternal /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
1.30ms 0.74% 79.61% 1.30ms 0.74% syscall.Syscall /home/aliaksandr/work/go-tip/src/syscall/asm_linux_amd64.s
1.20ms 0.68% 80.30% 1.20ms 0.68% p256SqrInternal /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
1.10ms 0.62% 80.92% 2.30ms 1.31% sync.(*Pool).Put /home/aliaksandr/work/go-tip/src/sync/pool.go
1ms 0.57% 81.49% 4.20ms 2.39% math/big.nat.add /home/aliaksandr/work/go-tip/src/math/big/nat.go
1ms 0.57% 82.06% 3.70ms 2.10% math/big.nat.mulAddWW /home/aliaksandr/work/go-tip/src/math/big/nat.go
1ms 0.57% 82.62% 1ms 0.57% sync.(*Mutex).Lock /home/aliaksandr/work/go-tip/src/sync/mutex.go
1ms 0.57% 83.19% 1ms 0.57% sync.(*Mutex).Unlock /home/aliaksandr/work/go-tip/src/sync/mutex.go
0.90ms 0.51% 83.70% 24.90ms 14.14% math/big.(*Int).GCD /home/aliaksandr/work/go-tip/src/math/big/int.go
0.90ms 0.51% 84.21% 5.40ms 3.07% math/big.(*Int).Mul /home/aliaksandr/work/go-tip/src/math/big/int.go
0.90ms 0.51% 84.72% 114.30ms 64.91% math/big.nat.expNNMontgomery /home/aliaksandr/work/go-tip/src/math/big/nat.go
```
Client profile:
```
(pprof) top20 Client
Showing nodes accounting for 14.10ms, 8.01% of 176.10ms total
Showing top 20 nodes out of 202
flat flat% sum% cum cum%
2.60ms 1.48% 1.48% 2.60ms 1.48% p256SqrInternal /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
2.10ms 1.19% 2.67% 2.10ms 1.19% p256MulInternal /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
1.60ms 0.91% 3.58% 2.50ms 1.42% math/big.nat.divLarge /home/aliaksandr/work/go-tip/src/math/big/nat.go
1.10ms 0.62% 4.20% 1.10ms 0.62% crypto/sha256.block /home/aliaksandr/work/go-tip/src/crypto/sha256/sha256block_amd64.s
0.90ms 0.51% 4.71% 0.90ms 0.51% crypto/elliptic.p256Sqr /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
0.80ms 0.45% 5.17% 4.50ms 2.56% crypto/elliptic.p256PointDoubleAsm /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
0.80ms 0.45% 5.62% 0.80ms 0.45% math/big.addMulVVW /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
0.70ms 0.4% 6.02% 0.70ms 0.4% syscall.Syscall /home/aliaksandr/work/go-tip/src/syscall/asm_linux_amd64.s
0.50ms 0.28% 6.30% 1.30ms 0.74% math/big.basicMul /home/aliaksandr/work/go-tip/src/math/big/nat.go
0.40ms 0.23% 6.53% 0.40ms 0.23% math/big.subVV /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
0.30ms 0.17% 6.70% 0.30ms 0.17% crypto/elliptic.p256Select /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
0.30ms 0.17% 6.87% 0.30ms 0.17% crypto/hmac.New /home/aliaksandr/work/go-tip/src/crypto/hmac/hmac.go
0.30ms 0.17% 7.04% 0.30ms 0.17% math/big.mulAddVWW /home/aliaksandr/work/go-tip/src/math/big/arith_amd64.s
0.30ms 0.17% 7.21% 0.30ms 0.17% p256SubInternal /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
0.30ms 0.17% 7.38% 0.30ms 0.17% runtime.mallocgc /home/aliaksandr/work/go-tip/src/runtime/mbitmap.go
0.30ms 0.17% 7.55% 0.30ms 0.17% runtime.memmove /home/aliaksandr/work/go-tip/src/runtime/memmove_amd64.s
0.20ms 0.11% 7.67% 0.60ms 0.34% crypto/elliptic.p256PointAddAffineAsm /home/aliaksandr/work/go-tip/src/crypto/elliptic/p256_asm_amd64.s
0.20ms 0.11% 7.78% 1.70ms 0.97% encoding/asn1.parseField /home/aliaksandr/work/go-tip/src/encoding/asn1/asn1.go
0.20ms 0.11% 7.89% 0.20ms 0.11% math/big.nat.setBytes /home/aliaksandr/work/go-tip/src/math/big/nat.go
0.20ms 0.11% 8.01% 0.20ms 0.11% runtime.heapBitsSetType /home/aliaksandr/work/go-tip/src/runtime/mbitmap.go
```
As you can see, the client side takes about a tenth of the CPU time compared to the server side.
@agl , @vkrasnov | Performance,help wanted | medium | Major |
223,184,677 | youtube-dl | [Youtube] Unable to download the videos on a web page that has Youtube videos embedded inside (http://www.doxtremesports.com/archery-tag) | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.04.17*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.17**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
$ youtube-dl -v http://www.doxtremesports.com/archery-tag
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'http://www.doxtremesports.com/archery-tag']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.04.17
[debug] Python version 2.7.12 - Linux-4.4.0-72-generic-x86_64-with-Ubuntu-16.04-xenial
[debug] exe versions: none
[debug] Proxy map: {}
[generic] archery-tag: Requesting header
WARNING: Falling back on generic information extractor.
[generic] archery-tag: Downloading webpage
[generic] archery-tag: Extracting information
ERROR: Unsupported URL: http://www.doxtremesports.com/archery-tag
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1898, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2526, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2515, in _XML
parser.feed(text)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1653, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1517, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 20, column 14
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 760, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 429, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2765, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: http://www.doxtremesports.com/archery-tag
...
<end of log>
```
---
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
This web page contains Youtube videos. This should already be supported by Youtube-dl. But for some reason, it is unable to grab the videos on this web page. It claims that the URL is unsupported.
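The generic extractor's embed discovery boils down to scanning the page HTML for known player URLs. A rough, standalone illustration of that idea (the regex and function name are illustrative, not youtube-dl's actual code):

```python
import re

def find_youtube_embeds(html):
    """Return YouTube video IDs found in embed/watch/short URLs in a page."""
    pattern = r'(?:youtube\.com/(?:embed/|watch\?v=)|youtu\.be/)([0-9A-Za-z_-]{11})'
    return re.findall(pattern, html)

page = '<iframe src="https://www.youtube.com/embed/dQw4w9WgXcQ"></iframe>'
print(find_youtube_embeds(page))  # ['dQw4w9WgXcQ']
```

If this page's players are injected by JavaScript rather than present in the static HTML, a scan like this finds nothing, which would explain the "Unsupported URL" result.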
| site-support-request | low | Critical |
223,209,532 | flutter | Template files for Android do not pass IntelliJ Code Inspection | ## Steps to Reproduce
- Create a new Intellij flutter project
- VCS/Import into version control/Create GIT repository
- Commit changes on unversioned files
- Browse and add unversioned files
- Select all the files
- Do the commit
- Code analysis dialog box says there are errors

If you review the errors, things look very mysterious and it seems like there is a big problem.
If you just commit anyway, the project works fine.
## Flutter Doctor
```
[✓] Flutter (on Linux, channel master)
• Flutter at /home/darrell/flutter
• Framework revision ca2bf1efd0 (3 hours ago), 2017-04-20 11:37:10 -0700
• Engine revision a5b64899c9
• Tools Dart version 1.23.0-dev.11.7
[✓] Host Executable Compatibility
• Downloaded executables execute on host
[✓] Android toolchain - develop for Android devices (Android SDK 25.0.2)
• Android SDK at /home/darrell/Android/Sdk
• Platform android-25, build-tools 25.0.2
• Java binary at: /home/darrell/android-studio/jre/bin/java
• Java version: OpenJDK Runtime Environment (build 1.8.0_112-release-b736)
[✓] Android Studio (version 2.4)
• Android Studio at /home/darrell/android-studio
• Gradle version 3.4.1
• Java version: OpenJDK Runtime Environment (build 1.8.0_112-release-b736)
[✓] IntelliJ IDEA Community Edition (version 2017.1)
• Dart plugin version 171.4249.16
• Flutter plugin version 12.1
[✓] Connected devices
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 7.1.1 (API 25) (emulator)
``` | platform-android,tool,t: gradle,P2,team-android,triaged-android,fyi-tool | low | Critical |
223,217,737 | rust | Attribute macros invoked at crate root have issues | When invoking an attribute procedural macro in inner form, i.e. `#![foo_attr]`, at the crate root, a resolution error occurs:
```rust
#![feature(proc_macro)]
#![foo_attr]
//^ ERROR: cannot find attribute macro `foo_attr` in this scope
extern crate foo_macros;
use foo_macros::foo_attr;
```
This should, ideally, resolve and execute properly. | A-attributes,T-compiler,A-macros-2.0,C-bug | medium | Critical |
223,234,231 | flutter | If debugDumpRenderTree crashes during a "t" tree dump, so does the flutter tool | I did something to the framework that made the debugDumpRenderTree function throw. This means the RPC did not return successfully when I hit "t" in `flutter run`. The result was that the tool itself crashed, with the following stack (I was running in `--verbose` mode):
```
Unhandled exception:
JSON-RPC error -32000: Server error
package:json_rpc_2/src/client.dart 109 Client.sendRequest
package:json_rpc_2/src/peer.dart 68 Peer.sendRequest
package:flutter_tools/src/vmservice.dart 129 VMService._sendRequest
package:flutter_tools/src/vmservice.dart 648 VM.invokeRpcRaw
===== asynchronous gap ===========================
package:flutter_tools/src/vmservice.dart 886 Isolate.invokeRpcRaw
package:flutter_tools/src/vmservice.dart 966 Isolate.invokeFlutterExtensionRpcRaw
===== asynchronous gap ===========================
package:flutter_tools/src/vmservice.dart 982 Isolate.flutterDebugDumpRenderTree
package:flutter_tools/src/resident_runner.dart 132 ResidentRunner._debugDumpRenderTree
===== asynchronous gap ===========================
package:flutter_tools/src/resident_runner.dart 295 ResidentRunner._commonTerminalInputHandler
===== asynchronous gap ===========================
package:flutter_tools/src/resident_runner.dart 333 ResidentRunner.processTerminalInput
===== asynchronous gap ===========================
dart:async/zone.dart 1128 _rootRunUnary
dart:async/zone.dart 1012 _CustomZone.runUnary
dart:async/zone.dart 909 _CustomZone.runUnaryGuarded
dart:async/stream_impl.dart 330 _BufferingStreamSubscription._sendData
dart:async/stream_impl.dart 257 _BufferingStreamSubscription._add
dart:async/stream_transformers.dart 68 _SinkTransformerStreamSubscription._add
dart:async/stream_transformers.dart 15 _EventSinkWrapper.add
dart:convert/string_conversion.dart 268 _StringAdapterSink.add
dart:convert/ascii.dart 298 _SimpleAsciiDecoderSink.add
dart:convert/chunked_conversion.dart 96 _ConverterStreamEventSink.add
dart:async/stream_transformers.dart 120 _SinkTransformerStreamSubscription._handleData
dart:async/zone.dart 1128 _rootRunUnary
dart:async/zone.dart 1012 _CustomZone.runUnary
dart:async/zone.dart 909 _CustomZone.runUnaryGuarded
dart:async/stream_impl.dart 330 _BufferingStreamSubscription._sendData
dart:async/stream_impl.dart 257 _BufferingStreamSubscription._add
dart:async/stream_controller.dart 757 _StreamController&&_SyncStreamControllerDispatch._sendData
dart:async/stream_controller.dart 628 _StreamController._add
dart:async/stream_controller.dart 574 _StreamController.add
dart:io-patch/socket_patch.dart 1617 _Socket._onData
dart:async/zone.dart 1132 _rootRunUnary
dart:async/zone.dart 1012 _CustomZone.runUnary
dart:async/zone.dart 909 _CustomZone.runUnaryGuarded
dart:async/stream_impl.dart 330 _BufferingStreamSubscription._sendData
dart:async/stream_impl.dart 257 _BufferingStreamSubscription._add
dart:async/stream_controller.dart 757 _StreamController&&_SyncStreamControllerDispatch._sendData
dart:async/stream_controller.dart 628 _StreamController._add
dart:async/stream_controller.dart 574 _StreamController.add
dart:io-patch/socket_patch.dart 1203 _RawSocket._RawSocket.<fn>
dart:io-patch/socket_patch.dart 760 _NativeSocket.issueReadEvent.issue
dart:async/schedule_microtask.dart 41 _microtaskLoop
dart:async/schedule_microtask.dart 50 _startMicrotaskLoop
dart:isolate-patch/isolate_patch.dart 99 _runPendingImmediateCallback
dart:isolate-patch/isolate_patch.dart 152 _RawReceivePortImpl._handleMessage
../../runtime/vm/dart_api_impl.cc: 1871: error: Dart_GetMainPortId expects there to be a current isolate. Did you forget to call Dart_CreateIsolate or Dart_EnterIsolate?
/usr/local/google/home/ianh/dev/flutter/bin/flutter: line 91: 31996 Aborted (core dumped) "$DART" $FLUTTER_TOOL_ARGS "$SNAPSHOT_PATH" "$@"
```
cc @johnmccutchan | tool,P3,team-tool,triaged-tool | low | Critical |
223,273,649 | react | Seb's Deprecation Wishlist Umbrella | I have a list of breaking changes that I'd like to see because I think they're not strictly necessary features, can often be replaced by other APIs and their very existence makes implementations more constrained, even when they're not used.
This list is not meant to be anything we're planning on actively doing. It's just a drop point where I can add things as I think of them.
- [ ] Shallow freeze the `defaultProps` object and make the `defaultProps` property non-configurable/non-writable after the first `createElement` or `createFactory` call. (Enables inlining/resolution of defaults statically.)
- [ ] Treat `key`/`ref` as a separate namespace in JSX. Meaning that objects that are spread onto JSX don't transfer `key` and `ref`. Enables inlining of props object even if spread type is unknown. E.g.
```js
let x = <Foo {...{key:'bar'}} />;
x.key; // null
x.props.key; // 'bar'
let y = <Foo key="bar" />;
y.key; // 'bar'
y.props.key; // undefined
```
- [ ] Drop support for string refs.
- [ ] Drop support for `ReactDOM.findDOMNode(...)` and `ReactNative.findNodeHandle(...)`. These are slower in Fiber and require a tree to be materialized/stateful/introspectable at arbitrary times/threads even before we know if this will ever get called. Less automatic cleanup. Could possibly have an alternative API that works more like refs. However, just ref forwarding probably solves all legit use cases better.
- [ ] Make `.type` and `.props` private on `ReactElement`s so that they can't be introspected (just like bound functions/closures). This makes optimizations like automatically making components asynchronous/synchronous safe, or inlining components several levels deep.
223,428,054 | awesome-mac | This is a crazy plan. Everybody vote for me. | I am going to develop an app for this list. The following is the design draft; comments on the design are welcome.

I will use [React Native macOS](https://github.com/ptmt/react-native-desktop) or [Electron](http://electron.atom.io) to develop it. In fact, I'd prefer to use [React Native macOS](https://github.com/ptmt/react-native-desktop). Please give me a suggestion!
[Electron](http://electron.atom.io) vote with "👍", [React Native macOS](https://github.com/ptmt/react-native-desktop) vote with "😀". | vote | medium | Critical |
223,469,118 | neovim | stderr from clipboard provider is drawn over buffer | - `nvim --version`:
```
NVIM v0.2.0-1395-gce7cba6d
Build type: Debug
Compilation: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -Wconversion -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -DNVIM_MSGPACK_HAS_FLOAT32 -g -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wvla -fstack-protector-strong -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -I/Users/anrussell/src/neovim/build/config -I/Users/anrussell/src/neovim/src -I/Users/anrussell/src/neovim/.deps/usr/include -I/Users/anrussell/src/neovim/.deps/usr/include -I/Users/anrussell/src/neovim/.deps/usr/include -I/Users/anrussell/src/neovim/.deps/usr/include -I/Users/anrussell/src/neovim/.deps/usr/include -I/Users/anrussell/src/neovim/.deps/usr/include -I/usr/local/opt/gettext/include -I/usr/include -I/Users/anrussell/src/neovim/build/src/nvim/auto -I/Users/anrussell/src/neovim/build/include
Compiled by anrussell@anrussell-mba
Optional features included (+) or not (-): +acl +iconv +jemalloc +tui
For differences from Vim, see :help vim-differences
system vimrc file: "$VIM/sysinit.vim"
fall-back for $VIM: "/usr/local/share/nvim"
```
- Vim (version: ) behaves differently? N/A
- Operating system/version: macOS 10.12
- Terminal name/version: iTerm 2
- `$TERM`: screen-256color
### Actual behaviour
When a clipboard provider prints to stderr (such as `lemonade` being unable to connect to a socket), it draws the stderr over the buffer.
### Expected behaviour
It should print a warning in the command line instead.
### Steps to reproduce using `nvim -u NORC`
Set `PATH` to only include lemonade. Do not start the server.
Try to paste.
### Additional Information
I have a pull request forthcoming for this issue, but I'm not sure how to test it. How can I mock a clipboard provider printing to STDERR?
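One way to mock it (an assumption about the test setup, not an established pattern in the suite): drop a fake clipboard executable on `$PATH` that writes to stderr and exits non-zero, the way an unreachable `lemonade` server does. A sketch:

```shell
#!/bin/sh
# Create a fake clipboard tool that, like lemonade with no server running,
# writes a diagnostic to stderr and fails.
mkdir -p /tmp/fakebin
cat > /tmp/fakebin/lemonade <<'EOF'
#!/bin/sh
echo "lemonade: connection refused" >&2
exit 1
EOF
chmod +x /tmp/fakebin/lemonade
# Invoke it the way a provider would, capturing stderr separately.
/tmp/fakebin/lemonade copy 2>/tmp/fakebin/err || true
cat /tmp/fakebin/err
```

In a functional test one could prepend `/tmp/fakebin` to `$PATH` before starting nvim and then attempt a paste, asserting on where the stderr text ends up.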
| provider,ux,clipboard | low | Critical |
223,476,019 | go | net/http/httputil: ReverseProxy gives "context canceled" message when client disappears | When clients prematurely abandon a proxied HTTP request, there is no identified cause other than "context canceled".
### What version of Go are you using (`go version`)?
`go version go1.8.1 darwin/amd64`
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/aron/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/ts/s940qvdj5vj1czr9qh07fvtw0000gn/T/go-build063338539=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
```
### What did you do?
```go
package main
import (
"fmt"
"log"
"net"
"net/http"
"net/http/httptest"
"net/http/httputil"
"net/url"
"time"
)
func main() {
backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log.Printf("backend: %s", r.URL)
w.Write([]byte("This is unexpected!"))
}))
defer backend.Close()
backendURL, err := url.Parse(backend.URL)
if err != nil {
log.Fatalf("could not parse backend url %s: %s", backend.URL, err)
}
director := func(req *http.Request) {
req.URL = &url.URL{}
req.URL.Scheme = "http"
req.URL.Host = backendURL.Host
req.Host = ""
}
pxy := &httputil.ReverseProxy{Director: director}
frontend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log.Printf("proxying: %s", r.URL)
pxy.ServeHTTP(w, r)
}))
defer frontend.Close()
frontendURL, err := url.Parse(frontend.URL)
if err != nil {
log.Fatalf("could not parse frontend url %s: %s", frontend.URL, err)
}
conn, err := net.DialTimeout("tcp", frontendURL.Host, 5*time.Second)
if err != nil {
log.Fatalf("could not dial: %s: %s", frontendURL.Host, err)
}
conn.Write([]byte("GET / HTTP/1.1\n"))
conn.Write([]byte(fmt.Sprintf("Host: %s\n", frontendURL.Host)))
conn.Write([]byte("\n"))
//_, err = io.Copy(os.Stdout, conn)
err = conn.Close()
if err != nil {
log.Fatalf("could not close: %s: %s", frontendURL.Host, err)
}
// without this, the "http: proxy error: context canceled" message might not appear.
time.Sleep(2 * time.Second)
}
```
### What did you expect to see?
I'm not entirely sure what the right amount of transparency is, but it would have been helpful to have something indicating that the client has closed the connection or otherwise gone away.
### What did you see instead?
```
2017/04/21 15:30:07 proxying: /
2017/04/21 15:30:07 http: proxy error: context canceled
```
I believe the context is canceled by the following code from `ReverseProxy.ServeHTTP`:
```go
ctx := req.Context()
if cn, ok := rw.(http.CloseNotifier); ok {
var cancel context.CancelFunc
ctx, cancel = context.WithCancel(ctx)
defer cancel()
notifyChan := cn.CloseNotify()
go func() {
select {
case <-notifyChan:
cancel()
case <-ctx.Done():
}
}()
}
```
| NeedsInvestigation | medium | Critical |
223,503,425 | go | net: Windows test use PowerShell "getmac", not always available | The Windows tests use PowerShell "getmac", which only works in Windows Server 2008.
On Windows Server 2012 and 2016, it fails.
Maybe the replacement is now `Get-NetAdapter | select Name,MacAddress`?
/cc @alexbrainman @mikioh | Testing,OS-Windows | low | Major |
223,519,780 | go | runtime: TestSetGCPercent flaky | Despite CL 41374, I'm seeing a bunch of trybot failures like https://storage.googleapis.com/go-build-log/6c83b9bd/linux-amd64_cce08b9e.log:
```
--- FAIL: TestSetGCPercent (0.17s)
garbage_test.go:151: NextGC = 172 MB, want 150±20 MB
```
cc @aclements
| Testing,help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
223,540,151 | pytorch | Unhelpful CrossEntropyLoss dimension error message | I believe I've stumbled upon a slight whoops in nn.CrossEntropyLoss(). If the criterion is called with (a, y) where a has shape (N, C) and y has shape (N) such that some yᵢ ≥ C, I get the internal error message below (it took a while to parse)... seems like this could use a wrapper. A simple note following the internal error would suffice--how about: "Ensure the class dimension of the predictions matches the class dimension of the targets" ?
THCudaCheck FAIL file=/py/conda-bld/pytorch_1490903321756/work/torch/lib/THC/generic/THCTensorCopy.c line=65 error=59 : device-side assert triggered
System: Ubuntu 16.06, Python 3.6 (conda install).
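The proposed wrapper amounts to a bounds check on the targets before the CUDA kernel runs. A minimal, torch-free sketch of that check (the function name and message are illustrative, not the actual PyTorch API):

```python
def check_targets(targets, num_classes):
    """Raise a readable error if any target index falls outside [0, num_classes)."""
    bad = [t for t in targets if not (0 <= t < num_classes)]
    if bad:
        raise ValueError(
            "Ensure the class dimension of the predictions matches the class "
            "dimension of the targets: got target(s) %r but only %d classes"
            % (bad, num_classes)
        )

# y = [0, 2, 5] with C = 3 would trip the device-side assert; this fails loudly instead.
try:
    check_targets([0, 2, 5], num_classes=3)
except ValueError as e:
    print("caught:", e)
```

Doing the check host-side costs a sync in the GPU path, which is presumably why a note appended to the existing error would be the cheaper fix.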
cc @ngimel | module: loss,module: cuda,module: error checking,triaged | low | Critical |
223,544,396 | pytorch | [proposed feature] Eve: Improving Stochastic Gradient Descent with Feedback | Hi all,
I implemented a paper "Improving Stochastic Gradient Descent with Feedback" as called [Eve](https://arxiv.org/abs/1611.01505).
Eve is a modified version of Adam, and outperforms other SGD algorithms on some benchmark tasks including image classification.
Please, give me any advice and code review.
The code is uploaded in this [gist](https://gist.github.com/snowyday/19b959b268d3af7785b2dd0e2f37f6bb), and below:
```python
import math
# from .optimizer import Optimizer
from torch.optim import Optimizer
class Eve(Optimizer):
"""Implements Eve (Adam with feedback) algorithm.
    It has been proposed in `Eve: Improving Stochastic Gradient Descent with Feedback`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-2)
betas (Tuple[float, float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999, 0.999))
thr ((Tuple[float, float], optional): lower and upper threshold for relative change
(default: (0.1, 10))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
.. _Eve\: Improving Stochastic Gradient Descent with Feedback
https://arxiv.org/abs/1611.01505
"""
def __init__(self, params, lr=1e-2, betas=(0.9, 0.999, 0.999), eps=1e-8, thr=(0.1, 10), weight_decay=0):
defaults = dict(lr=lr, betas=betas, eps=eps, thr=thr, weight_decay=weight_decay)
super(Eve, self).__init__(params, defaults)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
if closure is not None:
loss = closure()
loss_val = loss.data[0]
else:
raise ValueError("Eve requires a value of the loss function.")
for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                # key state by the param itself (not id(p)), matching other
                # optimizers so state survives state_dict round-trips
                state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = grad.new().resize_as_(grad).zero_()
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = grad.new().resize_as_(grad).zero_()
# Previous loss value
state['loss_hat_prev'] = loss_val
# Feed-back from the loss function
state['decay_rate'] = 1
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2, beta3 = group['betas']
thl, thu = group['thr']
loss_hat_prev = state['loss_hat_prev']
state['step'] += 1
if group['weight_decay'] != 0:
grad = grad.add(group['weight_decay'], p.data)
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
if state['step'] > 1:
if loss_val >= loss_hat_prev:
lower_bound = thl + 1
upper_bound = thu + 1
else:
lower_bound = 1 / (thu + 1)
upper_bound = 1 / (thl + 1)
clip = min(max(lower_bound, loss_val / loss_hat_prev), upper_bound)
loss_hat = clip * loss_hat_prev
relative_change = abs(loss_hat - loss_hat_prev) / min(loss_hat, loss_hat_prev)
state['decay_rate'] = beta3 * state['decay_rate'] + (1 - beta3) * relative_change
state['loss_hat_prev'] = loss_hat
denom = exp_avg_sq.sqrt().mul_(state['decay_rate']).add_(group['eps'])
p.data.addcdiv_(-step_size, exp_avg, denom)
return loss
``` | feature,module: optimizer,triaged | low | Critical |
223,588,182 | go | runtime: fatal error: addspecial on invalid pointer | A user of our library, github.com/ethereum/go-ethereum/mobile, has submitted the crash report below. The JNI library that crashed was built using gomobile and go1.8.1 android/arm64 and ran on a Galaxy Note 4. We don't have more details, sorry.
```text
04-22 20:42:32.717 11258 11351 E Go : fatal error: addspecial on invalid pointer
04-22 20:42:32.717 11258 11352 E GoLog : fatal error: addspecial on invalid pointer
04-22 20:42:32.717 11258 11351 E Go : runtime stack:
04-22 20:42:32.717 11258 11351 E Go : runtime.throw(0x9f1ef649, 0x1d)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/panic.go:596 +0x78
04-22 20:42:32.717 11258 11351 E Go : runtime.addspecial(0x9e7d3600, 0x7d785190, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/mheap.go:1131 +0x1bc
04-22 20:42:32.717 11258 11351 E Go : runtime.setprofilebucket(0x9e7d3600, 0x7d2a33d0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/mheap.go:1292 +0x70
04-22 20:42:32.717 11258 11351 E Go : runtime.mProf_Malloc.func1()
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/mprof.go:258 +0x24
04-22 20:42:32.717 11258 11351 E Go : runtime.systemstack(0x8e7ee000)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/asm_arm.s:264 +0x8c
04-22 20:42:32.717 11258 11351 E Go : runtime.mstart()
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/proc.go:1132
04-22 20:42:32.717 11258 11351 E Go : goroutine 30422 [running]:
04-22 20:42:32.717 11258 11351 E Go : runtime.systemstack_switch()
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/asm_arm.s:209 +0x4 fp=0x9bd9c6e4 sp=0x9bd9c6e0
04-22 20:42:32.717 11258 11351 E Go : runtime.mProf_Malloc(0x9e7d3600, 0x100)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/mprof.go:259 +0x104 fp=0x9bd9c798 sp=0x9bd9c6e4
04-22 20:42:32.717 11258 11351 E Go : runtime.profilealloc(0x8e7fa780, 0x9e7d3600, 0x100)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/malloc.go:831 +0x38 fp=0x9bd9c7a4 sp=0x9bd9c798
04-22 20:42:32.717 11258 11351 E Go : runtime.mallocgc(0x100, 0x9f5e9798, 0x9ea67501, 0x9f9c3798)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/malloc.go:761 +0x554 fp=0x9bd9c7fc sp=0x9bd9c7a4
04-22 20:42:32.717 11258 11351 E Go : runtime.newobject(0x9f5e9798, 0x9f5e9798)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/malloc.go:808 +0x2c fp=0x9bd9c810 sp=0x9bd9c7fc
04-22 20:42:32.717 11258 11351 E Go : reflect.unsafe_New(0x9f5e9798, 0x9bd9c860)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/malloc.go:813 +0x1c fp=0x9bd9c81c sp=0x9bd9c810
04-22 20:42:32.717 11258 11351 E Go : reflect.New(0x9f9c3798, 0x9f5e9798, 0x91, 0x9f9c3798, 0x9f5e9798)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/reflect/value.go:2138 +0x40 fp=0x9bd9c83c sp=0x9bd9c81c
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/rlp.writeByteArray(0x9f5e9798, 0x9e7d3500, 0x91, 0x9cfe8930, 0x0, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/rlp/encode.go:451 +0xec fp=0x9bd9c86c sp=0x9bd9c83c
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/rlp.writeInterface(0x9f572728, 0x9e6e68b0, 0x194, 0x9cfe8930, 0x9f572728, 0x9e6e68b0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/rlp/encode.go:506 +0x1c8 fp=0x9bd9c8a4 sp=0x9bd9c86c
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/rlp.makeSliceWriter.func1(0x9f54a100, 0x5e634090, 0x97, 0x9cfe8930, 0x0, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/rlp/encode.go:520 +0xb4 fp=0x9bd9c8d0 sp=0x9bd9c8a4
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/rlp.(*encbuf).encode(0x9cfe8930, 0x9f54a100, 0x5e634090, 0x9f60f420, 0x9cfe8930)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/rlp/encode.go:188 +0xdc fp=0x9bd9c8f8 sp=0x9bd9c8d0
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/rlp.Encode(0x7e23c040, 0x5e630d00, 0x9f54a100, 0x5e634090, 0x0, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/rlp/encode.go:89 +0xe8 fp=0x9bd9c914 sp=0x9bd9c8f8
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/consensus/clique.sigHash(0x90f8d400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/clique.go:159 +0x4bc fp=0x9bd9c960 sp=0x9bd9c914
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/consensus/clique.ecrecover(0x90f8d400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x2ad872e6, 0x42c78412)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/clique.go:173 +0x9c fp=0x9bd9c9c4 sp=0x9bd9c960
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/consensus/clique.(*Snapshot).apply(0x9e6d6af0, 0x9b7994e8, 0x1, 0x2, 0x9e6d6af0, 0x9b799401, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/snapshot.go:193 +0x334 fp=0x9bd9cc5c sp=0x9bd9c9c4
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/consensus/clique.(*Clique).snapshot(0x8eb30690, 0x9f9c0a80, 0x8eb306e0, 0xb8dd, 0x0, 0x65866e97, 0x523021a8, 0xbc9da3e9, 0xb536bc67, 0xaf345ce9, ...)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/clique.go:418 +0x770 fp=0x9bd9ce14 sp=0x9bd9cc5c
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/consensus/clique.(*Clique).verifyCascadingFields(0x8eb30690, 0x9f9c0a80, 0x8eb306e0, 0x90f8d600, 0x8f83e500, 0x7b6, 0x32c0, 0x2, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/clique.go:338 +0x24c fp=0x9bd9cedc sp=0x9bd9ce14
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/consensus/clique.(*Clique).verifyHeader(0x8eb30690, 0x9f9c0a80, 0x8eb306e0, 0x90f8d600, 0x8f83e500, 0x7b6, 0x32c0, 0x0, 0x0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/clique.go:311 +0x2fc fp=0x9bd9cf48 sp=0x9bd9cedc
04-22 20:42:32.717 11258 11351 E Go : github.com/ethereum/go-ethereum/consensus/clique.(*Clique).VerifyHeaders.func1(0x8f83e500, 0x800, 0x32c0, 0x8eb30690, 0x9f9c0a80, 0x8eb306e0, 0x9b839780, 0x9b8397c0)
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/clique.go:243 +0x70 fp=0x9bd9cfcc sp=0x9bd9cf48
04-22 20:42:32.717 11258 11351 E Go : runtime.goexit()
04-22 20:42:32.717 11258 11351 E Go : /home/travis/build/ethereum/go-ethereum/go/src/runtime/asm_arm.s:1017 +0x4 fp=0x9bd9cfcc sp=0x9bd9cfcc
04-22 20:42:32.717 11258 11351 E Go : created by github.com/ethereum/go-ethereum/consensus/clique.(*Clique).VerifyHeaders
04-22 20:42:32.717 11258 11351 E Go : /home/travis/go/src/github.com/ethereum/go-ethereum/consensus/clique/clique.go:251 +0xb4
... other stacks elided, see linked issue for more ...
``` | help wanted,OS-Android,NeedsInvestigation,mobile,compiler/runtime | low | Critical |
223,603,575 | node | readline: prompt opt-out behavior | * **Version**: v8.0.0-rc.0
* **Platform**: Windows 4 x64
* **Subsystem**: readline
I am not sure if this is a bug or a feature.
Consider this simple echo interface:
```js
'use strict';
const readline = require('readline');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
rl.on('line', (line) => {
console.log(line);
});
```
It does not call `rl.prompt()` anywhere, and with ordinary input there is no default prompt:
<details>
<summary>GIF 1:</summary>

</details><br>
However, if the user presses the DELETE, BACKSPACE or UP key, the default prompt immediately and somewhat unexpectedly appears, shifting the whole line:
<details>
<summary>GIF 2:</summary>

</details><br>
This can be prevented by setting the `prompt` option to an empty string. However, I am not sure whether this opt-out rule is intended behavior. | confirmed-bug,help wanted,readline | low | Critical |
223,605,000 | TypeScript | Variable declaration list comments from async functions are removed | **TypeScript Version:** 2.2.1
**Code**
```typescript
async function example() {
// result.value will be promise
const promise = Promise.resolve("foo");
await promise;
// result.value will be "foo"
return "foo";
}
```
**Expected behavior:**
Ignoring `__awaiter` and `__generator`, the output should include `// result.value will be promise`
**Actual behavior:**
```typescript
function example() {
return __awaiter(this, void 0, void 0, function () {
var promise;
return __generator(this, function (_a) {
switch (_a.label) {
case 0:
promise = Promise.resolve("foo");
return [4 /*yield*/, promise];
case 1:
_a.sent();
// result.value will be "foo"
return [2 /*return*/, "foo"];
}
});
});
}
``` | Bug,Effort: Difficult,Domain: Transforms,Domain: Comment Emit | low | Major |
223,636,199 | go | cmd/compile: postpone argument conversions after inlining | Look at this code:
```go
package main
import (
"fmt"
)
type Log struct {
Enabled bool
}
func (l *Log) Print(args ...interface{}) {
if l.Enabled {
fmt.Println(args...)
}
}
func main() {
var x int = 1000
l := Log{Enabled: false}
l.Print("data", x)
}
```
If the code is compiled with `-gcflags=-l=4`, `Log.Print()` is inlined into `main()`. Unfortunately, this does not get rid of the expensive argument conversions to empty interfaces.
I know some work has been merged to speed up conversions of common data to empty interfaces, and that's great for when you are actually logging; but if logging is disabled, you can't fully get rid of the overhead.
I have a real high-performance program where I selectively enable logging for debugging reasons. Unfortunately, even with logging disabled, the overhead is measurable so I'm forced to actually comment out logging lines.
If the above pattern was optimized correctly, I could reword the API of my high-perf logging library to make sure such an optimization triggers for me.
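For what it's worth, the only zero-overhead pattern I've found today is to guard at the call site, so the `[]interface{}` is never constructed when logging is off. A runnable sketch (the `calls` counter is purely illustrative, not part of any real API):

```go
package main

import "fmt"

type Log struct {
	Enabled bool
	calls   int // illustration only: counts Print invocations; each one implies args were boxed at the call site
}

func (l *Log) Print(args ...interface{}) {
	l.calls++ // by the time we get here, the []interface{} was already allocated
	if l.Enabled {
		fmt.Println(args...)
	}
}

func main() {
	l := Log{Enabled: false}
	x := 1000

	// Caller-side guard: with logging disabled, the conversion of x to
	// interface{} and the args slice are skipped entirely.
	if l.Enabled {
		l.Print("data", x)
	}
	fmt.Println(l.calls) // 0
}
```

Of course this pushes the `if` into every caller; having the compiler sink the conversions past the inlined `Enabled` check would be much nicer.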
| Performance,NeedsFix,compiler/runtime | low | Critical |
223,640,342 | go | cmd/compile: add special binary export encoding for iotas? | Poking around the export data for the ssa package, I noticed a lot of runs of constants generated by iota--each with an int value + 1 and a line +1 from its predecessor. It is a somewhat common structure that admits of a compact encoding, something like this:
```go
// export
func (p *exporter) iotas(e []*Node) {
p.tag(iotaTag)
p.int(len(e))
n := e[0]
p.pos(n)
p.typ(unidealType(n.Type, n.Val()))
p.value(n.Val())
for _, n := range e {
p.qualifiedName(n.Sym)
}
}
// import
// ...
case iotaTag:
niota := p.int()
p.pos() // TODO: hook this up and increase by one line at each new node
typ := p.typ()
val := p.value(typ)
for i := 0; i < niota; i++ {
sym := p.qualifiedName()
importconst(p.imp, sym, idealType(typ), nodintconst(val.Interface().(int64)+int64(i)))
}
return niota
// ...
```
A possibly even more compact encoding is to detect common prefixes in names across the entire run, which is fairly common. Even without that, this has a pretty significant impact on the ssa package:
```
name old export-bytes new export-bytes delta
Template 19.0k ± 0% 19.0k ± 0% ~ (all equal)
Unicode 4.45k ± 0% 4.43k ± 0% -0.31% (p=0.002 n=6+6)
GoTypes 29.7k ± 0% 29.6k ± 0% -0.53% (p=0.002 n=6+6)
Compiler 75.6k ± 0% 74.3k ± 0% -1.62% (p=0.002 n=6+6)
SSA 76.2k ± 0% 65.7k ± 0% -13.79% (p=0.002 n=6+6)
Flate 4.98k ± 0% 4.98k ± 0% +0.02% (p=0.002 n=6+6)
GoParser 8.81k ± 0% 8.81k ± 0% +0.01% (p=0.002 n=6+6)
Reflect 6.25k ± 0% 6.10k ± 0% -2.37% (p=0.002 n=6+6)
Tar 9.49k ± 0% 9.46k ± 0% -0.31% (p=0.002 n=6+6)
XML 16.0k ± 0% 16.0k ± 0% ~ (all equal)
[Geo mean] 15.1k 14.8k -1.98%
```
It's not quite obvious that it's worth doing in general--SSA might be somewhat special.
For discussion, if we make another round of export format changes.
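For concreteness, the kind of declaration that produces such runs: N consecutive names, each declared one line later and valued one greater than its predecessor (a toy example):

```go
package main

import "fmt"

type Op int

const (
	OpAdd Op = iota // value 0
	OpSub           // value 1, declared one line later
	OpMul           // value 2, and so on
)

func main() {
	fmt.Println(OpAdd, OpSub, OpMul) // 0 1 2
}
```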
| ToolSpeed,compiler/runtime | low | Major |
223,704,000 | youtube-dl | abc.go.com: Season playlist only downloads the first episode. | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
- Use *Preview* tab to see how your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.04.17*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.04.17**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
So I discovered that ABC has an "all episodes this season" page for its shows. But tossing that at youtube-dl only fetches the first episode.
```
keybounceMBP:TimeAfterTime michael$ youtube-dl -v http://abc.go.com/shows/time-after-time/episode-guide
[debug] System config: []
[debug] User config: [u'-o', u'%(title)s-%(timestamp)6i.%(ext)s', u'-f', u'\nbest[ext=mp4][height>431][height<=576]/\nbestvideo[ext=mp4][height=480]+bestaudio[ext=m4a]/\nbest[ext=mp4][height>340][height<=431]/\nbestvideo[ext=mp4][height>360][height<=576]+bestaudio/\nbest[height>340][height<=576]/\nbestvideo[height>360][height<=576]+bestaudio/\nbestvideo[height=360]+bestaudio/\nbest[ext=mp4][height>=280][height<=360]/\nbest[height<=576]/\nworst', u'--ap-mso', u'Dish', u'--ap-username', u'PRIVATE', u'--ap-password', u'PRIVATE', u'--write-sub', u'--write-auto-sub', u'--sub-lang', u'en,enUS,en-us', u'--sub-format', u'ass/srt/best', u'--convert-subs', u'ass', u'--embed-subs', u'--recode-video', u'mp4', u'--mark-watched']
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'http://abc.go.com/shows/time-after-time/episode-guide']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.04.17
[debug] Python version 2.7.5 - Darwin-13.4.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.2.4, ffprobe 3.2.4, rtmpdump 2.4
[debug] Proxy map: {}
[Go] episode-guide: Downloading webpage
[Go] 3717893: Downloading JSON metadata
[debug] Using fake IP 3.218.9.175 (US) as X-Forwarded-For.
[Go] 3717893: Downloading JSON metadata
[Go] 3717893: Downloading m3u8 information
WARNING: en subtitles not available for 3717893
WARNING: enUS subtitles not available for 3717893
[info] Writing video subtitles to: Pilot-NA.en-us.ttml
[debug] Invoking downloader on u'http://content-ause3.uplynk.com/b2aa9a7f908240c68a18b488e3884334/f.m3u8?exp=1493009807&ct=a&oid=d874124ecca24c88a3c9575e78686acf&eid=10087747&iph=3007d0539a73b02a94462bfe1759be8903dcbbf33e8fc043e2d5aeeae3066bf4&rays=jihgfedcb&euid=14FDC435-3748-495D-951D-ABF3D359C9F8_000_0_001_lf_01-06-00_NA&cdn=ec&stgcfg=datg&pp2ip=0&sig=7d1c06fce297025cb0f21ecf2c76911fdb779cd81f724a5c4beb2f427d6e967b&pbs=d3615bf3363c44ccb9836a473a375aea'
[download] Pilot-NA.mp4 has already been downloaded
[download] 100% of 347.67MiB
[ffmpeg] Not converting video file Pilot-NA.mp4 - already is in target format mp4
[ffmpeg] Converting subtitles
WARNING: You have requested to convert dfxp (TTML) subtitles into another format, which results in style information loss
[debug] ffmpeg command line: ffmpeg -y -i file:Pilot-NA.en-us.srt -f ass file:Pilot-NA.en-us.ass
Deleting original file Pilot-NA.en-us.ttml (pass -k to keep)
Deleting original file Pilot-NA.en-us.srt (pass -k to keep)
[ffmpeg] Embedding subtitles in 'Pilot-NA.mp4'
[debug] ffmpeg command line: ffmpeg -y -i file:Pilot-NA.mp4 -i file:Pilot-NA.en-us.ass -map 0 -c copy -map -0:s -c:s mov_text -map 1:0 -metadata:s:s:0 language=eng file:Pilot-NA.temp.mp4
Deleting original file Pilot-NA.en-us.ass (pass -k to keep)
keybounceMBP:TimeAfterTime michael$
``` | bug,geo-restricted | low | Critical |
223,720,411 | opencv | Issues Compiling opencv3.2 with CUDA on Mac OS 10.12 | My env and lib versions:
`Mac OS 10.12`,
`opencv 3.2`(download from [here](https://sourceforge.net/projects/opencvlibrary/) )
`cuda8.0` ,
`qt5` installed via `brew`,
`Xcode8 & Xcode7`(with CLT to compile cuda stuff)
I intended to install opencv 3.2 by compiling it from source.
My CMake settings are as below:
build without `opencv_highgui` & `opencv_videoio` (if these settings are turned on, compiling fails)
**TURN OFF** ` WITH_AVFOUNDATION`,`WITH_1394`,`WITH_QT` ,`WITH_QTKIT`,`WITH_QUICKTIME`,`WITH_TBB`,`WITH_IPP`,`WITH_GSTREAMER` ...
**TURN ON** `WITH_CUDA`,`WITH_EIGEN`,`WITH_JASPER`,`WITH_JPEG`,`WITH_LAPACK`,`WITH_OPENEXR`,`WITH_PNG`,`WITH_PTHREADS_PF`,`WITH_TIFF`
With the settings above, it compiled successfully!
But without opencv_highgui & opencv_videoio, how can I show an image or read camera frames?
If I build opencv_videoio and select any one of `WITH_AVFOUNDATION`, `WITH_QTKIT`, `WITH_QUICKTIME`, it does not compile successfully.
Errors like the following:
(this error is generated with the QuickTime flag)
```
~/openCV/opencv-3.2.0/modules/videoio/src/cap_qt.cpp:75:5: error: unknown type name 'Movie'
Movie myMovie; // movie handle
^
~/openCV/opencv-3.2.0/modules/videoio/src/cap_qt.cpp:103:9: error: use of undeclared identifier 'EnterMovies'
EnterMovies();
^
~/openCV/opencv-3.2.0/modules/videoio/src/cap_qt.cpp:142:5: error: use of undeclared identifier 'ClearMoviesStickyError'
ClearMoviesStickyError ();
^
``` | bug,priority: low,category: build/install,category: gpu/cuda (contrib),platform: ios/osx | low | Critical |
223,730,524 | go | cmd/compile: big binary and slow compilation times with maps & []interface{} in static code | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.8.1 darwin/amd64
### What operating system and processor architecture are you using (`go env`)?
darwin_amd64
### What did you do?
I switched some static data from being encoded in the binary as a large raw string to generating Golang code representing data structures, where the data represented covers "all assigned Unicode codepoints", at 31618 elements.
I removed a couple of map[rune]whatever items and constructed them at runtime, to get compile times down from one minute back to a couple of seconds.
Even after making that change, I did not initially notice that the `unicode.a` file had gone up to 509MB and the executable to 491MB. I tried a couple of things, but the key was to switch a slice of `[]interface{}` to `[]ConcreteType`. At runtime I need the `[]interface{}` (to feed to Ferret for substring search) but switching the compiled-in slice from elements of `interface{}` to elements of concrete types brought `unicode.a` down to 4.4MB (from 509MB) and the executable down to 9.4MB.
Code is: https://github.com/philpennock/character
(go get compatible, though `make` will do some embedding)
### What did you expect to see?
Non-degenerate performance in compile times or library/executable sizes.
### What did you see instead?
Something which made me think I was compiling C++, not Golang. 😄
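A minimal illustration of the change that fixed it (hypothetical types; the real ones are in the repo above):

```go
package main

import "fmt"

// CharInfo is a hypothetical concrete element type standing in for the
// real per-codepoint data in github.com/philpennock/character.
type CharInfo struct {
	Rune rune
	Name string
}

// Static data with concrete element types keeps the generated
// initializer (and the .a file) small.
var table = []CharInfo{
	{'A', "LATIN CAPITAL LETTER A"},
	{'B', "LATIN CAPITAL LETTER B"},
}

// asInterfaces builds the []interface{} needed at runtime (e.g. to feed
// a substring-search library) once at startup, instead of encoding the
// interface headers into the binary.
func asInterfaces(src []CharInfo) []interface{} {
	out := make([]interface{}, len(src))
	for i := range src {
		out[i] = &src[i]
	}
	return out
}

func main() {
	fmt.Println(len(asInterfaces(table))) // 2
}
```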
| ToolSpeed,NeedsInvestigation | low | Major |
223,745,272 | rust | include!() in statement position expects an expression | a.rs
```rust
fn main() {
include!("b.rs")
}
```
b.rs
```rust
fn b() {}
```
rustc 1.18.0-nightly (2bd4b5c6d 2017-04-23)
```rust
error: expected expression, found keyword `fn`
--> b.rs:1:1
|
1 | fn b(){}
| ^^
``` | A-macros,I-needs-decision,T-lang,C-bug | low | Critical |
223,972,618 | rust | Tracking issue for trait aliases | This is a tracking issue for trait aliases (rust-lang/rfcs#1733).
TODO:
- [x] Implement: [tracking issue](https://github.com/rust-lang/rust/issues/55628)
- [x] #56485 — Bringing a trait alias into scope doesn't allow calling methods from its component traits (done in https://github.com/rust-lang/rust/pull/59166)
- [x] #56488 — ICE with trait aliases and use items
- [x] #57023 — Nightly Type Alias Compiler panic `unexpected definition: TraitAlias`
- [x] #57059 — `INCOHERENT_AUTO_TRAIT_OBJECTS` future-compatibility warning (superseded by https://github.com/rust-lang/rust/pull/56481)
- [ ] Document
- [ ] Stabilize
Unresolved questions:
- [ ] Are bounds on other input types than the receiver enforced or implied?
- [ ] Should we keep the current behaviour of "shallow-banning" `?Sized` bounds in trait objects, ban them "deeply" (through trait alias expansion), or permit them everywhere with no effect?
- [ ] [Insufficient flexibility](https://github.com/rust-lang/rust/issues/41517#issuecomment-544340795)
- [ ] [Inconvenient syntax](https://github.com/rust-lang/rust/issues/41517#issuecomment-544763195)
- [ ] [Misleading naming](https://github.com/rust-lang/rust/issues/41517#issuecomment-479226997)
- [ ] Which of constraint/bound/type aliases are desirable
| A-trait-system,B-RFC-approved,T-lang,B-unstable,C-tracking-issue,F-trait_alias,S-tracking-needs-summary | high | Critical |
223,982,899 | go | cmd/compile: move arch-specific rewrite rules and ops into arch-specific packages | Right now, all the .rules files, all the generated code, and all the ops are in package ssa. This monolith has a few downsides: It seems a bit semantically incorrect, and it contributes to issues like #20084, and it makes rebuilding the compiler slower when working on the ssa package.
I'd like to move all the arch-specific stuff into cmd/compile/internal/ARCH. This is non-trivial and does have costs. Rough proposed plan:
1. Export all functions used by rewrite rules, so that they can be called from outside package ssa. There's a lot of this, including API surface we might normally keep private.
2. Everywhere we refer to arch-specific ops in package ssa, add some form of generalization.
3. Set up registration hooks for the value and block rewriters, must as we do now for SSAMarkMoves.
4. Move .rules and ops files to package ssa and teach rulegen about that. Within the generated files, do `import . "cmd/compile/internal/ssa"` to avoid having the teach rulegen to do `ssa.NewValue` for arch rules but just `NewValue` for generic rules. This will also reduce churn in the rules files themselves.
I think that's all that needs doing, but we might find more along the way; I started on this but it was big enough I wanted to discuss first.
Comments?
cc @mdempsky @bradfitz @randall77 | Performance,ToolSpeed,NeedsFix,compiler/runtime | low | Major |
224,009,950 | angular | @ContentChildren not populating from parent <ng-content> | ```
[*] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```
**Current behavior**
@ContentChildren does not seem to be populated through a parent's <ng-content> container, even when the ng-content is a child of the component.
**Expected behavior**
I expect ContentChildren to be populated when the projected content is a child of the component.
**Minimal reproduction of the problem with instructions**
https://plnkr.co/edit/J5itJmDfwKwtBsG6IveP?p=preview
Main HTML:
```html
<div>
<child>
<test></test>
<test></test>
<test></test>
</child>
<br><br>
<parent>
<test></test>
<test></test>
<test></test>
</parent>
</div>
```
Child Component:
```html
<h2>Child</h2>
<parent>
<ng-content></ng-content>
</parent>
```
Parent Component:
```html
<h3>Parent</h3>
<ng-content></ng-content>
<div class='length'>Line count : {{length}}</div>
```
**Please tell us about your environment:**
Windows, Angular4
| feature,effort2: days,state: Needs Design,workaround3: complex,area: core,core: queries,core: content projection,P4,feature: under consideration | high | Critical |
224,173,335 | go | x/arch/x86/x86asm: 64-bit CMOV is disassembled as 32-bit CMOV | These two sequences are disassembled by x86asm with the same string:
```
0f4dd0 CMOVGE AX, DX
480f4dd0 CMOVGE AX, DX
```
If I'm not mistaken, the second is the 64-bit version and should probably use a mnemonic like `CMOVQGE` (or the other one should use `CMOVLGE`). | NeedsFix | low | Minor |
224,238,896 | go | proposal: cmd/vet: detect homograph attacks | A [homograph attack](https://en.wikipedia.org/wiki/IDN_homograph_attack) is an attack that exploits the visual similarity of two glyphs. Traditionally, this has been used in phishing attacks to trick a user into visiting a malicious domain that looks identical to the real domain. However, homographs can also be used to sneak malicious source code past review. This is possible in any language that supports Unicode source code.
Here is a simple example of a homograph attack in Go source code:
```go
package main
import (
"log"
"os"
)
func main() {
log.SetFlags(0)
f, err := os.Create("test")
if err != nil {
log.Fatal(err)
}
f.Close()
if _, еrr := f.Write([]byte("data")); err != nil {
log.Fatal(еrr)
}
}
```
([Playground link](https://play.golang.org/p/EturtcBpds))
The expected output is `write test: file already closed`, but the actual program prints nothing. This happens because `e` and `е` are homographs, and thus `err` and `еrr` refer to different variables. This example is not very threatening, but it should demonstrate that sophisticated attacks could be constructed using this mechanism.
In my analysis, strings are the most likely vector for a homograph attack. For example, a homograph could be inserted in a `switch` statement over runes or strings, such that a particular `case` branch would never be taken. A homograph could also be used in an `import` path, although sites like GitHub seem to do a good job of preventing users from registering names or repos that contain homographs. Finally, as in the example above, homographs could be inserted in variable names where scoping rules make the duplication difficult to detect.
I propose that `go vet` make a "best effort" to detect homograph attacks. This is a bit nuanced because there are many valid reasons to include Unicode characters in source code; distinguishing malicious homographs from harmless ones is probably impossible in general. However, I believe that a few simple heuristics can catch the vast majority of viable attacks. For example, `go vet` could flag identifiers that mix ASCII with known homographs.
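To illustrate the mixed-script heuristic, a minimal check might look like this (a toy sketch; a real checker would consult the Unicode confusables tables and cover more scripts):

```go
package main

import (
	"fmt"
	"unicode"
)

// mixesScripts reports whether an identifier combines Latin letters with
// Cyrillic ones. This is only a toy heuristic: it ignores digits and
// underscores, and checks a single confusable script pair.
func mixesScripts(ident string) bool {
	hasLatin, hasCyrillic := false, false
	for _, r := range ident {
		switch {
		case unicode.Is(unicode.Latin, r):
			hasLatin = true
		case unicode.Is(unicode.Cyrillic, r):
			hasCyrillic = true
		}
	}
	return hasLatin && hasCyrillic
}

func main() {
	fmt.Println(mixesScripts("err")) // false: pure ASCII
	fmt.Println(mixesScripts("еrr")) // true: starts with U+0435 CYRILLIC SMALL LETTER IE
}
```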
I have already developed a simple tool for this purpose [here](https://github.com/NebulousLabs/glyphcheck), though it is currently too strict to be used in projects that contain harmless homographs. Perhaps an external tool is sufficient, but adding it to `go vet` increases the chance that a security-conscious project will be protected from such attacks. At any rate, I recommend that open-source projects at Google run any publicly-submitted patches through a homograph detector. | Proposal,Proposal-Hold,Analysis | medium | Major |
224,297,549 | go | go/types: vague docs about "incrementally" type checking files | The go/types documentation twice mentions "incrementally":
> Alternatively, create a new type checker with NewChecker and invoke it incrementally by calling Checker.Files.
> NewChecker returns a new Checker instance for a given package. Package files may be added incrementally via checker.Files.
However, it's unclear to me what that's supposed to mean. The types tutorial doesn't mention "incrementally" at all either, or even describe using the types.Checker API.
Intuitively, I would think an "incremental" API means the caller only needs to supply new information; e.g., to call Files({A}) and then Files({B}). But looking at the implementation, it appears the second call actually needs to be Files({A,B}). That's fine (and can't change now anyway), but I'm not clear in what sense it's "incremental." It doesn't build on any previous work, except to reuse the same maps previously allocated.
/cc @griesemer @alandonovan | Documentation,NeedsInvestigation | low | Minor |
224,297,714 | kubernetes | apimachinery - Support for Float comparisons in selector.go for Gt Lt operations | FEATURE REQUEST
Provide support for label selector comparisons of Float values. One use case is targeted selection of nodes with a label greater than a specific major version number of some component. There may be other use cases.
```
[{"key": "myComponentVersion", "operator": "Gt", "values": ["1.8"]}]
```
The current [NewRequirement](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/labels/selector.go#L135) and [Matches](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/labels/selector.go#L208) function only support comparison of Integers.
Are there any known dependencies on this being Int64?
Any concern with updating this to convert and compare Floats instead of Integers?
Would it need to support both or is that overkill?
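For illustration only, a float-aware `Gt` comparison might look like the following (names are hypothetical; the real logic lives in `Requirement.Matches` in selector.go):

```go
package main

import (
	"fmt"
	"strconv"
)

// matchGt sketches a Gt comparison that parses both operands as floats,
// so a requirement like Gt "1.8" can match a label value of "1.9".
func matchGt(labelValue, reqValue string) (bool, error) {
	lv, err := strconv.ParseFloat(labelValue, 64)
	if err != nil {
		return false, fmt.Errorf("label value %q is not numeric", labelValue)
	}
	rv, err := strconv.ParseFloat(reqValue, 64)
	if err != nil {
		return false, fmt.Errorf("requirement value %q is not numeric", reqValue)
	}
	return lv > rv, nil
}

func main() {
	ok, _ := matchGt("1.9", "1.8")
	fmt.Println(ok) // true; with today's ParseInt-based matching this errors out
}
```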
| sig/api-machinery,lifecycle/frozen | low | Major |
224,298,572 | angular | Each HammerJS event is causing a new HammerJS instance to be created. | **I'm submitting a ...** (check one with "x")
```
[X] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```
**Current behavior**
Multiple instances of HammerJS are created, one for each event listener on a given element. This causes performance issues and makes reacting to events hard (weird behaviour).
As a workaround, one can register a single listener on the most generic event: `(pan)="drag($event)"`
**Expected behavior**
For a given Element only one Hammer instance should be created.
**Minimal reproduction of the problem with instructions**
```
<div (panstart)="drag($event)" (panend)="dragEnd($event)"
(panup)="drag($event)" (pandown)="drag($event)">
```
**What is the motivation / use case for changing the behavior?**
Dragging an element across the page looks terrible (the square appears to jump, as if rendering at very low FPS).
**Please tell us about your environment:**
Playing around with angular quickstart.
* **Angular version:** 4.0.3
* **Browser:** all
The performance problems were noticed on Chrome Mobile
* **Language:** TypeScript 2.2
* **Node (for AoT issues):** `node --version` = 7.9
Code where it gets created:
https://github.com/angular/angular/blob/4.0.x/packages/platform-browser/src/dom/events/hammer_gestures.ts#L115 | type: bug/fix,area: core,core: event listeners,P4 | low | Critical |
224,313,485 | vscode | Problem saving keybindings | <!-- Do you have a question? Please ask it on http://stackoverflow.com/questions/tagged/vscode -->
- VSCode Version: 1.11.2 and 1.12.0 - insiders
- OS Version: Windows 10
-- Sorry for the English, I used Google Translator --
When I define a shortcut, pressing <kbd>Ctrl</kbd> + <kbd>k</kbd> makes VSCode wait for a second combination, which works great.
But if I bind any other key followed by <kbd>Ctrl</kbd> + <kbd>k</kbd>, that first key takes over the role of <kbd>Ctrl</kbd> + <kbd>k</kbd> and always waits for a second combination.
Example: <kbd>n</kbd> <kbd>Ctrl</kbd> + <kbd>k</kbd>
With that binding, if you are typing text and press <kbd>n</kbd>, the editor waits for the second combination.
I understand this opens up many possibilities for shortcuts, but maybe there could be a setting that limits chord starters to <kbd>Ctrl</kbd> + <kbd>k</kbd> only, until the user changes it.
My suggestions are (I do not know if they are possible):
1- Limit the second combination to <kbd>Ctrl</kbd> + <kbd>k</kbd> only, unless the user changes it.
2- Whenever this situation occurs, reverse the shortcut automatically, either in the keybindings screen or when editing the file manually: if you typed <kbd>n</kbd> <kbd>Ctrl</kbd> + <kbd>k</kbd>, it should be interpreted (or the file rewritten) as <kbd>Ctrl</kbd> + <kbd>k</kbd> <kbd>n</kbd>.
3- Prevent saving a shortcut key that is wrong or would not work.
| feature-request,ux | low | Minor |
224,332,412 | flutter | Dropdown menus are expected to animate both width and height | Feedback from Material Design review:
"Overflow menu popup animates incorrectly. It is supposed to expand both vertically and horizontally, instead it seems to jump to full width and then animates the height"
I believe this is in reference to menus such as the dropdown menus seen in Flutter Gallery > Menus demos.
An example gif is here:
https://material.io/guidelines/components/menus.html#menus-usage
The last gif before the Menu Items section, titled "Action overflow menu", appears to show this behavior (though it happens very quickly).
I think the material itself is expanding vertically and horizontally and the text is also fading in at the same time?
This was marked as a P1 so would block further releases of the Gallery at current. | framework,f: material design,from: review,has reproducible steps,P2,found in release: 3.3,found in release: 3.4,team-design,triaged-design | low | Major |
224,362,797 | opencv | GaussianBlur in OpenCL work incorrectly |
##### System information
- OpenCV => 3.1 and 3.2 are both tested
- Operating System / Platform => Windows 10 64Bit / AMD Radeon RX 480 Graphics
- Compiler => Visual Studio 2015
##### Detailed description
Passing a cv::UMat (below, left) into GaussianBlur, the result (below, right) contains strange vertical stripes:

##### Steps to reproduce
```.cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/highgui/highgui.hpp>
int main() {
    cv::Mat src = cv::imread("1.jpg");
    if (src.empty()) return -1;       // bail out if the image failed to load
    cv::UMat uorig, ublur;
    src.copyTo(uorig);                // a UMat input makes GaussianBlur take the OpenCL path
    cv::GaussianBlur(uorig, ublur, cv::Size(0, 0), 0.8);
    cv::imshow("orig", src);
    cv::imshow("blured", ublur);
    cv::waitKey();
    return 0;
}
```
| category: ocl,incomplete | low | Critical |
224,392,856 | vscode | Editor drag and drop mouse feedback missing | Refs: #25338
OS X, insider
1. Enable editor drag and drop and select some text
2. Drag that text
3. Notice that the editor cursor nicely updates, however the mouse cursor does not update to reflect that I am dropping something. This leaves me, as the user, puzzled about what is going on, as there is no prominent feedback that I am actually dragging the text in the editor.
It would be great if we could fix this as it would greatly improve the experience imho
Notice in the gif that I am dragging the text all the time while the mouse is unchanged

| ux,under-discussion,editor-drag-and-drop | low | Minor |
224,406,200 | vscode | Drop cursor missing if drop location === current cursor location |
Refs: #25338
IMO the current cursor location is less important than the drop indication. So the cursor should show the drop-location style.

| ux,under-discussion,editor-drag-and-drop | low | Minor |
224,411,656 | neovim | 'guicursor' coloring is not updated by `:highlight Cursor` | - `nvim --version`:
```console
$ n -v
NVIM v0.2.0-1464-g7e571bca
Build type: RelWithDebInfo
Compilation: /usr/local/Homebrew/Library/Homebrew/shims/super/clang -Wconversion -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -DNVIM_MSGPACK_HAS_FLOAT32 -O2 -g -DDISABLE_LOG -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wvla -fstack-protector-strong -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -I/tmp/neovim-20170426-23155-ad758q/build/config -I/tmp/neovim-20170426-23155-ad758q/src -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/opt/gettext/include -I/usr/include -I/usr/include -I/tmp/neovim-20170426-23155-ad758q/build/src/nvim/auto -I/tmp/neovim-20170426-23155-ad758q/build/include
Compiled by kborges@13592-storage
Optional features included (+) or not (-): +acl +iconv +jemalloc +tui
For differences from Vim, see :help vim-differences
system vimrc file: "$VIM/sysinit.vim"
fall-back for $VIM: "/usr/local/Cellar/neovim/HEAD-7e571bc/share/nvim"
```
- Vim (version: ) behaves differently? **yes**
- Operating system/version: macOS Sierra 10.12.4 (16E195)
- Terminal name/version: iTerm2 Build 3.0.15
- `$TERM`:
```console
$ echo $TERM
xterm-256color
```
- minimal init.vim:
```viml
set rtp-=/Users/kborges/.config/nvim
set rtp-=/Users/kborges/.config/nvim/after
colorscheme desert
hi! Cursor ctermfg=1 ctermbg=1 guifg=#FF0000 guibg=#FF0000
set termguicolors
set guicursor=n-c-v:block-Cursor/Cursor-blinkon0
set guicursor+=i-ci:ver1-Cursor/Cursor-blinkwait300-blinkon200-blinkoff150
```
### Actual behaviour
**nvim:**

**vim:**

I think there are 2 different errors:
1. the `:hi! Cursor ...` line in init.vim seems not to be working
2. Even when I force `:hi! Cursor ...` on the command line, the cursor color doesn't change
PS.: I marked the behaviour on vim as different because the `:hi! Cursor ...` line in init.vim seems to work in vim. | bug,highlight,options | low | Critical |
224,413,263 | pytorch | Conjugate Gradient mentioned in docs, but not implemented | The torch.nn.optim docs promised me conjugate gradient descent (mentions it in the closure section), but it's not there. No references to the algorithm in the code base either :(. Not yet ported from Lua?
cc @vincentqb | todo,feature,module: optimizer,triaged | medium | Major |
224,415,200 | go | x/pkgsite: list types that satisfy an interface within its package | Forked from https://github.com/golang/go/issues/19412#issuecomment-296766847, to continue discussion with @jimmyfrasche and any others without taking over that proposal thread.
The pattern of having an interface with an unexported method and types that implement it is a fairly common pattern in Go as we don't have [sum types](https://github.com/golang/go/issues/19412). A good example is in `go/ast`: https://golang.org/pkg/go/ast/#Spec
For a user reading the godoc, it's very useful to them to know what are the types that can go in that interface. The actual information is hidden in the code, so most packages like `go/ast` above list them manually: `The Spec type stands for any of *ImportSpec, *ValueSpec, and *TypeSpec.`
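The pattern in question reads roughly like this; a minimal sketch modeled on `go/ast` (the types and the `describe` helper are illustrative, not the real ast package):

```go
package main

import "fmt"

// Spec is a sealed interface: the unexported method restricts
// implementations to this package, mimicking a sum type.
type Spec interface{ specNode() }

type ImportSpec struct{ Path string }
type ValueSpec struct{ Name string }

func (*ImportSpec) specNode() {}
func (*ValueSpec) specNode()  {}

// describe switches over the known implementations; godoc currently
// gives a reader no linked list of these cases.
func describe(s Spec) string {
	switch s.(type) {
	case *ImportSpec:
		return "import"
	case *ValueSpec:
		return "value"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(&ImportSpec{Path: "fmt"}))
}
```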
This works OK, but the issue is that these names aren't linked to the types in the same page. I propose that this happen automatically, much like https://github.com/golang/go/issues/4953 for packages. Linking to types (and potentially other exported names like funcs and consts) could be useful for other purposes.
This is the conservative solution. A more aggressive solution would be to automatically generate the list of types that can fit into the interface. If we had sum types, the list could be restricted to exactly those types and would be trivial to generate.
As long as we don't have sum types, @jimmyfrasche was suggesting to just list all the types that satisfy the interface in its package. I'm not sure if that's a good idea for all interfaces, but would certainly be useful for those that follow the pattern of an unexported method to mimic sum types. Not sure how safe/desired such a change would be.
Manually listing the types in the godoc isn't a huge pain, but not having links is a problem.
CC @griesemer who wrote the `go/ast` godoc and might have some input | Proposal,Proposal-Accepted,Tools,pkgsite | medium | Critical |
224,515,103 | go | runtime: shrink map as elements are deleted | ### What version of Go are you using (`go version`)?
go version go1.8 windows/amd64
### What operating system and processor architecture are you using (`go env`)?
set GOARCH=amd64
set GOBIN=
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\dev\Go
set GORACE=
set GOROOT=C:\Go
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0
set CXX=g++
set CGO_ENABLED=1
set PKG_CONFIG=pkg-config
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
### What did you do?
See example on playground: https://play.golang.org/p/odsk9F1UH1
(edit: forgot to remove sleeps and changed the number of elements)
### What did you expect to see?
removing elements from m1 map should release memory.
### What did you see instead?
the total allocated memory keeps increasing
In the example the issue is not so relevant, but in my production scenario (several maps with more than 1 million elements each) I can easily get an OOM error, and the process is killed.
Also I don't know if memstats.Alloc is the right counter to look at here, but I can observe the issue with regular process-management tools on linux (e.g. top or htop)
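A common application-level workaround (not a runtime fix) is to rebuild the map into a fresh one after bulk deletions, so the old, oversized bucket array becomes garbage; a sketch:

```go
package main

import "fmt"

// shrink copies the surviving entries into a fresh map sized for the
// current length, letting the old backing storage be collected.
func shrink(m map[int][128]byte) map[int][128]byte {
	fresh := make(map[int][128]byte, len(m))
	for k, v := range m {
		fresh[k] = v
	}
	return fresh
}

func main() {
	m := make(map[int][128]byte)
	for i := 0; i < 100000; i++ {
		m[i] = [128]byte{}
	}
	for i := 100; i < 100000; i++ {
		delete(m, i)
	}
	m = shrink(m) // without this, the map retains its peak capacity
	fmt.Println(len(m))
}
```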
| Performance,NeedsFix,compiler/runtime | high | Critical |
224,562,251 | pytorch | Feature Request: noise contrastive estimation/negative sampling | There isn't a standard loss function implementing this, even though it's pretty common. I am perfectly willing to implement it myself, if nobody else feels like it. It shouldn't be terribly complicated. I would structure it something like:
```python
class NCELoss(torch.nn.modules.loss._Loss):
r"""Noise contrastive estimation loss function.
Args:
num_classes: int number of classes for the output layer
num_sampled: int number of samples to extract from noise distribution
noise_sampler: () -> int function
Function to generate k class labels according to noise distribution.
By default, noise will be assumed log uniform unigram.
Shape:
- Input: :math:`(N, C)` where `C = num_classes`
- Target: :math:`(N)` where each value is `0 <= targets[i] <= C-1`
"""
```
The API for tensorflow also uses a parameter `subtract_log_q` so that by setting it to `False` you can switch to a negative sampling objective from nce. Is this worth doing?
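If it helps the discussion, here is a toy, framework-free sketch of the two objectives for a single example (my own helper names, nothing from torch): negative sampling is `-log σ(s_pos) - Σ_k log σ(-s_neg_k)`, and NCE additionally subtracts the log noise probability from each score before the sigmoid, which is exactly what TensorFlow's `subtract_log_q` toggles.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def negative_sampling_loss(pos_score, neg_scores):
    """Negative-sampling objective for one example:
    -log sigmoid(s_pos) - sum_k log sigmoid(-s_neg_k)."""
    loss = -math.log(sigmoid(pos_score))
    for s in neg_scores:
        loss -= math.log(sigmoid(-s))
    return loss

def nce_logit(score, log_noise_prob):
    """NCE differs from negative sampling only by this correction:
    subtract the log noise probability from the raw score."""
    return score - log_noise_prob
```

A real `NCELoss` module would compute the scores from an output weight matrix and draw `num_sampled` labels from `noise_sampler`, but the per-example math above is the core of both objectives.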
Is this something that already exists? Is this something worth implementing? | feature,module: nn,triaged | medium | Critical |
224,586,490 | TypeScript | Proposal: Add new __construct helper for better ES5/ES6 class interop | I propose we add a new helper to assist with class instance construction runtime semantics when extending ES6 built-in classes while compiling with `--target ES5`.
## Background
Our current emit for classes for `--target ES5` assumes that the superclass follows the same runtime semantics as the classes we emit. Generally this means that the constructor can be called as a function via `call()` or `apply()`. However, a number of ES6 built-in classes are specified to throw when not used as a constructor (i.e. `Promise`, `Map`, etc.), and other ES6 built-in classes return a value when called, ignoring the `this` value provided to `call()` or `apply()` (i.e. `Error`, `Array`, etc.).
Previously we [provided guidance](https://github.com/Microsoft/TypeScript-wiki/blob/master/Breaking-Changes.md#extending-built-ins-like-error-array-and-map-may-no-longer-work) for possible workarounds for this to support the latter scenario, but we do not currently have a solution for the former scenario.
## Proposal
The following code listing describes a new `__construct` helper that we would need to emit for any file that contains an explicit (or implicit, for property declarations) `super()` call:
```ts
class MyPromise extends Promise {
constructor(executor) {
super(executor);
}
}
// becomes...
var __extends = ...;
var __construct = (this && this.__construct) || (typeof Reflect !== "undefined" && Reflect.construct
? function (s, t, a, n) { return t !== null ? Reflect.construct(t, a, n) : s; }
: function (s, t, a) { return t !== null && t.apply(s, a) || s; });
var MyPromise = (function (_super) {
__extends(MyPromise, _super);
function MyPromise(executor) {
var _this = this;
var _newTarget = this.constructor;
_this = __construct(this, _super, [executor], _newTarget);
return _this;
}
return MyPromise;
})(Promise);
```
### Benefits
* Allows down-level class emit to extend ES6 built-ins if running in an ES6 host by feature detecting `Reflect.construct`.
* Falls back to the [existing behavior](http://www.typescriptlang.org/play/index.html#src=var%20a%20%3D%20null%3B%0D%0Aclass%20C%20extends%20a%20%7B%0D%0A%20%20%20%20x%20%3D%201%3B%0D%0A%7D) when running in an ES5 host.
* Handles `extends null` and `extends x` when `x` is `null` in the same way as [existing behavior](http://www.typescriptlang.org/play/index.html#src=var%20a%20%3D%20null%3B%0D%0Aclass%20C%20extends%20a%20%7B%0D%0A%20%20%20%20x%20%3D%201%3B%0D%0A%7D).
* Handles custom return values from `super` in the same way as [existing behavior](http://www.typescriptlang.org/play/index.html#src=var%20a%20%3D%20null%3B%0D%0Aclass%20C%20extends%20a%20%7B%0D%0A%20%20%20%20x%20%3D%201%3B%0D%0A%7D).
### Drawbacks
* Larger helper footprint
* Subclassing a built-in in an ES5 host has different runtime semantics than subclassing a built-in in an ES6 host:
* In ES5, subclassing `Array` or `Error` will not have the correct prototype chain. The only solution is to explicitly set the prototype chain using the non-standard `__proto__` property as per [established guidance](https://github.com/Microsoft/TypeScript-wiki/blob/master/Breaking-Changes.md#extending-built-ins-like-error-array-and-map-may-no-longer-work).
| Suggestion,Awaiting More Feedback | medium | Critical |
224,597,303 | TypeScript | Suggestion: a built-in TypedArray interface | I would have expected a [`TypedArray`][ecma-typed-array] interface to exist in
the built-in declaration libraries. Instead, there are independent types for
[`Int8Array`][lib-int8-array], etc, with no common base type.
```ts
interface Int8Array { /* ... */ }
interface Int8ArrayConstructor { /* ... */ }
declare const Int8Array: Int8ArrayConstructor;
interface Uint8Array { /* ... */ }
interface Uint8ArrayConstructor { /* ... */ }
declare const Uint8Array: Uint8ArrayConstructor;
// ...
```
It seems sensible that there should be a common `TypedArray` type, both because
- a TypeScript user might want to declare a variable as a `TypedArray`, and
- it would reduce repetition in the built-in declaration files
```ts
interface TypedArray { /* ... */ }
interface Int8Array extends TypedArray { }
interface Int8ArrayConstructor { /* ... */ }
declare const Int8Array: Int8ArrayConstructor;
interface Uint8Array extends TypedArray { }
interface Uint8ArrayConstructor { /* ... */ }
declare const Uint8Array: Uint8ArrayConstructor;
// ...
```
Is there a good reason why there is no `TypedArray` type? If not, I can submit a PR.
## Use case
Consider the [`isTypedArray()`][lodash-is-typed-array] function from lodash.
Currently, it is declared as follows:
```ts
function isTypedArray(value: any): boolean;
```
This proposal would enable an appropriate type-guard, without declaring
a convoluted union type:
```ts
function isTypedArray(value: any): value is TypedArray;
```
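Until such a built-in interface exists, a user can approximate it with a union alias plus a runtime guard (a sketch, not part of lib.d.ts; `AnyTypedArray` is my own name):

```typescript
// Hypothetical union alias covering the ES5/ES6 typed array types.
type AnyTypedArray =
  | Int8Array | Uint8Array | Uint8ClampedArray
  | Int16Array | Uint16Array
  | Int32Array | Uint32Array
  | Float32Array | Float64Array;

// Every typed array is an ArrayBuffer view, and the only
// non-typed-array view is DataView.
function isTypedArray(value: unknown): value is AnyTypedArray {
  return ArrayBuffer.isView(value) && !(value instanceof DataView);
}
```

This is precisely the "convoluted union type" the proposal would make unnecessary.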
[ecma-typed-array]: http://www.ecma-international.org/ecma-262/6.0/#sec-typedarray-objects
[lib-int8-array]: https://github.com/Microsoft/TypeScript/blob/6756e3e44c3b21d021a7542b63f2de84b65695c4/src/lib/es5.d.ts#L1550
[lodash-is-typed-array]: https://lodash.com/docs/4.17.4#isTypedArray
| Suggestion,Help Wanted,Domain: lib.d.ts,Fix Available | medium | Critical |
224,611,346 | pytorch | "Sparsified" mathematical operations | I had a discussion with @soumith about this on Friday, but I wanted to record things down here so that I don't forget.
**Background.** Imagine that you are @martinraison and you are implementing sparse Adagrad. You end up writing this code:
```python
if p.grad.data.is_sparse:
    grad_indices = grad.indices()
    grad_values = grad.values()
    size = torch.Size([x for x in grad.size()])
    def make_sparse(values):
        constructor = type(p.grad.data)
        if grad_indices.dim() == 0 or values.dim() == 0:
            return constructor()
        return constructor(grad_indices, values, size)
    state['sum'].add_(make_sparse(grad_values.pow(2)))
    std = state['sum'].sparse_mask(grad)
    std_values = std.values().sqrt_().add_(1e-10)
    p.data.add_(-clr, make_sparse(grad_values / std_values))
else:
    state['sum'].addcmul_(1, grad, grad)
    std = state['sum'].sqrt().add_(1e-10)
    p.data.addcdiv_(-clr, grad, std)
```
So, as you can see, the sparse version of this code is four times as long as the non-sparse version. Why? There are a few contributory factors:
1. `addcmul_` doesn't understand how to handle sparse arguments, so we have to compute the sparse tensor we want to add, and then add it in. In the code above, this is done by projecting out the underlying values, performing the necessary operations, and then constructing a new sparse tensor (using `make_sparse`)
2. The line `state['sum'].sqrt().add_(1e-10)` works in both the sparse and dense settings, because `state['sum']` is always a dense tensor. However, we can see that we will subsequently use this to divide the gradient, which means we don't actually need to compute all of the entries, only those which will divide non-zero coefficients of gradient. Once again, this is done by pulling out the values you need, doing the necessary ops, and then turning it back into a sparse vector.
OK, so what can we do about this.
**Proposal 1: Nothing is wrong, carry on.** The code above is wordy, but some of it can be simplified by clone()'ing the sparse tensor, and then performing the updates on values in place. (@soumith suggested this to me.) For example, it's an easy matter to rewrite `state['sum'].add_(make_sparse(grad_values.pow(2)))` to:
```python
s = grad.clone()
s.values().pow_(2)
state['sum'].add_(s)
```
Which avoids making use of the `make_sparse` function.
There are some downsides to this approach:
1. You have to be careful to make sure your sparse tensor is coalesced; an uncoalesced tensor may have multiple entries for an index in its values array; if we naively apply an operation to the values tensor of an uncoalesced tensor, that is bad. Indeed, the adagrad code initially got this wrong, and it was fixed in 1018b238acf2ecfc836484f3153c2864e9f4e963/
2. There are two minor performance hits to doing things this way. By running `grad.clone()` you are forced to make a copy of values and indexes. The values copy is wasted, because you're going to immediately do an inplace update; you should have just directly written in the updated values. Depending on what you do to the sparse tensor, the index copy may also be wasted, because you may not ever actually change the shape of the tensor (as far as I can tell, we don't have any copy on write.)
3. I get really nervous when performing mutations on tensors which are part of the internal representations of others, e.g., as occurs in `s.values().pow_()`. Yes, it is pretty clear what the intent is in this code snippet, but I think when there is an indirection, it's easier to accidentally end up sharing state when you don't want to. Then eek!
**Proposal 2: Add lots of methods.** If we want to get users to avoid mucking about `values()` manually, the next obvious thing to do is add methods to sparse tensor to let them do the things they want to do. This is a clear win for well defined, easy-to-compute operations on sparse tensors like `pow(scalar)` and `sqrt`. We could implement them in Python or hand-write kernels for them (my personal inclination is to implement them in Python, since that reduces the TCB, but @soumith tells me there are good reasons to implement kernels for each of them.)
However, there are some pitfalls.
1. Consider `std_values = std.values().sqrt_().add_(1e-10)`. It is tempting to say that we should replace this line with `std.sqrt_().add_(1e-10)`, implementing appropriate sparse versions of sqrt_ and add_. But now there is a problem: the "sparse" version of add... isn't an add at all! If you add a scalar to a sparse tensor, the result isn't going to be sparse anymore! The operation you *really* wanted here was "add scalar if we actually have an entry for it in the sparse vector" which is a kind of weird operation, and in any case really shouldn't be called add. (To add insult to injury, it's not even "add scalar if entry is nonzero" because, prior to coalescing, we might very well have redundant zero entries in our sparse tensor!) So really, if we add a method for this operation, we should not call it add; perhaps we should name it `sadd` ("sparse add"). This highlights a merit of Proposal 1, which is that we didn't have to come up with new names for the operations.
2. Let's look more closely at `grad_values / std_values`. We can do this division because we know that the indexes of grad and std are the same. If we are defining a (sparse!) division between two sparse tensors, we do not necessarily know if this is the case: the sparse tensors might be on completely different indexes. (Actually, I don't even know what the intended semantics of division should be in this case.) Nor can I think of a cheap way of testing that this is the case, since if you were using `clone()` object equality no longer holds between the indexes.
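The `sadd` pitfall above can be made concrete with a toy COO-style container (pure Python, my own names, nothing like the real torch API): adding a scalar to only the stored entries keeps the result sparse, while a true scalar add touches every position and densifies.

```python
class SparseVec:
    """Toy 1-D COO sparse vector: {index: value}, assumed coalesced."""

    def __init__(self, size, entries):
        self.size = size
        self.entries = dict(entries)

    def sadd(self, scalar):
        # "Sparse add": touches only stored entries, so the result stays
        # sparse. This is NOT a mathematical scalar add.
        return SparseVec(self.size,
                         {i: v + scalar for i, v in self.entries.items()})

    def add(self, scalar):
        # True scalar add: every position gains `scalar`, so the result
        # is unavoidably dense.
        return [self.entries.get(i, 0.0) + scalar for i in range(self.size)]

    def to_dense(self):
        return [self.entries.get(i, 0.0) for i in range(self.size)]
```

For a million-element dimension with a handful of stored entries, `add` allocates the full dense vector while `sadd` stays proportional to the number of stored values, which is why the two operations deserve different names.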
**Proposal 3: A lifting operator (or two).** One final possibility is to define a lifting function, which takes a method invocation on dense tensors, and lifts it into a sparse variant. For example, we might rewrite `std_values = std.values().sqrt_().add_(1e-10)` as `std.lift_sparse(lambda x: x.sqrt_().add_(1e-10))`. Similarly, `torch.lift_sparse(torch.div, std, grad)` would take division on dense tensors and lift it to apply on sparse vectors, handling the necessary unwrapping and wrapping, and ensuring that the inputs are coalesced.
@soumith didn't really like this idea, because when you provide an operator like this, users may end up using it in ways that you don't expect, leading to strange errors on their end. I know lifting operators like this work quite nicely in Haskell, but it is certainly true that Python is not Haskell.
Thanks for reading. Interested to hear your thoughts. | module: sparse,low priority,triaged | medium | Critical |
224,870,953 | flutter | Canonical routing example code isn't documented | We find that our developers learn by reading the code, and documenting the code with dartdocs is a helpful way to teach and orient the developer.
https://github.com/flutter/flutter/blob/master/examples/stocks/lib/main.dart is our canonical "simple" example of routing. We'd love to see inline dartdocs here, pointing out the main concepts and what the developer should see and learn by looking at this file.
We link to https://github.com/flutter/flutter/blob/master/examples/stocks/lib/main.dart from Gallery source code, and we'll link to it from our now in-progress /routing-and-navigation page.
Thanks! | framework,d: api docs,d: examples,f: routes,P2,team-framework,triaged-framework | low | Minor |
224,872,518 | nvm | --reinstall-packages-from fails when local repositories are npm link'd | - Operating system and version: macOS Sierra 10.12.4
- `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
nvm --version: v0.33.0
$SHELL: /bin/zsh
$HOME: /Users/phulce
$NVM_DIR: '$HOME/.nvm'
$PREFIX: ''
$NPM_CONFIG_PREFIX: ''
nvm current: v7.9.0
which node: $NVM_DIR/versions/node/v7.9.0/bin/node
which iojs: iojs not found
which npm: $NVM_DIR/versions/node/v7.9.0/bin/npm
npm config get prefix: $NVM_DIR/versions/node/v7.9.0
npm root -g: $NVM_DIR/versions/node/v7.9.0/lib/node_modules
```
</details>
- `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
v6.9.5
v7.2.1
-> v7.9.0
default -> v7.2.1
node -> stable (-> v7.9.0) (default)
stable -> 7.9 (-> v7.9.0) (default)
iojs -> N/A (default)
lts/* -> lts/boron (-> N/A)
lts/argon -> v4.8.2 (-> N/A)
lts/boron -> v6.10.2 (-> N/A)
```
</details>
- How did you install `nvm`? (e.g. install script in readme, homebrew):
homebrew
- What steps did you perform?
Ran `nvm install v7.9.0 --reinstall-packages-from=v7.2.1`
- What happened?
Saw errors in the console and no globally installed packages were reinstalled (`yarn` for example).
<details>
```sh
Downloading and installing node v7.9.0...
Downloading https://nodejs.org/dist/v7.9.0/node-v7.9.0-darwin-x64.tar.xz...
######################################################################## 100.0%
Computing checksum with shasum -a 256
Checksums matched!
Now using node v7.9.0 (npm v4.2.0)
VERSION=''
xargs: unterminated quote
Reinstalling global packages from v7.2.1...
Linking global packages from v7.2.1...
nvm_cd:cd:2: no such file or directory: /Users/phulce/Code/npm-linked-package-a\n/Users/phulce/Code/npm-linked-package-b\n/Users/phulce/Code/npm-linked-package-c\n/Users/phulce/Code/npm-linked-package-d\n/Users/phulce/Code/npm-linked-package-e
```
</details>
- What did you expect to happen?
No errors printed to console and globally installed packages from my previous version are available in the new installation (re-linking local packages that are `npm link`'d would be a nice bonus but I did not expect it :) )
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
Yes before the `nvm.sh` eval I add a few local
| bugs,needs followup | medium | Critical |