id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
111,744,658 | neovim | Vim may leave file named "4913" under certain conditions | An interesting case was [mentioned](https://twitter.com/cortesi/status/654769811914780673): Neo/Vim (~~on Windows only?~~) [creates a temporary file](https://github.com/neovim/neovim/blob/536c0ba27e79929eb30850d8e11f2ed026930ab3/src/nvim/fileio.c#L2710) to check if a directory is writable and see the resulting ACL.
> So, if you’re writing software that watches for file changes, you’ll find **Vim [creates and deletes file 4913](https://github.com/b4winckler/vim/blob/1f611c1f921bc219b44272f13a298cd8d97aa6f0/src/fileio.c#L3704) on almost every edit**. [ref](https://twitter.com/cortesi/status/654770457141350400)
vim_dev thread (2009): https://groups.google.com/d/msg/vim_dev/sppdpElxY44/v9fOtS1ji-cJ
```
3508 /* Close the file before removing it, on MS-Windows we
3509 * can't delete an open file. */
3510 close(fd);
3511 mch_remove(IObuff);
```
> There's a race condition between lines 3510 and 3511, wherein another process could open a handle to the file and thus prevent the delete from succeeding. One class of software that is known to do this and cause problems is antivirus programs, which want to scan things for viruses as soon as they are created. Another class of software that sometimes causes problems is local search programs, which want to index things as soon as they are created. Such software can do their tasks unobtrusively if they are written carefully and using the mechanisms provided by the operating system, namely oplocks. Unfortunately there are lots of bad virus scanners and indexers out there (lots of bad software in general!).
>
> One solution which is very commonly used in Windows programs is to detect when a delete or rename operation fails (can be narrowed to specific error codes -- sharing violation and access denied), and retry a couple times with short delays in between. The idea is to give the interfering process some time to finish whatever work it was doing with the file. ... How many people out there have tried to delete a large hierarchy of files using "rd /s /q", and had it inexplicably fail to delete a few files? Yet if you repeat the command it will generally succeed. Most likely a virus scanner or search indexer got in the way and is now out of the way.
>
> Another solution, which is cleaner but can only be used in cases where you know you want to delete a file immediately after closing its handle, is to **set the delete disposition on the file before closing the handle.** There are a couple of ways using Windows APIs to do this. One way is to pass FILE_FLAG_DELETE_ON_CLOSE to CreateFile() when opening the handle originally. Another way is to send down a FileDispositionInfo using SetFileInformationByHandle() or equivalent. (Essentially the DeleteFile() API just does the following: open a handle, set delete disposition, and close the handle; if we break apart these steps, we gain finer control.)
| bug-vim,filesystem | low | Critical |
111,751,149 | TypeScript | Formatting multiple-line comment doesn't normalize leading whitespaces | ``` typescript
function foo() {
/*
multiple line comment
*/
}
```
The formatter does not change the `\t` character before `/*` into spaces. Is this also by design?
| Bug,Help Wanted | low | Minor |
111,788,157 | rust | Wrong error message (or type system bug) for unsatisfied trait bound | Here's the [code](https://play.rust-lang.org/?gist=8295fe2b1cf5225ae082&version=nightly). Not sure how to get reduced example, sorry about that.
So, in general, the problem is that I have
``` rust
pub trait WhereType<'a>: ToSQL + CloneToTrait<'a> {}
```
and some implementations of it (lines 32-37). At line 37 I'm trying to implement this trait for a struct which does not implement `ToSQL`. I would expect an error message about that, but instead I'm getting this:
``` rust
<anon>:37:1: 37:43 error: the trait `Numeric` is not implemented for the type `Subquery<'a>` [E0277]
<anon>:37 impl<'a> WhereType<'a> for Subquery<'a> {}
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<anon>:37:1: 37:43 help: see the detailed explanation for E0277
<anon>:37:1: 37:43 note: required by `WhereType`
<anon>:37:1: 37:43 error: the trait `core::fmt::Display` is not implemented for the type `Subquery<'a>` [E0277]
<anon>:37 impl<'a> WhereType<'a> for Subquery<'a> {}
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<anon>:37:1: 37:43 help: see the detailed explanation for E0277
<anon>:37:1: 37:43 note: `Subquery<'a>` cannot be formatted with the default formatter; try using `:?` instead if you are using a format string
<anon>:37:1: 37:43 note: required by `WhereType`
error: aborting due to 2 previous errors
playpen: application terminated with error code 101
```
It looks like rustc is trying to satisfy this impl instead:
``` rust
impl<'a, T: Numeric + ToSQL + Clone> WhereType<'a> for T {}
```
Not sure if it's a bug, but it's definitely a wrong/misleading error message.
It's the same on stable/beta/nightly.
| C-enhancement,A-diagnostics,A-trait-system,T-compiler | low | Critical |
111,882,982 | go | tour: confusion with pointer and methods. | Context: http://tour.golang.org/methods/3
I played around with the code and tried this:
``` go
// ...
func (v Vertex) Scale(f float64) Vertex {
	v.X = v.X * f
	v.Y = v.Y * f
	return v
}

// ...
func main() {
	v := &Vertex{3, 4}
	fmt.Printf("Before scaling: %+v, Abs: %v\n", v, v.Abs())
	v = &v.Scale(5)
	fmt.Printf("After scaling: %+v, Abs: %v\n", v, v.Abs())
}
```
The compiler gives the error: prog.go:25: cannot take the address of v.Scale(5)
But if I write instead:
``` go
w := v.Scale(5)
v = &w
```
..., then the code works.
This was confusing for me! Can you clarify this?
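For reference, the error follows from the rule that the result of a function or method call is not addressable in Go; only variables (and a few other operands) are. A minimal runnable sketch of the working pattern, reconstructing the tour's `Vertex` with an assumed `Abs` method:

``` go
package main

import (
	"fmt"
	"math"
)

type Vertex struct {
	X, Y float64
}

// Scale has a value receiver: it operates on a copy and returns the result.
func (v Vertex) Scale(f float64) Vertex {
	v.X *= f
	v.Y *= f
	return v
}

func (v Vertex) Abs() float64 {
	return math.Sqrt(v.X*v.X + v.Y*v.Y)
}

func main() {
	v := &Vertex{3, 4}
	fmt.Printf("Before scaling: %+v, Abs: %v\n", v, v.Abs())
	// v = &v.Scale(5) // compile error: the call's result is not addressable
	w := v.Scale(5) // w is a variable, so it has an address
	v = &w
	fmt.Printf("After scaling: %+v, Abs: %v\n", v, v.Abs())
}
```

Alternatively, giving `Scale` a pointer receiver (`func (v *Vertex) Scale(f float64)`) mutates the vertex in place and avoids taking an address at all.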
| NeedsInvestigation | low | Critical |
111,924,997 | rust | dropck doc oversights / revision | The implementation of non-parametric dropck (#28861) included comments and doc that is missing a few things.
1. There was feedback from @Gankro on changes to the rustonomicon that should be further tweaks (see comment thread from #28861)
2. There should be a _huge_ warning (maybe even a lint or something) that if you put `#[unsafe_destructor_blind_to_params]` on a struct that is using native pointers (e.g. `struct Foo<T>( { ptr: *mut T }`) then you almost certainly need a `PhantomData` in there.
The old code before #28861 was in fact busted in this respect, as one can see by taking the test case from #29106 and running it on older versions of Rust.
| C-enhancement,A-destructors | low | Major |
111,953,410 | rust | Dead code warning should provide some help for making the code public in library crates | When compiling a library crate a dead code warning occurs it might be a sign that programmers actually wanted to make the code public. The warning message should tell them how to do that (e.g. `pub` keyword missing on `mod` or `fn`/`enum`/...).
Especially making the `mod` `pub` can easily be forgotten. (E.g. [this person](https://www.reddit.com/r/rust/comments/2dt2v6/why_does_rustc_complain_about_dead_code_when_im/) and me did.)
| C-enhancement,A-lints,A-diagnostics,T-compiler,L-dead_code | low | Minor |
112,021,282 | youtube-dl | Add support for tvfplay.com | youtube-dl http://tvfplay.com/video/detail/477/TVF-Pitchers-S01E05-Where-Magic-Happens --verbose
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['http://tvfplay.com/video/detail/477/TVF-Pitchers-S01E05-Where-Magic-Happens', '--verbose']
[debug] Encodings: locale 'UTF-8', fs 'UTF-8', out 'UTF-8', pref: 'UTF-8'
[debug] youtube-dl version 2014.02.17
[debug] Python version 2.7.6 - Linux-3.16.0-50-generic-x86_64-with-Ubuntu-14.04-trusty
[debug] Proxy map: {}
[generic] TVF-Pitchers-S01E05-Where-Magic-Happens: Requesting header
WARNING: Falling back on generic information extractor.
[generic] TVF-Pitchers-S01E05-Where-Magic-Happens: Downloading webpage
[generic] TVF-Pitchers-S01E05-Where-Magic-Happens: Extracting information
ERROR: Unsupported URL: http://tvfplay.com/video/detail/477/TVF-Pitchers-S01E05-Where-Magic-Happens; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 493, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 158, in extract
return self._real_extract(url)
File "/usr/lib/python2.7/dist-packages/youtube_dl/extractor/generic.py", line 380, in _real_extract
raise ExtractorError('Unsupported URL: %s' % url)
ExtractorError: Unsupported URL: http://tvfplay.com/video/detail/477/TVF-Pitchers-S01E05-Where-Magic-Happens; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
| site-support-request,account-needed | low | Critical |
112,168,118 | go | runtime: Plan 9 stuck forever in TestNoHelperGoroutines | Plan 9 sometimes hangs in TestNoHelperGoroutines:
https://storage.googleapis.com/go-build-log/eb8fa651/plan9-386_f2178495.log
```
##### GOMAXPROCS=2 runtime -cpu=1,2,4
panic: test timed out after 5m0s
goroutine 232244 [running]:
testing.startAlarm.func1()
/tmp/workdir/go/src/testing/testing.go:703 +0xfd
created by time.goFunc
/tmp/workdir/go/src/time/sleep.go:129 +0x35
goroutine 1 [chan receive, 4 minutes]:
testing.RunTests(0x238e74, 0x2e75a0, 0x99, 0x99, 0x185801)
/tmp/workdir/go/src/testing/testing.go:562 +0x71a
testing.(*M).Run(0x104d4f84, 0xb171f)
/tmp/workdir/go/src/testing/testing.go:494 +0x67
main.main()
runtime/_test/_testmain.go:896 +0xff
goroutine 232243 [syscall, 4 minutes]:
syscall.Syscall6(0x4, 0x11bc5400, 0x200, 0xffffffff, 0xffffffff, 0x0, 0x0, 0xee4b, 0x200, 0x184a00, ...)
/tmp/workdir/go/src/syscall/asm_plan9_386.s:57 +0x5
syscall.Pread(0x4, 0x11bc5400, 0x200, 0x200, 0xffffffff, 0xffffffff, 0x2910a, 0x0, 0x0)
/tmp/workdir/go/src/syscall/zsyscall_plan9_386.go:228 +0x72
syscall.Read(0x4, 0x11bc5400, 0x200, 0x200, 0x0, 0x0, 0x0)
/tmp/workdir/go/src/syscall/syscall_plan9.go:123 +0x54
os.(*File).read(0x104883e8, 0x11bc5400, 0x200, 0x200, 0x11bc5400, 0x0, 0x0)
/tmp/workdir/go/src/os/file_plan9.go:248 +0x49
os.(*File).Read(0x104883e8, 0x11bc5400, 0x200, 0x200, 0x1, 0x0, 0x0)
/tmp/workdir/go/src/os/file.go:95 +0x6e
bytes.(*Buffer).ReadFrom(0x104b65a0, 0x30461840, 0x104883e8, 0x0, 0x0, 0x0, 0x0)
/tmp/workdir/go/src/bytes/buffer.go:173 +0x1c2
io.copyBuffer(0x30461810, 0x104b65a0, 0x30461840, 0x104883e8, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/tmp/workdir/go/src/io/io.go:374 +0x128
io.Copy(0x30461810, 0x104b65a0, 0x30461840, 0x104883e8, 0x0, 0x0, 0x0, 0x0)
/tmp/workdir/go/src/io/io.go:350 +0x52
os/exec.(*Cmd).writerDescriptor.func1(0x0, 0x0)
/tmp/workdir/go/src/os/exec/exec.go:232 +0x71
os/exec.(*Cmd).Start.func1(0x107aa3c0, 0x1083f880)
/tmp/workdir/go/src/os/exec/exec.go:340 +0x1c
created by os/exec.(*Cmd).Start
/tmp/workdir/go/src/os/exec/exec.go:341 +0x77a
goroutine 232236 [syscall, 4 minutes, locked to thread]:
syscall.Syscall(0x106fdcc8, 0x200, 0x0, 0x0, 0x3, 0x2, 0x1, 0x801)
/tmp/workdir/go/src/syscall/asm_plan9_386.s:22 +0x5
syscall.await(0x106fdcc8, 0x200, 0x200, 0x104eb09c, 0x0, 0x0)
/tmp/workdir/go/src/syscall/zsyscall_plan9_386.go:45 +0x5a
syscall.Await(0x10836220, 0x0, 0x0)
/tmp/workdir/go/src/syscall/syscall_plan9.go:200 +0x89
syscall.startProcess.func1(0x10817b40, 0x16, 0x10817a80, 0x4, 0x4, 0x10834060, 0x11bbab00)
/tmp/workdir/go/src/syscall/exec_plan9.go:564 +0x212
created by syscall.startProcess
/tmp/workdir/go/src/syscall/exec_plan9.go:568 +0x91
goroutine 232234 [chan receive, 4 minutes]:
syscall.WaitProcess(0x48f, 0x11bc32e0, 0x0, 0x0)
/tmp/workdir/go/src/syscall/exec_plan9.go:640 +0x96
os.(*Process).wait(0x10818d90, 0x0, 0x0, 0x0)
/tmp/workdir/go/src/os/exec_plan9.go:71 +0x71
os.(*Process).Wait(0x10818d90, 0x1081a200, 0x0, 0x0)
/tmp/workdir/go/src/os/doc.go:45 +0x2a
os/exec.(*Cmd).Wait(0x107aa3c0, 0x0, 0x0)
/tmp/workdir/go/src/os/exec/exec.go:380 +0x19a
os/exec.(*Cmd).Run(0x107aa3c0, 0x0, 0x0)
/tmp/workdir/go/src/os/exec/exec.go:258 +0x57
os/exec.(*Cmd).CombinedOutput(0x107aa3c0, 0x0, 0x0, 0x0, 0x0, 0x0)
/tmp/workdir/go/src/os/exec/exec.go:424 +0x23e
runtime_test.executeTest(0x104b6540, 0x23c0e0, 0xb4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/tmp/workdir/go/src/runtime/crash_test.go:83 +0xcd9
runtime_test.TestNoHelperGoroutines(0x104b6540)
/tmp/workdir/go/src/runtime/crash_test.go:230 +0x4d
testing.tRunner(0x104b6540, 0x2e7708)
/tmp/workdir/go/src/testing/testing.go:456 +0x8e
created by testing.RunTests
/tmp/workdir/go/src/testing/testing.go:561 +0x6e8
FAIL runtime 300.057s
2015/10/19 11:12:40 Failed: exit status: 'go 158: 1'
```
| OS-Plan9,compiler/runtime | low | Critical |
112,170,266 | TypeScript | Combining destructuring with parameter properties | Today, we can take advantage of parameter properties to reduce the boilerplate, e.g:
``` typescript
class Person {
constructor(public firstName: string, public lastName: number, public age: number) {
}
}
```
Since 1.5, we can also use destructuring, e.g:
``` typescript
class Person {
firstName: string;
lastName: string;
age: number;
constructor({ firstName, lastName, age } : { firstName: string, lastName: string, age: number }) {
this.firstName = firstName;
this.lastName = lastName;
this.age = age;
}
}
```
I've tried in many ways to combine both features, but had no success. So:
1. Is it possible to combine them nowadays, and if yes, how?
2. If not, could it be an improvement to a future TypeScript version? E.g:
``` javascript
class Person {
constructor(public { firstName, lastName, age } : { firstName: string, lastName: string, age: number }) {
}
}
// the code above would possibly transpile to:
var Person = (function () {
function Person(_a) {
var firstName = _a.firstName, lastName = _a.lastName, age = _a.age;
this.firstName = firstName;
this.lastName = lastName;
this.age = age;
}
return Person;
})();
```
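Until (or unless) such a combination is supported, one workaround, sketched here and not part of the proposal above, is declaration merging plus `Object.assign`, which avoids repeating each field assignment:

``` typescript
interface PersonFields {
  firstName: string;
  lastName: string;
  age: number;
}

// Declaration merging: the interface supplies the class's field declarations.
interface Person extends PersonFields {}
class Person {
  constructor(fields: PersonFields) {
    // Copies every field in one statement instead of one assignment each.
    Object.assign(this, fields);
  }
}

const p = new Person({ firstName: "Ada", lastName: "Lovelace", age: 36 });
console.log(p.firstName); // "Ada"
```

This keeps a single typed parameter object while still exposing the fields as class members.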
| Suggestion,In Discussion,Effort: Moderate | high | Critical |
112,171,070 | youtube-dl | [cbsnews] Add support for playlists | It's been working fine for months now... Here it is;
C:\Notify+>C:\Python27\Scripts\youtube-dl.exe --verbose -o r:+cbs-60.minutes.fl
v --ffmpeg-location=c:\python27\scripts --recode-video=mp4 http://www.cbsnews.co
m/latest/60-minutes/full-episodes/
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--verbose', u'-o', u'r:\+cbs-60.minutes.flv', u'-
-ffmpeg-location=c:\python27\scripts', u'--recode-video=mp4', u'http://www.cbs
news.com/latest/60-minutes/full-episodes/']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2015.10.18
[debug] Python version 2.7.8 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-72086-g51f6455, ffprobe N-72086-g51f6455, rtmpdum
p 2.4
[debug] Proxy map: {}
[CBSNews] full-episodes: Downloading webpage
ERROR: Unable to extract video JSON info; please report this issue on https://yt
-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U t
o update. Be sure to call youtube-dl with the --verbose flag and include its com
plete output.
Traceback (most recent call last):
File "youtube_dl\YoutubeDL.pyo", line 661, in extract_info
File "youtube_dl\extractor\common.pyo", line 291, in extract
File "youtube_dl\extractor\cbsnews.pyo", line 53, in _real_extract
File "youtube_dl\extractor\common.pyo", line 594, in _html_search_regex
File "youtube_dl\extractor\common.pyo", line 585, in _search_regex
RegexNotFoundError: Unable to extract video JSON info; please report this issue
on https://yt-dl.org/bug . Make sure you are using the latest version; type you
tube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and in
clude its complete output.
| request | low | Critical |
112,190,748 | go | tour: add a tour of text/template and html/template | The Go text/template and html/template are widely used and well-documented, but a tour would provide a better way of teaching the template language, starting from the basics and moving on to more advanced features. I'd also like to see non-HTML uses of text/template called out, such as generating markdown or complex SQL queries.
| NeedsInvestigation | low | Minor |
112,197,102 | opencv | VideoCapture::isOpened() test error | modules/videoio/src/cap.cpp on line 600
``` cpp
bool VideoCapture::isOpened() const
{
return (!cap.empty() || !icap.empty());
}
```
Shouldn't the test be:
``` cpp
return (!cap.empty() && !icap.empty());
```
I ran into a case where isOpened() returned true even though the device was already in use; icap was empty, but cap was not.
| bug,priority: normal,category: videoio,affected: 3.4 | low | Critical |
112,288,567 | TypeScript | Heuristic/loose completions for the 'any' type | # Motivation
Today, if you have a value of type 'any', dotting off of the value and requesting members for completion will return nothing back. The original motivation for this was that we should never risk giving users incorrect completions.
This is great for a fully typed TypeScript project. However, for those migrating to TypeScript, as well as those who need to drop down to `any` fairly frequently, this can be slightly frustrating.
For those used to the JS editing experience in editors like Sublime where they just "got completions", the current behavior might be seen as unattractive.
# Proposal
We should consider adding a language service option for users to get "loose" completions akin to what we give in the Salsa language service.
We can still keep the warning for each completion, but we should provide completions with a builder.
| Suggestion,In Discussion | low | Major |
112,346,379 | opencv | MacOSX: VideoWriter slows down after some point | Performance of VideoWriter's write method gets very low after some point. In these two stackoverflow questions, problem is explained clearly:
- http://stackoverflow.com/questions/26695486/python-opencv-videowriter-slows-down-at-around-3000-write
- http://stackoverflow.com/questions/32163691/opencv-videowriter-become-slower-and-slower
What is the reason for this specific problem? Is the VideoWriter class not suitable for writing long (longer than 3 minutes) videos?
| bug,wontfix,priority: low,category: videoio | low | Major |
112,483,782 | kubernetes | NFS volume plugin should translate to service IP before making mount call | I'm trying to brush off the [nfs example](https://github.com/kubernetes/kubernetes/tree/master/examples/nfs) now that privileged is working by default, and one of the small warts in it is the web server pod needs a hardcoded IP or hostname in order to bring the pieces together. (And I'm running through it for GCE to make sure it works there.)
I think this is straightforward to fix with an API call during the mount, then we can just call system mount with the cluster IP.
cc @thockin @cjcullen @jsafrane
| sig/storage,lifecycle/frozen | low | Major |
112,576,608 | rust | Associated item projections across crate boundaries slightly broken | ``` rust
pub trait Foo<T> { }
pub trait Mirror {
type Dual;
}
pub struct Eps;
pub struct A<T, U>(T, U);
impl<T, U: Mirror> Mirror for A<T, U> { type Dual = B<T, U::Dual>; }
pub struct B<T, U>(T, U);
impl<T, U: Mirror> Mirror for B<T, U> { type Dual = A<T, U::Dual>; }
impl Mirror for Eps { type Dual = Eps; }
```
``` rust
extern crate whatever;
use whatever::*;
struct Dummy;
impl Foo<A<Eps, B<Eps, Eps>>> for Dummy {}
// doesn't work, causes "conflicting implementations for trait `whatever::Foo`"
// impl Foo<<A<Eps, B<Eps, Eps>> as Mirror>::Dual> for Dummy {}
// works just fine
// impl Foo<B<Eps, A<Eps, Eps>>> for Dummy {}
```
This despite the fact that `<A<Eps, B<Eps, Eps>> as Mirror>::Dual` and `B<Eps, A<Eps, Eps>>` are the same type. It works just fine when they're in the same crate.
| A-trait-system,P-medium,A-associated-items,T-lang,C-bug,T-types,A-coherence | low | Critical |
112,581,353 | opencv | Documentation: Update docs about connected component label images | The docs should be updated to state that: given the same binary input image (inverted for `cv::distanceTransform()`), the labels in the images returned by all the following functions will all be consistent:
- `cv::distanceTransform()` (version with labels)
- `cv::connectedComponents()`
- `cv::connectedComponentsWithStats()`
See [SO Q&A](http://stackoverflow.com/questions/33256284/label-images-in-opencv) about this.
| priority: normal,feature,category: documentation,affected: 3.4 | low | Minor |
112,651,600 | kubernetes | Testing doesn't sufficiently cover client/cluster version skew | As we're moving forward with the `v1.1` release, we're realizing we don't have sufficient test coverage for version-skew between clients and clusters. Recently, we put in place upgrade tests:
```
# kubernetes-upgrade-gke-1.0-1.1
#
# This suite:
#
#1. launches a cluster at release/latest-1.0,
#2. upgrades the master to ci/latest-1.1
#3. runs release/latest-1.0 e2es,
#4. upgrades the rest of the cluster,
#5. runs release/latest-1.0 e2es again, then
#6. runs ci/latest-1.1 e2es and tears down the cluster.
```
When these tests run, the client version always matches the e2e version, so in `3.` and `5.` the client is a `1.0` client, in `6.` it's a `1.1` client.
Problems:
- [ ] We don't cover the reverse case, where we have a `1.1` client working a `1.0` cluster.
- This is tough because we don't expect `1.1` tests to pass against a `1.0` cluster, so we'd, in theory, have to run `1.0` tests through a `1.1` client, which is tough.
Possible solutions:
- [x] Run `1.1` tests against a `1.0` cluster, and see what breaks (#16041);
- [ ] Figure out a way to link different versions of the tests and client library together;
- [ ] Figure out a way to run the test-command suite against a live, version-skewed cluster.
This is _not_ a `v1.1` blocker.
Please add other cases we don't catch, so we have a sense of where we need to go.
| area/test,priority/important-soon,area/kubectl,sig/api-machinery,area/test-infra,area/upgrade,lifecycle/frozen | low | Major |
112,681,844 | opencv | opencv-2.4.* compilation issue with cuda, bumblebee | Hi
i have an issue and i have searched many forums, maillists for a solution so i believe it is a bug. I have tried to install OpenCV-2.4.11 on my laptop with an integrated Intel videocontroller and a separate NVIDIA Geforce 940M videocontroller. I have installed Cuda with the capability version 5.0 with the Cuda 7.5 driver and runtimeversion. The repo is the OpenSUSE bumblebee repository with nvidia-bumblebee. I haven't started the bumblebee daemon yet to avoid any troubles that maybe comes with that. When i want to make it with this configuration
```
-- Linker flags (Debug):
-- Precompiled headers: YES
--
-- OpenCV modules:
-- To be built: core flann imgproc highgui features2d calib3d ml video legacy objdetect photo gpu ocl nonfree contrib java python stitching superres ts videostab
-- Disabled: world
-- Disabled by dependency: -
-- Unavailable: androidcamera dynamicuda viz
--
-- GUI:
-- QT 5.x: YES (ver 5.4.2)
-- QT OpenGL support: YES (Qt5::OpenGL 5.4.2)
-- OpenGL support: YES (/usr/lib64/libGLU.so /usr/lib64/libGL.so /usr/lib64/libSM.so /usr/lib64/libICE.so /usr/lib64/libX11.so /usr/lib64/libXext.so)
-- VTK support: NO
--
-- Media I/O:
-- ZLib: /usr/lib64/libz.so (ver 1.2.8)
-- JPEG: /usr/lib64/libjpeg.so (ver )
-- PNG: /usr/lib64/libpng.so (ver 1.6.13)
-- TIFF: /usr/lib64/libtiff.so (ver 42 - 4.0.4)
-- JPEG 2000: /usr/lib64/libjasper.so (ver 1.900.1)
-- OpenEXR: /usr/lib64/libImath.so /usr/lib64/libIlmImf.so /usr/lib64/libIex.so /usr/lib64/libHalf.so /usr/lib64/libIlmThread.so (ver 2.1.0)
--
-- Video I/O:
-- DC1394 1.x: NO
-- DC1394 2.x: YES (ver 2.2.2)
-- FFMPEG: YES
-- codec: YES (ver 56.26.100)
-- format: YES (ver 56.25.101)
-- util: YES (ver 54.20.100)
-- swscale: YES (ver 3.1.101)
-- gentoo-style: YES
-- GStreamer: NO
-- OpenNI: NO
-- OpenNI PrimeSensor Modules: NO
-- PvAPI: NO
-- GigEVisionSDK: NO
-- UniCap: NO
-- UniCap ucil: NO
-- V4L/V4L2: Using libv4l1 (ver 1.2.1) / libv4l2 (ver 1.2.1)
-- XIMEA: NO
-- Xine: YES (ver 1.2.6)
--
-- Other third-party libraries:
-- Use IPP: NO
-- Use Eigen: YES (ver 3.2.2)
-- Use TBB: NO
-- Use OpenMP: YES
-- Use GCD NO
-- Use Concurrency NO
-- Use C=: NO
-- Use Cuda: YES (ver 7.5)
-- Use OpenCL: YES
--
-- NVIDIA CUDA
-- Use CUFFT: YES
-- Use CUBLAS: YES
-- USE NVCUVID: YES
-- NVIDIA GPU arch: 50
-- NVIDIA PTX archs:
-- Use fast math: YES
--
-- OpenCL:
-- Version: dynamic
-- Include path: /home/peter/programs/opencv-2.4.11/3rdparty/include/opencl/1.2
-- Use AMD FFT: NO
-- Use AMD BLAS: NO
--
-- Python:
-- Interpreter: /usr/bin/python2 (ver 2.7.8)
-- Libraries: /usr/lib64/libpython2.7.so (ver 2.7.8)
-- numpy: /usr/lib64/python2.7/site-packages/numpy/core/include (ver 1.9.0)
-- packages path: lib/python2.7/site-packages
--
-- Java:
-- ant: /usr/bin/ant (ver 1.9.4)
-- JNI: /usr/lib64/jvm/java/include /usr/lib64/jvm/java/include/linux /usr/lib64/jvm/java/include
-- Java tests: YES
--
-- Documentation:
-- Build Documentation: YES
-- Sphinx: /usr/bin/sphinx-build (ver 1.2.3)
-- PdfLaTeX compiler: /usr/bin/pdflatex
-- Doxygen: YES (/usr/bin/doxygen)
--
-- Tests and samples:
-- Tests: YES
-- Performance tests: YES
-- C/C++ Examples: YES
--
-- Install path: /usr/local
--
-- cvconfig.h is in: /home/peter/programs/opencv-2.4.11/build
-- -----------------------------------------------------------------
--
```
I get this linking error:
```
-- Generating done
-- Build files have been written to: /home/peter/programs/opencv-2.4.11/build
[ 0%] Built target opencv_core_pch_dephelp
[ 0%] Built target pch_Generate_opencv_core
[ 3%] Built target opencv_core
[ 3%] Built target opencv_ts_pch_dephelp
[ 3%] Built target pch_Generate_opencv_ts
[ 3%] Built target opencv_flann_pch_dephelp
[ 3%] Built target pch_Generate_opencv_flann
[ 6%] Built target opencv_flann
[ 9%] Built target opencv_imgproc_pch_dephelp
[ 9%] Built target pch_Generate_opencv_imgproc
[ 9%] Built target opencv_imgproc
[ 9%] Automatic moc for target opencv_highgui
[ 9%] Built target opencv_highgui_automoc
[ 9%] Automatic moc for target opencv_highgui_pch_dephelp
[ 9%] Built target opencv_highgui_pch_dephelp_automoc
[ 12%] Built target opencv_highgui_pch_dephelp
[ 12%] Built target pch_Generate_opencv_highgui
[ 12%] Built target opencv_highgui
[ 12%] Built target opencv_features2d_pch_dephelp
[ 12%] Built target pch_Generate_opencv_features2d
[ 12%] Built target opencv_features2d
[ 12%] Built target opencv_video_pch_dephelp
[ 12%] Built target pch_Generate_opencv_video
[ 15%] Built target opencv_video
[ 15%] Built target opencv_ts
[ 15%] Built target opencv_perf_core_pch_dephelp
[ 15%] Built target pch_Generate_opencv_perf_core
Linking CXX executable ../../bin/opencv_perf_core
../../lib/libopencv_core.so.2.4.11: undefined reference to `__cudaRegisterLinkedBinary_52_tmpxft_00004cee_00000000_7_matrix_operations_cpp1_ii_332650c4'
collect2: error: ld returned 1 exit status
modules/core/CMakeFiles/opencv_perf_core.dir/build.make:529: recipe for target 'bin/opencv_perf_core' failed
make[2]: *** [bin/opencv_perf_core] Error 1
CMakeFiles/Makefile2:875: recipe for target 'modules/core/CMakeFiles/opencv_perf_core.dir/all' failed
make[1]: *** [modules/core/CMakeFiles/opencv_perf_core.dir/all] Error 2
Makefile:147: recipe for target 'all' failed
make: *** [all] Error 2
peter@linux-opht:~/programs/opencv-2.4.11/build>
```
I have found this error reported elsewhere, and the solution usually involves adding the nvcc flag `-rdc=true`, but adding that to the configuration causes other troubles. If I use make -j8, the error is swallowed and the build continues with the other processes; the error then reappears at 27% and the build stops there.
Thx peter
| bug,priority: normal,category: build/install,affected: 2.4,category: gpu/cuda (contrib) | low | Critical |
112,907,210 | go | x/build/cmd/coordinator: limit number of TryBot reattempts | I started http://farmer.golang.org/try?commit=ed6176f8 about 4.5 hours ago, and it appears to have attempted to start a Plan 9 GCE image every 6 minutes (i.e., ~45 times now). While its determination is laudable, I think it could stop after maybe 3.
CC @bradfitz @adg
| Builders,NeedsFix | low | Minor |
113,000,744 | neovim | Feature: Disable jump list for buffer | Would be useful to be able to disable the recording of jumps for a specific buffer. Vim doesn't have this, but several searches online indicate it would be useful for others than me.
| enhancement | low | Minor |
113,002,840 | opencv | VideoCapture problem with big videodata | When try to load a video with using the whole UInt32 limit (over 4GByte uncompressed data). I get the error message "Frame offset points outside movi section." exactly 64 times.
The concerned line is in file "modules/videoio/src/cap_mjpeg_decoder.cpp". For me it looks like some header offsets overflow the uint32 max allowed videosize.
The videofile itself works fine on some (but not all) players.
| bug,priority: normal,category: videoio,affected: 3.4 | low | Critical |
113,010,337 | opencv | GPU-hog doesn't work with win_stride smaller then (8, 8) | If you execute the gpu-example-hog on a picture with win_stride (8, 8) and the other default parameters :
- ./gpu-example-hog /lfs/psc/OpenCV_3_TestData/road.png --win_stride_width 8 --win_stride_height 8
the output is :

with win_stride (4, 4) :
- ./gpu-example-hog /lfs/psc/OpenCV_3_TestData/road.png --win_stride_width 4 --win_stride_height 4
the output is:

So if you pick a win_stride smaller than (8, 8), the gpu-hog doesn't work, even though the FPS decreases because it has to calculate more.
Image I used for testing (road.png) :

| bug,priority: normal,category: samples,category: gpu/cuda (contrib) | low | Minor |
113,128,982 | TypeScript | Stdlib typings for typed arrays (and specific errors) need brand fields to improve type safety | All typed arrays are assignable in our present dts.
``` ts
let arr = new Uint8Array(20);
let arr2 = new Int16Array(20);
arr = arr2; // Valid.
```
The same is true of specific errors:
``` ts
let err = new SyntaxError();
let err2 = new RangeError();
err = err2; // Also valid
```
We should fix this, as both of these sets of types are sets whose members are meant to be distinguished by type rather than by structure. This is really important in the case of typed arrays, as the following happens:
``` ts
let arr = new Uint8Array(20);
let arr2 = new Int16Array(20);
arr = arr2; // Valid.
let sliced = arr.slice(0, 10); // Typed as a Uint8Array (really an Int16Array at runtime)
```
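The branding idea can be sketched in isolation; the names below are illustrative, not the actual lib.d.ts change. A literal-typed member makes otherwise structurally identical types mutually incompatible:

``` typescript
// A literal-typed "brand" member distinguishes structurally identical types.
interface U8Like { readonly brand: "u8"; readonly length: number; }
interface I16Like { readonly brand: "i16"; readonly length: number; }

const u8: U8Like = { brand: "u8", length: 20 };
const i16: I16Like = { brand: "i16", length: 20 };

// const oops: U8Like = i16;
// ^ compile error: type '"i16"' is not assignable to type '"u8"'

console.log(u8.brand, i16.brand);
```

The same trick works for the error types: a distinct literal-typed member per error class would make `SyntaxError` and `RangeError` non-interchangeable at the type level.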
| Bug,Help Wanted,Domain: lib.d.ts | low | Critical |
113,170,348 | youtube-dl | Site Support Request: Supersport.com | ```
Iliad:~ aaron$ youtube-dl --version
2015.10.24
```
```
Iliad:~ aaron$ youtube-dl -v -j http://www.supersport.com/cricket/sa-team/video/598304
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'-j', u'http://www.supersport.com/cricket/sa-team/video/598304']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.10.24
[debug] Python version 2.7.10 - Darwin-15.0.0-x86_64-i386-64bit
[debug] exe versions: none
[debug] Proxy map: {}
WARNING: Falling back on generic information extractor.
ERROR: Unsupported URL: http://www.supersport.com/cricket/sa-team/video/598304
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1240, in _real_extract
doc = parse_xml(webpage)
File "/usr/local/bin/youtube-dl/youtube_dl/utils.py", line 1667, in parse_xml
tree = xml.etree.ElementTree.XML(s.encode('utf-8'), **kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 30, column 24
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 661, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 291, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 1838, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: http://www.supersport.com/cricket/sa-team/video/598304
```
| site-support-request | low | Critical |
113,189,219 | rust | #[macro_export] stops macro's documentation tests from working | Adding `#[macro_export]` to a macro stops it from working in documentation tests. Here are a few examples:
`````` rust
/// Test macro
///
/// #Examples
/// ```
/// test_macro!(TestStructure);
/// ```
#[macro_export]
macro_rules! test_macro {
($expression:ident) => (struct $expression);
}
``````
`````` rust
#[macro_export]
/// Test macro
///
/// #Examples
/// ```
/// test_macro!(TestStructure);
/// ```
macro_rules! test_macro {
($expression:ident) => (struct $expression);
}
``````
They both will yield output similar to this (using cargo):
```
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
Doc-tests macro_doc
running 1 test
test test_macro!_0 ... FAILED
failures:
---- test_macro!_0 stdout ----
<anon>:2:5: 2:15 error: macro undefined: 'test_macro!'
<anon>:2 test_macro!(TestStructure);
^~~~~~~~~~
error: aborting due to previous error
failures:
test_macro!_0
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured
thread '<unnamed>' panicked at 'Some tests failed', ../src/libtest/lib.rs:252
```
Removing the export makes it work fine:
`````` rust
/// Test macro
///
/// #Examples
/// ```
/// test_macro!(TestStructure);
/// ```
macro_rules! test_macro {
($expression:ident) => (struct $expression);
}
``````
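For reference, the macro expands fine when invoked directly in the same crate, which narrows the failure to how rustdoc compiles doctests of exported macros. A runnable sketch (the expansion here ends with an explicit `;` so it forms a complete unit-struct item):

```rust
// Same macro shape as the examples above, invoked directly instead of
// from a doctest; this compiles and runs fine even with #[macro_export].
#[macro_export]
macro_rules! test_macro {
    ($name:ident) => {
        struct $name;
    };
}

test_macro!(TestStructure);

fn main() {
    // The macro-generated unit struct exists and is zero-sized.
    assert_eq!(std::mem::size_of::<TestStructure>(), 0);
    println!("ok");
}
```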
| T-rustdoc,A-macros,C-bug,A-doctests | low | Critical |
113,218,141 | youtube-dl | Request: upload_year tag in output template | I'm trying to organize my videos into folders by year. It would be very useful if I could do this natively, rather than managing it from my script.
This could be implemented by reusing the relevant code for the `upload_date` tag and returning its first four characters.
This would result in %(upload_year)s returning the year in YYYY format.
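A sketch of the suggested implementation (the function name is illustrative; youtube-dl's actual output-template machinery lives elsewhere): `upload_date` is a YYYYMMDD string, so the year is simply its first four characters.

```python
# Sketch of the suggestion: youtube-dl's upload_date field is a YYYYMMDD
# string, so an upload_year field would just be its first four characters.
# The function name `upload_year` is illustrative, not an existing API.
def upload_year(upload_date):
    """Return the YYYY portion of a YYYYMMDD upload_date string, or None."""
    if upload_date and len(upload_date) >= 4:
        return upload_date[:4]
    return None

print(upload_year("20151103"))  # -> 2015
```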
| request | low | Major |
113,271,622 | electron | Expose the "Spelling and Grammar" related APIs | On OS X, apart from basic spell-checking, the system also provides a "Spelling and Grammar" menu that allows some more advanced operations:
<img width="666" alt="screen shot 2015-10-26 at 10 02 39 am" src="https://cloud.githubusercontent.com/assets/639601/10719716/bff72fd4-7bc8-11e5-9b97-980e9a75dd91.png">
The Chrome browser has implemented most of these features; we should find a way to let Electron apps use them without any pain.
| enhancement :sparkles:,platform/macOS,component/menu | medium | Critical |
113,374,777 | neovim | Repeated append is O(n²) | Demonstration:
1. Open nvi (BSD nvi).
2. Type `1000000afnord` then ESC.
3. Screen fills up almost instantly.
4. Open neovim.
5. Type `1000000afnord` then ESC.
6. Wait... and wait... and wait...
Originally pointed out by [this rant](http://www.galexander.org/vim_sucks.html) (not mine).
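The complexity claim can be illustrated with a toy model (this is not nvim's actual buffer code): if every append copies the whole buffer so far, n appends cost O(n²) total work, while an amortized growable buffer keeps the total linear.

```python
# Toy model of the O(n^2) behavior (illustrative only, not nvim's code):
# appending n items one at a time costs O(n^2) total work if each append
# copies the existing buffer, but O(n) with an amortized growable buffer.
def quadratic_append(n):
    buf = ""
    ops = 0
    for _ in range(n):
        ops += len(buf)  # cost of copying the existing buffer
        buf += "fnord"
    return ops

def linear_append(n):
    parts = []
    for _ in range(n):
        parts.append("fnord")  # amortized O(1) per append
    return "".join(parts)

print(quadratic_append(10), linear_append(2))  # -> 225 fnordfnord
```

For 1,000,000 appends the quadratic model does on the order of 10¹² character copies, which matches the "wait... and wait..." symptom.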
| performance,bug-vim,normal-mode | low | Major |
113,456,516 | go | runtime: lock_sema.go uses LIFO ordering for waking blocked Ms | In the lock_sema.go implementation of mutexes, lock handles contention by adding the current M to the front of the linked list of waiting Ms, and then unlock wakes up the M at the front of the list. I.e., waiting Ms are awoken in LIFO order.
Is it worth changing unlock to awake the M at the _end_ of the list to effect a FIFO order?
For comparison, lock_futex.go leaves resolving contention to the OS. On Linux, futex wakeup ordering doesn't seem to be guaranteed, but they do at least currently appear to use FIFO ordering (for threads with the same priority/nice level): http://lists.openwall.net/linux-kernel/2015/01/24/106
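The ordering difference can be sketched on a toy waiter list (an illustrative model only, not the runtime's actual code): lock pushes the blocking M onto the head of the list, so waking the head is LIFO, while walking to the tail before waking is FIFO.

```go
package main

import "fmt"

// Toy model of the waiter list (not the runtime's actual code):
// lock() pushes onto the head, so waking the head is LIFO and
// waking the tail is FIFO.
type m struct {
	id   int
	next *m
}

// push models lock(): the newly blocked M becomes the list head.
func push(head *m, w *m) *m {
	w.next = head
	return w
}

// popLIFO models the current unlock(): wake the head (most recent waiter).
func popLIFO(head *m) (woken, rest *m) {
	return head, head.next
}

// popFIFO models the proposed unlock(): wake the tail (oldest waiter).
func popFIFO(head *m) (woken, rest *m) {
	if head.next == nil {
		return head, nil
	}
	prev := head
	for prev.next.next != nil {
		prev = prev.next
	}
	tail := prev.next
	prev.next = nil
	return tail, head
}

func main() {
	var head *m
	for i := 1; i <= 3; i++ {
		head = push(head, &m{id: i})
	}
	lifo, _ := popLIFO(head)
	fifo, _ := popFIFO(head)
	fmt.Println(lifo.id, fifo.id) // prints: 3 1
}
```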
| compiler/runtime | low | Minor |
113,504,167 | go | go/types: inconsistent handling of untyped expressions | Currently if you type check this code with go/types:
```
var (
_ int = 0
_ = int(0)
_ *int = nil
_ = (*int)(nil)
)
```
both `0` expressions will have type "int", whereas the first `nil` expression will have type "untyped nil", and the second will have type "*int". This seems at the least inconsistent. See http://play.golang.org/p/cw8Ldz1U5D
My expectation was that the `0`s would type check as "untyped int" and the `nil`s would type check as "untyped nil". The current behavior of rewriting the type of untyped expressions seems to have two negative consequences that I've noticed trying to use go/types:
1. It makes it difficult to identify useless conversion operations; i.e., expressions `T(x)` where `x` is already of type `T`. Expressions like `int(0)` will trigger as false positives.
2. It causes `types.TypeAndValue.IsNil` to return false for the `nil` subexpression in `(*int)(nil)`, because it doesn't have the type "untyped nil" anymore.
It also seems inconsistent with conversions of already typed expressions, where they're not rewritten.
However, I notice api_test.go has a bunch of tests that seem to explicitly test for this behavior, so it seems intentional?
CC @griesemer
| NeedsFix | low | Major |
113,512,800 | rust | Consider having special debugger pretty printers/handling for Unique/Shared/NonZero | A debugger is particularly useful for diagnosing problems in unsafe code, and these types appear reasonably often there. Currently they're printed in a rather ugly way:
``` rust
#![feature(unique)]
use std::ptr::Unique;
struct Bar { y: u8 }
struct Foo {
ptr: Unique<Bar>,
}
fn main() {
let mut x = Bar { y: 10 };
unsafe {
let f = Foo { ptr: Unique::new(&mut x) };
drop(f);
}
}
```
Compiling with `rustc -g unique.rs` and using `rust-gdb unique` to break on the `drop(f)` line allows one to print `f`:
``` rust
(gdb) break unique.rs:13
(gdb) r
...
(gdb) p f
$1 = Foo = {ptr = Unique<unique::Bar> = {pointer = NonZero<*const unique::Bar> = {0x7fffffffdef8}, _marker = PhantomData<unique::Bar>}}
```
Pretty much the only thing that's even slightly interesting there is the `0x7fffffffdef8` and maybe the `unique::Bar`, the layers of `NonZero` and `PhantomData` are just noise. (And even the raw address is pretty useless, and the type is often obvious from context.)
Also, the only way to examine what the pointer points to is to do a pile of field accesses, like:
```
(gdb) p *f.ptr.pointer.__0
$2 = Bar = {y = 10 '\n'}
```
In the best case, it'd be great if `*f.ptr` could work. (Also, `f.ptr.pointer.__0[x]` being written `f.ptr[x]`, for when the `Unique` is representing an array.)
(I guess there may be other standard library types to consider in a similar context: making `rust-gdb` handle them even more nicely.)
| A-debuginfo,P-low,T-compiler,E-medium,T-dev-tools,C-feature-request | low | Critical |
113,523,266 | TypeScript | More strong Promise<T> in lib.es6.d.ts | I think it would be good to be able to specify the error type in the `Promise` type. I suggest the promise type be something like this:
``` typescript
interface Promise<T,U> {
/**
* Attaches callbacks for the resolution and/or rejection of the Promise.
* @param onfulfilled The callback to execute when the Promise is resolved.
* @param onrejected The callback to execute when the Promise is rejected.
* @returns A Promise for the completion of which ever callback is executed.
*/
then<TResult>(onfulfilled?: (value: T) => TResult | PromiseLike<TResult>, onrejected?: (reason: U) => TResult | PromiseLike<TResult>): Promise<TResult>;
then<TResult>(onfulfilled?: (value: T) => TResult | PromiseLike<TResult>, onrejected?: (reason: U) => void): Promise<TResult>;
/**
* Attaches a callback for only the rejection of the Promise.
* @param onrejected The callback to execute when the Promise is rejected.
* @returns A Promise for the completion of the callback.
*/
catch(onrejected?: (reason: U) => T | PromiseLike<T>): Promise<T>;
catch(onrejected?: (reason: U) => void): Promise<T>;
[Symbol.toStringTag]: string;
}
```
Or at least have a `Promise<T, U>` type which extends `Promise<T>`.
| Suggestion,Awaiting More Feedback | low | Critical |
113,663,538 | angular | ng-content projection for root component | `ng-content` projection doesn't work for the root component. E.g. for the following code:
```
<app>
<p> Some content that needs to be projected </p>
</app>
@Component({
selector: 'app',
})
@View({
template: `<p>Hello world</p>
<ng-content></ng-content>`
})
class App {}
bootstrap(App);
```
only `<p>Hello world</p>` persists in the DOM after `App` is bootstrapped.
I've reviewed the [design doc](https://docs.google.com/document/d/1BXUFVpImyWN7ynyPKehAesctP0UJTaDYbDTNVkarSeQ/edit) and haven't found such restriction.
[Plunker](http://plnkr.co/edit/10aPDcz0EUML1rjK7S3V?p=preview)
| feature,area: core,core: bootstrap,feature: under consideration | high | Critical |
113,938,515 | go | x/crypto/ssh/terminal: cannot handle xterm based terminals on Windows | At work, I use a combination of Windows + Cygwin/MSYS2 to get my job done. Both Cygwin and MSYS2 use mintty, which is an xterm-256color terminal. My preferred backup tool on this setup is [restic](https://github.com/restic/restic). Restic uses x/crypto/ssh/terminal for password authentication. This works fine with a standard DOS box, but not with my bash (POSIX) setup [see restic issue #330](https://github.com/restic/restic/issues/330).
So I've changed the terminal code to use the `util_linux.go` version, but `syscall.Termios` is not defined there.
| OS-Windows | low | Minor |
114,054,977 | thefuck | "git brnch" never completes. Freezes output. | I am running on windows 7 with git bash.
When I run the command "git add -all" then "fuck" I get "no fucks given"
When I run the command "git brnch" (as per the example) I get no output. I can ctrl-c to continue.
When I turn on the debug flag it appears to stop after
"Importing rule: open; took: 0:00:00.000500"
| windows | low | Critical |
114,285,485 | go | cmd/go: go test should be more explicit about test failures | In the best case, the output of `go test github.com/blevesearch/bleve/...` is 88 lines long. If one of the early test fails, the error message can easily be scrolled out and short of looking at `$?` or scrolling back manually, the user has no way to know the test run failed.
I propose `go test` prints something visible at the end of the run in that case like:
```
TEST RESULT: FAILED
```
Having a prefix like `TEST RESULT:` may make it easier for tools processing test output to ignore this line in the future, should its format change (by adding the number of tests run/successful, or time it took, etc.).
A `TEST RESULT: OK` might be displayed on success too.
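Until something like this exists, a tiny wrapper can synthesize such a line from the exit status (a sketch; `go_test_summary` is an illustrative name, and `true`/`false` below stand in for real `go test` invocations):

```shell
# Sketch: derive a summary line from the exit status of the test command.
# `go_test_summary` is an illustrative name; in real use you would call
# it as: go_test_summary go test github.com/blevesearch/bleve/...
go_test_summary() {
  if "$@"; then
    echo "TEST RESULT: OK"
  else
    echo "TEST RESULT: FAILED"
  fi
}

# Stand-in commands instead of a real `go test` run:
go_test_summary true    # prints: TEST RESULT: OK
go_test_summary false   # prints: TEST RESULT: FAILED
```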
Thoughts?
| NeedsDecision | low | Critical |
114,363,804 | go | cmd/gofmt: Faulty Comment Re-Arrangement in Argument Lists | Hi,
I noticed today that gofmt (on the Playground at least) reformats `/* */` comments that are inlined in argument lists inconsistently, and not faithfully to their original ordering in the list:
## Pre-Gofmt
`fmt.Println(/* before first */ "first", /* before second */ "second")`
[demo](http://play.golang.org/p/cBIcd0DWoC)
## Post-Gofmt
`fmt.Println( /* before first */ "first" /* before second */, "second")`
[demo](http://play.golang.org/p/Ouu8UiHMmC)
Notice how in the _Post-Gofmt_ case it moves the comment that occurred _after_ the comma to _before_.
This behavior is incongruent to what happens when the argument lists are spread across multiple lines:
## Newline Distributed Pre-Gofmt
```
package main
import "fmt"
func main() {
fmt.Println(
/* before first */ "first")
fmt.Println(
/* before first */ "first",
/* before second */ "second")
}
```
[demo](http://play.golang.org/p/A21ZkTDdmC)
## Newline Distributed Post-Gofmt
```
package main
import "fmt"
func main() {
fmt.Println(
/* before first */ "first")
fmt.Println(
/* before first */ "first",
/* before second */ "second")
}
```
[demo](http://play.golang.org/p/rtgoQRJO1w)
For the _pre_ cases, be sure to click `Format` in the Go Playground demo to see the post-state.
| NeedsInvestigation | low | Major |
114,366,807 | kubernetes | Static pod Lifecycle Formalization (was: Decide fate of mirror pods) | I would like to open a discussion on how and whether we should continue to support mirror pods, given that 1) mirror pods need to be treated differently in multiple places in kubelet, making them brittle, and 2) there are some limitations of mirror pods that may cause confusion (e.g., what should a user expect when modifying a mirror pod?).
_Background_
kubelet watches a few sources for pods: apiserver, http, and files in a local directory. We support non-apiserver sources for several main use cases:
- Standalone kubelets
- Schedule one pod per node (e.g., fluentd). This will soon be deprecated in favor of the DaemonSet controller.
These pods are sometimes called _static pods_ in the codebase.
For pods from non-apiserver sources, kubelet would create a corresponding _mirror pod_ in the apiserver and continues to update the status. These mirror pods serve two main purposes:
1. Allow users to inspect the pod status through kubectl.
2. Allow scheduler to account for resource usage.
Note that currently the mirror pods may not reflect truthfully what's been running on the node because apiserver can apply default (or cluster-specific) values to certain fields in the mirror pod, but kubelet will adhere to the original pod spec obtained from the source.
**[Option 1]: Deprecate mirror pods completely.**
- The majority use case is covered by DaemonSet.
- For resource accounting, kubelet can report "AllocatableResources" to the scheduler, which already excludes resources occupied by static pods.
- Mirror pods are meaningless for standalone kubelet users.
- The downside would be that if user wants to use static pods, they would not be able to inspect them from the apiserver.
**[Option 2]: Only allow the use of static pods in standalone kubelets.**
- This eliminates the need of mirror pods at the cost of user flexibility.
**[Option 3]: Continue supporting mirror pods, but mark them readonly (non-updatable) in apiserver**
- User won't be allowed to modify the mirror pods arbitrarily.
- Mirror pods could still have values that are different than its static pod.
- We still need to treat mirror pods differently in the kubelet codebase.
**[Option 4]: Continue supporting mirror pods, and kubelet should sync to the mirror pods**
- If apiserver is present, kubelet will first create a mirror pod, and then sync to the pod when it receives the mirror pod through watch.
- If apiserver is not present, kubelet will sync the static pods as well.
- Mirror pods will reflect the truth since kubelet syncs to the mirror pod, and apiserver can supply any additional values in the pod spec.
- Mirror pods are read-only.
- Increased code complexity in kubelet.
**[Option 5]: Option 4 + updatable mirror and static pods**
- If a user updates the mirror pod, kubelet would overwrite the file-based static pod.
- What if the static pod is from an http source or the directory is not writable?
- What if user updates the static pod at the same time?
This is my least preferred option since we'll need to resolve conflicts and handle all corner cases. I think it's reasonable to have a single source of truth.
**[Option 6]: Maintain status quo**
- Static pod is the single source of truth. Users can modify the mirror pods arbitrarily, but kubelet would ignore all the modifications as long as the annotation kubelet uses remains unchanged.
- If the mirror annotation changes in the mirror pod, kubelet recreates the mirror pod.
- If the static pod changes, kubelet recreates the mirror pod.
- Mirror pod may not reflect the true status of the static pod due to apiserver settings, as mentioned before.
/cc a couple folks who have expressed opinions about mirror pods: @bgrant0607 @thockin @davidopp @kubernetes/goog-node
| priority/backlog,sig/node,kind/feature,lifecycle/frozen,area/pod-lifecycle | high | Critical |
114,401,646 | rust | println! sometimes panics in drop methods | If you try to println!-debug a drop method in a type which is being used in TLS, you can have problems because the TLS variable which holds the local stdout might already have been destroyed, turning your innocent print into a panic with "cannot access a TLS value during or after it is destroyed". This seems unnecessarily user-hostile.
(I solved the bug in my code, but I'm kinda curious what the recommended way to trace a drop method is. rt::util::dumb_print?)
| P-medium,T-libs-api,A-thread-locals,C-bug | medium | Critical |
114,442,486 | rust | fs::remove_dir_all rarely succeeds for large directories on windows | I've been trying to track this one down for a while with little success. So far I've been able to confirm that no external programs are holding long-standing locks on any of the files in the directory, and I've confirmed that it happens with trivial rust programs such as the following:
``` rust
use std::fs;
fn main() {
println!("{:?}", fs::remove_dir_all("<path>"));
}
```
I've also confirmed that deleting the folder from explorer (equivalent to using SHFileOperation), or using `rmdir` on the command line will always succeed.
Currently my best guess is that either Windows or other programs are holding transient locks on some of the files in the directory, possibly as part of the directory indexing Windows performs to enable efficient searching.
Several factors led me to think this:
- `fs::remove_dir_all` will pretty much always succeed on the second or third invocations.
- it seems to happen most commonly when dealing with large numbers of text files
- unlocker and other tools show no active handles to any files in the directory
Maybe there should be some sort of automated retry system, such that temporary locks do not hinder progress?
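A sketch of such a retry loop (the function name and delay are illustrative assumptions, not a proposed std API):

```rust
use std::{fs, io, path::Path, thread, time::Duration};

// Sketch of the suggested retry approach (illustrative names, not a
// proposed std API): retry remove_dir_all a few times with short delays
// so transient locks from scanners/indexers don't abort the whole delete.
fn remove_dir_all_retry(path: &Path, attempts: u32) -> io::Result<()> {
    let mut last_err = None;
    for _ in 0..attempts {
        match fs::remove_dir_all(path) {
            Ok(()) => return Ok(()),
            Err(e) => {
                last_err = Some(e);
                thread::sleep(Duration::from_millis(100));
            }
        }
    }
    Err(last_err.expect("attempts must be > 0"))
}

fn main() -> io::Result<()> {
    // Demo against a scratch directory (the name is arbitrary).
    let dir = std::env::temp_dir().join("remove_dir_all_retry_demo");
    fs::create_dir_all(dir.join("sub"))?;
    fs::write(dir.join("sub").join("f.txt"), "x")?;
    remove_dir_all_retry(&dir, 3)?;
    assert!(!dir.exists());
    println!("removed");
    Ok(())
}
```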
edit:
The errors returned seem somewhat random: sometimes it gives a "directory is not empty" error, and other times an "access denied" error.
| O-windows,E-help-wanted,C-bug,T-libs,A-io | medium | Critical |
114,495,567 | go | x/tools/cmd/stringer: Gets confused with pointers to C.struct_xxx types. | Save this http://play.golang.org/p/7IHAOsDFqO in `foo.go` and say:
```
stringer -type MyT foo.go
```
Stringer will fail with:
```
stringer: checking package: foo.go:22:2: invalid operation: pfoo (variable of type *invalid type) has no field or method i
```
Stringer works ok with this, though: http://play.golang.org/p/uIrtOCI59N
Another similar program that triggers the same behavior (stringer failure) is this: http://play.golang.org/p/P2t3O0zONt
| Tools | low | Critical |
114,495,887 | rust | Metadata for DLLs should actually go in the LIB import library on Windows | Since metadata is only needed when building things, but the DLL is needed at runtime, it would be better if the metadata was moved to the LIB import library instead since that is only needed when building things as well. This would save significantly on space when distributing a binary with DLLs.
| O-windows,C-enhancement | low | Minor |
114,520,835 | angular | ViewEncapsulation.None styles bleed into ViewEncapsulation.Native components when siblings | When there are sibling components, one with ViewEncapsulation.Native and the others with ViewEncapsulation.None, the styles from all of the components are appended to the shadow root of the one component set to Native.
~~Plunker: http://plnkr.co/edit/huN8xe0S7xH5CUw4B2P6?p=preview~~
**Updated Stackblitz**: https://stackblitz.com/edit/angular-issue-5059?file=src/app/app.component.html
<img width="491" alt="screen shot 2015-11-01 at 9 58 03 pm" src="https://cloud.githubusercontent.com/assets/79500/10874083/afa88d42-80e3-11e5-8f8e-ad09cd8414df.png">
A component set to Native should not adopt the rest of the CSS rules from non-Native components. In the screenshot of the rendered DOM, the last three style elements are appended to the end of the shadow root, but they are actually declared in the other two components which are set to None and Emulated modes.
If all three components are set to Native, then the rules are applied only to the local component, but when they are mixed the rules bleed over into Native (but don't bleed out of Native).
| type: bug/fix,freq1: low,area: core,core: CSS encapsulation,P3 | medium | Major |
114,541,254 | opencv | the problem of stitching module | I built OpenCV 3.0 with the extra module xfeatures2d on Windows 7 with VS 2013, and then I used the below
> # pragma comment(lib, "opencv_core300.lib")
> # pragma comment(lib, "opencv_imgproc300.lib")
> # pragma comment(lib, "opencv_imgcodecs300.lib")
> # pragma comment(lib, "opencv_stitching300.lib")
> # pragma comment(lib, "opencv_xfeatures2d300.lib")
libraries to build and link stitching_detailed.cpp, which is located in the samples\cpp folder. When I run the program, it shows the following:
> OpenCL program build log: -D depth=0 -D scn=3 -D PIX_PER_WI_Y=4 -D dcn=1 -D bidx
> =0 -D STRIPE_SIZE=1 -D INTEL_DEVICE
> igdfcl64.dll successfully completed build.
> igdfcl64.dll successfully completed build.
> Error, line 14118: compile failed.
So, what is wrong with the code? I can run the stitching code on OpenCV 2.4.11.
| bug,priority: normal,affected: 3.4,category: t-api | low | Critical |
114,543,494 | opencv | "error: (-217) invalid resource handle" when using OpenCV 2.4.11-2.4.x compiled with Cuda 7.5 on second graphic card | This issue only occurs with the OpenCV 2.4.11/2.4.x branch compiled with Cuda 7.5, and does not occur with Cuda 6.5.
Each stream and buffer is allocated and page-locked on the correct device with gpu::setDevice.
It runs well with Cuda 6.5 on multiple GPUs. However, with Cuda 7.5 it fails on the first OpenCV kernel call on the second GPU (GPU device 1) with the following exception:
"d:\opencv-2.4\modules\gpu\include\opencv2\gpu\device\detail/transform_detail.hpp:386: error: (-217) invalid resource handle\n". Strangely, no issue happens on the first GPU (GPU device 0).
When using NPP directly on Cuda 7.5 with multiple GPUs, no issue happens. Unfortunately, I could not manage to find out what's happening...
I can provide sample code if needed.
Thanks
| bug,priority: normal,affected: 2.4,category: gpu/cuda (contrib) | low | Critical |
114,707,570 | rust | Provide a way of linking libgcc statically | gcc et al provide "-static-libgcc" for platforms where a link to libgcc_s.so is undesirable. I need similar functionality in rustc because the cloud I'm deploying to doesn't have that in its runtime.
See issue #29482 for some background.
/cc @alexcrichton
| A-linkage,T-compiler,C-feature-request | low | Major |
114,795,330 | youtube-dl | [8tracks] add a flag to return immediately the first 5 playlist items | Fetching 8tracks playlists waits some time to retrieve items, which can block for a while (see #7349 for details).
It would be nice to have a flag that makes it return the first 5 playlist items immediately and exit.
| request | low | Minor |
114,897,420 | go | x/build: builders for alternate buildmodes? | For -buildmode=shared on linux/386, @mwhudson's current set of patches would borrow the CX register. This seems reasonable to me, but means that assembly that works under -buildmode=exe will not work under -buildmode=shared (see the discussion on http://golang.org/cl/16385).
Should we have a builder that runs the standard library tests compiled with -buildmode=shared? What about c-shared and c-archive?
cc @ianlancetaylor @bradfitz
| Builders,new-builder | low | Minor |
114,927,638 | TypeScript | Inconsistent quick info between interface and namespace declaration | 

| Bug,Help Wanted,Domain: Quick Info | low | Minor |
114,950,275 | rust | Tracking issue for #[bench] and benchmarking support | This is a tracking issue for the `#[bench]` attribute and its stability in the compiler. Currently it is not possible to use this from stable Rust as it requires `extern crate test` which is itself not stable.
Core APIs for benchmarking:
* `#[bench]`, which means the function should take a `&mut Bencher` argument and will be run as part of regular tests (and specifically benchmarked with --bench passed to the test binary).
```rust
crate test {
mod bench {
#[derive(Clone)]
struct Bencher { ... }
impl Bencher {
fn iter<T, F>(&mut self, inner: F)
where
F: FnMut() -> T;
}
}
}
``` | T-libs-api,B-unstable,C-tracking-issue,Libs-Tracked | medium | Critical |
114,952,544 | rust | Produce packaging guidelines | Summarize what we've learned into some general guidelines for packagers.
Potential topics:
- Maintaining independently-bootstrapped Rust compilers.
- Packaging Cargo libraries / applications
- Generating offline docs
re https://internals.rust-lang.org/t/perfecting-rust-packaging-the-plan/2767
| P-low,T-infra,C-feature-request | low | Major |
115,034,712 | opencv | Error running script for WinRT | Hello,
I followed this guide: https://github.com/Itseez/opencv/tree/master/platforms/winrt/readme.txt
and when I want to execute this command: setup_winrt.bat "WP,WS" "8.0,8.1" "x86,ARM" -b I always get these errors:
```
window.obj : error LNK2019: unresolved external symbol "void __cdecl cvS
etModeWindow_WinRT(char const *,double)" (?cvSetModeWindow_WinRT@@YAXPBD
N@Z) referenced in function _cvSetWindowProperty [C:\Users\matej\Desktop
\opencv-master\opencv-master\bin\WP\8.0\x86\modules\highgui\opencv_highg
ui.vcxproj]
window.obj : error LNK2019: unresolved external symbol "double __cdecl c
vGetModeWindow_WinRT(char const *)" (?cvGetModeWindow_WinRT@@YANPBD@Z) r
eferenced in function _cvGetWindowProperty [C:\Users\matej\Desktop\openc
v-master\opencv-master\bin\WP\8.0\x86\modules\highgui\opencv_highgui.vcx
proj]
C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\bin\De
bug\opencv_highgui300d.dll : fatal error LNK1120: 2 unresolved externals
[C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\modu
les\highgui\opencv_highgui.vcxproj]
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\OpenC
V.sln" (default target) (1) ->
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\ALL_B
UILD.vcxproj.metaproj" (default target) (2) ->
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\modul
es\core\opencv_core.vcxproj.metaproj" (default target) (8) ->
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\modul
es\core\opencv_core.vcxproj" (default target) (57) ->
(ClCompile target) ->
C:\Users\matej\Desktop\opencv-master\opencv-master\modules\core\src\oc
l.cpp(2219): warning C4505: 'cv::ocl::parseOpenCLDeviceConfiguration' :
unreferenced local function has been removed [C:\Users\matej\Desktop\ope
ncv-master\opencv-master\bin\WP\8.0\x86\modules\core\opencv_core.vcxproj
]
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\OpenC
V.sln" (default target) (1) ->
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\ALL_B
UILD.vcxproj.metaproj" (default target) (2) ->
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\modul
es\highgui\opencv_highgui.vcxproj.metaproj" (default target) (12) ->
"C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\modul
es\highgui\opencv_highgui.vcxproj" (default target) (66) ->
(Link target) ->
window.obj : error LNK2019: unresolved external symbol "void __cdecl c
vSetModeWindow_WinRT(char const *,double)" (?cvSetModeWindow_WinRT@@YAXP
BDN@Z) referenced in function _cvSetWindowProperty [C:\Users\matej\Deskt
op\opencv-master\opencv-master\bin\WP\8.0\x86\modules\highgui\opencv_hig
hgui.vcxproj]
window.obj : error LNK2019: unresolved external symbol "double __cdecl
cvGetModeWindow_WinRT(char const *)" (?cvGetModeWindow_WinRT@@YANPBD@Z)
referenced in function _cvGetWindowProperty [C:\Users\matej\Desktop\ope
ncv-master\opencv-master\bin\WP\8.0\x86\modules\highgui\opencv_highgui.v
cxproj]
C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\bin\
Debug\opencv_highgui300d.dll : fatal error LNK1120: 2 unresolved externa
ls [C:\Users\matej\Desktop\opencv-master\opencv-master\bin\WP\8.0\x86\mo
dules\highgui\opencv_highgui.vcxproj]
1 Warning(s)
3 Error(s)
Time Elapsed 00:00:36.04
INFO> Error: Failure executing command: msbuild OpenCV.sln /p:Configuration='Deb
ug' /m
C:\Users\matej\Desktop\opencv-master\opencv-master\platforms\winrt>
```
| bug,priority: low,affected: 3.4,platform: winrt/uwp | low | Critical |
115,042,760 | ant-design | Ant Design Users 👨🏻💻👩🏻💻👨🏻💻👩🏻💻 | > :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2:
>
> **You are welcome to answer this question on Zhihu: [How do you evaluate the Ant Design project?](https://www.zhihu.com/question/33629737); just share your honest development experience.**
>
> :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2: :star2:

---
If you or your company or organization uses Ant Design, thank you very much for your support. You are welcome to leave your company or product name; your reply will be a source of confidence for maintainers, community users, and undecided watchers.
> As long as no sensitive information is disclosed, posting a screenshot is encouraged~
> Unrelated replies will be removed periodically
Recommended reply format:
```
- Product:
- Company or Organization: (if any)
- Link: (if any)
- Screenshot: (if any)
```
---
We appreciate your support if you or your organization is using Ant Design. You are welcome to leave replies about your product and organization here; they will become a source of confidence for maintainers, the community, and undecided watchers.
Recommended reply format:
```
- Product:
- Company or Organization: (if any)
- Link: (if any)
- Screenshot: (if any)
``` | 🗣 Discussion,gitauto | high | Critical |
115,183,224 | rust | Tracking issue for `thread_local` stabilization | The `#[thread_local]` attribute is currently feature-gated. This issue tracks its stabilization.
Known problems:
- [ ] `#[thread_local]` translates directly to the `thread_local` attribute in LLVM. This isn't supported on all platforms, and it's not even supported on all distributions within the same platform (e.g. macOS 10.6 didn't support it but 10.7 does). I don't think this is necessarily a blocker, but I also don't think we have many attributes and such which are so platform specific like this.
- [x] Statics that are thread local shouldn't require `Sync` - https://github.com/rust-lang/rust/issues/18001
- [x] Statics that are thread local should either not borrow for the `'static` lifetime or should be unsafe to access - https://github.com/rust-lang/rust/issues/17954
- [x] Statics can currently reference other thread local statics, but this is a bug - https://github.com/rust-lang/rust/issues/18712
- [ ] Unsound with generators https://github.com/rust-lang/rust/issues/49682
- [ ] static mut can be given `'static` lifetime with NLL (https://github.com/rust-lang/rust/issues/54366) | A-FFI,T-lang,B-unstable,A-thread-locals,C-tracking-issue,F-thread_local,S-tracking-impl-incomplete | high | Critical |
115,184,152 | rust | Tracking issue for `log_syntax`, `trace_macros` | The `log_syntax` feature gate allows use of the `log_syntax` macro attribute, and similarly for `trace_macros`. Both are "nasty hacks that will certainly be removed" (according to the reference manual).
| T-lang,B-unstable,C-tracking-issue,S-tracking-design-concerns | medium | Major |
115,184,366 | rust | Tracking issue for `concat_idents` | Tracks stabilization for the `concat_idents` macro.
**Update(fmease)**: Please see https://github.com/rust-lang/rust/issues/124225#issue-2255069330 (an alternative to this feature).
| A-macros,T-lang,T-libs-api,B-unstable,C-tracking-issue,Libs-Tracked,S-tracking-design-concerns | high | Critical |
115,192,931 | rust | Tracking issue for `link_llvm_intrinsics` | Tracks ~~stabilization for~~ the `link_llvm_intrinsics` feature, used via `#[link_name="llvm.*"]`.
**Edit**: As this is obviously back-end specific, it will most likely never be stabilized. | T-lang,B-unstable,C-tracking-issue,S-tracking-perma-unstable | medium | Major |
115,193,095 | rust | Tracking issue for the `linkage` feature | Tracks stabilization for the `linkage` attribute.
| A-linkage,A-attributes,A-FFI,T-lang,B-unstable,C-tracking-issue,S-tracking-perma-unstable | medium | Critical |
115,312,169 | go | runtime: print all threads in GOTRACEBACK >= all | Currently, GOTRACEBACK=all is a misnomer. It prints stacks for all goroutines that happen to be non-running or running on the current OS thread, but it does not print stacks for goroutines that are running on other OS threads. This is frustrating. For purely internal reasons, it's currently necessary to set GOTRACEBACK=crash in order to get stacks for goroutines on other threads, but that also gets you runtime frames and an abort at the end, which is often undesirable.
We should make GOTRACEBACK=all (or higher) print stacks for all goroutines, regardless of what thread they're running on. This will make "all" do what it says in the name and will make the only difference between "system" and "crash" be whether or not it aborts at the end of the traceback.
In other words, this is the current behavior of the GOTRACEBACK settings:
| | none | single | all | system | crash |
| --- | :-: | :-: | :-: | :-: | :-: |
| show user frames | N | Y | Y | Y | Y |
| show runtime frames | N | N | N | Y | Y |
| show other goroutines | N | N | Y | Y | Y |
| show other threads | N | N | N | N | Y |
| abort | N | N | N | N | Y |
This is what it should be:
| | none | single | all | system | crash |
| --- | :-: | :-: | :-: | :-: | :-: |
| show user frames | N | Y | Y | Y | Y |
| show runtime frames | N | N | N | Y | Y |
| show other goroutines | N | N | Y | Y | Y |
| show other threads | N | N | *Y* | *Y* | Y |
| abort | N | N | N | N | Y |
With this, we would eliminate the distinction between "show other goroutines" and "show other threads", and each GOTRACEBACK level would enable exactly one additional feature.
We could do this using the same signal hand-off mechanism GOTRACEBACK=crash currently uses to interrupt the other threads. Historically we couldn't do this because this mechanism wasn't entirely robust, but it's been improved to the point where it should be reliable.
/cc @rsc @ianlancetaylor @randall77
| NeedsFix,compiler/runtime | low | Critical |
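The proposed table boils down to "each successive GOTRACEBACK level enables exactly one additional feature". A small illustrative model of that rule (not runtime code; the level and feature names are taken from the tables above):

``` go
package main

import "fmt"

// features maps a GOTRACEBACK level to the traceback features it enables
// under the *proposed* scheme, where "all" also shows goroutines running on
// other threads. Purely a model of the table, not how the runtime works.
func features(level string) map[string]bool {
	order := []string{"none", "single", "all", "system", "crash"}
	adds := map[string]string{
		"single": "user frames",
		"all":    "other goroutines and threads",
		"system": "runtime frames",
		"crash":  "abort",
	}
	f := map[string]bool{}
	for _, l := range order {
		if add, ok := adds[l]; ok {
			f[add] = true
		}
		if l == level {
			break
		}
	}
	return f
}

func main() {
	fmt.Println(features("all"))
}
```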
115,327,339 | rust | Improve MIR match generation so that we more effectively rule out inapplicable match-pairs | Once we perform a test -- such as checking a value or a variant -- we then weed out the matches that are invalidated by this test. The current code is not as smart as it could be: for example, if we test and find that a value equals 'c', we ought to be able to rule out that the character also equals 'd'. But, because the test tells us nothing if we find that the character is NOT equal to 'c', we take the simple path right now and say that an equality test against 'c' and an equality test against 'd' are completely orthogonal to one another. Similar limitations exist for range tests, slice length tests, etc. We should do better!
| C-enhancement,T-compiler,A-MIR | low | Minor |
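The missed optimization above can be illustrated with a small sketch (in Go, purely for illustration -- rustc's real MIR lowering looks nothing like this): once an equality test against one constant succeeds, an equality test against any *other* constant on the same value is unsatisfiable and can be pruned rather than treated as orthogonal.

``` go
package main

import "fmt"

// candidate is a hypothetical match-pair: "does the value equal k?".
type candidate struct{ k rune }

// pruneAfterEq models the improvement the issue asks for: after an equality
// test against `matched` succeeds, every remaining equality test against a
// different constant on the same value is provably false and is dropped.
// Names and structure here are illustrative, not rustc's actual code.
func pruneAfterEq(cands []candidate, matched rune) []candidate {
	var keep []candidate
	for _, c := range cands {
		if c.k == matched {
			keep = append(keep, c) // still satisfiable
		}
		// c.k != matched: the value is known to be `matched`, so this
		// candidate can never succeed and is pruned.
	}
	return keep
}

func main() {
	cands := []candidate{{'c'}, {'d'}, {'e'}}
	fmt.Println(len(pruneAfterEq(cands, 'c'))) // the 'd' and 'e' tests are ruled out
}
```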
115,335,368 | rust | Tracking issue for Fn traits (`unboxed_closures` & `fn_traits` feature) | Tracks stabilization for the `Fn*` traits.
Random bugs:
- [x] https://github.com/rust-lang/rust/issues/45510 – type-based dispatch not working
- [ ] https://github.com/rust-lang/rust/issues/42736 – `foo()` sugar doesn't work where you have `&Foo: FnOnce`
| T-lang,T-libs-api,B-unstable,B-RFC-implemented,C-tracking-issue,F-unboxed_closures,S-tracking-design-concerns | high | Critical |
115,362,366 | rust | Tracking issue for `fundamental` feature | This feature flag, part of [RFC 1023](https://github.com/rust-lang/rfcs/pull/1023), is not intended to be stabilized as-is. But this issue tracks discussion about whether _some_ external feature it needed. Perhaps there is a cleaner way to address the balance between negative reasoning and API evolution. See the RFC for details.
| T-lang,B-unstable,C-tracking-issue,S-tracking-needs-summary,F-fundamental | medium | Critical |
115,375,038 | rust | Tracking issue for internal feature `no_core` | The `no_core` feature allows you to avoid linking to `libcore`.
| T-lang,B-unstable,C-tracking-issue,S-tracking-needs-summary | medium | Critical |
115,375,403 | rust | Tracking issue for `box_patterns` feature | Box patterns were gated by [RFC 469](https://github.com/rust-lang/rfcs/pull/469).
| T-lang,B-unstable,C-tracking-issue,A-patterns,S-tracking-perma-unstable | high | Critical |
115,526,806 | rust | Add the GDB pretty-printers to the Windows Rust installation | The pretty printers can be made to work in Windows, as described here: http://stackoverflow.com/questions/33570021/how-to-set-up-gdb-for-debugging-rust-programs-in-windows/33570022#33570022
At a minimum the GDB pretty-printers should be added to the Windows GNU ABI Rust installation, so that they don't have to be downloaded separately.
Ideally, GDB auto-loading of the pretty-printers should work as well. I think for that the GDB auto-load info should be added to the debug information of the generated code on Windows.
@michaelwoerister Regarding this comment: https://github.com/rust-lang/rust/issues/16365#issuecomment-67150133 , what issues did you have trying this out in Windows?
| A-debuginfo,O-windows-gnu,T-bootstrap,C-feature-request | medium | Critical |
115,533,639 | go | doc: Effective Go should use sync.WaitGroup, not dummy channel, for Parallelization example | This is a documentation issue.
The section in Effective Go on [parallelization](https://golang.org/doc/effective_go.html#parallel) shows how to use a channel to count goroutine completions. This seems like bad advice for Go learners, since [sync.WaitGroup](https://golang.org/pkg/sync/#WaitGroup) is a more concise alternative that is expressly designed for this idiom.
| Documentation,NeedsInvestigation | low | Major |
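For reference, the `sync.WaitGroup` version of the parallelization example is both shorter and clearer than counting completions on a dummy channel; a minimal sketch (the per-item work is a stand-in):

``` go
package main

import (
	"fmt"
	"sync"
)

// process runs one goroutine per item and blocks until all have finished,
// using sync.WaitGroup instead of a dummy completion channel.
func process(items []int) {
	var wg sync.WaitGroup
	for i := range items {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			items[i] *= 2 // stand-in for real per-item work
		}(i)
	}
	wg.Wait() // all goroutines have completed at this point
}

func main() {
	v := []int{1, 2, 3}
	process(v)
	fmt.Println(v)
}
```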
115,559,940 | rust | Tracking issue for RFC 2532, "Associated type defaults" | This is a tracking issue for the RFC "Associated type defaults" (rust-lang/rfcs#2532) under the feature gate `#![feature(associated_type_defaults)]`.
--------------------------
The [associated item RFC](https://github.com/rust-lang/rfcs/pull/195) included the ability to provide defaults for associated types, with some tricky rules about how that would influence defaulted methods.
The early implementation of this feature was gated, because there is a widespread feeling that we want a different semantics from the RFC -- namely, that default methods should not be able to assume anything about associated types. This is especially true given the [specialization RFC](https://github.com/rust-lang/rfcs/pull/1210), which provides a much cleaner way of tailoring default implementations.
The new RFC, rust-lang/rfcs#2532, specifies that this should be the new semantics but has not been implemented yet. The existing behavior under `#![feature(associated_type_defaults)]` is buggy and does not conform to the new RFC. Consult it for a discussion on changes that will be made.
--------------------------
### Steps:
- [ ] Implement rust-lang/rfcs#2532 (cc @rust-lang/wg-traits @rust-lang/compiler)
- [X] #61812
- [ ] handle https://github.com/rust-lang/trait-system-refactor-initiative/issues/46
- [ ] Implement the changes to object types specified in the RFC
- [ ] Improve type mismatch errors when an associated type default can not be projected (https://github.com/rust-lang/rust/pull/61812#discussion_r293952743)
- [ ] Adjust documentation ([see instructions on forge][doc-guide])
- [ ] Stabilization PR ([see instructions on forge][stabilization-guide])
[stabilization-guide]: https://forge.rust-lang.org/stabilization-guide.html
[doc-guide]: https://forge.rust-lang.org/stabilization-guide.html#updating-documentation
### Unresolved questions:
- [ ] [1. When do suitability of defaults need to be proven?](https://github.com/rust-lang/rfcs/blob/master/text/2532-associated-type-defaults.md#1-when-do-suitability-of-defaults-need-to-be-proven)
- [ ] [2. Where are cycles checked?](https://github.com/rust-lang/rfcs/blob/master/text/2532-associated-type-defaults.md#2-where-are-cycles-checked)
### Test checklist
Originally created [as a comment](https://github.com/rust-lang/rust/pull/61812#issuecomment-527959874) on #61812
- Trait objects and defaults
- Independent defaults (`type Foo = u8;`)
- [ ] where not specified (`dyn Trait`)
- show that it's an error to coerce from `dyn Trait<Foo = u16>` to that
- show that we assume it is `u8` by invoking some method etc
- [ ] where specified (`dyn Trait<Foo = u16>`)
- show that it's an error to coerce from `dyn Trait<Foo = u16>` to that
- show that we assume it is `u8` by invoking some method etc
- Mixed with type without a default (`type Foo = u8; type Bar;`)
- [ ] where neither is specified (`dyn Trait`) -- error
- [ ] where `Foo` is specified (`dyn Trait<Foo = u16>`) -- error
- [ ] where `Bar` is specified (`dyn Trait<Bar = u32>`) -- ok, check `Foo` defaults to `u8`
- [ ] where both are specified (`dyn Trait<Foo = u16, Bar = u32>`) -- ok
- Dependent defaults (`type Foo = u8; type Bar = Vec<Self::Foo>`)
- [ ] where neither is specified (`dyn Trait`) -- error
- [ ] where `Foo` is specified (`dyn Trait<Foo = u16>`) -- unclear, maybe an error?
- [ ] where `Bar` is specified (`dyn Trait<Bar = u32>`) -- unclear, maybe an error?
- [ ] where both are specified (`dyn Trait<Foo = u16, Bar = u32>`) -- ok
- Cyclic defaults (`type Foo = Self::Bar; type Bar = Self::Foo`)
- [ ] where neither is specified (`dyn Trait`)
- [ ] where `Foo` is specified (`dyn Trait<Foo = u16>`)
- [ ] where `Bar` is specified (`dyn Trait<Bar = u32>`)
- [ ] where both are specified (`dyn Trait<Foo = u16, Bar = u32>`)
- Non-trivial recursive defaults (`type Foo = Vec<Self::Bar>; type Bar = Box<Self::Foo>;`)
- [ ] where neither is specified (`dyn Trait`)
- [ ] where `Foo` is specified (`dyn Trait<Foo = u16>`)
- [ ] where `Bar` is specified (`dyn Trait<Bar = u32>`)
- [ ] where both are specified (`dyn Trait<Foo = u16, Bar = u32>`)
- Specialization
- [x] Default values unknown in traits
- [x] trait definition cannot rely on `type Foo = u8;` (`defaults-in-other-trait-items.rs`)
- [x] impl for trait that manually specifies *can* rely
- [x] also, can rely on it from outside the impl
- [x] impl for trait that does not specify *can* rely
- [x] impl with `default type Foo = u8`, cannot rely on that internally (`defaults-specialization.rs`)
- [x] default impl with `type Foo = u8`, cannot rely on that internally (`defaults-specialization.rs`)
- [x] impl that specializes but manually specifies *can* rely
- [ ] impl that specializes but does not specify *can* rely
- right? want also a test that this impl cannot be *further* specialized -- is "default" inherited, in other words?
- Correct defaults in impls (type)
- Independent defaults (`type Foo = u8;`)
- [x] overriding one default does not require overriding the others (`associated-types/associated-types-overridden-default.rs`)
- (does not test that the projections are as expected)
- [x] where not specified (`impl Trait { }`) (`associated-types/issue-54182-2.rs`)
- [x] where specified (`impl Trait { type Foo = u16; }`) (`issue-54182-1.rs`)
- Mixed with type without a default (`type Foo = u8; type Bar;`)
- [x] where neither is specified (`impl Trait { }`) -- error
- [x] where `Foo` is specified (`impl Trait { type Foo = u16; }`) -- error
- [x] where `Bar` is specified (`impl Trait { type Bar = u32; }`) -- ok
- [x] where both are specified (`impl Trait { type Foo = u16; type Bar = u32; }`) -- ok
- Dependent defaults (`type Foo = u8; type Bar = Vec<Self::Foo>`) -- `defaults-in-other-trait-items-pass.rs`, `defaults-in-other-trait-items-fail.rs`
- [x] where neither is specified (`impl Trait { }`)
- [x] where `Foo` is specified (`impl Trait { type Foo = u16; }`)
- [x] where `Bar` is specified (`impl Trait { type Bar = u32; }`)
- [x] where both are specified (`impl Trait { type Foo = u16; type Bar = u32; }`)
- Cyclic defaults (`type Foo = Self::Bar; type Bar = Self::Foo`) -- `defaults-cyclic-fail.rs`, `defaults-cyclic-pass.rs`
- [x] where neither is specified (`impl Trait { }`)
- considered to be an error only if a projection takes place (is this what we want?)
- [x] where `Foo` is specified (`impl Trait { type Foo = u16; }`)
- [x] where `Bar` is specified (`impl Trait { type Bar = u32; }`)
- [x] where both are specified (`impl Trait { type Foo = u16; type Bar = u32; }`)
- Non-trivial recursive defaults (`type Foo = Vec<Self::Bar>; type Bar = Box<Self::Foo>;`)
- [x] where neither is specified (`impl Trait { }`)
- [x] where `Foo` is specified (`impl Trait { type Foo = u16; }`)
- [x] where `Bar` is specified (`impl Trait { type Bar = u32; }`)
- [x] where both are specified (`impl Trait { type Foo = u16; type Bar = u32; }`)
- Correct defaults in impls (const)
- Independent defaults
- [ ] where not specified
- [ ] where specified
- Mixed with type without a default
- [ ] where neither is specified
- [ ] where `Foo` is specified
- [ ] where `Bar` is specified
- [ ] where both are specified
- Dependent defaults
- [ ] where neither is specified
- [ ] where `Foo` is specified
- [ ] where `Bar` is specified
- [ ] where both are specified
- Cyclic defaults -- `defaults-cyclic-fail.rs`, `defaults-cyclic-pass.rs`
- [x] where neither is specified (`impl Trait { }`)
- considered to be an error only if a projection takes place (is this what we want?)
- [x] where `Foo` is specified
- [x] where `Bar` is specified
- [x] where both are specified
- Non-trivial recursive defaults
- [ ] where neither is specified
- [ ] where `Foo` is specified
- [ ] where `Bar` is specified
- [ ] where both are specified
- Overflow errors in const evaluation
- check that errors in evaluation do not occur based on default values, but only the final values
- Dependent defaults (defaults-not-assumed-fail, defaults-not-assumed-pass)
- [x] where neither is specified
- [x] where `Foo` is specified
- [x] where `Bar` is specified
- [x] where both are specified
- WF checking (`defaults-suitability.rs`)
- requires that defaults meet WF check requirements (is this what we want?)
- [x] `type` in trait body, bound appears on the item
- [x] `type` in trait body, type not wf
- [x] `type` in trait body, bound appears as trait where clause
- [ ] `default type` in impl, bound appears on the item
- [ ] `type` in impl, bound appears on the item
- [ ] `type` in default impl, bound appears on the item
- [x] `type` in trait body, conditionally wf depending on another default
- currently gives an error (is this what we want?)
- [x] `type` in trait body, depends on another default whose bounds suffice
| B-RFC-approved,A-associated-items,T-lang,B-unstable,C-tracking-issue,F-associated_type_defaults,S-tracking-needs-summary | high | Critical |
115,572,304 | TypeScript | namespace / module: Duplicate declaration & needlessly remove line-breaks | I wonder why TSC re-declares a global var after each namespace or module block.
``` ts
namespace OurName {
export var isAndroid = true;
export var isWinPhone = false;
// do something
var checkCookies = function() {
return true;
}
}
namespace OurName {
export var HTML = '<b>hello</b>';
export var state = [1, 2, 3];
export var addToCart = function() {
return true;
}
}
namespace OurName {
var xyz = false;
// do something
export module Office {
var isClosed = false;
}
}
```
The above code is translated to:
``` js
var OurName;
(function (OurName) {
OurName.isAndroid = true;
OurName.isWinPhone = false;
// do something
var checkCookies = function () {
return true;
};
})(OurName || (OurName = {}));
var OurName;
(function (OurName) {
OurName.HTML = '<b>hello</b>';
OurName.state = [1, 2, 3];
OurName.addToCart = function () {
return true;
};
})(OurName || (OurName = {}));
var OurName;
(function (OurName) {
var xyz = false;
// do something
var Office;
(function (Office) {
var isClosed = false;
})(Office = OurName.Office || (OurName.Office = {}));
})(OurName || (OurName = {}));
```
It also removes the line breaks that help keep the code easier to read. I am porting our production ES5 code to TypeScript, so I have to carefully re-check the output of TSC. This behaviour makes my work even harder.
| Suggestion,Help Wanted | low | Major |
115,630,919 | youtube-dl | Feature Request - Add playlist number to metadata | I've been using youtube-dl to download audio playlists. Instead of having to go through and manually add all the track numbers, it would be nice if youtube-dl could automatically add them to the file metadata.
(Not sure if this is the correct place to make a feature request, I'm not sure where else to put it.)
| request,postprocessors | low | Major |
115,640,387 | youtube-dl | Document ytsearch | it isn't clear from the documentation how to search youtube using wildcards with regex.
for example, as i've tried various regex expressions but youtube-dl complains about providing a URL
"ERROR: Unable to download webpage: HTTP Error 404: Not Found (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output."
what i want to do is search youtube for say videos with keyword _photograph_ for example and dump out the audio codec and the youtube video number for each one it finds
Here is part of my bash script:
``` sh
opt="--ignore-config --ignore-errors --no-warnings "
opt+="--skip-download --match-title *photograph* "
opt+="--get-format --get-id --get-format "
youtube-dl ${opt} http://www.youtube.com
```
Normal YouTube searches look like:
https://www.youtube.com/results?search_query=def+leoppard+photograph&page=1
| documentation | low | Critical |
115,703,110 | youtube-dl | Site support request: memritv.org | Could you add support for downloading videos from the video site of MEMRI?
Here's a few links to random videos from the site:
http://www.memritv.org/clip/en/5148.htm
http://www.memritv.org/clip/en/5142.htm
http://www.memritv.org/clip/en/5141.htm
Note that every or almost every video has also a transcript of the speech linked to it, it'd be nice, if you could just integrate it into the .description file or something like that.
| site-support-request | low | Minor |
115,703,516 | neovim | API for v:windowid or equivalent | The value of `v:windowid` is not defined. In a terminal emulator, I can use the value of `$WINDOWID` instead, and @equalsraf has just implemented `$WINDOWID` on Neovim-Qt too (see: https://github.com/equalsraf/neovim-qt/issues/64). Perhaps the variable `v:windowid` is superfluous and can be deleted.
This issue is somewhat related with https://github.com/neovim/neovim/issues/2681
| needs:decision,api,gui,ui,needs:design | medium | Major |
115,715,664 | youtube-dl | add learn.infiniteskills.com | please can you add this elearning website
learn.infiniteskills.com
it will very helpful for everyone
thanks
| site-support-request,account-needed | low | Major |
115,739,931 | go | encoding/xml: unexpected behavior of encoder.Indent("", "") | Marshaling xml with prefix and indent set to empty strings results in unindented xml. Tested in Go 1.5.1 darwin/amd64.
The following code:
``` go
package main
import (
"encoding/xml"
"fmt"
)
type Person struct {
XMLName xml.Name `xml:"person"`
Id int `xml:"id,attr"`
FirstName string `xml:"name>first"`
LastName string `xml:"name>last"`
Age int `xml:"age"`
}
func main() {
v := &Person{Id: 13, FirstName: "John", LastName: "Doe", Age: 42}
output, err := xml.MarshalIndent(v, "", "")
if err != nil {
fmt.Printf("error: %v\n", err)
}
fmt.Println(string(output))
}
```
gives:
``` xml
<person id="13"><name><first>John</first><last>Doe</last></name><age>42</age></person>
```
but expected was:
``` xml
<person id="13">
<name>
<first>John</first>
<last>Doe</last>
</name>
<age>42</age>
</person>
```
I know, not major, but believe it or not, I actually need the latter behavior. :-)
| NeedsFix | low | Critical |
115,884,888 | You-Dont-Know-JS | es6 & beyond: ch8, remove `Object.observe(..)` | `Object.observe(..)` has been withdrawn from TC39 proposal.
| for second edition | medium | Minor |
115,952,078 | go | cmd/link: ppc64 (big endian) cgo errors | I've built golang from master with the patches from issue 11184 to get external linking to work with ppc64le. That seems to work well on ppc64le with external linking and cgo.
In cmd/dist/build.go, linux/ppc64le is in the cgoEnabled map but linux/ppc64 is not. So the build of golang with the latest patches on ppc64 does not quite work with cgo.
If I build the golang toolchain on ppc64 with CGO_ENABLED=1, I first get this error:
cannot use dynamic imports with -d flag
I made a change to cmd/link/internal/ppc64/obj.go to get rid of this message, but then hit this error:
/home/boger/golang/gitsrc/latest/go/pkg/linux_ppc64/net.a(_all.o): unknown relocation type 51; compiled without -fpic?
......
runtime/cgo(.opd): unexpected relocation type 307
.......
and then too many errors
I can see that relocation type 51 is R_PPC64_TOC, and that is not handled by the code. I tried adding those defines, but I'm not sure what should be generated for this relocation type. If there are suggestions on what to do, I can try them out.
| compiler/runtime | high | Critical |
115,959,250 | flutter | We should support ideographic, hanging, and mathematical baselines | <a href="https://github.com/Hixie"><img src="https://avatars.githubusercontent.com/u/551196?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [Hixie](https://github.com/Hixie)**
_Thursday June 23, 2015 at 19:55_
_Originally opened as domokit/mojo#264 then copied to https://github.com/flutter/engine/issues/46_
---
The TrueType 'bsln' table exposes metrics for ideographic, hanging, and mathematical baselines. We should add support for those to FreeType and our other font backends, then add support to Skia to report these metrics, then add support to the Sky engine to be able to align to those baselines and report the distance from the top of a LayoutRoot to each of those baselines, then add support for those baselines to the Dart-level RenderBox baseline protocol.
| c: new feature,engine,a: typography,P3,team-engine,triaged-engine | low | Major |
115,959,756 | nvm | [Bug] nodejs on FreeBSD may need to be patched | This may be a hard work, I hope not.
FreeBSD version via `uname -a`:
> FreeBSD vm1 10.1-RELEASE-p19 FreeBSD 10.1-RELEASE-p19 #0: Sat Aug 22 03:55:09 UTC 2015 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64
nvm version:
```
$ git -C ~/.nvm status
HEAD detached at v0.29.0
nothing to commit, working directory clean
```
I found that I cannot run `$ nvm install 0.12` successfully on FreeBSD 10.1 amd64; the error messages come from the compiler, and both clang/LLVM and gcc hit the same problem. This is not nvm's problem, but nvm does appear to support FreeBSD in its code:
https://github.com/creationix/nvm/blob/master/nvm.sh#L936
``` sh
nvm_get_os() {
local NVM_UNAME
NVM_UNAME="$(uname -a)"
local NVM_OS
case "$NVM_UNAME" in
Linux\ *) NVM_OS=linux ;;
Darwin\ *) NVM_OS=darwin ;;
SunOS\ *) NVM_OS=sunos ;;
FreeBSD\ *) NVM_OS=freebsd ;;
esac
echo "$NVM_OS"
}
```
https://github.com/creationix/nvm/blob/master/nvm.sh#L1230-L1233
``` sh
if [ "_$NVM_OS" = "_freebsd" ]; then
make='gmake'
MAKE_CXX="CXX=c++"
fi
```
So I think we should do something to work around this.
The error message:
```
gmake -C out BUILDTYPE=Release V=1
gmake[1]: Entering directory '/net/gcs/104/0456096/.nvm/src/node-v0.12.7/out'
gmake[1]: Warning: File '../deps/v8/src/base/platform/platform-freebsd.cc' has modification time 34 s in the future
c++ '-DV8_TARGET_ARCH_X64' '-DENABLE_DISASSEMBLER' -I../deps/v8 -pthread -Wall -Wextra -Wno-unused-parameter -m64 -fno-strict-aliasing -I/usr/local/include -O3 -ffunction-sections -fdata-sections -fno-omit-frame-pointer -fdata-sections -ffunction-sections -O3 -fno-rtti -fno-exceptions -MMD -MF /net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/.deps//net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-freebsd.o.d.raw -c -o /net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-freebsd.o ../deps/v8/src/base/platform/platform-freebsd.cc
c++ '-DV8_TARGET_ARCH_X64' '-DENABLE_DISASSEMBLER' -I../deps/v8 -pthread -Wall -Wextra -Wno-unused-parameter -m64 -fno-strict-aliasing -I/usr/local/include -O3 -ffunction-sections -fdata-sections -fno-omit-frame-pointer -fdata-sections -ffunction-sections -O3 -fno-rtti -fno-exceptions -MMD -MF /net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/.deps//net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-posix.o.d.raw -c -o /net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-posix.o ../deps/v8/src/base/platform/platform-posix.cc
../deps/v8/src/base/platform/platform-posix.cc:330:10: error: static_cast from 'pthread_t' (aka 'pthread *') to 'int' is not allowed
return static_cast<int>(pthread_self());
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../deps/v8/src/base/platform/platform-freebsd.cc:159:11: error: member reference base type 'int' is not a structure or union
result.push_back(SharedLibraryAddress(start_of_path, start, end));
~~~~~~^~~~~~~~~~
1 error generated.
deps/v8/tools/gyp/v8_libbase.target.mk:100: recipe for target '/net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-posix.o' failed
gmake[1]: *** [/net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-posix.o] Error 1
1 errorgmake[1]: *** Waiting for unfinished jobs....
generated.
deps/v8/tools/gyp/v8_libbase.target.mk:100: recipe for target '/net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-freebsd.o' failed
gmake[1]: *** [/net/gcs/104/0456096/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_libbase/deps/v8/src/base/platform/platform-freebsd.o] Error 1
gmake[1]: Leaving directory '/net/gcs/104/0456096/.nvm/src/node-v0.12.7/out'
Makefile:45: recipe for target 'node' failed
gmake: *** [node] Error 2
```
I remember that I installed it from ports and pkg with success before, so I copied the patch from FreeBSD's ports system
(https://www.freshports.org/www/node/files/patch-deps_v8_src_base_platform_platform-posix.cc)
> /usr/ports/www/node012/files/patch-deps_v8_src_base_platform_platform-freebsd.cc
> /usr/ports/www/node012/files/patch-deps_v8_src_base_platform_platform-posix.cc
Apply the patches:
```
$ patch < patch-deps_v8_src_base_platform_platform-freebsd.cc
Hmm... Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- deps/v8/src/base/platform/platform-freebsd.cc.orig 2015-03-31 22:13:01 UTC
|+++ deps/v8/src/base/platform/platform-freebsd.cc
--------------------------
Patching file deps/v8/src/base/platform/platform-freebsd.cc using Plan A...
Hunk #1 succeeded at 131.
Hunk #2 succeeded at 182.
Hunk #3 succeeded at 260.
Hunk #4 succeeded at 288.
done
$ patch < patch-deps_v8_src_base_platform_platform-posix.cc
Hmm... Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- deps/v8/src/base/platform/platform-posix.cc.orig 2015-03-31 22:13:01 UTC
|+++ deps/v8/src/base/platform/platform-posix.cc
--------------------------
Patching file deps/v8/src/base/platform/platform-posix.cc using Plan A...
Hunk #1 succeeded at 327 (offset -1 lines).
done
```
After the patches applied, there is no problem anymore!
| OS: FreeBSD / OpenBSD,installing node,feature requests | low | Critical |
115,970,200 | go | x/net: add Deadline, ReadBuffer, ReadDeadline, WriteBuffer, WriteDeadline getters | Currently there are the following setter functions in the net package for the IPConn, TCPConn, UDPConn, and UnixConn objects:
```
SetDeadline(t time.Time) error
SetReadBuffer(bytes int) error
SetReadDeadline(t time.Time) error
SetWriteBuffer(bytes int) error
SetWriteDeadline(t time.Time) error
```
Under the covers, these all call the unix C `setsockopt()` function
Since `setsockopt` has a corresponding `getsockopt` function, I'm proposing that we add the corresponding Getter functions to the `net` package, looking something like this:
```
GetDeadline() (time.Time, error)
GetReadBuffer() (int, error)
GetReadDeadline() (time.Time, error)
GetWriteBuffer() (int, error)
GetWriteDeadline() (time.Time, error)
```
This shouldn't be too difficult to implement, since the generated `zsyscall_*.go` files already have the `getsockopt` function.
This could be useful for debugging and logging information, and for checking if the corresponding `Set` function needs to be called.
| NeedsInvestigation,FeatureRequest | low | Critical |
116,009,731 | TypeScript | Add "tonicExample" or "tonicExampleFilename" field to package.json | To inform [this,](https://tonicdev.com/npm/typescript) which is linked to from the [npm package page](https://www.npmjs.com/package/typescript) in the right sidebar under ["try it out"](http://blog.tonicdev.com/2015/10/28/npm-plus-tonic.html). I imagine we'd rather the entire experience just be within [the Playground](http://www.typescriptlang.org/Playground), but it's not under our control, so we should probably improve the example as best we can.
| Suggestion,Help Wanted | low | Minor |
116,167,242 | neovim | syntax: end delimiter of a region highlighted incorrectly | This is a minor graphical bug with an easy workaround, so there’s really no hurry to fix it, but I thought I’d report it anyway.
## Steps to reproduce
- `nvim -u NORC +"syntax region Error matchgroup=Error transparent start='b' end='d'"`
- Type `iabcde`
Note: the problem does not happen if the region and matchgroup have different highlighting groups. For example, changing “region Error” to “region Whatever” works around the problem.
## Expected result
The characters `b` and `d` are highlighted as `Error`, and thus appear red.
## Actual result
Only `b` is red. `d` has the default color.
| bug-vim | low | Critical |
116,253,244 | go | x/build: remove redundant [...] on build.golang.org for branches | When you view, for example, the dev.ssa branch on build.golang.org, all commits\* are prefixed with [dev.ssa]. Together with the fact that there's not a lot of space for the commit title, you end up with the build dashboard looking like:
[dev.ssa] cmd/compile: ...
[dev.ssa] cmd/compile: ...
[dev.ssa] cmd/compile: ...
[dev.ssa] cmd/compile: ...
[dev.ssa] cmd/compile: ...
which is pretty uninformative. It's similarly redundant for all the other branches. Could we elide the [branch] prefix? It's useful for email but for the dashboard we've already filtered on this field.
*except merges from master
@bradfitz @adg
| Builders | low | Minor |
116,354,040 | javascript | React eslint not catching the use of underscore with internal methods | In the [React/JSX Styleguide 'Methods' section](https://github.com/airbnb/javascript/blob/master/react/README.md#methods) it says you shouldn't use `_` to prefix internal methods. However, some of my code contains this convention and it isn't being caught by the linter. I double-checked the eslint rules for react/jsx files and there doesn't seem to be a rule defined for this convention.
Is this intentional? If so, why specify the rule in your documentation? I found a related issue (#490), but it didn't clearly state whether this rule should definitely be adhered to; rather, it suggested it's more or less up to the programmer.
Any clarification would be greatly appreciated.
| pull request wanted | low | Minor |
116,390,688 | go | x/mobile/app: lifecycle events not firing correctly | Hi! I'm having some issues related to lifecycle events. The problem is that I want to know when an app wants to quit, but I don't get the correct events.
the gist is here : https://gist.github.com/BrianCraig/bb2bb411a202b35c1a88
### First, im gonna run it on my PC (x86 AMD - default linux mint 17 Quiana cinnamon distro)
**start the process**
StageDead none
StageAlive on
StageVisible on
StageFocused on
_this is a correct event, which says that the stage is created (alive), visible, and focused_
paint internal
_this paint is internal, but it should be external; my code does not send a single paint event_
**minimizes window**
_nothing happens_
**close button window**
_nothing happens_
**log out session**
_nothing happens_
### Then, I'm going to run it on my phone (Android 4.4.2 - Moto E first gen)
**start the process**
StageDead none
StageAlive on
StageVisible on
StageFocused on
_this is a correct event, same for pc_
paint external
_correct context_
**minimizes app**
StageDead none
StageAlive none
StageVisible off
StageFocused off
_correct event_
**opens view**
StageDead none
StageAlive none
StageVisible on
StageFocused on
_correct event_
**minimizes app again**
StageDead none
StageAlive none
StageVisible off
StageFocused off
_correct event_
**closes the app**
_nothing_
So, is this working as intended?
| mobile | low | Minor |
116,415,735 | go | x/sys: provide termios.h-like functionality | Go needs a way to flush the stdin before prompt and Scanln.
See <a href="https://github.com/odeke-em/drive/issues/157">https://github.com/odeke-em/drive/issues/157</a> for psoible fix for Linux.
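For comparison, POSIX already exposes this as `tcflush()` from `<termios.h>`, and Python's standard library wraps it directly; a minimal sketch of the behavior being requested (the helper name is mine):

```python
import sys
import termios


def flush_stdin():
    """Discard any pending, unread input on stdin (POSIX terminals only)."""
    try:
        termios.tcflush(sys.stdin.fileno(), termios.TCIFLUSH)
    except (termios.error, OSError, ValueError):
        # stdin is not a real terminal (a pipe, file, or captured
        # stream), so there is no input queue to flush.
        pass
```

Calling this immediately before a prompt discards keystrokes typed while an earlier long-running operation held the terminal, which is the behavior the linked issue asks for; an x/sys API could expose the same `TCFLSH`/`tcflush` primitive.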
| NeedsInvestigation | low | Major |
116,669,271 | kubernetes | Implement dedicated nodes using taints and tolerations | This is really a meta-feature; it can be built from other features that we already have or plan to have.
The requirements are
- nodes are partitioned into groups
- administrator can specify a policy that forces pods meeting particular criteria to only run on machines in one of these groups
- optionally, the policy can say that pods that do _not_ meet the criteria can _not_ run on those machines
One possible implementation that meets these requirements is
- each node optionally has a label with key 'dedicated' and some value denoting a group name
- an admission controller has a table mapping namespace name to label value, and adds the corresponding <"dedicated", value> node selector to a pod if its namespace is in the table (ideally this is done in a way that the end-user cannot later modify, but I don't think we have that ability yet)
- not sure yet about how to implement the third bullet from the requirements; #14573 has some discussion.
(I guess this could be done with annotations instead of labels.)
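For reference, the taint/toleration mechanism named in the title can express this policy directly; a hypothetical sketch (node and group names are illustrative):

```yaml
# Taint each node in the dedicated group, e.g.:
#   kubectl taint nodes node-1 dedicated=groupA:NoSchedule
# The NoSchedule taint keeps non-matching pods off these nodes (the
# third requirement); the node selector keeps matching pods on them.
tolerations:
- key: dedicated
  operator: Equal
  value: groupA
  effect: NoSchedule
nodeSelector:
  dedicated: groupA
```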
The user who requested this feature also requested the following: kube-proxy on a node in the dedicated machine group belonging to namespace A should not know about any of the services outside of namespace A (except system services of course). Of course this only makes sense if the policy in the admission controller assigns pod to dedicated machine group based on the pod's namespace.
This is closely related to the discussion in #14573, but here I'm trying to capture the exact feature that was requested from a user in-person recently.
There is of course a "preferred" variant of this that acts as a preference rather than a hard constraint.
| priority/backlog,sig/scheduling,kind/feature,area/admin,lifecycle/frozen | medium | Critical |
116,685,837 | kubernetes | Apiserver should expose a non-authenticated endpoint to check its health | While setting up k8s in a HA configuration in AWS and configuring an internal elastic load balancer to front the apiservers I noticed that the apiserver does not appear to expose a non-authenticated endpoint that ELB could use to perform an HTTPS health check (it expects a 200 response). That limits the health check to a generic SSL or TCP check.
| priority/backlog,sig/api-machinery,lifecycle/frozen,triage/accepted | medium | Major |
116,763,515 | nvm | Install script should also update ~/.bash_profile | I was installing nvm on Mac OSX 10.11.1 and ran the install script in the README, which added the following line to my `~/.bashrc`:
```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
```
But when restarting my shell, I got: `-bash: nvm: command not found`
I resolved this by adding the line above to my `~/.bash_profile` instead. Maybe the install script should do this too, in case other people hit this issue (more info about `~/.bashrc` vs `~/.bash_profile`: http://stackoverflow.com/a/415444)
(Note: you could also add `[[ -r ~/.bashrc ]] && . ~/.bashrc` to `~/.bash_profile` to make it load the `~/.bashrc` file.)
| installing nvm: profile detection,pull request wanted | low | Minor |
116,841,266 | go | cmd/go: `go install -buildmode=c-shared std` does nothing | Should it? The similar command `go install -buildmode=shared std` works.
cc @ianlancetaylor
| NeedsInvestigation,GoCommand | low | Minor |
116,860,275 | kubernetes | Move all mirror pod logic to a single module | Right now we have mirror pods being passed to the pod workers. Removing the mirror pod references scattered in kubelet.go and centralize them in one place brings benefits such as
- Easy disabling/enabling of such a feature, if necessary
- More readable code
- Pod workers no longer need to interact with apiserver
- Allow enhancements such as waiting a certain period of time before trying to recreate (to account for watch delay)
| priority/backlog,kind/cleanup,sig/node,lifecycle/frozen | low | Major |
116,884,806 | go | cmd/go, cmd/cgo: repeatable builds on Solaris | Binaries built with "go build" that use cgo or include packages that use cgo contain references to a temporary directory. Multiple builds for the same binary will produce inconsistent results.
Simple reproduction:
``` go
// foo.go
package main
// #include <math.h>
// #cgo LDFLAGS: -lm
import "C"
import "fmt"
func main() {
fmt.Println(C.sqrt(4))
}
```
```
$ go build foo.go && md5sum foo
d4cc4febe540953e8115417476adc4a4 foo
$ go build foo.go && md5sum foo
28f5f670a48e6f72a2f31405d9fbf2cc foo
$ strings foo | grep go-build
/tmp/go-build847878379/command-line-arguments/_obj/_cgo_export.c
/tmp/go-build847878379/command-line-arguments/_obj/foo.cgo2.c
/tmp/go-build988195549/runtime/cgo/_obj/_cgo_export.c
/tmp/go-build988195549/runtime/cgo/_obj/cgo.cgo2.c
```
Some build systems require reproducible results: The same inputs should produce precisely the same outputs. The above behavior violates that requirement.
The problem appears to be that the gcc command invoked by "go build" includes the absolute path of the source file in $WORKDIR, which gcc then bakes into the resulting object file.
One fix might be to execute gcc from within $WORKDIR. There is, however, a comment in cmd/go/build.go indicating that the current behavior is intentional: "We always pass absolute paths of source files so that the error messages will include the full path to a file in need of attention."
Another possibility might be to use -fdebug-prefix-map to elide $WORKDIR from the debugging information written by gcc. I don't know if this can be generalized to other compilers.
| OS-Solaris,NeedsFix,compiler/runtime | medium | Critical |
117,014,359 | youtube-dl | Site request: npodoc.nl | npodoc.nl is in some way part of npo.nl, so I expected it to be supported already. However, when I enter for instance http://www.npodoc.nl/live/vipdoc/laura-poitras.html it does not grab all present videos on that site. Videos on npodoc.nl are not available on npo.nl.
Would you like to add support for npodoc.nl?
| site-support-request | low | Minor |
117,148,560 | TypeScript | Language service: open Type.id | I am using language service to "traverse" the program, and I would like very much to "remember" which types and symbols I already saw, and to cache my computation results for them. Performance boost _and_ avoiding infinite loops.
The compiler very helpfully supplies `id` for `Type`, `Symbol` and `Node`, and it seems to be "really, really" unique for [symbols](https://github.com/Microsoft/TypeScript/blob/master/src/compiler/checker.ts#L16) and [nodes](https://github.com/Microsoft/TypeScript/blob/master/src/compiler/checker.ts#L10), and "relatively unique" (i.e. within a given typechecker) for [types](https://github.com/Microsoft/TypeScript/blob/master/src/compiler/checker.ts#L40).
The problem is, however, that, unlike nodes and symbols, types hide their `id`s behind the ["internal" qualifier](https://github.com/Microsoft/TypeScript/blob/master/src/compiler/types.ts#L1850) and the lack of accessor function, making them not legally visible to me as a consumer of the language service. Of course, this being JavaScript, nothing prevents me from accessing them anyway, and that is what I currently do, but this is not ideal, because I am not protected from the code evolution in the future.
So it seems to me that nothing stands in the way of adding a `getTypeId` method to the typechecker, unless I'm missing something very important.
| Suggestion,Help Wanted,API | low | Major |
117,172,135 | rust | Tracking issue for allowing overlapping implementations for marker trait | Tracking issue for rust-lang/rfcs#1268.
## Status
- [x] Initial implementation: #41309
- [ ] Documentation
- [ ] Move to stabilize
- [ ] Stabilization PR
## Known bugs
- [ ] [cross-crate support lacking?](https://github.com/rust-lang/rust/issues/29864#issuecomment-295907958)
- [x] #102360
## History
- initial implementation #41309
- https://github.com/rust-lang/rust/pull/53693
## Prior to stabilization
1. Is it ok that adding items to a previously empty trait is a breaking change? Should we make declaring something a marker trait more explicit somehow? --> resolved by adding explicit `#[marker]` annotation, see https://github.com/rust-lang/rust/pull/53693
## Other notes
In #96766 we decided NOT to disable the orphan check for marker traits as part of this work (just the overlap check), which was a proposed extension.
| A-trait-system,T-lang,B-unstable,B-RFC-implemented,C-tracking-issue,F-marker_trait_attr,S-tracking-needs-to-bake,T-types,S-tracking-blocked | high | Critical |
117,178,138 | TypeScript | Language Service: TypeParameter.constraint is lazily calculated, but does not have an accessor function | Because of this, I have to go through this awkward motion every time:
``` typescript
function getConstraint( type: ts.TypeParameter ) {
// This call will cause the typechecker to resolve properties, as well as a bunch of other information
// about the type (such as generic constraints), but we don't actually need its result right now.
type.getProperties();
return type.constraint;
}
```
For some members (such as "properties" - above), there is an accessor function, which I can use to be sure the data is up to date. Another category of members don't need an accessor function, because they are calculated eagerly, and so are "always there".
But there is this third category of members, which fall through the gap. One example is `TypeParameter.constraint`. Another is `InterfaceTypeWithDeclaredMembers.declaredProperties`. There are probably more, but I haven't come across them yet.
| Bug,Help Wanted,API | low | Minor |
117,187,788 | go | net: validate DNSSEC in Go's DNS resolver | DNSSEC is being deployed.
We should support it eventually.
| NeedsInvestigation | medium | Critical |
117,216,989 | kubernetes | v2 API proposal "desired vs actual" | This has come up a bunch in a lot of small ways, and my mind keeps coming back to this idea, so I want to write it down.
In the very early kube API we had `desired` and `actual` (or something like that) structs. Since they were the same struct, there were a bunch of fields in it that we didn't want users to set (which eventually became status), and other problems. As we changed to `spec`+`status`, we lost the concept of "actual" and just folded it into `spec`.
This conflates a couple of ideas and makes it hard to distinguish what the user actually asked for from what was assigned to them. I am proposing we reinstate that distinction.
I propose that every top-level object (all of them, not just the ones that make sense RIGHT NOW) holds 4 fields (plus ObjectMeta):
- `metadata` - same meaning as today
- `spec` - what the user asked for, with API-defined defaults applied AND NOTHING ELSE. Every field is available to users.
- `actual` - the exact same structure as `spec`, but with other fields populated by the system (e.g. nodeName when scheduled, clusterIP when auto-assigned, and so on)
- `status` - same meaning as today
If a user wants to save a pod and move it to another cluster, `spec` is what they want. If a piece of software wants to know how a pod is operating, `actual` and `status` are what they want.
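Concretely, a pod under this proposal might look like the following hypothetical sketch (field values are illustrative, reusing the `nodeName` example above):

```yaml
metadata:
  name: my-pod
spec:               # what the user asked for, with API defaults applied
  nodeName: ""      # user left scheduling to the system
actual:             # same structure as spec, populated by the system
  nodeName: node-7  # filled in by the scheduler
status:             # same meaning as today
  phase: Running
```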
I chose this layout (which changes the meaning of `spec`) because I figured it makes it easier on users to change over, while making it harder on system components (which must change over to `actual`). This may be wrong, but it's not obvious. An alternative would be that `spec` means "actual" and `request` means "what the user asked for"; users write `request` and the system writes `spec`. This has the advantage that software observing state is easier to change over, but means that users have more pain. But it is perhaps better to break users, who actually SEE error messages. Sometimes bigger changes are better because the need to change something is obvious (see the bugs people hit with some really subtle v1beta1 -> v1 changes).
It would be nice to make one field writable ONLY by users (create/update) and one field writable only by the system. We've used binding for scheduling, but there are other things (like clusterIP) that are written directly in-process.
We could make this round-trippable by embedding `request` under `status` in v1 or even as a new top-level object + sub-resources.
Lastly, there's some question about preserving exactly what the user asked for WITHOUT defaults applied, but I am less sure that is needed or particularly useful.
@bgrant0607 @smarterclayton @lavalamp
| priority/important-soon,area/api,area/app-lifecycle,sig/api-machinery,kind/feature,sig/cli,sig/architecture,lifecycle/frozen,wg/api-expression | medium | Critical |
117,304,361 | tensorflow | Make TensorFlow compatible with PyPy | I know it's not a priority and will be a long way to get there; but making TF compatible with PyPy woud be super cool.
Thoughts?
| stat:contribution welcome,type:feature | high | Critical |
117,428,157 | go | net: more complete DNS stub resolver tests for uncommon scenarios | In working on #12778 I found there is not an easy way to test more uncommon DNS scenarios where tight control of responses and errors is needed.
For the pure Go stub resolver consider restructuring things to make this easier, such as with the ability to swap out real TCP/UDP connections for a stub so tight control is possible while still exercising the bulk of code.
@mdempsky mentioned considering possibly related restructuring for #13281.
| NeedsInvestigation | low | Critical |