id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---
91,665,384 | go | x/net/websocket: TestClose fails on Plan 9 | See http://build.golang.org/log/86c5f54b2e864b4a89f8756c4c069739fb314cc9
```
2015/06/28 17:48:38 Test WebSocket server listening on 127.0.0.1:51846
--- FAIL: TestClose (0.00s)
	websocket_test.go:447: ws.Close(): expected error, got <nil>
FAIL
```
| Testing,OS-Plan9 | low | Critical |
91,676,649 | You-Dont-Know-JS | "async & performance": ch6, update TCO section to mention strict mode | Strict mode is required for TCO.
| for second edition | medium | Major |
91,933,966 | go | x/tools/cmd/present: enable support for right-to-left languages | The `present` tool is currently unusable with right-to-left scripts. Text is left-aligned, and punctuation and English words appear at the wrong end of sentences (see screenshot). This can be fixed by setting the right HTML attributes.
Current output for a slide with Arabic text:

Expected output:

Adding RTL support would enable slides to be written in Arabic (incl. Farsi & Urdu) and Hebrew.
I would implement this by adding a function (e.g. ".rtl") that, when invoked, would set RTL mode for the entire presentation. The effects of this would be:
- add the "dir=rtl" attribute to the `<body>` tag.
- add "dir=ltr" to all `<pre>` tags so code remains correctly formatted.
- invert the slide switch animation so that the "next" slide is on the left and the "past" slide is on the right.
Adding a special function to set a rarely used mode isn't ideal. I'm open to suggestions of alternatives. Automatically setting it based on the language of the first word might work but isn't "least-surprise".
| Tools | low | Minor |
91,934,103 | TypeScript | Indentation is too aggressive for ternary operator | Actual:
``` ts
var v =
    0 ? 1 :
        2 ? 3 :
            4;
```
Expected:
``` ts
var v =
    0 ? 1 :
    2 ? 3 :
    4;
```
I believe the fix is that the false branch of the ternary operator should have the same indentation as the true branch. In the case where the true branch is on the same line as the condition, it should not be considered indented, and therefore the false branch should not be indented either.
| Bug,Help Wanted,Good First Issue,Domain: Formatter,Domain: Smart Indentation | medium | Major |
92,351,315 | youtube-dl | Return code is 0 when max-filesize/min-filesize is not met | I'm integrating youtube_dl in a python script with the following configuration:
```
ydl_opts = {
    "outtmpl": u"%s/%%(id)s.%%(ext)s",
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '192',
    }],
    "prefer_ffmpeg": True,
    "format": "bestaudio",
    "max_filesize": 1000000
}
```
and the download code:
```
def _download(self, video_id):
    try:
        with youtube_dl.YoutubeDL(ydl_opts) as ydl:
            return 0 == ydl.download(['http://www.youtube.com/watch?v=%s' % video_id])
    except:
        return False
However, if the download failed because the file size exceeded the limit, the call still returns 0 (as if there had been no problem), so there is no way of knowing that the download actually failed.
I propose that the downloaders throw an exception when they cannot meet a parameter:
in `youtube_dl/downloader/http.py`:
```
@@ -130,6 +130,7 @@ class HttpFD(FileDownloader):
             return False
         if max_data_len is not None and data_len > max_data_len:
             self.to_screen('\r[download] File is larger than max-filesize (%s bytes > %s bytes). Aborting.' % (data_len, max_data_len))
+            raise ExampleException("File is greater than filesize")
```
or update YoutubeDL class to process `False` as a result of a call to download and set a corresponding retvalue to a value different than 0 to notify the main process that the download actually failed.
Thank you.
| request | low | Critical |
92,361,879 | go | encoding/xml: Serializing XML with namespace prefix | Hi!
I'm struggling with serializing XML (and deserializing again).
But first things first:
``` sh
go version devel +434e0bc Mon Jun 29 16:07:14 2015 +0000 darwin/amd64 # (tried 1.4 and master)
Darwin aero 14.4.0 Darwin Kernel Version 14.4.0: Thu May 28 11:35:04 PDT 2015; root:xnu-2782.30.5~1/RELEASE_X86_64 x86_64
```
I'm trying to serialize a struct to generate XML like this (see http://play.golang.org/p/fMvL86lzB0):
``` xml
<person xmlns="ns1" xmlns:ns2="ns2">
  <name>Oliver</name>
  <ns2:phone>110</ns2:phone>
</person>
```
When defining `Person` like this ...
``` go
type Person struct {
	XMLName xml.Name `xml:"ns1 person"`
	Name    string   `xml:"name"`
	Phone   string   `xml:"ns2 phone,omitempty"`
}
```
... it serializes into the following (which is semantically correct, I guess, but not the same as above):
``` xml
<person xmlns="ns1">
  <name>Oliver</name>
  <phone xmlns="ns2">110</phone>
</person>
```
I can fake it like this:
``` go
type Person struct {
	XMLName xml.Name `xml:"ns1 person"`
	NS2     string   `xml:"xmlns:ns2,attr"`
	Name    string   `xml:"name"`
	Phone   string   `xml:"ns2:phone,omitempty"`
}
```
... by initializing NS2 before serializing (see http://play.golang.org/p/2dEljm97c8). Unfortunately then I'm not able to deserialize correctly as the `Phone` field will not be blank (see http://play.golang.org/p/RxG2ImcWbm).
Maybe it's just a documentation issue and I'm missing an example. There are some other issues regarding XML and namespaces/namespace prefixes (e.g. #6800, #9519, #7113) some of which are closed and some of which are open, so I'm a bit confused about the status.
| help wanted,NeedsInvestigation | medium | Critical |
92,429,922 | go | cmd/gofmt: inconsistent spacing on slice indices | **What version of Go are you using (go version)?** `go version go1.4.2 linux/amd64`
**What operating system and processor architecture are you using?** Ubuntu 14.04
**What did you do?** Run `gofmt` on below code examples
**What did you expect to see?** Consistent spacing of slice indices
These are two lines within the same function. They vary slightly, but the interesting part is the `[1:len(key)-1]` slice expression, which has inconsistent spacing. Adding spaces to the first one will result in them being removed when `gofmt` is run, and removing the spaces from the second will only have them added back.
If this is not a bug, please let me know what difference here is causing the spacing to change.
``` Go
return "?" + key[1:len(key)-1]
```
``` Go
key = key[1 : len(key)-1]
```
| NeedsInvestigation | low | Critical |
92,576,580 | rust | Can't infer the type of index | This doesn't type-check
```rs
fn main() {
    let m = [[0.; 2]; 2];
    println!("{}", (|i, j| m[i][j])(0, 1));
}
```
failing with
```
error[E0282]: type annotations needed
 --> inf.rs:3:28
  |
3 |     println!("{}", (|i, j| m[i][j])(0, 1));
  |                            ^^^^ cannot infer type
```
Annotating as `usize` fixes it, but I would expect the compiler to be able to infer the type. Is there a reason why it can't?
| A-closures,A-inference,C-bug,T-types | low | Critical |
92,683,589 | go | cmd/trace: view large trace all at once | Here's an example.
x_test.go:
``` go
package p

import (
	"sync"
	"testing"
)

func BenchmarkBlocking(b *testing.B) {
	var mu sync.Mutex
	b.RunParallel(func(pb *testing.PB) {
		var x int
		for pb.Next() {
			for i := 0; i < 10000; i++ {
				x *= 2
				x /= 2
			}
			mu.Lock()
			for i := 0; i < 1000; i++ {
				x *= 2
				x /= 2
			}
			mu.Unlock()
		}
	})
}
```
Do this:
``` bash
$ go test -bench=. -trace=trace.out
BenchmarkBlocking-8 500000 3401 ns/op
$ go tool trace p.test trace.out
```
Then click on View trace.
Result: Browser tab crashes.
@egonelbre did an initial analysis on a related trace that came from real code (thanks, Egon!):
> Preliminary analysis seems to indicate that the trace-viewer is blowing some sort of internal memory limit in Chrome. i.e. the trace json file is 133MB and chrome dies after allocating 800MB of RAM.
> Found somewhat related issue https://github.com/google/trace-viewer/issues/298
>
> I did some experimenting and was able to load 1.07 million events, but not 1.08M. It seems that trace-viewer can load around ~1M events. The whole dataset is ~1.5M events.
> From DOM/JavaScript perspective it is quite impressive that it can handle that much and the tool stays quite responsive.
> I think it might be better to do this at the go trace tool level, enforce the limit that you cannot look over 1M events. And then from the selection part you can select the range you wish to see.
>
> Note that there is also a possibility of removing some particular counters from the trace.json they make up over half of the events.
> Alternatively we can reduce counters using some similarity: https://play.golang.org/p/GnJIyzmsA5
Given how easy it is to create giant (and thus unusable) traces, I think we should handle them gracefully, since the trace-viewer can't.
One option (perhaps not great): 'go tool trace' could allow you to specify a time window to display, and refuse to send windows that are too large to the trace viewer.
/cc @dvyukov @egonelbre
| FeatureRequest,compiler/runtime | medium | Critical |
92,708,268 | TypeScript | Down-level destructuring in `for..in` statement | We report an error for the following code:
``` ts
for (var {toString} in { a: 1 }) { // error TS2491: The left-hand side of a 'for...in' statement cannot be a destructuring pattern.
    console.log(toString);
}
```
We also emit the following incorrect javascript:
``` ts
for (var toString = (void 0).toString in { a: 1 }) {
    console.log(toString);
}
```
We should instead emit:
``` ts
for (var _a in { a: 1 }) {
    var toString = _a.toString;
    console.log(toString);
}
```
While it is unlikely this will be very useful, this is another of the [ES6 compatibility tests](http://kangax.github.io/compat-table/es6/#destructuring_in_for-in_loop_heads) on the kangax compatibility table that we could consider addressing as it seems a small enough change.
| Bug,ES6 | low | Critical |
92,745,214 | kubernetes | Remove non-mergeable fields from .kubeconfig (such as preferences) | As per #9298, there are two fields (Preferences and CurrentContext) in kubeconfig which are destructive when kubeconfigs are merged and prevent clean and transparent merging of kubeconfig files. We should pull out these fields to support the use case of more merging of kubeconfig files (#9298).
.kubeconfig currently holds two kinds of data:
- Maps of data that are _fully mergeable_ (save for conflicts across files, which can be considered an error):
  - Clusters - a map of referencable names to cluster configs
  - AuthInfos - a map of referencable names to user configs
  - Contexts - a map of referencable names to context configs
- Unmergeable data that would overwrite each other if merged:
  - Preferences - holds general information to be used for CLI interactions
  - CurrentContext - the name of the context that you would like to use by default
By pulling out Preferences and CurrentContext, we can arbitrarily merge in .kubeconfigs in any order without worrying about overwriting or confusion/anger (like that described in #4428).
The proposed design:
- For all non-kubectl programs (kubelet, controller manager, etc.), CurrentContext is either inferred when there is only one context, or something passed in as a command line parameter. It is treated just like any other config param for the program.
- For kubectl, CurrentContext can be merged into the Preferences struct (since it's really just another kubectl preference), and the Preferences struct can be moved to a new file, ~/.kubectl. It can also be set at the command line or as an env var, given how important a Preference it is.
- For backwards compatibility, if the CurrentContext field is specified in kubeconfig it can continue to be respected (I don't think Preferences is even used anywhere yet), but all our examples/docs/auto-generated kubeconfig files would be updated to not contain it.
I believe this should be in 1.0, if we can resource it.
| priority/important-soon,area/kubectl,sig/cli,lifecycle/frozen | medium | Critical |
92,828,127 | javascript | jQuery: $(this) vs $(event.currentTarget) | When working with jQuery in ES6 you have to pay particular attention to `$(this)` in event handlers. Using `$(this)` in ES5 is quite popular to access the DOM element of the event handler. In ES6 the code breaks if you switch from function to arrow function syntax:
``` js
// works:
$selector.on('click', function() {
    $(this).hide();
});
// doesn't work:
$selector.on('click', () => $(this).hide());
```
Instead you have to access the DOM element via `event.currentTarget`:
``` js
// works:
$selector.on('click', ev => $(ev.currentTarget).hide());
```
IMO the main problem is that you can accidentally break code just by switching from `function` syntax to arrow functions. I'm aware that this problem is not exclusive to jQuery, but since `$(this)` is so widely used in jQuery code, IMO it wouldn't be the worst idea to add this to the list of bad styles.
What do you think?
| enhancement,pull request wanted,editorial | medium | Major |
92,852,849 | three.js | "Fit all" (show all) feature | Dear ThreeJS developers!
Please implement a "fit all" ("show all") feature for the camera and/or OrbitControl and other controls.
This feature is sorely missed.
To provide some help, here is an implementation of the same feature in X3dom:
https://github.com/x3dom/x3dom/blob/master/src/Viewarea.js#L1077
Also, there is a "fit object" (e.g. "show object") feature:
https://github.com/x3dom/x3dom/blob/master/src/Runtime.js#L631
Many thanks in advance.
Please do not refer me to other feature requests like https://github.com/mrdoob/three.js/issues/1095
If you decline to do this, please at least point me to where it would be best to implement it, e.g. in the camera or in OrbitControl/etc., so that they (camera and control) stay synchronized?
| Enhancement | medium | Critical |
92,867,964 | You-Dont-Know-JS | "async & performance": ch4, Delegating Recursion | I've been trying to wrap my brain around the recursive `yield`-delegation example in [ch4](https://github.com/getify/You-Dont-Know-JS/blob/master/async%20%26%20performance/ch4.md#delegating-recursion), but I can't quite seem to fully grok it, so I'm hoping you could clarify a couple of things:
In step 8, it says
> When the promise resolves, its fulfillment message is sent [...] to the normal `yield` that's waiting in the `*foo(3)` generator instance.
But this is how I understand what's going on:
(I set up a working example over at jsbin: https://stebru.jsbin.com/noheco/2/edit?js,console, and according to the console output it seems to make sense the way I have (mis?-)understood this.)
When the first promise resolves, the response value (let's say it's the number `1`) is sent (via `it.next(value)` in the `run()` method) to `*bar()`, which delegates to `*foo(3)`, which delegates to `*foo(2)`, which delegates to `*foo(1)`, where the waiting `return yield request(...);` is "replaced" with `return 1;`.
So the value is `return`ed (from `foo(1)`) to the waiting `val = yield *foo(1);` (in `foo(2)`), and so the yield expression and var assignment is replaced with `val = 1;`.
If my understanding is correct (which it probably isn't), step 8 should read:
> When the promise resolves, its fulfillment message is sent [...] to the normal `yield` that's waiting in the **`*foo(1)`** generator instance.
And in step 9 where it states:
> That first call's Ajax response is now immediately `return`ed from the `*foo(3)` generator instance [...]
it should rather read:
> That first call's Ajax response is now immediately `return`ed from the **`*foo(1)`** generator instance [...]
And lastly, in step 10 it states:
> [...] the second Ajax response propagates all the way back into the `*foo(2)` generator instance, and is assigned to its local `val` variable.
But as I understand it, the response is not assigned to the local `val` in `*foo(2)`; rather, it replaces the waiting `return yield request(...);` (i.e. `return 2`), and is `return`ed from `*foo(2)` into the waiting `val = yield *foo(2);` in `*foo(3)` (which is replaced with `val = 2`).
I'm sure I'm just missing something here, but as my brain has now stopped working from thinking about all of this for too long, I hope you can help me out! :)
| for second edition | low | Major |
92,939,128 | rust | Rust .msi installer ignores current install | One of the convenient features of the defunct .exe installer is that it would remove old library archives before installing new ones, so that one could update Rust just by installing over the current install. The .msi installer does not do this; it just naively installs the new libraries next to the old ones, and Rust can't figure out which to use:
```
rustbook\src\main.rs:1:1: 1:1 error: multiple matching crates for `std`
rustbook\src\main.rs:1 // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
^
note: candidates:
note: path: \\?\C:\Rust\bin\rustlib\x86_64-pc-windows-gnu\lib\std-d855c359.dll
note: path: \\?\C:\Rust\bin\rustlib\x86_64-pc-windows-gnu\lib\libstd-d855c359.rlib
note: crate name: std
note: path: \\?\C:\Rust\bin\rustlib\x86_64-pc-windows-gnu\lib\std-74fa456f.dll
note: path: \\?\C:\Rust\bin\rustlib\x86_64-pc-windows-gnu\lib\libstd-74fa456f.rlib
note: crate name: std
rustbook\src\main.rs:1:1: 1:1 error: found staticlib `std` instead of rlib or dylib
rustbook\src\main.rs:1 // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
^
rustbook\src\main.rs:1:1: 1:1 help: please recompile this crate using --crate-type lib
rustbook\src\main.rs:1:1: 1:1 note: crate `std` path #1: C:\Rust\bin\rustlib\x86_64-pc-windows-gnu\lib\libstdc++.a
error: aborting due to 2 previous errors
Could not compile `rustbook`.
To learn more, run the command again with --verbose.
```
I can't simply update my Rust install anymore. I have to remove the old libraries either manually or by running the Rust uninstaller. This is a _major_ convenience discrepancy compared to using Rust on the Unix-based platforms.
| O-windows,P-low,T-infra,T-dev-tools,C-bug | low | Critical |
92,968,499 | nvm | Support `node v4+`/`io.js` release candidates | This issue is to track adding support for `io.js` release candidates. The same process will work for `node` post-merge.
| installing node,feature requests,io.js | medium | Major |
93,041,709 | go | test: add per-test timeout to run.go | On the SSA branch, after fixing some other issues, the compiler hangs while compiling ken/label.go. This causes run.go to run forever. Let's add a per-test time limit, even if it is high, so we don't ever spin forever.
| NeedsFix,FeatureRequest | low | Minor |
93,127,999 | rust | Confusing error messages when invariant type is involved | #### Reproducer
```rs
use std::{cell::RefCell, fmt::Debug};

#[derive(Default)]
struct S<'a> {
    c: RefCell<Option<Box<dyn Debug + 'a>>>,
}

impl<'a> S<'a> {
    fn r(&'a self) -> &'a () {
        panic!()
    }
}

fn main() {
    let s = S::default();
    let _ = s.r();
}
```
<details><summary>Original now malfunctioning reproducer from 2015</summary>
Playpen: http://is.gd/f1SPdn
``` rust
#[derive(Default)]
struct S<'a> {
    c: RefCell<Option<Box<Sized + 'a>>>, // Doesn't work
    //c: RefCell<Option<Box<&'a Sized>>>, // Works
    //c: Option<Box<Sized + 'a>>, // Also works
}

impl<'a> S<'a> {
    fn r(&'a self) -> &'a () {
        panic!();
    }
}

fn main() {
    let s = S::default();
    let r = s.r(); //~ ERROR `s` does not live long enough [E0597]
}
```
</details>
#### Compiler Output
```
error[E0597]: `s` does not live long enough
  --> src/main.rs:16:13
   |
15 |     let s = S::default();
   |         - binding `s` declared here
16 |     let _ = s.r();
   |             ^ borrowed value does not live long enough
17 | }
   | -
   | |
   | `s` dropped here while still borrowed
   | borrow might be used here, when `s` is dropped and runs the destructor for type `S<'_>`
```
<details><summary>Original compiler output from 2015</summary>
```
<anon>:19:13: 19:14 error: `s` does not live long enough
<anon>:19 let r = s.r();
^
<anon>:17:11: 20:2 note: reference must be valid for the block at 17:10...
<anon>:17 fn main() {
<anon>:18 let s = S::default();
<anon>:19 let r = s.r();
<anon>:20 }
<anon>:18:26: 20:2 note: ...but borrowed value is only valid for the block suffix following statement 0 at 18:25
<anon>:18 let s = S::default();
<anon>:19 let r = s.r();
<anon>:20 }
```
</details>
This is confusing, since the compiler seems to demand that `s` be valid on the line before it is declared (the only difference between the regions is that the first one includes the `fn main() {`). It is never mentioned that this is caused by `S` being invariant over `'a`, which prevents rustc from enlarging the lifetime to include the whole block.
I think rustc should give a better error in this case (maybe something like `` note: cannot extend lifetime of `s` since its type `S` is invariant with respect to `'a` ``).
| C-enhancement,A-diagnostics,A-lifetimes,T-compiler | low | Critical |
93,346,252 | rust | Weird inference failure with unary minus | ``` rust
fn f<T>() -> T { panic!() }

fn main() {
    let a = f();
    let b = -a;
    let c: &i32 = &a;
}
```
```
<anon>:4:14: 4:15 error: the type of this value must be known in this context
<anon>:4     let b = -a;
                      ^
error: aborting due to previous error
```
As far as I can tell, there's enough information here for inference to succeed.
| A-type-system,T-compiler,C-bug,T-types | low | Critical |
93,350,645 | javascript | ECMAScript 6 and Block Scope | Add block scope to the let & const bullet under the ECMAScript 6 Styles section, or give it its own overview and link.
| pull request wanted,editorial | low | Major |
93,407,278 | rust | Inference failure with int inference variable | ``` rust
fn f<T>() -> T { panic!(); }

fn main() {
    let mut a = f();
    let b : u8 = a+a;
    a = 1;
}
```
```
<anon>:4:18: 4:21 error: type mismatch resolving `<i32 as core::ops::Add>::Output == u8`:
    expected i32,
    found u8 [E0271]
<anon>:4     let b : u8 = a+a;
                          ^~~
```
Given that the type of `b` is `u8`, the type of `a` is a builtin integer type, and the result of `a+a` is assigned to `b`, the type of `a` must be `u8`. Currently, this rule only applies "forwards": it only works if `a` is known to be a builtin integer type when the addition is being checked. However, type inference could be extended to apply it backwards as well.
I'm not sure if this is relevant in practical usage of Rust.
| A-type-system,T-compiler,A-inference,C-bug,T-types | low | Critical |
93,413,923 | go | x/build: support searching across build logs | For issues like #11617, it would be handy to be able to search across all build logs server-side rather than needing to download them locally.
| Builders | low | Minor |
93,442,923 | go | x/mobile/event: lifecycle APIs are overly complicated | The new lifecycle events, and having to deal with crosses to determine basic states, are both overly complicated for an average user. The majority of the time, users don't want to have a deep understanding of the lifecycle itself; they want to perform actions such as freeing resources or stopping playback at certain states. Currently, Go mobile has a novel (invented here™) event model that requires users to have a good understanding of its novel terminology and of application lifecycle details. The new event model is a big barrier for new and existing users.
I'd, at least, prefer `LifecycleStageInvisible` and `LifecycleStageDestroyed` to be pushed to the user, it's very unlikely the users will be comfortable with determining these fundamental states themselves.
| mobile | low | Minor |
93,927,916 | go | runtime: display fully-qualified types when strings are equal in unequal type panic | > What version of Go are you using (go version)?
go version devel +bd45bce Wed Jul 8 01:20:02 2015 +0000 linux/amd64
> What operating system and processor architecture are you using?
Ubuntu 15.04, amd64
> What did you do?
Created a test case to see what would happen when vendoring the same package twice in two different repositories.
go15vendortest has a copy of vendoredthing under the vendor directory.
It also pulls in go15vendortest2, which has its own copy of vendoredthing under the vendor directory.
```
export GO15VENDOREXPERIMENT=1
go get github.com/frumious/go15vendortest
go15vendortest
```
> What did you expect to see?
A type exception with a way to tell which type came from which package.
> What did you see instead?
panic: interface conversion: interface is vendoredthing.Data, not vendoredthing.Data
I know why this is happening because I created it on purpose, but there seems to be no way to display where these types came from.
| NeedsInvestigation,compiler/runtime | low | Minor |
94,019,726 | TypeScript | Cannot 'export default' abstract, ambient class or interface | ```
export default abstract class B {
}
```
By the way, the same with
```
export default declare class B {
}
```
Seems to be a parser bug
| Suggestion,Help Wanted | medium | Critical |
94,092,766 | angular | Add validators to Forms verifying basic data types | Data types: email, url, number, date, time, week, month.
| feature,effort2: days,area: forms,feature: under consideration | medium | Critical |
94,133,185 | youtube-dl | Recording flv into mp4, but not recoding mkv | I want to be able to recode flv files into mp4.
So far, I have used this script:
```
#!/bin/bash
# Usage: youtube-480 url
youtube-dl -f "bestvideo[height<=480][ext=mp4]+bestaudio/[height <=? 480]" --write-sub --recode-video mp4 "$@"
```
But I discovered recently that if it is fetching a .mkv file (I think it was from vimeo), then it spends forever in ffmpeg trying to convert it.
I'd like to say "Convert a .flv file to .mp4".
I have no problem working with a .mkv instead.
How would I do that?
| request | low | Minor |
94,161,444 | rust | `#[derive]` sometimes uses incorrect bounds (aka lack of "perfect derive") | In the following code:
```rs
#[derive(Copy, Clone)]
struct Y<T>(&'static fn(T));
```
both derives expand to impls that require the corresponding trait to be implemented on the type parameter, e.g.:
```rs
#[automatically_derived]
impl<T: ::std::marker::Copy> ::std::marker::Copy for Y<T>
where
    T: ::std::marker::Copy
{}
```
However, this isn't actually necessary, as `Y<T>` will still be eligible for `Copy` regardless of whether `T` is.
This may be hard to fix, given the compilation phase at which `#[derive]` works...
| A-trait-system,P-low,T-lang,C-bug,A-proc-macros | high | Critical |
94,162,042 | go | regexp: port RE2's DFA matcher to the regexp package | The regexp package currently chooses between the standard NFA matcher, onepass, or the backtracker. This proposal is for porting over RE2's DFA matcher to be used as an option by exec.
| Performance,Proposal,Proposal-Accepted,NeedsFix | high | Critical |
94,172,878 | go | runtime: refactor into separate subpackages | Now that the runtime is Go code, we should be able to break it apart into multiple packages.
Among other things we should consider putting locking, the gc, and special symbols in separate packages.
| NeedsFix,compiler/runtime | medium | Major |
94,326,427 | go | x/tools/cmd: add tool to convert between functions and methods | Discussed with @alandonovan yesterday, recording here.
It would be useful to have a refactoring tool that could convert between:
`func (t T) f(x int)`
and
`func f(t T, x int)`
and clean up all the callers. This could be done with `eg`, but for common refactorings, it might be nicer to have a tool that can just do it, without having to write a template (or write a template generator), so that it can be invoked easily by IDEs. This probably just means new frontends to eg.
| FeatureRequest,Tools | low | Minor |
94,402,030 | rust | Abort on some large allocation requests, Panic on other | Compare these two:
```
let v = vec![0u8; !0];
let u = vec![0u16; !0];
```
We request a vector with 18446744073709551615 elements.
- For `u8` we receive out of memory (null) from the allocator, and call `alloc::oom`, which aborts the program: `application terminated abnormally with signal 4 (Illegal instruction)`
- For `u16`, we get an assertion: `thread '<main>' panicked at 'capacity overflow', ../src/libcore/option.rs:330`
This is inconsistent. Why don't we abort in both of these cases? We already abort on too-large allocation requests, so requests that are even larger could abort too.
| E-hard,P-low,A-allocators,T-libs-api,C-bug | medium | Critical |
94,415,215 | TypeScript | Visual Studio formatting is wrong in some cases | Here is an example
``` typescript
var webApps: ng.ui.IState[] = [{
    name: "home.webapp",
    templateUrl: "templates/empty-shell.html"
}, {
    name: "home.webapp.templates",
    templateUrl: "templates/templates.html"
}, {
    name: "home.webapp.work",
    templateUrl: "templates/work.html"
}];
```
VS formats it to
``` typescript
var webApps: ng.ui.IState[] = [{
    name: "home.webapp",
    templateUrl: "templates/empty-shell.html"
}, {
        name: "home.webapp.templates",
        templateUrl: "templates/templates.html"
    }, {
        name: "home.webapp.work",
        templateUrl: "templates/work.html"
    }];
```
another example is
``` typescript
function test($http: ng.IHttpService) {
    $http({
        url: "https://github.com",
        method: "GET"
    })
    .success((d) => {
        console.log(d);
    })
    .finally(() => {
        console.log("finally");
    });
}
```
VS formats it to
``` typescript
function test($http: ng.IHttpService) {
    $http({
        url: "https://github.com",
        method: "GET"
    })
        .success((d) => {
            console.log(d);
        })
        .finally(() => {
            console.log("finally");
        });
}
```
| Bug,Help Wanted,VS Code Tracked | low | Major |
94,415,469 | go | cmd/link: android c-shared libraries are big | A single-function Go library importing fmt is 2.6MB:
2627776 00-00-80 00:00 jni/armeabi-v7a/libgojni.so
That's a bit larger than the equivalent darwin/amd64 binary.
| OS-Android,compiler/runtime | low | Minor |
94,502,809 | youtube-dl | [filminlatino] Add extractor for filminlatino | Filminlatino is a streaming service by IMCINE (Mexican Film Institute) with the help of https://www.filmin.es/.
Here is an example link which is free.
https://www.filminlatino.mx/pelicula/llamenme-mike/110
| site-support-request | low | Minor |
94,513,049 | rust | Deref coercions do not work with blocks | It seems that the compiler handles a block differently when coercing a value.
``` rust
fn f(_: &str) {}

fn main() {
    let x = "Akemi Homura".to_owned();
    f(&x); // OK
    f(&(x)); // OK
    f(&{x}); // Error
}
```
[RFC 401](https://github.com/rust-lang/rfcs/blob/master/text/0401-coercions.md) says that a block with type `U` is also a target for coercion, so I think this behavior is a bug.
> blocks, if a block has type `U`, then the last expression in the block (if it
> is not semicolon-terminated) is a coercion site to `U`. This includes blocks
> which are part of control flow statements, such as `if`/`else`, if the block
> has a known type.
Also, the compiler seems to be able to coerce blocks using some "trivial" rules (e.g. `&mut T` -> `&T`).
``` rust
fn f(_: &i32) {}

fn main() {
    let x = &mut 42;
    f(x); // OK
    f((x)); // OK
    f({x}); // OK
}
```
So I guess this is more likely a problem of auto-deref.
| A-type-system,P-medium,T-compiler,T-types | low | Critical |
94,524,183 | go | cmd/pprof: pprof doesn't show useful info for closure | In pprof, a closure is displayed like this: `package.func.017`.
It could be useful to add the file name and line number.
`go tool pprof -lines` is too verbose.
| compiler/runtime | low | Minor |
94,633,918 | youtube-dl | [lynda] Add support for playlists | hi,
Can I know how to download a whole course from Lynda? I tried to create a playlist and download that playlist, but it failed.
Note: I can download individual videos... but how do I do that with a playlist/whole course?
C:\Python34\Scripts>youtube-dl http://www.lynda.com/MyPlaylists?playlistId=5180308 --username xxxx --password xxxxx
[generic] MyPlaylists?playlistId=5180308: Requesting header
[redirect] Following redirect to https://www.lynda.com/login/Login.aspx?redirectTo=MyPlaylists%3fplaylistId%3d5180308
[generic] Login: Requesting header
WARNING: Falling back on generic information extractor.
[generic] Login: Downloading webpage
[generic] Login: Extracting information
ERROR: Unsupported URL: https://www.lynda.com/login/Login.aspx?redirectTo=MyPlaylists%3fplaylistId%3d5180308
| site-support-request | low | Critical |
94,750,353 | angular | Forms: add support for parsers and formatters | Implementing custom formatters and parsers is a common way to extend control types in Angular 1.
I think there are two ways to do it in Angular 2:
- Implement parsers and formatters similar to Angular 1
- Make defining new value accessors easier, so the use case can be handled by defining a new accessor.
| feature,state: Needs Design,freq3: high,area: forms,feature: under consideration | high | Critical |
94,773,334 | TypeScript | T.constructor should be of type T | Given
``` typescript
class Example {
}
```
The current type of `Example.constructor` is `Function`, but I feel that it should be `typeof Example` instead. The use case for this is as follows:
I'd like to reference the current value of an overridden static property on the current class.
In TypeScript v1.5-beta, doing this requires:
``` typescript
class Example {
static someProperty = "Hello, world!";
constructor() {
// Output overloaded value of someProperty, if it is overloaded.
console.log(
(<typeof Example>this.constructor).someProperty
);
}
}
class SubExample extends Example {
static someProperty = "Overloaded! Hello world!";
someMethod() {
console.log(
(<typeof SubExample>this.constructor).someProperty
);
}
}
```
After this proposal, the above block could be shortened to:
``` typescript
class Example {
static someProperty = "Hello, world!";
constructor() {
// Output overloaded value of someProperty, if it is overloaded.
console.log(
this.constructor.someProperty
);
}
}
class SubExample extends Example {
static someProperty = "Overloaded! Hello world!";
someMethod() {
console.log(
this.constructor.someProperty
);
}
}
```
This removes a cast to the current class.
| Suggestion,In Discussion | high | Critical |
94,822,275 | TypeScript | Enum types are not checked in binary operators | I ran into this for real while trying to fix a bug in the compiler. We disallow assigning one enum to another if you use the `=` operator. But we do not disallow it for any of the compound assignment operators. Nor do we disallow for bitwise operators. I think we should disallow all of them.
``` ts
enum E { }
enum F { }
var e: E;
var f: F;
e = f; // Error
e |= f; // No error
var g = e | f; // No error, g is number
```
| Bug,Breaking Change,Help Wanted | low | Critical |
94,882,662 | go | x/sys/unix: Select returns (int, error) on linux, but just error elsewhere | Is this inconsistency intentional? Select is a POSIX function and has the same 'int' return value on all supported platforms where it's available, so it seems odd to only expose it on Linux.
(Motivation: I was trying to use github.com/jaracil/poll on OpenBSD, but ironically its Select-based fallback for non-Linux OSes is written assuming the Linux Select API.)
| compiler/runtime | low | Critical |
94,997,043 | youtube-dl | Bandcamp and Youtube-dl | Is there any way to run youtube-dl username.bandcamp.com and have it download all of the albums? Is that supported yet?
When I tried it gave me this error.
nicks-mac-mini:~ nick$ youtube-dl http://liluglymane.bandcamp.com --verbose flag
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'http://liluglymane.bandcamp.com', u'--verbose', u'flag']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2015.07.07
[debug] Python version 2.7.10 - Darwin-13.3.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 2.7.1, ffprobe 2.7.1
[debug] Proxy map: {}
[Bandcamp:album] liluglymane: Downloading webpage
ERROR: The page doesn't contain any tracks; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 654, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 273, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/bandcamp.py", line 169, in _real_extract
raise ExtractorError('The page doesn\'t contain any tracks')
ExtractorError: The page doesn't contain any tracks; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
nicks-mac-mini:~ nick$
| site-support-request | medium | Critical |
95,231,106 | go | cmd/vet: flag/traversal parity with `go tool vet` | There are two ways to invoke `vet`:
- `go vet`, which operates on packages but does not take options:
`usage: vet [-n] [-x] [packages]`
- `go tool vet`, which takes options but operates on files or directories:
```
Usage of vet:
vet [flags] directory...
vet [flags] files... # Must be a single package
```
It would be good to have parity between `go vet` and `go tool vet` one way or the other: either `go tool vet` should be able to operate on packages, or `go vet` should accept options. Perhaps `go tool vet` should be a superset of `go vet`. | Analysis | low | Minor |
95,346,743 | go | net: Expose return values of default implementations of net.Addr.Network() as constants rather than hardcoded strings | Currently the default implementations of the net.Addr.Network() functions in the net package return hard-coded strings representing the network type, as listed below. This means that callers wanting to check the return value of these functions must also hard-code those strings in their own code, which invites typos and the bugs they cause.
- `IPAddr.Network() : "ip"`
- `IPNet.Network() : "ip+net"`
- `TCPAddr.Network() : "tcp"`
- `UDPAddr.Network() : "udp"`
- `UnixAddr.Network() : "unix" or "unixgram" or "unixpacket"`
If we expose these default return strings as package constants and have functions use the constants instead of hard-coded strings, callers can compare function return values against constants and be protected against these types of bugs. This would also be fully backwards-compatible.
A block of constants will also serve as a useful single point of reference in the documentation for these functions as currently we need to jump around the documentation to understand what each implementation returns.
Happy to work out a CL for this
| NeedsInvestigation | low | Critical |
95,409,294 | go | encoding/xml: empty namespace conventions are badly documented | During the discussion in #11724 @rogpeppe wrote:
> The existing Go convention is that if no namespace is specified in the struct tag, it will match any namespace including the empty namespace.
But [the package docs](http://golang.org/pkg/encoding/xml/) don't seem to mention that. This behaviour should be documented better, because a lot of people will assume that `xml:"name,attr"` will match `name="foo"` but not `ns:name="bar"`.
| Documentation,NeedsFix | low | Minor |
95,467,210 | youtube-dl | --console-title doesn't work in tmux (or GNU screen) | I do most of my terminal work inside tmux, so I almost never get to take advantage of the delightful `--console-title` option. Would it be possible to detect when youtube-dl is running inside tmux (or GNU screen, I suppose) and add the appropriate DCS to the escape sequences?
- For tmux: `\ePtmux;\e\e]2;${window_title}\a\e\\`
- For screen: `\eP\e]2;${window_title}\a\e\\`
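A minimal sketch of the wrapping logic, using a hypothetical `titleSeq` helper; real code would detect tmux via `$TMUX` and screen via `$TERM` rather than taking an explicit mode argument:

``` go
package main

import (
	"fmt"
	"strings"
)

// titleSeq wraps the xterm title sequence in the DCS passthrough
// described above when running under tmux or GNU screen.
func titleSeq(title, mux string) string {
	seq := "\x1b]2;" + title + "\a" // plain xterm title sequence
	switch mux {
	case "tmux": // \ePtmux; ... \e\\ with inner ESCs doubled
		return "\x1bPtmux;" + strings.ReplaceAll(seq, "\x1b", "\x1b\x1b") + "\x1b\\"
	case "screen": // \eP ... \e\\
		return "\x1bP" + seq + "\x1b\\"
	}
	return seq
}

func main() {
	// Print quoted so the escape bytes are visible.
	fmt.Printf("%q\n", titleSeq("youtube-dl: 42%", "tmux"))
}
```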
| request | low | Minor |
95,567,070 | go | unicode/utf8: add "Rune"-less aliases | The names in the utf8 package are clumsy. We could add nicer aliases, such as Decode for DecodeRune.
Decode(p []byte) (r rune, size int)
DecodeString(s string) (r rune, size int)
DecodeLast(p []byte) (r rune, size int)
DecodeLastString(s string) (r rune, size int)
similarly for Encode.
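A sketch of the proposed aliases as thin wrappers over the existing functions (in the standard library they would of course live in package utf8 itself):

``` go
package main

import (
	"fmt"
	"unicode/utf8"
)

// Proposed shorter names, implemented here as wrappers for illustration.
func Decode(p []byte) (r rune, size int)           { return utf8.DecodeRune(p) }
func DecodeString(s string) (r rune, size int)     { return utf8.DecodeRuneInString(s) }
func DecodeLast(p []byte) (r rune, size int)       { return utf8.DecodeLastRune(p) }
func DecodeLastString(s string) (r rune, size int) { return utf8.DecodeLastRuneInString(s) }

func main() {
	r, size := DecodeString("héllo")
	fmt.Printf("%c %d\n", r, size) // h 1
}
```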
| NeedsInvestigation,FeatureRequest | low | Major |
95,679,957 | rust | The generic-enum-with-different-disr-sizes test fails in LLDB | Added in https://github.com/rust-lang/rust/pull/27070; I merged the test in with https://github.com/rust-lang/rust/pull/27076 but ignored the LLDB failure to land the LLVM update ahead of time. This test fails in LLDB currently, and it'd be good to know why! For posterity the error is:
```
---- [debuginfo-lldb] debuginfo/generic-enum-with-different-disr-sizes.rs stdout ----
NOTE: compiletest thinks it is using LLDB version 310
error: line not found in debugger output: [...]$1 = Variant1(101)
status: exit code: 0
command: "/usr/bin/python2.7" "/Users/rustbuild/src/rust-buildbot/slave/auto-mac-64-opt/build/src/etc/lldb_batchmode.py" "x86_64-apple-darwin/test/debuginfo-lldb/generic-enum-with-different-disr-sizes.stage2-x86_64-apple-darwin" "x86_64-apple-darwin/test/debuginfo-lldb/generic-enum-with-different-disr-sizes.debugger.script"
stdout:
------------------------------------------
LLDB batch-mode script
----------------------
Debugger commands script is 'x86_64-apple-darwin/test/debuginfo-lldb/generic-enum-with-different-disr-sizes.debugger.script'.
Target executable is 'x86_64-apple-darwin/test/debuginfo-lldb/generic-enum-with-different-disr-sizes.stage2-x86_64-apple-darwin'.
Current working directory is '/Users/rustbuild/src/rust-buildbot/slave/auto-mac-64-opt/build/obj'
Creating a target for 'x86_64-apple-darwin/test/debuginfo-lldb/generic-enum-with-different-disr-sizes.stage2-x86_64-apple-darwin'
settings set auto-confirm true
version
lldb-310.2.37
command script import /Users/rustbuild/src/rust-buildbot/slave/auto-mac-64-opt/build/./src/etc/lldb_rust_formatters.py
type summary add --no-value --python-function lldb_rust_formatters.print_val -x ".*" --category Rust
type category enable Rust
breakpoint set --line 88
Breakpoint 1: where = generic-enum-with-different-disr-sizes.stage2-x86_64-apple-darwin`generic_enum_with_different_disr_sizes::main + 151 at generic-enum-with-different-disr-sizes.rs:88, address = 0x0000000100000da7
run
Hit breakpoint 1.1: where = generic-enum-with-different-disr-sizes.stage2-x86_64-apple-darwin`generic_enum_with_different_disr_sizes::main + 151 at generic-enum-with-different-disr-sizes.rs:88, address = 0x0000000100000da7, resolved, hit count = 1
Process 55204 launched: '/Users/rustbuild/src/rust-buildbot/slave/auto-mac-64-opt/build/obj/x86_64-apple-darwin/test/debuginfo-lldb/generic-enum-with-different-disr-sizes.stage2-x86_64-apple-darwin' (x86_64)
print eight_bytes1
(generic_enum_with_different_disr_sizes::Enum<f64>) $0 = Variant1(100)
print four_bytes1
(generic_enum_with_different_disr_sizes::Enum<i32>) $1 = { = Variant1(101) = Variant2(101) }
print two_bytes1
(generic_enum_with_different_disr_sizes::Enum<i16>) $2 = Variant1(102)
print one_byte1
(generic_enum_with_different_disr_sizes::Enum<u8>) $3 = { = Variant1('A') = Variant2('A') }
print eight_bytes2
(generic_enum_with_different_disr_sizes::Enum<f64>) $4 = Variant2(100)
print four_bytes2
(generic_enum_with_different_disr_sizes::Enum<i32>) $5 = { = Variant1(101) = Variant2(101) }
print two_bytes2
(generic_enum_with_different_disr_sizes::Enum<i16>) $6 = Variant2(102)
print one_byte2
(generic_enum_with_different_disr_sizes::Enum<u8>) $7 = { = Variant1('A') = Variant2('A') }
continue
quit
------------------------------------------
stderr:
------------------------------------------
Traceback (most recent call last):
File "/Users/rustbuild/src/rust-buildbot/slave/auto-mac-64-opt/build/src/etc/lldb_rust_formatters.py", line 155, in print_val
return print_val(lldb_val.GetChildAtIndex(discriminant_val), internal_dict)
File "/Library/PrivateFrameworks/LLDB.framework/Versions/A/Resources/Python/lldb/__init__.py", line 10192, in GetChildAtIndex
return _lldb.SBValue_GetChildAtIndex(self, *args)
NotImplementedError: Wrong number of arguments for overloaded function 'SBValue_GetChildAtIndex'.
Possible C/C++ prototypes are:
GetChildAtIndex(lldb::SBValue *,uint32_t)
GetChildAtIndex(lldb::SBValue *,uint32_t,lldb::DynamicValueType,bool)
Traceback (most recent call last):
File "/Users/rustbuild/src/rust-buildbot/slave/auto-mac-64-opt/build/src/etc/lldb_rust_formatters.py", line 155, in print_val
return print_val(lldb_val.GetChildAtIndex(discriminant_val), internal_dict)
File "/Library/PrivateFrameworks/LLDB.framework/Versions/A/Resources/Python/lldb/__init__.py", line 10192, in GetChildAtIndex
return _lldb.SBValue_GetChildAtIndex(self, *args)
NotImplementedError: Wrong number of arguments for overloaded function 'SBValue_GetChildAtIndex'.
Possible C/C++ prototypes are:
GetChildAtIndex(lldb::SBValue *,uint32_t)
GetChildAtIndex(lldb::SBValue *,uint32_t,lldb::DynamicValueType,bool)
------------------------------------------
thread '[debuginfo-lldb] debuginfo/generic-enum-with-different-disr-sizes.rs' panicked at 'explicit panic', /Users/rustbuild/src/rust-buildbot/slave/auto-mac-64-opt/build/src/compiletest/runtest.rs:1490
failures:
[debuginfo-lldb] debuginfo/generic-enum-with-different-disr-sizes.rs
```
cc @michaelwoerister
| A-debuginfo,T-compiler,C-bug | low | Critical |
95,696,864 | neovim | Terminal UI buffer drops data? | I'm attempting to run a GNU Make instance with termopen, and I'm seeing messages being dropped. I'm hooking into the on_exit() callback to write the terminal buffer to an external log, as well as parse it for errors for the quickfix list.
I deliberately added a compile error so that Make would error-out. When this happens, GCC prints to stderr, make exits, and my messages show up on the terminal. When I look at test.log, however, the last few lines are just _gone._
Rather than sharing the repository, I've managed to find a minimal case that results in lost data, namely by throwing lots of text at the terminal. I don't know if this is the same problem as the make issue, but with this test case, I get about 700 lines of "stdout," when I am expecting 2000.
I poked into it a bit and my guess is that data is being dropped somewhere after SIGCHLD is sent, but there is still data either waiting to be read from the TTY fds, or it's already been read out of the fd but only makes it to the terminal buffer after the process has exited and the exit callback has been called.
I've attached a minimal test case that fails on my computer as of
5e9f9a875645af1e3c858daba799fe4a9021a767
Am I doing something stupid?
```
let s:Output = {}
function! s:Output.on_exit(id, code)
write! test.log
endfunction
function! s:termtest()
only
vert botright sp | enew
let s:Output['bufid'] = bufnr('%')
call termopen([&sh, &shcf, './test.sh'], s:Output)
endfunction
command! -nargs=0 Termtest call s:termtest()
```
EDIT (@mhinz: shortened 2000 lines of code to two loops, as suggested by @lucc):
``` shell
#!/bin/sh
for i in {0..1000}; do echo "stdout $i"; done
for i in {0..1000}; do echo "stderr $i" 1>&2; done
```
| bug,job-control,terminal,complexity:low | medium | Critical |
95,732,561 | go | x/mobile/exp/sprite: pass config.Event to arrangers | As shown in https://go-review.googlesource.com/#/c/12339/ arranger functions may need the screen dimension info. The CL work around the problem using a global var, but a better approach is to let the sprite engine provides the info it already has.
@nigeltao @crawshaw
| mobile | low | Minor |
95,767,060 | go | runtime: make it possible to know the go initialization in shared libraries is finished without calling a go function | Please see [golang-nuts › How to export C variables from Go shared libraries?](https://groups.google.com/d/msg/golang-nuts/IAw-d5mXzk8/7QpXyPAa930J) for discussion.
I tried to write a Go shared library to make Apache or nginx modules.
What I would like to do is export a C struct variable, not a C function, from a Go shared library.
1. What version of Go are you using (go version)?
1.5beta2
```
$ go version
go version go1.5beta2 linux/amd64
```
2. What operating system and processor architecture are you using?
Ubuntu Linux trusty amd64
```
$ uname -a
Linux vagrant-ubuntu-trusty-64 3.13.0-55-generic #92-Ubuntu SMP Sun Jun 14 18:32:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
```
3. What did you do?
Open a Go shared library written from a C main program. Please refer to the commit https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/tree/0972d57461b091aed35c9ed083af0edae0c885cc for source codes.
[runtime_load.c](https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/blob/0972d57461b091aed35c9ed083af0edae0c885cc/runtime_load.c)
```
#include <stdio.h>
#include <dlfcn.h>
int main(int argc, char **argv) {
printf("runtime_load started\n");
void *handle = dlopen("libgodll.so", RTLD_LAZY);
if (!handle) {
printf("dlopen failed\n");
return 1;
}
printf("after dlopen. handle=%llx\n", (long long unsigned)handle);
int *i_ptr = dlsym(handle, "i");
printf("i=%d\n", *i_ptr);
#ifndef NO_DLCLOSE
printf("calling dlclose\n");
dlclose(handle);
printf("after dlclose\n");
#endif
return 0;
}
```
[libgodll.go](https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/blob/1951dfb411dcff8986687ca98f801505397fc6cf/libgodll.go)
```
package main
// int i;
import "C"
func init() {
C.i = 1
}
//export libInit
func libInit() {}
func main() {}
```
[Makefile](https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/blob/1951dfb411dcff8986687ca98f801505397fc6cf/Makefile)
```
test_runtime: runtime_load libgodll.so
LD_LIBRARY_PATH=. ./runtime_load
echo status=$$?
test_compiletime: compiletime_load libgodll.so
LD_LIBRARY_PATH=. ./compiletime_load
echo status=$$?
runtime_load: runtime_load.c
$(CC) -o runtime_load runtime_load.c $(CFLAGS) -ldl
compiletime_load: libgodll.so compiletime_load.c
$(CC) -o compiletime_load compiletime_load.c -L. -lgodll
libgodll.so: libgodll.go
go build -buildmode=c-shared -o libgodll.so libgodll.go
clean:
-rm runtime_load compileitme_load core libgodll.h libgodll.so
```
With these source codes above, I ran `make test_runtime`
4. What did you expect to see?
I expect to see `i=1` is printed at https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/blob/0972d57461b091aed35c9ed083af0edae0c885cc/runtime_load.c#L13
5. What did you see instead?
Actually `i=0` is printed at https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/blob/0972d57461b091aed35c9ed083af0edae0c885cc/runtime_load.c#L13
Please see https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/tree/0972d57461b091aed35c9ed083af0edae0c885cc#load-a-shared-library-at-runtime-using-dlopen for the whole output.
When I added `usleep(1000 * 1000)` between `dlopen` and referencing `i`, `i=1` is printed at https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/blob/1951dfb411dcff8986687ca98f801505397fc6cf/runtime_load.c#L14
Please see https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/tree/1951dfb411dcff8986687ca98f801505397fc6cf#load-a-shared-library-at-runtime-using-dlopen for the whole output.
Please refer to the commit https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/tree/1951dfb411dcff8986687ca98f801505397fc6cf for source codes.
[runtime_load.c](https://github.com/hnakamur/export_c_variable_from_go_dll_experiment/blob/1951dfb411dcff8986687ca98f801505397fc6cf/runtime_load.c)
```
#include <stdio.h>
#include <unistd.h> /* for usleep */
#include <dlfcn.h>
int main(int argc, char **argv) {
printf("runtime_load started\n");
void *handle = dlopen("libgodll.so", RTLD_LAZY);
if (!handle) {
printf("dlopen failed\n");
return 1;
}
printf("after dlopen. handle=%llx\n", (long long unsigned)handle);
usleep(1000 * 1000);
int *i_ptr = dlsym(handle, "i");
printf("i=%d\n", *i_ptr);
#ifndef NO_DLCLOSE
printf("calling dlclose\n");
dlclose(handle);
printf("after dlclose\n");
#endif
return 0;
}
```
I would like to know when the Go initialization of shared libraries is finished without calling a Go function.
Ideally, I would like to set values of C variables at Go shared library compile time, instead of updating values of C variables with `init()` in a Go shared library.
| compiler/runtime | low | Critical |
95,775,525 | go | net: apply ICANN/IANA-managed semantics to IP.IsGlobalUnicast | net.IP.IsGlobalUnicast() applies incorrect address semantics. The current implementation returns True if the address is not Unspecified, Loopback, LinkLocal, or Multicast. However, a Global Unicast address, by definition, excludes more than just those categories. Class E space is to be excluded, as is all of RFC1918 & RFC6598.
This manifests in a few significant ways:
- `net.ParseIP("255.255.255.255").IsGlobalUnicast()` currently returns true
- `net.ParseIP("10.1.2.3").IsGlobalUnicast()` currently returns true
I would propose the following changes:
- Rename IsGlobalUnicast() to IsUnicast()
- Add a check to IsUnicast() to exclude Class E and broadcast addresses
- Create a new IsGlobalUnicast() which extends IsUnicast() by also excluding RFC1918 & RFC6598 addresses
Ideally, it would be even better to extend both further using the tables in RFC6890 as a guideline.
| NeedsInvestigation | low | Major |
95,785,133 | go | go/printer: AST rewriting and then formatting results in mangled docs | In golang.org/cl/12373, I wrote a cmd/fix rule to rewrite "types.Typ[x]" into "x.Basic()". However, the naive approach causes
```
package foo
func f() {
_ = types.Typ[types.Int]
}
// dummy godoc
func g() {
}
```
to get formatted as
```
package foo
func f() {
_ = types.Int.Basic(
// dummy godoc
)
}
func g() {
}
```
I.e., the godoc for g is moved up into the arguments section for the .Basic() invocation. ~~See http://play.golang.org/p/dwq4E8dsMW for a working example.~~ _Edit: See http://play.golang.org/p/CP8ylmNjmQ for an updated working example using ast.CommentMap._
As a workaround, I found setting Rparen seems to prevent the issue. I wouldn't expect that to be necessary though.
| NeedsInvestigation | low | Minor |
95,793,198 | kubernetes | Promotable canary deployments (aka auto-pause) | Writing the new user guide made me think about this.
It's currently easy to run multiple release tracks (e.g., daily release and weekly release), by just running separate replication controllers indefinitely.
However, if someone wants to run a couple canary instances for a while and then roll out the same image to replace the full replica set once it has been sufficiently validated, we don't have direct support for that.
It is relatively easy to just kill kubectl rolling-update in the middle and resume or rollback later, but only if the rate of the rollout is sufficiently slow and one is watching it closely.
The simplest solution I can think of is to automate killing kubectl rolling-update: a --canaries=N flag, which would cause it to break out of the update loop after ramping up the new replication controller to N. The rolling update should be resumable just as if it were killed manually, to promote the canary to the current version.
cc @kelseyhightower
| area/app-lifecycle,area/workload-api/deployment,sig/apps,priority/important-longterm,lifecycle/frozen | medium | Major |
95,828,073 | rust | str and [u8] should hash the same | `str` and `[u8]` hash differently and have done since the solution to #5257.
This is inconvenient in that it leads to things like the `StrTendril` type (from the `tendril` crate) hashing like a `[u8]` rather than like a `str`; one is thus unable to happily implement `Borrow<str>` on it in order to make `HashMap<StrTendril, _>` palatable.
`[u8]` gets its length prepended, while `str` gets 0xff appended. Sure, one u8 rather than one usize is theoretically cheaper to hash; but marginally so only, marginally so. I see no good reason why they should use different techniques, and so I suggest that `str` should be changed to use the hashing technique of `[u8]`. This will prevent potential nasty surprises and bring us back closer to the blissful land of “str is just [u8] with a UTF-8 guarantee”.
Hashes being internal matters, I do not believe this should be considered a _breaking_ change, but it would still probably be a release-notes-worthy change as it _could_ conceivably break eldritch codes.
| T-libs-api,C-feature-request | medium | Critical |
95,849,682 | rust | Failure to fulfill higher-kinded "outlives" predicate could have better error message | Example:
``` rust
fn foo<T>() where for<'a> T: 'a {}
fn bar<'b>() {
foo::<&'b i32>();
}
fn main() {}
```
Error:
```
foo.rs:4:5: 4:19 error: the type `&'b i32` does not fulfill the required lifetime
foo.rs:4 foo::<&'b i32>();
^~~~~~~~~~~~~~
note: type must outlive the static lifetime
error: aborting due to previous error
```
It's not necessarily obvious why the type has to be `'static` in this case.
| E-hard,C-enhancement,A-diagnostics,T-compiler | medium | Critical |
95,853,897 | youtube-dl | site support request - lattelecom.tv | please add support for:
https://www.lattelecom.tv/tiesraide
in particular I am trying to record this festival livestream:
https://www.lattelecom.tv/tiesraide/360tv_positivus
in order to watch it I had to register for free by email and clicking on an activation link. else the video stream would just play one minute.
I could not spot anything like HDS, HLS streaming but only this url:
http://195.13.206.152/loadbalancer?stream=lattelecom_lv_lq.stream&stream=lattelecom_lv_hq.stream
cheers
| site-support-request | low | Minor |
95,918,536 | go | cmd/pprof: make runtime.reflectcall followable by pprof | Context: http://stackoverflow.com/q/31419307/532430
Proposal: allow pprof to follow reflective function invocations via `runtime.reflectcall` so that code "after the jump" can be profiled.
I think the idea is simple enough but if anything was unclear just ask and I'll try to clarify.
| NeedsInvestigation | low | Major |
95,937,897 | youtube-dl | Add support to RASD TV | Hello!
Could you please add support for the Saharawi people's television site?[1] An example URL is this one.[2] Any hints are welcome too.
Thanks
[1] http://www.rasd.tv/
[2] http://www.rasd.tv/index.php/video/2958/%D9%86%D8%B4%D8%B1%D8%A9-%D8%A7%D9%84%D8%A7%D8%AE%D8%A8%D8%A7%D8%B1-18-07-2015
| site-support-request | low | Minor |
95,955,129 | go | image/color: NRGBA(64).RGBA() optimization | If alpha is equal to 0xffff, return r,g,b,a.
If alpha is equal to 0, return 0,0,0,0.
Else, multiply by alpha, divide by 0xffff, return r,g,b,a.
New code:
``` go
func (c NRGBA) RGBA() (r, g, b, a uint32) {
a = uint32(c.A)
a |= a << 8
if a == 0 {
return 0, 0, 0, 0
}
r = uint32(c.R)
r |= r << 8
g = uint32(c.G)
g |= g << 8
b = uint32(c.B)
b |= b << 8
if a == 0xffff {
return
}
r = r * a / 0xffff
g = g * a / 0xffff
b = b * a / 0xffff
return
}
func (c NRGBA64) RGBA() (r, g, b, a uint32) {
a = uint32(c.A)
if a == 0 {
return 0, 0, 0, 0
}
r = uint32(c.R)
g = uint32(c.G)
b = uint32(c.B)
if a == 0xffff {
return
}
r = r * a / 0xffff
g = g * a / 0xffff
b = b * a / 0xffff
return
}
```
| Performance,NeedsInvestigation | low | Major |
96,001,027 | java-design-patterns | Open Session In View pattern | ### Description
The Open Session In View (OSIV) design pattern is aimed at solving the problem of lazy loading in web applications. It allows an open persistence context for the duration of a request, enabling lazy-loaded associations to be accessed and resolved within the view layer.
#### Main Elements of Open Session In View Pattern:
1. **Persistence Context**: Maintains the session open throughout the entire request.
2. **Interceptor or Filter**: Manages the lifecycle of the session, ensuring it starts at the beginning of the request and is closed at the end.
3. **Lazy Loading**: Allows associations to be lazily loaded and accessed in the view layer without encountering `LazyInitializationException`.
4. **Transactional Boundaries**: Ensures transactions are committed before the view is rendered but keeps the session open to resolve lazy-loaded associations.
### References
- [Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
- [The Open Session In View Anti-Pattern](https://vladmihalcea.com/the-open-session-in-view-anti-pattern/)
- [Open Session In View Pattern](https://www.linkedin.com/pulse/open-session-view-pattern-hamed-hatami/)
### Acceptance Criteria
1. Implement an interceptor or filter to manage the session lifecycle, ensuring the session is opened at the start of the request and closed at the end.
2. Ensure that the transactional boundaries are respected, with transactions being committed before rendering the view.
3. Validate that lazy-loaded associations can be accessed in the view layer without triggering `LazyInitializationException`.
| info: help wanted,epic: pattern,type: feature | low | Minor |
96,209,104 | thefuck | Add ability to edit suggested command | It would be good to have the ability to edit the suggested command.
| enhancement | low | Minor |
96,279,727 | thefuck | How to install it on Windows? | I'm using Windows 7.
I successfully installed Python 3.4.3 and the "thefuck" package by running "pip install thefuck" on the command line.
However, I don't know how to apply "thefuck" to my Git Bash (located at "C:\Program Files\Git\bin\sh.exe").
| windows | medium | Major |
96,442,051 | go | all: user-facing golang.org/x repos need to stay green | We need to get the subrepos green consistently for 1.5 and moving forward.
edit 2023-06-23 by @heschi:
Modules that are vendored into the main release, such as `net` and `sys`, as well as user-facing libraries like `tools` and `text`, must be healthy before a release can be issued. Other modules are out of scope. | NeedsFix,release-blocker | high | Critical |
96,485,419 | youtube-dl | support hclips.com | Could we add hclips.com?
Thanks
| site-support-request | low | Minor |
96,833,446 | go | cmd/vet: shadow false positive when using exported function | `go tool vet --shadow` returns a false positive when the variable being shadowed is declared from the return of an exported user function from another package. That's a mouthful, so here's some code:
./other/other.go:
``` go
package other
func Foo() (int, error) {
return 0, nil
}
```
./shadow_test.go:
``` go
package shadow
import (
"errors"
"testing"
"github.com/tamird/shadow/other"
)
func foo() (int, error) {
return 0, nil
}
func TestVetShadow(t *testing.T) {
	// Local variables: this passes.
// a, err := "a", errors.New("Foo")
// Function from same package: this passes.
// a, err := foo()
// Function from standard library: this passes.
// r := &bytes.Buffer{}
// a, err := r.Read(nil)
// Function from different package: this triggers shadowing warning.
a, err := other.Foo()
if err != nil {
}
if _, err := other.Foo(); err != nil {
}
b, err := "b", errors.New("Foo")
if err != nil {
}
_, _ = a, b
}
```
The comments in the code describe the problem. Use `other.Foo()` and you get a false positive; use any of the others (local function, local variables, exported method from the stdlib) and the warning goes away. Weird!
All the code is also available here: https://github.com/tamird/shadow
cc @mberhault
| Analysis | low | Critical |
96,880,215 | rust | Move HashMap to liballoc | This is blocked on https://github.com/rust-lang/rust/pull/26870
- [ ] Move FnvHasher from src/librustc/util/nodemap.rs to libcollections
- [ ] Expose FnvHasher as DeterministicState
- [ ] Move HashMap's implementation into libcollections defaulting to DeterministicState
- [ ] Unhardcode HashMap's constructors/methods from using RandomState (instead relying on default fallback)
- [ ] Re-export HashMap in std::collections as `pub type HashMap<K, V, S = RandomState> = core_collections::HashMap<K, V, S>`
- [ ] Do the same for HashSet
- [ ] Re-export DeterministicState in std as well (so we have a better answer to "HashMap is slow")
- [ ] Document the performance-stability/security tradeoff between DeterministicState and RandomState, and that users of the std facade will get a different default from direct users of libcollections (because RandomState requires OS rng to seed its state)
I am willing to mentor this if someone else wants to do it.
Note that re-exporting HashMap may be the trickiest one; it's not clear to me that it's trivial, and may involve manually re-exporting everything in std::collections::hash_map to get the right behaviour (annoying and potentially fragile).
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"clarfonthey"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | C-cleanup,E-mentor,A-collections,T-libs-api | high | Major |
96,934,206 | go | runtime: make panic dumps more readable | When a Go program panics, it dumps information on running goroutines, stack trace, error etc. But it's hard for humans to parse. In a terminal, which is where you're most likely to see panics as you're testing your program, the error message is the first line and you have to scroll to see it.
Python does this really well, as they should, since everything is a runtime "panic". The last line is the error message. Then there's a blank line, and then the stack trace in reverse order, so that the frame that had the error is the next line reading up. 95% of the time those last two lines are all you need to see to understand what's going wrong.
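The Python layout described above can be checked directly with a small snippet (illustrative only; the function names are mine):

```python
import traceback

def inner():
    raise ValueError("boom")

def outer():
    inner()

try:
    outer()
except ValueError:
    # format_exc() renders the traceback exactly as the interpreter would.
    lines = traceback.format_exc().strip().splitlines()

# The header comes first, the error message is the LAST line, and the
# frame that raised sits immediately above it, reading upward.
```

So in a terminal the most useful information is the last thing printed, with no scrolling needed.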
I'd like to see Go lay things out similarly, with the most important information at the bottom, and the least important at the top. It would make debugging Go programs in the terminal a little bit more pleasant. Again that would be the most common case for seeing a panic by far. Production panics are hopefully few and far between.
| NeedsDecision,compiler/runtime | low | Critical |
96,964,106 | go | doc: document the way the tools work | Nowhere is it explained how go build constructs software: how the compiler and linker run, where files are placed, how test builds its code, and so on. There should be an implementation and explanation document outlining this process. Such a document would also help someone who must use Go from within an existing software construction system.
| Documentation,help wanted,NeedsFix | low | Minor |
96,979,974 | java-design-patterns | Disruptor Pattern | **Description:**
The Disruptor is a high-performance inter-thread messaging library. It provides a way to achieve high throughput and low latency in message processing systems. The main elements of the Disruptor pattern include:
1. **Ring Buffer:** A pre-allocated circular buffer that holds the data to be processed. The ring buffer is the core of the Disruptor, allowing for efficient memory allocation and access.
2. **Event Processors:** Components that consume events from the ring buffer. They can be set up in different configurations such as single, multiple, or parallel consumers.
3. **Sequencers:** Components that control the order of events being processed. They ensure that events are processed in the correct sequence and manage dependencies between different event processors.
4. **Producers:** Components that publish events to the ring buffer. Producers write data to the ring buffer, which is then consumed by the event processors.
5. **Wait Strategies:** Strategies that determine how consumers wait for events to be available in the ring buffer. Different strategies can be used to balance between CPU usage and latency.
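A single-producer/single-consumer sketch of how these pieces fit together, written here in Python for brevity (class and method names are illustrative, not the LMAX Disruptor API):

```python
class RingBuffer:
    """Pre-allocated circular buffer addressed by ever-increasing sequences."""

    def __init__(self, size):
        # A power-of-two size lets "seq & (size - 1)" replace a modulo.
        assert size > 0 and size & (size - 1) == 0
        self.size = size
        self.slots = [None] * size   # pre-allocated storage: no per-event allocation
        self.cursor = -1             # last published sequence (producer side)
        self.gating = -1             # last consumed sequence (consumer side)

    def publish(self, event):
        next_seq = self.cursor + 1
        # The producer must not lap the slowest consumer (wrap check).
        if next_seq - self.gating > self.size:
            raise RuntimeError("ring buffer full: producer would lap the consumer")
        self.slots[next_seq & (self.size - 1)] = event
        self.cursor = next_seq       # publishing the sequence makes the slot visible

    def consume(self):
        if self.gating >= self.cursor:
            return None              # a real wait strategy would spin, yield, or block here
        self.gating += 1
        return self.slots[self.gating & (self.size - 1)]
```

A real implementation adds memory barriers, cached sequence reads, and pluggable wait strategies; this sketch only shows the sequencing discipline between one producer and one consumer.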
**References:**
1. [Disruptor Pattern Wikipedia](https://en.wikipedia.org/wiki/Disruptor_(software))
2. [LMAX Disruptor: High Performance Inter-thread Messaging Library](https://lmax-exchange.github.io/disruptor/)
3. [Martin Fowler on LMAX Architecture](https://martinfowler.com/articles/lmax.html)
**Acceptance Criteria:**
1. Implement a ring buffer with configurable size to hold the events.
2. Develop event processors that consume events from the ring buffer, supporting different processing configurations.
3. Ensure proper sequencing of events using a sequencer component.
4. Implement producers that can publish events to the ring buffer efficiently.
5. Include at least one wait strategy for managing how consumers wait for events.
Please refer to the [project contribution guidelines](https://github.com/iluwatar/java-design-patterns/wiki) before submitting your pull request. | info: help wanted,epic: pattern,type: feature | low | Major |
97,012,356 | go | encoding/json: UnmarshalTypeError.Offset and json subparsers | This is a followup to #9693 that added UnmarshalTypeError.Offset to record textual position of errors during unmarshaling. The problem with the landed patch is that it does not deal with UnmarshalTypeError returned from implementations of Unmarshaler.UnmarshalJSON(). The offset in such case would reflect an offset from the beginning of the slice, not from the start of the original JSON.
It would be nice to be able to recover the full offset for such cases. For example, UnmarshalTypeError could include a boolean flag indicating that UnmarshalTypeError.Offset should be updated by the caller to reflect the real position in the file. Then the code that handles errors from UnmarshalJSON would do just that.
| NeedsInvestigation | low | Critical |
97,038,262 | nvm | Cannot get this to build on Raspberry Pi Model B+ Debian Jessie | I am trying to install node.js on my Raspberry Pi Model B+, which has a native Debian Jessie install. I followed the instructions here: https://github.com/creationix/nvm
Can anyone help with this please.
Below is the start and end of the install process
nvm install stable
###### ################################################################## 100.0%
HTTP 404 at URL https://nodejs.org/dist/v0.12.7/node-v0.12.7-linux-arm-pi.tar.gz
Binary download failed, trying source.
###### ################################################################## 100.0%
creating ./icu_config.gypi
{ 'target_defaults': { 'cflags': [],
'default_configuration': 'Release',
'defines': [],
'include_dirs': [],
'libraries': []},
'variables': { 'arm_float_abi': 'hard',
'arm_fpu': 'vfpv3',
'arm_neon': 0,
'arm_thumb': 0,
'arm_version': '6',
'clang': 0,
'gcc_version': 49,
'host_arch': 'arm',
'icu_small': 'false',
'node_install_npm': 'true',
'node_prefix': '/home/osmc/.nvm/versions/node/v0.12.7',
'node_shared_cares': 'false',
'node_shared_http_parser': 'false',
'node_shared_libuv': 'false',
'node_shared_openssl': 'false',
'node_shared_v8': 'false',
'node_shared_zlib': 'false',
'node_tag': '',
'node_use_dtrace': 'false',
'node_use_etw': 'false',
'node_use_mdb': 'false',
'node_use_openssl': 'true',
'node_use_perfctr': 'false',
'openssl_no_asm': 0,
'python': '/usr/bin/python',
'target_arch': 'arm',
'uv_library': 'static_library',
'uv_parent_path': '/deps/uv/',
'uv_use_dtrace': 'false',
'v8_enable_gdbjit': 0,
'v8_enable_i18n_support': 0,
'v8_no_strict_aliasing': 1,
'v8_optimized_debug': 0,
'v8_random_seed': 0,
'v8_use_snapshot': 'true',
'want_separate_host_toolset': 0}}
creating ./config.gypi
creating ./config.mk
make -C out BUILDTYPE=Release V=1
make[1]: Entering directory '/home/osmc/.nvm/src/node-v0.12.7/out'
...
...
...
LD_LIBRARY_PATH=/home/osmc/.nvm/src/node-v0.12.7/out/Release/lib.host:/home/osmc/.nvm/src/node-v0.12.7/out/Release/lib.target:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; cd ../deps/v8/tools/gyp; mkdir -p /home/osmc/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_snapshot/geni; "/home/osmc/.nvm/src/node-v0.12.7/out/Release/mksnapshot" --log-snapshot-positions --logfile "/home/osmc/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_snapshot/geni/snapshot.log" "/home/osmc/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_snapshot/geni/snapshot.cc"
Illegal instruction
deps/v8/tools/gyp/v8_snapshot.target.mk:13: recipe for target '/home/osmc/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_snapshot/geni/snapshot.cc' failed
make[1]: *** [/home/osmc/.nvm/src/node-v0.12.7/out/Release/obj.target/v8_snapshot/geni/snapshot.cc] Error 132
make[1]: Leaving directory '/home/osmc/.nvm/src/node-v0.12.7/out'
Makefile:45: recipe for target 'node' failed
make: *** [node] Error 2
nvm: install v0.12.7 failed!
| OS: Raspberry Pi,needs followup | low | Critical |
97,088,953 | kubernetes | Document the environment for kubectl exec | What PATH is set, etc. If the PATH is not useful, we should make it useful, so users don't have to resort to 'sh -c'
@ncdc
| priority/backlog,kind/documentation,sig/node,sig/docs,lifecycle/frozen | medium | Major |
97,159,486 | go | path/filepath: Glob should support `**` for zero or more directories | Go version 1.4.2
Mac OS X 10.10
Example:
```
package main
import "fmt"
import "path/filepath"
import "os"
func main() {
files, err := filepath.Glob("/usr/local/go/src/**/*.go")
if err != nil {
fmt.Print(err)
os.Exit(1)
}
fmt.Printf("files: %d\n", len(files))
for _, f := range files {
fmt.Println(f)
}
}
```
Expected:
```
% ls /usr/local/go/src/**/*.go | wc -l
1633
```
Actual:
```
files: 732
```
It seems that `**` is treated as equivalent to `*`. The extended `**` pattern (matching zero or more directories) is common in shells and is supported in Rust and Java, for example.
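For reference, the semantics other ecosystems implement can be demonstrated with Python's `glob` module (`recursive=True`), where `**` matches zero or more directories:

```python
import glob
import os
import tempfile

# Build a tiny tree: root/top.go, root/a/mid.go, root/a/b/deep.go.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
for rel in ["top.go", os.path.join("a", "mid.go"), os.path.join("a", "b", "deep.go")]:
    open(os.path.join(root, rel), "w").close()

# "**" matches zero or more directory levels, so all three files are found,
# including the one directly under root.
matches = sorted(glob.glob(os.path.join(root, "**", "*.go"), recursive=True))
```

This is the behavior `filepath.Glob` users expect from `src/**/*.go`: files at every depth, not just one level down.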
| Documentation,NeedsDecision,FeatureRequest | high | Critical |
97,179,424 | opencv | matchShapes and all zeros HuMoments | I've encountered an issue where, when collecting a list of similar shapes using the matchShapes function, one shape was being given a perfect match against any other shape. After investigation, it turned out that the Hu moments calculated for that shape were all zero. I'm using a customized version of matchShapes where the Hu moments are precalculated, so I can catch these things. For the built-in function, this is all done at function level, abstracted away from the user.
I realize this could be reasonably chalked up to a user issue and a lack of sanitizing data before processing it, but I thought I'd bring the issue up here to raise the idea of validating the calculated moments in the function. Otherwise, you'll have a matchShapes function that could, under the right circumstances, match anything to anything.
Edit - After reviewing the code closely, there definitely needs to be at least an assert in matchShapes verifying that neither of the compared sets of Hu moments has all 7 values set to 0, because if either set is all zero, the `> eps` check always returns false, no difference is accumulated, and a perfect match is returned when in fact there is no match at all.
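A minimal pure-Python sketch of the I1 variant of the match (the log-moment formula matchShapes uses), with the proposed guard; `EPS` and the function name are illustrative, not the OpenCV API:

```python
import math

EPS = 1e-5  # small threshold, similar in spirit to the one matchShapes uses internally

def match_i1(hu_a, hu_b, eps=EPS):
    """Sketch of the I1 shape-match metric over two 7-element Hu moment sets."""
    # Proposed guard: an all-zero moment set would skip every term in the
    # loop below and silently report a "perfect" 0.0 match against anything.
    if all(abs(h) <= eps for h in hu_a) or all(abs(h) <= eps for h in hu_b):
        raise ValueError("degenerate Hu moments: all values ~0, match is undefined")
    result = 0.0
    for a, b in zip(hu_a, hu_b):
        if abs(a) > eps and abs(b) > eps:
            # Hu moments of real contours are << 1, so the logs are nonzero
            # (an illustrative simplification; production code should check).
            ma = math.copysign(1.0, a) * math.log10(abs(a))
            mb = math.copysign(1.0, b) * math.log10(abs(b))
            result += abs(1.0 / ma - 1.0 / mb)
    return result
```

Without the guard, `match_i1([0]*7, anything)` would return 0.0, which is exactly the false "perfect match" described above.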
| bug,priority: normal,category: shape | low | Minor |
97,306,228 | bitcoin | Does not use bind to local address for outgoing connections | I have several IPv4 / IPv6 addresses on a host and I wish to use a specific one. So I set up the bind so it would use that. It seems to be using that for the bind() in the listen socket but not for the outgoing connections.
I've tried setting externalip. That only seems to change the "localaddresses" returned by getnetworkinfo. I'm guessing that's mostly useful for NAT.
| Feature,P2P | low | Major |
97,324,926 | rust | Emit warnings on parameter list in closures after { | # Reproducing steps
This code compiles (and, in my opinion, it shouldn't):
``` rust
let p = Some(45).and_then({|x|
Some(x * 2)
});
```
While this doesn't:
``` rust
let p = Some(45).and_then({|x|
println!("doubling {}", x);
Some(x * 2)
})
```
### Error message 1 (using x)
The error message is absolutely unintuitive and won't help anyone:
```
<anon>:4:14: 4:15 error: unresolved name `x` [E0425]
<anon>:4 Some(x * 2)
^
error: aborting due to previous error
```
[**Playpen link**](http://is.gd/vgOyRD)
### Error message 2 (not using x)
When the last expression does not use a parameter from the list, the generated error message is even more frightening to newcomers (albeit in retrospect, this one makes more sense):
```
<anon>:2:22: 5:7 error: the trait `core::ops::FnOnce<(_,)>` is not implemented for the type `core::option::Option<_>` [E0277]
<anon>:2 let p = Some(45).and_then({|x|
<anon>:3 println!("doubling {}", x);
<anon>:4 Some(100 * 2)
<anon>:5 });
<anon>:2:22: 5:7 help: see the detailed explanation for E0277
<anon>:2:22: 5:7 error: the trait `core::ops::FnOnce<(_,)>` is not implemented for the type `core::option::Option<_>` [E0277]
<anon>:2 let p = Some(45).and_then({|x|
<anon>:3 println!("doubling {}", x);
<anon>:4 Some(100 * 2)
<anon>:5 });
<anon>:2:22: 5:7 help: see the detailed explanation for E0277
error: aborting due to 2 previous errors
```
[**Playpen link**](http://is.gd/zQAHZU)
# Proposed solution
Emit a warning when encountering this fallible syntax.
# Anecdotic background
Having used a lot of Ruby, I had grown accustomed to the `{ |x| x * 2 }` syntax. Although I had seen the proper Rust syntax a couple of times, it was never emphasized and its significance never stuck. I spent 30+ minutes trying to figure out why a println! statement breaks the build, generating a seemingly random error message with no obvious connection to the actual bug at hand. Only after asking in the IRC channel did a solution unveil itself. It shouldn't happen.
| C-enhancement,A-diagnostics,P-low,T-compiler | low | Critical |
97,374,955 | go | cmd/go: document that a .go file is required even if there is a .swig file | ```
go version devel +aad4fe4 Thu Jul 23 05:50:53 2015 +0000 windows/amd64
```
Only include `example.swig` and `simple.c`:
```
go build
can't load package: package mydev/swig/simple: no buildable Go source files in C:\go\gopkg\src\mydev\swig\simple
```
If I create an `x.go` like this:
```
package example
```
`go build` is ok.
| Documentation,NeedsFix | low | Minor |
97,410,071 | opencv | Expose more of the HOG functionality in the python API | Transferred from http://code.opencv.org/issues/103
```
|| Gijs Molenaar on 2010-01-28 12:03
|| Priority: Normal
|| Affected: None
|| Category: python bindings
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Expose more of the HOG functionality in the python API
```
I would love to see more of the HOG functionality exposed in the Python API. It is really great that it is already implemented in the SVN version, but It would be great to be able to use it for other shapes than people.
```
## History
##### anonymous - on 2010-05-06 10:23
```
After implementing and running the HOG descriptor and classifier of OpenCV with Python, I can say that it performs great on people in default poses (walking, standing). On the other hand, it performs badly when I try to detect people in slightly more complicated poses like sitting, playing golf, etc. I wish this algorithm were trained on different kinds of human poses.
Thanks for this nice work
```
##### Gijs Molenaar on 2010-05-06 11:51
```
It is trained with standing people; that's why it doesn't work for other poses. You should train it with other poses to detect other poses. The problem is that you can't do that at the moment with the Python API.
```
| auto-transferred,priority: normal,feature,category: python bindings | low | Minor |
97,410,481 | opencv | Separable morphology filters | Transferred from http://code.opencv.org/issues/137
```
|| Dirk Van Haerenborgh on 2010-02-22 10:11
|| Priority: Normal
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Separable morphology filters
```
Right now, when doing a morphology operation, it is only split into a row and a column filter if the structuring element is a filled rectangle.
In many cases, however, due to the associativity of morphology operations, a noticeable speed-up (theoretically at least 1.5) can be achieved by using a separated row and column filter.
When using a structuring element of M * N, the speed-up is equal to (M*N)/(M+N).
More information here: http://blogs.mathworks.com/steve/2006/11/28/separable-convolution-part-2/
```
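The associativity claim is easy to check numerically. Below is a naive pure-Python sketch for the filled-rectangle case (top-left anchor, border positions where the kernel does not fit set to 0), where eroding with an M×N rectangle equals a 1×N row pass followed by an M×1 column pass; all function names are illustrative:

```python
def erode_rect(img, m, n):
    """Naive binary erosion with a filled m x n rectangle (top-left anchor,
    border positions where the kernel does not fit are left 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - m + 1):
        for x in range(w - n + 1):
            out[y][x] = int(all(img[y + i][x + j]
                                for i in range(m) for j in range(n)))
    return out

# Separated passes: O(M + N) work per pixel instead of O(M * N),
# hence the (M*N)/(M+N) theoretical speed-up quoted above.
def erode_rows(img, n):
    return erode_rect(img, 1, n)

def erode_cols(img, m):
    return erode_rect(img, m, 1)
```

The helpers reuse the naive kernel for brevity, so the sketch demonstrates the equivalence of the decomposition, not the speed-up itself.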
## History
##### Kirill Kornyakov on 2012-06-19 06:39
```
Vadim, am I right that we already have separable implementation of morphology filters? If so, we should close this ticket...
```
##### Vadim Pisarevsky on 2012-10-26 18:42
```
Separable morphology is a bit more general thing, comparing to what we have. Morphological operation with the boolean kernel K can be done as a separable operation iff the kernel can be represented as:
<pre>
Kij = Pi & Qj,
</pre>
where P and Q are 1D kernels. In the case of the rectangular morphology both P and Q are all ones. The practical application of such operations is unclear. Many of real-life non-rectangular morphological kernels that I know (used for edge thinning, finding blobs of certain size etc.) are non-separable.
```
| auto-transferred,priority: normal,feature,category: imgproc,category: video | low | Minor |
97,410,785 | opencv | Unicode support requested | Transferred from http://code.opencv.org/issues/148
```
|| Илья Москвин on 2010-02-25 21:54
|| Priority: Low
|| Affected: None
|| Category: highgui-gui
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Unicode support requested
```
We use the new Python bindings and have to use non-English chars in window names and file paths. Couldn't you please add Unicode support to OpenCV, if possible?
```
## History
##### David Ullmann on 2011-04-25 09:50
```
It's still not possible to build OpenCV (highgui module) with unicode support (platform windows 7, visual studio 2010, OpenCV SVN, 32 bit).
This makes it almost impossible to use OpenCV with wxWidgets > 2.9, which defaults to unicode.
```
##### None on 2012-02-12 18:57
```
- Description changed from We use the new Python bindings and have to
use non-english chars in window n... to We use the new Python
bindings and have to use non-english chars in window n... More
```
##### Vadim Pisarevsky on 2012-04-24 14:47
```
1. OpenCV can create files with non-English names when they are encoded using UTF-8 and passed as normal text strings (tested on Linux and MacOSX).
2. Since OpenCV does not yet support Python 3.x and in Python 2.x text strings are encoded with UTF-8 (i.e. in the format as OpenCV expects it), it's possible to pass a unicode filename to OpenCV function directly using UTF-8 encoding.
3. On Windows it can work too, but instead of UTF-8 the current 1-byte encoding should be used.
So, it's possible (and there is always rename() function in the case of file names); Therefore, I lower the priority of the task.
- Priority changed from High to Low
- Assignee set to Vadim Pisarevsky
```
##### Andrey Kamaev on 2012-08-16 15:23
```
- Category changed from highgui-images to highgui-gui
```
| auto-transferred,feature,priority: low,category: highgui-gui | low | Minor |
97,410,820 | opencv | Add analogon to CV_NEXT_GRAPH_EDGE | Transferred from http://code.opencv.org/issues/151
```
|| Matthäus Brandl on 2010-03-03 15:22
|| Priority: Low
|| Affected: None
|| Category: core
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Add analogon to CV_NEXT_GRAPH_EDGE
```
Add a macro that returns the other vertex belonging to an edge.
```
## History
##### Alexander Smorkalov on 2012-10-03 08:42
```
- Target version set to 2.4.3
- Assignee set to Vadim Pisarevsky
```
##### Andrey Kamaev on 2012-10-31 14:19
```
- Target version changed from 2.4.3 to 3.0
```
| auto-transferred,feature,priority: low,category: core | low | Minor |
97,410,852 | opencv | Video codecs adjustment | Transferred from http://code.opencv.org/issues/177
```
|| Илья Москвин on 2010-03-14 20:31
|| Priority: Normal
|| Affected: None
|| Category: highgui-video
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Video codecs adjustment
```
We need the possibility to set up codec options from our program by using the Python API to OpenCV. Probably it could be something like additional "CreateVideoWriter()" arguments. If a common solution for many codecs is difficult to realise, then options for some particular codecs would still be valuable.
```
## History
##### James Bowman on 2010-05-14 20:25
```
So... I think this ticket is asking for more control over the video codec, by adding arguments to "CreateVideoWriter()". Is that correct?
```
##### Илья Москвин on 2010-05-14 20:42
```
Replying to [comment:3 jamesb]:
> So... I think this ticket is asking for more control over the video codec, by adding arguments to "CreateVideoWriter()". Is that correct?
Yes, we are requesting for an additional control over the video codec.
And it seems that adding arguments to "CreateVideoWriter()" could be the handiest way to user.
```
##### James Bowman on 2010-05-14 21:54
```
OK, reassigning to highgui.
```
##### Alexander Shishkov on 2012-02-12 21:41
```
- Description changed from We need a possibility to setup codec
options from our program by using python... to We need a possibility
to setup codec options from our program by using python... More
```
##### Andrey Kamaev on 2012-08-16 14:47
```
- Category changed from highgui-images to highgui-video
```
| auto-transferred,priority: normal,feature,category: videoio | low | Minor |
97,410,927 | opencv | Functions undocumented in Python and C | Transferred from http://code.opencv.org/issues/229
```
|| James Bowman on 2010-03-26 14:29
|| Priority: Low
|| Affected: None
|| Category: documentation
|| Tracker: Bug
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Functions undocumented in Python and C
```
From the latex2sphinx error log:
The following functions are undocumented
@Abs@
@BackProjectPCA@
@CV_16SC@
@CV_16UC@
@CV_32FC@
@CV_32SC@
@CV_64FC@
@CV_8SC@
@CV_8UC@
@CV_CMP@
@CV_FOURCC@
@CV_IABS@
@CV_IS_SEQ_CLOSED@
@CV_IS_SEQ_CONVEX@
@CV_IS_SEQ_CURVE@
@CV_IS_SEQ_HOLE@
@CV_IS_SEQ_INDEX@
@CV_IS_SEQ_SIMPLE@
@CV_MAKETYPE@
@CV_MAT_CN@
@CV_MAT_DEPTH@
@CV_SIGN@
@CalcArrBackProject@
@CalcArrHist@
-@CalcOpticalFlowFarneback@-
@CalcPCA@
-@CalibrationMatrixValues@-
@CheckArr@
@ClearSeq@
-@ConvertMaps@-
@CreateCameraCapture@
@CreateFileCapture@
@CvtPixToPlane@
@CvtScale@
@DecodeImage@
@DecodeImageM@
@EncodeImage@
-@EqualizeHist@-
-@EstimateRigidTransform@-
-@GetReal1D@-
-@GetReal2D@-
-@GetReal3D@-
-@GetRealND@-
@HOGDetectMultiScale@
-@HoughCircles@-
@MatMul@
@MatMulAdd@
@MaxRect@
-@Normalize@-
@ProjectPCA@
-@PyrUp@-
-@RandShuffle@-
@Range@
@RealScalar@
@Scalar@
@ScalarAll@
@Scale@
-@SolvePoly@-
-@Sort@-
@StartWindowThread@
@Subdiv2DEdgeOrg@
-@Subdiv2DNextEdge@-
-@Watershed@-
@Zero@
Some of these are synonyms for functions that are documented.
```
## History
##### James Bowman on 2010-05-10 18:40
```
```
##### Vadim Pisarevsky on 2011-08-06 19:18
```
Watershed, HoughCircles, DecodeImage, DecodeImageM, EncodeImage, CalcOpticalFlowFarneback, EqualizeHist done in r6354
```
##### Andrey Kamaev on 2012-03-20 11:45
```
- Description changed from From the latex2sphinx error log: <pre> The
following functions are und... to From the latex2sphinx error log:
The following functions are undocumented ... More
```
##### Alexander Shishkov on 2012-03-21 22:29
```
- Target version deleted ()
- Priority changed from High to Normal
```
##### Alexander Shishkov on 2012-03-22 14:26
```
- Priority changed from Normal to Low
```
##### Alexander Shishkov on 2012-03-25 20:49
```
- Assignee deleted (James Bowman)
```
##### Alexander Shishkov on 2012-04-05 12:48
```
- Target version deleted ()
```
| bug,auto-transferred,priority: low,category: documentation,affected: 2.4 | low | Critical |
97,411,040 | opencv | FIX_INTRINSICS for a single camera in StereoCalibrate | Transferred from http://code.opencv.org/issues/268
```
|| Patrick Beeson on 2010-04-07 20:34
|| Priority: Normal
|| Affected: None
|| Category: calibration, 3d
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## FIX_INTRINSICS for a single camera in StereoCalibrate
```
I wanted to write a ROS node that will find edges or other common features in a scene (I'm using the checkerboard) to allow 3D range devices, like 3D lidars or stereo cameras, to be registered with a color camera in order to produce colored blobs. I have done this manually in the past, but I want an automated solution.
The problem is that in order to do this correctly, you need to be able to freeze the internal parameters of one camera, while letting the others drift. This is because some devices have hardcoded internal parameters that they use in making the pointclouds. These are often quite different from what comes out of StereoCalibrate. So, I'd like to fix these intrinsic parameters, but let the intrinsic parameters of the second camera be optimized to "absorb" any error that exists from the first camera's intrinsic parameters.
(Hopefully that made sense).
One can do this with the MATLAB stereo_calib toolbox, and it works great for registering a color camera with a SwissRanger lidar (the extrinsic parameters and the 2D color camera intrinsic parameters are optimized while the SwissRanger intrinsic parameters remain fixed).
Unfortunately, there is no way to do this in OpenCV due to the rigidness of StereoCalibrate.
```
## History
##### Alexander Shishkov on 2012-02-12 21:43
```
- Description changed from I wanted to write a ROS node that will find
edges or other common features in... to I wanted to write a ROS node
that will find edges or other common features in... More
```
##### Andrey Kamaev on 2012-04-04 20:23
```
- Category changed from imgproc, video to calibration, 3d
```
| auto-transferred,priority: normal,feature,category: calib3d | low | Critical |
97,411,063 | opencv | Shape Descriptors for Maximally Stable Extremal Regions | Transferred from http://code.opencv.org/issues/340
```
|| René Meusel on 2010-05-11 10:01
|| Priority: Normal
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Shape Descriptors for Maximally Stable Extremal Regions
```
Existing feature descriptors like SURF or SIFT that are already available in OpenCV don't work for all scenarios because they mainly operate on textures and their surroundings. MSER, which works on shapes, is already available in OpenCV; however, comparing those features is done with SURF or SIFT, which operate on the texturing in the center of the shape as well.
The paper by Forssén and Lowe (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.3296&rep=rep1&type=pdf) proposes a solution that might work more reliably for comparing shape descriptors.
```
## History
##### Alexander Shishkov on 2012-02-12 21:39
```
- Description changed from Existing feature descriptors like SURF or
SIFT that are already available in ... to Existing feature
descriptors like SURF or SIFT that are already available in ... More
```
| auto-transferred,priority: normal,feature,category: imgproc,category: video | low | Minor |
97,411,099 | opencv | serialization | Transferred from http://code.opencv.org/issues/379
```
|| James Bowman on 2010-06-11 00:47
|| Priority: High
|| Affected: None
|| Category: python bindings
|| Tracker: Feature
|| Difficulty: Medium
|| PR:
|| Platform: None / None
```
## serialization
```
It would be very nice if Python OpenCV objects were pickleable.
To do this, can the load/store methods of cpp objects accept strings?
```
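Until native support exists, a workaround sketch is to give a thin wrapper object a `__reduce__` that round-trips through plain Python values; the `SimpleMat` class below is purely illustrative, not an OpenCV type:

```python
import pickle

class SimpleMat:
    """Illustrative stand-in for a matrix object -- NOT an OpenCV type.
    Real cv Mat objects define no pickling protocol, which is exactly
    what this ticket asks for."""

    def __init__(self, rows, cols, data):
        self.rows, self.cols, self.data = rows, cols, tuple(data)

    def __reduce__(self):
        # Teach pickle to rebuild the object from plain Python values;
        # a binding could do the same on top of the cpp load/store methods.
        return (SimpleMat, (self.rows, self.cols, self.data))

    def __eq__(self, other):
        return (self.rows, self.cols, self.data) == \
               (other.rows, other.cols, other.data)

m = SimpleMat(2, 2, [1, 2, 3, 4])
restored = pickle.loads(pickle.dumps(m))
```

If the cpp load/store methods accepted strings, as the ticket suggests, `__reduce__` could delegate to them instead of copying the data out.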
## History
##### Alexander Shishkov on 2012-02-12 21:33
```
- Description changed from It would be very nice if Python [[OpenCV]]
objects were pickleable. To do th... to It would be very nice if
Python OpenCV objects were pickleable. To do this, ... More
```
##### Andrey Kamaev on 2012-04-05 08:49
```
- Category changed from imgproc, video to python bindings
```
##### Vadim Pisarevsky on 2012-05-29 16:00
```
- Target version set to 3.0
```
##### Stefan R on 2012-11-28 10:22
```
+1
```
##### Maksim Shabunin on 2015-04-28 13:50
```
- Difficulty set to Medium
- Target version changed from 3.0 to 3.1
```
| auto-transferred,feature,category: python bindings,priority: low | low | Major |
97,411,201 | opencv | Mat Type Coherence: always returning CV_64FC1 Mat | Transferred from http://code.opencv.org/issues/443
```
|| Vito Macchia on 2010-07-09 12:48
|| Priority: Low
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Mat Type Coherence: always returning CV_64FC1 Mat
```
When porting my code from C to C++, I noticed that some C++ functions like:
cv::findHomography() or cv::getAffineTransform()
always return a CV_64FC1 cv::Mat, even when passing float (CV_32FCx) data.
Furthermore, cv::findHomography() and cv::convertPointsHomogeneous() only take CV_32 data (Mats).
So when writing code one has to take always care of the type and perform conversions such as:
H=findHomography(ptsA,ptsB,CV_RANSAC,0.05); //returns a CV_64FC1!!!
H.convertTo(H,CV_32FC1,1,0);
In order to perform other algebraic operations without getting a type mismatch error.
This did not happen with the corresponding C functions, which were much more flexible (you could pass both double and float CvMat) and coherent (32->32 and 64->64).
I would like to do all my computations with double precision numbers (CV_64) but I cannot because some of these functions do not manage double, but only float data (so why do they return double in many cases??).
Could you please make these functions type independent, or at least overload them so that CV_64 data may also be taken as input and the returned type is coherent with the input type? Also, having back the same flexibility as cvConvertPointHomogeneous would not be bad.
Thanks
Vito
Ubuntu Linux 9.10, OpenCV 2.1, g++ 4.4.1
```
## History
##### Alexander Shishkov on 2012-02-12 21:31
```
- Description changed from When porting my code from C to C++, I
noticed that some C++ functions like: ... to When porting my code
from C to C++, I noticed that some C++ functions like: ... More
```
##### Vadim Pisarevsky on 2012-04-24 14:32
```
let's lower priority of the task and postpone it till the next release
- Target version set to 3.0
- Priority changed from High to Low
- Assignee set to Vadim Pisarevsky
```
| auto-transferred,feature,category: imgproc,category: video,priority: low | low | Critical |
97,411,350 | opencv | Cocoa highgui implementation resets exit status | Transferred from http://code.opencv.org/issues/653
```
|| Michael Koval on 2010-11-07 01:29
|| Priority: Low
|| Affected: None
|| Category: highgui-gui
|| Tracker: Bug
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Cocoa highgui implementation resets exit status
```
If a highgui window is active at the time of the program's exit the return value of main() or parameter of exit() is ignored and treated as zero. As a minimal example, compile and execute the following program:
// Begin Example
#include <opencv/cv.h>
#include <opencv/highgui.h>
int main(int argc, char **argv) {
cv::Mat img = cv::imread(argv[1], 1);
cv::imshow("Image", img);
cv::waitKey(0);
return 1;
}
// End Example
Despite returning "1" from main(), Bash reports (i.e. echo $?) the return code as "0". I have confirmed this in OpenCV 2.0 using the Cocoa implementation of highgui, but have not checked other configurations.
```
## History
##### Alexander Shishkov on 2012-02-12 21:24
```
- Description changed from If a highgui window is active at the time
of the program's exit the return va... to If a highgui window is
active at the time of the program's exit the return va... More
```
##### Alexander Shishkov on 2012-03-21 22:03
```
- Target version deleted ()
```
##### Alexander Shishkov on 2012-03-25 20:38
```
- Priority changed from Normal to Low
```
##### Alexander Shishkov on 2012-04-05 12:48
```
- Target version deleted ()
```
##### Andrey Kamaev on 2012-08-16 15:11
```
- Category changed from highgui-images to highgui-gui
```
| bug,auto-transferred,priority: low,category: highgui-gui | low | Critical |
97,411,552 | opencv | Maybe there is a logical bug in function icvCalcOpticalFlowLK_8u32fR | Transferred from http://code.opencv.org/issues/784
```
|| BAI YUN on 2010-12-31 03:26
|| Priority: Low
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Maybe there is a logical bug in function icvCalcOpticalFlowLK_8u32fR
```
I have read the code of the function @icvCalcOpticalFlowLK_8u32fR@ and found that it is not perfect and even contains some mistakes:
# The calculation of the Gaussian convolution is not really done within the defined window;
# Memory is allocated and freed frequently;
# There is a mistake in the following code segment:
<pre><code class="cpp">
for( j = 0; j < imageWidth; j++ )
{
int addr = address;
A1B2 = 0;
A2 = 0;
B1 = 0;
C1 = 0;
C2 = 0;
for( i = -USpace; i <= BSpace; i++ )
{
//! Maybe "addr + j" should be changed to "addr + j + i * imgWidth"? In any case, "addr + j" does not look correct.
A2 += WII[addr + j].xx * KerY[i];
A1B2 += WII[addr + j].xy * KerY[i];
B1 += WII[addr + j].yy * KerY[i];
C2 += WII[addr + j].xt * KerY[i];
C1 += WII[addr + j].yt * KerY[i];
//!
addr += BufferWidth;
addr -= ((addr >= BufferSize) ? 0xffffffff : 0) & BufferSize;
}
}
</code></pre>
```
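For reference: in the quoted inner loop, `addr` is advanced by `BufferWidth` on every iteration (with a wrap-around at the buffer end), so the effective index of `WII[addr + j]` does change with `i`. A minimal standalone model of this ring-buffer column convolution (plain Python, purely illustrative, not the OpenCV code itself):

```python
W, ROWS = 4, 3                      # row width, rows held in the ring buffer
buf = list(range(1, ROWS * W + 1))  # 3 consecutive rows of 4 values
ker = [1, 2, 1]                     # vertical kernel
buf_size = ROWS * W

def column_conv(j, start_row_addr=0):
    """Vertical convolution at column j, walking the ring buffer row by row."""
    addr, acc = start_row_addr, 0
    for i in range(len(ker)):
        acc += buf[addr + j] * ker[i]
        addr += W                    # advance one row...
        if addr >= buf_size:
            addr -= buf_size         # ...wrapping at the buffer end

    return acc

def column_direct(j):
    """Direct computation over rows 0..2 at column j, for comparison."""
    return sum(buf[r * W + j] * ker[r] for r in range(len(ker)))
```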
## History
##### Vadim Pisarevsky on 2012-04-24 15:20
```
Since non-dense variant of LK is now considered obsolete (moved to legacy module), I lower priority of the bug
- Priority changed from High to Low
```
##### Andrey Kamaev on 2012-04-24 15:41
```
- Description changed from I had read the code of function
icvCalcOpticalFlowLK_8u32fR, find that, it is... to I had read the
code of function @icvCalcOpticalFlowLK_8u32fR@, find that, it ...
More
```
| auto-transferred,feature,category: imgproc,category: video,priority: low | low | Critical |
97,411,923 | opencv | DCT for odd length | Transferred from http://code.opencv.org/issues/895
```
|| Leonid Bilevich on 2011-02-15 02:09
|| Priority: Normal
|| Affected: None
|| Category: core
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## DCT for odd length
```
The DCT of an odd-length sequence is not implemented. The DCT of an even-length sequence is implemented by the method of Narasimha-Peterson [1].
There exists a very simple extension of [1] to the case of odd N (this extension is described in [2]). I suggest using this extension to implement the DCT for odd N.
I have implemented the suggested method in Matlab.
References:
[1] M. J. Narasimha and A. M. Peterson, "On the computation of the discrete cosine transform," IEEE Trans. Commun., vol. 26, no. 6, pp. 934–936, June 1978.
[2] J. Makhoul, "A fast cosine transform in one and two dimensions," IEEE Trans. Acoust. Speech Signal Process., vol. 28, no. 1, pp. 27–34, February 1980.
```
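Makhoul's construction indeed works for any N, odd or even, with a single N-point FFT: reorder the input (even-indexed samples first, then odd-indexed samples in reverse), transform, and apply a half-sample phase twiddle. A NumPy sketch checked against the DCT-II definition (illustrative only, not the OpenCV internals):

```python
import numpy as np

def dct2_direct(x):
    """Unscaled DCT-II straight from the definition (any N)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    return np.array([2.0 * np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
                     for k in range(N)])

def dct2_fft(x):
    """DCT-II via one N-point FFT (Makhoul, 1980); N may be odd or even."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Reorder: even-indexed samples first, then odd-indexed samples reversed.
    v = np.concatenate([x[::2], x[1::2][::-1]])
    V = np.fft.fft(v)
    k = np.arange(N)
    # Half-sample phase shift recovers the DCT-II from the DFT of v.
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2.0 * N)) * V)
```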
## History
##### Vadim Pisarevsky on 2011-02-16 11:41
```
Thanks for the reference. It will be put to OpenCV. However, because the current limitation is described in the reference manual, and for many real applications the array of an odd size can be padded with extra elements, I change the ticket type and priority.
```
##### Vadim Pisarevsky on 2012-04-24 15:21
```
lowering priority of the feature request
- Priority changed from High to Normal
```
| auto-transferred,priority: normal,feature,category: core | low | Minor |
97,411,988 | opencv | Add support for weighted datasets in clustering | Transferred from http://code.opencv.org/issues/923
```
|| Simon Pearson on 2011-02-27 19:53
|| Priority: Normal
|| Affected: None
|| Category: core
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Add support for weighted datasets in clustering
```
Many datasets include lots of repeated points. Such datasets can be represented more compactly by removing duplicate points and storing the number of each point as a "weight". It would be good if OpenCV clustering and partitioning algorithms supported such datasets.
Note: I will be attempting to rework the clustering code to support this as part of a project I'm working on and will submit it for inclusion if I am successful.
```
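The core change is small: in the centroid update, each point contributes proportionally to its weight, i.e. c = Σ w_i x_i / Σ w_i, which is equivalent to ordinary k-means on the expanded dataset in which point x_i is repeated w_i times. A minimal NumPy sketch of weighted k-means (a hypothetical helper, not OpenCV's kmeans API):

```python
import numpy as np

def weighted_kmeans(X, w, k, iters=20, seed=0):
    """K-means where each point carries a multiplicity weight w[i]."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # weighted centroid update: sum(w_i * x_i) / sum(w_i)
        for c in range(k):
            m = labels == c
            if m.any():
                centers[c] = np.average(X[m], axis=0, weights=w[m])
    return centers, labels
```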
## History
##### Alexander Shishkov on 2012-02-12 21:15
```
- Description changed from Many datasets include lots of repeated
points. Such datasets can be represent... to Many datasets include
lots of repeated points. Such datasets can be represent... More
```
| auto-transferred,priority: normal,feature,category: core | low | Minor |
97,412,040 | opencv | Add API to enumerate cameras | Transferred from http://code.opencv.org/issues/935
```
|| Rune Espeseth on 2011-03-08 10:27
|| Priority: Normal
|| Affected: None
|| Category: highgui-camera
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
## Add API to enumerate cameras
```
OpenCV should have an API to enumerate cameras. Without it, writing programs that support non-fixed hardware is not possible.
The point of this API is to retrieve information about the cameras attached to the system:
- NumberOf
- Name of each camera (can be used for user-selection in a GUI)
- Resolution of each camera
- other properties.
It will then be possible to write programs (f.i. surveillance software) that support 1 to any number of cameras instead of requiring a given number of cameras.
```
## History
##### Alexander Shishkov on 2012-02-12 21:12
```
- Description changed from [[OpenCV]] should have an API to enumerate
cameras. Without it, writing progr... to OpenCV should have an API
to enumerate cameras. Without it, writing programs ... More
```
##### Alexander Reshetnikov on 2012-04-25 14:27
```
- Priority changed from High to Normal
- Assignee set to Vadim Pisarevsky
```
##### Andrey Kamaev on 2012-08-16 15:23
```
- Category changed from highgui-images to highgui-camera
```
##### Neil Jansen on 2015-02-03 04:59
```
Hi, I know this issue has been open for a while, but is there any chance that it could be fixed easily? It would be very helpful for those of us using machine vision with more than 1 camera. I am even willing to place a cash bounty on this bug if necessary (tried to do it at bountysource.com, but they don't support the ChiliProject bug tracking platform that OpenCV uses...). Thanks!
```
##### Steven Puttemans on 2015-02-03 09:29
```
Depending on which camera type you are using this already exists. The recently adapted PvAPI interface of Allied Vision Technology cameras does exactly this. No idea if other do it or not.
```
| auto-transferred,priority: normal,feature,category: videoio(camera),future | medium | Critical |
97,412,084 | opencv | RANSAC parameters for estimateRigidTransform | Transferred from http://code.opencv.org/issues/936
```
|| Do Bi on 2011-03-08 20:26
|| Priority: Normal
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## RANSAC parameters for estimateRigidTransform
```
Hi,
I think, it would be cool, if RANSAC_MAX_ITERS, RANSAC_SIZE0 and RANSAC_GOOD_RATIO in cvEstimateRigidTransform were parameters and not hard coded, and if this flexibility would also be usable in cv::estimateRigidTransform.
Regards,
Dobias
```
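To illustrate what exposing those constants would look like, here is a minimal RANSAC loop (a line fit rather than a rigid transform, purely for illustration) in which the counterparts of RANSAC_MAX_ITERS, RANSAC_SIZE0 and RANSAC_GOOD_RATIO are ordinary keyword arguments:

```python
import numpy as np

def estimate_line_ransac(pts, max_iters=500, sample_size=2,
                         good_ratio=0.5, thresh=0.1, seed=0):
    """RANSAC fit of y = a*x + b; the three constants become parameters.

    Sketch assumes sample_size == 2 (two points define a line)."""
    pts = np.asarray(pts, dtype=float)
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(max_iters):                      # cf. RANSAC_MAX_ITERS
        idx = rng.choice(len(pts), size=sample_size, replace=False)  # cf. RANSAC_SIZE0
        (x1, y1), (x2, y2) = pts[idx[0]], pts[idx[1]]
        if x1 == x2:
            continue                                # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = int(np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < thresh))
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
            if inliers >= good_ratio * len(pts):    # cf. RANSAC_GOOD_RATIO
                break                               # good enough: stop early
    return best
```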
## History
| auto-transferred,priority: normal,feature,category: imgproc,category: video | low | Minor |
97,412,153 | opencv | Statistical Shape Model | Transferred from http://code.opencv.org/issues/966
```
|| Hamed Habibi Aghdam on 2011-03-27 13:17
|| Priority: Normal
|| Affected: None
|| Category: objdetect
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Statistical Shape Model
```
Statistical shape models such as the Active Shape Model (ASM) and the Active Appearance Model (AAM) play an important role in shape analysis problems. In particular, the Active Appearance Model with the inverse-compositional fitting method shows promising results in many applications. From my point of view, the OpenCV team could add these features in future releases. Following are two ASM/AAM libraries based on OpenCV (they need some improvements):
http://www.google.com/url?sa=t&source=web&cd=2&ved=0CCEQFjAB&url=http%3A%2F%2Fcode.google.com%2Fp%2Faam-library%2F&ei=YTaPTdDPA8udOs-t6aAC&usg=AFQjCNGBABkiGClVjsHhE8CjomLKl2Lxeg
http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBsQFjAA&url=http%3A%2F%2Fcode.google.com%2Fp%2Faam-opencv%2F&ei=YTaPTdDPA8udOs-t6aAC&usg=AFQjCNHBbqQloYPg8fYR0yrzyCFCZCkEzw
```
## History
| auto-transferred,priority: normal,feature,category: objdetect | low | Minor |
97,412,346 | opencv | SURF Buffer Reuse | Transferred from http://code.opencv.org/issues/1017
```
|| Nick Kitten on 2011-04-20 14:35
|| Priority: Normal
|| Affected: None
|| Category: nonfree
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## SURF Buffer Reuse
```
The CPU SURF implementation could really use an option to recycle buffers/filters between consecutive calls, at least when the input images all have the same type and size (a very common case). The GPU implementation already includes this optimization, and the constant reallocation of memory limits the usefulness of this version for real-time applications.
```
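The requested optimization is the usual buffer-reuse pattern: keep scratch memory alive between calls and reallocate only when the input shape or type changes. A sketch of the idea (a hypothetical wrapper class, not the SURF API):

```python
import numpy as np

class BufferedFilter:
    """Scratch memory is allocated once and recycled while the input
    keeps the same shape and dtype; it is reallocated only on change."""

    def __init__(self):
        self._scratch = None

    def apply(self, img):
        if (self._scratch is None
                or self._scratch.shape != img.shape
                or self._scratch.dtype != img.dtype):
            self._scratch = np.empty_like(img)    # (re)allocate only on change
        np.multiply(img, 2, out=self._scratch)    # stand-in for the real filter
        return self._scratch
```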
## History
##### Andrey Kamaev on 2012-08-21 15:05
```
- Category changed from features2d to nonfree
```
| auto-transferred,priority: normal,feature,category: nonfree | low | Minor |
97,412,516 | opencv | Usage of DescriptorExtractor for custom (=own) code: Design flaw? | Transferred from http://code.opencv.org/issues/1126
```
|| Stefan R on 2011-06-08 11:32
|| Priority: Normal
|| Affected: None
|| Category: features2d
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Usage of DescriptorExtractor for custom (=own) code: Design flaw?
```
Hi all, Hi Vadim,
I use the cv::FeatureDetector and cv::DescriptorExtractor interfaces to wrap some of our own existing feature detection and descriptor functions in a uniform way. So I can test different features by plugging in different implementations like this:
<pre>
<code class="cpp">
cv::FeatureDetector* detector = ...some implementation or NULL if not required...
cv::DescriptorExtractor * extractor = ...some implementation or NULL if not required...
vector<KeyPoint> keypoints;
if( detector != NULL )
detector->detect(frameSmallGray, keypoints, mask);
cv::Mat descriptors;
if( extractor != NULL )
extractor->compute(frameSmallGray, keypoints, descriptors);
</code></pre>
Due to the internal structure of some implementations, the actual feature detection part cannot in some cases be separated from the descriptor extraction part. In other words, such an implementation can only implement the DescriptorExtractor interface. However - and that's IMHO a potential design flaw - the DescriptorExtractor base class never calls the virtual computeImpl() function of the actual implementation when the given keypoint vector is empty:
<pre>
<code class="cpp">
void DescriptorExtractor::compute( const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors ) const
{
if( image.empty() || keypoints.empty() )
{
descriptors.release();
return;
}
KeyPointsFilter::runByImageBorder( keypoints, image.size(), 0 );
KeyPointsFilter::runByKeypointSize( keypoints, std::numeric_limits<float>::epsilon() );
computeImpl( image, keypoints, descriptors );
}
</code></pre>
As a workaround one can do the following, but that's rather inelegant:
<pre>
<code class="cpp">
cv::Mat descriptors;
vector<KeyPoint> keypoints;
if( keypoints.size() == 0 )
keypoints.push_back( KeyPoint() );
extractor->compute(frameSmallGray, keypoints, descriptors);
</code></pre>
I am not sure, maybe I am abusing these interfaces. If there is any particular better way to use these, please tell me.
Thanks,
Stefan
```
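The behaviour Stefan describes, and the dummy-keypoint workaround, can be modelled with a toy version of the template-method pattern (plain Python, purely illustrative, not the OpenCV classes):

```python
class DescriptorExtractor:
    """Toy model: the base class returns early on an empty keypoint
    list, so the virtual compute_impl() is never reached."""

    def compute(self, image, keypoints):
        if image is None or not keypoints:
            return []                      # compute_impl never called
        return self.compute_impl(image, keypoints)

class DetectAndDescribe(DescriptorExtractor):
    """An implementation that detects its own keypoints inside
    compute_impl and therefore needs to be callable with an empty list."""

    def compute_impl(self, image, keypoints):
        keypoints[:] = [(1, 1), (2, 2)]    # detection happens in here
        return ["d"] * len(keypoints)
```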
## History
##### Alexander Shishkov on 2012-02-12 21:03
```
- Description changed from Hi all, Hi Vadim, I use the
cv::FeatureDetector and cv::DescriptorExtractor ... to Hi all, Hi
Vadim, I use the cv::FeatureDetector and cv::DescriptorExtractor ...
More
```
##### Alexander Shishkov on 2012-02-12 21:03
```
- Description changed from Hi all, Hi Vadim, I use the
cv::FeatureDetector and cv::DescriptorExtractor ... to Hi all, Hi
Vadim, I use the cv::FeatureDetector and cv::DescriptorExtractor ...
More
```
##### Alexander Shishkov on 2012-03-21 20:47
```
- Tracker changed from Bug to Feature
```
| auto-transferred,priority: normal,feature,category: features2d | low | Critical |
97,412,587 | opencv | A method to discover what keypoints are removed when descriptors are computed. | Transferred from http://code.opencv.org/issues/1148
```
|| Hordur Johannsson on 2011-06-18 13:44
|| Priority: Normal
|| Affected: None
|| Category: features2d
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## A method to discover what keypoints are removed when descriptors are computed.
```
When calling DescriptorExtractor::compute it will remove the keypoints for which a descriptor cannot be computed. This can
be inconvenient if you need to associate the descriptors with some
auxiliary structure.
For example, if you have detected keypoints and computed the disparity of the points at some later stage. If the keypoints are removed, then the original structure needs to be searched to find the disparity.
If ordering is preserved then the search might not be too hard to do.
Possible solutions would be:
* Template the KeyPoint, allowing for custom keypoints, e.g. storing all extra information or a reference to the original structure.
* Make the removal optional and instead return a vector of bools specifying which descriptors are valid, or a list of the indices into the original vector.
```
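The second suggested option can be sketched generically: compute descriptors only for the keypoints that survive a predicate, and return both the surviving original indices and a vector of bools (a hypothetical helper, not the OpenCV interface):

```python
def compute_with_mapping(keypoints, can_compute, compute_one):
    """Like compute(), but instead of silently dropping keypoints,
    also report which original indices survived."""
    kept = [i for i, kp in enumerate(keypoints) if can_compute(kp)]
    descriptors = [compute_one(keypoints[i]) for i in kept]
    valid = [can_compute(kp) for kp in keypoints]   # vector of bools
    return descriptors, kept, valid
```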
## History
##### Alexander Shishkov on 2012-02-12 21:01
```
- Description changed from When calling
[[DescriptorExtractor]]::compute it will remove the keypoints fo...
to When calling DescriptorExtractor::compute it will remove the
keypoints for wh... More
```
| auto-transferred,feature,priority: low,category: features2d | low | Minor |
97,412,677 | opencv | remap for 64-bit images is performed with 32-bit precision | Transferred from http://code.opencv.org/issues/1167
```
|| Mark Desnoyer on 2011-06-24 18:22
|| Priority: Low
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
## remap for 64-bit images is performed with 32-bit precision
```
Hi,
I could be wrong here, but I believe that the remap function (used by all the warping functions) can only be run with 32-bit precision. Let me know if I'm misinterpreting the code.
Change r5351 now allows the user to use 64-bit images; however, remap performs an optimization that switches the coordinates to a fixed-point format and then does a table lookup. Unfortunately, the lookup table is only 16-bit, which works fine if you're in a 32-bit image, but you lose precision in the 64-bit case.
I know this will be a pain in the butt to fix, so for now, I think it's most important to include a warning about the precision loss in the documentation. Maybe even a compiler/runtime warning if the user is trying to remap 64-bit images.
For some context, I'm running into this because I'm trying to do some very precise affine warps to images. It works fine in Matlab, but I get a slightly different answer with the OpenCV warpAffine consistent with losing precision somewhere.
```
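The magnitude of the loss is easy to estimate: if fractional map coordinates are snapped to a fixed-point sub-pixel grid (assume, for illustration, 5 fractional bits, i.e. 32 positions between pixels), the interpolation error is bounded by half a grid step, which dwarfs double-precision epsilon (~2.2e-16). A small sketch with linear interpolation (a model of the effect, not the OpenCV code):

```python
def lerp(v0, v1, t):
    """Linear interpolation between two samples."""
    return v0 * (1.0 - t) + v1 * t

INTER_BITS = 5                 # assumed: 5 fractional bits => 32 sub-pixel steps
STEPS = 1 << INTER_BITS

def quantize(t):
    """Snap a fractional coordinate to the fixed-point sub-pixel grid."""
    return round(t * STEPS) / STEPS

t = 0.123456789
# error introduced by quantizing the coordinate before interpolating
err = abs(lerp(0.0, 1.0, t) - lerp(0.0, 1.0, quantize(t)))
```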
## History
##### Alexander Shishkov on 2012-02-12 20:59
```
- Description changed from Hi, I could be wrong here, but I believe
that the remap function (used by al... to Hi, I could be wrong here,
but I believe that the remap function (used by al... More
```
##### Alexander Shishkov on 2012-03-21 20:37
```
- Target version deleted ()
```
##### Alexander Shishkov on 2012-03-25 20:32
```
- Priority changed from Normal to Low
```
##### Alexander Shishkov on 2012-03-25 20:46
```
- Assignee deleted (Vadim Pisarevsky)
```
##### Alexander Shishkov on 2012-04-05 12:44
```
- Target version deleted ()
```
##### Andrey Kamaev on 2012-04-10 11:18
```
- Description changed from Hi, I could be wrong here, but I believe
that the remap function (used by al... to Hi, I could be wrong here,
but I believe that the remap function (used by al... More
```
##### Vadim Pisarevsky on 2015-05-25 21:35
```
ok, converting it to a feature request
- Tracker changed from Bug to Feature
```
| auto-transferred,feature,category: imgproc,category: video,priority: low | low | Critical |