id (int64, 393k–2.82B) | repo (stringclasses, 68 values) | title (stringlengths, 1–936) | body (stringlengths, 0–256k, ⌀) | labels (stringlengths, 2–508) | priority (stringclasses, 3 values) | severity (stringclasses, 3 values) |
---|---|---|---|---|---|---|
358,861,012 | go | internal/bytealg: valgrind reports invalid reads by C.GoString | ### What version of Go are you using (`go version`)?
`go version go1.11 linux/amd64`
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details>
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/kivikakk/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/kivikakk/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build515123667=/tmp/go-build -gno-record-gcc-switches"
```
</details>
### What did you do?
```go
package main
/*
#include <string.h>
#include <stdlib.h>
char* s() {
return strdup("hello");
}
*/
import "C"
import "unsafe"
func main() {
s := C.s()
C.GoString(s)
C.free(unsafe.Pointer(s))
}
```
```console
$ go build
$ valgrind ./sscce
==11241== Memcheck, a memory error detector
==11241== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==11241== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright info
==11241== Command: ./sscce
==11241==
==11241== Warning: ignored attempt to set SIGRT32 handler in sigaction();
==11241== the SIGRT32 signal is used internally by Valgrind
==11241== Warning: ignored attempt to set SIGRT32 handler in sigaction();
==11241== the SIGRT32 signal is used internally by Valgrind
==11241== Warning: client switching stacks? SP change: 0xfff0001b0 --> 0xc0000367d8
==11241== to suppress, use: --max-stackframe=755931244072 or greater
==11241== Warning: client switching stacks? SP change: 0xc000036790 --> 0xfff000260
==11241== to suppress, use: --max-stackframe=755931243824 or greater
==11241== Warning: client switching stacks? SP change: 0xfff000260 --> 0xc000036790
==11241== to suppress, use: --max-stackframe=755931243824 or greater
==11241== further instances of this message will not be shown.
==11241== Conditional jump or move depends on uninitialised value(s)
==11241== at 0x40265B: indexbytebody (/usr/local/go/src/internal/bytealg/indexbyte_amd64.s:151)
==11241==
==11241==
==11241== HEAP SUMMARY:
==11241== in use at exit: 1,200 bytes in 6 blocks
==11241== total heap usage: 10 allocs, 4 frees, 1,310 bytes allocated
==11241==
==11241== LEAK SUMMARY:
==11241== definitely lost: 0 bytes in 0 blocks
==11241== indirectly lost: 0 bytes in 0 blocks
==11241== possibly lost: 1,152 bytes in 4 blocks
==11241== still reachable: 48 bytes in 2 blocks
==11241== suppressed: 0 bytes in 0 blocks
==11241== Rerun with --leak-check=full to see details of leaked memory
==11241==
==11241== For counts of detected and suppressed errors, rerun with: -v
==11241== Use --track-origins=yes to see where uninitialised values come from
==11241== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
```
(Ignore the "possibly lost" blocks; they're pthreads started by the Go runtime.)
### What did you expect to see?
No conditional jump/move depending on uninitialised values.
### What did you see instead?
A conditional jump/move depending on uninitialised values.
---
The nature of the issue becomes more apparent if you run Valgrind with `--partial-loads-ok=no`:
```console
$ valgrind --partial-loads-ok=no ./sscce
==11376== Memcheck, a memory error detector
==11376== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==11376== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright info
==11376== Command: ./sscce
==11376==
==11376== Warning: ignored attempt to set SIGRT32 handler in sigaction();
==11376== the SIGRT32 signal is used internally by Valgrind
==11376== Warning: ignored attempt to set SIGRT32 handler in sigaction();
==11376== the SIGRT32 signal is used internally by Valgrind
==11376== Warning: client switching stacks? SP change: 0xfff0001b0 --> 0xc0000367d8
==11376== to suppress, use: --max-stackframe=755931244072 or greater
==11376== Warning: client switching stacks? SP change: 0xc000036790 --> 0xfff000260
==11376== to suppress, use: --max-stackframe=755931243824 or greater
==11376== Warning: client switching stacks? SP change: 0xfff000260 --> 0xc000036790
==11376== to suppress, use: --max-stackframe=755931243824 or greater
==11376== further instances of this message will not be shown.
==11376== Invalid read of size 32
==11376== at 0x40264E: indexbytebody (/usr/local/go/src/internal/bytealg/indexbyte_amd64.s:148)
==11376== Address 0x53f47c0 is 0 bytes inside a block of size 12 alloc'd
==11376== at 0x4C2BBAF: malloc (vg_replace_malloc.c:299)
==11376== by 0x45165D: s (main.go:7)
==11376== by 0x4516A5: _cgo_a004886745c9_Cfunc_s (cgo-gcc-prolog:54)
==11376== by 0x44A0DF: runtime.asmcgocall (/usr/local/go/src/runtime/asm_amd64.s:637)
==11376== by 0x7: ???
==11376== by 0x6C287F: ??? (in /home/kivikakk/sscce/sscce)
==11376== by 0xFFF00024F: ???
==11376== by 0x4462B1: runtime.(*mcache).nextFree.func1 (/usr/local/go/src/runtime/malloc.go:749)
==11376== by 0x448905: runtime.systemstack (/usr/local/go/src/runtime/asm_amd64.s:351)
==11376== by 0x4283BF: ??? (/usr/local/go/src/runtime/proc.go:1146)
==11376== by 0x448798: runtime.rt0_go (/usr/local/go/src/runtime/asm_amd64.s:201)
==11376== by 0x451DEF: ??? (in /home/kivikakk/sscce/sscce)
==11376==
==11376==
==11376== HEAP SUMMARY:
==11376== in use at exit: 1,200 bytes in 6 blocks
==11376== total heap usage: 10 allocs, 4 frees, 1,316 bytes allocated
==11376==
==11376== LEAK SUMMARY:
==11376== definitely lost: 0 bytes in 0 blocks
==11376== indirectly lost: 0 bytes in 0 blocks
==11376== possibly lost: 1,152 bytes in 4 blocks
==11376== still reachable: 48 bytes in 2 blocks
==11376== suppressed: 0 bytes in 0 blocks
==11376== Rerun with --leak-check=full to see details of leaked memory
==11376==
==11376== For counts of detected and suppressed errors, rerun with: -v
==11376== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
```
I understand some work has been done to ensure `IndexByte` doesn't read across a page boundary (https://github.com/golang/go/issues/24206), so this over-read is unlikely to have any negative effect in practice. Should I just add a suppression and call it a day?
```
{
indexbytebody_loves_to_read
Memcheck:Addr32
fun:indexbytebody
}
``` | help wanted,NeedsInvestigation | medium | Critical |
358,945,806 | TypeScript | Conditional types don't work with Mapped types when you extend enum keys |
**TypeScript Version:** 3.0.3, 3.1.0-dev.20180907
**Search Terms:**
mapped, conditional, enum, extends
**Code**
```ts
enum Commands {
one = "one",
two = "two",
three = "three"
}
const SpecialCommandsObj = {
[Commands.one]: Commands.one,
[Commands.two]: Commands.two,
};
// a and b are correct
type a = Commands.one extends keyof typeof SpecialCommandsObj ? null : any; // null - ok
type b = Commands.three extends keyof typeof SpecialCommandsObj ? null : any; // any - ok
interface CommandAction {
execute(element: any): any;
}
interface SpecialCommandAction {
execute(element: null): any;
}
type CommandsActions = {
[Key in keyof typeof Commands]: Key extends keyof typeof SpecialCommandsObj ? SpecialCommandAction : CommandAction
};
const Actions: CommandsActions = {
[Commands.one]: {
// ERROR
execute: (element) => {} // ELEMENT IS ANY, NOT NULL
},
[Commands.two]: {
// ERROR
execute: (element) => {} // ELEMENT IS ANY, NOT NULL
},
[Commands.three]: {
execute: (element) => {}
},
}
```
**Expected behavior:**
In the `Actions` object, the argument of the `execute` method for `Commands.one` and `Commands.two` should be of type `null`, not `any`.
**Actual behavior:**
`element` is always `any`.
Type inference for types `a` and `b` is fine, so the problem only occurs with mapped types.
It works if you don't use enum members as keys in `SpecialCommandsObj`. Swap `SpecialCommandsObj` with this one and it will start working:
```ts
const SpecialCommandsObj = {
"one": Commands.one,
"two": Commands.two,
};
```
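Alternatively, a variant that keeps the enum keys seems to work by mapping over the enum's value union instead of `keyof typeof Commands` (a sketch, reasoned through rather than tested):
```ts
type CommandsActions = {
  [Key in Commands]: Key extends keyof typeof SpecialCommandsObj
    ? SpecialCommandAction
    : CommandAction
};
```
Since `Commands.one extends keyof typeof SpecialCommandsObj` is true (exactly as type `a` above shows), the conditional should now resolve as intended.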
**Playground Link:** https://bit.ly/2QmdTIo
| Needs Investigation | low | Critical |
358,952,723 | flutter | Allow plugins to depend on podspecs outside of CocoaPods specs repo | I want to make a plugin that depends on the source which is outside of Specs
like this in **myplugin.podspec**:
s.dependency 'Hoge', :git => 'https://github.com/hoge/fuga_specs.git'
However it looks it is not possible to do that in podspec.
Reference:
https://stackoverflow.com/questions/22447062/how-do-i-create-a-cocoapods-podspec-that-has-a-dependency-that-exists-outside-of
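That thread suggests the usual CocoaPods answer (an assumption on my part that it carries over to Flutter plugins): a podspec cannot declare an external source at all, so the consuming app has to pin it in its Podfile, e.g.:
```ruby
# In the consuming app's Podfile, not the plugin's podspec:
pod 'Hoge', :git => 'https://github.com/hoge/fuga_specs.git'
```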
Is there a way to solve the problem?
Thank you in advance.
| platform-ios,tool,d: stackoverflow,P3,a: plugins,team-ios,triaged-ios | low | Critical |
358,954,088 | TypeScript | Check property declarations in JS | Recently I found it's helpful to add JSDOC style comment to hint my IDE the variable's type.
For example, If I add comment to a variable:
```js
function fx(jsonAsset) {
/** @type {import("a-dts-file-describes-the-type-of-the-json-object").Type} */
const typedObject = jsonAsset.json;
// IDE will give the member list of the type
typedObject.
}
```
My IDE (the latest Visual Studio Code) correctly identifies the type of `typedObject`,
so it gives me very good IntelliSense. Even when I enable type checking through `// @ts-check`, it reports type errors.
This feature is helpful to me and my project, which is written in **JavaScript** (we may rewrite it in TypeScript eventually, but not for now).
We are using Babel with a plugin called `transform-class-properties`, which means we can declare class members in ES6 classes just like TypeScript does:
```js
class C {
  /** @type {HTMLCanvasElement} */
@decorator(/* ... */)
canvas = null;
}
```
But when I add a similar comment to the `canvas` declaration, it is not recognized by the IDE. The IDE seems to infer the type only from the initializer. Could you support this feature, please? | Bug | low | Critical |
358,974,323 | pytorch | [feature request] - Allow sequences lengths to be 0 in PackSequence | Hi,
### Context
I'm currently working on some NLP stuff which includes working on character-level encoding through RNNs. For this, I'm using pack_padded_sequence/pad_packed_sequence which does the job for word-level encoding, but starts to be a little bit more annoying for the chars.
**_I'm using the batch_first=True layout, but the same can easily be applied to batch_first=False._**
The character tensor's shape is [B, W, C, *], where B is the batch axis, W is the word axis, and C is the character axis for each word (in our case we flatten the first two axes B and W, as each word is independent of the others, resulting in a forwarded tensor of shape [B x W, C, *]).
Then, as W already contains some padded indexes, some C entries are padded entries, thus having length = 0, which throws the following error when trying to use pack_padded_sequence:
`ValueError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0`
It seems that this behaviour comes from the following: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/PackedSequence.cpp#L22
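For concreteness, here is a minimal sketch that triggers the error (shapes are illustrative):
```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

padded = torch.zeros(2, 5, 8)  # [batch, seq, features]
lengths = [5, 0]               # the second sequence is entirely padding
# Raises: ValueError: Length of all samples has to be greater than 0, ...
packed = pack_padded_sequence(padded, lengths, batch_first=True)
```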
### Proposition
It would be interesting to relax the constraint in PackSequence so that sequence lengths may be >= 0 (rather than > 0), handling the "0-case" directly, as is already done when generating each batch that goes to the RNN.
Thanks,
Morgan
cc @albanD @mruberry @jbschlosser @zou3519 | module: nn,module: rnn,triaged,enhancement | low | Critical |
358,999,708 | go | cmd/go: add modvendor sub-command | Creating this issue as a follow up to https://github.com/golang/go/issues/26366 (and others).
`go mod vendor` is documented as follows:
```
Vendor resets the main module's vendor directory to include all packages
needed to build and test all the main module's packages.
It does not include test code for vendored packages.
```
Much of the surprise in https://github.com/golang/go/issues/26366 comes about because people are expecting "other" files to also be included in `vendor`.
An alternative to the Go 1.5 `vendor` is to instead "vendor" the module download cache. A proof of concept of this approach is presented here:
https://github.com/myitcv/go-modules-by-example/blob/master/012_modvendor/README.md
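In shell terms, that approach amounts to roughly the following (a sketch; the exact cache path and the file-based `GOPROXY` form are assumptions drawn from that writeup):
```console
$ go mod download                                          # fill the module download cache
$ cp -r "$(go env GOPATH)/pkg/mod/cache/download" modvendor
$ GOPROXY="file://$PWD/modvendor" go build ./...           # build against the vendored cache
```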
Hence I propose `go mod modvendor`, which would be documented as follows:
```
Modvendor resets the main module's modvendor directory to include a
copy of the module download cache required for the main module and its
transitive dependencies.
```
Name and the documentation clearly not final.
### Benefits (WIP)
* Eliminates any potential confusion around what is in/not in `vendor`
* Easier to contribute patches/fixes to upstream module authors (via something like [`gohack`](https://github.com/rogpeppe/gohack)), because the entire module is available
* The modules included in `modvendor` are an _exact_ copy of the original modules. This makes it easier to check their fidelity at any point in time, with either the source or some other reference (e.g. Athens)
* Makes clear the source of modules, via the use of `GOPROXY=/path/to/modvendor`. No potential for confusion like "will the `modvendor` of my dependencies be used?"
* A single deliverable
* Fully reproducible and high fidelity builds (modules in general gives us this, so just re-emphasising the point)
* ...
### Costs (WIP)
* The above steps are currently manual; tooling (the go tool?) can fix this
* Reviewing "vendored" dependencies is now more involved without further tooling. For example it's no longer possible to simply browse the source of a dependency via a GitHub PR when it is added. Again, tooling could help here. As could some central source of truth for trusted, reviewed modules (Athens? cc @bketelsen @arschles)
* ...
### Related discussion
Somewhat related to discussion in https://github.com/golang/go/issues/27227 (cc @rasky) where it is suggested the existence of `vendor` should imply the `-mod=vendor` flag. The same argument could be applied here, namely the existence of `modvendor` implying the setting of `GOPROXY=/path/to/modvendor`. This presupposes, however, that the idea of `modvendor` makes sense in the first place.
### Background discussion:
https://twitter.com/_myitcv/status/1038885458950934528
cc @StabbyCutyou @fatih
cc @bcmills | NeedsDecision,modules | medium | Major |
359,065,710 | rust | (Identical) function call with Generic arguments breaks compilation when called from within another function that has unrelated Generic arguments | I searched and couldn't find a similar issue for now.
The following code shows an example of a function ```interpolate_linear``` that when called with identical (literal) arguments, causes a complier error in the function ```percentile``` (line 16), but no error in the function ```call_interpolate_linear``` (line 21). I haven't yet had time to try and narrow the issue yet, but nevertheless I think the code is valid.
```rust
use std::convert::From;
use std::fmt::Debug;
pub fn percentile<T: Debug + Copy>(
data_set: &Vec<T>,
n: f64,
) -> f64
where
f64: From<T>,
{
let rank = n * (data_set.len() as f64 - 1f64);
let below = rank.floor() as usize;
let mut above = rank.floor() as usize + 1;
if above == data_set.len() {
above = data_set.len() - 1;
}
let lower_value = data_set[below];
let higher_value = data_set[above];
interpolate_linear(&1.0, &2.0, 0.5); // this line does not compile
interpolate_linear(&lower_value, &higher_value, n)
}
pub fn call_interpolate_linear() {
interpolate_linear(&1.0, &2.0, 0.5); // this line does compile
}
pub fn interpolate_linear<T: Copy>(
a: &T,
b: &T,
how_far_from_a_to_b: f64,
) -> f64
where
f64: From<T>,
{
let a = f64::from(*a);
let b = f64::from(*b);
a + (how_far_from_a_to_b * (b - a))
}
fn main() {}
```
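For what it's worth, pinning the type parameter explicitly at the failing call site appears to sidestep the inference clash (a sketch, not verified against this exact snippet):
```rust
interpolate_linear::<f64>(&1.0, &2.0, 0.5);
```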
Here is the compiler error:
```
error[E0308]: mismatched types
--> functions_issue.rs:16:24
|
16 | interpolate_linear(&1.0, &2.0, 0.5); // this line does not compile
| ^^^^ expected type parameter, found floating-point variable
|
= note: expected type `&T`
found type `&{float}`
error[E0308]: mismatched types
--> functions_issue.rs:16:30
|
16 | interpolate_linear(&1.0, &2.0, 0.5); // this line does not compile
| ^^^^ expected type parameter, found floating-point variable
|
= note: expected type `&T`
found type `&{float}`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0308`.
``` | C-enhancement,A-diagnostics,T-compiler | low | Critical |
359,070,956 | rust | NLL: document specs for (new) semantics in rust ref (incl. deviations from RFC) | The [NLL RFC][] provided a specification for what we planned to implement. (Or at least it tried to do so.)
[NLL RFC]: https://github.com/rust-lang/rfcs/blob/master/text/2094-nll.md
Since then, the NLL implementation made something that deviated in various ways from that specification.
This ticket is just noting that:
1. we did deviate in various ways, and
2. we should plan to document the actual semantics, in a manner suitable for the rust reference.
It would be good to link here any PRs/issues where such deviations were implemented or discussed. | A-lifetimes,P-medium,A-borrow-checker,T-compiler,A-NLL,NLL-reference | low | Major |
359,071,761 | electron | AppContainer Process Isolation on Windows 10 | Using tools like [`electron-windows-store`](https://github.com/felixrieseberg/electron-windows-store), Electron can be packaged as an `appx` app and run in the same environment as Windows Store apps, commonly known as UWP apps. They're still `exe` binaries, they're just running as part of a package and with a package identity attached.
While those applications are running within a scoped amount of virtualization (namely, filesystem and registry redirection), they're not actually running in a process isolation sandbox like their proper UWP siblings. Their capability is therefore `<rescap:Capability Name="runFullTrust"/>`.
With RS5 (October 2018 Update), Windows 10 introduces "partial trust", which will allow applications running in the Desktop Bridge to make use of the same AppContainer process-isolation security as proper UWP applications.
### Getting Electron ready for AppContainer
- [ ] **Get final appxmanifest.xml for partial trust applications from Microsoft**
- [ ] **Create a simple test harness to create and test partial trust Electron apps**
- [ ] **Verify which APIs need UWP additions to function within sandbox**
- [ ] **Extend non-functional APIs with UWP APIs**
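As a concrete illustration of that last item (a hypothetical sketch; `OpenExternal`, `IsRunningInDesktopBridge`, and `LaunchUriViaUwp` are illustrative names, not Electron's actual internals), APIs that currently shell out to Win32 calls blocked inside AppContainer would need a UWP code path behind the Desktop Bridge check:
```cpp
#include <windows.h>
#include <shellapi.h>
#include <string>

// Open an external URL, choosing an AppContainer-safe path when sandboxed.
void OpenExternal(const std::wstring& url) {
  if (IsRunningInDesktopBridge()) {
    LaunchUriViaUwp(url);  // hypothetical wrapper over Windows.System.Launcher
  } else {
    ShellExecuteW(nullptr, L"open", url.c_str(), nullptr, nullptr, SW_SHOWNORMAL);
  }
}
```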
We know, as an example, that `shell.*` APIs do not work today. We'll need to augment those APIs to use UWP counterparts when [`isRunningInDesktopBridge()`](https://github.com/electron/electron/blob/163e2d35272a1269394add1b941ad961ad801c31/brightray/common/application_info_win.cc#L67-L107) returns `true`. We need to audit every single API and make sure each one either works fine or is documented as non-functional. | enhancement :sparkles:,platform/windows | low | Minor |
359,095,183 | pytorch | at::Device makes it very easy to write buggy code | Imagine this method:
```cpp
Tensor Tensor::cuda(Device dev = at::Device(at::DeviceKind::CUDA)) {
if (this->device() == dev) {
return *this;
}
return ...; // do the transfer
}
```
Can you see the error? I didn't.
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
The problem lies in `operator==` of `at::Device`, which enforces strict equality of device kinds and indices. This would be fine if indices were always guaranteed to be valid, but they are in fact optional. On the other hand, the device objects returned from `this->device()` are always guaranteed to have an index set, and so they will always compare as not equal to the default argument, *always forcing the tensor copy*.
It would be great if we could resolve this in some way. I have three proposals:
#### Create another device type
We want to allow people to specify a device type for their scripts, which often involves specifying the kind only and ignoring the index. This is what `at::Device` aims to do. On the other hand, internal library functions are almost always functional, and use the device to decide what to do *right now*. Those use cases pretty much always require us to have a device index bound to that object.
This proposal would be to add `at::FullDevice` (better name ideas welcome), which acts similarly to `at::Device`, but is guaranteed to have a device index set. We would need implicit conversions going in both ways because:
- `at::Device -> at::FullDevice` would be needed when users are calling into library functions (they would take `FullDevice` as arguments now). This conversion would be a no-op if the index is specified, and would retrieve the currently selected CUDA device otherwise.
- `at::FullDevice -> at::Device` this is a trivial conversion. `Tensor::device()` should now return `at::FullDevice`, but user code will always work with `at::Device`, which is why we need the conversion.
#### Do nothing. Add a warning maybe.
Plus a method like `ensure_has_index()`, which would set the index to the currently selected device of this kind if it's missing.
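A sketch of how the original method might use such a helper (the helper itself is hypothetical here):
```cpp
Tensor Tensor::cuda(Device dev = at::Device(at::DeviceKind::CUDA)) {
  dev.ensure_has_index();  // bind a missing index to the current CUDA device
  if (this->device() == dev) {
    return *this;          // now compares equal when no transfer is needed
  }
  return ...;              // do the transfer
}
```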
#### Change `operator==` of `at::Device` to depend on `thread_local` state
This means that if we have a CUDA device with no index set, inside `operator==` we will assume that it has an index equal to the currently selected device. I don't like this one all that much, because it makes a supposedly simple operation quite complicated, and might break `operator==` invariants when someone e.g. uses a set of devices, and changes the thread local state during its lifetime.
---
cc @goldsborough | triaged,better-engineering | low | Critical |
359,138,529 | vue | Oddity with JS transition hooks used in combination with CSS | ### Version
2.5.17
### Reproduction link
[https://codesandbox.io/s/6x4k5vrrkn](https://codesandbox.io/s/6x4k5vrrkn)
### Steps to reproduce
Remove the [unused] `done` parameter from the `leave` callback signature in `SideSheet.vue`.
### What is expected?
The component to transition both on enter and leave.

### What is actually happening?
The component enters immediately (without transitioning).

---
The `done` callback shouldn't be needed if the transition duration is implicit in CSS (as noted in the docs). However, it is unclear why merely keeping the `done` parameter in the function signature makes it "work", given that the parameter is never used in the function body (Vue appears to inspect the hook's declared arity to decide whether the transition is user-controlled).
| transition | medium | Minor |
359,139,444 | vue | Race condition in transition-group | ### Version
2.5.17
### Reproduction link
[https://jsfiddle.net/nkovacs/Lskfredn/](https://jsfiddle.net/nkovacs/Lskfredn/)
### Steps to reproduce
1. Click the add button
### What is expected?
the animation should work properly, and animation classes should be cleaned up
### What is actually happening?
the enter animation doesn't work and the new item's element keeps the `list-enter-to` class forever
---
The style-tag binding triggers a second re-render of the transition-group component between the transition-group setting `_enterCb` on the entering new child and `nextFrame` firing its callback. `prevChildren` is updated to include the new item, and `update` calls the pending `_enterCb`. When `nextFrame` then fires, `_enterCb` can only be called once, so it is not called again, and the `enter-to` class remains on the element.
This only happens if the transition-group has a move transition.
The bug also occurs if the elements are changed between `update` and `nextFrame`: https://jsfiddle.net/nkovacs/cnjso1h5/
| transition | low | Critical |
359,151,138 | go | cmd/compile: automatically stack-allocate small non-escaping slices of dynamic size | This commit:
https://github.com/golang/go/commit/95a11c7381e01fdaaf34e25b82db0632081ab74e
shows a real-world performance gain triggered by moving a small non-escaping slice to the stack. It is my understanding that the Go compiler always allocates such a slice on the heap because its length is not known at compile time.
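A sketch of the kind of manual transformation that commit applies (variable names illustrative):
```go
// Use a small fixed-size stack buffer when the dynamic length fits,
// falling back to the heap otherwise.
var buf []byte
if n <= 32 {
	var stackBuf [32]byte
	buf = stackBuf[:n] // stays on the stack as long as buf does not escape
} else {
	buf = make([]byte, n) // heap-allocated
}
```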
Would it make sense to attempt a similar code transformation for many/all non-escaping slices? What would be the cons? Any suggestion on how to identify which slices could benefit from this transformation and which would possibly just create overhead? | Performance,NeedsInvestigation,compiler/runtime | low | Major |
359,165,290 | pytorch | [JIT][tracer] Slicing shape is specialized to tensor rank | Example:
```python
import torch
def fill_row_zero(x):
x = torch.cat((torch.rand(1, *x.shape[1:]), x[1:]), dim=0)
return x
traced = torch.jit.trace(fill_row_zero, (torch.rand(3, 4),))
print(traced.graph)
traced(torch.rand(3, 4, 5))
```
```
graph(%0 : Float(3, 4)) {
%4 : int = prim::Constant[value=1]()
%5 : int = aten::size(%0, %4)
%6 : Long() = prim::NumToTensor(%5)
%7 : int = prim::TensorToNum(%6)
%8 : int = prim::Constant[value=1]()
%9 : int[] = prim::ListConstruct(%8, %7)
%10 : int = prim::Constant[value=6]()
%11 : int = prim::Constant[value=0]()
%12 : int[] = prim::Constant[value=[0, -1]]()
%13 : Float(1, 4) = aten::rand(%9, %10, %11, %12)
%14 : int = prim::Constant[value=0]()
%15 : int = prim::Constant[value=1]()
%16 : int = prim::Constant[value=9223372036854775807]()
%17 : int = prim::Constant[value=1]()
%18 : Float(2, 4) = aten::slice(%0, %14, %15, %16, %17)
%19 : Dynamic[] = prim::ListConstruct(%13, %18)
%20 : int = prim::Constant[value=0]()
%21 : Float(3, 4) = aten::cat(%19, %20)
return (%21);
}
```
These `size()` calls we emit (e.g. `%5 : int = aten::size(%0, %4)`) are specialized to the rank of the tensor we called `.shape` on, which is why the trace above fails on the rank-3 input. | oncall: jit | low | Minor |
359,184,495 | flutter | Search widget's text field might not have large enough tap area | Currently at 44 in height due to the default edgeInsets of the no-border decoration. The Android a11y scanner doesn't flag this for some reason; we should determine why. | framework,a: accessibility,P2,team-framework,triaged-framework | low | Minor |
359,199,107 | pytorch | [CLEANUP] Context functions should return TypeExtendedInterface, not Type | See https://github.com/pytorch/pytorch/pull/11461 for information.
CC @ezyang. | triaged,better-engineering | low | Minor |
359,213,205 | kubernetes | Reduce the set of metrics exposed by the kubelet | ### Background
In 1.12, the kubelet exposes a number of sources for metrics directly from [cAdvisor](https://github.com/google/cadvisor#cadvisor). This includes:
* [cAdvisor prometheus metrics](https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md#prometheus-metrics) at [`/metrics/cadvisor`](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L277)
* [cAdvisor v1 Json API](https://github.com/google/cadvisor/blob/master/info/v1/container.go#L126) at [`/stats/`, `/stats/container`, `/stats/{podName}/{containerName}`, and `/stats/{namespace}/{podName}/{uid}/{containerName}`](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/stats/handler.go#L111)
* [cAdvisor machine info](https://github.com/google/cadvisor/blob/master/info/v1/machine.go#L159) at [`/spec`](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L291)
The kubelet also exposes the [summary API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go#L24), which is not exposed directly by cAdvisor, but queries cAdvisor as one of its sources for metrics.
The [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) documentation describes the path for "core" metrics, and for "monitoring" metrics. The [Core Metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/core-metrics-pipeline.md#core-metrics-in-kubelet) proposal describes the set of metrics that we consider core, and their uses. The motivation for the split architecture is:
* To minimize the performance impact of stats collection for core metrics, allowing these to be collected more frequently
* To make the monitoring pipeline replaceable, and extensible.
### Current kubelet metrics that are not included in core metrics
* Pod and Node-level Network Metrics
* Persistent Volume Metrics
* Container-level (Nvidia) GPU Metrics
* Node-Level RLimit Metrics
* Misc Memory Metrics (e.g. PageFaults)
* Container, Pod, and Node-level Inode metrics (for ephemeral storage)
* Container, Pod, and Node-level DiskIO metrics (from cAdvisor)
Deprecating and removing the Summary API will require out-of-tree sources for each of these metrics. "Direct" cAdvisor endpoints are not often used, and have even been broken for multiple releases (https://github.com/kubernetes/kubernetes/pull/62544) without anyone raising an issue.
### Working Items
* [x] [1.13] Introduce Kubelet `pod-resources` grpc endpoint; KEP: https://github.com/kubernetes/community/pull/2454
* [x] [1.14] Introduce Kubelet Resource Metrics API
* [x] [1.15] Deprecate the "direct" cAdvisor API endpoints by adding and deprecating a `--enable-cadvisor-json-endpoints` flag
* [x] [1.18] Default the `--enable-cadvisor-json-endpoints` flag to disabled
* [ ] [1.21] Remove the `--enable-cadvisor-json-endpoints` flag
* [ ] [1.21] Transition Monitoring Server to Kubelet Resource Metrics API ([requires 3 versions skew](https://github.com/kubernetes/kubernetes/pull/67829#issuecomment-416873857))
* [ ] [TBD] Propose out-of-tree replacements for kubelet monitoring endpoints
* [ ] [TBD] Deprecate the Summary API and cAdvisor prometheus endpoints by adding and deprecating a `--enable-container-monitoring-endpoints` flag
* [ ] [TBD+2] Remove "direct" cAdvisor API endpoints
* [ ] [TBD+2] Default the `--enable-container-monitoring-endpoints` flag to disabled
* [ ] [TBD+4] Remove the Summary API and cAdvisor prometheus metrics, and remove the `--enable-container-monitoring-endpoints` flag.
### Open Questions
* Should the kubelet be a source for any monitoring metrics?
* For example, metrics about the kubelet itself, or DiskIO metrics for empty-dir volumes (which are "owned" by the kubelet).
* What will provide the metrics listed above, now that the kubelet no longer does?
* cAdvisor can provide Network, RLimit, Misc Memory metrics, Inode metrics, and DiskIO metrics.
* cAdvisor only works for some runtimes, but is a drop-in replacement for "direct" cAdvisor API endpoints
* Container Runtimes can be a source for container-level Memory, Inode, Network and DiskIO metrics.
* NVidia GPU metrics provided by a daemonset published by NVidia
* No source for Persistent Volume metrics?
/sig node
/sig instrumentation
/kind feature
/priority important-longterm
cc @kubernetes/sig-node-proposals @kubernetes/sig-instrumentation-misc | sig/node,kind/feature,sig/instrumentation,priority/important-longterm,lifecycle/frozen | high | Critical |
359,214,282 | go | cmd/go: do not cache tool output if tools print to stdout/stderr | # **Update, Oct 7 2020**: see https://github.com/golang/go/issues/27628#issuecomment-702252564 for most recent proposal in this issue.
### What version of Go are you using (`go version`)?
tip (2e5c32518ce6facc507862f4156d4e6ac776754f), also Go 1.11
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
darwin/amd64
### What did you do?
```
$ go build -toolexec=/usr/bin/time hello.go
# command-line-arguments
0.01 real 0.00 user 0.00 sys
# command-line-arguments
0.12 real 0.11 user 0.02 sys
$ go build hello.go
# command-line-arguments
0.01 real 0.00 user 0.00 sys
# command-line-arguments
0.12 real 0.11 user 0.02 sys
$
```
### What did you expect to see?
The second invocation of `go build` doesn't have `-toolexec`, so it should not invoke the toolexec command (which I think it doesn't), nor reprint its output.
### What did you see instead?
toolexec output is reprinted.
In fact, I think it probably should not cache at all if `-toolexec` is specified, since the external command that toolexec invokes may do anything and is (intentionally) not reproducible.
cc @dr2chase
| Proposal,Proposal-Accepted,GoCommand | medium | Critical |
359,222,238 | rust | Tracking Issue: Procedural Macro Diagnostics (RFC 1566) | This is a tracking issue for diagnostics for procedural macros spawned off from https://github.com/rust-lang/rust/issues/38356.
## Overview
### Current Status
* Implemented under `feature(proc_macro_diagnostic)`
* In use by Rocket, Diesel, Maud
### Next Steps
- [x] https://github.com/rust-lang/rust/pull/44125
- [x] Implement introspection methods (https://github.com/rust-lang/rust/pull/52896)
- [x] Implement multi-span support (https://github.com/rust-lang/rust/pull/52896)
- [ ] Implement lint id for warnings (https://github.com/rust-lang/rust/pull/135432)
- [ ] Document thoroughly
- [ ] Stabilize
## Summary
The initial API was implemented in https://github.com/rust-lang/rust/pull/44125 and is being used by crates like Rocket and Diesel to emit user-friendly diagnostics. Apart from thorough documentation, I see two blockers for stabilization:
1. **Multi-Span Support**
At present, it is not possible to create/emit a diagnostic via `proc_macro` that points to more than one `Span`. The internal diagnostics API makes this possible, and we should expose this as well.
The changes necessary to support this are fairly minor: a `Diagnostic` should encapsulate a `Vec<Span>` as opposed to a `Span`, and the `span_` methods should be made generic such that either a `Span` or a `Vec<Span>` (ideally also a `&[Span]`) can be passed in. This makes it possible for a user to pass in an empty `Vec`, but this case can be handled as if no `Span` was explicitly set.
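For reference, a rough sketch of what such a generic constructor might look like (from memory of the unstable `proc_macro` API; details may differ):
```rust
impl Diagnostic {
    /// Creates a new diagnostic spanning one or more spans.
    pub fn spanned<S, T>(spans: S, level: Level, message: T) -> Diagnostic
    where
        S: MultiSpan,   // implemented for Span, Vec<Span>, &[Span]
        T: Into<String>,
    { /* ... */ }
}
```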
2. **Lint-Associated Warnings**
At present, if a `proc_macro` emits a warning, it is unconditional as it is not associated with a lint: the user can never silence the warning. I propose that we require proc-macro authors to associate every warning with a lint-level so that the consumer can turn it off.
No API has been formally proposed for this feature. I informally proposed that we allow proc-macros to create lint-levels in an ad-hoc manner; this differs from what happens internally, where all lint-levels have to be known apriori. In code, such an API might look lIke:
```rust
val.span.warning(lint!(unknown_media_type), "unknown media type");
```
The `lint!` macro might check for uniqueness and generate a (hidden) structure for internal use. Alternatively, the proc-macro author could simply pass in a string: `"unknown_media_type"`. | A-diagnostics,T-lang,T-libs-api,B-unstable,C-tracking-issue,A-macros-1.2,Libs-Tracked,I-lang-radar | high | Minor |
359,231,336 | pytorch | CrossEntropyLoss, ignore_index does not prevent back-prop if the logits are -inf | ## Issue description
When using CrossEntropyLoss, I assumed that as long as I ignore a target, its loss would not be calculated and would not be backpropagated. I would therefore pass logits of -float('inf') when I wanted to skip a target.
However, even though the loss is skipped, loss.backward() brings the infinity into my gradients.
I think this behavior is at least worth noting in the documentation for CrossEntropyLoss.
## Code example
```python
import torch
loss = torch.nn.CrossEntropyLoss(ignore_index=-1)
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3).long().fill_(-1)
logits = input - float('inf')
output = loss(logits, target)
output.backward()
input.grad
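# per the report above: instead of all-zero grads (every target is ignored),
# NaN/inf values propagate from the -inf logits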
```
## System Info
Collecting environment information...
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1060 6GB
Nvidia driver version: 396.44
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
Versions of relevant libraries:
[pip] Could not collect
[conda] cuda90 1.0 h6433d27_0 pytorch
[conda] pytorch 0.4.1 py36_cuda9.0.176_cudnn7.1.2_1 soumith
[conda] torchvision 0.1.9 py36h7584368_1 soumith
cc @brianjo @mruberry @albanD @jbschlosser | module: docs,module: nn,module: loss,triaged | low | Critical |
359,267,822 | go | x/build/internal/gophers: improve internal package design | ## Problem
> _Total mess, but a functional mess, and a starting point for the future._
> — Commit [`891b12dc`](https://github.com/golang/build/commit/891b12dcbdd4ee448d573a78681b2e785daa71ca)
The `gophers` package is currently hard to use and hard to modify. It's not easy to read its [documentation](https://godoc.org/golang.org/x/build/internal/gophers) and start using it:
```Go
// (no documentation)
func GetPerson(id string) *Person
```
I've used and modified it multiple times, and each time, I had to read its internal code to figure out:
- what kind of value can "id" be?
- what is its exact format?
- is leading '@' required for GH usernames? optional? unneeded?
- is it case sensitive or not?
- in what order/what type of information to add to the `addPerson(...)` lines?
Despite being an internal package, `gophers` is an important package providing value to 4 other packages, and potentially becoming used in more places. It's no longer just for computing stats, but also for tracking package owners and assigning reviews. Being internal means we can change it easily (even break the API if needed) if we come to agreement on an improved design.
## Proposed Solution
I think it can be made easier to use by:
- documenting it (so its [godoc](https://godoc.org/golang.org/x/build/internal/gophers) is all you need to use it, no need to read code)
For example:
```Go
// GetPerson looks up a person by id and returns one if found, or nil otherwise.
//
// The id is case insensitive, and may be one of:
// - full name ("Brad Fitzpatrick")
// - GitHub username ("@bradfitz")
// - Gerrit <account ID>@<instance ID> ("5065@62eb7196-b449-3ce5-99f1-c037f21e1705")
// - email ("[email protected]")
func GetPerson(id string) *Person
```
@bradfitz If you prefer not to be used as an example, let me know, and we can use someone else (I'm happy to volunteer) or use a generic name. But I think a well known real user makes for a better example.
Made easier to modify by:
- making its internal `addPerson` logic more explicit rather than implicit
For example, instead of what we have now:
```Go
addPerson("Filippo Valsorda", "", "6195@62eb7196-b449-3ce5-99f1-c037f21e1705")
addPerson("Filippo Valsorda", "[email protected]")
addPerson("Filippo Valsorda", "[email protected]", "11715@62eb7196-b449-3ce5-99f1-c037f21e1705")
addPerson("Filippo Valsorda", "[email protected]", "[email protected]", "@FiloSottile")
// what kind of changes should be done to modify the end result Person struct?
```
It could be something more explicit, along the lines of:
```Go
add(Person{
Name: "Filippo Valsorda",
GitHub: "FiloSottile",
Gerrit: "[email protected]",
GerritIDs: []int{6195, 11715}, // Gerrit account IDs.
GitEmails: []string{
"[email protected]",
"[email protected]",
"[email protected]",
},
gomote: "valsorda", // Gomote user.
})
```
The intention is to make it easy for people to manually add and modify their entries, with predictable results, while still being able to to use code generation (ala `gopherstats -mode=find-gerrit-gophers`) to add missing entries.
This is just a quick draft proposal, not necessarily the final API design. If the general direction is well received but there are concerns or improvement suggestions, I'm happy to flesh it out and incorporate feedback. I wouldn't send a CL until I have a solid design.
/cc @bradfitz @andybons | Documentation,Builders,NeedsFix | low | Major |
359,299,918 | rust | Allow setting breakpoint when Err() is constructed in debug builds | This would be really helpful for tracking down the source of errors. It seems the simplest way to allow this would be to emit a no-inline function that's used for setting a particular enum variant. | C-feature-request | low | Critical |
359,473,423 | three.js | FBXLoader not working with many skeleton animations (e.g., from Mixamo.com) | ##### Description of the problem
The FBXLoader does not work with many skeleton animations. e.g., from Mixamo.com. One example from Mixamo that fails is the following:
Character: WHITECLOWN N HALLIN
Animation: SAMBA DANCING
You can get this model by downloading it from Mixamo directly but I have also attached it to this issue.
[WhiteClownSambaDancing.zip](https://github.com/mrdoob/three.js/files/2375109/WhiteClownSambaDancing.zip)
Here is the displayed result using the webgl_loader_fbx.html sample in THREE v96 modified to load the WhiteClownSamaDancing.fbx:

_Screen shot using webgl_loader_fbx.html_
Here is the same FBX file displayed in AutoDesk FBX Review:

_Screen shot using AutoDesk FBX Review_
There are many other Mixamo character/animations pairings that work fine with the FBXLoader but many that do not. It is possible (though not confirmed) that the ones that fail were created with Maya (this has been stated as a possible problem in some other issues).
Also, the WhiteClownSambaDancing.fbx file loads correctly in much other software, including the Mixamo site itself and [http://www.open3mod.com/](http://www.open3mod.com/). The latter is fully open source, so you can see the exact code it uses to perform the node/bone transforms and animations. It actually uses AssImp for the conversion, and you can see the exact code there. In particular, see the following for AssImp's FBX import 3D transform handling:
[https://github.com/assimp/assimp/blob/master/code/FBXConverter.cpp#L644](https://github.com/assimp/assimp/blob/master/code/FBXConverter.cpp#L644)
After digging into the FBXLoader code a bit, it seems there may be several areas where the issue could lie:
- There does not seem to be any code to honor the FBX inherit type on the nodes.
- There does not seem to be any code to implement rotate/scale pivot points on the nodes (NOT the geometry transforms which are implemented).
The following code from the AutoDesk FBX SDK may be of some help for implementing both of the above, especially the code and comments in CalculateGlobalTransform():
[AutoDesk FBX source code Transformations/main.cxx](http://docs.autodesk.com/FBX/2014/ENU/FBX-SDK-Documentation/index.html?url=cpp_ref/_transformations_2main_8cxx-example.html,topicNumber=cpp_ref__transformations_2main_8cxx_example_htmlfc10a1e1-b18d-4e72-9dc0-70d0f1959f5e)
There are some other THREE.js issues which have not been fully addressed regarding incorrect FBX animations loaded via the FBXLoader (#11895, #13466, #13821). This issue is about improving the FBXLoader, not a particular file or asset pipeline. Please do not suggest use of FBX2GLTF (which just bakes the animations) or other converters.
Also, we are willing to provide some help with design and coding, if need be, but doing a full solution with a PR is beyond our bandwidth at this time (ping @looeee @Kyle-Larson @takahirox ?).
##### Three.js version
- [ ] Dev
- [X] r96
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] macOS
- [ ] Linux
- [ ] Android
- [ ] iOS
##### Hardware Requirements (graphics card, VR Device, ...)
| Bug,Loaders | medium | Major |
359,473,497 | vscode | Use the modifier properties on mouse event instead of tracking keydown/keyup | Extracted from our conversation at https://github.com/Microsoft/vscode/commit/e82498a544b88f5041ec7f8b531ab8ca6eb29eaf | help wanted,debt,editor-drag-and-drop | low | Minor |
359,530,410 | go | x/text/message: package level docs about MatchLanguage are unclear |
### What version of Go are you using (`go version`)?
go1.11 linux/amd64
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
linux/amd64
### What did you do?
I tried to use `message.MatchLanguage("nl")` to obtain `language.Dutch`, as per the example on [its site](https://godoc.org/golang.org/x/text/message).
See the following code for an example:
```
package main
import (
"golang.org/x/text/message"
"golang.org/x/text/language"
"fmt"
)
func main() {
nl := message.MatchLanguage("nl")
fmt.Println(nl) // Prints "und", expected "nl"
fmt.Println(language.Dutch) // Prints "nl" as expected
p := message.NewPrinter(message.MatchLanguage("nl"))
p.Printf("%.2f\n", 5000.00) // Prints "5,000.00", expected "5.000,00"
p2 := message.NewPrinter(message.MatchLanguage("bn"))
p2.Println(123456.78) // Prints "5,000.00", expected "১,২৩,৪৫৬.৭৮"
}
```
### What did you expect to see?
I expected to receive `language.Dutch`, so that when I called `p.Printf("%.2f\n", 5000.00)`, I would get "5.000,00", as Dutch uses "." for thousand separators and "," for decimal separators.
### What did you see instead?
Instead `message.MatchLanguage("nl")` returned the `und Tag` rather than the `Dutch Tag`. And therefore when I called `p.Printf()` as per the example on the package's description, I got "5,000.00", which ironically matches the example in the description, but the example is wrong.
I also tried the third example in opening example, which also did not seem to produce the expected result.
It seems to me that `message.MatchLanguage()` either does not work or does not work as described.
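For contrast, constructing the printer with the tag directly behaves correctly (same imports as above):
```go
p := message.NewPrinter(language.Dutch)
p.Printf("%.2f\n", 5000.00) // Prints "5.000,00" as expected
```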
Note: as shown above, making a new `Printer` directly with `language.Dutch` works as expected. | Documentation,NeedsInvestigation | low | Minor |
359,536,193 | You-Dont-Know-JS | Calling template literals "interpolated string literals" is misleading | Chapter in question: https://github.com/getify/You-Dont-Know-JS/blob/master/es6%20%26%20beyond/ch2.md#template-literals
The suggestion seems to be that the template literals would be all about strings; there's only examples that result in strings, and phrases like "final string value" and "generating the string from the literal" are used in what should be a general context, but the tag functions can return any object, and there's obvious use cases where they'd return regexp objects, DOM nodes, etc. | for second edition | medium | Minor |
359,548,761 | flutter | Would like to measure/track Flutter's total download size | I've seen several claims that our total download is too large. e.g.
https://twitter.com/FerventGeek/status/1038480155990261761
@gspencergoog do you know if we already track this as part of the bundle building?
CC @FerventGeek @mit-mit | team,framework,P2,team-framework,triaged-framework | low | Major |
359,567,274 | pytorch | Request to import pytest in test/*.py | Currently PyTorch uses the builtin `unittest` module for testing. Would it be possible to add a dependency on [pytest](https://docs.pytest.org/en/latest/) so developers can more easily write parametrized tests?
While working on [test/test_distributions.py](https://github.com/pytorch/pytorch/blob/master/test/test_distributions.py), @neerajprad, @alicanb, and @fritzo have found it challenging to write heavily parametrized tests. As a result, test coverage suffers. By contrast in Pyro we use heavily parametrized tests [using pytest](https://docs.pytest.org/en/latest/parametrize.html), and our Pyro tests seem to catch many bugs that aren't caught in PyTorch's own tests (#9917, #9521, #9977, #10241).
In particular, I'd like to be able to gather [xfailing tests](https://docs.pytest.org/en/documentation-restructure/how-to/skipping.html) in batch
```py
DISTRIBUTIONS = [Bernoulli, Beta, Cauchy, ..., Weibull]
@pytest.mark.parametrize('Dist', DISTRIBUTIONS)
def test_that_sometime_fails(Dist):
dist = Dist(...)
assert something(dist)
```
and then mark xfailing parameters
```py
DISTRIBUTIONS = [
Bernoulli,
Beta,
pytest.param(Cauchy, marks=[pytest.mark.xfail(reason='schema not found for node')]),
...
Weibull,
]
```
pytest makes it easy to collect xfailing tests in batch (`pytest -v --tb=no`) and to see the entire list of xfailing tests, and makes it easy to run xfailing tests to see which have started passing since the last time tests were run (`pytest -v`, or `pytest --runxfail`). In contrast, `unittest` fixtures typically parametrize via for loops and can report only a single failure on each run. (Apologies if I'm unaware of convenient functionality in `unittest`!)
I think allowing usage of pytest in test/*.py could help improve PyTorch test coverage.
cc @mruberry | module: tests,triaged | medium | Critical |
359,574,800 | opencv | VideoCapture bug with Acer Switch 5 tablet | ##### System information (version)
- OpenCV = 3.4.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
##### Detailed description
Camera capture on Acer Switch 5 tablet appears to work, but if I try to manually focus the camera (using the onscreen menu) it actually just applies some kind of sharpen/blur, not focus.
The Windows Camera app allows a very close (macro) focus with the tablet camera, but OpenCV is unable to reproduce this.
It is as if the parameter OpenCV thinks is focus is actually something completely different.
##### Steps to reproduce
```cpp
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(){
// Create a VideoCapture object and open the input file
// If the input is the web camera, pass 0 instead of the video file name
VideoCapture cap(0);
// Check if camera opened successfully
if(!cap.isOpened()){
cout << "Error opening video stream or file" << endl;
return -1;
}
cap.set(CV_CAP_PROP_SETTINGS,1); // To display the settings
// Now if the user tries to adjust the focus, something other than focus happens!
// If I leave it on auto-focus I can focus very close up so autofocus still works
// If I switch off autofocus and try to focus using the slider, I cannot focus
// I suspect the parameter being changed by OpenCV is NOT focus, even though it thinks it is
// (Note. I do think the parameter being changed by the auto-focus on/off check box is correct - it switches off auto focus)
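// (Aside, an untested assumption: the programmatic equivalents would be
// cap.set(CV_CAP_PROP_AUTOFOCUS, 0) and cap.set(CV_CAP_PROP_FOCUS, value);
// it may be worth checking whether those map to the same wrong property.)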
while(1){
Mat frame;
// Capture frame-by-frame
cap >> frame;
// If the frame is empty, break immediately
if (frame.empty())
break;
// Display the resulting frame
imshow( "Frame", frame );
// Press ESC on keyboard to exit
char c=(char)waitKey(25);
if(c==27)
break;
}
// When everything done, release the video capture object
cap.release();
// Closes all the frames
destroyAllWindows();
return 0;
}
```
| priority: low,category: videoio,platform: winrt/uwp | low | Critical |
359,579,444 | rust | Confusing error message when wildcard importing two same-named traits | Discovered this when using `tokio-async-await` and `futures`. Both define `FutureExt` and `StreamExt` traits in their `preludes`. Both traits are meant to be used at the same time (from what I understand).
```rust
use tokio::prelude::*; // contains StreamExt, FutureExt
use futures::prelude::*; // contains StreamExt, FutureExt
```
After doing both of these imports, Rust proceeds to act as if _neither_ of the traits was imported, and gives error help on any method usage along the lines of "trait not in scope; use `tokio::async_await::stream::StreamExt`".
Here is a Rust playground illustrating the bug: https://play.rust-lang.org/?gist=c8ebd2f79eca354ecb2dd6c828d73ad0&version=nightly&mode=debug&edition=2015
And here is a work-around for it (in nightly): https://play.rust-lang.org/?gist=f28928ef69a1985ee79d321cb58ae7e1&version=nightly&mode=debug&edition=2015
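Paraphrasing that second playground, the workaround is to import one set of extension traits anonymously so the names never collide (underscore trait imports were nightly-only at the time):
```rust
use tokio::prelude::*;
use futures::prelude::{FutureExt as _, StreamExt as _};
```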
I think the way forward here is a good warning (or error) message when a duplicate trait item import happens between two wildcard imports.
| C-enhancement,A-diagnostics,A-trait-system,T-compiler | low | Critical |
359,608,021 | bitcoin | Test coverage of our networking code | Our python functional testing framework is pretty limited for what kinds of p2p behaviors we can test. Basically, we can currently only make manual connections between bitcoind nodes (using the `addnode` rpc), which are treated differently in our code than outbound peers selected using addrman.
While we do have some unit-testing coverage of some of the components (like addrman, and parts of net_processing), I don't believe we currently are able to test the overall logic of how bitcoind uses those components (I recall this coming up when working on #11560, as a specific example).
Anyway I am just mentioning this here as a potential project idea, as this is a material gap in our testing that I think would be valuable to work towards improving, and I wasn't sure how well known this is. | Tests | medium | Critical |
359,656,898 | flutter | Document how to set SystemChrome brightness properly | ## Steps to Reproduce
1. Create a simple app with an appbar
2. Try to set the color of the icons manually with `SystemChrome.setSystemUIOverlayStyle(SystemUiOverlayStyle.dark)`
3. The icon's colors don't change
[Here's an example app](https://pastebin.com/k00S4PWs)
[Here's an example video](https://cdn.discordapp.com/attachments/408312522521706496/489544643256516619/2018-09-12_23-15-07.mp4)
I believe the problem stems from [lines 479–497 in app_bar.dart](https://github.com/flutter/flutter/blob/d927c9331005f81157fa39dff7b5dab415ad330b/packages/flutter/lib/src/material/app_bar.dart#L479)
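A commonly suggested workaround (a sketch; it assumes the AppBar's own overlay logic is what clobbers the manual call, per the lines referenced above) is to let the AppBar set the overlay style itself instead of calling `SystemChrome` directly:
```dart
AppBar(
  // Brightness.dark yields light icons; Brightness.light yields dark icons.
  brightness: Brightness.light,
  title: Text('Example'),
)
```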
## Logs
```
[√] Flutter (Channel beta, v0.7.3, on Microsoft Windows [Version 10.0.17134.228], locale en-BE)
• Flutter version 0.7.3 at C:\tools\flutter
• Framework revision 3b309bda07 (2 weeks ago), 2018-08-28 12:39:24 -0700
• Engine revision af42b6dc95
• Dart version 2.1.0-dev.1.0.flutter-ccb16f7282
[√] Android toolchain - develop for Android devices (Android SDK 27.0.3)
• Android SDK at C:\Android\android-sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-27, build-tools 27.0.3
• ANDROID_HOME = C:\Android\android-sdk
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b02)
• All Android licenses accepted.
[√] Android Studio (version 3.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin version 27.1.1
• Dart plugin version 173.4700
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b02)
[√] Connected devices (1 available)
• Android SDK built for x86 64 • emulator-5554 • android-x64 • Android 7.0 (API 24) (emulator)
• No issues found!
```
| framework,d: api docs,has reproducible steps,P2,found in release: 3.7,found in release: 3.8,team-framework,triaged-framework | low | Major |
359,695,769 | pytorch | [feature request] Triangular Matrix Representation | I tried searching the documentation, but besides sparse matrices (which in most cases would use _more_ space than a dense matrix), I didn't see any tensor types that would take advantage of the ability to save space with the knowledge that the tensor is triangular. This would also save time when performing `matmul()`, I'd imagine. For example, my use case is a symmetric matrix (a modification of the distance matrix (https://en.wikipedia.org/wiki/Distance_matrix), and I only need the upper half without the diagonal, so performing operations on the other elements would be worthless, and saving them would be a waste of space.
Can such a type be implemented? Apologies if I missed something and it already exists. | feature,triaged | low | Major |
359,707,887 | godot | Audio clipping / static / interference on rapid intervals of sound (fixed in `master`) | **Godot version:**
3.1 alpha
**OS/device including version:**
Kubuntu 18.04 and Windows 10
**Issue description:**
When creating a rapidly firing weapon, if sounds are played at intervals of less than 0.5 or 0.6 seconds, there is a clipping or static sound.
Demo: https://www.dropbox.com/s/q5tx4y6272lgdae/2018-09-12%2017-16-00.mp4?dl=0
Modifying the sound files with a fade-in and fade-out at either end brings some improvement, but the issue is still present.
Demo: https://www.dropbox.com/s/hnvlu161x1t62fl/secondtest.mp4?dl=0
In the case of testing with a sine wave sound file, instead of clipping / static there is a sort of interference or modulation sound instead.
Demo: https://www.dropbox.com/s/nd2rcsrdrqxo6sd/2018-09-12%2021-19-52.mp4?dl=0
The clipping / static sound can seemingly be eliminated by setting a LowPassFilter and LowShelfFilter on a sound bus, both with a cutoff of 1000Hz.
These filters don't eliminate the modulation type of sound that comes up when using a sine testing sound.
**Steps to reproduce:**
Create a timer with an interval of about 0.3 seconds and have either a single sound repeat at that interval, or rotate between two different sounds.
Issue present in the editor on Linux and Windows, in builds for Linux and Windows, on 2 PCs in my house, through 3 headphone sets and two speaker sets. Also verified by another user who tested on 2 of their headphone sets.
**Minimal reproduction project:**
[SoundTesting.zip](https://github.com/godotengine/godot/files/2377320/SoundTesting.zip)
**Tests and attempted solutions that didn't help**
Before trying the filters, I also tried:
* Having an AudioStreamPlayer2D as a child of each instantiated missile (this made the effect worse)
* Having the Player character scene play a sound from a child AudioStreamPlayer or AudioStreamPlayer2D
* Creating two child AudioStreamPlayers and alternating between playing one or the other, using the same sound
* Alternating between those two child nodes, but playing two different sounds
* Creating a stand alone scene with an AudioStreamPlayer in it, and instantiating it newly and adding it as a child each time
* Outputting the sound from the two child AudioStreamPlayers to different audio buses
* All three different mix target options
* Reducing volume - tested at -2Db, -4Db, -6Db
* Tested with 9 sound files
* Modified a sound effect with fade in and fade out processing to ensure no clicking in the file itself
* Tried reducing the duration of the sound file to 0.3 seconds, but still had the static effect at intervals of 0.5 seconds or less
* Trying with different sound playback hardware
* Tested on Linux and two installations of Windows
* Tested on 2 PCs
* Tested in editor and in Linux and Windows builds (Windows builds tested on two PCs)
**Extra information**
I spent a few hours with the helpful folks at the Godot Discord last night going through various tests to try to isolate the source of the clipping / interference sounds. Our tests did lead us to believe it wasn't an issue localized to my setup alone.
Different people heard different effects from the videos and test project above. Some heard clipping / static on every sound, some heard none at all. One other person heard it as clearly as I did, and confirmed the same on two of their headphone sets.
@starry-abyss had the theory it was something to do with too many high frequency sounds at once, hence we tried the low pass / shelf filters which eliminated the clipping / static sounds.
Given the effect of the filters I'm not sure this is actually an issue as opposed to a sound management technique being required.
If we can confirm this is just about handling sound in a certain way, perhaps I can contribute to the docs with some information for people in the future who run into this problem. | bug,confirmed,topic:audio | high | Critical |
359,732,707 | pytorch | Add min mode to embedding bags | It would be nice to add a min mode to EmbeddingBag.
Following [this paper](https://arxiv.org/pdf/1803.01400.pdf), it seems this can yield pretty good results. More generally, adding the power-mean formula would be awesome.
Additionally, the current error message `ValueError: mode has to be one of sum or mean` is incorrect, since `max` is in fact allowed
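For what it's worth, min can be emulated today through the max mode via the identity min(x) = -max(-x); a sketch using the functional interface (assuming the `F.embedding_bag(input, weight, offsets, ...)` signature):
```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 3)                      # 10 embeddings of dim 3
input_ = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])                   # two bags of four indices each

# min over each bag == -(max over each bag of the negated table)
min_bags = -F.embedding_bag(input_, -weight, offsets, mode='max')
```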
cc @albanD @mruberry @jbschlosser | module: nn,triaged,enhancement | low | Critical |
359,784,320 | rust | Exponential type/trait-checking behavior from linearly nested iterator adapters. | The following test takes an exponential amount of time to type-check the body of `huge` (at the time of this writing, reported by `-Z time-passes` under "item-types checking"):
```rust
#![crate_type = "lib"]
pub fn unit() -> std::iter::Empty<()> {
    std::iter::empty()
}
macro_rules! nest {
    ($inner:expr) => (unit().flat_map(|_| {
        $inner
    }).flat_map(|_| unit()))
}
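// each nest! call adds two more FlatMap layers, so the final iterator type's
// depth grows only linearly with the token count below, yet the time to
// check it grows exponentially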
macro_rules! nests {
    () => (unit());
    ($_first:tt $($rest:tt)*) => (nest!(nests!($($rest)*)))
}
pub fn huge() -> impl Iterator<Item = ()> {
    nests! {
        // 1/x * 6/5 seconds.
        4096 2048 1024 512 256 128 64 32 16 8 4 2
        // x * 6/5 seconds.
        1 2 4 8 // 16 32 64
    }
}
```
This has been reduced from an ambiguity-aware parse tree visitor, and you can see a partial reduction here: https://gist.github.com/eddyb/5f20b8f48b68c92f7d4f022a18c374f4#file-repro-rs.
cc @nikomatsakis | C-enhancement,A-trait-system,I-compiletime,T-compiler | low | Minor |
359,838,986 | flutter | embedder channel apis poorly documented, makes it harder to write custom embedders | Trying to create an embedder for Flutter, but the documentation around the platform channels is pretty sparse.
documentation I've looked at:
[custom flutter engines](https://github.com/flutter/engine/wiki/Custom-Flutter-Engine-Embedders)
[flutter api docs](https://master-docs-flutter-io.firebaseapp.com/)
[desktop embedder text model](https://github.com/google/flutter-desktop-embedding/blob/master/linux/library/src/internal/text_input_model.cc)
Particularly around `flutter/textinput`: I would love a way to see what platform messages are being sent back and forth within Android Studio, to reverse engineer the protocol and see what the expected messages/responses are, since that would be faster than trying to read the C++ source code.
Any way to do this easily?
Otherwise, where does one find the individual platform channel code for the particular platforms? (Android preferably) | engine,d: api docs,e: embedder,P2,team-engine,triaged-engine | low | Minor |
359,873,268 | pytorch | DataLoader: Could not wrapper a exception in threads | ## Issue description
It seems that the code in dataloader.py tries to wrap exceptions raised in worker threads and re-raise them with traceback info formatted as a string.
https://github.com/pytorch/pytorch/blob/v0.4.0/torch/utils/data/dataloader.py#L22
https://github.com/pytorch/pytorch/blob/v0.4.0/torch/utils/data/dataloader.py#L303
(I paste links to v0.4.0 to match my environment, but this code hasn't changed on the master branch.)
This does not work for custom exceptions that don't have a "standard" `__init__` method.
I have tried to fix it but failed: it is impossible to just re-raise an exception (`exc_type` in the code) without knowing the details of its `__init__` method.
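One possible mitigation (a sketch of mine, not what the codebase does): try the reconstruction, and when the constructor signature doesn't match, fall back to a generic `RuntimeError` that still carries the original type name and message:
```python
def reraise(exc_type, exc_msg):
    # best-effort re-raise of an exception that was formatted in a worker
    try:
        exc = exc_type(exc_msg)
    except TypeError:
        # exc_type has a non-standard __init__; keep the information in a
        # RuntimeError instead of crashing with a second TypeError
        exc = RuntimeError('{}: {}'.format(exc_type.__name__, exc_msg))
    raise exc
```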
## Code example
```
import torch


class CustomException(BaseException):
    def __init__(self, **kwargs):
        pass


class SomeDataset(torch.utils.data.Dataset):
    def __init__(self, data):
        self.data = data

    def __getitem__(self, index):
        # raised inside a worker, so the DataLoader wraps it
        raise CustomException(msg='test')

    def __len__(self):
        return len(self.data)


train_data = list(range(4))
train_dataset = SomeDataset(train_data)
# worker processes are needed to hit the wrap-and-re-raise code path
train_loader = torch.utils.data.DataLoader(train_dataset, num_workers=1)
for data in train_loader:
    pass
```
Error Message:
```
Traceback (most recent call last):
File "train.py", line 221, in <module>
for data in train_loader:
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 286, in __next__
return self._process_next_batch(batch)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 307, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
TypeError: __init__() takes 1 positional argument but 2 were given
```
## System Info
```
# python collect_env.py
Collecting environment information...
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 390.30
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.4
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
/usr/local/lib/python3.6/dist-packages/cntk/libs/libcudnn.so.7
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```
| module: dataloader,module: error checking,triaged | low | Critical |
359,884,864 | pytorch | [Feature request] Advanced indexing in functions like `expand` | For functions like `torch.expand`, it would be nice to support advanced indexing with symbols like `...` and `:`.
For example,
```
>>> a = torch.randn(2, 3, 4, 1)
>>> a.expand(..., 10).shape
torch.Size([2, 3, 4, 10])
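>>> # (proposed syntax above; today the same result needs explicit sizes,
>>> # e.g. a.expand(-1, -1, -1, 10), where -1 keeps a dimension unchanged)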
``` | triaged,module: advanced indexing | low | Minor |
360,150,824 | TypeScript | Inline function refactoring | ## Search Terms
inline function method refactoring
## Suggestion
I would like a refactoring that inlines a function from
```
1: function foo() { return 42; }
2: function bar() { const meaningOfLife = foo(); }
```
to
```
1: function bar() { const meaningOfLife = 42; }
```
## Use Cases
This is a very common refactoring and thus widely used while cleaning up code.
## Examples
In the above code sample, block 1, line 1: selecting foo, the user should be able to inline this function at every occurrence and optionally delete the function definition.
In the above code sample, block 1, line 2: selecting foo, the user should be able to inline the function at this occurrence and, if it's the only one, optionally delete the function definition.
I am not sure how the option can be handled in vscode. Eclipse brings up a pop-up with the two options, but afaik vs code tries to be minimalist.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion,Domain: Refactorings | medium | Critical |
359,900,859 | pytorch | undefined reference to caffe2 | I am trying to cross compile caffe2 and generate a binary for my platform.
I have successfully generated libcaffe2.so.
But while compiling my code, I am facing a few issues:
```
/tmp/ccIOey60.o: In function `caffe2::Argument::set_name(char const*)':
temp.cpp:(.text._ZN6caffe28Argument8set_nameEPKc[_ZN6caffe28Argument8set_nameEPKc]+0x24): undefined reference to `caffe2::GetEmptyStringAlreadyInited[abi:cxx11]()'
/tmp/ccIOey60.o: In function `caffe2::OperatorDef::set_type(char const*)':
temp.cpp:(.text._ZN6caffe211OperatorDef8set_typeEPKc[_ZN6caffe211OperatorDef8set_typeEPKc]+0x24): undefined reference to `caffe2::GetEmptyStringAlreadyInited[abi:cxx11]()'
/tmp/ccIOey60.o: In function `caffe2::NetDef::name[abi:cxx11]() const':
temp.cpp:(.text._ZNK6caffe26NetDef4nameB5cxx11Ev[_ZNK6caffe26NetDef4nameB5cxx11Ev]+0x18): undefined reference to `caffe2::GetEmptyStringAlreadyInited[abi:cxx11]()'
/tmp/ccIOey60.o: In function `caffe2::NetDef::set_name(char const*)':
temp.cpp:(.text._ZN6caffe26NetDef8set_nameEPKc[_ZN6caffe26NetDef8set_nameEPKc]+0x24): undefined reference to `caffe2::GetEmptyStringAlreadyInited[abi:cxx11]()'
/tmp/ccIOey60.o: In function `std::default_delete<caffe2::ThreadPool>::operator()(caffe2::ThreadPool*) const':
temp.cpp:(.text._ZNKSt14default_deleteIN6caffe210ThreadPoolEEclEPS1_[_ZNKSt14default_deleteIN6caffe210ThreadPoolEEclEPS1_]+0x24): undefined reference to `caffe2::ThreadPool::~ThreadPool()'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `int caffe2::ArgumentHelper::GetSingleArgument<int>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > caffe2::ArgumentHelper::GetRepeatedArgument<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::DeviceTypeName[abi:cxx11](int const&)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `bool caffe2::ArgumentHelper::GetSingleArgument<bool>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_bind'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Axpy<float, caffe2::CPUContext>(int, float, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::MurmurHash3_x64_128(void const*, int, unsigned int, void*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > caffe2::ArgumentHelper::GetSingleArgument<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Div<float, caffe2::CPUContext>(int, float const*, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::CpuId::f7b_'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_node_of_cpu'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::ArgumentHelper::ArgumentHelper(caffe2::OperatorDef const&)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::ThreadPool::defaultThreadPool()'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `std::vector<float, std::allocator<float> > caffe2::ArgumentHelper::GetRepeatedArgument<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<float, std::allocator<float> > const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::ReduceMax<float, caffe2::CPUContext>(int, float const*, float*, caffe2::Tensor<caffe2::CPUContext>*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Scale<float, caffe2::CPUContext>(int, float, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe::GetEmptyStringAlreadyInited[abi:cxx11]()'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_allocate_nodemask'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `std::vector<int, std::allocator<int> > caffe2::ArgumentHelper::GetRepeatedArgument<int>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<int, std::allocator<int> > const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::ArgumentHelper::HasArgument(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `bool caffe2::ArgumentHelper::HasSingleArgumentOfType<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Log<float, caffe2::CPUContext>(int, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Mul<float, caffe2::CPUContext>(int, float const*, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::split(char, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::IsSameDevice(caffe2::DeviceOption const&, caffe2::DeviceOption const&)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_num_configured_nodes'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::CpuId::f1c_'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::SmartTensorPrinter::PrintTensor(caffe2::Tensor<caffe2::CPUContext> const&)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::ReduceMin<float, caffe2::CPUContext>(int, float const*, float*, caffe2::Tensor<caffe2::CPUContext>*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::TextFormat::ParseFromString(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, google::protobuf::Message*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_bitmask_setbit'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_bitmask_free'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_max_node'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::RandUniform<float, caffe2::CPUContext>(unsigned int, float, float, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Exp<float, caffe2::CPUContext>(int, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Maximum<float, caffe2::CPUContext>(int, float, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::CopyVector<float, caffe2::CPUContext>(int, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::ReadProtoFromBinaryFile(char const*, google::protobuf::MessageLite*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Sqrt<float, caffe2::CPUContext>(int, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::WriteProtoToBinaryFile(google::protobuf::MessageLite const&, char const*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Sub<float, caffe2::CPUContext>(int, float const*, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `long long caffe2::ArgumentHelper::GetSingleArgument<long long>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, long long const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `get_mempolicy'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Set<bool, caffe2::CPUContext>(unsigned int, bool, bool*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `mbind'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::ReadProtoFromTextFile(char const*, google::protobuf::Message*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::GetCpuId()'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `void caffe2::math::Dot<float, caffe2::CPUContext>(int, float const*, float const*, float*, caffe2::CPUContext*)'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_available'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `float caffe2::ArgumentHelper::GetSingleArgument<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, float const&) const'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `numa_bitmask_clearall'
/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib/libcaffe2.so: undefined reference to `caffe2::ProtoDebugString[abi:cxx11](google::protobuf::Message const&)'
collect2: error: ld returned 1 exit status
```
Here is the command for compilation:
```
/home/host/usr/bin/arm-linux-gnueabihf-g++ temp.cpp -o temp -std=c++11 -I/home/cerebroz/chinmay/ip-cross/temp/caffe2/build/include -L/home/host/usr/lib -L/home/host/opt/ext-toolchain/arm-linux-gnueabihf/lib -lgflags -lglog -lprotobuf -lcaffe2
``` | caffe2 | low | Critical |
359,932,025 | rust | Move `backtrace` option to the target-specific configuration | When building a compiler for a target triple which differs from the host triple, it is impossible to express that the host supports libbacktrace, but the target doesn't.
Moving the option from `[rust]` to `[target.xyz]` would make this more flexible, allowing the host compiler to be built with backtrace support, while the target isn't.
cc @alexcrichton | C-enhancement,T-bootstrap | low | Minor |
359,951,460 | rust | Building standard library with LLD fails on Windows with "undefined symbol" errors | Currently it's not possible to cross-compile the standard library for `aarch64-pc-windows-msvc`. To reproduce, install the MSVC's ARM64 toolchain (I used MSVC 15.8.3) and configure Rust as follows:
```
./configure --host=x86_64-pc-windows-msvc --target=aarch64-pc-windows-msvc --set rust.lld
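# rust.lld builds the in-tree LLD so target binaries are linked with rust-lld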
```
Running `./x.py build` will abort with linker errors (error log provided by @froydnj):
```
rust-lld: error: undefined symbol: _ZN47_$LT$std..fs..File$u20$as$u20$std..io..Read$GT$4read17h278f0b774c14e76aE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.11.rcgu.o:(_ZN71_$LT$std..io..buffered..BufReader$LT$R$GT$$u20$as$u20$std..io..Read$GT$4read17he17302c0375fd14cE)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.11.rcgu.o:(_ZN71_$LT$std..io..buffered..BufReader$LT$R$GT$$u20$as$u20$std..io..Read$GT$4read17he17302c0375fd14cE)
```
<details>
<summary>complete error log</summary>
```
error: linking with `rust-lld` failed: exit code: 1
|
= note: "rust-lld" "-flavor" "link" "/LIBPATH:C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2\\lib\\rustlib\\aarch64-pc-windows-msvc\\lib" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.0.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.1.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.10.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.11.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.14.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.15.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.2.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.3.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.4.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.5.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.6.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.7.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.8.rcgu.o" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.term.brlane49-cgu.9.rcgu.o" "/OUT:C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.dll" "/DEF:C:\\Users\\froyd\\AppData\\Local\\Temp\\rustcr7cV5j\\lib.def" "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.2ubmzgze4ttlfqh0.rcgu.o" "/OPT:REF,ICF" "/DEBUG" "/LIBPATH:C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps" "/LIBPATH:C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\release\\deps" "/LIBPATH:C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2\\lib\\rustlib\\aarch64-pc-windows-msvc\\lib" "kernel32.lib" "/LIBPATH:C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2\\lib\\rustlib\\aarch64-pc-windows-msvc\\lib" 
"std-9e897034a22da8f8.dll.lib" "C:\\Users\\froyd\\AppData\\Local\\Temp\\rustcr7cV5j\\libcompiler_builtins-aa78be22f43afdbb.rlib" "advapi32.lib" "ws2_32.lib" "userenv.lib" "shell32.lib" "libcmt.lib" "/DLL" "/IMPLIB:C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage2-test\\aarch64-pc-windows-msvc\\release\\deps\\term-2c32649037e8f914.dll.lib"
= note: rust-lld: error: undefined symbol: _ZN73_$LT$core..fmt..Arguments$LT$$u27$a$GT$$u20$as$u20$core..fmt..Display$GT$3fmt17h2a972b9cd14c0b34E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.10.rcgu.o:(_ZN4core5slice29_$LT$impl$u20$$u5b$T$u5d$$GT$15copy_from_slice17h84c3d20285d3797dE)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.10.rcgu.o:(_ZN4core5slice29_$LT$impl$u20$$u5b$T$u5d$$GT$15copy_from_slice17h84c3d20285d3797dE)
rust-lld: error: undefined symbol: _ZN3std5error221_$LT$impl$u20$core..convert..From$LT$$RF$$u27$b$u20$str$GT$$u20$for$u20$alloc..boxed..Box$LT$$LP$dyn$u20$std..error..Error$u20$$u2b$$u20$core..marker..Sync$u20$$u2b$$u20$core..marker..Send$u20$$u2b$$u20$$u27$a$RP$$GT$$GT$4from17h9eeab4eb33fac824E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.11.rcgu.o:(_ZN3std2io4Read10read_exact17h54ee1715c74e9886E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.11.rcgu.o:(_ZN79_$LT$std..io..buffered..BufWriter$LT$W$GT$$u20$as$u20$core..ops..drop..Drop$GT$4drop17h0a2a4eb954c6a297E)
rust-lld: error: undefined symbol: _ZN47_$LT$std..fs..File$u20$as$u20$std..io..Read$GT$4read17h278f0b774c14e76aE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.11.rcgu.o:(_ZN71_$LT$std..io..buffered..BufReader$LT$R$GT$$u20$as$u20$std..io..Read$GT$4read17he17302c0375fd14cE)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.11.rcgu.o:(_ZN71_$LT$std..io..buffered..BufReader$LT$R$GT$$u20$as$u20$std..io..Read$GT$4read17he17302c0375fd14cE)
rust-lld: error: undefined symbol: _ZN79_$LT$std..path..Path$u20$as$u20$core..convert..AsRef$LT$std..path..Path$GT$$GT$6as_ref17he945222ff32fd726E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN3std2fs4File4open17h17da0bceedbdeefbE)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN3std2fs4File4open17h17da0bceedbdeefbE)
rust-lld: error: undefined symbol: _ZN82_$LT$std..path..PathBuf$u20$as$u20$core..convert..AsRef$LT$std..path..Path$GT$$GT$6as_ref17he10fff899ec3ba9aE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN3std2fs8metadata17h0fa7f67ccea52baeE)
rust-lld: error: undefined symbol: _ZN3std3ffi6os_str85_$LT$impl$u20$core..convert..AsRef$LT$std..ffi..os_str..OsStr$GT$$u20$for$u20$str$GT$6as_ref17h4fff6be7d583afb5E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN3std3env3var17h443aa255a44023c3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN3std3env6var_os17hdf78d67fb86bc31aE)
rust-lld: error: undefined symbol: _ZN60_$LT$alloc..string..String$u20$as$u20$core..clone..Clone$GT$5clone17h4b24dcf27e218542E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN44_$LT$T$u20$as$u20$alloc..borrow..ToOwned$GT$8to_owned17he1b1c763f42eb1b9E)
rust-lld: error: undefined symbol: _ZN3std5error229_$LT$impl$u20$core..convert..From$LT$alloc..string..String$GT$$u20$for$u20$alloc..boxed..Box$LT$$LP$dyn$u20$std..error..Error$u20$$u2b$$u20$core..marker..Sync$u20$$u2b$$u20$core..marker..Send$u20$$u2b$$u20$$u27$static$RP$$GT$$GT$4from17h6a7d84947f47f853E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN50_$LT$T$u20$as$u20$core..convert..Into$LT$U$GT$$GT$4into17had9622a930073bdcE)
rust-lld: error: undefined symbol: _ZN40_$LT$str$u20$as$u20$core..fmt..Debug$GT$3fmt17hbe61a3430dd2168dE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN53_$LT$$RF$$u27$a$u20$T$u20$as$u20$core..fmt..Debug$GT$3fmt17h19e4b58322a0debfE)
rust-lld: error: undefined symbol: _ZN60_$LT$std..io..stdio..StdoutRaw$u20$as$u20$std..io..Write$GT$5write17hf039c99cc25f3872E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN65_$LT$std..io..stdio..Maybe$LT$W$GT$$u20$as$u20$std..io..Write$GT$5write17h36c941d8633c1a65E)
rust-lld: error: undefined symbol: _ZN3std4path95_$LT$impl$u20$core..convert..AsRef$LT$std..path..Path$GT$$u20$for$u20$alloc..string..String$GT$6as_ref17hb5273f3d4d5c62ceE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.12.rcgu.o:(_ZN66_$LT$$RF$$u27$a$u20$T$u20$as$u20$core..convert..AsRef$LT$U$GT$$GT$6as_ref17h88469db36cd005bbE)
rust-lld: error: undefined symbol: _ZN3std3ffi6os_str85_$LT$impl$u20$core..convert..AsRef$LT$std..ffi..os_str..OsStr$GT$$u20$for$u20$str$GT$6as_ref17h4fff6be7d583afb5E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
rust-lld: error: undefined symbol: _ZN92_$LT$std..path..PathBuf$u20$as$u20$core..convert..From$LT$std..ffi..os_str..OsString$GT$$GT$4from17hd32fe85aee6e7f6bE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
rust-lld: error: undefined symbol: _ZN3std4path77_$LT$impl$u20$core..convert..AsRef$LT$std..path..Path$GT$$u20$for$u20$str$GT$6as_ref17h0ca87727d4f1f137E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
rust-lld: error: undefined symbol: _ZN82_$LT$std..path..PathBuf$u20$as$u20$core..convert..AsRef$LT$std..path..Path$GT$$GT$6as_ref17he10fff899ec3ba9aE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
rust-lld: error: undefined symbol: _ZN3std4path95_$LT$impl$u20$core..convert..AsRef$LT$std..path..Path$GT$$u20$for$u20$alloc..string..String$GT$6as_ref17hb5273f3d4d5c62ceE
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
rust-lld: error: undefined symbol: _ZN4core3fmt3num55_$LT$impl$u20$core..fmt..LowerHex$u20$for$u20$usize$GT$3fmt17h167a3950eded3036E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.13.rcgu.o:(_ZN4term8terminfo8searcher19get_dbpath_for_term17hf8fd73e6c94258f3E)
rust-lld: error: undefined symbol: _ZN3std5error221_$LT$impl$u20$core..convert..From$LT$$RF$$u27$b$u20$str$GT$$u20$for$u20$alloc..boxed..Box$LT$$LP$dyn$u20$std..error..Error$u20$$u2b$$u20$core..marker..Sync$u20$$u2b$$u20$core..marker..Send$u20$$u2b$$u20$$u27$a$RP$$GT$$GT$4from17h9eeab4eb33fac824E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.15.rcgu.o:(_ZN3std2io5error5Error3new17h2e9937565e5a4accE)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.15.rcgu.o:(_ZN75_$LT$$RF$$u27$a$u20$mut$u20$I$u20$as$u20$core..iter..iterator..Iterator$GT$4next17h4d0546aa6657129fE)
rust-lld: error: undefined symbol: _ZN3std5error229_$LT$impl$u20$core..convert..From$LT$alloc..string..String$GT$$u20$for$u20$alloc..boxed..Box$LT$$LP$dyn$u20$std..error..Error$u20$$u2b$$u20$core..marker..Sync$u20$$u2b$$u20$core..marker..Send$u20$$u2b$$u20$$u27$static$RP$$GT$$GT$4from17h6a7d84947f47f853E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.15.rcgu.o:(_ZN3std2io5error5Error3new17h4abe32e5817d446fE)
rust-lld: error: undefined symbol: _ZN3std4path77_$LT$impl$u20$core..convert..AsRef$LT$std..path..Path$GT$$u20$for$u20$str$GT$6as_ref17h0ca87727d4f1f137E
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.15.rcgu.o:(_ZN3std4path7PathBuf4push17h0ae214a74ebc332bE)
>>> referenced by C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps\term-2c32649037e8f914.term.brlane49-cgu.15.rcgu.o:(_ZN3std4path7PathBuf4push17h401dc332b52f3a78E)
rust-lld: error: too many errors emitted, stopping now (use /errorlimit:0 to see all errors)
error: aborting due to previous error
error: Could not compile `term`.
Caused by:
process didn't exit successfully: `C:\Users\froyd\rust\build\bootstrap/debug/rustc --crate-name term libterm\lib.rs --error-format json --crate-type dylib --crate-type rlib --emit=dep-info,link -C prefer-dynamic -C opt-level=2 -C metadata=2c32649037e8f914 -C extra-filename=-2c32649037e8f914 --out-dir C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps --target aarch64-pc-windows-msvc -L dependency=C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\aarch64-pc-windows-msvc\release\deps -L dependency=C:\Users\froyd\rust\build\x86_64-pc-windows-msvc\stage2-test\release\deps` (exit code: 1)
command did not execute successfully: "C:\\Users\\froyd\\rust\\build\\x86_64-pc-windows-msvc\\stage0\\bin\\cargo.exe" "build" "--target" "aarch64-pc-windows-msvc" "-j" "12" "--release" "--manifest-path" "C:\\Users\\froyd\\rust\\src/libtest/Cargo.toml" "--message-format" "json"
expected success, got: exit code: 101
thread 'main' panicked at 'cargo must succeed', bootstrap\compile.rs:1155:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.
failed to run: C:\Users\froyd\rust\build\bootstrap\debug\bootstrap build
Build completed unsuccessfully in 0:39:29
```
</details> | A-linkage,O-windows,I-ICE,T-compiler,C-bug | medium | Critical |
359,956,450 | rust | rustdoc does not warn about broken links if they contain `.` or `[]` | I got a PR yesterday that included something like this:
```rust
/// The tour de force of my career is [`foo.bar()`]
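// note: a path form like [`foo::bar`] would resolve; the `.` receiver
// syntax is not a path, so rustdoc leaves it as plain text without warning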
pub mod foo {
    pub fn bar() {}
}
```
In the current nightly rustdoc, `foo.bar()` neither resolves nor warns:

Our CI with `--deny warnings` did not catch this mistake (although I did 😄). I think rustdoc should trigger `intra-doc-link-resolution-failure` for this case. | T-rustdoc,C-feature-request,A-intra-doc-links | low | Critical |
359,965,614 | rust | Tracking issue for -Z emit-stack-sizes | This is an *experimental* feature (i.e. there's no RFC for it) [approved] by the compiler team and added in #51946. It's available in `nightly-2018-09-27` and newer nightly toolchains.
[approved]: https://github.com/rust-lang/rust/pull/51946#issuecomment-411042650
Documentation can be found in [the unstable book](https://doc.rust-lang.org/nightly/unstable-book/compiler-flags/emit-stack-sizes.html). | A-LLVM,T-compiler,B-unstable,C-tracking-issue,WG-embedded,S-tracking-needs-summary,A-CLI | low | Major |
359,974,988 | pytorch | One GPU is more memory efficient than Multiple GPUs | ## Issue description
Multiple GPUs run out of memory with ``DataParallel``, while one GPU handles the load.
```
Hidden Size: 7150
Input Size: 1024
Sequence Length: 1024
Batch Size: 64
====================================================================================================
One GPU...
====================================================================================================
Multiple GPUs...
abc.py:16: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
output, hidden_state = self.rnn(input_[i].unsqueeze(0))
Traceback (most recent call last):
File "abc.py", line 50, in <module>
module=model, inputs=input_, dim=1, output_device=device.index)[-1]
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 168, in data_parallel
outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
raise output
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 53, in _worker
output = module(*input, **kwargs)
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "abc.py", line 16, in forward
output, hidden_state = self.rnn(input_[i].unsqueeze(0))
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/modules/rnn.py", line 192, in forward
output, hidden = func(input, self.all_weights, hx, batch_sizes)
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/_functions/rnn.py", line 324, in forward
return func(input, *fargs, **fkwargs)
File "/home/michaelp/.local/lib/python3.5/site-packages/torch/nn/_functions/rnn.py", line 288, in forward
dropout_ts)
RuntimeError: CUDA error: out of memory
```
## Code example
This is a minimal code example of my actual model. I am unable to use multiple GPUs due to this issue.
```python
import torch
from torch import nn
class Model(nn.Module):
    def __init__(self, *args):
        super().__init__()
        self.rnn = nn.LSTM(*args)

    def forward(self, input_):
        hidden_state = None
        outputs = []
        for i in range(input_.shape[0]):
            # note: hidden_state is captured but never fed back into the
            # next self.rnn call in this minimal repro
            output, hidden_state = self.rnn(input_[i].unsqueeze(0))
            outputs.append(output)
        return outputs
hidden_size = 7150
input_size = 1024
sequence_length = 1024
batch_size = 64
model = Model(input_size, hidden_size)
input_ = torch.FloatTensor(sequence_length, batch_size, input_size).uniform_(0, 1)
device = torch.device('cuda')
model = model.to(device)
input_ = input_.to(device)
print('Hidden Size:', hidden_size)
print('Input Size:', input_size)
print('Sequence Length:', sequence_length)
print('Batch Size:', batch_size)
print('=' * 100)
# One GPU
print('One GPU...')
output = model(input_)[-1]
model.zero_grad()
output.sum().backward()
print('=' * 100)
# Multiple GPUs
print('Multiple GPUs...')
output = torch.nn.parallel.data_parallel(
module=model, inputs=input_, dim=1, output_device=device.index)[-1]
model.zero_grad()
output.sum().backward()
```
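For scale, a back-of-the-envelope estimate (mine, not from the report) of the LSTM weight footprint that each DataParallel replica carries:
```python
# 4 gates x (hidden*(input+hidden) weights + two bias vectors of size hidden)
hidden, inp = 7150, 1024
params = 4 * (hidden * (inp + hidden) + 2 * hidden)   # ~234M parameters
print(params * 4 / 2 ** 30)                           # ~0.87 GiB in float32
```
Gradients and the 1024 per-step activations come on top of this.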
## System Info
```
Collecting environment information...
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
GPU 2: Tesla P100-PCIE-16GB
GPU 3: Tesla P100-PCIE-16GB
Nvidia driver version: 390.30
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.3
Versions of relevant libraries:
[pip3] numpy (1.14.3)
[pip3] pytorch-nlp (0.3.5)
[pip3] torch (0.4.0)
[conda] Could not collect
``` | module: multi-gpu,triaged,module: data parallel | low | Critical |
360,016,006 | go | x/build/cmd/gopherbot: autoassignment of reviews for cherry-picks should be sent to release manager group | Looks like https://go-review.googlesource.com/c/go/+/131596 was auto-assigned to owners but the release-managers Gerrit group (which is the only group allowed to submit to a release branch). | Builders,NeedsFix | low | Minor |
360,114,391 | pytorch | [caffe2] adam_op implementation is incorrect. | The formula currently implemented in adam_op.h and adam_op is:
t = iters + 1
corrected_local_rate = lr * sqrt(1 - power(beta2, t)) /(1 - power(beta1, t))
m1_o = (beta1 * m1) + (1 - beta1) * grad
m2_o = (beta2 * m2) + (1 - beta2) * np.square(grad)
grad_o = corrected_local_rate * m1_o /(sqrt(m2_o) + epsilon)
param_o = param + grad_o
which is different from the original paper, and will lead to explosion during training. The update of param_o is supposed to be param_o = param - grad_o. I already validated the result through experiments. Thanks! | caffe2 | low | Minor |
360,138,599 | go | cmd/vet: check for http.ResponseWriter WriteHeader calls after body has been written | ### What version of Go are you using (`go version`)?
```
$ go version
go version go1.11 linux/amd64
```
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details>
```
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/adam/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/adam/code"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build369596792=/tmp/go-build -gno-record-gcc-switches"
```
</details>
### What did you do?
Consider the following code:
```go
package main
import (
"encoding/json"
"fmt"
"net/http"
)
func main() {
http.HandleFunc("/ping", func (w http.ResponseWriter, r *http.Request) {
type response struct {
Error error `json:"error"`
}
if err := json.NewEncoder(w).Encode(response{nil}); err != nil {
fmt.Println(err)
}
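		// too late: Encode above already wrote the body, which triggered an
		// implicit WriteHeader(200) and Content-Type sniffing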
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
	})
	go http.ListenAndServe(":6060", nil)
	resp, err := http.Get("http://localhost:6060/ping")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Header.Get("Content-Type"))
}
```
This code outputs:
```
$ go run /tmp/vet/main.go
text/plain; charset=utf-8
```
### Comments
It would be nice for vet to be aware of this common mistake and be able to alert the developer to an easy fix. The documentation for `http.ResponseWriter` describes all the details, but that's often missed.
| FeatureRequest,Analysis | low | Critical |
360,148,781 | vscode | [folding] go to region command | So if we have Ctrl+G to go to a line, how about a shortcut that allows us to go to a certain region in our code? | feature-request,editor-folding,outline | medium | Major |
360,150,824 | go | cmd/vet: mismatch between Printf checks and actual behaviour | ### What version of Go are you using (`go version`)?
go version go1.11 linux/amd64
### What did you do?
- go run https://play.golang.org/p/5rBLFTsNpqZ
- go vet https://play.golang.org/p/5rBLFTsNpqZ
- compare outputs
### What did you expect to see?
Vet to accept the first Printf call and to flag the second one
### What did you see instead?
Vet flags the first call and accepts the second one
| help wanted,NeedsFix,Analysis | low | Major |
360,182,990 | rust | 1.29 fails to build on a Windows networked drive | I have a program that builds under 1.28 on Linux and Windows with no errors or warnings.
The same program builds under 1.29 on Linux, but won't build on Windows if the source is on a VirtualBox (i.e., networked) drive.
See my initial report here: https://users.rust-lang.org/t/rust-1-29-0-is-out/20400/10?u=mark
And the Cargo.toml and src/main.rs here: https://users.rust-lang.org/t/rust-1-29-0-is-out/20400/20?u=mark
| A-LLVM,O-windows,T-compiler,C-bug | medium | Critical |
360,213,258 | vscode | copy the exact path from the new breadcrumbs | It would be really great if we could copy and share the breadcrumb-generated path, so we can save a lot of time when working as a team, as we can refer to the exact point (issue or function) we need to work on next

| feature-request,breadcrumbs | high | Critical |
360,226,506 | angular | Allow constants, enums, functions to be used in templates without having to add component members | ## I'm submitting a...
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report
[ ] Performance issue
[ x ] Feature request
[ ] Documentation issue or request
[ ] Support request
[ ] Other... Please describe:
</code></pre>
## Current behavior
Templates cannot refer to anything not directly associated with the component, so you can't for example do this:
```html
<div *ngIf="user.id === someConstant">
</div>
<div *ngIf="user.status === UserStatus.Active"> <!-- UserStatus is an enum -->
</div>
<div *ngIf="isDevMode()">
</div>
```
To make this work, I have to declare these as members of my component:
```typescript
import { someConstant } from './somewhere';
import { UserStatus } from './somewhere';
import { isDevMode } from '@angular/core';
export class MyComponent {
  public someConstant = someConstant;
  public UserStatus = UserStatus;
  public isDevMode = isDevMode;
}
```
These can quickly clutter the component code, and can't be used for const enums. For const enums I have to duplicate the entire enum into a constant, which becomes a maintenance headache.
## Expected behavior
It would be helpful if I could use constants, enums and functions directly from within the template, without needing to assign them to a component member. Maybe a way of importing variables/types into the HTML, or declaring types in the component that are available to the template. Something like this perhaps:
```typescript
import { someConstant } from './somewhere';
import { UserStatus } from './somewhere';
import { isDevMode } from '@angular/core';
@Component({
  selector: 'app-user-list',
  templateUrl: './user-list.component.html',
  templateImports: [someConstant, UserStatus, isDevMode]
})
```
(this particular solution wouldn't work for const enums, other solutions may be better)
## Environment
<pre><code>
Angular CLI: 6.1.2
Node: 8.11.1
OS: linux x64
Angular: 6.1.1
</code></pre>
| feature,area: core,core: binding & interpolation,feature: under consideration,canonical | high | Critical |
360,247,321 | go | runtime: tracebacks don't print function arguments for inlined functions | ### What version of Go are you using (`go version`)?
go version go1.11 linux/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/rockmen1/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/rockmen1/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build729838165=/tmp/go-build -gno-record-gcc-switches"
### What did you expect to see?
[run this in go playground, which runs version 1.10.3](https://play.golang.org/p/ZCRmZTwTs3R)
```
panic: oops
goroutine 1 [running]:
main.test(0xc420045f68, 0x1, 0x1, 0x469f51, 0x7, 0x2710, 0xc420078000) <- parameters trace
/tmp/test.go:4 +0x39
main.main()
/tmp/test.go:8 +0x73
```
### What did you see instead?
```
panic: oops
goroutine 1 [running]:
main.test(...) <- gone
/tmp/test.go:4
main.main()
/tmp/test.go:8 +0x39
```
| NeedsInvestigation,compiler/runtime | low | Critical |
360,253,018 | rust | run-pass/lib-defaults.rs warns of redundant linker flag | This test:
https://github.com/rust-lang/rust/blob/fccde0018a618eb6f45d2a3c97f629809994dff6/src/test/run-pass/lib-defaults.rs#L13-L18
is causing the following compile-time output to stderr (on Linux; not sure about other hosts):
```
warning: redundant linker flag specified for library `rust_test_helpers`
```
This is the command line that compiletest is constructing:
```
command: "/home/pnkfelix/Dev/Mozilla/issue53764/rust-53764/objdir-dbgopt/build/x86_64-unknown-linux-gnu/stage1/bin/rustc" "/home/pnkfelix/Dev/Mozilla/issue53764/rust-53764/src/test/run-pass/lib-defaults.rs" "--target=x86_64-unknown-linux-gnu" "--error-format" "json" "-Zui-testing" "-C" "prefer-dynamic" "-o" "/home/pnkfelix/Dev/Mozilla/issue53764/rust-53764/objdir-dbgopt/build/x86_64-unknown-linux-gnu/test/run-pass/lib-defaults/a" "-Crpath" "-Zunstable-options" "-Lnative=/home/pnkfelix/Dev/Mozilla/issue53764/rust-53764/objdir-dbgopt/build/x86_64-unknown-linux-gnu/native/rust-test-helpers" "-lrust_test_helpers" "-L" "/home/pnkfelix/Dev/Mozilla/issue53764/rust-53764/objdir-dbgopt/build/x86_64-unknown-linux-gnu/test/run-pass/lib-defaults/auxiliary"
```
I haven't delved deeply into what our requirements are for linkage directives. I'm guessing that `rustc` is able to construct the desired `-lrust_test_helpers` from the presence of the `#[link(name = "rust_test_helpers", kind = "static")]`, but it might be good to know if that is the case on all of our platforms?
(I.e. maybe we should strive to remove the `// compile-flags: -lrust_test_helpers` directive from this test, or if that fails, maybe make it conditional on which platform we are testing on...)
In any case I am mainly filing this ticket so I have something to link to in the test when I explain why I'm ignoring the stderr output from the compiler in this case. | A-linkage,A-testsuite,C-enhancement,T-compiler | low | Critical |
360,291,408 | angular | HttpTestingController does not resolve async/await |
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
When using async/await with promises (and therefore `this.http.get(...).toPromise()`), one can unit test everything as normal with the `HttpTestingController`. _BUT_ if you have a service method with multiple `await`s in it, the testing controller is not aware of the later-executed `await`s, and the test will just wait and fail.
## Expected behavior
All await calls inside a method should be executed during async testing.
## Minimal reproduction of the problem with instructions
1. use a service with async await function like:
```typescript
@Injectable()
class Service {
constructor(private http: HttpClient){}
public async foobar(): Promise<number> {
await Promise.resolve();
await Promise.resolve();
await this.http.get('http://asdf').toPromise();
return 1;
}
}
```
2. try to use the `HttpTestingController` to catch the `http://asdf` request, as sketched below.
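A minimal sketch of such a test (the Jasmine/TestBed setup is an assumption, not part of the original report); the comments describe the behavior this report claims:
```typescript
import { TestBed, fakeAsync, tick } from '@angular/core/testing';
import {
  HttpClientTestingModule,
  HttpTestingController,
} from '@angular/common/http/testing';

it('should catch the request behind multiple awaits', fakeAsync(() => {
  TestBed.configureTestingModule({
    imports: [HttpClientTestingModule],
    providers: [Service], // the Service class from step 1
  });
  const service: Service = TestBed.get(Service);
  const httpMock: HttpTestingController = TestBed.get(HttpTestingController);

  let result: number | undefined;
  service.foobar().then((n) => (result = n));
  tick(); // drain pending microtasks

  // Per this report, the request is not visible to the controller here,
  // because it is only issued after the earlier awaits have resolved.
  const req = httpMock.expectOne('http://asdf');
  req.flush({});
  tick();
}));
```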
## Environment
<pre><code>
Angular version: 6.1.0
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [x ] Chrome (desktop) version XX
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
</code></pre>
| type: bug/fix,freq1: low,area: common/http,P3 | medium | Critical |
360,309,327 | react | input[type='number'] value isn't updated | **Do you want to request a *feature* or report a *bug*?**
bug
**What is the current behavior?**
When I enter "01" into input[type=number], I set the value to 1, but it doesn't work. It still shows "01".
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
https://codesandbox.io/s/20ywk1x71n
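A sketch of the pattern being described (a hypothetical component; the author's actual repro is in the CodeSandbox linked above):
```jsx
class NumberInput extends React.Component {
  constructor(props) {
    super(props);
    this.state = { value: "" };
    this.handleChange = this.handleChange.bind(this);
  }
  handleChange(e) {
    // Normalize: Number("01") === 1, so state holds 1 after typing "01"...
    this.setState({ value: Number(e.target.value) });
  }
  render() {
    // ...but per this report the DOM keeps displaying "01" instead of "1".
    return (
      <input type="number" value={this.state.value} onChange={this.handleChange} />
    );
  }
}
```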
**What is the expected behavior?**
when I enter "01", it should show "1"
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
15.6.2. I think it should be updated in `updateWrapper` in https://github.com/facebook/react/blob/master/packages/react-dom/src/client/ReactDOMInput.js. | Component: DOM,Type: Needs Investigation | low | Critical |
360,366,804 | go | x/build: limit the number of auto-assigned reviewers | This is a follow-up issue to the discussion at https://groups.google.com/forum/#!topic/golang-dev/CdGrNJDcqec, to make sure it doesn't get lost.
In that thread @mvdan proposed:
> If a CL were to ping more than 5 people, simply fallback to pinging someone on the core team (Andy? Ian? Brad?), who can then make a better call than a machine could
I think we should do this.
@dmitshur @andybons
| Builders,NeedsInvestigation | low | Minor |
360,407,529 | TypeScript | `references` are not inherited in `tsconfig.json` | **TypeScript Version:** 3.1.0-dev.20180914
**Search Terms:** tsconfig.json extends references inheritance
It seems that the `references` key is not inherited via the `extends` mechanism, which surprised me because the handbook [doesn't mention](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html#configuration-inheritance-with-extends) anything special about it.
Demo:
`tsconfig.base.json`:
```json
{
"references": [
{ "path": "./some/other/project" }
],
"compilerOptions": {
"declaration": true,
"composite": true
}
}
```
`tsconfig.doesnt-work.json`:
```json
{
"extends": "./tsconfig.base.json"
}
```
Building `tsconfig.doesnt-work.json` doesn't build the reference:
```
$ tsc -b -f -d tsconfig.doesnt-work.json
[11:12:13] A non-dry build would build project 'C:/demo/tsconfig.doesnt-work.json'
```
`tsconfig.works-but-duplicates-references.json`:
```json
{
"extends": "./tsconfig.base.json",
"references": [
{ "path": "./some/other/project" }
]
}
```
This is a correct build but I had to duplicate the `references` key:
```
$ tsc -b -f -d tsconfig.works-but-duplicates-references.json
[11:12:13] A non-dry build would build project 'C:/demo/tsconfig.works-but-duplicates-references.json'
[11:12:13] A non-dry build would build project 'C:/demo/some/other/project/tsconfig.json'
``` | Docs | medium | Critical |
360,446,062 | go | x/mobile/cmd/gobind: non-pointer struct types not supported in parameter or return values | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
```
go version go1.11 darwin/amd64
```
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN="/Users/dradtke/Workspace/go/bin"
GOCACHE="/Users/dradtke/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/dradtke/Workspace/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.11/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.11/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/w1/f3bh6jkx0vx95g4885d2yr5h3kfr1q/T/go-build388664200=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
I attempted to use the `gobind` tool to generate code for the following example package:
```go
package bindtest
type Param struct {
Value string
}
func DoSomething(p Param) {
// do something with param
}
```
### What did you expect to see?
I would expect to see the method `DoSomething` available in the generated code.
### What did you see instead?
The function gets skipped due to an invalid parameter type:
```bash
$ gobind -lang go bindtest | grep DoSomething
// skipped function DoSomething with unsupported parameter or result types
```
The binding works, however, if `DoSomething` is updated to take a pointer to `Param`. | NeedsInvestigation,mobile | low | Critical |
360,484,520 | TypeScript | Later export causes `'T' is referenced directly or indirectly in its own type annotation` | **TypeScript Version:** 3.1.0-dev.20180914
**Code**
```js
/** @typedef {{}} T */
/** @type {T} */
const T = JSON.parse("");
export class C extends T {}
```
**Expected behavior:**
Same error as if the class isn't exported: `Type '{}' is not a constructor function type.` at the `extends` clause.
**Actual behavior:**
`src/a.js:4:7 - error TS2502: 'T' is referenced directly or indirectly in its own type annotation.` | Bug,Domain: JSDoc,checkJs,Domain: JavaScript | low | Critical |
360,491,777 | go | runtime: cgo Unix C code can't modify environment variables for Go | ### What version of Go are you using (`go version`)?
1.11
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
linux amd64
### What did you do?
Because of envOnce and copyenv in [env_unix.go](https://golang.org/src/syscall/env_unix.go), linked non-Go programs can't set environment variables that are visible to Go code.
This is probably WAI (working as intended), but it's surprising when embedding and could use a mention somewhere.
```go
package main
/*
#include <stdlib.h> // for setenv()
*/
import "C"
import (
"fmt"
"os"
)
func main() {
os.Setenv("foo", "1")
fmt.Println("before:", os.Getenv("foo"))
C.setenv(C.CString("foo"), C.CString("2"), 1)
fmt.Println("after:", os.Getenv("foo"))
os.Setenv("foo", "3")
fmt.Println("after Go:", os.Getenv("foo"))
}
```
### What did you see instead?
"1 1 3" instead of "1 2 3".
| Documentation,NeedsFix,compiler/runtime | low | Major |
360,494,061 | neovim | provider/node.vim hang on exit | - `nvim --version`: v0.3.2-530-gdadcfe22c
- Vim (version: ) behaves differently? N/A
- Operating system/version: Debian Linux/WSL
- Terminal name/version: conhost, N/A
- `$TERM`: xterm-256color N/A
### Steps to reproduce using `nvim -u NORC`
*This isn't reproducible without any plugins.*
```
yarn global add neovim
# install remote plugin requiring node support, e.g. nvim-typescript
nvim
:UpdateRemotePlugins
:qa
```
### Actual behaviour
Neovim hangs briefly on exit. Using `set verbose=9` reveals a runtime error:
```
Searching for "/home/mqudsi/.config/nvim/autoload/provider/node.vim"
Searching for "/etc/xdg/nvim/autoload/provider/node.vim"
Searching for "/home/mqudsi/.local/share/nvim/site/autoload/provider/node.vim"
Searching for "/usr/local/share/nvim/site/autoload/provider/node.vim"
Searching for "/usr/share/nvim/site/autoload/provider/node.vim"
Searching for "/home/mqudsi/.config/nvim/bundle/repos/github.com/autozimu/LanguageClient-neovim/autoload/provider/node.vim"
Searching for "/home/mqudsi/.config/nvim/bundle/repos/github.com/mhartington/nvim-typescript/autoload/provider/node.vim"
Searching for "/home/mqudsi/.config/nvim/bundle/.cache/init.vim/.dein/autoload/provider/node.vim"
Searching for "/usr/local/share/nvim/runtime/autoload/provider/node.vim"
chdir(/mnt/c/Users/mqudsi/git/prettysize-rs)
chdir(/usr/local/share/nvim/runtime/autoload/provider)
chdir(/mnt/c/Users/mqudsi/git/prettysize-rs)
line 10: sourcing "/usr/local/share/nvim/runtime/autoload/provider/node.vim"
Error detected while processing function provider#node#Detect[4]..<SNR>70_is_minimum_version:
line 2:
E730: using List as a String
Calling shell to execute: ""
finished sourcing /usr/local/share/nvim/runtime/autoload/provider/node.vim
continuing in function remote#define#CommandBootstrap[1]..remote#host#Require
Executing FuncUndefined Auto commands for "*"
autocommand call dein#autoload#_on_func(expand('<afile>'))
Searching for "autoload/provider.vim" in "/home/mqudsi/.config/nvim,/etc/xdg/nvim,/home/mqudsi/.local/share/nvim/site,/usr/local/share/nvim/site,/usr/share/nvim/site,/home/mqudsi/.config/nv
im/bundle/repos/github.com/autozimu/LanguageClient-neovim,/home/mqudsi/.config/nvim/bundle/repos/github.com/mhartington/nvim-typescript,/home/mqudsi/.config/nvim/bundle/.cache/init.vim/.dei
n,/usr/local/share/nvim/runtime,/home/mqudsi/.config/nvim/bundle/repos/github.com/mhartington/nvim-typescript/after,/home/mqudsi/.config/nvim/bundle/.cache/init.vim/.dein/after,/usr/share/n
vim/site/after,/usr/local/share/nvim/site/after,/home/mqudsi/.local/share/nvim/site/after,/etc/xdg/nvim/after,/home/mqudsi/.config/nvim/after,/home/mqudsi/.config/nvim/bundle/repos/github.c
om/Shougo/dein.vim"
Searching for "/home/mqudsi/.config/nvim/autoload/provider.vim"
Searching for "/etc/xdg/nvim/autoload/provider.vim"
Searching for "/home/mqudsi/.local/share/nvim/site/autoload/provider.vim"
Searching for "/usr/local/share/nvim/site/autoload/provider.vim"
Searching for "/usr/share/nvim/site/autoload/provider.vim"
Searching for "/home/mqudsi/.config/nvim/bundle/repos/github.com/autozimu/LanguageClient-neovim/autoload/provider.vim"
Searching for "/home/mqudsi/.config/nvim/bundle/repos/github.com/mhartington/nvim-typescript/autoload/provider.vim"
Searching for "/home/mqudsi/.config/nvim/bundle/.cache/init.vim/.dein/autoload/provider.vim"
Searching for "/usr/local/share/nvim/runtime/autoload/provider.vim"
chdir(/mnt/c/Users/mqudsi/git/prettysize-rs)
chdir(/usr/local/share/nvim/runtime/autoload)
chdir(/mnt/c/Users/mqudsi/git/prettysize-rs)
line 14: sourcing "/usr/local/share/nvim/runtime/autoload/provider.vim"
finished sourcing /usr/local/share/nvim/runtime/autoload/provider.vim
continuing in function remote#define#CommandBootstrap[1]..remote#host#Require[10]..provider#node#Require
function remote#define#CommandBootstrap[1]..remote#host#Require[10]..provider#node#Require[14]..provider#Poll, line 4
Vim(if):ch 6 was closed by the client
node: bad option: -1
Error detected while processing function remote#define#CommandBootstrap[1]..remote#host#Require[10]..provider#node#Require[14]..provider#Poll:
line 14:
E605: Exception not caught: Failed to load node host. You can try to see what happened by starting nvim with $NVIM_NODE_LOG_FILE set and opening the generated log file. Also, the host stder
r is available in messages.
Error detected while processing function remote#define#CommandBootstrap[1]..remote#host#Require:
line 10:
E171: Missing :endif
Writing ShaDa file "/home/mqudsi/.local/share/nvim/shada/main.shada"
```
(note the **node: bad option: -1** redirected from stderr in the log above)
Setting `NVIM_NODE_LOG_FILE` does not create a log, presumably because nothing is logged this early on. `node --version` returns v8.11.2.
`:checkhealth` runs fine:
```
...
## Node.js provider (optional)
- INFO: Node.js: v8.11.2
- INFO: Neovim node.js host: /home/mqudsi/.config/yarn/global/node_modules/neovim/bin/cli.js
- OK: Latest "neovim" npm/yarn package is installed: 4.2.1
```
I'm not entirely sure this error alone is the reason for the delay on exit. Profiling reveals the following:
```
FUNCTIONS SORTED ON TOTAL TIME
count total (s) self (s) function
1 1.563946 0.000147 remote#define#CommandBootstrap()
1 1.563326 0.000542 remote#host#Require()
1 1.126305 0.000124 provider#node#Detect()
2 1.108482 0.000548 <SNR>64_find_node_client()
1 0.435341 0.000454 provider#node#Require()
1 0.434333 provider#Poll()
```
### Expected behaviour
No hang on exit. No errors on exit.
| bug,performance,provider,remote-plugin | medium | Critical |
360,501,648 | pytorch | Semaphore leaks in dataloader | Reported by @PetrochukM.
```
import torch
from torch import multiprocessing
# DEPENDENCY: This is required for ``DistributedDataParallel``
# https://pytorch.org/docs/stable/nn.html?highlight=distributeddataparallel#torch.nn.parallel.DistributedDataParallel
try:
multiprocessing.set_start_method('spawn')
except RuntimeError:
pass
# DEPENDENCY: This is required for ``from tqdm import tqdm``
# https://github.com/tqdm/tqdm/blob/96d8a3c3642474144f53f74331ef2172d1c39496/tqdm/_tqdm.py#L74
mp_lock = multiprocessing.RLock()
import torch
from torch.utils.data import DataLoader
if __name__ == '__main__':
data_iterator = torch.utils.data.DataLoader([torch.tensor(i) for i in range(10)], num_workers=4)
for batch in data_iterator:
pass
```
Example warning:
```
/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
```
cc @SsnL @VitalyFedyunin @ejguan | module: dataloader,triaged | low | Critical |
360,507,692 | TypeScript | Assignability rule for conditional types needs to require check types and distributivity to be identical |
**TypeScript Version:** master (e4718564e5b2c1f2e7bbef27fc480fd8172dfdc0)
**Search Terms:** assignable assignability conditional type check type checkType distributive distributivity identical unsound
**Code**
```ts
type MyElement<A> = [A] extends [[infer E]] ? E : never;
function oops<A, B extends A>(arg: MyElement<A>): MyElement<B> {
return arg; // compiles OK, expected compile error
}
oops<[number | string], [string]>(42).slice(); // runtime error
type MyAcceptor<A> = [A] extends [[infer E]] ? (arg: E) => void : never;
function oops2<A, B extends A>(arg: MyAcceptor<B>): MyAcceptor<A> {
return arg; // compiles OK, expected compile error
}
oops2<[number | string], [string]>((arg) => arg.slice())(42); // runtime error
type Dist<T> = T extends number ? number : string;
type Aux<A extends { a: unknown }> = A["a"] extends number ? number : string;
type Nondist<T> = Aux<{a: T}>;
function oops3<T>(arg: Dist<T>): Nondist<T> {
return arg; // compiles OK, expected compile error
}
oops3<number | string>(42).slice(); // runtime error
```
**Expected behavior:** Compile errors as marked.
**Actual behavior:** Compiles OK, runtime errors as marked.
**Playground Link:** [link](https://www.typescriptlang.org/play/#src=type%20MyElement%3CA%3E%20%3D%20%5BA%5D%20extends%20%5B%5Binfer%20E%5D%5D%20%3F%20E%20%3A%20never%3B%0D%0Afunction%20oops%3CA%2C%20B%20extends%20A%3E(arg%3A%20MyElement%3CA%3E)%3A%20MyElement%3CB%3E%20%7B%20%0D%0A%20%20%20%20return%20arg%3B%20%20%2F%2F%20compiles%20OK%2C%20expected%20compile%20error%0D%0A%7D%0D%0Aoops%3C%5Bnumber%20%7C%20string%5D%2C%20%5Bstring%5D%3E(42).slice()%3B%20%20%2F%2F%20runtime%20error%0D%0A%0D%0Atype%20MyAcceptor%3CA%3E%20%3D%20%5BA%5D%20extends%20%5B%5Binfer%20E%5D%5D%20%3F%20(arg%3A%20E)%20%3D%3E%20void%20%3A%20never%3B%0D%0Afunction%20oops2%3CA%2C%20B%20extends%20A%3E(arg%3A%20MyAcceptor%3CB%3E)%3A%20MyAcceptor%3CA%3E%20%7B%20%0D%0A%20%20%20%20return%20arg%3B%20%20%2F%2F%20compiles%20OK%2C%20expected%20compile%20error%0D%0A%7D%0D%0Aoops2%3C%5Bnumber%20%7C%20string%5D%2C%20%5Bstring%5D%3E((arg)%20%3D%3E%20arg.slice())(42)%3B%20%20%2F%2F%20runtime%20error%0D%0A%0D%0Atype%20Dist%3CT%3E%20%3D%20T%20extends%20number%20%3F%20number%20%3A%20string%3B%20%0D%0Atype%20Aux%3CA%20extends%20%7B%20a%3A%20unknown%20%7D%3E%20%3D%20A%5B%22a%22%5D%20extends%20number%20%3F%20number%20%3A%20string%3B%0D%0Atype%20Nondist%3CT%3E%20%3D%20Aux%3C%7Ba%3A%20T%7D%3E%3B%0D%0Afunction%20oops3%3CT%3E(arg%3A%20Dist%3CT%3E)%3A%20Nondist%3CT%3E%20%7B%0D%0A%20%20%20%20return%20arg%3B%20%20%2F%2F%20compiles%20OK%2C%20expected%20compile%20error%0D%0A%7D%0D%0Aoops3%3Cnumber%20%7C%20string%3E(42).slice()%3B%20%20%2F%2F%20runtime%20error)
**Related Issues:** None found
| Suggestion,Experimentation Needed | medium | Critical |
360,550,245 | rust | NLL: Poor borrow checker error message when extension of borrow happens indirectly (e.g. via method) | **EDIT**: Go [here](https://github.com/rust-lang/rust/issues/54256#issuecomment-434450019) for a smaller example.
Code:
```rust
#![feature(nll)]
extern crate clang;
use std::collections::HashMap;
use std::error::Error;
use clang::*;
#[derive(Default)]
struct SymbolDesc<'a> {
deps: Vec<Entity<'a>>,
}
fn visit<'a>(_entity: Entity<'a>, _desc: &mut SymbolDesc<'a>) {
// snip
}
fn main() -> Result<(), Box<Error>> {
let clang = Clang::new()?;
let index = Index::new(&clang, false, false);
let sources = vec!["examples/simple.c", "examples/simple_impl.c"];
let mut tus = vec![];
let mut sym_table: HashMap<String, SymbolDesc<'_>> = HashMap::new();
for source in sources {
tus.push(index.parser(source).parse()?);
for child in tus.last().unwrap().get_entity().get_children() {
if let Some(child_name) = child.get_name() {
let desc = sym_table.entry(child_name).or_default();
visit(child, desc);
}
}
}
Ok(())
}
```
Error:
```
error[E0502]: cannot borrow `tus` as mutable because it is also borrowed as immutable
--> src/main.rs:28:9
|
28 | tus.push(index.parser(source).parse()?);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here
29 |
30 | for child in tus.last().unwrap().get_entity().get_children() {
| --- immutable borrow occurs here
31 | if let Some(child_name) = child.get_name() {
32 | let desc = sym_table.entry(child_name).or_default();
| --------- borrow used here in later iteration of loop
```
The crux of the problem is that at the line `visit(child, desc)`, `sym_table` starts borrowing `tus`, because its type is effectively `HashMap<String, SymbolDesc<'tu>>`, where `'tu` is the lifetime of the translation units. This prevents `tus.push(index.parser(source).parse()?)` because it needs a mutable borrow.
I wish the error message mentioned that `sym_table` borrows `tus`, because that's far from obvious. But once you see it, it's understandable why the borrow checker is not happy. | C-enhancement,A-diagnostics,P-medium,A-borrow-checker,T-compiler,E-medium,A-NLL | low | Critical |
360,588,651 | TypeScript | Only emit declarations for code that has an /** @external */ JSDoc annotation | ## Search Terms
external emit declaration
## Problem
When working with the `--stripInternal` compiler flag, you want to create a clean declaration file that only exposes the stuff your library users should use.
You as a library developer must add the `/* @internal */` annotation to all the parts of your code you don't want in the declaration file. 'Sloppy developers in the team' forget to set this annotation and expose stuff without realizing it.
## Suggestion
It would be handy if we could reverse this.
We could start with an empty declaration file when a compiler flag `--onlyExternal` is set, and only expose the things that have an `/* @external */` JSDoc annotation:
- `/* @external */` exposes the 'item' and the 'parent path' to access it.
- `/* @internal */` used inside an `/* @external */` parent hides the 'item' and all its children.
- `/* @external */` used inside an `/* @internal */` parent is ignored and stays internal.
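A hedged sketch of how these rules could play out (the `--onlyExternal` flag and the emit behavior described in the comments are the proposal above, not an existing compiler feature):
```ts
export namespace Api {
    /** @external */
    export function publicCall(): void {} // emitted, together with the
                                          // parent path `Api` needed to reach it

    export function helper(): void {}     // not annotated: omitted under --onlyExternal
}

/** @internal */
export namespace Impl {
    /** @external */
    export function ignored(): void {}    // @external inside @internal stays internal
}
```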
## Remarks
- When using the `/* @internal */` annotation, you mostly try to place it on top level structures (like namespaces). In these cases it is not a problem when code is added to the namespace. It is mostly a problem when new files or namespaces are added.
- When using the suggested `/* @external */` annotation, you normally will place it more deeply, like on specific functions, variables or classes to expose.
| Suggestion,Awaiting More Feedback | low | Minor |
360,591,742 | pytorch | Caffe2 installation: libcaffe2_gpu.so: undefined reference to `caffe2::ClipTransformRGB | Hi all,
When I try to install Caffe2 with CUDA, cuDNN, OpenCV and FFMPEG enabled, I get errors:
------------------------------------------------------------------------------------------------------
/home/jinchoi/src/pytorch_new2/build/lib/libcaffe2_gpu.so: undefined reference to `caffe2::ClipTransformRGB(unsigned char const*, int, int, int, int, int, int, int, int, int, int, int const*, int const*, bool, bool, float, float, float, bool, float, std::vector<std::vector<float, std::allocator<float> >, std::allocator<std::vector<float, std::allocator<float> > > > const&, std::vector<float, std::allocator<float> > const&, std::vector<float, std::allocator<float> > const&, std::vector<float, std::allocator<float> > const&, std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>*, float*)'
/home/jinchoi/src/pytorch_new2/build/lib/libcaffe2_gpu.so: undefined reference to `caffe2::ClipTransformOpticalFlow(unsigned char const*, int, int, int, int, int, int, int, cv::Rect_<int> const&, int, bool, int, int, int, bool, std::vector<float, std::allocator<float> > const&, std::vector<float, std::allocator<float> > const&, float*)'
/home/jinchoi/src/pytorch_new2/build/lib/libcaffe2_gpu.so: undefined reference to `caffe2::DecodeMultipleClipsFromVideo(char const*, std::string const&, int, caffe2::Params const&, int, int, bool, int&, int&, std::vector<unsigned char*, std::allocator<unsigned char*> >&)'
------------------------------------------------------------------------------------------------------
My environment information is as follows.
------------------------------------------------------------------------------------------------------
- Caffe2
- Build command you used (if compiling from source): `USE_OPENCV=1 USE_FFMPEG=1 FULL_CAFFE2=1 python setup.py install`
- OS: CentOS 7
- CUDA/cuDNN version: CUDA 8.0.61, cuDNN 6.0
- GPU models and configuration: P100
- GCC version (if compiling from source): 4.8.5
- CMake version: 3.12.2
------------------------------------------------------------------------------------------------------
My cmake output is as follows:
------------------------------------------------------------------------------------------------------
-- ******** Summary ********
-- General:
-- CMake version : 3.12.2
-- CMake command : /home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/bin/cmake
-- Git version : v0.1.11-10519-gf09054f8d-dirty
-- System : Linux
-- C++ compiler : /home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/bin/c++
-- C++ compiler version : 4.8.5
-- BLAS : MKL
-- CXX flags : -msse3 -msse4.1 -msse4.2 --std=c++11 -fvisibility-inlines-hidden -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized
-- Build type : Release
-- Compile definitions : ONNX_NAMESPACE=onnx_torch;USE_GCC_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
-- CMAKE_PREFIX_PATH : /home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/lib/python2.7/site-packages
-- CMAKE_INSTALL_PREFIX : /home/jinchoi/src/pytorch_new2/torch/lib/tmp_install
--
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 2.7.15
-- Python executable : /home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/bin/python
-- Pythonlibs version : 2.7.15
-- Python library : /home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/lib/python2.7
-- Python includes : /home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/include/python2.7
-- Python site-packages: lib/python2.7/site-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ASAN : OFF
-- USE_CUDA : 1
-- CUDA static link : 0
-- USE_CUDNN : ON
-- CUDA version : 8.0
-- cuDNN version : 6.0.21
-- CUDA root directory : /opt/apps/cuda/8.0.61
-- CUDA library : /usr/lib64/libcuda.so
-- cudart library : /opt/apps/cuda/8.0.61/lib64/libcudart_static.a;-pthread;dl;/usr/lib64/librt.so
-- cublas library : /opt/apps/cuda/8.0.61/lib64/libcublas.so;/opt/apps/cuda/8.0.61/lib64/libcublas_device.a
-- cufft library : /opt/apps/cuda/8.0.61/lib64/libcufft.so
-- curand library : /opt/apps/cuda/8.0.61/lib64/libcurand.so
-- cuDNN library : /home/jinchoi/lib/cudnn6/cuda/lib64/libcudnn.so
-- nvrtc : /opt/apps/cuda/8.0.61/lib64/libnvrtc.so
-- CUDA include path : /opt/apps/cuda/8.0.61/include
-- NVCC executable : /opt/apps/cuda/8.0.61/bin/nvcc
-- CUDA host compiler : /home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FFMPEG : ON
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_MKL :
-- USE_MOBILE_OPENGL : OFF
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : 1
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : ON
-- OpenCV version : 3.3.1
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : ON
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- TORCH_USE_CEREAL : OFF
-- Public Dependencies : Threads::Threads
-- Private Dependencies : nnpack;cpuinfo;/usr/lib64/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;opencv_videoio;opencv_video;/home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/lib/libavcodec.so;/home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/lib/libavformat.so;/home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/lib/libavutil.so;/home/jinchoi/pkg/anaconda2_nr/envs/cf2_new/lib/libswscale.so;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
CUDNN_LIB_DIR
------------------------------------------------------------------------------------------------------
It seems that `video_io.cc` is not built successfully for some reason.
Does anybody know how to resolve this issue? I googled the error, but there is no answer to this yet.
If I turn off OpenCV when I build Caffe2, the build succeeds; I get this error only when I build with OpenCV. | caffe2 | low | Critical |
360,605,523 | godot | JavaClassWrapper.wrap() doesn't seem to work | Basically, JavaClassWrapper.wrap() has never worked, but it is tempting for people trying to get access to Android classes. I did manage to get the Java classes from the function by returning the correct value, via a modification I made to the Godot code. But even then, it seems that the class deletes too many references, which makes the resulting JavaClass hollow and still useless.
I think it could work great, as the kivy project has something similar in [pyjnius](https://pyjnius.readthedocs.io/en/latest/).
I would like to get this to work, but I need help because I do not know C++ much at all.
**Godot version:**
ALL
**OS/device including version:**
Android
**Issue description:**
JavaClassWrapper.wrap() returns [Object:null]
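A minimal GDScript sketch of the failing call (the class name here is illustrative, not from the report):
```gdscript
# Running on an Android export; "android.os.Build" is just an example class.
var build_class = JavaClassWrapper.wrap("android.os.Build")
print(build_class) # prints [Object:null] instead of a usable JavaClass
```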
After my patch it returns [JavaClass], but it seems to be hollow: no class methods or class variables.
[Some proof that JavaClassWrapper.wrap() is actually getting a class from the JNI](https://gist.github.com/FeralBytes/c3a76dec37208c41798136653c5530e8). | bug,platform:android,topic:porting | low | Minor |
360,614,176 | opencv | cv::imread() won't work with .jp2 files after GetOpenFileName() dialog | I've encountered a weird issue that I seem unable to solve, involving the JPEG 2000 file format and the GetOpenFileName() dialog.
I've made a C++ program that asks for an image and then, later on, uses it as a texture, manipulates it etc. But here's the problem:
I ask for the path to the image file with OPENFILENAME and GetOpenFileName(). Here's some example code showing this very basic setup:
```
#include "pch.h"
#include <iostream>
#include <opencv2/opencv.hpp>
#include <Windows.h>
void SelectAndOpenImage()
{
OPENFILENAME openFile;
char filePath[MAX_PATH] = { 0 };
ZeroMemory(&openFile, sizeof(OPENFILENAME));
openFile.lStructSize = sizeof(OPENFILENAME);
openFile.hwndOwner = NULL;
openFile.lpstrFile = filePath;
openFile.lpstrFile[0] = '\0';
openFile.nMaxFile = MAX_PATH;
openFile.lpstrFilter = "Jpeg2000 (*.jp2)\0*.jp2\0Bitmap (*.bmp)\0*.bmp\0Jpeg (*.jpg)\0*.jpg\0Png (*.png)\0*.png\0";
openFile.nFilterIndex = 1;
openFile.lpstrInitialDir = NULL;
openFile.Flags = OFN_FILEMUSTEXIST | OFN_HIDEREADONLY | OFN_NOCHANGEDIR;
if (GetOpenFileName(&openFile))
{
cv::Mat image = cv::imread(openFile.lpstrFile, CV_LOAD_IMAGE_COLOR);
if (image.empty()) std::cout << "Empty" << std::endl;
else
{
cv::imshow("testing", image);
cv::waitKey(5000);
}
}
else
{
std::cout << "Failure" << std::endl;
}
}
int main()
{
SelectAndOpenImage();
return 0;
}
```
The problem is that:
- cv::imread() works correctly for every image file format except Jpeg2000, cv::imshow() shows just a black image (apparently, all data is full of zeros).
- Jpeg2000 works well along with the others if I do not use this GetOpenFileName() dialog at all (instead I just type the file path directly to the source code and call cv::imread() before GetOpenFileName()).
- If I use the dialog but ignore the file selection and use my own hard coded file path instead, Jpeg2000 file format won't work then either, again just a blank black image.
- All other image file formats work just fine, no matter how I do it.
I'm using OpenCV 3.4.2 Release, but tried the same thing with Debug version. With Debug, all the data is filled with 0xcd (205 as uchar), resulting in a nice gray blank image (obviously not what I was looking for).
I'm currently using Windows10 and Visual Studio 2017 version 15.8.2 (MSVC++ 14.15). I built OpenCV as static libs.
Any ideas what could be causing this? Later on, I'll be moving away from always-so-problematic GetOpenFileName(), but before that, I'd like to see this thing work. | category: imgcodecs,incomplete | low | Critical |
360,638,206 | opencv | How to modify the camera setting to support more than 8 usb cameras ? | Hi:
I am trying to use 16 cameras for an object-tracking task. I used to use OpenCV version 2.4.11, which only required modifying the code in modules/highgui/src ([cap_libv4l.cpp#L260](https://github.com/opencv/opencv/blob/2.4/modules/highgui/src/cap_libv4l.cpp#L260), [cap_v4l.cpp#L249](https://github.com/opencv/opencv/blob/2.4/modules/highgui/src/cap_v4l.cpp#L249)). For OpenCV versions 2.4.13 and 3.4.3, it looks like this setting no longer works. Does anyone have an idea how to solve this? | priority: low,category: videoio(camera) | low | Minor |
360,672,471 | rust | Tracking issue for `slice::partition_dedup/by/by_key` | Add three methods to the `slice` type, the `partition_dedup`, `partition_dedup_by` and `partition_dedup_by_key`. The two other methods are based on `slice::partition_dedup_by`.
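For reference, a usage sketch on nightly, mirroring the documentation example for the unstable `slice_partition_dedup` feature:
```rust
#![feature(slice_partition_dedup)]

fn main() {
    let mut slice = [1, 2, 2, 3, 3, 2, 1, 1];
    // Moves the duplicates to the end of the slice and returns both halves.
    let (dedup, duplicates) = slice.partition_dedup();
    assert_eq!(dedup, [1, 2, 3, 2, 1]);
    assert_eq!(duplicates, [2, 3, 1]);
}
```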
### Steps / History
- [x] Implement the feature as a PR (#54058)
- [ ] Stabilization PR ([see instructions on forge](https://forge.rust-lang.org/stabilization-guide.html))
### Unresolved Questions
- [ ] Should the methods only return one slice instead of a tuple with two slices? ([comment](https://github.com/rust-lang/rust/issues/54279#issuecomment-572494868))
- [ ] Annotate it with `#[must_use]`?
- [ ] https://github.com/rust-lang/rust/issues/34162 | T-libs-api,B-unstable,C-tracking-issue,A-slice,Libs-Tracked,Libs-Small | medium | Critical |
360,695,238 | rust | Only first cap-lints argument is used | `rustc --cap-lints=warn --cap-lints=foo` should give an error on the invalid "foo" passed, but currently this does nothing.
It is not completely clear what exact behavior we want on `--cap-lints=allow --cap-lints=forbid` for example, though.
Currently this means that there's no way to override Cargo's default behavior on dependencies via RUSTFLAGS, for example. | A-frontend,A-lints,T-compiler,C-bug,A-CLI | low | Critical |
360,695,893 | TypeScript | First class mapped (folded, appended, traversed, etc.) types | ## Search Terms
Mapped types, dependent typing, higher kinded types, type families
## Suggestion
TypeScript currently has baked in support for mapped types, with special syntax for transforming type level maps and type level lists (tuples). Borrowing an example from #26063:
```ts
type Box<T> = { value: T }
type Boxified<T> = { [P in keyof T]: Box<T[P]> }
type A = [string, number]
type MappedA = Boxified<A> // => [Box<string>, Box<number>]
type B = { foo: string, bar: number }
type MappedB = Boxified<B> // => { foo: Box<string>, bar: Box<number> }
```
The logic for when a type is treated as a mappable type unto itself, and when it is simply treated as a heterogeneous map for the purposes of type mapping, is built out of various edge cases.
The code above is analogous at the value level to writing the following:
```ts
const box = t => ({ value: t })
const boxified = t => MAP{ [p in t]: box(t[p]) } // Special syntax baked into the language
const a = ["hello", 1]
const mappedA = boxified(a) // => [{ value: "hello" }, { value: 1 }]
const b = { foo: "hello", bar: 1 }
const mappedB = boxified(b) // => { foo: { value: "hello" }, bar: { value: 1 } }
```
The `MAP{ [p in t]: box(t[p]) }` thing above stands out as something of an oddity; you don't usually see a "map operator" built directly into a language. Given a way to inductively break down and reconstruct data, users are perfectly capable of defining a `map` function for each type of data themselves:
```ts
// There are no linked lists in JS, so let's graft an algebra onto arrays
// for breaking them down and building them up recursively
const nil = []
const cons = x => xs => [x, ...xs]
const match = ({ nil, cons }) => xs =>
xs.length === 0 ? nil : cons(xs[0])(xs.slice(1))
// How to map over lists (ignoring the baked-in `.map` operation from the prototype for a sec)
const map = f =>
match({
nil: nil,
cons: x => xs => cons(f(x))(map(f)(xs))
})
console.log(map(x => x * 2)([1, 2, 2, 3]))
// => [2, 4, 4, 6]
```
Similarly, for maps:
```ts
// Again, need to graft on an algebra for deconstructing and reconstructing objects
const empty = {}
const withKey = k => v => o => ({ [k]: v, ...o })
const match = ({ empty, withKey }) => o => {
const keys = Object.keys(o)
if (keys.length === 0) return empty
const [k, ...ks] = keys
const { [k]: v, ...o_ } = o
return withKey(k)(v)(o_)
}
// How to map over objects
const map = f =>
match({
empty: empty,
withKey: k => v => o => withKey(k)(f(v))(map(f)(o))
})
console.log(map(x => x * 2)({ foo: 1, bar: 2, baz: 2, quux: 3 }))
// => { foo: 2, bar: 4, baz: 4, quux: 6 }
```
The important thing to notice here is that once we have a way to break down and reconstruct the data recursively, we don't need any language-level features to express mapping over the data; both implementations of `map` require only the ability to define and apply functions, and for such functions to themselves be valid arguments and results (i.e. the features found in a minimal lambda calculus).
Moving back to the type level, we can promote our value level implementation of `map` to a type constructor in pseudo-TypeScript:
```ts
type Nil = []
type Cons<X, R> = [X, ...R]
type Map<F, A> =
A extends Nil ? Nil :
A extends Cons<infer X, infer R> ? Cons<F<X>, Map<F, R>> :
never
// NB: we need a type level bottom that inhabits all kinds (corresponding
// to a type error), never is not the right answer here
type Box<V> = { value: V }
type A = [number, string]
type MappedA = Map<Box, A> // => [{ value: number }, { value: string }]
```
Using the ability to abstract over type level functions and to apply them, i.e. utilizing the lambda calculus at the type level, the user is able to define "mapped types" without the need for this to be implemented in the language.
In fact, mapped types (i.e. structure preserving transformations), are just one useful tool for massaging types. It would similarly be useful for the user to be able to fold down a type level list into a type level union (using the binary `|` type constructor), or to fold down a type level list of object types into their intersection (using the binary `&` type constructor).
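For instance, in the same pseudo-TypeScript as the `Map` example above (so again assuming HKTs and recursive type constructors, which today's compiler rejects), a general fold could subsume both of these:
```ts
type Fold<F, Z, A> =
    A extends Nil ? Z :
    A extends Cons<infer X, infer R> ? F<X, Fold<F, Z, R>> :
    never

type Union<X, Acc> = X | Acc
type UnionOf<A> = Fold<Union, never, A>              // [string, number] => string | number

type Intersect<X, Acc> = X & Acc
type IntersectionOf<A> = Fold<Intersect, unknown, A> // [{a: 1}, {b: 2}] => {a: 1} & {b: 2}
```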
AFAICT the primary missing features here are HKTs (#1213) and recursive type constructors (#6230), but it's possible there are other features related to spread types, conditional types, and `infer` that are missing. This provides good motivation for implementing these features in a sound way; it would allow powerful abstractions like mapped types to be implemented and extended in libraries rather than in the core language.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion | low | Critical |
360,731,349 | opencv | Opencv failed to run 17 tests with release configuration and debug configuration | Environment:
Windows Server 2016 + VS2017 Update 5 + opencv master branch latest source code
OpenCV fails 17 tests in both the release and debug configurations. Could you please take a look at this? Thanks in advance!
Steps to reproduce the behavior:
1.git clone https://github.com/opencv/opencv D:\OpenCV\src
2.git clone https://github.com/opencv/opencv_extra D:\OpenCV\src\extra
3.Open a VS 2017 x86 prompt and browse to D:\OpenCV
4.mkdir build_x86 && pushd build_x86
5.cmake -G "Visual Studio 15 2017" -DCMAKE_SYSTEM_VERSION=10.0.17134.0 -DWITH_IPP=ON -DBUILD_SHARED_LIBS=OFF -DBUILD_PERF_TESTS=ON -DBUILD_TESTS=ON -DBUILD_EXAMPLES=ON -DWITH_OPENCL=OFF -DBUILD_DOCS=OFF -DWITH_CUDA=OFF ..\src
6.msbuild /p:Configuration=Release;Platform=Win32 .\OpenCV.sln /t:Rebuild /m /p:BuildInParallel=true
7.pushd build_x86\bin\Release
8. opencv_test_calib3d.exe --gtest_filter=-fisheyeTest.rectify:Calib3d_StereoSGBM_HH4.regression:Calib3d_StereoBM.regression:Calib3d_StereoSGBM.regression
Note:
OPENCV_TEST_DATA_PATH env variable has been set and points to <opencv_extra>/testdata.
The whole log file please see attachment.
[log_x86_test_13.log](https://github.com/opencv/opencv/files/2387457/log_x86_test_13.log)
TestErrorMessage:
[----------] Global test environment tear-down
[==========] 82 tests from 44 test cases ran. (241581 ms total)
[ PASSED ] 65 tests.
[ FAILED ] 17 tests, listed below:
[ FAILED ] Calib3d_CalibrateCamera_C.regression
[ FAILED ] Calib3d_CalibrateCamera_CPP.regression
[ FAILED ] Calib3d_StereoCalibrate_C.regression
[ FAILED ] Calib3d_StereoCalibrate_CPP.regression
[ FAILED ] Calib3d_ChessboardDetector.accuracy
[ FAILED ] Calib3d_ChessboardDetector.timing
[ FAILED ] Calib3d_ChessboardDetector2.accuracy
[ FAILED ] Calib3d_CirclesPatternDetector.accuracy
[ FAILED ] Calib3d_AsymmetricCirclesPatternDetector.accuracy
[ FAILED ] Calib3d_AsymmetricCirclesPatternDetectorWithClustering.accuracy
[ FAILED ] fisheyeTest.Calibration
[ FAILED ] fisheyeTest.Homography
[ FAILED ] fisheyeTest.EstimateUncertainties
[ FAILED ] fisheyeTest.stereoRectify
[ FAILED ] fisheyeTest.stereoCalibrate
[ FAILED ] fisheyeTest.stereoCalibrateFixIntrinsic
[ FAILED ] Calib3d_Homography.fromImages
| priority: low,category: calib3d,platform: win32 | low | Critical |
360,767,799 | rust | Allow the `Iterator::partition` method to collect in two different collections | The current `Iterator::partition` signature look like this:
```rust
fn partition<B, F>(self, mut f: F) -> (B, B) where
Self: Sized,
B: Default + Extend<Self::Item>,
F: FnMut(&Self::Item) -> bool
{
// ...
}
```
This forces the user to collect into two collections of the same type for the two different "sides".
A partition method with a signature like the following allows more flexibility, letting the user specify the two collections independently.
```rust
fn partition<B, C, F>(self, mut f: F) -> (B, C) where
Self: Sized,
B: Default + Extend<Self::Item>,
C: Default + Extend<Self::Item>,
F: FnMut(&Self::Item) -> bool
{
// ...
}
```
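To illustrate what the relaxed signature would permit (a sketch; this does not compile with today's `partition`, which requires both sides to have the same type):
```rust
use std::collections::VecDeque;

fn main() {
    // Hypothetical under the proposed signature: each side picks its own
    // collection type, as long as both implement Default + Extend.
    let (evens, odds): (Vec<i32>, VecDeque<i32>) = (0..10).partition(|n| n % 2 == 0);
    println!("{:?} {:?}", evens, odds);
}
```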
This is something that could not be changed in the snap of a finger; it is not backward compatible, in the sense that the new function signature asks for 3 generic types. It could have been fixed by a default generic (i.e. `C = B`), but defaults are not allowed in function generic parameters.
So this issue will be here for a moment, I think. However, [I have done the update][1] just to see the compatibility problem.
The other solution could be to add something like `Iterator::partition_distinct`.
[1]: https://github.com/rust-lang/rust/compare/master...Kerollmops:iterator-partition-distinct-collections | T-libs-api,A-iterators | low | Major |
360,783,235 | neovim | RPC: Implement an analogue for ch_logfile | Vim 8.0 has a function called `ch_logfile` for logging all of the activity for channels and jobs to a file conveniently. This makes it easy to figure out where some interaction between some plugin and a program, especially a language server, might be going wrong. It would be very useful to have a similar function in NeoVim. Right now, I often advise users to try and repeat their bugs in Vim instead of NeoVim, so I can see the log file that `ch_logfile` creates. | enhancement,channels-rpc,revisit-at-release,logging | low | Critical |
360,784,986 | go | time: excessive CPU usage when using Ticker and Sleep | ### What version of Go are you using (`go version`)?
go1.11 linux/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/lni/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/lni/golang_ws"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build440058908=/tmp/go-build -gno-record-gcc-switches"
### What did you do?
I need to call a function roughly every millisecond; there is no real-time requirement, and as long as it is called roughly every millisecond everything is fine. However, I realised that both time.Ticker and time.Sleep() cause excessive CPU overhead, probably due to runtime scheduling.
The following Go program uses 20-25% %CPU as reported by top on Linux.
```
package main
import (
"time"
)
func main() {
ticker := time.NewTicker(time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ticker.C:
}
}
}
```
A for loop with range causes similar overhead:
```
package main
import (
"time"
)
func main() {
ticker := time.NewTicker(time.Millisecond)
defer ticker.Stop()
for range ticker.C {
}
}
```
while the following program shows 10-15% %CPU in top:
```
package main
import (
"time"
)
func main() {
for {
time.Sleep(time.Millisecond)
}
}
```
To work around the issue, I had to move the ticker/sleep part to C and let the C code call the Go function that needs to be invoked every millisecond. Such an ugly cgo-based hack reduced the %CPU in top to 2-3%. Please see the proof-of-concept code below.
ticker.h
```
#ifndef TEST_TICKER_H
#define TEST_TICKER_H
void cticker();
#endif // TEST_TICKER_H
```
ticker.c
```
#include <unistd.h>
#include "ticker.h"
#include "_cgo_export.h"
void cticker()
{
for(int i = 0; i < 30000; i++)
{
usleep(1000);
Gotask();
}
}
```
ticker.go
```
package main
/*
#include "ticker.h"
*/
import "C"
import (
"log"
)
var (
counter uint64 = 0
)
//export Gotask
func Gotask() {
counter++
}
func main() {
C.cticker()
log.Printf("Gotask called %d times", counter)
}
```
### What did you expect to see?
Much lower CPU overhead when using time.Ticker or time.Sleep()
### What did you see instead?
20-25% %CPU in top | Performance,NeedsInvestigation | high | Critical |
360,801,349 | rust | unused_must_use lint after `write!` fails to note its origin | Consider the following code ([play](https://play.rust-lang.org/?gist=411e153d8d1bc1a04a7538575aa02d31&version=nightly&mode=debug&edition=2015)):
```rust
// Uncomment the `cfg` to see expected output noting `#[warn(unused_must_use)]`
// #[cfg(without_write)]
pub fn encode_json<T: std::fmt::Display>(val: &T, wr: &mut Vec<u8>) {
use std::io::Write;
write!(wr, "{}", val);
}
pub fn after_write_macro() {
Result::Ok::<_, ()>(());
}
fn main() {
}
```
It generates the following warning:
```
warning: unused `std::result::Result` which must be used
--> src/main.rs:9:5
|
9 | Result::Ok::<_, ()>(());
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this `Result` may be an `Err` variant, which should be handled
```
which is *almost*, but not quite, what I expect to see.
Namely, I expect to see this:
```
warning: unused `std::result::Result` which must be used
--> src/main.rs:9:5
|
9 | Result::Ok::<_, ()>(());
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: #[warn(unused_must_use)] on by default
= note: this `Result` may be an `Err` variant, which should be handled
```
which includes the useful `note: #[warn(unused_must_use)] on by default`
As far as I can tell, the choice whether or not to include that note is based on the presence/absence of the earlier `write!` invocation.
(Hypothesis: perhaps there is an occurrence of an unused must_use within that macro's expansion that we are silencing, but even though it is silenced, it is erroneously counted in whatever mechanism we are using to avoid printing out multiple redundant instances of notes like `note: #[warn(unused_must_use)] on by default`?) | A-lints,T-compiler | low | Critical |
360,819,467 | opencv | CUDA failing tests | ##### System information (version)
- OpenCV => 4.0.0-pre
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2013 (Most likely independent from the compiler)
- WITH_CUDA
##### Detailed description
Several errors are related to threshold settings, others to recent color-conversion changes, and some to other causes.
##### The following are already fixed:
- `cuda::meanShiftSegmentation`
- `cuda::polarToCart`
- `CvtColor.Luv_to_*`
- `StereoBeliefPropagation`
- `cuda::BackgroundSubtractorMOG2`
##### Details of currently failing tests
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/MeanShift.Filtering/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/MeanShift.Filtering/*
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from CUDA_ImgProc/MeanShift
[ RUN ] CUDA_ImgProc/MeanShift.Filtering/0, where GetParam() = GeForce GTX 1080
F:\opencv_contrib\modules\cudaimgproc\test\test_mean_shift.cpp(94): error: The max difference between matrices "img_template" and "result" is 14 at (246, 382), which exceeds "0.0", where "img_template" at (246, 382) evaluates to (91, 134, 103), "result" at (246, 382) evaluates to (79, 120, 98), "0.0" evaluates to 0
[ FAILED ] CUDA_ImgProc/MeanShift.Filtering/0, where GetParam() = GeForce GTX 1080 (1080 ms)
[ RUN ] CUDA_ImgProc/MeanShift.Filtering/1, where GetParam() = GeForce GTX 1080
F:\opencv_contrib\modules\cudaimgproc\test\test_mean_shift.cpp(94): error: The max difference between matrices "img_template" and "result" is 14 at (246, 382), which exceeds "0.0", where "img_template" at (246, 382) evaluates to (91, 134, 103), "result" at (246, 382) evaluates to (79, 120, 98), "0.0" evaluates to 0
[ FAILED ] CUDA_ImgProc/MeanShift.Filtering/1, where GetParam() = GeForce GTX 1080 (927 ms)
[----------] 2 tests from CUDA_ImgProc/MeanShift (2016 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (2026 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 2 tests, listed below:
[ FAILED ] CUDA_ImgProc/MeanShift.Filtering/0, where GetParam() = GeForce GTX 1080
[ FAILED ] CUDA_ImgProc/MeanShift.Filtering/1, where GetParam() = GeForce GTX 1080
2 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/MeanShift.Proc/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/MeanShift.Proc/*
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from CUDA_ImgProc/MeanShift
[ RUN ] CUDA_ImgProc/MeanShift.Proc/0, where GetParam() = GeForce GTX 1080
F:\opencv_contrib\modules\cudaimgproc\test\test_mean_shift.cpp(120): error: The max difference between matrices "spmap_template" and "spmap" is 14 at (246, 382), which exceeds "0.0", where "spmap_template" at (246, 382) evaluates to (409, 245), "spmap" at (246, 382) evaluates to (395, 246), "0.0" evaluates to 0
[ FAILED ] CUDA_ImgProc/MeanShift.Proc/0, where GetParam() = GeForce GTX 1080 (1064 ms)
[ RUN ] CUDA_ImgProc/MeanShift.Proc/1, where GetParam() = GeForce GTX 1080
F:\opencv_contrib\modules\cudaimgproc\test\test_mean_shift.cpp(120): error: The max difference between matrices "spmap_template" and "spmap" is 14 at (246, 382), which exceeds "0.0", where "spmap_template" at (246, 382) evaluates to (409, 245), "spmap" at (246, 382) evaluates to (395, 246), "0.0" evaluates to 0
[ FAILED ] CUDA_ImgProc/MeanShift.Proc/1, where GetParam() = GeForce GTX 1080 (886 ms)
[----------] 2 tests from CUDA_ImgProc/MeanShift (1959 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (1965 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 2 tests, listed below:
[ FAILED ] CUDA_ImgProc/MeanShift.Proc/0, where GetParam() = GeForce GTX 1080
[ FAILED ] CUDA_ImgProc/MeanShift.Proc/1, where GetParam() = GeForce GTX 1080
2 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/Canny.Async/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/Canny.Async/12
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from CUDA_ImgProc/Canny
[ RUN ] CUDA_ImgProc/Canny.Async/12, where GetParam() = (GeForce GTX 1080, AppertureSize(3), L2gradient(false), whole matrix)
unknown file: error: C++ exception with description "OpenCV(4.0.0-pre) F:/opencv_contrib/modules/cudaimgproc/src/cuda/canny.cu:201: error: (-217:Gpu API call) an illegal memory access was encountered in function 'canny::calcMagnitude'
" thrown in the test body.
[ FAILED ] CUDA_ImgProc/Canny.Async/12, where GetParam() = (GeForce GTX 1080, AppertureSize(3), L2gradient(false), whole matrix) (17393 ms)
[----------] 1 test from CUDA_ImgProc/Canny (17415 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (17439 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] CUDA_ImgProc/Canny.Async/12, where GetParam() = (GeForce GTX 1080, AppertureSize(3), L2gradient(false), whole matrix)
1 FAILED TEST
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.BGR2GRAY/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.BGR2GRAY/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (5, 81), which exceeds "1e-5", where "dst_gold" at (5, 81) evaluates to (45), "dst" at (5, 81) evaluates to (46), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix) (956 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (6, 6), which exceeds "1e-5", where "dst_gold" at (6, 6) evaluates to (107), "dst" at (6, 6) evaluates to (108), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/2 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/3 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/4 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/5 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (2, 36), which exceeds "1e-5", where "dst_gold" at (2, 36) evaluates to (127), "dst" at (2, 36) evaluates to (126), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 11), which exceeds "1e-5", where "dst_gold" at (0, 11) evaluates to (166), "dst" at (0, 11) evaluates to (165), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/8 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/9 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/10 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/11 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (3, 10), which exceeds "1e-5", where "dst_gold" at (3, 10) evaluates to (103), "dst" at (3, 10) evaluates to (102), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix) (870 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 31), which exceeds "1e-5", where "dst_gold" at (0, 31) evaluates to (155), "dst" at (0, 31) evaluates to (154), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/14 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/15 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/16 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/17 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (1, 100), which exceeds "1e-5", where "dst_gold" at (1, 100) evaluates to (181), "dst" at (1, 100) evaluates to (180), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix) (11 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(165): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 14), which exceeds "1e-5", where "dst_gold" at (0, 14) evaluates to (136), "dst" at (0, 14) evaluates to (135), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/20 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/21 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/22 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2GRAY/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2GRAY/23 (3 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2011 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2031 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.BGR2Lab/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.BGR2Lab/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/0 (1178 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/1 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/2 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/3 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.549148 at (65, 37), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (65, 37) evaluates to (9.24683, -11.7813, 9.39063), "dst" at (65, 37) evaluates to (9.41736, -12.3304, 9.68835), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.421126 at (3, 25), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (3, 25) evaluates to (9.57031, -6.51563, 5.53125), "dst" at (3, 25) evaluates to (9.70393, -6.93675, 5.66072), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/6 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/7 (7 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/8 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/9 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.417459 at (59, 61), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (59, 61) evaluates to (6.90918, 14.3594, -12.4531), "dst" at (59, 61) evaluates to (6.90817, 14.7768, -12.7167), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (13 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.446589 at (62, 42), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (62, 42) evaluates to (6.14014, 8.42188, 4.70313), "dst" at (62, 42) evaluates to (6.14831, 8.86846, 4.72926), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (14 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/12 (802 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/13 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/14 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.412675 at (43, 117), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (43, 117) evaluates to (10.6201, -5.3125, -1.71875), "dst" at (43, 117) evaluates to (10.7797, -5.72517, -1.49808), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.43129 at (114, 6), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (114, 6) evaluates to (5.63965, 21.4219, 6.625), "dst" at (114, 6) evaluates to (5.69825, 21.8532, 6.55611), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/18 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/19 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/20 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGR2Lab/21 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.432425 at (85, 77), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (85, 77) evaluates to (13.0371, -12.3125, 8.39063), "dst" at (85, 77) evaluates to (13.2111, -12.7449, 8.68957), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (6 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGR2Lab/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1628): error: The max difference between matrices "dst_gold" and "dst" is 0.45595 at (35, 86), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (35, 86) evaluates to (10.4797, -15.1719, 10.2031), "dst" at (35, 86) evaluates to (10.7019, -15.5761, 10.6591), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2148 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2169 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGR2Lab/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.BGRA2Lab4/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.BGRA2Lab4/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/0 (1250 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/1 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/2 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/3 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.539009 at (65, 37), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (65, 37) evaluates to (9.14917, -12.1719, 8.09375), "h_dst" at (65, 37) evaluates to (9.32361, -12.7109, 8.42758), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.424875 at (95, 104), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (95, 104) evaluates to (10.0159, -8.125, 4.5), "h_dst" at (95, 104) evaluates to (10.2047, -8.49723, 4.92488), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/6 (11 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/7 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/8 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/9 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.466402 at (59, 61), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (59, 61) evaluates to (7.55005, 15.375, -6.85938), "h_dst" at (59, 61) evaluates to (7.53144, 15.8414, -7.16155), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (7 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.513504 at (68, 42), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (68, 42) evaluates to (8.5083, -10.1406, 10.2656), "h_dst" at (68, 42) evaluates to (8.64621, -10.6541, 10.4561), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/12 (877 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/13 (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/14 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.440076 at (11, 16), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (11, 16) evaluates to (5.66406, 7.73438, -11.4375), "h_dst" at (11, 16) evaluates to (5.63392, 7.8678, -11.8776), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.414804 at (17, 97), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (17, 97) evaluates to (7.89795, 16.3438, 9.71875), "h_dst" at (17, 97) evaluates to (7.93877, 16.7586, 9.71248), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/18 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/19 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/20 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.BGRA2Lab4/21 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.508063 at (95, 46), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (95, 46) evaluates to (6.51245, 11.6094, 7.51563), "h_dst" at (95, 46) evaluates to (6.47419, 12.1174, 7.45142), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.BGRA2Lab4/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1669): error: The max difference between matrices "dst_gold" and "h_dst" is 0.496662 at (84, 77), which exceeds "depth == 0 ? 1 : 1e-3", where "dst_gold" at (84, 77) evaluates to (9.10645, -8.14063, 9.25), "h_dst" at (84, 77) evaluates to (9.18162, -8.63729, 9.38914), "depth == 0 ? 1 : 1e-3" evaluates to 0.001
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2318 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2325 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.BGRA2Lab4/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.Lab2BGR/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.Lab2BGR/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/0 (1155 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/1 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/2 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/3 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.000969685 at (32, 109), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (32, 109) evaluates to (0.0536261, 0.978481, 0), "dst" at (32, 109) evaluates to (0.0536264, 0.978481, -0.000969685), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.000589545 at (57, 69), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (57, 69) evaluates to (0.896404, 0, 0.982517), "dst" at (57, 69) evaluates to (0.896404, -0.000589545, 0.982516), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/6 (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/7 (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/8 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/9 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.000550881 at (64, 2), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (64, 2) evaluates to (0.83979, 0, 0.55294), "dst" at (64, 2) evaluates to (0.83979, -0.000550881, 0.552941), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.000517055 at (83, 43), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (83, 43) evaluates to (0, 0.998051, 0.106375), "dst" at (83, 43) evaluates to (-0.000517055, 0.998051, 0.106376), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/12 (865 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/13 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/14 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.00149032 at (116, 24), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (116, 24) evaluates to (0.593788, 0.970716, 0), "dst" at (116, 24) evaluates to (0.593788, 0.970717, -0.00149032), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.000483242 at (124, 57), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (124, 57) evaluates to (0.523473, 0, 0.945398), "dst" at (124, 57) evaluates to (0.523473, -0.000483242, 0.945397), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/18 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/19 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/20 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGR/21 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.000539942 at (74, 22), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (74, 22) evaluates to (0.839861, 0, 0.92001), "dst" at (74, 22) evaluates to (0.839861, -0.000539942, 0.92001), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGR/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1743): error: The max difference between matrices "dst_gold" and "dst" is 0.000444587 at (91, 8), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (91, 8) evaluates to (0.145019, 0.935527, 0), "dst" at (91, 8) evaluates to (0.145019, 0.935527, -0.000444587), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (7 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2243 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2253 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGR/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.Lab2BGRA/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.Lab2BGRA/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/0 (1167 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/1 (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/2 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/3 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.000969685 at (32, 109), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (32, 109) evaluates to (0.0536261, 0.978481, 0, 1), "dst" at (32, 109) evaluates to (0.0536264, 0.978481, -0.000969685, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.000589545 at (57, 69), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (57, 69) evaluates to (0.896404, 0, 0.982517, 1), "dst" at (57, 69) evaluates to (0.896404, -0.000589545, 0.982516, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/6 (6 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/7 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/8 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/9 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.000550881 at (64, 2), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (64, 2) evaluates to (0.83979, 0, 0.55294, 1), "dst" at (64, 2) evaluates to (0.83979, -0.000550881, 0.552941, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.000517055 at (83, 43), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (83, 43) evaluates to (0, 0.998051, 0.106375, 1), "dst" at (83, 43) evaluates to (-0.000517055, 0.998051, 0.106376, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (15 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/12 (868 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/13 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/14 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.00149032 at (116, 24), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (116, 24) evaluates to (0.593788, 0.970716, 0, 1), "dst" at (116, 24) evaluates to (0.593788, 0.970717, -0.00149032, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.000483242 at (124, 57), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (124, 57) evaluates to (0.523473, 0, 0.945398, 1), "dst" at (124, 57) evaluates to (0.523473, -0.000483242, 0.945397, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/18 (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/19 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/20 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2BGRA/21 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.000539942 at (74, 22), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (74, 22) evaluates to (0.839861, 0, 0.92001, 1), "dst" at (74, 22) evaluates to (0.839861, -0.000539942, 0.92001, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (7 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2BGRA/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1779): error: The max difference between matrices "dst_gold" and "dst" is 0.000444587 at (91, 8), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (91, 8) evaluates to (0.145019, 0.935527, 0, 1), "dst" at (91, 8) evaluates to (0.145019, 0.935527, -0.000444587, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (12 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2246 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2255 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2BGRA/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.Lab2LBGR/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.Lab2LBGR/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/0 (1141 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/1 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/2 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/3 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 0.00033617 at (67, 5), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (67, 5) evaluates to (0.880201, 0.0143441, 1), "dst" at (67, 5) evaluates to (0.880201, 0.014344, 1.00034), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 4.56004e-005 at (57, 69), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (57, 69) evaluates to (0.780315, 0, 0.960688), "dst" at (57, 69) evaluates to (0.780316, -4.56004e-005, 0.960688), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/6 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/7 (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/8 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/9 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 4.26098e-005 at (64, 2), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (64, 2) evaluates to (0.67348, 0, 0.266355), "dst" at (64, 2) evaluates to (0.673481, -4.26098e-005, 0.266355), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (13 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 3.99934e-005 at (83, 43), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (83, 43) evaluates to (0, 0.995571, 0.0110409), "dst" at (83, 43) evaluates to (-3.99934e-005, 0.995571, 0.0110409), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/12 (798 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/13 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/14 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 0.000115275 at (116, 24), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (116, 24) evaluates to (0.311344, 0.934673, 0), "dst" at (116, 24) evaluates to (0.311344, 0.934672, -0.000115275), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 5.72205e-005 at (109, 86), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (109, 86) evaluates to (1, 0.00757743, 0.285055), "dst" at (109, 86) evaluates to (1.00006, 0.00757725, 0.285055), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/18 (7 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/19 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/20 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LBGR/21 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 0.000108123 at (15, 110), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (15, 110) evaluates to (0.00908916, 1, 0.656073), "dst" at (15, 110) evaluates to (0.00908923, 1.00011, 0.656074), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LBGR/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1796): error: The max difference between matrices "dst_gold" and "dst" is 3.43881e-005 at (91, 8), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (91, 8) evaluates to (0.0184828, 0.859554, 0), "dst" at (91, 8) evaluates to (0.0184828, 0.859554, -3.43881e-005), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (2 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2093 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2100 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LBGR/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.Lab2LRGB/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.Lab2LRGB/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/0 (1114 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/1 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/2 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/3 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 0.00033617 at (67, 5), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (67, 5) evaluates to (1, 0.0143441, 0.880201), "dst" at (67, 5) evaluates to (1.00034, 0.014344, 0.880201), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 4.56004e-005 at (57, 69), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (57, 69) evaluates to (0.960688, 0, 0.780315), "dst" at (57, 69) evaluates to (0.960688, -4.56004e-005, 0.780316), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/6 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/7 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/8 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/9 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 4.26098e-005 at (64, 2), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (64, 2) evaluates to (0.266355, 0, 0.67348), "dst" at (64, 2) evaluates to (0.266355, -4.26098e-005, 0.673481), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (7 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 3.99934e-005 at (83, 43), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (83, 43) evaluates to (0.0110409, 0.995571, 0), "dst" at (83, 43) evaluates to (0.0110409, 0.995571, -3.99934e-005), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (14 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/12 (829 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/13 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/14 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 0.000115275 at (116, 24), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (116, 24) evaluates to (0, 0.934673, 0.311344), "dst" at (116, 24) evaluates to (-0.000115275, 0.934672, 0.311344), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 5.72205e-005 at (109, 86), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (109, 86) evaluates to (0.285055, 0.00757743, 1), "dst" at (109, 86) evaluates to (0.285055, 0.00757725, 1.00006), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/18 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/19 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/20 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGB/21 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 0.000108123 at (15, 110), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (15, 110) evaluates to (0.656073, 1, 0.00908916), "dst" at (15, 110) evaluates to (0.656074, 1.00011, 0.00908923), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGB/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1813): error: The max difference between matrices "dst_gold" and "dst" is 3.43881e-005 at (91, 8), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (91, 8) evaluates to (0, 0.859554, 0.0184828), "dst" at (91, 8) evaluates to (-3.43881e-005, 0.859554, 0.0184828), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (2 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2132 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2141 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGB/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.Lab2LRGBA/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.Lab2LRGBA/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/0 (1342 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/1 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/2 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/3 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 0.00033617 at (67, 5), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (67, 5) evaluates to (1, 0.0143441, 0.880201, 1), "dst" at (67, 5) evaluates to (1.00034, 0.014344, 0.880201, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 4.56004e-005 at (57, 69), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (57, 69) evaluates to (0.960688, 0, 0.780315, 1), "dst" at (57, 69) evaluates to (0.960688, -4.56004e-005, 0.780316, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/6 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/7 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/8 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/9 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 4.26098e-005 at (64, 2), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (64, 2) evaluates to (0.266355, 0, 0.67348, 1), "dst" at (64, 2) evaluates to (0.266355, -4.26098e-005, 0.673481, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 3.99934e-005 at (83, 43), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (83, 43) evaluates to (0.0110409, 0.995571, 0, 1), "dst" at (83, 43) evaluates to (0.0110409, 0.995571, -3.99934e-005, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/12 (858 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/13 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/14 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 0.000115275 at (116, 24), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (116, 24) evaluates to (0, 0.934673, 0.311344, 1), "dst" at (116, 24) evaluates to (-0.000115275, 0.934672, 0.311344, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 5.72205e-005 at (109, 86), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (109, 86) evaluates to (0.285055, 0.00757743, 1, 1), "dst" at (109, 86) evaluates to (0.285055, 0.00757725, 1.00006, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/18 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/19 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/20 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2LRGBA/21 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 0.000108123 at (15, 110), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (15, 110) evaluates to (0.656073, 1, 0.00908916, 1), "dst" at (15, 110) evaluates to (0.656074, 1.00011, 0.00908923, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2LRGBA/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1830): error: The max difference between matrices "dst_gold" and "dst" is 3.43881e-005 at (91, 8), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (91, 8) evaluates to (0, 0.859554, 0.0184828, 1), "dst" at (91, 8) evaluates to (-3.43881e-005, 0.859554, 0.0184828, 1), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2370 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2391 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2LRGBA/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.Lab2RGB/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.Lab2RGB/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/0 (1150 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/1 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/2 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/3 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.000969685 at (32, 109), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (32, 109) evaluates to (0, 0.978481, 0.0536261), "dst" at (32, 109) evaluates to (-0.000969685, 0.978481, 0.0536264), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.000589545 at (57, 69), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (57, 69) evaluates to (0.982517, 0, 0.896404), "dst" at (57, 69) evaluates to (0.982516, -0.000589545, 0.896404), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/6 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/7 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/8 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/9 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.000550881 at (64, 2), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (64, 2) evaluates to (0.55294, 0, 0.83979), "dst" at (64, 2) evaluates to (0.552941, -0.000550881, 0.83979), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (6 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.000517055 at (83, 43), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (83, 43) evaluates to (0.106375, 0.998051, 0), "dst" at (83, 43) evaluates to (0.106376, 0.998051, -0.000517055), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/12 (849 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/13 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/14 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/15 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.00149032 at (116, 24), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (116, 24) evaluates to (0, 0.970716, 0.593788), "dst" at (116, 24) evaluates to (-0.00149032, 0.970717, 0.593788), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix) (6 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.000483242 at (124, 57), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (124, 57) evaluates to (0.945398, 0, 0.523473), "dst" at (124, 57) evaluates to (0.945397, -0.000483242, 0.523473), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/18 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/19 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/20 (0 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.Lab2RGB/21 (1 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.000539942 at (74, 22), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (74, 22) evaluates to (0.92001, 0, 0.839861), "dst" at (74, 22) evaluates to (0.92001, -0.000539942, 0.839861), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.Lab2RGB/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(1760): error: The max difference between matrices "dst_gold" and "dst" is 0.000444587 at (91, 8), which exceeds "depth == 0 ? 2 : 1e-5", where "dst_gold" at (91, 8) evaluates to (0, 0.935527, 0.145019), "dst" at (91, 8) evaluates to (-0.000444587, 0.935527, 0.145019), "depth == 0 ? 2 : 1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix) (3 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (2197 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (2230 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.Lab2RGB/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.RGB2GRAY/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.RGB2GRAY/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (2, 51), which exceeds "1e-5", where "dst_gold" at (2, 51) evaluates to (124), "dst" at (2, 51) evaluates to (125), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix) (965 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (16, 23), which exceeds "1e-5", where "dst_gold" at (16, 23) evaluates to (155), "dst" at (16, 23) evaluates to (154), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/2 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/3 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/4 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/5 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (3, 27), which exceeds "1e-5", where "dst_gold" at (3, 27) evaluates to (80), "dst" at (3, 27) evaluates to (81), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix) (8 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 4), which exceeds "1e-5", where "dst_gold" at (0, 4) evaluates to (154), "dst" at (0, 4) evaluates to (153), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix) (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/8 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/9 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/10 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/11 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (2, 63), which exceeds "1e-5", where "dst_gold" at (2, 63) evaluates to (235), "dst" at (2, 63) evaluates to (234), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix) (805 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 14), which exceeds "1e-5", where "dst_gold" at (0, 14) evaluates to (33), "dst" at (0, 14) evaluates to (34), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/14 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/15 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/16 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/17 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (1, 111), which exceeds "1e-5", where "dst_gold" at (1, 111) evaluates to (51), "dst" at (1, 111) evaluates to (52), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix) (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(178): error: The max difference between matrices "dst_gold" and "dst" is 1 at (1, 85), which exceeds "1e-5", where "dst_gold" at (1, 85) evaluates to (90), "dst" at (1, 85) evaluates to (91), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix) (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/20 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/21 (11 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/22 (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGB2GRAY/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGB2GRAY/23 (4 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (1971 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (1977 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGB2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
8 FAILED TESTS
```
</details>
<details>
<summary>cudaimgproc --gtest_filter=CUDA_ImgProc/CvtColor.RGBA2GRAY/*</summary>
<br />
```
Note: Google Test filter = CUDA_ImgProc/CvtColor.RGBA2GRAY/*
[==========] Running 24 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 24 tests from CUDA_ImgProc/CvtColor
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (5, 81), which exceeds "1e-5", where "dst_gold" at (5, 81) evaluates to (45), "dst" at (5, 81) evaluates to (46), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix) (875 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (6, 6), which exceeds "1e-5", where "dst_gold" at (6, 6) evaluates to (107), "dst" at (6, 6) evaluates to (108), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/2, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/2 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/3, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/3 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/4, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/4 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/5, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/5 (11 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (2, 36), which exceeds "1e-5", where "dst_gold" at (2, 36) evaluates to (127), "dst" at (2, 36) evaluates to (126), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix) (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 11), which exceeds "1e-5", where "dst_gold" at (0, 11) evaluates to (166), "dst" at (0, 11) evaluates to (165), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix) (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/8, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/8 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/9, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/9 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/10, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/10 (8 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/11, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/11 (5 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (3, 10), which exceeds "1e-5", where "dst_gold" at (3, 10) evaluates to (103), "dst" at (3, 10) evaluates to (102), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix) (797 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 31), which exceeds "1e-5", where "dst_gold" at (0, 31) evaluates to (155), "dst" at (0, 31) evaluates to (154), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix) (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/14, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/14 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/15, where GetParam() = (GeForce GTX 1080, 128x128, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/15 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/16, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/16 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/17, where GetParam() = (GeForce GTX 1080, 128x128, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/17 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (1, 100), which exceeds "1e-5", where "dst_gold" at (1, 100) evaluates to (181), "dst" at (1, 100) evaluates to (180), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix) (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
F:\opencv_contrib\modules\cudaimgproc\test\test_color.cpp(234): error: The max difference between matrices "dst_gold" and "dst" is 1 at (0, 14), which exceeds "1e-5", where "dst_gold" at (0, 14) evaluates to (136), "dst" at (0, 14) evaluates to (135), "1e-5" evaluates to 1e-005
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix) (6 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/20, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/20 (4 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/21, where GetParam() = (GeForce GTX 1080, 113x113, CV_16U, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/21 (2 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/22, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, whole matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/22 (3 ms)
[ RUN ] CUDA_ImgProc/CvtColor.RGBA2GRAY/23, where GetParam() = (GeForce GTX 1080, 113x113, CV_32F, sub matrix)
[ OK ] CUDA_ImgProc/CvtColor.RGBA2GRAY/23 (3 ms)
[----------] 24 tests from CUDA_ImgProc/CvtColor (1849 ms total)
[----------] Global test environment tear-down
[==========] 24 tests from 1 test case ran. (1856 ms total)
[ PASSED ] 16 tests.
[ FAILED ] 8 tests, listed below:
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/0, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/1, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/6, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/7, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/12, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/13, where GetParam() = (GeForce GTX 1080, 128x128, CV_8U, sub matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/18, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, whole matrix)
[ FAILED ] CUDA_ImgProc/CvtColor.RGBA2GRAY/19, where GetParam() = (GeForce GTX 1080, 113x113, CV_8U, sub matrix)
8 FAILED TESTS
```
</details> | bug,test,priority: low,category: gpu/cuda (contrib) | low | Critical |
360,858,197 | rust | Aarch64-Windows: Cannot build libcore with exception handling enabled | Trying to cross-compile libcore for `aarch64-pc-windows-msvc` fails with (see also https://github.com/rust-lang/rust/issues/54190#issuecomment-421968456):
```
Building stage2 test artifacts (x86_64-pc-windows-msvc -> aarch64-pc-windows-msvc)
Compiling term v0.0.0 (file:///C:/msys64/home/mw/2-rust/src/libterm)
Compiling getopts v0.2.17
Compiling test v0.0.0 (file:///C:/msys64/home/mw/2-rust/src/libtest)
error: cannot link together two panic runtimes: panic_abort and panic_unwind
error: aborting due to previous error
error: Could not compile `test`.
```
(Note: one has to use `link.exe` instead of LLD as the linker (see #54290) in order to get that far.)
The underlying problem is probably that `aarch64-pc-windows-msvc` defaults to `panic_abort`, while `libstd` is unconditionally compiled with `panic_unwind`. All the other targets that hard-code `panic_abort` are presumably `#[no_std]`.
Making `aarch64-pc-windows-msvc` use `panic_unwind` like the other non-embedded platforms leads to LLVM running into an error while trying to compile `libcore`:
```
Compiling core v0.0.0 (file:///C:/msys64/home/mw/2-rust/src/libcore)
LLVM ERROR: Cannot select: t10: ch = cleanupret t9, libcore\str\mod.rs:3734 @[ libcore\str\mod.rs:3563 ]
In function: _ZN4core3str21_$LT$impl$u20$str$GT$4trim17hd88bcddf6e49abc8E
error: Could not compile `core`.
```
| A-LLVM,T-compiler,O-windows-msvc,O-AArch64 | low | Critical |
360,877,708 | kubernetes | StatefulSet: support resize pvc storage in K8s v1.11 | /kind feature
**What happened**:
With k8s v1.11 the [new feature "Resizing Persistent Volume"](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/) was promoted to beta.
I tried to update the field "statefulset.spec.volumeClaimTemplates.spec.resources.requests.storage" by increasing the storage size from 3Gi to 4Gi; however, I received the following error message:
"The StatefulSet "es-data" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden."
**What you expected to happen**:
The storage size defined in volumeClaimTemplates should be updatable using the new feature in v1.11.
**How to reproduce it (as minimally and precisely as possible)**:
Try the following yaml file. Note that you might need to change the "storageClassName" field according to your cluster setup.
Once the deployment has finished, try changing the field "statefulset.spec.volumeClaimTemplates.spec.resources.requests.storage" to e.g. 4Gi;
applying the same yaml file will then result in the above error.
```
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 2 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "default"
resources:
requests:
storage: 3Gi
```
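As a stopgap while the StatefulSet API rejects the update, note that the PVCs created from the template are ordinary PersistentVolumeClaim objects and can be resized directly. A sketch, assuming the StorageClass has `allowVolumeExpansion: true`; PVC names follow the `<template>-<statefulset>-<ordinal>` convention, so for the example above:
```
kubectl patch pvc www-web-0 -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'
kubectl patch pvc www-web-1 -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'
```
This only resizes the existing claims; the template in the StatefulSet spec itself still cannot be changed, which is the point of this issue.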
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
GCP
/sig storage | sig/storage,kind/feature | high | Critical |
360,904,517 | TypeScript | Index signature is assignable to weak type whose properties don't match the signature type | From https://stackoverflow.com/q/52368008. Based on https://github.com/Microsoft/TypeScript/pull/16047#issue-122101801, it appears that @sandersn may have thought about the rule, but it still makes no sense to me and no rationale is stated.
**TypeScript Version:** master (394ee31a56c40033f31a3a8461ec267463033194)
**Search Terms:** weak type index signature
**Code**
```ts
interface Foo {
a?: string;
}
interface Bar {
[n: string]: number;
}
declare let b: Bar;
// No error; expected "Type 'Bar' has no properties in common with type 'Foo'."
let a: Foo = b;
```
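For contrast, the weak type check does fire when the source type has concrete (non-index) properties; a minimal sketch reusing the `Foo` interface above:
```ts
interface Baz {
    b: number;
}
declare let c: Baz;
// Error TS2559: Type 'Baz' has no properties in common with type 'Foo'.
let d: Foo = c;
```
So the index signature appears to exempt `Bar` from the check entirely.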
**Expected behavior:** Error: "Type 'Bar' has no properties in common with type 'Foo'."
**Actual behavior:** No error.
**Playground Link:** [link](https://www.typescriptlang.org/play/#src=interface%20Foo%20%7B%0D%0A%20%20%20%20a%3F%3A%20string%3B%0D%0A%7D%0D%0Ainterface%20Bar%20%7B%0D%0A%20%20%20%20%5Bn%3A%20string%5D%3A%20number%3B%0D%0A%7D%0D%0Adeclare%20let%20b%3A%20Bar%3B%0D%0A%2F%2F%20No%20error%3B%20expected%20%22Type%20'Bar'%20has%20no%20properties%20in%20common%20with%20type%20'Foo'.%22%0D%0Alet%20a%3A%20Foo%20%3D%20b%3B%0D%0A)
**Related Issues:** Possibly #9900
| Bug | low | Critical |
360,971,764 | rust | Expose `proc_macro::__internal::in_sess` in some manner | Currently, to detect whether or not the native `proc_macro` APIs can be used, `proc_macro2` attempts a parse and catches the resulting panic. This isn't the best way to handle it, and it involves racy work such as replacing the panic hook to silence the panic logging within the compiler.
It'd be nice to expose the `in_sess` method somehow, perhaps as `proc_macro::within_proc_macro` (strawman) or similar.
https://github.com/rust-lang/rust/blob/b7c6e8f1805cd8a4b0a1c1f22f17a89e9e2cea23/src/libproc_macro/lib.rs#L1438-L1442
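For reference, a minimal sketch of the detection dance described above (names and details are illustrative, not the actual `proc_macro2` source): attempt a native-API call and treat a panic as "not inside an expansion session".
```rust
extern crate proc_macro;

use std::panic;
use std::str::FromStr;

// Hypothetical helper: true only when the native proc_macro API is usable,
// i.e. when we are executing inside a macro expansion session. Parsing
// panics when no compiler session is active, so we catch the unwind.
// (The real implementation also has to swap the panic hook to silence
// the panic message, which is the racy part mentioned above.)
fn inside_proc_macro() -> bool {
    panic::catch_unwind(|| proc_macro::TokenStream::from_str("").is_ok()).unwrap_or(false)
}
```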
cc @dtolnay @alexcrichton | A-macros,T-libs-api,C-feature-request,A-proc-macros | low | Minor |
361,003,618 | go | x/tools/godoc/vfs: path separator character(s) in FileSystem, Opener interfaces is not specified | ### Problem
Documentation for [`net/http.FileSystem`](https://godoc.org/net/http#FileSystem) reads:
> A FileSystem implements access to a collection of named files. **The elements in a file path are separated by slash ('/', U+002F) characters, regardless of host operating system convention.**
However, the [`FileSystem`](https://godoc.org/golang.org/x/tools/godoc/vfs#FileSystem) interface in `golang.org/x/tools/godoc/vfs` does not clearly specify what the path separator character is expected or allowed to be:
> The FileSystem interface specifies the methods godoc is using to access the file system for which it serves documentation.
**Edit:** [`vfs.Opener`](https://godoc.org/golang.org/x/tools/godoc/vfs#Opener) interface is affected too:
> Opener is a minimal virtual filesystem that can only open regular files.
We need to document it so this is clear to the users and implementors of the interfaces.
### Proposed Resolution
Following the same logic, I expect that it has to be slash, regardless of host OS conventions, since these are meant to be virtual filesystems that work across platform boundaries, just like `http.FileSystem`. (Of course, implementations may use non-slash-separated paths to access underlying physical filesystems.)
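A rough sketch of doc wording that would resolve this, borrowing from the `net/http.FileSystem` documentation quoted above (proposed text, not the current godoc):
```go
package vfs

// A FileSystem implements access to a collection of named files.
//
// The elements in a file path are separated by slash ('/', U+002F)
// characters, regardless of host operating system convention.
// Implementations may translate to OS-specific separators when
// accessing an underlying physical filesystem.
```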
Applying `NeedsDecision` label because I want someone else to confirm my reasoning that this is a documentation bug and that we must specify `/` as the only allowed path separator element. /cc @andybons @bradfitz | Documentation,NeedsFix,Tools | low | Critical |
361,055,762 | go | cmd/go: unclear how to cache transitive dependencies in a Docker image | ### What version of Go are you using (`go version`)?
> go version go1.11 linux/amd64
### Does this issue reproduce with the latest release?
> yes
### What did you do?
I'm attempting to populate a Docker cache layer with compiled dependencies based on the contents of `go.mod`. The general recommendation with Docker is to use `go mod download`; however, this only caches the sources.
`go build all` can be used to compile these sources, but instead of relying on the `go.mod` contents it requires my application source to be present to determine which deps to build. This invalidates the cache on every code change and renders the step useless.
Here's a Dockerfile demonstrating my issue:
```Dockerfile
FROM golang:1.11-alpine
RUN apk add git
ENV CGO_ENABLED=0 GOOS=linux
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
# this fails
RUN go build all
# => go: warning: "all" matched no packages
COPY . .
# this now works but isn't needed
RUN go build all
# compile app along with any unbuilt deps
RUN go build
```
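(Aside: a fragile stopgap I've experimented with, sketched here under the assumption that pre-building every module reported by `go list -m all` warms the build cache; some module roots won't build cleanly, hence the `|| true`:)
```Dockerfile
RUN go list -m all | tail -n +2 | cut -d' ' -f1 \
    | xargs -I{} sh -c 'go build {}/... 2>/dev/null || true'
```
This is slow and imprecise, which is why a first-class answer would be preferable.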
From [package lists and patterns](https://golang.org/cmd/go/#hdr-Package_lists_and_patterns):
> When using modules, "all" expands to all packages in the main module and their dependencies, including dependencies needed by tests of any of those.
where [the main module](https://golang.org/cmd/go/#hdr-The_main_module_and_the_build_list) is defined by the contents of `go.mod` (if I'm understanding this correctly).
Since "the main module's go.mod file defines the precise set of packages available for use by the go command", I would expect `go build all` to rely on `go.mod` and build any packages listed within.
Other actions which support "all" have this issue but some have flags which resolve it (`go list -m all`). | NeedsInvestigation,modules | high | Critical |
361,072,388 | TypeScript | Quick fix proposal: declare property for jsdoc types | From https://github.com/Microsoft/vscode/issues/44824
## Search Terms
- quick fix / code action
- jsdoc / jsdocs
## Suggestion
Introduce a new refactoring that introduces a property into a jsdoc type. For example, for the js:
```js
//@ts-check
/**
* @param {{ }} arg
*/
function doStuff(arg) {
console.log(arg.noSuch);
}
```
A quick fix on `noSuch` would add this property to the type of `arg`:
```js
//@ts-check
/**
* @param {{noSuch: object }} arg
*/
function doStuff(arg) {
console.log(arg.noSuch);
}
```
A similar refactoring already exists for typescript:
```ts
interface IFoo {
a: number
}
function doStuff(arg: IFoo) {
console.log(arg.noSuch);
}
```

## Use Cases
Improve experience using ts-check, especially if using complex types
## Other cases to consider
**typedef**
```ts
//@ts-check
/**
* @typedef {{ a: number}} IFoo
*/
/**
*
* @param {IFoo} arg
*/
function doStuff(arg) {
console.log(arg.noSuch);
}
```
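For this case the fix would presumably rewrite the typedef itself; sketching the expected output:
```ts
/**
 * @typedef {{ a: number, noSuch: object }} IFoo
 */
```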
**Augments**
```ts
/**
* @augments {Component<{}>}
*/
class Binek extends Component {
render() {
console.log(this.props.noSuch)
}
}
```
| Suggestion,In Discussion,Domain: Refactorings,Domain: JavaScript | low | Critical |
361,080,818 | TypeScript | Spreading private property is wrong | **TypeScript Version:** 3.1.0-dev.20180914
**Code**
```ts
class C {
constructor(readonly x: number, private readonly y: number) {}
}
const a = { ...(new C(0, 1)) };
const b = { x: "", y: "", ...a };
b.x.toUpperCase(); // Error (good)
b.y.toUpperCase(); // No error (bad)
```
**Expected behavior:**
Error at the spread since that accesses a private property.
At least there should be an error at `b.y` since that accesses the same private property, and is a `number` at runtime.
**Actual behavior:**
No error except at `b.x.toUpperCase()` which is correctly an error. | Bug | low | Critical |
361,104,466 | TypeScript | keyof becoming union of string literal in emitted type definitions | **TypeScript Version:** 3.0.1
**Search Terms:** declaration emit keyof as union type
**Code**
We're seeing this problem in LitElement: https://github.com/Polymer/lit-element/blob/master/src/lib/decorators.ts#L43
```ts
export const customElement = (tagName: keyof HTMLElementTagNameMap) =>
(clazz: Constructor<HTMLElement>) => {
window.customElements.define(tagName, clazz);
// Cast as any because TS doesn't recognize the return type as being a
// subtype of the decorated class when clazz is typed as
// `Constructor<HTMLElement>` for some reason. `Constructor<HTMLElement>`
// is helpful to make sure the decorator is applied to elements however.
return clazz as any;
};
```
We do this so that users are forced to add their elements to the HTMLElementTagNameMap:
```ts
@customElement('my-element')
class MyElement extends HTMLElement {}
declare global {
interface HTMLElementTagNameMap {
'my-element': MyElement;
}
}
```
**Expected behavior:**
The declaration for `customElement` is emitted as:
```ts
export declare const customElement: (tagName: keyof HTMLElementTagNameMap) => (clazz: Constructor<HTMLElement>) => any;
```
And the example user code has no errors.
**Actual behavior:**
This is the declaration emit for `customElement` (see https://unpkg.com/@polymer/[email protected]/lib/decorators.d.ts):
```ts
export declare const customElement: (tagName: "object" | "a" | "abbr" | "acronym" | "address" | "applet" | "area" | "article" | "aside" | "audio" | "b" | "base" | "basefont" | "bdo" | "big" | "blockquote" | "body" | "br" | "button" | "canvas" | "caption" | "center" | "cite" | "code" | "col" | "colgroup" | "data" | "datalist" | "dd" | "del" | "dfn" | "dir" | "div" | "dl" | "dt" | "em" | "embed" | "fieldset" | "figcaption" | "figure" | "font" | "footer" | "form" | "frame" | "frameset" | "h1" | "h2" | "h3" | "h4" | "h5" | "h6" | "head" | "header" | "hgroup" | "hr" | "html" | "i" | "iframe" | "img" | "input" | "ins" | "isindex" | "kbd" | "keygen" | "label" | "legend" | "li" | "link" | "listing" | "map" | "mark" | "marquee" | "menu" | "meta" | "meter" | "nav" | "nextid" | "nobr" | "noframes" | "noscript" | "ol" | "optgroup" | "option" | "output" | "p" | "param" | "picture" | "plaintext" | "pre" | "progress" | "q" | "rt" | "ruby" | "s" | "samp" | "script" | "section" | "select" | "slot" | "small" | "source" | "span" | "strike" | "strong" | "style" | "sub" | "sup" | "table" | "tbody" | "td" | "template" | "textarea" | "tfoot" | "th" | "thead" | "time" | "title" | "tr" | "track" | "tt" | "u" | "ul" | "var" | "video" | "wbr" | "xmp") => (clazz: Constructor<HTMLElement>) => any;
```
And the above user code has an error, because extending HTMLElementTagNameMap has no effect.
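A possible workaround (a sketch; declaration emit behavior may vary by compiler version) is to annotate the exported const with an explicit type, since an explicit annotation is emitted as written instead of being re-computed and expanded:
```ts
type CustomElementDecorator =
    (tagName: keyof HTMLElementTagNameMap) =>
        (clazz: Constructor<HTMLElement>) => any;

export const customElement: CustomElementDecorator = (tagName) => (clazz) => {
    window.customElements.define(tagName, clazz);
    return clazz as any;
};
```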
**Playground Link:** The Playground doesn't show declaration files.
**Related Issues:** This is exactly the issue, but it was closed and locked: https://github.com/Microsoft/TypeScript/issues/21445
| Suggestion,In Discussion | medium | Critical |
361,125,353 | go | encoding/json: cannot call UnmarshalJSON on value receiver through interface | [Noticed this inconsistency when poking around the code](https://play.golang.org/p/Yw_2jS_Kr6r):
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type CustomUnmarshaler struct{ p *bytes.Buffer }
func (cu CustomUnmarshaler) UnmarshalJSON(b []byte) error {
cu.p.Write(b)
return nil
}
func (cu CustomUnmarshaler) String() string {
return fmt.Sprintf("custom %v", cu.p.String())
}
func main() {
s := &struct {
F1 CustomUnmarshaler
F2 interface{}
F3 interface{}
}{
CustomUnmarshaler{new(bytes.Buffer)},
CustomUnmarshaler{new(bytes.Buffer)},
&CustomUnmarshaler{new(bytes.Buffer)},
}
json.Unmarshal([]byte(`{"F1": "F1", "F2": "F2", "F3": "F3"}`), s)
fmt.Println(s.F1)
fmt.Println(s.F2)
fmt.Println(s.F3)
}
```
This currently prints:
```
custom "F1"
F2
custom "F3"
```
I expect it to print:
```
custom "F1"
custom "F2"
custom "F3"
```
In this case, `UnmarshalJSON` is a method on the value receiver. This is a rare situation for methods that mutate the receiver, but is valid. I expect the method to still be called. That is, the `json` package should not assume that the receiver must be a pointer.
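A compile-time way to state the expectation, reusing the types from the snippet above (this is plain Go method-set semantics, nothing json-specific): value-receiver methods belong to the method sets of both the value type and the pointer type, so both assertions hold:
```go
var _ json.Unmarshaler = CustomUnmarshaler{}
var _ json.Unmarshaler = &CustomUnmarshaler{}
```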
This is a very obscure use-case and I personally don't have a horse in the game to address this. | NeedsFix | low | Critical |
361,125,353 | pytorch | Mysterious error due to num_workers: 1 | While I have found a workaround (below), I thought I'd log this anyway so others can benefit from the insight. In one of my projects I started getting this error randomly after a bunch of refactoring:
```
Exception ignored in: <bound method StorageRef.__del__ of <torch.multiprocessing.reductions.StorageRef object at 0x00000258213645C0>>
Traceback (most recent call last):
File "C:\Program Files\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py", line 26, in __del__
AttributeError: 'NoneType' object has no attribute 'Storage'
```
One unusual thing I'm doing in my code is training the model on the CPU and testing on the GPU, so I basically call `model.to(...)` to move the model back and forth between train and test in each epoch. Apart from that, this is essentially the official MNIST CNN example.
**Workaround Found**
I haven't yet figured out exactly why this is happening, but it turns out that the test DataLoader (which I want to use on the GPU) had the following parameters being passed:
`'num_workers': 1, 'pin_memory': True`
After I removed `'num_workers': 1`, the error disappeared.
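To make the before/after concrete, a sketch with a made-up dummy dataset and batch size; only the kwargs matter:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

test_dataset = TensorDataset(torch.randn(100, 1, 28, 28),
                             torch.zeros(100, dtype=torch.long))

# This combination crashed at interpreter shutdown in my setup:
# kwargs = {'num_workers': 1, 'pin_memory': True}
# Workaround: drop num_workers so loading stays in the main process.
kwargs = {'pin_memory': True}

test_loader = DataLoader(test_dataset, batch_size=64, **kwargs)
```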
**Repro Code**
I haven't produced a minimal version of the code that reproduces this error, but you can get the code from [this repo and commit](https://github.com/sytelus/NNExp/tree/f5d8cdfcde39d867cc090546a94051426d96acd8). If you run [partial_data.py](https://github.com/sytelus/NNExp/blob/f5d8cdfcde39d867cc090546a94051426d96acd8/NNExp/pytorch/mnist/partial_data.py), you will see the above error in the console.
**Other Info**
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.0
OS: Microsoft Windows 10 Enterprise
CMake version: version 3.10.2
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 9.2.88
GPU models and configuration:
GPU 0: TITAN Xp
GPU 1: TITAN Xp
Nvidia driver version: 397.44
cuDNN version: 7.2
Versions of relevant libraries:
[pip] Could not collect
[conda] pytorch 0.4.1 py35_cuda90_cudnn7he774522_1 pytorch
[conda] torch 0.5.0a0+f45dfbc <pip>
[conda] torchvision 0.2.1 <pip> | module: multiprocessing,triaged | low | Critical |
361,154,861 | TypeScript | NodeList is no longer compatible with Array<Node>. Breaking change in 3.0 | **TypeScript Version:** 3.0.3
**Code**
I didn't find any mention of this breaking change.
This code used to compile in 2.9.2:
```ts
/**
* Array based implementation of NodeList
*/
class JSArrayNodeList extends Array<Node> implements NodeList {
constructor(items?: Array<Node>) {
if (items) {
super(...items);
} else {
super();
}
}
public item(index: number): Node {
return this[index];
}
public copy(): JSArrayNodeList {
return new JSArrayNodeList(this);
}
}
```
In 3.0, the `forEach` methods of NodeList and Array<Node> became incompatible.
Due to this changeset: https://github.com/Microsoft/TypeScript/commit/7a7d04e126fb7c1c6074ef26657eddb0f32e4003
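For reference, a hedged sketch of the mismatch introduced there — the interface names below are made up for illustration, but the `forEach` signatures are paraphrased from the standard lib files:
```ts
// NodeList.forEach passes the NodeList itself as the third callback argument:
interface NodeListLike {
    forEach(callbackfn: (value: Node, key: number, parent: NodeList) => void, thisArg?: any): void;
}
// Array<Node>.forEach passes a Node[] instead:
interface ArrayOfNodeLike {
    forEach(callbackfn: (value: Node, index: number, array: Node[]) => void, thisArg?: any): void;
}
// Since NodeList is not assignable to Node[], the two members are incompatible,
// so a class extending Array<Node> no longer satisfies NodeList as-is.
```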
A workaround was to add an explicit `forEach` that delegates to `Array.prototype.forEach`:
```ts
/**
* Array based implementation of NodeList
*/
class JSArrayNodeList extends Array<Node> implements NodeList {
constructor(items?: Array<Node>) {
if (items) {
super(...items);
} else {
super();
}
}
    public forEach(
        callbackfn: ((value: Node, index: number, array: Node[]) => void) | ((value: Node, key: number, parent: NodeList) => void),
        thisArg?: any): void {
        // Delegate to Array.prototype.forEach; note that `call` takes the receiver
        // (`this`) first, then the callback and thisArg.
        Array.prototype.forEach.call(this, callbackfn, thisArg);
    }
public item(index: number): Node {
return this[index];
}
public copy(): JSArrayNodeList {
return new JSArrayNodeList(this);
}
}
```
**Expected behavior:**
Documentation in breaking change list.
**Actual behavior:**
No information about breaking change. | Help Wanted,Docs | low | Minor |
361,163,964 | pytorch | setting num_workers on the dataloader makes the Jupyter kernel crash near the end of the epoch | I had code that was running perfectly, and then I decided to update PyTorch from 0.4.0 to 0.4.1.
Now the program crashes close to the end of the first epoch if num_workers is set.
How do I fix that?
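A minimal sketch of the triggering configuration (the dataset here is a synthetic stand-in, since the report doesn't include the actual code):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 3), torch.randint(0, 2, (1000,)))  # stand-in data

loader = DataLoader(dataset, batch_size=64, num_workers=2)  # reported to crash near the end of epoch 1
# loader = DataLoader(dataset, batch_size=64)               # default num_workers=0 reportedly runs fine

for x, y in loader:  # iterate one epoch
    pass
```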
cc @SsnL @VitalyFedyunin @ejguan | module: dataloader,triaged | low | Critical |
361,169,683 | pytorch | [caffe2] Can I build the caffe2 library for CPU-only inference and reduce the binary size? | I want to deploy Caffe2 on mobile devices (both Android and iOS), but I found that the binary size is **too large** for my app. I only use a few operators in my net (convolution and forward computation). How can I build Caffe2 with just the inference operators I need, to **reduce the size of the library**?
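Not an authoritative answer, but a sketch of the kind of CMake configuration involved — these option names exist in the PyTorch/Caffe2 build, though the exact set (and whether per-operator selection is exposed) should be verified against the repo's CMakeLists.txt:
```sh
# CPU-only build with tests, standalone binaries, and OpenCV disabled to trim the library.
# Double-check the flag names against CMakeLists.txt for your checkout.
mkdir -p build && cd build
cmake .. -DUSE_CUDA=OFF -DUSE_NCCL=OFF -DBUILD_TEST=OFF -DBUILD_BINARY=OFF -DUSE_OPENCV=OFF
make -j"$(nproc)"
```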
This is an urgent task for me; if anyone can provide help, I would appreciate it. | caffe2 | low | Minor |
361,186,809 | pytorch | DataParallel: Parallel_apply assert len(modules) == len(inputs) AssertionError | Hello,
I am using PyTorch version 0.4.1 with Python 3.6. I am adapting the transformer model for translation from this site (http://nlp.seas.harvard.edu/2018/04/03/attention.html). The model runs without error on a single GPU. However, the AssertionError happens when I use DataParallel for the model. How should I deal with this issue? Thanks in advance for your help. The following are snippets of the code:
```python
import time

def run_epoch(data_iter, model, loss_compute):
"Standard Training and Logging Function"
start = time.time()
total_tokens = 0
total_loss = 0
tokens = 0
for i, batch in enumerate(data_iter):
out = model.forward(batch.src, batch.trg, batch.src_mask, batch.trg_mask)
loss = loss_compute(out, batch.trg_y, batch.ntokens)
total_loss += loss
total_tokens += batch.ntokens.float()
tokens += batch.ntokens.float()
if i % 50 == 1:
elapsed = time.time() - start
print("Epoch Step: %d Loss: %f Tokens per Sec: %f" % (i, loss / batch.ntokens.float().item(), tokens / elapsed))
start = time.time()
tokens = 0
return total_loss / total_tokens.float()
```
The `loss_compute` function:
```python
import torch
import torch.nn as nn
from torch.autograd import Variable

class MultiGPULossCompute:
"A multi-gpu loss compute and train function."
def __init__(self, generator, criterion, devices, opt=None, chunk_size=5):
# Send out to different gpus.
self.generator = generator
self.criterion = nn.parallel.replicate(criterion,
devices=devices)
self.opt = opt
self.devices = devices
self.chunk_size = chunk_size
def __call__(self, out, targets, normalize):
total = 0.0
generator = nn.parallel.replicate(self.generator,
devices=self.devices)
out_scatter = nn.parallel.scatter(out,
target_gpus=self.devices)
out_grad = [[] for _ in out_scatter]
targets = nn.parallel.scatter(targets,
target_gpus=self.devices)
# Divide generating into chunks.
chunk_size = self.chunk_size
for i in range(0, out_scatter[0].size(1), chunk_size):
# Predict distributions
out_column = [[Variable(o[:, i:i+chunk_size].data,
requires_grad=self.opt is not None)]
for o in out_scatter]
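            # NOTE: parallel_apply asserts len(modules) == len(inputs). If scatter produced
            # fewer chunks in out_scatter than `generator` has replicas (e.g. the batch is
            # smaller than the number of devices), the reported AssertionError fires here.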
gen = nn.parallel.parallel_apply(generator, out_column)
# Compute loss.
y = [(g.contiguous().view(-1, g.size(-1)),
t[:, i:i+chunk_size].contiguous().view(-1))
for g, t in zip(gen, targets)]
loss = nn.parallel.parallel_apply(self.criterion, y)
# Sum and normalize loss
l = nn.parallel.gather(loss,
target_device=self.devices[0])
l = l.sum()[0] / normalize.float()
total += l.data[0]
# Backprop loss to output of transformer
if self.opt is not None:
l.backward()
for j, l in enumerate(loss):
out_grad[j].append(out_column[j][0].grad.data.clone())
# Backprop all loss through transformer.
if self.opt is not None:
out_grad = [Variable(torch.cat(og, dim=1)) for og in out_grad]
o1 = out
o2 = nn.parallel.gather(out_grad,
target_device=self.devices[0])
o1.backward(gradient=o2)
self.opt.step()
self.opt.optimizer.zero_grad()
return total * normalize.float()
``` | oncall: distributed,triaged | low | Critical |
361,223,965 | flutter | Dependency Injection for Flutter | I would love to have something similar to Dagger for Flutter. I noticed https://github.com/google/inject.dart, but apparently it will not get any real support/maintenance from the team, even though it is used internally at Google.
To me it seems worth properly open-sourcing that repo, as there is currently no equivalent (or at least none as powerful as this one ^^).
| c: new feature,framework,would be a good package,c: proposal,P3,team-framework,triaged-framework | high | Critical |
361,227,648 | TypeScript | Compiler Errors TS1110 and TS2345 with integer type pattern | **TypeScript Version:** 3.0.3
Explicitly marked positive integers are not recognized in a type pattern:
```ts
foo(direction: -1 | 1 | -2 | +2) { }
```

Likewise, calling it using a perfectly legit integer fails with TS2345:

**Expected behavior:**
As everywhere else, the unary operator + should be allowed in a type pattern.
This is not urgent, just for completeness.
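For reference, a minimal sketch of the workaround — writing the positive literal without the unary plus compiles cleanly on 3.0.3:
```ts
function foo(direction: -1 | 1 | -2 | 2) { }

foo(2);  // OK: no TS1110 in the type and no TS2345 at the call site
foo(-2); // OK: negative literals are already accepted
```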
| Suggestion,In Discussion,Domain: Literal Types | low | Critical |
361,264,781 | flutter | I am not able to use an onDestroy() lifecycle hook in my Flutter application. Can you tell me what the workaround for this is? | c: new feature,framework,engine,P3,team-engine,triaged-engine | low | Major |
|
361,280,378 | kubernetes | Forbidden error returned from dynamic client is not typed correctly | As a user who didn't have permission to watch a CRD in a namespace, I got the correct error from the server, but the dynamic client didn't correctly report the error.
```
I0918 13:51:19.668687 78071 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"prowjobs.prow.k8s.io is forbidden: User \"smarterclayton\" cannot watch prowjobs.prow.k8s.io in the namespace \"ci\": no RBAC policy matched","reason":"Forbidden","details":{"group":"prow.k8s.io","kind":"prowjobs"},"code":403}
E0918 13:51:19.668801 78071 reflector.go:322] github.com/openshift/release-controller/cmd/release-controller/main.go:149: Failed to watch *unstructured.Unstructured: unknown &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"unknown", Reason:"Forbidden", Details:(*v1.StatusDetails)(0xc42070ea80), Code:403}}
```
I think this may be a bug with unstructured and the dynamic client / scheme not being able to extract the correct message.
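For illustration, a hedged sketch of the downstream effect (`handleWatchErr` is a hypothetical helper; `apierrors` is the real `k8s.io/apimachinery/pkg/api/errors` package):
```go
package main

import apierrors "k8s.io/apimachinery/pkg/api/errors"

// handleWatchErr shows that the typed check still matches on the Reason field,
// while the human-readable message has been replaced by "unknown".
func handleWatchErr(err error) {
	if apierrors.IsForbidden(err) {
		_ = err.Error() // "unknown" here, instead of the server's detailed RBAC message
	}
}
```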
@deads2k
@kubernetes/sig-api-machinery-bugs | kind/bug,area/client-libraries,sig/api-machinery,help wanted,lifecycle/frozen | low | Critical |