id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
378,987,304 | TypeScript | Issue better error message for binary operations between numbers and bigints | ```ts
declare let x: bigint;
declare let y: number;
x + y;
y + x;
x - y;
y - x;
x * y;
y * x;
x / y;
y / x;
x += y;
y += x;
```
Each of these gives an error like "Operator '{0}' cannot be applied to types '{1}' and '{2}'". It should *somehow* tell the user that the `number` argument needs to be converted to a `bigint`. | Suggestion,Domain: Error Messages,Experience Enhancement | low | Critical |
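For the operations above, the fix TypeScript accepts is an explicit conversion of the `number` operand with `BigInt()` (or of the `bigint` operand with `Number()`). A minimal runtime sketch in plain JavaScript, since these are the standard conversion functions:

```javascript
const x = 10n; // bigint
const y = 3;   // number

// `x + y` is the rejected mixed-type form; converting one operand
// keeps each statement within a single numeric domain.
console.log(x + BigInt(y)); // 13n (bigint arithmetic)
console.log(Number(x) - y); // 7   (number arithmetic)
```

Note that `Number(x)` can lose precision for large bigint values, which is presumably why the compiler refuses to insert the conversion implicitly.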
379,015,932 | flutter | Provide a pagination widget | Currently the possible ways to achieve pagination are using ScrollView or ListView.builder, but you also need to consider a loading indicator and a retry mechanism. Although it's entirely possible to create such a widget ourselves, it would be great if the Flutter team were able to build this widget. | c: new feature,framework,f: scrolling,would be a good package,P3,team-framework,triaged-framework | low | Major |
379,020,876 | rust | Unexpected doc errors when using --nocapture and compile_fail | **Problem**
`cargo test -- --nocapture` outputs error logs for documentation tests even when specifying the `compile_fail` argument.
**Steps**
1. Create a new project with `cargo new <project name>`
2. Add a `lib.rs` file to `src`
3. Add the following method with documentation and a code example which is supposed to fail compilation (the code section will contain invalid Rust code):
```
/// ```compile_fail
/// Input: 123
/// ```
pub fn foo() {
}
```
4. Run `cargo test -- --nocapture`
The output will be:
> Finished dev [unoptimized + debuginfo] target(s) in 0.03s
> Running target/debug/deps/compile_fail-af4564015b61948e
>
> running 0 tests
>
> test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
>
> Running target/debug/deps/compile_fail-8c19ede40c69c8ad
>
> running 0 tests
>
> test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
>
> Doc-tests compile-fail
>
> running 1 test
> error: expected type, found `123`
> --> src/lib.rs:2:8
> |
> 3 | Input: 123
> | ^^^ expecting a type here because of type ascription
>
> test src/lib.rs - foo (line 1) ... ok
>
> test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
**Notes**
Running `cargo test` presents a different output with no errors:
> Finished dev [unoptimized + debuginfo] target(s) in 0.03s
> Running target/debug/deps/compile_fail-af4564015b61948e
>
> running 0 tests
>
> test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
>
> Running target/debug/deps/compile_fail-8c19ede40c69c8ad
>
> running 0 tests
>
> test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
>
> Doc-tests compile-fail
>
> running 1 test
> test src/lib.rs - foo (line 1) ... ok
>
> test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Output of `cargo version`:
> cargo 1.30.0 (36d96825d 2018-10-24)
| T-rustdoc,C-bug,A-doctests | low | Critical |
379,028,337 | react | Input type email bug | https://codepen.io/anon/pen/GwZeNO
Open this CodePen and paste "[email protected] 1" into the input.
Then press backspace and notice that the cursor jumps to the beginning of the input.
| Component: DOM,Type: Needs Investigation | low | Critical |
379,038,158 | TypeScript | Suggestion: Uniform Type Predicate, or Concrete Types | ## Suggestion: Uniform Type Predicate, or Concrete Types
### Summary
Narrowing tells us information about a value but not so much about the type of the value, because a type may over-approximate a value. The over-approximation poses a problem when using multiple values of the same type: narrowing information is not sharable, because over-approximation can allow non-uniform behaviour with respect to operations such as `typeof` or `===`.
As a canonical example:
```ts
function eq<T>(x: T, y: T) {
  if (typeof x === "number") {
    // What do we know about y here?
  }
}
```
Inside the if branch, what do we know about `y`, a variable with the same type as `x`, which we know to be a number? Sadly, nothing. A caller may instantiate `T` to be `unknown`, and therefore `y` could be any other value.
```ts
eq<unknown>(5, "not a number");
```
### Proposal
Support a way to define uniform, or concrete, types. A uniform type is one where all values of that type behave uniformly with respect to some operation. The idea is taken from Julia, where [concrete types](https://docs.julialang.org/en/v1/devdocs/types/index.html#Types-and-sets-(and-Any-and-Union{}/Bottom)-1) are defined as:
_A concrete type T describes the set of values whose direct tag, as returned by the typeof function, is T. An abstract type describes some possibly-larger set of values._
The obvious candidate of operation is `typeof`, but this could be extended to include equality for literal types, or key sets for objects.
### Basic Example
The introduction syntax is very much up for grabs, but for the examples we can use a constraint.
```ts
function eq<T extends Concrete<"typeof">>(x: T, y: T) {
  if (typeof x === "number") {
    // if x is number, then so is y.
  }
}
```
The constraint `Concrete<"typeof">` of `T` says that `T` may only be instantiated with types where all values of that type behave uniformly with respect to `typeof`. When `x` is a number, then so is `y`. The following call-sites demonstrate legal/illegal instantiations.
```ts
eq<unknown>(5, "not a number"); // Illegal: unknown is not concrete
eq<number>(5,4); // ok
eq<number | string>(5, "not a number also"); // Illegal, (number | string) is not concrete.
eq<4 | 2>(4, 2); // ok
```
### Examples from related issues (1)
#27808
```ts
declare function smallestString(xs: string[]): string;
declare function smallestNumber(x: number[]): number;
function smallest<T extends Concrete<number | string, "typeof">>(x: T[]): T {
  if (x.length == 0) {
    throw new Error('empty');
  }
  const first = x[0]; // first has type "T"
  if (typeof first == "string") {
    return smallestString(x); // legal
  }
  return smallestNumber(x);
}
```
We write `Concrete<number | string, "typeof">` for a type that is either a string or number, but is also concrete. As the values of the array are concrete, a single witness for `typeof` is enough to narrow the type of the entire array.
### Examples from related issues (2)
#24085
Here we show a use case for defining uniformity of equality.
```ts
const enum TypeEnum {
  String = "string",
  Number = "number",
  Tuple = "tuple"
}
interface KeyTuple { key1: string; key2: number; }
type KeyForTypeEnum<T extends TypeEnum>
  = T extends TypeEnum.String ? string
  : T extends TypeEnum.Number ? number
  : T extends TypeEnum.Tuple ? KeyTuple
  : never;
function doSomethingIf<TType extends Concrete<TypeEnum, "===">>(type: TType, key: KeyForTypeEnum<TType>) {
  if (type === TypeEnum.String) {
    doSomethingWithString(key);
  }
  else if (type === TypeEnum.Number) {
    doSomethingWithNumber(key);
  }
  else if (type === TypeEnum.Tuple) {
    doSomethingWithTuple(key);
  }
}
```
The issue presented here is that over-approximation leads to unsound uses:
```ts
doSomethingIf<TypeEnum>(TypeEnum.String, 42);
```
With a concrete constraint `Concrete<TypeEnum, "===">` we still enforce that the parameter is assignable to `TypeEnum`, but any instantiation must also be uniform with respect to equality, restricting instantiations to singleton types. Example calls:
```ts
doSomethingIf<TypeEnum>(TypeEnum.String, 42); // Illegal: TypeEnum is not concrete
doSomethingIf<TypeEnum.String>(TypeEnum.String, "foo"); // Ok
doSomethingIf<TypeEnum.String | TypeEnum.Number>(TypeEnum.String, 42); // Illegal: (TypeEnum.String | TypeEnum.Number) is not concrete.
```
| Suggestion,In Discussion | low | Critical |
379,051,587 | react | react-test-renderer doesn't support Suspense and lazy nodes | Hello. How can I test components with Suspense/Lazy?
Right now `renderer.create(...).toTree()` throws:
"toTree() does not yet know how to handle nodes with tag=13"
react 16.6.1
react-test-renderer 16.6.1 | Type: Bug | medium | Major |
379,128,307 | kubernetes | ValidatingWebhookConfiguration causes deployments to maintain two underlying RS at the same time |
**What happened**:
I have a ValidatingWebhookConfiguration that checks deployments for the presence of `spec.template.spec.tolerations` and throws an error if a deployment does not have `tolerations`, thus enforcing that deployments have `tolerations`. I enforce `tolerations` on staging namespaces only, in order to force deployments to run on preemptible nodes.
After I add the `tolerations` because of the above enforcement (i.e. update the deployment with tolerations), the deployment maintains two of its underlying ReplicaSets, thus keeping both the old pods and the new pods. It is important to note that I cannot simply delete the old RS with --force either; it is just recreated again.
When I remove the ValidatingWebhookConfiguration, I have no problems: I can update the deployments with tolerations (or, for that matter, any other key/map) and the deployment maintains only one RS, i.e. it has only new pods. I can also delete the old RS (if needed).
**Please find below the ValidatingWebhookConfiguration that I am using:**
```
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-absence-of-tolerations
webhooks:
  - name: deny.absence.of.tolerations
    rules:
      - apiGroups: ["apps","extensions"]
        apiVersions: ["v1","v1beta1","v1beta2"]
        operations: [ "CREATE","UPDATE" ]
        #operations: [ "*" ]
        resources: ["deployments","replicasets"]
    namespaceSelector:
      matchExpressions:
        - key: namespace
          operator: In
          values:
            - test-namespace
    failurePolicy: Fail
    clientConfig:
      url: "https://deny_absence_of_tolerations"
      caBundle: cabundlevaluehere
```
**And the core part of my backend server goes like this:-**
```
var admissionResponse = {
  allowed: false
};
var found = false;
if (!object.spec.template.spec.tolerations) {
  console.log("Workload is not using tolerations");
  admissionResponse.status = {
    status: 'Failure',
    message: "On Staging/Testing please use tolerations",
    reason: " Workload ( ie.,deployment ) Requirement Failed",
    code: 402
  };
  found = true;
};
if (!found) {
  admissionResponse.allowed = true;
}
var admissionReview = {
  response: admissionResponse
};
res.setHeader('Content-Type', 'application/json');
res.send(JSON.stringify(admissionReview));
res.status(200).end();
```
**What you expected to happen**:
Expected behaviour of the deployment, i.e., the deployment should maintain only one RS.
**How to reproduce it (as minimally and precisely as possible)**:
1). First of all, have no ValidatingWebhookConfiguration.
2). Create a deployment (without tolerations) in a namespace (let's call it test-namespace).
3). Now add the ValidatingWebhookConfiguration to test-namespace in order to enforce tolerations.
4). Now update the deployment with tolerations and apply it.
5). As described above in detail, the ValidatingWebhookConfiguration now causes the deployment to maintain two underlying RS at the same time.
**Anything else we need to know?**: Completely deleting the deployment and then applying the updated deployment does not cause the above problem. But deleting a complete deployment is downtime to consider.
**Environment**:
- Kubernetes version (use `kubectl version`): 1.10.6-gke.9
- Cloud provider or hardware configuration: Google Kubernetes Engine
- OS (e.g. from /etc/os-release): COS
- Kernel (e.g. `uname -a`): 4.14.56+ #1 SMP x86_64 Intel(R) Xeon(R) CPU @ 2.30GHz GenuineIntel GNU/Linux
- Install tools:
- Others:
@kubernetes/sig-scheduling-bug
@kubernetes/sig-apps-bug
/kind bug | kind/bug,priority/important-soon,sig/scheduling,sig/apps,lifecycle/frozen | low | Critical |
379,130,383 | TypeScript | TSC fails to emit required files when run below node_modules |
**TypeScript Version:** 3.2.0-dev.20181107
**Search Terms:** node_modules
**Code**
`/tmp/node_modules/tsc-test/tsconfig.json`
```json
{
  "compilerOptions": {
    "outDir": "dist"
  },
  "files": [
    "src/index.ts"
  ]
}
```
`/tmp/node_modules/tsc-test/src/index.ts`
```ts
import { A } from './A';
export const main = () => {
  console.log('A=', A);
};
```
`/tmp/node_modules/tsc-test/src/A.ts`
```ts
export const A = 'A';
```
**Expected behavior:**
Output files:
- `/tmp/node_modules/tsc-test/dist/index.js`
- `/tmp/node_modules/tsc-test/dist/A.js`
**Actual behavior:**
Output files:
`/tmp/node_modules/tsc-test/dist/index.js`
```js
"use strict";
exports.__esModule = true;
var A_1 = require("./A");
exports.main = function () {
  console.log('A=', A_1.A);
};
```
_No `A.js` is output_
**Notes:**
I wanted to build a typescript project as part of an npm postinstall script, to enable installation from npm or from git. In the case of installation from npmjs, `dist/` will already exist. In the case of installation from git, `dist/` will be missing and build will be run.
After setting this up, I noticed that only files listed in my tsconfig `files` list were being emitted when installing from git. I looked around for tsconfig or tsc options that may change this behavior, but realized it probably had to do with `'node_modules'` being in the path.
I debugged the issue by modifying the built `tsc.js`. In my case, the problematic code is in `nodeModuleNameResolverWorker/tryResolve`, specifically [here](https://github.com/Microsoft/TypeScript/blob/b534fb4849eca0a792199fb6c0cb8849fece1cfd/src/compiler/moduleNameResolver.ts#L918:L921):
```ts
const { path: candidate, parts } = normalizePathAndParts(combinePaths(containingDirectory, moduleName));
const resolved = nodeLoadModuleByRelativeName(extensions, candidate, /*onlyRecordFailures*/ false, state, /*considerPackageJson*/ true);
// Treat explicit "node_modules" import as an external library import.
return resolved && toSearchResult({ resolved, isExternalLibraryImport: contains(parts, "node_modules") });
```
The call to `contains(parts, "node_modules")` should likely consider the project root directory, and ignore `"node_modules"` in the path parts above the root. | Suggestion,In Discussion | low | Critical |
379,147,455 | pytorch | Error in building Caffe2 on Windows (experimental operators) | ## 🐛 Bug
Error building Caffe2 under Windows 10 with GPU and Python bindings enabled.
## To Reproduce
Steps to reproduce the behavior:
1. Followed the manual for building from source under Windows using Visual Studio 2015 v3.
2. Enabled Cuda and Python Bindings ensuring the correct MSVC is selected.
3. Run the script build_windows.bat.
Same results by manually running CMake and then building the solution
96 errors occur while generating Caffe2.lib. They start like the following, and the remaining errors follow a similar pattern:
```
(ClCompile target) ->
D:\Libraries\pytorch\c10/util/TypeList.h(160): error C2938: 'extract_type_t<unknown-type>' : Failed to specialize alias template (compiling source file D:\Libraries\pytorch\caffe2\operators\experimental\c10\schemas\add.cc) [D:\Libraries\pytorch\build\caffe2\caffe2.vcxproj]
D:\Libraries\pytorch\c10/util/TypeList.h(160): error C3546: '...': there are no parameter packs available to expand (compiling source file D:\Libraries\pytorch\caffe2\operators\experimental\c10\schemas\add.cc) [D:\Libraries\pytorch\build\caffe2\caffe2.vcxproj]
D:\Libraries\pytorch\caffe2/core/operator_c10wrapper.h(54): error C3203: 'typelist': unspecialized class template can't be used as a template argument for template parameter 'TypeList', expected a real type (compiling source file D:\Libraries\pytorch\caffe2\operators\experimental\c10\schemas\add.cc) [D:\Libraries\pytorch\build\caffe2\caffe2.vcxproj]
D:\Libraries\pytorch\c10/util/TypeList.h(42): error C2338: In typelist::to_tuple<T>, T must be typelist<...>. (compiling source file D:\Libraries\pytorch\caffe2\operators\experimental\c10\schemas\add.cc) [D:\Libraries\pytorch\build\caffe2\caffe2.vcxproj]
D:\Libraries\pytorch\caffe2/core/operator_c10wrapper.h(54): error C2794: 'type': is not a member of any direct or indirect base class of 'c10::guts::typelist::to_tuple<int>' (compiling source file D:\Libraries\pytorch\caffe2\operators\experimental\c10\schemas\add.cc) [D:\Libraries\pytorch\build\caffe2\caffe2.vcxproj]
D:\Libraries\pytorch\caffe2/core/operator_c10wrapper.h(54): error C2938: 'to_tuple_t<int>' : Failed to specialize alias template (compiling source file D:\Libraries\pytorch\caffe2\operators\experimental\c10\schemas\add.cc) [D:\Libraries\pytorch\build\caffe2\caffe2.vcxproj]
```
## Expected behavior
Produce Caffe2.lib
## Environment
- Caffe2 Version Number 0.8.2
- Windows 10 Home x64
- Source
- Both manually with CMake and running build_windows.bat
- Python 2.7.14 x64
- Cuda 9.2 with CuDNN 7.2.1.38
- Nvidia GeForce GTX 1060
- CMake 3.12.4
| caffe2 | low | Critical |
379,176,137 | terminal | Three problems displaying End User Defined Characters in Windows Console | The first problem is that the top of the character is truncated.
The second problem is that they occupy two display cells.
The third problem is that they are displayed overlapping.
To reproduce:
Run the MS supplied Private Character Editor "C:\Windows\System32\eudcedit.exe".
Draw two lines using the straight line tool from one corner to the opposite corner. Draw a box around the border using the hollow rectangle tool. Save the character for all fonts.
Run the MS supplied Character Map app "C:\Windows\System32\charmap.exe". Select "All fonts (Private Characters)", select the character created in the previous step, and then click Copy.
Open cmd.exe, type "REM", and then paste the character several times onto the same line. Press <return>.
Type "REM" followed by "0123456789" for comparison.

It should be noted that copying and pasting other characters, such as the Unicode box-drawing glyphs, displays correctly.
I am using Windows 10 build 1803, all current updates applied. | Product-Conhost,Area-Rendering,Area-Output,Issue-Bug | low | Minor |
379,208,966 | flutter | Emojis on keyboard | Hello, I have a problem with the emojis on the Android keyboard.
The thing is:
In the TextField I must insert a username, and emojis are not allowed. While using the Android keyboard the emoji button appears and is available, but on the iOS keyboard it does not appear.
Maybe this is not a bug, but I think it would be a good idea to add some TextInputType to choose whether to offer the emoji button or not, and it should be consistent between both systems whether you can insert emojis or not.
Thanks | a: text input,c: new feature,platform-android,framework,engine,P2,team-android,triaged-android | low | Critical |
379,229,550 | rust | Confusing error message when trying to implement a shared mutable state | Let's say I am a rust beginner coming from another language, and I want to write a simple multi-threaded program in rust.
So I write this simple function:
```rust
fn simple() -> u8 {
    let mut x = 0;
    std::thread::spawn(|| {
        x += 1;
    });
    x += 1;
    x
}
```
One of the selling points of rust is being able to prevent data races, so it should be able to explain in details what is wrong with this function. However, it gives a confusing error message.
It does not compile, and the compiler returns :
> closure may outlive the current function, but it borrows `x`, which is owned by the current function
> help: to force the closure to take ownership of `x` (and any other referenced variables), use the `move` keyword
And if I look up the detailed error message in the error book (here: https://doc.rust-lang.org/error-index.html#E0373), it also tells me to add a `move` in front of the closure.
I think this is confusing. Obviously, the right thing to do here is to use a mutex, but this is mentioned neither in the error message nor in the error book.
Maybe the error should also say something like
> If you need to use `x` both in the current function and in the closure, then you may have to use a form of [shared state concurrency](https://doc.rust-lang.org/book/second-edition/ch16-03-shared-state.html)
It would be even nicer if the compiler could detect that we are using `x` after the offending closure, and thus not talk about `move` at all. | C-enhancement,A-diagnostics,T-compiler | low | Critical |
379,261,705 | flutter | UiKitViews breaks group opacity into 2 groups | The opacity is applied separately to the render objects that are before the platform view and to the ones that are after it. | platform-ios,engine,a: platform-views,P2,team-ios,triaged-ios | low | Minor |
379,282,454 | pytorch | torch.linspace does not check for infinity and nan | *UPDATED*:
```
>>> torch.linspace(1.175494351e-38, float('nan'),steps=30)
tensor([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan])
>>> torch.linspace(1.175494351e-38, float('inf'),steps=30)
tensor([nan, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf, inf,
inf, inf, inf, inf, inf, inf])
```
*OUTDATED*:
```python
import torch
print(torch.arange(0, float('+inf')).shape)
# torch.Size([-9223372036854775808])
print(torch.__version__)
# 1.0.0a0+9d9e5f8
```
cc @gchanan @mruberry | module: error checking,triaged,module: tensor creation | low | Minor |
379,290,660 | go | cmd/compile: refine calcHasCall's handling of ODIV and OMOD | @cherrymui noted in the review of CL 148837 that on non-soft-float platforms, when working with floats, ODIV and OMOD won't panic. We should handle them a bit more precisely in `calcHasCall`.
| Suggested,NeedsFix,compiler/runtime | low | Minor |
379,310,643 | TypeScript | Expose compiler option allowNonTsExtensions | ## Search Terms
allowNonTsExtensions
## Suggestion
Make `allowNonTsExtensions` from
https://github.com/Microsoft/TypeScript/blob/0010a38660b1eb03c188c9c1758177b4501760b7/src/compiler/types.ts#L4480
public / not-internal
## Use Cases
The Monaco Editor requires usage of this compiler option for in-memory TypeScript / JavaScript editors.
## Examples
https://github.com/Microsoft/monaco-editor/blob/35f3f99ff251744eabe4dfbff8535288e363933d/test/playground.generated/extending-language-services-configure-javascript-defaults.html#L53
## Checklist
My suggestion meets these guidelines:
* [✔️] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [✔️] This wouldn't change the runtime behavior of existing JavaScript code
* [✔️] This could be implemented without emitting different JS based on the types of the expressions
* [✔️] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [?] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Minor |
379,316,422 | pytorch | PyTorch streams are not cuda-memcheck clean | ## 🐛 Bug
PyTorch streams are not cuda-memcheck clean
## To Reproduce
Steps to reproduce the behavior:
Create the following PyTorch script, which initializes our stream pool:
```
import torch
torch.cuda.Stream()
```
Run it with `cuda-memcheck`. You'll get 64 warnings. Here's a few:
```
========= Program hit cudaErrorCudartUnloading (error 29) due to "driver shutting down" on CUDA API call to cudaStreamDestroy.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/usr/lib64/nvidia/libcuda.so.1 [0x3478e3]
========= Host Frame:/data/users/ezyang/pytorch-tmp/torch/lib/libcaffe2_gpu.so [0x2ae05de]
========= Host Frame:/data/users/ezyang/pytorch-tmp/torch/lib/libcaffe2_gpu.so (_ZNSt6vectorISt5arrayI19CUDAStreamInternalsLm32EESaIS2_EED1Ev + 0x57) [0x2662317]
========= Host Frame:/lib64/libc.so.6 [0x39bd9]
========= Host Frame:/lib64/libc.so.6 [0x39c27]
========= Host Frame:/lib64/libc.so.6 (__libc_start_main + 0xfc) [0x2244c]
========= Host Frame:python [0x1c7773]
=========
========= Program hit cudaErrorCudartUnloading (error 29) due to "driver shutting down" on CUDA API call to cudaStreamDestroy.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/usr/lib64/nvidia/libcuda.so.1 [0x3478e3]
========= Host Frame:/data/users/ezyang/pytorch-tmp/torch/lib/libcaffe2_gpu.so [0x2ae05de]
========= Host Frame:/data/users/ezyang/pytorch-tmp/torch/lib/libcaffe2_gpu.so (_ZNSt6vectorISt5arrayI19CUDAStreamInternalsLm32EESaIS2_EED1Ev + 0x57) [0x2662317]
========= Host Frame:/lib64/libc.so.6 [0x39bd9]
========= Host Frame:/lib64/libc.so.6 [0x39c27]
========= Host Frame:/lib64/libc.so.6 (__libc_start_main + 0xfc) [0x2244c]
========= Host Frame:python [0x1c7773]
=========
========= Program hit cudaErrorCudartUnloading (error 29) due to "driver shutting down" on CUDA API call to cudaStreamDestroy.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/usr/lib64/nvidia/libcuda.so.1 [0x3478e3]
========= Host Frame:/data/users/ezyang/pytorch-tmp/torch/lib/libcaffe2_gpu.so [0x2ae05de]
========= Host Frame:/data/users/ezyang/pytorch-tmp/torch/lib/libcaffe2_gpu.so (_ZNSt6vectorISt5arrayI19CUDAStreamInternalsLm32EESaIS2_EED1Ev + 0x57) [0x2662317]
========= Host Frame:/lib64/libc.so.6 [0x39bd9]
========= Host Frame:/lib64/libc.so.6 [0x39c27]
========= Host Frame:/lib64/libc.so.6 (__libc_start_main + 0xfc) [0x2244c]
========= Host Frame:python [0x1c7773]
=========
========= ERROR SUMMARY: 64 errors
```
## Expected behavior
No errors
## Environment
```
Collecting environment information...
PyTorch version: 1.0.0a0+583731d
Is debug build: No
CUDA used to build PyTorch: 9.2.88
OS: CentOS Linux 7 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
CMake version: version 3.11.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.2.88
GPU models and configuration:
GPU 0: Tesla M40
GPU 1: Tesla M40
GPU 2: Tesla M40
GPU 3: Tesla M40
GPU 4: Tesla M40
GPU 5: Tesla M40
GPU 6: Tesla M40
GPU 7: Tesla M40
Nvidia driver version: 396.26
cuDNN version: Probably one of the following:
/usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn.so.7.1.2
/usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn.so.7.1.4
/usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn_static.a
Versions of relevant libraries:
[pip] numpy (1.14.3)
[pip] onnx (1.2.1, /data/users/ezyang/pytorch-tmp/third_party/onnx)
[pip] torch (1.0.0a0+583731d, /data/users/ezyang/pytorch-tmp)
[pip] torch-complex (0.0.1, /data/users/ezyang/complex)
[pip] torchvision (0.2.1)
[conda] magma-cuda91 2.3.0 1 pytorch
[conda] torchvision 0.2.1 <pip>
```
## Additional context
CC @mruberry
cc @ngimel | module: cuda,triaged | low | Critical |
379,322,006 | TypeScript | Allow minimal type checking of JavaScript files | I'm in the process of introducing TypeScript support into an enormous, 6-year-old web application with a *lot* of JavaScript files. In order to get the most advantages out of TypeScript while we migrate over (which will likely happen slowly), I'd like the following to be true:
- TypeScript files (`.ts` and `.tsx`) are type checked with `strict: true`, so that we get as much useful feedback about TS code as possible.
- JavaScript code that imports and then incorrectly uses typed TypeScript code throws a TS error
- Other TypeScript errors in JavaScript files are suppressed, because we haven't had the chance to migrate the file to TypeScript yet. OR, at the least, JavaScript files are type checked with `strict: false` to minimize the number of errors they output.
## Primary Suggestion
It would be fantastic if typescript had a `"minimalCheckJs"` config option that, when set to true, would only check JS files for errors on imported typed code. For example, setting `"minimalCheckJs": true, "strict": true` in `tsconfig.json` would have this effect:
```javascript
// moduleA.ts
function hello(msg) {} // throws "no implicit any" error on "msg" arg
export default function logA(msg: string) {
  console.log(msg);
}

// moduleB.js
import logA from './moduleA'
import something from './someJavaScriptFile.js' // does not throw "cannot find module" error
logA(1) // throws TS error, because logA is typed
function logB(msg) { // does not show "no implicit any" error on "msg" arg
  console.log(msg);
}
```
This feature would allow me to convert a file from JS to TS, add types to the file, and immediately be assured that the exported code is being used correctly throughout my entire code base (including JS files). Currently, I can set `"checkJs": true`, but then I will see *thousands* of other kinds of errors that TypeScript finds in the JS files, such as `cannot find module` errors.
## Alternative Suggestion
If the above feature is difficult to implement, a less ideal but also helpful feature would be to allow setting `"strict": false` for JS files and `"strict": true` for TS files. Some way to combine the following:
```json
// strictly type check TS files
{
  "compilerOptions": {
    "checkJs": false,
    "allowJs": true,
    "strict": true
  },
  "include": ["**/*.ts", "**/*.tsx"]
}

// type check JS files with strict: false
{
  "compilerOptions": {
    "checkJs": true,
    "allowJs": true,
    "strict": false
  },
  "include": ["**/*.js", "**/*.jsx"]
}
``` | Suggestion,In Discussion,Domain: JavaScript,Add a Flag | medium | Critical |
379,325,834 | go | runtime: repeated syscalls inhibit periodic preemption | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
go version go1.11 darwin/amd64
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/benesch/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/benesch/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.11/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.11/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/tw/njf8lc695t70f1lkhts5d0nr0000gn/T/go-build056467710=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Ran the following program repeatedly.
```go
package cgotest
// #include <unistd.h>
import "C"
import (
	"context"
	"math/rand"
	// "runtime"
	"sync"
	"testing"
	"time"
)

func test27660(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	ints := make([]int, 100)
	locks := make([]sync.Mutex, 100)
	// Slowly create threads so that ThreadSanitizer is forced to
	// frequently resize its SyncClocks.
	for i := 0; i < 100; i++ {
		go func() {
			for ctx.Err() == nil {
				// Sleep in C for long enough that it is likely that the runtime
				// will retake this goroutine's currently wired P.
				C.usleep(1000 /* 1ms */)
				// runtime.Gosched()
			}
		}()
		go func() {
			// Trigger lots of synchronization and memory reads/writes to
			// increase the likelihood that the race described in #27660
			// results in corruption of ThreadSanitizer's internal state
			// and thus an assertion failure or segfault.
			for ctx.Err() == nil {
				j := rand.Intn(100)
				locks[j].Lock()
				ints[j]++
				locks[j].Unlock()
			}
		}()
		time.Sleep(time.Millisecond)
	}
}
```
The program usually completes in 100ms, but occasionally takes upwards of 60s, and sometimes more than 10m (i.e., maybe it never completes). It appears to be caused by the scheduler starving out the outer goroutine, as uncommenting the runtime.Gosched call makes the test pass reliably in under 60s.
It's possible that this is #10958, but I think that the call to `ctx.Err()` should allow for preemption, as should the call into cgo. I'm filing as a separate bug so I have an issue to reference when I add this test in http://golang.org/cl/148717. I'll dig a little deeper in a bit.
/cc @dvyukov | Performance,NeedsInvestigation,compiler/runtime | low | Critical |
379,366,484 | kubernetes | Cloud Controller Manager doesn't query cloud provider for node name, causing the node to be removed |
**What happened**:
- Launch a node with container linux `CoreOS-stable-1911.3.0` on aws, with customized ignition configs.
- The hostname turns out to be `ip-10-3-18-1` instead of the full private DNS name `ip-10-3-18-1.us-west-1.compute.internal`, because `/etc/hostname` is not set.
- kubelet starts with `--cloud-provider=external` and skips [this code path](https://github.com/kubernetes/kubernetes/blob/v1.14.0-alpha.0/pkg/kubelet/kubelet.go#L382-L395). So it's not able to set the private dns as the node name.
- When CCM starts, it tries to read the node name from the node spec.
- CCM calls [`GetInstanceProviderID()`](https://github.com/kubernetes/kubernetes/blob/v1.14.0-alpha.0/pkg/controller/cloud/node_controller.go#L361) and fails.
- CCM calls [`getNodeAddressesByProviderIDOrName()`](https://github.com/kubernetes/kubernetes/blob/v1.14.0-alpha.0/pkg/controller/cloud/node_controller.go#L372) and fails too.
- CCM then removes the node.
**What you expected to happen**:
Since the kubelet now runs with `--cloud-provider=external`, nothing executes that code path to query the cloud provider for the node name anymore.
However, this code path still needs to be executed by some component to get the correct node name from the cloud provider.
I think the CCM might need to query the cloud provider for the full node hostname in case the hostname given by the kubelet is not the full hostname (as in the AWS case).
**How to reproduce it (as minimally and precisely as possible)**:
Launch a container linux with non-empty ignition config in the user data, then the hostname won't be the full `private-dns`.
Then launch kubelet with `--cloud-provider=external` and CCM will reproduce the issue described above.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version:
```Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}```
- Cloud provider or hardware configuration:
AWS
- OS
```NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1911.3.0
VERSION_ID=1911.3.0
BUILD_ID=2018-11-05-1815
PRETTY_NAME="Container Linux by CoreOS 1911.3.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
```
- Kernel:
`Linux ip-10-3-20-13 4.14.78-coreos #1 SMP Mon Nov 5 17:42:07 UTC 2018 x86_64 Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz GenuineIntel GNU/Linux`
- Install tools:
Internal k8s installer tool based on terraform
- Others:
This issue can be mitigated by telling the ignition config to set `/etc/hostname` to the private DNS name (obtained via `curl http://169.254.169.254/latest/meta-data/hostname`), or by just using the coreos-metadata service.
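A minimal sketch of that `/etc/hostname` mitigation (hypothetical helper names; on a real node the default IMDS URL is used, and the URL is parameterized only so the sketch can be exercised off-instance):

```shell
# Hypothetical helper: fetch the full private DNS name from instance metadata.
# The URL is a parameter so this can be tested without a real EC2 instance.
fetch_fqdn() {
  curl -sf "${1:-http://169.254.169.254/latest/meta-data/hostname}"
}

# Write the FQDN to /etc/hostname so the kubelet registers with the node name
# the cloud controller manager expects (requires root on a real node).
set_fqdn_hostname() {
  fqdn="$(fetch_fqdn "$1")" || return 1
  printf '%s\n' "$fqdn" | tee /etc/hostname >/dev/null
}
```

On Container Linux this would typically run as a oneshot systemd unit from the ignition config, or be replaced entirely by the coreos-metadata service mentioned above.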
/cc @Quentin-M
@kubernetes/sig-aws-misc
@andrewsykim
<!-- DO NOT EDIT BELOW THIS LINE -->
/kind bug | kind/bug,area/cloudprovider,area/provider/openstack,lifecycle/frozen,area/provider/aws,sig/cloud-provider,needs-triage | medium | Critical |
379,376,135 | TypeScript | Feature Request: type for Arguments OF Function |
## Search Terms
## Suggestion
Currently TypeScript supports spread arguments (`...args`),
but when we use
```ts
declare function a(x: number, y: number) : number
declare function a(x: number, s: string) : string
a(...[1, 2]) // => Error: TS2556: Expected 2 arguments, but got 0 or more.
```
we get an error.
## Use Cases
## Examples
```ts
declare function a(x: number, y: number) : number
declare function a(x: number, s: string) : string
```
Actual behavior:
```ts
a(...[1, 2]) // => Error: TS2556: Expected 2 arguments, but got 0 or more.
```
### type `ArgumentsOFFunction<T extends Function>`
A type that lets TypeScript know a value is an arguments array for the target function.
> note: this type does not exist yet
Expected behavior:
```ts
let args1: ArgumentsOFFunction<typeof a> = [x = 1, s = ''];
let args2: ArgumentsOFFunction<typeof a> = [x = 1, y = 2];
let args3: ArgumentsOFFunction<typeof a> = ['1', 2]; // trigger a ts error => Error: TS2345: Argument of type '"1"' is not assignable to parameter of type 'number'.
a(...args1) // => string
a(...args2) // => number
a(1, 'str') // => string
a(1, 2) // => number
a('1', 2) // trigger a ts error => Error: TS2345: Argument of type '"1"' is not assignable to parameter of type 'number'.
declare function b(...args: ArgumentsOFFunction<typeof a>) : boolean
b(...args1) // => boolean
b(...args2) // => boolean
b(1, 'str') // => boolean
b(1, 2) // => boolean
b('1', 2) // trigger a ts error => Error: TS2345: Argument of type '"1"' is not assignable to parameter of type 'number'.
```
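For context, a partial workaround exists today (this is not the requested `ArgumentsOFFunction` type): annotating the spread source as a tuple type gives TypeScript 3.0+ enough information to match the spread against an overload:

```typescript
// Partial workaround available today (not the requested ArgumentsOFFunction
// type): annotate the spread source as a tuple type so that TypeScript can
// match the spread against one of the overloads.
function a(x: number, y: number): number;
function a(x: number, s: string): string;
function a(x: number, ys: number | string): number | string {
  return typeof ys === "number" ? x + ys : `${x}${ys}`;
}

// A plain number[] still triggers TS2556, but a tuple type carries the arity
// and element types needed for overload resolution.
const pair: [number, number] = [1, 2];
const n = a(...pair); // typed as number

const mixed: [number, string] = [1, "hi"];
const s = a(...mixed); // typed as string
```

This still doesn't help when the arguments array is not statically known to be a tuple, which is what the proposed type would address.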
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
379,425,438 | scrcpy | OpenGL Integration | This is purely speculation on my part, but would it be possible to pass the decoding through OpenGL on the computer side? The reason for this... use [Reshade!](https://github.com/crosire/reshade)
There are some really exceptional [filters](https://github.com/crosire/reshade-shaders/tree/master/Shaders) available in Reshade, and it would only require that the stream pass through OpenGL (or Win only DX9+). Some of the filters (i.e. ambient occlusion) require a true 3D rendered scene, but most of the others are just post-processors that don't require the actual 3D. They do, however, use the graphics card texture processing to do the filter and can be quite fast.
Notably, [SMAA](https://github.com/crosire/reshade-shaders/blob/master/Shaders/SMAA.fxh) is a very capable AA filter that can use color/luma in addition to depth buffer to find edges. We can't have depth buffer in this case, but could still have "true" AA as opposed to FXAA. | question,opengl | low | Minor |
379,434,479 | flutter | [google_maps_flutter] Support Widgets as markers | It would be very handy to allow markers on the google maps to be flutter widgets. I know that currently there is already a method to load an image from assets but this is a static marker.
Overlaying a widget as a marker allows for dynamic generation of markers (example: rating of location on the map)
| c: new feature,customer: crowd,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | high | Critical |
379,453,716 | pytorch | Feature request: von Mises-Fisher distribution | ## 🚀 Feature
There should be a von Mises-Fisher distribution in torch.distributions.
## Motivation
It is very important to have these directional distributions when doing research related to directional statistics for machine learning. For example, von Mises-Fisher is very useful for modeling unit-length vectors. See the paper https://arxiv.org/pdf/1804.00891.pdf
## Additional context
TensorFlow has the von Mises-Fisher distribution. https://github.com/tensorflow/tensorflow/issues/6141
Here is also an implementation of von Mises-Fisher in pytorch (using scipy, without GPU support) https://github.com/nicola-decao/s-vae-pytorch/tree/master/hyperspherical_vae/distributions
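As a point of reference, the density itself is cheap to implement. Below is a minimal pure-Python sketch for the p = 3 case (unit sphere in R^3), where the normalizer has the closed form C_3(kappa) = kappa / (4 * pi * sinh(kappa)); this is only an illustration, not a proposed `torch.distributions` API:

```python
import math

def vmf3_log_prob(x, mu, kappa):
    """Log-density of a von Mises-Fisher distribution on the unit sphere in R^3.

    Sketch only: assumes x and mu are unit-norm 3-vectors and kappa > 0.
    """
    # Closed-form normalizer for p = 3: C_3(kappa) = kappa / (4*pi*sinh(kappa))
    log_c = math.log(kappa) - math.log(4.0 * math.pi * math.sinh(kappa))
    dot = sum(a * b for a, b in zip(x, mu))
    return log_c + kappa * dot
```

A full implementation would also need a reparameterizable sampler (e.g. Wood's algorithm, as in the s-vae-pytorch code linked above) and a numerically stable normalizer for large kappa.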
cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw | module: distributions,triaged | medium | Major |
379,463,584 | terminal | MiniTerm: ProcessFactory.Start is hard-coded to run "cmd.exe" and ignores intended parameter | See:
https://github.com/Microsoft/console/blob/34ff272cfa5b49dcb0bbe7aaec41797d9c285e5e/samples/ConPTY/MiniTerm/MiniTerm/Processes/ProcessFactory.cs#L21 | Work-Item,Help Wanted,Area-Interop,Product-Conpty,Issue-Samples | low | Minor |
379,484,198 | kubernetes | Make it clear why a namespace is still in Terminating | Users need to know why a namespace is not being deleted when it has been in the Terminating state for a sufficiently long amount of time (see #64002). We should have a status condition or set of events that make it clear which objects the namespace deleter is trying / failing to delete.
<!-- DO NOT EDIT BELOW THIS LINE -->
/kind feature
/sig api-machinery | sig/api-machinery,kind/feature,lifecycle/frozen | medium | Critical |
379,484,946 | pytorch | test_spectral_norm: Backward is not reentrant | I got this on a run:
```
Nov 09 21:56:23 ======================================================================
Nov 09 21:56:23 ERROR: test_spectral_norm (__main__.TestNN)
Nov 09 21:56:23 ----------------------------------------------------------------------
Nov 09 21:56:23 Traceback (most recent call last):
Nov 09 21:56:23 File "/var/lib/jenkins/workspace/test/common_utils.py", line 116, in wrapper
Nov 09 21:56:23 fn(*args, **kwargs)
Nov 09 21:56:23 File "test_nn.py", line 1864, in test_spectral_norm
Nov 09 21:56:23 torch.autograd.gradcheck(fn, (input.clone().requires_grad_(),))
Nov 09 21:56:23 File "/opt/conda/lib/python3.6/site-packages/torch/autograd/gradcheck.py", line 208, in gradcheck
Nov 09 21:56:23 return fail_test('Backward is not reentrant, i.e., running backward with same '
Nov 09 21:56:23 File "/opt/conda/lib/python3.6/site-packages/torch/autograd/gradcheck.py", line 185, in fail_test
Nov 09 21:56:23 raise RuntimeError(msg)
Nov 09 21:56:23 RuntimeError: Backward is not reentrant, i.e., running backward with same input and grad_output multiple times gives different values, although analytical gradient matches numerical gradient
Nov 09 21:56:23
```
https://circleci.com/gh/pytorch/pytorch/215442?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano | module: autograd,triaged,module: data parallel,module: norms and normalization | low | Critical |
379,490,743 | vscode | emmet.includeLanguages for one to many mapping | How can I enable several languages for a given file extension please?
For example
```json
"emmet.includeLanguages": {
"tt": ["html", "css", "javascript"],
}
```
Except this doesn't work. I tried with multiple references to the key
```json
"emmet.includeLanguages": {
"tt": "html",
"tt": "css",
"tt": "javascript",
}
```
and also with an embedded object
```json
"emmet.includeLanguages": {
"tt": {
"html":"html",
"css": "css",
"javascript": "javascript",
  }
}
```
no joy :-(
-------------------------------------------------
```json
"emmet.includeLanguages": {
"erb": "html"
}
```
_Originally posted by @abnersajr in https://github.com/Microsoft/vscode/issues/9500#issuecomment-339634819_ | help wanted,feature-request,emmet | medium | Major |
379,500,300 | nvm | usage of `command` in `nvm.sh` does not prevent alias substitution for `alias -g` aliasses | - Operating system and version:
Linux 4.18.16
zsh 5.6.2
- `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
nvm --version: v0.33.11
$SHELL: /usr/bin/zsh
$SHLVL: 5
$HOME: /home/pseyfert
$NVM_DIR: '$HOME/.nvm_zsh'
$PATH: $HOME/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
$PREFIX: ''
$NPM_CONFIG_PREFIX: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'zsh 5.6.2 (x86_64-pc-linux-gnu)'
uname -a: 'Linux 4.18.16-arch1-1-ARCH #1 SMP PREEMPT Sat Oct 20 22:06:45 UTC 2018 x86_64 GNU/Linux'
OS version: Arch Linux ()
curl: /usr/bin/curl, curl 7.62.0 (x86_64-pc-linux-gnu) libcurl/7.62.0 OpenSSL/1.1.1 zlib/1.2.11 libidn2/2.0.5 libpsl/0.20.2 (+libidn2/2.0.4) libssh2/1.8.0 nghttp2/1.34.0
wget: /usr/bin/wget, GNU Wget 1.19.5 built on linux-gnu.
git: /usr/bin/git, git version 2.19.1
grep: grep is a global alias for grep GO, grep (GNU grep) 3.1
--color=always: not found
--exclude=*.pyc: not found
--exclude-dir=.git: not found
--exclude-dir=.svn: not found
-InH: not found
awk: /usr/bin/awk, GNU Awk 4.2.1, API: 2.0 (GNU MPFR 4.0.1, GNU MP 6.1.2)
sed: /usr/bin/sed, sed (GNU sed) 4.5
cut: /usr/bin/cut, cut (GNU coreutils) 8.30
basename: /usr/bin/basename, basename (GNU coreutils) 8.30
rm: /usr/bin/rm, rm (GNU coreutils) 8.30
mkdir: /usr/bin/mkdir, mkdir (GNU coreutils) 8.30
xargs: /usr/bin/xargs, xargs (GNU findutils) 4.6.0
nvm current: system
which node: /usr/bin/node
which iojs: iojs not found
which npm: /usr/bin/npm
npm config get prefix: /usr
npm root -g: /usr/lib/node_modules
```
</details>
- `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
-> system
node -> stable (-> N/A) (default)
iojs -> N/A (default)
lts/* -> lts/dubnium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.14.4 (-> N/A)
lts/carbon -> v8.12.0 (-> N/A)
lts/dubnium -> v10.13.0 (-> N/A)
```
</details>
- How did you install `nvm`? (e.g. install script in readme, Homebrew):
From [AUR](https://aur.archlinux.org/packages/nvm/)
- What steps did you perform?
```sh
export NVM_DIR="$HOME/.nvm_zsh"
source /usr/share/nvm/nvm.sh
source /usr/share/nvm/install-nvm-exec
NVM_VERSION_ONLY=true NVM_LTS=carbon nvm_remote_version ""
```
(Well, actually I tried `nvm install --lts=carbon`, but going through what nvm does when calling it like that leads here)
- What happened?
`nvm_remote_version` prints in magenta (is that the color's name?) `(standard`.
(I'm trying to attach a screenshot but the page says "Something went really wrong, and we can't process that file." … will try a different browser later to update)
- What did you expect to happen?
The same as in bash. It should've printed `v8.12.0` in black. (or rather the standard terminal color)
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
Yes, but these two lines are more relevant for this issue:
```zsh
#alias -g "GO"=$GREP_OPTIONS
#alias "grep"="grep $GREP_OPTIONS"
#GREP_OPTIONS="--color=always --exclude='*.pyc' --exclude-dir=.git --exclude-dir=.svn -InH"
alias -g "GO"="--color=always --exclude='*.pyc' --exclude-dir=.git --exclude-dir=.svn -InH"
alias -g "grep"="grep GO"
```
- What's your explanation for the problem?
nvm.sh defines the following shell function:
```sh
nvm_grep() {
GREP_OPTIONS='' command grep "$@"
}
```
The `command` keyword should prevent shell alias substitution from taking place. To my understanding, the mechanism is that the otherwise-to-be-resolved alias is no longer in command position. This is only effective for aliases that are not resolved in non-command positions. Global zsh aliases, however, also get resolved in non-command positions (the use case being that filename and line-number printing remains aliased when using `find -exec grep {} \;`).
Somewhere in the call stack of `nvm_remote_version`, `nvm_grep` is run. For me this still invokes the grep command with the -InH flags and thus prints `(standard input):<linenumber>:<expected output>`, which is then cut off after the first whitespace, so `nvm install` tries to download the `(standard` version of node instead of `v8.12.0`.
- Suggestions for fixing
Even global aliases do not get resolved when quoted or backslashed:
```sh
GREP_OPTIONS='' command 'grep' "$@"
GREP_OPTIONS='' command \grep "$@"
```
| shell: zsh,shell alias clobbering,pull request wanted | low | Critical |
379,509,631 | opencv | Arm Mali/OpenCL - OpenCL error CL_OUT_OF_RESOURCES | **_update: this bug is specific to Mali Midgard(T8xx\T7xx\T6xx) GPU only. More resent Mali Bifrost(Gxx) series not affected**
##### System information (version)
- OpenCV => 4.0beta
- Operating System / Platform => Ubuntu 18.04/arm64
- Compiler => Ubuntu/Linaro 7.3.0-27
- OpenCL => OpenCL 1.2 v1.r14p0-01rel0-git(966ed26)
- GPU => ARM Mali T860(Midgard 4gen)
##### Detailed description
It is a bug specific to ARM Mali hardware in OpenCL, related to the limited resources of this mobile device.
`OpenCL error CL_OUT_OF_RESOURCES (-5) during call: clEnqueueNDRangeKernel('stage1_with_sobel', dims=2, globalsize=4096x4096x1, localsize=32x8x1) sync=false`
Reducing the `maxWorkGroupSize_` value of `cl::Device` from the default `256` to `128` (to give more resources to the kernel) removes that particular bug in `cv::GaussianBlur` with size `3x3` (and maybe many more), but bigger parameters, like size `5x5`, or other kernels, like the one described in issue #11503, still lead to `CL_OUT_OF_RESOURCES`.
I am continuing to evaluate other CL-accelerated methods on ARM Mali-T860 to figure out other causes of `CL_OUT_OF_RESOURCES`.
##### Steps to reproduce
```.cpp
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
UMat img, gray;
imread("lena.jpg", IMREAD_COLOR).copyTo(img); //use lena.jpg from CV examples
cvtColor(img, gray, COLOR_BGR2GRAY);
GaussianBlur(gray, gray,Size(3, 3), 1.5);
Canny(gray, gray, 0, 50);
return 0;
}
```
| category: ocl,RFC,platform: arm | medium | Critical |
379,526,442 | opencv | the location of tutorial(cuda module) is not appropriate | ##### System information (version)
- OpenCV => 4.0.0-beta
- Operating System / Platform => All Platform
- Compiler => All Compiler
##### Detailed description
In PR https://github.com/opencv/opencv/pull/12585, cuda module was moved to opencv_contrib.
However, the tutorial (cuda module) remains in the [main repository](https://github.com/opencv/opencv/tree/4.0.0-beta/doc/tutorials/gpu).
I think that this tutorial should move to opencv_contrib, too.
##### Steps to reproduce
```shell
$ cmake -D BUILD_DOCS=ON -D WITH_CUDA=ON -D OPENCV_EXTRA_MODULES_PATH=<opencv_contrib>/modules <opencv_source_directory>
$ make doxygen
``` | category: documentation,category: contrib,category: gpu/cuda (contrib) | low | Minor |
379,544,568 | TypeScript | intellisense suggests inferred type parameter in else branch of conditional type |
**TypeScript Version:** 3.2.0-dev.20181101
**Search Terms:**
**Code**
```ts
type T<U> = U extends Array<infer V> ? V : Array</**/>;
```
**Expected behavior:**
Intellisense does not suggest `V` in the else branch as it will result in an error.
**Actual behavior:**
Suggests `V`, using `V` there is an error
**Playground Link:** https://agentcooper.github.io/typescript-play/#code/C4TwDgpgBAKgPAVQHxQLxQVCAPYEB2AJgM5QCCAThQIYhwCW+AZhBVAGooD8HUAXOSq04SANxA
**Related Issues:**
This is likely caused by `TypeChecker.getSymbolsInScope`, which doesn't use the same rules as the actual symbol lookup
| Bug,Domain: Completion Lists | low | Critical |
379,545,375 | TypeScript | intellisense in static method suggests TypeParameter of outer class |
**TypeScript Version:** 3.2.0-dev.20181101
**Search Terms:**
**Code**
```ts
class Outer<T> {
doStuff() {
class Inner<T> {
static doStuff(param: /**/) {}
}
}
}
```
**Expected behavior:**
Intellisense does not suggest `T` as it will result in an error.
OR
`T` of `Inner` doesn't shadow the outer `T` in static members, which is therefore accessible.
**Actual behavior:**
Suggests `T` from `Outer`.
**Playground Link:** https://agentcooper.github.io/typescript-play/#code/MYGwhgzhAEDyCuAXApgJwDwBUB80DeAUNMdACYD2AyovAGa0AUAlPkSe6JDAJIB2vaLLkLtRJCIjCIAlsDJUa9BgAcwqMAFsAXNAD0AKn26WeAL5sx59udNA
**Related Issues:**
#28471
`TypeChecker.getSymbolsInScope` correctly excludes TypeParameters of the containing class for its static members. Because of that an outer scope can add a symbol with the same name.
| Suggestion,In Discussion | low | Critical |
379,549,065 | TypeScript | Should intellisense suggest block scoped values before they are declared? | Currently intellisense suggests every block-scoped declaration in scope, even if the suggestion is located within the TDZ. When accepting such a suggestion, TypeScript issues a `used before its declaration` error:
```ts
new /**/; // suggests 'Clazz', which results in an error
class Clazz {}
function test(
foo = /**/, // suggests `foo`, `bar` and `baz`, all of them result in an error
bar = /**/, // suggests `bar` and `baz`, both result in an error
) {
let baz = 1;
}
```
Should intellisense filter out suggestions that would result in an error? If so, it should probably use the same condition as the checker.
Related: #28472 #28471 | Suggestion,Domain: Completion Lists,Experience Enhancement | low | Critical |
379,568,274 | flutter | Dismissible widgets disable Tabview drag detection | I have two tabs, the left tab having a list of tiles and the right tab having nothing. The user can drag the screen from right-to-left or left-to-right to get from one tab to the other.
The left tab has a list of dismissible tiles that only have `direction: DismissDirection.startToEnd` (from left-to-right) enabled so that the user can still _theoretically_ drag (from right-to-left) to go to the right tab.
However, I believe the `Dismissible` widget still receives the right-to-left drag information which is disabling the `TabView` drag to change tabs.
In essence, how do I allow the right-to-left drag to be detected by only the `TabView` and not the `Dismissible` item?
If an **explicit** solution/example with code snippets can be given, I would very very much appreciate the help!
UPDATE:
I'm thinking we could change a copy of the `dismissible.dart` file to change the `TabController`, but I'm not sure how I might do that.
In the `dismissible.dart` file:
```dart
...
void _handleDragUpdate(DragUpdateDetails details) {
if (!_isActive || _moveController.isAnimating)
return;
final double delta = details.primaryDelta;
if (delta < 0) print(delta); // thinking of doing something here
...
``` | framework,f: material design,f: gestures,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Major |
379,610,336 | opencv | Suggestion: use opencv:: namespace for cmake imported targets | While the `::` syntax seems more modern and conventional in CMake, it also has an obvious benefit.
The following won't produce any error until link time, when it fails with an undefined reference to `cv::imshow`.
```cmake
find_package(OpenCV 3.3 REQUIRED COMPONENTS core)
target_link_libraries(myExe opencv_core opencv_highgui)
```
But if using `opencv::highgui`, we get a configuration error from cmake:
```cmake
Target "myExe" links to target "opencv::highgui" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
```
| priority: low,category: build/install,future | low | Critical |
379,622,582 | vue | Unexpected component destroyed trigger by sibling component | ### Version
2.5.17
### Reproduction link
[https://jsfiddle.net/teddy_scmp/2m6kv3rn/](https://jsfiddle.net/teddy_scmp/2m6kv3rn/)
### Steps to reproduce
1. Open console
2. Click the TOGGLE button
### What is expected?
It is weird that the component between the two v-if blocks gets destroyed and mounted again.
### What is actually happening?
1. AComponent is destroyed, which is unexpected.
2. BComponent is kept; I only added a class to it.
---
In addition, I find that a plain DIV causes this issue; if I add a class to it, or change it to an `a` tag or a button, it won't be destroyed.
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement,has workaround | medium | Minor |
379,681,839 | opencv | Stitching: failed test: ParallelFeaturesFinder.IsSameWithSerial (master branch) | [Nightly build](http://pullrequest.opencv.org/buildbot/builders/master-win64-vc14/builds/10634) (sporadic failures on Windows):
```
[ RUN ] ParallelFeaturesFinder.IsSameWithSerial
C:\build\master-win64-vc14\opencv\modules\stitching\test\test_matchers.cpp(103): error: Expected equality of these values:
countNonZero(diff_descriptors)
Which is: 12691
0
C:\build\master-win64-vc14\opencv\modules\stitching\test\test_matchers.cpp(103): error: Expected equality of these values:
countNonZero(diff_descriptors)
Which is: 12691
0
...
```
---
Problem is related to ORB descriptors (not bit-exact between runs for some reason) | bug,test,category: stitching | low | Critical |
379,713,482 | pytorch | Pytorch-Caffe2 export: "Arrays are not almost equal to 3 decimals" | ## 📚 Documentation
Following the "[Transfering a Model from PyTorch to Caffe2 and Mobile using ONNX](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html)" I get the following exception:
```python
Arrays are not almost equal to 3 decimals
```
when running
```python
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
```
(with the documentation specifically asking to contact the team when this happens. :))
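When reporting such a failure it helps to quantify how large the mismatch actually is. A small sketch (hypothetical helper; relies on numpy's documented criterion `abs(desired - actual) < 1.5 * 10**-decimal`):

```python
import numpy as np

def report_mismatch(expected, actual, decimal=3):
    """Summarize how far two arrays are from passing assert_almost_equal.

    Hypothetical debugging helper, not part of the tutorial: assert_almost_equal
    passes when abs(expected - actual) < 1.5 * 10**-decimal element-wise.
    """
    diff = np.abs(np.asarray(expected, dtype=float) - np.asarray(actual, dtype=float))
    tol = 1.5 * 10.0 ** (-decimal)
    return {
        "max_abs_diff": float(diff.max()),
        "num_failing": int((diff >= tol).sum()),
        "tolerance": tol,
    }
```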
I have been able to reproduce the error on both Ubuntu 16.04 and Google Colab. Note that I did not install onnx-caffe2 and that I updated the source code following [PR-348](https://github.com/pytorch/tutorials/pull/348).
This is the environment on Ubuntu 16.04
**Conda env**:
```python
# packages in environment at ~/.conda/envs/torch-onnx:
#
# Name Version Build Channel
backcall 0.1.0 py36_0
blas 1.0 mkl
bleach 3.0.2 py36_0
bzip2 1.0.6 h14c3975_5
ca-certificates 2018.03.07 0
certifi 2018.10.15 py36_0
cffi 1.11.5 py36he75722e_1
cmake 3.12.2 h52cb24c_0
cycler 0.10.0 py36_0
dbus 1.13.2 h714fa37_1
decorator 4.3.0 py36_0
entrypoints 0.2.3 py36_2
expat 2.2.6 he6710b0_0
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h8a8886c_1
future 0.17.1 <pip>
glib 2.56.2 hd408876_0
gmp 6.1.2 h6c8ec71_1
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
icu 58.2 h9c2bf20_1
intel-openmp 2019.0 118
ipykernel 5.1.0 py36h39e3cac_0
ipython 7.1.1 py36h39e3cac_0
ipython_genutils 0.2.0 py36_0
ipywidgets 7.4.2 py36_0
jedi 0.13.1 py36_0
jinja2 2.10 py36_0
jpeg 9b h024ee3a_2
jsonschema 2.6.0 py36_0
jupyter 1.0.0 py36_7
jupyter_client 5.2.3 py36_0
jupyter_console 6.0.0 py36_0
jupyter_core 4.4.0 py36_0
kiwisolver 1.0.1 py36hf484d3e_0
libcurl 7.61.1 heec0ca6_0
libedit 3.1.20170329 h6b74fdf_2
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.35 hbc83047_0
libprotobuf 3.6.1 hd408876_0
libsodium 1.0.16 h1bed415_0
libssh2 1.8.0 h9cfc8f7_4
libstdcxx-ng 8.2.0 hdf63c60_1
libuuid 1.0.3 h1bed415_2
libxcb 1.13 h1bed415_1
libxml2 2.9.8 h26e45fe_1
magma-cuda90 2.3.0 1 pytorch
markupsafe 1.0 py36h14c3975_1
matplotlib 3.0.1 py36h5429711_0
mistune 0.8.4 py36h7b6447c_0
mkl 2019.0 118
mkl-include 2019.0 118
mkl_fft 1.0.6 py36h7dd41cf_0
mkl_random 1.0.1 py36h4414c95_1
mkldnn 0.16.1 0 mingfeima
nbconvert 5.3.1 py36_0
nbformat 4.4.0 py36_0
ncurses 6.1 hf484d3e_0
ninja 1.8.2 py36h6bb024c_1
notebook 5.7.0 py36_0
numpy 1.15.4 py36h1d66e8a_0
numpy-base 1.15.4 py36h81de0dd_0
onnx 1.3.0 <pip>
openssl 1.0.2p h14c3975_0
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py36_1
parso 0.3.1 py36_0
pcre 8.42 h439df22_0
pexpect 4.6.0 py36_0
pickleshare 0.7.5 py36_0
pip 18.1 py36_0
prometheus_client 0.4.2 py36_0
prompt_toolkit 2.0.7 py36_0
protobuf 3.6.1 py36he6710b0_0
ptyprocess 0.6.0 py36_0
pycparser 2.19 py36_0
pygments 2.2.0 py36_0
pyparsing 2.3.0 py36_0
pyqt 5.9.2 py36h05f1152_2
python 3.6.5 hc3d631a_2
python-dateutil 2.7.5 py36_0
pytorch-nightly 1.0.0.dev20181109 py3.6_cuda9.0.176_cudnn7.1.2_0 pytorch
pytz 2018.7 py36_0
pyyaml 3.13 py36h14c3975_0
pyzmq 17.1.2 py36h14c3975_0
qt 5.9.6 h8703b6f_2
qtconsole 4.4.2 py36_0
readline 7.0 h7b6447c_5
rhash 1.3.6 hb7f436b_0
send2trash 1.5.0 py36_0
setuptools 40.5.0 py36_0
sip 4.19.8 py36hf484d3e_0
six 1.11.0 py36_1
sqlite 3.25.2 h7b6447c_0
terminado 0.8.1 py36_1
testpath 0.4.2 py36_0
tk 8.6.8 hbc83047_0
tornado 5.1.1 py36h7b6447c_0
traitlets 4.3.2 py36_0
typing 3.6.4 py36_0
typing-extensions 3.6.6 <pip>
wcwidth 0.1.7 py36_0
webencodings 0.5.1 py36_1
wheel 0.32.2 py36_0
widgetsnbextension 3.4.2 py36_0
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zeromq 4.2.5 hf484d3e_1
zlib 1.2.11 ha838bed_2
```
**Pip list**:
```python
Package Version
------------------ -----------------
backcall 0.1.0
bleach 3.0.2
certifi 2018.10.15
cffi 1.11.5
cycler 0.10.0
decorator 4.3.0
entrypoints 0.2.3
future 0.17.1
ipykernel 5.1.0
ipython 7.1.1
ipython-genutils 0.2.0
ipywidgets 7.4.2
jedi 0.13.1
Jinja2 2.10
jsonschema 2.6.0
jupyter 1.0.0
jupyter-client 5.2.3
jupyter-console 6.0.0
jupyter-core 4.4.0
kiwisolver 1.0.1
MarkupSafe 1.0
matplotlib 3.0.1
mistune 0.8.4
mkl-fft 1.0.6
mkl-random 1.0.1
nbconvert 5.3.1
nbformat 4.4.0
notebook 5.7.0
numpy 1.15.4
onnx 1.3.0
pandocfilters 1.4.2
parso 0.3.1
pexpect 4.6.0
pickleshare 0.7.5
pip 18.1
prometheus-client 0.4.2
prompt-toolkit 2.0.7
protobuf 3.6.1
ptyprocess 0.6.0
pycparser 2.19
Pygments 2.2.0
pyparsing 2.3.0
python-dateutil 2.7.5
pytz 2018.7
PyYAML 3.13
pyzmq 17.1.2
qtconsole 4.4.2
Send2Trash 1.5.0
setuptools 40.5.0
six 1.11.0
terminado 0.8.1
testpath 0.4.2
torch 1.0.0.dev20181109
tornado 5.1.1
traitlets 4.3.2
typing 3.6.4
typing-extensions 3.6.6
wcwidth 0.1.7
webencodings 0.5.1
wheel 0.32.2
widgetsnbextension 3.4.2
``` | caffe2 | low | Critical |
379,832,306 | opencv | cv::cvtColor does not properly take transparency into account | ##### System information (version)
- OpenCV => 3.2
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2013
##### Detailed description
``cv::cvtColor(image, outImage, cv::COLOR_BGRA2GRAY);`` and ``cv::cvtColor(image, outImage, cv::COLOR_BGRA2BGR);`` don't handle transparency well: the RGB pixel values determine the output even if the alpha value is zero.
One could indeed argue that it's because of a "wrong" image, but this seems to be what many standard editing/conversion tools produce.
##### Steps to reproduce
Use this test image at ``path``: [PngWithTransparency.png](https://user-images.githubusercontent.com/1710255/48357036-e3bc0380-e697-11e8-8e7a-6d67a3a6ed85.png)
```c++
cv::Mat image= cv::imread(path, cv::IMREAD_UNCHANGED);
cv::Mat noAlpha, noAlphaGray;
cv::cvtColor(image, noAlpha, cv::COLOR_BGRA2BGR);
cv::cvtColor(image, noAlphaGray, cv::COLOR_BGRA2GRAY);
```
This is what is inside the images: a black "bounding box" and then a fully white border around it. Expected would be a fully black background.

BTW: ``cv::imshow("image", image)`` would produce the same result as ``noAlpha``
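A common workaround (my addition, not part of the report) is to premultiply the color channels by alpha before dropping it, so fully transparent pixels become black. The per-pixel arithmetic, sketched in plain Python:

```python
def premultiply_bgra(pixel):
    """Scale B, G, R by alpha so transparent pixels go to black.

    pixel is a (b, g, r, a) tuple of 0-255 ints, as stored in a BGRA Mat.
    """
    b, g, r, a = pixel
    return tuple(c * a // 255 for c in (b, g, r))

# A fully transparent white pixel converts to black instead of white:
print(premultiply_bgra((255, 255, 255, 0)))    # (0, 0, 0)
# An opaque white pixel stays white:
print(premultiply_bgra((255, 255, 255, 255)))  # (255, 255, 255)
```

With OpenCV this amounts to scaling the color planes by the alpha plane before calling ``cvtColor``.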
| wontfix,category: imgproc | low | Major |
379,870,692 | go | x/tools/go/packages: include directness information to ParseFile callback | Wire [recently switched to go/packages](https://github.com/google/go-cloud/pull/623) after using `golang.org/x/tools/go/loader`. As noted in google/go-cloud#663, we observed a ~4x slowdown. Since not all of the time is spent inside the Wire process, profiling has been difficult. However, for a baseline Wire run that took 2.26s (originally 0.3s), half the time is spent in `go list`, the other half is spent in `go/types.dependencyGraph` and map assignments.
This indicates to me that at least part of the problem still is having too much input being sent to the typechecker. One of the tricks Wire employed in the `loader`-based code was to skip function bodies for dependency packages (Wire needs function body type-checking for the root packages). However, the `ParseFile` callback does not give enough information to make this determination. I would like for the arguments to `ParseFile` to include whether the file is being parsed for the root packages or imported packages. Something like:
```go
package packages
type Config struct {
// ...
ParseFile func(fset *token.FileSet, filename string, src []byte, isRoot bool) (*ast.File, error)
}
```
It does seem quite possible that more information could be added over time, so a more future-friendly change could be along the lines of:
```go
package packages
type ParseFileRequest struct {
Fset *token.FileSet
Filename string
Src []byte
IsRoot bool
}
type Config struct {
// ...
ParseFile func(*ParseFileRequest) (*ast.File, error)
}
``` | Tools | low | Critical |
379,956,832 | go | x/build/maintner: GitHubIssue.ClosedBy field is never populated | There exists a [`GitHubIssue.ClosedBy`](https://godoc.org/golang.org/x/build/maintner#GitHubIssue.ClosedBy) field in `maintner`:
```Go
type GitHubIssue struct {
...
ClosedBy *GitHubUser
...
}
```
### What did you expect to see?
Accurate values.
### What did you see instead?
The field is never populated and always equals to `nil` for all GitHub issues.
This can be misleading for anyone looking to use that information.
## Cause
The `closed_by` JSON field is documented and shown in the sample response at https://developer.github.com/v3/issues/#get-a-single-issue.
However, `maintner` uses the https://developer.github.com/v3/issues/#list-issues-for-a-repository endpoint for getting information about many issues at once:
https://github.com/golang/build/blob/23803abc1638efbf100d69fe6d901b14a9ad55fd/maintner/github.go#L1605-L1613
But GitHub doesn't include all detailed fields when listing many issues rather than getting a single issue. The `closed_by` field is indeed missing:
<details><summary>Response from Get Single Issue Endpoint</summary><br>
```
...
"comments": 1,
"created_at": "2018-11-09T00:20:31Z",
"updated_at": "2018-11-09T00:25:34Z",
"closed_at": "2018-11-09T00:22:12Z",
"author_association": "MEMBER",
"body": "It exists in [...]",
"closed_by": {
"login": "gopherbot",
...
}
}
```
</details><br>
<details><summary>Response from List Issues Endpoint</summary><br>
```
...
"comments": 1,
"created_at": "2018-11-09T00:20:31Z",
"updated_at": "2018-11-09T00:25:34Z",
"closed_at": "2018-11-09T00:22:12Z",
"author_association": "MEMBER",
"body": "It exists in [...]"
},
```
</details>
## Possible Fixes
I see two possible solutions:
1. Populate the field.
2. Remove the field (or document it as broken and point to this issue).
Since this field isn't included in the existing endpoint queried by `maintner`, it would require making additional API calls per issue. That can be extremely expensive and simply not viable.
I'd suggest removing it or documenting, at least as an intermediate step. But open to ideas. /cc @bradfitz | help wanted,Builders,NeedsFix | low | Critical |
379,974,911 | pytorch | [c10d] Coordinated file truncation for FileStore | Comment by @SsnL in https://github.com/pytorch/pytorch/issues/13750#issuecomment-437242684, and discussion following that comment.
The FileStore will delete the underlying file if all participating processes terminate gracefully. If they don't, there will be a file hanging around. If you try to reuse that file, the process group that uses it will get to see old values, and likely hang or crash. The technique shown by @SsnL fixes this by ensuring that the file gets truncated when it is first opened.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar @jiayisuse @agolynski | oncall: distributed,feature,triaged,distributed-backlog | low | Critical |
379,976,992 | rust | proc_macro support for reading files/strings to spanned TokenStream | Currently, as far as I am aware, there is no way to get spans for tokens parsed from a source other than the macro's callsite, and no good way to tell rustc & cargo a procedural macro depends on another file. It would be nice for macros which want to load other files to enable this.
It'd be nice to support some mechanism for loading a file as a `TokenStream`. Ideally this would take the form of a method in the crate directly parsing a file / string to a `TokenStream`. For example:
1. A function to load a file as a `TokenStream` with accurate span info etc, e.g.
```rust
fn lex_file(path: &Path) -> io::Result<TokenStream>;
```
2. A function to parse a string with a given filename. This might need an additional mechanism to register the source file as a dependency, e.g.
```rust
fn lex_str(src: &str, filename: &Path) -> Result<TokenStream, LexErr>;
fn add_path_dependency(path: &Path) -> ...;
```
To make it generally useful & able to implement something like `include!()` we'd probably also want/need something like #54725 to get the file to load relative to.
| A-macros,A-proc-macros | medium | Major |
379,987,737 | flutter | Performance overlay is wrong when embedding UIViews on iOS | With an embedded platform view we have multiple SurfaceFrames for a single frame; the stopwatch presented in the performance overlay measures each of them separately. | platform-ios,engine,a: platform-views,P2,team-ios,triaged-ios | low | Major |
380,005,274 | godot | Using canvas world environment makes viewport render target ignore texture flip flag | godot 3.1, Windows 10, gles3, commit hash:
caa141a1ac630144e441f89689baa1a74bb3bae5
When you use a world environment in 2D (canvas mode), the viewport rendering to a texture in the scene ignores the V Flip flag. You can toggle this flag many times and nothing happens.
When you remove the world environment, the viewport uses the flag again.
| bug,topic:rendering,confirmed | low | Major |
380,012,045 | electron | Support for the ppc64le architecture via existing patches | **Is your feature request related to a problem? Please describe.**
Currently Electron does not support the ppc64le architecture and therefore can't be used on machines like the libre Talos II workstation from Raptor Computing Systems.
On this front, others and I have recently ported Chromium to ppc64le. We are currently attempting to get the patches upstream in Chromium, but unfortunately Google has been slow in acknowledging them.
In the meantime, I have integrated the patches into Electron's patch hook which allows the project to be built and run on ppc64le systems[1]. Incorporating these changes would allow users on the ppc64le platform to benefit from Electron's broad ecosystem without needing to wait for Google.
**Describe the solution you'd like**
While the Chromium patches are pending upstream, I would like them to be incorporated into Electron's build system. I have already done this in a fork[1], and with the Electron team's guidance, I would like the changes to be merged so that official Electron binaries can be provided and the existing ecosystem tooling can take advantage of them.
**Describe alternatives you've considered**
Ideally, the patches will be accepted into upstream Chromium and Electron will have automatic support as a downstream. As stated above, though, there are some delays on that front and I believe the ppc64 community would greatly benefit from Electron support in the meantime.
**Additional context**
The Chromium patchset is actively maintained by me to track upstream, so the Electron team would simply need to pull my changes on Chromium version bumps. Ideally, this would introduce as little maintenance overhead as possible.
I have also written build instructions for ppc64le machines[2].
On the unit test front, there is an issue when running the `npm i --nodedir=../../out/Debug/gen/node_headers` step as described in the Electron documentation. The issue does not appear to be directly related to Electron, but rather to an npm dependency of the unit tests. I have attached the output of this command in hopes that someone more familiar with the ecosystem can provide some insight[3].
I believe introducing ppc64le support in Electron would greatly benefit the users and developers of the ppc64 community, as well as the libre software community as a whole.
[1] https://github.com/shawnanastasio/electron
[2] https://wiki.raptorcs.com/wiki/Porting/Chromium/Electron
[3] https://paste.fedoraproject.org/paste/cYjtZcqxlcV5RwLPsjAxWQ | enhancement :sparkles: | medium | Critical |
380,021,856 | go | x/tools/go/packages: go list emits no CompiledGoFiles for packages that depend on one with an error | The TestLoadSyntaxError test case establishes an import graph a->b->c->d->e->f where e contains an error:
```
$ head $(find src -name \*.go)
==> src/golang.org/fake/d/d.go <==
package d; import "golang.org/fake/e"; const D = "d" + e.E
==> src/golang.org/fake/c/c.go <==
package c; import "golang.org/fake/d"; const C = "c" + d.D
==> src/golang.org/fake/f/f.go <==
package f; const F = "f"
==> src/golang.org/fake/e/e.go <==
package e; import "golang.org/fake/f"; const E = "e" + f.F + 1
==> src/golang.org/fake/b/b.go <==
package b; import "golang.org/fake/c"; const B = "b" + c.C
==> src/golang.org/fake/a/a.go <==
package a; import "golang.org/fake/b"; const A = "a" + b.B
```
If you run tip go list on a and c (as in the test), the reported set of CompiledGoFiles is accurate only for e and f, but not for any package above them, presumably because no build was attempted for those packages:
```
$ go list -compiled -f '{{.ImportPath}} {{.GoFiles}} {{.CompiledGoFiles}}' -deps -export golang.org/fake/a golang.org/fake/c
# golang.org/fake/e
src/golang.org/fake/e/e.go:1:60: cannot convert "ef" (type untyped string) to type int
src/golang.org/fake/e/e.go:1:60: invalid operation: "ef" + 1 (mismatched types string and int)
golang.org/fake/f [f.go] [f.go]
golang.org/fake/e [e.go] [e.go]
golang.org/fake/d [d.go] []
golang.org/fake/c [c.go] []
golang.org/fake/b [b.go] []
golang.org/fake/a [a.go] []
```
Eliminating the type error causes go list to report CompiledGoFiles all the way up:
```
$ GOPATH=$(pwd) go list -compiled -f '{{.ImportPath}} {{.GoFiles}} {{.CompiledGoFiles}}' -deps -export golang.org/fake/a golang.org/fake/c
golang.org/fake/f [f.go] [f.go]
golang.org/fake/e [e.go] [e.go]
golang.org/fake/d [d.go] [d.go]
golang.org/fake/c [c.go] [c.go]
golang.org/fake/b [b.go] [b.go]
golang.org/fake/a [a.go] [a.go]
```
This is arguably a bug in go list, but it suggests we need better test coverage and a more complex and better documented workaround for missing CompiledGoFiles than go/packages has today. | NeedsInvestigation,Tools | low | Critical |
380,099,297 | rust | assert! and assert_eq! generate different assembly | Considering this simple code:
```rust
pub fn foo1(a: u32, b: u32) -> bool {
assert!(a == b);
true
}
pub fn foo2(a: u32, b: u32) -> bool {
assert_eq!(a, b);
true
}
```
They generate very different [assembly](https://rust.godbolt.org/z/_kFRkv):
```asm
example::foo1:
push rax
cmp edi, esi
jne .LBB7_1
mov al, 1
pop rcx
ret
.LBB7_1:
// panicking
example::foo2:
sub rsp, 104
mov dword ptr [rsp], edi
mov dword ptr [rsp + 4], esi
mov rax, rsp
mov qword ptr [rsp + 8], rax
lea rax, [rsp + 4]
mov qword ptr [rsp + 16], rax
cmp edi, esi
jne .LBB8_1
mov al, 1
add rsp, 104
ret
.LBB8_1:
// panicking
```
There is a difference regarding the parameter formatting in the "assert false" branches `LBB7_1` and `LBB8_1`, but what's even worse is that the assert itself differs, and I don't see a reason why. | I-slow,A-codegen | low | Major |
380,109,743 | react | Textarea loses focus after inserting paired punctuation with Chinese IME |
**Do you want to request a *feature* or report a *bug*?**
bug
**What is the current behavior?**
The textarea loses focus after inserting paired punctuation with the "Chinese-Pinyin 10 key" input source on iOS Safari. After that, the textarea can't be focused when I click it; only after another element has been focused can the textarea be focused again.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
[CodeSandbox Demo](https://codesandbox.io/s/7w23wpl0q1)
[steps(youtube video)](https://youtu.be/4PJ2WVD83Eg)
**What is the expected behavior?**
just work fine!
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
versions of React: 16+
OS: IOS
browser: safari
| Type: Bug,Component: DOM | medium | Critical |
380,129,163 | TypeScript | Update Terminal's "--pretty" style. | I find the Pretty mode in TSC useful. But when there are multiple errors, it has so many spaces between each line that it becomes a pain for me to identify each error, even with the terminal in full size. Maybe I'm missing some setting?

It would be nice if the extra spaces were removed, something like this (also saves half of the terminal space):

If the terminal does not allow having an underline effect in the same text line, maybe a highlight effect like this is possible?

| Suggestion,In Discussion | low | Critical |
380,146,905 | godot | CurveTexture throwing error in particles shader generation |
**Godot version:**
3.0.6 stable
**OS/device including version:**
**Issue description:**
I noticed a really rare error thrown in a project I am working on.
It seems to be related to the shader generation in 'particles.cpp'.
Pinging @akien-mga as you had some more and better info on this issue, thanks!
```
0 shader_type particles;
1 uniform float spread;
2 uniform float flatness;
3 uniform float initial_linear_velocity;
4 uniform float initial_angle;
5 uniform float angular_velocity;
6 uniform float orbit_velocity;
7 uniform float linear_accel;
8 uniform float radial_accel;
9 uniform float tangent_accel;
10 uniform float damping;
11 uniform float scale;
12 uniform float hue_variation;
13 uniform float anim_speed;
14 uniform float anim_offset;
15 uniform float initial_linear_velocity_random;
16 uniform float initial_angle_random;
17 uniform float angular_velocity_random;
18 uniform float orbit_velocity_random;
19 uniform float linear_accel_random;
20 uniform float radial_accel_random;
21 uniform float tangent_accel_random;
22 uniform float damping_random;
23 uniform float scale_random;
24 uniform float hue_variation_random;
25 uniform float anim_speed_random;
26 uniform float anim_offset_random;
27 uniform vec3 emission_box_extents;
28 uniform vec4 color_value : hint_color;
29 uniform int trail_divisor;
30 uniform vec3 gravity;
31 uniform sampler2D damping_texture;
32
33
34 float rand_from_seed(inout uint seed) {
35 int k;
36 int s = int(seed);
37 if (s == 0)
38 s = 305420679;
39 k = s / 127773;
40 s = 16807 * (s - k * 127773) - 2836 * k;
41 if (s < 0)
42 s += 2147483647;
43 seed = uint(s);
44 return float(seed % uint(65536))/65535.0;
45 }
46
47 float rand_from_seed_m1_p1(inout uint seed) {
48 return rand_from_seed(seed)*2.0-1.0;
49 }
50
51 uint hash(uint x) {
52 x = ((x >> uint(16)) ^ x) * uint(73244475);
53 x = ((x >> uint(16)) ^ x) * uint(73244475);
54 x = (x >> uint(16)) ^ x;
55 return x;
56 }
57
58 void vertex() {
59 uint base_number = NUMBER/uint(trail_divisor);
60 uint alt_seed = hash(base_number+uint(1)+RANDOM_SEED);
61 float angle_rand = rand_from_seed(alt_seed);
62 float scale_rand = rand_from_seed(alt_seed);
63 float hue_rot_rand = rand_from_seed(alt_seed);
64 float anim_offset_rand = rand_from_seed(alt_seed);
65 float pi = 3.14159;
66 float degree_to_rad = pi / 180.0;
67
68 if (RESTART) {
69 float tex_linear_velocity = 0.0;
70 float tex_angle = 0.0;
71 float tex_anim_offset = 0.0;
72 float spread_rad = spread*degree_to_rad;
73 float angle1_rad = rand_from_seed_m1_p1(alt_seed)*spread_rad;
74 vec3 rot = vec3( cos(angle1_rad), sin(angle1_rad),0.0 );
75 VELOCITY = rot*initial_linear_velocity*mix(1.0, rand_from_seed(alt_seed), initial_linear_velocity_random);
76 float base_angle = (initial_angle+tex_angle)*mix(1.0,angle_rand,initial_angle_random);
77 CUSTOM.x = base_angle*degree_to_rad;
78 CUSTOM.y = 0.0;
79 CUSTOM.z = (anim_offset+tex_anim_offset)*mix(1.0,anim_offset_rand,anim_offset_random);
80 TRANSFORM[3].xyz = vec3(rand_from_seed(alt_seed) * 2.0 - 1.0, rand_from_seed(alt_seed) * 2.0-1.0, rand_from_seed(alt_seed) * 2.0-1.0)*emission_box_extents;
81 VELOCITY = (EMISSION_TRANSFORM * vec4(VELOCITY,0.0)).xyz;
82 TRANSFORM = EMISSION_TRANSFORM * TRANSFORM;
83 VELOCITY.z = 0.0;
84 TRANSFORM[3].z = 0.0;
85 } else {
86 CUSTOM.y += DELTA/LIFETIME;
87 float tex_linear_velocity = 0.0;
88 float tex_orbit_velocity = 0.0;
89 float tex_angular_velocity = 0.0;
90 float tex_linear_accel = 0.0;
91 float tex_radial_accel = 0.0;
92 float tex_tangent_accel = 0.0;
93 float tex_damping = textureLod(damping_texture,vec2(CUSTOM.y,0.0),0.0).r;
94 float tex_angle = 0.0;
95 float tex_anim_speed = 0.0;
96 float tex_anim_offset = 0.0;
97 vec3 force = gravity;
98 vec3 pos = TRANSFORM[3].xyz;
99 pos.z = 0.0;
100 //apply linear acceleration
101 force += length(VELOCITY) > 0.0 ? normalize(VELOCITY) * (linear_accel+tex_linear_accel)*mix(1.0,rand_from_seed(alt_seed),linear_accel_random) : vec3(0.0);
102 //apply radial acceleration
103 vec3 org = EMISSION_TRANSFORM[3].xyz;
104 vec3 diff = pos-org;
105 force += length(diff) > 0.0 ? normalize(diff) * (radial_accel+tex_radial_accel)*mix(1.0,rand_from_seed(alt_seed),radial_accel_random) : vec3(0.0);
106 //apply tangential acceleration;
107 force += length(diff.yx) > 0.0 ? vec3(normalize(diff.yx * vec2(-1.0,1.0)),0.0) * ((tangent_accel+tex_tangent_accel)*mix(1.0,rand_from_seed(alt_seed),tangent_accel_random)) : vec3(0.0);
108 //apply attractor forces
109 VELOCITY += force * DELTA;
110 //orbit velocity
111 float orbit_amount = (orbit_velocity+tex_orbit_velocity)*mix(1.0,rand_from_seed(alt_seed),orbit_velocity_random);
112 if (orbit_amount!=0.0) {
113 float ang = orbit_amount * DELTA * pi * 2.0;
114 mat2 rot = mat2(vec2(cos(ang),-sin(ang)),vec2(sin(ang),cos(ang)));
115 TRANSFORM[3].xy-=diff.xy;
116 TRANSFORM[3].xy+=rot * diff.xy;
117 }
118 if (damping + tex_damping > 0.0) {
119
120 float v = length(VELOCITY);
121 float damp = (damping+tex_damping)*mix(1.0,rand_from_seed(alt_seed),damping_random);
122 v -= damp * DELTA;
123 if (v < 0.0) {
124 VELOCITY = vec3(0.0);
125 } else {
126 VELOCITY = normalize(VELOCITY) * v;
127 }
128 }
129 float base_angle = (initial_angle+tex_angle)*mix(1.0,angle_rand,initial_angle_random);
130 base_angle += CUSTOM.y*LIFETIME*(angular_velocity+tex_angular_velocity)*mix(1.0,rand_from_seed(alt_seed)*2.0-1.0,angular_velocity_random);
131 CUSTOM.x = base_angle*degree_to_rad;
132 CUSTOM.z = (anim_offset+tex_anim_offset)*mix(1.0,anim_offset_rand,anim_offset_random)+CUSTOM.y*(anim_speed+tex_anim_speed)*mix(1.0,rand_from_seed(alt_seed),anim_speed_random);
133 CUSTOM.z = clamp(CUSTOM.z,0.0,1.0);
134 }
135 float tex_scale = textureLod(scale_texture,vec2(CUSTOM.y,0.0),0.0).r;
136 float tex_hue_variation = 0.0;
137 float hue_rot_angle = (hue_variation+tex_hue_variation)*pi*2.0*mix(1.0,hue_rot_rand*2.0-1.0,hue_variation_random);
138 float hue_rot_c = cos(hue_rot_angle);
139 float hue_rot_s = sin(hue_rot_angle);
140 mat4 hue_rot_mat = mat4( vec4(0.299, 0.587, 0.114, 0.0),
141 vec4(0.299, 0.587, 0.114, 0.0),
142 vec4(0.299, 0.587, 0.114, 0.0),
143 vec4(0.000, 0.000, 0.000, 1.0)) +
144 mat4( vec4(0.701, -0.587, -0.114, 0.0),
145 vec4(-0.299, 0.413, -0.114, 0.0),
146 vec4(-0.300, -0.588, 0.886, 0.0),
147 vec4(0.000, 0.000, 0.000, 0.0)) * hue_rot_c +
148 mat4( vec4(0.168, 0.330, -0.497, 0.0),
149 vec4(-0.328, 0.035, 0.292, 0.0),
150 vec4(1.250, -1.050, -0.203, 0.0),
151 vec4(0.000, 0.000, 0.000, 0.0)) * hue_rot_s;
152 COLOR = color_value * hue_rot_mat;
153
154 TRANSFORM[0] = vec4(cos(CUSTOM.x),-sin(CUSTOM.x),0.0,0.0);
155 TRANSFORM[1] = vec4(sin(CUSTOM.x),cos(CUSTOM.x),0.0,0.0);
156 TRANSFORM[2] = vec4(0.0,0.0,1.0,0.0);
157 float base_scale = mix(scale*tex_scale,1.0,scale_random*scale_rand);
158 if (base_scale==0.0) base_scale=0.000001;
159 TRANSFORM[0].xyz *= base_scale;
160 TRANSFORM[1].xyz *= base_scale;
161 TRANSFORM[2].xyz *= base_scale;
162 VELOCITY.z = 0.0;
163 TRANSFORM[3].z = 0.0;
164 }
165
166
SHADER ERROR: (null): Unknown identifier in expression: scale_texture
At: :136.
ERROR: _update_shader: Condition ' err != OK ' is true.
At: drivers/gles3/rasterizer_storage_gles3.cpp:1697.
```
**Steps to reproduce:**
Unfortunately I can't reproduce this easily right now, but I will try to make a minimal project that can.
**Minimal reproduction project:**
| bug,topic:core,topic:rendering | low | Critical |
380,153,517 | create-react-app | What should the behavior of an exception inside a promise be? | ### Is this a bug report?
No/Maybe
I have a section of code (in typescript using "react-scripts": "2.1.1") that generates a promise that is used later. Inside that promise, I occasionally will get a bad object, and expect the promise to be rejected as a result of an object being undefined. Here's a pathological example, where I force the variable a to be undefined to illustrate the problem, which gives
Unhandled Rejection (TypeError): Cannot read property 'toLowerCase' of undefined
```
const a: any = undefined;
cached = new Promise(async (resolve, reject) => {
const ret = a.toLowerCase();
resolve(ret);
});
```
On Chrome this generates an exception at runtime when running the debug build, which prevents any further use of the app, but it does not on Edge.
If instead, I rewrite it like this, it also fails on chrome.
```
const a: any = undefined;
cached = new Promise(async (resolve, reject) => {
try {
const ret = a.toLowerCase();
resolve(ret);
} catch (e) {
throw(e);
}
});
```
This too fails, making it look like anything thrown that derives from `Error` causes the message:
```
const a: any = undefined;
cached = new Promise(async (resolve, reject) => {
try {
const ret = a.toLowerCase();
resolve(ret);
} catch (e) {
throw new Error("fix chrome");
}
});
```
However, if I rewrite it like this, it does not fail on Chrome.
```
const a: any = undefined;
cached = new Promise(async (resolve, reject) => {
try {
const ret = a.toLowerCase();
resolve(ret);
} catch (e) {
throw "fix chrome";
}
});
```
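For what it's worth, the async executor is likely the culprit: an error thrown inside `async (resolve, reject) => …` rejects the async function's own promise, which nothing handles, so it surfaces as an unhandled rejection. A commonly suggested pattern (my addition, not from the issue) is a plain executor with an explicit `reject`, so the consumer's `.catch()` sees the failure:

```javascript
// The function name is illustrative; it mirrors the `cached` snippets above.
function makeCached(a) {
  return new Promise((resolve, reject) => {
    try {
      resolve(a.toLowerCase());
    } catch (e) {
      reject(e); // the rejection is handled by whoever attaches .catch()
    }
  });
}

makeCached(undefined).catch((e) => {
  console.log("caught:", e instanceof TypeError); // caught: true
});
```

This may also explain why throwing a plain string in the last snippet does not trigger the overlay: the rejection reason is not an `Error`.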
| issue: needs investigation | low | Critical |
380,161,227 | opencv | Proposal: VideoCapture to drop frames for low latency still capture | Use case: someone wants to capture still images. However, the camera API, trying not to drop any frames, buffers several frames. This results in a pipeline of "stale" images when frames are read at less than the full configured frame rate.
Those who don't know of this behavior are often puzzled and don't know how to solve it.
The usual solution is to spawn a thread to read from VideoCapture without delay, and keep the most recent frame around for whatever requests it. This is often implemented from scratch by whoever needs this functionality.
Idea: introduce a property/flag to cv::VideoCapture that would enable the described solution. A VideoCapture::read() should not return the same frame twice, i.e. it should block if the most recent frame was already read. This is usually implemented using std::condition_variable.
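The block-until-new-frame behavior is small to sketch; here is an illustrative Python version with `threading.Condition` standing in for `std::condition_variable` and a stub publisher standing in for the camera thread (not OpenCV API):

```python
import threading

class LatestFrame:
    """Holds only the newest frame; read() blocks until an unread one arrives."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None
        self._fresh = False

    def publish(self, frame):        # called by the grabber thread for every frame
        with self._cond:
            self._frame = frame      # older unread frames are simply dropped
            self._fresh = True
            self._cond.notify_all()

    def read(self):                  # never returns the same frame twice
        with self._cond:
            self._cond.wait_for(lambda: self._fresh)
            self._fresh = False
            return self._frame

holder = LatestFrame()
threading.Thread(target=lambda: [holder.publish(i) for i in range(5)]).start()
print(holder.read() is not None)  # True (some frame, possibly not the first)
```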
This might interact with #12077.
Some example code for the principle (and direct access to v4l) can be found here:
* http://triffid-hunter.no-ip.info/Camera.cpp
* http://triffid-hunter.no-ip.info/CameraTest.cpp | priority: normal,feature,category: videoio(camera) | low | Major |
380,180,823 | opencv | G-API: division test is disabled for MathOperatorTest | Testing of the division operation has been disabled by @alalek with PR #13096
I am filing this issue in order to track the fixing and re-enabling of these tests.
Reasoning provided by @alalek about these tests:
Div by exact zero has been changed in OpenCV 4.x (to follow IEEE 754 for floating point numbers): #12826
Div by values near zero provides unreliable results.
A small error in the denominator has a huge impact on the result. So inputs should be "normalized" (values near zero avoided), or the compare function should be based on the input values too (it is not enough to provide only "expected" and "actual" results).
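The near-zero problem is easy to see numerically (my illustration, not from the issue): an input error far below any plausible absolute tolerance still moves the quotient by a large absolute amount, so a fixed-epsilon comparison of "expected" vs "actual" cannot work without looking at the inputs:

```python
num = 1.0
d_exact = 1e-4
d_noisy = 1e-4 + 1e-7   # input error of 1e-7, tiny in absolute terms

q_exact = num / d_exact  # 10000.0
q_noisy = num / d_noisy  # ~9990.01
abs_err = abs(q_exact - q_noisy)
print(round(abs_err, 2))  # ~9.99, far above a typical fixed tolerance
```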
| test,category: g-api / gapi | low | Critical |
380,182,767 | opencv | G-API: please fix the EqHistTest test | Issue description provided by @alalek with the PR #13096
The implementation of the equalize-histogram function is not bit-exact (at least in OpenCV), so this test will break soon with an "exact" check. | test,category: g-api / gapi | low | Minor |
380,219,769 | TypeScript | Feature: Lazy object initialization |
## Search Terms
lazy object initialization
## Suggestion
Introduce the ability to initialize objects lazily and to check whether they were fully initialized.
I suggest a new keyword `lazyof` (or another name) which would work as follows:
- You can assign `Partial<T>` to `lazyof T`.
- Only properties from type `T` can be assigned on a `lazyof T`.
- The compiler should keep track of which properties were assigned to a variable of type `lazyof T`.
- Every time you reference a variable with type `lazyof T`, it should be treated as if its type were `{ properties, which, you, assigned, up, to, this, point }`. This implies that when you assign such a variable to another variable or return it, the implicit type should be the latter (see examples 1 and 2).
- When a `lazyof T` variable goes out of scope, the compiler should check that all the required properties of `T` are set.
## Use Cases
1. Initializing objects like this:
```javascript
const obj = {};
obj.foo = 1;
```
is common practice in JS. It would be nice to have proper type checking for it.
2. Easier migration from JS to TS. You don't need to convert lazy object initialization to eager.
3. Eager object initialization is not always the simplest approach and might complicate code
4. Ability to check whether object was properly initialized in constructor/named constructor (see last example)
## Examples
Simple example:
```typescript
const obj: lazyof { [key: string]: any } = {};
obj.foo = 1;
const otherObj = obj; // otherObj implicit type is { foo: number }
obj.bar = 'a';
const otherObj2 = obj; // otherObj2 implicit type is { foo: number, bar: string }
```
Implicit function return type:
```typescript
function initialize() {
const obj: lazyof { [key: string]: any } = {};
obj.foo = 1;
obj.bar = 'a';
return obj; // function initialize implicit return type is { foo: number, bar: string }
}
```
With specific interface:
```typescript
interface Obj {
foo: number;
}
const obj: lazyof Obj = {};
obj.foo = 1;
obj.bar = 'a'; // error, property bar doesn't exist on Obj.
```
Going out of scope:
```typescript
interface Obj {
foo: number;
bar: string;
}
function initialize(obj: lazyof Obj) {
obj.foo = 1;
// error. obj variable goes out of scope, but { foo: number } is not assignable to Obj
}
```
Check if object instance is initialized:
```typescript
class Obj {
foo: number;
bar: string;
private constructor() {}
static namedConstructor(): Obj {
const self: lazyof Obj = new Obj;
self.foo = 1;
console.log(self.bar); // error, property bar doesn't exist on { foo: number }
return self;
// error. self variable goes out of scope, but { foo: number } is not assignable to Obj
}
}
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | medium | Critical |
380,287,586 | godot | Manually linking static libraries with MSVC appends Godot's LIBSUFFIX |
[fmod.zip](https://github.com/godotengine/godot/files/2576922/fmod.zip)
**Godot version:** 3.1
**OS/device including version:** Windows
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
It's very likely to be my fault, but I can't include the FMOD library when linking a custom module.
It's strange how the module itself compiles without problems, but when it comes to the "Linking program" step the linker stalls for a bit and then outputs `fatal error LNK2019: unresolved external symbol "public: static enum FMOD_RESULT __cdecl FMOD::Studio::System::create(class FMOD::Studio::System * *,unsigned int)" in function "public: __cdecl FMod::FMod(void)"`.
**Minimal reproduction project:**
I placed the "FMOD Studio API Windows" under "thirdparty". The module itself basically contains just a call to FMOD's initialization function. I attached the module as a zip file.
For those who don't want to download the zip, here's the SCsub:
```python
Import('env')
fmodapifolder = "#thirdparty/FMOD Studio API Windows/api"
module_env = env.Clone()
module_env.add_source_files(env.modules_sources, "*.cpp")
module_env.Append(CPPPATH=fmodapifolder + "/studio/inc")
module_env.Append(LIBPATH=fmodapifolder + "/studio/lib")
module_env.Append(CPPPATH=fmodapifolder + "/lowlevel/inc")
module_env.Append(LIBPATH=fmodapifolder + "/lowlevel/lib")
```
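Note that the SCsub above only extends the include and library search paths; nothing adds the FMOD import libraries to the link line. A hedged sketch of the missing step (the `.lib` names are guesses from the FMOD SDK layout; check the actual files in `lowlevel/lib` and `studio/lib`). Passing `File()` nodes instead of bare library names also avoids SCons appending Godot's `LIBSUFFIX`, which is what the issue title refers to:
```python
# Hypothetical addition to the SCsub above (not verified against this SDK
# version). File() nodes are passed to the linker verbatim by SCons, so
# LIBSUFFIX is not appended to them.
module_env.Append(LIBS=[
    File(fmodapifolder + "/lowlevel/lib/fmod64_vc.lib"),
    File(fmodapifolder + "/studio/lib/fmodstudio64_vc.lib"),
])
```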
Here's fmod.cpp:
```c++
#include "fmod.h"
FMod::FMod() {
// Initialize fmod
FMOD::Studio::System* system = NULL;
FMOD::Studio::System::create(&system);
}
void FMod::_bind_methods(){
// none
}
```
Here's fmod.h:
```c++
#ifndef FMODNODE
#define FMODNODE
#include "scene/main/node.h"
#include "fmod.h"
#include "fmod.hpp"
#include "fmod_common.h"
#include "fmod_studio.hpp"
#include "fmod_studio.h"
#include "fmod_studio_common.h"
class FMod : public Node {
GDCLASS(FMod, Node);
protected:
static void _bind_methods();
public:
FMod();
};
#endif
```
Also, I don't know if this can help, but without any call to FMOD functions (only with the `includes`), scons compiles without any issue.
| bug,platform:windows,topic:buildsystem,confirmed | medium | Critical |
380,290,769 | TypeScript | Rest tuple parameter with intersection/union places error on wrong argument | **TypeScript Version:** 3.2.0-dev.20181113
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
rest tuple, intersection, union, error, argument, misplaced
**Code**
```ts
declare function boop<T extends any[]>(...args: T & [number, string]): void;
boop(123, 456); // error
// ~~~ <-- error on first argument
// Argument of type '[123, 456]' is not assignable to parameter
// of type '[number, number] & [number, string]'.
// Type '[123, 456]' is not assignable to type '[number, string]'.
// Type '456' is not assignable to type 'string'.
```
**Expected behavior:**
I expect an error on the second argument (`456`), since that's the argument that causes the failure, as made evident by the error message. (Or possibly I expect an error on both arguments, if it's interpreted as a failure to match the rest parameter.)
**Actual behavior:**
The actual error is on the first argument (`123`). This is a minor problem, and it only seems to show up in the intersection-with-type-parameter situation above (concrete types like `...args: [number, string] & [number, string]` work fine), but I figured I'd report it in case it has an easy fix.
(Note to self or interested non-selves: ran into this in a [Stack Overflow answer](https://stackoverflow.com/a/53175538/2887218))
**Playground Link:**
[🔗](https://www.typescriptlang.org/play//#src=declare%20function%20boop%3CT%20extends%20any%5B%5D%3E%28...args%3A%20T%20%26%20%5Bnumber%2C%20string%5D%29%3A%20void%3B%0Aboop%28123%2C%20456%29%3B%20%2F%2F%20error%0A%2F%2F%20%20%20~~~%20%3C--%20error%20on%20first%20argument%0A%2F%2F%20Argument%20of%20type%20%27%5B123%2C%20456%5D%27%20is%20not%20assignable%20to%20parameter%20%0A%2F%2F%20%20%20of%20type%20%27%5Bnumber%2C%20number%5D%20%26%20%5Bnumber%2C%20string%5D%27.%0A%2F%2F%20Type%20%27%5B123%2C%20456%5D%27%20is%20not%20assignable%20to%20type%20%27%5Bnumber%2C%20string%5D%27.%0A%2F%2F%20Type%20%27456%27%20is%20not%20assignable%20to%20type%20%27string%27.)
**Related Issues:**
Haven't found anything. Anyone find anything?
| Bug,Domain: Error Messages | low | Critical |
380,317,873 | flutter | Using ChangeNotifier as a mixin is messing with the super.dispose() call in a StatefulWidget | I'm seeing an issue which feels like a Flutter bug.
I have a StatefulWidget which has a state class that looks like this
```dart
class _SearchScreenTabsContainerState extends State<SearchScreenTabsContainer>
with SingleTickerProviderStateMixin, ChangeNotifier { ... }
```
I've added the ChangeNotifier mixin so that I can call `textController.notifyListeners();` when changing tabs to refresh my tab content. However by implementing the ChangeNotifier mixin my dispose method which looks like this
```dart
@override
void dispose() {
textController.dispose();
super.dispose();
}
```
throws an error `_SearchScreenTabsContainerState.dispose failed to call super.dispose.`
I found a [SO](https://stackoverflow.com/questions/48678788/flutter-screenstate-dispose-method-exception) thread of someone having the same problem and the solution was to remove the ChangeNotifier mixin, because it looks like the `super.dispose()` call, calls into the ChangeNotifier's dispose method instead of the State's.
I can remove the ChangeNotifier mixin and my code still works (and I don't see the error) but when adding `textController.notifyListeners();` I get this warning "_The member 'notifyListeners' can only be used within instance members of subclasses of 'ChangeNotifier'_".
I might be doing something wrong as I'm very new to Flutter, but this felt worth raising to check if it's indeed a bug or not.
Here's my Flutter doctor output
```
Flutter (Channel beta, v0.9.4, on Mac OS X 10.13.3 17D47, locale en-GB)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
[!] iOS toolchain - develop for iOS devices
✗ Xcode installation is incomplete; a full installation is necessary for iOS development.
Download at: https://developer.apple.com/xcode/download/
Or install Xcode via the App Store.
Once installed, run:
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
✗ libimobiledevice and ideviceinstaller are not installed. To install, run:
brew install --HEAD libimobiledevice
brew install ideviceinstaller
✗ ios-deploy not installed. To install:
brew install ios-deploy
✗ CocoaPods not installed.
CocoaPods is used to retrieve the iOS platform side's plugin code that responds to your plugin usage on the Dart side.
Without resolving iOS dependencies with CocoaPods, plugins will not work on iOS.
For more info, see https://flutter.io/platform-plugins
To install:
brew install cocoapods
pod setup
[✓] Android Studio
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] Android Studio (version 3.2)
[✓] IntelliJ IDEA Community Edition (version 2018.1.5)
[✓] VS Code (version 1.25.1)
[✓] Connected devices (1 available)
``` | framework,dependency: dart,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-framework,triaged-framework | low | Critical |
380,321,093 | rust | Rust incorrectly infers the generic type if used with failure::ResultExt and the try macro. | When trying to compile the following code, the compiler will incorrectly infer the type as `()`.
It shouldn't happen, since the returned type is `Result<U, _>` and `context` doesn't change the generic value type in `Result`.
```
extern crate serde;
extern crate failure;
extern crate reqwest;
use serde::{Deserialize, Serialize};
use failure::{Fallible, ResultExt, format_err};
fn post<T, U>(json: T) -> Fallible<U>
where
T: Serialize,
for<'a> U: Deserialize<'a>,
{
let client = reqwest::Client::new();
let url = "http://someurl.com";
let mut resp = client.post(url).json(&json).send()?;
let resp_content: Result<U, String> = resp.json().context("Bad JSON")?;
resp_content.map_err(|e| format_err!("Provider replied: {:?}", e))
}
fn run() -> Fallible<()> {
// works fine, the error is:
// error[E0282]: type annotations needed
//let ret = post("foo").expect("blah");
// This incorrectly infers the type `()` instead of requiring type annotations.
// The compilation fails in the `println` line
// error[E0277]: `()` doesn't implement `std::fmt::Display`
let ret = post("foo").context("blah")?;
// Here the typecheck succeeds
// let ret: String = post("foo").context("blah")?;
println!("{}", ret);
Ok(())
}
fn main() {
run().unwrap();
}
```
The dependencies in `Cargo.toml`:
```
[dependencies]
failure = "0.1"
serde = "1.0"
reqwest = "0.9"
```
This happens on both Rust 1.30.1 and 1.32.0-nightly (65204a97d 2018-11-12)
It is clearly a compiler bug: if both `()` and `String` are correct types, then the type inference should fail.
| A-trait-system,T-compiler,A-inference,C-bug | low | Critical |
380,322,262 | rust | Lifetime is somehow treated as invariant where it shouldn't be | Original code using [combine](https://crates.io/crates/combine) v3.6.2 (self-contained reproduce code can be found below):
```rust
use combine::parser::{Parser, range::take, char::string};
fn func<'a>(source: &'a str) {
take(1).or(string("a")).parse(source); // error
}
```
The code should compiles.
Instead, this happened:
```
error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements
--> src\main.rs:3:35
|
3 | take(1).or(string("a")).parse(source);
| ^^^^^^
|
note: first, the lifetime cannot outlive the lifetime 'a as defined on the function body at 2:9...
--> src\main.rs:2:9
|
2 | fn func<'a>(source: &'a str) {
| ^^
= note: ...so that the expression is assignable:
expected &str
found &'a str
= note: but, the lifetime must be valid for the static lifetime...
= note: ...so that the types are compatible:
expected &str
found &'static str
```
With NLL, error is stranger:
```
error[E0521]: borrowed data escapes outside of function
--> src\main.rs:4:6
|
3 | fn func<'a>(source: &'a str) {
| ------ `source` is a reference that is only valid in the function body
4 | take(1).or(string("a")).parse(source);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `source` escapes the function body here
```
However, slightly modified code
```rust
fn func<'a>(source: &'a str) {
take(1).or(string("a").map(|x| x)).parse(source); // ok
}
```
compiles fine.
I made a self-contained repro:
```rust
use std::marker::PhantomData;
pub struct ParserS<I, O>(PhantomData<fn(I) -> O>);
pub trait Parser: Sized {
type I;
type O;
fn or<P: Parser<I = Self::I, O = Self::O>>(self, _: P) -> ParserS<Self::I, Self::O> { ParserS(PhantomData) }
fn parse(self, _: Self::I) { }
}
impl<I, O> Parser for ParserS<I, O> {
type I = I;
type O = O;
}
fn take<'a>() -> ParserS<&'a str, &'a str> {
ParserS(PhantomData)
}
pub struct Str<I>(PhantomData<fn(I) -> &'static str>);
impl<I> Parser for Str<I> {
type I = I;
type O = &'static str;
}
pub fn string_bad<'a>() -> Str<&'a str> {
Str(PhantomData)
}
pub fn string_ok<'a>() -> ParserS<&'a str, &'static str> {
ParserS(PhantomData)
}
pub fn func<'a>(source: &'a str) {
// error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements
take().or(string_bad()).parse(source);
// however, this equivalent code compiles
take().or(string_ok()).parse(source);
}
```
## Meta
```
rustc 1.32.0-nightly (15d770400 2018-11-06)
binary: rustc
commit-hash: 15d770400eed9018f18bddf83dd65cb7789280a5
commit-date: 2018-11-06
host: x86_64-pc-windows-gnu
release: 1.32.0-nightly
LLVM version: 8.0
```
Also reproduced on playground's stable/beta/nightly. | A-lifetimes | low | Critical |
380,374,732 | vscode | Ability to apply a final sort to QuickPick results | I'd like to be able to pin certain items to the top of a QuickPick list, but the QuickPick's fuzzy matching functionality doesn't allow any custom sorting. Providing some kind of hook to apply a final sort would solve this. This hook would receive the fuzzy-matched order and could return a new array for the final order. | feature-request,quick-pick | low | Minor |
380,395,464 | pytorch | Pytorch very slow to convert list of numpy arrays into tensors | ## 🐛 Bug
I compared the execution time of two codes.
Code 1:
```
import torch
import numpy as np
a = [np.random.randint(0, 10, size=(7, 7, 3)) for _ in range(100000)]
b = torch.tensor(np.array(a))
```
And code 2:
```
import torch
import numpy as np
a = [np.random.randint(0, 10, size=(7, 7, 3)) for _ in range(100000)]
b = torch.tensor(a)
```
Code 1 takes less than 1 second to execute (measured with `time`):
```
real 0m0,915s
user 0m0,808s
sys 0m0,330s
```
Code 2, by contrast, takes about 6 seconds:
```
real 0m6,057s
user 0m5,979s
sys 0m0,308s
```
## Expected behavior
I would expect code 2 to be as fast as code 1.
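The fast path in code 1 generalizes: consolidate the Python list into one contiguous ndarray before calling `torch.tensor`, so the conversion is a single bulk copy instead of 100000 per-element conversions. A torch-free sketch of just the consolidation step (sizes shrunk for brevity):

```python
import numpy as np

# Same shape of data as in the report, but consolidated up front.
a = [np.random.randint(0, 10, size=(7, 7, 3)) for _ in range(1000)]
stacked = np.stack(a)  # one C-contiguous block, shape (1000, 7, 7, 3)

assert stacked.shape == (1000, 7, 7, 3)
assert stacked.flags["C_CONTIGUOUS"]

# torch.tensor(stacked) would now copy this buffer in one shot.
```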
## Environment
 - PyTorch 0.4.1
 - OS: Linux
 - Installed with `conda`
 - Python version: 3.6
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @mruberry @rgommers @heitorschueroff @VitalyFedyunin @ngimel | high priority,module: performance,triaged,enhancement,module: numpy,has workaround | medium | Critical |
380,415,331 | go | doc: freeze "Effective Go" | There have been a number of suggestions to change the code inside Effective Go lately. For instance, the comments in https://github.com/golang/go/issues/28773 suggest adding an example that uses strings.Builder. But in fact there are almost no mentions of library functions in the document, so starting with strings.Builder doesn't make much sense.
I may be totally alone here, but I think of the current document called "Effective Go" (EG) as a bit of a time capsule, a published book if you will.
I think it should be pretty much left alone. The continuing churn of advice and recommendations and cargo culting should not be developed there. I think even though it is old it does a fine job of saying how to write _effective_ Go code. How to write modern, trendy, stylistic, library-aware code is different question.
I think there should be a new document that talks about the libraries (almost totally absent from EG), style (nearly ditto), "best practices" such as the thing about returning concrete types, not interfaces (which requires so many caveats I fail to understand why it's a rule), etc. I fear that if you try to incorporate that into EG several things will happen: It will become much longer; it will be changing constantly; many people will add to it; and it will lose the writing style it has, which is almost entirely mine, and therefore in my voice.
Let's freeze it and start something new and more dynamic and more current.
| Documentation,NeedsDecision | high | Critical |
380,418,556 | TypeScript | A way to expand mapped types | I want to be able to expand mapped type in intellisense
Currently, mapped types is displayed directly with the mapper and original type, which is unhelpful as hell

With the ts 2.8 condition type,
you can write more powerful declaration to map almost everything to correct type for you,
but the ide simply show it is mapped,
you don't know what is it actually mapped to.
for example, in the above case, I have a mapper looks like
```ts
type Mapper<T> = {
[K in keyof T]:
T[K] extends {type: SQL_ENUM<infer U>}? U:
T[K] extends {type: SQL_ENUM<infer U>, allowNull: true}? U | undefined:
T[K] extends {type: typeof Sequelize.DATE, allowNull: true}? Date | undefined:
T[K] extends {type: typeof Sequelize.DATE}? Date:
T[K] extends {type: typeof Sequelize.INTEGER, allowNull: true}? number | undefined:
T[K] extends {type: typeof Sequelize.INTEGER}? number:
// stop here, fon't let things goes too wrong
T[K] extends {type: typeof Sequelize.ENUM}? never:
T[K] extends {type: typeof Sequelize.STRING, allowNull: true}? string | undefined:
T[K] extends {type: typeof Sequelize.STRING}? string:
T[K] extends {type: typeof Sequelize.TEXT, allowNull: true}? string | undefined:
T[K] extends {type: typeof Sequelize.TEXT}? string:
T[K] extends {type: typeof Sequelize.BOOLEAN, allowNull: true}? boolean | undefined:
T[K] extends {type: typeof Sequelize.BOOLEAN}? boolean:
any
}
```
that will transform the decalration to a simple
```ts
interface session {
token: string,
userId: string,
ip: string
}
```
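Until the IDE supports this, a common community workaround (a trick, not an official feature) is an identity mapped type that forces eager resolution, so hovering the resulting alias shows the computed members; a small self-contained illustration with a simplified mapper:

```typescript
// Identity mapped type that forces TypeScript to resolve the members.
type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;

// Simplified stand-in for the Sequelize-style mapper above.
type Mapper<T> = {
  [K in keyof T]: T[K] extends { type: "int" } ? number : string;
};

type Raw = Mapper<{ token: { type: "str" }; userId: { type: "int" } }>;
type Expanded = Expand<Raw>; // hovers as { token: string; userId: number }

const session: Expanded = { token: "abc", userId: 1 };
console.log(session.userId);
```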
But the IDE won't tell you anything, which is quite annoying. | Suggestion,In Discussion | medium | Critical |
380,421,213 | go | runtime: types are not garbage collected | go version devel +644ddaa842 Wed Nov 7 16:12:02 2018 +0000 linux/amd64
Newly created types are not garbage collected, so code which creates types on the fly in response to runtime data can leak memory.
This code prints a large number when it should print zero: https://play.golang.org/p/R6N6IJSzYTD
| Performance,NeedsInvestigation,compiler/runtime | low | Major |
380,425,439 | vscode | Feature Request: Add favorites | Can you add a list of user-set favorite projects to the Welcome page? For example, I am working on 3 projects and the recent-projects list is not helping. I would like a panel where I can set my favorites and open a folder with a single click. | feature-request,getting-started | medium | Major |
380,479,684 | go | net/http: ServeMux excess locking | In #25383, a lock contention regression was brought up in `ServeMux`. I believe this lock contention is still present in the mux, and could be removed. Making ServeMux have an atomically loaded handler map can help reduce latency of serving at the cost of more expensive mutations.
A sample CL that fixes this regression is https://go-review.googlesource.com/c/go/+/149377 and shows promising speedups on the benchmarks:
```
$ benchstat /tmp/old.txt /tmp/new.txt
name old time/op new time/op delta
ServeMux-12 76.3µs ± 3% 64.0µs ± 7% -16.14% (p=0.000 n=30+30)
ServeMux_SkipServe-12 28.4µs ± 2% 19.8µs ± 3% -30.49% (p=0.000 n=28+28)
name old alloc/op new alloc/op delta
ServeMux-12 17.3kB ± 0% 17.3kB ± 0% ~ (all equal)
ServeMux_SkipServe-12 0.00B 0.00B ~ (all equal)
name old allocs/op new allocs/op delta
ServeMux-12 360 ± 0% 360 ± 0% ~ (all equal)
ServeMux_SkipServe-12 0.00 0.00 ~ (all equal)
```
However, this change depends on using CompareAndSwap, which is only available for `unsafe.Pointer` values. The reviewer of the CL expressed that using `unsafe` is unacceptable, though I am not really sure why. I have previously proposed that `atomic.Value` should gain these methods in #26728, but the proposal was also declined.
### Options
Thus, there are a few ways forward that I can think of, but need a decision from the Go team:
1. Accept the use of `unsafe` in `ServeMux`, and gain the extra efficiency.
2. Accept that there is a valid use case for `atomic.Value.CompareAndSwap()`, and prioritize it being added as a public (or private) API, and use it instead.
3. Come up with some other solution to improve the serving speed of ServeMux, that is neither of the above two.
4. Disagree that performance of the ServeMux matters or is worth optimizing, close the CL, and pay the extra 10us per request.
I feel rather helpless with regards to options 1 and 2. I have suggested both, but have received differing reasons for why they aren't acceptable from different Go team members. I'd rather the Go team just pick one of the 4 options above, because the requirements for changing Go are opaque from my POV.
cc: @bradfitz @rsc
| Performance,NeedsInvestigation | low | Major |
380,483,059 | TypeScript | Go to definition on @constructor type fails | **TypeScript Version:** 3.2.0-dev.20181110
**Code**
```js
/** @constructor */
function f() {
this.x = 0;
}
/** @type {f} */
let x = new f();
```
**Expected behavior:**
Go-to-definition at `f` in `@type {f}` works.
**Actual behavior:**
No definitions returned. | Bug,Domain: JSDoc,Domain: Symbol Navigation | low | Minor |
380,489,223 | TypeScript | Symbol display of namespace-function merged symbol at value use should just mention function | **TypeScript Version:** 3.2.0-dev.20181113
**Code**
See `referencesForMergedDeclarations.ts`
**Expected behavior:**
At the value use of `Foo` in `Foo.bind`, just the function is shown in symbol display.
**Actual behavior:**
It shows `"namespace Foo\nfunction Foo(): void"`. | Bug,Domain: Quick Info | low | Minor |
380,510,818 | TypeScript | Export an object with a comment, but the comment is missing after building a *.d.ts file | I export an object, then build a declaration file such as `index.d.ts`, but the emitted file is missing the comment on that object's property.
### Source code
```javascript
//src/cache/index.ts
import * as localStorage from "./localStorage"
export default {
/**
* This is a comment2
*/
localStorage
}
//src/index.ts
import cache from "./cache/index"
export default {
/**
* This is a comment1
*/
cache
}
```
### Index.d.ts
```javascript
declare const _default: {
/**
* This is a comment1
*/
cache: {
localStorage: typeof import("./cache/localStorage");//#### This field's comment is missing ####
};
};
export default _default;
``` | Suggestion,Revisit,Domain: Comment Emit | low | Minor |
380,527,161 | flutter | Updating Google Maps options needs a way to fail | Right now we try to update the options and always complete the method channel invocation with a success.
We should refactor this a bit and relay update errors back to the Dart side. | c: new feature,p: maps,package,team-ecosystem,P2,triaged-ecosystem | low | Critical |
380,572,462 | TypeScript | An index signature parameter type with types extends string or number. | Because of the error `An index signature parameter type must be 'string' or 'number'. (1023)`, I can't make this code work:
```typescript
export class Events<ED extends EventsDef, EventName extends keyof ED = keyof ED> {
private eventMap = new Map<EventName, Event<ED[EventName]>>();
constructor() {
new Proxy(this, {
get: (target, p) => {
if (p in target) {
return target[p];
} else {
this.getEvent(p as EventName);
}
}
})
}
private getEvent(eventName: EventName) {
return this.eventMap.get(eventName) || this.generateEvent(eventName);
}
private generateEvent(eventName: EventName) {
let event = new Event<ED[EventName]>();
this.eventMap.set(eventName, event);
return event;
}
// error here
readonly [eventName: EventName]: Event<ED[EventName]>;
}
```
| Suggestion,In Discussion | low | Critical |
380,599,865 | kubernetes | Suspicious breakdown in pod startup time in scalability tests | I've already observed a number cases in scalability tests, where for pod startup-time, the breakdown looks suspicious.
As an example, in this run:
https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/255/build-log.txt
from looking into 10% worst times, we see that:
```
I1113 02:27:31.355] Nov 13 02:27:31.301: INFO: 10% worst schedule-to-watch latencies: ... {density-latency-pod-438-8thtq gce-scale-cluster-minion-group-2-4km0 4.881956298s}]
...
I1113 02:27:31.376] Nov 13 02:27:31.324: INFO: 10% worst e2e latencies: ... {density-latency-pod-438-8thtq gce-scale-cluster-minion-group-2-4km0 4.881956298s}]
```
The time for "schedule-to-watch" and "e2e latencies" for different pods are exactly the same (which suggests something is wrong).
The second thing that is a bit suspicious in some cases is that the "watch part" is relatively long (time from when kubelet reports pod status as running to when the test really observes that, even though watch latencies e.g. for scheduler or controller-manager are low). This may or may not be related to some starvation at the test level.
@kubernetes/sig-scalability-bugs @mborsz | kind/bug,sig/scalability,lifecycle/frozen | low | Critical |
380,705,204 | flutter | Programmatic sign in using a specific email id | Hi,
I have a requirement that I need to run background processing that has to connect to Google Drive and Google Sheets for a set of gmail IDs. I didn't find any place where I could force a Google Signin using a specific gmail ID. In my native Android app, I could do this easily using the snippet below but this does not seem possible in Flutter using this package. Do you have any recommendations? Silent sign in also uses the last logged in gmail id. My requirement is to sign in for a set of gmail ids registered in my app.
```java
// Initialize credentials and service object.
GoogleAccountCredential credential = GoogleAccountCredential.usingOAuth2(
this, Arrays.asList(Const.SCOPES))
.setBackOff(new ExponentialBackOff());
credential.setSelectedAccount(new Account(gmailid,"com.test.myapp"));
``` | c: new feature,p: google_sign_in,package,team-ecosystem,P3,triaged-ecosystem | low | Minor |
380,711,970 | godot | Reimporting meshes clears MeshInstance materials |
**Godot version:** 89a76f21edcdd41b2e032c69fab6cc8211aecd76
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Windows 10
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
When importing a .obj mesh and adding it to a MeshInstance, the MeshInstance's `material/*` properties get reset upon reimporting the mesh. Additionally, the editor does not properly update to reflect this, and still displays the old material.
<!-- What happened, and what was expected. -->
**Steps to reproduce:**
1. Open [reflectionbug.zip](https://github.com/godotengine/godot/files/2580958/reflectionbug.zip)
2. See a purple teapot
3. in the "FileSystem" Tab, navigate to res://teapot/teapot2.obj
4. in the "Import" Tab, click reimport
5. The teapot becomes white with a checkerboard texture
6. Select "MeshInstance" (which is the teapot), and set its `material/0` and `material/1` properties to a new SpatialMaterial in the Inspector
7. reimport teapot2.obj again
8. Now, the teapot does **not** become white-with-checkerboard, but stays like after step 6.
9. Check MeshInstance in the Inspector: the materials set in 6. still appear as set.
10. Try to open such a material by clicking on the preview-sphere. The material editor does **not** open, instead you get a popup asking whether to create a new ShaderMaterial or a new SpatialMaterial. Do not do any of this.
11. run the scene: the teapot (which appears white-without-checkerboard, like after 6.) in the editor, looks "white-with-checkerboard" (like after 5., which is just the default material of the obj file) in the running game's camera.
**Expected behaviour:**
After 5., I'd expect the MeshInstance's `material/{0,1}` setting to remain unchanged (and also the appearance of the teapot). Isn't that the MeshInstance's property, and not the Mesh's property?
After 8., I suspect that the materials have been reset again, but the editor wasn't updated accordingly and thus behaves glitchy in 9. and 10. I'd expect it to correctly update the view and the inspector (or to not clear the materials, preferably)
After 11., I'd expect the actual game to behave like shown in the editor.
**Minimal reproduction project:** see above
| bug,confirmed,topic:import,topic:3d | low | Critical |
380,716,805 | vscode | editor.insertSnippet() messes with indenting of the SnippetString | This is a very similar issue to #44200/#57093 but that fix appears to be only being applied to completions, but applying snippets causes the same issues.
Here's a repro:
```ts
vs.workspace.openTextDocument(vs.Uri.parse(`untitled:${os.tmpdir}/a.dart`)).then(async (document) => {
const originalCode = `main() {
// code indented with two spaces
}`;
const editor = await vs.window.showTextDocument(document);
await editor.edit((eb) => eb.insert(new vs.Position(0, 0), originalCode));
const startOffset = originalCode.indexOf("// code indented with two spaces");
const endOffset = originalCode.indexOf("\n}");
const snippetString = new vs.SnippetString("// new code indented with two spaces\n // newer code indented with two spaces");
await editor.insertSnippet(snippetString, new vs.Range(document.positionAt(startOffset), document.positionAt(endOffset)));
});
```
This creates a doc and inserts into it the code:
```dart
main() {
// code indented with two spaces
}
```
It then applies a snippet string that includes an additional line that is also indented by two spaces. However what you end up with is:
```dart
main() {
// new code indented with two spaces
// newer code indented with two spaces
}
```
The inserted line has been re-indented.
Whilst this behaviour may be useful if applying basic hard-coded strings, it's incorrect for language servers that are correctly calculating edits to leave the file indented/formatted correctly (I guess for the same reasons that led to #57093). | help wanted,feature-request,api,snippets | medium | Major |
380,766,555 | nvm | install script does not detect if git is installed or not | - Operating system and version:
Ubuntu bionic
- `nvm debug` output:
n/a
- `nvm ls` output:
n/a
- How did you install `nvm`? (e.g. install script in readme, Homebrew):
via the install script, but git **was not** installed
- What steps did you perform?
as mentioned in the doc, but git **was not** installed
- What happened?
```bash
$ wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
=> Downloading nvm as script to '/home/user/.nvm'
=> Appending nvm source string to /home/user/.bashrc
=> Appending bash_completion source string to /home/user/.bashrc
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
```
- What did you expect to happen?
```bash
$ wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
=> Downloading nvm from git to '/home/user/.nvm'
=> Initialized empty Git repository in /home/user/.nvm/.git/
remote: Enumerating objects: 267, done.
remote: Counting objects: 100% (267/267), done.
remote: Compressing objects: 100% (242/242), done.
remote: Total 267 (delta 31), reused 86 (delta 15), pack-reused 0
Receiving objects: 100% (267/267), 119.47 KiB | 643.00 KiB/s, done.
Resolving deltas: 100% (31/31), done.
From https://github.com/creationix/nvm
* [new tag]         v0.33.11   -> v0.33.11
=> Compressing and cleaning up git repository
=> nvm source string already in /home/user/.bashrc
=> bash_completion source string already in /home/user/.bashrc
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
```
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
no | needs followup,installing nvm | low | Critical |
380,799,586 | go | cmd/link: wrong c-archive architecture using GNU binutils ar on macOS Mojave 10.14.1 | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/kyle/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/kyle/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/vw/ht92mdqd1f54_c6ys42ltc5c0000gn/T/go-build035928823=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Following the instructions at https://golang.org/doc/install/source, with the latest 1.11.2 darwin/amd64 Go binary distribution as the bootstrap toolchain:
```
git clone https://go.googlesource.com/go gosrc
cd gosrc
git checkout go1.11.2
cd src
./all.bash
```
### What did you expect to see?
Go project should build successfully.
### What did you see instead?
Error log: https://gist.github.com/DemonWav/8caf4fbdf9642133f53502614a231a24
Issues occur with `##### ../misc/cgo/testcarchive`. I believe the issue is with this line:
`ld: warning: ignoring file pkg/darwin_amd64/libgo.a, file was built for archive which is not the architecture being linked (x86_64): pkg/darwin_amd64/libgo.a`
Since the architecture doesn't match, the archive isn't linked, and the linker reports missing symbols.
If I instead ask for the `386` architecture (`GOARCH=386 ./all.bash`), it builds successfully: https://gist.github.com/DemonWav/533894d4426e6a9dc7cc84cc613ee66d
The host is still recognized as `amd64` in the build log, though.
```
Building packages and commands for host, darwin/amd64.
Building packages and commands for target, darwin/386
```
It appears `libgo` is always compiled for the `i386` architecture, even when both host and target are set to `amd64`; `GOARCH=amd64 GOHOSTARCH=amd64 ./all.bash` gives the same result.
I'm not sure if there's something wrong with my environment or what, but I've asked on Slack and IRC and no one seems to know. I'm sorry if this isn't an issue. | help wanted,NeedsInvestigation | low | Critical |
380,828,488 | TypeScript | Type inference does not work for enums and calculated props | **TypeScript Version:** 3.1.6
**Search Terms:** Calculated props, Spread operator, Enum keyed types
**Code**
```ts
enum Test {
param = 'X',
other = 'Y',
last = 'Z',
};
type TSomeOtherType = {
readonly someProp: number;
readonly orOther: string;
};
type TKeyedByEnum = {
[key in Test]: TSomeOtherType | null;
};
// const throwsCompilerError: TKeyedByEnum = {
// [Test.param]: 123,
// };
Object.keys(Test).reduce(
(acc, key: Test): TKeyedByEnum => {
const doesntThrow: TKeyedByEnum = {
...acc,
[key]: 123,
};
return doesntThrow;
},
{} as TKeyedByEnum
);
```
**Expected behavior:**
The TypeScript compiler throws an error on the object `doesntThrow`.
**Actual behavior:**
There is no TypeScript error on the object `doesntThrow`; the resulting object of type `TKeyedByEnum` contains values of type `number`.
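The inference hole is specific to the computed-key spread. One workaround (a sketch, not a compiler fix; `withKey` is an invented helper name) is to route the assignment through a typed parameter, so the value is still checked even though the spread itself is not — passing `123` as `value` below is rejected:

```typescript
enum Test { param = 'X', other = 'Y', last = 'Z' }

type TSomeOtherType = {
  readonly someProp: number;
  readonly orOther: string;
};
type TKeyedByEnum = { [key in Test]: TSomeOtherType | null };

// The `value` parameter is what restores checking: `withKey(acc, key, 123)`
// is a compile error, while the inline computed-key spread silently widens.
function withKey(acc: TKeyedByEnum, key: Test, value: TSomeOtherType | null): TKeyedByEnum {
  return { ...acc, [key]: value };
}

const start: TKeyedByEnum = {
  [Test.param]: null,
  [Test.other]: null,
  [Test.last]: null,
};
const next = withKey(start, Test.param, { someProp: 1, orOther: 'a' });
```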
**Playground Link:**
http://www.typescriptlang.org/play/#src=enum%20Test%20%7B%0D%0A%20%20param%20%3D%20'X'%2C%0D%0A%20%20other%20%3D%20'Y'%2C%0D%0A%20%20last%20%3D%20'Z'%2C%0D%0A%7D%3B%0D%0A%0D%0Atype%20TSomeOtherType%20%3D%20%7B%0D%0A%20%20readonly%20someProp%3A%20number%3B%0D%0A%20%20readonly%20orOther%3A%20string%3B%0D%0A%7D%3B%0D%0A%0D%0Atype%20TKeyedByEnum%20%3D%20%7B%0D%0A%20%20%5Bkey%20in%20Test%5D%3A%20TSomeOtherType%20%7C%20null%3B%0D%0A%7D%3B%0D%0A%0D%0Aconst%20throwsCompilerError%3A%20TKeyedByEnum%20%3D%20%7B%0D%0A%20%20%5BTest.param%5D%3A%20123%2C%0D%0A%7D%3B%0D%0A%0D%0Aconst%20t%20%3D%20Object.keys(Test).reduce(%0D%0A%20%20%20%20(acc%2C%20key%3A%20Test)%3A%20TKeyedByEnum%20%3D%3E%20%7B%0D%0A%20%20%20%20%20%20const%20doesntThrowCompilerError%3A%20TKeyedByEnum%20%3D%20%7B%0D%0A%20%20%20%20%20%20%20%20...acc%2C%0D%0A%20%20%20%20%20%20%20%20%5Bkey%5D%3A%20123%2C%0D%0A%20%20%20%20%20%20%7D%3B%0D%0A%20%20%20%20%20%20return%20doesntThrowCompilerError%3B%0D%0A%20%20%20%20%7D%2C%0D%0A%20%20%20%20%7B%7D%20as%20TKeyedByEnum%0D%0A)%3B%0D%0A%0D%0Aconsole.log(t)%3B
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/27704
| Bug | low | Critical |
380,876,722 | rust | rustdoc doesn't honor `#![doc(html_no_source)]` across crates | There's an attribute you can apply to your crate, `#![doc(html_no_source)]`, that tells rustdoc to not generate `[src]` links or their corresponding files for that crate. However, if a *dependency* of this crate doesn't add this attribute, and then re-exports something from that crate, it will happily generate `[src]` links... to nonexistent pages. We should probably track this flag on a per-crate basis for all dependencies, rather than only looking for it in the active crate. | T-rustdoc,C-bug | low | Minor |
380,912,787 | pytorch | [tracking task] FBGEMM guarding AVX2 properly | ## 🐛 Bug
Since commit 18de330e8639249d4e8c95b2f357983890458d99, `USE_FBGEMM` is set to 1 in the build system if the compiler supports AVX2 or higher, and FBGEMM is built as part of PyTorch.
This causes an issue on systems whose compiler supports AVX2 or higher but whose CPU does not: the build generates AVX2 (or higher) instructions that the CPU cannot execute, hence the `Illegal instruction (core dumped)` error.
I can work around the issue by setting the environment variable `NO_FBGEMM=1`.
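Beyond the build-time guard, the flag also has to be checked on the machine that eventually runs the binary. A minimal runtime sketch (Linux-only, parsing `/proc/cpuinfo`; the helper names are invented for illustration):

```python
def cpu_has_avx2(cpuinfo_text: str) -> bool:
    """Return True if a 'flags' line in /proc/cpuinfo output lists avx2."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "flags":
            return "avx2" in value.split()
    return False

def safe_to_import_avx2_build() -> bool:
    # On the i7 920 in this report the flags line has sse4_2 but no avx2,
    # so this returns False and importing an AVX2 build would hit SIGILL.
    with open("/proc/cpuinfo") as f:
        return cpu_has_avx2(f.read())
```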
## To Reproduce
Steps to reproduce the behavior:
1. `git clone --recursive https://github.com/pytorch/pytorch.git`
1. `cd pytorch`
1. `python setup.py build develop`
1. `python -c "import torch"`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```
# python -c "import torch"
Illegal instruction (core dumped)
# gdb python core
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...done.
warning: core file may not match specified executable file.
[New LWP 8034]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `python -c import torch'.
Program terminated with signal SIGILL, Illegal instruction.
#0 0x00007f497b050e3c in std::__detail::_Prime_rehash_policy::_Prime_rehash_policy (this=0x557819ce2570, __z=3.06604104e-41) at /usr/include/c++/5/bits/hashtable_policy.h:460
460 _Prime_rehash_policy(float __z = 1.0) noexcept
(gdb) bt
#0 0x00007f497b050e3c in std::__detail::_Prime_rehash_policy::_Prime_rehash_policy (this=0x557819ce2570, __z=3.06604104e-41) at /usr/include/c++/5/bits/hashtable_policy.h:460
#1 0x00007f497c3dd727 in std::_Hashtable<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>
> const, std::function<std::unique_ptr<caffe2::OperatorBase, std::default_delete<caffe2::OperatorBase> > (caffe2::OperatorDef const&, caffe2::Workspace*)> >, std::allocator<std::pair<std::__cxx11::basic_string<c
har, std::char_traits<char>, std::allocator<char> > const, std::function<std::unique_ptr<caffe2::OperatorBase, std::default_delete<caffe2::OperatorBase> > (caffe2::OperatorDef const&, caffe2::Workspace*)> > >, s
td::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>
> >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_Hashtable() (this=0x557819ce2550)
at /usr/include/c++/5/bits/hashtable.h:394
#2 0x00007f497c3dd74e in std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::function<std::unique_ptr<caffe2::OperatorBase, std::default_delete<caffe2::Opera
torBase> > (caffe2::OperatorDef const&, caffe2::Workspace*)>, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char
_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::function<std::unique_ptr<caffe2::OperatorBase, std::d
efault_delete<caffe2::OperatorBase> > (caffe2::OperatorDef const&, caffe2::Workspace*)> > > >::unordered_map() (this=0x557819ce2550) at /usr/include/c++/5/bits/unordered_map.h:132
#3 0x00007f497c3dd784 in c10::Registry<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::unique_ptr<caffe2::OperatorBase, std::default_delete<caffe2::OperatorBase> >, caffe2:
:OperatorDef const&, caffe2::Workspace*>::Registry (this=0x557819ce2550) at ../c10/util/Registry.h:58
#4 0x00007f497c3d6a3e in caffe2::CPUOperatorRegistry[abi:cxx11]() () at ../caffe2/core/operator.cc:320
#5 0x00007f497b03b4e2 in __static_initialization_and_destruction_0 (__initialize_p=1, __priority=65535) at ../caffe2/quantization/server/conv_dnnlowp_op.cc:1502
#6 0x00007f497b03bd53 in _GLOBAL__sub_I_conv_dnnlowp_op.cc(void) () at ../caffe2/quantization/server/conv_dnnlowp_op.cc:1526
#7 0x00007f49914306ba in call_init (l=<optimized out>, argc=argc@entry=3, argv=argv@entry=0x7ffcd85f16a8, env=env@entry=0x5578195a3980) at dl-init.c:72
#8 0x00007f49914307cb in call_init (env=0x5578195a3980, argv=0x7ffcd85f16a8, argc=3, l=<optimized out>) at dl-init.c:30
#9 _dl_init (main_map=main_map@entry=0x557819968e70, argc=3, argv=0x7ffcd85f16a8, env=0x5578195a3980) at dl-init.c:120
#10 0x00007f49914358e2 in dl_open_worker (a=a@entry=0x7ffcd85eefd0) at dl-open.c:575
#11 0x00007f4991430564 in _dl_catch_error (objname=objname@entry=0x7ffcd85eefc0, errstring=errstring@entry=0x7ffcd85eefc8, mallocedp=mallocedp@entry=0x7ffcd85eefbf,
operate=operate@entry=0x7f49914354d0 <dl_open_worker>, args=args@entry=0x7ffcd85eefd0) at dl-error.c:187
#12 0x00007f4991434da9 in _dl_open (file=0x7f498e2ea050 "/workspace/dl/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so", mode=-2147483391,
caller_dlopen=0x5578178a098a <_PyImport_FindSharedFuncptr+138>, nsid=-2, argc=<optimized out>, argv=<optimized out>, env=0x5578195a3980) at dl-open.c:660
#13 0x00007f4990c35f09 in dlopen_doit (a=a@entry=0x7ffcd85ef200) at dlopen.c:66
#14 0x00007f4991430564 in _dl_catch_error (objname=0x5578195a68c0, errstring=0x5578195a68c8, mallocedp=0x5578195a68b8, operate=0x7f4990c35eb0 <dlopen_doit>, args=0x7ffcd85ef200) at dl-error.c:187
#15 0x00007f4990c36571 in _dlerror_run (operate=operate@entry=0x7f4990c35eb0 <dlopen_doit>, args=args@entry=0x7ffcd85ef200) at dlerror.c:163
#16 0x00007f4990c35fa1 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#17 0x00005578178a098a in _PyImport_FindSharedFuncptr () at /tmp/build/80754af9/python_1540319457073/work/Python/dynload_shlib.c:95
#18 0x00005578178cbfa0 in _PyImport_LoadDynamicModuleWithSpec () at /tmp/build/80754af9/python_1540319457073/work/Python/importdl.c:129
#19 0x00005578178cc1e5 in _imp_create_dynamic_impl.isra.12 (file=0x0, spec=0x7f498e2e8198) at /tmp/build/80754af9/python_1540319457073/work/Python/import.c:1994
#20 _imp_create_dynamic () at /tmp/build/80754af9/python_1540319457073/work/Python/clinic/import.c.h:289
#21 0x00005578177c83a1 in PyCFunction_Call () at /tmp/build/80754af9/python_1540319457073/work/Objects/methodobject.c:114
#22 0x00005578178768ff in do_call_core (kwdict=0x7f498e2f4630, callargs=0x7f49901e86d8, func=0x7f4990514f30) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:5102
#23 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3404
#24 0x0000557817848124 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4166
#25 0x0000557817848fc1 in fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4978
#26 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#27 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#28 0x0000557817848d8b in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#29 fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4954
#30 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#31 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#32 0x0000557817848d8b in _PyFunction_FastCall (globals=<optimized out>, nargs=1, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#33 fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4954
#34 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#35 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#36 0x0000557817848d8b in _PyFunction_FastCall (globals=<optimized out>, nargs=1, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#37 fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4954
#38 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#39 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#40 0x0000557817848d8b in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#41 fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4954
#42 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#43 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#44 0x00005578178493fb in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#45 _PyFunction_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:5021
#46 0x00005578177c579f in _PyObject_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2310
#47 0x00005578178081e0 in _PyObject_CallMethodIdObjArgs () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2796
#48 0x00005578177bc2a0 in PyImport_ImportModuleLevelObject () at /tmp/build/80754af9/python_1540319457073/work/Python/import.c:1578
#49 0x0000557817873eb6 in import_name (level=0x5578179f3a00 <small_ints+160>, fromlist=0x7f499041afd0, name=0x7f499030f2b0, f=0x5578195711d8) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:5231
#50 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:2899
#51 0x0000557817849ad9 in _PyEval_EvalCodeWithName (qualname=0x0, name=0x0, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=<optimized out>, kwargs=0x0, kwnames=0x0, argcount=0, args=0x0,
locals=0x7f49904985e8, globals=0x7f49904985e8, _co=0x7f4990379ed0) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4166
#52 PyEval_EvalCodeEx () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4187
#53 0x000055781784a87c in PyEval_EvalCode (co=co@entry=0x7f4990379ed0, globals=globals@entry=0x7f49904985e8, locals=locals@entry=0x7f49904985e8)
at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:731
#54 0x000055781786efed in builtin_exec_impl.isra.11 (locals=0x7f49904985e8, globals=0x7f49904985e8, source=0x7f4990379ed0) at /tmp/build/80754af9/python_1540319457073/work/Python/bltinmodule.c:983
#55 builtin_exec () at /tmp/build/80754af9/python_1540319457073/work/Python/clinic/bltinmodule.c.h:283
#56 0x00005578177c83a1 in PyCFunction_Call () at /tmp/build/80754af9/python_1540319457073/work/Objects/methodobject.c:114
#57 0x00005578178768ff in do_call_core (kwdict=0x7f4990305d38, callargs=0x7f499030f848, func=0x7f499148d9d8) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:5102
#58 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3404
#59 0x0000557817848124 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4166
#60 0x0000557817848fc1 in fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4978
#61 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#62 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#63 0x0000557817848d8b in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#64 fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4954
#65 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#66 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#67 0x0000557817848d8b in _PyFunction_FastCall (globals=<optimized out>, nargs=1, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#68 fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4954
#69 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#70 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#71 0x0000557817848d8b in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#72 fast_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4954
#73 0x000055781784ecf5 in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4858
#74 0x000055781787171a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#75 0x00005578178493fb in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4919
#76 _PyFunction_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:5021
#77 0x00005578177c579f in _PyObject_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2310
#78 0x00005578178081e0 in _PyObject_CallMethodIdObjArgs () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2796
#79 0x00005578177bc2a0 in PyImport_ImportModuleLevelObject () at /tmp/build/80754af9/python_1540319457073/work/Python/import.c:1578
#80 0x0000557817873eb6 in import_name (level=0x5578179f3a00 <small_ints+160>, fromlist=0x5578179aca90 <_Py_NoneStruct>, name=0x7f4990493a78, f=0x55781958cc68)
at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:5231
#81 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:2899
#82 0x0000557817849ad9 in _PyEval_EvalCodeWithName (qualname=0x0, name=0x0, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=<optimized out>, kwargs=0x0, kwnames=0x0, argcount=0, args=0x0,
locals=0x7f49904d8120, globals=0x7f49904d8120, _co=0x7f49904949c0) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4166
#83 PyEval_EvalCodeEx () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4187
#84 0x000055781784a87c in PyEval_EvalCode (co=co@entry=0x7f49904949c0, globals=globals@entry=0x7f49904d8120, locals=locals@entry=0x7f49904d8120)
at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:731
#85 0x00005578178cb074 in run_mod () at /tmp/build/80754af9/python_1540319457073/work/Python/pythonrun.c:1025
#86 0x00005578178cb10d in PyRun_StringFlags () at /tmp/build/80754af9/python_1540319457073/work/Python/pythonrun.c:949
#87 0x00005578178cb16f in PyRun_SimpleStringFlags () at /tmp/build/80754af9/python_1540319457073/work/Python/pythonrun.c:445
#88 0x00005578178cef7a in run_command (cf=0x7ffcd85f149c, command=0x557819537f40 L"import torch\n") at /tmp/build/80754af9/python_1540319457073/work/Modules/main.c:301
#89 Py_Main () at /tmp/build/80754af9/python_1540319457073/work/Modules/main.c:749
#90 0x0000557817796b4e in main () at /tmp/build/80754af9/python_1540319457073/work/Programs/python.c:69
#91 0x00007f4990e59830 in __libc_start_main (main=0x557817796a60 <main>, argc=3, argv=0x7ffcd85f16a8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffcd85f1698)
at ../csu/libc-start.c:291
#92 0x00005578178781a8 in _start () at ../sysdeps/x86_64/elf/start.S:103
```
## Expected behavior
`python -c "import torch"` is expected to succeed without any error
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): github master branch at 8752214fb7534d3d3f83c1a459f24c57db86cd10
- OS (e.g., Linux): Ubuntu 16.04.5 LTS
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): `python setup.py build develop`
- Python version: 3.6.7
- CUDA/cuDNN version: 10.0/7.4.1
- GPU models and configuration: Quadro GV100 32GB
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
```
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 4
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 5
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 6
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 5
initial apicid : 5
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
stepping : 5
microcode : 0x1d
cpu MHz : 2668.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 7
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 5345.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
cc @malfet @seemethere @walterddr | module: build,triaged,module: third_party | medium | Critical |
380,932,367 | TypeScript | identifierCount may not be correct | **TypeScript Version:** 3.2.0-dev.20181114
In `parser.ts`, there is an `identifierCount` variable that is incremented every time `createIdentifier` is called.
Unfortunately, not all `createIdentifier` calls will result in an identifier in the AST, since some may have happened during speculative parsing. It may be necessary to reset `identifierCount` after speculative parsing fails. | Infrastructure | low | Minor |
380,939,814 | opencv | pass ENVIRONMENT to Emscripten | See: https://github.com/kripken/emscripten/pull/6565
As it is right now, the build doesn't work out of the box with bundlers like webpack. I'd imagine that the "worker" target can come in handy too. | feature,priority: low,category: build/install,category: javascript (js) | low | Minor |
380,947,889 | node | Run tests with OPENSSL_CONF set to embedded openssl.cnf | * **Version**: <= 10.13.0
* **Platform**: Linux
When building with the `--shared-openssl` flag, it is highly likely that the `openssl.cnf` used by the shared library differs from the one present in the Node.js source tree, resulting in several test failures.
Typically, on Debian, OpenSSL is configured with stricter options regarding key length and TLS versions.
It is very easy to avoid any test failure, though, by setting this environment variable when running tests:
`OPENSSL_CONF=./deps/openssl/openssl/apps/openssl.cnf`
| help wanted,test,openssl | low | Critical |
380,952,108 | opencv | Exception in knnMatch of cvflann::anyimpl::bad_any_cast when upgrading Podfile from 3.1.0.1 to latest | ##### System information (version)
- OpenCV => latest Pod install
- Operating System / Platform => iOS
- Compiler => Xcode
##### Detailed description
I have an image matching program on iOS that identifies images from the phone's camera for the user. It's been working fine for two years, starting with a much, much older opencv2 library. I've updated it several times throughout the years with no issues, but when going from 3.1.0.1 to latest Pod using Podfiles, I suddenly get crashes.
Here is the feature extractor I use:
```
#define NUM_FEATURES_ORB (400)
m_descriptorExtractor = cv::ORB::create(NUM_FEATURES_ORB);
```
In debug mode, I only get an exception in `knnMatch(...)` of type
```
cvflann::anyimpl::bad_any_cast
```
This is usually thrown when you've set up your system incorrectly (most posts about that exception, dating back to 2012, say as much). However, I know the system is set up correctly and has been untouched for two years.
In release mode, it just crashes in the constructor for the FlannMatcher
```
int const table_number = 6;// 12
int const key_size = 12;// 20
int const multi_probe_level = 1;// 2
cv::FlannBasedMatcher flannMatcher(cv::makePtr<cv::flann::LshIndexParams>(table_number, key_size, multi_probe_level));
```
Reverting back to using 3.1.0.1 in the Podfile fixes the issue
| platform: ios/osx,incomplete,needs investigation | low | Critical |
380,956,411 | TypeScript | Statically match namespace names using `new Function().name` | I have this bit of code:
```js
import * as express from 'express';
import {RequestHandler} from 'express';
const router = express.Router();
export const register = (v: any) => {
router.get('/', makeGetFoo(v));
router.put('/', makePutFoo(v));
};
namespace ApiDoc {
export interface makeGetFoo { // <--- i want to share this interface with front-end codebase
success: boolean
}
export interface makePutFoo { // <--- i want to share this interface with front-end codebase
success: boolean
}
}
const makeGetFoo = (v: any): RequestHandler => {
return (req, res, next) => {
res.json(<ApiDoc[makeGetFoo.name]>{success: true});
};
};
const makePutFoo = (v: any): RequestHandler => {
return (req, res, next) => {
res.json(<ApiDoc[makePutFoo.name]>{success: true});
};
};
```
My goal is to put the typings in an intermediary file, so that the types can be sourced by the front-end code. Hopefully you understand what I am trying to do.
The problem is I get this:

> Cannot use namespace ApiDoc as a type
What I want to do is create a system where the interface name matches the function name so that I don't have to guess, etc.
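One way to get close to this without indexing into a namespace (a sketch; the names mirror the example above) is to replace the namespace with an exported interface used as a string-keyed map, which can be shared with the front-end codebase:

```typescript
// Sketch: an exported interface map makes ApiDoc["makeGetFoo"] a valid
// indexed-access type, unlike indexing into a namespace.
export interface ApiDoc {
  makeGetFoo: { success: boolean };
  makePutFoo: { success: boolean };
}

// Indexed access by string literal works where ApiDoc[makeGetFoo.name] does not:
const getFooResponse: ApiDoc["makeGetFoo"] = { success: true };
```

Note that `makeGetFoo.name` is a runtime value, so it can never be used as a type index; the key has to be a compile-time string literal, which is why the names above are written out.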
| Suggestion,In Discussion | low | Major |
380,957,889 | go | cmd/go: add (or make explicit) tests for ambiguous imports in module mode |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/deklerk/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/deklerk/workspace/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/lk/zs4m7sv12mq2vzk_wfn2tfvm00h16k/T/go-build259032920=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
- Create repo `foo` with:
- A `foo/go.mod`
- A `foo/somefile.go` containing `const Version = "foo=0.0.1"`
- A directory `foo/bar/` with a `foo/bar/somefile.go` containing `const Version = "foo=0.0.1"`
- Commit, tag v0.0.1 (parent encompasses directory), push both
...
- In some other library, depend on foo with `require foo v0.0.1`
- Add `fmt.Println(foo.Version) // expect "foo=0.0.1"`
- Add `fmt.Println(bar.Version) // expect "foo=0.0.1"`
...
- Back in repo `foo`, make a `foo/bar` its own submodule:
- Create `foo/bar/go.mod`
- Update `foo/bar/somefile.go` to now have `const Version = "bar=1.0.0"`
- Commit, tag bar/v1.0.0, tag v0.0.2 (new version of the parent because a submodule was carved out), and push all
...
- In your other library, add a `require bar v1.0.0` dependency (keeping the `require foo v0.0.1` dependency!)
- Now you require "bar" two ways: via `require bar v1.0.0` and via `require foo v0.0.1`, in which `foo/bar` was still part of the module
- `fmt.Println(bar.Version)` results in `bar=1.0.0`
_EDIT_: actually, you get the following error (see @myitcv's excellent repro below):
```
unknown import path "github.com/jadekler/module-testing/pkg_b": ambiguous import: found github.com/jadekler/module-testing/pkg_b in multiple modules:
```
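For concreteness, the consumer's `go.mod` at this point requires the package both ways, roughly like this (a sketch using the placeholder module names from the steps above):

```
module consumer

require (
    foo v0.0.1     // foo/bar still a package of this module at v0.0.1
    foo/bar v1.0.0 // bar carved out as its own module
)
```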
### What did you expect to see?
I'm not sure. Maybe this is WAI? I vaguely expected to see an error. I _suspect_ there are some hidden subtleties here that could end up breaking people. Maybe since bar has to always be backwards compatible, this is OK?
Anyways, feel free to close if this is unsurprising. I thought it was subtle and maybe worth looking at, but y'all let me know.
_EDIT_: Ignore this, since my original post was flawed. In fact you _cannot_ do this, which seems to imply carving out submodules after-the-fact is unsupported.
### What did you see instead?
Go modules seems to silently replace `foo/bar/` in the `require foo v0.0.1` with the `bar v1.0.0` version.
_EDIT_: Ignore this, since my original post was flawed. In fact you _cannot_ do this, which seems to imply carving out submodules after-the-fact is unsupported. | Testing,NeedsFix,GoCommand,modules | low | Critical |
380,967,879 | create-react-app | Development page blank on first load |
### Is this a bug report?
Yes
### Did you try recovering your dependencies?
Yes
### Environment
It just says it's missing the app name.
### Steps to Reproduce
It's definitely something in my project (upgraded from the latest 1.x), because a brand new app works. When I first start the development server, it serves the following HTML:
```
<body>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
<script src="https://www.google.com/recaptcha/api.js?render=explicit" async defer></script>
<script src="/static/js/bundle.js"></script>
<script src="/static/js/0.chunk.js"></script>
<script src="/static/js/main.chunk.js"></script>
</body>
```
The page comes up blank and React DevTools (Chrome extension) says `this page is not using react`
It's almost like the JS is never executed
But as soon as I refresh the page, it works as expected with the following HTML served **(notice chunk 0 is now chunk 1)**:
```
<body>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
<script src="https://www.google.com/recaptcha/api.js?render=explicit" async defer></script>
<script src="/static/js/bundle.js"></script>
<script src="/static/js/1.chunk.js"></script>
<script src="/static/js/main.chunk.js"></script>
</body>
```
### Expected Behavior
It should just load the right chunk the first time
### Actual Behavior
It doesn't execute react on the first load
### Reproducible Demo
Very confidential app; I will try to bring it down to an MVP I can share. In the meantime I would love at least a direction to look in. There are absolutely zero errors in the console.
Similar to [#4788](https://github.com/facebook/create-react-app/issues/4788) and [#4757](https://github.com/facebook/create-react-app/issues/4757)
| issue: bug | medium | Critical |
380,975,590 | TypeScript | "2 definitions" when using module.exports to export a function wrapped in {} | - VSCode Version: 1.28.2
- OS Version: macOS 10.14
I noticed there are several issues opened against "2 definitions", notably https://github.com/Microsoft/vscode/issues/51459 & https://github.com/Microsoft/TypeScript/issues/24861
But I don't think my issue is the same (I don't use TS or React, just Node.js code). It happens when I export like this:
```js
function downloadVoice() {
...
}
module.exports = {
downloadVoice
}
// If I export like this it will not happen
// module.exports = downloadVoice
```
But when I hover over the place where the function is called, VS Code shows the definition correctly,
<img width="894" alt="2 inspection" src="https://user-images.githubusercontent.com/516243/48044533-5d9b4b00-e1c7-11e8-8902-cdddaf3ae2c2.png">
The 2 definitions warning:
<img width="948" alt="ks3test_js_ _inspection" src="https://user-images.githubusercontent.com/516243/48044566-86bbdb80-e1c7-11e8-9551-74f2b0fcaec8.png">
Does this issue occur when all extensions are disabled?: Yes
| Bug,Domain: JavaScript | low | Major |
380,990,731 | go | runtime: scheduler work stealing slow for high GOMAXPROCS |
### What version of Go are you using (`go version`)?
We don't have Go installed in the production environment. The program reports this info as part of reporting its version.
```
Go Version: "go1.11"
Go Compiler: "gc"
Go ARCH: "amd64"
Go OS: "linux"
```
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
We don't have Go installed in the production environment. I think the following info is relevant to this issue.
```
$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
Stepping: 1
CPU MHz: 1207.421
BogoMIPS: 4004.63
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 35840K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55
```
### What did you do?
We have an application that primarily retrieves data in large chunks from a node-local cache server via HTTP, packetizes it, and sends the packets out at a target bitrate via UDP. It does this for many concurrent streaming sessions per node. A node with the CPU shown above will serve several hundred streaming sessions at once.
The application currently starts separate goroutines for each streaming session. Each session has a goroutine responsible for outputting the UDP packets as close to the target bitrate as possible. That goroutine uses a `time.Ticker` to wake up periodically and transmit data. The Go execution tracer shows that each of these transmitting goroutines typically runs for about 15µs once every 2.4ms.
Until recently the application was built with Go 1.9. We found empirically that running five instances of the program on a node—each instance handling one fifth of the work—handled our work load the best. At the time we didn't understand why we had to shard the workload across multiple processes, but the empirical data was conclusive.
We built our latest release with Go 1.11 and began load testing it with the same configuration of five instances per node.
### What did you expect to see?
We expected to see that the new release built with Go 1.11 performed at least as well as the previous release built with Go 1.9.
### What did you see instead?
The new release built with Go 1.11 consumed significantly more CPU for the same workload compared with the previous release built with Go 1.9. We eliminated the possibility that the changes in our new release were the cause by rebuilding the previous release with Go 1.11 and observed that it also consumed significantly more CPU for the same load as the same code built with Go 1.9.
We collected CPU profile data from these three builds of the program under load and began looking for culprits. The data showed that the Go 1.11 builds were spending about 3x CPU time in `runtime.findrunnable` and its helpers than the Go 1.9 builds.
Looking at the commit history since Go 1.9 for that part of the runtime we identified commit ["runtime: improve timers scalability on multi-CPU systems"](https://github.com/golang/go/commit/76f4fd8a5251b4f63ea14a3c1e2fe2e78eb74f81) as the only change that seemed relevant. But we were puzzled that despite the performance improvements to timer handling we saw increased CPU load rather than performance improvements for our program.
After further analysis, however, we realized that the inefficient timer implementation was the likely reason we were forced to shard the load across five processes when using Go 1.9. Since the profile data for the Go 1.11 build showed that the cumulative time spent in `runtime.runqsteal` was the largest contributor to the cumulative time of `runtime.findrunnable` we hypothesized that with the timer handling bottleneck reduced each P in the scheduler could advance to the work stealing loop instead of contending for the timer lock. Furthermore, since we were running on hardware with 56 hardware threads and had not set an explicit `GOMAXPROCS` the work stealing loop was rather expensive, especially if it typically found all the other run queues empty, which we confirmed by running with `GODEBUG=schedtrace=1000`.
With that hypothesis seeming sound we next hypothesized that running each of the five Go 1.11 processes with `GOMAXPROCS=12` would be a better configuration to reduce the work stealing iterations and without under utilizing the available hardware threads. This idea also matched the conclusion of this similar issue (now closed) https://github.com/golang/go/issues/16476. Load tests with five instances of a Go 1.11 build each with `GOMAXPROCS=12` found a similar amount of CPU consumption as with Go 1.9 builds. This is a reasonable workaround and we are using it now.
Although there is a simple workaround, it is unsettling that the runtime scheduler does not scale well for this type of workload on high core counts. Notably, the work stealing algorithm degenerates to O(N²) due to N cores all inspecting each other's mostly empty run queues. The CPU time spent fruitlessly attempting to steal work from empty run queues contributes to overall system load and competes with the demands of other processes on the node, such as the local content cache server mentioned earlier.
The problem I've described here is almost the same as https://github.com/golang/go/issues/18237, in which @aclements explained:
>Given how much time you're spending in findrunnable, it sounds like you're constantly switching between having something to do and being idle. Presumably the 1500 byte frames are coming in just a little slower than you can process them, so the runtime is constantly looking for work to do, going to sleep, and then immediately being woken up for the next frame. This is the most expensive path in the scheduler (we optimize for the case where there's another goroutine ready to run, which is extremely fast) and there's an implicit assumption here that the cost of going to sleep doesn't really matter if there's nothing to do. But that's violated if new work is coming in at just the wrong rate.
I am creating a new issue because our goroutines are made runnable largely by runtime timers instead of network events. I believe that is a significant difference because although the scheduler cannot easily predict the arrival of network events, runtime timers have known expiration times.
### Reproducing
Below is a small program that models the critical path of the application. The code will not run on the playground, but here is a link in case that is more convenient than the inline code below. https://play.golang.org/p/gcGT2v2mZjU
Profile data we've collected from this program shows a strong resemblance to the profile data from the bigger program we originally witnessed this issue with, at least with regard to the Go runtime functions involved.
```go
package main
import (
"flag"
"log"
"math/rand"
"os"
"os/signal"
"runtime"
"runtime/pprof"
"runtime/trace"
"sync"
"time"
)
func main() {
var (
runTime = flag.Duration("runtime", 10*time.Second, "Run `duration` after target go routine count is reached")
workDur = flag.Duration("work", 15*time.Microsecond, "CPU bound work `duration` each cycle")
cycleDur = flag.Duration("cycle", 2400*time.Microsecond, "Cycle `duration`")
gCount = flag.Int("gcount", runtime.NumCPU(), "Number of `goroutines` to use")
gStartFreq = flag.Int("gfreq", 1, "Number of goroutines to start each second until gcount is reached")
cpuProfilePath = flag.String("cpuprofile", "", "Write CPU profile to `file`")
tracePath = flag.String("trace", "", "Write execution trace to `file`")
)
flag.Parse()
sigC := make(chan os.Signal, 1)
signal.Notify(sigC, os.Interrupt)
var wg sync.WaitGroup
done := make(chan struct{})
stop := make(chan struct{})
wg.Add(1)
go func() {
defer wg.Done()
select {
case sig := <-sigC:
log.Print("got signal ", sig)
case <-stop:
}
close(done)
}()
gFreq := time.Second / time.Duration(*gStartFreq)
jitterCap := int64(gFreq / 2)
for g := 0; g < *gCount; g++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
ticker := time.NewTicker(*cycleDur)
defer ticker.Stop()
for {
select {
case <-done:
return
case <-ticker.C:
workUntil(time.Now().Add(*workDur))
}
}
}(g)
log.Print("goroutine count: ", g+1)
jitter := time.Duration(rand.Int63n(jitterCap))
select {
case <-done:
g = *gCount // stop loop early
case <-time.After(gFreq + jitter):
}
}
select {
case <-done:
default:
log.Print("running for ", *runTime)
runTimer := time.NewTimer(*runTime)
wg.Add(1)
go func() {
wg.Done()
select {
case <-runTimer.C:
log.Print("runTimer fired")
close(stop)
}
}()
}
if *cpuProfilePath != "" {
f, err := os.Create(*cpuProfilePath)
if err != nil {
log.Fatal("could not create CPU profile: ", err)
}
if err := pprof.StartCPUProfile(f); err != nil {
log.Fatal("could not start CPU profile: ", err)
}
log.Print("profiling")
defer pprof.StopCPUProfile()
}
if *tracePath != "" {
f, err := os.Create(*tracePath)
if err != nil {
log.Fatal("could not create execution trace: ", err)
os.Exit(1)
}
defer f.Close()
if err := trace.Start(f); err != nil {
log.Fatal("could not start execution trace: ", err)
}
log.Print("tracing")
defer trace.Stop()
}
wg.Wait()
}
func workUntil(deadline time.Time) {
now := time.Now()
for now.Before(deadline) {
now = time.Now()
}
}
```
### Profile Data
We ran the above program in several configurations and captured profile and schedtrace data.
#### Go 1.9, GOMAXPROCS=56, 5 procs x 500 goroutines
schedtrace sample
```
SCHED 145874ms: gomaxprocs=56 idleprocs=50 threads=60 spinningthreads=1 idlethreads=50 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 146880ms: gomaxprocs=56 idleprocs=43 threads=60 spinningthreads=4 idlethreads=43 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 147886ms: gomaxprocs=56 idleprocs=49 threads=60 spinningthreads=1 idlethreads=49 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 148892ms: gomaxprocs=56 idleprocs=56 threads=60 spinningthreads=0 idlethreads=56 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 149898ms: gomaxprocs=56 idleprocs=50 threads=60 spinningthreads=1 idlethreads=50 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
```
pprof data
```
File: sched-test-linux-9
Type: cpu
Time: Oct 30, 2018 at 3:26pm (EDT)
Duration: 1mins, Total samples = 4.70mins (468.80%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top20 -cum
Showing nodes accounting for 229.49s, 81.39% of 281.97s total
Dropped 61 nodes (cum <= 1.41s)
Showing top 20 nodes out of 43
flat flat% sum% cum cum%
0 0% 0% 143.03s 50.73% runtime._System
135.95s 48.21% 48.21% 135.95s 48.21% runtime._ExternalCode
0.18s 0.064% 48.28% 73.63s 26.11% runtime.mcall
0.06s 0.021% 48.30% 73.45s 26.05% runtime.park_m
0.67s 0.24% 48.54% 72.55s 25.73% runtime.schedule
11.32s 4.01% 52.55% 60.58s 21.48% runtime.findrunnable
0.97s 0.34% 52.90% 56.95s 20.20% main.main.func2
7.71s 2.73% 55.63% 41.98s 14.89% main.workUntil
9.40s 3.33% 58.96% 29.40s 10.43% time.Now
25.23s 8.95% 67.91% 25.23s 8.95% runtime.futex
11.73s 4.16% 72.07% 20s 7.09% time.now
3.92s 1.39% 73.46% 19.21s 6.81% runtime.runqsteal
0.59s 0.21% 73.67% 15.43s 5.47% runtime.stopm
15.06s 5.34% 79.01% 15.29s 5.42% runtime.runqgrab
0.11s 0.039% 79.05% 14.20s 5.04% runtime.futexsleep
0.92s 0.33% 79.38% 13.86s 4.92% runtime.notesleep
4.47s 1.59% 80.96% 12.79s 4.54% runtime.selectgo
0.43s 0.15% 81.12% 12.30s 4.36% runtime.startm
0.56s 0.2% 81.31% 11.56s 4.10% runtime.notewakeup
0.21s 0.074% 81.39% 11.52s 4.09% runtime.wakep
```
#### Go 1.11, GOMAXPROCS=56, 5 procs x 500 goroutines
schedtrace sample
```
SCHED 144758ms: gomaxprocs=56 idleprocs=52 threads=122 spinningthreads=2 idlethreads=59 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 145764ms: gomaxprocs=56 idleprocs=46 threads=122 spinningthreads=4 idlethreads=55 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 146769ms: gomaxprocs=56 idleprocs=46 threads=122 spinningthreads=3 idlethreads=56 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 147774ms: gomaxprocs=56 idleprocs=46 threads=122 spinningthreads=2 idlethreads=55 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 148780ms: gomaxprocs=56 idleprocs=52 threads=122 spinningthreads=2 idlethreads=60 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0]
SCHED 149785ms: gomaxprocs=56 idleprocs=46 threads=122 spinningthreads=1 idlethreads=57 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
```
pprof data
```
File: sched-test-linux-11
Type: cpu
Time: Oct 30, 2018 at 3:35pm (EDT)
Duration: 1mins, Total samples = 6.43mins (641.46%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top20 -cum
Showing nodes accounting for 310.85s, 80.57% of 385.80s total
Dropped 61 nodes (cum <= 1.93s)
Showing top 20 nodes out of 49
flat flat% sum% cum cum%
160.61s 41.63% 41.63% 169.60s 43.96% time.now
0.83s 0.22% 41.85% 122.39s 31.72% runtime.mcall
0.13s 0.034% 41.88% 121.56s 31.51% runtime.park_m
0.64s 0.17% 42.05% 120.12s 31.14% runtime.schedule
19.14s 4.96% 47.01% 115.47s 29.93% runtime.findrunnable
1.54s 0.4% 47.41% 64.15s 16.63% main.main.func2
53.51s 13.87% 61.28% 53.51s 13.87% runtime.futex
1.47s 0.38% 61.66% 48.66s 12.61% runtime.timerproc
10.70s 2.77% 64.43% 47.49s 12.31% runtime.runqsteal
9.88s 2.56% 66.99% 44.13s 11.44% main.workUntil
0.14s 0.036% 67.03% 36.91s 9.57% runtime.notetsleepg
35.08s 9.09% 76.12% 36.79s 9.54% runtime.runqgrab
0.73s 0.19% 76.31% 36.66s 9.50% runtime.futexsleep
8.79s 2.28% 78.59% 30.07s 7.79% time.Now
1.12s 0.29% 78.88% 26.48s 6.86% runtime.stopm
0.33s 0.086% 78.96% 23.16s 6.00% runtime.systemstack
1.49s 0.39% 79.35% 22.39s 5.80% runtime.notesleep
0.39s 0.1% 79.45% 18.49s 4.79% runtime.startm
0.09s 0.023% 79.47% 17.68s 4.58% runtime.futexwakeup
4.24s 1.10% 80.57% 17.56s 4.55% runtime.selectgo
```
#### Go 1.11, GOMAXPROCS=12, 5 procs x 500 goroutines
schedtrace sample
```
SCHED 145716ms: gomaxprocs=12 idleprocs=8 threads=31 spinningthreads=2 idlethreads=11 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 146721ms: gomaxprocs=12 idleprocs=8 threads=31 spinningthreads=1 idlethreads=12 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 147725ms: gomaxprocs=12 idleprocs=8 threads=31 spinningthreads=3 idlethreads=11 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 148730ms: gomaxprocs=12 idleprocs=9 threads=31 spinningthreads=0 idlethreads=12 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 149735ms: gomaxprocs=12 idleprocs=6 threads=31 spinningthreads=1 idlethreads=9 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0]
SCHED 150740ms: gomaxprocs=12 idleprocs=2 threads=31 spinningthreads=3 idlethreads=5 runqueue=0 [0 0 0 0 0 0 0 0 0 0 0 0]
```
pprof data
```
File: sched-test-linux-11
Type: cpu
Time: Oct 30, 2018 at 3:32pm (EDT)
Duration: 1mins, Total samples = 4.49mins (447.65%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top20 -cum
Showing nodes accounting for 223.33s, 82.96% of 269.19s total
Dropped 61 nodes (cum <= 1.35s)
Showing top 20 nodes out of 46
flat flat% sum% cum cum%
152.74s 56.74% 56.74% 160.99s 59.81% time.now
0.82s 0.3% 57.05% 49.80s 18.50% main.main.func2
0.44s 0.16% 57.21% 42.75s 15.88% runtime.mcall
0.09s 0.033% 57.24% 42.03s 15.61% runtime.park_m
0.57s 0.21% 57.45% 41.32s 15.35% runtime.schedule
41.16s 15.29% 72.74% 41.16s 15.29% runtime.futex
9.33s 3.47% 76.21% 41.09s 15.26% main.workUntil
5s 1.86% 78.07% 36.28s 13.48% runtime.findrunnable
1.06s 0.39% 78.46% 33.86s 12.58% runtime.timerproc
0.55s 0.2% 78.67% 29.45s 10.94% runtime.futexsleep
7.98s 2.96% 81.63% 27.25s 10.12% time.Now
0.07s 0.026% 81.66% 25.42s 9.44% runtime.notetsleepg
0.55s 0.2% 81.86% 17.84s 6.63% runtime.stopm
0.72s 0.27% 82.13% 16.36s 6.08% runtime.notesleep
1.39s 0.52% 82.64% 15.18s 5.64% runtime.notetsleep_internal
0.22s 0.082% 82.73% 13.38s 4.97% runtime.startm
0.16s 0.059% 82.79% 12.66s 4.70% runtime.systemstack
0.35s 0.13% 82.92% 12.46s 4.63% runtime.notewakeup
0.05s 0.019% 82.93% 12.32s 4.58% runtime.futexwakeup
0.08s 0.03% 82.96% 8.78s 3.26% runtime.entersyscallblock
```
I can test other configurations if it will help. | Performance,NeedsInvestigation,compiler/runtime | medium | Critical |
381,022,623 | TypeScript | Typescript failed to expand type in generic for unknown reason |
**TypeScript Version:** 3.1.3
**Search Terms:**
**Code**
```ts
type Or<T> = T [
{
[K in keyof T]: T[K] extends ((...arg:any[])=>any)?
never:
K
} [keyof T]
]
// F is unknown
type F = Or<[
string,
number
]>
// F_ is string|number
type F_ = ([
string,
number
])[
{
[K in keyof ([
string,
number
])]: ([
string,
number
])[K] extends ((...arg:any[])=>any)?
never:
K
} [keyof ([
string,
number
])]
]
```
**Expected behavior:**
I will get the type `string|number` for both cases
**Actual behavior:**
The generic fails to expand the type and leaves `F` as `unknown` for no apparent reason,
while manually inlining it as `F_` produces the expected type
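A possible workaround (a sketch, not a fix for the underlying expansion bug): constrain the parameter to a tuple type and filter the element union with `Exclude`, avoiding the mapped-type indexing that fails to expand:

```typescript
// Equivalent filtering without indexing by a mapped type:
type Or<T extends unknown[]> = Exclude<T[number], (...args: any[]) => any>;

// Resolves to string | number as expected:
type F = Or<[string, number]>;
const f: F = "ok";
```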
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
[link](https://www.typescriptlang.org/play/index.html#src=type%20Or%3CT%3E%20%3D%20T%20%5B%0D%0A%20%20%20%20%7B%0D%0A%20%20%20%20%20%20%20%20%5BK%20in%20keyof%20T%5D%3A%20T%5BK%5D%20extends%20((...arg%3Aany%5B%5D)%3D%3Eany)%3F%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20never%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20K%0D%0A%20%20%20%20%7D%20%5Bkeyof%20T%5D%0D%0A%5D%0D%0A%0D%0A%2F%2F%20F%20is%20unknown%0D%0Atype%20F%20%3D%20Or%3C%5B%0D%0A%20%20%20%20string%2C%0D%0A%20%20%20%20number%0D%0A%5D%3E%0D%0A%0D%0A%2F%2F%20F_%20is%20string%7Cnumber%0D%0Atype%20F_%20%3D%20(%5B%0D%0A%20%20%20%20string%2C%0D%0A%20%20%20%20number%0D%0A%5D)%5B%0D%0A%20%20%20%20%7B%0D%0A%20%20%20%20%20%20%20%20%5BK%20in%20keyof%20(%5B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20string%2C%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20number%0D%0A%20%20%20%20%20%20%20%20%5D)%5D%3A%20(%5B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20string%2C%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20number%0D%0A%20%20%20%20%20%20%20%20%5D)%5BK%5D%20extends%20((...arg%3Aany%5B%5D)%3D%3Eany)%3F%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20never%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20K%0D%0A%20%20%20%20%7D%20%5Bkeyof%20(%5B%0D%0A%20%20%20%20%20%20%20%20string%2C%0D%0A%20%20%20%20%20%20%20%20number%0D%0A%20%20%20%20%5D)%5D%0D%0A%5D%0D%0A%0D%0Avar%20f1%3A%20F_%20%3D%20%2Ffsd%2F%0D%0Avar%20f2%3A%20F%20%3D%20%2Ffsd%2F)
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
| Bug,Domain: Conditional Types | low | Critical |
381,053,676 | vscode | [scss] selector specificity is computed incorrectly for nested selectors | Issue Type: <b>Bug</b>
The CSS specificity calculator doesn't respect nested selectors or selectors with `&`.
```css
#id {} /* (1, 0, 0) */
#id div {} /* (1, 0, 1) */
```
```sass
#id {
div { /* (0, 0, 1) */
}
}
.block{ /* (0, 1, 0) */
&__element { /* (0, 0, 0) */
}
}
```
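For reference (my addition, assuming standard Sass nesting and `&` expansion), the nested example compiles to the following flat selectors, whose specificity the calculator should report:

```css
#id div { }          /* specificity should be (1, 0, 1) */
.block__element { }  /* specificity should be (0, 1, 0) */
```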
Reproduced without any extensions
VS Code version: Code 1.29.0 (5f24c93878bd4bc645a4a17c620e2487b11005f9, 2018-11-12T07:47:15.448Z)
OS version: Windows_NT x64 10.0.17134
<!-- generated by issue reporter --> | bug,css-less-scss | low | Critical |
381,056,641 | vue | match (getTypeIndex) is called many times when props change and consumes a lot of memory | ### Version
2.5.17
### Reproduction link
[https://codepen.io/anon/pen/MzpZoz?editors=1011](https://codepen.io/anon/pen/MzpZoz?editors=1011)
### Steps to reproduce
1. Open the Google Chrome dev console
2. Go to the 'Memory' tab
3. Select 'Allocation instrumentation timeline' and check 'Record allocation stacks'
4. Record for 30 seconds
5. In your snapshot, select 'Allocation' in the top-left drop-down
6. Order by 'Count'

You will see a high number of `match` calls (far more than the actual number of variable changes).
### What is expected?
The number of `match` calls should be close to the number of variable changes.
### What is actually happening?
`match` is executed many times and consumes a lot of memory.
---
I think the way I deal with the object may not be the right way, but I searched the docs and couldn't find anything related to this behavior or how to manage it (perhaps use a store?).
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | medium | Minor |
381,109,175 | TypeScript | Compiler option for implicitly adding this context on class member methods | ## Search Terms
this context class
## Suggestion
A new compiler option `--enforceClassMemberThisContext`. It causes the compiler to implicitly set the `this` context of class methods that use `this` somewhere in the method body, behaving as if the user had specified it manually on each method.
## Use Cases
It is pretty common to pass class methods down to other parts of the program as event listeners (e.g. in React apps). In this case it is almost always necessary to bind the `this` context to the current class instance if the method uses `this` in some form. A common mistake is to miss this binding for a given method. It is possible to add a TypeScript check for this mistake (by specifying the `this` parameter on each class method), but one would have to do this for each method by hand, which is also easy to miss. By specifying it as a compiler flag, it would become very hard to make this mistake.
## Examples
The following example doesn't raise a typescript error in current typescript but would do so with the proposed compiler flag:
```ts
class MyComponent {
otherMethod() {}
clickHandler() {
// ...
this.otherMethod();
}
registerListeners() {
//...
buttonElement.onclick = this.clickHandler;
}
}
```
This example produces a "The 'this' types of each signature are incompatible." error in current typescript and is identical to the example above with the proposed compiler flag:
```ts
class MyComponent {
otherMethod() {}
clickHandler(this: MyComponent) {
// ...
this.otherMethod();
}
registerListeners() {
//...
buttonElement.onclick = this.clickHandler;
}
}
```
## Problems/Discussion Points
* This doesn't work with `element.addEventListener` at the moment because the `EventListener` interface doesn't specify or complain about wrong this contexts (but I think it should, maybe this is another separate issue?)
* This doesn't work with the semi-common pattern of overwriting the method instances with their bound versions in the constructor. Maybe the docs should recommend something like this instead (it should be roughly equivalent, besides the different name for the bound handler, if I'm not mistaken)?
```ts
class MyComponent {
// ...
clickHandler() { /* ... */ }
boundClickHandler = this.clickHandler.bind(this);
// ...
}
```
* Maybe this could be implemented as a tslint check, but it would require the user to type out the this context for each method separately, which is cumbersome.
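To make the bound-field pattern from the bullet above concrete, here is a small self-contained sketch (class and member names are illustrative only) showing that the bound field stays safe when the method is detached:

```typescript
// Sketch of the bound-field pattern; names are illustrative, not a real API.
class MyComponent {
  greeting = "hi";

  clickHandler(this: MyComponent): string {
    return this.greeting;
  }

  // Bound once at construction; safe to pass around detached.
  boundClickHandler = this.clickHandler.bind(this);
}

const c = new MyComponent();
const detached = c.boundClickHandler;
console.log(detached()); // "hi"
```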
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion,Add a Flag | low | Critical |
381,119,794 | godot | Errors when trying to import and open EXR files with multiple layers | **Godot version:**
3.1 66c6dfb
**OS/device including version:**
Windows 10
**Issue description:**
When I copy an EXR file (created in Krita 4 without merging layers) into a Godot project, and later open the project and the file, this error is shown:
```
ERROR: load_image: TinyEXR: R channel not found.
At: modules/tinyexr/image_loader_tinyexr.cpp:111
ERROR: _reimport_file: Error importing: res://Places/Flag/flaga.exr
At: editor/editor_file_system.cpp:1540
ERROR: Failed loading resource: res://Places/Flag/flaga.exr
At: core/io/resource_loader.cpp:192
ERROR: load_resource: Condition ' !res.is_valid() ' is true. returned: ERR_CANT_OPEN
At: editor/editor_node.cpp:583
```
Example file (change the extension to .exr):
[flaga.zip](https://github.com/godotengine/godot/files/2585085/flaga.zip)
| bug,topic:thirdparty,topic:import | low | Critical |
381,120,647 | flutter | Support debugging `flutter drive` integration tests from the editors | Hi,
I can't find any way to debug integration tests run with `flutter drive`.
The Flutter site only says to use Observatory for debugging, but doesn't explain how to do this:
https://flutter.io/docs/testing/debugging
I also tried connecting to the Observatory link, as shown in the attached screenshot,

but I couldn't see my code or debug it.
Please help me.
thanks | tool,t: flutter driver,P2,team-tool,triaged-tool | low | Critical |
381,179,861 | pytorch | Cannot import caffe2_pybind11_state_gpu | ## 🐛 Bug
Hello. The problem is that `import caffe2_pybind11_state_gpu` doesn't work; it says:
ImportError: DLL load failed: The specified module could not be found.
On the other hand, `import caffe2_pybind11_state` (without GPU) works fine. What could be missing? How can I find which DLL it is looking for?
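As a debugging aid (my addition, an assumption based on the reported CUDA 9.2 setup, not something from the original report), a "DLL load failed" on the GPU pybind module often means a CUDA runtime DLL is not on PATH. This probes whether the CUDA 9.2 runtime libraries can be resolved; the library names are assumptions for that CUDA version:

```python
# Hedged sketch: probe for the CUDA 9.2 runtime libraries that the GPU
# pybind module typically depends on (names assumed from CUDA 9.2).
import ctypes.util


def probe_cuda_libs(names=("cudart64_92", "cublas64_92", "curand64_92")):
    """Return {lib_name: resolved_path_or_None} for each candidate library."""
    return {name: ctypes.util.find_library(name) for name in names}


if __name__ == "__main__":
    for name, path in probe_cuda_libs().items():
        print(f"{name}: {path or 'NOT FOUND - check PATH / CUDA install'}")
```

If any of these come back unresolved on a machine where the import fails, adding the CUDA `bin` directory to PATH would be the first thing to try.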
## To Reproduce
Steps to reproduce the behavior:
1. Download the master branch and build from source: Windows 10, CUDA 9.2, Python 3.6
| caffe2 | low | Critical |
381,218,226 | rust | `?` cannot use associated type constraints usable by `.into()` | When an error type is guaranteed to be convertible (via `From<>`) from a result's error type by means of a trait constraint in a generic function with a concrete result error type, the idiom `some_result.map_err(|e| e.into())` works, but the should-be-equivalent `Ok(some_result?)` does not and produces:
the trait `std::convert::From<<W as Worker>::Error>` is not implemented for `GenericError`
That can be alleviated by explicitly demanding `From<<W as Worker>::Error>` of the generic function's error type, but my expectation would be that this is implied by the type constraint -- especially since the `.map_err` version, which should be equivalent to the `?` one, does "get it".
Full example:
```rust
#[derive(Debug)]
struct GenericError();

// impl Into<GenericError> for i32 {
//     fn into(self) -> GenericError {
//         GenericError()
//     }
// }

impl From<i32> for GenericError {
    fn from(_: i32) -> Self {
        GenericError()
    }
}

trait Worker {
    type Error: Into<GenericError>;
    fn do_something(&self) -> Result<(), Self::Error>;
}

struct IntegerWorker();

impl Worker for IntegerWorker {
    type Error = i32;
    fn do_something(&self) -> Result<(), Self::Error> {
        Err(42)
    }
}

fn process<W>(w: W) -> Result<(), GenericError>
where
    W: Worker,
    // GenericError: From<<W as Worker>::Error>
{
    let some_result = w.do_something();
    // // This works:
    // some_result.map_err(|e| e.into())
    // This doesn't:
    Ok(some_result?)
}

fn main() {
    let w = IntegerWorker();
    println!("Result: {:?}", process(w));
}
```
This fails to build with the above error message, while the commented-out version with `.map_err(|e| e.into())` builds fine. Adding the `GenericError: From<...>` "where" clause solves the build error, but is unergonomic (it'd clutter all functions working on that type).
In case it is relevant: If `Into<>` is implemented instead of the equivalent `From<>` (top comment block), then the problem still exists, but adding the additional constraint does not solve it any more but produces a different error message.
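For completeness (my addition), here is the same example with the extra bound actually spelled out, i.e. the commented-out `where` clause from the full example enabled; this version compiles and lets `?` perform the conversion:

```rust
// Same example as in the report, with the extra `where` bound enabled
// so that `?` can use `From` directly.
#[derive(Debug)]
struct GenericError();

impl From<i32> for GenericError {
    fn from(_: i32) -> Self {
        GenericError()
    }
}

trait Worker {
    type Error: Into<GenericError>;
    fn do_something(&self) -> Result<(), Self::Error>;
}

struct IntegerWorker();

impl Worker for IntegerWorker {
    type Error = i32;
    fn do_something(&self) -> Result<(), Self::Error> {
        Err(42)
    }
}

fn process<W>(w: W) -> Result<(), GenericError>
where
    W: Worker,
    GenericError: From<<W as Worker>::Error>, // the bound that makes `?` work
{
    Ok(w.do_something()?)
}

fn main() {
    let result = process(IntegerWorker());
    assert!(result.is_err());
    println!("Result: {:?}", result);
}
```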
## Meta
Previous research on the web and in the issue tracker turned up nothing, and discussion on IRC just helped rule out some possible oversights. The behavior is the same with 1.29.0, 1.30.0, and the current nightly; full example version:
binary: rustc
commit-hash: 1433507eba7d1a114e4c6f27ae0e1a74f60f20de
commit-date: 2018-11-07
host: x86_64-unknown-linux-gnu
release: 1.30.1
LLVM version: 8.0
| A-trait-system,T-compiler | low | Critical |