| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
342,255,450 | opencv | Lower FPS with the newest OpenCV version(3.4) than 3.3/3.2 | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV =>
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
- OpenCV => 3.4 and 3.3 and 3.2
- Operating System / Platform => Linux Ubuntu 16.04
- Compiler => g++
##### Detailed description
Hey guys, I'm facing an interesting problem: I get a lower FPS from a UVC camera with the newest version (3.4) than with the older versions 3.3/3.2.
<!-- your description -->
##### Steps to reproduce
Here is the result of my test:
I tested the same code on the same machine.
In a `while` loop my program did nothing but read one image from the USB camera and display it each iteration.
1. When I used version 3.4, each frame cost 30 ms.
2. Under the same conditions except for the OpenCV version (I tested 3.3 and 3.2), each frame cost 10 ms.
It surprised me a lot, and I'm very confused about it. | category: videoio(camera),incomplete | low | Critical |
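A timing harness along the lines the reporter describes can be sketched as below. `read_frame` is a stand-in for `cap.read()` on a `cv2.VideoCapture`, so the sketch runs without a camera attached; the OpenCV lines appear only in a comment and are illustrative.

```python
import time

def mean_frame_time(read_frame, n=50):
    """Average wall-clock seconds per read_frame() call."""
    t0 = time.perf_counter()
    for _ in range(n):
        read_frame()
    return (time.perf_counter() - t0) / n

# With a real camera this would be roughly:
#   cap = cv2.VideoCapture(0)
#   avg = mean_frame_time(lambda: cap.read())
# Here a 1 ms sleep stands in for the camera read.
avg = mean_frame_time(lambda: time.sleep(0.001), n=20)
```

Comparing `avg` across OpenCV builds on the same machine isolates the per-frame cost the reporter measured.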
342,293,640 | vscode | Enhancement: finer control of word separators | <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Currently, the `"editor.wordSeparators"` setting only tells the editor to separate words *at* these characters. Can we make it separate *before* or *after* the separators?
Take LaTeX as an example. If we remove `\` from `"editor.wordSeparators"`, then double-clicking on `\cmdA\cmdB\cmdC` selects the whole expression. However, since the expression is actually three commands, we should be able to select a single one (e.g. `\cmdA`) at a time. That is, words should be separated only *before* `\`, not after. | feature-request,languages-basic,editor-core | low | Major |
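The desired "separate only before `\`" behavior can be illustrated with a small regex sketch (Python here purely for illustration; the editor setting itself is not regex-based):

```python
import re

expr = r"\cmdA\cmdB\cmdC"
# Treating `\` as a boundary *before* a word keeps the backslash
# attached to the command that follows it, yielding three tokens
# instead of one undivided expression.
commands = re.findall(r"\\[A-Za-z]+", expr)
```

Double-click selection under the proposed setting would then pick one element of `commands` rather than the whole expression.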
342,295,520 | pytorch | [caffe2] Train on GPU and test on CPU failed | I am learning caffe2. I want to train on a machine with a GPU, and test the trained model on an ARM CPU without a GPU.
I tried the CIFAR10 tutorial at
https://github.com/caffe2/tutorials/blob/master/CIFAR10_Part1.ipynb
To speed up training, I used GPU mode:
```
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)):
    data, label = AddInput(
        train_model, batch_size=training_net_batch_size,
        db=training_lmdb_path,
        db_type='lmdb')
    # Add model definition, save return value to 'softmax' variable
    softmax = Add_Original_CIFAR10_Model(train_model, data, num_classes, image_height, image_width, image_channels)
    # Add training operators using the softmax output from the model
    AddTrainingOperators(train_model, softmax, label)
    # Add periodic checkpoint outputs to the model
    AddCheckpoints(train_model, checkpoint_iters, db_type="lmdb")

with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)):
    data, label = AddInput(
        val_model, batch_size=validation_images,
        db=validation_lmdb_path,
        db_type='lmdb')
    # Add model definition, save return value to 'softmax' variable
    softmax = Add_Original_CIFAR10_Model(val_model, data, num_classes, image_height, image_width, image_channels)
    # Add accuracy operator
    AddAccuracy(val_model, softmax, label)

with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)):
    Add_Original_CIFAR10_Model(deploy_model, "data", num_classes, image_height, image_width, image_channels)
```
The trained model is saved in
```
/home/ice/caffe2_notebooks/tutorial_files/tutorial_cifar10/2018-07-18_16-11-41/
```
The directory's contents are:
```
cifar10_checkpoint_01000.lmdb cifar10_checkpoint_02000.lmdb cifar10_init_net.pb cifar10_predict_net.pb
```
Then I followed
https://github.com/caffe2/tutorials/blob/master/CIFAR10_Part2.ipynb
to test the model saved just before, but I got this error:
```
Success, you may continue!
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0718 19:45:00.070950 20855 init.h:99] Caffe2 GlobalInit should be run before any other API calls.
W0718 19:45:00.071094 20855 init.h:99] Caffe2 GlobalInit should be run before any other API calls.
I0718 19:45:00.194257 20855 operator.cc:169] Engine CUDNN is not available for operator MaxPool.
I0718 19:45:00.194357 20855 operator.cc:169] Engine CUDNN is not available for operator AveragePool.
I0718 19:45:00.194423 20855 operator.cc:169] Engine CUDNN is not available for operator AveragePool.
Original python traceback for operator `4` in network `test_model` in exception above (most recent call last):
Traceback (most recent call last):
File "train_gpu_pred_cpu.py", line 147, in <module>
workspace.RunNet(test_model.net)
File "/usr/local/pytorch/lib64/python3.6/site-packages/caffe2/python/workspace.py", line 217, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/usr/local/pytorch/lib64/python3.6/site-packages/caffe2/python/workspace.py", line 178, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at blob.h:84] IsType<T>(). wrong type for the Blob instance. Blob contains caffe2::Tensor<caffe2::CPUContext> while caller expects caffe2::Tensor<caffe2::CUDAContext> .
Offending Blob name: data.
Error from operator:
input: "data" input: "conv1_w" input: "conv1_b" output: "conv1" name: "" type: "Conv" arg { name: "kernel" i: 5 } arg { name: "order" s: "NCHW" } arg { name: "stride" i: 1 } arg { name: "pad" i: 2 } arg { name: "exhaustive_search" i: 0 } device_option { device_type: 1 cuda_gpu_id: 0 } engine: "CUDNN"
```
I have tried:
```
INIT_NET = "/home/ice/caffe2_notebooks/tutorial_files/tutorial_cifar10/2018-07-18_16-11-41/cifar10_init_net.pb"      # GPU trained
PREDICT_NET = "/home/ice/caffe2_notebooks/tutorial_files/tutorial_cifar10/2018-07-18_16-11-41/cifar10_predict_net.pb"  # GPU trained

device_opts = core.DeviceOption(caffe2_pb2.CPU, 0)

arg_scope = {"order": "NCHW"}
test_model = model_helper.ModelHelper(name="test_model", arg_scope=arg_scope, init_params=False)

# Add the data input layer to the model, pointing at the TEST_LMDB
data, _ = AddInputLayer(test_model, 1, TEST_LMDB, 'lmdb')

# Populate the model helper obj with the init net stuff, which provides the
# weight initializations for the model
init_net_proto = caffe2_pb2.NetDef()
with open(INIT_NET, "rb") as f:
    init_net_proto.ParseFromString(f.read())
init_net_proto.device_option.CopyFrom(device_opts)
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU, 0)):
    test_model.param_init_net = test_model.param_init_net.AppendNet(core.Net(init_net_proto))

# Populate the model helper obj with the predict net stuff, which defines
# the structure of the model
predict_net_proto = caffe2_pb2.NetDef()
with open(PREDICT_NET, "rb") as f:
    predict_net_proto.ParseFromString(f.read())
predict_net_proto.device_option.CopyFrom(device_opts)
with core.DeviceScope(core.DeviceOption(caffe2_pb2.CPU, 0)):
    test_model.net = test_model.net.AppendNet(core.Net(predict_net_proto))

# Add an accuracy feature to the model for convenient reporting during testing
accuracy = brew.accuracy(test_model, ['softmax', 'label'], 'accuracy')

# Run the param init net to put the trained model info into the workspace
workspace.RunNetOnce(test_model.param_init_net)
workspace.CreateNet(test_model.net, overwrite=True)
```
But the error still exists. So, what's the right way to load a model trained in GPU mode and test it in CPU mode? Thanks. | caffe2 | low | Critical |
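For what it's worth, the error output above shows that each serialized operator still carries its own `device_option { device_type: 1 ... }` and `engine: "CUDNN"`; setting the net-level `device_option` does not override per-op settings, so a common workaround is to rewrite every op before appending the net. The sketch below uses small stand-in classes that mimic the protobuf `CopyFrom` API so it runs without Caffe2 installed; with Caffe2, `net_proto` would be the parsed `caffe2_pb2.NetDef` and `cpu` a `caffe2_pb2.DeviceOption` with `device_type = caffe2_pb2.CPU`.

```python
class DeviceOption:                # stand-in for caffe2_pb2.DeviceOption
    def __init__(self, device_type):
        self.device_type = device_type
    def CopyFrom(self, other):
        self.device_type = other.device_type

class Op:                          # stand-in for one operator in NetDef.op
    def __init__(self):
        self.device_option = DeviceOption(1)   # 1 == CUDA in caffe2
        self.engine = "CUDNN"

class NetDef:                      # stand-in for caffe2_pb2.NetDef
    def __init__(self, n_ops):
        self.device_option = DeviceOption(1)
        self.op = [Op() for _ in range(n_ops)]

def force_cpu(net_proto, cpu):
    """Rewrite the net-level AND every per-op device option."""
    net_proto.device_option.CopyFrom(cpu)
    for op in net_proto.op:
        op.device_option.CopyFrom(cpu)
        op.engine = ""             # drop the CUDNN engine hint
    return net_proto

net = force_cpu(NetDef(3), DeviceOption(0))    # 0 == CPU in caffe2
```

The key point is the per-op loop: copying the proto and changing only the top-level `device_option` (as in the reporter's code) leaves every op pinned to CUDA.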
342,345,091 | opencv | setting parameters of openCV tracking API in python | I'm trying to use the OpenCV tracking API in Python for object tracking. I tried the code at this [link](https://www.learnopencv.com/object-tracking-using-opencv-cpp-python/) and don't have any problem running it. But, looking at the OpenCV documentation [here](http://docs.opencv.org/trunk/d9/df8/group__tracking.html), I realized there are parameters for each of the tracking algorithms, and I could not find a way to set these parameters in Python. Note, this question was asked on [StackOverflow](https://stackoverflow.com/questions/45677651). | feature,category: python bindings,category: contrib | low | Minor |
342,346,909 | rust | suggestion in unresolved trait method calls misses crate `as` rename | Okay get ready, this is quite a combination of pieces to fit together to reproduce this.
If crate A defines a trait (with a method), and crate B re-exports A under a new name (via `pub extern crate A as NewName`), and crate C pulls in crate B and makes a call to the trait's method without importing the trait or without implementing it ...
... then the diagnostic produced by rustc comes *close* but just misses the mark, because it ignores B's rename of A in its suggestion.
Under the details below is a complete set of steps to reproduce; here I use four separate instances of the "A" in my example above, in order to enumerate the four combinations of {pub,}{,as _}:
<details>
```
% ls *.rs
case_four.rs case_three.rs reexporter.rs
case_one.rs case_two.rs test.rs
% cat case_one.rs
```
```rust
#![crate_type="lib"]
pub fn function_in_one() -> &'static str { "find me!" }
pub trait TraitInOne { fn find_me_1() { } }
```
```
% cat case_two.rs
```
```rust
#![crate_type="lib"]
pub fn function_in_two() -> &'static str { "find me!" }
pub trait TraitInTwo { fn find_me_2() { } }
```
```
% cat case_three.rs
```
```rust
#![crate_type="lib"]
pub fn function_in_three() -> &'static str { "find me!" }
pub trait TraitInThree { fn find_me_3() { } }
```
```
% cat case_four.rs
```
```rust
#![crate_type="lib"]
pub fn function_in_four() -> &'static str { "find me!" }
pub trait TraitInFour { fn find_me_4() { } }
```
```
% cat reexporter.rs
```
```rust
#![crate_type="lib"]
extern crate case_one;
extern crate case_two as two;
pub extern crate case_three;
pub extern crate case_four as four;
```
```
% cat test.rs
```
```rust
extern crate reexporter;
struct T;
fn main() {
    T.find_me_1();
    T.find_me_2();
    T.find_me_3();
    T.find_me_4();
}
```
```
% rustc --version
rustc 1.29.0-nightly (4f3c7a472 2018-07-17)
% rustc case_one.rs && rustc case_two.rs && rustc case_three.rs && rustc case_four.rs && rustc -L . reexporter.rs
% rustc -L . test.rs
error[E0599]: no method named `find_me_1` found for type `T` in the current scope
--> test.rs:6:7
|
3 | struct T;
| --------- method `find_me_1` not found for this
...
6 | T.find_me_1();
| ^^^^^^^^^
|
= help: items from traits can only be used if the trait is implemented and in scope
= note: the following trait defines an item `find_me_1`, perhaps you need to implement it:
candidate #1: `case_one::TraitInOne`
error[E0599]: no method named `find_me_2` found for type `T` in the current scope
--> test.rs:7:7
|
3 | struct T;
| --------- method `find_me_2` not found for this
...
7 | T.find_me_2();
| ^^^^^^^^^
|
= help: items from traits can only be used if the trait is implemented and in scope
= note: the following trait defines an item `find_me_2`, perhaps you need to implement it:
candidate #1: `case_two::TraitInTwo`
error[E0599]: no method named `find_me_3` found for type `T` in the current scope
--> test.rs:8:7
|
3 | struct T;
| --------- method `find_me_3` not found for this
...
8 | T.find_me_3();
| ^^^^^^^^^
|
= help: items from traits can only be used if the trait is implemented and in scope
= note: the following trait defines an item `find_me_3`, perhaps you need to implement it:
candidate #1: `reexporter::case_three::TraitInThree`
error[E0599]: no method named `find_me_4` found for type `T` in the current scope
--> test.rs:9:7
|
3 | struct T;
| --------- method `find_me_4` not found for this
...
9 | T.find_me_4();
| ^^^^^^^^^
|
= help: items from traits can only be used if the trait is implemented and in scope
= note: the following trait defines an item `find_me_4`, perhaps you need to implement it:
candidate #1: `reexporter::case_four::TraitInFour`
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0599`.
%
```
</details>
----
(Also of potential interest after reviewing the results from the cases enumerated in the details block: the diagnostic also suggests the paths `case_one::TraitInOne` and `case_two::TraitInTwo`, even though these crates are not linked by `test.rs` itself. This is probably better than saying nothing at all, but it could be nice to hint to the user that they'll also need to add the `extern crate` declarations in `test.rs` itself?)
| C-enhancement,A-diagnostics,T-compiler,D-confusing | low | Critical |
342,357,037 | go | testing: parallel test output reported in non-deterministic order | I have a test with parallel subtests. A few are failing. Every time I run the test, the failures come out in a different order, presumably depending on which ones finish first. This makes it hard to see at a glance what is different each time I make a tweak and re-run the tests.
Given that we're saving up the output to print at the end of the test anyway, we should also arrange to order the test results - at least all the parallel ones - in the order they started, not the order they finished.
/cc @mpvl | NeedsInvestigation | low | Critical |
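Since the runner already buffers each subtest's output, the proposed ordering amounts to sorting the buffered results by start order before printing. A sketch with hypothetical result records (Python purely for illustration of the reordering step):

```python
# Results as they might arrive today, in completion order.
finished = [
    {"name": "TestC", "start": 2, "output": "--- FAIL: TestC"},
    {"name": "TestA", "start": 0, "output": "--- PASS: TestA"},
    {"name": "TestB", "start": 1, "output": "--- FAIL: TestB"},
]
# Proposed: print in the order the subtests started, so repeated
# runs produce a stable, diffable report.
report = [r["output"] for r in sorted(finished, key=lambda r: r["start"])]
```

With start-order sorting, the same set of failures always appears in the same position between runs.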
342,381,333 | vscode | Allow disabling of breadcrumbs on a per language basis | I want to do this:
```json
"[markdown]": {
    "breadcrumbs.enabled": false
}
``` | feature-request,polish,breadcrumbs | low | Minor |
342,387,955 | rust | 1.27.1 many tests fail on s390x | See https://buildd.debian.org/status/fetch.php?pkg=rustc&arch=s390x&ver=1.27.1%2Bdfsg1-1%7Eexp1&stamp=1531882985&raw=0 at the bottom.
You can ignore the stdsimd/coresimd failures; they were fixed as part of https://github.com/rust-lang-nursery/stdsimd/pull/466. But I have no clue what the other failures are. | C-bug,O-SystemZ | low | Critical |
342,392,166 | TypeScript | Formatter should convert object literal to multiple lines / one line | **TypeScript Version:** 3.1.0-dev.20180717
**Code**
```ts
const o = { a: 0, b: 1, c: 2 };
```
**Expected behavior:**
A refactor exists to convert to:
```ts
const o = {
    a: 0,
    b: 1,
    c: 2,
};
```
and back.
**Actual behavior:**
No such refactor. I have to do a lot of manual editing of whitespace. | Suggestion,Help Wanted,Domain: Formatter | low | Minor |
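A naive sketch of the expansion half of the requested refactor, as pure string manipulation (Python for illustration; a real implementation would work on the TypeScript AST and handle nesting):

```python
def expand_object_literal(src: str) -> str:
    """Expand a single-line `{ a: 0, b: 1 }` literal onto multiple
    lines with trailing commas. Assumes one flat, unnested literal."""
    head, _, rest = src.partition("{")
    body, _, tail = rest.partition("}")
    fields = [f.strip() for f in body.split(",") if f.strip()]
    inner = "".join(f"    {f},\n" for f in fields)
    return head + "{\n" + inner + "}" + tail

out = expand_object_literal("const o = { a: 0, b: 1, c: 2 };")
```

The reverse direction (collapsing back to one line) is the same transform run the other way: join the fields with `", "` and drop the trailing comma.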
342,393,861 | rust | 1.27.1 powerpc64/powerpc64le ui/target-feature-{gate,wrong} failing | - Debian log https://buildd.debian.org/status/fetch.php?pkg=rustc&arch=ppc64el&ver=1.27.1%2Bdfsg1-1%7Eexp1&stamp=1531883902&raw=0
- Fedora log https://kojipkgs.fedoraproject.org//packages/rust/1.27.1/3.fc29/data/logs/ppc64le/build.log
Since the files already have `// ignore-arm` for now I will go ahead and add `// ignore-powerpc64{,le}` to the Debian patches.
However `target-feature-gate.rs` also has `gate-test-powerpc_target_feature` so I wonder if `ignore-powerpc64` is the best approach.
| O-PowerPC,C-bug | low | Minor |
342,429,850 | flutter | Flutter crashes in debug builds (but not release) on the Hisense C11 Chromebook | I purchased a cheap Hisense Chromebook to test out how Unreal Engine performs on it (and the results are quite surprising!)
Anyways, running a release mode app on the Chromebook runs fine, very smooth, great performance, etc.
Running a debug mode app is... actually quite entertaining. Seems to be a VM bug.
<details><summary>Fun error snippet</summary>
```
E/flutter (19652): [ERROR:topaz/lib/tonic/logging/dart_error.cc(16)] Unhandled exception:
E/flutter (19652): 'package:flutter/src/foundation/binding.dart': Failed assertion: line 79 pos 12: '!_debugInitialized': is not true.
E/flutter (19652): #0 _AssertionError._doThrowNew (dart:core/runtime/liberrors_patch.dart:37:39)
E/flutter (19652): #1 _AssertionError._throwNew (dart:core/runtime/liberrors_patch.dart:33:5)
E/flutter (19652): #2 BindingBase.initInstances (package:flutter/src/foundation/binding.dart)
E/flutter (19652): #3 _WidgetsFlutterBinding&BindingBase&GestureBinding.initInstances (package:flutter/src/gestures/binding.dart:26:11)
E/flutter (19652): #4 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding.initInstances (package:flutter/src/services/binding.dart:26:11)
E/flutter (19652): #5 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.initInstances (package:flutter/src/scheduler/binding.dart:196:11)
E/flutter (19652): #6 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding.initInstances (package:flutter/src/painting/binding.dart:22:11)
E/flutter (19652): #7 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding.initInstances (package:flutter/src/rendering/binding.dart:31:11)
E/flutter (19652): #8 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding&WidgetsBinding.initInstances (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #9 new BindingBase (package:flutter/src/foundation/binding.dart:53:5)
E/flutter (19652): #10 new _WidgetsFlutterBinding&BindingBase&GestureBinding (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #11 new _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #12 new _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #13 new _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #14 new _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #15 new _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding&WidgetsBinding (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #16 new WidgetsFlutterBinding (package:flutter/src/widgets/binding.dart)
E/flutter (19652): #17 WidgetsFlutterBinding.ensureInitialized (package:flutter/src/widgets/binding.dart:911:11)
E/flutter (19652): #18 runApp (package:flutter/src/widgets/binding.dart:703:25)
E/flutter (19652): #19 main (file:///C:/Users/ds841/Realism/naga/lib/main.dart:11:3)
E/flutter (19652): #20 _startIsolate.<anonymous closure> (dart:isolate/runtime/libisolate_patch.dart:279:19)
E/flutter (19652): #21 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:165:12)
I/flutter (19652): ══╡ EXCEPTION CAUGHT BY SCHEDULER LIBRARY ╞═════════════════════════════════════════════════════════
I/flutter (19652): The following NoSuchMethodError was thrown during a scheduler callback:
I/flutter (19652): The getter '_v4storage' was called on null.
I/flutter (19652): Receiver: null
I/flutter (19652): Tried calling: _v4storage
I/flutter (19652):
I/flutter (19652): When the exception was thrown, this was the stack:
I/flutter (19652): #0 Object.noSuchMethod (dart:core/runtime/libobject_patch.dart:46:5)
I/flutter (19652): #1 new Vector4 (file:///C:/Users/ds841/AppData/Roaming/Pub/Cache/hosted/pub.dartlang.org/vector_math-2.0.6/lib/src/vector_math_64/vector4.dart)
I/flutter (19652): #2 TransformLayer.find (package:flutter/src/rendering/layer.dart:787:32)
I/flutter (19652): #3 RenderView._updateSystemChrome (package:flutter/src/rendering/view.dart:215:58)
I/flutter (19652): #4 RenderView.compositeFrame (package:flutter/src/rendering/view.dart:198:9)
I/flutter (19652): #5 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding.drawFrame (package:flutter/src/rendering/binding.dart:273:16)
I/flutter (19652): #6 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding&WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:627:13)
I/flutter (19652): #7 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:208:5)
I/flutter (19652): #8 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:990:15)
I/flutter (19652): #9 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:930:9)
I/flutter (19652): #10 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:842:5)
I/flutter (19652): #11 _invoke (dart:ui/hooks.dart:125:13)
I/flutter (19652): #12 _drawFrame (dart:ui/hooks.dart:114:3)
I/flutter (19652): ════════════════════════════════════════════════════════════════════════════════════════════════════
```
</details>
Other ones are just crashes in JIT'd code (the stack traces are _very_ short), or aborts in unrelated pieces of code like this one:
[tombstone_00.txt](https://github.com/flutter/flutter/files/2206784/tombstone_00.txt)
This one seems to be a crash in the JIT:
[tombstone_04.txt](https://github.com/flutter/flutter/files/2206796/tombstone_04.txt)
ChromeOS isn't writing the tombstones for some of the crashes, which is kinda disappointing.
<details><summary>/proc/cpuinfo</summary>
```
processor : 0
model name : ARMv7 Processor rev 1 (v7l)
Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xc0d
CPU revision : 1
processor : 1
model name : ARMv7 Processor rev 1 (v7l)
Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xc0d
CPU revision : 1
processor : 2
model name : ARMv7 Processor rev 1 (v7l)
Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xc0d
CPU revision : 1
processor : 3
model name : ARMv7 Processor rev 1 (v7l)
Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xc0d
CPU revision : 1
Hardware : Rockchip (Device Tree)
Revision : 0000
Serial : 0000000000000000
```
</details>
Flutter Doctor:
```
[✓] Connected devices (1 available)
    • Rockchip RK3288 Chromebook • <omitted>:5555 • android-arm • Android 7.1.1 (API 25)
``` | c: crash,platform-android,framework,engine,dependency: dart,platform-chromebook,P2,team-android,triaged-android | low | Critical |
342,452,023 | go | x/mobile: support vendored asset package on Android | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version devel +2acae87416 Mon Jul 9 23:04:30 2018 +0000 windows/amd64
go/mobile version is also latest from master
### Does this issue reproduce with the latest release?
This is on latest master/built from source (to resolve other gomobile problem).
### What operating system and processor architecture are you using (`go env`)?
This may be the same as https://github.com/golang/go/issues/25255
Note my build environment is Windows; I have reproduced the problem using a Linux
build environment; the target environment is Android.
go env
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\db\AppData\Local\go-build
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=d:\dev\go
set GOPROXY=
set GORACE=
set GOROOT=c:\devtools\go1.11.devel
set GOTMPDIR=
set GOTOOLDIR=c:\devtools\go1.11.devel\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\db\AppData\Local\Temp\go-build567116882=/tmp/go-build -gno-record-gcc-switches
set VGOMODROOT=
### What did you do?
I have an android app that creates a service. The service is built linking to a library
created with `gomobile bind`
### What did you expect to see?
Everything works
### What did you see instead?
When the service starts, it immediately crashes with
I/GoLog (18112): asset: no current JVM
I/Zygote ( 195): Process 18112 exited cleanly (1)
The first thing the service does is attempt to load an asset (which is present in the .apk)
| OS-Android,mobile | medium | Critical |
342,454,377 | flutter | NetworkImageWithRetry should support HTTP headers for parity with NetworkImage | Just like the "usual" `NetworkImage` does. In my case I need to add an `Authorization` header and I think this is not unlikely. | c: new feature,framework,package,team-ecosystem,P2,triaged-ecosystem | low | Major |
342,460,207 | vue | Error thrown when using transition-group with conditionally rendered children | ### Version
2.5.16
### Reproduction link
[https://codepen.io/riophae/pen/EpgWdZ](https://codepen.io/riophae/pen/EpgWdZ)
### Steps to reproduce
1. Open the pen and it shows 1, 3, 5
2. Click the button
### What is expected?
It should show 3, 5
### What is actually happening?
Got this error in console:
```
[Vue warn]: Error in render: "TypeError: c$1.elm.getBoundingClientRect is not a function"
found in
---> <TransitionGroup>
<Root>
TypeError: c$1.elm.getBoundingClientRect is not a function
at Proxy.render (VM643 vue.js:8383)
at VueComponent.Vue._render (VM643 vue.js:4535)
at VueComponent.updateComponent (VM643 vue.js:2788)
at Watcher.get (VM643 vue.js:3140)
at Watcher.run (VM643 vue.js:3217)
at flushSchedulerQueue (VM643 vue.js:2981)
at Array.<anonymous> (VM643 vue.js:1839)
at MessagePort.flushCallbacks (VM643 vue.js:1760)
```
<!-- generated by vue-issues. DO NOT REMOVE --> | transition | medium | Critical |
342,476,162 | pytorch | [feature request] Rename `Subset` -> `Resample` to reflect wider use | Very minor but I believe `torch.utils.data.dataset.Subset` might better be named `Resample` (or something similar), since it's possible to use for oversampling via duplicated indices. This is a common use case for e.g. imbalanced data, and I had actually made my own before realizing that `Subset` was exactly the same indices-array method I implemented.
The switch would make it clear (a) that it can be used like this, and (b) that it *is* being used like this, so that it doesn't get broken in some future update.
Thanks.
cc @SsnL @VitalyFedyunin @ejguan | todo,module: dataloader,triaged | low | Critical |
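The oversampling use the issue describes works because `Subset` simply indexes through an indices array, and nothing stops those indices from repeating. A minimal stand-alone sketch of the same mechanics (plain Python, no torch dependency; the class name `Resample` is the issue's proposal, not an existing API):

```python
class Resample:
    """Same indexing mechanics as torch.utils.data.Subset:
    duplicated indices oversample the corresponding items."""
    def __init__(self, dataset, indices):
        self.dataset = dataset
        self.indices = indices
    def __len__(self):
        return len(self.indices)
    def __getitem__(self, i):
        return self.dataset[self.indices[i]]

data = ["neg", "neg", "neg", "pos"]              # imbalanced: 3 vs 1
balanced = Resample(data, [0, 1, 2, 3, 3, 3])    # index 3 repeated
labels = [balanced[i] for i in range(len(balanced))]
```

After resampling, both classes appear three times each, which is exactly the imbalanced-data use case the issue argues the name should reflect.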
342,476,830 | pytorch | Incorrect term in _LRScheduler. | So, _LRScheduler use 'epoch' (last_epoch etc.) as a term which describe step of learning rate. But epoch is an artifical term, more commonly we operate in terms of steps. And I think that rename of 'last_epoch' to 'last_step' will be more correct.
cc @vincentqb | todo,module: optimizer,triaged | low | Major |
342,477,549 | go | x/image/tiff: implement a generic tiff parser | Hi,
I am currently working on a project handling tiff based content (proprietary raw from Canon and Nikon, as well as Exif).
There is a library handling my needs regarding [Exif](https://github.com/rwcarlsen/goexif) but this one re-implements the [tiff parser](https://github.com/rwcarlsen/goexif/blob/go1/tiff/tiff.go) in order to access fields.
I am wondering whether it could be interesting to expose a TIFF parser in the `x/image/tiff` package that developers could use to handle content in a TIFF container.
Happy to help if you think this is an interesting idea.
| NeedsInvestigation,FeatureRequest | low | Minor |
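To make the idea concrete, here is a minimal sketch (in Python rather than Go, purely for illustration) of the generic layer such a parser would expose: read the byte-order header, follow the first IFD offset, and enumerate tag ids — the part every TIFF-based format (Exif, Canon/Nikon raw) shares:

```python
import struct

def parse_ifd_tags(buf):
    """Read the TIFF header, follow the first IFD offset, and
    return the tag ids it contains (value decoding left to callers)."""
    endian = "<" if buf[:2] == b"II" else ">"
    magic, ifd_off = struct.unpack(endian + "HI", buf[2:8])
    assert magic == 42, "not a TIFF container"
    (count,) = struct.unpack_from(endian + "H", buf, ifd_off)
    return [
        struct.unpack_from(endian + "H", buf, ifd_off + 2 + 12 * i)[0]
        for i in range(count)
    ]

# A tiny little-endian TIFF built in memory: 8-byte header, then one
# IFD holding a single 12-byte entry (tag 256 = ImageWidth, type 3 =
# SHORT, count 1, inline value 8), then a zero "next IFD" offset.
tiff = (
    b"II" + struct.pack("<HI", 42, 8)
    + struct.pack("<H", 1)
    + struct.pack("<HHI4s", 256, 3, 1, struct.pack("<HH", 8, 0))
    + struct.pack("<I", 0)
)
```

An exported Go equivalent in `x/image/tiff` would let Exif and proprietary-raw libraries walk IFDs without re-implementing this container logic.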
342,489,303 | pytorch | batch_sampler/test_worker_seed intermittently fails with address already in use on OS X | Sample log:
```
19:26:36 test_add_dataset (__main__.TestConcatDataset) ... ok
19:26:36 test_concat_raises_index_error (__main__.TestConcatDataset) ... ok
19:26:36 test_concat_two_non_singletons (__main__.TestConcatDataset) ... ok
19:26:36 test_concat_two_non_singletons_with_empty (__main__.TestConcatDataset) ... ok
19:26:36 test_concat_two_singletons (__main__.TestConcatDataset) ... ok
19:26:36 test_batch_sampler (__main__.TestDataLoader) ... libc++abi.dylib: terminating with uncaught exception of type std::__1::system_error: Address already in use
19:26:36 ERROR
19:26:36 test_default_colate_bad_numpy_types (__main__.TestDataLoader) ... ok
19:26:36 test_error (__main__.TestDataLoader) ... ok
19:26:36 test_error_workers (__main__.TestDataLoader) ... ok
19:26:36 test_growing_dataset (__main__.TestDataLoader) ... ok
19:26:36 test_invalid_assign_after_init (__main__.TestDataLoader) ... ok
19:26:36 test_len (__main__.TestDataLoader) ... ok
19:26:36 test_manager_unclean_exit (__main__.TestDataLoader)
19:26:36 there might be ConnectionResetError or leaked semaphore warning (due to dirty process exit), but they are all safe to ignore ... skipped 'CUDA unavailable'
19:26:36 test_multiple_dataloaders (__main__.TestDataLoader) ... ok
19:26:36 test_numpy (__main__.TestDataLoader) ... ok
19:26:36 test_numpy_scalars (__main__.TestDataLoader) ... ok
19:26:36 test_partial_workers (__main__.TestDataLoader)
19:26:36 check that workers exit even if the iterator is not exhausted ... skipped 'CUDA unavailable'
19:26:36 test_segfault (__main__.TestDataLoader) ... skipped 'temporarily disable until flaky failures are fixed'
19:26:36 test_seqential_batch_workers (__main__.TestDataLoader) ... ok
19:26:36 test_sequential (__main__.TestDataLoader) ... ok
19:26:36 test_sequential_batch (__main__.TestDataLoader) ... ok
19:26:36 test_sequential_pin_memory (__main__.TestDataLoader) ... skipped 'CUDA unavailable'
19:26:36 test_sequential_workers (__main__.TestDataLoader) ... ok
19:26:36 test_shuffle (__main__.TestDataLoader) ... ok
19:26:36 test_shuffle_batch (__main__.TestDataLoader) ... ok
19:26:37 test_shuffle_batch_workers (__main__.TestDataLoader) ... ok
19:26:37 test_shuffle_pin_memory (__main__.TestDataLoader) ... skipped 'CUDA unavailable'
19:26:37 test_shuffle_workers (__main__.TestDataLoader) ... ok
19:26:37 test_timeout (__main__.TestDataLoader) ... ok
19:26:38 test_worker_init_fn (__main__.TestDataLoader) ... ok
19:26:38 test_worker_seed (__main__.TestDataLoader) ... ok
19:26:38 test_lengths_must_equal_datset_size (__main__.TestDatasetRandomSplit) ... ok
19:26:38 test_splits_are_mutually_exclusive (__main__.TestDatasetRandomSplit) ... ok
19:26:38 test_splits_have_correct_size (__main__.TestDatasetRandomSplit) ... ok
19:26:38 test_pin_memory (__main__.TestDictDataLoader) ... skipped 'CUDA unavailable'
19:26:38 test_sequential_batch (__main__.TestDictDataLoader) ... ok
19:26:38 test_ind_worker_queue (__main__.TestIndividualWorkerQueue) ... ok
19:26:39 test_shuffle_pin_memory (__main__.TestStringDataLoader) ... skipped 'CUDA unavailable'
19:26:39 test_getitem (__main__.TestTensorDataset) ... ok
19:26:39 test_getitem_1d (__main__.TestTensorDataset) ... ok
19:26:39 test_len (__main__.TestTensorDataset) ... ok
19:26:39 test_many_tensors (__main__.TestTensorDataset) ... ok
19:26:39 test_single_tensor (__main__.TestTensorDataset) ... ok
19:26:39
19:26:39 ======================================================================
19:26:39 ERROR: test_batch_sampler (__main__.TestDataLoader)
19:26:39 ----------------------------------------------------------------------
19:26:39 Traceback (most recent call last):
19:26:39 File "test_dataloader.py", line 441, in test_batch_sampler
19:26:39 self._test_batch_sampler(num_workers=4)
19:26:39 File "test_dataloader.py", line 427, in _test_batch_sampler
19:26:39 for i, (input, _target) in enumerate(dl):
19:26:39 File "/var/lib/jenkins/pytorch-ci-env/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 336, in __next__
19:26:39 return self._process_next_batch(batch)
19:26:39 File "/var/lib/jenkins/pytorch-ci-env/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 357, in _process_next_batch
19:26:39 raise batch.exc_type(batch.exc_msg)
19:26:39 RuntimeError: Traceback (most recent call last):
19:26:39 File "/var/lib/jenkins/pytorch-ci-env/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 106, in _worker_loop
19:26:39 samples = collate_fn([dataset[i] for i in batch_indices])
19:26:39 File "/var/lib/jenkins/pytorch-ci-env/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 187, in default_collate
19:26:39 return [default_collate(samples) for samples in transposed]
19:26:39 File "/var/lib/jenkins/pytorch-ci-env/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 187, in <listcomp>
19:26:39 return [default_collate(samples) for samples in transposed]
19:26:39 File "/var/lib/jenkins/pytorch-ci-env/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 162, in default_collate
19:26:39 storage = batch[0].storage()._new_shared(numel)
19:26:39 File "/var/lib/jenkins/pytorch-ci-env/miniconda3/lib/python3.6/site-packages/torch/storage.py", line 118, in _new_shared
19:26:39 return cls._new_using_filename(size)
19:26:39 RuntimeError: std::exception at /private/var/lib/jenkins/workspace/pytorch-builds/pytorch-macos-10.13-py3-build/torch/lib/libshm/core.cpp:99
```
log: https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-macos-10.13-py3-test1/4002/console | todo,module: serialization,triaged,module: flaky-tests | low | Critical |
342,494,956 | vscode | Rectangle commands for large file editing | ### Problem
It is difficult to modify large files (more than 10k lines) efficiently, as we are limited in cursors, as shown in the following gif.

While I do understand the possible problem of having more than 10k active cursors at the same time, I still think it would be nice to be able to edit large files.
### Solution and request
I think emacs has an interesting solution for file editing without using multiple cursors: the [rectangle commands](https://www.gnu.org/software/emacs/manual/html_node/emacs/Rectangles.html).
Maybe using this "marking" solution could enable vscode to modify larger files.
Thanks.
| feature-request,editor-multicursor | low | Minor |
342,507,841 | flutter | Update flutter_test documentation to highlight gotchas of inherent fake async usage | Not sure if this is expected, but I just noticed that calling any async method from `dart:io`, for instance `File.exists()`, does not work when executed in tests. However, `File.existsSync()` still works.
To reproduce simply `flutter create` a new project and modify existing test to include something like this:
```dart
final dir = Directory.systemTemp;
final res = await dir.exists();
print(res);
```
The test will fail with timeout.
```
$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel master, v0.5.8-pre.62, on Mac OS X 10.13.5 17F77, locale en-BY)
[✓] Android toolchain - develop for Android devices (Android SDK 27.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 9.4.1)
[✓] Android Studio (version 3.1)
[✓] IntelliJ IDEA Community Edition (version 2018.1.6)
[!] VS Code (version 1.24.1)
[!] Connected devices
! No devices available
! Doctor found issues in 2 categories.
```
| team,tool,d: api docs,P3,team-tool,triaged-tool | low | Major |
342,512,437 | pytorch | [feature request] More options for Fractional Max Pooling | Two things from the Fractional Max Pooling paper (https://arxiv.org/pdf/1412.6071.pdf) are not currently implemented:
- what is called random increments in the paper (in contrast with what is called pseudorandom increments, which seems to be the only possible kind of increments in the current implementation);
- what is called disjoint regions in the paper (in contrast with overlapping regions, which seems to be the only possible kind of regions in the current implementation).
Thanks | triaged,module: pooling | low | Minor |
342,523,838 | TypeScript | Bring back typedef property expansion | _From @zavr-1 on July 18, 2018 20:25_
In the newer version, I can only see the type of an argument which is an object, compared to how it was expanded in the previous versions.

It was actually very useful to see all properties on hover which served as a documentation.

I think if you hide the details of the type, you should also provide means to expand its properties via a UI, or provide a setting for it.
_Copied from original issue: Microsoft/vscode#54609_ | Suggestion,In Discussion | high | Critical |
342,531,751 | go | cmd/go: get downgrade of excluded last version fails | Bryan points out that this fails because the go.mod is inconsistent:
go get rsc.io/sampler@<v1.99
-- go.mod --
module x
require rsc.io/[email protected]
exclude rsc.io/[email protected]
It's unclear how to make it succeed, and it's fine for most other commands like go build etc to fail, but running that go get is an obvious way to attempt to dig out of that hole and it doesn't work.
It would be nice to do something here, but it's not entirely clear what.
From https://go-review.googlesource.com/c/vgo/+/122396/6#302. | NeedsDecision,modules | low | Minor |
342,569,682 | rust | rustdoc discussion: collapsing sections by default | IIUC, Rustdoc collapses all trait impls and the type declaration (cc https://github.com/rust-lang/rust/pull/49412) by default, but leaves other sections expanded.
This seems good in many cases, but not in others. We might want to adjust exactly how things are collapsed.
Some thoughts:
* When collapsing the type decl, it would be nice to still see a minimal part of the signature, e.g., for [Iterator](https://doc.rust-lang.org/std/iter/trait.Iterator.html) there is good reason to not show all the methods, but it would be nice to show `pub trait Iterator { ... }` and maybe the `Item` assoc type. I realise all the information is there on the page, but it just looks incomplete without something there.
* It is often useful to have type decl as a 'contents' page for the docs, especially if the intro text is very long. OTOH, it might be nice to actually have a contents (e.g., just the names of fields/methods) so that impl methods are included.
* For trait impls, it would be nice to include the signatures of methods, but not the text descriptions (i.e., the trait impl is expanded, but each method is collapsed). The reasoning here is that one often wants to know the names or signatures of methods in an impl, but not the descriptions (where I agree with the reasoning that it is not the most useful).
* For traits, it would be good to further collapse the list of impls, see for example [IntoIterator](https://doc.rust-lang.org/std/iter/trait.IntoIterator.html#implementors), it might be nice to have just a list of the types (rather than the signatures) with a click through to see the sig, or to keep the current style but collapse the whole list by default. | T-rustdoc,C-discussion,A-rustdoc-ui | low | Minor |
342,573,967 | rust | SystemTime conversions invite programmer error | Yesterday, I found and reported a [panic bug in `serde_json`](https://github.com/serde-rs/json/issues/464) which traces its origins to the mechanism for extracting usable information from `SystemTime`.
Specifically, using the best method they believed to be available, it would panic when I tried to serialize metadata for files with negative `mtime` values. (I have one file with an mtime of `-1` and another with an mtime so far in the future that it overflows into a large negative number if the Rust code is compiled for the `i686-unknown-linux-musl` target.)
The current design of the API encourages users to assume that sources of `SystemTime` values use unsigned values internally (ie. that modifications to the system clock are the only source of `Err` with valid input), and, furthermore, in the bug I reported for `serde_json`, it led @lambda to the incorrect conclusion that there is currently no way to losslessly serialize and deserialize `SystemTime` values.
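For what it's worth, a lossless round-trip is in fact possible by treating the `Err` case as a negative offset from the epoch. A minimal sketch (the helper name `to_unix_nanos` is my own, not from std):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Signed nanoseconds relative to the Unix epoch; negative values are times
/// before the epoch, recovered via `SystemTimeError::duration()`.
fn to_unix_nanos(t: SystemTime) -> i128 {
    match t.duration_since(UNIX_EPOCH) {
        Ok(d) => d.as_secs() as i128 * 1_000_000_000 + d.subsec_nanos() as i128,
        Err(e) => {
            // The "error" just means `t` is earlier than the epoch; how far
            // earlier is available from the error itself.
            let d = e.duration();
            -(d.as_secs() as i128 * 1_000_000_000 + d.subsec_nanos() as i128)
        }
    }
}

fn main() {
    let before = UNIX_EPOCH - Duration::from_secs(1); // e.g. an mtime of -1
    println!("{}", to_unix_nanos(before)); // prints -1000000000
}
```

This is exactly the pattern the documentation should be steering people toward, rather than letting the `Err` arm become an `unwrap()`.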
If nothing else, the documentation should be improved in the following ways:
1. The documentation for both `UNIX_EPOCH` definitions should explicitly mention that the underlying OS may return valid `SystemTime` values prior to the epoch and code should be prepared for such an eventuality. (Perhaps with the concrete example of `ext3` filesystems allowing negative `mtime` values to be stored)
2. The documentation for `SystemTime::duration_since` should use "may return `SystemTimeError`" rather than "may fail" to avoid the connotation that an `Err` result is erroneous in any sense other than being an "until" rather than a "since".
That is, "failure" connotes a response that should prompt either retry or error-propagation, while it is not necessary to re-run the operation with the arguments reversed in order to extract the `Duration` and, depending on the input, nothing may be wrong at all. (ie. the `Result` discriminant is semantically equivalent to a sign bit in this case.)
3. "and the error contains how far from self the time is" is too easy to overlook, given how important it is for robust software to handle this eventuality. It should explicitly mention `SystemTimeError::duration` to make it more eye-catching.
Perhaps "Returns an Err if `earlier` is later than `self`. A `Duration` representing how far `self` is before `earlier` can be retrieved via the `duration()` function on the resulting `SystemTimeError`."
There are probably other ways to better tune the writing to guard against human error, but I don't want to be here all night trying to perfect this when you might think of the same improvements immediately, simply by virtue of having a different perspective. | C-enhancement,T-libs-api,A-docs | low | Critical |
342,599,613 | godot | GodotSharp(?): detect if Solution is a C# or C++ project first before attempting to build it | **Godot version:** 3.0.5 stable mono win64
**Issue description:** GodotSharp will blindly try to build an .sln, obviously failing when it's a C++ solution. Had to drag the .sln out of the folder in order to run the project.
Not an urgent issue at all, as I can just download the non-mono release of Godot and use that instead, but it's nice to have :P
Maybe parse the .sln and skip building if it has vcxproj(s) in it?
**Steps to reproduce:** Build a project with a Visual Studio C++ solution inside. | enhancement,topic:dotnet | low | Major |
342,605,294 | vscode | [folding] Add keyboard shortcut to jump between #region and #endregion | How difficult would it be to add a keyboard shortcut to jump between `#region` and `#endregion`, the same way you can jump between opening and closing brackets `[]`, braces `{}`, and parentheses `()`? I would prefer the same keyboard shortcut, so just adding this to `editor.action.jumpToBracket` would be perfect. However, I can imagine that some might prefer it to be a separate command, or it may be better to do that from an implementation standpoint.
I apologize if I'm missing an existing shortcut, but I couldn't find anything in the current keybindings. | feature-request,editor-folding | low | Major |
342,632,809 | vscode | [scss] Add "Go to or peek defintion" for imported mixins and variables for SCSS files | Has this been discussed before?
I couldn't find an issue, but I can't imagine other people not needing this. | feature-request,css-less-scss | low | Minor |
342,665,400 | go | testing: examples with trailing whitespace in multiple lines of output can be confusing | The example below fails. Even if I copy the output of `go test` into the output section, it still fails.
```
func ExampleTrailingSpace() {
fmt.Println("abc ")
fmt.Println("cde")
// Output:
//abc
//cde
}
```
However, if I remove the second print, it succeeds.
```
func ExampleTrailingSpace() {
fmt.Println("abc ")
// Output:
//abc
}
```
Why? Because `testing/example.go` trims leading and trailing spaces of the whole output, but not every line. Code formatting tools, including `gofmt`, trims the space of each comment line.
I suggest trimming each line before comparing the output. | NeedsInvestigation | low | Major |
342,683,952 | opencv | isContinuous() fails for large images in OpenCV |
##### System information (version)
- OpenCV => 3.4.2
- Operating System / Platform => Ubuntu 18.04
- Compiler => gcc 8
##### Detailed description
Hi, I have the following in OpenCV 3.4.2
    cv::Mat3b a(cv::Size(30511, 26060));
    cout << a.isContinuous() << endl;
    cv::Mat3b b = a.clone();
    cout << b.isContinuous() << endl;
and the result is 0 (false) in both cases, which I think is not correct. At least for `cv::Mat3b b` the docs say "The original step[] is not taken into account. So, the array copy is a continuous array occupying total()*elemSize() bytes.". | category: core,RFC | low | Critical |
342,749,282 | go | cmd/link: short tests are not short enough | Running all.bash (which does go test -short std cmd), my target is usually that individual tests should run in under 1 second.
The cmd/link test takes 3.4s on my system. Can we make it take less time?
cmd/link is implicitly tested by pretty much every other test in the system, so it's hard to believe it needs to do a ton in the -short test.
/cc @aclements | Testing,NeedsInvestigation,compiler/runtime | low | Major |
342,752,500 | go | build: all.bash takes too long | My target for all.bash is to keep it around 3-4 minutes. Usually when it gets to 5 minutes I spend some time trimming it back down. Maybe 3-4 minutes is no longer attainable, but right now we're at 7 minutes, which is much longer than I'd hope.
I filed a few bugs to try to help: #26469, #26472, #26470, #26471.
Speeding all.bash may also help speed the trybots, which also take longer than I'd hope. | Testing,help wanted,NeedsFix | medium | Critical |
342,755,606 | go | cmd/go: module update should drop unused indirect dependencies? | ### What version of Go are you using (`go version`)?
Latest go devel: `go version devel +d278f09333 Thu Jul 19 05:40:37 2018 +0000 darwin/amd64`
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/ikorolev/Library/Caches/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/ikorolev/.gvm/pkgsets/go1.11beta1/global:/Users/ikorolev/dev/go"
GOPROXY=""
GORACE=""
GOROOT="/Users/ikorolev/.gvm/gos/go1.11beta1"
GOTMPDIR=""
GOTOOLDIR="/Users/ikorolev/.gvm/gos/go1.11beta1/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/_b/d1934m9s587_8t_6ngv3hnc00000gp/T/go-build301560759=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
Sorry, can't give you a standalone reproduction - bug is produced only with downloading updated dependencies.
Consider the following flow:
A(v0.0.1) uses B(v0.0.1):
```
module github.com/mwf/vgo-modules/a
require github.com/mwf/vgo-modules/b v0.0.1
```
Then A(v0.1.0) drops the dependency.
Here we got a project `main` which uses A(v0.0.1) and we want to upgrade A to v0.1.0
```
cd `mktemp -d`
git clone https://github.com/mwf/vgo-modules .
echo "\n$(cat go.mod)"
echo "\n$(cat go.sum)"
go get github.com/mwf/vgo-modules/[email protected]
echo "\n$(cat go.mod)"
```
We get an unexpected indirect dependency `github.com/mwf/vgo-modules/b v0.0.1 // indirect` in `go.mod`, it seems to be taken from go.sum as a side-effect.
### What did you expect to see?
`go.mod` should contain updated `a` dependency
```
module github.com/mwf/vgo-modules
require github.com/mwf/vgo-modules/a v0.1.0
```
### What did you see instead?
We get an unexpected indirect dependency from `b` in `go.mod`, it seems to be taken from `go.sum` as a side-effect.
```
module github.com/mwf/vgo-modules
require (
github.com/mwf/vgo-modules/a v0.1.0
github.com/mwf/vgo-modules/b v0.0.1 // indirect
)
```
| NeedsInvestigation,modules | low | Critical |
342,779,719 | rust | Set up CI for local rebuilds using the actual rustc-src tarball | By that I mean the whole end-to-end process of `./x.py {build,test,install}` and then using this installed version to build itself again.
It would help to prevent things like #52541; this has happened quite a few times now.
342,783,512 | flutter | [image_picker plugin] FEATURE REQUEST: get most recent image/video | I'm working on an app that accesses the user's image/video gallery. I want the button that opens the image picker to be a thumbnail of the most recent image/video in the user's gallery. It would be great if I could get this from the image_picker package directly! | c: new feature,p: image_picker,package,c: proposal,team-ecosystem,P3,has partial patch,triaged-ecosystem | low | Minor |
342,803,524 | go | x/net/http2: support http2 proxy connections |
### What version of Go are you using (`go version`)?
```
go version go1.10.3 linux/amd64
```
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="..."
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="..."
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build612803013=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
I have an HTTP/2 proxy (I tested a custom one and https://github.com/caddyserver/forwardproxy) and when trying to connect through it from a client using HTTP/2 I get an error. This does not happen if I don't use HTTP/2.
Here's my code:
```go
package main
import (
"crypto/tls"
"fmt"
"io/ioutil"
"net/http"
"os"
"golang.org/x/net/http2"
)
func main() {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
Proxy: http.ProxyFromEnvironment,
}
if err := http2.ConfigureTransport(tr); err != nil {
fmt.Fprintf(os.Stderr, "error configuring transport: %v\n", err)
os.Exit(1)
}
client := &http.Client{Transport: tr}
rsp, err := client.Get("https://google.com")
if err != nil {
fmt.Fprintf(os.Stderr, "error getting: %v\n", err)
os.Exit(1)
}
defer rsp.Body.Close()
buf, err := ioutil.ReadAll(rsp.Body)
if err != nil {
fmt.Fprintf(os.Stderr, "error reading response: %v\n", err)
os.Exit(1)
}
fmt.Printf("Response: %s\n", string(buf))
}
```
If I comment out the `http2.ConfigureTransport()` call (so it will use HTTP/1.1) it works fine.
### What did you expect to see?
A successful connection.
### What did you see instead?
On the proxy server:
```
2018/07/19 18:30:25 http2: server connection from 127.0.0.1:54850 on 0xc420174b60
2018/07/19 18:30:25 http2: server: error reading preface from client 127.0.0.1:54850: bogus greeting "CONNECT google.com:443 H"
```
On the client:
```
2018/07/19 18:30:25 http2: Transport failed to get client conn for google.com:443: http2: no cached connection was available
error getting: Get https://google.com: malformed HTTP response "\x00\x00\x18\x04\x00\x00\x00\x00\x00\x00\x05\x00\x10\x00\x00\x00\x03\x00\x00\x00\xfa\x00\x06\x00\x10\x01@\x00\x04\x00\x10\x00\x00"
``` | FeatureRequest | medium | Critical |
342,808,803 | rust | Sub-optimal performance of Iterator::max_by_key in some cases | @pftbest noted that this test case:
```rust
#[bench]
fn bench_max_by_key(b: &mut Bencher) {
struct Data { k: i32, z: i8 }
let v: Vec<Data> = (0..10000).map(|i| Data { k: i, z: 48 }).collect();
b.iter(|| v.iter().max_by_key(|x| x.k));
}
```
is about 2–3 times slower than the same test run with this alternate implementation of `Iterator::max_by_key`:
```rust
fn max_by_key2<B: Ord, F>(mut self, mut f: F) -> Option<Self::Item>
where F: FnMut(&Self::Item) -> B {
let mut max_el = self.next()?;
let mut max_val = f(&max_el);
for el in self {
let val = f(&el);
if val >= max_val {
max_val = val;
max_el = el;
}
}
Some(max_el)
}
```
However, if the field `z` is removed from `struct Data`, the libstd test case runs as fast as the alternate one.
[Complete example on playground](https://play.rust-lang.org/?gist=72627da67b11c06cc57fbffb3631f028&version=nightly&mode=debug&edition=2018) | I-slow,T-compiler | low | Critical |
342,847,100 | pytorch | [JIT] Unify python -> SugaredValue construction + fix inlining of graphs with Tuple-typed inputs | 1. Remove stuff like this https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/script/init.cpp#L87 and move it into a unified python -> SugaredValue constructor (e.g. `create()`). This will require having flags that indicate that a value is e.g. constant or a submodule.
2. Fix disabled tests for annotated script functions and methods that take in Tuples. The simplest form is to get these graphs to be inlineable; the first step is to relax the constraint in LowerTuples.
cc @suo | oncall: jit | low | Minor |
342,893,666 | pytorch | WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode. CRITICAL:root:Cannot load caffe2.python. Error: libcaffe2.so: cannot open shared object file: No such file or directory | So I tried to install caffe2 from source because I want to build densepose. I installed cuda and cudnn as well.
I get the above error when I do `from caffe2.python import core`.
my cmake output:
**-- ******** Summary ********
-- General:
-- CMake version : 3.11.4
-- CMake command : /usr/local/bin/cmake
-- Git version : v0.1.11-9388-gf33cd36
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.0
-- BLAS : Eigen
--   CXX flags             : -fvisibility-inlines-hidden -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-aliasing -Wno-error=deprecated-declarations
-- Build type : Release
-- Compile definitions :
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- BUILD_CAFFE2 : ON
-- BUILD_ATEN : OFF
-- BUILD_BINARY : ON
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 2.7.12
-- Python executable : /usr/bin/python
-- Pythonlibs version : 2.7.12
-- Python library : /usr/lib/python2.7
-- Python includes : /usr/include/python2.7
-- Python site-packages: lib/python2.7/dist-packages
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : OFF
-- USE_ASAN : OFF
-- USE_ATEN : OFF
-- USE_CUDA : ON
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- CUDA version : 8.0
-- cuDNN version : 7.0.5
-- CUDA root directory : /usr/local/cuda-8.0
-- CUDA library : /usr/lib/x86_64-linux-gnu/libcuda.so
-- cudart library : /usr/local/cuda-8.0/lib64/libcudart_static.a;-pthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
-- cublas library : /usr/local/cuda-8.0/lib64/libcublas.so;/usr/local/cuda-8.0/lib64/libcublas_device.a
-- cufft library : /usr/local/cuda-8.0/lib64/libcufft.so
-- curand library : /usr/local/cuda-8.0/lib64/libcurand.so
-- cuDNN library : /usr/lib/x86_64-linux-gnu/libcudnn.so
-- nvrtc : /usr/local/cuda-8.0/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda-8.0/include
-- NVCC executable : /usr/local/cuda-8.0/bin/nvcc
-- CUDA host compiler : /usr/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- USE_LEVELDB : ON
-- LevelDB version : 1.18
-- Snappy version : 1.1.3
-- USE_LITE_PROTO : OFF
-- USE_LMDB : ON
-- LMDB version : 0.9.17
-- USE_METAL : OFF
-- USE_MKL :
-- USE_MOBILE_OPENGL : OFF
-- USE_MPI : ON
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : ON
-- OpenCV version : 2.4.9.1
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- Public Dependencies : Threads::Threads;gflags;glog::glog
--   Private Dependencies  : nnpack;cpuinfo;/usr/lib/x86_64-linux-gnu/liblmdb.so;/usr/lib/x86_64-linux-gnu/libleveldb.so;/usr/lib/x86_64-linux-gnu/libsnappy.so;/usr/lib/x86_64-linux-gnu/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;onnxifi_loader;gcc_s;gcc;dl
-- Configuring done
-- Generating done
-- Build files have been written to: /home/g008kaaraan/pytorch/build**
## System Info
I am doing this on google cloud ubuntu 16.04
### which python:
/usr/bin/python
### echo $PYTHONPATH:
blank
### echo $PATH:
/usr/local/cuda8.0/bin:/home/g008kaaraan/bin:/home/g008kaaraan/.local/bin:/home/g008kaaraan/anaconda2/bin:/usr/local/cuda8.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
### echo $LD_LIBRARY_PATH:
/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/lib64
I suspect this might be because my default python is in /usr/bin, which it should not be as per the FAQ from the caffe2 official guide. How do I deal with that?
## my bashrc:
export COCOAPI="/home/g008kaaraan/cocoapi"
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
If I need to modify my bashrc, please tell me how. | caffe2 | low | Critical |
342,901,863 | flutter | No word-breaks for CJK locales | Internal: b/112618495
Flutter cannot properly detect word breaks in Chinese and Japanese text because the ICU configuration that we ship with Flutter doesn't include the dictionary to provide that functionality.
Chinese example:
Type 你好吗 into a text field. Long-pressing on either of the first two characters should select both characters as they form one word.
Japanese example:
Type 日本語学校 into a text field (means Japanese language school). Long-pressing on either of the last two characters should select them both to select the word school (学校).
Other languages (e.g. English or German words) work fine and long-pressing a word selects the full word.
/cc @cbracken @xster @Hixie | a: text input,engine,a: internationalization,a: china,has reproducible steps,P3,team-engine,triaged-engine,found in release: 3.19,found in release: 3.22 | low | Major |
342,910,917 | vue-element-admin | adding Font-awesome | Hello,
Can I add Font Awesome to this excellent vue-element-admin template?
Thanks for your help,
Thanks in advance,
| in plan | low | Minor |
342,928,270 | go | cmd/go: build: add -static flag | This is a proposal to add `-static` flag to `go build`.
Producing a static build with go already requires a non-trivial amount of flags passed to `go build`, which on Linux currently amounts to something like:
`-ldflags '-extldflags "-fno-PIC -static"' -buildmode pie -tags 'osusergo netgo static_build'` [1]
...and this magic string keeps growing.
It would be awesome to encapsulate this sacred knowledge internally, exposing it via a new `-static` flag for `go build` and friends.
In addition to the above benefit of hiding the complexity, `-static` can also bring us more niceties, such as:
* automatic `static` build tag third-party software can rely upon. Currently using `static_build` tag seems like a de-facto standard, used a lot, but it has to be explicitly defined like in [1] above.
* providing `--static` flag to `pkg-config` invocations (those initiated by `// #cgo pkg-config: lib` lines in the source code). It will solve another issue (linking against a proper set of libraries for both static and dynamic link cases) for which a somewhat verbose workaround is currently required (for example, see [ploop_link_static.go](https://github.com/kolyshkin/goploop/blob/master/ploop_link_static.go) and [ploop_link_dynamic.go](https://github.com/kolyshkin/goploop/blob/master/ploop_link_dynamic.go)).
### See also
* https://github.com/golang/go/issues/12058 (original proposal for adding conditional `--static` to `pkg-config`)
* https://github.com/golang/go/issues/24787 (initial version of this very proposal) | help wanted,Proposal,Proposal-Accepted,GoCommand | high | Critical |
342,931,629 | TypeScript | fourslash test breaks when setting "@lib" | **TypeScript Version:** master
**Code**
```ts
/// <reference path="fourslash.ts" />
// @lib: es2015
////function f(x: number[]) { return x.length; }
verify.noErrors();
```
**Expected behavior:**
No error.
**Actual behavior:**
`Error: At line 1, col 0: Found an error: /tests/cases/fourslash/quickInfoBar.ts@35: Property 'length' does not exist on type '{}'.` | Infrastructure | low | Critical |
342,980,450 | rust | derive Debug incorrect assumption | Hello,
I'm guessing there "might" be something wrong with how the `Debug` trait is derived in the case where a type parameter appears in the definition but is not actually needed for formatting the object. The following code fails because `B` is not necessarily `Debug`, which is correct; however, since `Foo` doesn't actually have an instance of `B` as a field, that bound shouldn't be needed, and `B::Item` is already restricted to `Debug`:
```
use std::fmt::Debug;

#[derive(Debug)]
struct Foo<B: Bar>(B::Item);
trait Bar {
type Item: Debug;
}
fn foo<B: Bar>(f: Foo<B>) {
println!("{:?}", f);
}
struct ABC();
impl Bar for ABC {
type Item = String;
}
fn main() {
foo(Foo::<ABC>("a".into()));
}
```
```
error[E0277]: `B` doesn't implement `std::fmt::Debug`
--> src/main.rs:65:22
|
65 | println!("{:?}", f);
| ^ `B` cannot be formatted using `:?` because it doesn't implement `std::fmt::Debug`
|
= help: the trait `std::fmt::Debug` is not implemented for `B`
= help: consider adding a `where B: std::fmt::Debug` bound
= note: required because of the requirements on the impl of `std::fmt::Debug` for `middlewares::authentication::Foo<B>`
= note: required by `std::fmt::Debug::fmt
```
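For completeness, a manual implementation works today as a workaround, because the `Debug` bound can be placed on `B::Item` (which `Foo` actually stores) rather than on `B` itself. A sketch of what one might expect the derive to generate:

```rust
use std::fmt::{self, Debug};

trait Bar {
    type Item: Debug;
}

struct Foo<B: Bar>(B::Item);

// Bound on the stored type, not on the type parameter itself.
impl<B: Bar> Debug for Foo<B> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_tuple("Foo").field(&self.0).finish()
    }
}

struct ABC;
impl Bar for ABC {
    type Item = String;
}

fn main() {
    println!("{:?}", Foo::<ABC>("a".into())); // prints Foo("a")
}
```

Since `Bar` already requires `Item: Debug`, the manual impl here needs no extra `where` clause at all.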
| C-enhancement,A-diagnostics,T-compiler | low | Critical |
343,028,673 | go | cmd/compile: constant string -> []byte and []byte -> string conversions aren't constant folded | ### What did you see?
Conversions like
```
string([]byte{...})
[]byte("...")
```
are not constant folded.
### What did you expect?
Conversions of const literals should probably be constant folded because it significantly slows down
things like
```
for .... {
p.Write([]byte(":="))
}
```
This is related to https://github.com/golang/go/issues/26264, but the scope of that issue should probably be the difference between `Write, WriteByte, WriteString`, as some conversions not being constant folded is an issue on a different level that has little to do with the performance of these functions per se.
I can't quite tell whether `[]rune(...)` is also not constant folded, because I'm not sure what `type.[]int32` actually does, but for `[]byte(...)` and `string(...)` there are calls to `runtime.stringtoslicebyte(SB)` and `runtime.slicebytetostring(SB)`.
### What version of Go are you using (`go version`)?
1.10.3 | Performance,NeedsInvestigation,compiler/runtime | low | Major |
343,056,106 | rust | Rustdoc: unify impl blocks where possible. | From #32631:
> We'll have to be sure to update rustdoc, however, to provide one unified view to the API surface area rather than multiple impl blocks.
I have run into a similar problem when using codegen (specifically, in [web-sys]), where currently each function/value is inside its own impl block, e.g.
```rust
impl Event {
pub const CAPTURING_PHASE: u16 = 1;
}
impl Event {
pub const AT_TARGET: u16 = 2;
}
impl Event {
pub const BUBBLING_PHASE: u16 = 3;
}
impl Event {
pub const ALT_MASK: i32 = 1;
}
```
Presently all these appear as separate impl blocks in rustdoc. It would be easier to read if they were in a single block.
[web-sys]: https://github.com/rustwasm/wasm-bindgen/tree/master/crates/web-sys | T-rustdoc | low | Major |
343,061,184 | vue-element-admin | RTL Support | Please add rtl support to project.
Tanks a lot | PR Welcome | low | Minor |
343,106,912 | angular | browser back/forward does not trigger router change detection on upgraded app |
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
Browser back/forward does not trigger router change detection on an upgraded app.
## Expected behavior
<!-- Describe what the desired behavior would be. -->
Browser back/forward should trigger router change detection on an upgraded app!
## Minimal reproduction of the problem with instructions
https://stackblitz.com/edit/ngupgraderouter
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
:heart:
## Environment
<pre><code>
Angular version: 6.0.7
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [ ] Chrome (desktop) version XX
- [ ] Chrome (Android) version XX
- [x ] Chrome (iOS) version XX
- [x ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: XX <!-- run `node --version` -->
- Platform: <!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| type: bug/fix,freq1: low,area: router,area: upgrade,state: confirmed,P3 | low | Critical |
343,111,809 | pytorch | Request: Pycuda interoperability | For doing computations on tensors with `pycuda`, I can do this
for going from `Tensor` to `pycuda.gpuarray.GPUArray`:
```
from pycuda.gpuarray import GPUArray

def tensor_to_gpuarray(tensor):
    if not tensor.is_cuda:
        raise ValueError('Cannot convert CPU tensor to GPUArray (call `cuda()` on it)')
    else:
        return GPUArray(tensor.shape, dtype=torch_dtype_to_numpy(tensor.dtype),
                        gpudata=tensor.data_ptr())
```
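(As an aside, the `torch_dtype_to_numpy` helper above isn't shown; a minimal sketch of it, assuming it is just a dtype lookup table:)

```python
import numpy as np
import torch

# Hypothetical sketch of the torch_dtype_to_numpy helper used above:
# a plain lookup from torch dtypes to the matching numpy dtypes.
def torch_dtype_to_numpy(dtype):
    mapping = {
        torch.float16: np.float16,
        torch.float32: np.float32,
        torch.float64: np.float64,
        torch.uint8: np.uint8,
        torch.int32: np.int32,
        torch.int64: np.int64,
    }
    return mapping[dtype]
```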
however, I'm stumped on how to convert back to a Tensor without copying the data. How can this be done? | feature,triaged | low | Critical |
343,150,575 | go | x/tools/go/analysis/cmd/vet: add -cgoinclude check for #include of code in subdirectories | The go build model is that all code for a package is in one directory.
The cache relies on this.
Vendoring tools rely on this.
Probably many other things rely on this.
One of the few ways a package can violate this rule is by using
#include in cgo source code to access code in nearby directories.
This appears to work, in that the package builds,
but it doesn't cache properly, doesn't vendor properly, and so on.
It seems reasonable for cmd/vet to notice such #includes and
complain about them, and we could enable that check as one
of the automatic ones run during go test.
Suggested in #26366. | help wanted,NeedsFix,Analysis | low | Major |
343,159,204 | flutter | Feature Request: Support for endDrawer on Scaffold when AppBar actions exist | If the `actions` property exists on an `AppBar` object, then the `endDrawer` property on the parent `Scaffold` object is ignored. It would be nice to have support for both together, so that `endDrawer` menus could peacefully co-exist with the `AppBar`'s `actions`. | c: new feature,framework,f: material design,c: proposal,P2,team-design,triaged-design | low | Major
343,229,178 | flutter | Make flutter_driver and flutter_test APIs more similar | We have lots of minor unnecessary differences right now. We should implement as much of each API on the other, to make it easier to migrate tests back and forth. | a: tests,c: new feature,tool,c: API break,t: flutter driver,P3,team-tool,triaged-tool | low | Minor |
343,271,711 | kubernetes | GCE: ILB->ClusterIP orphans a few GCP resources | /kind bug
**How to reproduce it (as minimally and precisely as possible)**:
1. Create a service of type ILB.
2. Edit the service
a. Change the type to ClusterIP
b. Remove the port's nodeport
Observe that the service controller calls the delete code for the external load balancer instead of the internal load balancer.
/sig gcp
/assign @bowei
cc @freehan @MrHohn | kind/bug,area/provider/gcp,lifecycle/frozen,sig/cloud-provider,needs-triage | low | Critical |
343,289,524 | go | x/tools/cmd/stringer: handle untyped constants with typed initial values | ### What version of Go are you using (`go version`)?
1.10.2, 1.11beta2 (but "stringer" is the same version either way)
### Does this issue reproduce with the latest release?
probably ("stringer" is not really included)
### What operating system and processor architecture are you using (`go env`)?
linux/amd64
### What did you do?
Tried to run stringer on a program:
https://play.golang.org/p/y0bNVBnOyAi
### What did you expect to see?
I expected it to correctly discern that FooA and FooB were of type Foo, and FooAMask and FooBMask were of type FooMask. Looking at the code, stringer looks for the declared type, not for a type inferred from an explicitly-typed initializer expression.
### What did you see instead?
$ go generate
stringer: no values defined for type Foo
This is surprisingly hard to fix in the code stringer is run against. You can do `FooA Foo = iota; FooAMask FooMask = (1 << iota)`, but you can't continue on the following lines and have them get the same types. You can't declare two things of different types on a line using commas. And if you don't declare them on the same line, it's possible for FooA and FooAMask to get out of sync.
| NeedsInvestigation,FeatureRequest,Tools | low | Major |
343,294,880 | pytorch | The state of sparse Tensors | This note tries to summarize the current state of sparse tensors in PyTorch. It describes important invariants and properties of sparse tensors, and various things that need to be fixed (e.g. empty sparse tensors). It also shows some details of sparse operators.
## Semantics
### Construct a sparse tensor
```
# create a float sparse tensor in CPU
>>> indices = torch.LongTensor([[0, 0, 1], [0, 1, 1]])
>>> values = torch.FloatTensor([2, 3, 4])
>>> sizes = [2, 2]
>>> torch.sparse_coo_tensor(indices, values, sizes)
torch.sparse.FloatTensor of size (2,2) with indices:
tensor([[0, 0, 1],
[0, 1, 1]])
and values:
tensor([2., 3., 4.])
# to_dense
>>> torch.sparse_coo_tensor(indices, values, sizes).to_dense()
tensor([[2., 3.],
[0., 4.]])
# in CUDA
>>> torch.sparse_coo_tensor(indices, values, sizes, device=torch.device('cuda'))
torch.cuda.sparse.FloatTensor of size (2,2) with indices:
tensor([[0, 0, 1],
[0, 1, 1]], device='cuda:0')
and values:
tensor([2., 3., 4.], device='cuda:0')
```
### More approaches in creating a sparse tensor
```
torch.sparse.FloatTensor()
torch.sparse.DoubleTensor()
torch.zeros(..., layout=torch.sparse_coo, ...)
```
Should we unify/reduce different approaches?
## Sparse representation
Currently, our sparse tensors are hybrid tensors, with a mix of sparse dims and dense dims. We keep track of `nnz`, `sparseDims`, `denseDims`, a `indices tensor of size = (sparseDims, nnz)`, and a `values tensor of size (nnz, size[sparseDims:])`. Additionally, we have a flag to note if the tensor is `coalesced`.
### Should we keep this representation?
- Our current hybrid representation has some issues. It's difficult to support operators on hybrid tensors, *especially because only embedding uses hybrid*. Furthermore, some functions have ambiguous outputs on hybrid tensors (e.g. transposing a dense dim and a sparse dim is ambiguous). Because of this, it makes sense to treat embedding as a special case, and only support "true" sparse tensors with denseDims = 0.
- Many sparse libraries use CSR because it's really efficient for ops such as mm. Ideally, we'd like to make use of these libraries, but it's difficult: CSR can't represent more than 2D, but COO requires a lot of conversions. Potential solution: caching CSR representations?
- Caffe2 uses per-row counts to represent sparse tensors, and they have some level of sparse support.
### A couple caveats with our current system
- Things we should always keep in mind
```
Ideal INVARIANTS:
_sparseDims: range [0, len(shape)]; _sparseDims + _denseDims = len(shape)
_denseDims : range [0, len(shape)]; _sparseDims + _denseDims = len(shape)
_indices.shape: dimensionality: 2, shape: (_sparseDims, nnz)
_values.shape: dimensionality: 1 + _denseDims. shape: (nnz, shape[_sparseDims:])
Actual INVARIANT differences:
1) _sparseDims: range [1, len(shape)] (i.e. we don't allow 0 sparse dimensions)
2) when nnz = 0, there is strange behavior because we lack 0-dimensional sparse tensors. Namely:
dimensionality == 0, _sparseDims == 0, _denseDims == 0, _indices.shape == {0}, _values.shape == {0}
3) For both _indices.shape and _values.shape, the nnz dimension may be larger than nnz
4) For _values.shape, the non-nnz dimensions may be smaller than the corresponding dimension size, e.g.
a shape (2,3) sparse tensor with _sparseDims == 1, may have _values.shape: (nnz, <=2, <=3).
```
- We're keeping track of nnz, sparseDims, denseDims and coalesced independently of the indices and values tensors. For instance, we may need to write the following code to maintain the invariants:
```
_get_sparse_impl(r)->set_indices_and_values(r_indices, r_values);
_get_sparse_impl(r)->set_nnz(t._nnz());
_get_sparse_impl(r)->set_coalesced(t.is_coalesced());
```
- Without support for 0-dim tensors, we cannot have proper support for scalar sparse tensors. This makes it unclear how to implement ops like reduction ops, which currently return a dense scalar. Maybe there is nothing wrong with the current representation; we just need to be mindful that sparse ops can return a scalar.
- The nnz counter and the nnz dimension of the indices and values tensors may not be the same; some CUDA kernels rely on this.
- In values, the non-nnz dimensions might not actually match the dimensions of the sparse tensor
- We have dim == sparseDims + denseDims, but sparseDims cannot be 0. On one hand, this makes sense because a sparse tensor should have at least one sparse dim, but at the same time, it's in conflict with the idea of a scalar sparse tensor.
- An empty sparse tensor has indices of dim = 1, which means we have to check `nnz != 0` everywhere we use a TensorAccessor over indices.
### Some of these issues are fixed by @yf225 PR https://github.com/pytorch/pytorch/pull/9279
Current behavior of an empty sparse tensor:
```
>>> a = torch.sparse_coo_tensor([], [], [2, 3])
>>> i = a._indices()
>>> v = a._values()
>>> print(a)
torch.sparse.FloatTensor of size (2,3) with indices:
tensor([], dtype=torch.int64)
and values:
tensor([])
>>> print('i = %s' % i)
i = tensor([], dtype=torch.int64)
>>> print('v = %s' % v)
v = tensor([])
>>> print('a.dim = %d, i.dim = %d, v.dim = %d' % (a.dim(), i.dim(), v.dim()))
a.dim = 2, i.dim = 1, v.dim = 1
>>> print('a._dimI = %d, a._sparseDims = %d, a._dimV = %d, a._denseDims = %d, a._nnz = %d' %
(a._dimI(), a._sparseDims(), a._dimV(), a._denseDims(), a._nnz()))
a._dimI = 0, a._sparseDims = 2, a._dimV = 0, a._denseDims = 0, a._nnz = 0
```
When properly supported:
```
>>> import torch
>>> a=torch.sparse.DoubleTensor()
>>> a
torch.sparse.DoubleTensor of size (0,) with indices:
tensor([], size=(1, 0), dtype=torch.int64)
and values:
tensor([], dtype=torch.float64)
# empty sparse tensor to be a 1-dimensional tensor of size [0],
# with sparseDims == 1 and denseDims == 0
# invariants:
# _sparseDims + _denseDims = len(shape)
# _indices.dim = 2, shape = (_sparseDims, nnz)
# _values.dim = 1 + _denseDims, shape = (nnz, shape[_sparseDims:])
```
This fixes:
- An empty sparse tensor has indices of dim = 1, which means we have to check `nnz != 0` everywhere we use a TensorAccessor over indices.
### Autograd support
```
>>> a = torch.sparse_coo_tensor(indices, values, sizes, requires_grad=True)
>>> b = a * 2
>>> b.backward(torch.sparse_coo_tensor(indices, values, sizes))
>>> print(a)
torch.sparse.FloatTensor of size (2,2) with indices:
tensor([[0, 0, 1],
[0, 1, 1]])
and values:
tensor([4., 6., 8.])
```
A lot of operators on sparse tensors have densified gradients. e.g., `log1p` would make all 0 inputs have a gradient of 1, and it densifies the sparse tensor gradients. Currently we have to raise errors in the backward of these operators. Some potential ways to fix:
- Define special sparse operations so that they operate only on the nnz. This works well for functions that take a single input tensor. For example, we can define sparse log1p such that implicit 0s do not participate, and it would allow us to have a sparse gradient tensor. We can call this operator `s_log1p` or `nnz_log1p` (https://github.com/pytorch/pytorch/issues/1369 has similar discussions). However, we are not sure if they are really what users want, because these operators have different behaviors to their dense counterpart.
- Return a dense gradient. We can have backward functions able to handle both of dense and sparse gradients. No special treatment needed during autograd, just more functions to be written.
- Return the gradient as a sparse tensor. Even though we know a tensor might be completely dense, we can still choose to return its sparse form. This is awful for performance.
- No backward for densified gradients of sparse operators. Users would need to call `to_dense` and apply dense operators instead. This will not work without backward for `to_dense`, which itself has a densified gradient, and it brings us back to the 2nd proposal.
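The densification problem can be illustrated with a dense stand-in for the implicit zeros (a small sketch; `log1p(0) == 0`, but the derivative of `log1p` at 0 is 1):

```python
import torch

# Stand-in for the implicit zeros of a sparse tensor: log1p leaves the
# values themselves at zero, but every zero entry receives gradient 1,
# i.e. the gradient is dense.
x = torch.zeros(3, requires_grad=True)
y = torch.log1p(x).sum()
y.backward()
print(x.grad)  # tensor([1., 1., 1.])
```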
## Functions
### Pointwise one tensor math
|functions|dense grad|
|---|:---:|
|pow|N|
|log1p|Y|
|div_ / div(Sparse, Scalar)|Y|
|mul_ / mul(Sparse, Scalar)|Y|
All pointwise one-tensor functions call the dense counterpart ops on `_values`, so perhaps a macro can be written to cover them all.
### Pointwise two tensor math
|functions|formula|dense grad|
|---|---|:---:|
|add_ / add(Sparse, Sparse, Scalar) → Sparse|add(T, S, alpha) = T + alpha * S|Y|
|add_ / add(Dense, Sparse, Scalar) → Dense|add(T, S, alpha) = T + alpha * S|Y|
|sub_ / sub(Sparse, Sparse, Scalar) → Sparse|sub(T, S, alpha) = T - alpha * S|Y|
|mul_ / mul(Sparse, Sparse) → Sparse|mul(T, S) = T * S|Y|
All pointwise two-tensor functions have properly optimized CUDA kernels except for `mul_ / mul`:
- `add_ / add(Sparse, Sparse, Scalar) → Sparse` returns cat(Sparse, Scalar * Sparse)
- `add_ / add(Dense, Sparse, Scalar) → Dense` parallelizes over nnz
- `sub_ / sub(Sparse, Sparse, Scalar) → Sparse` calls `add`
`mul_ / mul(Sparse, Sparse) → Sparse` needs parallelized CUDA kernels; possible directions:
1. write a customized kernel parallelized over the nnz of the 1st sparse tensor
2. use cuSPARSE?
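For direction 1, the core of an elementwise sparse-sparse multiply on coalesced inputs is intersecting the two index sets. A CPU sketch with NumPy (hypothetical helper, a single sparse dim for simplicity):

```python
import numpy as np

def coo_mul_1d(idx_a, val_a, idx_b, val_b):
    # Result support = intersection of the two (sorted, coalesced) index
    # sets; values are the products of the matching entries.
    common, ia, ib = np.intersect1d(idx_a, idx_b, return_indices=True)
    return common, val_a[ia] * val_b[ib]
```

A parallel CUDA kernel would do the same matching per nnz of one operand, e.g. via binary search into the other operand's indices.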
### BLAS
|functions|formula|dense grad|
|---|---|:---:|
|addmm(Dense, Sparse, Dense, Scalar, Scalar) → Dense|addmm(T, S, D, beta, alpha) = beta * T + alpha * mm(S, D)|Y|
|sspaddmm(Sparse, Sparse, Dense, Scalar, Scalar) → Sparse|sspaddmm(T, S, D, beta, alpha) = beta * T + alpha * mm(S, D)|Y|
|mm(Sparse, Dense) → Dense|mm(S, D) = mm(S, D)|Y|
|smm(Sparse, Dense) → Sparse|smm(S, D) = mm(S, D)|Y|
|hspmm(Sparse, Dense) → HybridSparse|hspmm(S, D) = mm(S, D)|Y|
|spmm(Sparse, Dense) → Dense|spmm(S, D) = mm(S, D)|Y|
Functions with optimized CUDA kernels are `mm`, `addmm` and `hspmm`. `addmm` and `hspmm` use cuSPARSE (cusparseScsrmm2 and cusparseDcsrmm2) in their CUDA kernels, and `mm` calls `addmm`. However, `smm` and `sspaddmm` don't have CUDA support (were they removed?).
### Others
|functions|dense grad|
|---|:---:|
|clone|NA|
|norm|N|
|zero_|N|
|t_ / t|N|
## Optimizers
- optim.SGD (CUDA and CPU)
- optim.SparseAdam (CUDA and CPU) - lazy version of Adam algorithm
- optim.Adagrad (CPU)
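For context, sparse gradients typically come from `nn.Embedding(..., sparse=True)`; a minimal sketch of pairing that with `optim.SparseAdam`:

```python
import torch

# Sparse gradients from an Embedding, stepped with the lazy SparseAdam.
emb = torch.nn.Embedding(10, 4, sparse=True)
opt = torch.optim.SparseAdam(emb.parameters(), lr=0.1)

loss = emb(torch.tensor([1, 3])).sum()
loss.backward()
assert emb.weight.grad.is_sparse  # only the touched rows carry gradient
opt.step()
```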
## Future work
### TODO functions
|Functions|dense grad|
|---|:---:|
|nonzero| N|
|sum| Y|
|copy_| N|
|narrow| N|
|select_index| N|
|mul_ / mul(S, D)| Y|
|cuda| NA|
|F.linear| Y|
|softmax| N|
|cat| N|
|max| N|
|bmm(S, D)| Y|
- There is a list of pointwise functions for sparse that can be implemented by calling dense ops on their `_values`; some helpers or macros can be written to make all of these ops available for sparse.
```
abs, acos, asin, atan, ceil, cos, cosh, erf, exp, expm1, floor,
log, log10, log2, round, sin, sinh, sqrt, rsqrt, tan, trunc
```
This note was summarized by @li-roy and me. All suggestions/comments are more than welcome!
cc @soumith @ezyang @colesbury @gchanan @SsnL @zou3519 @yf225 | module: sparse,triaged | medium | Critical |
343,299,974 | youtube-dl | [Site support request] kocowa.com | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.07.10*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.07.10**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'https://www.kocowa.com/episode/tempted-episode-1/842290']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.07.10
[debug] Python version 2.7.10 (CPython) - Darwin-15.6.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.2, ffprobe 3.2, rtmpdump 2.4
[debug] Proxy map: {}
[generic] 842290: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 842290: Downloading webpage
[generic] 842290: Extracting information
ERROR: Unsupported URL: https://www.kocowa.com/episode/tempted-episode-1/842290
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2298, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2542, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2531, in _XML
parser.feed(text)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: mismatched tag: line 28, column 2
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 792, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 502, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 3258, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: https://www.kocowa.com/episode/tempted-episode-1/842290
```
---
- Single video: https://www.kocowa.com/episode/tempted-episode-1/842290
---
### Description of your *issue*, suggested solution and other information
Would love Kocowa support. Thanks! | site-support-request,geo-restricted | low | Critical |
343,305,062 | pytorch | [caffe2] running problem, No module named caffe2_pybind11_state_hip | I followed the Detectron installation instructions and successfully installed Caffe2 (screenshot lost),
but I have not tested the GPU support yet.
I then installed cocoapi and Detectron.
When I finished and wanted to test Detectron, this problem appeared: `No module named caffe2_pybind11_state_hip` (screenshot lost),
and, even more frustrating, a second error followed (screenshot also lost).
I want to know whether my own actions could have caused the Caffe2 error: I uninstalled python-pip while installing Detectron.
How can I solve this problem? Thanks! | caffe2 | low | Critical
| caffe2 | low | Critical |
343,312,053 | go | x/text: p.Sprint outputs key + fallback | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
1.11b2
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
linux/amd64
### What did you do?
```
T.Sprint(message.Key("login.form.label.password", "Password"))
```
### What did you expect to see?
Password
### What did you see instead?
{login.form.label.password Password}
`T.Sprintf(message.Key("login.form.label.password", "Password"))` returns the expected output; however, its use shouldn't be necessary in many such cases where there are no parameters for the string, and lint tools even show warnings in such cases.
| NeedsInvestigation | low | Major |
343,316,524 | go | dl: golang.org/dl installer shouldn't create a new subdirectory of $HOME | @andybons sent an email to golang-dev yesterday, subject 'Go 1.11 Beta 2 is Released'.
The instructions included
If you have Go installed already, the easiest way to try go1.11beta2
is by installing it using the go command:
$ go get https://golang.org/dl/go1.11beta2
$ go1.11beta2 download
These fail with Arch Linux's go 2:1.10.3-1 package.
$ go get https://golang.org/dl/go1.11beta2
package https:/golang.org/dl/go1.11beta2: "https://" not allowed in import path
$
Removing the `https://` worked. It then matches the comment in https://go.googlesource.com/dl/+/master/go1.11beta2/main.go
Please consider warning that 'go1.11beta2 download' will install to
$HOME/sdk/go1.11beta2 as some folks, me included, won't appreciate new
top-level $HOME directories. Altering $HOME for the run of 'go1.11beta2
download' avoided this on Unix, and it looks like similar is possible on other
platforms.
(Opened issue as requested by Andy in private email.)
| Builders,NeedsInvestigation | medium | Critical |
343,317,498 | opencv | Opencv 3.4.2 broke waitkey | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
- OpenCV => 3.4.2
- Operating System / Platform => Antergos/X86_64
- Compiler => G++ 8.1.1
##### Detailed description
Opencv 3.4.2 broke waitkey, producing SIGFPE randomly.
Opencv 3.4.1 does not.
P.S.
The PKGBUILD can be found here.
https://git.archlinux.org/svntogit/packages.git/tree/trunk?h=packages/opencv
##### Steps to reproduce
Build the following code with release mode will trigger the bug really quick on my PC.
Didn't try debug mode....
[bugtest2.zip](https://github.com/opencv/opencv/files/2216080/bugtest2.zip)
| bug,category: highgui-gui | low | Critical |
343,312,053 | godot | Comment node in visual scripting does not properly update size |
**Godot version:**
Godot 3.1 dev
**OS/device including version:**
Win 10 64 bit, AMD GPU
**Issue description:**
When changing the size of the comment node on the visual script canvas, the size in the inspector is not updated, and if you change the size on the canvas and then immediately change it in the inspector, the size snaps back to the value it had before the canvas manipulation.
You need to change the size of the comment node on the canvas, then deselect and reselect the node; the inspector size is then updated, and changing the size via the inspector works as intended.
**Steps to reproduce:**
Place comment node in the canvas, change it's size using node corner drag gizmo and then change the value in the inspector size.
| bug,confirmed,topic:visualscript | low | Minor |
343,323,343 | pytorch | Remove BatchNorm layers once the training is completed. | In PyTorch, is there any way to remove the batchnorm layers by merging their parameters into their preceding convolutional layers?
ps: If you are familiar with the model deployment function in MatConvNet, you should know what I mean :)
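For reference, the folding algebra itself is simple; a minimal NumPy sketch (assuming the usual inference-time BN with running statistics: scale = gamma / sqrt(var + eps), W' = W * scale, b' = beta + (b - mean) * scale):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm statistics into the preceding conv's weight and bias.

    W: (out_c, in_c, kH, kW); b, gamma, beta, mean, var: (out_c,)
    """
    scale = gamma / np.sqrt(var + eps)         # per-output-channel scale
    W_folded = W * scale[:, None, None, None]  # scale each output filter
    b_folded = beta + (b - mean) * scale       # absorb the shift into the bias
    return W_folded, b_folded
```

Copying the folded numbers into `conv.weight.data` / `conv.bias.data` then lets you drop the BatchNorm module entirely.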
cc @albanD @mruberry @jbschlosser | feature,module: nn,triaged,module: norms and normalization | low | Major |
343,345,941 | pytorch | [feature request] Support for 0-length sequences in packed_sequences | Hello,
Currently, calling `pack_padded_sequence` with some length of size zero raises the error:
```
ValueError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0
```
I believe it would be useful to support sequences of length zero for the following case. Working with variable-length sequences and implementing truncated backpropagation through time, I split my sequences into chunks along the time axis. In the last chunks, some of the shortest sequences happen to be completely padded (hence of length zero).
Of course there are workarounds but I believe supporting it would reduce the amount of code to write.
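One such workaround, sketched here under the assumption of a batch-first RNN: clamp the zero lengths to 1 before packing, then mask out the outputs of the sequences that were really empty:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

def run_rnn_allowing_zero_lengths(rnn, padded, lengths):
    # pack_padded_sequence rejects zero lengths, so clamp them to 1
    # (lengths are assumed sorted in decreasing order, as pack requires) ...
    clamped = lengths.clamp(min=1)
    packed = pack_padded_sequence(padded, clamped, batch_first=True)
    out, _ = rnn(packed)
    out, _ = pad_packed_sequence(out, batch_first=True,
                                 total_length=padded.size(1))
    # ... then zero out the outputs of the fully padded sequences.
    mask = (lengths > 0).to(out.dtype).view(-1, 1, 1)
    return out * mask
```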
Thank you
cc @albanD @mruberry @jbschlosser @zou3519 | module: nn,module: rnn,triaged | low | Critical |
343,379,472 | go | go/format: add benchmarks | I maintain a (fork of a) [tool called `go-bindata`](https://github.com/kevinburke/go-bindata) that compiles static files into a binary. Often the compiled file is quite large - tens of megabytes are not uncommon. I call `format.Source` on this generated file to avoid gofmt thrash. However, I noticed recently that calling `format.Source` takes 60% of the runtime of the go-bindata program.
Currently there are no benchmarks for `format.Source`. It might be good to add some to see if there are quick wins to get that function to return more quickly.
A starting point may be the benchmark here: https://github.com/kevinburke/go-bindata/blob/master/benchmark_test.go. Committing an extremely large file to source might not be the best way to do this, though. | Performance,NeedsInvestigation | low | Major |
343,412,400 | godot | InputEventMouseMotion is generated after each mouse button release | Godot 3.0.5
Windows 10 64 bits
Repro:
1) Create a new scene and add a single Node in it
2) Put this script on it:
```gdscript
extends Node

func _input(event):
    if event is InputEventMouseMotion:
        print(event, " motion ", event.relative)
    if event is InputEventMouseButton:
        print(event, " button ", event.pressed)
```
3) Run the scene, place your mouse above the game, and click several times without moving the mouse:
when you release a mouse button, it is always followed by a motion event, even if the mouse didn't move.
```
[InputEventMouseButton:982] button True
[InputEventMouseButton:985] button False
[InputEventMouseMotion:988] motion (0, 0)
[InputEventMouseButton:991] button True
[InputEventMouseButton:994] button False
[InputEventMouseMotion:997] motion (0, 0)
``` | bug,platform:windows,confirmed,topic:input | low | Major |
343,424,643 | go | cmd/compile: optimise away deferred calls to empty functions | ### What version of Go are you using (`go version`)?
### Does this issue reproduce with the latest release?
go version devel +48c7973 Fri Jul 20 20:08:15 2018 +0000 linux/amd64
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ainar/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/ainar/go"
GOPROXY=""
GORACE=""
GOROOT="/home/ainar/go/gotip"
GOTMPDIR=""
GOTOOLDIR="/home/ainar/go/gotip/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build664294769=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
https://play.golang.org/p/TbjFF3fe8Wu
$ go run -gcflags '-S' assertopt.go
### What did you expect to see?
Lines 12, 13, and 14 skipped by the compiler.
### What did you see instead?
Line 12 is skipped, but defers on 13 and 14 are still there. The Go compiler leaves those in, despite the fact that they are NOOPs.
This makes it hard to create assertions (in this case, postconditions) that don't hurt the performance of production builds using build tags. | Performance,help wanted,NeedsFix | medium | Critical |
343,466,051 | pytorch | [Build error] libcudnn.so: error adding symbols: File in wrong format | ## Issue description
When I build Caffe2 from source on my Jetson TX1, there are errors like this:
```
[ 93%] Linking CXX shared library ../lib/libcaffe2_gpu.so
/usr/local/cuda/lib64/libcudnn.so: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:2913: recipe for target 'lib/libcaffe2_gpu.so' failed
make[2]: *** [lib/libcaffe2_gpu.so] Error 1
CMakeFiles/Makefile2:1399: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
```
How can I solve the problem? Thank you!
## System Info
- CUDA :8.0
- cuDNN: 6.0
- Ubuntu: 16.04 | caffe2 | low | Critical |
343,482,514 | pytorch | [request] speed-up multidim slicing backward | The current way of applying slicing (indexing) isn't very efficient ([here](https://github.com/pytorch/pytorch/blob/7160846c81d9ebd298a4c1608d67b8f4bd76fee7/torch/csrc/autograd/python_variable_indexing.cpp#L151-L199)). Each slice along each dim is applied as a separate autograd op, making the backward extremely inefficient:
```py
>>> x = torch.randn(3,3,requires_grad=True)
>>> y = x[0, 1]
>>> y.grad_fn
<SelectBackward object at 0x7fbca03e0668>
>>> y.grad_fn.next_functions # two <SelectBackward>s
((<SelectBackward object at 0x7fbca03e0550>, 0),)
>>> y.grad_fn.next_functions[0][0].next_functions
((<AccumulateGrad object at 0x7fbca03e0668>, 0),)
```
The backward of this would be:
```
gy = [1]
=> g(x[0]) = [0, 1, 0]
=> g(x) = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
```
, incurring extra copying and malloc'ing. This will be especially slow with large tensors.
Potential solutions:
1. Use `as_strided` in `applySlicing`.
pros: fits the current structure easily.
cons: the backward isn't well optimized for slicing (but it's not terrible either if input is not expanded...). I'll think about ways to speed this up when updating https://github.com/pytorch/pytorch/pull/8965 .
2. Move multidim slicing to ATen.
pros: benefit cpp API
cons: need ways to represent `3..5` and `...`.
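To make option 1 concrete, here is a rough sketch of what a single `as_strided` view looks like for the `x[0, 1]` example (illustrative only, not the proposed implementation):
```python
import torch

x = torch.randn(3, 3, requires_grad=True)

# Today: x[0, 1] chains one SelectBackward node per indexed dimension.
y_chained = x[0, 1]

# Sketch of option 1: one as_strided view reaching the same element.
offset = x.storage_offset() + 0 * x.stride(0) + 1 * x.stride(1)
y_strided = x.as_strided((), (), offset)

y_strided.backward()
print(x.grad)  # 1 at [0, 1], 0 elsewhere, via a single backward node
```
With one view op there is a single backward node instead of a chain, so no intermediate gradient buffer is allocated per indexed dim.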
cc @ezyang @SsnL @albanD @colesbury | todo,module: performance,module: autograd,triaged | low | Major |
343,487,042 | pytorch | A newer protobuf should be used, so that cmake install will work when building for mobile | ## Issue description
As Yangqing noted in https://github.com/pytorch/pytorch/blob/master/cmake/ProtoBuf.cmake#L77, we should eliminate `EXCLUDE_FROM_ALL` after https://github.com/google/protobuf/pull/3878 merges. I have a corresponding PR (https://github.com/pytorch/pytorch/pull/7085), but updating protobuf breaks the iOS build. I think we need to find a proper protobuf version and enable cmake install for Android and iOS.
| caffe2 | low | Minor |
343,562,544 | node | Processes created using spawn with a pipe have no /dev/stdin | <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: v8.9.4
* **Platform**: Linux 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64 GNU/Linux
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
The code below fails with `cat: /dev/stdin: No such device or address`.
```js
const {spawn} = require('child_process');
const proc = spawn('cat', ['/dev/stdin'], {stdio: ['pipe', process.stdout, process.stderr]});
```
Apparently Node spawns subprocesses in a way that leaves no openable `/dev/stdin`.
I think this is a Node.js bug: on UNIX, a subprocess should be able to open `/dev/stdin` if there is a stdin for it. The following Go program, for example, works perfectly fine:
```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("cat", "/dev/stdin")
	cmd.Stderr = os.Stderr
	cmd.Stdout = os.Stdout
	stdin, _ := cmd.StdinPipe()
	go func() {
		stdin.Write([]byte("hello cat!\n"))
		stdin.Close()
	}()
	cmd.Run()
}
``` | child_process | low | Critical |
343,629,457 | pytorch | [caffe2]mpirun multi-node multi GPU in Distributed mode ,run resnet50_trainer.py get RuntimeError | run with MPI
MPI is for coordinating machine rendezvous through MPI
So is it possible to use just MPI as the rendezvous, without Redis or NFS?
I do not use Redis or NFS,
since my Caffe2 is packaged inside a Singularity container.
I run the following command:
mpirun -np 4 singularity exec /public/DL_Data/cnic_ai_20180316.img python resnet50_trainer.py --train_data ~/ILSVRC/ilsvrc12_train_lmdb --num_gpus 8 --batch_size 256 --num_epochs 90 --num_shards 4 --shard_id 1 --run_id 50002
I got a runtime error:
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[43321,1],1]
Exit code: 1
-----------------------------------------------
If run with a single node and 8 GPUs, it succeeds!
-----------------------------------------------
I have 4 nodes, with 8 GPUs per node.
So how do I run distributed Caffe2 with MPI? Is Redis or NFS necessary? My machines do not have Redis or NFS installed.
| caffe2 | low | Critical |
343,629,665 | pytorch | [caffe2]mpirun multi-node multi GPU in Distributed mode ,run resnet50_trainer.py get RuntimeError | run with MPI
MPI is for coordinating machine rendezvous through MPI
So is it possible to use just MPI as the rendezvous, without Redis or NFS?
I do not use Redis or NFS,
since my Caffe2 is packaged inside a Singularity container.
I run the following command:
mpirun -np 4 singularity exec /public/DL_Data/cnic_ai_20180316.img python resnet50_trainer.py --train_data ~/ILSVRC/ilsvrc12_train_lmdb --num_gpus 8 --batch_size 256 --num_epochs 90 --num_shards 4 --shard_id 1 --run_id 50002
I got a runtime error:
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
Traceback (most recent call last):
File "resnet50_trainer.py", line 602, in <module>
main()
File "resnet50_trainer.py", line 598, in main
Train(args)
File "resnet50_trainer.py", line 436, in Train
data_parallel_model.OptimizeGradientMemory(train_model, {}, set(), False)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/data_parallel_model.py", line 1688, in OptimizeGradientMemory
input_shapes_all_devices,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 257, in InferShapesAndTypes
net_protos, blob_dimensions
RuntimeError: [enforce fail at operator.cc:492] . Invalid shape inference for operator Broadcast Expected 8 outputs, but got 1
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[43321,1],1]
Exit code: 1
-----------------------------------------------
If run with a single node and 8 GPUs, it succeeds!
-----------------------------------------------
I have 4 nodes, with 8 GPUs per node.
So how do I run distributed Caffe2 with MPI? Is Redis or NFS necessary? My machines do not have Redis or NFS installed.
| caffe2 | low | Critical |
343,631,296 | opencv | solvePnP behaves differently on ARM as compared to x86_64 | ##### System information (version)
- OpenCV => 3.4.2
- Operating System / Platform => Ubuntu Xenial 16.04 64bit
- Compiler => g++5.4.0
##### Detailed description
When running the solvePnP function with identical inputs, it returns one set of values on the x86_64 platform, but running the same code on ARM produces completely different results.
##### Steps to reproduce
1) Compile a code which will call solvePnP function with arbitrary parameters (CV_EPNP algorithm)
2) Run the code and see the results, x, y, z.
3) Do the same on the ARM arch with the same version of openCV, OS and compiler.
4) Results are given below:
| bug,category: calib3d | low | Major |
343,715,175 | TypeScript | Services in unopened projects | Some language services, such as `findAllReferences`, `rename`, `navigateTo`, and `getEditsForFileRename`, as well as refactors like `moveToNewFile` (which wants to update imports from external references), work best when they know about as many files as possible.
Currently we only keep projects open if at least one file is being viewed in the editor.
When we invoke a service that might need to know about unopened projects, we might consider walking up ancestor directories to look for a base `tsconfig.json` that references many projects, and opening those. | Suggestion,In Discussion,Effort: Difficult | low | Minor |
343,737,011 | material-ui | [TextField] Long labels break layout | <!--- Provide a general summary of the issue in the Title above -->
The textfield layout is not responsive when there are long labels.
<!-- Checked checkbox should look like this: [x] -->
- [x] This is a v1.3.1 issue.
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Expected Behavior
According to Material Design, long labels should be correctly positioned within the available input space:

## Current Behavior
* Labels that wrap to two or more lines overlap the underline.
## Steps to Reproduce
https://codesandbox.io/s/6xjr79vx3w
```jsx
import React from 'react';
import TextField from '@material-ui/core/TextField';
class TextFields extends React.Component {
render() {
return (
<form noValidate autoComplete="off">
<TextField
id="name"
label="This is a long label because things"
/>
</form>
);
}
}
export default TextFields;
```
## Context
* All content is usually content managed, which means that content editors could enter a label that breaks the layout.
* If they enter a label that is longer than the available space, for example _"Do you have any association with a government entity?"_, and the user loads the page on an iPhone 5, there is not enough space to render the label on a single line.
## Your Environment
<!--- Include as many relevant details about the environment with which you experienced the bug. -->
| Tech | Version |
|--------------|---------|
| Material-UI | v1.3.1 |
| React | v16.4.1 |
| browser | Chrome |
| docs,component: text field | medium | Critical |
343,760,962 | flutter | Need primaryLightTextTheme and primaryDarkTextTheme in ThemeData | I need a reliable way of placing text on primary dark and primary light backgrounds
Primary and accent colors can do this since I can access `primaryTextTheme` and `accentTextTheme` through the `ThemeData`.
Currently I mimic the code from `ThemeData` `primaryTextTheme` for `primaryColorLight` and `primaryColorDark` and use the `estimateBrightnessForColor`:
```dart
final Typography typography = Typography(platform: themeData.platform);
final brightness =
ThemeData.estimateBrightnessForColor(themeData.primaryColorLight);
final TextTheme primaryLightTextTheme =
brightness == Brightness.dark ? typography.white : typography.black;
```
What `ThemeData` needs is
```dart
primaryColorLightBrightness ??= estimateBrightnessForColor(primaryColorLight);
final bool primaryLightIsDark = primaryColorLightBrightness == Brightness.dark;
final TextTheme defaultPrimaryLightTextTheme = primaryLightIsDark ? typography.white : typography.black;
primaryLightTextTheme = defaultPrimaryLightTextTheme.merge(primaryLightTextTheme);
```
And similar for `primaryDarkTextTheme`
Thanks for all the awesomeness in Flutter!
| c: new feature,framework,f: material design,P2,team-design,triaged-design | low | Minor |
343,821,743 | go | cmd/vet: make printf checking more precise when arguments are changed | For #26486 https://golang.org/cl/125039 changed the printf checking to only warn about functions that do not modify the arguments before passing them to `fmt.Printf` (or whatever). The example in that issue, drawn from real code, is:
```Go
func dbg(s string, va ...interface{}) {
if s == "" {
s = strings.Repeat("%v ", len(va))
}
_, fn, fl, _ := runtime.Caller(1)
fmt.Printf("dbg %s:%d: ", path.Base(fn), fl)
fmt.Printf(s, va...)
fmt.Println()
}
```
This can be called as `dbg("", err)`.
The fix in CL 125039 means that we no longer issue printf warnings for functions like this:
```Go
func Prefix(s string, args ...interface{}) {
s = "error: " + s
fmt.Printf(s, args...)
}
```
We should figure out a way to continue issuing warnings for `Prefix` without issuing them for `dbg`. | help wanted,NeedsInvestigation,Analysis | low | Critical |
343,821,891 | TypeScript | Allow type variables to be constrained singleton, causing lookup into a non-generic mapped type to substitute |
I feel awkward submitting this suggestion since I don't know if it will get enough votes to go anywhere, but I guess _someone_ has to be the initial submitter for each suggestion...
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
mapped type indexed access type lookup type substitute substitution generic
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
Currently, a lookup into a mapped type, for example `{ [P in K]: Box<T[P]> }[X]`, is simplified by substitution (in this example, to produce `Box<T[X]>`) only if the constraint type `K` is generic; this is unsound but I guess it was useful in some cases. I'd like to be able to constrain a type parameter `X` to be a singleton type, causing substitution to occur (which is sound) regardless of whether `K` is generic.
## Use Case
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
Suppose we have a codebase with an enum `E` and many functions that simulate dependent types by taking a type parameter `A extends E` (where `A` is intended to be a singleton type) along with a value of type `A`. Given a generic type `T<A extends E>`, we may want an object that contains a `T<A>` for each `A` in `E`, i.e., `{[A in E]: T<A>}`. Then we'd like to pass this object to a function along with a particular choice of `A` and have it manipulate the corresponding property. We should get a type error if the function uses the wrong property. Currently, a lookup type expression like `{[A in E]: T<A>}[A1]` does not substitute (because the constraint type `E` is not generic), so all reads and writes to the property are checked using the constraint of the lookup type, which is `{[A in E]: T<A>}[E]`, and in effect we get no distinction among the properties of the object.
Specifically, I'm writing a structured spreadsheet tool that [manipulates row and column IDs](https://bitbucket.org/espalier-spreadsheet/espalier/src/5a582831912434caaf703f6dcc5b9e18d7dfa2c7/src/layout/core.ts?at=master&fileviewer=file-view-default). A rectangle ID is a pair of a row ID and a column ID. I wanted to brand the row and column IDs differently to ensure I don't mix them up. I have many functions that are parameterized over an axis: for example, `getRectParentOnAxis` takes a rectangle and can either find the rectangle that covers the same column and a larger row, or the same row and a larger column.
One current approach, which I've taken and I call the "generic index" hack, is to add an artificial type variable to every relevant type and function so that I can ensure the constraint type of the mapped type is always generic and the mapped type will always substitute. (See "Workaround" below.) This is ugly, but I wanted the checking badly enough to do it.
## Examples
<!-- Show how this would be used and what the behavior would be -->
```ts
enum Axis {
ROW = "row",
COL = "col",
}
const AXIS_BRAND = Symbol();
type SpanId<A extends Axis> = string & {[AXIS_BRAND]: A};
type Rectangle = {[A in Axis]: SpanId<A>};
function getRectangleSide<A in Axis>(rect: Rectangle, a: A): SpanId<A> {
// Error with `A extends axis`: `Rectangle[A]` doesn't simplify and isn't assignable to `SpanId<A>`
// Allowed with `A in Axis`: `Rectangle[A]` simplifies to `SpanId<A>`
return rect[a];
}
function getRectangleSide2<A in Axis>(rect: Rectangle, a: A): Rectangle[A] {
if (Math.random() > 0.5) {
return rect[a];
} else {
// Allowed with `A extends axis`: `SpanId<Axis.ROW>` is unsoundly assignable to `Rectangle[A]`
// because it is assignable to the constraint `SpanId<Axis.ROW> | SpanId<Axis.COL>`
// Error with `A in Axis`: `SpanId<Axis.ROW>` is not assignable to `SpanId<A>`
return rect[Axis.ROW];
}
}
```
## Workaround
```ts
const FAKE_INDEX = "fake-index";
type GenericIndex<_, K> = K | (_ & typeof FAKE_INDEX);
type LooseIndex<K> = K | typeof FAKE_INDEX;
enum Axis {
ROW = "row",
COL = "col",
}
type AxisG<_> = GenericIndex<_, Axis>;
type AxisL = LooseIndex<Axis>;
const AXIS_BRAND = Symbol();
type SpanId<A extends AxisL> = string & {[AXIS_BRAND]: A};
type Rectangle<_> = {[A in AxisG<_>]: SpanId<A>};
function getRectangleSide<_, A extends Axis>(rect: Rectangle<_>, a: A): SpanId<A> {
return rect[a]; // allowed
}
function getRectangleSide2<_, A extends Axis>(rect: Rectangle<_>, a: A): Rectangle<_>[A] {
if (Math.random() > 0.5) {
return rect[a];
} else {
return rect[Axis.ROW]; // error
}
}
```
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion | low | Critical |
343,858,114 | flutter | [text input ]Detect input via the on-screen keyboard | Per the advice, I [asked this on StackOverflow](https://stackoverflow.com/questions/50341089/detecting-software-keyboard-input-using-flutter) some time ago now. I'm yet to receive any reply, so I'm starting to think this is a genuine issue.
In flutter, is it possible to detect keyboard input received via the on-screen keyboard? | a: text input,c: new feature,framework,c: proposal,P2,team-framework,triaged-framework | low | Major |
343,921,060 | go | cmd/go: test cached run slower than real test run | ### What version of Go are you using (`go version`)?
1.10.3
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
linux; amd64
### What did you do?
```
go test --count 1 .; time go test .
```
foo_test.go:
```go
// package and imports snipped
func TestCache(t *testing.T) {
	tmp := os.TempDir()
	for i := 0; i < 10*1000*1000; i++ {
		os.Stat(filepath.Join(tmp, fmt.Sprintf("%d", i)))
	}
}
```
### What did you expect to see?
```
$ go test --count 1 .; time go test .
ok github.com/dbentley/testcache 11.839s
ok github.com/dbentley/testcache (cached)
real 2s
```
### What did you see instead?
```
$ go test --count 1 .; time go test .
ok github.com/dbentley/testcache 11.839s
ok github.com/dbentley/testcache (cached)
real 1m2.278s
```
### Further investigation
Using GODEBUG=gocachetest=1 or GODEBUG=gocachehash=1 doesn't cause output for 30+s.
I think the function inDir (currently at https://github.com/golang/go/blob/master/src/cmd/go/internal/test/test.go#L1450 ) evaluates symlinks, which requires lots of stat'ing.
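For contrast, a purely lexical containment check needs no filesystem access at all; presumably `inDir` pays for `filepath.EvalSymlinks` precisely because a lexical check (sketch below, hypothetical name) misses symlinked paths:
```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// inDirLexical reports whether path is lexically inside dir without
// touching the filesystem; it cannot see through symlinks, which is
// what the EvalSymlinks calls in the real inDir are for.
func inDirLexical(path, dir string) bool {
	rel, err := filepath.Rel(dir, path)
	if err != nil {
		return false
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	fmt.Println(inDirLexical("/tmp/a/b", "/tmp"))    // true
	fmt.Println(inDirLexical("/etc/passwd", "/tmp")) // false
}
```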
It looks like this was introduced in https://github.com/golang/go/commit/37d56279c87818b496e5717bddd1f7c43bfa743d | help wanted,ToolSpeed,NeedsInvestigation,GoCommand | low | Critical |
343,939,937 | rust | Add `--emit=nothing` to tell rustc not to emit any files (even if `--crate-type=lib`) | In order to only check if a program is well-formed (think `cargo check`), one would like to call `rustc` without emitting any files. `cargo check` uses `--emit=metadata -Z no-codegen` because the metadata is needed to check dependent crates.
However, it seems like it's impossible to tell the compiler to not write any files in the case of `--crate-type=lib`. For the default `--crate-type=bin` it works: `rustc -Z no-codegen foo.rs` does not emit any files. But again, for `rustc -Z no-codegen --crate-type=lib foo.rs` it emits an `.rlib` file.
In case this isn't a bug, would it be possible to add an `--emit=nothing` or equivalent option?
<sub>([Related question on StackOverflow](https://stackoverflow.com/questions/51485765/run-rustc-to-check-a-program-without-generating-any-files))</sub> | A-driver,T-compiler,C-feature-request | low | Critical |
343,950,300 | pytorch | dataloader stuck at sched_yield =0 | **My docker envs**:
Description: Debian GNU/Linux 8.9 (jessie)
Pytorch: 0.4.0
cuda: 8.0
python: 3.6.5
After a few epochs of training and validation, the validation dataloader suddenly gets stuck at `sched_yield = 0`. I set the number of dataloader workers to 10. I can see one subprocess using 100% CPU while the other 9 processes sit at 0%. RAM and GPU memory are not released. The whole program just hangs.
__More details__: My `Dataset` implementation just reads images and meta information from multiple LMDB files. I open the LMDB env in the `Dataset.__init__` function:
```python
self.env = lmdb.open(self.pic_set_path, max_readers=1,
                     readonly=True, lock=False,
                     readahead=False, meminit=False)
```
and in the `__getitem__` function, I get information like this:
```python
with self.env.begin(write=False) as txn:
    info = txn.get(image_name)
```
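One hypothesis worth ruling out (an assumption, not a confirmed diagnosis): LMDB environments are documented as not usable across `fork()`, and with 10 workers the env opened in `__init__` is inherited by every worker process. A minimal sketch of a lazy, per-process open, with a plain callable standing in for `lmdb.open` so the example stays self-contained:
```python
class LazyEnvDataset:
    """Open the per-process handle on first use instead of in __init__."""

    def __init__(self, path, opener):
        self.path = path
        self.opener = opener  # e.g. lambda p: lmdb.open(p, readonly=True, lock=False)
        self._env = None      # stays None until the first access in each process

    def _env_for_this_process(self):
        if self._env is None:
            self._env = self.opener(self.path)
        return self._env

    def __getitem__(self, key):
        env = self._env_for_this_process()
        return env[key]

# Demo with a plain dict standing in for the LMDB environment.
ds = LazyEnvDataset('/tmp/fake.lmdb', lambda p: {'img_0': b'...'})
print(ds._env)       # None: nothing opened at construction time
print(ds['img_0'])   # b'...': the env is opened lazily on first access
```
In a real `Dataset`, `opener` would wrap `lmdb.open(...)` with the same flags as above, so each worker opens its own environment on first `__getitem__` instead of sharing the parent's handle.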
cc @SsnL @VitalyFedyunin @ejguan | module: dataloader,triaged | low | Minor |
344,075,078 | TypeScript | Feat: Extract function with jsx selection should be replaced with jsx sfc |
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
## Suggestion
If we select a JSX element and run the extract-function refactor, the editor should extract it as a JSX component rather than a plain function.
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
## Examples
```ts
function Foo({ items }: { items: { id: number; name: string }[] }) {
return <div>{items.map(item => <div>{item.id}: {item.name}</div>)}</div>;
}
```
After refactor should be
```ts
function Foo({ items }: { items: { id: number; name: string }[] }) {
return <div>{items.map(item => <Bar id={item.id} name={item.name}/>)}</div>;
}
function Bar(item: { id: number; name: string; }) {
return <div>{item.id}: {item.name}</div>;
}
```
## Current behavior

<!-- Show how this would be used and what the behavior would be -->
| Suggestion,Awaiting More Feedback,Domain: Refactorings | low | Critical |
343,990,665 | kubernetes | Bind mounts in container not updated after volume is remounted again | <!-- This form is for bug reports and feature requests ONLY!
If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
If the matter is security related, please disclose it privately via https://kubernetes.io/security/.
-->
**Is this a BUG REPORT or FEATURE REQUEST?**:
> Uncomment only one, leave it on its own line:
>
/kind bug
> /kind feature
@kubernetes/sig-storage-bugs
**What happened**:
For FUSE-backed volumes, the FUSE process may exit on error, and the volume on the host will disappear. Kubelet will remount the volume later because it finds that the volume mount point no longer exists. But the bind mount in the container will still be the old one, and processes in the container will fail with this error:
```
Transport endpoint is not connected
```
**What you expected to happen**:
- Processes in container can access new volume mount point without restart if possible
- Or containers are restarted with new bind mounts, then processes in container can access new volume mount point.
- Or other way to recover in this scenario.
**How to reproduce it (as minimally and precisely as possible)**:
- Create a cephfs volume and enable fuse support
- Create a pod to mount the volume
- Kill ceph-fuse process on the host
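After that last step, anything still using the old bind mount inside the container fails with `ENOTCONN`. As a rough illustration (plain Python, not Kubernetes code; the probe itself is hypothetical), this is how a liveness check could spot the stale mount:

```python
import errno
import os

def mount_is_healthy(path):
    """Return False when `path` sits on a mount whose FUSE daemon died.

    After ceph-fuse exits, stat() on the stale bind mount fails with
    ENOTCONN -- the "Transport endpoint is not connected" error above.
    """
    try:
        os.stat(path)
        return True
    except OSError as exc:
        if exc.errno == errno.ENOTCONN:
            return False
        raise

print(mount_is_healthy("/"))  # True for any live filesystem
```

Wiring something like this into a liveness probe would at least trigger a container restart, i.e. the second recovery option listed above.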
**Anything else we need to know?**:
Should we restart the container after remounting its volumes?
**Related issues**
- https://github.com/kubernetes/kubernetes/issues/46253 | kind/bug,sig/storage,lifecycle/frozen | medium | Critical |
344,064,444 | pytorch | [caffe2] Questions about conv_transpose && ConvTranspose | I need to use conv_transpose(), but I always receive the same error. My code is as follows; any help would be highly appreciated. Thanks in advance.
```
upconv = brew.conv_transpose(
    model,
    blob_in = final_avg,
    blob_out = 'upconv',
    dim_in = 512,
    dim_out = 2048,
    kernels = [4,1,1],
    strides = [2,1,1],
    pads = [1,0,0],
)
```
| caffe2 | low | Critical |
344,075,078 | TypeScript | Enums and interfaces do not have definitions in quick info/tooltips | <!-- π¨ STOP π¨ π¦π§π’π£ π¨ πΊπ»πΆπ· π¨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
The issue might be related to [this one](https://github.com/Microsoft/TypeScript/issues/24409), but mine is not about version 3.0, and is not solely about imports.
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
enum, description, definition, pop-up, tooltip, vs code, interface, suggestion
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
Hovering over enums and interfaces, should show their definition, i.e. I have this enum (or an interface):
```
enum enumName {
  ONE = 1,
  TWO = 2,
  THREE = 3,
}
```
If I hover over this enum, wherever I use it (whether it is imported into another file or used locally where it is declared), I would like to see what this enum is all about (essentially, what its properties are).
This is what I see right now when I hover over enums or interfaces:


As a workaround, I currently need to add this type of JSDoc comments right above the enum declaration.
```
/** Props:
* @param ONE 1;
* @param TWO 2;
* @param THREE 3;
*/
```
Which yields:

## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
This feature would save time & effort.
The shortcoming of the current approach is that I either need to add `${2 + numberOfEnumProperties}` lines of comments or go to the file where the enum is declared, locate it there, and then familiarize myself with its properties.
## Examples
<!-- Show how this would be used and what the behavior would be -->

## Checklist
I am not experienced enough to answer the questions below.
* [ ] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [ ] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions
* [ ] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,Help Wanted,Domain: Quick Info,Experience Enhancement | low | Critical |
344,079,728 | rust | Macro checks thwart `unreachable_pub` lint | Recently we [landed a change](https://github.com/rust-lang/rust/pull/52467) which squashes all lints tied to foreign macros, but this can thwart lints like `unreachable_pub` in unusual fashions. The `unreachable_pub` lint can't be fixed unless *all* its warnings are fixed, which means the following examples fails to compile after being fixed:
```rust
// bar.rs
#![crate_type = "lib"]
#[macro_export]
macro_rules! a {
(pub static ref $name:ident : $t:ty = $val:expr) => (
pub struct $name { _wut: () }
pub static $name: $t = $val;
)
}
```
and then
```rust
// foo.rs
#![crate_type = "lib"]
#![feature(rust_2018_preview)]
#![warn(rust_2018_idioms)]
#[macro_use]
#[allow(macro_use_extern_crate)]
extern crate bar;
mod m1 {
pub struct Foo;
}
mod lookup {
use crate::m1::Foo;
a!(pub static ref F: Foo = Foo);
}
pub fn f() {
drop(&lookup::F);
}
```
compiled with:
```
$ rustc +nightly bar.rs
$ rustc +nightly foo.rs -L .
warning: unreachable `pub` item
--> foo.rs:10:5
|
10 | pub struct Foo;
| ---^^^^^^^^^^^^
| |
| help: consider restricting its visibility: `crate`
|
note: lint level defined here
--> foo.rs:3:9
|
3 | #![warn(rust_2018_idioms)]
| ^^^^^^^^^^^^^^^^
= note: #[warn(unreachable_pub)] implied by #[warn(rust_2018_idioms)]
= help: or consider exporting it for use by other crates
```
but the corrected code doesn't compile!
This is a reduced example where `a!` is `lazy_static!`, but I believe the issue here is that the lints behind the `lazy_static!` invocation are being ignored, which means that `Foo` *is* actually fully public (because `F` is) and the lint is no longer applicable as a result.
cc @Manishearth, @oli-obk | C-enhancement,A-lints,A-macros,T-compiler,A-suggestion-diagnostics,L-unreachable_pub,A-edition-2018 | low | Minor |
344,139,428 | TypeScript | Improve errors for potential arrow functions | There are certain locations where arrow functions don't parse
```ts
var x = window.foo || () => {}
```
In these instances, it would be nice if users could get a better error message so as to avoid issues like #25897.
Ideally this should not be an invasive change.
* The change should be limited in scope.
* The parser should continue erroring in those locations.
* The change must not introduce change how existing valid code is parsed. | Bug,Help Wanted,Effort: Moderate,Domain: Error Messages | low | Critical |
344,179,626 | flutter | Drawer width should vary based on device size | The drawer width is too narrow on the small iPhone screens (iPhone SE, 5, etc) and leaves only a narrow tap section on the right side of the screen to close. The user can still swipe to close but it may not be obvious to them, especially if they opened the menu by tapping instead of swiping.
The width is hardcoded at 304.0 right now https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/drawer.dart#L41
I found this TODO for the drawer
https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/drawer.dart#L28
```
// TODO(eseidel): Draw width should vary based on device size:
// http://material.google.com/layout/structure.html#structure-side-nav
// Mobile:
// Width = Screen width - 56 dp
// Maximum width: 320dp
// Maximum width applies only when using a left nav. When using a right nav,
// the panel can cover the full width of the screen.
// Desktop/Tablet:
// Maximum width for a left nav is 400dp.
// The right nav can vary depending on content.
``` | framework,f: material design,customer: mulligan (g3),c: proposal,P2,team-design,triaged-design | low | Minor |
344,246,904 | pytorch | Build torch as a submodule with static linking doesn't work (CAFFE2_PERF_WITH_AVX2 is not defined) | ## Issue description
Hi,
I'm trying to compile a simple C++ program using the Caffe2 library. I include the pytorch repository as a git submodule of my project because I want to make this project as self-contained as I can (the systems I want to target don't have the caffe2 lib installed).
The file layout of my project is the following:
```
.
├── build
├── CMakeLists.txt
├── pytorch
│   ├── aten
│   ├── binaries
│   ├── caffe
│   ├── caffe2
│   ├── CITATION
│   ├── cmake
│   ├── CMakeLists.txt
│   ├── CODEOWNERS
│   ├── conda
│   ├── CONTRIBUTING.md
│   ├── docker
│   ├── docs
│   ├── LICENSE
│   ├── Makefile
│   ├── modules
│   ├── mypy-files.txt
│   ├── mypy.ini
│   ├── mypy-README.md
│   ├── NOTICE
│   ├── README.md
│   ├── requirements.txt
│   ├── scripts
│   ├── setup_caffe2.py
│   ├── setup.py
│   ├── test
│   ├── third_party
│   ├── tools
│   ├── torch
│   └── tox.ini
├── README.md
└── src
    └── test.cpp
```
I'm using the following CMakeLists.txt file:
```
cmake_minimum_required(VERSION 2.6)
project (caffe2_cpp_tutorial)
find_package(Protobuf REQUIRED)
find_package(OpenCV REQUIRED)
find_package(Threads)
set(ALL_LIBRARIES)
set(BUILD_PYTHON OFF)
add_subdirectory(pytorch caffe2_build)
list(APPEND ALL_LIBRARIES caffe2)
find_library(GLOG_LIB glog)
find_library(GFLAGS_LIB gflags)
if(OpenCV_LIBS)
include_directories(${OpenCV_INCLUDE_DIRS})
list(APPEND ALL_LIBRARIES ${OpenCV_LIBS})
add_definitions(-DWITH_OPENCV)
endif()
set(CMAKE_CXX_STANDARD 11)
add_executable(test src/test.cpp)
target_link_libraries(test ${ALL_LIBRARIES} ${GLOG_LIB} ${GFLAGS_LIB} ${PROTOBUF_LIBRARY})
```
I couldn't find any CMakeLists.txt file example to do this, nor any documentation about it, so I cannibalized different CMakeLists.txt files hoping something functional would form from it.
I can now successfully generate the build scripts with these cmake arguments:
```
$ mkdir build
$ cd build
$ cmake .. -DUSE_CUDA=OFF -DUSE_NATIVE_ARCH=ON
```
But when compiling I get this build error:
```
/home/giachero/repos/livdet-caffe2/pytorch/caffe2/perfkernels/common_avx.cc:17:2: error: #error ( "You found a build system error: __AVX__ is defined (via e.g. -mavx) " "but CAFFE2_PERF_WITH_AVX is not defined.");
#error( \
^~~~~
/home/giachero/repos/livdet-caffe2/pytorch/caffe2/perfkernels/common_avx2.cc:17:2: error: #error ( "You found a build system error: __AVX2__ is defined (via e.g. -mavx2) " "but CAFFE2_PERF_WITH_AVX2 is not defined.");
#error( \
^~~~~
caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/build.make:62: recipe for target 'caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/common_avx.cc.o' failed
make[2]: *** [caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/common_avx.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/build.make:62: recipe for target 'caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o' failed
make[2]: *** [caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 47%] Linking CXX static library ../../../lib/libonnx.a
[ 47%] Built target onnx
CMakeFiles/Makefile2:2105: recipe for target 'caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/all' failed
make[1]: *** [caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 47%] Linking CXX static library ../../../../lib/libnomnigraph.a
[ 47%] Built target nomnigraph
CMakeFiles/Makefile2:2143: recipe for target 'caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/all' failed
make[1]: *** [caffe2_build/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
```
My searches return nothing useful, any hints?
Thanks!
Augusto.
## System Info
- Ubuntu 17.04 amd64
- gcc (Ubuntu 6.3.0-12ubuntu2) 6.3.0 20170406:
- CMake version: 3.7.2
- pytorch: 3efdece9daade24630c72ebb7b17502134995196 | caffe2,module: static linking | low | Critical |
344,246,959 | go | x/text: panic of index out of range in language.Matcher | ### What version of Go are you using (`go version`)?
go 1.9
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
amd64, darwin and linux.
### What did you do?
language.MustParse accepts "lang_ZZ", e.g. en_ZZ, es_ZZ, etc. and returns a valid tag.
But calling `m.Match` with such a language tag triggers a panic of "index out of range" at this lookup table: https://github.com/golang/text/blob/master/language/tables.go#L50
Sample code to reproduce the issue:
```
package main
import (
"fmt"
"golang.org/x/text/language"
)
func main() {
supportedLanguages := []language.Tag{
language.English,
language.Arabic,
language.BritishEnglish,
language.Spanish,
language.French,
language.Italian,
language.Japanese,
language.Korean,
language.Portuguese,
language.Russian,
language.SimplifiedChinese,
language.TraditionalChinese,
}
tl := language.MustParse("en_ZZ")
fmt.Printf("target language: %v\n", tl)
m := language.NewMatcher(supportedLanguages)
_, idx, _ := m.Match(tl)
fmt.Printf("index matched: %v\n", idx)
}
```
Output:
```
target language: en-ZZ
panic: runtime error: index out of range
goroutine 1 [running]:
golang.org/x/text/language.regionGroupDist(0x139005701650135, 0x1650139)
/Users/yb/go_root/src/golang.org/x/text/language/match.go:677 +0x13d
golang.org/x/text/language.(*bestMatch).update(0xc42003fbf0, 0xc4200722d0, 0x1650139, 0x0, 0x0, 0x101650057)
/Users/ybi/go_root/src/golang.org/x/text/language/match.go:622 +0x28d
golang.org/x/text/language.(*matcher).getBest(0xc420072270, 0xc42000adc0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/Users/yb/go_root/src/golang.org/x/text/language/match.go:504 +0x4a0
golang.org/x/text/language.(*matcher).Match(0xc420072270, 0xc42000adc0, 0x1, 0x1, 0x0, 0x0, 0x1161220, 0xc420072270, 0x2d952e1a19cc2e08)
/Users/yb/go_root/src/golang.org/x/text/language/match.go:83 +0xd4
main.main()
/Users/yb/test.go:28 +0x4d5
exit status 2
```
### What did you expect to see?
either
(1) language.MustParse rejects "lang_ZZ" and throws an error;
or
(2) language.MustParse parses "lang_ZZ" to a tag of just "lang", similar to "lang_UND", e.g. language.MustParse of "en_ZZ" and "en" should return the *identical* result.
or
(3) language.Matcher supports ZZ country code and works properly.
### What did you see instead?
language.MustParse accepts ZZ, but language.Matcher crashes on that.
| NeedsInvestigation | low | Critical |
344,272,435 | go | cmd/vendor/github.com/google/pprof/internal/binutils: TestObjFile failure | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.10.3 linux/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
bash-3.2$ ld --version
GNU ld (GNU Binutils) 2.22
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/jhart/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/jhart/go"
GORACE=""
GOROOT="/home/jhart/temp/temp/go"
GOTMPDIR=""
GOTOOLDIR="/home/jhart/temp/temp/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build636359804=/tmp/go-build -gno-record-gcc-switches"
### What did you do?
```
cd temp
tar xf go1.4-bootstrap-20171003.tar.gz
cd go/src
export CGO_ENABLED=0
su
cd /var
mkdir tmp
chmod go+w tmp
exit
./make.bash
### Installed Go for linux/amd64 in /home/jhart/temp/go
### Installed commands in /home/jhart/temp/go/bin
unset CGO_ENABLED
export GOROOT_BOOTSTRAP=/home/jhart/temp/go
cd ../../
mkdir temp
cd temp
git clone https://go.googlesource.com/go
cd go
git describe --tags # go1.11beta1-226-ge161b1e
git checkout go1.10.3
cd src
./all.bash
```
### What did you expect to see?
All tests should have been passed successfully
### What did you see instead?
```
ok cmd/trace 0.028s
--- FAIL: TestObjFile (0.17s)
binutils_test.go:237: SourceLine for main: expect [{main /tmp/hello.c 3}]; got [{main 0}]
FAIL
FAIL cmd/vendor/github.com/google/pprof/internal/binutils 0.371s
```
| help wanted,NeedsInvestigation | low | Critical |
344,316,180 | godot | Export arrays in singleton end up shared if initialized to same value (fixed in `master`) | ___
***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.*
___
**Godot version:**
Godot v3.0.5.stable.official.6a88e22 win64
**OS/device including version:**
Windows 10 Pro v.1803 (on 2 different pc)
**Issue description:**
Loading values in a two-dimensional array has different results if the array is local or global.
On a new project, I added a script (g.gd) as autoload-singleton enabled with the following code:
```gdscript
extends Node
export var gridTiles = []
export var gridStat = []
```
I create a new scene assigned as main scene on project setting, on the root node added a script with the following code:
```gdscript
extends Node

func _ready():
    var x
    var y
    var gridTiles = []
    var gridStat = []
    for y in range(3):
        gridTiles.append([])
        gridStat.append([])
        for x in range(4):
            gridTiles[y].append(null)
            gridStat[y].append(0)
    gridStat[0][0] = 1
    print(gridStat)
    print(gridStat.size(), " x ", gridStat[0].size())
    print(gridTiles)
    print(gridTiles.size(), " x ", gridTiles[0].size())
    for y in range(3):
        g.gridTiles.append([])
        g.gridStat.append([])
        for x in range(4):
            g.gridTiles[y].append(null)
            g.gridStat[y].append(0)
    g.gridStat[0][0] = 2
    print(g.gridStat)
    print(g.gridStat.size(), " x ", g.gridStat[0].size())
    print(g.gridTiles)
    print(g.gridTiles.size(), " x ", g.gridTiles[0].size())
```
The resulting output is:
```
[[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
3 x 4
[[Null, Null, Null, Null], [Null, Null, Null, Null], [Null, Null, Null, Null]]
3 x 4
[[2, 0, Null, 0, Null, 0, Null, 0], [Null, 0, Null, 0, Null, 0, Null, 0], [Null, 0, Null, 0, Null, 0, Null, 0], [], [], []]
6 x 8
[[2, 0, Null, 0, Null, 0, Null, 0], [Null, 0, Null, 0, Null, 0, Null, 0], [Null, 0, Null, 0, Null, 0, Null, 0], [], [], []]
6 x 8
```
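For reference, this interleaved `6 x 8` result is exactly what aliasing produces: both names appending into one and the same array. A plain-Python analogy (not Godot code) reproduces the numbers:

```python
# Analogy only: two names bound to ONE list, mirroring the two exported
# arrays in g.gd apparently sharing a single default Array instance.
grid_tiles = []
grid_stat = grid_tiles  # aliased, not independent

for y in range(3):
    grid_tiles.append([])
    grid_stat.append([])           # second append goes into the same list
    for x in range(4):
        grid_tiles[y].append(None)
        grid_stat[y].append(0)     # interleaves into the same row
grid_stat[0][0] = 2

print(grid_stat)                   # rows interleave None and 0, last three stay []
print(len(grid_stat), "x", len(grid_stat[0]))  # 6 x 8, as in the report
```

The `6 x 8` shape and the `[2, 0, Null, 0, ...]` pattern in the output above match this aliasing exactly, which is consistent with the issue title: the two exported arrays end up shared.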
Is it a bug or my poor knowledge of gdscript (python)?
Thanks for the attention.
| bug,topic:gdscript,confirmed | medium | Critical |
344,329,363 | TypeScript | Special characters in 'enum' type will be compiled to unicode by default |
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 2.9.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** typescript unicode, typescript enum unicode
**Code**
```ts
export const TestObject = {
  WELCOME: '你好',
  TEST: 'тест',
}
export enum TestEnum {
  WELCOME = '你好',
  TEST = 'тест',
}
```
**Expected behavior:**
Fields in `TestObject` and `TestEnum` ought to be compiled to a consistent encoding format (utf8).
**Actual behavior:**
Special characters in the `Object` literal were emitted as raw UTF-8, while those in the `Enum` were emitted as `\uXXXX` unicode escapes.
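The contrast can be shown in a few lines of Python (illustrating the two encodings only, not the TypeScript emitter; `你好` is the Chinese string from the example above):

```python
s = "你好"

print(s.encode("utf-8"))           # raw UTF-8 bytes, as reported for the object literal
print(s.encode("unicode_escape"))  # \uXXXX escapes, as reported for the enum
```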
**Playground Link:**
[Official Playground](https://www.typescriptlang.org/play/#src=%0D%0Aconst%20TestObject%20%3D%20%7B%0D%0A%20%20WELCOME%3A%20'%E4%BD%A0%E5%A5%BD'%2C%0D%0A%20%20TEST%3A%20'%D1%82%D0%B5%D1%81%D1%82'%2C%0D%0A%7D%0D%0A%0D%0Aenum%20TestEnum%20%7B%0D%0A%20%20WELCOME%20%3D%20'%E4%BD%A0%E5%A5%BD'%2C%0D%0A%20%20TEST%20%3D%20'%D1%82%D0%B5%D1%81%D1%82'%2C%0D%0A%7D%0D%0A)
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
https://github.com/Microsoft/TypeScript/issues/10498
| Bug | low | Critical |
344,400,754 | vscode | Allow syntax colouring in parameters for SignatureHelp | There are a couple of issues that looked like they might cover this:
- https://github.com/Microsoft/vscode/issues/11877
- https://github.com/Microsoft/vscode/issues/26241
However as far as I can tell, it's not possible to get highlighting in the parameter names in the signature. I've tried various combinations of `string/MarkdownString` in the `documentation` properties with no luck. The string being rendered here is `SignatureInformation.label` which is only a `string`:

Here the main signature line in the tooltip is just all white. It would be much more readable if the types were coloured like they are in the editor. | feature-request,editor-parameter-hints | medium | Critical |
344,414,500 | opencv | Linker Error LNK2022 of dnn module in CLR project | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
- OpenCV => 3.4.2
- Operating System / Platform => Windows 7 64 Bit
- Compiler => Visual Studio 2015
##### Detailed description
I am trying to use opencv dnn module in a project having common language runtime support (C++/CLI). I get the following compile error when I want to load a network model stored in TensorFlow framework's format:
**error LNK2022: metadata operation failed (8013119F) : A TypeRef exists which should, but does not, have a corresponding TypeDef: (Impl): (0x0100001f).**
##### Steps to reproduce
Loading the frozen graphs as follows in a CLR project produces the above-mentioned compile error:
```
cv::dnn::Net my_net;
my_net = cv::dnn::readNetFromTensorflow("frozen_inference_graph.pb", "Config.pbtxt");
```
but in the following case no error is produced. However, in this case I do not have global access to the `my_net` object.
```
cv::dnn::Net my_net = cv::dnn::readNetFromTensorflow("frozen_inference_graph.pb", "Config.pbtxt");
```
| priority: low,category: build/install | low | Critical |
344,425,689 | rust | Doc-tests are not ignored when no changes have been made | Running the entire test suite will ignore most files after nothing relevant has been changed (e.g. another test), but doc-tests will be re-run each time. I tested this for stage 2 tests, but it seems likely it occurs for other stages too. | A-testsuite,C-enhancement,T-bootstrap,E-needs-investigation | low | Minor |
344,438,660 | rust | {i386,x86_64}-apple-ios targets pass too many thousands of arguments to the linker | To reproduce:
```sh
git clone https://github.com/rust-lang-nursery/packed_simd
cd packed_simd
# Get Rust target:
rustup target add x86_64-apple-ios
# Build cargo runner:
export RUSTFLAGS='-C link-args=-mios-simulator-version-min=7.0'
rustc ./ci/deploy_and_run_on_ios_simulator.rs -o ios_cargo_runner --verbose
export CARGO_TARGET_X86_64_APPLE_IOS_RUNNER=$(pwd)/ios_cargo_runner
# Build library:
cargo build --target=x86_64-apple-ios
# Build and run tests:
cargo test --target=x86_64-apple-ios
```
Produces this output (too long to put it here, gist: https://gist.github.com/gnzlbg/4ff458a32ebd56e4d8930aaf766178b4). Linking fails. This is rust-lang-nursery/packed_simd issue: https://github.com/rust-lang-nursery/packed_simd/issues/26
cc @michaelwoerister @alexcrichton this might be related to incremental compilation, I see a lot of rcgus. | A-linkage,O-macos,O-ios,O-x86_64,T-compiler,C-bug,O-x86_32 | low | Major |
344,495,586 | TypeScript | Improve completions with --checkJs turned off | **TypeScript Version:** 3.1.0-dev.20180725
**Code**
```ts
/**
* @typedef {object} I
* @property {number} m
*/
/** @returns {I} */
function f() { return { x: 0 }; }
const x = f();
x.
```
**Expected behavior:**
Get completion for `m` only.
**Actual behavior:**
Get completions for `I`, `f`, and `x`, none of which are properties. | Suggestion,Domain: Completion Lists,Awaiting More Feedback | low | Minor |
344,525,437 | vscode | Show Opening Tag when Closing Tag Selected | intelliJ / PHP Storm / etc. have this nifty little feature. When your cursor is at the end of a tag, if the opening tag is off the screen it will show it in a little floating window.

This makes it incredibly easy to keep track of exactly which closing tag you're at, especially useful for when you have to work in heavily nested templates.
| feature-request,editor-bracket-matching | high | Critical |