id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
332,242,091 | rust | NLL Regression: Conditional control flow returning from functions no longer works | If a match expression has one arm that returns a borrowed matched value from its pattern, and another arm that doesn't borrow directly from a matched value but instead borrows the variable the matched value borrows and returns it, compilation fails if the match statement is returning from a function, but not if it's assigning to a variable. This used to work (and iirc one of the goals of NLLs was to make this pattern work), but it broke between `nightly-2018-05-17` and `nightly-2018-05-19`. A nightly for May 18th doesn't appear to exist.
Examples:
This doesn't work now:
```rust
// Doesn't compile on nightly-2018-05-19 and later, but does on nightly-2018-05-17.
fn borrow(o: &mut Option<i32>) -> Option<&mut i32> {
match o.as_mut() {
Some(i) => Some(i),
None => o.as_mut()
}
}
```
This is similar to the code above, but *does* work both before and after `nightly-2018-05-19`:
```rust
fn main() {
let mut o: Option<i32> = Some(1i32);
// Compiles everywhere!
let x = match o.as_mut() {
Some(i) => Some(i),
None => o.as_mut()
};
}
``` | A-borrow-checker,T-compiler,A-NLL,C-bug,fixed-by-polonius | low | Major |
332,245,349 | pytorch | RuntimeError: /pytorch/torch/csrc/jit/tracer.h:117: getTracingState: Assertion `var_state == state` failed. | I trained a ResNet152 model and I want to export it to ONNX.
This error occurs when I run the program with a GPU in Jupyter.
## Error message
```
RuntimeErrorTraceback (most recent call last)
<ipython-input-15-f5f241afd3b5> in <module>()
----> 1 torch.onnx.export(model_ft, dummy_input, "resnet152.onnx",export_params=True)
/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py in export(*args, **kwargs)
23 def export(*args, **kwargs):
24 from torch.onnx import utils
---> 25 return utils.export(*args, **kwargs)
26
27
/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten)
82 as ATen ops.
83 """
---> 84 _export(model, args, f, export_params, verbose, training, input_names, output_names)
85
86
/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_type)
132 # training mode was.)
133 with set_training(model, training):
--> 134 trace, torch_out = torch.jit.get_trace_graph(model, args)
135
136 if orig_state_dict_keys != _unique_state_dict(model).keys():
/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py in get_trace_graph(f, args, kwargs, nderivs)
253 if not isinstance(args, tuple):
254 args = (args,)
--> 255 return LegacyTracedModule(f, nderivs=nderivs)(*args, **kwargs)
256
257
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 hook(self, input)
488 if torch.jit._tracing:
--> 489 result = self._slow_forward(*input, **kwargs)
490 else:
491 result = self.forward(*input, **kwargs)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
465 def _slow_forward(self, *input, **kwargs):
466 input_vars = tuple(torch.autograd.function._iter_tensors(input))
--> 467 tracing_state = torch.jit.get_tracing_state(input_vars)
468 if not tracing_state:
469 return self.forward(*input, **kwargs)
/usr/local/lib/python3.6/dist-packages/torch/jit/__init__.py in get_tracing_state(args)
33 if not torch._C._is_tracing(args):
34 return None
---> 35 return torch._C._get_tracing_state(args)
36
37
RuntimeError: /pytorch/torch/csrc/jit/tracer.h:117: getTracingState: Assertion `var_state == state` failed.
```
## This is my ONNX export code
```python
dummy_input = Variable(torch.randn(1, 3, 224, 224))
torch.onnx.export(model_ft, dummy_input, "resnet152.onnx",export_params=True)
```
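One guess from my side (not a confirmed fix): since the trained model lives on the GPU, the tracer may be failing because the dummy input is a CPU tensor. A minimal sketch of what I would try, assuming that guess is right (`model_ft` here is the trained model from the code below):
```python
import torch

# Guess: put the model in eval mode and build the dummy input on the same
# device as the model's parameters before exporting.
model_ft.eval()
device = next(model_ft.parameters()).device
dummy_input = torch.randn(1, 3, 224, 224, device=device)
torch.onnx.export(model_ft, dummy_input, "resnet152.onnx", export_params=True)
```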
## This is my training code
```python
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = 'data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
use_gpu = torch.cuda.is_available()
# get model and replace the original fc layer with your fc layer
model_ft = torchvision.models.resnet152(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 3)
if use_gpu:
model_ft = model_ft.cuda()
# define loss function
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model=model_ft,
criterion=criterion,
optimizer=optimizer_ft,
scheduler=exp_lr_scheduler,
num_epochs=2)
```
I want to train a ResNet model to predict three classes.
I used pip to install PyTorch.
GPU : GeForce GTX 1080 Ti
Python 3.6.3
Pytorch 0.4.0
CUDA Version 9.0.176
GCC Version 5.4.0
Cmake version 3.5.1
Could you please advise how to solve this error?
Thank you. | oncall: jit | medium | Critical |
332,314,992 | pytorch | An error occurred while creating a new notebook. | I am following the [Caffe2 Tutorials Overview](https://caffe2.ai/docs/tutorials.html#null__new-to-deep-learning).
When I run `./start_ipython_notebook.sh`, the Jupyter interface pops up.
When I try to create a new Python file, the following error message appears:
```
An error occurred while creating a new notebook.
Unexpected error while running post hook save: [Errno 2] No such file or directory.
```
But when I run `jupyter notebook` in other directories, this problem does not occur. | caffe2 | low | Critical |
332,383,919 | flutter | InputDecorator lacks support for material theming | When using a material input widget and adding an icon to it, whether it is the suffix, prefix, or leading icon, there's no way to show that icon with a custom icon theme, because the color is fixed in the code:
```dart
Color _getDefaultIconColor(ThemeData themeData) {
if (!decoration.enabled)
return themeData.disabledColor;
switch (themeData.brightness) {
case Brightness.dark:
return Colors.white70;
case Brightness.light:
return Colors.black45;
default:
return themeData.iconTheme.color;
}
}
```
This comes from [input_decorator.dart](https://github.com/flutter/flutter/blob/48f4ff6ddab5520a6a761f4e6079c64000c16d29/packages/flutter/lib/src/material/input_decorator.dart#L1534)
So, if the theme brightness is dark or light (and it cannot be of any other value), the color of the icon in the input's inactive state is always `Colors.white70` or `Colors.black45`, and the default case in the switch is never taken.
There is a workaround: setting the color on the `Icon` widget directly. But then the icon always has the same color, so to change it to the primary color when the input gets focused I would have to listen to the input's focus node and update the color from outside. I don't think that should be necessary, mainly because it re-renders the input decorator from the parent widget's state when the change could happen inside the input, keeping the parent stateless. | c: new feature,framework,f: material design,P2,team-design,triaged-design | low | Minor |
332,416,688 | react | Allow Portals to be used for Reparenting | **Do you want to request a *feature* or report a *bug*?**
feature
**What is the current behavior?**
[Reparenting](https://github.com/facebook/react/issues/3965) is an unsolved issue of React(DOM). So far, it was possible to hack around the missing support for it by relying on an unstable API (`unstable_renderSubtreeIntoContainer`) to render **and update** a subtree inside a different container. It's important to note that this API was using React's diffing algorithm so that, similar to `ReactDOM.render()`, it is possible to keep components mounted.
```js
ReactDOM.render(<Foo />, container);
// This won't get <Foo /> to be unmounted and mounted again:
ReactDOM.render(<Foo />, container);
ReactDOM.unstable_renderSubtreeIntoContainer(
parentComponent,
<Foo />,
container
);
// This also won't get <Foo /> to be unmounted and mounted again, no matter if
// we change parentComponent (and thus call it from a different parent):
ReactDOM.unstable_renderSubtreeIntoContainer(
parentComponent,
<Foo />,
container
);
```
However this unstable API is [going to be deprecated soon](https://github.com/facebook/react/issues/10143) and recent features like the introduction of the new context API introduced [additional issues](https://github.com/facebook/react/issues/12493).
As an alternative to this unstable API, `ReactDOM.createPortal(children, container)` was introduced. However this API is unsuitable for the reparenting issue since it will always [create a new mount point](https://github.com/facebook/react/issues/10713) inside the `container` instead of applying the diffing _when called from a different parent_ (Check out this [CodeSandbox](https://codesandbox.io/s/91o7oovo54) where calling the portal from a different portal will cause the `<Leaf />` to have a new uuid). The reason for this is that we want multiple portals to be able to render inside the same `container` which makes perfect sense for more common use cases like popovers, etc.
Before we remove `unstable_renderSubtreeIntoContainer`, I suggest we find a way to portal into a specific node instead of appending to it, so that we can diff its contents instead (or implement a solution for #3965, although that seems to be more complicated), similar to `unstable_renderSubtreeIntoContainer`. | Type: Feature Request | medium | Critical |
332,476,332 | kubernetes | strategic patch doesn't keep order of duplicated elements in the list | /kind bug
**What happened**:
given list with duplicated keys D1
```
mergingList:
- name: D1
value: URL1
- name: A
value: x
- name: D1
value: URL2
```
and a patch updating the non-duplicated key A:
```
$setElementOrder/mergingList:
- name: D1
- name: A
- name: D1
mergingList:
- name: A
value: z
```
it reorders the keys:
```
mergingList:
- name: D1
value: URL1
- name: D1
value: URL2
- name: A
value: z
```
**What you expected to happen**:
The order of duplicated keys stays the same:
```
mergingList:
- name: D1
value: URL1
- name: A
value: z
- name: D1
value: URL2
```
**How to reproduce it (as minimally and precisely as possible)**:
I created test-case which fails on release-1.10 and master: https://github.com/redbaron/kubernetes/commit/971c1b20e51dd2090a9fd585bafa18282c2984f5
**Anything else we need to know?**:
I am trying to find out why kubectl creates an incorrect patch on apply. Although I can't reproduce it with a simple test case like the one in this ticket, it looks related. On a real resource, `kubectl apply` reorders keys in the generated patch, so it generates something like:
```
$setElementOrder/mergingList:
- name: D1
- name: A
- name: D1
mergingList:
- name: D1
value: URL1
- name: D1
value: URL2
- name: A
value: z
```
which makes it an invalid patch, as the order of keys differs from the order of values. This reordering might or might not be a result of the same underlying bug.
/sig api-machinery
| kind/bug,priority/backlog,sig/api-machinery,help wanted,lifecycle/frozen,triage/accepted | medium | Critical |
332,498,810 | rust | Ironing out StepBy<Range>'s performance issues | The behaviour of `<Range<_> as Iterator>::nth` has a slight mismatch with `StepBy` (or `Step` depending on your viewpoint) as [@scottmcm has found out](https://github.com/rust-lang/rust/issues/27741#issuecomment-385237631), resulting in sub-optimal performance.
On every iteration, the range has to first step forwards `n-1` times to get the next element and then advance again by 1.
I'm hoping we can improve `step_by` into a 100% zero-cost abstraction.
It seems like the performance issue is specific to `StepBy<Range>`. I'm thinking therefore that we could specialize `Iterator for StepBy<Range<I>>` such that it would use @scottmcm's suggested semantics. Like this:
```rust
impl<I> Iterator for StepBy<Range<I>>
where
I: Step
{
fn next(&mut self) -> Option<Self::Item> {
self.first = false;
if let Some(mut n) = self.start.add_usize(self.step+1) {
if n < self.end {
std::mem::swap(&mut self.start, &mut n);
return Some(n);
}
}
self.start = self.end.clone();
None
}
}
```
That also avoids the branch on a regular `next()`. I haven't looked at the other methods but that boolean in `StepBy` could possibly become superfluous. During construction of the `StepBy` adapter, the `size` in `.step_by(size)` is decremented and this specialization has to counter-add 1 every time but that should be optimized away if inlined.
If someone were to depend on side-effects in `Step::add_usize` (when the trait is stabilized), this pre-stepping would become weird. Same thing with a hypothetical `next_and_skip_ahead()`.
@scottmcm what do you think of this? | I-slow,C-enhancement,T-libs-api | low | Major |
332,524,749 | go | cmd/go: maybe show legacy tags in pseudoversions | If you do
go get foo@tag
where tag is not a proper semver tag for foo, then vgo looks up tag, resolves the commit, finds the date on the commit, and records instead of tag a pseudo-version like
v0.0.0-20180501123456-1234abcdef
I wonder if we should preserve the tag, so that the recorded version would be:
v0.0.0-20180501123456-1234abcdef-tag
This would work for branch names and legacy semver tags too, of course, so that we could have:
v0.0.0-20180501123456-1234abcdef-devbranch
v0.0.0-20180501123456-1234abcdef-v17.0.0
These would just show a little bit more information when you list your dependencies in projects not using tagged releases (or at least vgo-compatible tagged releases in the case of the legacy v17). | NeedsDecision,modules | low | Major |
332,529,115 | neovim | Behavior difference with :<lang>do commands | In Vim, the `:<lang>do` commands (`:rubydo`, `:pydo`, etc.) stop executing if the command causes the current buffer to change. You can see a test for this in [test_python2.vim](https://github.com/neovim/neovim/blob/c46997aa8744f88e9886022dab703157c101cff7/src/nvim/testdir/test_python2.vim#L16-L21) which came from upstream [fixing](https://github.com/vim/vim/commit/a58883b) this issue in the python bindings.
In Neovim, we just tell the provider "run this command for all lines in this range". This means that, unless the provider does otherwise, the commands are all going to run in the context of the original buffer, even if the commands _should_ be changing to a different buffer.
Should Neovim be calling into the provider for every line, so we can centralize these sorts of checks, instead of the current mechanism? | compatibility,provider | low | Major |
332,580,440 | pytorch | [Caffe2] Runtime error while using a pre-trained style_transfer model | Hello, I'm using Caffe2 on Conda / Mac OSX 10.13.1 and I was trying to replicate the model zoo tutorial on a style transfer model. I pretty much copied the same thing to see if it would work while replacing squeezenet with style_transfer/crayon.
```python
from caffe2.python.models.style_transfer import crayon as mynet
from caffe2.python import workspace
import numpy as np
init_net = mynet.init_net
predict_net = mynet.predict_net
# you must name it something
predict_net.name = "stylize"
# Dummy batch
data = np.random.rand(1, 3, 227, 227).astype(np.float32)
workspace.FeedBlob("data", data)
workspace.RunNetOnce(init_net)
workspace.CreateNet(predict_net)
p = workspace.Predictor(init_net.SerializeToString(), predict_net.SerializeToString())
p.run([data])
```
While instantiating the net, the following warnings appeared:
```
I0615 03:22:06.141191 2799588160 operator.cc:167] Engine MOBILE is not available for operator PackedInt8BGRANHWCToNCHWCStylizerPreprocess.
I0615 03:22:06.141261 2799588160 operator.cc:167] Engine MOBILE is not available for operator AveragePool.
... (About 20 such warnings in total, with some repetitions)
```
And finally, while trying to run the forward pass, the script threw an error with this message:
```
p.run([data])
RuntimeError: [enforce fail at stylizer_ops.cc:101] C == kInputChannels. Error from operator:
input: "data_int8_bgra" input: "mean" output: "b0" name: "" type: "PackedInt8BGRANHWCToNCHWCStylizerPreprocess" arg { name: "noise_std" f: 5 } device_option { device_type: 0 random_seed: 0 } engine: "MOBILE"
```
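Judging purely from the operator name (`PackedInt8BGRANHWCToNCHWCStylizerPreprocess`), the preprocess step may expect packed 8-bit BGRA data in NHWC layout, i.e. a 4-channel uint8 batch rather than the float32 NCHW batch above. A rough sketch of that guess (the shape and dtype here are assumptions, not verified):
```python
import numpy as np
from caffe2.python import workspace

# Guess: 4-channel (BGRA) uint8 input in NHWC layout instead of float32 NCHW.
data = np.random.randint(0, 256, size=(1, 227, 227, 4), dtype=np.uint8)
workspace.FeedBlob("data", data)
```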
Beyond that, I'm not sure how to interpret this error; my only guess is that the shape of the input sample is wrong, but I tried sizes from 224px to 270px (in case the image scale was off). | caffe2 | low | Critical |
332,612,574 | rust | HRTB-like bounds on structs | So I have a situation, detailed in u.r.l.o [here](https://users.rust-lang.org/t/expressing-hrtb-like-bound-on-generic-struct/18081), that I'd be curious to see if there's a good solution for now or if anything in the pipeline might help. It's likely a dupe of some existing issue(s) but I failed to find anything similar enough.
I'll repeat my u.r.l.o. post below to save you the redirects.
---
Suppose you have a `Writer<T>` struct as follows:
```rust
struct Writer<T> {
m: std::marker::PhantomData<fn(&T)>,
}
impl<T> Writer<T> {
fn write(&mut self, val: &T) {} // somehow knows what to do - that's not important
}
```
The `Writer` is generic only to associate the T with the method parameter in write() - it otherwise doesn’t store or use the T.
Next, you have a struct like this:
```rust
struct Record<'a> {
x: &'a i32,
}
```
And now you want to have a Logger struct as follows:
```rust
// Don't want a lifetime parameter on `Logger`
struct Logger {
rec_writer: Writer<Record<'static>>, // it's not really 'static
}
impl Logger {
fn write_record(&mut self) {
let x = 5;
let rec = Record {x: &x};
let rec: Record<'static> = unsafe {
std::mem::transmute(rec)
};
self.rec_writer.write(&rec);
}
}
```
So here I’m transmuting to 'static to satisfy the type system, and I know that rec_writer.write(...) will not attempt to stash any values out of Record on the premise that they’re 'static (which they’re not). Ideally, what I’d like to express is:
```rust
struct Logger {
rec_writer: for<'a> Writer<Record<'a>>,
}
```
which of course is invalid since HRTB only works on traits.
Is there a clean way to achieve this? Am I forgetting something? I can probably redesign the code to not be structured like this, but I’m curious if this is feasible as-is.
---
I don't think generic associated types (GAT) would help, and I suspect the issue borders more on HKT but I'm unsure.
Would appreciate some insight/guidance from people in the know :). | T-lang,C-feature-request,A-higher-ranked | low | Critical |
332,646,656 | pytorch | Q: how to generate my own pb files for the C++ Predictor | I have trained a model using the Python interface, and now I want to save the trained parameters and the net model in pb file format so they can be loaded later by `ReadProtoFromFile` from the C++ interface.
But I cannot figure out how to generate the two files (init_net.pb & predict_net.pbtxt).
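Based on my reading of the tutorials, I think the export step should look roughly like the sketch below, but I have not been able to confirm it (the `mobile_exporter` usage is my guess, and `model` stands for the trained model helper):
```python
from caffe2.python import workspace
from caffe2.python.predictor import mobile_exporter

# Assumption: `model` is the trained model helper whose parameters
# currently live in the workspace.
init_net, predict_net = mobile_exporter.Export(workspace, model.net, model.params)
with open("init_net.pb", "wb") as f:
    f.write(init_net.SerializeToString())
with open("predict_net.pb", "wb") as f:
    f.write(predict_net.SerializeToString())
```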
Can someone tell me the right way to do this? Thanks very much. | caffe2 | low | Minor |
332,663,431 | youtube-dl | [Site support request] nudogram.com | ```
$ youtube-dl -v http://www.nudogram.com/videos/94/allison-parker-squirt-show-snapchat-2018/
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'http://www.nudogram.com/videos/94/allison-parker-squirt-show-snapchat-2018/']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.06.14
[debug] Git HEAD: 9b0b62753
[debug] Python version 3.6.5 (CPython) - Linux-4.16.12-1-hardened-x86_64-with-arch
[debug] exe versions: ffmpeg 4.0, ffprobe 4.0, rtmpdump 2.4
[debug] Proxy map: {}
[generic] allison-parker-squirt-show-snapchat-2018: Requesting header
WARNING: Falling back on generic information extractor.
[generic] allison-parker-squirt-show-snapchat-2018: Downloading webpage
[generic] allison-parker-squirt-show-snapchat-2018: Extracting information
ERROR: Unsupported URL: http://www.nudogram.com/videos/94/allison-parker-squirt-show-snapchat-2018/
Traceback (most recent call last):
File "/home/lee/Projects/youtube-dl/youtube_dl/YoutubeDL.py", line 792, in extract_info
ie_result = ie.extract(url)
File "/home/lee/Projects/youtube-dl/youtube_dl/extractor/common.py", line 500, in extract
ie_result = self._real_extract(url)
File "/home/lee/Projects/youtube-dl/youtube_dl/extractor/generic.py", line 3263, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: http://www.nudogram.com/videos/94/allison-parker-squirt-show-snapchat-2018/
```
- Single video: http://www.nudogram.com/videos/94/allison-parker-squirt-show-snapchat-2018/
- Single video: http://www.nudogram.com/videos/240/allison-parker-roleplay-snapchat-01-18-2018/
- Single video: http://www.nudogram.com/videos/598/megan-barton-hanson-photoshoot/
- Single video: http://www.nudogram.com/videos/585/tove-lo-topless-on-stage-in-sydney-2017/
| site-support-request | low | Critical |
332,708,175 | vscode | Ignore `editor.insertSpaces` within strings |
Related issues
==============
After searching existing issues, I found two open issues, which are closely related, but different. Including links for reference purposes:
- #5394
- #46287
Feature request
===============
Tabs have three uses in code files:
1. Indentation at the beginning of lines - people often prefer to use spaces here
2. Indentation between keys and values or around assignment operators - people often prefer to use spaces here
3. When a literal tab character is required within a string - this should *always* be a tab and never converted to spaces.
Currently, the 3rd use-case is difficult/impossible to handle easily in VSCode when `editor.insertSpaces` is set to `true`.
The most common place I've encountered this is for RegEx strings in BSD `sed`, which does not support GNU's non-standard `\t` notation for tabs. | feature-request,editor-core | low | Minor |
332,715,988 | pytorch | [caffe2] [feature request] Does caffe2 support conv_nd with group? | ## Issue description
When I try to set the group parameter (larger than 1) for model.ConvNd, there is a cuDNN error.
Code:
```python
self.prev_blob = self.model.ConvNd(
self.prev_blob,
'test_group_conv_%d' % (self.comp_count),
in_filters,
out_filters,
kernels,
weight_init=("MSRAFill", {}),
strides=strides,
pads=pads,
no_bias=self.no_bias,
group=group,
)
```
The error is:
```
RuntimeError: [enforce fail at conv_op_cudnn.cc:616] status == CUDNN_STATUS_SUCCESS. 3 vs 0. , Error at: /home/hf17/pytorch/caffe2/operators/conv_op_cudnn.cc:616: CUDNN_STATUS_BAD_PARAM Error from operator:
```
- PyTorch or Caffe2: Caffe2
- How you installed PyTorch (conda, pip, source):conda
- Build command you used (if compiling from source):
- OS:
- PyTorch version:
- Python version:
- CUDA/cuDNN version: 7.0.5
- GPU models and configuration: TITANX (Pascal)
- GCC version (if compiling from source):
- CMake version:
- Versions of any other relevant libraries:
| caffe2 | low | Critical |
332,745,941 | pytorch | caffe2: Does caffe2 support running a model on different devices (CPU and GPU) at the same time? | I want to know whether caffe2 supports building a model across different devices (CPU and GPU) at the same time.
Specifically, I want part1 of the network model (some operators) to run on the CPU, while part2 (the others) runs on the GPU, just like in TensorFlow.
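What I have in mind is roughly the sketch below; this is only an illustration of my intent, and I am assuming (without having verified it) that `core.DeviceScope` with a `DeviceOption` is the right mechanism:
```python
from caffe2.python import core
from caffe2.proto import caffe2_pb2

cpu_option = core.DeviceOption(caffe2_pb2.CPU, 0)
gpu_option = core.DeviceOption(caffe2_pb2.CUDA, 0)

with core.DeviceScope(cpu_option):
    pass  # operators added here (part1) would be placed on the CPU
with core.DeviceScope(gpu_option):
    pass  # operators added here (part2) would be placed on the GPU
```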
If so, how should I set this up? So far I have only run the whole model with `workspace.RunNet(model.net)`. | caffe2 | low | Minor |
332,821,472 | pytorch | [JIT] Interleaved C++-Python execution loses inner Python stacks | I triggered a recent shape propagation test failure:
```
14:56:59 ======================================================================
14:56:59 ERROR: test_pow_scalar_constant (__main__.TestJitGenerated)
14:56:59 ----------------------------------------------------------------------
14:56:59 Traceback (most recent call last):
14:56:59 File "test_jit.py", line 3940, in do_test
14:56:59 check(name)
14:56:59 File "test_jit.py", line 3921, in check
14:56:59 fn, (self_variable,) + args_variable)
14:56:59 File "test_jit.py", line 3831, in check_against_reference
14:56:59 outputs_test = func(*nograd_inputs)
14:56:59 File "test_jit.py", line 3807, in script_fn
14:56:59 return output_process_fn(CU.the_method(*tensors))
14:56:59 RuntimeError:
14:56:59 Expected object of type CPUDoubleType but found type CPUFloatType for argument #2 'exponent' (checked_cast_tensor at /var/lib/jenkins/workspace/aten/src/ATen/Utils.h:32)
14:56:59 frame #0: at::CPUDoubleType::s_pow(at::Tensor const&, at::Tensor const&) const + 0x6b (0x7fb4526c795b in /opt/python/2.7/lib/python2.7/site-packages/torch/lib/libcaffe2.so)
14:56:59 frame #1: at::Type::pow(at::Tensor const&, at::Tensor const&) const + 0x161 (0x7fb4527d2931 in /opt/python/2.7/lib/python2.7/site-packages/torch/lib/libcaffe2.so)
14:56:59 frame #2: <unknown function> + 0x497796 (0x7fb453c52796 in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #3: <unknown function> + 0x33735e (0x7fb453af235e in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #4: <unknown function> + 0x337ccf (0x7fb453af2ccf in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #5: <unknown function> + 0x339a6b (0x7fb453af4a6b in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #6: torch::jit::PropagateInputShapes(torch::jit::Graph&, torch::jit::ArgumentSpec const&) + 0x3b9 (0x7fb453af4f39 in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #7: torch::jit::GraphExecutorImpl::run(torch::jit::variable_tensor_list) + 0xd5c (0x7fb453aa0e3c in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #8: torch::jit::GraphExecutor::run(torch::jit::variable_tensor_list&&) + 0x47 (0x7fb453a9aa97 in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #9: <unknown function> + 0x54f249 (0x7fb453d0a249 in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #10: <unknown function> + 0x22db98 (0x7fb4539e8b98 in /opt/python/2.7/lib/python2.7/site-packages/torch/_C.so)
14:56:59 frame #11: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #12: <unknown function> + 0x63095 (0x7fb4641ba095 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #13: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #14: <unknown function> + 0xc2a45 (0x7fb464219a45 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #15: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #16: PyEval_EvalFrameEx + 0x3077 (0x7fb464260f77 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #17: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #18: <unknown function> + 0x867e0 (0x7fb4641dd7e0 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #19: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #20: PyEval_EvalFrameEx + 0x3077 (0x7fb464260f77 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #21: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #22: PyEval_EvalFrameEx + 0x7ed2 (0x7fb464265dd2 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #23: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #24: PyEval_EvalFrameEx + 0x7ed2 (0x7fb464265dd2 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #25: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #26: PyEval_EvalFrameEx + 0x7ed2 (0x7fb464265dd2 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #27: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #28: <unknown function> + 0x868b5 (0x7fb4641dd8b5 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #29: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #30: PyEval_EvalFrameEx + 0x3077 (0x7fb464260f77 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #31: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #32: <unknown function> + 0x867e0 (0x7fb4641dd7e0 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #33: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #34: <unknown function> + 0x63095 (0x7fb4641ba095 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #35: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #36: <unknown function> + 0xc2a45 (0x7fb464219a45 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #37: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #38: PyEval_EvalFrameEx + 0x3b78 (0x7fb464261a78 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #39: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #40: <unknown function> + 0x868b5 (0x7fb4641dd8b5 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #41: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #42: PyEval_EvalFrameEx + 0x3077 (0x7fb464260f77 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #43: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #44: <unknown function> + 0x867e0 (0x7fb4641dd7e0 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #45: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #46: <unknown function> + 0x63095 (0x7fb4641ba095 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #47: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #48: <unknown function> + 0xc2a45 (0x7fb464219a45 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #49: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #50: PyEval_EvalFrameEx + 0x3b78 (0x7fb464261a78 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #51: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #52: <unknown function> + 0x868b5 (0x7fb4641dd8b5 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #53: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #54: PyEval_EvalFrameEx + 0x3077 (0x7fb464260f77 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #55: PyEval_EvalCodeEx + 0x80d (0x7fb4642676fd in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #56: <unknown function> + 0x867e0 (0x7fb4641dd7e0 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #57: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #58: <unknown function> + 0x63095 (0x7fb4641ba095 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #59: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #60: <unknown function> + 0xc2a45 (0x7fb464219a45 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #61: PyObject_Call + 0x43 (0x7fb4641ab373 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #62: PyEval_EvalFrameEx + 0x3b78 (0x7fb464261a78 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 frame #63: PyEval_EvalFrameEx + 0x7f86 (0x7fb464265e86 in /opt/python/2.7.15/lib/libpython2.7.so.1.0)
14:56:59 :
14:56:59 operation failed shape propagation:
14:56:59
14:56:59 def the_method(i0):
14:56:59 return i0.pow(3.14)
14:56:59 ~~~~~~ <--- HERE
14:56:59
```
You can see in the C++ backtrace that we go back into Python when shape propagation occurs. We should be able to extract a useful Python trace when this occurs; however, it seems to be lost. | oncall: jit | low | Critical |
332,841,232 | TypeScript | Import assignment should work with esnext targets |
## Search Terms
import assignment
## Suggestion
#22321 was closed by a bot that thought the issue was addressed, but it wasn't.
Import assignment should work with esnext targets
## Use Cases
JS module loaders for Node might not support importing CJS with `import` statements, so you would use `require` to import them. But we'd still like to bring in their types. This is what import assignment is designed for; it's just disallowed with esnext targets.
## Examples
This:
```ts
import * as m from './a-module.js';
import cjs = require('some-cjs');
```
should emit:
```ts
import * as m from './a-module.js';
const cjs = require('some-cjs');
```
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Needs Proposal | low | Critical |
332,854,335 | opencv | Stitching enhancements for multithreading a calibrated stitcher | It would be nice to have the possibility to use the composePanorama function of the Stitcher class on several threads once it is calibrated using either estimateTransform or stitch. For this, can you either make the method thread-safe, OR give the possibility to clone a calibrated stitcher in order to use it on a different thread.
##### System information (version)
- OpenCV => 3.4.1 but can move to 4.0
- Operating System / Platform => Windows 10 64 Bit
- Compiler => Visual Studio 2015
##### Detailed description
Add the possibility to clone a calibrated stitcher.
Thanks
| feature,priority: low,category: stitching | low | Minor |
332,867,901 | TypeScript | Duplicated jsdoc should error | It's possible to document parameters and function expressions in 2-3 different places in JSDoc. However, I can't think of a good reason to do so, and I suspect that it's (1) rare and (2) done by mistake. Here's an example:
```ts
/** @param a - the a */
function f(/* an a */ a) {
}
```
Today gives the jsdoc
```
an a
- the a
```
But should give ` an a` and an error on `@param a`: "Duplicate jsdoc".
The problem is even worse in JS, where jsdoc provides types. Here, the innermost type annotation provides the type and the rest are ignored. This should *definitely* be an error:
```js
/** @param {string} a */
function f(/* @type {number} */ a) {
}
```
This example should have an error on `@param a`: "Duplicate jsdoc"
Thanks to @bterlson for this idea. | Suggestion,In Discussion,checkJs,Domain: JavaScript | low | Critical |
332,871,586 | godot | Forwarding Signals in C# - not passing arguments as expected | **Godot version:**
v3.0.3-stable_mono
**OS/device including version:**
Windows 10
**Issue description:**
I have two signals:
- Signal1
- Signal2
I'm attempting to connect Signal1 to Signal2 so that when Signal1 is emitted Signal2 is emitted as well. I have a wrapper object that handles this for me so that I don't have to create a dummy method for every signal I would like to forward. Everything works up until Signal2 emits. Signal2 isn't receiving the arguments emitted from Signal1. I've implemented this concept in GDScript and it works without issue.
Here is a gist for a quick look at what I am doing: https://gist.github.com/dylmeadows/c45cb464b4cb3f0185e69f1e579aeae7
Here is a corresponding gist for the GDScript implementation:
https://gist.github.com/dylmeadows/e3d5433e2c7c24aa23d744e4a8139a2b
**Minimal reproduction project:**
[SignalForwarding-C#.zip](https://github.com/godotengine/godot/files/2107109/SignalForwarding-C.zip)
[SignalForwarding-GDScript.zip](https://github.com/godotengine/godot/files/2107110/SignalForwarding-GDScript.zip)
**Expected Outcome**
```
SignalForwarding::_Ready()
SignalObject::Connect(target=Main, methodName=DoSomething)
SignalObject::Forward(signal=SignalObject[Owner=Main,SignalName=Signal2])
SignalObject::Emit(args=[Hello World])
SignalObject::Emit(args=[Hello World])
SignalForwarding::DoSomething(s=Hello World)
```
**Actual Outcome**
```
SignalForwarding::_Ready()
SignalObject::Connect(target=Main, methodName=DoSomething)
SignalObject::Forward(signal=SignalObject[Owner=Main,SignalName=Signal2])
SignalObject::Emit(args=[Hello World])
SignalObject::Emit(args=[])
```
| enhancement,confirmed,topic:dotnet | low | Minor |
332,874,858 | pytorch | cleanup BLAS detection | PyTorch and Caffe2 use the following libraries as preferred for BLAS/LAPACK capabilities:
- MKL - first preference in unified build system
- Eigen - 2nd preference in unified build system, but first preference if ATen is not being compiled
- PyTorch finds OpenBLAS, Accelerate, cblas if MKL is not found -- finding OpenBLAS is useful for PPC64
- I have to explore what Caffe2 does if Eigen is not found
There are, however, additional subtleties.
MKL and Eigen are not used JUST for BLAS/LAPACK, but also for additional functionality:
- MKL for FFT, VML in pytorch
- Eigen for [implementing caffe2 operators](https://github.com/pytorch/pytorch/search?q=eigen&unscoped_q=eigen)
Additionally, we also use fortran-blas interface in pytorch, cblas interface in Caffe2 (if not Eigen), and eigen::MatrixMap interface if Eigen is found and `CAFFE2_USE_EIGEN_FOR_BLAS` is set.
Cleanup plan:
- separate what it means to detect MKL / Eigen from what it means to detect BLAS capabilities
- Find MKL at a single entry point across PyTorch, Caffe2, and if it's found, set a USE_BLAS="mkl" with BLAS_INCLUDE and BLAS_LIBRARIES variables set to MKL paths. This is in addition to setting MKL_FOUND
- [Should we] Find Eigen by default (unless USE_EIGEN=OFF, which we can set via setup.py) and use it for everything (building caffe2 operators etc.) except BLAS (i.e. only not set CAFFE2_USE_EIGEN_FOR_BLAS). If MKL was not found, use Eigen for BLAS as well. cc: @Yangqing for guidance.
- On PPC64 and OSX, should we prefer OpenBLAS and Accelerate respectively (instead of Eigen) for the BLAS capabilities? cc: @Yangqing
cc @malfet @seemethere @walterddr @jianyuh @nikitaved @pearu @mruberry @heitorschueroff | module: build,triaged,module: linear algebra | low | Minor |
332,880,321 | neovim | Make terminal buffer behave more like normal buffers | I notice the terminal buffer has a few things which are inconsistent with "normal" text buffers. I think it would be better and more logical if terminal buffers work more like regular buffers.
# `set list` not working in terminal buffers.
One thing which is inconsistent with normal buffers is that `set list` will not work on terminal buffers.
# Missing output becomes blank lines
When a terminal window has been opened and the user has typed a few commands, such that the whole terminal window is not filled up, and the user then exits terminal mode, there are a lot of blank lines in the terminal buffer.
Example (user is in terminal mode):
+-------------------------------------------------------------------------+
|$ ls |
|decode.c encode.c executor.c gc.c typval.c typval_encode.h |
|decode.h encode.h executor.h gc.h typval_encode.c.h typval.h |
|$ █ |
| |
| |
| |
| |
| |
|--TERMINAL-- |
+-------------------------------------------------------------------------+
What happens when user exits from terminal mode:
+-------------------------------------------------------------------------+
|$ ls |
|decode.c encode.c executor.c gc.c typval.c typval_encode.h |
|decode.h encode.h executor.h gc.h typval_encode.c.h typval.h |
|$ |
| |
| |
| |
| |
|█ |
| |
+-------------------------------------------------------------------------+
The current behavior is inconsistent because the missing `~`-signs for missing text will show up and the user cannot move past the last terminal output line if and only if the terminal buffer output overflows the current window.
I think it would make more sense if lines which haven't been output by the terminal job will become missing lines in the buffer, even when the terminal output have not overflown.
# Proposal
I think it would be better if `set list` works in the terminal buffer, and lines which have not been output by the terminal application will always show up as empty lines in the terminal buffer.
A side effect of having missing lines not becoming blank lines in the terminal buffer is that the cursor position can't start at the lower left corner anymore when exiting terminal mode. I propose that this behavior is changed so the cursor position when exiting terminal mode will always be the position which the cursor had when inside terminal mode.
What is proposed to happen when user exits from terminal mode (note the `~` characters indicating missing lines in the buffer):
+-------------------------------------------------------------------------+
|$ ls |
|decode.c encode.c executor.c gc.c typval.c typval_encode.h |
|decode.h encode.h executor.h gc.h typval_encode.c.h typval.h |
|$ █ |
|~ |
|~ |
|~ |
|~ |
|~ |
| |
+-------------------------------------------------------------------------+
Implementing this should be possible. Vim has implemented the kind of behavior which I propose.
| enhancement,terminal | low | Major |
332,901,634 | node | Inconsistent behavior of path.basename(path, ext) | I believe this was introduced somewhere in https://github.com/nodejs/node/pull/5123, which changed the behavior of the `ext` argument.
This is observed on all supported branches and was even recently backported to 4.x.
Documentation:
https://nodejs.org/api/path.html#path_path_basename_path_ext
Observe the input and try to predict the output:
```console
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x))
[ 'a', 'a', 'a' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'b'))
[ 'a', 'a', 'a' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'a'))
[ '', 'a', 'a' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'a/'))
[ 'a', '', 'a' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'a//'))
[ 'a', 'a', '' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'aa'))
[ 'a', 'a/', 'a//' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'bb'))
[ 'a', 'a', 'a' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'aaa'))
[ 'a', 'a', 'a//' ]
> ['a', 'a/', 'a//'].map(x => path.posix.basename(x,'aaaa'))
[ 'a', 'a', 'a' ]
> ['dd', '/dd', 'd/dd', 'd/dd/'].map(x => path.posix.basename(x))
[ 'dd', 'dd', 'dd', 'dd' ]
> ['dd', '/dd', 'd/dd', 'd/dd/'].map(x => path.posix.basename(x, 'd'))
[ 'd', 'd', 'd', 'd' ]
> ['dd', '/dd', 'd/dd', 'd/dd/'].map(x => path.posix.basename(x, 'dd'))
[ '', 'dd', 'dd', 'dd' ]
> ['dd', '/dd', 'd/dd', 'd/dd/'].map(x => path.posix.basename(x, 'ddd'))
[ 'dd', 'dd', 'dd', 'dd/' ]
```
There are more, but all the inconsistencies with the previous behavior involve at least one of those:
1. Either the `path` ends with `/`,
2. Or `ext` includes `/`,
3. Or `ext` equals to the actual resolved basename (i.e. `path.endsWith('/' + ext)`).
More specifically, the following check _covers_ all the cases of inconsistent behavior to my knowledge:
`path.endsWith('/') || ext.includes('/') || path.endsWith('/' + ext)`
(note that it also includes cases of consistent behavior).
---
Reminder: before #5123, this was how `ext` behaved:
```js
if (ext && f.substr(-1 * ext.length) === ext) {
f = f.substr(0, f.length - ext.length);
}
```
I.e. it just sliced off the suffix (**after** doing everything else). | help wanted,discuss,path | low | Major |
332,903,326 | TypeScript | Add related error spans for getter/setters with different types | Now that we support multiple related spans for errors (#10489, #22789, #24548), we'd like to improve an existing error message.
Currently, we provide a diagnostic for a pair of `get`/`set` accessor's types not matching:
Code:
```ts
let x = {
get foo() { return 100; },
set foo(value: string) { }
}
```
Current error:
```
'get' and 'set' accessor must have the same type.
```
We'd like to give a better error message. For example:
Primary span:
```
A 'get-' and 'set-' accessor must have the same type, but this 'get' accessor has the type '{0}'.
```
Related span:
```
The respective 'set' accessor has the type '{0}'.
``` | Suggestion,Domain: Error Messages,Domain: Related Error Spans,Experience Enhancement | medium | Critical |
332,905,684 | go | cmd/compile: possible missed optimization in append benchmark | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.10.3 linux/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN="/home/manlio/.local/bin"
GOCACHE="/home/manlio/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/manlio/.local/lib/go:/home/manlio/code/src/go"
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build931873105=/tmp/go-build -gno-record-gcc-switches"
### What did you do?
I wrote a simple benchmark to check the performance of *append* versus *copy*.
The benchmark is here:
https://play.golang.org/p/jA7Fb0oON6Z
The benchmark result is:
```
Benchmark_Append-4 30000000 43.1 ns/op 0 B/op 0 allocs/op
Benchmark_Copy-4 30000000 43.0 ns/op 0 B/op 0 allocs/op
```
The unexpected result is when *bug* is set to true. In this case the benchmark results are:
```
Benchmark_Append-4 30000000 37.2 ns/op 0 B/op 0 allocs/op
Benchmark_Copy-4 30000000 43.2 ns/op 0 B/op 0 allocs/op
```
The same result is produced when I change *bug* from a const to a var, even if it is set to *false*.
When I set *GOARCH=386* with *bug=true*, the benchmark result is:
```
Benchmark_Append-4 20000000 63.5 ns/op 0 B/op 0 allocs/op
Benchmark_Copy-4 20000000 59.6 ns/op 0 B/op 0 allocs/op
```
This seems to be an issue with the amd64 compiler.
This is the assembly listing when *bug* is false:
https://pastebin.com/kDVTypHF
and this is the assembly listing when *bug* is true
https://pastebin.com/VErtqZw6
This is the discussion of golang-nuts:
https://groups.google.com/forum/#!topic/golang-nuts/lJvBonZg62g
### What did you expect to see?
The benchmark result should be the same, with *bug* set to *false* or bug set to *true*.
### What did you see instead?
When *bug* is set to *true*, the *Append* benchmark is faster than the *Copy* benchmark. | Performance,NeedsInvestigation,compiler/runtime | low | Critical |
332,917,826 | kubernetes | PVC/PV conformance testing discussion |
**Is this a BUG REPORT or FEATURE REQUEST?**:
@kubernetes/sig-storage-feature-requests
As a follow up to the conformance testing discussion in https://github.com/kubernetes/features/issues/498, we had a meeting to brainstorm ideas around conformance testing of PVCs/PVs. Some ideas we came up with:
* The core controller paths with PV controller (both static and dynamic provisioning), attach/detach, kubelet volume manager mount/unmount can be part of a core Kubernetes suite using a mock CSI plugin. This would only test the Kubernetes control path and the PVC/PV lifecycle management. Even something like default StorageClasses could be tested if we made the mock CSI driver a default StorageClass (and disabled the provider-specific default StorageClass if preinstalled)
* Data path tests (ie reading/writing), data persistence during disruption/errors (rescheduling pods on other nodes) requires a real volume plugin implementation and is a good candidate for a "persistent volumes profile" conformance suite. The behavior here is very volume-plugin dependent, and not all providers may have a "default" volume plugin that they include. So this may actually be more of a volume plugin conformance suite, instead of a provider suite, where each volume plugin can be certified to meet some standard behavior (ie attach/detach handling in error conditions, read only behavior, permission handling, etc). Providers could potentially list which persistent volume plugins they include out of the box.
Some action items/questions:
* [sig-architecture] Is having a core conformance suite that tests the control path for volume handling with a mock plugin that doesn't actually run a real user experience useful? It would indicate that your provider at least runs all the necessary controllers.
* [sig-architecture] Is having a profile conformance suite that requires providers to include support for at least one volume plugin out of the box useful? This would map to user experience much more than the former.
* [sig-storage] For sure we should have a CSI plugin conformance standard (that is being handled separately outside of Kubernetes). What about in-tree plugins?
* [@msau42] Once the above questions are answered, come up with a plan for:
* what tests can go in each suite
* what work needs to be done to modify existing tests, add new tests to the conformance suites
* multi-release plan for adding those tests
cc @WilliamDenniss @bgrant0607 @AishSundar @pospispa @saad-ali | priority/backlog,sig/storage,kind/feature,sig/architecture,area/conformance,lifecycle/frozen | medium | Critical |
332,925,962 | youtube-dl | [SBS] sbs ondemand: some videos not working | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.14*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.06.14**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
---
I just tried to download 22 videos from https://sbs.com.au/ondemand
19 worked perfectly. 3 repeatedly gave errors.
I have verified that I can get those 3 to play inside my browser (Google Chrome) although oddly I had to refresh the browser page a couple of times on each video before it played successfully.
Note that these videos are *probably* region-locked to Australian users only, though I haven't confirmed that.
**EDIT (9 days later): The three specific videos below now DO download correctly, using exactly the same settings & version of youtube-dl as was originally tried, so something on the server has changed. However, another user is having quite similar problems with some _other_ SBS videos; see below.**
The 3 --verbose mode logs are below:
(1)
```
youtube-dl.exe -o "C:\DOWNLOADS\John McCain Maverick.%%(ext)s" --restrict-filenames --ignore-errors --continue --retries 10 --skip-unavailable-fragments --fragment-retries 10 --convert-subs srt --no-overwrites --recode-video mp4 --postprocessor-args "-codec copy" "https://www.sbs.com.au/ondemand/video/1242879043727/john-mccain-maverick" --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-o', 'C:\\DOWNLOADS\\John McCain Maverick.%%(ext)s', '--restrict-filenames', '--ignore-errors', '--continue', '--retries', '10', '--skip-unavailable-fragments', '--fragment-retries', '10', '--convert-subs', 'srt', '--no-overwrites', '--recode-video', 'mp4', '--postprocessor-args', '-codec copy', 'https://www.sbs.com.au/ondemand/video/1242879043727/john-mccain-maverick', '--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.06.14
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg N-91288-g29cddc99cd, ffprobe N-91288-g29cddc99cd
[debug] Proxy map: {}
[SBS] 1242879043727: Downloading JSON metadata
[ThePlatform] 60RwkWWN2GVN: Downloading SMIL data
[ThePlatform] 60RwkWWN2GVN: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\YoutubeDL.py", line 792, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\common.py", line 500, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\theplatform.py", line 296, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\common.py", line 1190, in _sort_formats
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
(2)
```
youtube-dl.exe -o "C:\DOWNLOADS\The Queen's Favorite Animals.%%(ext)s" --restrict-filenames --ignore-errors --continue --retries 10 --skip-unavailable-fragments --fragment-retries 10 --convert-subs srt --no-overwrites --recode-video mp4 --postprocessor-args "-codec copy" "https://www.sbs.com.au/ondemand/video/1254957123965/the-queens-favourite-animals" --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-o', "C:\\DOWNLOADS\\The Queen's Favorite Animals.%%(ext)s", '--restrict-filenames', '--ignore-errors', '--continue', '--retries', '10', '--skip-unavailable-fragments', '--fragment-retries', '10', '--convert-subs', 'srt', '--no-overwrites', '--recode-video', 'mp4', '--postprocessor-args', '-codec copy', 'https://www.sbs.com.au/ondemand/video/1254957123965/the-queens-favourite-animals', '--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.06.14
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg N-91288-g29cddc99cd, ffprobe N-91288-g29cddc99cd
[debug] Proxy map: {}
[SBS] 1254957123965: Downloading JSON metadata
[ThePlatform] lHPeY8Z_w7Fy: Downloading SMIL data
[ThePlatform] lHPeY8Z_w7Fy: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\YoutubeDL.py", line 792, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\common.py", line 500, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\theplatform.py", line 296, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\common.py", line 1190, in _sort_formats
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
(3)
```
youtube-dl.exe -o "C:\DOWNLOADS\Australia With Simon Reeve Ep 1.%%(ext)s" --restrict-filenames --ignore-errors --continue --retries 10 --skip-unavailable-fragments --fragment-retries 10 --convert-subs srt --no-overwrites --recode-video mp4 --postprocessor-args "-codec copy" "https://www.sbs.com.au/ondemand/video/36767811596/australia-with-simon-reeve" --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-o', 'C:\\DOWNLOADS\\Australia With Simon Reeve Ep 1.%%(ext)s', '--restrict-filenames', '--ignore-errors', '--continue', '--retries', '10', '--skip-unavailable-fragments', '--fragment-retries', '10', '--convert-subs', 'srt', '--no-overwrites', '--recode-video', 'mp4', '--postprocessor-args', '-codec copy', 'https://www.sbs.com.au/ondemand/video/36767811596/australia-with-simon-reeve', '--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.06.14
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg N-91288-g29cddc99cd, ffprobe N-91288-g29cddc99cd
[debug] Proxy map: {}
[SBS] 36767811596: Downloading JSON metadata
[ThePlatform] DulgNCe6ndgK: Downloading SMIL data
[ThePlatform] DulgNCe6ndgK: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\YoutubeDL.py", line 792, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\common.py", line 500, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\theplatform.py", line 296, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp6bcc20cp\build\youtube_dl\extractor\common.py", line 1190, in _sort_formats
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
``` | geo-restricted,account-needed | medium | Critical |
332,944,715 | pytorch | Update tests to no longer spew debug info | See recent CI, which spews the following:
```
23:49:53 test_anomaly_detect_nan (__main__.TestAutograd) ... No forward pass information available.
23:49:53 Enable detect anomaly during forward pass for more informations.
23:49:53 No forward pass information available.
23:49:53 Enable detect anomaly during forward pass for more informations.
23:49:53 test_autograd.py:2328: DeprecationWarning: Please use assertRaisesRegex instead.
23:49:53 with self.assertRaisesRegexp(RuntimeError, "Function 'MyFuncBackward' returned nan values in its 0th output."):
23:49:53 ok
```
cc @mruberry @VitalyFedyunin @walterddr | module: tests,triaged,better-engineering | low | Critical |
332,959,770 | rust | [control flow analysis] Special treatment for `Err( )?` | I don't know whether this issue should be a `feature request` or a `bug report`.
I have code like this:
```Rust
#[derive(Debug, Fail)]
#[fail(display = "Target can't open: {:?}", path)]
struct NotExists {
path: PathBuf
}
#[derive(Debug, Fail)]
#[fail(display = "Target is not file: {:?}", path)]
struct NotFile {
path: PathBuf
}
fn foo() -> Result<(), failure::Error> {
let path = PathBuf::from("Rust/is/the/best/language");
if path.exists() {
Err(NotExists { path })?
}
if path.is_dir() {
Err(NotFile { path })?
}
unimplemented!()
}
```
The compiler error is:
```
24 | Err(NotExists { path })?
| ---- value moved here
25 | }
26 | if path.is_dir() {
| ^^^^ value used here after move
```
Obviously, `Err(foo)?` is equivalent to `return Err(foo.into());`, which means there should be no `use after move` error.
There are two other ways to achieve the goal:
```Rust
if path.exists() {
Err(NotExists { path })?
} else if path.is_dir() {
Err(NotFile { path })?
} else {
unimplemented!()
}
```
or
```Rust
if path.exists() {
return Err(NotExists { path }.into());
}
```
I think that with control flow analysis, the compiler should be able to accept my original code (the `Err(foo)?` version). | A-type-system,T-lang,A-inference,C-feature-request,T-types | low | Critical |
332,982,286 | go | syscall, os: opening hidden file for write access on Windows gives ACCESS_DENIED_ERROR | ### What version of Go are you using (`go version`)?
go version go1.10.2 windows/amd64
also reproduces on go version go1.8.7 windows/386
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
set GOARCH=amd64
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
### What did you do?
I tried to write to a hidden text file on Windows 10.
I made a separate project to reproduce the issue here: https://github.com/FabienTregan/TestWritingToHiddenFilesInGo (with a bit more information in the readme.md file)
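A condensed sketch of the repro (my own minimal version, not the exact code from the linked project; the `attrib` call assumes Windows):
```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
)

func main() {
	// Create a temp file, then mark it hidden (a Windows-only file attribute).
	f, err := ioutil.TempFile("", "write_to_hidden_file_test_")
	if err != nil {
		panic(err)
	}
	name := f.Name()
	f.Close()
	if err := exec.Command("attrib", "+H", name).Run(); err != nil {
		panic(err)
	}

	// Re-opening the hidden file for write access (os.Create uses
	// O_RDWR|O_CREATE|O_TRUNC) fails with "Access is denied."
	out, err := os.Create(name)
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	out.WriteString("Some text")
	out.Close()
}
```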
### What did you expect to see?
Some text written in the file
### What did you see instead?
Received unexpected error:
open C:\Users\[USER NAME]\AppData\Local\Temp\write_to_hidden_file_test_204946587: Access is denied.
| OS-Windows,NeedsInvestigation,compiler/runtime | low | Critical |
332,991,860 | vscode | [folding] unfold when pressing enter on last line |

Issue Type: <b>Bug</b>
Hi,
After collapsing a Command (with code in it) I can't press Enter after it.
In this example I can't press Enter between fds and the comment.
VS Code version: Code 1.21.1 (79b44aa704ce542d8ca4a3cc44cfca566e7720f1, 2018-03-14T14:47:13.351Z)
OS version: Windows_NT ia32 10.0.17134
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 x 4008)|
|Memory (System)|15.92GB (8.93GB free)|
|Process Argv|E:\ProgramFiles(x86)\Microsoft VS Code\Code.exe|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (1)</summary>
Extension|Author (truncated)|Version
---|---|---
LiveServer|rit|5.0.0
</details>
Reproduces only with extensions
<!-- generated by issue reporter --> | feature-request,editor-folding | low | Critical |
333,004,263 | TypeScript | Project compiles OK but hangs when --watch is added | **TypeScript Version:** 3.0.0-dev.20180616 and 2.9.2
**Search Terms:** typescript watch --watch hangs stalls slow
Hi I have minimized one of my projects to an example source (which I have [attached](https://github.com/Microsoft/TypeScript/files/2108402/tsc-watch-hangs.zip) [1]) that:
1. Compiles fine when not using watch mode (takes about 5 seconds to compile on my machine)
2. Hangs the compiler when using watch mode (doesn't complete its compile).
To compile the example:
1. download the [zip](https://github.com/Microsoft/TypeScript/files/2108402/tsc-watch-hangs.zip) and unzip it and enter that directory
2. run `yarn install` to add the project's deps.
3a. run `yarn tsc-ok` for the example where it does compile
3b. run `yarn tsc-bad` for the example where it doesn't
BTW: Is there some way for me to tell the compiler to dump out the source files it is processing as it processes them? It would be *extremely* helpful for tracking this down to the specific issue.
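As a side note (an aside added for reference, not from the original report): `tsc` has a few built-in diagnostics that may help narrow this down: `--listFiles` prints every file included in the program, `--traceResolution` logs module resolution as it happens, and `--diagnostics` prints timing statistics. Something along these lines, assuming tsc is installed in the project's node_modules:
```
./node_modules/.bin/tsc --watch --diagnostics --traceResolution
```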
1: Source Code: [tsc-watch-hangs.zip](https://github.com/Microsoft/TypeScript/files/2108402/tsc-watch-hangs.zip) | Bug,Domain: Declaration Emit | medium | Major |
333,008,404 | flutter | Flutter Doctor reports the flutter version as v0.0.0-unknown if repository is cloned with depth 1. | On CI environments where the Flutter repository may be downloaded with `--depth=1` as an option to Git, `flutter doctor` says the installation is fine but won't actually build anything because the Flutter version it reports is `v0.0.0-unknown`. Builds will then fail on package constraint checks.
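Rough illustration of the failure mode (a sketch of the CI setup, not copied from any specific CI config):
```
# Shallow checkout, as commonly done on CI:
git clone --depth=1 https://github.com/flutter/flutter.git
flutter/bin/flutter doctor   # reports the Flutter version as v0.0.0-unknown
```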
The workaround is to not specify the option, at the cost of longer checkouts. | c: new feature,tool,t: flutter doctor,has reproducible steps,P3,found in release: 2.6,team-tool,triaged-tool | low | Minor |
333,018,640 | flutter | CupertinoNavigationBar leading is too high | For some reason the leading in CupertinoNavigationBar is too high. Any ideas?
<img width="315" alt="screen shot 2018-06-16 at 6 10 46 pm" src="https://user-images.githubusercontent.com/666539/41502812-b0308544-7190-11e8-9186-60ecad54af75.png">
```dart
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Flutter Demo',
theme: new ThemeData(
primarySwatch: Colors.blueGrey,
),
home: CupertinoPageScaffold(
navigationBar: new CupertinoNavigationBar(
leading: Text("lead"),
middle: Text("Title"),
trailing: Text("Trailing")
),
child: Center(child: Text("Test!")),
)
);
}
}
``` | framework,a: fidelity,f: cupertino,has reproducible steps,P3,workaround available,team-design,triaged-design,found in release: 3.16,found in release: 3.19 | medium | Major |
333,027,017 | TypeScript | [feat] Allow use of downlevelIteration on es2015 or greater | ## Search Terms
downlevelIteration es2015 es6
## Suggestion
A new flag or way to enable `downlevelIteration` when targeting es2015 or greater
Maybe `forceDownlevelIteration` ?
## Use Cases
`for .. of` is slow: https://jsperf.com/for-vs-forof
This request comes from my recent work in Node.js, where you want to use higher targets for better runtime performance but have to replace `for .. of` with a plain `for` loop due to its inefficiencies.
Devs shouldn't be forced to write less ergonomic code for performance reasons when TypeScript could simply do it for them.
An example of having to refactor due to increasing the language target:
https://github.com/angular/angular/pull/24534
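Roughly the kind of change involved (an illustrative sketch, not code from that PR; `items` and `handle` are placeholders):
```ts
declare const items: string[];
declare function handle(item: string): void;

// Ergonomic; on es2015+ targets this is emitted as-is and goes through the
// iterator protocol at runtime, which is what the jsperf link above measures.
for (const item of items) {
  handle(item);
}

// The hand-written replacement used in performance-sensitive code:
for (let i = 0; i < items.length; i++) {
  handle(items[i]);
}
```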
## Examples
N/A
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
| Suggestion,In Discussion | low | Major |
333,029,061 | vscode | [Feature Request] Extension Permissions, Security Sandboxing & Update Management Proposal | I believe that Visual Studio Code should support some kind of "Extension Permission Management", complete with prompts, warnings, opt-in, and opt-out, similar to what has been supported for some time now with Chrome, Firefox, and other browsers.
# Reference Screenshot
I've provided, for reference, some screenshots showing Extension Permission and update prompts and management UIs in the below screenshot, as well as others torwards the end of this proposal.
#### Chrome prompting to approve additional permissions when updating an extension:

_(See additional screenshots at the very bottom.)_
# Scope and Benefits
I have proposed, in detail here, how Extension Permissions management could function and be exposed to users, including descriptions of dialogs for prompting users to allow/deny permissions on extension install vs. update, changes to the Extensions Sidebar and Extension Details Marketplace pages, grouping/managing extensions by Category/Collection in VSCode, specific warnings and when to show them, the types of permissions that could be defined (and whether they may default to opt-out or opt-in), the APIs that could be provided, how extensions could operate with more limited or conditional functionality, and how extension safety reporting could be crowdsourced.
I'm also proposing that users can disable Auto-Update behavior for specific extensions, which, besides being useful in its own right, could allow users to stick with previous versions which require fewer permissions, manually review updates for higher-risk extensions, or avoid updating to problematic versions of extensions until issues are resolved.
# Related Issues & Discussions
As discussed in Issue #9539 ("Visual Studio Code should update itself silently", regarding enabling silent auto-updates of VSCode and extensions) by @alexhass, @kasajian, myself and others, there are some security concerns regarding what permissions are granted to extensions when installing or updating them. As seen there, without such controls, some users aren't even comfortable installing many extensions, allowing them to auto-update once they have, or even allowing VSCode core itself to auto-update.
# Proposed User Stories / Features for Extension Permissions Management
Specifically, I propose the following extension permission management features, prompts, and use cases:
## 1. **Display Extension Permission requirements**
1. Clearly labels what permissions are required by each extension in Extensions Details page, with Permission Name (plus Icon) shown underneath the Disable/Uninstall buttons for each permission required/requested.
2. Clearly label extension permissions, ideally via Icons (along with Name, Author, Description and Rating) in Extensions Sidebar (showing Installed and Available Extensions), at least as Icons next to either A) to left of # of Downloads (Cloud icon), B) to left of Settings (gear icon), or C) to right of Name and Version #, with them grayed out if denied
## 2. **Prompt Users to Approve High-Risk Permissions**
1. Notify user on extension install of potentially dangerous extension permissions and provide ability to opt-out of optional permissions or cancel install, by showing an "Approve Permissions for (ExtensionName)?" (or "Allow (ExtensionName) To?" or "Approve Extension Permissions") dialog:
- Only show this dialog if more than just basic On-Demand Actions permissions are requested.
- Provide VSCode options to skip this prompt if only other common, usually safe permissions as requested (like possibly "Auto-Complete"?)
- "Approve" (or "Allow") and "Cancel Install" dialog buttons
- Checkboxes (or toggle buttons) for each "Optional Permission" (with "(Optional") shown after permission name)
- For Required Permissions, show Checked (but disabled, so can't modify) checkboxes shown next to required permissions, possibly with "(Required)" shown after permission name
- [Maybe, Low Priority] If user attempts to uncheck a required permission, possibly could just suggest they "Don't install", "Disable" or "Install/Downgrade to Previous Version"
2. When Updating Extensions, show users "Approve New Permissions for (ExtensionName)?" prompt
- Based on dialog shown when first installing
- Only showing if/when there are new not-yet-approved permissions to review
- Only shown if there are New Requested, New Required, or Now Required (previously optional and rejected) permissions which user hasn't already approved.
- Show New (Not-yet-Approved or Now Required) permissions at the top
- If permission was previously denied but is now Required instead of Optional, highlight it, and warn user may want to Cancel Update + Disable Auto-Update for that one extension instead.
- Previously prompted permissions (whether or not previously approved, optional, required, etc.) are bottom, with space in between, so easy to review and modify here, but doesn't crowd the important changes.
- Show dialog buttons: "Approve", "Skip Update" (only prompt again after next update), and "Never Update" (disabling auto-update).
- [Advanced / Later / Maybe] Could ask user, after chose "Skip/Never Update", whether want to prompt again if/when required permissions change (optionally showing list of all New/Now Required permissions to check or uncheck waiting for, though may not be needed).
## 3. **Extensions functioning with Optional Permissions denied**
1. Extensions should be able to run with limited functionality when optional permissions are denied, yet be able to prompt the user when they try to use a feature disabled by denied permissions:
2. You could provide an API allowing an extension to show the Approve Extension Permissions dialog with a custom message (maybe even a custom title too), together with an API allowing extensions to check what permissions are currently approved for them (a rough sketch of such an API follows this list).
3. Possibly could limit extensions from checking permissions available to other extensions, in case would be security risk with them polling other extensions to find other extensions to automate/interact with as a workaround to their own denied permissions.
4. Provide "Never Ask Again" button so that a prompt is never shown again for an extension, possibly with option to disable prompts just for just a) specific feature (permission use case/prompt message), b) specific permission, or c) all permissions for this extension.
5. Can (disabled in VSCode Options) show status bar message and/or play error sound whenever attempt to use a feature (eg. hotkey, F1 action, etc.) unavailable due to denied permissions
6. Provide an API making it possible to flag an action or context menu item as requiring a permission, so that it will automatically show "(Disabled due to Permissions)" or "(Disabled)" after action names (eg. in F1 command line, menus, etc.), and/or show a permission prompt if it is clicked (or its hotkey is used) anyway.
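A very rough sketch of what the extension-facing side of this could look like (entirely hypothetical; none of these APIs exist in the `vscode` module today, and all names are placeholders):
```ts
// Hypothetical permission API surface (placeholders, not a real VSCode API):
interface PermissionsApi {
  // Check whether a permission has been approved for this extension.
  has(permission: string): Promise<boolean>;
  // Show the "Approve Extension Permissions" dialog with a custom message and
  // resolve with the subset of permissions the user approved.
  request(permissions: string[], message?: string): Promise<string[]>;
}

declare const permissions: PermissionsApi;

// Example: a feature that degrades gracefully when "fileSystem.write" is denied.
async function exportReport(
  writeToDisk: () => Promise<void>,
  copyToClipboard: () => Promise<void>,
): Promise<void> {
  if (await permissions.has('fileSystem.write')) {
    return writeToDisk();
  }
  const granted = await permissions.request(
    ['fileSystem.write'],
    'Exporting a report requires writing a file to disk.',
  );
  if (granted.includes('fileSystem.write')) {
    return writeToDisk();
  }
  return copyToClipboard(); // limited fallback when the permission stays denied
}
```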
## 4. **Disable higher-risk permissions by default**
1. Have dangerous permissions like Full File System Control disabled by default in permission prompts.
2. If required (vs. optional), can require user to manually check it before proceeding or warn user about the risks and how isn't needed for most extensions, and how may want to not proceed).
3. Can be based on selected "Extension Type/Category" (eg. Language Syntax, Language Syntax + Auto-Formatting, File Management, etc.)
## 5. **Safety Reporting**
1. Allow Users to Flag Extensions as Safe vs. Suspicious (in addition to Ratings / Reviews), to crowdsource security and review
2. Allowing reporting potentially malicious extensions for investigation
3. Possibly affects what, if any, type and severity of warnings are shown in Approve Permissions dialog
4. Possibly affects whether higher-risk optional permissions are enabled or disabled by default.
## 6. **Modify Permissions Anytime**
1. In Extension Sidebar, show "Enable/Disable (Permission Name) (Icon)" entry for each requested/required permission as menu items under the Gear icon (Settings menu showing Disable, Uninstall, etc. currently).
2. In Extensions Sidebar, ideally also allow clicking permission icons to enable/disable (gray out).
3. In Extensions Detail Page, allow clicking each Permission listed below the Disable, Uninstall, etc. buttons to Approve/Deny them.
## 7. **Auto-Update options per Extension**
1. Users could opt-out of Auto-Update for specific extensions, with toggle button next to Disable/Uninstall in Extensions Sidebar (under gear icon menu) and next to those buttons in Extensions Detail Page.
2. This could allow users to stick with previous versions before new high-risk permissions became required
3. This could allow avoiding updating to problematic versions of extensions until issues are resolved.
4. This could allow users to manually review/approve updates based on reviews and changelog for higher (security or reliability) risk extensions
5. This could be controlled per extension without disabling globally as may be desired by default for most extensions.
6. Can be set per extension to "Default" vs Auto-Update vs. Disable Automates, like with Firefox, with Default behavior controlled through global setting.
Could provide Undo Update button or choose from version history (like with Chrome/Firefox extension stores) on extension details page, to enable rollback to previous version after an update causes issues, instead of just disabling until if/when ever fixed.
## 8. **Extension Categories Enhancements**
1. Extension Type Categories benefits and use cases:
- Allow extensions to be browsed or filtered by category from within VSCode and more easily in marketplace, in addition to how are used as Collections in Marketplace currently, possibly allowing extensions to belong to multiple categories, and supporting subcategories.
- Which permissions are selected by default in Approve Permissions dialog, and when warnings (for exceptionally high-risk, unusual permission requirements) are shown to user in that dialog can be used on the extension type.
- This also makes it very clear to the user - without relying on them reviewing easy-to-overlook detailed permission requirements - at a glance what kind of permissions are likely to be required.
- This can also be useful in general for helping users to find extensions, like done with Chrome and Firefox.
- This can also make it very clear to users how advanced an extension is, with just Syntax Highlighting vs. Auto-Complete vs. Run/Debug, when trying to find an extension for a particular language.
2. Show "Extension Type/Category" near the top of the Extension Details Page:
- Category Customizable by Author, but limited based on permissions, eg. can't classify File System control extension as "On-Demand Actions"
- Showing this Category at either:
A) to right of Extension Name/ID at top, in parenthesis, B) to right of Author Name, C) a separate Line below Author, D) to the left of the Permissions Names/Icons row, or E) a separate line above the Permissions row.
- Allow Browsing and Filtering on Extensions website by Extension Type/Category
3. Group Extensions by Category in Extensions Sidebar
With grouping enabled/disabled via Icon next to "Clear Extensions Input", possibly allowing Expand/Collapse Groups, with options to group by:
- Extension Type/Category (overall, like Language Syntax, etc.)
- Permission (eg. File System, Auto-Save, Auto-Complete, On-Demand Actions), with extensions able to be shown multiple times under multiple groups.
4. **Possible Additional Extension Categories / Subcategories could include**
- **Language Syntax**
- Highlighting, maybe even auto-complete prompts supports (if user always chooses/confirms what to insert, vs. arbitrary, automated modification of any document contents
- **Language Syntax & Auto-Format**
- If want as separate category with much higher expected (and by-default enabled) permissions, and if don't just allow extensions belong to a couple different categories simultaneously.
- **Auto-Format (Auto Document Actions)**
- **Document Tabs Management**
- **Document Actions**
- **Menu Extensions**
- **Automation**
- **File Management**
- **Web-Connected**
## 9. Specific Permission Types could include
1. **On-Demand Document Actions**
- Shown in F1 command line or newly added context, etc. menu actions which the user would have to choose to perform.
- May not need permission (or at least don't show prompt) to register these kinds of actions.
2. **Automated Document Actions**
- Automate executing own (or even other built-in or other extension) actions (from F1, context menu, etc.) in response to events, timer, etc.,
- Possibly can split into separate permission for use of other extension and/or built-in actions.
3. **On-Demand Non-Document Actions**
- Automated use of VSCode features which don't just affect document contents.
4. **Automated Non-Document Actions (or just "Automation")**
- Use of Actions applying to more than just document content performed automatically instead of on-demand (via context menus or F1), such as instead based on event handling, timer, etc. or in response to certain types of document edits.
5. **Document Rendering (or Document/Editor Rendering/Display)**
- For custom spelling underline, indicators, showing collapsible regions, etc.
6. **User Interface Extension permissions**
- **Toolbar**
- **Context Menu**
- **Menu Bar**
- **Sidebar / Tool Windows**
- **Statusbar**
7. **File System Permissions**
- **Extension Data Files**
- Maybe allow extensions full control over files in folder only they have access to (except from other extensions without full file system control) without requiring approved permissions.
- **File Browsing**
- Read/list file and folder names (and possibly sizes, timestamps, etc.) for browsing.
- **File Reading**
- Read any file on disk, including those not opened as documents by the user.
- **File Modification**
- Modify or overwrite a file.
- Provide warning in description for this and similar permissions that this is not typically necessary for language extensions where user can choose whether to save file or whether Auto-Save Permission should be granted instead.
- Description / Tooltip:
Warning: This is a potential dangerous permission usually not needed for most extensions (such as most Language Syntax extensions) which, when approved, allows the extension to delete, move, rename, create, read, and modify any files or folders on your system (Instead of just those installed with it or otherwise modify contents of opened document tabs user can choose to save), so you should only enable it for extensions you trust and may want to deny this permission when optional or cancel install of extensions which don’t allow opt-out, especially if seems like this wouldn't be necessary for the type of extension.
- **File Deletion**
- Delete any existing or newly created file on disk.
8. **Networking** (or Internet, or Web Service Use)
9. **Open Files (as Document Tabs)**
10. **Auto-Save (Opened Documents)**
- If needed, can possibly split into separate permissions for Open Active Document (and only if allowed by document type/extension) vs. Auto-Save All Document Tabs.
11. **Manage Document Tabs**
- Reorganize, rename, save, close, or create new (but not necessarily Open Existing File as New Tab, if have that as a separate permission).
12. **Create New Documents** (opened as Document Tabs)
13. **Task Management**
14. **Source Control**
15. **Process Control**
- Interacting with processed had launched or with existing processes.
16. **Full System Control**
17. **Launching Processes**
- Alternative Names: "Execute" or "Run/Debug Code/App" or "Launch / Run / Debug"
- Description: Allow to start new processes or launch applications in background, such as is often required to Run, Build, or Debug code.
- You could possibly allow extensions requiring this or similar permissions (or provide to any extension, if/when needed) the ability to save contents of an open document with unsaved changes to a temporary file, and provide the file path to that temp file to the extension - but *without* providing the extension the ability to modify/overwrite that temp file by default. This would reduce risk of a malicious extension being able to save and execute arbitrary code into a temp file without that code being first shown in the opened document tab.
- You can even, if necessary, delay any process launching until X milliseconds after any extension-automated changes (eg. auto-formatting) made to opened document contents.
- Possibly can restrict, as defined in extension manifest and shown in Extension Details Page, what executable names are allowed. Then, at worst, the extension would have to inject malicious code into an open script document and require user
- Possibly can restrict (without requiring separate higher-risk permission) whether any variable command line args are allowed for it other than file name and pre-declared ones to prevent passing arbitrary code via command line to execute.
- Possibly separate permission for launching processes for exe's that are just bundled with extension vs. already installed on the system.
## 10. Sandboxing Extensions
What, if anything, has already been done to provide or attempt extension sandboxing or security with VSCode?
As I understand, VSCode is based on Electron which, by design, disables much of Chromium's facility for sandboxing to enable native API access. However, Electron and Node.js both have some facilities for sandboxing and security, and there are a few projects extending support for these, as detailed below:
### Electron/Node.js Sandboxing/Security References and Options to Consider
- [Electron's overview of security risks and options](https://electronjs.org/docs/tutorial/security) may be a useful reference for this discussion.
- [Electron's sandbox options](https://electronjs.org/docs/api/sandbox-option) Would these be of any help here, or anything else described in above articles?
- Here is an [Issue summing up progress of sandboxing support for Electron](https://github.com/electron/electron/issues/6712).
- [Some tips on sandboxing and security with Electron](https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2016/september/avoiding-pitfalls-developing-with-electron/)
Might the following be useful references for implementing Sandboxing in VSCode?
- @kewde's [Electron Sandboxing example app / template](https://github.com/kewde/electron-sandbox-boilerplate) for an Electron app using sandboxing for security
- [Great summary of sandboxing and XSS with Electron](https://blog.scottlogic.com/2016/03/09/As-It-Stands-Electron-Security.html), mentioning Brave browser as having developed or contributed code to support this.
- Might [enableMixedSandbox() in VSCode](https://github.com/Microsoft/vscode/search?q=enableMixedSandbox&unscoped_q=enableMixedSandbox) be applicable or [use of --enable-sandbox](https://github.com/electron/electron/issues/11631)?
- Would disabling node integration (eg. via "new BrowserWindow({ webPreferences: { nodeIntegration: false } });") and/or [Preload script + WebView](https://github.com/electron/electron/issues/1753) - as both discussed in [this Atom issue](https://github.com/electron/electron/issues/1753), as an example (though I understand VSCode isn't based on Atom) and as also [suggested at Hackernoon](https://hackernoon.com/electron-the-bad-parts-2b710c491547)- help here?
- Preload-script-based security
### Node.js / JavaScript Sandboxing Projects
Would any of Node.js's facilities for sandboxing / security possibly be applicable here, or any of the following projects providing sandboxing for Node.js or otherwise? (A minimal `vm` example follows this list.)
- [Node.js built-in VM](https://nodejs.org/api/vm.html#vm_vm_executing_javascript)
- [VM2](https://github.com/patriksimek/vm2) (from @patriksimek, like mentioned [here](https://medium.freecodecamp.org/running-untrusted-javascript-as-a-saas-is-hard-this-is-how-i-tamed-the-demons-973870f76e1c))
- @gf3's [Sandbox](https://github.com/gf3/sandbox)
- @auth0's [SandboxJS](https://github.com/auth0/sandboxjs)
- [electron-common-ipc](https://www.npmjs.com/package/electron-common-ipc)
- [Compute's Node.js sandboxing](https://blog.computes.com/new-javascript-secure-sandbox-405a4fca31ed)
- [Google Caja](https://github.com/google/caja)
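As a tiny illustration of the built-in option above (a sketch only; on its own, Node's `vm` module is explicitly not a hard security boundary, which is part of why projects like vm2 exist):
```js
const vm = require('vm');

// Run extension-provided code against an explicit, minimal context instead of
// the full Node globals, with a time budget.
const sandbox = { result: null };
vm.createContext(sandbox);
vm.runInContext('result = 2 + 2;', sandbox, { timeout: 50 });
console.log(sandbox.result); // prints 4; the script never saw `require` or `process`
```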
## Additional Reference Screenshots
You can see some additional good examples of Extension Permission prompts and management in the screenshots below:
#### Chrome prompting user to confirm higher risk permissions (and in language very clear to the user), when installing an extension:

#### Chrome prompting user to enable additional requested permissions when updating an extension:

#### Chrome allowing modifying some permissions for installed extensions, like "Allow in incognito":

#### Firefox allowing managing Auto-Update behavior for installed extensions:

#### Firefox allowing changing permissions from sidebar for installed plugins, such as to control "Ask to Activate" behavior:

### Labels
_Suggested additional labels for this issue:_
install-update
_Possible additional labels:_
api-proposal, extension-host
| feature-request,extensions,extension-host | high | Critical |
333,042,619 | opencv | resize: cannot specify both size and fx/fy | ##### System information (version)
- OpenCV => all
- Operating System / Platform => all
- Compiler => all
##### Detailed description
By reading the documentation it seems that it is possible to specify both parameters **dsize** for the destination size in pixels and **fx/fy** for a fine tuning of the scaling interpolation.
"..the size and type are derived from the src,dsize,fx, and fy"
"Either dsize or both fx and fy must be non-zero."
But looking at the code, only one of the two parameters is considered. The other is always calculated.
If dsize is specified then fx/fy are calculated/overwritten. If fx/fy are specified then dsize is calculated/overwritten.
So if I want fine tuning then I have to specify only fx/fy, leaving dsize set to zero.
##### Code in cv::resize
```.cpp
if( dsize.area() == 0 )
{
dsize = ...;
}
else
{
inv_scale_x = (double)dsize.width/ssize.width;
inv_scale_y = (double)dsize.height/ssize.height;
}
```
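A caller-side illustration of the consequence (my own example, not from the report): because dsize below is non-zero, the fx/fy arguments are recomputed internally and the fine tuning is silently ignored:
```.cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png"), dst;
    // dsize is non-zero, so inv_scale_x/inv_scale_y are recomputed from dsize
    // and the 1+0.499/cols fine tuning requested here has no effect at all.
    cv::resize(src, dst, cv::Size(src.cols, src.rows),
               1 + 0.499/src.cols, 1 + 0.499/src.rows, cv::INTER_CUBIC);
    return 0;
}
```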
##### Solution 1.
Update the documentation saying that only one of the two parameters should be specified, specifically fx/fy for fine tuning because slightly different values of fx/fy will produce different images even if the resulting image has the same size (as by design).
We could also make it clearer in the documentation that by specifying only the destination size, the caller doesn't care about fine-tuning the resize call. If the caller specifies the fx/fy parameters then fine tuning is required, leading to different results even with identical destination sizes.
##### Solution 2.
Update the code so that it is possible to specify both values: dsize to set the exact size in pixels and fx/fy to specify a fine tuning of the resizing.
| category: imgproc,RFC | low | Minor |
333,043,451 | opencv | resize: fx/fy parameters near to 1.0 are ignored | ##### System information (version)
- OpenCV => all
- Operating System / Platform => all
- Compiler => all
##### Detailed description
By specifying the fx/fy parameters it is possible to fine-tune the resizing, for example if I need to create multiple images for an animation with small increases in the resize. It works as designed except when the resulting image has the same size as the original image, and there is no workaround with the resize function.
We could also make it clearer in the documentation that by specifying only the destination size, the caller doesn't care about fine-tuning the resize call. If the caller specifies the fx/fy parameters then fine tuning is required, leading to different results even with identical destination sizes.
##### Code in cv::resize
```.cpp
// as designed these two resize calls will create two different images: OK
// for both calls the resulting image has double size compared to original image.
cv::resize(image, dest, cv::Size(), 2, 2);
cv::resize(image, dest, cv::Size(), 2+0.499/image.cols, 2+0.499/image.rows);
// these two calls generate the same image: NOT OK.
// for both calls the resulting image has same size as the original image
cv::resize(image, dest, cv::Size(), 1, 1);
cv::resize(image, dest, cv::Size(), 1+0.499/image.cols, 1+0.499/image.rows);
```
##### Solution
Update the code so that the original image is returned only if the fine tune scale parameter is set to exactly 1. E.g. In resize replace the following code:
```.cpp
if (dsize == ssize)
{
// Source and destination are of same size. Use simple copy.
src.copyTo(dst);
return;
}
```
with something like
```.cpp
if (inv_scale_x == 1. && inv_scale_y == 1.)
{
// Source and destination are of same size. Use simple copy.
src.copyTo(dst);
return;
}
```
| category: imgproc,RFC | low | Minor |
333,058,520 | opencv | warpAffine: correct coordinate system, documentation and incorrect usage | ##### System information (version)
Tested with
- OpenCV => 3.4.1
- Operating System / Platform => Linux
- Compiler => gcc
##### Detailed description
After some tests I found out that the coordinate system in warpAffine is translated by 0.5 pixels; in other words, the top-left origin pixel area goes from -0.5 to +0.5 on both the x and y axes. This is not intuitive and, without proper documentation, yields incorrect results. Example code, tutorials and regression test code in OpenCV use the warpAffine function incorrectly.
##### Example 1
I want to scale the source image by a factor of 10. Without applying a translation correction the resulting image is moved off by over 5 pixels.
```.cpp
// define an intuitive scaling matrix: NOT OK. Image is translated
double scale = 10;
cv::Matx23d matrix(scale, 0, 0,
0, scale, 0);
// the corrected scaling matrix: We have to translate by scale/2-0.5.
// Not intuitive at all and not documented.
double scale = 10;
cv::Matx23d matrix(scale, 0, scale/2-0.5,
0, scale, scale/2-0.5);
// Apply matrix
cv::Mat destImage;
cv::Mat image = imread("helloworld.png");
destSize = cv::Size(image.cols*scale, image.rows*scale);
cv::warpAffine(image, destImage, matrix, destSize, cv::INTER_CUBIC,
cv::BORDER_CONSTANT, cv::Scalar(255, 255, 255));
cv::imwrite("helloworldScaled.png", destImage);
```
##### Example 2
I want to rotate the image by 45 degrees around the center. If the original image was symmetric then the resulting image must be symmetric too. The center of the image is not (cols/2, rows/2) but (cols/2-0.5, rows/2-0.5)
```.cpp
// Wrong call used almost everywhere. The resulting image is no longer symmetric.
// Resulting image is translated by 0.5 pixels.
cv::Point2f centerImage(image.cols/2., image.rows/2.);
cv::Mat rotation = cv::getRotationMatrix2D(centerImage, 45, 1);
// Correct call.
cv::Point2f centerImage(image.cols/2.-0.5, image.rows/2.-0.5);
cv::Mat rotation = cv::getRotationMatrix2D(centerImage, 45, 1);
```
With a rotation of 90 degrees the whole image is translated by 1 pixel in the wrong call. It's a small error but still an error. Note: The problem is not the getRotationMatrix2D function.
##### Solution 1.
Maintain the -0.5, -0.5 design and update documentation and code to reflect this. Examples of where in OpenCV a correction is needed:
- In the opencv_test code modules/imgproc/perf/perf_warp.cpp there are multiple calls to warpAffine / getRotationMatrix2D using the wrong center of the image. I suppose the test doesn't fail because a certain amount of tolerance is accepted. Still the calls are wrong.
https://github.com/opencv/opencv/blob/master/modules/imgproc/perf/perf_warp.cpp
- The sample code samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp uses a wrong call to getRotationMatrix2D.
https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/ImgTrans/Geometric_Transforms_Demo.cpp
- in the tutorial the wrong call to getRotationMatrix2D is used to Rotate by 90 degrees. The resulting image is translated by 1 pixel.
https://docs.opencv.org/3.4.0/da/d6e/tutorial_py_geometric_transformations.html
- More wrong usages can found on opencv.org, not counting the large amount of wrong usages on the web. For a rotation developers typically uses getRotationMatrix2D(image.width/2.f, image.height/2.f, angle, 1) which is the wrong call.
- Actually in my own code as a workaround I apply a 0.5 translation to the matrix before calling warpAffine, so that I can use the (0,0)-(1,1) coordinate system. The fix can be easily computed before using the resulting matrix in warpAffine:
```.cpp
cv::Matx23d fixMatrix(cv::Matx23d matrix)
{
matrix(0,2) += (matrix(0,0) + matrix(0,1) - 1) / 2;
matrix(1,2) += (matrix(1,0) + matrix(1,1) - 1) / 2;
return matrix;
}
```
The fix above could be documented too, for whoever, like me, prefers to use the (0,0)-(1,1) based coordinate system. Advantages: the matrices have an intuitive definition, especially for scaling. I don't need to apply a correction to getRotationMatrix2D or getAffineTransform. All coordinates are intuitively area based instead of point based and can be shared in different scenarios without the need for a conversion / correction: image pixel coordinates, vector/shape object coordinates, mathematical coordinates.
##### Solution 2.
The number of incorrect usages even within OpenCV shows that the design of (-0.5,-0.5) to (0.5,0.5) for the top-left pixel area is not intuitive. A second solution would be to update warpAffine to use (0,0) to (1,1) as the area for the top-left pixel. Then the sample code, test code and documentation wouldn't need to be updated, because that would be the intuitive usage of warpAffine.
##### Complete Sample Code showing the issues
```.cpp
#include <opencv2/opencv.hpp>
#include <stdio.h>
cv::Mat createSymmetricTestImage(int width, int height)
{
cv::Mat image(height, width, CV_8UC3, cv::Scalar(63, 0, 0));
cv::Point pos1(width/4, height/4);
cv::Point pos2(width - pos1.x - 1, height - pos1.y - 1);
cv::rectangle(image, pos1, pos2, cv::Scalar(0, 255, 255), 1);
return image;
}
cv::Matx23d fixMatrix(cv::Matx23d matrix)
{
matrix(0,2) += (matrix(0,0) + matrix(0,1) - 1) / 2;
matrix(1,2) += (matrix(1,0) + matrix(1,1) - 1) / 2;
return matrix;
}
cv::Matx23d getAffineTransformRotate90(cv::Size size)
{
cv::Point2f srcTri[3];
cv::Point2f dstTri[3];
/// Set your 3 points to calculate the Affine Transform
/// Use area based coordinates ranging from 0,0 to width,height
srcTri[0] = cv::Point2f( 0,0 );
srcTri[1] = cv::Point2f( size.width, 0 );
srcTri[2] = cv::Point2f( 0, size.height);
/// Apply rotation 90 degrees CCW
dstTri[0] = srcTri[2];
dstTri[1] = srcTri[0];
dstTri[2] = cv::Point2f( size.width, size.height);
/// Get the Affine Transform
return getAffineTransform( srcTri, dstTri );
}
void doWarpAffine(const std::string fileNameToSave, const cv::Mat &image, cv::Size destSize, const cv::Matx23d &matrix, int interpolation)
{
printf("%20s, Matrix2x2: [ %9lf, %9lf] [ %9lf, %9lf ], Translation: [ %9lf, %9lf ]\n", fileNameToSave.c_str(), matrix(0,0), matrix(0, 1), matrix(1,0), matrix(1,1), matrix(0,2), matrix(1,2));
cv::Mat dest, dest2;
cv::warpAffine(image, dest, matrix, destSize, interpolation /*| cv::WARP_INVERSE_MAP*/, cv::BORDER_CONSTANT, cv::Scalar(0, 0, 255));
cv::imwrite(fileNameToSave.c_str(), dest);
}
void doWarpAffineTest(const std::string fileNameToSavePrefix, const cv::Mat &image, cv::Size destSize, const cv::Matx23d &matrix, int interpolation)
{
cv::Matx23d fixedMatrix = fixMatrix(matrix);
if (fixedMatrix == matrix) {
// fixed matrix is equal to original one
doWarpAffine(fileNameToSavePrefix + ".png", image, destSize, matrix, interpolation);
} else {
// fixed matrix is different
doWarpAffine(fileNameToSavePrefix + "Orig.png", image, destSize, matrix, interpolation);
doWarpAffine(fileNameToSavePrefix + "Fix.png", image, destSize, fixedMatrix, interpolation);
}
}
int main(int, char **)
{
// create a symmetric Test Image
cv::Mat image = createSymmetricTestImage(17,17);
cv::imwrite("SourceImage.png", image);
cv::Point2f centerImage(image.cols/2., image.rows/2.);
cv::Point2f originImage(0, 0);
cv::Size destSize = cv::Size(image.cols, image.rows);
//------------------------------------------------
// Test Identity
cv::Matx23d identityMatrix(1, 0, 0, 0, 1, 0);
doWarpAffineTest("Identity", image, destSize, identityMatrix, cv::INTER_CUBIC);
//------------------------------------------------
// Test Rotaiton
// 45 degrees Rotation around the center of the image
doWarpAffineTest("Rotate45", image, destSize, cv::getRotationMatrix2D(centerImage, 45, 1), cv::INTER_LINEAR);
// 90 degrees Rotation around the center of the image
doWarpAffineTest("Rotate90", image, destSize, cv::getRotationMatrix2D(centerImage, 90, 1), cv::INTER_LINEAR);
//------------------------------------------------
// Test Translation by 1 pixel
double translate = 1;
destSize = cv::Size(image.cols+translate, image.rows+translate);
cv::Matx23d translateMatrix(1, 0, translate, 0, 1, translate);
doWarpAffineTest("Translate", image, destSize, translateMatrix, cv::INTER_LINEAR);
//------------------------------------------------
// Test Scaling. 10x bigger.
double scale = 10;
destSize = cv::Size(image.cols*scale, image.rows*scale);
cv::Matx23d scaleMatrix(scale, 0, 0, 0, scale, 0);
doWarpAffineTest("Scale", image, destSize, scaleMatrix, cv::INTER_CUBIC);
//------------------------------------------------
// Test Shear
//
double shear = 1;
destSize = cv::Size(image.cols*(1+shear)+0.5, image.rows);
cv::Matx23d shearMatrix(1, shear, -0.5, 0, 1, 0);
doWarpAffineTest("Shear", image, destSize, shearMatrix, cv::INTER_CUBIC);
//------------------------------------------------
// Test getAffineTransform with Rotate 90 degrees
destSize = cv::Size(image.cols, image.rows);
cv::Matx23d warp_mat = getAffineTransformRotate90(cv::Size(image.cols, image.rows));
doWarpAffine("GetAffine.png", image, destSize, fixMatrix(warp_mat), cv::INTER_LINEAR);
return 0;
}
```
Scale without correction:

Scale with correction:

Rotation 45 degrees (typical usage):

Rotation 45 degrees with fix:

Rotate 90 degrees (typical usage). Image is translated down by 1 pixel:

Rotate 90 degrees with fix:

| category: imgproc,category: documentation,RFC,future | low | Critical |
333,065,986 | go | cmd/link: link failure caused by duplicated LDFLAGS passed to external linker | ### What version of Go are you using (`go version`)?
go1.10.3
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
macOS/amd64, doing cross compile for android/arm64
### What did you do?
```
export GOOS=android
export GOARCH=arm64
export CC=/path/to/android/gcc
export CXX=/path/to/android/g++
export CGO_ENABLED=1
export CGO_LDFLAGS="-Wl,--version-script=/the/version-script/export.txt"
go build -buildmode=c-shared -ldflags="-s -w -v" -v -o libxxx.so foo/bar
```
### What did you expect to see?
The shared library is generated without any error.
### What did you see instead?
In the output line
0.19 host link: ...
"-Wl,--version-script=/the/version-script/export.txt" is repeated 4 time and there are 3 errors,
anonymous version tag cannot be combined with other version tags
Note that if the version script contains a version name, the error message is 'duplicate version tag xxx'. | NeedsInvestigation,compiler/runtime | low | Critical |
333,075,015 | vscode | cursor up/down is confused by selection | - VSCode Version: 1.24.0
- OS Version: Windows 7
Steps to Reproduce:
1. Create the following document
```
dorfus
hamnut
```
2. Place the cursor at the d and go right three. Then go down. The cursor is at the n. Good.
3. Place the cursor at the end of dorfus and go left three. Then go down. The cursor is at the n. Good.
4. Place the cursor at the d and hold down shift while going right three. Release shift. Go down. The cursor is at the n. Good
5. Place the cursor at the end of dorfus and hold shift while going left three. Release shift. Go down. The cursor is at the end of hamnut? <--- BUG
I expect the cursor to be at n in every case
Other applications, for reference:
visual studio: good
wordpad: good
notepad: good
firefox: good
IE: good
chrome: mysteriously works today, but didn't when I wrote this bug
gedit: "broken" also, but on second thought, maybe it's sensible to some people.
In any event, it's also senseless to some people.
Does this issue occur when all extensions are disabled?: Yes
| feature-request,editor-commands | low | Critical |
333,080,145 | rust | Code fails to link on macOS with incremental compilation | The following code fails to link on macOS with incremental compilation. It works fine with incremental compilation turned off.
```rust
fn obj_alloc<T>(_p: T) -> *const [u8; 6] {
struct Foo(*const [u8; 6]);
unsafe impl Send for Foo {}
unsafe impl Sync for Foo {}
#[link_section="__TEXT,__objc_methname,cstring_literals"]
static OBJC_METH_VAR_NAME_ : [u8; 6] = *b"alloc\0";
#[link_section="__DATA,__objc_selrefs,literal_pointers,no_dead_strip"]
static OBJC_SELECTOR_REFERENCES_: Foo = Foo(&OBJC_METH_VAR_NAME_);
return OBJC_SELECTOR_REFERENCES_.0;
}
fn main() {
let i: u32 = 0;
obj_alloc(i);
}
```
The linker error is
```
= note: Undefined symbols for architecture x86_64:
"example::obj_alloc::OBJC_SELECTOR_REFERENCES_::ha196cc644aeae679", referenced from:
example::obj_alloc::h54a034c209c8e4e3 in example-4a01b102e2898ce7.4jffkl93bkv60fxe.rcgu.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
The motivation for this weird code is getting native performance for objective-c message selectors. See https://github.com/SSheldon/rust-objc/issues/49 | A-linkage,O-macos,C-enhancement,P-medium,T-compiler,A-incr-comp | medium | Critical |
333,109,011 | flutter | [Proposal] Allow to adjust hitSlop of GestureDetector | @Hixie already closed #14794 but I'm posting a new Issue because this is critical functionality
Hixie recommended using padding to adjust the hitbox of a `GestureDetector`. I tried to use padding but it merely produced whitespace around GestureDetector, or around its children.
I even tried transforming up with hit tests and then transforming back down without hit tests, to try to hack my way to moving the hitbox, and that failed.
Now, back to GestureDetector without any hackiness!
First box is what Flutter does, second box is what I want for best functionality and user experience.
<img width="307" alt="screen shot 2018-06-17 at 9 01 52 pm" src="https://user-images.githubusercontent.com/666539/41514239-61e8f572-7273-11e8-8a89-d3082b63daa9.png">
In React Native this would be achieved by writing `hitSlop={{top: 20, bottom: 20}}`. | c: new feature,framework,f: gestures,c: proposal,P2,team-framework,triaged-framework | low | Critical |
333,123,004 | youtube-dl | Adding boyztube.com and boyztube.xxx websites | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.18*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.06.18**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.boyztube.xxx/gay/now-this-is-a-true-cocksucker-clip/89882
- Single video: http://www.boyztube.com/gay-videos/watch/620553-straight_hustler.html
- Playlist: -
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
| nsfw | low | Critical |
333,125,034 | go | cmd/vet: detect Get/Post http.Response assignment to "_", which is a memory leak | I made a rookie mistake and introduced a memory leak into a codebase by failing to realize that you must close the `http.Response` body returned by `http.Post()` even if you don't need the response content.
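An illustrative sketch of the pattern such a check would catch (not the actual code from that codebase; `url` is a placeholder):
```go
package example

import (
	"io"
	"io/ioutil"
	"net/http"
	"strings"
)

// Bug: the response is discarded, so its body is never closed and the
// underlying connection is leaked.
func pingLeaky(url string) error {
	_, err := http.Post(url, "application/json", strings.NewReader("{}"))
	return err
}

// Fix: keep the response and always close its body, even when the content
// itself is not needed.
func ping(url string) error {
	resp, err := http.Post(url, "application/json", strings.NewReader("{}"))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	io.Copy(ioutil.Discard, resp.Body)
	return nil
}
```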
I suspect I'm not the only one who has made this mistake.
I have a diff, if the proposal is accepted. | NeedsInvestigation,Analysis | low | Major |
333,130,832 | pytorch | [Caffe2] How to build caffe2/mobile/ulp2/ulp_test? | I'm trying to test out the kernel implementation of quantized convolution in the `caffe2/mobile/ulp2/` folder. Currently, I would like to build and run ulp_test.cc. However, I don't know how to build the files in there. Can someone tell me how? Maybe some CMakeLists files.
Thanks in advance
Cheers, | caffe2 | low | Minor |
333,141,577 | TypeScript | Related error spans for derived members in interfaces/classes | Now that we support multiple related spans for errors (#10489, #22789, #24548), we'd like to improve an existing error message.
Currently, we provide certain errors for when a derived type incorrectly overrides/implements a member from the base type:
```
Class '{0}' defines instance member function '{1}', but extended class '{2}' defines it as instance member accessor.
Class '{0}' defines instance member function '{1}', but extended class '{2}' defines it as instance member property.
Class '{0}' defines instance member property '{1}', but extended class '{2}' defines it as instance member function.
Class '{0}' defines instance member accessor '{1}', but extended class '{2}' defines it as instance member function.
Property '{0}' in type '{1}' is not assignable to the same property in base type '{2}'.
```
We can actually provide the base type's member as a related span location for some extra context.
```
'{0}' redefines '{1}' in an incompatible way.
```
Where
* `0` is the derived type
* `1` is the member name
and this gets reported on the base type member.
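For example (my own illustration of this scenario):
```ts
class Base {
    get value(): number { return 1; }
}

class Derived extends Base {
    // Rejected today by one of the accessor-vs-function errors listed above,
    // reported on Derived. With this change, a related span would also point
    // at Base's 'value' accessor for extra context.
    value() { return 2; }
}
```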
| Suggestion,Help Wanted,Domain: Related Error Spans,Experience Enhancement | low | Critical |
333,144,852 | flutter | [request] Provide static libraries for Custom Flutter Engine Embedders | https://github.com/flutter/flutter/wiki/Custom-Flutter-Engine-Embedders currently provides dynamic libraries for the flutter engine.
It would be incredibly useful if static libraries were also provided! | engine,e: embedder,c: proposal,P2,team-engine,triaged-engine | low | Minor |
333,151,023 | rust | Wrong lifetime is inferred in the argument of closure when given more specific type. | I apply lifetime bounds to a closure through a trait, like this:
```rust
trait Closure<'t> {}
impl<'t, F: Fn(&'t ())> Closure<'t> for F {}
fn restrict_trait(_: impl Closure<'static>) {}
```
When I call `restrict_trait(|r| ...)`, I get `r: &'static ()` inside the closure. But when using `restrict_trait(|r: &_| ...)`, the inferred lifetime of `r` is no longer `'static`.
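A minimal sketch of the two calls, based on the description above rather than the playground contents:
```rust
fn main() {
    // Without an annotation the argument is inferred as &'static ():
    restrict_trait(|r| {
        let _: &'static () = r; // compiles
    });

    // With the `&_` annotation the inferred lifetime is no longer 'static,
    // so the same body is rejected:
    restrict_trait(|r: &_| {
        let _: &'static () = r; // error, per the report
    });
}
```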
Here is the [detailed example](https://play.rust-lang.org/?gist=2bdd08b5d0a1ad2512afcad18426a4b7&version=nightly&mode=debug)
| A-lifetimes,A-closures,T-compiler,A-inference,A-impl-trait,C-bug | low | Critical |
333,169,081 | neovim | termguicolors: fallback to cterm if gui is not set (Vim patches) | `:hi SpellBad cterm=undercurl ctermfg=1 gui=undercurl guisp=Red` does not get displayed as red (ctermfg 1).
This is supported in Vim since:
```
vim-patch:8.0.1544: when using 'termguicolors' SpellBad doesn't show
Problem: When using 'termguicolors' SpellBad doesn't show.
Solution: When the GUI colors are not set fall back to the cterm colors.
```
https://github.com/vim/vim/commit/d4fc577e60d325777d38c00bd78fb9a32c7b1dfa
The patch is not trivial, and requires at least 8.0.0754 and 8.0.0760 before. | enhancement,has:vim-patch,needs:discussion,highlight | low | Major |
333,201,353 | node | make test: use after free: parallel/test-cli-node-options | git repo (nodejs/node) @ 64de66d78888f46c74ba8b8ea18100a9f35a1c7a
I was running ```` CFLAGS="-fsanitize=address -fno-sanitize=leak -g3" CXXFLAGS="$CFLAGS" LDFLAGS="-fsanitize=address -fno-sanitize=leak -g3" ASAN_OPTIONS=detect_leaks=0 make test -j 4 ````
platform: ````Linux t470 4.17.0-2-MANJARO #1 SMP PREEMPT Fri Jun 8 07:13:17 UTC 2018 x86_64 GNU/Linux````
I used clang 6.0 as CC/CXX.
During ````make test````, I got a use after free:
````
[----------] Global test environment tear-down
[==========] 74 tests from 9 test cases ran. (424 ms total)
[ PASSED ] 74 tests.
make -s jstest
touch 251b41f63160e3e22459f6ddaeb4fca739404752.intermediate
touch 1b6e683759875e45877a449826f87697ec02fb35.intermediate
LD_LIBRARY_PATH=/home/matthias/vcs/github/node/out/Release/lib.host:/home/matthias/vcs/github/node/out/Release/lib.target:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; cd ../.; mkdir -p /home/matthias/vcs/github/node/out/Release/obj/gen/src/node/inspector/protocol; python deps/v8/third_party/inspector_protocol/CodeGenerator.py --jinja_dir deps/v8/third_party/inspector_protocol/.. --output_base "/home/matthias/vcs/github/node/out/Release/obj/gen/src/" --config "/home/matthias/vcs/github/node/out/Release/obj/gen/node_protocol_config.json"
LD_LIBRARY_PATH=/home/matthias/vcs/github/node/out/Release/lib.host:/home/matthias/vcs/github/node/out/Release/lib.target:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; cd ../deps/v8/gypfiles; mkdir -p /home/matthias/vcs/github/node/out/Release/obj/gen/src/inspector/protocol /home/matthias/vcs/github/node/out/Release/obj/gen/include/inspector; python ../third_party/inspector_protocol/CodeGenerator.py --jinja_dir ../third_party --output_base "/home/matthias/vcs/github/node/out/Release/obj/gen/src/inspector" --config ../src/inspector/inspector_protocol_config.json
=== release test-cli-node-options ===
Path: parallel/test-cli-node-options
assert.js:671
throw newErr;
^
AssertionError [ERR_ASSERTION]: ifError got unwanted exception: Command failed: /home/matthias/vcs/github/node/out/Release/node -e console.log("B")
=================================================================
==25285==ERROR: AddressSanitizer: heap-use-after-free on address 0x619000000ae0 at pc 0x000001a219ac bp 0x7f46b37fed10 sp 0x7f46b37fed08
READ of size 8 at 0x619000000ae0 thread T1
#0 0x1a219ab in uv__run_closing_handles /home/matthias/vcs/github/node/out/../deps/uv/src/unix/core.c:299:12
#1 0x1a219ab in uv_run /home/matthias/vcs/github/node/out/../deps/uv/src/unix/core.c:370
#2 0x7f46b6490074 in start_thread (/usr/lib/libpthread.so.0+0x7074)
#3 0x7f46b5fad53e in __GI___clone (/usr/lib/libc.so.6+0xf853e)
0x619000000ae0 is located 96 bytes inside of 952-byte region [0x619000000a80,0x619000000e38)
freed by thread T0 here:
#0 0x16cbbe2 in operator delete(void*) /home/matthias/LLVM/LLVM6/stage_2/llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:149:3
#1 0x18f762d in std::default_delete<node::tracing::AsyncTraceWriter>::operator()(node::tracing::AsyncTraceWriter*) const /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/unique_ptr.h:81:2
#2 0x18f762d in std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> >::~unique_ptr() /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/unique_ptr.h:274
#3 0x18f762d in std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >::~pair() /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/stl_pair.h:198
#4 0x18f762d in void __gnu_cxx::new_allocator<std::__detail::_Hash_node<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, false> >::destroy<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > > >(std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >*) /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/ext/new_allocator.h:140
#5 0x18f762d in void std::allocator_traits<std::allocator<std::__detail::_Hash_node<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, false> > >::destroy<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > > >(std::allocator<std::__detail::_Hash_node<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, false> >&, std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >*) /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/alloc_traits.h:487
#6 0x18f762d in std::__detail::_Hashtable_alloc<std::allocator<std::__detail::_Hash_node<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, false> > >::_M_deallocate_node(std::__detail::_Hash_node<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, false>*) /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/hashtable_policy.h:2100
#7 0x18f762d in std::_Hashtable<int, std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, std::allocator<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > > >, std::__detail::_Select1st, std::equal_to<int>, std::hash<int>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, false, true> >::_M_erase(unsigned long, std::__detail::_Hash_node_base*, std::__detail::_Hash_node<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, false>*) /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/hashtable.h:1905
#8 0x18f762d in std::_Hashtable<int, std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, std::allocator<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > > >, std::__detail::_Select1st, std::equal_to<int>, std::hash<int>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, false, true> >::_M_erase(std::integral_constant<bool, true>, int const&) /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/hashtable.h:1931
#9 0x18f762d in std::_Hashtable<int, std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > >, std::allocator<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > > >, std::__detail::_Select1st, std::equal_to<int>, std::hash<int>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, false, true> >::erase(int const&) /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/hashtable.h:771
#10 0x18f762d in std::unordered_map<int, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> >, std::hash<int>, std::equal_to<int>, std::allocator<std::pair<int const, std::unique_ptr<node::tracing::AsyncTraceWriter, std::default_delete<node::tracing::AsyncTraceWriter> > > > >::erase(int const&) /usr/lib64/gcc/x86_64-pc-linux-gnu/8.1.1/../../../../include/c++/8.1.1/bits/unordered_map.h:818
#11 0x18f762d in node::tracing::Agent::Disconnect(int) /home/matthias/vcs/github/node/out/../src/tracing/agent.cc:100
#12 0x18fb9a8 in node::tracing::Agent::DisconnectClient(std::pair<node::tracing::Agent*, int>*) /home/matthias/vcs/github/node/out/../src/tracing/agent.h:67:22
#13 0x1770792 in node::$_0::StopTracingAgent() /home/matthias/vcs/github/node/out/../src/node.cc:334:23
#14 0x1770792 in node::Start(int, char**) /home/matthias/vcs/github/node/out/../src/node.cc:3684
#15 0x7f46b5ed806a in __libc_start_main (/usr/lib/libc.so.6+0x2306a)
previously allocated by thread T0 here:
#0 0x16cb002 in operator new(unsigned long) /home/matthias/LLVM/LLVM6/stage_2/llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:92:3
#1 0x18f93bf in node::tracing::Agent::Enable(std::set<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) /home/matthias/vcs/github/node/out/../src/tracing/agent.cc:137:9
#2 0x18f857c in node::tracing::Agent::Enable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /home/matthias/vcs/github/node/out/../src/tracing/agent.cc:120:3
#3 0x1770699 in node::$_0::StartTracingAgent() /home/matthias/vcs/github/node/out/../src/node.cc:329:21
#4 0x1770699 in node::$_0::Initialize(int) /home/matthias/vcs/github/node/out/../src/node.cc:294
#5 0x1770699 in node::Start(int, char**) /home/matthias/vcs/github/node/out/../src/node.cc:3678
#6 0x7f46b5ed806a in __libc_start_main (/usr/lib/libc.so.6+0x2306a)
Thread T1 created by T0 here:
#0 0x168601d in __interceptor_pthread_create /home/matthias/LLVM/LLVM6/stage_2/llvm/projects/compiler-rt/lib/asan/asan_interceptors.cc:204:3
#1 0x1a4755b in uv_thread_create /home/matthias/vcs/github/node/out/../deps/uv/src/unix/thread.c:202:9
#2 0x18f62de in node::tracing::Agent::Start() /home/matthias/vcs/github/node/out/../src/tracing/agent.cc:65:3
#3 0x18f93b5 in node::tracing::Agent::Enable(std::set<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&) /home/matthias/vcs/github/node/out/../src/tracing/agent.cc:135:5
#4 0x18f857c in node::tracing::Agent::Enable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /home/matthias/vcs/github/node/out/../src/tracing/agent.cc:120:3
#5 0x1770699 in node::$_0::StartTracingAgent() /home/matthias/vcs/github/node/out/../src/node.cc:329:21
#6 0x1770699 in node::$_0::Initialize(int) /home/matthias/vcs/github/node/out/../src/node.cc:294
#7 0x1770699 in node::Start(int, char**) /home/matthias/vcs/github/node/out/../src/node.cc:3678
#8 0x7f46b5ed806a in __libc_start_main (/usr/lib/libc.so.6+0x2306a)
SUMMARY: AddressSanitizer: heap-use-after-free /home/matthias/vcs/github/node/out/../deps/uv/src/unix/core.c:299:12 in uv__run_closing_handles
Shadow bytes around the buggy address:
0x0c327fff8100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c327fff8110: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c327fff8120: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c327fff8130: 00 00 00 00 00 fa fa fa fa fa fa fa fa fa fa fa
0x0c327fff8140: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c327fff8150: fd fd fd fd fd fd fd fd fd fd fd fd[fd]fd fd fd
0x0c327fff8160: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fff8170: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fff8180: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fff8190: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c327fff81a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==25285==ABORTING
at ChildProcess.exithandler (child_process.js:291:12)
at ChildProcess.emit (events.js:182:13)
at maybeClose (internal/child_process.js:961:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:248:5)
Command: out/Release/node /home/matthias/vcs/github/node/test/parallel/test-cli-node-options.js
=== release test-crypto-dh-leak ===
Path: parallel/test-crypto-dh-leak
assert.js:270
throw err;
^
AssertionError [ERR_ASSERTION]: before=131252224 after=150888448
at Object.<anonymous> (/home/matthias/vcs/github/node/test/parallel/test-crypto-dh-leak.js:26:1)
at Module._compile (internal/modules/cjs/loader.js:702:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:713:10)
at Module.load (internal/modules/cjs/loader.js:612:32)
at tryModuleLoad (internal/modules/cjs/loader.js:551:12)
at Function.Module._load (internal/modules/cjs/loader.js:543:3)
at Function.Module.runMain (internal/modules/cjs/loader.js:744:10)
at startup (internal/bootstrap/node.js:241:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:565:3)
Command: out/Release/node --expose-gc --noconcurrent_recompilation /home/matthias/vcs/github/node/test/parallel/test-crypto-dh-leak.js
```` | trace_events | low | Critical |
333,290,724 | TypeScript | in typescript 2.8, a common definition of Omit causes declarations that don't compile to be generated | **TypeScript Version:**
Reproduces with 3.0.0-dev.201xxxxx, 2.9.2, 2.8.4
**Search terms**
omit, declaration, pick, undefined
**Example**
```
export type Omit<T, K extends keyof T> = Pick<T,
    ({ [P in keyof T]: P } & { [P in K]: never } )[keyof T]>;
export interface IOmitTest {
    (): { notSupposedToHappen: Omit<IXProps, "unwantedProp"> }
}
export interface IXProps {
    optionalProp?: string
    unwantedProp: string
}
const Y: IOmitTest = null as any;
export const Z = Y();
export interface IMouseOver {
    wrong: Omit<IXProps, "unwantedProp">
}
```
generates
```
export declare type Omit<T, K extends keyof T> = Pick<T, ({
    [P in keyof T]: P;
} & {
    [P in K]: never;
})[keyof T]>;
export interface IOmitTest {
    (): {
        notSupposedToHappen: Omit<IXProps, "unwantedProp">;
    };
}
export interface IXProps {
    optionalProp?: string;
    unwantedProp: string;
}
export declare const Z: {
    notSupposedToHappen: Pick<IXProps, "optionalProp" | undefined>;
};
export interface IMouseOver {
    wrong: Omit<IXProps, "unwantedProp">;
}
```
The interface returning a function seems to be required to get typescript to collapse the Omit to a Pick statement (otherwise you just get Omit<IXProps, "unwantedProp"> in the declaration file).
I think this is a bug in typescript because it's produced a declaration file with syntax errors in it.
**Expected behavior:**
undefined should not be present in the Pick statement
**Playground Link:**
the playground does not provide the ability to generate declarations
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/12215
note that if the definition of Omit is replaced with the newer declaration possible in typescript 2.9
```
export type Omit<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>
```
then
```
export declare const Z: {
    notSupposedToHappen: Pick<IXProps, "optionalProp">;
};
```
results as expected.
I actually encountered this issue when using connect() in react-redux with a component that has an optional property, where the definition of Omit is:
```
type Omit<T, K extends keyof T> = Pick<T, ({ [P in keyof T]: P } & { [P in K]: never } & { [x: string]: never, [x: number]: never })[keyof T]>;
```
| Bug | low | Critical |
333,294,761 | kubernetes | Provide an update-boilerplate script | **Is this a BUG REPORT or FEATURE REQUEST?**:
/kind feature
**What happened**:
Part of the tests that run once a PR is submitted is `pull-kubernetes-verify`. Right now, contributors have to manually copy and paste boilerplate headers so that any new files they add as part of a PR pass the `hack/verify-boilerplate.sh` validation.
**What you expected to happen**:
Offer a way to add or update the boilerplate headers of files automatically. This would improve the contributor experience.
**How to reproduce it (as minimally and precisely as possible)**:
Add files, forget to add headers, see your PR fail `pull-kubernetes-verify`.
**Environment**:
Not applicable
**Additional/Future related work**:
The same script could be reused to keep licence headers up to date, i.e. refresh the year on licence headers when it's executed.
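A rough sketch of what such a script could look like (the header path below is illustrative and may not match the actual `hack/` layout):
```
#!/usr/bin/env python
"""Rough sketch: prepend the Go boilerplate header to files that are missing it."""
import sys

HEADER_FILE = "hack/boilerplate/boilerplate.go.txt"  # illustrative path

def main(paths):
    with open(HEADER_FILE) as f:
        header = f.read()
    marker = header.splitlines()[0]
    for path in paths:
        with open(path) as f:
            content = f.read()
        if marker not in content:
            with open(path, "w") as f:
                f.write(header + "\n" + content)

if __name__ == "__main__":
    main(sys.argv[1:])
```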
@kubernetes/sig-contributor-experience-feature-requests
cc @hoegaarden @totherme | sig/contributor-experience,kind/feature,lifecycle/frozen | low | Critical |
333,298,401 | pytorch | Come with a better strategy for TensorArg (error reporting) | TensorArg (in `aten/src/ATen/TensorUtils.h`) is a class I came up with when I was porting CuDNN convolutions from Python to C++. The original goals were as follows:
1. Create a set of utility functions to conveniently perform input checks (e.g., does it have the correct size; is it on the correct GPU, etc.)
2. Give good error messages when these tests fail (e.g., give the name and the argument number of the argument that failed the test)
3. Avoid copy-pasting error checking logic.
Goals (2) and (3) are in tension with each other, especially for convolutions. Consider the following example: `convolution_forward`, and `transposed_convolution_backward`. Algorithmically, these two functions are equivalent (and indeed, both functions backend to the same underlying function), but from a UI perspective the functions are very different (e.g., arguments names differ). If you want to not copy-paste error checking logic, you need to put this logic in the underlying backend function, but if you want good error messages you have to track extra information on a per-Tensor basis so that you can accurately describe what the name/argument position of an argument is.
TensorArg solves this problem by rewriting functions to pass `TensorArg` struct instead of `Tensor`, where `TensorArg` is augmented with extra information to track the name and position of an argument. Checking functions which make use of `TensorArg` can pull this information out directly when giving an error message.
However, TensorArg falls short of some other goals which I subsequently realized were quite important:
1. **Speed.** Boxing tensors into `TensorArg` structs, while fairly cheap, resulted in a perceptible, cumulative performance slowdown. Ugh! Some of the cost may be due to additional dynamic allocations. https://github.com/pytorch/pytorch/issues/3958 is the tracking issue.
2. **Brevity.** Suppose that I have a native function `f` which backends to `f_out`, which is also native. I cannot rewrite `f_out` to be `TensorArg`, because the calling convention for native functions requires `Tensor` arguments. So I have to write another backend function `_f_out`, and then have `f_out` and `f` call into it.
3. **Compositionality.** Essentially, the `TensorArg` strategy means we need a "shadow" hierarchy of TensorArg-consuming functions, which you call directly if you want to preserve error reporting. This means that if you ever need to go back to the direct Tensor interface, you cannot preserve error information.
Given this problem, it seems unlikely that we can recommend TensorArg for wider use in the ATen codebase, which is a shame because it is still very difficult to do good error reporting when writing code in ATen.
CC @goldsborough
----
Proposal: Thread-local state error reporting information
Let's get rid of `TensorArg` so that we can achieve brevity and compositionality. In this case, how can we tell what the name of an argument is? We have thread-local error reporting information, which maintains a mapping (just an association list) from `TensorImpl*` to error information. Upon a failure, we perform a lookup in the mapping to resolve a `Tensor` into an actual argument name (or not, if we failed on an intermediate result; that's a bad outcome and means you didn't add enough error checking). The mapping is similar to how we handle input arguments in tracing.
One limitation of this approach is that it deals poorly when exactly the same tensor is passed as two arguments, e.g., `add(x, x)`. In this case, we cannot tell purely from the identity of the TensorImpl whether or not this was the first argument or the second argument.
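A rough sketch of the lookup this proposal implies (the types and names below are placeholders for illustration, not real ATen API):
```
#include <utility>
#include <vector>

// Where a tensor entered the current op as an argument.
struct ArgInfo {
  const char* fn;   // e.g. "conv2d_forward"
  const char* arg;  // e.g. "weight"
  int pos;          // argument position
};

// Thread-local association list from a tensor's identity (its impl pointer)
// to the argument it was passed as; populated on entry to a checked function.
thread_local std::vector<std::pair<const void*, ArgInfo>> tls_args;

// Consulted only on the error path: resolve a tensor back to an argument name.
const ArgInfo* lookup_arg(const void* impl) {
  for (const auto& entry : tls_args) {
    if (entry.first == impl) return &entry.second;
  }
  return nullptr;  // an intermediate result; the check fired too deep in the stack
}
```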
----
Proposal: Exception handler transformers
Let's get rid of `TensorArg` so that we can achieve brevity and compositionality. In this case, how can we tell what the name of an argument is? (Does this sound familiar? It's because it's the same as above). We don't: instead, we report whatever the local information of a function is and raise the exception.
At each function call site, we instrument an exception handler which is responsible for "rewriting" the error message so it makes sense in its locality. So for example, if we have:
```
void foo(Tensor x, Tensor y) {
bar(y, x);
}
```
We must somehow make this equivalent to the following:
```
void foo(Tensor x, Tensor y) {
  try {
    bar(y, x);
  } catch (AtenError& e) {
    e.calling_pattern({{"x", 1}, {"y", 0}});
    throw;
  }
}
```
or something equivalent, where we can reconstruct the original argument pattern by traversing the exception handlers.
Limitation: It's not altogether clear how to automatically generate these wrappers without ... something like `TensorArg`, because the call pattern of `bar` matters, and that's not easily accessible from C++.
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang @VitalyFedyunin @ngimel | module: performance,module: internals,triaged | low | Critical |
333,320,824 | vscode | Add prompt to "test on insiders" to issue reporter | In the issue reporter, add a prompt the test using the latest insiders build. Many issues reported against stable have already been fixed in insiders | feature-request,good first issue,issue-reporter | low | Major |
333,329,446 | neovim | Windows: "E138: main.shada.tmp.X files exist, cannot write ShaDa" on close | - `nvim --version`:
```
NVIM v0.3.0
Build type: RelWithDebInfo
LuaJIT 2.0.5
Compilation: C:/msys64/mingw64/bin/gcc.exe -Wconversion -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -O2 -g -DMI
N_LOG_LEVEL=3 -Og -g -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wimplicit-fallthrough
-D__USE_MINGW_ANSI_STDIO -D_WIN32_WINNT=0x0600 -Wvla -fdiagnostics-color=auto -Wno-array-bounds -DINCLUDE_GENERATED_DEC
LARATIONS -IC:/projects/neovim/build/config -IC:/projects/neovim/src -IC:/projects/neovim/.deps/usr/include -IC:/msys64/
mingw64/include -IC:/projects/neovim/build/src/nvim/auto -IC:/projects/neovim/build/include
Compiled by appveyor@APPVYR-WIN
Features: -acl +iconv -jemalloc +tui
See ":help feature-compile"
system vimrc file: "$VIM\sysinit.vim"
fall-back for $VIM: "C:/Program Files (x86)/nvim/share/nvim"
Run :checkhealth for more info
```
- Operating system/version: Windows 10 Pro 1703 15063.1115
- Terminal name/version:
- `$TERM`:
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
:q
```
### Actual behaviour

### Expected behaviour
Close without error.
| bug,platform:windows,has:plan,filesystem,editor-state | high | Critical |
333,345,905 | pytorch | [Caffe2] Wrong prediction with simple FF | Hello,
I am new to Caffe2.
For my first attempts with Caffe2 I am trying a simple Feed Forward, but I don't get a correct prediction.
Could somebody please explain my mistake?
It's a simple net to solve an XOR:
```
# Imports assumed by this snippet:
import numpy as np
from random import randint
from caffe2.python import workspace, model_helper

data = np.array([[0, 0],[0, 1], [1, 0], [1, 1]]).astype(np.float32)
label = np.array([[0],[1],[1],[0]]).astype(np.float32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)
print(workspace.FetchBlob("data"))
m = model_helper.ModelHelper(name="my first net")
weight = m.param_init_net.XavierFill([], 'fc_w', shape=[2, 2])
bias = m.param_init_net.ConstantFill([], 'fc_b', shape=[2, ])
fc_1 = m.net.FC(["data", "fc_w", "fc_b"], "fc1")
pred = m.net.Sigmoid(fc_1, "pred")
weight2 = m.param_init_net.XavierFill([], 'fc_w2', shape=[1, 2])
bias2 = m.param_init_net.ConstantFill([], 'fc_b2', shape=[1, ])
fc_2 = m.net.FC(["pred", "fc_w2", "fc_b2"], "fc2")
pred2 = m.net.Sigmoid(fc_2, "pred2")
xent = m.net.SigmoidCrossEntropyWithLogits([pred2,"label"], "xent")
loss = m.net.AveragedLoss(xent, "loss")
# learning
gradient_map = m.AddGradientOperators([loss])
ITER = m.param_init_net.ConstantFill([], "ITER", shape=[1], value=0)
m.net.Iter(ITER, ITER)
LR = m.net.LearningRate(ITER, "LR", base_lr=-0.1, policy="step", stepsize=1, gamma=0.999 )
ONE = m.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0)
# update weights
m.net.WeightedSum([weight, ONE, gradient_map[weight], LR], weight)
m.net.WeightedSum([bias, ONE, gradient_map[bias], LR], bias)
m.net.WeightedSum([weight2, ONE, gradient_map[weight2], LR], weight2)
m.net.WeightedSum([bias2, ONE, gradient_map[bias2], LR], bias2)
workspace.RunNetOnce(m.param_init_net)
workspace.CreateNet(m.net)
workspace.RunNet(m.name, 1)
print(workspace.FetchBlob("pred2"))
print(workspace.FetchBlob("loss"))
for i in range(100000):
    r = randint(0, 3)
    inp = np.array([data[r]])
    inplbl = np.array([label[r]])
    workspace.FeedBlob("data", inp)
    workspace.FeedBlob("label", inplbl)
    workspace.RunNet(m.name, 10)
print "------------------"
print(workspace.FetchBlob("pred2"))
print(workspace.FetchBlob("loss"))
print "------------------"
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)
workspace.RunNet(m.name, 1)
print(workspace.FetchBlob("pred2"))
print(workspace.FetchBlob("loss"))
```
| caffe2 | low | Minor |
333,354,668 | go | x/tools/go/ssa: wrong ssa referrers with ifstmt | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.10.2 linux/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/nabice/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/nabice/go:/home/nabice/skygo/go"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build010669814=/tmp/go-build -gno-record-gcc-switches"
### What did you do?
Package: golang.org/x/tools/go/ssa
ok.Referrers does not contain Ifstmt
https://play.golang.org/p/wkERfEjeSu1
I found it is replaced here:
https://github.com/golang/tools/blob/e10d6c9a84802dced65cb0278773be159bb7ed07/go/ssa/blockopt.go#L89
### What did you expect to see?
ok.Referrers contains Ifstmt's cond -- "ok"
### What did you see instead?
wrong
| Tools | low | Critical |
333,365,881 | opencv | POSIT in C++? | As shown in Wiki, [https://github.com/opencv/opencv/wiki/Posit](https://github.com/opencv/opencv/wiki/Posit), **POSIT** is done in some **C** functions. When I tested my old code today, there are so many **Error** messages in just **one** single function:
```
/**
* @brief Calculate object's absolute orientations
* @param iShape2D
* @param iShape3D
* @param oShape2D
* @return std::vector<float>
*/
std::vector<float> CRecognitionAlgs::CalcAbsoluteOrientations( const VO_Shape& iShape2D,
const VO_Shape& iShape3D,
VO_Shape& oShape2D)
{
assert (iShape2D.GetNbOfPoints() == iShape3D.GetNbOfPoints() );
unsigned int NbOfPoints = iShape3D.GetNbOfPoints();
cv::Point3f pt3d;
cv::Point2f pt2d;
float height1 = iShape2D.GetHeight();
float height2 = iShape3D.GetHeight();
VO_Shape tempShape2D = iShape2D;
tempShape2D.Scale(height2/height1);
//Create the model points
std::vector<CvPoint3D32f> modelPoints;
for(unsigned int i = 0; i < NbOfPoints; ++i)
{
pt3d = iShape3D.GetA3DPoint(i);
modelPoints.push_back(cvPoint3D32f(pt3d.x, pt3d.y, pt3d.z));
}
//Create the image points
std::vector<CvPoint2D32f> srcImagePoints;
for(unsigned int i = 0; i < NbOfPoints; ++i)
{
pt2d = tempShape2D.GetA2DPoint(i);
srcImagePoints.push_back(cvPoint2D32f(pt2d.x, pt2d.y));
}
//Create the POSIT object with the model points
CvPOSITObject *positObject = cvCreatePOSITObject( &modelPoints[0], NbOfPoints );
//Estimate the pose
//CvMatr32f rotation_matrix = new float[9];
float* rotation_matrix = new float[9];
//CvVect32f translation_vector = new float[3];
float* translation_vector = new float[3];
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1.0e-4f);
cvPOSIT( positObject, &srcImagePoints[0], FOCAL_LENGTH, criteria, rotation_matrix, translation_vector );
//rotation_matrix to Euler angles, refer to VO_Shape::GetRotation
float sin_beta = -rotation_matrix[0 * 3 + 2];
float tan_alpha = rotation_matrix[1 * 3 + 2] / rotation_matrix[2 * 3 + 2];
float tan_gamma = rotation_matrix[0 * 3 + 1] / rotation_matrix[0 * 3 + 0];
//Project the model points with the estimated pose
oShape2D = tempShape2D;
for ( unsigned int i=0; i < NbOfPoints; ++i )
{
pt3d.x = rotation_matrix[0] * modelPoints[i].x +
rotation_matrix[1] * modelPoints[i].y +
rotation_matrix[2] * modelPoints[i].z +
translation_vector[0];
pt3d.y = rotation_matrix[3] * modelPoints[i].x +
rotation_matrix[4] * modelPoints[i].y +
rotation_matrix[5] * modelPoints[i].z +
translation_vector[1];
pt3d.z = rotation_matrix[6] * modelPoints[i].x +
rotation_matrix[7] * modelPoints[i].y +
rotation_matrix[8] * modelPoints[i].z +
translation_vector[2];
if ( pt3d.z != 0 )
{
pt2d.x = FOCAL_LENGTH * pt3d.x / pt3d.z;
pt2d.y = FOCAL_LENGTH * pt3d.y / pt3d.z;
}
oShape2D.SetA2DPoint(pt2d, i);
}
delete(rotation_matrix);
delete(translation_vector);
//return Euler angles
std::vector<float> pos(3);
pos[0] = (float)atan(tan_alpha); // yaw
pos[1] = (float)asin(sin_beta); // pitch
pos[2] = (float)atan(tan_gamma); // roll
return pos;
}
```
Typical error messages include:
> error: 'CvPoint2D32f' was not declared in this scope
> CvPOSITObject *positObject = cvCreatePOSITObject( &modelPoints[0], NbOfPoints );
> CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1.0e-4f);
> cvPOSIT( positObject, &srcImagePoints[0], FOCAL_LENGTH, criteria, rotation_matrix, translation_vector );
I just wonder: is there a C++ replacement for **cvPOSIT**?
BTW, please refer to my [wiki about POSIT in OpenCV3](https://github.com/jiapei100/VOSM/wiki/POSIT-in-OpenCV3)
Cheers
Pei
| category: documentation | low | Critical |
333,425,869 | go | proposal: testing: add a flag to detect unnecessary skips | **Motivation**
We often use `(*testing.T).Skip` to skip tests that fail due to known bugs in the Go toolchain, the standard library, or a specific platform where the test runs.
When the underlying bugs are fixed, it is important that we remove the skips to prevent regressions.
However, bugs are sometimes fixed without us noticing — especially if they are platform bugs. In such cases, it would be helpful to have some way to detect that situation short of editing and recompiling all of the tests in a project.
[Here](https://github.com/golang/go/blob/964639cc338db650ccadeafb7424bc8ebb2c0f6c/misc/cgo/test/issue17065.go#L24) is a concrete example in the standard library: a `t.Skip` call that references an issue that was marked closed back in 2016 (#17065). Given how easy that call was to find, I expect that similarly outdated skips are pervasive.
**Proposal**
A new flag for the `testing` package, tentatively named `-test.unskip`, with the corresponding `go test` flag `-unskip`. `-test.unskip` takes a regular expression, which is matched against the formatted arguments of `t.Skip` or `t.Skipf` or the most recent test log entry before `t.SkipNow`.
If a `Skip` matches the regular expression, the test does not stop its execution as usual: instead, it continues to run. **If the skipped test passes, the test is recorded as unnecessarily-skipped and the binary will exit with a nonzero exit code.** If the skipped test fails, the test log (and the exit code of the test binary) ignores the failure and proceeds as usual.
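To illustrate the intended workflow (the flag does not exist yet; the skip message is modeled on the example above):
```
package example

import "testing"

func TestCgoCallback(t *testing.T) {
	t.Skip("skipping; see golang.org/issue/17065")
	// ... body exercising the previously-broken behavior ...
}
```
Running `go test -unskip=17065` would execute the body anyway; if it passes, the test would be reported as unnecessarily skipped and the binary would exit nonzero.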
**Comparison**
Test frameworks in a few other languages provide an explicit mechanism for expecting a test to fail. For example, Python has [`@unittest.expectedFailure`](http://go/pylib/unittest.html#skipping-tests-and-expected-failures); Boost has an [`expected_failures`](https://www.boost.org/doc/libs/1_67_0/libs/test/doc/html/boost_test/testing_tools/expected_failures.html#l_expected_failure) decorator; RSpec has the [pending](https://relishapp.com/rspec/rspec-core/v/3-7/docs/pending-and-skipped-examples) keyword. | Proposal,Proposal-Hold | medium | Critical |
333,440,490 | flutter | Build interactive reports of engine, APK & IPA sizes on each build. | This issue tracks the creation of reproducible reports generated by @goderbauer in [his preliminary document](https://docs.google.com/document/d/1hZUIKCPsUSfZ0Nt7_b9CE0izlteeHmOf2PzrlGZ_chk/edit?disco=AAAAB6lPw9Y&ts=5b27f6f2#heading=h.byyqjskjk9nl) investigating engine and APK sizes. | team,platform-ios,engine,P3,team-ios,triaged-ios | low | Major |
333,452,117 | flutter | LongPressDraggable doesn't have accessibility semantics for its long press action in iOS | cc @jonahwilliams | platform-ios,framework,a: accessibility,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-ios,triaged-ios | low | Major |
333,465,846 | rust | The locally-installed docs try to load non-local assets, causing them to fail on a bad connection. | The locally-installed docs that open when you use `rustup docs` try to load a few assets from a CDN. This is fine if you have no network connection at all, because they will fail quickly and the page will render.
However, if you have a very poor/slow network connection, they will try to load, and nothing on the page will render.
Please make the locally-installed docs not depend on non-local resources at all. | C-enhancement,A-docs | low | Major |
333,470,063 | angular | Post strictPropertyInitialization flag flip cleanup |
## I'm submitting a...
[x] Other... Please describe: Tracking issue for internal cleanup.
## What needs to be done
In order to quickly turn on the strictPropertyInitialization flag in the whole code base, we introduced `!` on every non-initialized class field. Each one of those fields needs to be individually checked to decide whether:
- the field should be initialized. Thus `foo!: string`, should become `foo = 'default'`.
- the field should be marked as optional. Thus `foo!: string`, should become `foo?: string`.
- the field is reliably initialized, but TS cannot infer that (initialization can occur outside the ctor, for example), so `!` should be kept. | type: bug/fix,freq2: medium,area: core,P4 | low | Major |
333,476,937 | youtube-dl | [cbc.ca] Error 402: Payment Required | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.18*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [X] I've **verified** and **I assure** that I'm running youtube-dl **2018.06.18**
### Before submitting an *issue* make sure you have:
- [X] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [X] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [X] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [X] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [X] Question
- [ ] Other
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
```
[debug] System config: []
[debug] User config: ['--all-subs', '--sub-format', 'srt', '--convert-subs', 'srt', '--embed-subs', '--merge-output-format', 'mkv', '--recode-video', 'mkv', '--format', 'bestvideo[width<=?1280]+bestaudio/best', '-o', 'D:/Video/Downloads/%(series)s/%(series)s - %(season_number)sx%(episode_number)02d - %(title)s [%(height)s].%(ext)s', '--prefer-ffmpeg', '--ffmpeg-location', 'D:\\Video\\FFMPEG\\bin', '--download-archive', 'D:/Video/Downloads/Archive.txt']
[debug] Custom config: []
[debug] Command-line args: ['https://watch.cbc.ca/media/heartland/season-9/episode-7/38e815a-0094c1b411a', '--username', 'PRIVATE', '-v']
Type account password and press [Return]:
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2018.06.18
[debug] Python version 3.4.4 (CPython) - Windows-8.1-6.3.9600
[debug] exe versions: ffmpeg N-90771-g5079e96bcc, ffprobe N-90771-g5079e96bcc
[debug] Proxy map: {}
[debug] Using fake IP 99.239.129.198 (CA) as X-Forwarded-For.
[cbc.ca:watch] 38e815a-0094c1b411a: Downloading XML
[cbc.ca:watch] 38e815a-0094c1b411a: Downloading XML
[download] Downloading playlist: Fearless
[cbc.ca:watch] playlist Fearless: Collected 1 video ids (downloading 1 of them)
[download] Downloading video 1 of 1
[debug] Using fake IP 99.234.54.159 (CA) as X-Forwarded-For.
[cbc.ca:watch:video] c78f815d-d3ed-4502-871f-da9347e0318a: Downloading XML
ERROR: Unable to download XML: HTTP Error 402: Payment Required (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpu1zifx01\build\youtube_dl\extractor\common.py", line 579, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmpu1zifx01\build\youtube_dl\YoutubeDL.py", line 2211, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://watch.cbc.ca/media/heartland/season-9/episode-7/38e815a-0094c1b411a
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Unlike Error 402 for YouTube, which typically signifies too many requests, a restricted IP, or a CAPTCHA request, in this situation some of the videos on the site (mainly those not currently being broadcast but instead from a previous season, etc.) require a free membership in order to watch them. In this situation, "Error 402: Payment Required" actually seems appropriate in that you need to be logged in to access select files. After trying multiple different options and variables, it appears that the youtube-dl interface is either not passing the login info to the page, or, when logged in within the browser, the login info is not being saved as a cookie that can be utilised by youtube-dl. Files that don't require logging in to view are easily downloaded.
Is there some sort of variable/option that I am missing that would circumvent the issue, or is it just not passing the credentials on to the site at the appropriate time? | geo-restricted,account-needed | low | Critical |
333,487,487 | flutter | Flutter Engine should consider banning static initializers | See Chromium's policy and reasoning:
https://chromium.googlesource.com/chromium/src/+/lkcr/docs/static_initializers.md
http://neugierig.org/software/chromium/notes/2011/08/static-initializers.html
Unless maybe our build rules already do? @chinmaygarde might know. | engine,P3,team-engine,triaged-engine | low | Minor |
333,494,573 | rust | Trait bounds are not checked on type aliases until they are used | Consider the following code:
```rust
#![crate_type = "lib"]
pub trait Trait {}
pub struct Foo<T: Trait>(T);
pub struct Qux;
pub type Bar<T=Qux> = Foo<T>;
```
This builds without an error although `Qux` doesn't respect the trait bound.
Build failure only occurs when one actually uses the type alias, like the following:
```rust
fn foo(_: Bar) {}
``` | A-diagnostics,A-trait-system,P-medium,T-compiler,C-bug | low | Critical |
333,559,804 | pytorch | Cannot allocate memory Error from operator | https://caffe2.ai/docs/tutorial-MNIST.html
Finally, we can plot the results using pyplot.
```
# The parameter initialization network only needs to be run once.
workspace.RunNetOnce(train_model.param_init_net)
# creating the network
workspace.CreateNet(train_model.net, overwrite=True)
# set the number of iterations and track the accuracy & loss
total_iters = 200
accuracy = np.zeros(total_iters)
loss = np.zeros(total_iters)
# Now, we will manually run the network for 200 iterations.
for i in range(total_iters):
    workspace.RunNet(train_model.net)
    accuracy[i] = workspace.FetchBlob('accuracy')
    loss[i] = workspace.FetchBlob('loss')
# After the execution is done, let's plot the values.
pyplot.plot(loss, 'b')
pyplot.plot(accuracy, 'r')
pyplot.legend(('Loss', 'Accuracy'), loc='upper right')
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-19-aad356a03a21> in <module>()
1 # The parameter initialization network only needs to be run once.
----> 2 workspace.RunNetOnce(train_model.param_init_net)
3 # creating the network
4 workspace.CreateNet(train_model.net, overwrite=True)
5 # set the number of iterations and track the accuracy & loss
/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.pyc in RunNetOnce(net)
197 C.Workspace.current._last_failed_op_net_position,
198 GetNetName(net),
--> 199 StringifyProto(net),
200 )
201
/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.pyc in CallWithExceptionIntercept(func, op_id_fetcher, net_name, *args, **kwargs)
176 def CallWithExceptionIntercept(func, op_id_fetcher, net_name, *args, **kwargs):
177 try:
--> 178 return func(*args, **kwargs)
179 except Exception:
180 op_id = op_id_fetcher()
RuntimeError: [enforce fail at lmdb.cc:20] mdb_status == MDB_SUCCESS. 12 vs 0. Cannot allocate memory Error from operator:
output: "dbreader_/home/kylin/\344\270\213\350\275\275/caffe2_notebooks/tutorial_data/mnist/mnist-train-nchw-lmdb" name: "" type: "CreateDB" arg { name: "db_type" s: "lmdb" } arg { name: "db" s: "/home/kylin/\344\270\213\350\275\275/caffe2_notebooks/tutorial_data/mnist/mnist-train-nchw-lmdb" }
- CUDA/cuDNN version: none
-
- GPU models and configuration: none
| caffe2 | low | Critical |
333,588,247 | flutter | Flutter doctor does not detect all installs of the Android SDK | Flutter Doctor can't tell the difference between the Android SDK not being installed and sdkmanager pieces not being installed. To reproduce:
```
$ flutter doctor -v
[✗] Android toolchain - develop for Android devices
✗ ANDROID_HOME = /usr/local/share/android-sdk/
but Android SDK not found at this location.
```
OK, let's try this:
```
$ sdkmanager "tools"
$ sdkmanager "platform-tools"
$ sdkmanager "build-tools;27.0.3"
$ sdkmanager "platforms;android-27"
$ sdkmanager "extras;android;m2repository"
$ sdkmanager "extras;google;m2repository"
$ sdkmanager "patcher;v4"
...
$ flutter doctor -v
[✓] Android toolchain - develop for Android devices (Android SDK 27.0.3)
```
Before that I went down a rabbit hole chasing SDK setup and env vars, when it wasn't that at all :-( I think hundreds of others have too.
| tool,t: flutter doctor,P2,team-tool,triaged-tool | low | Major |
333,620,664 | rust | Add impl<T: Unsize<U>, U: ?Sized> From<T> for Box<U> | Consider adding this implementation to the standard library if it is possible:
```rust
impl<T: Unsize<U>, U: ?Sized> From<T> for Box<U> {
    fn from(t: T) -> Box<U> {
        Box::new(t) as Box<U>
    }
}
```
This implementation will allow conversions like:
```rust
let x: Box<Fn(u32) -> u32> = (|x| x).into();
```
| C-enhancement,T-lang | low | Minor |
333,621,778 | go | encoding/json: eof error of NewDecoder().Decode() should be same with Unmarshal() | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.10.1 darwin/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64" GOOS="darwin"
### What did you do?
https://play.golang.org/p/gr5cHhrpmbK
### What did you expect to see?
shown in previous link
### What did you see instead?
shown in previous link
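For reference, a minimal reproduction of the difference (my sketch of what the playground presumably demonstrates):
```
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	var v interface{}
	// Unmarshal on empty input reports a descriptive syntax error ...
	fmt.Println(json.Unmarshal([]byte(""), &v)) // unexpected end of JSON input
	// ... while Decode on the same empty input returns io.EOF.
	fmt.Println(json.NewDecoder(strings.NewReader("")).Decode(&v)) // EOF
}
```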
| help wanted,NeedsDecision | medium | Critical |
333,624,004 | pytorch | Cannot use IterOp at runtime CUDA&CPU | Hello, I am currently following the tutorial with a C++/CUDA implementation.
Code: https://gist.github.com/Tezirg-Wrld3D/043c0662f2611142db86b656d24456a9
I am encountering the following error in the toy example:
```
terminate called after throwing an instance of 'caffe2::EnforceNotMet'
what(): [enforce fail at operator.cc:187] op. Cannot create operator of type 'IterOp' on the device 'CUDA'. device_option { device_type: 1 cuda_gpu_id: 0 }
```
So I tried to run a CPU-only build, but I encounter the same kind of error:
```
terminate called after throwing an instance of 'caffe2::EnforceNotMet'
what(): [enforce fail at operator.cc:187] op. Cannot create operator of type 'IterOp' on the device 'CPU'. device_option { device_type: 0 }
```
I have built caffe2 from source; here is the CMake output:
```
--
-- ******** Summary ********
-- General:
-- CMake version : 3.11.0
-- CMake command : /usr/local/lib/python2.7/dist-packages/cmake/data/bin/cmake
-- Git version : v0.1.11-8929-gd769074
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.0
-- BLAS : Eigen
-- CXX flags : -fvisibility-inlines-hidden -DONNX_NAMESPACE=onnx_c2 -O2 -fPIC -Wno-narrowing -Wno-invalid-partial-specialization -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-private-field -Wno-unused-result -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-error=deprecated-declarations
-- Build type : Release
-- Compile definitions :
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
--
-- BUILD_CAFFE2 : ON
-- BUILD_ATEN : OFF
-- BUILD_BINARY : ON
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 2.7.12
-- Python includes : /usr/include/python2.7
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : OFF
-- USE_ASAN : OFF
-- USE_ATEN : OFF
-- USE_CUDA : ON
-- CUDA static link : OFF
-- USE_CUDNN : ON
-- CUDA version : 9.2
-- cuDNN version : 7.1.4
-- CUDA root directory : /usr/local/cuda
-- CUDA library : /usr/lib/x86_64-linux-gnu/libcuda.so
-- cudart library : /usr/local/cuda/lib64/libcudart_static.a;-pthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
-- cublas library : /usr/local/cuda/lib64/libcublas.so;/usr/local/cuda/lib64/libcublas_device.a
-- cufft library : /usr/local/cuda/lib64/libcufft.so
-- curand library : /usr/local/cuda/lib64/libcurand.so
-- cuDNN library : /usr/local/cuda/lib64/libcudnn.so
-- nvrtc : /usr/local/cuda/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda/include
-- NVCC executable : /usr/local/cuda/bin/nvcc
-- CUDA host compiler : /usr/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- USE_LEVELDB : ON
-- LevelDB version : 1.18
-- Snappy version : 1.1.3
-- USE_LITE_PROTO : OFF
-- USE_LMDB : ON
-- LMDB version : 0.9.17
-- USE_METAL : OFF
-- USE_MKL :
-- USE_MOBILE_OPENGL : OFF
-- USE_MPI : ON
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NERVANA_GPU : OFF
-- USE_NNPACK : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : ON
-- OpenCV version : 2.4.9.1
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- Public Dependencies : Threads::Threads;gflags;glog::glog
-- Private Dependencies : nnpack;cpuinfo;/usr/lib/x86_64-linux-gnu/liblmdb.so;/usr/lib/x86_64-linux-gnu/libleveldb.so;/usr/lib/x86_64-linux-gnu/libsnappy.so;/usr/lib/x86_64-linux-gnu/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;/usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so;gloo;gcc_s;gcc;dl
```
I will also provide additional information about my system:
```
$ uname -a
Linux desktop49-ubuntu 4.4.0-128-generic #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ nvidia-smi
Tue Jun 19 12:00:17 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26 Driver Version: 396.26 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 105... Off | 00000000:22:00.0 On | N/A |
| 0% 35C P8 N/A / 75W | 257MiB / 4036MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1198 G /usr/lib/xorg/Xorg 26MiB |
| 0 1520 G /usr/lib/xorg/Xorg 110MiB |
| 0 1801 G /usr/bin/gnome-shell 88MiB |
+-----------------------------------------------------------------------------+
```
Does anyone know how to solve this? Thanks a lot! | caffe2 | low | Critical |
333,645,924 | godot | Android sensor lag | Hey All,
As I was playing around trying to make #19170 work, I stumbled upon some info that solves the lag issue I've been experiencing on Android. But before I change this, it requires some discussion to do this right.
The problem lies in the code here:
https://github.com/godotengine/godot/blob/master/platform/android/java/src/org/godotengine/godot/Godot.java#L618
We're using SENSOR_DELAY_GAME for all 4 sensors; this setting defines how fast the sensor data is processed and how much smoothing is applied. The longer this delay, the more lag, but the more stable the sensor data.
SENSOR_DELAY_GAME introduces a 0.02 second lag. That seems negligible, and it is for normal game applications (what's in a constant's name); for VR applications, however, that is a lifetime.
I changed the setting to SENSOR_DELAY_FASTEST which basically just gives you raw sensor data. Shaky but fast, perfect for VR.
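Concretely, the change boils down to registering the listeners with the fastest rate, something like the following (a sketch against the standard Android sensor API; the field names are approximations of those in the linked Godot.java):
```
// SENSOR_DELAY_GAME adds roughly 20 ms of smoothing/latency per sample;
// SENSOR_DELAY_FASTEST delivers raw samples as quickly as the hardware allows.
mSensorManager.registerListener(this, mAccelerometer, SensorManager.SENSOR_DELAY_FASTEST);
mSensorManager.registerListener(this, mGravity, SensorManager.SENSOR_DELAY_FASTEST);
mSensorManager.registerListener(this, mMagnetometer, SensorManager.SENSOR_DELAY_FASTEST);
mSensorManager.registerListener(this, mGyroscope, SensorManager.SENSOR_DELAY_FASTEST);
```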
Now, this is only important for our native driver; Cardboard and GearVR do their own sensor readings.
At the very least, however, I think this should be made settable, but, not knowing enough about the Android platform, I'm not sure of the best way to achieve this.
More info on the sensors:
https://developer.android.com/guide/topics/sensors/sensors_overview | enhancement,discussion,platform:android,topic:xr | low | Major |
333,758,051 | go | all: decide on the hyphenation of pseudorandom vs pseudo-random? | The docs for crypto/rand contain both "pseudorandom" and "pseudo-random" for the same part of speech.
Decide which to use.
Also, the package doc says:
> Package rand implements a cryptographically secure pseudorandom number generator.
But the Reader says:
> Reader is a global, shared instance of a cryptographically strong pseudo-random generator.
Is it cryptographically "strong" or is it "secure"? Can we pick a word there too?
Or can we just remove "pseudorandom" altogether? I feel like it makes it sound too much like math/rand.
Can we just say "cryptographically secure random number generator"?
/cc @FiloSottile @agl @ianlancetaylor | Documentation,NeedsDecision | low | Major |
333,771,261 | neovim | API: buffer updates: merge result of visual operation | <!-- Before reporting: search existing issues and check the FAQ. -->
- `nvim --version`: v0.3.1
- Vim (version: ) behaves differently? neovim only
- Operating system/version: Mac
- Terminal name/version: iTerm2
- `$TERM`: xterm-256color
### Steps to reproduce using `nvim -u NORC`
Create a file at `$NVIM_CONFIG/rplugin/node` with:
``` js
async function setup(nvim) {
let id = await nvim.channelId
let buffer = await nvim.buffer
let lines = await buffer.lines
buffer.listen('lines', (buf, tick, firstline, lastline, linedata, more) => {
console.error(firstline)
console.error(lastline)
console.error(linedata.length)
})
}
module.exports = (plugin) => {
let {nvim} = plugin
plugin.registerCommand('LiveListen', setup.bind(null, nvim), {sync: false})
}
```
Run `:UpdateRemotePlugins`
Open a file and run `:LiveListen`, then make a visual selection like:
<img width="115" alt="screen shot 2018-06-20 at 1 34 11 am" src="https://user-images.githubusercontent.com/251450/41614154-49757cdc-742a-11e8-9909-2e2c42b638f1.png">
And delete the selection by typing `d`
### Actual behaviour
The result would be `firstline: 1, lastline: 2, linedata: []`
### Expected behaviour
The result should be `firstline: 0, lastline: 2, linedata: ['a']`
The change to the first line is not taken into account. | enhancement,api | low | Critical |
333,786,551 | vscode | Issue Reporter is unstable when number of duplicates change | Issue Type: <b>Bug</b>
See the following recording. When editing the description, the text input field jumps up and down. It should be stable.
VS Code version: Code - Insiders 1.25.0-insider (b4a18da6e78f95295ea6538f5ae8dd0c9f0869a9, 2018-06-19T07:38:09.923Z)
OS version: Darwin x64 16.7.0
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz (8 x 2500)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: enabled<br>rasterization: enabled<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|2, 2, 2|
|Memory (System)|16.00GB (3.53GB free)|
|Process Argv|/Users/kmaetzel/Applications/Visual Studio Code - Insiders.app/Contents/MacOS/Electron -psn_0_14822946|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (25)</summary>
Extension|Author (truncated)|Version
---|---|---
solargraph|cas|0.17.4
npm-intellisense|chr|1.3.0
regex|chr|0.2.0
githistory|don|0.4.1
gitlens|eam|8.4.1
tslint|eg2|1.0.32
vscode-npm-script|eg2|0.3.4
vsc-material-theme|Equ|2.1.0
git-project-manager|fel|1.6.1
crazy|kie|0.0.2
svgpreview|kis|0.2.0
vscode-azurestorage|ms-|0.3.1
mssql|ms-|1.3.1
python|ms-|2018.5.0
azure-account|ms-|0.4.0
azurecli|ms-|0.4.2
Go|ms-|0.6.83
vsliveshare|ms-|0.3.295
printcode|nob|2.3.0
vscode-docker|Pet|0.0.27
material-icon-theme|PKi|3.5.0
Ruby|reb|0.18.0
kustovscode|sea|0.0.1
php-syntax-visualizer|vsc|0.0.1
vscode-open-in-github|ziy|1.3.3
(3 theme extensions excluded)
</details>

| bug,polish,issue-reporter | low | Critical |
333,809,692 | vue | Dynamic input field type renders invalid code in IE11 | ### Version
2.5.17-beta.0
### Reproduction link
[https://github.com/nirazul/vue-loader-bug-repro](https://github.com/nirazul/vue-loader-bug-repro)
### Steps to reproduce
1. `npm install`
2. `npm run build`
3. `npm run watch`
4. Open `./public/index.html`
5. Inspect `main.bundle.js` in dev tools
6. On line 9044 you will find a duplicated key `value`
### What is expected?
A valid output from vue-template-compiler without duplicated value props, or at least a warning that the usage of dynamic input field types is prohibited in certain cases.
### What is actually happening?
In IE11 a blank page is rendered
---
I'm using a centralized component for both radio and checkbox input fields as the markup is 90% the same.
As we switched from webpack 3 to webpack 4, we had to also upgrade the vue-loader version from 12 to 13 or 14, which introduced this bug.
Prior to version 13, vue-template-renderer was not enforcing strict mode on all of its rendered templates. This is now the case, introducing this critical bug.
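For context, the pattern in question is roughly the following (a hedged sketch only; the exact component lives in the repro repository, and the prop names here are made up):

```html
<template>
  <!-- One input reused for radio and checkbox via a dynamic :type,
       with both :value and v-model bound. -->
  <input :type="type" :value="optionValue" v-model="model">
</template>
```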
References:
https://vuejs.org/v2/guide/forms.html#Radio-1
https://github.com/vuejs/vue/issues/7048
https://github.com/vuejs/vue/issues/6917
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Critical |
333,829,153 | go | x/build: add misc-compile-mobile TryBots for Android/iOS | https://storage.googleapis.com/go-build-log/21200cf2/misc-compile-mobile_0b3b3144.log
```
/workdir/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/tmp/go-link-391904482/go.o:(.data+0x0): undefined reference to `x_cgo_init'
/tmp/go-link-391904482/go.o:(.data+0x4): undefined reference to `x_cgo_notify_runtime_init_done'
/tmp/go-link-391904482/go.o:(.data+0x8): undefined reference to `x_cgo_thread_start'
/tmp/go-link-391904482/go.o:(.data+0x14): undefined reference to `x_cgo_setenv'
/tmp/go-link-391904482/go.o:(.data+0x18): undefined reference to `x_cgo_unsetenv'
/tmp/go-link-391904482/go.o:(.data+0x1c): undefined reference to `_cgo_yield'
collect2: error: ld returned 1 exit status
```
If this is caused by my change (https://go-review.googlesource.com/c/go/+/109697) I can't see it.
/cc @bradfitz | Builders,NeedsInvestigation,FeatureRequest,new-builder | low | Critical |
333,887,112 | rust | rustc doesn't handle libc dependencies it introduces on its own | As of the current nightly, a minimalist `no_std` program can look like the following (panic is purposefully badly handled):
```rust
#![no_std]
#![no_main]
#![feature(panic_implementation)]
#[panic_implementation]
#[no_mangle]
pub fn panic_impl(_: &core::panic::PanicInfo) -> ! { loop {}}
#[no_mangle]
pub extern "C" fn main(_argc: isize, _arg: *const *const u8) -> isize {
0
}
```
This actually fails to link with:
```
= note: /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o: In function `_start':
(.text+0x12): undefined reference to `__libc_csu_fini'
(.text+0x19): undefined reference to `__libc_csu_init'
(.text+0x26): undefined reference to `__libc_start_main'
collect2: error: ld returned 1 exit status
```
(using `-C panic=abort`)
`Scrt1.o` comes from the ld command line the rust compiler emits, so it should arguably handle the libc dependency itself.
Somehow, when using the `alloc` library (which may be stabilized soon per RFC 2480), libc gets pulled in, so the following builds:
```rust
#![no_std]
#![no_main]
#![feature(alloc, lang_items, panic_implementation)]
extern crate alloc;
// won't be necessary after PR #51607
#[lang = "oom"]
#[no_mangle]
pub fn oom() -> ! { loop {}}
#[panic_implementation]
#[no_mangle]
pub fn panic_impl(_: &core::panic::PanicInfo) -> ! { loop {}}
#[no_mangle]
pub extern "C" fn main(_argc: isize, _arg: *const *const u8) -> isize {
0
}
```
Presumably, this happens because, without your own `#[global_allocator]`, you end up linking jemalloc in, which pulls libc.
So, let's now see what happens when you set your own `#[global_allocator]`:
```rust
#![no_std]
#![no_main]
#![feature(alloc, lang_items, panic_implementation)]
extern crate alloc;
use alloc::alloc::{GlobalAlloc, Layout};
// won't be necessary after PR #51607
#[lang = "oom"]
#[no_mangle]
pub fn oom() -> ! { loop {}}
#[panic_implementation]
#[no_mangle]
pub fn panic_impl(_: &core::panic::PanicInfo) -> ! { loop {}}
#[no_mangle]
pub extern "C" fn main(_argc: isize, _arg: *const *const u8) -> isize {
0
}
struct MyAlloc;
unsafe impl GlobalAlloc for MyAlloc {
unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
core::ptr::null_mut()
}
unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
}
}
#[global_allocator]
static GLOBAL: MyAlloc = MyAlloc;
```
Now we're back to the same linkage error as originally, because we're not pulling jemalloc anymore.
But then, some `core` operations can also actually use libc functions, and that adds more undefined references to symbols. For instance, adding the following to `main`:
```rust
unsafe { GLOBAL.alloc_zeroed(Layout::new::<[u8; 42]>()); }
```
(which you'd normally get from using e.g. `Vec`)
adds the following error:
```
r6-317d481089b8c8fe83113de504472633.rs:(.text._ZN4core5alloc11GlobalAlloc12alloc_zeroed17h8bac7be5ef64acd6E+0x6c): undefined reference to `memset'
```
Because `GlobalAlloc::alloc_zeroed` uses `ptr::write_bytes`, which is really `intrinsics::write_bytes`, which apparently uses `memset` under the hood. | A-linkage,T-compiler,C-bug | low | Critical |
333,907,021 | neovim | :q, :qa shouldn't quietly kill terminal buffers |
- `nvim --version`: v0.3.1-dev
- Vim (version: ) behaves differently? 8.1.89. Yes. Depending on `'confirm'`, fails or opens dialog
- Operating system/version: Ubuntu 18.04
- Terminal name/version: gnome-terminal 3.28.1
- `$TERM`: tmux-256color
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
Open a terminal buffer with `:terminal`.
Type `:q` or `:qa` to quit.
```
### Actual behaviour
Neovim quits, killing the terminal buffer silently.
### Expected behaviour
- If `'confirm'` is not set, `:q` (or `:qa`) should fail.
- If `'confirm'` is set, `:q` (or `:qa`) should open a Yes/No/Cancel dialog, offering to kill each open terminal buffer, just as it does for each changed file buffer. | terminal | low | Minor |
334,005,130 | TypeScript | Enum keys not accepted as computed properties if their name is not a valid identifier |
**TypeScript Version:** 2.9.2
**Search Terms:** computed property name
**Code**
```ts
enum Type {
Foo = 'foo',
'3x14' = '3x14'
}
type TypeMap = {
[Type.Foo]: any
[Type['3x14']]: any
}
```
**Expected behavior:**
`Type['3x14']` to be usable as a computed key in a type definition.
**Actual behavior:**
`A computed property name in a type literal must refer to an expression whose type is a literal type or a 'unique symbol' type.`
This is the case even for `Type.Foo` if it's written as `Type['Foo']` instead; so it seems there is some early bailout when the bracket operator is involved. For certain key names it may not be possible to write them without resorting to brackets, like a name that starts with a digit.
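If it is acceptable to have every enum member appear as a key, a mapped type over the enum sidesteps computed property names entirely (a possible workaround, not a fix for the underlying limitation):

```ts
type TypeMap = { [K in Type]: any };
```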
**Playground Link:** [Link](https://www.typescriptlang.org/play/#src=enum%20Type%20%7B%0A%20%20Foo%20%3D%20'foo'%2C%0A%20%20'3x14'%20%3D%20'3.14'%0A%7D%0A%0Atype%20TypeMap%20%3D%20%7B%0A%20%20%5BType.Foo%5D%3A%20any%0A%20%20%5BType%5B'3x14'%5D%5D%3A%20any%0A%7D%0A)
| Bug,Good First Issue | low | Minor |
334,005,130 | opencv | `CL_INVALID_WORK_GROUP_SIZE` on calling OpenCL kernels `minmaxloc`, `reduce` and some others | ##### System information (version)
- OpenCV => 3.4.0; 3.4.1
- 3.4.0 was cross-compiled by Yocto.
- 3.4.1 was natively-compiled.
- Operating System / Platform => Yocto Linux 2.4 / i.MX 8M QUAD EVK
- Compiler => g++ 7.3.0
##### Description
When running `opencv_perf_core`, some tests output one of the following lines up to a hundred times:
```shell
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('fft_multi_radix_rows', dims=2, globalsize=160x720x1, localsize=160x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('fft_multi_radix_rows', dims=2, globalsize=240x1080x1, localsize=240x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('fft_multi_radix_rows', dims=2, globalsize=256x2048x1, localsize=256x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('gemm', dims=2, globalsize=160x640x1, localsize=32x32x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('gemm', dims=2, globalsize=320x1280x1, localsize=32x32x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('gemm', dims=2, globalsize=320x640x1, localsize=16x16x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('gemm', dims=2, globalsize=640x1280x1, localsize=16x16x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('ifft_multi_radix_cols', dims=2, globalsize=1025x256x1, localsize=1x256x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('ifft_multi_radix_cols', dims=2, globalsize=961x135x1, localsize=1x135x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('ifft_multi_radix_rows', dims=2, globalsize=160x720x1, localsize=160x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('ifft_multi_radix_rows', dims=2, globalsize=240x1080x1, localsize=240x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('ifft_multi_radix_rows', dims=2, globalsize=256x2048x1, localsize=256x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('meanStdDev', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('reduce', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('reduce', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('reduce_horz_opt', dims=2, globalsize=32x1088x1, localsize=32x32x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('reduce_horz_opt', dims=2, globalsize=32x2176x1, localsize=32x32x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('reduce_horz_opt', dims=2, globalsize=32x480x1, localsize=32x32x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('reduce_horz_opt', dims=2, globalsize=32x736x1, localsize=32x32x1) sync=false
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('stage1_with_sobel', dims=2, globalsize=1920x1088x1, localsize=32x32x1) sync=false
```
The test result, however, is successful.
Are these errors expected?
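For what it's worth, one plausible explanation is that the driver reports a per-kernel work-group limit smaller than the device-wide maximum, in which case a `localsize` of 1024x1x1 is rejected even though `CL_DEVICE_MAX_WORK_GROUP_SIZE` is 1024. A minimal sketch of how to check that (hypothetical helper name, standard OpenCL API):

```c
#include <stdio.h>
#include <CL/cl.h>

/* If this value is smaller than the localsize OpenCV requests (1024 here),
   clEnqueueNDRangeKernel returns CL_INVALID_WORK_GROUP_SIZE. */
static void print_kernel_wg_limit(cl_kernel kernel, cl_device_id device)
{
    size_t wg = 0;
    clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE,
                             sizeof(wg), &wg, NULL);
    printf("CL_KERNEL_WORK_GROUP_SIZE = %zu\n", wg);
}
```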
##### Detailed description
A complete output:
```shell
# opencv_perf_core --gtest_filter=OCL_MinMaxLocFixture_MinMaxLoc.MinMaxLoc/0
Time compensation is 0
CTEST_FULL_OUTPUT
OpenCV version: 3.4.1
OpenCV VCS version: unknown
Build type: release
Parallel framework: pthreads
CPU features: neon fp16
[ INFO:0] Initialize OpenCL runtime...
OpenCL Platforms:
Vivante OpenCL Platform
iGPU: Vivante OpenCL Device GC7000L.6214.0000 (OpenCL 1.2 )
Current OpenCL device:
Type = iGPU
Name = Vivante OpenCL Device GC7000L.6214.0000
Version = OpenCL 1.2
Driver version = OpenCL 1.2 V6.2.4.p1.150331
Address bits = 32
Compute units = 4
Max work group size = 1024
Local memory size = 32 KB
Max memory allocation size = 128 MB
Double support = No
Host unified memory = Yes
Device extensions:
cl_khr_byte_addressable_store
cl_khr_global_int32_base_atomics
cl_khr_global_int32_extended_atomics
cl_khr_local_int32_base_atomics
cl_khr_local_int32_extended_atomics
cl_khr_gl_sharing
Has AMD Blas = No
Has AMD Fft = No
Preferred vector width char = 4
Preferred vector width short = 4
Preferred vector width int = 4
Preferred vector width long = 4
Preferred vector width float = 4
Preferred vector width double = 0
Note: Google Test filter = OCL_MinMaxLocFixture_MinMaxLoc.MinMaxLoc/0
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from OCL_MinMaxLocFixture_MinMaxLoc
[ RUN ] OCL_MinMaxLocFixture_MinMaxLoc.MinMaxLoc/0, where GetParam() = (640x480, 8UC1)
[ INFO:0] Successfully initialized OpenCL cache directory: /home/root/.cache/opencv/3.4.1/opencl_cache/
[ INFO:0] Preparing OpenCL cache configuration for context: 32-bit--Vivante_Corporation--Vivante_OpenCL_Device_GC7000L_6214_0000--OpenCL_1_2_V6_2_4_p1_150331
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
OpenCL error CL_INVALID_WORK_GROUP_SIZE (-54) during call: clEnqueueNDRangeKernel('minmaxloc', dims=1, globalsize=4096x1x1, localsize=1024x1x1) sync=true
[ PERFSTAT ] (samples=10 mean=132.07 median=131.35 min=130.96 stddev=2.12 (1.6%))
[ OK ] OCL_MinMaxLocFixture_MinMaxLoc.MinMaxLoc/0 (1327 ms)
[----------] 1 test from OCL_MinMaxLocFixture_MinMaxLoc (1327 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1327 ms total)
[ PASSED ] 1 test.
```
Device info:
```shell
root@imx8mqevk:/opt/imx-gpu-sdk/OpenCL/Info# ./Info
Dumping platform info for 1 platforms.
*** Platform #0 ***
Platform version: 1.2
CL_PLATFORM_PROFILE: FULL_PROFILE
CL_PLATFORM_VERSION: OpenCL 1.2 V6.2.4.p1.150331
CL_PLATFORM_NAME: Vivante OpenCL Platform
CL_PLATFORM_VENDOR: Vivante Corporation
CL_PLATFORM_EXTENSIONS: cl_khr_icd
Dumping detailed device info for 1 platforms.
*** Platform #0 ***
Platform version: 1.2
CL_PLATFORM_PROFILE: FULL_PROFILE
CL_PLATFORM_VERSION: OpenCL 1.2 V6.2.4.p1.150331
CL_PLATFORM_NAME: Vivante OpenCL Platform
CL_PLATFORM_VENDOR: Vivante Corporation
CL_PLATFORM_EXTENSIONS: cl_khr_icd
Enumerating devices of type: CL_DEVICE_TYPE_CPU
- Not supported
Enumerating devices of type: CL_DEVICE_TYPE_GPU
--- Device #0 ---
Device version: 1.2
CL_DEVICE_ADDRESS_BITS: 32
CL_DEVICE_AVAILABLE: 1
CL_DEVICE_BUILT_IN_KERNELS:
CL_DEVICE_COMPILER_AVAILABLE: 1
CL_DEVICE_DOUBLE_FP_CONFIG: 0
CL_DEVICE_ENDIAN_LITTLE: 1
CL_DEVICE_ERROR_CORRECTION_SUPPORT: 1
CL_DEVICE_EXECUTION_CAPABILITIES: 1
CL_DEVICE_EXTENSIONS: cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_gl_sharing
CL_DEVICE_GLOBAL_MEM_CACHE_SIZE: 8192
CL_DEVICE_GLOBAL_MEM_CACHE_TYPE: 2
CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE: 8589934656
CL_DEVICE_GLOBAL_MEM_SIZE: 268435456
CL_DEVICE_HALF_FP_CONFIG: 0
CL_DEVICE_HOST_UNIFIED_MEMORY: 1
CL_DEVICE_IMAGE_SUPPORT: 1
CL_DEVICE_IMAGE2D_MAX_HEIGHT: 8192
CL_DEVICE_IMAGE2D_MAX_WIDTH: 8192
CL_DEVICE_IMAGE3D_MAX_DEPTH: 8192
CL_DEVICE_IMAGE3D_MAX_HEIGHT: 8192
CL_DEVICE_IMAGE3D_MAX_WIDTH: 8192
CL_DEVICE_IMAGE_MAX_BUFFER_SIZE: 65536
CL_DEVICE_IMAGE_MAX_ARRAY_SIZE: 8192
CL_DEVICE_LINKER_AVAILABLE: 1
CL_DEVICE_LOCAL_MEM_SIZE: 32768
CL_DEVICE_LOCAL_MEM_TYPE: 2
CL_DEVICE_MAX_CLOCK_FREQUENCY: 500
CL_DEVICE_MAX_COMPUTE_UNITS: 4
CL_DEVICE_MAX_CONSTANT_ARGS: 9
CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE: 65536
CL_DEVICE_MAX_MEM_ALLOC_SIZE: 134217728
CL_DEVICE_MAX_PARAMETER_SIZE: 1024
CL_DEVICE_MAX_READ_IMAGE_ARGS: 128
CL_DEVICE_MAX_SAMPLERS: 16
CL_DEVICE_MAX_WORK_GROUP_SIZE: 1024
CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS: 3
CL_DEVICE_MAX_WORK_ITEM_SIZES: 1024, 1024, 1024, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
CL_DEVICE_MAX_WRITE_IMAGE_ARGS: 8
CL_DEVICE_MEM_BASE_ADDR_ALIGN: 1024
CL_DEVICE_MIN_DATA_TYPE_ALIGN_SIZE: 128
CL_DEVICE_NAME: Vivante OpenCL Device GC7000L.6214.0000
CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_INT: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_FLOAT: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_DOUBLE: 0
CL_DEVICE_NATIVE_VECTOR_WIDTH_HALF: 0
CL_DEVICE_OPENCL_C_VERSION: OpenCL C 1.2
CL_DEVICE_PARENT_DEVICE: 0
CL_DEVICE_PARTITION_MAX_SUB_DEVICES: 0
CL_DEVICE_PARTITION_PROPERTIES: 0, 0, 0, 0
CL_DEVICE_PARTITION_AFFINITY_DOMAIN: 0
CL_DEVICE_PARTITION_TYPE: 0, 0, 0, 0
CL_DEVICE_PLATFORM: 0xffff7a08eea0
CL_DEVICE_PREFERRED_VECTOR_WIDTH_CHAR: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_INT: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_LONG: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE: 0
CL_DEVICE_PREFERRED_VECTOR_WIDTH_HALF: 0
CL_DEVICE_PRINTF_BUFFER_SIZE: 1048576
CL_DEVICE_PREFERRED_INTEROP_USER_SYNC: 1
CL_DEVICE_PROFILE: FULL_PROFILE
CL_DEVICE_PROFILING_TIMER_RESOLUTION: 1000
CL_DEVICE_QUEUE_PROPERTIES: 3
CL_DEVICE_REFERENCE_COUNT: 1
CL_DEVICE_SINGLE_FP_CONFIG: 14
CL_DEVICE_TYPE: CL_DEVICE_TYPE_GPU
CL_DEVICE_VENDOR: Vivante Corporation
CL_DEVICE_VENDOR_ID: 5654870
CL_DEVICE_VERSION: OpenCL 1.2
CL_DRIVER_VERSION: OpenCL 1.2 V6.2.4.p1.150331
Enumerating devices of type: CL_DEVICE_TYPE_ACCELERATOR
- Not supported
Enumerating devices of type: CL_DEVICE_TYPE_CUSTOM
- Not supported
Enumerating devices of type: CL_DEVICE_TYPE_ALL
--- Device #0 ---
Device version: 1.2
CL_DEVICE_ADDRESS_BITS: 32
CL_DEVICE_AVAILABLE: 1
CL_DEVICE_BUILT_IN_KERNELS:
CL_DEVICE_COMPILER_AVAILABLE: 1
CL_DEVICE_DOUBLE_FP_CONFIG: 0
CL_DEVICE_ENDIAN_LITTLE: 1
CL_DEVICE_ERROR_CORRECTION_SUPPORT: 1
CL_DEVICE_EXECUTION_CAPABILITIES: 1
CL_DEVICE_EXTENSIONS: cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_gl_sharing
CL_DEVICE_GLOBAL_MEM_CACHE_SIZE: 8192
CL_DEVICE_GLOBAL_MEM_CACHE_TYPE: 2
CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE: 8589934656
CL_DEVICE_GLOBAL_MEM_SIZE: 268435456
CL_DEVICE_HALF_FP_CONFIG: 0
CL_DEVICE_HOST_UNIFIED_MEMORY: 1
CL_DEVICE_IMAGE_SUPPORT: 1
CL_DEVICE_IMAGE2D_MAX_HEIGHT: 8192
CL_DEVICE_IMAGE2D_MAX_WIDTH: 8192
CL_DEVICE_IMAGE3D_MAX_DEPTH: 8192
CL_DEVICE_IMAGE3D_MAX_HEIGHT: 8192
CL_DEVICE_IMAGE3D_MAX_WIDTH: 8192
CL_DEVICE_IMAGE_MAX_BUFFER_SIZE: 65536
CL_DEVICE_IMAGE_MAX_ARRAY_SIZE: 8192
CL_DEVICE_LINKER_AVAILABLE: 1
CL_DEVICE_LOCAL_MEM_SIZE: 32768
CL_DEVICE_LOCAL_MEM_TYPE: 2
CL_DEVICE_MAX_CLOCK_FREQUENCY: 500
CL_DEVICE_MAX_COMPUTE_UNITS: 4
CL_DEVICE_MAX_CONSTANT_ARGS: 9
CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE: 65536
CL_DEVICE_MAX_MEM_ALLOC_SIZE: 134217728
CL_DEVICE_MAX_PARAMETER_SIZE: 1024
CL_DEVICE_MAX_READ_IMAGE_ARGS: 128
CL_DEVICE_MAX_SAMPLERS: 16
CL_DEVICE_MAX_WORK_GROUP_SIZE: 1024
CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS: 3
CL_DEVICE_MAX_WORK_ITEM_SIZES: 1024, 1024, 1024, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
CL_DEVICE_MAX_WRITE_IMAGE_ARGS: 8
CL_DEVICE_MEM_BASE_ADDR_ALIGN: 1024
CL_DEVICE_MIN_DATA_TYPE_ALIGN_SIZE: 128
CL_DEVICE_NAME: Vivante OpenCL Device GC7000L.6214.0000
CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_INT: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_FLOAT: 4
CL_DEVICE_NATIVE_VECTOR_WIDTH_DOUBLE: 0
CL_DEVICE_NATIVE_VECTOR_WIDTH_HALF: 0
CL_DEVICE_OPENCL_C_VERSION: OpenCL C 1.2
CL_DEVICE_PARENT_DEVICE: 0
CL_DEVICE_PARTITION_MAX_SUB_DEVICES: 0
CL_DEVICE_PARTITION_PROPERTIES: 0, 0, 0, 0
CL_DEVICE_PARTITION_AFFINITY_DOMAIN: 0
CL_DEVICE_PARTITION_TYPE: 0, 0, 0, 0
CL_DEVICE_PLATFORM: 0xffff7a08eea0
CL_DEVICE_PREFERRED_VECTOR_WIDTH_CHAR: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_INT: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_LONG: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT: 4
CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE: 0
CL_DEVICE_PREFERRED_VECTOR_WIDTH_HALF: 0
CL_DEVICE_PRINTF_BUFFER_SIZE: 1048576
CL_DEVICE_PREFERRED_INTEROP_USER_SYNC: 1
CL_DEVICE_PROFILE: FULL_PROFILE
CL_DEVICE_PROFILING_TIMER_RESOLUTION: 1000
CL_DEVICE_QUEUE_PROPERTIES: 3
CL_DEVICE_REFERENCE_COUNT: 1
CL_DEVICE_SINGLE_FP_CONFIG: 14
CL_DEVICE_TYPE: CL_DEVICE_TYPE_GPU
CL_DEVICE_VENDOR: Vivante Corporation
CL_DEVICE_VENDOR_ID: 5654870
CL_DEVICE_VERSION: OpenCL 1.2
CL_DRIVER_VERSION: OpenCL 1.2 V6.2.4.p1.150331
```
Cmake output:
```shell
# cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_TESTS=ON -D INSTALL_TESTS=ON ..
-- Looking for ccache - found (/usr/bin/ccache)
-- Found ZLIB: /usr/lib/libz.so (found suitable version "1.2.11", minimum required is "1.2.3")
-- Could NOT find Jasper (missing: JASPER_LIBRARIES JASPER_INCLUDE_DIR)
-- Found ZLIB: /usr/lib/libz.so (found version "1.2.11")
-- Checking for module 'gtk+-3.0'
-- No package 'gtk+-3.0' found
-- Checking for module 'gtk+-2.0'
-- No package 'gtk+-2.0' found
-- Checking for module 'gthread-2.0'
-- Found gthread-2.0, version 2.52.3
-- Checking for module 'gstreamer-base-1.0'
-- Found gstreamer-base-1.0, version 1.12.2
-- Checking for module 'gstreamer-video-1.0'
-- Found gstreamer-video-1.0, version 1.12.2
-- Checking for module 'gstreamer-app-1.0'
-- Found gstreamer-app-1.0, version 1.12.2
-- Checking for module 'gstreamer-riff-1.0'
-- Found gstreamer-riff-1.0, version 1.12.2
-- Checking for module 'gstreamer-pbutils-1.0'
-- Found gstreamer-pbutils-1.0, version 1.12.2
-- Checking for module 'libdc1394-2'
-- No package 'libdc1394-2' found
-- Checking for module 'libdc1394'
-- No package 'libdc1394' found
-- Looking for linux/videodev.h
-- Looking for linux/videodev.h - not found
-- Looking for linux/videodev2.h
-- Looking for linux/videodev2.h - found
-- Looking for sys/videoio.h
-- Looking for sys/videoio.h - not found
-- Checking for modules 'libavcodec;libavformat;libavutil;libswscale'
-- No package 'libavcodec' found
-- No package 'libavformat' found
-- No package 'libavutil' found
-- No package 'libswscale' found
-- Checking for module 'libavresample'
-- No package 'libavresample' found
-- Checking for module 'libgphoto2'
-- Found libgphoto2, version 2.5.8
-- Could not find OpenBLAS include. Turning OpenBLAS_FOUND off
-- Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off
-- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY)
-- A library with BLAS API not found. Please specify library location.
-- LAPACK requires BLAS
-- A library with LAPACK API not found. Please specify library location.
-- Could NOT find JNI (missing: JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
-- Could NOT find Matlab (missing: MATLAB_MEX_SCRIPT MATLAB_INCLUDE_DIRS MATLAB_ROOT_DIR MATLAB_LIBRARIES MATLAB_LIBRARY_DIRS MATLAB_MEXEXT MATLAB_ARCH MATLAB_BIN)
-- VTK is not found. Please set -DVTK_DIR in CMake to VTK build directory, or to VTK install subdirectory with VTKConfig.cmake file
-- Excluding from source files list: modules/core/src/convert.avx2.cpp
-- Excluding from source files list: modules/core/src/convert.sse4_1.cpp
-- Excluding from source files list: modules/imgproc/src/corner.avx.cpp
-- Excluding from source files list: modules/imgproc/src/filter.avx2.cpp
-- Excluding from source files list: modules/imgproc/src/imgwarp.avx2.cpp
-- Excluding from source files list: modules/imgproc/src/imgwarp.sse4_1.cpp
-- Excluding from source files list: modules/imgproc/src/resize.avx2.cpp
-- Excluding from source files list: modules/imgproc/src/resize.sse4_1.cpp
-- Excluding from source files list: modules/imgproc/src/undistort.avx2.cpp
-- Excluding from source files list: modules/objdetect/src/haar.avx.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.avx.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.avx2.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.avx512_skx.cpp
-- Excluding from source files list: modules/features2d/src/fast.avx2.cpp
--
-- General configuration for OpenCV 3.4.1 =====================================
-- Version control: unknown
--
-- Platform:
-- Timestamp: 2018-06-20T07:07:40Z
-- Host: Linux 4.9.88-imx_4.9.88_2.0.0_ga+g5e23f9d aarch64
-- CMake: 3.8.2
-- CMake generator: Unix Makefiles
-- CMake build tool: /usr/bin/make
-- Configuration: Release
--
-- CPU/HW features:
-- Baseline: NEON FP16
-- required: NEON
-- disabled: VFPV3
--
-- C/C++:
-- Built as dynamic libs?: YES
-- C++11: YES
-- C++ Compiler: /usr/bin/c++ (ver 7.3.0)
-- C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -WundG
-- C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -WundG
-- C Compiler: /usr/bin/cc
-- C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -WmisG
-- C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -WmisG
-- Linker flags (Release):
-- Linker flags (Debug):
-- ccache: YES
-- Precompiled headers: NO
-- Extra dependencies: dl m pthread rt
-- 3rdparty dependencies:
--
-- OpenCV modules:
-- To be built: calib3d core dnn features2d flann highgui imgcodecs imgproc java_bindings_generator ml objdetect photo python_bindings_generator shape stitching superres ts video videob
-- Disabled: js world
-- Disabled by dependency: -
-- Unavailable: cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev java python2 python3 viz
-- Applications: tests perf_tests apps
-- Documentation: NO
-- Non-free algorithms: NO
--
-- GUI:
-- GTK+: NO
-- VTK support: NO
--
-- Media I/O:
-- ZLib: /usr/lib/libz.so (ver 1.2.11)
-- JPEG: /usr/lib/libjpeg.so (ver )
-- WEBP: /usr/lib/libwebp.so (ver encoder: 0x020e)
-- PNG: /usr/lib/libpng.so (ver 1.6.31)
-- TIFF: /usr/lib/libtiff.so (ver 42 / 4.0.8)
-- JPEG 2000: build (ver 1.900.1)
-- OpenEXR: build (ver 1.7.1)
--
-- Video I/O:
-- DC1394: NO
-- FFMPEG: NO
-- avcodec: NO
-- avformat: NO
-- avutil: NO
-- swscale: NO
-- avresample: NO
-- GStreamer:
-- base: YES (ver 1.12.2)
-- video: YES (ver 1.12.2)
-- app: YES (ver 1.12.2)
-- riff: YES (ver 1.12.2)
-- pbutils: YES (ver 1.12.2)
-- libv4l/libv4l2: NO
-- v4l/v4l2: linux/videodev2.h
-- gPhoto2: YES
--
-- Parallel framework: pthreads
--
-- Trace: YES (built-in)
--
-- Other third-party libraries:
-- Lapack: NO
-- Eigen: NO
-- Custom HAL: YES (carotene (ver 0.0.1))
-- Protobuf: build (3.5.1)
--
-- NVIDIA CUDA: NO
--
-- OpenCL: YES (no extra features)
-- Include path: /home/root/opencv-3.4.1/3rdparty/include/opencl/1.2
-- Link libraries: Dynamic load
--
-- Python (for build): /usr/bin/python2.7
--
-- Java:
-- ant: NO
-- JNI: NO
-- Java wrappers: NO
-- Java tests: NO
--
-- Matlab: NO
--
-- Install to: /usr
-- -----------------------------------------------------------------
--
-- Configuring done
-- Generating done
-- Build files have been written to: /home/root/opencv-3.4.1/build
```
| category: ocl,category: 3rdparty | low | Critical |
334,095,303 | godot | PoolColorArray.set doesn't work |
**Godot version:** 3.0.3
**OS/device including version:** Linux x64 (SolusOS 3.9999 Gnome 3.28.2)
**Issue description:**
I'm trying to use `Polygon2D.vertex_colors.set(0, color)`, but this approach doesn't do anything. While `vertex_colors[0] = color` works, the code below doesn't:
```GDScript
func _ready():
var color = Color(1, 1, 1, 1)
print(color)
vertex_colors.set(0, color)
print(vertex_colors[0])
```
the print returns:
```
1,1,1,1
1,0.701961,0.501961,1
```
Meanwhile, assigning a whole new `PoolColorArray` to `Polygon2D.vertex_colors` works:
```GDScript
var white = Color(1, 1, 1, 1)
var colors = [white, white, white, white]
print(colors[0])
vertex_colors = colors
print(vertex_colors[0])
```
the print returns:
```
1,1,1,1
1,1,1,1
```
Both results are reflected in the actual game (the first approach doesn't modify the Polygon2D vertex colors, the second one does).
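If it helps others hitting this: assuming the cause is that the property getter hands back a copy of the pool array, `.set()` runs on that temporary copy and is lost. A hedged workaround sketch is to modify an explicit copy and write it back:

```GDScript
func set_vertex_color(index, color):
    var colors = vertex_colors   # copy of the pool array
    colors.set(index, color)     # modify the copy
    vertex_colors = colors       # assign the whole array back
```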
Also, during some tests I realized that, unfortunately, the latest post about the [Animation System](https://godotengine.org/article/godot-gets-brand-new-animation-editor-cinematic-support) contains a statement that is not true.
I tried `Tween.interpolate_property` and `Tween.interpolate_method`, both trying to interpolate `Polygon2D.vertex_colors`, and I got this error:
```
0:00:00:0209 - Invalid param type, except(int/real/vector2/vector/matrix/matrix32/quat/aabb/transform/color)
----------
Type:Error
Description:
Time: 0:00:00:0209
C Error: Invalid param type, except(int/real/vector2/vector/matrix/matrix32/quat/aabb/transform/color)
C Source: scene/animation/tween.cpp:991
C Function: _calc_delta_val
```
So it seems like:
> any property from any object can be animated or tweened
Is not true, unfortunately :(
**Steps to reproduce:**
1. Create a Polygon2D
2. On its `vertex_colors` property create the desired array
3. Try to `PoolColorArray.set(index, color)`
---
1. Create 2 PoolColorArrays
2. Try to `Tween.interpolate_property` using them as `initial_value` and `final_value`
**Minimal reproduction project:**
[Polygon2D.zip](https://github.com/godotengine/godot/files/2119646/Polygon2D.zip)
| topic:gdscript,documentation | low | Critical |
334,136,669 | flutter | ShadowBox inset attribute? Inner shadow | I didn't find any trace of an inner shadow option for boxes.
Is there an option to achieve the same effect as CSS's `box-shadow: inset`? | c: new feature,framework,customer: crowd,P3,team-framework,triaged-framework | high | Critical |
334,176,917 | puppeteer | Disable crash reporting by default | Win CI usually fails to clean up the userDataDir folder in some tests. Example: [AppVeyor build](https://ci.appveyor.com/project/aslushnikov/puppeteer/build/1.0.1672/job/j2bllv92su3uajye)
It turns out that on Windows:
- crash pad is launched in a separate process
- crash pad writes a few files into userDataDir
- crash pad process doesn’t necessarily shutdown when we close browser.
As a result, we fail to remove temporary user data directory.
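A hedged workaround sketch in the meantime is to pass the switch explicitly at launch, though, as noted below, headless currently seems to ignore it, so this may not actually prevent crashpad from starting:

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    args: ['--disable-breakpad'],
  });
  await browser.close();
})();
```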
We need to disable crash reporting by default. Somehow chrome headless ignores the `--disable-breakpad` flag and launches crashpad anyway; this requires investigation and fixing upstream. | bug,feature,upstream,chromium,confirmed,P3 | low | Critical |
334,185,870 | rust | 1.27.0 Compiler Options: no-redzone option (Is the explanation backwards?) | In the 1.27.0 release of the "Rustc Book", the command-line option "no-redzone" is described as follows:
> `no-redzone`
>
> This flag allows you to disable the red zone. This flag can be passed many options:
>
> - To enable the red zone: `y`, `yes` or `on`.
> - To disable it: `n`, `no`, or `off`.
This sounds backwards. Shouldn't `no-redzone=yes` disable the red zone and `no-redzone=no` enable it? If not, the command-line option seems misnamed and should be "redzone" rather than "no-redzone". | C-enhancement,P-medium,T-compiler,A-docs | low | Minor |
334,190,814 | go | cmd/go: get fails to provide sensible error message for private vcs repos | #### What did you do?
I have a project that imports a private git repository. When setting it up with `vgo get` or similar commands, the resolving process will stop abruptly, not writing anything to the disk.
The command succeeds if git credentials are properly set up (*_ASKPASS, global config or the repo is already cloned with the right config, etc.).
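(For reference, one common way to make the git fetch non-interactive for private repositories is a rewrite rule in the global git config, sketched below; adjust the host to taste.)

```
git config --global url."git@github.com:".insteadOf "https://github.com/"
```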
#### What did you expect to see?
An error message, or some indication of what went wrong
#### What did you see instead?
No error message, just an exit code 1.
#### System details
```
go version go1.10.3 linux/amd64 vgo:2018-02-20.1
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/exploser/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/exploser/go"
GORACE=""
GOROOT="/usr/lib/go"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build534162883=/tmp/go-build -gno-record-gcc-switches"
GOROOT/bin/go version: go version go1.10.3 linux/amd64
GOROOT/bin/go tool compile -V: compile version go1.10.3
uname -sr: Linux 4.16.14-2-MANJARO
LSB Version: n/a
Distributor ID: ManjaroLinux
Description: Manjaro Linux
Release: 17.1.10
Codename: Hakoila
/usr/lib/libc.so.6: GNU C Library (GNU libc) stable release version 2.27.
gdb --version: GNU gdb (GDB) 8.1
```
| NeedsFix,GoCommand,modules | medium | Critical |
334,215,939 | rust | #[macro_use] use path; produces unhelpful diagnostics | ```rust
mod a {
macro_rules! foo { () => {}; }
}
mod b {
#[macro_use]
use a;
foo!();
}
fn main() {}
```
```
Compiling playground v0.0.1 (file:///playground)
error: cannot find macro `foo!` in this scope
--> src/main.rs:10:5
|
10 | foo!();
| ^^^
|
= help: have you added the `#[macro_use]` on the module/import?
```
The error message even seems to suggest that this is what you're supposed to write, by using the phrase "on the module/**import**" | C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics | low | Critical |
334,252,168 | puppeteer | Support Network requests in Workers | WebWorkers can `fetch` data; we should surface traffic from web workers in puppeteer.
This is blocked on #2548 since nested targets are broken with request interception. See https://github.com/GoogleChrome/puppeteer/pull/2717#issuecomment-398899616 for details. | feature,upstream,chromium | medium | Critical |
334,260,471 | flutter | Reduce duplication in RenderOpacity and RenderAnimatedOpacity | RenderOpacity and RenderAnimatedOpacity contain quite a bit of duplicated code.
This duplication even resulted in a recent nuanced bug that showed itself when a RenderOpacity was changed to a RenderAnimatedOpacity and broke a golden image test.
The duplication between these two classes should be reduced to avoid future discrepancies. | team,framework,P3,team-framework,triaged-framework | low | Critical |
334,277,875 | godot | Nested viewports don't work correctly with get_local_mouse_position() |
**Godot version:**
3.0.2
**OS/device including version:**
Windows 10
**Issue description:**
When nesting viewports (viewportcontainer/viewport/viewportcontainer/viewport), the mouse input is always scaled from the root viewport instead of the parent.
**Steps to reproduce:**
Make 4 nodes with the following hierarchy:
viewportcontainer/viewport/viewportcontainer2/viewport2
Scale viewportcontainer, then call get_local_mouse_position() on viewportcontainer2. It reports the local position as if viewportcontainer2 were not a child of viewportcontainer.
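A rough GDScript sketch of that check (script assumed to be attached to viewportcontainer2):

```
func _process(_delta):
    print(get_local_mouse_position())
```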
| bug,topic:core,confirmed | low | Critical |
334,322,058 | pytorch | [caffe2] AffineChannelOp | Hi,
If I use a SpatialBN (spatial batch normalization), do I need to add an AffineChannelOp when using the net in production, or is setting the is_test flag to true enough?
What do you advise? If I do need to use an AffineChannelOp, how do I do that?
Thanks a lot for anybody helping me. | caffe2 | low | Minor |
334,323,233 | neovim | Project concept | # Problem
With large projects with multiple directories, files like Session.vim, tags files, and other similar "project files" could be anywhere within the project.
# Expected behavior
With a "project" feature, commands like `:vimgrep`, `:make`, `nvim -S Session.vim`, etc., load files from the project directory.
I was thinking of having 4 options:
- ~~`project: boolean` (default 0 for compatibility) When this option is set to true, neovim sets the 'projectdir' and looks for makefiles, cscope dbs, etc., inside the cwd, then inside 'projectdir'. When set to false, neovim acts as usual. When 'autochdir' is set, neovim changes the directory to the project directory~~
- *(2024 justinmk)* Doesn't seem necessary; instead, always do the right thing based on the presence of `&l:rootdir` or `.nvim.lua` ?
- `projectfunc: string` (default: empty) Decides the project directory by a custom function. If nil, the project directory is determined by searching parent directories for the projectfiles
- *(2024 justinmk)* related: https://github.com/neovim/neovim/pull/31630
- `projectdir: string` (default: empty) Set by projectfunc or projectfiles. The directory of the project
- *(2024 justinmk)* I assume this is buffer-local option? Alternative names: `rootdir`, `workdir`
- `projectfiles: list` (default: ".git,src,etc") Files/directories to search for in parent directories. Searches from left to right eg. search for .git, then src, etc
- *(2024 justinmk)* In `vim.lsp.config` we named this `root_markers`. | enhancement,core | low | Major |
334,386,893 | pytorch | [Feature Request] Add to() method for optimizers/schedulers | I think the optimizers and schedulers could use some extra work.
The main thing I would love is for optimizers and schedulers to have a `to()` method so we can send their parameters to a certain device. This would allow us to save the parameters and reload them at a later stage without any problems.
Right now, the following script crashes with an error that the optimizer expects a `torch.FloatTensor` but got a `torch.cuda.FloatTensor`. I am not quite sure why, as I specifically load the data to the CPU.
```python
# Note: This is not my actual script, but rather a representation of the steps and in what order I perform them
network = ... # Creating a network based of torch.nn.Module
optim = torch.optim.SGD(network.parameters(), ...)
state = torch.load('file_path', lambda storage, loc: storage) # Send all tensors to CPU
network.load_state_dict(state['network'])
optim.load_state_dict(state['optim'])
network.to(torch.device('cuda'))
#optim.to(torch.device('cuda')) # Would this solve my problems?
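# A hedged workaround sketch until an optimizer .to() exists (not part of the
# original repro): move the loaded optimizer state tensors to the GPU by hand
# after load_state_dict():
for state in optim.state.values():
    for k, v in state.items():
        if torch.is_tensor(v):
            state[k] = v.cuda()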
```
The same could be said about schedulers. | todo,module: optimizer,triaged | low | Critical |
334,413,534 | go | cmd/go: document exit codes of a process executing `go test` | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
1.10.3
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/user/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/user/devel/go"
GORACE=""
GOROOT="/home/user/devel/golang1.10"
GOTMPDIR=""
GOTOOLDIR="/home/user/devel/golang1.10/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build141686204=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
I run `go help test` and try to search for keywords "return", "exit", "code" in the output.
### What did you expect to see?
I expect the possible exit codes for running `go test` to be documented
in order to make sensible decisions about running this command in scripts
(and from Makefiles, by a CI system etc).
### What did you see instead?
No mention of the return/exit code.
----
I'd expect the following cases to be documented *if implemented:*
- Successful run—all tests compiled and passed OK—exit code 0.
- Failed run—the compilation of at least one package failed or at least a single test failed—a non-zero exit code.
- Invalid parameters passed and/or one of the listed packages wasn't found—a non-zero exit code,
different from that of the former case.
I'd also think that the cached results should effectively "memorize" the exit code used at the moment the results were generated (and cached).
See also #23716. | Documentation,NeedsInvestigation | low | Critical |
334,504,092 | go | spec: legal conversions from string constant to byte slice are not covered by convertability rules | According to [Conversions](https://golang.org/ref/spec#Conversions) (as of 21.06.2018), the following program should be illegal, but it compiles without errors:
```
package main
func main() {
_ = []byte("s") // no compile time error
}
```
Here we have constant value `"s"` converted to `[]byte`. According to spec, `"s"` must be [Representable](https://golang.org/ref/spec#Representability) by `[]byte`. The only way to explain the absence of error is to suppose that `"s"` is in the set of values determined by `[]byte`.
The assumption above contradicts the [Assignability](https://golang.org/ref/spec#Assignability) rules. Consider the following program that fails to compile:
```
package main
func main() {
var b []byte
b = "s" // compile time error
}
```
So, `"s"` is not assignable to `[]byte`. As a consequence, `"s"` is not representable by `[]byte` and `"s"` is **not** in the set of values determined by `[]byte`. | Documentation,NeedsFix | low | Critical |
334,594,035 | go | cmd/compile: record and use per-function optimization data | This is an open-ended performance idea to explore.
We have per-function compiler-specific export data (see method funcExt in iexport.go). We could do some analysis in package ssa of return values, add that to the export data, and use it on import. This might help with function calls that cannot be inlined.
For example, we could record whether a return value is known to be non-nil. Or a known limited range for a return value. Or a concrete type for a function that returns an interface.
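A hedged illustration of the non-nil case (hypothetical names; both functions are shown in one package for brevity, but the interesting case is a caller in an importing package):

```go
package buf

type Buffer struct{ data []byte }

// NewBuffer never returns nil; that fact could be recorded in export data.
func NewBuffer() *Buffer { return &Buffer{} }

func Use() int {
	b := NewBuffer()
	if b == nil { // provably dead with cross-package return-value facts
		return 0
	}
	return len(b.data)
}
```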
Related: #25862. If we pursue this idea, the mechanism for using info about the return values could be re-used with some annotation for #25862. And possibly also to generalize and simplify the ssa rule "don't nilcheck the return value of newobject".
We could also return things like the ratio of Nodes to instructions, which we might want to use to improve downstream inlining decisions.
One weird thing about doing this is that calling functions from imported packages might optimize further than calling functions from within the same packages. (A similar problem arises for some of the ideas mooted in #17566.) Still probably worth exploring.
cc @randall77 @martisch @dr2chase @cherrymui
| Performance,NeedsInvestigation,compiler/runtime | low | Major |
334,638,239 | pytorch | [JIT] Add peephole to delete unnecessary type_as. | Context: #8687
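To make "unnecessary" concrete, a hedged illustration: once the graph is specialized so that the two tensors are known to have the same type and device, the `type_as` below is a no-op and could be deleted.

```python
import torch

x = torch.randn(3)
y = torch.randn(3)
z = x.type_as(y) + y  # equivalent to x + y when the types already match
```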
This peephole should probably be run on specialized graphs (in an unspecialized graph, it is hard to tell if a type_as is unnecessary). | oncall: jit | low | Minor |
334,653,243 | neovim | test failure: channels_spec: "can use stdio channel with pty" | Noticed this test failure in https://github.com/neovim/neovim/pull/8612 .
```
[1m[35m[ FAILED ][0m[0m [36m...ild/neovim/neovim/test/functional/core/channels_spec.lua[0m @ [36m109[0m: [1mchannels can use stdio channel with pty[0m
...ild/neovim/neovim/test/functional/core/channels_spec.lua:135: Expected objects to be the same.
Passed in:
(table) {
[1] = 'notification'
[2] = 'stdout'
*[3] = {
[1] = 3
*[2] = {
*[1] = 'n' } } }
Expected:
(table) {
[1] = 'notification'
[2] = 'stdout'
*[3] = {
[1] = 3
*[2] = {
*[1] = 'neovan' } } }
``` | test,channels-rpc | low | Critical |