| column | type | stats |
|---|---|---|
| id | int64 | 393k – 2.82B |
| repo | stringclasses | 68 values |
| title | stringlengths | 1 – 936 |
| body | stringlengths | 0 – 256k |
| labels | stringlengths | 2 – 508 |
| priority | stringclasses | 3 values |
| severity | stringclasses | 3 values |
373,243,481
flutter
Embedded UIViews don't show in scene snapshots
c: new feature,platform-ios,engine,a: platform-views,P2,team-ios,triaged-ios
low
Minor
373,245,274
pytorch
Python dataloader Improvements
@goldsborough and I are planning a series of improvements to the dataloader in both the C++ and Python APIs. This issue mainly focuses on the planned changes for the Python API.

- [ ] **Iterator Serialization** (https://github.com/pytorch/pytorch/issues/11813)

  The dataloader iterator is not currently picklable, due to the multiprocessing and multithreading attributes it holds. We should make it picklable as long as the dataset and the Sampler iterator are picklable, e.g., the `__getstate__` could be

  ```py
  def __getstate__(self):
      return (self.loader, self.sampler_iter, self.base_seed)
  ```

  We will also make the iterators of the provided samplers serializable.

- [ ] **Examples of Bulk Loading**

  The current dataloading API seems to suggest that the dataloader is mainly suited for creating batches from random reads of the dataset. However, it supports bulk loading very well. For instance, this [gist](https://gist.github.com/SsnL/205a4cd2e4e631a42cc9d8e879a296dc) implements sharded/chunked bulk loading in just 40 lines. We will improve the documentation to include examples of such cases.

- [ ] **Worker Load Configurations**

  We currently balance the load of workers by keeping the number of tasks per worker balanced. This can be a problem if the workload is uneven across tasks. We should make this behavior optional. Additionally, the maximum number of outstanding tasks (currently ``2 * num_workers``) should also become configurable.

- [ ] **Expose Sampler Iterator** (https://github.com/pytorch/pytorch/issues/7359)

  This would enable dynamic updates to the Sampler iterator state, e.g., dynamic reweighting of the samples. The API may be `loader_iter.get_sampler_iter()`. Since we always prefetch some number of batches, we also need to augment the existing documentation to note that this iterator may be ahead of the latest value returned by the data loader iterator.

  Edit: As @apaszke pointed out below, it is possible to allow for strict consistency by providing an interface that flushes the pipeline and asks the sampler iterator to give new indices based on the updated state. But that design needs further consideration, and we don't plan to pursue it until there is an immediate need.

- [x] **More Flexible `worker_init_fn`**

  Currently, `worker_init_fn` only takes a single `worker_idx` argument, making it very difficult to initialize each worker's dataset object differently. Furthermore, it is impossible for the `dataset.__getitem__` in workers to communicate with the Sampler iterator to fetch more indices or update the iterator state. We plan to augment its input arguments to include a wider range of accessible objects, without breaking backward compatibility, and in a future-proof way. I haven't given much thought to the API design. But as a proof of concept, the API could be a `get_worker_init_fn_arg` argument, which would be called in the **main** process, take a `data_loader_iter_info` "struct" containing fields referencing the `dataset` and the `sampler_iter` (and maybe more), and return a serializable object to be fed in as an additional argument to `worker_init_fn` in **worker** processes. Please let me know if you have suggestions!

- [x] **Iterator-style Dataset**

  We don't necessarily need to have a sampler. By allowing an iterator-style Dataset (rather than a stateless mapping), the workers can do interesting things like backfilling. This is entirely supported as of today, but we will make it nicer.

- [ ] **Bridging C++ and Python DataLoaders**

  We will provide a simple way to convert a C++ DataLoader into a Python one with the same API as the existing Python DataLoader.

# Our Plan to Make These Happen

I (@ssnl) will focus on the first four items, while @goldsborough will implement the fifth.

In addition to these, @goldsborough is also adding a bunch of exciting features to the C++ DataLoader to allow for greater flexibility (e.g., see https://github.com/pytorch/pytorch/pull/12960 and https://github.com/pytorch/pytorch/pull/12999). Let us know your thoughts and suggestions! cc @soumith @fmassa @apaszke
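The serialization item above can be illustrated with a toy iterator. This is only a sketch: `ToyLoaderIter` is a hypothetical stand-in, not PyTorch API; a real dataloader iterator would also tear down and restart worker processes around pickling.

```python
import pickle


class ToyLoaderIter:
    """Toy stand-in for a dataloader iterator: only picklable state."""

    def __init__(self, data, base_seed=0):
        self.data = list(data)  # stands in for the (picklable) dataset
        self.pos = 0            # stands in for sampler-iterator state
        self.base_seed = base_seed

    def __iter__(self):
        return self

    def __next__(self):
        if self.pos >= len(self.data):
            raise StopIteration
        item = self.data[self.pos]
        self.pos += 1
        return item

    # The idea in the proposal: serialize only the picklable pieces
    # (dataset, sampler state, seed), not worker processes/threads.
    def __getstate__(self):
        return (self.data, self.pos, self.base_seed)

    def __setstate__(self, state):
        self.data, self.pos, self.base_seed = state
        # a real iterator would re-spawn workers here


it = ToyLoaderIter([10, 20, 30])
next(it)                                # advance mid-iteration
clone = pickle.loads(pickle.dumps(it))  # round-trip the iterator
```

After the round-trip, `next(clone)` resumes where the original left off, which is the behavior the checklist item asks for.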
module: dataloader,triaged
low
Major
373,280,565
kubernetes
Service objects should be annotated by cloud-providers to track underlying resources.
/kind feature

## What we want to do

We need to add an extra field, `Service.Status.LoadBalancer.Ingress[0].ProviderID`, to the Service object to identify the cloud provider's SLB (load balancer) id, like what we did in the Node object. This would let cloud providers support renaming the SLB.

## Reason

The cloud provider creates a load balancer for every Service whose Type equals LoadBalancer. However, the Service object does not provide a way to record the cloud provider's SLB id. So each cloud provider creates the load balancer with a name generated from the Service object using the code below, which is a meaningless string and also not unique for the cloud service. Users can change the name from the cloud console, which results in a failure in the Kubernetes cloud provider (the cloud provider would fail to find the SLB again after the name change).

We found that if we added an extra field named `ProviderID` under `Service.Status.LoadBalancer`, users would be allowed to rename their LB freely, like what we did in the Node spec. Hope to hear feedback from the community.

The cloud provider auto-generates an SLB name with the following code, which produces a meaningless string:

```go
// TODO(#6812): Use a shorter name that's less likely to be longer than cloud
// providers' name length limits.
func GetLoadBalancerName(service *v1.Service) string {
	// GCE requires that the name of a load balancer starts with a lower case letter.
	ret := "a" + string(service.UID)
	ret = strings.Replace(ret, "-", "", -1)
	// AWS requires that the name of a load balancer is shorter than 32 bytes.
	if len(ret) > 32 {
		ret = ret[:32]
	}
	return ret
}
```
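For illustration, a hypothetical sketch of the proposed status shape. This is not the actual Kubernetes API: only `IP` and `Hostname` mirror the existing `LoadBalancerIngress` fields, `ProviderID` is the proposed addition, and the id string format is made up.

```go
package main

import "fmt"

// Sketch of LoadBalancerIngress with the proposed ProviderID added.
// ProviderID would record the cloud-side load balancer id, which is
// stable even if the user renames the LB in the cloud console.
type LoadBalancerIngress struct {
	IP         string
	Hostname   string
	ProviderID string // proposed: cloud provider's SLB id
}

func main() {
	ing := LoadBalancerIngress{
		IP:         "203.0.113.10",
		ProviderID: "lb-example123", // illustrative id only
	}
	fmt.Println(ing.ProviderID)
}
```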
sig/network,kind/feature,lifecycle/frozen,sig/cloud-provider
medium
Critical
373,284,903
pytorch
[caffe2]How can I export init_net.pb and predict_net.pb files on my own?
I want to convert the RetinaNet in Caffe2/Detectron to an ONNX model, but I need to prepare the init_net.pb and predict_net.pb files first. Following the script below I got these two files, but when I tried to run the conversion an error occurred, and I don't know how to handle it. Can anybody help me with this problem?

Script to generate the .pb files:

```python
def SaveNet(INIT_NET, PREDICT_NET, workspace, model):
    init_net, predict_net = mobile_exporter.Export(
        workspace, model.net, model.params
    )
    with open(PREDICT_NET, 'wb') as f:
        f.write(model.net._net.SerializeToString())
    with open(INIT_NET, 'wb') as f:
        f.write(init_net.SerializeToString())
    print("== saved init_net and predict_net. ==")
```

Script to convert the files to ONNX:

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import onnx
import caffe2.python.onnx.frontend
from caffe2.proto import caffe2_pb2

data_type = onnx.TensorProto.FLOAT
data_shape = (1, 3, 480, 640)
value_info = {'gpu_0/data': (data_type, data_shape)}

predict_net = caffe2_pb2.NetDef()
with open('predict_net.pb', 'rb') as f:
    predict_net.ParseFromString(f.read())

init_net = caffe2_pb2.NetDef()
with open('init_net.pb', 'rb') as f:
    init_net.ParseFromString(f.read())

onnx_model = caffe2.python.onnx.frontend.caffe2_net_to_onnx_model(
    predict_net,
    init_net,
    value_info,
)
onnx.checker.check_model(onnx_model)
```

ERROR:

```
[E net_async_base.cc:422] [enforce fail at context_gpu.cu:343] error == cudaSuccess. 77 vs 0.
Error at: /home/bjtu/Downloads/pytorch/caffe2/core/context_gpu.cu:343: an illegal memory access was encountered
Error from operator:
input: "gpu_0/conv1_3" output: "gpu_0/pool1_1" name: "" type: "MaxPool"
arg { name: "kernel" i: 3 } arg { name: "order" s: "NCHW" } arg { name: "pad" i: 1 }
arg { name: "cudnn_exhaustive_search" i: 0 } arg { name: "stride" i: 2 }
device_option { device_type: 1 device_id: 0 } engine: "CUDNN", op MaxPool
[F context_gpu.h:118] Check failed: error == cudaSuccess an illegal memory access was encountered
```

System information:
- Operating system: Ubuntu 16.04
- Compiler version: gcc 5.4.0
- CUDA version: 8.0
- cuDNN version: 7.1.4
- NVIDIA driver version: 384.130
- GPU models (for all devices if they are not all the same): Quadro M5000
- PYTHONPATH environment variable: ?
- `python --version` output: 2.7
- Anything else that seems relevant: ?
caffe2,triaged
low
Critical
373,395,206
pytorch
[caffe2] How to handle multiple inputs and multiple outputs in the network architecture?
Is it currently possible to handle multiple inputs and multiple outputs in a network architecture in Caffe2?
caffe2
low
Minor
373,408,604
TypeScript
Imports in .d.ts files break wildcard modules declarations
**TypeScript Version:** 3.1.3

**Search Terms:** import d.ts wildcard module

**Code**

main.ts
```ts
/// <reference path="./typings.d.ts" />
import template from './template.html';
```

typings.d.ts
```ts
import * as _angular from 'angular';

declare module '*.html' {
    const content: string;
    export default content;
}
```

package.json
```json
{
  "name": "ts-bug",
  "version": "1.0.0",
  "dependencies": {
    "angular": "^1.7.5"
  }
}
```

Compile with:
```sh
tsc main.ts
```

**Expected behavior:** Compiles without errors.

**Actual behavior:** Compiles with an error:

```
main.ts:3:22 - error TS2307: Cannot find module './template.html'.

3 import template from './template.html';
                       ~~~~~~~~~~~~~~~~~
```

Removing `import * as _angular from 'angular';` fixes the issue.

**Side Note 1**

Regular module declarations work regardless of imports being present.

**Side Note 2**

`import * as _angular from 'angular';` is needed to later do:

```ts
declare global {
    const angular: typeof _angular;
}
```

to work around https://github.com/Microsoft/TypeScript/issues/10178
Bug,Help Wanted,Domain: Related Error Spans
medium
Critical
373,415,040
TypeScript
SourceRoot documentation inconsistent
In the [compiler options documentation](https://www.typescriptlang.org/docs/handbook/compiler-options.html), `sourceRoot` is explained as follows:

> Specifies the location where debugger should locate TypeScript files instead of source locations. Use this flag if the sources will be located at run-time in a different location than that at design-time. The location specified will be embedded in the sourceMap to direct the debugger where the source files will be located.

In the [documentation of the `extends` property](https://www.typescriptlang.org/docs/handbook/tsconfig-json.html), it is explained that all paths are relative to the file they originated in:

> All relative paths found in the configuration file will be resolved relative to the configuration file they originated in.

As I understand it, the path in `sourceRoot` is relative to the compiled files and points to the source files; `sourceRoot` is not relative to the configuration file it originates in. If I use a path relative to the configuration file, the source files are not found.

**Expected behavior:** The documentation should be consistent, or the path should be relative to the file it originated in.
Docs
low
Critical
373,420,491
rust
Misleading/unhelpful error message from borrow checker
The following program, when compiled, gives rise to a misleading and unhelpful error message:

```rust
use std::io;

enum Error<'a> {
    Invalid(&'a [u8]),
    Io(io::Error),
}

struct Thing {
    counter: usize,
}

enum Result<'a> {
    Done,
    Error(Error<'a>),
    Value(usize),
}

impl Thing {
    fn new() -> Self {
        Self { counter: 0 }
    }

    fn process(&mut self) -> Result {
        self.counter += 1;
        Result::Done // for now
    }
}

fn main() {
    let mut thing = Thing::new();
    loop {
        let result = thing.process();
        match result {
            Result::Done => { break; }
            Result::Error(e) => panic!(e),
            Result::Value(v) => { println!("{}", v); }
        }
    }
}
```

The message is:

```
   Compiling playground v0.0.1 (/playground)
error[E0499]: cannot borrow `thing` as mutable more than once at a time
  --> src/lib.rs:32:22
   |
32 |         let result = thing.process();
   |                      ^^^^^ mutable borrow starts here in previous iteration of loop
   |
   = note: borrowed value must be valid for the static lifetime...

error: aborting due to previous error
```

However, the mutable borrow complaint appears remote from the true reason for the error. The key to determining the problem, as analysed by Daniel Keep [here](https://users.rust-lang.org/t/trying-to-understand-lifetimes-in-loops/21356/2), is the exact language of the error message:

> note: borrowed value must be valid for the static lifetime...

At first glance, there are no static borrows anywhere in the code. But what's actually happening is that in `panic!(e)`, `e` is used as the literal panic payload. For that to work, `e` itself must live forever. But, as `Error` contains a borrow, that borrow must live forever. The `'a` lifetime of that `Error` borrow is tied to the `'a` lifetime in `Result`, which is tied to the lifetime of `self`, which is in turn the required lifetime of the `thing` variable. So `thing` has to be `'static`, but it cannot be `'static` because it's on the stack.

It's not immediately obvious how best to improve the error message, but the current message isn't helpful, because it points to the borrow in the loop. Apparently Polonius doesn't give a better message either.

Links which may be useful:
- [The sample in Rust Playground](https://play.rust-lang.org/?version=beta&mode=debug&edition=2018&gist=f9737238a4c8b43305e0f08487aa668a)
- [Discussion on users.rust-lang.org](https://users.rust-lang.org/t/trying-to-understand-lifetimes-in-loops/21356)
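For reference, one way to sidestep the `'static` payload requirement (a workaround sketch for the sample program, not a suggested diagnostic fix) is to format the error into an owned `String` before panicking, so the borrow inside `e` ends at the panic site. The added `#[derive(Debug)]` and the `run` helper are illustrative additions to the original sample.

```rust
use std::io;

#[derive(Debug)] // added so the error can be formatted
enum Error<'a> {
    Invalid(&'a [u8]),
    Io(io::Error),
}

struct Thing {
    counter: usize,
}

enum Result<'a> {
    Done,
    Error(Error<'a>),
    Value(usize),
}

impl Thing {
    fn new() -> Self {
        Self { counter: 0 }
    }

    fn process(&mut self) -> Result {
        self.counter += 1;
        Result::Done // for now
    }
}

fn run() -> usize {
    let mut thing = Thing::new();
    loop {
        match thing.process() {
            Result::Done => break,
            // `panic!("{:?}", e)` builds an owned String payload, so the
            // borrow inside `e` no longer needs to be 'static — this compiles.
            Result::Error(e) => panic!("{:?}", e),
            Result::Value(v) => println!("{}", v),
        }
    }
    thing.counter
}

fn main() {
    println!("{}", run());
}
```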
C-enhancement,A-diagnostics,A-lifetimes,T-compiler,D-confusing
low
Critical
373,426,054
electron
Custom drag and drop type
**Problem**

Right now, to start a file drag you call `WebContents.startDrag({ file: <path>, icon: <path> })`. This will spawn the following DataTransfer object:

```
DataTransfer {
  dropEffect: 'none',
  effectAllowed: 'copyLink',
  files: FileList,
  items: DataTransferItemList,
  types: ['Files', ..]
}
```

with `FileList` containing the proper file(s). The issue is the `effectAllowed`: when set to `'copyLink'`, it will not allow dragging and dropping files from the Electron BrowserWindow to, say, Discord, as they only allow dropped items with the 'move' effect from what I can see.

**Solution**

I propose either changing the default from 'copyLink' to 'copyMove', OR allowing users to pass the drop effect as an argument to `WebContents.startDrag`.

**File changes**

[drag_utils_views.cc (Line 20)](https://github.com/electron/electron/blob/6f91af93433df4e9c0e7fb592e5b2dcb0b1c7888/atom/browser/ui/drag_util_views.cc#L20) - add an argument for `DragFileItems` to accept a drop type (should be an int AFAIK; the [Chromium `DragOperation`s are defined here](https://chromium.googlesource.com/chromium/src/+/66.0.3359.158/ui/base/dragdrop/drag_drop_types.h#17)).

[drag_utils_views.cc (Line 45)](https://github.com/electron/electron/blob/6f91af93433df4e9c0e7fb592e5b2dcb0b1c7888/atom/browser/ui/drag_util_views.cc#L45) - `ui::DragDropTypes::DRAG_COPY | ui::DragDropTypes::DRAG_LINK` changed to a variable passed to the function.

[drag_utils.h (Line 18)](https://github.com/electron/electron/blob/6f91af93433df4e9c0e7fb592e5b2dcb0b1c7888/atom/browser/ui/drag_util.h#L18) - add an argument for `DragFileItems` to accept a drop type.

[drag_util_mac.mm (Line 28)](https://github.com/electron/electron/blob/6f91af93433df4e9c0e7fb592e5b2dcb0b1c7888/atom/browser/ui/drag_util_mac.mm#L28) - may need to add an argument to `DragFileItems` to accept a drop type too, for compatibility with macOS. (I did not see how the drag type could be changed for macOS, if they even use drag types the way Windows does.)

[atom_api_web_contents.cc (Line 1736)](https://github.com/electron/electron/blob/cb9be091aa203f27496b2b8993e10b5c554525c4/atom/browser/api/atom_api_web_contents.cc#L1736) - get the drop type from `item`, e.g.

```
int drop_type;
if (!item.Get("dropType", &drop_type)) {
  // default, as it was previously:
  drop_type = ui::DragDropTypes::DRAG_COPY | ui::DragDropTypes::DRAG_LINK;
}
```

[atom_api_web_contents.cc (Line 1754)](https://github.com/electron/electron/blob/cb9be091aa203f27496b2b8993e10b5c554525c4/atom/browser/api/atom_api_web_contents.cc#L1754) - pass the argument for the drop type.

**Outcome**

Example usage of `WebContents.startDrag` could now be:

```
contents.startDrag({
  file: 'C:/example/file.png',
  icon: 'myico.png',
  dropType: 3, // 3 for copyMove
})
```
enhancement :sparkles:
low
Minor
373,426,130
opencv
VideoCapture will fail after open it many (about 99xx) times
##### System information
- OpenCV => 2.4.13.6
- Operating System / Platform => Windows 7 64-bit, Windows 10
- Compiler => VS2015
- WebCam => USB webcam

##### Detailed description
I found that opening the VideoCapture fails when I do it many times. It always fails around the 99xx-th open. The test code is quite simple, as shown below. I show the image once the loop count exceeds 9900, since it makes it easier to tell whether the webcam is still alive. Is it a known issue, or did I do something wrong? Thanks for any comments.

##### Steps to reproduce
```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

using namespace cv;

int main()
{
    Mat image;
    VideoCapture cap;
    for (int i = 0; i < 11000; i++)
    {
        cap.open(0);
        if (cap.isOpened())
        {
            printf("open ok at:%d\n", i);
            if (i > 9900) // read and show image after loop 9900
            {
                cap >> image;
                imshow("Webcam live", image);
                waitKey(100);
            }
        }
        else
        {
            printf("open fail at:%d\n", i);
        }
    }
    return 0;
}
```
category: videoio(camera)
low
Critical
373,495,294
go
wasm: browser compatibility policy
I would like to propose a discussion on how we want to handle browser compatibility. There are several features being worked on: https://github.com/WebAssembly/proposals When do we want to use them? Do we want to offer backwards-compatibility? Here are my own thoughts, open for debate: (1) Go's support for WebAssembly is experimental and it should stay experimental at least until WebAssembly itself is fully standardized (right now the main spec is still in "Phase 2 - Proposed Spec Text Available"). (2) The WebAssembly project itself has no clear answer yet on how feature tests and backwards compatibility should work, see https://webassembly.org/docs/feature-test/ (3) Modern browsers (except Safari) have auto-update mechanisms, so staying with the latest version shouldn't be an issue for most users. Due to (1), (2) and (3) I would opt for no backwards compatibility as long as Go's support for WebAssembly is experimental. New WebAssembly features can be adopted as soon as they are marked as "standardized" and are supported by the latest stable Chrome, Firefox, Edge and Safari (on desktop).
DevExp,arch-wasm
high
Critical
373,554,294
pytorch
Generalized Data Class
## Generalized Data Class Feature

I suggest adding an abstract `Data` class to the `torch.utils.data` module. Its goal is to group a couple of data batching behaviors for the user to define: batch together arbitrary `Data` objects, `chunk`/`split` arbitrary batched data objects across the batch dimension, move arbitrary data to `device`, `shared_memory`, `pin_memory`...

## Motivation

Deep learning is becoming more and more flexible, and not all data is in the form `(X, y)`; that is why the current [`default_collate`](https://github.com/pytorch/pytorch/blob/ca03c10cefa1e126eab1446d490f9314bd236c1b/torch/utils/data/dataloader.py#L196) of the `DataLoader` already supports `tuple` and `dict`. Nonetheless:

- A user may want to organize their data in an object with a stronger type than `dict` or `tuple`;
- Not all batching is done by applying `torch.stack` to individual elements.

The proposed class is an easy-to-understand abstraction that could be passed from a `Dataset` to a `DataLoader`. In addition to `DataLoader`, this could also be used to define how to split data in `DataParallel`.

## Pitch

An incomplete implementation could look like this:

```python
from abc import ABC, abstractmethod
from typing import Callable, List


class Data(ABC):
    @abstractmethod
    def apply(self, func: Callable) -> None:
        """Apply a function to all tensors in the data class.

        Could be passed functions like `lambda t: t.pin_memory()`,
        `lambda t: t.to(device)`.
        """

    @classmethod
    @abstractmethod
    def batch(cls, data_points: List["Data"]) -> "Data":
        """Batch together multiple data objects."""

    @abstractmethod
    def split(self, chunks) -> List["Data"]:
        """Split the data into chunks across the batch dimension."""
```

Here I've represented individual data points and batched data with the same class, but this need not be the case.

And in `default_collate`, we would have something like:

```python
if isinstance(batch[0], Data):
    # apply logic using `batch[0].__class__.batch` and `batch.apply`
```

## Alternatives

It is already possible to get some of the desired behavior using a custom `collate_fn`. However, if the user returns a class instead of a `tuple` or `dict`, `pin_memory` will [not be applied](https://github.com/pytorch/pytorch/blob/ca03c10cefa1e126eab1446d490f9314bd236c1b/torch/utils/data/dataloader.py#L237). Furthermore, the user also needs to be aware of the `_shared_memory` [variable](https://github.com/pytorch/pytorch/blob/ca03c10cefa1e126eab1446d490f9314bd236c1b/torch/utils/data/dataloader.py#L203) and reproduce its handling in their own `collate_fn` to leverage it.

## Additional context

A practical example can be found in [PyTorch Geometric](https://rusty1s.github.io/pytorch_geometric/build/html/notes/introduction.html), where the author went to the trouble of redefining many `DataLoader` behaviors.

cc @VitalyFedyunin @ejguan @SsnL
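For illustration, a minimal concrete subclass of the pitched interface might look like this. This is only a sketch: plain Python lists stand in for tensors, and `PairData` is a hypothetical name, not part of the proposal.

```python
from abc import ABC, abstractmethod
from typing import Callable, List


class Data(ABC):
    @abstractmethod
    def apply(self, func: Callable) -> None: ...

    @classmethod
    @abstractmethod
    def batch(cls, data_points: List["Data"]) -> "Data": ...

    @abstractmethod
    def split(self, chunks: int) -> List["Data"]: ...


class PairData(Data):
    """Toy (x, y) container; lists stand in for tensors."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def apply(self, func):
        self.x, self.y = func(self.x), func(self.y)

    @classmethod
    def batch(cls, data_points):
        # "stack": gather the per-point fields into batch lists
        return cls([p.x for p in data_points], [p.y for p in data_points])

    def split(self, chunks):
        n = len(self.x)
        size = (n + chunks - 1) // chunks  # ceil-divide into chunks
        return [PairData(self.x[i:i + size], self.y[i:i + size])
                for i in range(0, n, size)]


batched = PairData.batch([PairData(1, 10), PairData(2, 20), PairData(3, 30)])
halves = batched.split(2)
```

A `DataLoader` would only need to call `batch`/`apply`, and `DataParallel` could call `split`, without knowing anything about the concrete type.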
module: dataloader,triaged,module: data
low
Minor
373,588,573
rust
`Eq + Hash` rule is broken when mixing `OsStr` and `Path`
This program surprisingly fails:

```rust
use std::{
    ffi::OsStr,
    path::Path,
    collections::hash_map::DefaultHasher,
    hash::{Hash, Hasher},
};

fn hash<T: ?Sized + Hash>(v: &T) -> u64 {
    let mut sh = DefaultHasher::new();
    v.hash(&mut sh);
    sh.finish()
}

fn main() {
    let s = OsStr::new("/dev/null");
    let p = Path::new("/dev/null");
    assert_eq!(s, p);
    assert!(
        hash(s) == hash(p),
        "Hashes differ for values that compare as equal"
    );
}
```

The problem is that the provided impls of `PartialEq`, while convenient, cross the data domains and break the intuition on matching behavior of `Hash` and `==`. This does not affect `HashMap` due to the lack of a cross-type `Borrow` impl, but the inconsistency can bite the unwary.
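One way for callers to stay consistent in the meantime (a workaround sketch, not a proposed library fix) is to compare and hash within a single domain by converting explicitly, e.g. via `Path::as_os_str`:

```rust
use std::{
    ffi::OsStr,
    path::Path,
    collections::hash_map::DefaultHasher,
    hash::{Hash, Hasher},
};

fn hash<T: ?Sized + Hash>(v: &T) -> u64 {
    let mut sh = DefaultHasher::new();
    v.hash(&mut sh);
    sh.finish()
}

fn main() {
    let s = OsStr::new("/dev/null");
    let p = Path::new("/dev/null");
    // Hash and compare both values as OsStr: `==` and `Hash` now agree.
    assert_eq!(p.as_os_str(), s);
    assert_eq!(hash(p.as_os_str()), hash(s));
}
```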
C-bug,T-libs
low
Critical
373,590,751
go
x/build/cmd/gomote: 502 Bad Gateway error
I often encounter this 502 Bad Gateway error while actively working with a buildlet created using gomote. I guess this is because I was holding the buildlet too long (~30min) and there is a hidden time limit on each buildlet. If so, we need a better error message than this: <pre> $ gomote ls user-hakim-windows-amd64-2016-0 Error running ls: 502 Bad Gateway: </pre> And, `gomote list` still shows the lease is not yet expired. That's misleading. <pre> $ gomote list user-hakim-windows-amd64-2016-0 windows-amd64-2016 host-windows-amd64-2016 expires in 29m49.611152102s </pre>
Builders,NeedsInvestigation
medium
Critical
373,592,066
go
proposal: spec: non-reference channel types
**Background**

I've started using 1-buffered channels pretty heavily as "selectable mutexes" (see also https://github.com/golang/go/issues/16620). Unbuffered channels are also quite common in Go APIs (for example, as `cancel` channels in a `context.Context`; see also https://github.com/golang/go/issues/28342#issuecomment-432684639).

I've noticed several issues that emerge with the existing `chan` types:

1. A channel's buffering semantics are often semantically important: for example, a 1-buffered channel can be used as a mutex, but a 2-buffered or unbuffered channel cannot. The need to document and enforce these important properties leads to a proliferation of comments like `// 1-buffered and never closed`.

2. Struct types that require non-`nil` channels cannot have meaningful zero-values unless they also include a `sync.Once` or similar to guard channel initialization (which comes with losses in efficiency and clarity). Writing `New` functions for such types is a source of tedium and boilerplate.

3. Currently, channels are always references to some other underlying object, typically allocated and stored on the heap. That makes them somewhat less efficient than a `sync.Mutex`, which can be allocated inline within a `struct` whose fields it guards.

**Proposal**

I propose a new family of channel types, with a relationship to the existing channel types analogous to the relationship between arrays and slices: the existing `chan` types are conceptually references to underlying instances of the proposed types.

***Syntax***

```ebnf
StaticChannelType    = "chan" "(" Expression [ "," "close" ] ")" ElementType .
CloseOnlyChannelType = "chan" "_" .
```

Parsing this syntax has one caveat: if we read a selector expression in parentheses after a `chan` keyword, we don't know whether it is an `Expression` or the `ElementType` until we see the next token after the closing parenthesis. I'm open to alternatives.

***Semantics***

A `StaticChannelType` represents a non-reference channel with a buffer size indicated by the `Expression`, which must be an integer [constant expression](https://golang.org/ref/spec#Constant_expressions). A `chan(N, close) T` can be closed, while a `chan(N) T` (without the `close` token) cannot. The buffer size is equivalent to the size passed to `make`: a call to `make(chan T, N)` conceptually returns a reference to an underlying channel of type `chan(N, close) T`.

A `CloseOnlyChannelType` is a channel with element type `struct{}` that does not support send operations.

An addressable `chan(N[, close]) T` can be used with the send and receive operators, `range` loop, `select` statement, and the `close`, `len`, and `cap` builtins just like any other channel type (just as `[N]T` supports the same index expressions as `[]T`). A send expression on a `chan _` is a compile-time error. (Compare https://github.com/golang/go/issues/21069.) A `close` call on a `chan(N) T` (declared without the `close` token) is a compile-time error.

`chan(N) T` is *not* assignable to `chan T`, just as `[N]T` is not assignable to `[]T`. However, `*chan(N) T` (a pointer to a static channel) can be _[converted](https://golang.org/ref/spec#Conversions) to_ `chan T` or either of its directional variants, and `chan _` can be converted to `chan struct{}` or either of its directional variants. (Making `*chan(N) T` _assignable to_ `chan T` also seems useful and benign, but I haven't thought through the full implications.)

A `close` on a `chan T` that refers to a non-closeable `chan(N) T` panics, much as a `close` on an already-closed `chan T` panics today. (One already cannot, in general, expect to `close` an arbitrary channel.) A send on a `chan _` panics, much as a send on a closed channel panics today.

***Implementation***

A `chan(N) T` is potentially much more compact than a `chan T`, since we can know ahead of time that some of its fields are not needed. (A `chan _` could potentially be as small as a single atomic pointer, storing the head of a wait-queue or a "closed" sentinel.)

The difficult aspect of a more optimized implementation is the fact that a `*chan(N) T` can be converted to `chan T`: we would potentially need to make `chan T` a bit larger or more expensive to accommodate the fact that it might refer to a stripped-down statically-sized instance, or else apply something akin to escape analysis to decide whether a given `chan(N) T` should use a compact representation or a full [`hchan`](https://github.com/golang/go/blob/5a7cfbc0117bce314c3f079ece459173b9efc854/src/runtime/chan.go#L32-L51).

However, we could already do something like that optimization today: we could, for example, detect that all of the functions that return a non-nil `*S` for some struct type `S` also populate it with channels of a particular size, and that those channels do not escape the package, and replace them with a more optimized implementation and/or fuse the allocations of the object and its channels. It probably wouldn't be quite as straightforward, but it could be done.

I want to emphasize that the point of this proposal is to address the _usability_ issues of reference channels: the ad-hoc comments about buffering invariants, lack of meaningful zero-values, and `make` boilerplate. Even if non-reference channels do not prove to be fertile ground for optimization, I think they would significantly improve the clarity of the code.

***Examples***

```go
package context

type cancelCtx struct {
	Context

	mu       sync.Mutex            // protects following fields
	done     chan _
	children map[canceler]struct{} // set to nil by the first cancel call
	err      error                 // set to non-nil by the first cancel call
}

func (c *cancelCtx) Done() <-chan struct{} {
	return (<-chan struct{})(&c.done)
}
```

```go
package deque

type state struct {
	front, back []interface{}
}

// A Deque is a synchronized double-ended queue.
// The zero Deque is empty and ready to use.
type Deque struct {
	contents chan(1) state
	empty    chan(1) bool
}

func (d *Deque) Push(item interface{}) {
	var st state
	select {
	case d.empty <- true:
	case st = <-d.contents:
	}
	st.back = append(st.back, item)
	d.contents <- st
}

[…]
```

----

CC: @ianlancetaylor @griesemer @josharian
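For context, the "selectable mutex" pattern from the Background section can be written with today's reference channels. This is a sketch in current Go; the `Counter` type is illustrative, and the "1-buffered, never closed" invariant lives only in a comment, which is exactly the kind of thing the proposal would move into the type.

```go
package main

import "fmt"

// Counter guards its state with a 1-buffered channel used as a mutex.
// Invariant (comment-only today): mu is 1-buffered and never closed.
type Counter struct {
	mu chan struct{} // holds a token when unlocked
	n  int
}

// NewCounter is the make-and-deposit boilerplate the proposal
// wants to eliminate via meaningful zero values.
func NewCounter() *Counter {
	c := &Counter{mu: make(chan struct{}, 1)}
	c.mu <- struct{}{} // deposit the token
	return c
}

func (c *Counter) Incr() {
	<-c.mu // lock: take the token (usable as a case in a select)
	c.n++
	c.mu <- struct{}{} // unlock: return the token
}

func main() {
	c := NewCounter()
	for i := 0; i < 3; i++ {
		c.Incr()
	}
	fmt.Println(c.n) // 3
}
```

Unlike a `sync.Mutex`, the lock acquisition here can participate in a `select`, which is why the buffering invariant matters.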
LanguageChange,Proposal,LanguageChangeReview
medium
Critical
373,597,506
pytorch
Backward pass over torch.nn.functional.pad is extremely slow with half tensors
## 🐛 Bug

The backward pass over `torch.nn.functional.pad` is more than 300x slower with half tensors compared to fp32 tensors for the example below.

## To Reproduce

```
import torch
import time

def exec(x):
    y = torch.nn.functional.pad(x, pad=(47, 48, 47, 48), mode="replicate")
    torch.cuda.synchronize()
    tic = time.time()
    y.sum().backward()
    torch.cuda.synchronize()
    return time.time() - tic

x = torch.rand(4, 2048, 1, 1, requires_grad=True).cuda()
print("fp32: {}".format(exec(x)))
print("fp16: {}".format(exec(x.half())))
```

My output:

```
fp32: 0.14527177810668945
fp16: 50.3135507106781
```

## Expected behavior

At least as fast as fp32.

## Environment

PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.2.148
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-16ubuntu3) 7.3.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.2.148
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB
Nvidia driver version: 396.37
cuDNN version: Probably one of the following:
/usr/local/cuda-9.2/lib64/libcudnn.so.7.2.1
/usr/local/cuda-9.2/lib64/libcudnn_static.a

Versions of relevant libraries:
[pip] Could not collect
[conda] cuda92 1.0 0 pytorch
[conda] pytorch 0.4.1 py37_cuda9.2.148_cudnn7.1.4_1 [cuda92] pytorch
[conda] torchfile 0.1.0 <pip>
[conda] torchnet 0.0.4 <pip>
[conda] torchvision 0.2.1 py37_1 pytorch

cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @ngimel @VitalyFedyunin
module: performance,module: cuda,triaged,module: half
low
Critical
373,608,371
angular
Multiple Methods with Same HostListener
## I'm submitting a... <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report [ ] Performance issue [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior Attaching HostListeners with the same event to multiple methods within a component results in only the latest one being fired. HostListener methods higher up in the file are never fired. ## Expected behavior Intuitively, multiple HostListener decorators would register multiple listeners for the same event, and fire all of their associated methods. ## Minimal reproduction of the problem with instructions Here is a stackblitz with the problem demonstrated. Press a key anywhere, and the console will only show listener2 being fired, even though there are two listener methods. https://stackblitz.com/edit/angular-hpxlzb ## What is the motivation / use case for changing the behavior? My application has multiple methods that need to listen to the same event for different reasons. I can add another function that delegates for now, but it's not intuitive that this use-case doesn't work. ## Environment <pre><code> Angular version: 7.0.0 Browser: - [x] Chrome (desktop) version 69.0.3497.100 - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: v8.11.3 - Platform: Mac </code></pre>
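The delegation workaround mentioned above can be sketched framework-free (names like `KeydownDelegate` are hypothetical, not Angular API): a single entry point — the one method that would carry the `@HostListener` decorator — fans the event out to every interested method, so all of them fire rather than only the last one.

```typescript
type KeyHandler = (key: string) => void;

class KeydownDelegate {
  private handlers: KeyHandler[] = [];

  // Each method that wants the event registers itself here.
  register(handler: KeyHandler): void {
    this.handlers.push(handler);
  }

  // In an Angular component, only THIS method would be decorated with
  // @HostListener("document:keydown", ["$event.key"]).
  onKeydown(key: string): void {
    for (const handler of this.handlers) {
      handler(key); // every registered method fires, not just the latest
    }
  }
}
```

The delegate is just the "another function that delegates" workaround from the issue, made reusable.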
type: bug/fix,freq2: medium,area: core,core: host and host bindings,P4
low
Critical
373,613,258
create-react-app
Add versioning to docs
Adding a placeholder for this so we don't forget about it. Original discussion here: https://github.com/facebook/create-react-app/issues/5238#issuecomment-432428041
tag: documentation
low
Minor
373,616,895
react
Receive previous state in getDerivedStateFromError
<!-- Note: if the issue is about documentation or the website, please file it at: https://github.com/reactjs/reactjs.org/issues/new --> **Do you want to request a *feature* or report a *bug*?** This is a feature request. **What is the current behavior?** The `getDerivedStateFromError` hook receives `error` and doesn't have access to `state` or the component instance. This limits the possible ways in which it could be used and requires additionally using other hooks to derive the state: ```js class App extends Component { state = {} static getDerivedStateFromError(error) { return { error } } static getDerivedStateFromProps(props, state) { // do we really need this? // the state is derived from error, not props if (state.error) return remapStateToPreferredStructure(state); } render() { /* ... */ } } ``` **What is the expected behavior?** `getDerivedStateFromError` is expected to receive the previous state and have the ``` getDerivedStateFromError(error, state) ``` signature to be consistent with the related static hook, `getDerivedStateFromProps`. This `getDerivedStateFromError` signature is backward compatible with the existing one (React 16.6.0). **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** React 16.6.0
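A framework-free sketch of the proposed two-argument signature (this is the proposal, not the React 16.6.0 API) shows why it stays backward compatible: the handler can remap existing state in one step, while existing one-argument implementations simply ignore the extra parameter, as JavaScript functions always do.

```javascript
// Proposed signature: merge error info into the previous state directly,
// with no follow-up getDerivedStateFromProps pass needed.
function getDerivedStateFromError(error, state) {
  return { ...state, error, hasError: true };
}

// Existing one-argument implementations keep working unchanged, because
// extra arguments passed to a JS function are simply ignored.
function legacyGetDerivedStateFromError(error) {
  return { error };
}
```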
Type: Feature Request
medium
Critical
373,618,217
kubernetes
RFE: Remove Provider Specific details from e2e tests
**Is this a FEATURE REQUEST?**: The e2e tests have details baked in that have existed since the early days around providers. Recent refactoring has exposed brittleness in how we invoke the tests and TBH provider specific behavioral tests that depend on non-standard APIs should be moved outside of tree. /kind feature /sig testing /cc @neolit123 @pohly @BenTheElder @kubernetes/cncf-conformance-wg @kubernetes/sig-testing @kubernetes/k8s-infra-team
kind/feature,sig/testing,lifecycle/frozen,sig/cloud-provider,area/e2e-test-framework,needs-triage
medium
Critical
373,653,023
react
onMouseEnter does not fire on an underlaying element if an element above is removed
**Do you want to request a *feature* or report a *bug*?** Bug - I did do some searching around the issues to see if there was a similar/dupe, but I could not find one. **What is the current behavior?** With 2 elements overlaying on top of each other, if the upper element gets removed while the cursor is over both elements, mouse enter never fires on the element below. I compared this to native browser events and the issue does not appear to persist there (native browser events appear to fire mouse enter for the underlying div when the overlaying div gets removed). **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem.** [CodeSandbox Example](https://codesandbox.io/s/wonqx3lo7) I provided a top level boolean constant to switch between using react's synthetic events and the native browser events. In the console I keep track of state updates as console logs. The simple way to test - open the console, mouse over the upper div in a position that is also on top of the lower div, click to remove the upper div, the lower div SHOULD fire mouse enter. It does not with synthetic events, but it does with browser events. **What is the expected behavior?** Expected behavior for me would be if react would fire mouse enter on the underlaying div when the upper div is removed. **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** ``` "dependencies": { "react": "16.5.2", "react-dom": "16.5.2", }, ``` I have not had a chance to test previous versions.
Component: DOM,Type: Needs Investigation
low
Critical
373,685,753
go
cmd/go, cmd/link: use Windows response files for gcc/g++ to avoid arg length limits
### What version of Go are you using (`go version`)? 1.11.1 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? windows 386/amd64 ### What did you do? Tried to compile a cgo program. ```go package main import ( "fmt" "io/ioutil" "os" "os/exec" "path/filepath" ) func main() { testCgo("c") testCgo("cpp") } func testCgo(ending string) { tmpdir, _ := ioutil.TempDir("", "cgo_resp_issue") defer os.RemoveAll(tmpdir) ioutil.WriteFile(filepath.Join(tmpdir, "gofile.go"), []byte(fmt.Sprint("package main\nimport \"C\"\nfunc main(){}")), 0644) for i := 0; i < 1000; i++ { ioutil.WriteFile(filepath.Join(tmpdir, fmt.Sprintf("cfile_%v."+ending, i)), []byte(fmt.Sprintf("void func_%v(){}", i)), 0644) } cmd := exec.Command("go", "build", "-v", "-x", "-ldflags=all=\"-extldflags=-v\"") cmd.Dir = tmpdir if out, err := cmd.CombinedOutput(); err != nil { println("failed:", ending, err.Error(), string(out)) } } ``` ### What did you expect to see? a successful compilation ### What did you see instead? `gcc: error: CreateProcess: No such file or directory` and/or `g++: error: CreateProcess: No such file or directory` ### Possible solution? This patch works for my needs. It's mostly a copy of: https://go-review.googlesource.com/c/go/+/110395 But I also needed to escape the backslashes to make it work. ``` [patch omitted] ``` ### More info The issue is related to: https://github.com/golang/go/issues/18468 and https://github.com/golang/go/blob/master/src/cmd/go/internal/work/exec.go#L2851 The problem is caused by the 32K character limit on windows https://docs.microsoft.com/en-us/windows/desktop/api/processthreadsapi/nf-processthreadsapi-createprocessasuserw >The maximum length of this string is 32K characters. --- Also, once there is support for msvc on windows (https://github.com/golang/go/issues/20982) it might be good idea to whitelist `cl` as well.
help wanted,NeedsFix,compiler/runtime
low
Critical
373,701,900
rust
Improve diagnostic when writing signature `for<'_> Foo<'_>`
When encountering the following case ``` error[E0637]: `'_` cannot be used here --> $DIR/underscore-lifetime-binders.rs:24:21 | LL | fn meh() -> Box<for<'_> Meh<'_>> | ^^ `'_` is a reserved lifetime name error[E0106]: missing lifetime specifier --> $DIR/underscore-lifetime-binders.rs:24:29 | LL | fn meh() -> Box<for<'_> Meh<'_>> | ^^ help: consider giving it a 'static lifetime: `'static` | = help: this function's return type contains a borrowed value, but there is no value for it to be borrowed from ``` emit a more targeted diagnostic, along the lines of ``` error[E0637]: `'_` cannot be used here --> $DIR/underscore-lifetime-binders.rs:24:21 | LL | fn meh() -> Box<for<'_> Meh<'_>> | ^^ -- `'_` can also not be used here | | | `'_` is a reserved lifetime name help: give it a name instead | LL | fn meh<'a>() -> Box<for<'a> Meh<'a>> | ^^^^ ^^ ^^ ``` Without emitting the E0106.
C-enhancement,A-diagnostics,A-lifetimes,P-low,T-compiler,A-suggestion-diagnostics,D-papercut
low
Critical
373,705,779
svelte
clientWidth on video tag doesn't always return proper value, doesn't change `onupdate` for percentages
If I set a video to have `height: 100%` and `width: auto` so that it fills its parent div fully and auto-expands its width, the value I get back from binding `clientWidth` to it is 1.125× the actual value. You can reproduce this with the following code in the REPL. `clientWidth` should be `800` but it comes out to `900`. If you set the CSS dimensions to a fixed value, they return correctly ```html <div style="width:800px;height:450px;background-color:#f0c;"> <video controls src="https://archive.org/download/ElephantsDream/ed_1024_512kb.mp4" bind:clientWidth></video> </div> <style> video { width: auto; height: 100%; } </style> <script> export default { onupdate() { const { clientWidth } = this.get(); alert(clientWidth) // Should be 800 but comes to 900 } } </script> ``` Also, viewable in [this REPL link](https://svelte.technology/repl?version=2.14.3&gist=51070954364b06e769a683c7ca4e4326), when width and height are `100%`, `onupdate` does not fire on resize.
stale-bot,temp-stale
low
Minor
373,856,084
angular
Introduce `scrollPositionRestoration` for `NavigationExtras`
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior In `angular 6` You introduced great new feature `scrollPositionRestoration` in router. When it is enabled it **always** scroll to top on navigation change. But sometimes we don't need it (example with nested router-outlets or query change for search feature). How about introducing new `NavigationExtras` param: `scrollPositionRestoration`. It will enable or disable default behaviour only for that navigation change.
feature,area: router,feature: under consideration
high
Critical
373,872,342
puppeteer
Support extensions with createIncognitoBrowserContext
### Steps to reproduce **Tell us about your environment:** * Puppeteer version: ^1.6.2 * Platform / OS version: OSX * Node.js version: v8.11.3 **What steps will reproduce the problem?** _Please include code that reproduces the issue._ 1. Use `createIncognitoBrowserContext` 2. Try to enable an extension using `--load-extension` and `--disable-extensions-except` **What is the expected result?** I expect the extension to work in incognito mode. This extension works well with `Allow in incognito` switched on in Google Chrome, outside of Puppeteer. **What happens instead?** Got error `net::ERR_BLOCKED_BY_CLIENT at chrome-extension://nkbihfbeogaeaoehlefnkodbefgpgknn/popup.html`
feature,upstream,chromium
low
Critical
373,925,073
vscode
MSIX installer
After 2 years of conversation in issue #10759 (Release in the Windows Store) I think this is the right time to reevaluate the request. What we originally wanted is all the goodness of UWP lifecycle management (clean install/uninstall and updates received in the background without having to run the app), but now it's possible outside of the Store with MSIX packaging without having to comply with some of the stricter rules of the Store and resource/capability management of UWP apps. So the proposal is simple: **Please make an MSIX installer for VS Code** even if it's not distributed through the Store (for numerous technical reasons). As a first step, Windows 10 users would benefit from it, later, as MSIX is designed to be cross-platform, it can evolve into the only installer format on all platforms (Linux, macOS, Windows 7+) and as the Store evolves, it can be a good foundation to potentially publish VS Code to the Store.
feature-request,install-update,windows
high
Critical
373,949,758
pytorch
warning: attribute namespace "clang" is unrecognized; High Sierra / Fedora compilation with clang results in spurious clang errors in nvcc
## 🚀 Feature Building for macOS generates a lot of new warnings related to the C10_NODISCARD macro (part of the change to make TensorOptions immutable, #12630), e.g. pytorch/aten/src/ATen/core/TensorOptions.h(120): warning: attribute namespace "clang" is unrecognized pytorch/aten/src/ATen/core/TensorOptions.h(129): warning: attribute namespace "clang" is unrecognized .. pytorch/aten/src/ATen/core/TensorOptions.h(173): warning: attribute namespace "clang" is unrecognized Is there a way to turn these off, or is there an error I'm making in my setup? Thanks
module: build,module: cuda,triaged,module: build warnings
medium
Critical
373,980,976
pytorch
[caffe2] incompatible constructor arguments
Hi, I'm trying to run this Caffe2 example: https://github.com/caffe2/tutorials/blob/master/Loading_Pretrained_Models.ipynb and I get this error in the shell: File "test1.py", line 128, in <module> p = workspace.Predictor(init_net, predict_net) File "/home/celestial/.virtualenvs/env4/lib/python3.6/site-packages/caffe2/python/workspace.py", line 159, in Predictor return C.Predictor(StringifyProto(init_net), StringifyProto(predict_net)) TypeError: __init__(): incompatible constructor arguments. The following argument types are supported: 1. caffe2.python.caffe2_pybind11_state.Predictor(arg0: bytes, arg1: bytes)
caffe2
low
Critical
374,017,705
go
all: ensure that tests do not write to the current directory
In #27957, @hyangah noticed that the tests for `compress/bzip2` fail when `GOROOT` is not writable, and those tests are run whenever we run `go test all` in module mode (which is intended to be a useful default). As noted in #28386, tests should not assume that they can write to the current directory. We should ensure that none of the tests in the standard library make that mistake.
Testing,help wanted,NeedsFix
high
Critical
374,024,708
go
x/build: add a "small" builder with limited resources
Starting a new issue from #27739. We tend to see issues building Go and running the tests if someone's machine has limited resources. For example, in #26867 I reported how `go test net` OOM'd with a few gigabytes of available memory. We already have special builders like `linux-amd64-noopt`, so I propose adding a `linux-amd64-small`. Alternatively, we could add these qualities to an existing builder like `linux-amd64-noopt`, like @bradfitz suggested. Some ideas to start with: * limiting the total memory to 4GB * limiting the total disk size to 10GB * limiting the CPU power, e.g. to dual-core 1GHz With time, if the builder is stable, we could lower those numbers and add more restrictions, such as: * lowering the maximum number of open file descriptors * lowering the size of /tmp * lowering the maximum number of processes created by the user /cc @dmitshur @bradfitz @andybons
Builders,NeedsFix,FeatureRequest,new-builder
low
Minor
374,027,546
godot
Missing license details for mono module dependencies
Right now, the mono module has two dependencies: Mono itself, and DotNet.Glob. The license details for these dependencies is not being included in the generated header `core/license.gen.h`. This information is displayed in the editor About dialog and returned from `Engine.get_license_info()`. The `license.gen.h` header is generated with the information from `COPYRIGHT.txt` (also `LICENSE.txt` which is Godot's license). My idea is to make the build script also search for `modules/*/COPYRIGHT.txt` files when generating this header. It should make sure the modules being included are enabled.
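A sketch of the proposed build-script change (hypothetical helper name, not Godot's actual SCons code): glob `modules/*/COPYRIGHT.txt` and keep only the files belonging to enabled modules, so their license details can be folded into `core/license.gen.h` alongside the top-level `COPYRIGHT.txt`.

```python
import glob
import os

def collect_module_copyrights(root, enabled_modules):
    """Return COPYRIGHT.txt paths for every enabled module under root."""
    files = []
    for path in sorted(glob.glob(os.path.join(root, "modules", "*", "COPYRIGHT.txt"))):
        module = os.path.basename(os.path.dirname(path))
        if module in enabled_modules:  # skip modules disabled at build time
            files.append(path)
    return files
```

The header generator would append these to the top-level `COPYRIGHT.txt` before emitting `license.gen.h`.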
enhancement,topic:dotnet
low
Minor
374,041,503
go
text/template: allow callers to override IsTrue
The `IsTrue`, which is used in templates to determine if `if` and `when` etc. conditionals should be invoked makes sense in most situations. Some examples: ```go package main import ( "fmt" "text/template" "time" ) func main() { type s struct{} printIsTrue(1) // true printIsTrue(0) // false printIsTrue(-1) // true printIsTrue(s{}) // true printIsTrue(&s{}) // true printIsTrue((*s)(nil)) // false printIsTrue(nil) // false printIsTrue("") // false printIsTrue("foo") // true printIsTrue(time.Now()) // true printIsTrue(time.Time{}) // true } func printIsTrue(in interface{}) { b, _ := template.IsTrue(in) fmt.Println(b) } ``` My main challenge with the above is that `Struct values are always true`-- even the zero `time.Time` value above. It would be useful if the above method could check some additional `truther` interface: ```bash type truther interface { IsTrue() bool } ``` If the above is considered a breaking change, an alternative suggestion would be to make the `template.IsTrue` function settable.
NeedsDecision,FeatureRequest
medium
Critical
374,052,038
flutter
[Suggestion] Installing the Android environment via CLI flutter
As we approach Release 1.0, I would like to know whether there is any plan to facilitate the complete installation of the development environment with the `flutter` CLI. The Flutter environment depends on Android Studio, which makes getting started time-consuming and a bit tricky. I took a look at other cross-platform development frameworks and found their initial install experience excellent: with just 3 lines you already have the entire Android environment installed and a new application running on your smartphone or emulator. I would like to make 3 suggestions: 1. Installation of the `flutter` command line via `npm`: I noticed that there is already a third-party project for this, but it does not appear in the official Flutter documentation: https://www.npmjs.com/package/flutter-cli 2. Install the Android SDK during `flutter create` if it does not exist: it's very interesting how they solved this on Windows, using Chocolatey to install the Android SDK and JDK; this way everything is automated and it is incredibly easy to start on the platform. 3. Automatic installation of extensions in IDEs available on the user's machine (I do not know if this is possible).
c: new feature,tool,a: first hour,P3,team-tool,triaged-tool
low
Major
374,055,868
pytorch
arm64 port for PyTorch, libtorch
## 🚀 Feature Port the PyTorch and libtorch code to arm64, and provide easy-to-use binary downloads for users of those systems. ## Motivation As requested in https://github.com/WorksOnArm/cluster/issues/115 and as mentioned at #12339. The likely target hardware that's in wide use is the Jetson boards. ## Pitch Provide arm64 builds of PyTorch to enable machine learning applications on ARM systems like NVIDIA Jetson TX2 or AGX Xavier, or Raspberry Pis, which power self-driving car and IoT applications. ## Alternatives Individual users can try to build from source and maintain their own patches to make this happen, but that's not terribly efficient in the long term compared to a supported build. ## Additional context The Works on Arm project is prepared to provide access to an arm64 server (initially one provided by Ampere) to assist in the port. cc @seemethere @malfet @pytorch/pytorch-dev-infra
module: ci,triaged,enhancement,module: arm
medium
Critical
374,067,936
rust
Rustc adds line-number information for unhittable panic handlers
_First off, root issue I was investigating was kcov producing bad coverage information for Rust binaries. I've worked out why kcov is producing the results it is and they're "correct" given what Rustc is doing. I'm not sure where the fix (if any) needs to be made, but I'm starting with Rustc as kcov's strategy looks sound._ Consider the following code: ``` struct Person { name: String, age: u32, } fn get_age() -> u32 { 42 } fn create_bob() -> Person { Person { name: "Bob".to_string(), age: get_age(), } // Uncovered } fn main() { let b = create_bob(); } ``` Kcov will mark the "Uncovered" line as unhit. This is surprising, as that line was definitely passed during the execution of the program. Lines can be omitted from kcov coverage (with `// KCOV_EXCL_LINE`) but this leads to hundreds of those markers scattered around the code, potentially hiding real coverage lapses/lowering maintainability of the codebase. ---- Kcov determines coverage by looking at the `.debug_lines` section of the binary, which contains a mapping from address in the binary to line of code. It then sets a break point at every listed address before running the program. Each time a break point is hit, kcov marks the associated line of code as hit, and clears the breakpoint (as hitting it again tells us nothing, and breakpoints are slow). After the process completes, any lines remaining were not hit. This means that kcov's "was this line hit" logic is pretty solid, assuming the `.debug_lines` section is accurate. ---- When Rust generates a block that calls any function after creating any binding to a type with a `Drop` implementation, it also generates a unwind-cleanup block to call that `drop()` in the event of the function `panic`-ing. 
In the above case, the generated code is something like (assuming Rust had exceptions): ``` fn create_bob() -> Person { let name = "Bob".to_string(); try { let age = get_age(); } catch e { drop(name); throw e; } Person { name, age, } } ``` This makes sense, if there is a panic (which might get handled up the stack somewhere) we should delete bindings that are memory-safe (yay Rust!) to free up memory that's not going to be accessible any more. ---- Unfortunately: * The cleanup code is associated with the "end of block" marker (the `}` line marked above) so unless `get_age` panics, this code won't be hit. * The marked line is not associated with any other generated machine code, so nothing else will cause kcov to consider it hit * `get_age` never panics so this cleanup code can't be hit (without changing the `get_age()` function at least) In a release build, the cleanup handler is stripped because LLVM notices that there's no way for anything in the `try` block to panic, so the handler is not needed. In a debug build, it doesn't do this optimisation and leaves genuinely unhittable code in the binary, but associates it with a line of code, causing many false positives in kcov's output (especially as kcov uses debug builds to prevent dead-code elimination removing untested code). ---- What to do differently? I'm not sure, but here are some ideas: * Associate the cleanup of a binding with the line that created the binding, rather than the end of the block. Might be confusing if you are actually debugging a panic cleanup. * Don't associate the cleanup code with any lines of code. Definitely unhelpful in panic debugging. * Enhance Kcov to ignore cleanup landing pads. Isn't really fair, as in a language like C++ exceptions are fairly common and cleanup is important. Also, landing pads are language-specific (see eh_personality and friends) so it's tricky to get this right. 
* Somehow perform minimal dead-code elimination even in debug builds (or test builds more accurately) that only removes unnecessary cleanup code. Not sure how plausible this is, nor how likely it is to create false positives. * Something else...?
A-debuginfo,T-compiler
low
Critical
374,067,993
go
cmd/compile: provide a more explanatory error message on trying to take the address of an unaddressable value e.g. constants, integer and string literals
### What version of Go are you using (`go version`)? go version go1.11.1 linux/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="/home/24G/.bin" GOCACHE="/home/24G/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/24G/go" GOPROXY="" GORACE="" GOROOT="/snap/go/2890" GOTMPDIR="" GOTOOLDIR="/snap/go/2890/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build434234612=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? https://play.golang.org/p/uU1XNXaFYG- ### What did you expect to see? This isn't a bug report, that's exactly what I expected haha. But the purpose of this issue is to get a more helpful error message. ### What did you see instead? `cannot take the address of 5` This is quite repulsive toward beginners. People who encounter this error may not understand exactly how constant values work, and what the differences between constants and normal values (after all, they look a lot like variables). Pointers are extremely hard for beginners to deal with, and changing this error message will make the language a bit more approachable. This is especially unintuitive because `&StructValue{...}` is okay, but `&5` is not. I propose instead that the message is changed to `cannot take the address of the constant value <const>`. This should at least explain a reason _why_ the value is not addressable. If there are cases where this is error message change would be unhelpful, that would be important to consider.
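For reference, the distinction a better message could teach is that composite literals are special-cased as addressable operands while constants are not, and the standard workaround is to bind the constant to a variable first:

```go
package main

import "fmt"

type S struct{ N int }

func main() {
	// &S{...} is allowed: the spec special-cases composite literals.
	p1 := &S{N: 5}

	// p2 := &5 would fail with "cannot take the address of 5": an
	// untyped constant has no storage. The workaround is to bind the
	// value to a variable, which IS addressable.
	n := 5
	p2 := &n

	fmt.Println(p1.N, *p2)
}
```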
NeedsDecision
low
Critical
374,092,096
go
text/template: documentation for IsTrue disagrees with its implementation for struct types
As noted in #28391, the [documentation for `template.IsTrue`](https://tip.golang.org/pkg/text/template/#IsTrue) says: > IsTrue reports whether the value is 'true', in the sense of not the zero of its type, and whether the value has a meaningful truth value. The zero `time.Time` *is* the zero of its type, so according to the documentation it should be considered to be true, but the implementation clearly intends otherwise: https://github.com/golang/go/blob/f6b554fec75ff1a36c6204755db8c1f638255b64/src/text/template/exec.go#L326-L327 Either the documentation should be clarified, or the behavior of `IsTrue` should be fixed to match what is documented. Probably the former.
Documentation,NeedsFix
low
Major
374,161,438
TypeScript
Ignored check result of typeof a === typeof b
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.2.0-dev.20181025 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** **Code** ```ts type argType = string | number | boolean; function sort(a: argType, b: argType): number { if (typeof a !== typeof b) { return 0; } // typeof a should be number now if (typeof a !== 'number') { return 0; } // typeof b should be number now return a - b; // Here TS is not sure that b is number } ``` **Expected behavior:** We have already checked that `typeof a` is equal `typeof b`. So if typeof `a` is `number`, than and `b` is `number` too. **Actual behavior:** error: `The right-hand side of an arithmetic operation must be of type 'any', 'number' or an enum type.` **Playground Link:** http://www.typescriptlang.org/play/#src=%0D%0Atype%20argType%20%3D%20string%20%7C%20number%20%7C%20boolean%3B%0D%0Afunction%20sort(a%3A%20argType%2C%20b%3A%20argType)%3A%20number%20%7B%0D%0A%20%20%20%20if%20(typeof%20a%20!%3D%3D%20typeof%20b)%20%7B%0D%0A%20%20%20%20%20%20%20%20return%200%3B%0D%0A%20%20%20%20%7D%0D%0A%0D%0A%20%20%20%20if%20(typeof%20a%20!%3D%3D%20'number')%20%7B%0D%0A%20%20%20%20%20%20%20%20return%200%3B%0D%0A%20%20%20%20%7D%0D%0A%0D%0A%20%20%20%20return%20a%20-%20b%3B%0D%0A%7D **Related Issues:** <!-- Did you find other bugs that looked similar? -->
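Until TypeScript can propagate the equality of two `typeof` results across variables, the usual workaround is to narrow both operands explicitly; a sketch (with a hypothetical function name):

```typescript
type ArgType = string | number | boolean;

// Workaround: narrow BOTH operands. TypeScript narrows each variable
// from its own typeof check, but does not carry the fact
// `typeof a === typeof b` from one variable to the other.
function sortNums(a: ArgType, b: ArgType): number {
  if (typeof a !== "number" || typeof b !== "number") {
    return 0;
  }
  return a - b; // both are number here, so this typechecks
}
```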
Suggestion,In Discussion
low
Critical
374,281,814
pytorch
circular module reference raises RecursionError
## 🐛 Bug <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: 1. Run the following code snippet ```py import torch import torch.nn as nn class ModuleA(nn.Module): def __init__(self): super().__init__() self.bs = nn.ModuleList() class ModuleB(nn.Module): def __init__(self, a): super().__init__() self.a = a a = ModuleA() a.bs.append(ModuleB(a)) a.to("cpu") ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior Running the code snippet raises `RecursionError` due to the circular reference. In this particular case, `a.to("cpu")` should have no effect because there are no tensors to move, but in general, it should move all tensors to the corresponding device without raising any error. See `additional context` for an explanation of why a circular reference like this might be useful in practice. ## Environment ``` PyTorch version: 0.4.1 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 16.04.3 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 7.5.17 GPU models and configuration: GPU 0: GeForce GTX 1080 Nvidia driver version: 384.130 cuDNN version: Could not collect Versions of relevant libraries: [pip] Could not collect [conda] pytorch 0.4.1 py37ha74772b_0 ``` ## Additional context A particular case where a circular reference is useful is when `ModuleB.forward` computes something based on attributes of `ModuleA`, and `ModuleA.forward` computes the sum of `ModuleB.forward` for every module in its `ModuleList`. A workaround to avoid the `RecursionError` is to pass all the attributes explicitly to the constructor of `ModuleB`, but it becomes impractical when many parameters are involved and it's particularly problematic for mutable shared state like numbers that get passed by value. 
For example, consider the following code snippet: ```py import torch import torch.nn as nn class ModuleA(nn.Module): def __init__(self): super().__init__() self.n = 123 self.bs = nn.ModuleList() def forward(self, x): output = 0.0 for b in self.bs: output += b(x) return output class ModuleB(nn.Module): def __init__(self, a): super().__init__() self.a = a def forward(self, x): if self.a.n > 100: return x else: return x + 1 a = ModuleA() a.bs.append(ModuleB(a)) x = torch.zeros(1) print(a(x)) # 0 a.n = 99 print(a(x)) # 1 ``` This code snippet has a circular module reference and works without problems, but `a.to(device)` will result in infinite recursion. A workaround to avoid infinite recursion when calling `a.to(device)` is to make the constructor of `ModuleB` accept `n` as an argument rather than `a`, and store it as an attribute, but changes in `a.n` won't have the desired effect since it's no longer shared. cc @albanD @mruberry @jbschlosser
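A torch-free sketch of one possible fix (plain classes stand in for `nn.Module`; the real change would live in `Module._apply`): traverse the module graph with a visited set so circular references terminate instead of recursing forever.

```python
# `Node` stands in for nn.Module; `apply_fn` plays the role of _apply.
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

def apply_fn(node, fn, _seen=None):
    if _seen is None:
        _seen = set()
    if id(node) in _seen:  # already handled: break the cycle here
        return
    _seen.add(id(node))
    fn(node)
    for child in node.children:
        apply_fn(child, fn, _seen)

a = Node("a")
b = Node("b")
a.children.append(b)
b.children.append(a)  # circular reference, like ModuleA <-> ModuleB

visited = []
apply_fn(a, lambda n: visited.append(n.name))
print(visited)  # each node visited exactly once
```

The same memoization would make `a.to("cpu")` in the snippet above a no-op on the second visit rather than a `RecursionError`.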
module: nn,triaged
low
Critical
374,318,744
pytorch
Tracing custom ops
I wrote a custom op using the process described [here](https://pytorch.org/tutorials/advanced/cpp_extension.html), giving me a `torch.autograd.Function` called `MyOp`. But when I run `torch.jit.trace()` on a module using it, and try to save the result, I get an error `RuntimeError: Couldn't export Python operator <MyOp>`. Is there a way to do this? I noticed [this](https://github.com/pytorch/pytorch/blob/master/torch/_ops.py) but I couldn't figure out how to get it to work. Thanks!
oncall: jit
low
Critical
374,353,198
TypeScript
In dom.d.ts, caretPositionFromPoint and caretRangeFromPoint should go on DocumentOrShadowRoot, not Document
**TypeScript Version:** 3.1.3 (issue still present on github)

**Code**

```ts
let root: DocumentOrShadowRoot = document // Typechecks if declared as type Document
root.caretPositionFromPoint(1, 1)
```

**Expected behavior:** The code typechecks

**Actual behavior:** TypeScript complains that `caretPositionFromPoint` doesn't exist on `DocumentOrShadowRoot`.

**Playground Link:** https://www.typescriptlang.org/play/#src=let%20root%3A%20DocumentOrShadowRoot%20%3D%20document%0D%0Aroot.caretPositionFromPoint(1%2C%201)
Bug,Help Wanted,Domain: lib.d.ts
low
Minor
374,366,828
rust
calling FnMut closure in an immutable value does not say why closure is FnMut
The following error should at least mention `something` being the reason for the `mut` requirement on the closure.

```rust
fn main() {
    let mut something = 42;
    let dummy = || {
        something = 44;
    };
    dummy();
}
```

([Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=7a3c32138c59a6d86d0c8b7639ea50dc))

Errors:

```
   Compiling playground v0.0.1 (/playground)
error[E0596]: cannot borrow `dummy` as mutable, as it is not declared as mutable
 --> src/main.rs:6:5
  |
3 |     let dummy = || {
  |         ----- help: consider changing this to be mutable: `mut dummy`
...
6 |     dummy();
  |     ^^^^^ cannot borrow as mutable

error: aborting due to previous error

For more information about this error, try `rustc --explain E0596`.
error: Could not compile `playground`.

To learn more, run the command again with --verbose.
```
C-enhancement,A-diagnostics,T-compiler
low
Critical
374,371,701
opencv
IPP: cv::ipp::getIppTopFeatures() return value on 32-bit configurations
Windows 32-bit:

```
Intel(R) IPP version: ippIP SSE4.2 (p8) 2019.0.0 Gold (-) Jul 26 2018
```

`cv::ipp::getIppTopFeatures()` returns `0x8000` (`ippCPUID_AVX2`).
It should be `0x80` (`ippCPUID_SSE42`) because the SSE4.2 backend is used.

relates #12877
bug,optimization,category: core
low
Major
374,382,312
go
cmd/compile: the DW_AT_location of the return value is empty when its name is not specified
### What version of Go are you using (`go version`)?

`go version devel +66bb8ddb95 Thu Oct 25 13:37:38 2018 +0000 darwin/amd64`

### What operating system and processor architecture are you using (`go env`)?

```
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/yagami/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/yagami/go"
GOPROXY=""
GORACE=""
GOROOT="/Users/yagami/src/go"
GOTMPDIR=""
GOTOOLDIR="/Users/yagami/src/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/5t/5crzwsxs305_ngrhgx90phzh0000gn/T/go-build561220144=/tmp/go-build -gno-record-gcc-switches -fno-common"
```

### What did you do?

In the sample program below, the DW_AT_location of the return value `~r1` is empty. It's nice to provide the information about the location here because the value is present on the stack and is used later.
```
% cat loc_attr.go
package main

import "fmt"

func fib(n int) int {
	if n == 0 || n == 1 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

func main() {
	fmt.Println(fib(10))
}

% go build -ldflags=-compressdwarf=false loc_attr.go
% gobjdump --dwarf=info ./loc_attr | grep -B 1 -A 18 'main.fib'
 <1><75c36>: Abbrev Number: 3 (DW_TAG_subprogram)
    <75c37>   DW_AT_name        : main.fib
    <75c40>   DW_AT_low_pc      : 0x1091960
    <75c48>   DW_AT_high_pc     : 0x10919e8
    <75c50>   DW_AT_frame_base  : 1 byte block: 9c (DW_OP_call_frame_cfa)
    <75c52>   DW_AT_decl_file   : 0x1
    <75c56>   DW_AT_external    : 1
 <2><75c57>: Abbrev Number: 16 (DW_TAG_formal_parameter)
    <75c58>   DW_AT_name        : n
    <75c5a>   DW_AT_variable_parameter: 0
    <75c5b>   DW_AT_decl_line   : 5
    <75c5c>   DW_AT_type        : <0x339f7>
    <75c60>   DW_AT_location    : 0x71216 (location list)
 <2><75c64>: Abbrev Number: 15 (DW_TAG_formal_parameter)
    <75c65>   DW_AT_name        : ~r1
    <75c69>   DW_AT_variable_parameter: 1
    <75c6a>   DW_AT_decl_line   : 5
    <75c6b>   DW_AT_type        : <0x339f7>
    <75c6f>   DW_AT_location    : 0 byte block: ()
 <2><75c70>: Abbrev Number: 0
```

I'm not sure it's helpful, but I noticed the DW_AT_location is not empty if the name of the return value is specified in the source:

```
% cat loc_attr.go
package main

import "fmt"

func fib(n int) (r int) { // now the return value is named.
	if n == 0 || n == 1 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

func main() {
	fmt.Println(fib(10))
}

% go build -ldflags=-compressdwarf=false loc_attr.go
% gobjdump --dwarf=info ./loc_attr | grep -B 1 -A 18 'main.fib'
 <1><75c36>: Abbrev Number: 3 (DW_TAG_subprogram)
    <75c37>   DW_AT_name        : main.fib
    <75c40>   DW_AT_low_pc      : 0x1091960
    <75c48>   DW_AT_high_pc     : 0x10919e8
    <75c50>   DW_AT_frame_base  : 1 byte block: 9c (DW_OP_call_frame_cfa)
    <75c52>   DW_AT_decl_file   : 0x1
    <75c56>   DW_AT_external    : 1
 <2><75c57>: Abbrev Number: 16 (DW_TAG_formal_parameter)
    <75c58>   DW_AT_name        : n
    <75c5a>   DW_AT_variable_parameter: 0
    <75c5b>   DW_AT_decl_line   : 5
    <75c5c>   DW_AT_type        : <0x339f7>
    <75c60>   DW_AT_location    : 0x71216 (location list)
 <2><75c64>: Abbrev Number: 16 (DW_TAG_formal_parameter)
    <75c65>   DW_AT_name        : r
    <75c67>   DW_AT_variable_parameter: 1
    <75c68>   DW_AT_decl_line   : 5
    <75c69>   DW_AT_type        : <0x339f7>
    <75c6d>   DW_AT_location    : 0x71249 (location list)
 <2><75c71>: Abbrev Number: 0
```

### What did you expect to see?

The DW_AT_location of the return value `~r1` provides the information about the location.

### What did you see instead?

The DW_AT_location of the return value `~r1` is empty.
NeedsFix,Debugging,compiler/runtime
low
Critical
374,384,151
kubernetes
AvailableConditionController doesn't implement proper backoff/retry strategy
**What happened**:

When AvailableConditionController is unable to contact an API service, it keeps retrying at a high rate (around 80 attempts per second), creating significant cpu usage.

https://gist.github.com/mborsz/c6094ae23f84db10593f1e0a0fd5d38d contains kube-apiserver's logs from one of the attempts.

**What you expected to happen**:

AvailableConditionController should implement a proper backoff/retry strategy in that case.

**How to reproduce it (as minimally and precisely as possible)**:

1. Create a cluster
2. Block ssh tunnel access to the nodes with the service running
3. Restart kube-apiserver
4. Watch logs and cpu usage of kube-apiserver

**Anything else we need to know?**:

AvailableConditionController seems to be trying to implement some backoff logic, but it either doesn't work or still generates too big a load.

**Environment**:
- Kubernetes version (use `kubectl version`): 1.9.7
- Cloud provider or hardware configuration: gke
- OS (e.g. from /etc/os-release): cos
- Kernel (e.g. `uname -a`): 4.4.111+
- Install tools:
- Others:

<!-- DO NOT EDIT BELOW THIS LINE -->
/kind bug
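The strategy being asked for is standard capped exponential backoff (usually with jitter). As a language-neutral illustration of the policy shape — not the controller's actual Go code — here is a minimal Python sketch of the delay schedule such a controller could follow instead of ~80 attempts per second:

```python
def backoff_delays(base=1.0, factor=2.0, cap=30.0, attempts=6, jitter=None):
    """Yield exponentially growing retry delays (seconds), capped at `cap`.

    `jitter`, if given, is any object with a uniform(a, b) method
    (e.g. random.Random()) used to spread retries apart.
    """
    delay = base
    for _ in range(attempts):
        yield delay if jitter is None else jitter.uniform(0, delay)
        delay = min(delay * factor, cap)

print(list(backoff_delays()))  # → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

The `base`, `factor`, and `cap` values above are illustrative defaults, not numbers taken from kube-apiserver.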
kind/bug,priority/backlog,sig/api-machinery,lifecycle/frozen,triage/not-reproducible
medium
Critical
374,408,418
vscode
Git - VS Code no longer supports 'edit' in 'git rebase' flow
As of a latest update, VS Code now does the wrong thing when using the Source Control pane during a `git rebase` command.

`git rebase -i` allows the rebase to be stopped in order to `edit` commits. This includes undoing them, changing the commit message, or adding more commits. Now, instead of just allowing commits as usual, VS Code attempts to use `git rebase --continue` after altering a commit, which fails because there is still a commit pending that is not the original.

- VSCode Version: 1.28.2 (system setup)
- OS Version: Windows 10 Enterprise 1809 17763.55

Steps to Reproduce:

1. Create a new branch, and add at least one commit to the branch.
2. In the terminal, do `git rebase -i master`
3. In the editor for the interactive rebase, change a commit from `pick` to `edit`
4. Close the editor.
5. Git will start the rebase, and stop on the commit. It will prompt to amend the commit or add more.
6. In the Source Control pane, use 'Undo last commit'
7. Once the commit is soft reset, attempt to edit the commit. Rename the commit description or make more local changes.
8. Note that VS Code shows a warning that editing the commit is not allowed, even though Git specifically allows it here.
9. Attempt to commit anyway through the Source Control pane.
10. VS Code pops a dialog from Git, saying that the operation is not allowed.
11. In the Git log, observe that the commit action actually sent `git rebase --continue`, which is not correct. It should have just done a usual `git commit`

One can work around the issue by completing the commit in the terminal window, but using VS Code normally handles the work of auto-staging files and deletions, making it a lot more useful.

![10 26 vs code git rebase edit blocked](https://user-images.githubusercontent.com/6828233/47572497-f5f41e80-d8ef-11e8-9200-8129e130bc2f.gif)

This used to work!

If you want to be more helpful during rebases, I suggest adding a dedicated 'continue' button instead of altering the behavior of the commit button.

Does this issue occur when all extensions are disabled?: Yes
bug,git
medium
Major
374,419,557
godot
Dynamic font sizes changed in script propagate to other nodes
I have been looking online to see if there is a workaround to this, with no success. This is either a bug, or something that needs a fix. As I explained earlier, whenever I change a font size in script, it propagates to other node instances, some of which don't even share the same dynamic font.

**Godot version:** 3.0.6

**OS/device including version:** All

**Issue description:**
Dynamic font sizes changed in script propagate to other nodes, sometimes not even sharing the same dynamic font file.

**Steps to reproduce:**
- Assign a dynamic font to a label node.
- Place various instances of the node in the scene.
- Make the dynamic font size changeable in script. For example:

```gdscript
extends Label

export(int) var font_size = 16

func _ready():
    self.get_font('font').size = font_size
```

- Assign different font sizes to each node.
- Run the scene and watch as all nodes get the same font size as the last label rendered.
topic:core,documentation
low
Critical
374,431,916
flutter
Tool doesn't support a project as both an app and module at the same time (both android and .android directories)
I have a Flutter project that is also a **module** project. That means:

- I have an "android" folder.
- I also have a ".android" (auto-generated) folder.

This causes some issues when I try to build or run the project:

- When I run it, the flutter framework installs the ".android" (generated) app onto the phone.
- When I build (release) it generates an apk based on the "android" folder.

How do I specify which folder I want flutter to work with? I believe this is not possible right now, so it should be considered for future implementations.

Ps.: I used Android as an example, but the same also happens for iOS.

----------------------------------------------------

To exemplify a problem that happened:

I added `shared_preferences` to my `pubspec.yaml`. Running the app from IntelliJ was working fine and nothing wrong was happening, but when I ran `flutter build apk` and then `flutter install`, the app on the Android device was throwing a "**MissingPluginException**".

Then I checked the GeneratedPluginRegistrant file in the "**android**" folder and it was completely out of date. To fix it I had to comment out the `pubspec.yaml`'s entire "module" property (with its **androidPackage** and **iosBundleIdentifier** sub-properties). By doing this, Flutter started to properly generate all the files for the "**android**" folder.

**(My flutter doctor is all fine)**
tool,a: existing-apps,P2,team-tool,triaged-tool
low
Major
374,442,276
TypeScript
Bug: Chained this-intersecting methods don't work from within class
**TypeScript Version:** 3.1.1-insiders.20180926

**Search Terms:** this chain

**Code**

```ts
class Builder {
    private _class: undefined;
    withFoo<T>() {
        return this as this & { foo: T }
    }
    withBar<T>() {
        return this as this & { bar: T }
    }
    withFooBar<T>() {
        return this.withFoo<T>().withBar<T>();
    }
}

let good = new Builder().withFoo<number>().withBar<number>();
let bad = new Builder().withFooBar<number>();
good = bad;
```

**Expected behavior:** `good` & `bad` should be identical

**Actual behavior:** `bad` is missing `foo`

**Playground Link:** http://www.typescriptlang.org/play/#src=class%20Builder%20%7B%0D%0A%20%20%20%20private%20_class%3A%20undefined%3B%0D%0A%20%20%20%20withFoo%3CT%3E()%20%7B%0D%0A%20%20%20%20%20%20%20%20return%20this%20as%20this%20%26%20%7B%20foo%3A%20T%20%7D%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20withBar%3CT%3E()%20%7B%0D%0A%20%20%20%20%20%20%20%20return%20this%20as%20this%20%26%20%7B%20bar%3A%20T%20%7D%0D%0A%20%20%20%20%7D%0D%0A%20%20%20%20withFooBar%3CT%3E()%20%7B%0D%0A%20%20%20%20%20%20%20%20return%20this.withFoo%3CT%3E().withBar%3CT%3E()%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Alet%20good%20%3D%20new%20Builder().withFoo%3Cnumber%3E().withBar%3Cnumber%3E()%3B%0D%0Alet%20bad%20%3D%20new%20Builder().withFooBar%3Cnumber%3E()%3B%0D%0Agood%20%3D%20bad%3B
Bug,Domain: This-Typing
low
Critical
374,451,049
react
nextContext arg in shouldComponentUpdate() method
**Do you want to request a *feature* or report a *bug*?**

I think it is a feature, but it could be a bug also.

**What is the current behavior?**

When I subscribe to context using the React 16.6.0 contextType API, the component rerenders even when unused context properties are changed. So, I want to use shouldComponentUpdate(), but that method doesn't have a nextContext argument. Is there any other way to solve this problem?
Type: Needs Investigation,Component: Reconciler
low
Critical
374,470,178
vue
Allow <noscript> in Vue templates for SSR usage
### Version
2.5.17

### Reproduction link
[https://codepen.io/brophdawg11/pen/OBdZyX](https://codepen.io/brophdawg11/pen/OBdZyX)

### Steps to reproduce

I'm opening this issue as a follow-up to https://github.com/vuejs/vue/issues/8247, as I don't think the solution provided there is suitable for all use-cases.

In a fairly simple UI, where everything is relatively positioned and flows downward, it would likely be fine to include `<noscript>` outside the context of the Vue application, and it would render correctly above the entire app. However, there are plenty of other UI's where it may not be desirable or feasible to include `<noscript>` _outside_ of the Vue application context and display it properly.

The linked codepen shows a simple fixed-header layout, where including the `<noscript>` tag outside the Vue application context results in the `<noscript>` tag being hidden _behind_ the fixed header, where in reality it is intended to be rendered inside the main body content, and thus below the fixed header. The `<noscript>` outside the Vue context also has the unintended effect of pushing down the main content, which has a proper margin-top to account for the static-height fixed header.

Per MDN, `<noscript>` is Flow Content (https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Content_categories#Flow_content), it is perfectly viable to exist outside the `<head>` (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/noscript), and is perfectly valid to nest inside the DOM in a `<div>`, as div's allow Flow Content as their children (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/div).

Please reconsider the decision to not permit `noscript` tags in Vue templates.

### What is expected?
`<noscript>` elements should render properly in Vue templates

### What is actually happening?
`<noscript>` elements do not render properly in Vue templates and cause hydration issues when in SSR

<!-- generated by vue-issues. DO NOT REMOVE -->
discussion,improvement
medium
Critical
374,475,807
flutter
Would like an analytics event every time a user hits "please file this" errors in Framework
Ideally we'd also report the (anonymized) crash traces, but if that's not easy to do, at least we can report an analytics event so we know the frequency. This came up in discussing with @timsneath, noting that he often sees these "something went wrong deep in the framework, please file a bug" messages. Thoughts @gspencergoog @Hixie?
c: new feature,team,tool,framework,P2,team-framework,triaged-framework
low
Critical
374,479,541
go
cmd/go: add 'go get' options to update direct and indirect dependencies separately
### What version of Go are you using (`go version`)?

1.11.1

### Summary

Users have more familiarity with their direct dependencies than their indirect dependencies. Indirect dependencies can also be larger in number, with greater potential combinations of pair-wise versions. For these and other reasons, upgrading direct dependencies has a different risk profile than upgrading indirect dependencies. Therefore:

* Consider providing users easier control to upgrade just their direct dependencies.
* Consider allowing an easier separation of upgrade strategy for indirect dependencies vs. the upgrade strategy for direct dependencies (e.g., perhaps minor version upgrades for direct dependencies, but only patch version upgrades for resulting indirect dependencies).

This might result in upgrade strategies that retain more benefits of 'High-Fidelity Builds', especially as compared to doing a simple `go get -u` or `go get -u=patch`.

Regardless of whether or not this particular suggestion ends up making sense, a more general goal would be to have a small basket of easy-to-run upgrade strategies that could be applied to something like 80-90% of projects.

### Background

The ["Update Timing & High-Fidelity Builds"](https://github.com/golang/proposal/blob/master/design/24301-versioned-go.md#update-timing--high-fidelity-builds) section of the official proposal includes:

> In the Bundler/Cargo/Dep approach, the package manager always prefers to use the latest version of any dependency. These systems use the lock file to override that behavior, holding the updates back. But lock files only apply to whole-program builds, not to newly imported libraries. If you are working on module A, and you add a new requirement on module B, which in turn requires module C, these systems will fetch the latest of B and then also the latest of C.
> In contrast, this proposal still fetches the latest of B (because it is what you are adding to the project explicitly, and the default is to take the latest of explicit additions) but then prefers to use the exact version of C that B requires. Although newer versions of C should work, it is safest to use the one that B did.
> ...
> The [minimal version selection](https://research.swtch.com/vgo-mvs) blog post refers to this kind of build as a “high-fidelity build.”

This is a very nice set of properties of the overall modules system, and is materially different than a more traditional approach. That section later goes on to say:

> Many developers recoil at the idea that adding the latest B would not automatically also add the latest C, but if C was just released, there's no guarantee it works in this build. The more conservative position is to avoid using it until the user asks.
> ...
> users are expected to update on their own schedule, so that they can control when they take on the risk of things breaking
> ...
> If a developer does want to update all dependencies to the latest version, that's easy: go get -u. We may also add a go get -p that updates all dependencies to their latest patch versions

### Issue

The initial starting point for a new set of dependencies in the modules system is more conservative than a more traditional approach, and as a result it is likely the case that you have better odds of starting with a working system. The ability to easily do `go get -u` to update all direct and indirect dependencies helps balance out that more conservative start.

However, once you do `go get -u`, you have stepped away to some degree from some of the benefits of "High-Fidelity Builds" at that point in time. The same is true of `go get -u=patch`, though the step away is smaller.

### Suggestion

Consider some form of providing for an easy upgrade of just direct dependencies. Setting aside the actual mechanics (e.g., new flag vs. some other mechanism), suppose you could easily ask to get the latest versions of your direct dependencies, e.g., via something like:

```
$ go get direct@latest
```

...or less likely, perhaps that same sentiment could be written something like:

```
$ go get -u -directonly
```

...or some other form that would ask to upgrade only your _direct_ dependencies to the latest version available.

That would mean in the common case the resulting versions for your _indirect_ dependencies would be the ones listed in a `require` directive by at least one of your other dependencies, which would preserve many of the benefits of 'High-Fidelity Builds'. In contrast, a simple `go get -u` often moves your indirect dependencies to versions beyond the versions listed in any `require` directive, and hence you might be using versions of modules that the module's importer in your build has never used or tested, or you otherwise might find yourself in a rare combination of versions involving indirect dependencies.

The author of the top-level build:

* often has the most insight into their _direct_ dependencies (e.g., the author made a decision to use them, the top-level code is directly interacting with their direct dependencies).
* often has progressively less insight into _indirect_ dependencies further and further down the chain
* often is not well positioned to chase down pair-wise incompatibility bugs that show up in their build deep in their dependency chain (e.g., in some indirect dependency 7 levels down).

For the rest of this write-up, we'll use `go get -u -directonly` as the strawman form (rather than `go get direct@latest`). `go get -u -directonly` or similar could be viewed as a 'high-fidelity upgrade', though that might not be good terminology.

If some form of mechanics for more easily upgrading direct dependencies was adopted, it could apply to patch upgrades as well, such as something like:

```
go get -u=patch -directonly
```

...which would update your direct dependencies to their latest patch releases.

In addition, it might make sense to also allow for easily specifying how you want to upgrade your _indirect_ dependencies. If that was allowed, then for some projects a natural strategy for dependency upgrades might be:

```
$ go get -u -directonly
$ go get -u=patch -indirectonly
```

That would be more conservative than `go get -u`, and slightly more aggressive than the hypothetical `go get -u -directonly`, but there is some risk mitigation for indirect dependencies by picking up the latest _patch_ versions for indirect dependencies (`go get -u=patch -indirectonly`, which admittedly is not a great flag name).

Backing up:

* Different projects will almost certainly adopt different dependency upgrade strategies (due to different risk tolerances, different depth and width of their dependency trees, different levels of stability observed over time within their dependencies, etc.).
* There is a general question repeated within the community regarding best practice for library maintainers for dependency updates, including whether libraries should:
  * "Ride the top", such as by always shipping with requirements for the latest version of all dependencies
  * "Ride the bottom", where `require` statements for a module have the true minimum supported version of dependencies
* Options along the lines of `go get -u -directonly`, `go get -u=patch -directonly`, `go get -u=patch -indirectonly` could provide simpler and better answers to those questions, at least for some projects.

### Alternatives

I think even today in 1.11 you can emulate the suggested behavior above, though it is not always natural or obvious. Perhaps there is a simpler way today, but for example given the flexibility of `go list`, I suspect in 1.11 the following gives you the `go get -u=patch -indirectonly` behavior described above:

```
go get -u=patch $(go list -f '{{if not (or .Main .Indirect)}}{{.Path}}@{{.Version}}{{end}}' -m all)
```

To upgrade your direct dependencies to their latest release (the `go get -u -directonly` or `go get direct@latest` behavior described above), this should work in 1.11:

```
go get $(go list -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' -m all)
```

`go mod edit -json` and similar open up even more doors for greater control today.
NeedsInvestigation,FeatureRequest,GoCommand,modules
medium
Critical
374,494,054
pytorch
Allow traced modules to return dictionaries
For modules traced by `torch.jit.trace()`, allow dictionaries to be returned (ideally nested dictionaries as well) in addition to Tensors/tuples.
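Until trace supports dictionary outputs, one common workaround is to have the traced callable return a flat tuple and rebuild the dictionary in a thin un-traced wrapper. Here is a minimal sketch of that pattern using plain Python callables, so it stays independent of any particular torch version; `model_core` is a hypothetical stand-in for the traced module:

```python
# Keys are fixed at trace time; only the values flow through the traced graph.
OUTPUT_KEYS = ("logits", "hidden")

def model_core(x):
    # Stand-in for the traced module: it must return a flat tuple,
    # since (as of this report) torch.jit.trace cannot return dicts.
    return (x * 2, x + 1)

def model_with_dict_output(x):
    # Thin un-traced wrapper that reassembles the dictionary for callers.
    values = model_core(x)
    return dict(zip(OUTPUT_KEYS, values))

print(model_with_dict_output(3))  # → {'logits': 6, 'hidden': 4}
```

Supporting dicts natively would remove the need for this key bookkeeping and make nested outputs possible as well.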
oncall: jit
low
Minor
374,500,308
rust
Trivial proc macro crate can't be found for doc tests
I tried making a trivial proc macro crate to try out the newly stabilized things in 1.30 and ran into a problem when running the tests from a crate that imports it. The normal compilation/tests run perfectly fine, but then it tries to run doc tests, even though I have no doc comments or any other comments for that matter. When running these nonexistent doc tests, it fails to locate my proc macro crate, even though it was definitely there and working correctly for the normal tests.

Full error:

```
$ cargo +stable test
   Compiling foo v0.1.0 (/tmp/new_rust_testing/foo)
   Compiling baz v0.1.0 (/tmp/new_rust_testing/baz)
    Finished dev [unoptimized + debuginfo] target(s) in 0.87s
     Running target/debug/deps/baz-a6614c8413dbf748

running 1 test
test tests::it_works ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

   Doc-tests baz
error[E0463]: can't find crate for `foo`
 --> /tmp/new_rust_testing/baz/src/lib.rs:1:1
  |
1 | extern crate foo;
  | ^^^^^^^^^^^^^^^^^ can't find crate

error: test failed, to rerun pass '--doc'
```

Full code can be found at https://github.com/jrobsonchase/proc_macro_problem

My current rustc version:

```
$ rustup run stable rustc --version
rustc 1.30.0 (da5f414c2 2018-10-24)
```
T-rustdoc,A-macros,A-doctests
low
Critical
374,539,921
flutter
Ability to recess widgets (i.e. to negative on the z axis)
It is currently possible to give Material widgets an elevation and have them hover over other widgets. It'd be great to go the other direction and provide an 'embedded' look, in that one widget looks recessed inside another widget. Kind of like a negative elevation.
c: new feature,framework,f: material design,P3,team-design,triaged-design
low
Minor
374,540,771
TypeScript
Improve typing of arguments with a function (with respect to overloads)
## Search Terms

function overload arguments narrowing

## Suggestion

The example is probably the best illustration I can give... It's also much clearer (I think) than the description in text.

Currently, the types of the arguments of a function with overloads cannot be forwarded to another function with the same overloads (1), because the raw typing of the arguments doesn't match any of the overloads. At the same time, the relationship between the arguments' types established by the overloads is not used inside the function's body (2). We currently have to either assert types (using `as`) or do `if (...) throw` with conditions that will never be met at runtime. That requires sacrificing either type safety or performance.

I've seen #13225. However, my suggestion is more regarding the arguments than the return type. It shouldn't impose any restrictions beyond those that currently exist (actually, it would remove some). Moreover, one of the arguments that was given at the time ("Pass-through overloads are extremely common and would require writing "artificial" code to satisfy the checker") is not true anymore. (See point (1) in the example.) I'm not sure if this particular point should be considered a bug in itself or not... Having the type system aware of the overloads when typing the arguments could actually restore this.

## Use Cases

Simplify overload implementations and make them more readable.

## Examples

```ts
let baz: string;

function fn2(foo: 'a', bar: string): void;
function fn2(foo: 'b', bar: number): void;
function fn2(foo: 'a' | 'b', bar: string | number): void {
    fn(foo, bar); // (1) If this doesn't work, ...
}

function fn(foo: 'a', bar: string): void;
function fn(foo: 'b', bar: number): void;
function fn(foo: 'a' | 'b', bar: string | number): void {
    if (foo == 'a') {
        baz = bar; // (2) ... This should!
    }
}
```

https://www.typescriptlang.org/play/index.html#src=let%20baz%3A%20string%3B%0D%0A%0D%0Afunction%20fn2(foo%3A%20'a'%2C%20bar%3A%20string)%3A%20void%3B%0D%0Afunction%20fn2(foo%3A%20'b'%2C%20bar%3A%20number)%3A%20void%3B%0D%0Afunction%20fn2(foo%3A%20'a'%20%7C%20'b'%2C%20bar%3A%20string%20%7C%20number)%3A%20void%0D%0A%7B%0D%0A%20%20%20%20fn(foo%2C%20bar)%3B%20%2F%2F%20(1)%20If%20this%20doesn't%20work%2C%20...%0D%0A%7D%0D%0A%0D%0Afunction%20fn(foo%3A%20'a'%2C%20bar%3A%20string)%3A%20void%3B%0D%0Afunction%20fn(foo%3A%20'b'%2C%20bar%3A%20number)%3A%20void%3B%0D%0Afunction%20fn(foo%3A%20'a'%20%7C%20'b'%2C%20bar%3A%20string%20%7C%20number)%3A%20void%0D%0A%7B%0D%0A%20%20%20%20if%20(foo%20%3D%3D%20'a')%0D%0A%20%20%20%20%7B%0D%0A%20%20%20%20%20%20%20%20baz%20%3D%20bar%3B%20%2F%2F%20(2)%20...%20This%20should!%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A

## Checklist

My suggestion meets these guidelines:

* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,In Discussion
low
Critical
374,553,332
godot
Inherited export variables from a registered class don't show up in node inspector
___
***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.*
___

**Godot version:**
Godot 3.1 Alpha

**OS/device including version:**
Mac OSX Sierra 10.12.6

**Issue description:**
Suppose you have a base class in GDScript.

```
# res://base.gd
extends Node2D
class_name Base

export(int) var number : int
```

(1) You can extend by using the path to the .gd script:

```
# res://inherited.gd
extends "res://base.gd"
```

(2) You can extend by using the registered class name:

```
# res://inherited.gd
extends Base
```

In case two, the export variable `number` from `Base` does not appear in the inspector.

**Steps to reproduce:**
1. Create a new Godot project with an empty scene.
2. Add a Node of any kind and give it a script, say "res://base.gd"
3. Give "res://base.gd" an export variable and register its class name
4. Create a new Node that extends from "res://base.gd" but uses the class name instead of the gdscript path
5. The export variable inside "res://base.gd" does not show up in the inherited node inspector

**Minimal reproduction project:**
[class_name_export_bug.zip](https://github.com/godotengine/godot/files/2520965/class_name_export_bug.zip)
bug,topic:gdscript,confirmed
medium
Critical
374,557,040
pytorch
cudnn explicit paths and GCC multilib suffixes prevents detection of good cudnn headers
I'm using gcc 4.8 on an old machine. The machine has a `/usr/local/cuda-8.0/targets/x86_64-linux/cudnn.h` file (an old cuDNN version). GCC will pick it up regardless of how CUDNN_LIB_DIR and CUDNN_INCLUDE_DIR are set, because GCC checks multilib/multiplatform suffixes, such as 'x86_64-linux', first. PyTorch could detect this situation and fail the build fast by compiling a one-line program containing `#include <cudnn.h>` and checking which effective path got used (GCC has ways to print that, if I remember correctly). I don't know of a guaranteed way to force GCC to use a header by absolute path; '-I' would not work, again because of multilib. And of course this is relevant for any other headers that may be installed in multilib system paths. cc @malfet @seemethere @walterddr
module: build,triaged
low
Minor
374,560,455
TypeScript
Suggestion: Automatically infer argument types in overloaded function implementation
## Search Terms infer, arguments, function, overload ## Suggestion For basic container types (Point, Size, Rect, ...) I often write constructors and methods with overloaded signatures so they can be called with the container type itself or with the separate components of the container as arguments. Example: ```typescript class Point { constructor(point: Point); constructor(x: number, y: number); constructor(arg1: Point | number, arg2?: number) { if (arg1 instanceof Point) { this.x = arg1.getX(); this.y = arg1.getY(); } else { this.x = arg1; this.y = arg2!; } } } ``` And I always wonder why I have to specify the argument types in the implementation again when I already defined the possible types in the overload signatures. In this simple `Point` type it is still pretty easy but imagine a `Rect` type which can work with four number arguments, two `Point` arguments, a `Point` and `Size` argument or a `Rect` argument. Manually writing the combined signature for all these overloaded signatures is cumbersome. And I don't want to use `any` here because I want type checking in the function body. An alternative way to write this example is this: ```typescript class Point { constructor(point: Point); constructor(x: number, y: number); constructor(...args: [ Point ] | [ number, number ]) { const [ arg1, arg2 ] = args; // arg1 is now Point | number // arg2 is now number | undefined (At least since TS 3.2 because of #27543) ... } } ``` This shows how easy it should be for TypeScript to collect the possible function signatures into a union type so writing `constructor(...args)` would be enough. Taking this a step further I even like to write this so I don't need to destructure the arguments myself: ```typescript class Point { constructor(point: Point); constructor(x: number, y: number); constructor(arg1, arg2) { // arg1 is now Point | number // arg2 is now number | undefined ... 
} } ``` Taking this ANOTHER step further TypeScript could even narrow down the inferred function signatures by each type check done within the function body: ```typescript class Point { constructor(point: Point); constructor(x: number, y: number); constructor(arg1, arg2) { if (arg1 instanceof Point) { // arg2 can now only be `undefined` because the instanceof check removes // the second call signature (where arg1 is a number) from the list of possible // call signatures this.x = arg1.getX(); this.y = arg1.getY(); } else { // arg2 can now only be `number` because the failed instanceof check removes // the first call signature from the list of possible signatures this.x = arg1; this.y = arg2; } } } ``` I guess the type narrowing is harder to implement but at least the automatic type inference of each argument shouldn't be that hard. So it would be very nice if a future version of TypeScript could do this so using overloading gets a bit easier. ## Checklist My suggestion meets these guidelines: * [X] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [X] This wouldn't change the runtime behavior of existing JavaScript code * [X] This could be implemented without emitting different JS based on the types of the expressions * [X] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Awaiting More Feedback
medium
Critical
374,586,469
flutter
Flutter should be able to interact with host clipboard for rich content
When a Flutter application is running on a Chromebook, it should be able to interact with the host clipboard to copy/paste text and other things (images, etc.). This should support plain text, rich text, images, and application specific types.
c: new feature,framework,engine,platform-mac,platform-windows,platform-chromebook,customer: crowd,platform-linux,a: desktop,customer: octopod,P2,team-text-input,triaged-text-input
low
Critical
374,587,589
react
Hooks + multiple instances of React
# To people coming from search: please [read this page first](https://reactjs.org/warnings/invalid-hook-call-warning.html). It contains most common possible fixes! **Do you want to request a *feature* or report a *bug*?** Enhancement **What is the current behavior?** I had multiple instances of React by mistake. When trying to use hooks, got this error: `hooks can only be called inside the body of a function component` Which is not correct since I was using function components. Took me a while to find the real cause of the issue. **What is the expected behavior?** Show the correct error message. Maybe detect that the app has multiple instances of React and say that it may be the reason of bugs.
Type: Discussion,Component: Hooks
high
Critical
374,605,845
flutter
flutter drive hangs after completion
Executing `flutter drive ...` hangs after all tests are completed. It takes a Ctrl+C to get control back. I'm running the command on OSX against an iPhone 7 simulator.
tool,t: flutter driver,P2,team-tool,triaged-tool
low
Minor
374,608,591
go
cmd/gofmt: comment in return params is misplaced
### What version of Go are you using (`go version`)? ``` go version go1.11 darwin/amd64 ``` ### Does this issue reproduce with the latest release? Yes. ### What did you do? Ran `gofmt` on this program (https://play.golang.org/p/1rASHggr4wx): ```go package foo func bar() ( // This return value is extremely subtle! Please make sure you understand // what it does. string, ) { return "42" } ``` ### What did you expect to see? The input, unchanged. ### What did you see instead? The program reformatted with the comment in a very strange place (https://play.golang.org/p/NujRVxVN6N5): ```go package foo func bar() string {// This return value is extremely subtle! Please make sure you understand // what it does. return "42" } ``` Arguably not that common to stick a comment there, but I find the resulting code extremely surprising. Note that this nearly identical program works just fine (https://play.golang.org/p/Hpfcc5HX00l) because the presence of `bazzle` makes the parens required and so gofmt doesn't attempt a reformatting: ```go package foo func bar() ( // This return value is extremely subtle! Please make sure you understand // what it does. bazzle string, ) { return "42" } ``` /cc @griesemer since you seem to get pinged on all the gofmt issues eventually
NeedsInvestigation
low
Major
374,633,202
go
net/textproto: seemingly unnecessary buffer copy and reassignment in Reader.ReadLineBytes
I was just studying up and working on some small net/http performance changes and while examining its dependencies stumbled upon net/textproto (*Reader).ReadLineBytes() https://github.com/golang/go/blob/bc4a10d16ca8582eaa92e5b834616df55c777503/src/net/textproto/reader.go#L40-L49 The source code was last edited 7 years ago in https://github.com/golang/go/commit/27a3dcd0d2320e171203294724def784a1ddead6 and am wondering whether the code in question is just a vestige of the old days or if perhaps there is a subtle reason behind that copy and reassignment ```go if line != nil { buf := make([]byte, len(line)) copy(buf, line) line = buf } ``` I believe that code should just replace that for (*Reader) readLineSlice that is ```diff diff --git a/src/net/textproto/reader.go b/src/net/textproto/reader.go index feb464b2f2..83ecae6fb4 100644 --- a/src/net/textproto/reader.go +++ b/src/net/textproto/reader.go @@ -33,22 +33,12 @@ func NewReader(r *bufio.Reader) *Reader { // ReadLine reads a single line from r, // eliding the final \n or \r\n from the returned string. func (r *Reader) ReadLine() (string, error) { - line, err := r.readLineSlice() + line, err := r.ReadLineBytes() return string(line), err } // ReadLineBytes is like ReadLine but returns a []byte instead of a string. func (r *Reader) ReadLineBytes() ([]byte, error) { - line, err := r.readLineSlice() - if line != nil { - buf := make([]byte, len(line)) - copy(buf, line) - line = buf - } - return line, err -} - -func (r *Reader) readLineSlice() ([]byte, error) { r.closeDot() var line []byte for { @@ -120,7 +110,7 @@ func (r *Reader) ReadContinuedLineBytes() ([]byte, error) { func (r *Reader) readContinuedLineSlice() ([]byte, error) { // Read the first line. - line, err := r.readLineSlice() + line, err := r.ReadLineBytes() if err != nil { return nil, err } @@ -145,7 +135,7 @@ func (r *Reader) readContinuedLineSlice() ([]byte, error) { // Read continuation lines. 
for r.skipSpace() > 0 { - line, err := r.readLineSlice() + line, err := r.ReadLineBytes() if err != nil { break } @@ -479,7 +469,7 @@ func (r *Reader) ReadMIMEHeader() (MIMEHeader, error) { // The first line cannot start with a leading space. if buf, err := r.R.Peek(1); err == nil && (buf[0] == ' ' || buf[0] == '\t') { - line, err := r.readLineSlice() + line, err := r.ReadLineBytes() if err != nil { return m, err } ``` If there is a reason behind it, let's please document it with a comment. In the standard library I can't find any usage of ReadLineBytes so perhaps that might be the reason why no one had noticed? /cc @bradfitz @rsc please feel free to correct me if am mistaken.
NeedsInvestigation
low
Critical
374,634,123
rust
regression: unused imports false positive on nightly for the mach crate
With the latest version of the `mach` crate, running `cargo test` shows: ```shell warning: unused import: `vm_region::*` --> src/vm.rs:200:9 | 200 | use vm_region::*; | ^^^^^^^^^^^^ | = note: #[warn(unused_imports)] on by default warning: unused import: `vm_types::*` --> src/vm.rs:202:9 | 202 | use vm_types::*; | ^^^^^^^^^^^ ``` This warning did not used to trigger, and is incorrect (removing the import breaks the build).
A-lints,T-compiler,C-bug,E-needs-mcve
low
Minor
374,639,952
go
proposal: spec: enum type (revisited)
I think Go is missing an enum type to tie different language features together. This is not simply a "better way to use iota" but rather a new feature to simplify code logic and extend the type system. Note: I read #19814 and it's a good start, but I think some important parts were missing. **Proposal:** This is a backward-compatible proposal to introduce a new type called `enum`. With this new type will allow us to group together values and use them in a type-safe manner. It resembles a Go switch, but it defines a variable declaration (var) value. Also, there is no default enum field case. The enum-case block is a typical block, which may return the value of the field. When no return is specified, the field type defaults to int. Enum field values are not assignable, this differs from other enum implementations. Syntax: Default (int) ```go // no type, defaults to int. fields with no returns will return the index ordering value. enum Status { case success: // value: 0 case failure: // value: 1 case something: return -1 // value: -1 case someFunc(t time.Time): // please keep reading for func below return int(t.Since(time.Now)) } fmt.Printf("%T,%v", Status.success, Status.success) // output: int,0 fmt.Printf("%T,%v", Status[1], Status[1]) // output: int,1 fmt.Printf("%T", Status) // output: enum []int ``` Syntax: Not assignable ```go // this fails Status.failure = 3 // this works s := Status.success fmt.Printf("%T,%v", s, s) // output: int,0 ``` Syntax: Specific value type ```go enum West string { case AZ: return "Arizona" case CA: return "California" case WA: return "Washington" } fmt.Printf("%T,%v", West.AZ, West.AZ) // output: string,Arizona fmt.Printf("%T,%v", West[2], West[2]) // output: string,Washington fmt.Printf("%T", West) // output: enum []string ``` Syntax: Embedding - all enum types must match ```go enum Midwest string { case IL: return "Illinois" } enum USStates string { West case Midwest.IL: // "Illinois" case NY: return "New York" } fmt.Printf("%T,%v", 
USStates.AZ, USStates.West.AZ) // output: string,Arizona fmt.Printf("%d,%d", len(USStates), len(USStates.West)) // output: 5,3 ``` Syntax: Functions - case matches func signature, and the funcs must return values matching the enum type. ```go func safeDelete(user string) error { return nil } enum Rest error { case Create(user string): if err := db.Store(user); err != nil { return err } return nil case Delete(person string): return safeDelete(person) } err := Rest.Create("srfrog") ``` Syntax: For-loop - similar to a slice, the index match the field order. ```go for k, v := range USStates { fmt.Printf("%d: %s\n", k, v) } for t, f := Rest { if t == 0 { // Create f("srfrog") } } ``` Syntax: Switches ```go switch USStates(0) { case "Arizona": // match case "New York": } // note: West.CA is "California" but index in West is 1. Fixed example. party := West.CA switch party { case USStates.AZ: case USStates.CA: // match case USStates.West.CA: case West.CA: } ``` **Discussion:** The goal is to have an enum syntax that feels like idiomatic Go. I borrowed ideas from Swift and C#, but there's plenty of room to improve. The `enum` works like a slice of values, and it builds on interface. Using enum case blocks as regular blocks and func case matching allows us to extend its use; such as recursive enums and union-like. **Implementation:** I tried to reuse some of the existing code, like switch and slices. But there will be significant work for the func cases and enum introspection. I suppose that internally they could be treated as kind of a slice. Needs more investigation. **Impact:** I think this feature could have a large impact. Specially for simplifying existing code. And it could potentially render `iota` obsolete (?). Many devs, myself included, use const/iota blocks with string types to create and manage status flags. This is a lot of boilerplate code that could basically vanish with enums. 
The grouping part is also beneficial to keep similar values and operations organized, which could save time during development. Finally, the type assignment reduces errors caused by value overwrites, which can be difficult to spot.
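For comparison, the const/iota boilerplate that the proposal says could vanish looks like this today (a minimal sketch; the `Status` type and field names are illustrative, not from any particular codebase):

```go
package main

import "fmt"

// Status is the current idiom the enum proposal aims to replace:
// a typed constant block plus a hand-written String method.
type Status int

const (
	Success Status = iota // 0
	Failure               // 1
)

// String maps each constant back to a display name by hand,
// exactly the kind of boilerplate an enum type would generate for free.
func (s Status) String() string {
	switch s {
	case Success:
		return "success"
	case Failure:
		return "failure"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(Success, Failure) // prints: success failure
}
```

Nothing here prevents `Status(42)` from being passed where a `Status` is expected, which is the type-safety gap the proposal targets.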
LanguageChange,Proposal,LanguageChangeReview
high
Critical
374,645,146
opencv
cv assertion in warpaffine
I've made an 11176*226776 matrix of type CV_8UC1, and when I use the warpAffine function an assertion is raised at the following line. This happens because the total element count of my matrix is bigger than an int can hold. https://github.com/opencv/opencv/blob/e0c888acf78ea1ecfe03c6c7d55e041fc96f1750/modules/core/include/opencv2/core/types.hpp#L1724
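The overflow is easy to confirm with plain arithmetic (assuming `int` is 32-bit signed, as on the usual OpenCV build targets): the total element count of the matrix exceeds INT_MAX, which is consistent with a size-related assertion firing.

```python
# Total number of elements in an 11176 x 226776 single-channel (CV_8UC1) matrix.
rows, cols = 11176, 226776
total = rows * cols

INT_MAX = 2**31 - 1  # upper bound of a 32-bit signed int

print(total)            # 2534448576
print(total > INT_MAX)  # True: the element count no longer fits in an int
```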
incomplete
low
Minor
374,649,421
pytorch
[pytorch] [feature request] Error out if the needed GPU device capability is absent in runtime
I installed master of PyTorch from sources as: `TORCH_CUDA_ARCH_LIST=5.2 python3 setup.py install` (because of https://github.com/pytorch/pytorch/issues/4716) I'm getting: ```python import torch print(torch.cuda.get_device_name(0)) # 'TITAN X (Pascal)' print(torch.cuda.get_device_capability(0)) # (6, 1) torch.rand(2, 2).cuda().sum() # RuntimeError: CUDA error: invalid device function ``` Compilation succeeds fine, so `TORCH_CUDA_ARCH_LIST` seems to have worked during compilation, but not during execution. Did I specify `TORCH_CUDA_ARCH_LIST` wrong? Should it have been 6.1? Or did `TORCH_CUDA_ARCH_LIST` not work? UPD: compiling with TORCH_CUDA_ARCH_LIST=6.1 solves the issue. I thought a lower number (5.2) would work as well. If it's important for it to be precise, could PyTorch detect the mismatch and issue a warning? cc @ngimel
module: cuda,module: error checking,triaged
low
Critical
374,653,184
pytorch
[caffe2] How to use the operators that are not included in brew through the python API?
Hello all, I already know how to use the helper functions provided in the brew API; however, I would also like to use other operators from the catalogue (https://caffe2.ai/docs/operators-catalogue.html) that are not included in brew. How can I use those operators through the python API? Is the following all that is needed for that purpose? ``` model= model_helper.ModelHelper(name="train_net") model.net.Add(......) model.net.Split(.....) ```
caffe2
low
Minor
374,654,946
godot
The filename text box in "Save as" dialog appears selected but doesn't have focus
Godot 3.1 alpha1 When I want to save a scene with `Save As`, a file dialog opens and shows a selected input box for the file name. If the current scene already has a file, the text box will appear selected. ![image](https://user-images.githubusercontent.com/1311555/47605716-0694b980-da02-11e8-8abd-5d3a3146cf32.png) However, it doesn't have keyboard focus, so it requires you to click on it. Doing that also clears the selection, and if you attempt to double-click to reselect the name without the extension, it actually selects the whole text including the extension. This systematic extra interaction is very annoying... Note: it looks like this doesn't always happen, depending on which state the dialog was left in.
bug,topic:editor,confirmed,usability
low
Major
374,667,131
opencv
Bindings generator needs basic `#if` support
Headers parser is broken by pre-processor `#if` Python generator emits warnings: ``` [1/1] Generate files for Python bindings and documentation Note: Class Feature2D has more than 1 base class (not supported by Python C extensions) Bases: cv::Algorithm, cv::class, cv::Feature2D, cv::Algorithm Only the first base class will be used ``` It tries to parse this statement: > class CV_EXPORTS_W Feature2D : public Algorithm class CV_EXPORTS_W Feature2D : public virtual Algorithm This is wrong.
bug,priority: normal,category: python bindings
low
Critical
374,672,571
go
cmd/go: mod edit -fmt reports latest as invalid version
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? ``` go version go1.11.1 linux/amd64 ``` ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/home/myitcv/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/myitcv/gostuff" GOPROXY="" GORACE="" GOROOT="/home/myitcv/gos" GOTMPDIR="" GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/home/myitcv/gostuff/src/github.com/myitcv/gobin/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build194100036=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? ``` export GOPATH=$(mktemp -d) export PATH=$GOPATH/bin:$PATH cd $(mktemp -d) mkdir hello cd hello go mod init example.com/hello go mod edit -require=example.com/goodbye@latest go mod edit -fmt ``` ### What did you expect to see? A zero exit code. ### What did you see instead? A non-zero exit code and: ``` $ go mod edit -fmt go: errors parsing go.mod: /tmp/tmp.OkrAZeigOA/hello/go.mod:3: invalid module version "latest": version must be of the form v1.2.3 ```
NeedsFix,GoCommand,modules
low
Critical
374,684,675
go
x/tools/go/ast/astutil: Apply should not traverse Sel of SelectorExpr
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.11.1 linux/amd64 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="" GOCACHE="/home/dvd/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/dvd/go" GOPROXY="" GORACE="" GOROOT="/usr/local/go" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/home/dvd/work/infergo/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build174838993=/tmp/go-build -gno-record-gcc-switches" ### What did you do? Read the source code and ran tests. astutil.go: ``` 246 case *ast.SelectorExpr: 247 a.apply(n, "X", nil, n.X) 248 a.apply(n, "Sel", nil, n.Sel) ``` ### What did you expect to see? I expect to NOT see line 248. ### What did you see instead? Apply traverses Sel field of SelectorExpr as a child. This is not composable: although Sel is an Ident node implementing ast.Expr, it is not an expression. What happens is that the user has to check in the Ident case that the parent is not SelectorExpr or the field is not "Sel". It is much better in my opinion to just let the API user to process Sel in the SelectorExpr. Not traversing children of SelectorExpr is not a great workaround either because X (the first field) can be as complicated a tree as one wishes.
Tools
low
Critical
374,701,901
pytorch
Support for integer interpolation (torch.nn.functional.interpolate)
## 🚀 Feature Currently, `torch.nn.functional.interpolate` does not support integer formats (I tested int and uint8). I would appreciate having support for these formats, or a description in the documentation mentioning why they are not supported. ## Motivation Consider having an image representing a mask. To scale this image (nearest neighbor) using pytorch I would need to use the following hack: ```python mask = (torch.randn((1,1,50,50)) > 1) torch.nn.functional.interpolate(mask.float(),scale_factor=0.5).int() ``` This is introducing unnecessary casts and less clear compared to the following code, which fails: ```python mask = (torch.randn((1,1,50,50)) > 1) torch.nn.functional.interpolate(mask,scale_factor=0.5) # RuntimeError: upsample_nearest2d_forward is not implemented for type torch.ByteTensor ``` cc @albanD @mruberry
module: nn,triaged,function request
low
Critical
374,713,830
material-ui
[Menu] Max height menus open up elsewhere
<!--- Provide a general summary of the issue in the Title above --> <!-- Thank you very much for contributing to Material-UI by creating an issue! ❤️ To avoid duplicate issues we ask you to check off the following list. --> <!-- Checked checkbox should look like this: [x] --> - [X] This is not a v0.x issue. <!-- (v0.x is no longer maintained) --> - [X] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate. ## Expected Behavior Position at the right position ## Current Behavior Max height menus open up elsewhere ![image](https://user-images.githubusercontent.com/9019581/47611750-029d8180-da6c-11e8-8cac-53c08c825789.png) ## Steps to Reproduce <!--- Provide a link to a live example (you can use codesandbox.io) and an unambiguous set of steps to reproduce this bug. Include code to reproduce, if relevant (which it most likely is). This codesandbox.io template _may_ be a good starting point: https://codesandbox.io/s/github/mui-org/material-ui/tree/master/examples/create-react-app If you're using typescript a better starting point would be https://codesandbox.io/s/github/mui-org/material-ui/tree/master/examples/create-react-app-with-typescript If YOU DO NOT take time to provide a codesandbox.io reproduction, should the COMMUNITY take time to help you? --> Link: https://material-ui.com/demos/menus/#max-height-menus 1. Open Max height menu ## Context <!--- What are you trying to accomplish? How has this issue affected you? Providing context helps us come up with a solution that is most useful in the real world. --> ## Your Environment <!--- Include as many relevant details about the environment with which you experienced the bug. If you encounter issues with typescript please include version and tsconfig. --> | Tech | Version | |--------------|---------| | Material-UI | v3.3.2 | | React | | | Browser | | | TypeScript | | | etc. | |
bug 🐛,component: menu
low
Critical
374,714,611
create-react-app
Ensure @types/jest is the correct version
Right now, we always install the latest `@types/jest`. This is OK since we match the current latest Jest version, but sometimes we lag. We should: 1. [ ] Add a preflight check for this 1. [ ] Somehow make `create-react-app` install the correct version for a given `react-scripts` version
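A preflight check could be as small as comparing installed major versions. This is a hypothetical sketch (the function names are illustrative, and a real check would read the versions out of the two packages' package.json files rather than take them as strings):

```javascript
// Hypothetical preflight check: warn when @types/jest's major version
// does not match the Jest version that react-scripts ships.
function majorVersion(version) {
  // Accepts plain and ranged versions: "23.3.1", "^23.0.0", "~23.1.0", ...
  const match = /^[\^~]?(\d+)/.exec(version);
  return match ? Number(match[1]) : null;
}

function typesJestMatches(jestVersion, typesJestVersion) {
  return majorVersion(jestVersion) === majorVersion(typesJestVersion);
}

console.log(typesJestMatches('23.6.0', '^23.3.1')); // true
console.log(typesJestMatches('23.6.0', '^24.0.0')); // false
```

Matching on the major version only is a deliberate simplification here; the real preflight could tighten this to whatever granularity `react-scripts` needs.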
tag: enhancement,issue: typescript
low
Minor
374,715,405
electron
[Feature Request]: protocol.intercept{Any}Protocol handler ability to call original handler
**Is your feature request related to a problem? Please describe.** It's possible to intercept schemes (including built-ins like `http:`, `https:`, and `file:`), but doing so requires you to handle all requests for that scheme, there's no way to fall-through to the original handler as far as I can tell. Neglecting to call `callback(...)` simply makes the request hang. Using `protocol.interceptHttpProtocol('http', ...)` will lead to an infinite loop if you use `callback(request)`. This prevents using the intercepts to do handy things like set a header on all HTTP requests to a certain domain, or blackholing another (useful for preventing tracking, ads, etc): ```javascript const url = require('url') protocol.interceptHttpProtocol('https', (request, callback) => { const { host, pathname } = url.parse(request.url) if (host === 'www.github.com') { request.headers['X-Foo'] = 'bar' } else if (host === 'www.bitbucket.org') { callback({ statusCode: 403 }) return } callback(request) // This will cause an infinite loop }) ``` Being able to selectively handle requests for an intercepted protocol would be very useful. It would provide the ability to use `protocol.interceptFileProtocol('http', ...)` and only intercept certain requests, such as those with the origin `http://localhost/`. This would allow you to simulate local files as being served over HTTP, getting around several of the limitations of the `file://` protocol. Currently there's no easy way to do this as you'll intercept all HTTP requests and lose the ability to talk to the network (at least over HTTP). Even if your handler could retrieve the content, it would have to give it a file path so that the callback can return the path to the file, or you'd need to use the `string`/`buffer`/`stream` variants and open any files yourself and provide the content that way. Neither is very clean. You can also bundle a local web server, but this is overkill most of the time. 
I've found this particularly painful for trying to get WebAuthn working without a server in the loop, as Chromium only allows the API for `https` domains, or `http://localhost`, not `file://` (or any other protocol). An intercept seems like a perfect fit for this, except you can't selectively handle that small case. Selective handling also opens the ability to make protocol handlers in the future act more like middleware which is composable and reusable. If interceptors could be applied like a stack and pass to the next interceptor, you could have libraries which handle things like setting certain headers. **Describe the solution you'd like** I think this can be solved with a somewhat minor change to the API. There may be dragons in the implementation, I've glanced at the code which implements the `protocol` module, but I'm not too familiar with it. My suggested solution would be to add a third argument to the handler function for the `protocol.intercept{Any}Protocol` functions called `next`, and rename the `callback` argument to `done`. `done` would have the same behavior as the current `callback` but the name change makes it more explicit that it will end processing of the request. The new argument, `next`, would fall through to the original handler, the one which would be restored by `protocol.uninterceptProtocol`. `next` would optionally take a request object, allowing you to modify it before passing it along, such as setting a header, or changing the URL allowing redirecting between files. 
So, the intercept earlier which currently causes an infinite loop could be written as: ```javascript const url = require('url') protocol.interceptHttpProtocol('https', (request, done, next) => { const { host, pathname } = url.parse(request.url) if (host === 'www.github.com') { request.headers['X-Foo'] = 'bar' } else if (host === 'www.bitbucket.org') { done({ statusCode: 403 }) return } next(request) }) ``` You could also write a handler which pretended to be `http://localhost` by reading files from disk: ```javascript const url = require('url') protocol.interceptFileProtocol('http', (request, done, next) => { const { host, pathname } = url.parse(request.url) if (host === 'localhost') { done({ path: path.normalize(`${__dirname}/${pathname}`) }) } else { next() // Invokes the original 'http:' handler for 'request' } }) ``` Or make `file://` check cloud storage if the file isn't found locally: ```javascript const fs = require('fs') const url = require('url') protocol.interceptStreamProtocol('file', (request, done, next) => { let { host, pathname } = url.parse(request.url) pathname = pathname.slice(1) if (host === '') { // Absolute path, check if pathname exists, if not // check cloud storage (Dropbox, OneDrive, etc) if (fs.existsSync(pathname)) { next() // Invoke the original 'file:' handler for 'request' } else { checkCloud(contents => { if (contents) { done(/* stream to contents */) } else { done({ statusCode: 404, data: 'Not Found' }) } }) } } else { // Don't allow file:// with a host done({ statusCode: 400, data: 'Host Not Allowed' }) } }) ``` Combined with intercepting a custom protocol: ```javascript const url = require('url') protocol.registerFileProtocol('atom', (request, callback) => { const { host, pathname } = url.parse(request.url) callback({ path: path.normalize(`${__dirname}/${host + pathname}`) }) }) protocol.interceptStreamProtocol('atom', (request, done, next) => { const { host, pathname } = url.parse(request.url) if (pathname.endsWith('.mp4') || 
pathname === '/dev/video') { done(/* Open a stream to video */) } else { next() } }) ```
enhancement :sparkles:,component/protocol
high
Minor
374,719,728
vscode
Strip trailing whitespace after the cursor when pressing enter
When pressing the enter key, I'd like VSCode's automatic indentation to remove any whitespace directly after the insertion point, so that whatever was after the insertion point on that line ends up correctly indented. Currently, anything after the insertion point is untouched, so the line is not correctly indented and I have to manually delete that extra whitespace. For example, I often have a situation where my cursor is in a line like this: ![indent1](https://user-images.githubusercontent.com/578731/47612472-754d3380-da51-11e8-9b36-5b549874f801.png) With automatic indentation enabled in VSCode, pressing the enter key results in this: ![indent2](https://user-images.githubusercontent.com/578731/47612486-a88fc280-da51-11e8-86cf-76c372dae7d8.png) But that whitespace after the cursor is still there, so the indentation isn't actually correct. Other editors I've used (even Visual Studio, if I recall correctly) would give this result instead: ![indent3](https://user-images.githubusercontent.com/578731/47612474-754d3380-da51-11e8-8fc1-796ea4454aab.png)
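The requested behavior can be described as a pure text transform: split the line at the cursor, indent the new line, and strip the whitespace that used to sit directly after the insertion point. A sketch (not VS Code's actual implementation; the function name and signature are illustrative):

```javascript
// Hypothetical sketch of the requested Enter behavior: the remainder of the
// line is re-indented, and the whitespace that immediately followed the
// cursor is dropped so the text lands exactly at the computed indentation.
function pressEnter(line, column, indent) {
  const before = line.slice(0, column);
  // Strip whitespace directly after the insertion point.
  const after = line.slice(column).replace(/^[ \t]+/, '');
  return [before, indent + after];
}

// Cursor at column 4, between "foo(" and the spaces before "bar":
console.log(pressEnter('foo(   bar)', 4, '    ')); // [ 'foo(', '    bar)' ]
```

Without the `replace` step this reduces to the current behavior, where the stale whitespace is kept and the remainder ends up over-indented.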
feature-request,formatting
low
Major
374,725,795
pytorch
Memory inefficiency in batched matmul when gradients are required
While implementing batched matrix multiplication, I noticed that it is not memory efficient; see the code below:

```python
import torch

# Input tensor
## Batch size=8192, dim=512
x = torch.FloatTensor(8192, 512).requires_grad_().cuda()

if True:
    # Batch strategy 1
    x1 = x.view(8192, 8, 1, 64)  # 512 = 8 * 64
    W1 = torch.FloatTensor(8, 64, 64).cuda()
    out1 = torch.matmul(x1, W1)  # out: [8192, 8, 1, 64]
    print(torch.cuda.memory_allocated())  # 1107427328

if False:
    # Batch strategy 2
    x2 = x.view(8192, 1, 512)  # add one dimension for batch matmul
    W2 = torch.FloatTensor(512, 512).cuda()  # larger than W1
    # out: [8192, 1, 512] -- the same number of elements as out1
    out2 = torch.matmul(x2, W2)
    print(torch.cuda.memory_allocated())  # 34603008
```

However, it turns out that batch strategy 2 has a lower memory cost, even though W2 is larger than W1 and everything else is the same (x1 and x2 have the same number of elements, as do out1 and out2). I also found that after removing `requires_grad_()` the memory costs are similar (~33685504). What is the possible reason for that? (Not sure if this is a real issue, so I also posted on the forum). cc @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk @xwang233 @Lezcano
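For what it's worth, the broadcasting shapes can be sanity-checked without a GPU. Below is a small NumPy sketch (sizes shrunk from the report; all variable names are hypothetical) showing that both strategies produce the same number of output elements; so one plausible explanation for the gap is what autograd saves for the backward pass (e.g. materialized broadcast operands of the 4-D matmul), not the outputs themselves:

```python
import numpy as np

# Shrunken stand-ins for the reported sizes: batch 8192 -> 16, dim 512 -> 8*4.
batch, heads, head_dim = 16, 8, 4
dim = heads * head_dim

x = np.random.rand(batch, dim).astype(np.float32)

# Strategy 1: per-head weights; the batch dim of W1 is broadcast.
x1 = x.reshape(batch, heads, 1, head_dim)
W1 = np.random.rand(heads, head_dim, head_dim).astype(np.float32)
out1 = np.matmul(x1, W1)  # shape: (batch, heads, 1, head_dim)

# Strategy 2: one big weight matrix; a plain 2-D matmul per batch element.
x2 = x.reshape(batch, 1, dim)
W2 = np.random.rand(dim, dim).astype(np.float32)
out2 = np.matmul(x2, W2)  # shape: (batch, 1, dim)

# Same number of output elements either way.
print(out1.size == out2.size)  # True
```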
triaged,module: linear algebra
low
Minor
374,731,117
rust
rustdoc: "Implementations on Foreign Types" does not work bidirectionally for non-std types
While working with the embedded-hal crates for STM32 boards, I ran into a curious issue where the "Implementations on Foreign Types" section lists pretty much what it says on the tin, but the foreign types in question do *not* conversely list all the traits that are 'foreignly' implemented on them. While understandable for `std` types, since the `std` library is externally documented and thus can't be modified to reflect this fact, this makes a lot less sense when e.g. generating documentation for a project through `cargo doc`; and in fact, it can make it very difficult when working with `svd2rust`-based crates to determine what you can do with e.g. GPIO pins. I've reproduced a small testcase of the problem here: https://git.cryto.net/joepie91/rustdoc-foreign-traits-bugcase The generated documentation is included, but can also be found [here](http://cryto.net/~joepie91/rustdoc-bugcase/rustdoc_foreign_traits_bugcase/). You can see that the [`other::Foo`](http://cryto.net/~joepie91/rustdoc-bugcase/other/trait.Foo.html) trait has an `impl Foo for Baz` under implementations on foreign types, but conversely the [`Baz`](http://cryto.net/~joepie91/rustdoc-bugcase/third/struct.Baz.html) struct does *not* list the `Foo` trait anywhere. Ideally, this should work bidirectionally, and the `Foo` trait *should* be listed on `Baz` (e.g. as a 'foreignly implemented trait').

```
cargo 1.31.0-nightly (2d0863f65 2018-10-20)
release: 1.31.0
commit-hash: 2d0863f657e6f45159fc7412267eee3e659185e5
commit-date: 2018-10-20
```
T-rustdoc,A-trait-system
low
Critical
374,735,229
rust
Can't use $crate as macro variable
Compiling:

```rust
fn main() {
    macro_rules! my_crate {
        ( $crate:tt ) => (
            println!("Use the {} crate", $crate);
        );
    }

    my_crate!( "test" );
}
```

fails with the following error:

```
error: no rules expected the token "test"
 --> src/main.rs:7:16
  |
7 |     my_crate!( "test" );
  |                ^^^^
```

I don't know if this is the expected behavior, but at least the error message is not obvious.
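For what it's worth, the failure seems to come from `$crate` being a reserved metavariable inside `macro_rules!` (it expands to the path of the defining crate), so a matcher written as `$crate:tt` never matches anything. A hedged workaround sketch, simply renaming the fragment variable (`$name` is an arbitrary choice):

```rust
// `$crate` has special meaning inside macro_rules!, so it cannot be used
// as a fragment matcher name; any other identifier works fine.
macro_rules! my_crate {
    ( $name:tt ) => {
        format!("Use the {} crate", $name)
    };
}

fn main() {
    println!("{}", my_crate!("test")); // prints "Use the test crate"
}
```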
C-enhancement,A-diagnostics,A-macros,T-compiler
low
Critical
374,736,762
rust
Impl stability is not checked
Found this while implementing #55431. There I want to make an unstable impl of a stable trait for stable types. We can mark impls `#[unstable]`, but they seem to be just ignored.

```rust
#[unstable(feature = "boxed_closure_impls", ...)] // This seems to be ignored!
impl<A, F: FnOnce<A> + ?Sized> FnOnce<A> for Box<F> {
    ...
}
```

A similar case is the `CoerceUnsized` impl for `NonNull` types. This is marked `#[unstable]` but can be used indirectly through coercion. These `#[unstable]` markers don't show up in rustdoc either.

Questions:

- Is this a bug (or an unimplemented feature)? I currently suspect that it's the case, because stability will get complicated when the trait solver is involved.
- If we implement this...
  - When should we consider them unstable? In my mind, coherence checking should always see all impls as visible. The hidden impls RFC (https://github.com/rust-lang/rfcs/pull/2529) may have a similar idea.
  - How should unstable traits behave with respect to type inference? I think they should be removed from the candidates in winnowing if there are multiple options, and selected impls should be double-checked.
- Do we need an RFC in this case?
A-trait-system,A-stability,T-compiler
low
Critical
374,738,142
pytorch
Caffe2: error when using the remove_legacy_pad flag while converting from Caffe to Caffe2
I was converting a basic cat/dog model from Caffe to Caffe2 using `caffe_translator.py`, but the conversion produces a legacy padding argument in the MaxPool operator which I don't want. I tried to remove it with the script's `remove_legacy_pad` flag, which can be enabled by passing it as an argument when running the script. However, it throws an error which I don't understand.

**Error log**

```
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1028 15:43:38.935531   680 init.h:99] Caffe2 GlobalInit should be run before any other API calls.
Traceback (most recent call last):
  File "/home/parthr/project/pytorch/caffe2/python/caffe_translator.py", line 927, in <module>
    input_dims=input_dims,
  File "/home/parthr/project/pytorch/caffe2/python/caffe_translator.py", line 285, in TranslateModel
    return TranslatorRegistry.TranslateModel(*args, **kwargs)
  File "/home/parthr/project/pytorch/caffe2/python/caffe_translator.py", line 280, in TranslateModel
    net = _RemoveLegacyPad(net, net_params, input_dims)
  File "/home/parthr/project/pytorch/caffe2/python/caffe_translator.py", line 131, in _RemoveLegacyPad
    ws.create_blob(external_input).feed_blob(dummy_input)
AttributeError: 'caffe2.python.caffe2_pybind11_state.Blob' object has no attribute 'feed_blob'
```
caffe2
low
Critical
374,744,329
go
cmd/vet: potential false positive in the "suspect or" check
### What version of Go are you using (`go version`)? `go version go1.11.1 linux/amd64` ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/home/jnml/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/jnml" GOPROXY="" GORACE="" GOROOT="/home/jnml/go" GOTMPDIR="" GOTOOLDIR="/home/jnml/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build777349074=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? Using `github.com/cznic/ql`, branch `file2`, checked out at `3497607bfaba518eab540d317a6c168396b8002f`. ``` ==== jnml@4670:~/src/github.com/cznic/ql> go test # github.com/cznic/ql ./file2.go:37: suspect or: szKey != szVal || szKey != szBuf FAIL github.com/cznic/ql [build failed] ==== jnml@4670:~/src/github.com/cznic/ql> ``` ### What did you expect to see? Build succeeds, tests are executed and failures reported. ### What did you see instead? Build fails. ### Additional info The line go vet does not like [is here](https://github.com/cznic/ql/blob/3497607bfaba518eab540d317a6c168396b8002f/file2.go#L37) ```go func init() { if al := cfile.AllocAllign; al != 16 || al <= binary.MaxVarintLen64 { panic("internal error") } if szKey != szVal || szKey != szBuf { // <-- line 37 panic("internal error") } } const ( magic2 = "\x61\xdbql" szBuf = szKey szKey = cfile.AllocAllign // BTree szVal = szKey // BTree wal2PageLog = 12 //TODO tune pagelog ) ``` Line 37 tests for equality of the three values `szKey, szValue, szBuf` which other code depends on as it would have to be more complicated for the general case and the equality allow simplifying it. 
I see nothing wrong about the line; I don't think `vet` should reject it. The `vet` check is located [here](https://github.com/golang/go/blob/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/src/cmd/vet/bool.go#L90)

```go
// checkSuspect checks for expressions of the form
//	x != c1 || x != c2
//	x == c1 && x == c2
// where c1 and c2 are constant expressions.
// If c1 and c2 are the same then it's redundant;
// if c1 and c2 are different then it's always true or always false.
// Exprs must contain only side effect free expressions.
func (op boolOp) checkSuspect(f *File, exprs []ast.Expr) { ... }
```

The comment `if c1 and c2 are different then it's always true or always false.` is correct, but that does not imply something is wrong. There's nothing wrong with implementing

```
a == b == c
```

in Go as

```go
a == b && a == c
```

where the negation, rejected by `vet`, is

```go
a != b || a != c
```

Workaround: `vet` is happy if the expression is rewritten as

```go
szKey != szVal || szVal != szBuf
```

even though the two versions are logically equivalent.
NeedsInvestigation,Analysis
medium
Critical
374,758,169
pytorch
[Caffe2] cmake3 detection error?
cmake3 is detected, but the version check fails: `StrictVersion` cannot parse the four-component cmake version string `2.8.12.2`.

```
$ python setup.py install
Building wheel torch-1.0.0a0+1fe8278
running install
setup.py::run()
running build_deps
setup.py::build_deps::run()
+ SYNC_COMMAND=cp
++ command -v rsync
+ '[' -x /usr/bin/rsync ']'
+ SYNC_COMMAND='rsync -lptgoD'
+ CMAKE_COMMAND=cmake
++ command -v cmake3
+ [[ -x /usr/bin/cmake3 ]]
++ command -v cmake
+ [[ -x /usr/bin/cmake ]]
++ cmake --version
++ grep version
++ awk '{print $3}'
+ CMAKE_VERSION=2.8.12.2
++ cmake3 --version
++ awk '{print $3}'
++ grep version
+ CMAKE3_VERSION=3.12.1
++ /home/apiszcz/p/vx/lib/a3/bin/python -c 'from distutils.version import StrictVersion; print(1 if StrictVersion("2.8.12.2") < StrictVersion("3.5.0") and StrictVersion("3.12.1") > StrictVersion("2.8.12.2") else 0)'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/apiszcz/p/vx/lib/a3/lib/python3.5/distutils/version.py", line 40, in __init__
    self.parse(vstring)
  File "/home/apiszcz/p/vx/lib/a3/lib/python3.5/distutils/version.py", line 137, in parse
    raise ValueError("invalid version number '%s'" % vstring)
ValueError: invalid version number '2.8.12.2'
+ CMAKE3_NEEDED=
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-qnnpack nccl caffe2'
```
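The traceback suggests the root cause is that `distutils.version.StrictVersion` rejects four-component version strings like CMake 2's `2.8.12.2`. A hedged sketch of a looser comparison that handles it (a simple tuple-based parser, not the actual setup.py code):

```python
def parse_version(s):
    # Split on dots and compare numerically; unlike distutils'
    # StrictVersion, this accepts four-part versions such as "2.8.12.2".
    return tuple(int(part) for part in s.split("."))

# The comparison setup.py is trying to make:
needs_cmake3 = (parse_version("2.8.12.2") < parse_version("3.5.0")
                and parse_version("3.12.1") > parse_version("2.8.12.2"))
print(needs_cmake3)  # True
```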
caffe2
low
Critical
374,808,526
go
x/text/number: support formatting of custom types
<sup>This is based on [a StackOverflow question](https://stackoverflow.com/q/52591678/1892060).</sup> Package `golang.org/x/text/number` allows locale-specific formatting of numbers à-la `fmt`. Unfortunately, unlike `fmt`, this package doesn't have an interface to let types other than the built-in numeric types tell the package how to format them. Or rather, [the interface](https://godoc.org/golang.org/x/text/internal/number#Converter) is there, but it's in an internal package, so the users can't really implement it. This wouldn't be bad if `math/big` types or types whose `String()` method produces numbers were supported, but as of now that is [still a TODO](https://github.com/golang/text/blob/d82c1812e304abfeeabd31e995a115a2855bf642/internal/number/decimal.go#L340).
NeedsInvestigation,FeatureRequest
low
Minor
374,812,405
rust
libtest: allow for controlling runtime or iterations of benchmarks
There have been instances where it would have been nice to be able to control the minimum number of iterations or the maximum runtime of a set of benchmarks. Currently looking at libtest it seems these values are hardcoded at [1 minimum iteration](https://github.com/rust-lang/rust/blob/master/src/libtest/lib.rs#L1616) and a max of [3 seconds](https://github.com/rust-lang/rust/blob/master/src/libtest/lib.rs#L1650). One easy way to change this would be to add --min_iters and --max_secs command line arguments to allow for controlling these two parameters (as well as corresponding arguments in 'cargo bench'). I noticed #50297 but wasn't sure if this sort of functionality would be included in the implementation of that RFC or if it could be implemented independent of it.
T-libs-api,C-feature-request,A-libtest
low
Minor
374,819,404
flutter
Documentation request: Operations not allowed during the build cycle
There is an interaction between two classes of operations that could be better documented: 1) operations that occur during the widget build cycle. This includes e.g. `build()`, `didChangeDependencies()`, and probably others. 2) operations that are not allowed during the build cycle, but which require a `BuildContext`. This includes e.g. `showDialog`, many `Navigator` methods, and probably others. Since the build method is one of the most common ways to get access to a `BuildContext`, it's a fairly non-obvious restriction. Since there are a number of operations in each class, putting a full description of the issue in each one's API docs might be a lot of duplicated explanation. I wonder instead if there could be a deep dive doc in either the "Want to skill up?" or "Specialized topics" sections here: https://flutter.io/docs/. The API docs for each affected operation could then point to this central explanation. For full context, see the discussion in this thread: https://groups.google.com//forum/#!topic/flutter-dev/V9pCG3FgyMM
framework,d: api docs,c: proposal,P2,team-framework,triaged-framework
low
Minor
374,823,336
gin
A new method for parsing form data array type fields
- go version: 1.9
- gin version (or commit ref): master branch
- operating system:

## Description

![image](https://user-images.githubusercontent.com/24504247/47624659-c1839b00-db59-11e8-9fbf-6a54da08c7f4.png)

The image above shows my HTTP POST params. I tried to use `c.GetPostFormMap` to parse the `processes` field, but I didn't get the expected results. So I wrote a solution myself, as follows:

```go
// GetPostFormArrayMap returns a map for a given form key, plus a boolean value
// whether at least one value exists for the given key.
func (c *Context) GetPostFormArrayMap(key string) ([]map[string]string, bool) {
	req := c.Request
	req.ParseForm()
	req.ParseMultipartForm(c.engine.MaxMultipartMemory)
	dicts, exist := c.getArrayMap(req.PostForm, key)

	if !exist && req.MultipartForm != nil && req.MultipartForm.File != nil {
		dicts, exist = c.getArrayMap(req.MultipartForm.Value, key)
	}

	return dicts, exist
}

// getArrayMap is an internal method and returns a map which satisfies the conditions.
func (c *Context) getArrayMap(m map[string][]string, key string) ([]map[string]string, bool) {
	dicts := make(map[string]map[string]string)
	exist := false
	for k, v := range m {
		if i := strings.IndexByte(k, '['); i >= 1 && k[0:i] == key {
			// determine the index key, e.g. the "1" in name[1]
			num := ``
			if j := strings.IndexByte(k[i+1:], ']'); j >= 1 {
				num = k[i+1:][:j]
			}
			if j := strings.IndexByte(k[i+1:], '['); j >= 1 {
				exist = true
				field := k[i+1:][j+1 : len(k[i+1:])-1]
				if dict, ok := dicts[num]; ok {
					dict[field] = v[0]
					dicts[num] = dict
				} else {
					dicts[num] = map[string]string{field: v[0]}
				}
			}
		}
	}
	arr := make([]map[string]string, 0)
	for _, value := range dicts {
		arr = append(arr, value)
	}
	return arr, exist
}
```

The image below shows my parse result.

![image](https://user-images.githubusercontent.com/24504247/47624739-93eb2180-db5a-11e8-99b8-6f39422c8df2.png)

## Screenshots
feature
low
Minor
374,840,361
pytorch
torch.utils.checkpoint is not compatible with nn.DataParallel
## 🐛 Bug

`torch.utils.checkpoint.checkpoint_sequential` raises an error when the model is wrapped in `nn.DataParallel`.

## To Reproduce

Steps to reproduce the behavior:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

net = nn.Sequential(
    # some net
)
net = nn.DataParallel(net)
data = torch.randn(1, 3, 224, 224, requires_grad=True)
out = checkpoint_sequential(net, 1, data)  # error here
```

## Expected behavior

`checkpoint_sequential` is expected to work well with `nn.DataParallel`.

## Environment

- PyTorch: 0.4
- OS: Ubuntu 16.04
module: checkpoint,triaged,module: data parallel
low
Critical
374,843,055
opencv
fisheye.cpp (lines 983 and 997): in `cv::Mat(cv::Mat((imageLeft - projected).t()).reshape(1, 1).t()).copyTo(ekk.rowRange(0, 2 * n_points));`, the size of imageLeft is [1*54] but the size of projected is [54*1]
##### System information (version)

- OpenCV => :grey_question:
- Operating System / Platform => :grey_question:
- Compiler => :grey_question:

##### Detailed description

##### Steps to reproduce
category: calib3d,incomplete,needs reproducer
low
Critical
374,857,129
godot
Animation Preview does not work when the process mode is set to Manual
**Godot version:**
`v3.1.alpha.official`

**OS/device including version:**

**Issue description:**

When the animation process mode is set to `Manual`, the animation previews don't work.

![manual-doesnt-let-you-play](https://user-images.githubusercontent.com/192675/47629784-6e2e3000-dafa-11e8-801b-2317cb32c050.gif)

Although it kind of makes sense from an implementation standpoint why it doesn't play, from a usability standpoint it's really confusing. If you want a `Manual` animation, you have to set it to `Idle` or `Physics` when tweaking the animation, then back to `Manual` when you're done. If this is intentional, there should at least be a ⚠️ to warn the user.
enhancement,topic:editor,confirmed,usability,topic:animation
low
Minor
374,904,429
flutter
flutter analyze should have option for machine-readable output
`dartanalyzer` has `--format machine`, and we are using that for processing analysis results with `pana` (and on the pub site). Because `flutter analyze` has no such option, we still fall back to `dartanalyzer`, causing inconsistencies like [the bug last week](https://github.com/dart-lang/pub-dartlang-dart/issues/1728). Providing the same output as `dartanalyzer` gives us would be perfect.
c: new feature,tool,P2,team-tool,triaged-tool
low
Critical
374,906,895
rust
Native FreeBSD and OpenBSD testing on CI
https://builds.sr.ht/ offers native (no qemu emulation) FreeBSD and OpenBSD CI services. These are currently free and invite only, but invites can be requested at [email protected] . They plan to offer these as paid services in the future at $2, $5, and $10 / month price points depending on the plan. This is currently how https://github.com/swaywm/wlroots is tested on these platforms. Maybe we should give these a try.
O-openbsd,O-freebsd,T-infra
low
Minor
374,952,260
go
cmd/cgo: avoid calls to cgoCheckPointer when debug.cgocheck=0
With `GODEBUG=cgocheck=0` Go still makes calls to `cgoCheckPointer`, which bail out early in https://github.com/golang/go/blob/8f4fd3f34e8e218cb90435b5a8c6ba9be23a1e1e/src/runtime/cgocall.go#L412. Every such call adds only a few ns, but funcs with many arguments can end up accumulating a lot of them. https://golang.org/cl/142884 changes the cgo-generated code to:

```go
defer func() func() {
	_cgo0 := x
	_cgo1 := y
	return func() {
		_cgoCheckPointer(_cgo0)
		_cgoCheckPointer(_cgo1)
		C.f(_cgo0, _cgo1)
	}
}()()
```

I propose that, instead of checking `debug.cgocheck == 0` inside `cgoCheckPointer`, the check happen before calling `cgoCheckPointer`, so cgo would generate:

```go
defer func() func() {
	_cgo0 := x
	_cgo1 := y
	return func() {
		if debug.cgocheck != 0 {
			_cgoCheckPointer(_cgo0)
			_cgoCheckPointer(_cgo1)
		}
		C.f(_cgo0, _cgo1)
	}
}()()
```
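A minimal runnable sketch of the idea (all names hypothetical, not the actual generated code): hoisting the flag test out of the checker means that, when checking is off, the check calls never happen at all:

```go
package main

import "fmt"

// cgocheck stands in for the runtime's debug.cgocheck setting.
var cgocheck int32 = 0

// checksRun counts how many times the (stand-in) checker was invoked.
var checksRun int

// cgoCheckPointer stands in for the runtime's pointer validator.
func cgoCheckPointer(p interface{}) { checksRun++ }

// f stands in for a cgo call such as C.f.
func f(x, y *int) {}

func main() {
	a, b := 1, 2
	// Proposed shape: guard at the call site instead of branching
	// inside cgoCheckPointer on every call.
	if cgocheck != 0 {
		cgoCheckPointer(&a)
		cgoCheckPointer(&b)
	}
	f(&a, &b)
	fmt.Println(checksRun) // 0: no checker calls when cgocheck == 0
}
```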
Performance,NeedsInvestigation,compiler/runtime
medium
Critical
374,976,647
neovim
"tnoremap jk <C-\><C-n>" delay when holding down "j"
<!-- Before reporting: search existing issues and check the FAQ. --> - `nvim --version`: v0.3.2 - Vim (version: ) behaves differently? yes - Operating system/version: ubuntu16.04 - Terminal name/version: - `$TERM`: ### Steps to reproduce using `nvim -u NORC` ``` 1. nvim -u NORC 2. tnoremap jk <C-\><C-n> 3. :term 4. start insert and hold down the key `j` ``` ### Actual behaviour When I hold down the key `j`, it has no echo, When I release the key `j`, it show as ![image](https://user-images.githubusercontent.com/19503791/47652479-068ee800-dbc1-11e8-9ed3-4d2f4dc47fc7.png) ### Expected behaviour As same as vim8.1
bug,terminal,event-loop,core
low
Major
374,990,570
TypeScript
Inline comments are stripped in emitted declaration files
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.1.3 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** inline comments striped **tsconfig** ```json { "compilerOptions": { "moduleResolution": "node", "module": "esnext", "target": "es5", "lib": [ "dom", "es2018" ], "strict": true, "esModuleInterop": true, "allowSyntheticDefaultImports": true, "suppressImplicitAnyIndexErrors": true, "forceConsistentCasingInFileNames": true, "jsx": "react", "sourceMap": true, "outDir": "dist/esm5", "declaration": true, "declarationDir": "dist/types", "declarationMap": true, "stripInternal": true, "resolveJsonModule": true, "importHelpers": true }, "include": [ "./src" ], "compileOnSave": false, "buildOnSave": false } ``` **Code:** ```ts // I'll be missing after tsc 😭 /** * I'll be here after tsc 😎 */ export const api = () => { /* some code ... */ } ``` **Expected behavior:** Keeping both block and inline comments in declaration files in both transpiled files and declaration files **Actual behavior:** Inline comments are removed in declaration files ![image](https://user-images.githubusercontent.com/1223799/47649377-6e234400-db7d-11e8-8bb1-85bef3edb025.png) And correctly kept within transpiled files ![image](https://user-images.githubusercontent.com/1223799/47649543-e5f16e80-db7d-11e8-9988-035da78f6461.png)
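If this matters for a build today, one workaround (assuming, as the screenshots suggest, that declaration emit keeps JSDoc blocks attached to exported declarations) is to fold the detached `//` note into the JSDoc comment so it survives `.d.ts` emission:

```typescript
/**
 * I'll be here after tsc 😎
 * (The detached inline note now lives inside the JSDoc block attached to
 * the declaration, so it is preserved in the emitted declaration file.)
 */
export const api = (): void => {
    /* some code ... */
};
```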
Suggestion,Needs Proposal
low
Critical
374,997,298
go
os/exec: race on cmd.Wait() might lead to panic
The current implementation of [cmd.Wait](https://golang.org/pkg/os/exec/#Cmd.Wait) has a race condition: if multiple goroutines call it, it might cause a panic. In the first part of the method, copied below, two concurrent calls might check `c.finished`, get false, set it to true, invoke `c.Process.Wait()` and close `c.waitDone` before any error checking is performed. `c.waitDone` is a `chan struct{}`, and a double close will cause a panic. ```go func (c *Cmd) Wait() error { if c.Process == nil { return errors.New("exec: not started") } if c.finished { return errors.New("exec: Wait was already called") } c.finished = true state, err := c.Process.Wait() if c.waitDone != nil { close(c.waitDone) } //[...] ``` Since waiting is a synchronization primitive I'd expect one of the two: * The **documentation** should state that this is not safe for concurrent use (probably the best approach here) * Some form of synchronization to **prevent races**. I would either atomically CAS `c.finished` (but I'm not a fan of atomics and would require to change type to some sort of int) or protect `c` with a mutex, which would be my suggested solution for this case I would happily send in CLs in both cases.
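The suggested synchronization can be sketched in isolation (a toy `waiter` type, not the real `exec.Cmd`): an atomic compare-and-swap on `finished` lets exactly one caller proceed to the `close`, so concurrent `Wait` calls can never double-close the channel:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// waiter sketches the suggested CAS guard: only the first caller of Wait
// proceeds; later (or concurrent) callers get an error instead of racing
// on the close of the done channel.
type waiter struct {
	finished int32
	done     chan struct{}
}

func (w *waiter) Wait() error {
	if !atomic.CompareAndSwapInt32(&w.finished, 0, 1) {
		return fmt.Errorf("exec: Wait was already called")
	}
	close(w.done)
	return nil
}

func main() {
	w := &waiter{done: make(chan struct{})}
	var wg sync.WaitGroup
	errs := make(chan error, 10)
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			errs <- w.Wait()
		}()
	}
	wg.Wait()
	close(errs)
	nilCount := 0
	for err := range errs {
		if err == nil {
			nilCount++
		}
	}
	fmt.Println(nilCount) // 1: exactly one winner, no double close, no panic
}
```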
NeedsDecision
medium
Critical