| Column   | Type          | Range / Values |
| -------- | ------------- | -------------- |
| id       | int64         | 393k – 2.82B   |
| repo     | stringclasses | 68 values      |
| title    | stringlengths | 1 – 936        |
| body     | stringlengths | 0 – 256k       |
| labels   | stringlengths | 2 – 508        |
| priority | stringclasses | 3 values       |
| severity | stringclasses | 3 values       |
383,674,092
pytorch
[Caffe2] How to use the Euclidean loss (L2) as the output of a CNN model?
Dear all, I would like to use the Euclidean loss (L2) as the output of my CNN model. I am using `brew.db_input` to feed the input layer. Should I proceed in the following way? ``` def add_input(self, model, batch_size, db, db_type, device_opts): with core.DeviceScope(device_opts): # load the data data_uint8, label = brew.db_input( model, blobs_out=["data_uint8", "label"], batch_size=batch_size, db=db, db_type=db_type, ) # cast the data to float data = model.Cast(data_uint8, "data", to=core.DataType.FLOAT) # scale data from [0,255] down to [0,1] data = model.Scale(data, data, scale=float(1./256)) # don't need the gradient for the backward pass data = model.StopGradient(data, data) dataset_size = int(lmdb.open(db).stat()['entries']) return data, label, dataset_size data, label, train_dataset_size = self.add_input(train_model, batch_size=batch_size, db=os.path.join(self._data_dir_, 'train-nchw-lmdb'), db_type='lmdb', device_opts=device_opts) predictions = self.create_model(train_model, data, label, device_opts=device_opts) def create_model(self, model, data, label, device_opts): with core.DeviceScope(device_opts): data = data conv1_ = brew.conv(model, data, 'conv1_', dim_in=3, dim_out=96, kernel=11, stride=4) relu1_ = brew.relu(model, conv1_, conv1_) pool1_ = brew.max_pool(model, relu1_, 'pool1_', kernel=3, stride=2) conv2_ = brew.conv(model, pool1_, 'conv2_', dim_in=96, dim_out=256, kernel=5, stride=4) relu2_ = brew.relu(model, conv2_, conv2_) pool2_ = brew.max_pool(model, relu2_, 'pool2_', kernel=3, stride=2) conv3_ = brew.conv(model, pool2_, 'conv3_', dim_in=256, dim_out=384, kernel=3, stride=1) relu3_ = brew.relu(model, conv3_, conv3_) conv4_ = brew.conv(model, relu3_, 'conv4_', dim_in=384, dim_out=384, kernel=3, stride=1) relu4_ = brew.relu(model, conv4_, conv4_) conv5_ = brew.conv(model, relu4_, 'conv5_', dim_in=384, dim_out=256, kernel=3, stride=1) relu5_ = brew.relu(model, conv5_, conv5_) pool5_ = brew.max_pool(model, relu5_, 'pool5_', kernel=3, stride=2) fc5_ = brew.fc(model, pool5_, 'fc5_', dim_in=256 * 2 * 3, dim_out=4096) relu6_ = brew.relu(model, fc5_, fc5_) dropout6_ = brew.dropout(model, relu6_, 'dropout6_', ratio=0.5, is_test=False) fc6_ = brew.fc(model, dropout6_, 'fc6_', dim_in=4096, dim_out=4096) relu7_ = brew.relu(model, fc6_, fc6_) dropout7_ = brew.dropout(model, relu7_, 'dropout7_', ratio=0.5, is_test=False) fc7_ = brew.fc(model, dropout7_, 'fc7_', dim_in=4096, dim_out=256) relu8_ = brew.relu(model, fc7_, fc7_) dropout8_ = brew.dropout(model, relu8_, 'dropout8_', ratio=0.5, is_test=False) relu9_ = brew.relu(model, dropout8_, dropout8_) fc9_ = brew.fc(model, relu9_, 'fc9_', dim_in=256, dim_out=14) dist = model.net.SquaredL2Distance([label, fc9_], 'dist') predictions = dist.AveragedLoss([], ['predictions']) return predictions ```
caffe2
low
Minor
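A minimal sketch (not a confirmed answer) of how an L2 loss is typically wired in Caffe2, assuming the blob names from the issue above: the averaged `SquaredL2Distance` becomes the training loss fed to the gradient builder, while `fc9_` stays the prediction output, rather than treating the averaged loss itself as `predictions`.

```python
# Sketch only; blob names follow the issue's create_model.
dist = model.net.SquaredL2Distance([label, fc9_], 'dist')
loss = dist.AveragedLoss([], ['loss'])  # scalar training loss

# In the training setup, gradients then flow back from the loss blob:
model.AddGradientOperators([loss])
```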
383,685,867
javascript-algorithms
detect-cycle-algorithms detect just one cycle
The detect-cycle algorithms detect just one cycle. It would be great if the algorithms could recognize more than one cycle, in case there are several (one possible approach is sketched below).
enhancement
low
Major
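A hedged sketch (names are illustrative, not the repository's API) of a DFS that reports one cycle per back edge, which covers the case of several disjoint cycles; enumerating all elementary cycles would need something like Johnson's algorithm.

```ts
type Graph = Map<string, string[]>;

function findCycles(graph: Graph): string[][] {
  const cycles: string[][] = [];
  const color = new Map<string, number>(); // 0 = white, 1 = grey, 2 = black
  const stack: string[] = [];

  const dfs = (v: string): void => {
    color.set(v, 1);
    stack.push(v);
    for (const w of graph.get(v) ?? []) {
      if ((color.get(w) ?? 0) === 0) {
        dfs(w);
      } else if (color.get(w) === 1) {
        // Back edge v -> w closes a cycle: slice it off the DFS stack.
        cycles.push(stack.slice(stack.indexOf(w)));
      }
    }
    stack.pop();
    color.set(v, 2);
  };

  for (const v of graph.keys()) {
    if ((color.get(v) ?? 0) === 0) dfs(v);
  }
  return cycles;
}

// Example: two independent cycles A->B->A and C->D->C.
const g: Graph = new Map([
  ['A', ['B']], ['B', ['A']],
  ['C', ['D']], ['D', ['C']],
]);
console.log(findCycles(g)); // [['A','B'], ['C','D']]
```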
383,688,279
angular
Animations: animateChild() fails to run child transitions when using state() with '*' star style
# 🐞 bug report ### Affected Package @angular/animations ### Description When an animation transitions to/from a state defined with the '*' (wildcard) style, animateChild() fails to run the transition to the "starred" state and jumps to the final state instead (or runs the transition from the final state to itself). ## 🔬 Minimal Reproduction Take a look here: https://stackblitz.com/edit/wizdm-animatechild-toggler?file=src%2Fapp%2Fmat-toggler%2Fmat-toggler-animations.ts ## 🌍 Your Environment **Angular Version:** Angular CLI: 7.0.5 Node: 9.11.1 OS: darwin x64 Angular: 7.0.3 ... animations, cdk, common, compiler, compiler-cli, core, forms ... http, language-service, material, material-moment-adapter ... platform-browser, platform-browser-dynamic, router Package Version ------------------------------------------------------------ @angular-devkit/architect 0.8.6 @angular-devkit/build-angular 0.10.2 @angular-devkit/build-ng-packagr 0.8.6 @angular-devkit/build-optimizer 0.10.2 @angular-devkit/build-webpack 0.10.2 @angular-devkit/core 0.8.6 @angular-devkit/schematics 7.0.5 @angular/cli 7.0.5 @angular/fire 5.1.0 @angular/flex-layout 7.0.0-beta.19 @ngtools/json-schema 1.1.0 @ngtools/webpack 7.0.2 @schematics/angular 7.0.5 @schematics/update 0.10.5 ng-packagr 4.4.0 rxjs 6.3.3 typescript 3.1.6 webpack 4.19.1 **Anything else relevant?** Building on macOS, running on both Chrome and Safari. Thanks!
type: bug/fix,area: animations,freq2: medium,P3
low
Critical
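A hedged, hypothetical reduction of the setup described above (trigger and selector names are made up, not taken from the StackBlitz): a parent state uses the '*' wildcard style, and the child transition is queried via animateChild().

```ts
import {
  trigger, state, style, transition, group, query, animate, animateChild,
} from '@angular/animations';

// Parent trigger: the 'open' state relies on the '*' wildcard height.
export const toggler = trigger('toggler', [
  state('open', style({ height: '*' })),
  state('closed', style({ height: '0px' })),
  transition('open <=> closed', group([
    animate('300ms ease'),
    // Per the report, the child transition is skipped / jumps here:
    query('@child', animateChild(), { optional: true }),
  ])),
]);
```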
383,701,519
node
Does `autoDestroy` make sense for http2 streams?
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Subsystem**: http2 Now that we have `autoDestroy` support in the streams implementation, I think we would like to move streams towards using that option if possible. I have a hard time figuring out whether this makes sense for HTTP/2 streams (i.e., should we destroy streams once both the readable and writable sides are finished?). I have a hard time understanding the HTTP/2 stream lifecycle behaviour, to be honest, and I’d appreciate any explanations about it :) /cc @nodejs/http2 @mcollina
question,http2
low
Critical
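For reference, a minimal sketch of the plain-streams option under discussion (not http2-specific, and assuming a Node version where `autoDestroy` exists): with the option set, the stream destroys itself once both sides have finished.

```js
const { Duplex } = require('stream');

const d = new Duplex({
  autoDestroy: true, // destroy automatically after both sides finish
  read() { this.push(null); },      // readable side ends immediately
  write(chunk, enc, cb) { cb(); },  // writable side accepts and finishes
});

d.on('close', () => console.log('auto-destroyed'));
d.end();     // finish the writable side
d.resume();  // drain the readable side so 'end' fires
```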
383,703,598
nvm
`nvm install-latest-npm`: unknown issue
<!-- Thank you for being interested in nvm! Please help us by filling out the following form if you‘re having trouble. If you have a feature request, or some other question, please feel free to clear out the form. Thanks! --> - Operating system and version: - `nvm debug` output: <details> <!-- do not delete the following blank line --> ```sh ``` </details> - `nvm ls` output: <details> <!-- do not delete the following blank line --> ```sh ``` </details> - How did you install `nvm`? (e.g. install script in readme, Homebrew): - What steps did you perform? - What happened? - What did you expect to happen? - Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`? <!-- if this does not apply, please delete this section --> - If you are having installation issues, or getting "N/A", what does `curl -I --compressed -v https://nodejs.org/dist/` print out? <details> <!-- do not delete the following blank line --> ```sh ``` </details>
installing node,needs followup
low
Critical
383,704,726
opencv
Inclusion of JABCode and HiQ
# Introduction JABCode is a color barcode used by the German government (Federal Office for Information Security) for tagging items. Currently there are no easy-to-use solutions available for the public to test and experiment with. HiQ is a color barcode based on QR codes, created by students at the Chinese University of Hong Kong (a top university in Hong Kong). # Proposed Solution 1. The inclusion of JABCode and HiQ into OpenCV 2. Making Python, JS and Java APIs available in their respective package repos for ease of use # Impact on existing code ## For JABCode There is no compatibility problem for JABCode inclusion, as it is written in C (moving to C++ should be easy), and the code for JABCode is licensed under the LGPL. ## For HiQ For HiQ, the source code is written in Java, uses ZXing, and the license is Apache 2.0. It would be great if the code were rewritten in C/C++. Also HiQ # References ## For JABCode * Repo https://github.com/jabcode/jabcode * Website https://jabcode.org/ * Standard https://www.bsi.bund.de/EN/Publications/TechnicalGuidelines/TR03137/TechnicalGuidelines_03137_node.html ## For HiQ * Repo https://github.com/ouyangzhibo/HiQ-Robust-and-Fast-Decoding-of-High-Capacity-Color-QR-Codes * Website http://authpaper.net/
feature,category: objdetect,GSoC,evolution,effort: few weeks
low
Major
383,714,356
pytorch
[jit][script] support slicing with tensor literals
See #14311 for context. ```py @torch.jit.script def foo(): a = torch.rand(3, 4, 5, 6) a[[0, 2], :, [3, 1]] = 1 # and a[[0, 2], 1:3, [3, 1]] = 1 return a ``` Gets tripped up, interpreting tensor literals as `int[]`.
oncall: jit
low
Minor
383,729,867
rust
Expose Windows VolumeSerialNumber and FileIndex/FileId in std::os::windows::fs::MetadataExt
Currently, using `std::os::unix::fs::MetadataExt` and `std::os::linux::fs::MetadataExt` it is possible to get the inode number of a file. This can be useful in determining whether two files are the same. There's a similar concept in Windows, consisting of a few parts: `VolumeSerialNumber`: The serial number of the volume that contains the file. `FileIndex`: A 64 bit unique (within a volume) identifier that is associated with a file. `FileId`: A 128 bit unique (within a volume) identifier that is associated with a file (this is the "evolution" of the above, according to the docs currently only used by ReFS). Sources: [_BY_HANDLE_FILE_INFORMATION](https://docs.microsoft.com/en-us/windows/desktop/api/fileapi/ns-fileapi-_by_handle_file_information) [_FILE_ID_INFO](https://docs.microsoft.com/en-us/windows/desktop/api/winbase/ns-winbase-_file_id_info) Combining the `VolumeSerialNumber` and `FileIndex`/`FileId` of a file is thus a unique identifier of a file on a single machine (at a given time; unique IDs may be reused). I would like these to get added to `std::os::windows::fs::MetadataExt`. The changes needed would be fairly tiny (literally just exposing already available data in case of `VolumeSerialNumber` and `FileIndex` [here](https://github.com/rust-lang/rust/blob/6a2d1b4e15d5de90f8c36181b1d429da658adfd2/src/libstd/sys/windows/fs.rs#L299-L309) but an additional function call for `FileId`), and it wouldn't introduce any breaking changes as far as I can see, it would only add 2/3 methods. Would this require an RFC? I see that the RFCs README lists "Additions to std." as an as example, but on the other hand I wouldn't say this is a "substantial" change.
O-windows,T-libs-api,A-io
low
Major
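A hedged sketch of what the proposed additions might look like; the trait name, method names, and `Option` wrapping are guesses for illustration, not a finalized std API.

```rust
// Hypothetical extension trait mirroring BY_HANDLE_FILE_INFORMATION /
// FILE_ID_INFO; Option covers metadata that was not obtained via a handle.
pub trait MetadataExtProposed {
    /// dwVolumeSerialNumber: serial number of the containing volume.
    fn volume_serial_number(&self) -> Option<u32>;
    /// nFileIndexHigh/nFileIndexLow combined into a 64-bit file index.
    fn file_index(&self) -> Option<u64>;
    /// 128-bit FileId from FILE_ID_INFO (currently ReFS only).
    fn file_id(&self) -> Option<u128>;
}
```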
383,759,831
go
net/http: logic error in http2ConfigureServer?
Reported against tip, 649b89377e91ad6dbe710784f9e662082d31a1ff https://github.com/golang/go/blob/649b89377e91ad6dbe710784f9e662082d31a1ff/src/net/http/h2_bundle.go#L4021 The logic of this line is wrong (the sequence ValidCipher, BadCipher, BadCipher will go through); it should be: ```go if http2isBadCipher(cs) { sawBad = true } if sawBad { return fmt.Errorf("http2: TLSConfig.CipherSuites index %d contains an HTTP/2-approved cipher suite (%#04x), but it comes after unapproved cipher suites. With this configuration, clients that don't support previous, approved cipher suites may be given an unapproved one and reject the connection.", i, cs) } ```
NeedsInvestigation
low
Critical
383,818,284
go
x/image/font: wrong rendering of intersecting paths
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.11.1 windows/amd64 </pre> ### What did you do? Parse the APL386 TrueType font with github.com/golang/freetype/truetype and render U+2262 with golang.org/x/image/font Drawer.DrawString. I'll update with a link to example code and images shortly. ### What did you expect to see? A rendering of the rune, which has intersecting paths. The intersecting regions should be filled (with black color). ### What did you see instead? Intersecting regions are not filled; they are left white.
NeedsInvestigation
low
Minor
383,832,599
rust
Enhance `match` implied borrow
```rust struct Foo { bar: Option<String> } fn f(foo: &Foo) -> i32 { match &foo.bar { None => 1, Some(s) => 2, } } ``` The above works because of the implied borrow introduced in v1.26. However, if we don't put `&` before `foo.bar`, the `ref` is mandatory. ```rust fn f(foo: &Foo) -> i32 { match foo.bar { None => 1, Some(ref s) => 2, } } ``` Would it be a good idea in this case for the compiler to also implicitly add `ref` for us, since we cannot move out of `foo.bar` anyway?
C-enhancement,A-borrow-checker,T-compiler
low
Minor
383,865,101
pytorch
Caffe2 C++ tutorial is not working
## 📚 Documentation <!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new --> The steps explained in https://caffe2.ai/docs/cplusplus_tutorial.html are not working to build C++ code with Caffe2. The first issue is generating the .cc files from the .proto files. After trying different command-line combinations from https://github.com/protocolbuffers/protobuf/issues/3028 I was able to generate them. Then I created a VS solution following the described steps and added the include directories. I got this error: ``` "fatal error C1083: Cannot open include file: 'c10/macros/cmake_macros.h': No such file or directory" ``` This is due to the following piece of code in Macros.h: ``` #ifndef C10_USING_CUSTOM_GENERATED_MACROS #include "c10/macros/cmake_macros.h" #endif // C10_USING_CUSTOM_GENERATED_MACROS ``` So I defined C10_USING_CUSTOM_GENERATED_MACROS to 1. Then I started getting this error: ``` "Cannot open include file: 'ATen/core/blob.h': No such file or directory" ``` So I added ATen/core to the include directories and now I'm receiving a lot of errors. Is building C++ code with Caffe2 still supported on Windows?
caffe2
low
Critical
383,899,955
flutter
[Proposal] PageView should support `addAutomaticKeepAlives` parameter to persist state
<!-- Thank you for using Flutter! If you are looking for support, please check out our documentation or consider asking a question on Stack Overflow: * https://flutter.io/ * https://docs.flutter.io/ * https://stackoverflow.com/questions/tagged/flutter?sort=frequent If you have found a bug or if our documentation doesn't have an answer to what you're looking for, then fill out the template below. Please read our guide to filing a bug first: https://flutter.io/bug-reports/ --> ## Steps to Reproduce <!-- Please tell us exactly how to reproduce the problem you are running into. Please attach a small application (ideally just one main.dart file) that reproduces the problem. You could use https://gist.github.com/ for this. If the problem is with your application's rendering, then please attach a screenshot and explain what the problem is. --> 1. Create a widget containing a `PageView` 2. Add multiple children to the `PageView` 3. Add breakpoints to each build function of the children 4. Scroll between pages and watch the breakpoints being hit each time (even after already going there) ## Logs <!-- Run your application with `flutter run --verbose` and attach all the log output below between the lines with the backticks. If there is an exception, please see if the error message includes enough information to explain how to solve the issue. --> No logs provided as this is pretty self-explanatory. It is causing a lot of issues with state management solutions like `flutter_redux` as children with a `StoreBuilder`/`Connector` will always be rebuilt, even if the `Store` has not changed. Both `ListView` and `GridView` have this field `addAutomaticKeepAlives`, so it seems weird that `PageView` does not. <!-- Run `flutter analyze` and attach any output of that command below. If there are any analysis errors, try resolving them before filing this issue. --> ``` info - Unused import: 'package:alchemy/redux/actions/loading/update_state_loading_action.dart' - lib\components\edit_profile\edit_profile_view_model.dart:2:8 - unused_import info - Unused import: 'package:firebase_storage/firebase_storage.dart' - lib\components\images\upload_image_view_model.dart:9:8 - unused_import info - Unused import: 'package:path/path.dart' - lib\components\images\upload_image_view_model.dart:10:8 - unused_import info - Unused import: 'package:random_string/random_string.dart' - lib\components\images\upload_image_view_model.dart:12:8 - unused_import ``` <!-- Finally, paste the output of running `flutter doctor -v` here. --> ``` [√] Flutter (Channel beta, v0.11.9, on Microsoft Windows [Version 10.0.17134.407], locale en-GB) • Flutter version 0.11.9 at C:\Users\ryand\Documents\flutter • Framework revision d48e6e433c (3 days ago), 2018-11-20 22:05:23 -0500 • Engine revision 5c8147450d • Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297) [√] Android toolchain - develop for Android devices (Android SDK 28.0.2) • Android SDK at C:\Users\ryand\AppData\Local\Android\sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-28, build-tools 28.0.2 • Java binary at: D:\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06) • All Android licenses accepted. [√] Android Studio (version 3.2) • Android Studio at D:\Android Studio X Flutter plugin not installed; this adds Flutter specific functionality. X Dart plugin not installed; this adds Dart specific functionality. • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06) [√] IntelliJ IDEA Community Edition (version 2018.2) • IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2018.2.5 • Flutter plugin version 29.1.3 • Dart plugin version 182.4892.25 [√] VS Code (version 1.29.1) • VS Code at C:\Users\ryand\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 2.20.0 [√] VS Code, 64-bit edition (version 1.27.2) • VS Code at C:\Program Files\Microsoft VS Code • Flutter extension version 2.20.0 [!] Connected device ! No devices available ```
c: new feature,framework,f: material design,f: scrolling,c: proposal,has reproducible steps,P3,found in release: 3.3,found in release: 3.7,team-design,triaged-design
low
Critical
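A hedged sketch of the common workaround (the wrapper widget name is made up): wrap each page in a widget that opts into keep-alive via AutomaticKeepAliveClientMixin, which is essentially what `addAutomaticKeepAlives` automates for ListView/GridView children. Written in pre-null-safety Dart to match the Flutter version in the report.

```dart
// Hypothetical wrapper; use it around each PageView child.
class KeepAlivePage extends StatefulWidget {
  final Widget child;
  const KeepAlivePage({Key key, this.child}) : super(key: key);

  @override
  _KeepAlivePageState createState() => _KeepAlivePageState();
}

class _KeepAlivePageState extends State<KeepAlivePage>
    with AutomaticKeepAliveClientMixin<KeepAlivePage> {
  @override
  bool get wantKeepAlive => true; // opt this page into keep-alive

  @override
  Widget build(BuildContext context) {
    super.build(context); // required by the mixin
    return widget.child;
  }
}
```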
383,939,200
rust
warning for UB `!` exprs is too light
So this code produces a warning about the `&*input` expression being unreachable: ```rust enum Void {} fn process(input: *const Void) { let _input = unsafe { &*input }; } ``` ``` warning: unreachable expression ``` I can imagine this needs to "only" be a warning so someone can write some really janky macro code that very carefully ensures the expression isn't ever evaluated. Even if that's the case, I would appreciate it if this got its own error code, or just stronger language to indicate to the user that they are doing something Very Bad. In particular, I believe evaluating this expr is Undefined Behaviour.
A-type-system,C-enhancement,A-lints,A-diagnostics,E-needs-test,T-lang,T-compiler,T-types
medium
Critical
383,946,513
nvm
[Bug] Install scripts do not create ~/.nvm
(Deleted the template since it is for post-installation bugs) OS: Ubuntu 18.10 Shell: zsh (oh-my-zsh) When installing with either the `wget` or the `curl` command in the README, I get the following error: ``` You have $NVM_DIR set to "/home/kuech/.nvm", but that directory does not exist. Check your profile files and environment. ``` Creating `~/.nvm` prior to running the scripts solves the issue.
needs followup,installing nvm: profile detection,shell: zsh: oh-my-zsh
low
Critical
383,966,894
rust
Implement `CoerceUnsized` and `DispatchFromDyn` for `ManuallyDrop`
Add the following impls: ```rust impl<T: ?Sized, U: ?Sized> CoerceUnsized<ManuallyDrop<T>> for ManuallyDrop<U> where U: CoerceUnsized<T>, {} impl<T: ?Sized, U: ?Sized> DispatchFromDyn<ManuallyDrop<T>> for ManuallyDrop<U> where U: DispatchFromDyn<T>, {} ``` And add a test case that is allowed by these impls. This will allow `ManuallyDrop<Box<T>>` to be unsized to `ManuallyDrop<Box<dyn Trait>>`, and allow `ManuallyDrop<Box<Self>>` to be used as a trait-object-safe method receiver with the `arbitrary_self_types` feature. And similarly for other pointer types besides `Box`. Example test case: ```rust trait Trait { fn foo(self: ManuallyDrop<Box<Self>>) -> i32; } impl Trait for i32 { fn foo(self: ManuallyDrop<Box<Self>>) -> i32 { **self } } fn main() { let x: ManuallyDrop<Box<dyn Trait>> = ManuallyDrop::new(Box::new(5i32)); assert_eq!(x.foo(), 5); } ``` This affects stable code, by allowing unsize coercions that were not previously allowed (e.g. from `ManuallyDrop<Box<T>>` to `ManuallyDrop<Box<U>>` where `T: Unsize<U>`), so I guess it requires an FCP.
T-lang
low
Minor
383,988,425
pytorch
[Caffe2] Error when loading a leveldb dataset using brew.db_input (Error protos.protos_size() == OutputSize().)
Hi all, I am trying to **load a leveldb dataset using brew.db_input** with the Python API. I used the following relevant code: ``` def add_input(self, model, batch_size, db, db_type, device_opts): with core.DeviceScope(device_opts): # load the data data, label = brew.db_input( model, blobs_out=["data", "label"], batch_size=batch_size, db=db, db_type=db_type, ) # don't need the gradient for the backward pass data = model.StopGradient(data, data) return data, label . . . # == Training model == train_model= model_helper.ModelHelper(name="train_net", arg_scope=arg_scope) data, label = self.add_input(train_model, batch_size=batch_size, db=os.path.join(self._data_dir_, 'TORCS_Training_1F'), db_type='leveldb', device_opts=device_opts) predictions = self.create_model(train_model, data, label, device_opts=device_opts) self.add_training_operators(train_model, predictions, label, device_opts, opt_type, base_learning_rate, policy, stepsize, epsilon, beta1, beta2, gamma, momentum) self.add_accuracy(train_model, predictions, label, device_opts, eval_metric) with core.DeviceScope(device_opts): brew.add_weight_decay(train_model, weight_decay) # Initialize and create the training network workspace.RunNetOnce(train_model.param_init_net) workspace.CreateNet(train_model.net, overwrite=True) # Main Training Loop print("== Starting Training for " + str(num_epoch) + " epochs ==") for i in range(num_epoch): workspace.RunNet(train_model.net) if i % 50 == 0: print 'Iter ' + str(i) + ': ' + 'Loss ' + str(workspace.FetchBlob("loss")) + ' - ' + 'Accuracy ' + str(workspace.FetchBlob('accuracy')) print("Training done") ``` However, I got the following error: ``` WARNING: Logging before InitGoogleLogging() is written to STDERR E1124 13:53:47.919953 8191 prefetch_op.h:110] Prefetching error [enforce fail at tensor_protos_db_input.h:68] protos.protos_size() == OutputSize(). E1124 13:53:47.920176 8161 prefetch_op.h:83] Prefetching failed. E1124 13:53:47.936815 8161 net_simple.cc:63] Operator failed: input: "dbreader_./data/dpnet_dpnet/TORCS_Training_1F" output: "data_uint8" output: "label" name: "" type: "TensorProtosDBInput" arg { name: "batch_size" i: 64 } device_option { device_type: 1 cuda_gpu_id: 0 } WARNING:caffe2.python.workspace:Original python traceback for operator `-1107198840` in network `train_net` in exception above (most recent call last): Traceback (most recent call last): File "CNNTrainer_dpnet_dpnet.py", line 23, in <module> stepsize=8000 File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 158, in train workspace.RunNet(train_model.net) File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 217, in RunNet StringifyNetName(name), num_iter, allow_fail, File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 178, in CallWithExceptionIntercept return func(*args, **kwargs) RuntimeError: [enforce fail at pybind_state.cc:1025] success. Error running net train_net ``` The leveldb dataset employed is the one used by DeepDriving (http://deepdriving.cs.princeton.edu/) along with caffe (previous framework of Caffe2). Therefore, I know that the dataset is fine and could be loaded with caffe. **Am I doing something wrong here? `data, label = self.add_input(train_model, batch_size=batch_size, db=os.path.join(self._data_dir_, 'TORCS_Training_1F'), db_type='leveldb', device_opts=device_opts)` Is there a difference when loading leveldb datasets between Caffe and Caffe2?**
caffe2
low
Critical
383,997,796
kubernetes
Kubelet does not restart or reregister in response to removed Node API object
Once a kubelet has started up, if its Node API object is removed, the kubelet perpetually attempts and fails to update the status on the now-missing Node object. I would have expected it to do one of the following: * exit after a period of time or number of retries * re-register the Node object /kind bug /sig node
kind/bug,sig/node,sig/cluster-lifecycle,priority/important-longterm,lifecycle/frozen,triage/accepted
medium
Critical
383,999,117
TypeScript
Autocomplete on extends keyof generic
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.3.0-dev.20181122 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** - autocomplete - autocomplete generic - generic extends - autocomplete keyof extends - autocomplete keyof generic **Code** ```ts type Except<T, K extends keyof T> = Pick<T, { [P in keyof T]: P extends K ? never : P }[keyof T]>; interface I1 { foo: string; } function F1<T extends I1>(): Except<T, ''> { // error is shown but autocomplete is not provided //code return null; } ``` **Expected behavior:** Having autocomplete showing when extending a generic **Actual behavior:** Even if error is correctly displayed, no autocomplete is shown for a `T extends` with a known type on the right hand. **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> https://www.typescriptlang.org/play/index.html#src=type%20Except%3CT%2C%20K%20extends%20keyof%20T%3E%20%3D%20Pick%3CT%2C%20%7B%20%5BP%20in%20keyof%20T%5D%3A%20P%20extends%20K%20%3F%20never%20%3A%20P%20%7D%5Bkeyof%20T%5D%3E%3B%0A%0Ainterface%20I1%20%7B%0A%20%20%20%20foo%3A%20string%3B%0A%7D%0Afunction%20F1%3CT%20extends%20I1%3E()%3A%20Except%3CT%2C%20''%3E%20%7B%0A%20%20%20%20%2F%2Fcode%0A%20%20%20%20return%20null%3B%0A%7D **Related Issues:** <!-- Did you find other bugs that looked similar? --> #16740 - but this one is quite different because it's a keyof issue
Suggestion,Domain: Completion Lists,Experience Enhancement
medium
Critical
384,019,382
react
API for display name on forwardRef, memo and potential future exotic components
TL;DR: Can you expose [shared/getComponentName](https://github.com/facebook/react/blob/master/packages/shared/getComponentName.js)? <!-- Note: if the issue is about documentation or the website, please file it at: https://github.com/reactjs/reactjs.org/issues/new --> **Do you want to request a *feature* or report a *bug*?** Expose an API to get the display name of every component (in `__DEV__` only). **What is the current behavior?** Most of the ecosystem still uses `Component.displayName || Component.name || someFallbackName` (with some branching depending on the type of `Component`) when setting the display name of an enhanced component i.e. `connect()(WrappedComponent)` will result in `"connect(WrappedComponent)"` as a `displayName`. **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:** Since components created by `forwardRef` or `memo` are not actual functions these higher-order components are not able to determine a proper display name while `react-devtools` is able to: https://codesandbox.io/s/zqj9v50243 - `react-redux` creates `"connect(Component)"` - `react-router` creates `"withRouter(undefined)"` **What is the expected behavior?** The new "exotic-components" should work with the existing 3rd party libraries WRT to `displayName`. Now there are a couple of solutions to this issue: 1. **Edit:** Expose [shared/getComponentName](https://github.com/facebook/react/blob/master/packages/shared/getComponentName.js) 2. This is the responsibility of the ecosystem. It should provide a solution and maintain it. Somewhat blocked by #12882, related: #12932 3. Grant access to the functionality used in `react-devtools` (or would this only work on the fibers?) 4. Set a `name` (or `displayName` no preference here) property on those "exotic-components" (don't know how to call them). Naive implementation e.g.: `name: 'ForwardRef(' + fn.name + ')'`. **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** I guess this started with `forwardRef` in 16.3.
Type: Feature Request
medium
Critical
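A hedged sketch of the userland fallback the issue describes, extended to the wrapper element types. The `render`/`type` fields are React 16.x implementation details rather than public API, which is exactly the fragility the issue is about; the helper name is hypothetical.

```ts
// Hypothetical helper, not React's API. forwardRef/memo results are
// objects, so `Component.displayName || Component.name` falls through.
function getDisplayName(Component: any): string {
  if (typeof Component === 'string') return Component; // host, e.g. 'div'
  if (typeof Component === 'function')
    return Component.displayName || Component.name || 'Component';
  if (Component && typeof Component.render === 'function') // forwardRef
    return Component.displayName || getDisplayName(Component.render);
  if (Component && Component.type)                          // memo
    return Component.displayName || getDisplayName(Component.type);
  return 'Component';
}
```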
384,027,490
flutter
ModalRoute.of(context).pop(value)
There's currently no good way for a dialog to pop itself when it's not sure that it's on top.
c: new feature,framework,f: routes,P3,team-framework,triaged-framework
low
Minor
384,061,743
pytorch
[libtorch] Catkin_make compilation error
I am trying to use libtorch in my catkin project. One of my packages, A, links with libtorch.so and can be successfully compiled as a static library A.a. Then I try to link A.a to another package B. When I run catkin_make, the following error comes out: > -- Configuring done > CMake Error: > Error evaluating generator expression: > > $<TARGET_PROPERTY:caffe2,INTERFACE_SYSTEM_INCLUDE_DIRECTORIES> > > Target "caffe2" not found. > > > CMake Error: > Error evaluating generator expression: > > $<TARGET_PROPERTY:caffe2_gpu,INTERFACE_SYSTEM_INCLUDE_DIRECTORIES> > > Target "caffe2_gpu" not found. > > > CMake Error: > Error evaluating generator expression: > > $<TARGET_PROPERTY:caffe2,INTERFACE_SYSTEM_INCLUDE_DIRECTORIES> > > Target "caffe2" not found. > > > CMake Error: > Error evaluating generator expression: > > $<TARGET_PROPERTY:caffe2_gpu,INTERFACE_SYSTEM_INCLUDE_DIRECTORIES> > > Target "caffe2_gpu" not found. > > > -- Generating done > -- Build files have been written to: /home/panpan/workspace/catkin_ws/build > Makefile:4460: recipe for target 'cmake_check_build_system' failed > make: *** [cmake_check_build_system] Error 1 > Invoking "make cmake_check_build_system" failed My CMakeLists.txt in package A contains something like ``` project(A CXX CUDA) find_package(Torch REQUIRED) ... target_link_libraries("${PROJECT_NAME}" "${TORCH_LIBRARIES}") ``` And CMakeLists.txt in package B calls: ``` target_link_libraries(B ${catkin_LIBRARIES} A ) ``` The error comes during the CMake generation stage, where the configuration is done but the compilation hasn't started yet. Only package A in my catkin workspace is using libtorch. System configuration: - Ubuntu 18.04 - CMake 3.12.4 - CUDA 9.0.176 - gcc g++ 5.5 - libtorch 1.0.0.dev20181121 - Catkin comes with ROS melodic _Originally posted by @cindycia in https://github.com/pytorch/pytorch/issue_comments#issuecomment-441210189_
needs reproduction,module: build,triaged,has workaround
low
Critical
384,062,952
rust
"the type parameter is not constrained" but it is needed
I would expect the following code to compile: ```rust #![allow(unused)] trait AllocExtra<MemoryExtra> { fn create(x: &MemoryExtra) -> Self; } struct Alloc<Extra> { extra: Extra, other: u32, } impl<MemoryExtra, Extra: AllocExtra<MemoryExtra>> Alloc<Extra> { fn new(x: &MemoryExtra) -> Self { Alloc { extra: AllocExtra::create(x), other: 42, } } } ``` but [it does not](https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=2d912f58d2bd63ebbc633c02ddd15cfd). I will now try to hack around this by moving the where-clause to every function, but I will have to repeat it *a lot*.
A-type-system,T-compiler,C-bug,T-types
medium
Critical
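A sketch of the workaround the author mentions, moving the constraint onto the method: this compiles because `MemoryExtra` becomes a type parameter of `new` rather than of the impl block, so it no longer has to be constrained by the impl header.

```rust
impl<Extra> Alloc<Extra> {
    fn new<MemoryExtra>(x: &MemoryExtra) -> Self
    where
        Extra: AllocExtra<MemoryExtra>,
    {
        Alloc {
            extra: AllocExtra::create(x),
            other: 42,
        }
    }
}
```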
384,082,217
go
cmd/compile: teach prove about slice expressions
``` $ cat f.go package p func f(s string) { if len(s) >= 2 { s = s[1:] _ = s[0] } } $ go version go version devel +6d5caf38e3 Thu Nov 22 02:59:55 2018 +0000 linux/amd64 $ go build -gcflags=-d=ssa/check_bce/debug=1 f.go # command-line-arguments ./f.go:6:8: Found IsInBounds ``` The bounds check disappears as soon as we rewrite the code to not reslice `s`. This shows up in real code fairly often, for example, I encountered it in a somewhat hot function in the `encoding/json` decoder: ``` if len(s) >= 2 && (s[0] == 'e' || s[0] == 'E') { s = s[1:] if s[0] == '+' || s[0] == '-' { // bounds check isn't eliminated here ``` I'm not sure how easy it would be to make the prove pass aware of slice expressions. I think handling the simple `x = x[N:]` case (where `N` is constant) should be doable, and hopefully remove a few dozen bounds checks across the standard library. /cc @aclements @rasky @josharian
Performance,compiler/runtime
low
Critical
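For contrast, a hedged sketch of the rewrite the issue alludes to: indexing the original string instead of reslicing, which the prove pass already handles today.

```go
package p

// Same byte as s[1:][0], but written without the reslice:
// here no bounds check is emitted for the index.
func f(s string) {
	if len(s) >= 2 {
		_ = s[1]
	}
}
```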
384,089,856
rust
rustdoc does not highlight non-rust code fragments
In this example, the Rust fragment is highlighted, but the C one is not: //! ```rust //! // This is Rust. //! fn main() {} //! ``` //! //! ```c //! /* This is C. */ //! int main() { return 0; } //! ```
T-rustdoc,A-docs,C-feature-request
low
Major
384,090,278
go
cmd/compile: BCE is better with reslicing than index variables
Take this piece of code: ``` $ cat f.go package p func slice(p []byte) { for len(p) > 4 { // zero bounds checks. _ = p[0] _ = p[1] _ = p[2] _ = p[3] p = p[4:] // reslicing is expensive. } } func index(p []byte) { i := 0 for i < len(p) { _ = p[i+3] // BCE hint; bounds check _ = p[i] // unexpected bounds check _ = p[i+1] // unexpected bounds check _ = p[i+2] // unexpected bounds check _ = p[i+3] i += 4 // incrementing i is cheap. } } $ go version go version devel +6d5caf38e3 Thu Nov 22 02:59:55 2018 +0000 linux/amd64 $ go build -gcflags=-d=ssa/check_bce/debug=1 f.go # command-line-arguments ./f.go:18:8: Found IsInBounds ./f.go:20:8: Found IsInBounds ./f.go:21:8: Found IsInBounds ./f.go:22:8: Found IsInBounds ``` It's easy to see why the first variant has zero bounds checks. However, reslicing can be expensive in a hot loop, so sometimes the code is rewritten to use indexes. This is what the second variant does. I do realise that the two loops aren't equivalent - for example, if `len(p) == 5`, the second loop will panic since the length is not a multiple of 4. So I understand why the compiler needs to insert one bounds check. Still, it seems to me like it should insert one, not four, since I've added the BCE hint. My first thought was that it couldn't prove that `i >= 0`, but changing the index to be unsigned still doesn't remove all the bounds checks that I'd expect. I encountered this in the base64 encoder: ``` di, si := 0, 0 n := (len(src) / 3) * 3 for si < n { // Convert 3x 8bit source bytes into 4 bytes val := uint(src[si+0])<<16 | uint(src[si+1])<<8 | uint(src[si+2]) // 3 bounds checks dst[di+0] = enc.encode[val>>18&0x3F] // bounds check dst[di+1] = enc.encode[val>>12&0x3F] // bounds check dst[di+2] = enc.encode[val>>6&0x3F] // bounds check dst[di+3] = enc.encode[val&0x3F] // bounds check si += 3 di += 4 } ``` Rewriting the loop to use reslicing like `for len(src) >= 3 { ...; src = src[3:] }` does remove the bounds checks, but slows down the encoder noticeably. Just like in my simple example above, BCE hints like `_ = src[si+2]` and unsigned indexes didn't help either. I think there's probably a way to rewrite the loop index logic to trick BCE, but I think the compiler could be made smarter here. I'm not sure whether that should be an enhancement in the prove pass, or in the bounds check elimination pass. /cc @aclements @rasky @josharian
Performance,compiler/runtime
low
Critical
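One pattern sometimes used to trade the four checks for a single slice-bounds check is taking a small subslice per iteration, so the constant indexes need no checks. A sketch (whether this actually beats reslicing the whole buffer is worth verifying with `-gcflags=-d=ssa/check_bce/debug=1` and a benchmark):

```go
package p

func index(p []byte) {
	for i := 0; i+4 <= len(p); i += 4 {
		q := p[i : i+4 : i+4] // one slice-bounds check per iteration
		_ = q[0]              // len(q) is provably 4, so these constant
		_ = q[1]              // indexes should not need bounds checks
		_ = q[2]
		_ = q[3]
	}
}
```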
384,097,300
flutter
Add an option to set a Dismissible as 'active'
Attempting to disable the Dismissible() behavior by setting `.direction = null` causes an assert at https://github.com/flutter/flutter/blob/6d134e0c866ce90a4dc7a02af6a8f1079c2a4fdc/packages/flutter/lib/src/widgets/dismissible.dart#L337 This is a feature enhancement to simplify customer code, not a code fault. ## Setup Flutter version 0.11.9 Framework revision d48e6e433c (5 days ago), 2018-11-20 22:05:23 -0500 Engine revision 5c8147450d Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297) Platform android-27, build-tools 27.0.3 ## Repro Scenario * ListView containing many Dismissible() * Each Dismissible contains a card * Sometimes, all these cards cannot be dismissed. There should be no sliding possible * Sometimes, all these cards can be dismissed. Slide to remove them. It would be desired to use the `Dismissible.direction` field to set the way it can be dismissed. With flutter v0.11.9, we can set that field to any choice of `DismissDirection` but unfortunately not to `null`. Naturally, `null` as a `.direction` would indicate there is no direction in which it can be dismissed...making the Dismissible a no-op. Support for `null` as a value for `Dismissible.direction` will simplify customer code. Rather than using a conditional expression and duplicating the child widget code, it can more easily be done by just setting the `direction` to null. Today's unhappy duplicate code 😐 ```dart ListView.builder(... itemBuilder: return isDisabled ? CardWidgetAndCode( ... ) : Dismissible( child: CardWidgetAndCode( // <-- duplicate code and/or function call ... ) ) ) ``` Future happy simplified code 🙂 ```dart ListView.builder(... itemBuilder: Dismissible( direction: isDisabled ? null : DismissDirection.horizontal, child: CardWidgetAndCode( ... ) ) ) ``` ## Log of assert ``` package:flutter/src/widgets/dismissible.dart': Failed assertion: line 337 pos 12: 'widget.direction != null': is not true ``` ## Doctor ``` [√] Flutter (Channel beta, v0.11.9, on Microsoft Windows [Version 10.0.17763.165], locale en-US) • Flutter version 0.11.9 at C:\Users\dale\.flutter • Framework revision d48e6e433c (5 days ago), 2018-11-20 22:05:23 -0500 • Engine revision 5c8147450d • Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297) [√] Android toolchain - develop for Android devices (Android SDK 27.0.3) • Android SDK at C:\Users\dale\.android\Sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-27, build-tools 27.0.3 • ANDROID_HOME = C:\Users\dale\.android\Sdk • Java binary at: C:\Program Files\Android\android-studio-preview\jre\bin\java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06) • All Android licenses accepted. [√] Android Studio (version 3.2) • Android Studio at C:\Program Files\Android\android-studio-preview • Flutter plugin version 30.0.1 • Dart plugin version 181.5656 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1136-b06) [√] VS Code, 64-bit edition (version 1.29.1) • VS Code at C:\Program Files\Microsoft VS Code • Flutter extension version 2.20.0 [√] Connected device (1 available) • Android SDK built for x86 • emulator-5554 • android-x86 • Android 8.0.0 (API 26) (emulator) • No issues found! ```
c: new feature,framework,f: material design,P3,team-design,triaged-design
low
Critical
384,155,328
go
net: support for RFC 4592 (lookup for wildcard dns)
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.10.3 darwin/amd64 </pre> ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/Users/qiufeng/Library/Caches/go-build" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/qiufeng/go" GORACE="" GOROOT="/usr/local/go" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/pr/1blgkr1j60bbscwnp21g4bqr0000gn/T/go-build995562236=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre></details> ### What did you do? <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> <pre>package main import ( "net" "context" "log" ) func main() { host:="*.qzone.qq.com" log.Println(net.LookupHost(host)) r := net.Resolver{ PreferGo:true, StrictErrors:true, Dial: DNSDialer, } ctx := context.Background() ipaddr, err := r.LookupHost(ctx, host) if err != nil { log.Println(err) } log.Println(ipaddr) } func DNSDialer(ctx context.Context, network, address string) (net.Conn, error) { d := net.Dialer{} return d.DialContext(ctx, "udp", "8.8.8.8:53") }</pre> ### What did you expect to see? <pre> $ go run test2.go 2018/11/26 10:47:27 [180.163.21.59] <nil> 2018/11/26 10:47:27 [180.163.21.59] <nil> </pre> ### What did you see instead? <pre> $ go run test2.go 2018/11/26 10:47:27 [180.163.21.59] <nil> 2018/11/26 10:47:27 lookup *.qzone.qq.com: no such host 2018/11/26 10:47:27 [] </pre>
FeatureRequest
low
Critical
384,169,778
flutter
Breaking changes that would improve the overall API
This bug lists changes we'll probably never make, but should consider if we ever for some reason decided to start over in a new universe. ## Foundation - [ ] `ChangeNotifier` and `Listenable` and company are a bit mixed up. We could refactor the code here so that we more clearly have a `Listenable` and a `ListenableController`, and a `ValueListenable` and a `ValueListenableController`, and so on. ## Services - [ ] `StandardMessageCodec` should be a static class or should be stateful. - [ ] `dev.flutter/channel-buffers` control messages should use a more efficient binary format rather than an inefficient bespoke `\r`-delimited text-based format. - [ ] `flutter/system`, `flutter/navigation`, etc, control messages should use a more efficient binary format rather than JSON. ## Painting - [x] `addListener`/`removeListener` on `ImageStream` have a broken API that should be redesigned. - [ ] `NotchedShape` should be a shape, or should have another name. - [ ] `TextStyle` has many decoration properties; it would be nice if they were all in one object, maybe separate objects for underline and overline and strike-through. ## Gestures - [ ] Rename `PointerMoveEvent` to `PointerUpdateEvent`, because it's also triggered at button changes without moves. ## Animation - [ ] `Tween`s should be immutable. - [ ] Consider renaming `vsync` in AnimationController to `tickerProvider` (#37255) - [ ] Instead of one `AnimationController`, have one for each type of controller (bounded, unbounded, simulation, repeating, etc). - [ ] Remove `Animatable.animate` in favour of `Animation.drive`. - [ ] `CurveTween` constructor's argument should be positional. (https://github.com/flutter/flutter/issues/21433) ## Rendering - [ ] `Adaptor` -> `Adapter` - [ ] SliverChildListDelegate should have a `children` argument instead of a positional one - [ ] `CustomPainter.hitTest` should take a `Size` argument (https://github.com/flutter/flutter/issues/28206) - [ ] `Layer.pushLayer` should be renamed to `withLayer`, since it is not matched by a `popLayer`. The same applies to its relatives. - [ ] `BlendMode.srcATop` -> `BlendMode.srcAtop` - [ ] `BlendMode.dstATop` -> `BlendMode.dstAtop` ## Widgets - [ ] TabController should be a ValueListenable - [ ] Rename `WidgetBuilder` to make it clear it's a typedef. - [ ] Rename `TransitionBuilder` to make it clear it's a typedef with a child. - [ ] Rename `AnimatedBuilder` to make it clear it takes any `Listenable` (even `AnimationBuilder` would be more consistent...), see also: https://github.com/flutter/flutter/issues/24722#issuecomment-517170702. - [ ] Rename `Listener` to be `PointerListener` - [ ] (Subclasses of) `AnimatedWidget` and `ImplicitlyAnimatedWidget` should be named consistently (see https://github.com/flutter/flutter/issues/24722#issuecomment-517170702). - [ ] Name the Foo vs FooTransition vs AnimatedFoo widgets more consistently (e.g. FadeTransition vs AnimatedOpacity, also [`AnimatedSize`](https://github.com/flutter/flutter/issues/40475)). - [ ] Consider merging all the one-argument `FooBuilder` typedefs into one generic typedef, or otherwise cleaning up the inconsistencies there. - [x] `inheritFromWidgetOfExactType` should be generic rather than taking a `Type` (so it can guarantee its return type is of the same type as the argument). - [ ] `Container`'s width and height properties should size the child, and imply an alignment. - [ ] It's weird that we have both `showDialog` and `showGeneralDialog`. - [ ] `AnimatedList` should be `AnimatedListView`. - [ ] `SliverMultiBoxAdaptorWidget` is not an adaptor; consider naming like `SliverMultiChildRenderObjectWidget`. - [ ] Rename `RichText` to `RawText`. - [ ] Change the type of `maxLines` from int to double (https://github.com/flutter/flutter/issues/35064). - [ ] `ColorFiltered` -> `ColorFilter` - [ ] Be consistent about callback parameter names for tapping/pressing something (onTap vs onPressed, https://github.com/flutter/flutter/issues/17407) - [ ] `FocusNode` and `FocusScopeNode` should use 'focusable' instead of 'canRequestFocus' as the attribute for controlling focusability. - [ ] MediaQueryData.disableAnimations should be called MediaQueryData.enableAnimations with the opposite semantics (see style guide under "double negatives"). - [ ] MediaQuery.viewPadding should be used instead of MediaQuery.padding throughout the framework. viewPadding was added ATF to address edge cases, but the API surface has become confusing as a result, see #59204 - [ ] `Element.activate` should be renamed to `Element.reactivate`, because it is only called back on reactivation. - [ ] `OnInvokeCallback` should be called just `InvokeCallback` per the style guide. - [ ] Navigator: `reportsRouteUpdateToEngine` should be `reportRouteUpdateToEngine` (imperative not descriptive). - [ ] https://github.com/flutter/flutter/issues/115820 ## Material - [ ] ColorScheme.on* shouldn't look like event handlers - [ ] Handle the tap target padding for e.g. mini fabs more intuitively (dunno what that would entail exactly) - [ ] `InputDecoration.filled` should be `isFilled` - [ ] `InputDecoration` should only have one prefix field and one suffix field, rather than three each (widget, icon, text). And it probably should have an `error`, a `label`, and a `hint` instead of just `errorText`, `labelText`, and `hintText`. - [ ] Change `Divider` and `VerticalDivider`'s `indent` and `endIndent` properties to simply be `margin` - [ ] `Colors.grey` -> `Colors.gray`
framework,c: API break,P3,team-framework,triaged-framework
high
Critical
384,183,767
go
x/build/cmd/coordinator: use cmd/go's build caching
Now that cmd/go has good caching, cmd/coordinator should use it, somehow. We'd likely need some hooks in cmd/go to accommodate our needs, but Russ said he was fine with that when I asked him maybe a year ago. /cc @dmitshur @bcmills
Performance,Builders,NeedsFix,FeatureRequest
low
Major
384,217,487
vue-element-admin
Error when using Tinymce: Error in nextTick: "TypeError: Cannot read property 'parse' of undefined"
Clicking edit to load the data back with `this.$refs.editor.setContent(this.temp.intr)` works fine, but an error is reported; it seems the Tinymce initialization has not finished: [Vue warn]: Error in nextTick: "TypeError: Cannot read property 'parse' of undefined"
enhancement :star:
low
Critical
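A hedged sketch of a common workaround (the helper name and the `tinymceId`/`temp.intr` fields are assumptions based on the issue's context): defer setContent until the editor instance reports it has initialized.

```js
// Hypothetical helper: apply content once TinyMCE has finished initializing.
function setContentWhenReady(editorId, content) {
  const editor = window.tinymce.get(editorId);
  if (!editor) return;
  if (editor.initialized) {
    editor.setContent(content);
  } else {
    // Wait for the 'init' event before touching the content.
    editor.on('init', () => editor.setContent(content));
  }
}

// e.g. inside the component: setContentWhenReady(this.tinymceId, this.temp.intr);
```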
384,253,255
go
cmd/compile: consider teaching prove about unexported integer fields
Consider the following program: ``` $ cat f.go package p type T struct { intOff int uintOff uint data []byte } func (t *T) reset(data []byte) { t.intOff = 0 t.uintOff = 0 t.data = data } func withIntVar(data []byte) { i := 0 for i < len(data) { _ = data[i] // no bounds check i++ } } func (t *T) withIntField(data []byte) { for t.intOff < len(t.data) { _ = t.data[t.intOff] // bounds check! t.intOff++ } } func (t *T) withUintField(data []byte) { for t.uintOff < uint(len(t.data)) { _ = t.data[t.uintOff] // no bounds check t.uintOff++ } } $ go version go version devel +048c9164a0 Sat Nov 24 23:55:07 2018 +0000 linux/amd64 $ go build -gcflags=-d=ssa/check_bce/debug=1 f.go # command-line-arguments ./f.go:25:13: Found IsInBounds ``` Lengths are signed, `int` is simple, and it's the default integer type, so it's understandable why most indexes and offsets are signed. This works fine in our first example; we get no bounds checks. However, this breaks down when we start using struct field integers, if the loop logic is split between multiple methods. We can see that the second example has a bounds check. Only making the field unsigned is enough to convince prove/BCE that the offset is always within the bounds of the slice, presumably because in the signed integer field case the compiler isn't smart enough to figure out that `t.intOff >= 0` in all cases. For example, see https://go-review.googlesource.com/c/go/+/150919, where the JSON decoder gets ~1.5% faster by just making the offset field unsigned, but at the cost of adding more type conversions in a bunch of places. I presume that we could prove that a signed integer field is never negative, granted that it's an unexported field and is only ever assigned non-negative values, such as: * `t.field = N`, where N is bounded to be non-negative * `if t.field < N { t.field++ }`, where N is bounded to be non-negative The JSON example above does use `d.off = len(d.data) + 1` to mark an EOF, which could get the field to overflow if `len(d.data)` is the maximum integer value, but I presume we could rewrite it to use `d.off = len(data)`. Otherwise, all the assignments seem to keep the offset within non-negative bounds. Ideally, the prove pass would be smart enough to also work with variations of these rules, such as the code below. But that could be tracked as a separate issue. ``` i := t.intOff data := t.data for i < len(data) { _ = data[i] i++ } t.intOff = i ``` If there's a good reason why the prove pass can't ever deal with fields, please feel free to close this issue. I'm working with the assumption that it's possible, but hasn't been implemented yet. /cc @aclements @rasky @josharian @bradfitz
Performance,compiler/runtime
low
Critical
384,277,021
pytorch
How to use model.net.Clip?
I want to clip the loss using the Clip op, like this: `model.net.Clip([input, output], 0, 10)`, but it doesn't work!! The operator catalog of Caffe2 is too sparse. Can anyone help me? Thank you very much!!!
caffe2
low
Minor
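A hedged sketch of the call shape the Caffe2 Python wrapper expects: inputs and outputs as lists, with the bounds passed as named arguments rather than positionally (the blob names here are assumptions).

```python
# Clip 'loss' into [0, 10]; the output can reuse the input blob for in-place use.
loss_clipped = model.net.Clip(['loss'], ['loss_clipped'], min=0.0, max=10.0)
```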
384,278,347
pytorch
Error: DeviceGuardImpl for cpu is not available (static linking PyTorch)
I am getting this: ``` Error: unhandled exception: p ASSERT FAILED at /data/Storage/Development/nimtorch/aten/include/c10/impl/DeviceGuardImplInterface.h:130, please report a bug to PyTorch. DeviceGuardImpl for cpu is not available (getDeviceGuardImpl at /data/Storage/Development/nimtorch/aten/include/c10/impl/DeviceGuardImplInterface.h:130) ``` I wonder if there is some initialization code that is not getting called when linking statically. Is the static library build not being tested, and broken? ## Previous content Hi, I followed the steps given in the following tutorial https://pytorch.org/tutorials/advanced/cpp_export.html to port my pytorch code to C++ and have been successfully able to run the model in C++. I compiled my C++ code as mentioned in the tutorial using the CMakeLists.txt file. However, I now want to run my code on a different platform (having a similar configuration) and thus want to compile the code with static linking of the libraries. I tried using the following command for building the application: ```cmake -D CMAKE_PREFIX_PATH=/home/pytorch/libtorch -D OpenCV_DIR=/home/opencv/build -D BUILD_SHARED_LIBS=OFF -DCMAKE_EXE_LINKER_FLAGS="-static" ..``` which gave the following error: `/usr/bin/ld: attempted static link of dynamic object /home/pytorch/libtorch/lib/libtorch.so` Can anyone help me by explaining the best way to do static linking of the library?
module: build,triaged,module: static linking,has workaround
medium
Critical
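A hedged sketch of the usual workaround for this class of error: device guards are registered via static initializers, which a plain static link can discard, so the archives may need to be force-linked. The target and library names below are assumptions that depend on the particular build.

```cmake
# Keep every object in the archives so static registrars (such as the
# CPU DeviceGuardImpl registration) are not discarded by the linker.
target_link_libraries(my_app
  -Wl,--whole-archive
  torch caffe2 c10
  -Wl,--no-whole-archive)
```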
384,289,297
rust
Oh rust doctest lints, where art þou? (Add a way to run clippy on doctests)
Currently, doctests benefit from being code in numerous ways, such as being tested. However, this unfortunately does not (yet?) apply to clippy lints. For (an admittedly contrived) example: ```rust /// is the given number odd? /// /// # Examples /// /// ```rust ///# use testdoclints::is_odd; /// let mut a = 1; /// a = a + 1; // this should lint `clippy::assign_op_pattern` /// assert!(!is_odd(a)); /// ``` pub fn is_odd(x: usize) -> bool { (x & 1) == 1 } ``` Running `cargo clippy` shows no lint. To solve this, we'd need to be able to hook into the test code generation and present the resulting AST and HIR to our lints. I am unsure where to put this issue, but as clippy is not the only source of custom lints, I think solving it within rust/rustdoc makes sense. cc @Manishearth @oli-obk
T-rustdoc,A-lints,A-doctests,A-clippy
medium
Critical
384,306,346
TypeScript
strictFunctionTypes prevents an assignment not related to functions
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.3.0-dev.20181122 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** covariance, contravariance, keyof, interface, class, generic **Code** ```ts interface IBase { foo: string; } interface IDerived extends IBase { bar: string; } type StringPropertyNames<T> = { [P in keyof T]: T[P] extends string ? P : never }[keyof T] type StringProperties<T> = Pick<T, StringPropertyNames<T>> interface Foo<T> { readonly Bar: StringProperties<T>; } let baseProperties: StringProperties<IBase> let derivedProperties: StringProperties<IDerived> let baseInterface: Foo<IBase> let derivedInterface: Foo<IDerived> baseProperties = derivedProperties // no error baseInterface = derivedInterface ``` `tsc test.ts --strictFunctionTypes` **Expected behavior:** Compiles without errors. **Actual behavior:** ``` test.ts:23:1 - error TS2322: Type 'Foo<IDerived>' is not assignable to type 'Foo<IBase>'. Property 'bar' is missing in type 'IBase' but required in type 'IDerived'. 23 baseInterface = derivedInterface ~~~~~~~~~~~~~ test.ts:6:5 6 bar: string; ~~~ 'bar' is declared here. ``` [**Playground Link:**](https://www.typescriptlang.org/play/#src=interface%20IBase%20%7B%0D%0A%20%20%20%20foo%3A%20string%3B%0D%0A%7D%0D%0A%0D%0Ainterface%20IDerived%20extends%20IBase%20%7B%0D%0A%20%20%20%20bar%3A%20string%3B%0D%0A%7D%0D%0A%0D%0Atype%20StringPropertyNames<T>%20%3D%20%7B%20%5BP%20in%20keyof%20T%5D%3A%20T%5BP%5D%20extends%20string%20%3F%20P%20%3A%20never%20%7D%5Bkeyof%20T%5D%0D%0Atype%20StringProperties<T>%20%3D%20Pick<T%2C%20StringPropertyNames<T>>%0D%0A%0D%0Ainterface%20Foo<T>%20%7B%0D%0A%20%20%20%20readonly%20Bar%3A%20StringProperties<T>%3B%0D%0A%7D%0D%0A%0D%0Alet%20baseProperties%3A%20StringProperties<IBase>%0D%0Alet%20derivedProperties%3A%20StringProperties<IDerived>%0D%0A%0D%0Alet%20baseInterface%3A%20Foo<IBase>%0D%0Alet%20derivedInterface%3A%20Foo<IDerived>%0D%0A%0D%0AbaseProperties%20%3D%20derivedProperties%20%2F%2F%20no%20error%0D%0AbaseInterface%20%3D%20derivedInterface%0D%0A) <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> **Related Issues:** <!-- Did you find other bugs that looked similar? --> #24190 The [documentation says](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-6.html): > Under --strictFunctionTypes function type parameter positions are checked contravariantly instead of bivariantly. I couldn't find any information saying it should affect anything else.
Suggestion,Awaiting More Feedback
low
Critical
384,346,568
go
cmd/go: -json test report ordering
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.2 darwin/amd64
</pre>

### Does this issue reproduce with the latest release?
Yes

### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/me/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/me/.go:/Users/mark/Projects/go"
GOPROXY=""
GORACE=""
GOROOT="/Users/me/.goenv/versions/1.11.2"
GOTMPDIR=""
GOTOOLDIR="/Users/me/.goenv/versions/1.11.2/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/mark/Projects/mine/my_pkg/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/vy/hzm88gt90292ztr9f3qg9zhr0000gn/T/go-build057809249=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>

### What did you do?
I ran tests for a module using the following commands:
```bash
$ go test -v ./...
$ go test -v -json ./...
$ go test -v ./... | go tool test2json
```
Repository: https://github.com/reproduce/golant-test-ordering-issue

### What did you expect to see?
I expected the test events to be in the exact same order in each case.

### What did you see instead?
<details><summary><code>go test -v ./...</code> Output</summary><br><pre>
$ go test -v ./...
=== RUN   TestSuccess
--- PASS: TestSuccess (0.00s)
=== RUN   TestSkip
--- SKIP: TestSkip (0.00s)
    test_test.go:10: skipping test
=== RUN   TestFail
--- FAIL: TestFail (0.00s)
    test_test.go:14: failed test
=== RUN   TestMain
=== RUN   TestMain/success
=== RUN   TestMain/skip
=== RUN   TestMain/fail
--- FAIL: TestMain (0.00s)
    --- PASS: TestMain/success (0.00s)
    --- SKIP: TestMain/skip (0.00s)
        test_test.go:23: skipping test
    --- FAIL: TestMain/fail (0.00s)
        test_test.go:27: failed test
FAIL
FAIL	my_pkg	0.005s
?   	my_pkg/pkg/no_tests	[no test files]
testing: warning: no tests to run
PASS
ok  	my_pkg/pkg/subpkg	0.005s [no tests to run]
</pre></details>

<details><summary><code>go test -v -json ./...</code> Output</summary><br><pre>
$ go test -v -json ./...
{"Time":"2018-11-26T15:03:16.782812+01:00","Action":"output","Package":"my_pkg/pkg/subpkg","Output":"testing: warning: no tests to run\n"} {"Time":"2018-11-26T15:03:16.783598+01:00","Action":"output","Package":"my_pkg/pkg/subpkg","Output":"PASS\n"} {"Time":"2018-11-26T15:03:16.783605+01:00","Action":"output","Package":"my_pkg/pkg/subpkg","Output":"ok \tmy_pkg/pkg/subpkg\t(cached) [no tests to run]\n"} {"Time":"2018-11-26T15:03:16.783614+01:00","Action":"pass","Package":"my_pkg/pkg/subpkg","Elapsed":0.001} {"Time":"2018-11-26T15:03:17.015719+01:00","Action":"run","Package":"my_pkg","Test":"TestSuccess"} {"Time":"2018-11-26T15:03:17.015757+01:00","Action":"output","Package":"my_pkg","Test":"TestSuccess","Output":"=== RUN TestSuccess\n"} {"Time":"2018-11-26T15:03:17.015786+01:00","Action":"output","Package":"my_pkg","Test":"TestSuccess","Output":"--- PASS: TestSuccess (0.00s)\n"} {"Time":"2018-11-26T15:03:17.015797+01:00","Action":"pass","Package":"my_pkg","Test":"TestSuccess","Elapsed":0} {"Time":"2018-11-26T15:03:17.015804+01:00","Action":"run","Package":"my_pkg","Test":"TestSkip"} {"Time":"2018-11-26T15:03:17.015816+01:00","Action":"output","Package":"my_pkg","Test":"TestSkip","Output":"=== RUN TestSkip\n"} {"Time":"2018-11-26T15:03:17.01583+01:00","Action":"output","Package":"my_pkg","Test":"TestSkip","Output":"--- SKIP: TestSkip (0.00s)\n"} {"Time":"2018-11-26T15:03:17.015837+01:00","Action":"output","Package":"my_pkg","Test":"TestSkip","Output":" test_test.go:10: skipping test\n"} {"Time":"2018-11-26T15:03:17.015848+01:00","Action":"skip","Package":"my_pkg","Test":"TestSkip","Elapsed":0} {"Time":"2018-11-26T15:03:17.015854+01:00","Action":"run","Package":"my_pkg","Test":"TestFail"} {"Time":"2018-11-26T15:03:17.015859+01:00","Action":"output","Package":"my_pkg","Test":"TestFail","Output":"=== RUN TestFail\n"} {"Time":"2018-11-26T15:03:17.015864+01:00","Action":"output","Package":"my_pkg","Test":"TestFail","Output":"--- FAIL: TestFail (0.00s)\n"} {"Time":"2018-11-26T15:03:17.01587+01:00","Action":"output","Package":"my_pkg","Test":"TestFail","Output":" test_test.go:14: failed test\n"} {"Time":"2018-11-26T15:03:17.015883+01:00","Action":"fail","Package":"my_pkg","Test":"TestFail","Elapsed":0} {"Time":"2018-11-26T15:03:17.015888+01:00","Action":"run","Package":"my_pkg","Test":"TestMain"} {"Time":"2018-11-26T15:03:17.015942+01:00","Action":"output","Package":"my_pkg","Test":"TestMain","Output":"=== RUN TestMain\n"} {"Time":"2018-11-26T15:03:17.015956+01:00","Action":"run","Package":"my_pkg","Test":"TestMain/success"} {"Time":"2018-11-26T15:03:17.015963+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/success","Output":"=== RUN TestMain/success\n"} {"Time":"2018-11-26T15:03:17.015969+01:00","Action":"run","Package":"my_pkg","Test":"TestMain/skip"} {"Time":"2018-11-26T15:03:17.015974+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/skip","Output":"=== RUN TestMain/skip\n"} {"Time":"2018-11-26T15:03:17.015984+01:00","Action":"run","Package":"my_pkg","Test":"TestMain/fail"} {"Time":"2018-11-26T15:03:17.01599+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/fail","Output":"=== RUN TestMain/fail\n"} {"Time":"2018-11-26T15:03:17.015996+01:00","Action":"output","Package":"my_pkg","Test":"TestMain","Output":"--- FAIL: TestMain (0.00s)\n"} {"Time":"2018-11-26T15:03:17.016019+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/success","Output":" --- PASS: TestMain/success (0.00s)\n"} 
{"Time":"2018-11-26T15:03:17.016026+01:00","Action":"pass","Package":"my_pkg","Test":"TestMain/success","Elapsed":0} {"Time":"2018-11-26T15:03:17.016037+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/skip","Output":" --- SKIP: TestMain/skip (0.00s)\n"} {"Time":"2018-11-26T15:03:17.016042+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/skip","Output":" test_test.go:23: skipping test\n"} {"Time":"2018-11-26T15:03:17.016049+01:00","Action":"skip","Package":"my_pkg","Test":"TestMain/skip","Elapsed":0} {"Time":"2018-11-26T15:03:17.016054+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/fail","Output":" --- FAIL: TestMain/fail (0.00s)\n"} {"Time":"2018-11-26T15:03:17.01606+01:00","Action":"output","Package":"my_pkg","Test":"TestMain/fail","Output":" test_test.go:27: failed test\n"} {"Time":"2018-11-26T15:03:17.016088+01:00","Action":"fail","Package":"my_pkg","Test":"TestMain/fail","Elapsed":0} {"Time":"2018-11-26T15:03:17.016094+01:00","Action":"fail","Package":"my_pkg","Test":"TestMain","Elapsed":0} {"Time":"2018-11-26T15:03:17.016099+01:00","Action":"output","Package":"my_pkg","Output":"FAIL\n"} {"Time":"2018-11-26T15:03:17.016309+01:00","Action":"output","Package":"my_pkg","Output":"FAIL\tmy_pkg\t0.005s\n"} {"Time":"2018-11-26T15:03:17.016339+01:00","Action":"fail","Package":"my_pkg","Elapsed":0.006} {"Time":"2018-11-26T15:03:17.017209+01:00","Action":"output","Package":"my_pkg/pkg/no_tests","Output":"? \tmy_pkg/pkg/no_tests\t[no test files]\n"} {"Time":"2018-11-26T15:03:17.017238+01:00","Action":"skip","Package":"my_pkg/pkg/no_tests","Elapsed":0} </pre></details> <details><summary><code>go test -v ./... | go tool test2json</code> Output</summary><br><pre> $ go test -v ./... | go tool test2json {"Action":"run","Test":"TestSuccess"} {"Action":"output","Test":"TestSuccess","Output":"=== RUN TestSuccess\n"} {"Action":"output","Test":"TestSuccess","Output":"--- PASS: TestSuccess (0.00s)\n"} {"Action":"pass","Test":"TestSuccess"} {"Action":"run","Test":"TestSkip"} {"Action":"output","Test":"TestSkip","Output":"=== RUN TestSkip\n"} {"Action":"output","Test":"TestSkip","Output":"--- SKIP: TestSkip (0.00s)\n"} {"Action":"output","Test":"TestSkip","Output":" test_test.go:10: skipping test\n"} {"Action":"skip","Test":"TestSkip"} {"Action":"run","Test":"TestFail"} {"Action":"output","Test":"TestFail","Output":"=== RUN TestFail\n"} {"Action":"output","Test":"TestFail","Output":"--- FAIL: TestFail (0.00s)\n"} {"Action":"output","Test":"TestFail","Output":" test_test.go:14: failed test\n"} {"Action":"fail","Test":"TestFail"} {"Action":"run","Test":"TestMain"} {"Action":"output","Test":"TestMain","Output":"=== RUN TestMain\n"} {"Action":"run","Test":"TestMain/success"} {"Action":"output","Test":"TestMain/success","Output":"=== RUN TestMain/success\n"} {"Action":"run","Test":"TestMain/skip"} {"Action":"output","Test":"TestMain/skip","Output":"=== RUN TestMain/skip\n"} {"Action":"run","Test":"TestMain/fail"} {"Action":"output","Test":"TestMain/fail","Output":"=== RUN TestMain/fail\n"} {"Action":"output","Test":"TestMain","Output":"--- FAIL: TestMain (0.00s)\n"} {"Action":"output","Test":"TestMain/success","Output":" --- PASS: TestMain/success (0.00s)\n"} {"Action":"pass","Test":"TestMain/success"} {"Action":"output","Test":"TestMain/skip","Output":" --- SKIP: TestMain/skip (0.00s)\n"} {"Action":"output","Test":"TestMain/skip","Output":" test_test.go:23: skipping test\n"} {"Action":"skip","Test":"TestMain/skip"} {"Action":"output","Test":"TestMain/fail","Output":" --- 
FAIL: TestMain/fail (0.00s)\n"} {"Action":"output","Test":"TestMain/fail","Output":" test_test.go:27: failed test\n"} {"Action":"fail","Test":"TestMain/fail"} {"Action":"fail","Test":"TestMain"} {"Action":"output","Output":"FAIL\n"} {"Action":"output","Output":"FAIL\tmy_pkg\t0.006s\n"} {"Action":"output","Output":"? \tmy_pkg/pkg/no_tests\t[no test files]\n"} {"Action":"output","Output":"testing: warning: no tests to run\n"} {"Action":"output","Output":"PASS\n"} {"Action":"output","Output":"ok \tmy_pkg/pkg/subpkg\t(cached) [no tests to run]\n"} {"Action":"pass"} </pre></details> As per https://golang.org/cmd/test2json I expect to see the events to be in the exact same order: > With that one exception, the concatenation of the Output fields of all output events is the exact output of the test execution. But they are not. This causes a problem when one would like to get a json output for some kind of reporting purpose, but still want to display the regular test report to the user.
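For anyone scripting around this in the meantime, here is a minimal sketch that decodes the `-json` event stream so the two orderings can be diffed; the `TestEvent` shape follows the `go doc test2json` documentation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os"
	"time"
)

// TestEvent mirrors the struct documented in `go doc test2json`.
type TestEvent struct {
	Time    time.Time
	Action  string
	Package string
	Test    string
	Elapsed float64
	Output  string
}

func main() {
	dec := json.NewDecoder(os.Stdin)
	for {
		var e TestEvent
		if err := dec.Decode(&e); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Print a compact view of each event to compare orderings.
		fmt.Printf("%s %s %s/%s\n", e.Time.Format(time.RFC3339Nano), e.Action, e.Package, e.Test)
	}
}
```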
NeedsInvestigation
low
Critical
384,417,410
pytorch
[Caffe2] Check failed: output->size() == values_.size() output size: 1 given size: 1563551
## Question/Support Hi there, I ran into this error which I've seen in a few places after searching but I am not really sure how to adjust my model to work with caffe2: ``` D/blackbox-ndk: Attempting to load protobuf netdefs... D/blackbox-ndk: Couldn't parse net from data. Len: 109181755, File: sidewalk_init_net.pb D/blackbox-ndk: done. Instantiating predictor... A/native: [F given_tensor_fill_op.h:27] Check failed: output->size() == values_.size() output size: 1 given size: 1563551 terminating. ``` I am using the AICamera code which I've got to work fine using the .pb models but when I add my own it breaks. It seems related to how I exported the model... Maybe I am running really old caffe2 cpp code under the NDK (0.8.1 I think)? ## To Reproduce I exported my code via: ```python x = Variable(torch.randn(1, 3, 256, 256, requires_grad=True)) x = x.cuda() torch_out = torch.onnx._export(learn.model, x, f"{DATA_PATH}model.onnx", export_params=True) import caffe2.python.onnx.backend import onnx model = onnx.load(f"{DATA_PATH}model.onnx") prepared_backend = caffe2.python.onnx.backend.prepare(model) W = {model.graph.input[0].name: x.data.cpu().numpy()} # Run the Caffe2 net: c2_out = prepared_backend.run(W)[0] np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3) from caffe2.python.onnx.backend import Caffe2Backend as c2 init_net, predict_net = c2.onnx_graph_to_caffe2_net(model) with open(f"{DATA_PATH}sidewalk_init_net.pb", "wb") as f: f.write(init_net.SerializeToString()) with open(f"{DATA_PATH}sidewalk_predict_net.pb", "wb") as f: f.write(predict_net.SerializeToString()) ``` And ran it fine via: ```python p = workspace.Predictor(init_net, predict_net) print(img.shape) # (1, 3, 256, 256) out = p.run([img]) ``` The error occurs when I do net->ParseFromArray(data, len): ```c++ loadToNetDef(mgr, &_initNet, "sidewalk_init_net.pb"); // A function to load the NetDefs from protobufs. void loadToNetDef(AAssetManager* mgr, caffe2::NetDef* net, const char *filename) { AAsset* asset = AAssetManager_open(mgr, filename, AASSET_MODE_BUFFER); assert(asset != nullptr); const void *data = AAsset_getBuffer(asset); assert(data != nullptr); off_t len = AAsset_getLength(asset); assert(len != 0); if (!net->ParseFromArray(data, len)) { alog("Couldn't parse net from data. Len: %d, File: %s\n", len, filename); } AAsset_close(asset); } ``` ## Expected behavior Load the model successfully without error. ## Environment - PyTorch Version (e.g., 1.0): '1.0.0.dev20181125' - OS (e.g., Linux): Ubuntu 14.04 LTS - How you installed PyTorch (`conda`, `pip`, source): conda nightly - Build command you used (if compiling from source): - Python version: Python 3.6.6 :: Anaconda, Inc. - Any other relevant information: Onnx: '1.3.0' - caffe2: head -n 20 core/macros.h says: ```c++ #define CAFFE2_VERSION_MAJOR 0 #define CAFFE2_VERSION_MINOR 8 #define CAFFE2_VERSION_PATCH 1 ``` ## Other things - I did come across this: https://github.com/caffe2/caffe2/issues/1076 but I am not sure it helps me since I think I've organized my tensor dimensions by batch_size, color_channels, height, width (1, 3, 256, 256) - I looked over the the notebook for Squeezenet here: https://github.com/caffe2/AICamera/blob/master/Exporting%20Squeezenet%20to%20mobile.ipynb however nothing appears different during export but obviously, I am using a completely different model. 
- Just to ensure I am not doing anything different in the cpp code, I took the same exported models and used them in the AICamera demo instead of the squeeze*.pb files & was able to reproduce the same issue.
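A diagnostic sketch that may help narrow this down (not a fix): dump each fill op in the exported init net and compare its declared `shape` against the number of `values` it carries, since the failing check lives in `given_tensor_fill_op.h`:

```python
# A sketch: find fill ops whose declared shape disagrees with the
# number of values they carry.
import numpy as np
from caffe2.proto import caffe2_pb2

init_net = caffe2_pb2.NetDef()
with open("sidewalk_init_net.pb", "rb") as f:
    init_net.ParseFromString(f.read())

for op in init_net.op:
    args = {a.name: a for a in op.arg}
    if "shape" in args and "values" in args:
        shape = list(args["shape"].ints)
        n_values = len(args["values"].floats) or len(args["values"].ints)
        if int(np.prod(shape)) != n_values:
            print(op.type, list(op.output), shape, n_values)
```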
caffe2
low
Critical
384,423,902
pytorch
[c10d] Configurable timeout per operation for MPI backend
Also see #14297. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar @jiayisuse @agolynski
oncall: distributed,feature,triaged,distributed-backlog
low
Minor
384,432,563
terminal
Showing Schemes doesn't work on WSL
From #22:

> > > @Falcury `-q -x` works for me, though showing schemes does not:
>
> ```shell
> [rofrol@DESKTOP-NBALJ88 ~]$ pushd ~/rofrol/installed/colortool/ &> /dev/null
> [rofrol@DESKTOP-NBALJ88 ~/rofrol/installed/colortool]$ ./colortool.exe -s
>
> Unhandled Exception: System.IO.IOException: The handle is invalid.
>
>    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
>    at System.Console.GetBufferInfo(Boolean throwOnNoConsole, Boolean& succeeded)
>    at System.Console.get_WindowWidth()
>    at ColorTool.Program.PrintSchemes()
>    at ColorTool.Program.Main(String[] args)
> ```

This should be fixed.

* [ ] We need to add support for `-s -x` to print the schemes in linux-y mode
* [ ] We should probably also catch this exception for the `-s` flag and properly display an error along the lines of "You should try `-s -x`" (see the sketch after this list)
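A minimal sketch of the guard for the second box, assuming we just want a graceful message plus a sane fallback width when there is no console buffer:

```csharp
// A sketch: Console.WindowWidth throws IOException when the process has
// no attached console buffer (e.g. piped output under WSL).
int consoleWidth;
try
{
    consoleWidth = Console.WindowWidth;
}
catch (System.IO.IOException)
{
    Console.Error.WriteLine("Could not query the console. You should try `-s -x`.");
    consoleWidth = 80; // illustrative fallback
}
```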
Product-Colortool,Help Wanted,Area-Interop,Issue-Bug,Priority-3
low
Critical
384,471,467
pytorch
[caffe2] Caffe2 GlobalInit should be run before any other API calls
There is no error when the code (ipynb) below is run:

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

# We'll also import a few standard python libraries
from matplotlib import pyplot
import numpy as np
import time

# These are the droids you are looking for.
from caffe2.python import core, workspace, brew
from caffe2.proto import caffe2_pb2
from caffe2.python.model_helper import ModelHelper
```
```python
inp = np.random.randn(7, 3, 224, 224).astype(np.float32)
```
```python
workspace.FeedBlob('X', inp)
```
True
```python
model = ModelHelper()
```
```python
conv1 = brew.conv(model, 'X', 'conv1', 3, 5, 3)
```
```python
workspace.RunNetOnce(model.param_init_net)
```
True
```python
workspace.CreateNet(model.net)
```
True
```python
workspace.RunNet(model.net)
```
True

However, if I run the same code in a py file:

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

# We'll also import a few standard python libraries
from matplotlib import pyplot
import numpy as np
import time

# These are the droids you are looking for.
from caffe2.python import core, workspace, brew
from caffe2.proto import caffe2_pb2
from caffe2.python.model_helper import ModelHelper

inp = np.random.randn(7, 3, 224, 224).astype(np.float32)
workspace.FeedBlob('X', inp)
model = ModelHelper()
conv1 = brew.conv(model, 'X', 'conv1', 3, 5, 3)
workspace.RunNetOnce(model.param_init_net)
workspace.CreateNet(model.net)
workspace.RunNet(model.net)
print(model.net.Proto())
print(workspace.FetchBlob('conv1').shape, "Final")
```

Error (while running the py file):

```bash
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1126 14:07:14.579519 321242560 init.h:99] Caffe2 GlobalInit should be run before any other API calls.
W1126 14:07:14.579742 321242560 init.h:99] Caffe2 GlobalInit should be run before any other API calls.
I1126 14:07:14.579951 321242560 operator.cc:169] Engine CUDNN is not available for operator Conv.
```
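Presumably the script variant is expected to call `GlobalInit` explicitly before anything else, along these lines (a sketch; I have not confirmed that this silences every warning):

```python
from caffe2.python import workspace

# Run GlobalInit first, before any FeedBlob/CreateNet/RunNet calls.
workspace.GlobalInit(["caffe2", "--caffe2_log_level=0"])
```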
caffe2
low
Critical
384,488,547
youtube-dl
Discovery.ca - Unauthorized 401 error when trying to download.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.11.23**

### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser

### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other

---

### Verbose output

```
C:\bin\youtube-dl>youtube-dl https://www.discovery.ca/shows/vintage-tech-hunters/video?vid=1528929
[debug] System config: []
[debug] User config: [u'-v', u'-n', u'--ap-mso', u'Rogers', u'--ffmpeg-location', u'C:\\bin\\ffmpeg']
[debug] Custom config: []
[debug] Command-line args: [u'https://www.discovery.ca/shows/vintage-tech-hunters/video?vid=1528929']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.11.23
[debug] Python version 2.7.10 (CPython) - Windows-8-6.2.9200
[debug] exe versions: ffmpeg N-92528-g90ac0e5f29, ffprobe N-92528-g90ac0e5f29
[debug] Proxy map: {}
[debug] Using fake IP 99.237.190.158 (CA) as X-Forwarded-For.
[9c9media] 1528929: Downloading JSON metadata
[9c9media] 1528929: Downloading JSON metadata
[9c9media] 1528929: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 401: Unauthorized
[9c9media] 1528929: Downloading f4m manifest
WARNING: Unable to download f4m manifest: HTTP Error 401: Unauthorized
[9c9media] 1528929: Downloading MPD manifest
WARNING: Failed to download MPD manifest: HTTP Error 401: Unauthorized
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\youtube_dl\YoutubeDL.py", line 792, in extract_info
    ie_result = ie.extract(url)
  File "c:\python27\lib\site-packages\youtube_dl\extractor\common.py", line 508, in extract
    ie_result = self._real_extract(url)
  File "c:\python27\lib\site-packages\youtube_dl\extractor\ninecninemedia.py", line 52, in _real_extract
    self._sort_formats(formats)
  File "c:\python27\lib\site-packages\youtube_dl\extractor\common.py", line 1292, in _sort_formats
    raise ExtractorError('No video formats found')
ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```

---

### Example URL

- Single video: https://www.discovery.ca/shows/vintage-tech-hunters/video?vid=1528929

---

### Description

I'm receiving an Unauthorized error when trying to download a video from Discovery.ca, and I'm not exactly sure why. I see that Discovery is a supported extractor (perhaps .com only and not .ca?). I have tested my .netrc file by downloading a video from another site that requires the same authentication, and I've confirmed that I can log in through the browser and play the video in its entirety. Hoping someone can shed some light here. Much appreciated. Thanks!
geo-restricted
low
Critical
384,558,397
vscode
terminal stays open after process exit
Initially I thought this was an issue with MIEngine, but they report that it is up to VSCode to properly close the external console after debugging: https://github.com/Microsoft/MIEngine/issues/807

Version: 1.29.1
Commit: bc24f98b5f70467bc689abf41cc5550ca637088e
Date: 2018-11-15T19:07:43.495Z
Electron: 2.0.12
Chrome: 61.0.3163.100
Node.js: 8.9.3
V8: 6.1.534.41
Architecture: x64

Repro:
1. Debug cpp with an external terminal.
2. When the program ends, the terminal remains open.

"Press any key to continue.." is holding these zombie terminals open (we get a new one with each run/debug run).
feature-request,debug
medium
Critical
384,574,074
go
net/http: Transport does not support proxy schemes added with RegisterProtocol
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.1 linux/amd64
</pre>

### Does this issue reproduce with the latest release?
I checked https://golang.org/src/net/http/transport.go?s=3628:10127#L93 using godoc and it has no handling for proxies with schemes other than http(s)/socks5, so yes.

### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/xxx/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/Users/xxx/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/Users/xxxx/src/devops-automation/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build522908579=/tmp/go-build -gno-record-gcc-switches"
</pre></details>

### What did you do?
dt.RegisterProtocol("ssh", proxy.NewSSHTransport(bastionHost))

`NewSSHTransport` returns an `http.Transport` whose `Dial()` ensures a tunnel is open and then calls `ssh.Client.Dial()` on the open client to create a tunneled connection.

dt.Proxy() -> url.URL{Scheme: "ssh", Host: bastionHost+":22"}

req := http.NewRequest("GET", "http://testurl.com", nil)

### What did you expect to see?
The sshTransport's Dial to be used for the proxy.

### What did you see instead?
DefaultTransport's Dial was used for the proxy. It would be nice not to have to rewrite RoundTrip to allow additional proxy schemes. I think this could be solved by checking whether either the request scheme or the proxy scheme uses a different transport and then delegating the rest of the request to that scheme's Transport.
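In the meantime, a possible workaround sketch (assuming `sshClient` is an already-connected `*ssh.Client` from golang.org/x/crypto/ssh) is to bypass the proxy machinery entirely and dial through the tunnel:

```go
// Every outbound connection is tunneled through the bastion, which
// sidesteps Transport's http/https/socks5-only proxy handling.
tr := &http.Transport{
	DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
		return sshClient.Dial(network, addr)
	},
}
client := &http.Client{Transport: tr}
```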
FeatureRequest
low
Critical
384,590,848
TypeScript
Incorrect rename of imported declaration
**TypeScript Version:** aa3734c14834afb454c49bc956489980d990a765 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** **Code** file1.ts ```ts export const abc = 1; ``` file2.ts ```ts import {abc} from "./file1"; abc; // Rename this one to def ``` **Expected behavior:** file1.ts ```ts export const def = 1; ``` file2.ts ```ts import {def} from "./file1"; def; // Rename this one to def ``` **Actual behavior:** file1.ts ```ts export const def = 1; ``` file2.ts ```ts import {abc as def} from "./file1"; def; // Rename this one to def ``` It looks like the prefix/suffix logic assumes the export is a separate statement and not the declaration itself.
Bug
low
Major
384,660,149
nodebestpractices
Security: Prevent SSRF attacks
An SSRF (Server-Side Request Forgery) vulnerability allows an attacker to manipulate a parameter used by the Node.js application in order to create or control requests issued from the vulnerable server. This introduces attack vectors such as:

- scanning the internal network
- timing out server threads
- bypassing host-based authentication
- sending requests that impersonate the server

The example could show the use of a whitelist of allowed domains and protocols from which the Node.js application can fetch remote resources (and mention avoiding user-provided URLs unless really required); a sketch of such a check follows.
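```js
// A minimal sketch, assuming a static allow-list; all names here are
// illustrative, not from an existing library.
const { URL } = require('url');

const ALLOWED_HOSTS = new Map([
  ['api.example.com', new Set(['https:'])],
]);

function assertFetchAllowed(rawUrl) {
  const url = new URL(rawUrl); // throws on malformed input
  const protocols = ALLOWED_HOSTS.get(url.hostname);
  if (!protocols || !protocols.has(url.protocol)) {
    throw new Error(`Blocked outbound request to ${url.href}`);
  }
  return url;
}
```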
help wanted,new best practice,security,writer-needed
low
Minor
384,697,784
flutter
Regression: KeepAlive in SliverList throws: Incorrect use of ParentDataWidget
We used to be able to manually specify KeepAlive widgets in our SliverLists like so: ```dart new SliverList( delegate: new SliverChildBuilderDelegate( (context, index) { return new KeepAlive( keepAlive: true, child: new Container( padding: const EdgeInsets.symmetric(vertical: 8.0), child: new Text('$index'), ), ); }, childCount: formChildren.length, addAutomaticKeepAlives: false, addRepaintBoundaries: false, ), ); ``` After the upgrade to flutter v 0.11.9, this now throws the following exception: ``` I/flutter (29475): ══╡ EXCEPTION CAUGHT BY RENDERING LIBRARY ╞═════════════════════════════════════════════════════════ I/flutter (29475): The following assertion was thrown during performLayout(): I/flutter (29475): Incorrect use of ParentDataWidget. I/flutter (29475): KeepAlive widgets must be placed directly inside SliverWithKeepAliveWidget widgets. I/flutter (29475): KeepAlive(no depth, keepAlive: true, dirty) has a SliverWithKeepAliveWidget ancestor, but there are I/flutter (29475): other widgets between them: I/flutter (29475): - IndexedSemantics(index: 0) I/flutter (29475): These widgets cannot come between a KeepAlive and its SliverWithKeepAliveWidget. I/flutter (29475): The ownership chain for the parent of the offending KeepAlive was: I/flutter (29475): IndexedSemantics ← SliverList ← IconTheme ← _InheritedTheme ← Theme ← _FormScope ← WillPopScope ← I/flutter (29475): Form-[LabeledGlobalKey<FormState>#07345] ← StreamBuilder<bool> ← _StoreStreamListener<OPState, I/flutter (29475): bool> ← ⋯ ``` The solution is to also specify `addSemanticIndexes: false`, and manually wrap the items in an `IndexedSemantics` widget: ```dart new SliverList( delegate: new SliverChildBuilderDelegate( (context, index) { return new KeepAlive( keepAlive: true, child: new IndexedSemantics( index: index, child: new Container( padding: const EdgeInsets.symmetric(vertical: 8.0), child: new Text('$index'), ), ), ); }, childCount: formChildren.length, addAutomaticKeepAlives: false, addRepaintBoundaries: false, addSemanticIndexes: false, ), ); ``` Flutter doctor output: ``` [✓] Flutter (Channel beta, v0.11.9, on Linux, locale en_US.UTF-8) • Flutter version 0.11.9 at /home/vic/devel/flutter • Framework revision d48e6e433c (6 days ago), 2018-11-20 22:05:23 -0500 • Engine revision 5c8147450d • Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297) ``` I suppose this is actually the desired way of doing things now, in which case the issue might just be one of improving docs around KeepAlive: specifically to mention that `addSemanticIndexes` has to be switched to `false` and widgets need to be wrapped in their own `IndexedSemantics` widget as well [eg: https://docs.flutter.io/flutter/widgets/SliverChildListDelegate/addAutomaticKeepAlives.html and https://docs.flutter.io/flutter/widgets/KeepAlive-class.html]? But I also notice that the documentation steers users towards using the `AutomaticKeepAliveClientMixin` - interestingly I feel like composing with `KeepAlive` is simpler than implementing the mixin (and more "Flutter-y") - specifically because it can be done in one place, in the SliverChildBuilderDelegate's builder, rather than having to implement it for each child widget.
framework,f: scrolling,d: api docs,P2,team-framework,triaged-framework
low
Major
384,739,169
opencv
HDR MergeMertens - tif uint16 output missing
When using Python to produce an LDR image (example [here](https://docs.opencv.org/3.2.0/d3/db7/tutorial_hdr_imaging.html)) I am able to import 16-bit, three-channel colour TIFF images unchanged. However, processing with MergeMertens results only in 8-bit uint TIFF colour images. Would it be possible to include something like a bit type as an argument when using MergeMertens? I could not find any option (or documentation) for solving this problem. A possible workaround sketch is below.

- OpenCV => 3.2.0
- Operating System / Platform => Windows 64 Bit
- Python => 2.7.15
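```python
# A workaround sketch: MergeMertens outputs a float32 image (roughly in
# [0, 1]), so the caller can pick the bit depth when converting.
import cv2
import numpy as np

# `images` is assumed to be the list of exposure images loaded elsewhere.
merge = cv2.createMergeMertens()
fusion = merge.process(images)  # float32 result
out16 = np.clip(fusion * 65535, 0, 65535).astype(np.uint16)
cv2.imwrite('fusion16.tif', out16)
```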
feature,category: photo
low
Major
384,740,146
pytorch
DeviceOption::set_device_type() doesn't accept caffe2::CPU anymore
## 📚 Documentation

### TL;DR
It'd be best if someone could write up an up-to-date method for loading a (pretrained) caffe2 network in C++. I've searched for the past couple of days, but none of the methods I've tried work with the current pytorch version ('1.0.0a0+60e7d04', from the master branch). At the same time, I understand that pytorch v1.0 is still a preview and there could be big API changes, in which case I would like to ask a small question.

How, in its current state, am I supposed to call the `DeviceOption::set_device_type()` function? It seems like it used to be done by passing in `c10::DeviceType`, but now it expects an integer (`::google::protobuf::int32`). I saw no documentation on this.

---

### My situation
I have a model that I trained in Pytorch, in Python. Using onnx, I converted my model and saved it as two different .pb files using the `caffe2.python.onnx.backend.Caffe2Backend.onnx_graph_to_caffe2_net()` function, then saved them with `open(...)`. Now I'm trying to load it in C++ caffe2. Here's my current code.

```
int main() {
	caffe2::NetDef init_net, pred_net;

	std::string init_path = "10B_2018-11-27_10-27-38_init.pb";
	std::string pred_path = "10B_2018-11-27_10-27-38_pred.pb";

	CAFFE_ENFORCE(caffe2::ReadProtoFromFile(init_path, &init_net));
	CAFFE_ENFORCE(caffe2::ReadProtoFromFile(pred_path, &pred_net));

	init_net.mutable_device_option()->set_device_type(<???>);
                                                      ^^^
	pred_net.mutable_device_option()->set_device_type(<???>);
                                                      ^^^
```

Now when I look at some previous sources, like [this](https://github.com/caffe2/caffe2/issues/1737) or [this](https://github.com/BIGBALLON/Caffe2_Demo/blob/master/03_cpp_forward/main.cpp), I'm supposed to pass in `caffe2::CPU`, which is of type `const c10::DeviceType`, but currently in Pytorch '1.0.0a0+60e7d04' this function expects type `'::google::protobuf::int32' (aka 'int')`, and **there is no documentation** on which integer represents CPU or CUDA etc.

Furthermore, while some (more recent) resources suggest saving the model directly as an onnx model and loading it with the caffe2 backend (c.f. [TRANSFERING A MODEL FROM PYTORCH TO CAFFE2 AND MOBILE USING ONNX](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html)), I haven't seen anything similar in C++. Is it not yet supported?

Any help would be highly appreciated. Please help!
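One guess (an assumption from reading the generated proto header, not from any documentation) is that the setter now takes the `DeviceTypeProto` enum value:

```cpp
// A sketch; PROTO_CPU / PROTO_CUDA are the DeviceTypeProto enum values
// generated from caffe2.proto in current master (my reading of the
// generated caffe2.pb.h, not something I found documented).
#include "caffe2/proto/caffe2.pb.h"

init_net.mutable_device_option()->set_device_type(caffe2::PROTO_CPU);
pred_net.mutable_device_option()->set_device_type(caffe2::PROTO_CPU);
```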
caffe2
low
Minor
384,743,645
TypeScript
Add a --strictNaNChecks option, and a NaN / integer / float type to avoid runtime NaN errors
*I have read the FAQ and looked for duplicate issues.* ## Search Terms * NaN * NaN type * Integer type ## Related Issues * [#21279: strictNullChecks safeguards against null and undefined, but not NaN ](https://github.com/Microsoft/TypeScript/issues/21279) * [#15135: NaN, Infinity and -Infinity not accepted in number literal types](https://github.com/Microsoft/TypeScript/issues/15135) * [#15351: Infinity/NaN literal type (dup)](https://github.com/Microsoft/TypeScript/issues/15351) * [#195: Suggestion: int type](https://github.com/Microsoft/TypeScript/issues/195) * [#4639: Proposal: int types](https://github.com/Microsoft/TypeScript/issues/4639) * BigInt is scheduled for TS 3.0 - [#15096 - Support for TC39 "BigInt: Arbitrary precision integers in JavaScript" proposal](https://github.com/Microsoft/TypeScript/issues/15096) ## Suggestion `NaN` has been a big source of errors in my code. I was under the impression that TypeScript (and Flow) could help to prevent these errors, but this is not really true. TypeScript can prevent *some* `NaN` errors, because you cannot add a number to an object, for example. But there are many math operations that can return `NaN`. These `NaN` values often propagate through the code silently and crash in some random place that was expecting an integer or a float. It can be extremely difficult to backtrack through the code and try to figure out where the `NaN` came from. I would like TypeScript to provide a better way of preventing runtime `NaN` errors, by ensuring that an unhandled `NaN` value cannot propagate throughout the code. This would be a compile-time check in TypeScript. Other solutions might be a run-time check added with a Babel plugin, or a way for JS engines to throw an error instead of returning `NaN` (but these are outside the scope of this issue.) ## Use Cases / Examples ```js const testFunction = (a: number, b: number) => { if (a > b) { return; } else if (a < b) { return; } else if (a === b) { return; } else { throw new Error("Unreachable code"); } } testFunction(1, 2); testFunction(1, 0 / 0); testFunction(1, Math.log(-1)); testFunction(1, Math.sqrt(-2)); testFunction(1, Math.pow(99999999, 99999999)); testFunction(1, parseFloat('string')); ``` A programmer might assume that the `Unreachable code` error could never be thrown, because the conditions appear to be exhaustive, and the types of `a` and `b` are `number`. It is very easy to forget that `NaN` breaks all the rules of comparison and equality checks. It would be really helpful if TypeScript could warn about the possibility of `NaN` with a more fine-grained type system, so that the programmer was forced to handle these cases. ### Possible Solutions TypeScript could add a `--strictNaNChecks` option. To implement this, I think TS might need to add some more fine-grained number types that can be used to exclude `NaN`. The return types of built-in JavaScript functions and operations would be updated to show which functions can return `NaN`, and which ones can never return `NaN`. A call to `!isNaN(a)` would narrow down the type and remove the possibility of `NaN`. Here are some possible types that would make this possible: ```ts type integer type float type NaN type Infinity type number = integer | float | NaN | Infinity // Backwards compatible type realNumber = integer | float // NaN and Infinity are not valid values ``` (I don't know if `realNumber` is a good name, but hopefully it gets the point across.) 
Here are some examples of what this new type system might look like: ```ts const testFunction = (a: integer, b: integer) => { if (a > b || a < b || a === b) { return; } else { throw new Error("Unreachable code"); } } // Ok testFunction(1, 2); // Type error. TypeScript knows that a division might produce a NaN or a float testFunction(1, 0 / 0); const a: integer = 1; const b: integer = 0; const c = a + b; // inferred type is `integer`. Adding two integers cannot produce NaN or Infinity. testFunction(1, c); // Ok const d = a / b; // inferred type is `number`, which includes NaN and Infinity. testFunction(1, d); // Type error (number is not integer) const e = -2; // integer const f = Math.sqrt(e); // inferred type is: integer | float | NaN (sqrt of an integer cannot return Infinity) const g: number = 2; const h = Math.sqrt(g); // inferred type is number (sqrt of Infinity is Infinity) testFunction(1, h); // Type error. `number` is not compatible with `integer`. if (!isNaN(h)) { // The type of h has been narrowed down to integer | float | Infinity testFunction(1, h); // Still a type error. integer | float | Infinity is not compatible with integer. } if (Number.isInteger(h)) { // The type of h has been narrowed down to integer testFunction(1, h); // Ok } ``` When the `--strictNaNChecks` option is disabled (default), then the `integer` and `float` types would also include `NaN` and `Infinity`: ```ts type integer // Integers plus NaN and Infinity type float // Floats plus NaN and Infinity type number = integer | float // Backwards compatible type realNumber = number // Just an alias, for forwards-compatibility. ``` I would personally be in favor of making this the default behavior, because `NaN` errors have caused me a lot of pain in the past. They even made me lose trust in the type system, because I didn't realize that it was still possible to run into them. I would really love to prevent errors like this at compile-time: <img width="446" alt="screen shot 2018-11-27 at 4 35 34 pm" src="https://user-images.githubusercontent.com/139536/49078834-aba8e900-f271-11e8-8bbe-15ae4ea7f930.png"> This error is from a fully-typed Flow app, although I'm switching to TypeScript for any future projects. It's one of the very few crashes that I've seen in my app, but I just gave up because I have no idea where it was coming from. I actually thought it was a bug in Flow, but now I understand that type checking didn't protect me against `NaN` errors. It would be really awesome if it did! (Sorry for the Flow example, but this is a real-world example where a `NaN` type check would have saved me a huge amount of time.) ## Number Literal Types It would be annoying if you had to call `isNaN()` after every division. When the programmer calls `a / 2`, there is no need to warn about `NaN` (unless `a` is a `number` type that could potentially be `NaN`.) `NaN` is only possible for `0 / 0`. So if either the dividend or the divisor are non-zero numbers, then the `NaN` type can be excluded in the return type. And actually zero can be excluded as well, if both dividend and divisor are non-zero. Maybe this can be done with the `Exclude` [conditional type](https://www.typescriptlang.org/docs/handbook/advanced-types.html#conditional-types)? 
Something like: ```ts type nonZeroNumber = Exclude<number, 0> type nonZeroRealNumber = Exclude<realNumber, 0> type nonZeroInteger = Exclude<integer, 0> type nonZeroFloat = Exclude<float, 0> ``` If the dividend and divisor type both match `nonZeroInteger`, then the return type would be `nonZeroFloat`. So you could test any numeric literal types against these non-zero types. e.g.: ``` const a = 2; // Type is numeric literal "2" // "a" matches the "nonZeroInteger" type, so the return type is "nonZeroFloat" // (this excludes Infinity as well.) // (Technically it could be "nonZeroInteger", or even "2" if TypeScript did // constant propagation. But that's way outside the scope of this issue.) const b = 4 / a; ``` ## Checklist My suggestion meets these guidelines: * [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [X] This wouldn't change the runtime behavior of existing JavaScript code * [X] This could be implemented without emitting different JS based on the types of the expressions * [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,In Discussion,Domain: Literal Types
high
Critical
384,826,570
angular
Refine RouteReuseStrategy
# 🚀 RouteReuseStrategy 2.0

### Relevant Package
This feature request is for @angular/router

### Description
The current [`RouteReuseStrategy` API](https://angular.io/api/router/RouteReuseStrategy) is totally confusing and probably broken.

### Describe the solution you'd like
In order for this API to become usable, at least the following problems have to be solved:
* #25521: Introduce `ngOnAttach`/`ngOnDetach` lifecycle hooks
* #20114: impossible to store/retrieve siblings AND ALSO a non-sibling
* #20072: RouteReuseStrategy doesn't reuse the parent tree components
* [Why is `retrieve` called before `shouldAttach`](https://github.com/angular/angular/pull/22475#issuecomment-441914257)?
* There is no 'official' way to properly dispose cached routes. `DetachedRouteHandle` is said to be 'opaque', but to dispose a route one has to use the content of this structure: `(handle as any).componentRef.destroy()`. Does this call suffice to dispose a route? Why not add a public API for this? Also a `(handle as any).componentRef.hostView.destroyed` check has to be used in `retrieve` because of #20072 to avoid reattaching a destroyed component. (A sketch of this workaround follows below.)
* Documentation. More than one sentence for every method and a clear description of their call sequence are needed.
* Code examples (working).
* Ship alternative ready-to-use implementations with the router, not only `DefaultRouteReuseStrategy`. E.g. a greedy strategy that tries to cache and reuse all the routes.

### Describe alternatives you've considered
The current API makes you use a lot of workarounds, and still you cannot be sure whether you've done everything right. E.g. see [my attempt to implement](https://stackblitz.com/github/thorn0/angular-router-experiments?file=src%2Fapp%2Fcustom-reuse-strategy.ts) the greedy reuse strategy.
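```ts
import { DetachedRouteHandle } from '@angular/router';

// A sketch of the disposal workaround described above; it reaches into
// the supposedly opaque DetachedRouteHandle, which is exactly what a
// public API should make unnecessary.
function destroyDetachedHandle(handle: DetachedRouteHandle | null): void {
  const componentRef = handle && (handle as any).componentRef;
  if (componentRef && !componentRef.hostView.destroyed) {
    componentRef.destroy();
  }
}
```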
feature,area: router,feature: under consideration
high
Critical
384,849,334
rust
Help: passing a closure variable to Ref::map_split fails to compile
The demo code is:
```rust
#![feature(refcell_map_split)]
use std::cell::{Ref, RefCell};

fn main() {
    let cell = RefCell::new([1, 2, 3, 4, 5]);
    let borrow = cell.borrow();
    // let (begin, end) = Ref::map_split(borrow, |s| s.split_at(3)); // OK

    // Extracting the closure into a variable and then passing it as an
    // argument raises an error.
    let lambda = |s: &[i32; 5]| s.split_at(3);
    let (begin, end) = Ref::map_split(borrow, lambda);
    assert_eq!(*begin, [1, 2, 3]);
    assert_eq!(*end, [4, 5]);
}
```
[It fails to compile with the following error](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=658e5261a8d79ef36cc18aa43359ff76):
```
error[E0271]: type mismatch resolving `for<'r> <[closure@examples/dddd.rs:10:18: 10:45] as std::ops::FnOnce<(&'r [i32; 5],)>>::Output == (&'r _, &'r _)`
  --> examples/dddd.rs:11:24
   |
11 |     let (begin, end) = Ref::map_split(borrow, lambda);
   |                        ^^^^^^^^^^^^^^ expected bound lifetime parameter, found concrete lifetime
   |
   = note: required by `<std::cell::Ref<'b, T>>::map_split`
```
How can I fix this while still passing the closure as an argument to `Ref::map_split`?
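One workaround (untested here; based on the usual fix for this class of error) is to use a standalone fn item instead of a stored closure, since a fn item stays generic over the lifetime and can satisfy the `for<'r>` bound:

```rust
// A sketch: the fn item is higher-ranked over the input lifetime, which
// the stored closure's inferred signature is not.
fn split(s: &[i32; 5]) -> (&[i32], &[i32]) {
    s.split_at(3)
}

let (begin, end) = Ref::map_split(borrow, split);
```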
C-enhancement,A-diagnostics,A-closures,T-compiler
low
Critical
384,882,108
create-react-app
Remove bit.ly links from template
We need to remove the bit.ly links from the template because they point to the old README. We need to decide what to replace them with, though. We could pick a new URL shortener, but we could wind up in this situation again. Some allow you to change link destinations, but who would own that account?

Follow up to: https://github.com/facebook/create-react-app/pull/5808

Related: https://github.com/facebook/create-react-app/issues/5536
tag: documentation,tag: internal
low
Major
384,889,716
go
cmd/compile: initialization optimization fails when using -ldflags=-X=VAL
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.1 darwin/amd64
</pre>

### Does this issue reproduce with the latest release?
Yes, tested with `go version go1.11.2 darwin/amd64`

### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/thibault.jamet/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/thibault.jamet/dev/"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/z3/r2mc8fw12vb83952716284n00000gq/T/go-build606665018=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>

### What did you do?
I narrowed down the problem to a simple go program:

```go
package main

import "fmt"

var version = "dev"

var versions = struct {
	API  string
	Code string
}{"0.1.0", version}

func main() {
	fmt.Println(version)
	fmt.Println(versions)
}
```

Then build it using the specific flag described in the [toolchain tricks](https://github.com/golang/go/wiki/GcToolchainTricks#including-build-information-in-the-executable): `go build -o test -ldflags "-X main.version=1.0.0" .`

### What did you expect to see?
When running `./test` I would expect to get an output of
```
1.0.0
{0.1.0 1.0.0}
```

### What did you see instead?
An output of
```
1.0.0
{0.1.0 dev}
```

I suspect this is related to when `main.version` is evaluated: using ldflags means the value is only resolved at link time, by which point the compiler may have already copied the compile-time value into the struct initializer and thus removed the reference used by the linker.
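A workaround sketch that matches this theory: assigning in `init()` moves the copy to run time, after the linker has patched `main.version`:

```go
// A sketch of a workaround: the Code field is filled in at run time,
// so it picks up the value patched by -ldflags "-X main.version=...".
var versions = struct {
	API  string
	Code string
}{API: "0.1.0"}

func init() {
	versions.Code = version
}
```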
NeedsFix,compiler/runtime
low
Critical
384,988,694
go
crypto/x509: verify checks root certificate
### What version of Go are you using (`go version`)? <pre> $ go version go version go1.11.2 darwin/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/Users/witek/Library/Caches/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/witek/go" GOPROXY="" GORACE="" GOROOT="/usr/local/Cellar/go/1.11.2/libexec" GOTMPDIR="" GOTOOLDIR="/usr/local/Cellar/go/1.11.2/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/k6/n2t1dlcx5b3_h_2hgys27n940000gn/T/go-build556083463=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre></details> ### What did you do? ``` package main import ( "crypto/rand" "crypto/rsa" "crypto/x509" "crypto/x509/pkix" "encoding/pem" "fmt" "log" "math/big" "os" "time" ) func saveCertificate(cert *x509.Certificate, path string) { certOut, _ := os.Create(path) pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: cert.Raw}) certOut.Close() } func savePrivateKey(key *rsa.PrivateKey, path string) { keyOut, _ := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}) keyOut.Close() } func main() { rootCertTemplate := &x509.Certificate{ SerialNumber: big.NewInt(1653), Subject: pkix.Name{ CommonName: "Root", }, NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0), IsCA: true, KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign, BasicConstraintsValid: true, MaxPathLen: 0, MaxPathLenZero: true, } rootPriv, _ := rsa.GenerateKey(rand.Reader, 2048) rootPub := &rootPriv.PublicKey rootCertBytes, err := x509.CreateCertificate(rand.Reader, rootCertTemplate, rootCertTemplate, rootPub, rootPriv) if err != nil { log.Println("create ca failed", err) return } rootCert, _ := x509.ParseCertificate(rootCertBytes) saveCertificate(rootCert, "root.crt") savePrivateKey(rootPriv, "root.key") intermediateCertTemplate := &x509.Certificate{ SerialNumber: big.NewInt(1658), Subject: pkix.Name{ CommonName: "Intermediate", }, NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0), SubjectKeyId: []byte{1, 2, 3, 4, 6}, KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign, IsCA: true, BasicConstraintsValid: true, MaxPathLen: 0, MaxPathLenZero: true, } intermediatePriv, _ := rsa.GenerateKey(rand.Reader, 2048) intermediatePub := &intermediatePriv.PublicKey intermediateCertBytes, err := x509.CreateCertificate(rand.Reader, intermediateCertTemplate, rootCert, intermediatePub, rootPriv) intermediateCert, _ := x509.ParseCertificate(intermediateCertBytes) saveCertificate(intermediateCert, "intermediate.crt") savePrivateKey(intermediatePriv, "intermediate.key") leafCertTemplate := &x509.Certificate{ SerialNumber: big.NewInt(1680), Subject: pkix.Name{ CommonName: "Leaf", }, NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0), SubjectKeyId: []byte{1, 2, 3, 4, 7}, KeyUsage: x509.KeyUsageDigitalSignature, } leafPriv, _ := rsa.GenerateKey(rand.Reader, 2048) leafPub := &leafPriv.PublicKey leafCertBytes, err := x509.CreateCertificate(rand.Reader, 
leafCertTemplate, intermediateCert, leafPub, intermediatePriv)
	leafCert, _ := x509.ParseCertificate(leafCertBytes)
	saveCertificate(leafCert, "leaf.crt")
	savePrivateKey(leafPriv, "leaf.key")

	roots := x509.NewCertPool()
	roots.AddCert(rootCert)
	intermediates := x509.NewCertPool()
	intermediates.AddCert(intermediateCert)

	opts := x509.VerifyOptions{
		Roots:         roots,
		Intermediates: intermediates,
	}

	_, err = leafCert.Verify(opts)
	if err != nil {
		fmt.Println("Verification failed: ", err.Error())
		return
	}
	fmt.Printf("Success!\n")
}
```

The example saves the root, intermediate and leaf certificates to files so the chain can also be verified with openssl.

### What did you expect to see?
Verification should be successful.

### What did you see instead?
Verification fails because MaxPathLen is set to 0 on the root certificate. According to the x509 specification, root certs should not be part of the chain and should not be validated at all.

RFC5280 6.1
> The primary goal of path validation is to verify the binding between a subject distinguished name or a subject alternative name and subject public key, as represented in the target certificate, based on the public key of the trust anchor.
> ...
> To meet this goal, the path validation process verifies, among other things, that a prospective certification path (a sequence of n certificates) satisfies the following conditions:
> (a) for all x in {1, ..., n-1}, the subject of certificate x is the issuer of certificate x+1;
> **(b) certificate 1 is issued by the trust anchor;**
> (c) certificate n is the certificate to be validated (i.e., the target certificate); and
> (d) for all x in {1, ..., n}, the certificate was valid at the time in question.

As far as I understand, this means that the trust anchor (which in the example is a self-signed root CA certificate) is not in the list of certificates to be validated, because the first one should be the one that the trust anchor issued.

> When the trust anchor is provided in the form of a self-signed certificate, this self-signed certificate is not included as part of the prospective certification path. Information about trust anchors is provided as inputs to the certification path validation algorithm (Section 6.1.1).

The trust anchor is defined in section 6.1.1 as:
> (d) trust anchor information, describing a CA that serves as a trust anchor for the certification path. The trust anchor information includes:
> (1) the trusted issuer name,
> (2) the trusted public key algorithm,
> (3) the trusted public key, and
> (4) optionally, the trusted public key parameters associated with the public key.
> The trust anchor information **may be** provided to the path processing procedure in the form of a self-signed certificate.

In fact it doesn't need to be a certificate at all. Also, in the verification algorithm there is:
> (l) If the certificate was not self-issued, verify that max_path_length is greater than zero and decrement max_path_length by 1.

The root in the example is self-signed, but in any case, in my opinion it should not even be part of the verification chain. It should only be a trust anchor. Openssl verification succeeds on such a chain:
```
openssl verify -trusted root.crt -untrusted intermediate.crt leaf.crt
leaf.crt: OK
```
NeedsInvestigation
low
Critical
384,995,485
opencv
imgproc: use of undeclared identifier 'ippBorderFirstStageInMem'
##### System information (version)
- OpenCV => 3.4.4
- Operating System / Platform => Arch Linux
- Compiler => gcc (GCC) 8.2.1 20180831

##### Detailed description
While compiling with Intel IPP 2017.0 I get
```
/home/sergiu/Projects/opencv/modules/imgproc/src/morph.cpp: In function ‘bool cv::ippMorph(int, int, int, const uchar*, size_t, uchar*, size_t, int, int, int, int, int, int, int, int, int, int, int, uchar*, size_t, int, int, int, int, int, const double*, int, bool)’:
/home/sergiu/Projects/opencv/modules/imgproc/src/morph.cpp:1226:34: error: ‘ippBorderFirstStageInMem’ was not declared in this scope
         iwBorderType &= ippBorderFirstStageInMem;
                                  ^~~~~~~~~~~~~~~~~~~~~~~~
/home/sergiu/Projects/opencv/modules/imgproc/src/morph.cpp:1226:34: note: suggested alternative: ‘ippBorderInMem’
         iwBorderType &= ippBorderFirstStageInMem;
                                  ^~~~~~~~~~~~~~~~~~~~~~~~
                                  ippBorderInMem
```

##### Steps to reproduce
1. Install Intel IPP 2017
2. Run CMake on OpenCV 3.4.4
3. Compile `opencv_imgproc`
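It looks like `ippBorderFirstStageInMem` only exists in newer IPP releases. One possible direction for a fix, sketched with a guessed version cutoff (the exact threshold is an assumption, not verified):

```cpp
// A sketch only: fence the newer enum on the IPP version macro that
// OpenCV already uses elsewhere; 201800 (IPP 2018) is a guess.
#if IPP_VERSION_X100 >= 201800
    iwBorderType &= ippBorderFirstStageInMem;
#endif
```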
bug,category: imgproc,priority: low,category: build/install
low
Critical
385,015,011
pytorch
BUILD_CAFFE2_OPS=OFF is not tested in CI
caffe2
low
Minor
385,016,325
godot
Polygon2D UV Editor doesn't allow points to be added.
c23710a

Another small feature/enhancement request: it would be nice if the Polygon2D UV/Poly edit tools allowed points to be added after the polygons are created, so all operations could be done in the tool without having to switch back and forth or reset things.

![image](https://user-images.githubusercontent.com/13004169/49115970-de200980-f2a4-11e8-80cb-ad45fe63c6a2.png)
enhancement,topic:editor,usability
low
Major
385,030,335
pytorch
Use of STL templates in cpu/ directory (compiling with different AVX settings) is silently hazardous
In #13993, a user noticed that when compiling with fbgemm, AVX2 instructions started being used in unrelated invocations of `std::unordered_map`. The root cause turned out to be that we compiled code with `-mavx`, which caused some standard library functions (used by those files) to be compiled with AVX instructions, which then clobbered the non-AVX compiled versions of those library functions. The blog post at https://randomascii.wordpress.com/2016/12/05/vc-archavx-option-unsafe-at-any-speed/ nicely explains the situation.

The general "cure" for this problem is to very carefully not use any inline functions or templates inside AVX-compiled files. However, this is a pretty subtle matter (for example, `floorf` in `math.h` is non-static inline on VC++) and easy to get wrong. It would be nice for the build system to detect in some way when we have gotten it wrong.

One possible approach to solving this problem is to write some extra scripts that grovel through the object files we produce when compiling this code, and check whether there are any symbols from the standard library which could get clobbered in this case. You'd have to implement this separately for each platform, because standard library implementations are different, and that can cause differences in whether or not things get recompiled into the compilation unit.

CC @dskhudia

cc @malfet @seemethere @walterddr
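A hedged sketch of such a checker for ELF/GNU toolchains (the AVX/non-AVX split of object files is an input assumption; `nm` symbol-type letters `W`/`V` mark the weak definitions that inline functions and template instantiations become, and the linker keeps an arbitrary copy of each):

```python
import subprocess
import sys

def weak_syms(obj):
    # Collect defined weak symbols from one object file (demangled).
    out = subprocess.run(['nm', '-C', obj], capture_output=True, text=True).stdout
    return {line.split(' ', 2)[2] for line in out.splitlines()
            if ' W ' in line or ' V ' in line}

def check(avx_objs, plain_objs):
    # Flag weak symbols defined in both an AVX-compiled and a plain object:
    # whichever copy the linker keeps wins, which is exactly the hazard above.
    avx = set().union(*map(weak_syms, avx_objs))
    plain = set().union(*map(weak_syms, plain_objs))
    clashes = sorted(avx & plain)
    for sym in clashes:
        print('possible ISA clash on weak symbol:', sym)
    return not clashes

if __name__ == '__main__':
    # usage: check_avx_clash.py avx1.o avx2.o -- plain1.o plain2.o
    split = sys.argv.index('--')
    sys.exit(0 if check(sys.argv[1:split], sys.argv[split + 1:]) else 1)
```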
module: build,triaged
low
Major
385,031,786
TypeScript
Object.defineProperty doesn't define properties on 'this'
```js
// @ts-check
class C {
    constructor() {
        Object.defineProperty(this, "x", { value: "hello" });
        this.x.toLowerCase();
    }
}
```

**Expected**: No error.

**Actual**: `Property 'x' does not exist on type 'C'.`
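A hedged workaround sketch until `Object.defineProperty` is modeled: declare the property in a way the checker already understands, e.g. a plain constructor assignment with a JSDoc type:

```js
// @ts-check
class C {
    constructor() {
        /** @type {string} */
        this.x = "hello"; // plain assignment declares `x` on `C` for the checker
        this.x.toLowerCase(); // no error
    }
}
```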
Suggestion,Awaiting More Feedback,Domain: JavaScript
low
Critical
385,034,036
flutter
GIFs with high frame rates run slower than expected
## Steps to Reproduce

1. Have a gif file
2. Have code like `final Image image = Image.file(File('gif_file_path'), fit: BoxFit.contain);`
3. Compare the gif in Chrome with Flutter and notice it repeats or runs slower in Flutter

Here's an example video: https://www.youtube.com/watch?v=EoLbQRfTSbM

## Flutter Doctor

```
$ flutter doctor -v
[✓] Flutter (Channel master, v0.11.11-pre.7, on Mac OS X 10.13.6 17G65, locale en-US)
    • Flutter version 0.11.11-pre.7 at /Users/KG/Developer/Flutter/flutter
    • Framework revision bcac1bd5e2 (37 minutes ago), 2018-11-27 14:44:53 -0800
    • Engine revision 5bf4deb435
    • Dart version 2.1.0 (build 2.1.0-dev.9.4 f9ebf21297)

[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
    • Android SDK at /Users/KG/Library/Android/sdk
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-28, build-tools 28.0.3
    • ANDROID_HOME = /Users/KG/Library/Android/sdk
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
    • All Android licenses accepted.

[!] iOS toolchain - develop for iOS devices (Xcode 10.0)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 10.0, Build version 10A255
    ✗ Verify that all connected devices have been paired with this computer in Xcode.
      If all devices have been paired, libimobiledevice and ideviceinstaller may require updating.
      To update with Brew, run:
        brew update
        brew uninstall --ignore-dependencies libimobiledevice
        brew uninstall --ignore-dependencies usbmuxd
        brew install --HEAD usbmuxd
        brew unlink usbmuxd
        brew link usbmuxd
        brew install --HEAD libimobiledevice
        brew install ideviceinstaller
    • ios-deploy 1.9.4
    • CocoaPods version 1.5.3

[✓] Android Studio (version 3.1)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin version 28.0.1
    • Dart plugin version 173.4700
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)

[✓] IntelliJ IDEA Community Edition (version 2018.2.5)
    • IntelliJ at /Applications/IntelliJ IDEA CE.app
    • Flutter plugin version 29.1.3
    • Dart plugin version 182.4892.25

[!] VS Code (version 1.21.1)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension not installed; install from
      https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter

[!] Connected device
    ! No devices available
```
engine,a: quality,customer: product,a: images,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-engine,triaged-engine
low
Major
385,046,639
flutter
the canvas content overflow from the child page when drag back.
![image](https://user-images.githubusercontent.com/5997900/49120123-cd948100-f2ee-11e8-890d-56a8db49b638.png)

The body of the child page is an entire CustomPainter, and when I drag back, the canvas content overflows the page. Is this a Flutter problem?
framework,f: routes,has reproducible steps,P2,found in release: 3.3,found in release: 3.5,team-framework,triaged-framework
low
Minor
385,048,489
flutter
Provide a flutter tool wrapper around dartdoc?
The `flutter` tool already basically provides a wrapper around dartanalyzer. Maybe it could provide a wrapper around `dartdoc` too? Currently `dartdoc` requires setting the `FLUTTER_ROOT` environment variable to generate project documentation with working references to Flutter API documentation, and it requires using `--link-to-remote`. Meanwhile the `flutter` tool already sets `FLUTTER_ROOT` itself and hides it from users.
c: new feature,tool,P3,team-tool,triaged-tool
low
Major
385,122,571
pytorch
[JIT] Tracing a script function/module where not all args are Tensors
## 🐛 Bug

Suppose I write a script function/module where one argument is an int. Then tracing a larger model that uses this script function will fail.

## To Reproduce

```python
@torch.jit.script
def foo(x, y: int):
    return x + y

def bar(x):
    return foo(x, 4)

x = torch.zeros(3)
foo(x, 4)  # this works for any `x` and `y`
bar(x)     # this works too
traced_bar = torch.jit.trace(bar, (x,))  # this errors
```

The tracing fails with `ValueError: Auto nesting doesn't know how to process an input object of type int. Accepted types: Tensors, or lists/tuples of them`

## Environment

PyTorch version: 1.0.0a0+60e7d04
Is debug build: No
CUDA used to build PyTorch: 9.1.85

OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.11.1

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti

Nvidia driver version: 390.77
cuDNN version: Probably one of the following:
/usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so.7.1.3
/usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn_static.a

Versions of relevant libraries:
[pip] Could not collect
[conda] cuda91 1.0 h4c16780_0 pytorch
[conda] magma-cuda91 2.3.0 1 pytorch
[conda] torch 1.0.0a0+60e7d04 <pip>
[conda] torchvision 0.2.1 py36_1 pytorch
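A hedged workaround sketch in the meantime: keep the int from crossing the tracing boundary by scripting the caller as well, since script functions compose without going through the tracer's tensor-only input handling:

```python
import torch

@torch.jit.script
def foo(x, y: int):
    return x + y

@torch.jit.script
def bar(x):
    return foo(x, 4)  # the int stays inside TorchScript, nothing is traced

x = torch.zeros(3)
print(bar(x))
```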
oncall: jit
low
Critical
385,132,578
react
Fail to render input in a separate window on Edge
**Do you want to request a *feature* or report a *bug*?**

Bug

**What is the current behavior?**

On Edge, when rendering any `<input>` components in a separate window, react-dom reports an error for `<input>`, along with a JS error like:

> SCRIPT5673: Unknown runtime error

**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem.**

`window.open` does not work well on JSFiddle or CodeSandbox, so here is a page link to reproduce the behavior: https://haojy.github.io/issues/input-error-in-separate-window.html

**What is the expected behavior?**

`<input>` components should be rendered as expected, without errors.

**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**

React: v16.6.3 and v16.3.0
Browser:
- only on Edge v42
- works well on IE 11/Chrome 70/Safari 12
Browser: IE,Component: DOM,Type: Needs Investigation
low
Critical
385,141,852
pytorch
[Caffe2] Error protos.protos_size() == OutputSize() when loading dataset created by regular Caffe (datum)
## 🐛 Bug

I tried to **load a leveldb dataset created by regular Caffe (datum) in Caffe2** using **brew.db_input** from the Python API, however I got the following error **(Case 1)**:

```
WARNING: Logging before InitGoogleLogging() is written to STDERR
E1124 13:53:47.919953 8191 prefetch_op.h:110] Prefetching error [enforce fail at tensor_protos_db_input.h:68] protos.protos_size() == OutputSize().
E1124 13:53:47.920176 8161 prefetch_op.h:83] Prefetching failed.
E1124 13:53:47.936815 8161 net_simple.cc:63] Operator failed: input: "dbreader_./data/dpnet_dpnet/TORCS_Training_1F" output: "data" output: "label" name: "" type: "TensorProtosDBInput" arg { name: "batch_size" i: 64 } device_option { device_type: 1 cuda_gpu_id: 0 }
WARNING:caffe2.python.workspace:Original python traceback for operator `-1107198840` in network `train_net` in exception above (most recent call last):
Traceback (most recent call last):
  File "CNNTrainer_dpnet_dpnet.py", line 23, in <module>
    stepsize=8000
  File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 158, in train
    workspace.RunNet(train_model.net)
  File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 217, in RunNet
    StringifyNetName(name), num_iter, allow_fail,
  File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 178, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at pybind_state.cc:1025] success. Error running net train_net
```

I also tried to load the dataset using **brew.image_input** but I got the following error **(Case 2)**:

```
WARNING:caffe2.python.workspace:Original python traceback for operator `1222868328` in network `train_net` in exception above (most recent call last):
Traceback (most recent call last):
  File "CNNTrainer_dpnet_dpnet.py", line 24, in <module>
    stepsize=8000
  File "/home/carlos/Documents/git/Caffe2_scripts/caffe2_torcs_predictor/CNNCreator_dpnet_dpnet.py", line 195, in train
    workspace.CreateNet(train_model.net, overwrite=True)
  File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 152, in CreateNet
    StringifyProto(net), overwrite,
  File "/home/carlos/Documents/git/pytorch/build/caffe2/python/workspace.py", line 178, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at cast.h:15] TensorProto_DataType_Parse(s, &to). Unknown 'to' argument: LEVELDB
```

## To Reproduce

Steps to reproduce the behavior:

**(Case 1):**

```
def add_input(self, model, batch_size, db, db_type, device_opts):
    with core.DeviceScope(device_opts):
        # load the data
        data, label = brew.db_input(
            model,
            blobs_out=["data", "label"],
            batch_size=batch_size,
            db=db,
            db_type=db_type,
        )
        # don't need the gradient for the backward pass
        data = model.StopGradient(data, data)
        return data, label

. . .

# == Training model ==
train_model = model_helper.ModelHelper(name="train_net", arg_scope=arg_scope)

data, label = self.add_input(train_model, batch_size=batch_size, db=os.path.join(self._data_dir_, 'TORCS_Training_1F'), db_type='leveldb', device_opts=device_opts)
predictions = self.create_model(train_model, data, label, device_opts=device_opts)
self.add_training_operators(train_model, predictions, label, device_opts, opt_type, base_learning_rate, policy, stepsize, epsilon, beta1, beta2, gamma, momentum)
self.add_accuracy(train_model, predictions, label, device_opts, eval_metric)
with core.DeviceScope(device_opts):
    brew.add_weight_decay(train_model, weight_decay)

# Initialize and create the training network
workspace.RunNetOnce(train_model.param_init_net)
workspace.CreateNet(train_model.net, overwrite=True)

# Main Training Loop
print("== Starting Training for " + str(num_epoch) + " epochs ==")
for i in range(num_epoch):
    workspace.RunNet(train_model.net)
    if i % 50 == 0:
        print 'Iter ' + str(i) + ': ' + 'Loss ' + str(workspace.FetchBlob("loss")) + ' - ' + 'Accuracy ' + str(workspace.FetchBlob('accuracy'))
print("Training done")
```

**(Case 2):**

```
def AddImageInput(self, model, reader, batch_size, db_type, is_test):  # img_size
    '''
    The image input operator loads image and label data from the reader and
    applies transformations to the images (random cropping, mirroring, ...).
    '''
    data, label = brew.image_input(
        model,
        reader,
        ["data", "label"],
        batch_size=batch_size,
        color=3,
        output_type=db_type,
        use_caffe_datum=False,
        crop=0,
        mirror=0,
        is_test=is_test,
    )
    data = model.StopGradient(data, data)
    return data, label

. . .

# == Training model ==
train_model = model_helper.ModelHelper(name="train_net", arg_scope=arg_scope)

reader = train_model.CreateDB("reader", db=os.path.join(self._data_dir_, 'torcs-train-nchw-leveldb'), db_type='leveldb')
data, label = self.AddImageInput(train_model, reader=reader, batch_size=batch_size, db_type='leveldb', is_test=False)
predictions = self.create_model(train_model, data, label, device_opts=device_opts)
self.add_training_operators(train_model, predictions, label, device_opts, opt_type, base_learning_rate, policy, stepsize, epsilon, beta1, beta2, gamma, momentum)
self.add_accuracy(train_model, predictions, label, device_opts, eval_metric)
with core.DeviceScope(device_opts):
    brew.add_weight_decay(train_model, weight_decay)

# Initialize and create the training network
workspace.RunNetOnce(train_model.param_init_net)
workspace.CreateNet(train_model.net, overwrite=True)

# Main Training Loop
print("== Starting Training for " + str(num_epoch) + " epochs ==")
for i in range(num_epoch):
    workspace.RunNet(train_model.net)
    if i % 50 == 0:
        print 'Iter ' + str(i) + ': ' + 'Loss ' + str(workspace.FetchBlob("loss")) + ' - ' + 'Accuracy ' + str(workspace.FetchBlob('accuracy'))
print("Training done")
```

## Expected behavior

Load the leveldb dataset created by regular Caffe (datum) without any errors.

## Environment

- PyTorch Version (e.g., 1.0): Caffe2 tag v0.4.0
- OS (e.g., Linux): Ubuntu 16.04
- How you installed PyTorch (`conda`, `pip`, source): Built from source
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version: 8.0/7.0.5
- GPU models and configuration: GTX 1050
- Any other relevant information:
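Two hedged observations on the errors above (sketch-level, based on the operator schemas rather than a verified run): `TensorProtosDBInput` (what `brew.db_input` creates) expects records serialized as `TensorProtos`, while a database written by original Caffe stores `Datum` records, which would explain the Case 1 size-check failure; and in Case 2, `output_type` selects the output tensor's data type, not the database format, which is why it rejects `'LEVELDB'`. A corrected Case 2 sketch (crop/scale values are placeholders):

```python
data, label = brew.image_input(
    model,
    reader,
    ["data", "label"],
    batch_size=batch_size,
    color=3,
    use_caffe_datum=True,  # records were written by original Caffe as Datum
    scale=256,             # placeholder: resize shorter side before cropping
    crop=224,              # placeholder: ImageInput wants a positive crop size
    mirror=0,
    is_test=False,
)
data = model.StopGradient(data, data)
```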
caffe2
low
Critical
385,157,736
pytorch
The speed of scatter is influenced by the data size while using nn.DataParallel
## 🐛 Bug

I am using `nn.DataParallel` for a neural machine translation task. When the number of parallel sentences is more than 10 million, the training speed becomes much slower. After some analysis, I find that the time cost of the **scatter** operation increases strangely for some batches.

## Experiment Result

## 100 thousand sentence pairs

An experiment on this small dataset shows that all the batches work well, as expected.

```
src size: torch.Size([256, 32, 1])   // model input
tgt size: torch.Size([256, 17, 1])   // model input
src device: 0                        // device number of src tensor
tgt device: 0                        // device number of tgt tensor
time consumption of scatter: 0.013   // time for nn.DataParallel to scatter data to multi-gpu
time consumption of model processing: 0.141  // time for module to process data
time consumption of gather: 0.001    // time for nn.DataParallel to gather data
time consumption of batch: 0.396     // total time for this batch, which includes some additional operations besides those mentioned above.
```

## 10 million sentence pairs

While most of the batches work as well as in the previous experiment, some batches spend more time on the scatter operation on this dataset. As a consequence, the overall training speed becomes slower. I picked a strange batch as below:

```
src size: torch.Size([112, 73, 1])
tgt size: torch.Size([112, 53, 1])
src device: 0
tgt device: 0
time consumption of scatter: 0.714   // which grows from 0.01 to around 0.7 on this dataset
time consumption of model processing: 0.131
time consumption of gather: 0.001
time consumption of batch: 71.116
```

## 70 million sentence pairs

Compared with the second experiment, some batches spend even more time on data scatter, and the overall training speed is much slower.

```
src size: torch.Size([88, 93, 1])
tgt size: torch.Size([88, 66, 1])
src device: 0
tgt device: 0
time consumption of scatter: 6.630
time consumption of model processing: 0.133
time consumption of gather: 0.001
time consumption of batch: 7.038
```

## Note:

1. The behavior of strange batches is similar within a dataset.
2. There seems to be no correlation between the strange time consumption and the batch size.
3. The scatter time is calculated by subtracting the time just before the `DataParallel`-wrapped model is called from the time when the real model's `forward` method is called.

## Expected behavior

I expect that the time consumption should not be influenced by the dataset size.

## Environment

PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: Could not collect

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: Tesla P40
GPU 1: Tesla P40

Nvidia driver version: 384.59
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.4

Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect

## Additional context

cc @SsnL @VitalyFedyunin @ngimel
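One hedged thing worth ruling out before concluding that scatter itself slows down: CUDA kernels launch asynchronously, so a host-side timestamp taken right before `forward` can silently absorb time spent waiting on work queued earlier (e.g. the previous iteration's backward). A small sketch that brackets an operation with `torch.cuda.synchronize()` to get honest per-op timings:

```python
import time

import torch

def timed(label, fn, *args, **kwargs):
    # Synchronize so the timer does not absorb previously queued GPU work.
    torch.cuda.synchronize()
    start = time.time()
    out = fn(*args, **kwargs)
    torch.cuda.synchronize()
    print('%s: %.3fs' % (label, time.time() - start))
    return out
```

If scatter still dominates under this measurement, the slowdown is real; if not, the cost likely belongs to another stage of the batch.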
module: performance,module: dataloader,triaged,module: data parallel
low
Critical
385,175,880
TypeScript
Type Alias Record/Interface Selector Pattern - Discriminated Unions, Fails to discriminate type shape, like switch does
It would be great if this pattern worked, because it is much simpler to use and would typically be much faster than the `extends ?` pattern: there is no interrogation of the types with all their complex patterns, shapes and forms, which gets quite hairy and causes major performance slowdowns. With this pattern it just becomes iterative selection, hopefully just a dictionary lookup, as JavaScript is in the backing of the code. It would also ensure that everything is type-checked correctly for all of these interesting ways to use TypeScript.

[email protected]

```ts
type TSPrimatives = boolean | number | string | Date;

type TSShapes = ShapeTsType<any> | ShapeTsRecord<any> | ShapeTsArrayPrimative<any> | ShapeTsArrayRecord<any>;

interface ShapeTsType<T extends TSPrimatives> {
    __Type: T
    __Shape: 'T'
}

interface ShapeTsRecord<T extends Record<string, TSShapes>> {
    __Record: T
    __Shape: 'R'
}

interface ShapeTsArrayPrimative<T extends TSShapes> {
    __ArrayPrimative: { w: T }
    __Shape: 'AP'
}

interface ShapeTsArrayRecord<T extends Record<string, TSShapes>> {
    __ArrayRecord: T
    __Shape: 'AR'
}

type ExtractShape<T extends ShapeTsType<any>> = T['__Type']

type ExractTsShape<T extends TSShapes> = ({
    'T': ExtractShape<T>          // Fails to discriminate the type
    'R': T['__Record']            // Fails to discriminate the type
    'AP': T['__ArrayPrimative']   // Fails to discriminate the type
    'AR': T['__ArrayRecord']      // Fails to discriminate the type
})[T['__Shape']]

type ExtractRecordTSShape<T extends Record<string, TSShapes>> = {
    [K in keyof T]: ExractTsShape<T[K]>
}
```

From https://www.typescriptlang.org/docs/handbook/advanced-types.html:

Discriminated Unions

You can combine singleton types, union types, type guards, and type aliases to build an advanced pattern called discriminated unions, also known as tagged unions or algebraic data types. Discriminated unions are useful in functional programming. Some languages automatically discriminate unions for you; TypeScript instead builds on JavaScript patterns as they exist today. There are three ingredients:

Types that have a common, singleton type property — the discriminant.
A type alias that takes the union of those types — the union.
Type guards on the common property.

    interface Square {
        kind: "square";
        size: number;
    }
    interface Rectangle {
        kind: "rectangle";
        width: number;
        height: number;
    }
    interface Circle {
        kind: "circle";
        radius: number;
    }

First we declare the interfaces we will union. Each interface has a kind property with a different string literal type. The kind property is called the discriminant or tag. The other properties are specific to each interface. Notice that the interfaces are currently unrelated. Let's put them into a union:

    type Shape = Square | Rectangle | Circle;

Now let's use the discriminated union:

    function area(s: Shape) {
        switch (s.kind) {
            case "square": return s.size * s.size;
            case "rectangle": return s.height * s.width;
            case "circle": return Math.PI * s.radius ** 2;
        }
    }
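For comparison, the conditional-type form that does work today (the `extends ?` pattern the report is trying to avoid for performance reasons), written with `infer` so each branch narrows properly:

```ts
type ExractTsShapeAlt<T extends TSShapes> =
    T extends ShapeTsType<infer U> ? U :
    T extends ShapeTsRecord<infer R> ? R :
    T extends ShapeTsArrayPrimative<infer P> ? { w: P } :
    T extends ShapeTsArrayRecord<infer R> ? R :
    never;
```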
Suggestion,Awaiting More Feedback
low
Major
385,187,906
rust
Avoid hashing more than is strictly necessary, in the compiler.
It should be possible to build cheaper interners than what we currently have, using `raw_entry`. cc @rust-lang/compiler @Gankro @Amanieu
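A heavily hedged sketch of the idea (the `raw_entry` API is nightly-only behind the `hash_raw_entry` feature gate, and the exact method names may shift): hash the key once, then probe and insert using that same hash, instead of hashing again on insert.

```rust
#![feature(hash_raw_entry)]
use std::collections::HashMap;
use std::hash::{BuildHasher, Hash, Hasher};

struct Interner {
    map: HashMap<String, u32>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> u32 {
        // Hash once up front...
        let mut hasher = self.map.hasher().build_hasher();
        s.hash(&mut hasher);
        let hash = hasher.finish();
        let next = self.map.len() as u32;
        // ...then look up and (if absent) insert using that same hash.
        let (_, id) = self
            .map
            .raw_entry_mut()
            .from_key_hashed_nocheck(hash, s)
            .or_insert_with(|| (s.to_owned(), next));
        *id
    }
}
```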
C-cleanup,T-compiler
low
Major
385,206,861
pytorch
Optimizer warning when parameters "change"
## 🚀 Feature

I vote for a slight change in the architecture of the 'communication' between the optimizer and the actual model. The optimizer should know the parameters of the 'actual' model, and if the parameters the optimizer knows about do not match the ones of the model, it should emit an error or at least a warning. Also, when trying to inspect the setup of the optimizer, one does not get the names of the parameters the optimizer works on; it should remember those as well.

## Motivation

I played around with a simple RNN for learning the 'bracket language', i.e. valid expressions involving the start symbol '<', the end symbol '>', normal characters 'a', 'b', 'c' and brackets '(' and ')'. A word is valid if it starts with '<', ends in '>' and every opening bracket has a matching closing bracket. Example: `<aab()c((c)ba)>` is valid while `<<aa()>`, `()`, `<aa(b>` are not.

I played around with different infrastructures (with memory, without memory, etc.), so at first I created different classes for the different infrastructures. Then I came to realise that much of the code was being repeated. That is not good. So I refactored a little bit: I created an RNN class that just leaves the `forwardFancaShmancyStuff()` function unimplemented, i.e. it looked like this (pseudocode):

```
class RNN(torch.nn):
    def __init__(self, hiddenSize, ...):
        # translate a single character integer 0=<,1=>,2=a,...,7=) into real vectors
        # that we can put into GRU or any other fanca-shmancy stuff
        self.encoder = nn.Embedding(7, hiddenSize)
        # turn hiddenSize-vectors into vectors of length of the alphabet in order to sample
        # next character
        self.decoder = nn.Linear(hiddenSize, 7)
        self.optimizer = torch.optim.Adadelta(self.parameters(), lr=someLearningRate)

    def fit(...):
        # train

    def sample(...):
        # sample a new word

    def forward(...):
        # apply self.encoder
        self.forwardFancaShmancyStuff(...)
        # apply decoder and return

    def forwardFancaShmancyStuff(...):
        # not implemented
```

Every new infrastructure I wanted to try out then just inherited everything from RNN and implemented only `forwardFancaShmancyStuff()`.

**All of a sudden the performance of the models dropped from 90% valid expressions to 50% valid expressions, although I did not change a single thing (same training set, same hyperparameters, same random seeding... literally everything was exactly the same); I just refactored the code! How can that be?**

So I searched for quite some time. The reason is that in the inheriting class there are new parameters being created, like this:

```
class VaniallaGRU(RNN):
    def __init__(...):
        # call super constructor in order to set up encoder and decoder
        self.gru = torch.nn.GRU(hiddenSize, hiddenSize, amountLayers)

    def forwardFancaShmancyStuff(...):
        # call gru
```

However, when calling `fit()` now, the optimizer was created at the time the superclass (RNN) was constructed and *not* within `VaniallaGRU`, i.e. the optimizer now silently optimizes only the encoder and the decoder but not the GRU, and that was causing the extreme drop in performance.

## Pitch

In the setting above, where one develops NN architectures in a clean software-architectural way, the optimizer should check whether the model's parameters are the same as the ones it was initialized with. If not (as is the case above), it should respond with an error or at least a warning like "Warning: optimizer will not optimize all parameters of the neural net".

One could also introduce a parameter like `warnIfNotAllParamsAreOptimized=True` which one can set to `False` deliberately in order to allow that (maybe one loads a complicated old model and just wants to retrain a part of it).

## Alternatives

One could clearly reinitialize or re-create the optimizer in every class that inherits from RNN. However, one tends to forget that :-)

## Additional context

None.

cc @vincentqb @iramazanli
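A minimal sketch of the proposed check, implementable today outside the optimizer (the function and keyword names are hypothetical, not existing PyTorch API):

```python
import warnings

import torch

def check_optimizer_coverage(model: torch.nn.Module,
                             optimizer: torch.optim.Optimizer,
                             warn_if_not_all_params_are_optimized: bool = True):
    # Collect the tensors the optimizer actually knows about...
    optimized = {id(p) for group in optimizer.param_groups for p in group['params']}
    # ...and report every model parameter missing from them.
    missing = [name for name, p in model.named_parameters() if id(p) not in optimized]
    if missing and warn_if_not_all_params_are_optimized:
        warnings.warn('optimizer will not optimize: %s' % ', '.join(missing))
    return missing
```

Calling this at the start of `fit()` would have flagged the silently untrained GRU immediately.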
module: optimizer,triaged,enhancement
low
Critical
385,229,655
TypeScript
In JS, don't complain about a better inferred type if there's no code action
**TypeScript Version:** 3.2.0-rc

**Search Terms:** inferred type

**Code**

JS file with `checkJS` on:

![unbenannt](https://user-images.githubusercontent.com/17641229/49149750-34279800-f30b-11e8-9e8c-59a268fdfdf2.PNG)

This message is on **every single** parameter without a JSDoc comment!

**Expected behavior:** No complaint, or a code action that is offered.

**Actual behavior:** Complaint about a better inferred type; <kbd>Ctrl</kbd> + <kbd>.</kbd> offers no code actions.

**Related Issues:** https://github.com/Microsoft/TypeScript/issues/22357
Suggestion,Fixed,Domain: JavaScript,Experience Enhancement,Fix Available
low
Minor
385,237,054
neovim
virtualtext: multiple EOL virtual text annotations per line
`nvim_buf_clear_highlight` with id=2 will also clear the highlight for id=1.

```vim
let b = bufnr('%')
let id1 = nvim_buf_set_virtual_text(b, 0, 0, [['foo', 'Error']], {})
let id2 = nvim_buf_set_virtual_text(b, 0, 0, [['foo', 'Error']], {})
echo [id1, id2]
call nvim_buf_clear_highlight(b, id2, 0, -1)
```

This happens both when using `0` for an auto-generated src_id and when using 1 and 2 directly. Using the same text is also not relevant here.

Relevant code: https://github.com/neovim/neovim/blob/def608cd248218f9473caebb99e848e67a46a6f6/src/nvim/buffer.c#L5485-L5520

Neovim v0.3.2-931-gdef608cd2.
enhancement,needs:design,marks
low
Critical
385,318,785
go
proposal: spec: enums as an extension to types
# Yet another enum proposal

Related: #19814, #28438

# First of all, what is the issue with `const`? Why can't we use that instead?

Well, first of all, `iota` of course only works with anything that works with an untyped integer. Also, the namespace for the constants is at the package level, meaning that if your package provides multiple utilities, there is no distinction between them other than their type, which may not be immediately obvious. For instance, if I had my own `mat` (material) package, I'd want to define `mat.Metal`, `mat.Plastic`, and `mat.Wood`. Then maybe classify my materials as `mat.Soft`, `mat.Neutral`, and `mat.Hard`. Currently, all of these would be in the same namespace. What would be good is to have something along the lines of `mat.Material.Metal`, `mat.Material.Plastic`, `mat.Material.Wood`, and then `mat.Hardness.Soft`, `mat.Hardness.Neutral`, and `mat.Hardness.Hard`.

Another issue with using constants is that they may have a lot of runtime issues. Consider the following:

```
var ErrInvalidWeekday = errors.New("invalid weekday")

type Weekday byte

const (
	Sunday Weekday = iota
	Monday
	Tuesday
	// ...
)

func (f Weekday) Valid() bool {
	return f <= Saturday
}

func (d Weekday) Tomorrow() Weekday {
	if !d.Valid() {
		panic(ErrInvalidWeekday)
	}
	if d == Saturday {
		return Sunday
	}
	return d + 1
}
```

Not only is there a lot of boilerplate code where we _define_ the "enum", but there is also a lot of boilerplate whenever we _use_ the "enum", not to mention that it means we need to do runtime error checking, as there are bit patterns that are not valid.

I thought to myself: what even _are_ enums? Let's take a look at some other languages:

## C

```
typedef enum week{Sun,Mon,Tue,Wed,Thu,Fri,Sat} Weekday;

Weekday day = Sun;
```

This ends up being similar to Go's `iota`. But it suffers the same pitfalls that we have with `iota`, of course. ~~But since it has a dedicated type, there is some compile-time checking to make sure that you don't mess up too easily.~~ I had assumed there was compile-time checking to make sure that things like `Weekday day = 20` were at least compile-time warnings, but at least with `gcc -Wextra -Wall` there are no warnings for it.

## C++

This section was added in an edit; originally C and C++ were grouped together, but C++11 added `enum class` and `enum struct`, which are very similar to Java's (next section). They do have compile-time checking to make sure that you don't compare two different types, or do something like `Weekday day = 20`. `Weekday day = static_cast<Weekday>(20)` still works, however. We should not allow something like this. https://github.com/golang/go/issues/28987#issuecomment-445296770

Syntax:

```
enum class Weekday { sun, mon, tues, ... };

Weekday day = Weekday::sun;
Weekday day2 = static_cast<Weekday>(2); // tuesday
```

## Java

An enum is a kind of class. This class has several static members, named after the enum values you define. The type of the enum value is the class itself, so each enum value is an object.

```
enum Weekday {
	SUNDAY(), // parentheses optional, if we define a constructor, we can add arguments here
	MONDAY,
	TUESDAY,
	// ...
	SATURDAY;

	// define methods here
	public String toString() {
		// ...
	}
}
```

I personally like this implementation, although I would appreciate it if the objects were immutable. The good thing about this implementation is that you are able to define methods on your enum types, which can be extremely useful.

We can do this in Go today, but with Go you need to validate the value at runtime, which adds quite a bit of boilerplate and a small efficiency cost. This is not a problem in Java because there are no possible enum values other than the ones you define.

## Kotlin

Kotlin, being heavily inspired by Java, has the same implementation. They are even more clearly objects, as they are called `enum class` instead of simply `enum`.

## Swift

Proposal #28438 was inspired by these. I personally don't think they're a good fit for Go, but it's a different approach, so let's take a look:

```
enum Weekday {
	case Sunday
	case Monday
	case Tuesday
	// ...
}
```

The idea becomes more powerful, as you can define "case functions" (the syntax is `case SomeCase(args...)`), which allow something like `EnumType.number(5)` being separate from `EnumType.number(6)`. I personally think it is more fitting to just use a function instead, although it does seem like a powerful idea. I barely have any Swift experience though, so I don't know the advantages of a lot of the features that come with Swift's implementation.

## JavaScript

```
const Weekday = Object.freeze({
	Sunday: Symbol("Sunday"),
	Monday: Symbol("Monday"),
	Tuesday: Symbol("Tuesday"),
	// ...
});
```

This is probably the best you can do in JavaScript without a static type system. I find this to be a good implementation for JavaScript, though. It also allows the values to actually have behavior.

# Okay, so enough with other languages. What about Go?

We need to ask ourselves: what would we want out of enums?

1. Named, immutable values.
2. Compile-time validation. (We don't want to have to manually check at runtime to see if enum values are valid.)
3. A concise way to define the values (the only thing that `iota` really provides).

And what is an enum? The way that I have always seen it, enums are an _exhaustive_ list of immutable values for a given type.

# Proposal

Enums limit what values a type can hold. So really, enums are just an _extension_ of what a type can do. "Extension" perhaps isn't the right word, but the syntax should hopefully make my point. The enum syntax should reflect this. The proposed syntax would be `type TypeName <base type> enum { <values> }`:

```
package mat // import "github.com/user/mat"

// iota can be used in enums
type Hardness int enum {
	Soft = iota
	Neutral
	Hard
}

// Enums should be able to be objects similar to Java, but
// they should be required to be immutable. A readonly types
// proposal may help this out. Until then, it may be good just to either
// have it as a special case that enum values' fields cannot be edited,
// or have a `go vet` warning if you try to assign to an enum value's field.
type Material struct {
	Name     string
	Strength Hardness
} enum {
	// these would greatly benefit from issue #12854
	Metal   = Material{Name: "Metal", Strength: values(Hardness).Hard}
	Plastic = Material{Name: "Plastic", Strength: values(Hardness).Neutral}
	Foam    = Material{Name: "Foam", Strength: values(Hardness).Soft}
}

// We can define functions on `Material` like we can on any type.

// Strong returns true if this is a strong material
func (m Material) Strong() bool {
	return m.Strength >= Hardness.Neutral
}
```

The following would be true with enums:

* `int enum { ... }` would be the type that `Hardness` is based on. `int enum { ... }` has the underlying type `int`, so `Hardness` also has the underlying type `int`.
* Assigning an untyped constant to a variable with an enum type is allowed, but results in a compile error if the enum does not support the constant expression's value. (That's a long-winded way of saying `var h Hardness = 1` is allowed, but `var h Hardness = 100` is not. This is similar to how it is a compile error to do `var u uint = -5`.)
* As with normal types, assigning a typed expression of a different type to a variable (`var h Hardness = int(5)`) is not allowed.
* There _is_ a runtime validation check sometimes, although this can be omitted in most cases. The runtime check occurs when converting to the new type, for instance `var h Hardness = Hardness(x)` where `x` is an integer variable.
* Using arithmetic operators on enums with underlying arithmetic types should probably either not be allowed, or be a runtime panic with a `go vet` flag. This is because `h + 1` may not be a valid `Hardness`.

Syntax ideas for reading enum values:

1. `Type.Name`
   * It's a common syntax people are familiar with, but it makes `Type` look like a value.
2. `Type#Name`, `Type@Name`, etc.
   * Something like these would make the distinction that `Type` is not a value, but it doesn't feel familiar or intuitive.
3. `Type().Name`
   * This one doesn't make too much sense to me, but it popped into my head.
4. `values(Type).Name`, `enum(Type).Name`, etc.
   * `values` would be a builtin function that takes a type and returns its enumeration values as a struct value. Passing a type that has no `enum` part would trivially return `struct{}{}`. It seems extremely verbose though. It would also clash, as `values` is a pretty common name; many `go vet` errors may result from this name. A different name such as `enum` may be good.

I personally believe `values(Type).Name` (or something similar) is the best option, although I can see `Type.Name` being used because of its familiarity.

I would like more critique on the enum definitions rather than on reading the values, as that is what the proposal mainly focuses on. Reading values from an enum is trivial once you have a syntax, so it shouldn't need to be critiqued too much. What needs to be critiqued is what the goal of an `enum` is, how well this solution accomplishes that goal, and whether the solution is feasible.

# Points of discussion

There has been some discussion in the comments about how we can improve the design, mainly the syntax. I'll take the highlights and put them here. If new things come up and I forget to add them, please remind me.

### Value list for the enum should use parentheses instead of curly braces, to match `var`/`const` declaration syntax

* **Advantage:** More consistent with the rest of the language.
* **Disadvantage:** Doesn't quite look as nice when declaring enums of structs.

### Perhaps changing the type syntax from `<underlying type> enum ( values )` to `enum <underlying type> ( values )`

* **Advantage:** `int enum ( ... )` -> `enum int ( ... )` and similar become more readable and consistent with other languages.
* **Advantage:** Ambiguities such as `[]byte enum ( ... )` get resolved to either `enum []byte ( ... )` or `[]enum byte ( ... )`.
* **Disadvantage:** `struct { ... } enum ( ... )` -> `enum struct { ... } ( ... )` becomes less readable.
* **Disadvantage:** (In my eyes) it doesn't illustrate how this enum implementation works _quite_ as well.

### Add type inference to enum declarations

* **Advantage:** Definitions become more concise, especially when declaring inline types with enums.
* **Disadvantage:** The conciseness comes at a price to readability, in that the original type of the enum is not in a consistent location.
* **My comment:** Type inference in Go is typically done in places which would benefit from it often, like declaring a variable. There really should be very few enum declarations "per capita" of code, so I (personally) think the verbosity of requiring the type is justified.

### Use the `Type.Value` syntax for reading enum values

* I've already talked about the advantages and disadvantages of this above, but it was mentioned that we already use `Type.Method` to reference methods, so it wouldn't be quite as bad to reference enum values as `Type.Value`.

### Ranging over enum values is not discussed

* I forgot about it when writing the original text, but luckily it doesn't undermine the proposal. This is an easy thing to fit in though. We can use `Type.Slice`, which returns a `[]Type`.

### Regarding zero values

* We have two choices: either the first enum value, or the zero value of the underlying type.
* **First enum value:** Makes more intuitive sense when you first look at it.
* **Zero value of type:** More consistent with the rest of Go, but may cause a compile error if the zero value of the type is not in the enum.
* **My comment:** I think the zero value of the type should be used. The zero value of a type is _always_ represented as all-zeros in binary, and this shouldn't change that. On top of that, the only thing the `enum` "attachment" to a type does is limit what values variables of the type can hold. So under this rationale, I think it makes intuitive sense that if the enum for a type doesn't include the zero value, then declaring a variable with the zero value should fail to compile. This may seem strange at first, but as long as the compile error message is something intuitive (e.g. `illegal assignment to fooVar: FooType's enum does not contain value <value>`) it shouldn't be much of a problem.
LanguageChange,Proposal,LanguageChangeReview
high
Critical
385,340,125
rust
Warn against `mod lib;` in `main.rs`
Right now this is possible and compiles without warning. Given a cargo crate with name "foo":

```rust
// src/lib.rs
```

```rust
// src/main.rs
mod lib;

fn main() {}
```

... even though we now have duplicated code in the library "foo" and the submodule "lib".

This can be confusing for a beginner because the code can still compile or just break "late" depending on what you do:

```rust
// src/lib.rs
pub struct Foo;
```

```rust
// src/main.rs
mod lib;

// either of those work (though not at the same time):
use foo::Foo;
use crate::lib::Foo;

fn main() {}
```

cargo or rustc should detect if we use `lib.rs` as a submodule, and warn or error against it.

---

This issue assumes 2018 edition without uniform paths syntax.
A-lints,T-lang,T-cargo
low
Critical
385,349,814
pytorch
Add a debug mode which is -O0 for framework code, but -O for kernels
One of the reasons people don't like `DEBUG=1` defaulting to `-O0` is because it makes their tests run slowly. But they also don't like `-O` because you get not very useful information inside gdb. Wouldn't it be nice to have the best of both worlds? Well, maybe we can achieve this, by having a mode where we can compile kernels with optimizations, and framework code without it. We'd have to come up with some sort of dividing principle in cmake land (maybe folder structure), but hopefully most of the time you want to debug a backtrace, it's because some framework code is bad, not a kernel. cc @malfet @seemethere @walterddr
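A hedged sketch of what the split might look like in CMake, where the kernel/framework divide is expressed as a source list (the glob and the toggle variable are hypothetical, not existing build options):

```cmake
# Keep kernels optimized even when the rest of the build is -O0.
if(DEBUG_FRAMEWORK_ONLY)  # hypothetical toggle
  file(GLOB_RECURSE KERNEL_SRCS "aten/src/ATen/native/*.cpp")
  set_source_files_properties(${KERNEL_SRCS} PROPERTIES COMPILE_FLAGS "-O2")
endif()
```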
module: build,triaged
low
Critical
385,368,907
pytorch
Flaky download from files.pythonhosted.org when installing botocore
Log: https://circleci.com/gh/pytorch/pytorch/316264?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link

```
Exception:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1198, in prepare_files
    do_download,
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1376, in unpack_url
    self.session,
  File "/usr/lib/python2.7/dist-packages/pip/download.py", line 546, in unpack_http_url
    resp = session.get(target_url, stream=True)
  File "/usr/share/python-wheels/requests-2.2.1-py2.py3-none-any.whl/requests/sessions.py", line 467, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/pip/download.py", line 237, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/usr/share/python-wheels/requests-2.2.1-py2.py3-none-any.whl/requests/sessions.py", line 455, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/share/python-wheels/requests-2.2.1-py2.py3-none-any.whl/requests/sessions.py", line 558, in send
    r = adapter.send(request, **kwargs)
  File "/usr/share/python-wheels/requests-2.2.1-py2.py3-none-any.whl/requests/adapters.py", line 378, in send
    raise ConnectionError(e)
ConnectionError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/53/80/8328098290ca3bfeca05b96c4041c5a7f861219c22624ff482aa859107e9/botocore-1.12.25-py2.py3-none-any.whl (Caused by <class 'socket.error'>: [Errno 104] Connection reset by peer)
Storing debug log for failure in /home/circleci/.pip/pip.log
```

I'm not sure what we should do here: we can't preload botocore onto our Docker image, because this happens before we have gotten our Docker image.
triaged,module: flaky-tests,better-engineering
low
Critical
385,376,232
go
go/printer: consider using last position for nodes with no position
Reminder issue to experiment with using the last known (end) position for nodes where we don't have a position (and where NoPos is not used to indicate absence of a token).

From e-mail conversation with @aclements:

> > On Wed, Nov 28, 2018 at 8:34 AM Austin Clements <[email protected]> wrote:
> > Thanks! Assigning the new Specs I created in the const block to the position of the const worked perfectly.
> >
> > Is there a reason we don't treat NoPos nodes as having the position of the nearest ancestor when printing?
>
> NoPos sometimes indicates that a token is absent (e.g., the position of '(' or ')' is NoPos if those parentheses are missing).
>
> But I suppose in other cases it might work - interesting thought. I'm slightly worried that making such a change might break existing carefully tuned code though. But worthwhile an experiment.
NeedsInvestigation
low
Minor
385,387,051
go
reflect: document struct field layout in memory
### What version of Go are you using (`go version`)?

1.11

### Does this issue reproduce with the latest release?

Yes

### What operating system and processor architecture are you using (`go env`)?

N/A

### What did you do?

Read the docs, specifically as to (1) whether the memory layout of structures has any predictable qualities, and (2) whether reflect's Field() (on either reflect.Type or reflect.Value) finds them in a predictable order.

### What did you expect to see?

More detailed information. Empirically, it appears that fields are always reported in source code order, and are always laid out in source code order. Given the AST entries representing the declaration of two struct field identifiers for a given struct, the one that came first will have a lower offset, and its field number, as used by Field() or FieldByIndex(), etc., will also be lower. I don't think this being untrue breaks any compatibility promises, so it may not be true. It's almost never significant. I've seen one case where the behavior of code might actually depend significantly on it: a library using reflect to iterate over structs to synthesize a list of command-line flags at runtime.

To be clear, I'm not necessarily asking that the behavior be codified. I'd also be fine with "the ordering is not guaranteed to be consistent", as long as it's safe to assume that, given a value and type that correspond to the same structure type, Field(i) gives *corresponding* fields in both of them. But right now, even *that* is not actually specified, although it's hard to imagine any possible way it could be untrue without breaking all sorts of things.

Note the overlap with issue #10014 on struct layout/padding. Really, the substantive new part is probably just the observation that nothing is stated about the semantics of Field(), and that if it's not guaranteed to pick one of source order or memory order, or some order anyway, that would be nice to know.

### What did you see instead?

Less detailed information.
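For what it's worth, a small self-contained program demonstrating the empirically observed behavior (it shows what happens today, not what is guaranteed):

```go
package main

import (
	"fmt"
	"reflect"
)

type S struct {
	A int8
	B int64
	C int8
}

func main() {
	t := reflect.TypeOf(S{})
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		// Prints fields in source order, with non-decreasing offsets.
		fmt.Printf("Field(%d) = %s at offset %d\n", i, f.Name, f.Offset)
	}
}
```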
Documentation,NeedsInvestigation,compiler/runtime
low
Minor
385,398,102
pytorch
prim::ConstantChunk derivative formula doesn't handle undefined inputs
It's a multi-output node, so it has to handle the case when not all outputs get well defined gradients (e.g. when some outputs are unused).
oncall: jit
low
Minor
385,403,208
pytorch
building bundled nccl fails in (caffe2, cuda 8, cudnn 7) CI environment
The error is the same as seen in #13362; they're likely related, although this one does not involve ccache. To uncover this bug, remove the hardcoding of USE_SYSTEM_NCCL=1 in .jenkins/caffe2/build.sh.

The error ends up looking like this:

```
nvlink fatal : Internal error: reference to deleted section
```

I partially investigated this, with these findings:

(1) Building the bundled nccl works in the exact same environment when driven by the pytorch build (i.e. calling setup.py). Tested by spinning up the docker container and trying both the caffe2 and the pytorch build.

(2) The difference is, for some reason, that the pytorch build has `MAKEFLAGS=s -j --jobserver-fds=3,4` and `MFLAGS=s -j --jobserver-fds=3,4`, while the caffe2 build has only `-s` in each variable. So something about the jobserver makes the build succeed during the torch build. I verified this by explicitly setting `MAKEFLAGS=-s` and `MFLAGS=-s` in nccl.cmake and running another pytorch build; it then fails in the same way as the caffe2 build.

To uncover this difference in the first place, I replaced the build command in nccl.cmake with `bash -c "env > /tmp/ncclenv"` to output the environment of the nccl sub-build, and diffed the resulting file when driven by the pytorch build and the caffe2 build.
caffe2
low
Critical
385,404,762
pytorch
TestAdagrad.test_row_wise_sparse_adagrad intermittently fails health check
Sample log: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-devtoolset7-rocmrpm-centos7.5-test/846//consoleFull

```
18:23:25 =================================== FAILURES ===================================
18:23:25 ___________________ TestAdagrad.test_row_wise_sparse_adagrad ___________________
18:23:25 
18:23:25 self = <caffe2.python.operator_test.adagrad_test.TestAdagrad testMethod=test_row_wise_sparse_adagrad>
18:23:25 
18:23:25     @settings(suppress_health_check=[HealthCheck.filter_too_much])
18:23:25 >   @given(inputs=hu.tensors(n=2),
18:23:25            lr=st.floats(min_value=0.01, max_value=0.99,
18:23:25                         allow_nan=False, allow_infinity=False),
18:23:25            epsilon=st.floats(min_value=0.01, max_value=0.99,
18:23:25                              allow_nan=False, allow_infinity=False),
18:23:25            data_strategy=st.data(),
18:23:25            **hu.gcs)
18:23:25     def test_row_wise_sparse_adagrad(self, inputs, lr, epsilon,
18:23:25                                      data_strategy, gc, dc):
18:23:25 E   FailedHealthCheck: Data generation is extremely slow: Only produced 7 valid examples in 1.00 seconds (108 invalid ones and 0 exceeded maximum size). Try decreasing size of the data you're generating (with e.g.max_size or max_leaves parameters).
18:23:25 E   See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.too_slow to the suppress_health_check settings for this test.
18:23:25 
18:23:25 /usr/local/caffe2/lib/python2.7/site-packages/caffe2/python/operator_test/adagrad_test.py:168: FailedHealthCheck
18:23:25 ---------------------------------- Hypothesis ----------------------------------
18:23:25 Trying example: test_row_wise_sparse_adagrad(self=<caffe2.python.operator_test.adagrad_test.TestAdagrad testMethod=test_row_wise_sparse_adagrad>, inputs=[array([0.], dtype=float32), array([0.], dtype=float32)], lr=0.01, epsilon=0.01, data_strategy=data(...), gc=, dc=[, device_type: 6])
18:23:25 Trying example: test_row_wise_sparse_adagrad(self=<caffe2.python.operator_test.adagrad_test.TestAdagrad testMethod=test_row_wise_sparse_adagrad>, inputs=[array([[[[ 0.275142,  0.275142,  0.618039, -0.606138,  0.275142]],
18:23:25 
18:23:25          [[ 0.275142,  0.275142,  0.806599,  0.275142,  0.275142]]],
18:23:25 
18:23:25 
18:23:25         [[[ 0.275142,  0.275142, -0.672307,  0.275142,  0.275142]],
18:23:25 
18:23:25          [[-0.692087, -0.71764 ,  0.275142,  0.275142,  0.275142]]]],
18:23:25       dtype=float32),
18:23:25  array([[[[-0.250365, -0.250365, -0.250365, -0.250365, -0.250365]],
18:23:25 
18:23:25          [[-0.250365, -0.250365, -0.250365,  0.260523, -0.250365]]],
18:23:25 
18:23:25 
18:23:25         [[[-0.905285, -0.250365, -0.250365, -0.250365, -0.250365]],
18:23:25 
18:23:25          [[-0.250365, -0.250365, -0.250365, -0.250365, -0.250365]]]],
18:23:25       dtype=float32)], lr=0.07642670198526025, epsilon=0.5954366965838439, data_strategy=data(...), gc=, dc=[, device_type: 6])
```

It's worth figuring out why this is intermittent; our Hypothesis test generation should be deterministic, which means we should ALWAYS fail the health check, if our generation strategy is bad.
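A hedged note on the intermittency: the health check quoted in the log is wall-clock based ("Only produced 7 valid examples in 1.00 seconds"), so even fully deterministic example generation can pass or fail depending on machine load. A sketch of how a test could both pin the randomness and opt out of the timing check (the test and strategy names are illustrative):

```python
from hypothesis import HealthCheck, given, seed, settings
import hypothesis.strategies as st

@seed(1234)  # fix example generation so reruns see identical data
@settings(suppress_health_check=[HealthCheck.too_slow], deadline=None)
@given(x=st.integers())
def test_example(x):
    assert isinstance(x, int)
```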
caffe2
low
Critical
385,418,692
go
runtime: potential for self-deadlock in runtime write barriers
In the runtime there's an implicit invariant that's being maintained: write barriers may not be called into while the mheap.lock is held or the runtime could deadlock. This invariant is not documented anywhere nor is it enforced via `nowritebarrier` or `nowritebarrierrec` annotations.

Consider the following situation:

1. Call into runtime.XXX.
2. runtime.XXX grabs the heap lock.
3. runtime.XXX assigns a heap pointer value into some location.
4. The write barrier is on and is invoked.
5. The write barrier attempts to enqueue the pointer into a marking work buffer.
6. There are no empty work buffers available to enqueue the pointer for marking.
7. Call into mheap.allocManual to allocate space for more work buffers.
8. mheap.allocManual attempts to grab the heap lock.
9. Deadlock.

After some manual inspection, I'm fairly convinced that this isn't really a problem today because everywhere the heap lock is held, only `notinheap`-annotated structures are manipulated, so no write barriers are ever generated in those sections. However, this invariant adds an additional mental burden when making changes to the runtime, and one day we may want to allow such write barriers (see https://go-review.googlesource.com/c/go/+/46751/12/src/runtime/mgc.go#266 for example), so perhaps the invariant doesn't need to exist at all.

traceBufs for example are managed via sysAlloc and sysFree directly, whereas GC mark work buffers live in heap-allocated spans. If we were to move GC mark work buffers to use the same model, this would effectively remove this invariant.

Making this change will require some work, so I'm kicking this issue to 1.13.

CC @aclements @RLH
compiler/runtime
low
Minor
385,423,955
pytorch
Batch matmul with sparse matrix, dense vector
When I use torch.matmul or SparseTensor.mm, I can't seem to do batch matmul:

```
torch.matmul(sparse_mat, batch)
```

gives:

```
RuntimeError                              Traceback (most recent call last)
<ipython-input-45-a815f1be316f> in <module>()
----> 1 torch.matmul(sparse_mat, batch)

RuntimeError: sparse tensors do not have strides
```

When the matrix is dense, it runs without a problem:

```
torch.matmul(sparse_mat.to_dense(), batch)
```

So I have had to resort to iterating over batches, which makes it a bit slower than the custom implementation I built for my project. With parallelization over batches, it would be really simple to write custom autograd functions to speed up my implementation (something like [this](https://gist.github.com/anonymous/49c10bc17ac4a97307d52c07d01a2870)).

Any suggestions or upcoming support for batch operations for dense vector * sparse matrix -> dense vector operations? Am I missing something that is currently working for this?

cc @vincentqb
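For reference, a hedged sketch of the loop-based fallback described above, assuming `sparse_mat` is a 2-D sparse `(m, n)` matrix and `batch` is a dense `(B, n, k)` tensor:

```python
import torch

def sparse_batch_matmul(sparse_mat, batch):
    # Multiply the sparse matrix against each batch element, then restack.
    return torch.stack([sparse_mat.mm(batch[i]) for i in range(batch.size(0))])
```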
todo,module: sparse,triaged
medium
Critical
385,428,600
TypeScript
Make JS/TS Import fix insert import in linter specified order
Popular linting rules in the TypeScript community require imports to be in alphabetic order (see [ordered-imports in tslint:recommended](https://github.com/palantir/tslint/blob/a43cfdc4534c63447a6a5a66a6802716892503d2/src/configs/recommended.ts#L125)). This is generally useful for the readability of imports.

In VS Code, when a developer references an external module that hasn't been imported, VS Code offers a useful fix, which adds the appropriate import:

![image](https://user-images.githubusercontent.com/3891951/49174467-3d106d80-f2fb-11e8-9a13-3ae3323e398d.png)

This is extremely useful! Thank you! :tada:

However, once automatically imported, the import is just added to the end of the import list, leading to a linter error (since it's not in alphabetic order) 😕:

![image](https://user-images.githubusercontent.com/3891951/49177771-e6a72d00-f302-11e8-825d-7585225ec203.png)

You can then auto-fix the import issue (VS Code is aware of the TSLint preferences):

![image](https://user-images.githubusercontent.com/3891951/49175461-7d70eb00-f2fd-11e8-9f69-213eef2ea944.png)

Would it be possible to short-circuit these steps, such that when a developer "fixes" a missing import, it is automatically inserted in the appropriate order based on linting rules?

FWIW, this also applies to the JS community (eslint has import order rules as well).
Suggestion,Awaiting More Feedback,Domain: TSServer
low
Critical
385,485,096
rust
edition idioms: Incorrect span in `extern crate` removal
[First reported upstream](https://github.com/rust-lang/cargo/issues/6360), this code:

```rust
#![warn(rust_2018_idioms)]

#[cfg_attr(test, macro_use)]
extern crate itertools;

use itertools::Itertools;

fn main() {
    println!("{:?}", (0..1).collect_vec());
}
```

[when compiled](https://play.rust-lang.org/?version=beta&mode=debug&edition=2018&gist=04ce404ce50b4ad3ab21582d8556d382) yields:

```
warning: unused extern crate
 --> src/main.rs:4:1
  |
4 | extern crate itertools;
  | ^^^^^^^^^^^^^^^^^^^^^^^ help: remove it
  |
note: lint level defined here
 --> src/main.rs:1:9
  |
1 | #![warn(rust_2018_idioms)]
  |         ^^^^^^^^^^^^^^^^
  = note: #[warn(unused_extern_crates)] implied by #[warn(rust_2018_idioms)]
```

but the suggestion is incorrect! Both the `extern crate` and the attribute on the item should be removed.
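For reference, a sketch of what the file should look like after a correct fix is applied, with both the attribute and the `extern crate` item gone (under the 2018 edition the `use` path resolves without them; any test-only macro imports would then need their own `use` items):

```rust
#![warn(rust_2018_idioms)]

use itertools::Itertools;

fn main() {
    println!("{:?}", (0..1).collect_vec());
}
```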
C-enhancement,A-lints,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-incorrect,A-edition-2018
low
Critical
385,497,546
pytorch
[Caffe2] GPU test passed. Cannot see on nvidia-smi
The GPU test passed. However, when I run my code, no PID is shown in nvidia-smi. Is there another way to check whether the code is running on the GPU? Or are there more lines of code I need to add so that Caffe2 detects the GPUs and uses them?
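For what it's worth, one way to sanity-check from Python (a sketch, assuming a CUDA build of Caffe2; not an authoritative answer):

```python
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2

print(workspace.has_gpu_support)   # True only if Caffe2 was built with GPU support
print(workspace.NumCudaDevices())  # number of CUDA devices Caffe2 can see

# ops only run on the GPU when they are created under a CUDA device scope
device_opts = core.DeviceOption(caffe2_pb2.CUDA, 0)
with core.DeviceScope(device_opts):
    pass  # build and run your nets here
```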
caffe2
low
Minor
385,510,829
go
x/build/maintner: isTempError improvements
Currently `isTempErr` logs the error and unconditionally returns true ```go func isTempErr(err error) bool { log.Printf("IS TEMP ERROR? %T %v", err, err) return true } ``` This is used in the `sync` function's error group members in conjunction with `loop` to determine if the error returned from the various source `sync` functions is a temporary error or not. ```go for { err := gr.sync(ctx, token, loop) if loop && isTempErr(err) { log.Printf("Temporary error from github %v: %v", gr.ID(), err) time.Sleep(30 * time.Second) continue } log.Printf("github sync ending for %v: %v", gr.ID(), err) return err } ``` This guarantees that if `SyncLoop` is called, the routines will loop forever (`loop` is true and `isTempErr` always returns true), but it also means that if `Sync` is called, and any repository sync operation fails, the entire error group will exit. In a `corpus` that is tracking hundreds of repositories with tens of thousands of issues, a single temporary error in the `Sync` call will always abort all other routines in the group. It would be useful to be able to provide criteria for what constitutes a temporary error or not. I propose something similar to the following: ```go type errCheck func(error) bool // SyncLoop runs forever (until an error or context expiration) and // updates the corpus as the tracked sources change. func (c *Corpus) SyncLoop(ctx context.Context, check errCheck) error { if check == nil { check = isTempErr } return c.sync(ctx, true, check) } // Sync updates the corpus from its tracked sources. func (c *Corpus) Sync(ctx context.Context, check errCheck) error { if check == nil { check = isTempErr } return c.sync(ctx, false, check) } func (c *Corpus) sync(ctx context.Context, loop bool, check errCheck) error { if _, ok := c.mutationSource.(*netMutSource); ok { return errors.New("maintner: can't run Corpus.Sync on a Corpus using NetworkMutationSource (did you mean Update?)") } group, ctx := errgroup.WithContext(ctx) for _, w := range c.watchedGithubRepos { gr, token := w.gr, w.token group.Go(func() error { log.Printf("Polling %v ...", gr.id) for { err := gr.sync(ctx, token, loop) isTempErr := check(err) if loop && isTempErr { log.Printf("Temporary error from github %v: %v", gr.ID(), err) time.Sleep(30 * time.Second) continue } log.Printf("github sync ending for %v: %v", gr.ID(), err) if isTempErr { err = nil } return err } }) } // Rest of function similarly calls check } ``` This would give the consumers of `corpus` the ability to determine what constitutes a temporary error, but default back to the old behaviour by passing `nil` to `Sync` and `SyncLoop`.
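For illustration, a caller could then decide temporariness itself. A sketch against the proposed signature (the retry policy here is invented, and the usual imports plus an initialized `*maintner.Corpus` in `corpus` are assumed):

```go
// example policy: only temporary network errors are retryable
check := func(err error) bool {
	nerr, ok := err.(net.Error)
	return ok && nerr.Temporary()
}
if err := corpus.Sync(ctx, check); err != nil {
	log.Fatalf("sync: %v", err)
}
```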
Builders,NeedsInvestigation
low
Critical
385,550,088
TypeScript
JSDoc @type for list of declarations
Issue Type: <b>Bug</b> When using the `@type` JSDoc for an empty array when declaring multiple variables, only the first declaration is recognized by intellisense. It is overridden by what is actually in the array. It's possible I'm not documenting this correctly, so if that's the case please correct me. Example of it working for the first declaration: ![image](https://user-images.githubusercontent.com/498929/48814902-6b85bb80-ecf1-11e8-8009-5a5893ad1b78.png) Example of it failing for other declarations: ![image](https://user-images.githubusercontent.com/498929/48814923-82c4a900-ecf1-11e8-82a5-eb2093541873.png) ![image](https://user-images.githubusercontent.com/498929/48814953-953ee280-ecf1-11e8-8428-9faba5e1c411.png) VS Code version: Code 1.29.1 (bc24f98b5f70467bc689abf41cc5550ca637088e, 2018-11-15T19:13:36.375Z) OS version: Windows_NT x64 10.0.18272 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz (12 x 4044)| |GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: enabled<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled| |Memory (System)|15.94GB (4.89GB free)| |Process Argv|| |Screen Reader|yes| |VM|0%| </details><details><summary>Extensions (9)</summary> Extension|Author (truncated)|Version ---|---|--- vscode-css-formatter|aes|1.0.1 vscode-eslint|dba|1.7.0 vscode-npm-script|eg2|0.3.5 asciidecorator|hel|0.2.0 vscode-auto-open-markdown-preview|hnw|0.0.4 classic-asp|ili|0.0.4 mssql|ms-|1.4.0 python|ms-|2018.10.1 cpptools|ms-|0.20.1 </details> <!-- generated by issue reporter -->
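For reference, the workaround until this is fixed seems to be giving each declaration its own `@type` in a separate statement (variable names below are placeholders, not from the screenshots):

```js
/** @type {HTMLElement[]} */
const firstList = [];
/** @type {HTMLElement[]} */
const secondList = []; // annotated individually, intellisense keeps the declared type
```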
Bug,Domain: JavaScript
low
Critical
385,570,771
go
cmd/trace: does not track or report in-syscall properly
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? 1.11.2 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? osx, darwin, amd64 ### What did you do? simple ping/pong network test [here](https://github.com/robaho/go-network-test) you need to change the benchmark time to make it run longer I selected an area of threads in the graph. ### What did you expect to see? Since it is a ping-pong, I would expect to see threads alternating between in-syscall, and running. ### What did you see instead? all of the in-syscall counts are 0, but the running alternates from 0 to 1 as expected Here is a screen shot: ![image](https://user-images.githubusercontent.com/4958833/49200195-47fde900-f361-11e8-9a8f-6833eda16d59.png) and here is when I scroll down to the running: ![image](https://user-images.githubusercontent.com/4958833/49200226-78458780-f361-11e8-899d-fd7af6ee4e78.png)
NeedsInvestigation,compiler/runtime
low
Major
385,581,747
flutter
Show users how to run analyze locally when CI fails
We've heard this from multiple contributors now. After they send a PR, the analyze shard fails with checks they had no idea existed, and the failure output doesn't show them how to run the analysis and fix it locally before pushing more commits to the PR, i.e. `dart dev/bots/analyze.dart`.
team,tool,P2,team-tool,triaged-tool
low
Major
385,627,385
vscode
[folding] Move line up/down should skip over folded regions or folded sections
<!-- Please search existing issues to avoid creating duplicates. --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> <!-- Use Help > Report Issue to prefill these. --> - VSCode Version: 1.30.0-insider - OS Version: Windows 10 I think code in a folded region or section should never unfold unless explicitly commanded, so when moving a line with the move command, the line should skip over both folded regions and folded sections. Steps to Reproduce: 1. create a folded region using #region #endregion 2. fold some code section 3. move a line using alt+up/down over the folded region/section ![vscodemovelinebug](https://user-images.githubusercontent.com/6554605/49209732-84b4f900-f3d0-11e8-9bbb-e4f1b4fd018f.gif) <!-- Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes
help wanted,feature-request,editor-folding,editor-commands
medium
Critical
385,638,327
go
bytes: tests are slow on arm
I was watching windows/arm build output, and I noticed that the `bytes` package tests are pretty slow https://farmer.golang.org/temporarylogs?name=windows-arm&rev=311d87dbebbb0238196d3aa13fd9a37f655e1fc3&st=0xc4704774a0 ``` ok bytes 146.447s ``` and then I looked at linux/arm too, and it is slow too https://farmer.golang.org/temporarylogs?name=linux-arm&rev=311d87dbebbb0238196d3aa13fd9a37f655e1fc3&st=0xc4576f58c0 ``` ok bytes 121.089s ``` I just felt sorry for the builders, and decided to create this issue. I ran the `bytes` tests here, and they are fast ``` c:\>go test -count=1 bytes ok bytes 1.106s ``` I also did not notice any particular test that is slower than others. Maybe there is `bytes` package code that can be made faster? Maybe some tests can be adjusted to make arm builders faster? Maybe nothing can be done? Then just close the issue. Thank you. Alex
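For anyone digging in, a rough way to surface the slowest tests on a builder (a sketch; it relies on `go test -v` printing per-test timings, and the sort-by-duration is approximate):

```
$ go test -v -count=1 bytes 2>&1 | grep -- '--- PASS' | sort -t'(' -k2 -gr | head
```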
NeedsInvestigation
low
Major
385,654,311
TypeScript
JSDoc : add support for @method (and @property)
## Search Terms * JSDoc * `@method` * `@property` * support ## Suggestion It would be great to add support for `@method` and `@property` (and maybe `@memberof`) now that annotations are on their way. I found bug #15715 that only talks about `@property` support but I think this makes more sense to see those in a same light because they should have a very common implementation and moreover they meet the same goals as a whole. At least I think and I explain my reasoning about this below! ## Use Cases I have functions that do add methods and/or getters on a class right now, waiting for annotations to get their way in JS/TS. But let's say this: when annotations are here, those 2 flags support will be needed anyway (or at least I think so: I don't see how to make IntelliSense work without those)! ```js /** * Some class * @class */ class Test { /* whatever */ } // this adds a #foo() method on the class prototype as well as a #bar getter(/setter) someMethodAddingABehavior(Test, ...params); ``` And tomorrow with annotations: ```js /** * Some class * @class */ @someAnnotationAddingABehavior(...params) class Test { /* whatever */ } ``` ## Examples I'd like to be able to document those doing something like: ```js /** * What foo is doing is great! * @method foo * @param {string} testParam some testing parameter * @returns {string} some random string * @memberof Test.prototype */ /** * Bar is awesome too! * @property {number} bar * @memberof Test.prototype */ someMethodAddingABehavior(Test, ...params); ``` Or at least to be able to document them in the class itself: ```js /** * Some class * @class */ class Test { /* whatever */ /** * What foo is doing is great! * @method foo * @param {string} testParam some testing parameter * @returns {string} some random string */ /** * Bar is awesome too! * @property {number} bar */ } ``` With support of these, I would expect `vscode` to suggest to me the `foo` method and the `bar` property when completing properties of a value of type `Test`. ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,In Discussion,Domain: JSDoc,Domain: JavaScript
low
Critical
385,733,855
react
Sometimes controlled email inputs break in Chrome due to punycoding
**Do you want to request a *feature* or report a *bug*?** Report a bug **What is the current behavior?** In Chrome, when typing a sharp S (ß, a German letter) in an input field with `type="email"`, it converts the `ß` to `ss` (~~expected~~ unexpected behaviour) and the cursor jumps back to the beginning of the input field (unexpected behaviour). This only happens if the `ß` is part of the domain. Trying to type `test@testß.de` will end up as `.detest@testss`: ![Example](https://i.imgur.com/SWQ0p4f.gif) It can be tested with the latest Google Chrome: https://codepen.io/anon/pen/MzzEqB If you don't have a `ß` on your keyboard, you can reproduce the bug by simply copy-pasting it. **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** Tested with Chrome Version 70.0.3538.110 (Official Build) (64-bit) on Ubuntu 18.04 and React 16.6.3. In Firefox, this does not happen, as it does not convert `ß` to `ss`. I didn't test other browsers.
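A minimal controlled-input repro, roughly what the codepen does (the component name is arbitrary, and the class-property syntax assumes a Babel setup):

```jsx
class EmailField extends React.Component {
  state = { value: "" };
  render() {
    return (
      <input
        type="email"
        value={this.state.value}
        onChange={e => this.setState({ value: e.target.value })}
      />
    );
  }
}
// typing "test@testß" here triggers the ss conversion and the cursor jump
```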
Type: Bug,Component: DOM
low
Critical
385,766,612
go
proposal: text/template/v2: return errors from HTMLEscape and JSEscape
`template.HTMLEscape` and `template.JSEscape` each accept an `io.Writer` and write to it. However, they ignore errors from `Write`. That's fine when the destination `io.Writer` is one that cannot fail (such as a `*bytes.Buffer`), but can mask real errors in general (see also https://github.com/golang/go/issues/20803#issuecomment-312318808). It is possible for the caller to detect those errors by wrapping the `io.Writer`, but wrapping an `io.Writer` to capture an error that it already returns needlessly complicates the code. These functions should return errors, and leave the decision about whether those errors are safe to ignore up to the caller. **Compatibility** This change would be call-site compatible (leaving the vast majority of callers unchanged), but would break programs that pass or assign `template.HTMLEscape` or `template.JSEscape` as a `func(io.Writer, []byte)`. Such uses should be rare. As an alternative, we could add variants of those functions that do return errors; however, separate variants would mask missing error checks from analysis tools. I believe it would be better to simply change the signature in a Go 2 cleanup.
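For reference, the wrapping boilerplate that callers currently need looks roughly like this (a sketch; `escapeTo` is a made-up helper name):

```go
// errWriter remembers the first error returned by the wrapped Writer.
type errWriter struct {
	w   io.Writer
	err error
}

func (ew *errWriter) Write(p []byte) (int, error) {
	if ew.err != nil {
		return 0, ew.err
	}
	n, err := ew.w.Write(p)
	ew.err = err
	return n, err
}

func escapeTo(dst io.Writer, data []byte) error {
	ew := &errWriter{w: dst}
	template.HTMLEscape(ew, data)
	return ew.err // the error HTMLEscape itself swallowed
}
```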
v2,Proposal
low
Critical
385,792,398
pytorch
[Caffe2] How to fetch trainable parameters?
This is what I tried ```python with core.DeviceScope(core.DeviceOption(caffe2_pb2.CUDA, 0)): # DenseNet is a class where all the methods have been added obj = DenseNet((1, 3, 224, 224)) workspace.SwitchWorkspace(obj.current_workspace) workspace.RunNetOnce(obj.model.param_init_net) for name in workspace.Blobs(): # because the input image blob name is input if name != 'input': print("{}:\n{}".format(name, workspace.FetchBlob(name).shape)) for operations in obj.model.net.Proto().op: print(operations.input) ```
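Not sure this is exactly what you're after, but assuming `obj.model` is a `ModelHelper`, the trainable blobs are tracked on the model itself (a sketch):

```python
# ModelHelper keeps its trainable parameter blobs in model.params
for param in obj.model.params:
    print("{}: {}".format(param, workspace.FetchBlob(param).shape))

# after AddGradientOperators, param_to_grad maps each param to its gradient blob
print(obj.model.param_to_grad)
```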
caffe2
low
Minor
385,816,884
react
findDOMNode deprecation
## Timeline 1. <= 16.3: `findDOMNode` is *discouraged* but accepted for certain use cases 2. 16.3 (2018-03-28): `forwardRef` is introduced: It can be used in HOCs to avoid using `findDOMNode` on the enhanced component 3. 16.6 (2018-10-23): `findDOMNode` is deprecated in `React.StrictMode` 4. 16.7.alpha (2018-10-24): `React.Concurrent` mode is released: This mode *extends* `React.StrictMode` in a way that `findDOMNode` is deprecated in that mode too. 5. 16.8 (Q2 2019): stable `React.Concurrent` mode ## findDOMNode use cases If you have more use cases please let me know. I only started with some examples from `mui-org/material-ui`. ### with a planned alternative - focus handling (React Fire, "exploratory phase") - passive event listeners ([facebook/react#6436]). "Passive events will likely be a part of [React Fire]." - [facebook/react#13525] ## State of `forwardRef` `react` has 3.4M downloads/week. ### `hoist-non-react-statics` (3.9M downloads/week; not clear what percentage is 2.x) A utility mainly used in HOCs and [encouraged to use in the official react docs](https://reactjs.org/docs/higher-order-components.html#static-methods-must-be-copied-over). However, everyone stuck at `2.x` will likely encounter issues with `forwardRef`, since that version does not handle any `react@^16.3` features. ^3.2.0 should have no issues apart from some minor issues with propTypes hoisting from `forwardRef` to `forwardRef`. The latest stable from zeit/next still uses that outdated version. However, the latest canary for 7.0.3 does not. ### react-docgen (400k downloads/week) Not recognized as a valid component definition. PR open at [reactjs/react-docgen#311]. ### react-redux (1.4M downloads/week) `connect` properly forwards refs in the beta release of 6.x. No timeline for a stable release has been given; however, 3 betas have already been released, so it's probably soon. ### react-router (1.4M downloads/week) `withRouter` is planned to forward refs ([ReactTraining/react-router#6056#issuecomment-435524678]). However, there is no comment about the other components, and no major release candidate has been published. ### display name `React.forwardRef` components are recognized by `react-devtools`. However, when wrapped in an HOC, the display name is very likely lost. See [facebook/react#14319] ### The issue **Assumptions:** - you are not in control of your whole component tree, i.e. you use components from 3rd party libraries - you want to use `React.ConcurrentMode` - Usable includes production and development. It specifically means for development that deprecation warnings in a component make that component not usable in development mode, because of all the *noise* they add in those cases. *Noise* because it's not actionable if that component is from a 3rd party library. If none of those applies to you, then you probably don't have an issue with `findDOMNode` deprecation. The mode of a partial tree can only be made more restrictive, but not loosened up. If you wrap your tree in `React.StrictMode` and use a component from a 3rd party library, that 3rd party library has to be `React.StrictMode` compliant too. This means that you can't use `React.StrictMode` effectively. This might be ok, since it's for development only anyway and has no implications for production. However, Concurrent mode can have actual implications for production. Since it is new and the community wants to use new things, libraries have to make sure that they are strict mode compliant too.
In addition, between the release of an alternative in the form of `React.forwardRef` and the deprecation, only 7 months have passed. One could argue that this is plenty of time, but (at least from my perspective) the work on migrating from `findDOMNode` to refs and `forwardRef` was postponed because `findDOMNode` was not deprecated yet. However, the actual deprecation happened one day before the release of `unstable_ConcurrentMode`, virtually giving no time to migrate. ~We'll have to see when a stable `16.7` release will happen but assuming this happens today only a month has passed between deprecation and *virtual* removal.~ [React 16.x Roadmap] was released, pointing towards Q2 2019 as the release date of stable `React.Concurrent` mode. This relieves the pressure on library maintainers quite a bit, IMO. ### Conclusion Refs are not a viable upgrade path to replace `findDOMNode` yet. Until refs are usable without ref-forwarding headaches, `findDOMNode` should be undeprecated. ## Related - [forwarding Refs guide on official react docs](https://reactjs.org/docs/forwarding-refs.html) - [findDOMNode API documentation](https://reactjs.org/docs/react-dom.html#finddomnode) (includes arguments against usage) - [pull request that deprecated findDOMNode](https://github.com/facebook/react/pull/13841) [facebook/react#6436]: https://github.com/facebook/react/issues/6436 [facebook/react#13525]: https://github.com/facebook/react/issues/13525 [facebook/react#14319]: https://github.com/facebook/react/issues/14319 [reactjs/react-docgen#311]: https://github.com/reactjs/react-docgen/pull/311 [ReactTraining/react-router#6056#issuecomment-435524678]: https://github.com/ReactTraining/react-router/issues/6056#issuecomment-435524678 [React 16.x Roadmap]: https://reactjs.org/blog/2018/11/27/react-16-roadmap.html
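For context, the `forwardRef`-in-a-HOC pattern the timeline refers to looks like this (a sketch; `withTheme` and `defaultTheme` are made-up names, not a real API):

```jsx
function withTheme(Component) {
  const WithTheme = React.forwardRef((props, ref) => (
    <Component theme={defaultTheme} ref={ref} {...props} />
  ));
  // without this, devtools shows an anonymous ForwardRef (see the display name section above)
  WithTheme.displayName = `withTheme(${Component.displayName || Component.name})`;
  return WithTheme;
}
```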
Type: Feature Request
medium
Minor
385,828,404
rust
debuginfo tests should detect a python-less gdb
Some of the debuginfo tests require a Python-enabled gdb. If this isn't available, they should be disabled. See https://github.com/rust-lang/rustc-guide/pull/243 and https://github.com/rust-lang/rust/issues/52452
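One way the test harness could probe for Python support (a sketch; a Python-less gdb prints an error instead of the marker, so the output can be checked):

```
$ gdb --batch -ex 'python print("gdb-python-ok")'
gdb-python-ok
```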
A-testsuite,C-enhancement,T-testing-devex
low
Critical
385,847,877
go
cmd/go: document module-mode behavior of multi-element GOPATHs
go version devel +311d87dbeb Thu Nov 29 08:30:13 2018 +0000 linux/amd64 ``` $ go env GOARCH="amd64" GOBIN="" GOCACHE="/home/mvdan/go/cache" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/mvdan/go/land:/home/mvdan/go" GOPROXY="" GORACE="" GOROOT="/home/mvdan/tip" GOTMPDIR="" GOTOOLDIR="/home/mvdan/tip/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build190148469=/tmp/go-build -gno-record-gcc-switches" ``` In my case, `mvdan.cc/sh/cmd/gosh` is a main package inside the second GOPATH directory. It's also a module, so it can be built in either mode. ``` $ cd ~/go/src/mvdan.cc/sh/cmd/gosh $ GO111MODULE=on go list -f {{.Target}} /home/mvdan/go/land/bin/gosh $ GO111MODULE=off go list -f {{.Target}} /home/mvdan/go/bin/gosh ``` I understand why the GOPATH build puts the binary where it does - that's the old and documented logic. And I too sort of understand why the module build goes into the first `GOPATH` entry - it's not building the package within any one `GOPATH`, so it just picks the first. However, this can be very confusing. In my case, I ended up with two binaries, and because of `PATH` I was always running the older. The only mention is brief, under `go help modules`: > and installed commands (in GOPATH/bin, unless GOBIN is set) I realise now that my fix should be to either set up a global `GOBIN`, or to stop using a multi-element `GOPATH`. I still think the documentation could be clearer, though. I think we should discourage the use of a multi-element GOPATH with no GOBIN in the modules world, because of the tricky scenario above. /cc @bcmills @myitcv @rogpeppe
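For others hitting this, setting a single `GOBIN` sidesteps the ambiguity, since installs then land in one place in both modes (the output below is what I'd expect, not verified):

```
$ export GOBIN=$HOME/go/bin
$ GO111MODULE=on go list -f {{.Target}}
/home/mvdan/go/bin/gosh
$ GO111MODULE=off go list -f {{.Target}}
/home/mvdan/go/bin/gosh
```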
Documentation,help wanted,NeedsFix
low
Critical
385,878,297
flutter
Detect Java network issues when running `flutter doctor --android-licenses`
We've had reports of users needing to set the Java network proxy before being able to accept the Android licenses. We should be able to write a simple Java executable that probes for basic network connectivity: ```java import java.net.InetAddress; import java.net.InetSocketAddress; import java.nio.channels.SocketChannel; public class NetworkProber { private static final int HTTP_PORT = 80; public static void main(String[] args) { try { InetAddress google = InetAddress.getByName("www.google.com"); InetSocketAddress address = new InetSocketAddress(google, HTTP_PORT); SocketChannel channel = SocketChannel.open(address); System.out.println("Network OK"); closeChannelSilently(channel); System.exit(0); } catch (Throwable t) { System.err.println(t.getMessage()); System.err.println("Network unreachable"); System.exit(1); } } private static void closeChannelSilently(SocketChannel channel) { try { channel.close(); } catch (Exception e) { } } } ``` We could prebuild that tool into a JAR file in the engine repo and upload that with our other prebuilt artifacts, then use it in flutter_tools to ensure that we'll be able to reach the network when accepting Android licenses here: https://github.com/flutter/flutter/blob/72926bdff72e7915136c653c8448107ba9252e36/packages/flutter_tools/lib/src/android/android_workflow.dart#L167 To do this, we can directly invoke Java, as discovered from the Android SDK here: https://github.com/flutter/flutter/blob/72926bdff72e7915136c653c8448107ba9252e36/packages/flutter_tools/lib/src/android/android_sdk.dart#L447
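The tool-side check could then be as simple as inspecting the exit code (a sketch; the jar name and packaging are placeholders):

```
$ java -cp network_prober.jar NetworkProber
Network OK
$ echo $?
0
```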
tool,t: flutter doctor,a: first hour,P3,team-tool,triaged-tool
low
Minor