Dataset schema:
- `id`: int64 (393k to 2.82B)
- `repo`: string, categorical (68 values)
- `title`: string (length 1-936)
- `body`: string (length 0-256k, nullable)
- `labels`: string (length 2-508)
- `priority`: string, categorical (3 values)
- `severity`: string, categorical (3 values)
371,253,501
rust
Use `name.namespace.html` as the canonical URL, not `kind.name.html`
I just wanted to look at the `ManuallyDrop` docs again, and my browser helpfully remembered that it's at <https://doc.rust-lang.org/std/mem/union.ManuallyDrop.html>. But, despite the type being stable, that page no longer exists; it's now <https://doc.rust-lang.org/std/mem/struct.ManuallyDrop.html>. Is the specific item kind actually important to have in the path? The namespace (type vs value vs ...) certainly is, but especially for things without public fields, making the struct-vs-union distinction so front-and-centre seems overall unhelpful.
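The proposal above can be illustrated with a toy canonicalizer (Python, for illustration only; the kind-to-namespace table is my assumption, not rustdoc's): pages keyed by namespace rather than item kind survive a stable type changing from struct to union.

```python
# Toy sketch of the proposed scheme: key doc pages by namespace, not item
# kind, so `struct.ManuallyDrop.html` and `union.ManuallyDrop.html` map to
# the same canonical page. The kind->namespace mapping here is my guess.
NAMESPACE = {
    "struct": "type", "union": "type", "enum": "type", "trait": "type",
    "fn": "value", "static": "value", "constant": "value",
}

def canonical(page: str) -> str:
    kind, rest = page.split(".", 1)    # e.g. "union", "ManuallyDrop.html"
    name, ext = rest.rsplit(".", 1)    # e.g. "ManuallyDrop", "html"
    return f"{name}.{NAMESPACE[kind]}.{ext}"
```

Under this scheme both of the URLs mentioned above resolve to one page, so a browser-remembered link keeps working after the kind changes.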
T-rustdoc,A-stability
medium
Critical
371,257,157
go
cmd/link: possible linker bug when using the -T flag to set the text address
Please answer these questions before submitting your issue. Thanks!

### What version of Go are you using (`go version`)?

1.11.1 darwin/amd64

### Does this issue reproduce with the latest release?

Yes.

### What operating system and processor architecture are you using (`go env`)?

```
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="on"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/iansmith/qemu/xxx"
GOPROXY=""
GORACE=""
GOROOT="/Users/iansmith/qemu/go-1.11/go"
GOTMPDIR=""
GOTOOLDIR="/Users/iansmith/qemu/go-1.11/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="0"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/w0/fw3qzvvx54b39b790yxq6v100000gn/T/go-build145968613=/tmp/go-build -gno-record-gcc-switches -fno-common"
```

### What did you do?

`aarch64-elf-readelf -W -l -S bin/xxx_xxx/surprise`

I have found this error to be caused by any link that supplies a start address *other* than one ending in 0x1000. I was doing some experiments on arm (thus the aarch64 above), but I think the bug is actually in the offset calculations for the elf headers (section and program). Commands like this (using the -T flag with a value that doesn't end in 0x1000) seem to cause the problem:

`GOOS=xxx GOARCH=xxx go install -v -ldflags='-T 0x88000' github.com/iansmith/xxx/surprise/cmd/surprise`

https://play.golang.org/p/b32fVLNTGxP

That link shows the diff to my "fix". My fix is clearly not the real one, but whoever wrote the code in elf.go (`go/src/cmd/link/internal/ld/elf.go`) will probably see the problem immediately when they see what I have done.

### What did you expect to see?
I expected to see normal elf section headers and program headers. After my (ahem) "fix", you get the normal output:

```
$ aarch64-elf-readelf -W -l -S bin/xxx_xxx/surprise
There are 24 section headers, starting at offset 0x158:

Section Headers:
  [Nr] Name               Type      Address          Off    Size   ES Flg Lk Inf Al
  [ 0]                    NULL      0000000000000000 000000 000000 00      0   0  0
  [ 1] .text              PROGBITS  0000000000088000 001000 03f8c0 00  AX  0   0  8
  [ 2] .rodata            PROGBITS  00000000000d0000 050000 024e79 00   A  0   0 32
  [ 3] .shstrtab          STRTAB    0000000000000000 074e80 000193 00      0   0  1
  [ 4] .typelink          PROGBITS  00000000000f5020 075020 000624 00   A  0   0 32
  [ 5] .itablink          PROGBITS  00000000000f5648 075648 000008 00   A  0   0  8
  [ 6] .gosymtab          PROGBITS  00000000000f5650 075650 000000 00   A  0   0  1
  [ 7] .gopclntab         PROGBITS  00000000000f5660 075660 0337c3 00   A  0   0 32
  [ 8] .noptrdata         PROGBITS  0000000000130000 0b0000 000ba8 00  WA  0   0 32
  [ 9] .data              PROGBITS  0000000000130bc0 0b0bc0 0015b0 00  WA  0   0 32
  [10] .bss               NOBITS    0000000000132180 0b2180 01a330 00  WA  0   0 32
  [11] .noptrbss          NOBITS    000000000014c4c0 0cc4c0 002258 00  WA  0   0 32
  [12] .zdebug_abbrev     PROGBITS  0000000000150000 0c0000 000112 00      0   0  8
  [13] .zdebug_line       PROGBITS  0000000000150112 0c0112 008fb1 00      0   0  8
  [14] .zdebug_frame      PROGBITS  00000000001590c3 0c90c3 0031ac 00      0   0  8
  [15] .zdebug_pubnames   PROGBITS  000000000015c26f 0cc26f 000973 00      0   0  8
  [16] .zdebug_pubtypes   PROGBITS  000000000015cbe2 0ccbe2 001cf1 00      0   0  8
  [17] .debug_gdb_scripts PROGBITS  000000000015e8d3 0ce8d3 00003c 00      0   0  1
  [18] .zdebug_info       PROGBITS  000000000015e90f 0ce90f 015a44 00      0   0  8
  [19] .zdebug_loc        PROGBITS  0000000000174353 0e4353 009793 00      0   0  8
  [20] .zdebug_ranges     PROGBITS  000000000017dae6 0edae6 00309b 00      0   0  8
  [21] .note.go.buildid   NOTE      0000000000087f9c 000f9c 000064 00   A  0   0  4
  [22] .symtab            SYMTAB    0000000000000000 100000 006630 18     23  91  8
  [23] .strtab            STRTAB    0000000000000000 106630 006bf7 00      0   0  1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
  L (link order), O (extra OS processing required), G (group), T (TLS),
  C (compressed), x (unknown), o (OS specific), E (exclude),
  p (processor specific)

Elf file type is EXEC (Executable file)
Entry point 0xc4520
There are 5 program headers, starting at offset 64

Program Headers:
  Type Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
  PHDR 0x000040 0x0000000000087040 0x0000000000087040 0x000118 0x000118 R   0x10000
  NOTE 0x000f9c 0x0000000000087f9c 0x0000000000087f9c 0x000064 0x000064 R   0x4
  LOAD 0x000000 0x0000000000087000 0x0000000000087000 0x0408c0 0x0408c0 R E 0x10000
  LOAD 0x057000 0x00000000000d7000 0x00000000000d7000 0x051e23 0x051e23 R   0x10000
  LOAD 0x0b7000 0x0000000000137000 0x0000000000137000 0xffffffffffffb180 0x017718 RW 0x10000
readelf: Error: the segment's file size is larger than its memory size

 Section to Segment mapping:
  Segment Sections...
   00
   01     .note.go.buildid
   02     .text .note.go.buildid
   03     .typelink .itablink .gosymtab .gopclntab
   04     .noptrbss
```

### What did you see instead?

Note the negative value in the 3rd program header (the program text segment)! The .shstrtab (section header string table) section has been overwritten, so the names are garbage.
```
$ aarch64-elf-readelf -W -l -S bin/xxx_xxx/surprise
There are 24 section headers, starting at offset 0x158:

Section Headers:
  [Nr] Name  Type      Address          Off    Size   ES Flg Lk Inf Al
  [ 0] ^B    NULL      0000000000000000 000000 000000 00      0   0  0
  [ 1]       PROGBITS  0000000000088000 001000 03f8c0 00  AX  0   0  8
  [ 2]       PROGBITS  00000000000d0000 050000 024e79 00   A  0   0 32
  [ 3]       STRTAB    0000000000000000 06de80 000193 00      0   0  1
  [ 4]       PROGBITS  00000000000f5020 075020 000624 00   A  0   0 32
  [ 5]       PROGBITS  00000000000f5648 075648 000008 00   A  0   0  8
  [ 6] ^V    PROGBITS  00000000000f5650 075650 000000 00   A  0   0  1
  [ 7]       PROGBITS  00000000000f5660 075660 0337c3 00   A  0   0 32
  [ 8]       PROGBITS  0000000000130000 0b0000 000ba8 00  WA  0   0 32
  [ 9]       PROGBITS  0000000000130bc0 0b0bc0 0015b0 00  WA  0   0 32
  [10] ^B    NOBITS    0000000000132180 0b2180 01a330 00  WA  0   0 32
  [11]       NOBITS    000000000014c4c0 0cc4c0 002258 00  WA  0   0 32
  [12] ^B    PROGBITS  0000000000150000 0c0000 000112 00      0   0  8
  [13]       PROGBITS  0000000000150112 0c0112 008fb1 00      0   0  8
  [14] ^D    PROGBITS  00000000001590c3 0c90c3 0031ac 00      0   0  8
  [15]       PROGBITS  000000000015c26f 0cc26f 000973 00      0   0  8
  [16] ^E^A  PROGBITS  000000000015cbe2 0ccbe2 001cf1 00      0   0  8
  [17]       PROGBITS  000000000015e8d3 0ce8d3 00003c 00      0   0  1
  [18]       PROGBITS  000000000015e90f 0ce90f 015a44 00      0   0  8
  [19]       PROGBITS  0000000000174353 0e4353 009793 00      0   0  8
  [20]       PROGBITS  000000000017dae6 0edae6 00309b 00      0   0  8
  [21]       NOTE      0000000000087f9c 000f9c 000064 00   A  0   0  4
  [22]       SYMTAB    0000000000000000 100000 006630 18     23  91  8
  [23]       STRTAB    0000000000000000 106630 006bf7 00      0   0  1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
  L (link order), O (extra OS processing required), G (group), T (TLS),
  C (compressed), x (unknown), o (OS specific), E (exclude),
  p (processor specific)

Elf file type is EXEC (Executable file)
Entry point 0xc4520
There are 5 program headers, starting at offset 64

Program Headers:
  Type Offset             VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
  PHDR 0x000040           0x0000000000087040 0x0000000000087040 0x000118 0x000118 R   0x10000
  NOTE 0x000f9c           0x0000000000087f9c 0x0000000000087f9c 0x000064 0x000064 R   0x4
  LOAD 0xffffffffffff9000 0x0000000000080000 0x0000000000080000 0x0478c0 0x0478c0 R E 0x10000
  LOAD 0x050000           0x00000000000d0000 0x00000000000d0000 0x058e23 0x058e23 R   0x10000
  LOAD 0x0b0000           0x0000000000130000 0x0000000000130000 0x002180 0x01e718 RW  0x10000

 Section to Segment mapping:
  Segment Sections...
   00
   01
   02
   03     ^V
   04     ^B
```
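A toy Python model (my reconstruction of the arithmetic, not the actual `elf.go` code) of how a `-T` address that doesn't end in 0x1000 can drive the text segment's file offset negative; stored in the unsigned `p_offset` field, the negative value prints as the huge number seen in the broken dump above.

```python
def text_load_offset(text_addr: int,
                     text_file_off: int = 0x1000,
                     align: int = 0x10000) -> int:
    """Toy model: the .text section sits at file offset 0x1000, and the
    LOAD segment's offset is derived from how far text_addr sits past an
    alignment boundary. When text_addr % align exceeds text_file_off, the
    subtraction goes below zero and wraps as an unsigned 64-bit field."""
    raw = text_file_off - (text_addr % align)
    return raw & 0xFFFFFFFFFFFFFFFF  # what the ELF p_offset field stores
```

With `-T 0x88000` this model reproduces the `0xffffffffffff9000` offset from the dump, while an address ending in 0x1000 yields a sane offset.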
help wanted,NeedsInvestigation,compiler/runtime
low
Critical
371,262,423
go
cmd/compile: add column info to export data
None of the compiler's historical or present export data formats ($$, $$B, indexed) support column information, even though the compiler and the go/{token,ast,types} packages are capable of producing and consuming it. We should add it to the export data file using a suitably efficient encoding. Otherwise, various analysis tools will continue to report either the correct column or column 1, depending on whether they are reporting the position of an object loaded from source or from export data. @ianthehat
NeedsFix,compiler/runtime
medium
Major
371,303,935
pytorch
C++ frontend: how to debug nan gradients
Hi, I am getting very large gradients and then, even with clamping, nan gradients (suddenly all of them). I am surprised because I am porting my working program from the Python backend to the C++ backend. How can I debug this situation? I can't see any register_backward_hook or similar function. Thanks, Slawek
module: cpp,triaged,enhancement
low
Critical
371,309,565
pytorch
Port dragon4_scientific for pretty float tensor print.
## 🚀 Feature

In #12746, we should port dragon4_scientific instead of using fixed-precision printing. https://github.com/numpy/numpy/blob/f36d2d4d3f622f7901e3d5ade13e04fc05062948/numpy/core/src/multiarray/multiarraymodule.c#L3530
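For context (plain CPython, illustration only, not the NumPy/PyTorch code being ported): dragon4-style printing picks the shortest digit string that round-trips to the same float, whereas fixed precision either pads with noise or loses information.

```python
x = 0.1 + 0.2

# Fixed precision shows 17 digits whether they are meaningful or not:
fixed = f"{x:.17e}"   # noisy trailing digits

# Shortest round-trip formatting (what dragon4 provides) stops as soon as
# the digits uniquely identify the float:
short = repr(x)       # '0.30000000000000004'

# The short form still reconstructs the exact same float:
assert float(short) == x
```

This is the behavior users expect from tensor printing: compact output that never misrepresents the stored value.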
module: printing,triaged,enhancement
low
Minor
371,326,341
TypeScript
Rename file by renaming on an aliased path
From https://github.com/Microsoft/vscode/issues/60918

**TypeScript Version:** 3.2.0-dev.20181019

**Search Terms:**
- rename
- path rename

**Code**

For a simple js project:

`jsconfig.json`
```json
{
    "compilerOptions": {
        "module": "commonjs",
        "target": "es2016",
        "jsx": "preserve",
        "checkJs": true,
        "baseUrl": ".",
        "paths": {
            "~/*": [
                "*"
            ]
        }
    },
    "exclude": [
        "node_modules",
        "**/node_modules/*"
    ]
}
```

`main.ts`
```js
import * as foo from '~/src/foo';

console.log(foo);
```

`src/foo.js`
```js
export class Foo { }
```

1. In `main.js`, press F2 to rename `foo` in the import path

**Expected behavior:** This should allow renaming `foo.js`

**Actual behavior:** See error: `you cannot rename a module via a global import`
Bug,Domain: TSServer
low
Critical
371,377,280
godot
Preloaded enums do not automatically update to latest version
**Godot version:** b550f93cfd7862fdf9bf4fc838f2ad04ef89a131

**OS/device including version:** Linux

**Issue description:** Consider the following situation:

```
#scriptA.gd
enum MyEnum{
    ValueA,
    ValueB
}
```

```
#scriptB.gd
const ScriptA := preload("res://scriptA.gd")
export(ScriptA.MyEnum) var myEnum := 0
```

Everything is fine so far; your enum will appear in the inspector, and you can assign it to `ValueA` and `ValueB`. However, if you modify your enum later:

```
enum MyEnum{
    ValueA,
    ValueB,
    ValueC
}
```

`myEnum` can still only be assigned to `ValueA` or `ValueB`; `ValueC` does not appear. To trigger an update, you must do any of the following:

1. Comment out the line that preloads, save, then uncomment and save
2. Comment out the export, save, then uncomment and save
3. Restart the editor

Ideally, saving the enum would trigger a reload for anything referencing it. I assumed this was the case for anything that has been preloaded, but I tried it with a custom resource, and it seemed to show the updated version.
bug,topic:gdscript,topic:editor,confirmed
low
Major
371,388,051
pytorch
tutorial_blob ERROR
OS: Ubuntu 14.04, gcc: 4.8.4, g++: 4.8.4

```
[ 90%] Built target caffe2_pybind11_state
[ 91%] Built target caffe2_observers
[ 92%] Linking CXX executable ../bin/tutorial_blob
/usr/bin/ld: CMakeFiles/tutorial_blob.dir/tutorial_blob.cc.o: undefined reference to symbol '__cxa_allocate_exception@@CXXABI_1.3'
//usr/lib/x86_64-linux-gnu/libstdc++.so.6: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [bin/tutorial_blob] Error 1
make[1]: *** [binaries/CMakeFiles/tutorial_blob.dir/all] Error 2
make: *** [all] Error 2
```

Please give me a hand or a solution, thanks a lot!
caffe2
low
Critical
371,397,431
godot
GridMap items' mesh material data lost after export
**Godot version:** v3.1.alpha.calinou.4c1a5d9

**OS/device including version:** Arch Linux

**Issue description:** Exporting a MeshLib doesn't preserve items' material data. I get this error when trying to export the MeshLib: `Resource was not pre cached for the resource section, bug?` and also this one, though I don't remember exactly when: `drivers/gles3/rasterizer_storage_gles3.cpp:1013 - Condition ' !texture->active ' is true. returned: Ref<Image>()` Perhaps tiles in a gridmap store SpatialMaterial data in the wrong place? Manually editing materials works fine. This is why I think so:

1. I tried manually changing material data via MeshLib item properties in the inspector.
2. It changed successfully; the color was visible.
3. After removing the current MeshLib, exporting a new one, and assigning it to a new GridMap node, item materials are lost as before, EXCEPT the ones that I modified using the previous GridMap/MeshLib item properties: ![imgur-2018_10_18-09 27 45](https://user-images.githubusercontent.com/15695377/47137666-9080ac00-d2b7-11e8-98af-f918ecbe0eb5.png) These are just the same as previously.

**Steps to reproduce:**

1. Create a MeshLib scene,
2. Add MeshInstances with material data,
3. Export,
4. Assign the MeshLib to a GridMap node,
5. SpatialMaterials are lost

Edit: Materials edited via GridMap item properties disappear after the editor's restart.
enhancement,topic:core,confirmed
low
Critical
371,401,815
godot
Godot recognizes that the file has been deleted outside the editor, but does nothing about it
**Godot version:** 3.1 b550f93

**OS/device including version:** Windows 10, Linux Mint 19

**Issue description:** When I create, for example, a Sprite with a texture from a file, save the scene, and then delete that file outside the editor, the editor doesn't show any warning about missing dependencies. Godot does remove the file from the FileSystem dock, so it must know the file was deleted/moved. After trying to run the game, this error shows in the Output:

> drivers/unix/net_socket_posix.cpp:191 - Socket error: 10054

But when I check the console, these errors occur:

> ERROR: No loader found for resource: res://asf.png
> At: core/io/resource_loader.cpp:192
> ERROR: poll: res://icon.tscn:3 - Parse Error: [ext_resource] referenced nonexistent resource at: res://asf.png
> At: scene/resources/scene_format_text.cpp:439
> ERROR: load: Condition ' err != OK ' is true. returned: RES()
> At: core/io/resource_loader.cpp:155
> ERROR: Failed loading resource: res://icon.tscn
> At: core/io/resource_loader.cpp:192
> ERROR: Failed loading scene: res://icon.tscn
> At: main/main.cpp:1709
> ERROR: _get_socket_error: Socket error: 10054

**Steps to reproduce:** ![jfile](https://user-images.githubusercontent.com/41945903/70704048-8dbdf300-1cd1-11ea-8aa0-044dc6810c72.gif)
bug,topic:editor
low
Critical
371,418,456
pytorch
Pytorch BatchNorm2D Unstable
I was tuning the following custom nn module:

```python
class ConvE(nn.Module):
    def __init__(self, args, num_entities):
        super(ConvE, self).__init__()
        self.entity_dim = args.entity_dim
        self.relation_dim = args.relation_dim
        assert(args.emb_2D_d1 * args.emb_2D_d2 == args.entity_dim)
        assert(args.emb_2D_d1 * args.emb_2D_d2 == args.relation_dim)
        self.emb_2D_d1 = args.emb_2D_d1
        self.emb_2D_d2 = args.emb_2D_d2
        self.num_out_channels = args.num_out_channels
        self.w_d = args.kernel_size
        self.HiddenDropout = nn.Dropout(args.hidden_dropout_rate)
        self.FeatureDropout = nn.Dropout(args.feat_dropout_rate)
        # stride = 1, padding = 0, dilation = 1, groups = 1
        self.conv1 = nn.Conv2d(1, self.num_out_channels, (self.w_d, self.w_d), 1, 0)
        self.bn0 = nn.BatchNorm2d(1)
        self.bn1 = nn.BatchNorm2d(self.num_out_channels)
        self.bn2 = nn.BatchNorm1d(self.entity_dim)
        self.register_parameter('b', nn.Parameter(torch.zeros(num_entities)))
        h_out = 2 * self.emb_2D_d1 - self.w_d + 1
        w_out = self.emb_2D_d2 - self.w_d + 1
        self.feat_dim = self.num_out_channels * h_out * w_out
        self.fc = nn.Linear(self.feat_dim, self.entity_dim)

    def forward(self, e1, r, kg):
        E1 = kg.get_entity_embeddings(e1).view(-1, 1, self.emb_2D_d1, self.emb_2D_d2)
        R = kg.get_relation_embeddings(r).view(-1, 1, self.emb_2D_d1, self.emb_2D_d2)
        E2 = kg.get_all_entity_embeddings()
        stacked_inputs = torch.cat([E1, R], 2)
        stacked_inputs = self.bn0(stacked_inputs)
        X = self.conv1(stacked_inputs)
        X = self.bn1(X)  # <= problem line
        X = F.relu(X)
        X = self.FeatureDropout(X)
        X = X.view(-1, self.feat_dim)
        X = self.fc(X)
        X = self.HiddenDropout(X)
        X = self.bn2(X)
        X = F.relu(X)
        X = torch.mm(X, E2.transpose(1, 0))
        X += self.b.expand_as(X)
        S = F.sigmoid(X)
        return S
```

If I include the "problem line", the model's performance becomes very unstable and the loss declines slowly. If I comment out the line, performance is normal.
The strange part is that it used to be fine to include this line in my code; the problem only caught my attention after I switched to 0.4.0+. I wonder if it is caused by some change in the BatchNorm2d operator. Below, the pink curve is the result obtained with the problem line and the orange curve is obtained without it. ![screen shot 2018-10-18 at 1 32 19 am](https://user-images.githubusercontent.com/4227871/47141498-af5e4e80-d275-11e8-8a90-356829ce8768.png) The experiment code repo can be found here: https://github.com/salesforce/MultiHopKG/blob/master/src/emb/fact_network.py#L131
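For reference, a pure-Python sketch of the per-channel statistics that the problem line computes in training mode (illustration only, not the ATen/cuDNN kernel; affine weight/bias and running stats are omitted): because normalization divides by the *current batch's* variance, small or highly correlated batches give noisy statistics, which is one common source of the kind of instability described above.

```python
import statistics

def batch_norm_1ch(values, eps=1e-5):
    # Normalize one channel's values by the batch mean and (biased)
    # variance, as BatchNorm2d does per channel during training.
    mean = statistics.fmean(values)
    var = statistics.pvariance(values, mu=mean)
    return [(v - mean) / (var + eps) ** 0.5 for v in values]
```

For example, `batch_norm_1ch([1.0, 3.0])` gives roughly `[-1.0, 1.0]`; shift or shrink the batch and the same input values normalize very differently.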
triaged,module: norms and normalization
low
Major
371,447,191
godot
Godot reimporting file after changing its name inside editor
**Godot version:** 3.1 b550f93

**OS/device including version:** Windows 10

**Issue description:** When I rename an asset inside the editor, the file is reimported instead of just renaming the files in .import. In the .import folder I have 4 files: ![aaaaaaa](https://user-images.githubusercontent.com/41945903/47143990-b9f50400-d2c6-11e8-93a1-0ce50a1aafb4.png) The renamed file has a different name in .import, but its md5 file contains the same source hash as the original:

> source_md5="43733c76d1d3d2c3f44a9548970ee53c" dest_md5="3e65b9d26c534f673719d0ae2210f1ab"

**Steps to reproduce:** ![jfile](https://user-images.githubusercontent.com/41945903/70703894-46376700-1cd1-11ea-863c-37d97a3b5fcc.gif)
enhancement,topic:editor
low
Minor
371,485,538
TypeScript
Subclass method is not allowed (TS2425) if the parent class has a property of type any with the same name
TypeScript appears to treat class methods as something fundamentally different from a class prototype property of type function. This is not in agreement with the standard semantics of ES6.

**TypeScript Version:** all versions, as far as I have been able to test.

**Search Terms:** method member property any parent class subclass

**Code**

```ts
class Super {
    prop: any;
}

class Derived extends Super {
    prop() {}
}
```

**Expected behavior:** should compile without a problem. Note that by the ES6 standard, the above definition of `Derived` is equivalent to the following, which TypeScript compiles without a problem:

```ts
class Derived extends Super {}
Derived.prototype.prop = function(){};
```

**Actual behavior:**

```console
Error TS2425: Class 'Super' defines instance member property 'prop', but extended class 'Derived' defines it as instance member function.
```

**Playground Link:** https://www.typescriptlang.org/play/#src=class%20Super%20%7B%0D%0A%20%20%20%20prop%3A%20any%3B%0D%0A%7D%0D%0A%0D%0Aclass%20Derived%20extends%20Super%20%7B%0D%0A%20%20%20%20prop()%20%7B%7D%0D%0A%7D%0D%0A

**Related Issues:** I didn't find similar issues, although I started by looking for issues requesting the ability to disable particular errors (TS2425 in this case). There are several (closed) issues in which such an ability is requested or suggested, but I believe the proper solution would be to bring TypeScript in agreement with ES6. After all, it claims to be a superset!
Suggestion,Awaiting More Feedback
medium
Critical
371,498,665
pytorch
Initialization error when moving data to the GPU
I want to move my data to the GPU in **collate_fn** because it would be done in parallel, and **moving the data to the GPU** is my main **bottleneck**. But I get the following error message. How can I make this work? I'm looking for a way to transfer my data in parallel so **I don't have to wait for every mini batch**.

```
RuntimeError: Traceback (most recent call last):
  File "/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "<ipython-input-18-8366151b8664>", line 17, in collate_fn
    data = data.to(torch.device('cuda'))
RuntimeError: cuda runtime error (3) : initialization error at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/generic/THCStorage.c:60
```
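The usual workaround, sketched below in plain Python with the torch-specific calls elided (`to_device` stands in for something like `batch.to('cuda', non_blocking=True)`; this structure is a common pattern, not an official API): keep `collate_fn` on the CPU, since forked DataLoader workers cannot initialize CUDA, and do the single device copy in the main process where the CUDA context lives.

```python
def gpu_feed(batches, to_device):
    """Consume CPU-side batches produced by the worker processes and
    perform the device transfer at one point, in the main process."""
    for batch in batches:
        yield to_device(batch)  # e.g. batch.to('cuda', non_blocking=True)
```

Combined with pinned host memory, the `non_blocking` copy can overlap with compute, which recovers most of the parallelism the worker-side transfer was after.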
module: cuda,triaged
low
Critical
371,577,084
material-ui
[MobileStepper] Way to access a custom slide number by clicking on one of the dots
I would like to create a stepper similar to the "Mobile with carousel effect" one on the [Steppers demo page](https://material-ui.com/demos/steppers/#mobile-stepper---text-with-carousel-effect), but where clicking on each of the dots would actually make the slide update accordingly. I didn't find an option to do that in the [API](https://material-ui.com/api/mobile-stepper/), did I miss something or do I need to duplicate the MobileStepper and add this behavior manually? Thanks in advance!
new feature,component: stepper
low
Major
371,617,822
go
net/http: no way to get server Handler's write error to client (sometimes)
### What version of Go are you using (`go version`)?

1.11

### Does this issue reproduce with the latest release?

Yes

### What operating system and processor architecture are you using (`go env`)?

linux/amd64

### What did you do?

I wrote an http response after the client socket was closed. https://play.golang.org/p/upytSYnrYmL

### What did you expect to see?

I expected to be able to get the write error somehow.

### What did you see instead?

There does not seem to be any way to get the write error, even after flushing.

More background: The use case is we had an http handler that took too long to respond. After figuring out what happened, we wanted to make sure our http server logged all errors writing responses so it would be easy to see this happening in the future. In our particular case, we weren't actually writing a body (we only called `WriteHeader`). I was hoping I could do something like this in a middleware to log response errors in a generic way:

```
nextHandler(w, r)
if flusher, ok := w.(http.Flusher); ok {
	flusher.Flush()
	if _, err := w.Write(nil); err != nil {
		// log "err" for posterity
	}
}
```

Perhaps this could be achieved by having `http.response.write` return `conn.werr` if it is set? It is worth mentioning that the request context is cancelled in this case, so that could be used to detect that the client is gone. However, I expected there to be some way to get the actual `Write` error.
NeedsDecision,FeatureRequest
low
Critical
371,644,768
pytorch
BUILD_BINARY is a lie
## ๐Ÿ› Bug I expect `BUILD_BINARY=0` to not build any binaries. It still builds lots of binaries. ## To Reproduce Steps to reproduce the behavior: 1. `BUILD_BINARY=0 python setup.py build_deps develop` 2. `ls build/bin` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ``` (/scratch/ezyang/pytorch-build-env) ezyang@devfair040:/scratch/ezyang/pytorch-build$ ls build/bin AlgorithmsTest common_subexpression_elimination_test event_test net_gpu_test string_ops_test apply_test common_test fatal_signal_asan_no_sig_test net_test SubgraphMatcherTest apply_utils_test context_gpu_test fixed_divisor_test NeuralNetTest TarjansImplTest atest context_test generate_proposals_op_test nnpack_test tbb_init_test backend_cutting_test converter_nomigraph_test generate_proposals_op_util_boxes_test observer_test test_api basic conv_op_cache_cudnn_test generate_proposals_op_util_nms_test operator_fallback_gpu_test test_jit batch_matmul_op_gpu_test conv_to_nnpack_transform_test graph_test operator_gpu_test test_parallel batch_matmul_op_test conv_transpose_op_mobile_test GraphTest operator_schema_test text_file_reader_utils_test BinaryMatchImplTest cpuid_test half_test operator_test time_observer_test blob_gpu_test cuda_half_test init_test parallel_net_test timer_test blob_test cuda_optional_test integer_divider_test pattern_net_transform_test transform_test boolean_unmask_ops_test cuda_packedtensoraccessor_test js_embed predictor_test typeid_test broadcast_test cuda_rng_test logging_test protoc undefined_tensor_test c10_Array_test cudnn_test MatchTest proto_utils_test utility_ops_gpu_test c10_flags_test dead_code_elim_test math_gpu_test reshape_op_gpu_test utility_ops_test c10_Metaprogramming_test depthwise3x3_conv_op_test math_test roi_align_op_gpu_test verify_api_visibility c10_registry_test device_test mobile_test scalar_tensor_test weakref_test c10_TypeList_test dispatch_test module_test scalar_test workspace_test c10_TypeTraits_test 
distributed_test mpi_gpu_test simple_queue_test wrapdim_test c10_utils_cpu_test dlconvertor_test mpi_test smart_tensor_printer_test c10_utils_gpu_test elementwise_op_gpu_test native_test ssa_test c10_utils_hip_test elementwise_op_test net_async_tracing_test stats_test cast_test event_gpu_test net_dag_utils_test stream_test ``` ## Expected behavior Nothing <!-- A clear and concise description of what you expected to happen. --> ## Environment This is on a devfair. cc @malfet @seemethere @walterddr
module: build,triaged
low
Critical
371,657,632
node
fs.lchmod opens file with O_WRONLY unnecessarily, fails on directories and non-writable files
* **Version**: v10.12.0 * **Platform**: Darwin geegaw.local 18.0.0 Darwin Kernel Version 18.0.0: Wed Aug 22 20:13:40 PDT 2018; root:xnu-4903.201.2~1/RELEASE_X86_64 x86_64 * **Subsystem**: fs https://github.com/isaacs/chmodr/pull/20 Node's fs.lchmod implementation opens the file in write-only mode. (On Darwin, at least.) This is unnecessary, and fails for read-only files. Additionally, this approach (open and then use fchmod) fails for directories. Is there a reason why native lchmod isn't being used? Systems that have `O_SYMLINK` also have `lchmod(3)`, don't they? This is important because lchmod is the only way to avoid a (minor) security vulnerability when doing recursive mode setting on directories. If we have to restrict the use of lchmod to only symlinks, then we're back in TOCTOU territory.
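For comparison, CPython exposes the same capability through `os.chmod(..., follow_symlinks=False)`, and it is likewise platform-dependent. A small probe (the function name is mine) that reports unsupported rather than falling back to the open()+fchmod() pattern the issue describes:

```python
import os

def lchmod_if_supported(path: str, mode: int) -> bool:
    """Change a path's mode without following symlinks, where the platform
    supports it (Darwin, with lchmod(3), does; many Linux configurations
    don't). Returns False instead of falling back to opening the file,
    which is the write-only/TOCTOU-prone approach criticized above."""
    if os.chmod in os.supports_follow_symlinks:
        os.chmod(path, mode, follow_symlinks=False)
        return True
    return False
```

Detecting support up front, as here, is one way an fs.lchmod implementation could avoid opening directories and read-only files with O_WRONLY.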
help wanted,fs
low
Minor
371,664,515
pytorch
Test that (cd build && ninja) immediately after build is no-op in CI
## ๐Ÿ› Bug If I do a build like `python setup.py build_deps develop`, and then I say `(cd build && ninja)`, this should NOT run cmake, and it should be no-op. This is not currently true. Once we fix it, we should add a test for it so it doesn't regress. cc @malfet @seemethere @walterddr @ezyang @pytorch/pytorch-dev-infra
module: build,module: ci,triaged
low
Critical
371,682,700
electron
Feature request: get back selected file format from showSaveDialog
It's great that https://github.com/electron/electron/issues/10335 is now implemented. As a follow-up feature, is there also a way to get the selected file format back, e.g. in the callback? There is no way to tell what the user selected if two formats have the same extension. If I have, say:

```js
dialog.showSaveDialog({filters: [
  { name: 'EPUB 3', extensions: ['epub'] },
  { name: 'EPUB 2', extensions: ['epub'] }
]}, function(fileName, bookmark, fileFormatName){
  console.log(fileFormatName)
});
```

Actual: `undefined`

Desired: `EPUB 3`

Thanks for considering.
enhancement :sparkles:
low
Major
371,705,999
go
x/mobile: Memory crash in iOS devices
Please answer these questions before submitting your issue. Thanks!

### What version of Go are you using (`go version`)?

go1.10.3 darwin/amd64

### Does this issue reproduce with the latest release?

Yes

### What operating system and processor architecture are you using (`go env`)?

GOARCH="amd64" GOOS="darwin"

### What did you do?

```go
key, err := scrypt.Key(password, salt, 262144, 8, 1, keyLen)
if err != nil {
	return
}
debug.FreeOSMemory()
```

### What did you expect to see?

A successful run

### What did you see instead?

A crash: https://bpaste.net/show/308796b08e4c

The same code runs fine on android but crashes on iOS.
NeedsInvestigation,mobile
low
Critical
371,744,456
youtube-dl
[SoundCloud] Add support to download 256kbps AAC from SoundCloud Go+ accounts
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.10.05*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.10.05** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones - [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [x] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### Description of your *issue*, suggested solution and other information [As of October 17, 2018](https://www.engadget.com/2018/10/17/soundcloud-brings-lossless-streaming-go-subscribers/), users of SoundCloud Go+ can stream all audio in 256kbps AAC. Once you have upgraded to Go+, you opt-in your account to the feature by visiting Settings > Streaming > High quality audio. Only accounts with Go+ enabled are eligible to use this feature. A 30-day free trial is available. 
I request that functionality be added to allow downloading user-uploaded content at this higher bitrate. Current youtube-dl users are limited to 128kbps MP3 or 64kbps Opus. Thanks for your continued hard work.
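The selection logic being requested amounts to preferring the highest-bitrate transcoding an account can stream (Python sketch; the field names are mine, not SoundCloud's API, and the bitrates are the ones from this report):

```python
# Transcodings as reported in this issue; a Go+ account with "high quality
# audio" enabled additionally sees the 256kbps AAC stream.
FORMATS = [
    {"codec": "mp3",  "abr": 128},  # available to all accounts
    {"codec": "opus", "abr": 64},   # available to all accounts
    {"codec": "aac",  "abr": 256},  # Go+ opt-in only
]

def pick_best(formats):
    # youtube-dl-style default preference: highest audio bitrate wins.
    return max(formats, key=lambda f: f["abr"])
```

With the Go+ stream present, `pick_best` would select the 256kbps AAC format instead of the current 128kbps MP3 ceiling.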
account-needed
low
Critical
371,777,804
TypeScript
Overloads do not emit JSDoc on non-implementation signatures
**TypeScript Version:** 3.2.0-dev.201xxxxx

**Search Terms:**

**Code**

> .ts file

```ts
/**
 * this will be lost
 */
function a()
function a() { }

/**
 * this will be kept
 */
function b() { }

function c()
/**
 * this will be kept, but I don't think this is a good way
 */
function c() { }
```

**Expected behavior:** all comments are kept

```ts
/**
 * this will be lost
 */
function a() { }

/**
 * this will be kept
 */
function b() { }

/**
 * this will be kept, but I don't think this is a good way
 */
function c() { }
```

**Actual behavior:** the comment on `a` is lost

> output .js file

```ts
function a() { }
/**
 * this will be kept
 */
function b() { }
/**
 * this will be kept, but I don't think this is a good way
 */
function c() { }
```

----------------

> And when emitting a .d.ts file:
* the comment on `a` is kept
* the comment on `b` is kept too
* but the comment on `c` is lost

**Playground Link:**
http://www.typescriptlang.org/play/index.html#src=%2F**%0D%0A%20*%20this%20will%20lost%0D%0A%20*%2F%0D%0Afunction%20a()%0D%0Afunction%20a()%20%7B%20%7D%0D%0A%0D%0A%2F**%0D%0A%20*%20this%20will%20keep%0D%0A%20*%2F%0D%0Afunction%20b()%20%7B%20%7D%0D%0A%0D%0Afunction%20c()%0D%0A%2F**%0D%0A%20*%20this%20will%20keep%2C%20but%20i%20don%27t%20think%20this%20is%20good%20way%0D%0A%20*%2F%0D%0Afunction%20c()%20%7B%20%7D **Related Issues:** <!-- Did you find other bugs that looked similar? -->
Suggestion,Domain: Comment Emit,Experience Enhancement
low
Critical
371,782,845
pytorch
Model runs much slower with Caffe2 than with PyTorch in GPU mode
First, I run my model (a CRNN model) on PyTorch with the GPU, using this code:

```python
directory = os.path.dirname(os.path.realpath(__file__))
model_file = os.path.join(directory, './pth/netCRNN_epoch17_idx78105_step2733675.pth')
keys = new_key.KEY_4498
model = Resnet18(False, num_classes=len(keys) + 2)
state_dict = torch.load(model_file)
model.load_state_dict(state_dict)
model.cuda()
model.train(False)

begin = time.time()
tol_time = 0
for i in range(5000):
    x = torch.randn(1, 3, 32, 512, requires_grad=False).cuda()
    begin = time.time()
    out = model(x)
    end = time.time()
    tol_time += (end - begin)
print('cost time:', tol_time / 5000)
```

**Result: cost time: 0.002339872884750366**

Second, I convert my PyTorch model to ONNX and then convert the ONNX model to Caffe2 pb files, using this code:

```python
directory = os.path.dirname(os.path.realpath(__file__))
model_file = os.path.join(directory, './pth/netCRNN_epoch17_idx78105_step2733675.pth')
keys = new_key.KEY_4498
model = Resnet18(False, num_classes=len(keys) + 2)
state_dict = torch.load(model_file)
model.load_state_dict(state_dict)
model.cuda()
model.train(False)

x = torch.randn(1, 3, 32, 512, requires_grad=False).cuda()

# Export the model
torch_out = torch.onnx._export(model,           # model being run
                               x,               # model input (or a tuple for multiple inputs)
                               "ocr_api.onnx",  # where to save the model (can be a file or file-like object)
                               export_params=True)
```

and

```
convert-onnx-to-caffe2 ocr_api.onnx --output predict_net.pb --init-net-output init_net.pb
```

Third, I run the model in Caffe2 on the GPU, using this code:

```python
workspace.ResetWorkspace()
device_opts = core.DeviceOption(caffe2_pb2.CUDA, 0)
input_data = np.random.rand(1, 3, 32, 128).astype(np.float32)  # NCHW

init_def = caffe2_pb2.NetDef()
with open('init_net.pb', 'rb') as f:
    init_def.ParseFromString(f.read())
    init_def.device_option.CopyFrom(device_opts)
    workspace.RunNetOnce(init_def)
    # workspace.RunNetOnce(init_def.SerializeToString())

net_def = caffe2_pb2.NetDef()
with open('predict_net.pb', 'rb') as f:
    net_def.ParseFromString(f.read())
    net_def.device_option.CopyFrom(device_opts)
    input_name = net_def.external_input[0]
    workspace.FeedBlob(input_name, input_data, device_opts)
    workspace.CreateNet(net_def)
    # workspace.CreateNet(net_def.SerializeToString())

name = net_def.name
output_name = net_def.external_output[-1]
input_name = net_def.external_input[0]
print(name, input_name, output_name)

num_iters = 5000
tol_time = 0
for i in range(num_iters):
    input_data = np.random.rand(1, 3, 32, 512).astype(np.float32)  # NCHW
    start = time.time()
    workspace.FeedBlob(input_name, input_data, device_opts)
    workspace.RunNet(name, 1)
    results = workspace.FetchBlob(output_name)
    end = time.time()
    tol_time += (end - start)
print('Run time per RunNet: {}'.format(tol_time / num_iters))
```

**Run time per RunNet: 0.005980247449874878**

**But this is much slower than PyTorch. How strange!**

Does anyone know what is happening?
caffe2
low
Major
371,784,816
vue
'inject' Properties are not added to the CombinedVueInstance type definition
### Version
2.5.17

### Reproduction link
[https://codesandbox.io/s/rr18r3vm9p](https://codesandbox.io/s/rr18r3vm9p)

### Steps to reproduce

1. Copy the minimal reproduction link into a local environment, and run the webpack compilation process.

**OR**

1. Initialize a Vue vm

```typescript
let vm = new Vue({
  el: "#app",
  render: (h: any) => h(someComponent, {}),
  provide: {
    service: {
      something: "Hello, World"
    }
  }
});
```

2. Try to access `service` in an SFC

```typescript
export default Vue.extend({
  name: "someComponent",
  inject: ["service"],
  data() {
    return {
      accessService: this.service.something // Property 'service' does not exist on type CombinedVueInstance...
    }
  }
});
```

### What is expected?

When declaring injections in a component in TypeScript, you should be able to access the injection with `this.injection`.

### What is actually happening?

When accessing an injection in a Vue single file component, the webpack compilation process throws an error stating that the injection `Property 'injection' does not exist on type 'CombinedVueInstance<Vue...`

---

**Please note** that the link to the minimal reproduction won't show the error logs from the webpack compile, as it will compile successfully, but with errors. This will need to be tested in a local environment to see what is happening.

As this is in TypeScript, we're currently using webpack to compile it to a single file, and then use this in our application. The compilation will complete successfully, but will print multiple errors to the console after compiling, about not being able to access properties, etc. When running in the browser it works successfully.

We've dug around in the `vue/types` folder, and to the best of our knowledge think that `inject` should be part of the `type DataDef` or something of this sort.

Is there possibly a temporary workaround that we can use to avoid having these errors, until a fixed release is proposed?
typescript
low
Critical
371,806,948
flutter
VideoProgressIndicator does not give correct buffer values
VideoProgressIndicator cannot display buffer progress correctly. When I read the value of `controller.value.buffered[0].end.inMilliseconds`, it returns a value between 0 and 100, which is just the playback progress value that is already shown.
c: new feature,a: video,p: video_player,package,team-ecosystem,P2,triaged-ecosystem
low
Minor
371,817,901
opencv
Possible mistake in the source code of stereosgbm.cpp
I have read the source code of stereosgbm.cpp in OpenCV 2.4.13 and the related articles. I think there may be a mistake in the source code.

The 8 directions around a particular pixel X:

```
1 2 3
0 X 4
7 6 5
```

The code uses dynamic programming to aggregate the BT cost after the SAD window. First, it scans the index x from 0 to width-1 for directions {0, 1, 2, 3}, as shown above. But then it scans the index from width-1 down to 0 for direction 4. The code looks like this:

```
for( x = width1 - 1; x >= 0; x-- )
{
    CostType* Sp = S + x*D;
    int minS = MAX_COST, bestDisp = -1;
    if( npasses == 1 )
    {
        int xm = x*NR2, xd = xm*D2;
        int minL0 = MAX_COST;
        int delta0 = minLr[0][xm + NR2] + P2;
        CostType* Lr_p0 = Lr[0] + xd + NRD2;
        Lr_p0[-1] = Lr_p0[D] = MAX_COST;
        CostType* Lr_p = Lr[0] + xd;
        const CostType* Cp = C + x*D;
        .......
        {
            for( d = 0; d < D; d++ )
            {
                int L0 = Cp[d] + min((int)Lr_p0[d], min(Lr_p0[d-1] + P1, min(Lr_p0[d+1] + P1, delta0))) - delta0;
                Lr_p[d] = (CostType)L0;
                minL0 = min(minL0, L0);
                int Sval = Sp[d] = saturate_cast<CostType>(Sp[d] + L0);
                if( Sval < minS )
                {
                    minS = Sval;
                    bestDisp = d;
                }
            }
            minLr[0][xm] = (CostType)minL0;
        }
        ......
    }
```

I think the correct expressions should be:

```
int minL4 = MAX_COST;
int delta4 = minLr[0][xm + NR2 + 4] + P2;
CostType* Lr_p4 = Lr[0] + xd + NRD2 + D2 * 4;
```

and

```
minLr[0][xm + 4] = (CostType)minL4;
```

where the number '4' indicates direction 4 as shown above. Has anyone else found this mistake?
incomplete
low
Minor
371,855,181
vscode
Show Timestamp while Debugging
Greetings,

I am debugging my JavaScript files in VS Code with the Chrome Debugger extension. I noticed that I can't enable timestamps in the Debug Console in VS Code. Is it possible to show timestamps like in Chrome DevTools? Or is this a feature request for the Chrome Debugger extension?

**No timestamps:**

![vscode_notimestamps](https://user-images.githubusercontent.com/24226141/47206346-b9bd3d00-d388-11e8-8d61-85e7d44fa8bf.png)

**Want timestamps:**

![inkedchromedevtools_withtimestamps_li](https://user-images.githubusercontent.com/24226141/47206551-36e8b200-d389-11e8-88a4-aa14a187d956.jpg)
feature-request,debug,debug-console
medium
Critical
371,876,608
pytorch
[C++ API] Increase of memory usage when exporting an Adam optimizer
## 🐛 Bug

Similar to bug #12284, but with an instance of the Adam optimizer. The stored file and the GPU memory usage grow very quickly.

## Expected behavior

No memory increase.

## Environment

```
Collecting environment information...
PyTorch version: 1.0.0a0
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 16.04.5 LTS
GCC version: (GCC) 8.2.0
CMake version: version 3.12.0

Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 396.54
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a

Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```

## Additional context

Note that I have the patch for bug #12284. This seems to be a similar but distinct bug.

cc @yf225 @glaringlee @vincentqb
module: cpp,module: optimizer,module: memory usage,triaged
low
Critical
371,880,507
godot
Project.godot and exported class names with multiple repositories.
**Godot version:** `328679fd`

**Issue description:**

Preface: I am working with multiple git repositories, a main one and others that live in subfolders.

The problem I am facing right now is that when someone does not have one of the submodules, project.godot is edited and the paths to the scripts that export a class name are deleted. It is not a big problem (since they are regenerated when I open the project), but I am basically ping-ponging the project.godot file between me, who has all the submodules installed, and another person who only has one of them. Checking out branches then also requires resetting the project.godot file.

**Proposal:** Save class names in a separate file that can be handled on its own.
enhancement,discussion,topic:editor
low
Minor
371,915,901
pytorch
caffe2: Unsupported type of tensor: nullptr (uninitialized)
## 🐛 Bug

After exporting a basic RNN model to Caffe2, running it results in an assertion error thrown in `operator.h`: `Unsupported type of tensor: nullptr (uninitialized)`

## To Reproduce

Steps to reproduce the behavior:

Created a gist here: https://gist.github.com/Nimitz14/7a17707d570f4003b5f93ce2fafecfcd

Change the filepaths starting with `/work/case` to run.

The error:

```
starting
terminate called after throwing an instance of 'c10::Error'
  what():  [enforce fail at operator.h:791] . Unsupported type of tensor: nullptr (uninitialized)Error from operator:
input: "3" input: "0" output: "10" name: "" type: "Gather" device_option { device_type: 0 device_id: 0 }
Aborted (core dumped)
```

## Expected behavior

To run and print the model outputs.

## Environment

```
PyTorch version: 1.0.0a0+3123206
Is debug build: No
CUDA used to build PyTorch: 9.1.85

OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.11.0

Python version: 3.5
```

built with `python3 setup.py install`
caffe2
low
Critical
371,926,132
TypeScript
Can't use type aliases or conditional types resolving to Promise for async/await return types
**TypeScript Version:** 3.1

**Search Terms:**
- async promise alias
- async promise conditional

**Code**

```ts
type P<T> = Promise<T>

async function foo<T>(x: T): P<T> {
    return x;
}

async function bar<T>(x: T): T extends number ? Promise<number> : Promise<any> {
    return x;
}
```

**Expected behavior:**

No error.

**Actual behavior:**

Type 'P' is not a valid async function return type in ES5/ES3 because it does not refer to a Promise-compatible constructor value.
Type 'T extends number ? Promise<number> : Promise<any>' is not a valid async function return type in ES5/ES3 because it does not refer to a Promise-compatible constructor value.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> [Link](http://www.typescriptlang.org/play/index.html#src=type%20P<T>%20%3D%20Promise<T>async%20function%20foo<T>(x%3A%20T)%3A%20P<T>%20{%20%20%20%20return%20x%3B}async%20function%20bar<T>(x%3A%20T)%3A%20T%20extends%20number%20%3F%20Promise<number>%20%3A%20Promise<any>%20{%20%20%20%20return%20x%3B}) **Related Issues:**
Suggestion,Awaiting More Feedback
low
Critical
371,948,254
godot
Kinematic body does not move when exactly in between two static bodies
Godot 3.1 alpha 1 Godot 3.0.6 As reported in this Q&A post: https://godotengine.org/qa/34661/how-can-i-simulate-no-friction-in-a-kinematic-body-2d If I exactly place a box Kinematic Body 2D aligned horizontally between two static boxes (using pixel and grid snapping), that body will not be able to move vertically with `move_and_slide`. If I take out one of the static boxes, it can move. Under the same situation, a rigidbody is able to move, but the kinematic one isn't. Example project: [KinematicBoxBetweenTwoOthers.zip](https://github.com/godotengine/godot/files/2495904/KinematicBoxBetweenTwoOthers.zip)
bug,documentation,topic:physics
low
Minor
371,961,949
youtube-dl
Funkwhale
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.10.05*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.10.05**

### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser

### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other

---

### Purpose of this *issue* is a *site support request*

**[Funkwhale](https://funkwhale.audio/)**

- Single audio: `https://funkwhale.mastodon.host/library/tracks/4821`
- Album playlist: `https://funkwhale.mastodon.host/library/albums/1120`
- Artist playlist: `https://funkwhale.mastodon.host/library/artists/845`

`youtube-dl` falls back to generic and fails.

---

### Description of your *issue*, suggested solution and other information

Related to #16301, as PeerTube is also a project from the [Fediverse](https://fediverse.party/).
account-needed
low
Critical
371,984,359
pytorch
Massive initial memory overhead GPU
## ๐Ÿ› Bug There is a huge RAM overhead for using the GPU even for processing small tensors. Here's a standalone script: ``` python # test.py import torch import argparse parser = argparse.ArgumentParser() parser.add_argument('size', type=int) args = parser.parse_args() torch.set_grad_enabled(False) device = 'cuda' if torch.cuda.is_available() else 'cpu' model = torch.nn.Conv2d(1, 1, 1).to(device) x = torch.rand(1, 1, args.size, args.size).to(device) y = model(x) ``` Recording using [GNU time](https://www.gnu.org/software/time/): ``` shell $ /usr/bin/time -v python test.py 100 Command being timed: "python test.py 100" User time (seconds): 0.26 System time (seconds): 0.03 Percent of CPU this job got: 114% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.26 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 1904088 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 16238 Voluntary context switches: 40 Involuntary context switches: 19 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 ``` The line to pay attention here is: **Maximum resident set size (kbytes): 1904088**. It takes roughly 2 GB of RAM in order to simply *use the GPU* to process a 100x100 image. 
In contrast, doing the same for CPU: ``` shell $ CUDA_VISIBLE_DEVICES='' /usr/bin/time -v python test.py 100 Command being timed: "python test.py 100" User time (seconds): 0.29 System time (seconds): 0.04 Percent of CPU this job got: 116% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.29 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 149352 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 16432 Voluntary context switches: 39 Involuntary context switches: 19 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 ``` takes only ~150 MB. Using the following script, I constructed a plot of RAM usage vs image size: ``` perl # test.pl foreach my $device ('', 0) { foreach (1..30) { $_ *= 100; my @outs = split /\n/, `CUDA_VISIBLE_DEVICES=$device /usr/bin/time -v python test.py $_ 2>&1`; foreach (@outs) { print $1 / 1024 . ",\n" if m/Maximum resident.*?(\d+)/ } } } ``` Running `perl test.pl` produces 60 lines of output; the first 30 are for CPU, the second 30 are GPU. 
Plotting these yields: ![memory use](https://raw.githubusercontent.com/davidmascharka/davidmascharka.com/master/img.png?token=AFWhVkGfeKu__pRyB5eke9RRz_U1G9gCks5b0yXLwA%3D%3D) The numbers produced on my machine are as follows: ``` # CPU 145.5234375, 145.90234375, 145.609375, 145.43359375, 145.56640625, 145.33984375, 145.51171875, 145.3359375, 146.34375, 149.0078125, 150.75, 153.23046875, 156.47265625, 159.55859375, 162.66796875, 166.2734375, 170.31640625, 173.98046875, 178.40234375, 183.2109375, 187.625, 192.75390625, 197.88671875, 202.8828125, 209.078125, 214.2578125, 220.86328125, 226.41796875, 233.5078125, 239.9375, # GPU 1859.98828125, 1859.20703125, 1859.90234375, 1861.25, 1862.359375, 1861.1171875, 1859.54296875, 1858.77734375, 1858.9765625, 1863.28125, 1862.94921875, 1859.296875, 1860.77734375, 1861.5625, 1862.75390625, 1859.83984375, 1859.99609375, 1860.80078125, 1860.09375, 1862.703125, 1858.71875, 1858.75, 1860.671875, 1859.6875, 1859.0234375, 1858.921875, 1859.98046875, 1860.04296875, 1859.015625, 1858.77734375, ``` ## Expected behavior Memory usage on the GPU side should not be significantly higher than on the CPU side. Luckily, RAM usage does not grow substantially (as indeed it should not), but the high startup cost is concerning, especially since this is just a 1x1 conv operating on 1-d input. 
## Environment ``` PyTorch version: 1.0.0a0+ff608a9 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 16.04.5 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.12.2 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce GTX 1080 Ti GPU 1: GeForce GTX 1080 Ti GPU 2: GeForce GTX 1080 Ti GPU 3: GeForce GTX 1080 Ti Nvidia driver version: 390.87 cuDNN version: 7.0.3 Versions of relevant libraries: numpy (1.15.2) ``` ## Additional Notes I've observed stranger behavior in the curve on the CPU where for small images the memory consumption grows exponentially up to ~2 GB then drops and grows linearly. I'm attempting to reproduce this behavior in a small, standalone script like the above. cc @ngimel
module: cuda,module: memory usage,triaged
high
Critical
371,995,746
rust
libstd initialization on sparc64-unknown-linux-gnu fails
Trying to run jemallocator tests on `sparc64-unknown-linux-gnu` I run into this error:

```
     Running `qemu-sparc64 /target/sparc64-unknown-linux-gnu/debug/deps/ffi-3572dfd160eda606`
thread '<unnamed>' panicked at 'assertion failed: signal(libc::SIGPIPE, libc::SIG_IGN) != libc::SIG_ERR', libstd/sys/unix/mod.rs:86:9
error: process didn't exit successfully: `qemu-sparc64 /target/sparc64-unknown-linux-gnu/debug/deps/ffi-3572dfd160eda606` (signal: 4, SIGILL: illegal instruction)
```

which points here: https://github.com/rust-lang/rust/blob/master/src/libstd/sys/unix/mod.rs#L86

This check seems to only depend on the `target_os`, so I find it weird that it fails for `sparc64-unknown-linux-gnu` given that it works on other Linux targets.
T-libs-api,O-SPARC,C-bug
low
Critical
372,009,424
godot
Editor: Cannot Load an Image on New ImageTexture of a Sprite.
**Godot version:**

3.1 alpha #bde3e88
Mac OS X, High Sierra

**Issue description:**

Cannot load an image on a new ImageTexture of a Sprite.

![screen shot 2018-10-19 at 11 15 24 am](https://user-images.githubusercontent.com/930478/47227959-0bb69080-d392-11e8-9a0e-11eb0fc00521.png)
discussion
low
Minor
372,042,966
pytorch
Eigen in Caffe2 doesn't produce vectorized instructions
Compiling with: `CFLAGS=-march=native python setup.py build develop` Repro code ``` from caffe2.python import core, workspace test_net = core.Net("layer_norm_test") test_net.LayerNorm(["input"], ["output", "mean", "stddev"], epsilon=1e-5) import numpy as np workspace.FeedBlob('input', np.random.rand(20, 5, 10, 10).astype('f')) workspace.CreateNet(test_net) s = time.time() workspace.RunNet(test_net.Name(), NITER) t_caffe2 = time.time() - s print('caffe2 time per iter', t_caffe2 / NITER * 1000) ``` Performance is 11x slower than internal build with BUCK Verified that vector instructions are not generated with `perf`: ``` caffe2::LayerNormOp<caffe2::CPUContext>::DoRunWithType<float> โ”‚ nop โ”‚7d8: mov %rcx,%rax โ”‚ vmovss (%r8,%rcx,4),%xmm0 1.45 โ”‚ cqto 0.02 โ”‚ idiv %rdi 26.52 โ”‚ mov %rcx,%rax 0.15 โ”‚ add %r10,%rdx 0.03 โ”‚ vsubss (%r12,%rdx,4),%xmm0,%xmm0 6.46 โ”‚ cqto โ”‚ idiv %rsi 29.93 โ”‚ add %r11,%rdx 0.50 โ”‚ vdivss 0x0(%r13,%rdx,4),%xmm0,%xmm0 24.20 โ”‚ vmovss %xmm0,(%r9,%rcx,4) 1.76 โ”‚ add $0x1,%rcx โ”‚ cmp %rcx,%rbx โ”‚ โ†‘ jg 7d8 โ”‚810: add $0x1,%r14 ``` Verified from checking `build/build.ninja` that the flags is passed to the file in question (and built on clean setup): ``` build caffe2/CMakeFiles/caffe2.dir/operators/layer_norm_op.cc.o: CXX_COMPILER__caffe2 ../caffe2/operators/layer_norm_op.cc || cmake_order_depends_target_caffe2 DEFINES = -DCPUINFO_SUPPORTED_PLATFORM=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNX_NAMESPACE=onnx_torch -DUSE_GCC_ATOMICS=1 -D_FILE_OFFSET_BITS=64 -Dcaffe2_EXPORTS DEP_FILE = caffe2/CMakeFiles/caffe2.dir/operators/layer_norm_op.cc.o.d FLAGS = --std=c++11 -march=native -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function 
-Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O2 -fPIC -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -fvisibility=hidden -DCAFFE2_BUILD_MAIN_LIB -O2 -std=gnu++11 INCLUDES = -I../aten/src -I. -I../ -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -I../third_party/protobuf/src -isystem ../cmake/../third_party/googletest/googletest/include -I../cmake/../third_party/benchmark/include -isystem ../cmake/../third_party/eigen -isystem /usr/include/python2.7 -isystem /data/users/dzhulgakov/pytorch-env/lib/python2.7/site-packages/numpy/core/include -isystem ../cmake/../third_party/pybind11/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -Icaffe2/aten/src/TH -I../aten/src/TH -Icaffe2/aten/src -Iaten/src -I../aten/src/THNN -I../aten/src/THCUNN -I../aten/../third_party/catch/single_include -Icaffe2/aten/src/ATen -I../aten/src/ATen/.. -I../caffe2/core/nomnigraph/include -isystem include -I../c10/.. 
-I../third_party/NNPACK/include -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/FP16/include IN_ABS = /data/users/dzhulgakov/pytorch/caffe2/operators/layer_norm_op.cc OBJECT_DIR = caffe2/CMakeFiles/caffe2.dir OBJECT_FILE_DIR = caffe2/CMakeFiles/caffe2.dir/operators TARGET_COMPILE_PDB = caffe2/CMakeFiles/caffe2.dir/ TARGET_PDB = lib/libcaffe2.pdb ``` Machine: ``` lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 24 On-line CPU(s) list: 0-23 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 24 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 94 Model name: Intel Core Processor (Skylake) Stepping: 3 CPU MHz: 2394.452 BogoMIPS: 4788.90 Virtualization: VT-x Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 4096K L3 cache: 16384K NUMA node0 CPU(s): 0-23 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat ``` Discovered as part of #11867
caffe2
low
Critical
372,074,975
TypeScript
TypeScript doesn't narrow out `undefined` after constructor call when callee is `any`
TypeScript doesn't seem to narrow the type of a variable correctly in cases where the constructor being called is typed as `any`. If the type of a variable includes `undefined` in its type signature, it's not narrowed out after a constructor call even though by definition (I think--correct me if I'm wrong) `new fn()` *must* always evaluate to an object if it doesn't throw first.

**TypeScript Version:** 3.1.3

**Search Terms:** constructor narrow

**Code**

```ts
let x: { [x: string]: any } | undefined;
x = new (Object as any)();
// AFAIK `x` can't possibly be undefined after this point
console.log(x.foo); // error: Object is possibly 'undefined'.
```

**Expected behavior:**

No error.

**Actual behavior:**

Compile-time error: `Object is possibly 'undefined'.`

**Playground Link:** http://www.typescriptlang.org/play/#src=let%20x%3A%20%7B%20%5Bx%3A%20string%5D%3A%20any%20%7D%20%7C%20undefined%3B%0D%0Ax%20%3D%20new%20(Object%20as%20any)()%3B%0D%0Aconsole.log(x.foo)%3B%0D%0A
Suggestion,Needs Proposal,Domain: Control Flow
low
Critical
372,083,342
TypeScript
Mapped tuple types iterate over all properties
**TypeScript Version:** 3.2.0-dev.20181019

**Search Terms:** mapped tuples reify

**Code**

```ts
type Foo = ['a', 'b'];
interface Bar
{
	a: string;
	b: number;
}

type Baz = { [K in keyof Foo]: Bar[Foo[K]]; }; // Expected Baz to be [string, number]
```

**Expected behavior:** Baz should be [string, number]

**Actual behavior:** Type '["a", "b"][K]' cannot be used to index type 'Bar'.

**Playground Link:** https://www.typescriptlang.org/play/index.html#src=type%20Foo%20%3D%20%5B'a'%2C%20'b'%5D%3B%0D%0Ainterface%20Bar%0D%0A%7B%0D%0A%09a%3A%20string%3B%0D%0A%09b%3A%20number%3B%0D%0A%7D%0D%0A%0D%0Atype%20Baz%20%3D%20%7B%20%5BK%20in%20keyof%20Foo%5D%3A%20Bar%5BFoo%5BK%5D%5D%3B%20%7D%3B%20%2F%2F%20Expected%20Baz%20to%20be%20%5Bstring%2C%20number%5D%0D%0A%0D%0Atype%20WorkingBaz%20%3D%20%7B%20%5BK%20in%20Exclude%3Ckeyof%20Foo%2C%20keyof%20any%5B%5D%3E%5D%3A%20Foo%5BK%5D%20extends%20keyof%20Bar%20%3F%20Bar%5BFoo%5BK%5D%5D%20%3A%20never%3B%20%7D%20%26%20%7B%20length%3A%20Foo%5B'length'%5D%3B%20%7D%20%26%20any%5B%5D%3B

**Related Issues:** https://github.com/Microsoft/TypeScript/issues/25947

Given the mapped tuple types feature (#25947), I'd expect the code above to work cleanly. However, I still need to do:

```ts
type WorkingBaz = { [K in Exclude<keyof Foo, keyof any[]>]: Foo[K] extends keyof Bar ? Bar[Foo[K]] : never; } & { length: Foo['length']; } & any[];
```

to get an equivalent type. As far as I understand, the "K" in a mapped type over a tuple should iterate only over the numeric keys of that tuple. Therefore, Foo[K] should always be a valid key for Bar...
Bug,Domain: Mapped Types
high
Critical
372,091,952
rust
A codegen option to stub `default_hook` + safe optimizations to `lang_start_internal` shrink the minimum executable size by 60%
The smallest "Hello World" binary in Rust using the normal entry point is roughly: ```rust extern crate libc; use std::alloc::System; #[global_allocator] static ALLOCATOR: System = System; fn main() { const BUF: &'static [u8] = b"Hello World!\n"; unsafe { libc::write(libc::STDOUT_FILENO, BUF.as_ptr() as *const libc::c_void, BUF.len() ) }; } ``` That's a good test of the minimum Rust binary size. Compiled with `opt-level = 's'` `lto = true` on macOS 10.14 with the 17th's nightly, it is: | panic | symbols | size | |----------------------|----------|------| | (default = 'unwind') | | 160K | | (default = 'unwind') | stripped | 121K | | 'abort' | | 133K | | 'abort' | stripped | 101K | However about 60% of that is code that's (usually, see below) not used anywhere else in a program, and could be eliminated by adding a codegen option to disable the default panic hook + making a few safe optimizations to `lang_start_internal`. ### 1. Codegen option to replace std::panicking::default_hook with an empty shim This option would simply cause `std::panicking::default_hook` to be defined as `fn default_hook(_: &PanicInfo) {}`. With that definition, the above program shrinks from 120K -> 79K unstripped, 121K -> 55K striped (62K & 43K with `panic = 'abort'`). Decompilation indicates everything eliminated is backtrace/dsym/dwarf stuff. Panicking silently by eliminating the default hook is desirable in some situations, especially in stripped executables. Note that the code size reduction can't be achieved with `std::panicking::set_hook` though, because panic calls before main prevent `default_hook` from being optimized away. ### 2. Optimize an inlined `sys_common::cleanup()` `sys_common::cleanup` is inlined into `lang_start_internal` and guaranteed to only ever be called once. However the compiler can't optimize away the tests in its `std::sync::Once::call_once`, resulting in many unnecessary panic checks. 
Manually removing these further shrinks an unwinding executable to 69K unstripped, 47K stripped. (`panic = 'abort'` even further to 53K & 34K.) Symbols from `std::sync::Once` account for the reduction, and it would apply to any program not using that type. ### 3. Change `Thread::new` to `Thread::_new` From: ```rust let thread = Thread::new(Some("main".to_owned())); ``` to: ```rust let thread = Thread::_new(::ffi::CString::_from_box_unchecked(Box::new(*b"main\0"))); ``` In the former, `"main"` is guaranteed to result in a valid CString, and the `Option` is guaranteed to be `Some`. Nonetheless 2 panic checks and a mutation are emitted. In the latter, the resultant CString is still guaranteed to be valid. This cuts <1K, but skips the 2 unnecessary checks. Note that `pub(crate) unsafe fn _from_box_unchecked` is added to `CString` to skip mutating `b"main"` to `b"main\0"` but is not strictly necessary to skip the checks. However `CString`'s Invariant 2 is still preserved with all public unsafe functions. This doesn't currently happen, but `Thread::new(Some("main".to_owned()))` would be inlined into `lang_start_internal` and the checks and mutation would be optimized away if the compiler could be taught: * that `"main".to_owned()` (`into()` a `Vec<u8>`) contains no `NulError`, and thus the call to `CString::new` could be rewritten to `CString::from_vec_unchecked`, * that afterwards, `CString::from_vec_unchecked` could be short-circuited by appending `0` to the original `Vec<u8>`, * and to apply these through calls to `Thread::new`. ### Comments The final reductions are: | panic | symbols | old | new | |----------------------|----------|------|-----| | (default = 'unwind') | | 160K | 69K | | (default = 'unwind') | stripped | 121K | 47K | | 'abort' | | 133K | 53K | | 'abort' | stripped | 101K | 34K | `panic = 'unwind'` still correctly unwinds with an empty `default_hook` (albeit silently). AFAIK none of these changes should require an RFC, but let me know.
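As a quick arithmetic check of the headline claim, the before/after sizes in the final table above do work out to roughly 60% in every configuration (a small Python sketch, sizes in KB taken directly from that table):

```python
# Percentage reductions implied by the before/after table in this report
# (sizes in KB). The "about 60%" claim holds for each row.
rows = {
    ("unwind", "unstripped"): (160, 69),
    ("unwind", "stripped"):   (121, 47),
    ("abort",  "unstripped"): (133, 53),
    ("abort",  "stripped"):   (101, 34),
}
for (panic, symbols), (old, new) in rows.items():
    pct = round(100 * (old - new) / old)
    print(f"panic={panic:<6} {symbols:<10} {old}K -> {new}K  ({pct}% smaller)")
```

The four rows come out to 57%, 61%, 60%, and 66%, consistent with the roughly-60% figure.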
C-enhancement,A-codegen,T-libs-api,I-heavy
low
Critical
372,097,412
go
runtime/race: goroutine IDs in race failures are incorrect
### What version of Go are you using (`go version`)? Both: `go version go1.10.4 darwin/amd64` `go version go1.11 darwin/amd64` ### Does this issue reproduce with the latest release? Yes. ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOCACHE="/Users/pmattis/Library/Caches/go-build" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/pmattis/Development/go" GORACE="" GOROOT="/Users/pmattis/Development/go1.10" GOTMPDIR="" GOTOOLDIR="/Users/pmattis/Development/go1.10/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/qc/fpqpgdqd167c70dtc6840xxh0000gn/T/go-build123544902=/tmp/go-build -gno-record-gcc-switches -fno-common" ``` ### What did you do? The goroutine IDs contained in stacktraces do not match up with the goroutine IDs in race failure messages. The small repro below demonstrates this. ``` package racegoid import ( "runtime/debug" "testing" "time" ) func TestRaceGoroutineID(t *testing.T) { var a int f := func() { start := time.Now() debug.PrintStack() time.Sleep(time.Second) for time.Since(start) < 3*time.Second { a = 5 } } go f() f() t.Log(a) } ``` Notice that the stack traces indicate that the test is using goroutines 35 and 36 while the race failures indicate goroutines 6 and 7. 
``` ~ go test -v -race === RUN TestRaceGoroutineID goroutine 35 [running]: runtime/debug.Stack(0x118d679, 0x10a6a51, 0x12f82f8) /Users/pmattis/Development/go1.11/src/runtime/debug/stack.go:24 +0xb5 runtime/debug.PrintStack() /Users/pmattis/Development/go1.11/src/runtime/debug/stack.go:16 +0x34 github.com/petermattis/race-goid.TestRaceGoroutineID.func1() /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:13 +0x69 github.com/petermattis/race-goid.TestRaceGoroutineID(0xc0000cc100) /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:20 +0xd8 testing.tRunner(0xc0000cc100, 0x11decf8) /Users/pmattis/Development/go1.11/src/testing/testing.go:827 +0x163 created by testing.(*T).Run /Users/pmattis/Development/go1.11/src/testing/testing.go:878 +0x651 goroutine 36 [running]: runtime/debug.Stack(0x118d679, 0x10a6a51, 0x12f82f8) /Users/pmattis/Development/go1.11/src/runtime/debug/stack.go:24 +0xb5 runtime/debug.PrintStack() /Users/pmattis/Development/go1.11/src/runtime/debug/stack.go:16 +0x34 github.com/petermattis/race-goid.TestRaceGoroutineID.func1() /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:13 +0x69 created by github.com/petermattis/race-goid.TestRaceGoroutineID /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:19 +0xce ================== WARNING: DATA RACE Write at 0x00c00009a090 by goroutine 7: github.com/petermattis/race-goid.TestRaceGoroutineID.func1() /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:16 +0x85 Previous write at 0x00c00009a090 by goroutine 6: github.com/petermattis/race-goid.TestRaceGoroutineID.func1() /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:16 +0x85 github.com/petermattis/race-goid.TestRaceGoroutineID() /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:20 +0xd7 testing.tRunner() 
/Users/pmattis/Development/go1.11/src/testing/testing.go:827 +0x162 Goroutine 7 (running) created at: github.com/petermattis/race-goid.TestRaceGoroutineID() /Users/pmattis/Development/go/src/github.com/petermattis/race-goid/test_test.go:19 +0xcd testing.tRunner() /Users/pmattis/Development/go1.11/src/testing/testing.go:827 +0x162 Goroutine 6 (running) created at: testing.(*T).Run() /Users/pmattis/Development/go1.11/src/testing/testing.go:878 +0x650 testing.runTests.func1() /Users/pmattis/Development/go1.11/src/testing/testing.go:1119 +0xa8 testing.tRunner() /Users/pmattis/Development/go1.11/src/testing/testing.go:827 +0x162 testing.runTests() /Users/pmattis/Development/go1.11/src/testing/testing.go:1117 +0x4ee testing.(*M).Run() /Users/pmattis/Development/go1.11/src/testing/testing.go:1034 +0x2ee main.main() _testmain.go:42 +0x221 ================== --- FAIL: TestRaceGoroutineID (3.00s) test_test.go:21: 5 testing.go:771: race detected during execution of test FAIL exit status 1 FAIL github.com/petermattis/race-goid 3.011s ```
RaceDetector,NeedsFix
low
Critical
372,103,817
go
x/build/cmd/gopherbot: race condition in auto-closure of scratch CLs causing gopherbot to stall 30s
When closing scratch CLs, the maintner corpus doesn't get updated fast enough and it attempts to close the same CLs again, resulting in a 409 conflict: ``` 2018/10/19 20:19:32 closing scratch CL https://golang.org/cl/140997 ... 2018/10/19 20:19:32 abandon scratch reviews: HTTP status 409 Conflict; change is abandoned 2018/10/19 20:19:32 HTTP status 409 Conflict; change is abandoned 2018/10/19 20:19:32 gopherbot ran in 1.745149115s 2018/10/19 20:19:32 sleeping 30s after previous error. ``` This error causes gopherbot to sleep for 30s as a result. This is related to #28226 /cc @dmitshur
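One way to make the sweep idempotent is to remember which CLs were abandoned during the current run, so a stale maintner snapshot that still lists a CL as open cannot trigger a second abandon and the resulting 409. This is a hypothetical sketch, not gopherbot's actual code; `abandon` and the CL numbers are stand-ins:

```python
# Hypothetical sketch: dedupe abandon calls so a stale corpus snapshot
# that still lists a CL as open does not cause a second (409) abandon.
abandoned = set()

def close_scratch_cls(open_cls, abandon):
    for cl in open_cls:
        if cl in abandoned:
            continue  # already closed this run; the corpus is just stale
        abandon(cl)
        abandoned.add(cl)

calls = []
close_scratch_cls([140997, 140998], calls.append)  # first sweep
close_scratch_cls([140997], calls.append)          # stale corpus re-lists 140997
print(calls)  # -> [140997, 140998]; no duplicate abandon of 140997
```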
Builders,NeedsFix
low
Critical
372,125,184
TypeScript
Wiki page 'Using the Language Service API' does not describe how to use the Language Service API
https://github.com/Microsoft/TypeScript/wiki/Using-the-Language-Service-API
Help Wanted,Docs
low
Major
372,141,533
flutter
Provide clear feedback when a hot reload has finished
In the latest round of the "Build a Simple UI" study, we've seen two participants waiting for hot reload to finish when it, in fact, had finished already. It appears that the current feedback in the console is not clear enough (see below). This is a problem when the changes are subtle on the UI or the simulator is covered by another window. ``` Initializing hot reload... Syncing files to device iPhone 6s... Reloaded 0 of 427 libraries in 690ms. ``` Proposal: ``` Initializing hot reload... Syncing files to device iPhone 6s... Reloaded 0 of 427 libraries in 690ms. Hot reload completed. ``` Cc: @gspencergoog @jayoung-lee @galeyang
tool,t: hot reload,from: study,c: proposal,P3,team-tool,triaged-tool
low
Major
372,149,184
TypeScript
Auto-import triggers for reserved keyword exports
**TypeScript Version:** 3.1.3 **Search Terms:** auto import keyword **Code** ```ts // a.ts function return_() {} function throw_() {} function break_() {} function continue_() {} export { return_ as return, throw_ as throw, break_ as break, continue_ as continue }; // b.ts // typing any of the following keywords in VS Code now auto-imports one of the above exports return throw break continue ``` **Expected behavior:** Auto-import should not trigger for reserved words. **Actual behavior:** Auto-import triggers any time you write a reserved word with the same name as an export, adding an import declaration to the top of the file for a keyword that is invalid.
Bug,Domain: Quick Fixes
low
Minor
372,151,326
pytorch
RelaxedOneHotCategorical not implementing entropy (and other abstract methods)
## ๐Ÿš€ Feature <!-- A clear and concise description of what the bug is. --> ## To Reproduce ```python from torch.distributions import RelaxedOneHotCategorical d = RelaxedOneHotCategorical(0.1, torch.tensor((0.1, 0.9))) d.entropy() ``` throws a NotImplementedError The methods `mean`, `variance` and `enumerate_support` don't seem to be implemented either. ## Expected behavior One simple solution would be to implement the methods as follows ```python def entropy(self): return self.base_dist._categorical.entropy() ``` - PyTorch Version: 0.4.1.post2 (I briefly checked the code in master and the methods don't seem to be implemented there either) - OS: Linux - How you installed PyTorch: conda - Python version: 3.6 cc @fritzo @neerajprad @alicanb @vishwakftw @nikitaved
module: distributions,triaged,enhancement
low
Critical
372,152,550
TypeScript
Declaration file emitted with esModuleInterop can't be consumed without it
**TypeScript Version:** 3.2.0-dev.20181019 **Search Terms:** **Code** ```ts import abs from "abs"; // using @types/abs which uses `export =` export const x: typeof abs = <any>null; ``` **Expected behavior:** Output `.d.ts` is: ```ts import abs = require("abs"); export declare const x: typeof abs; ``` **Actual behavior:** ```ts import abs from "abs"; export declare const x: typeof abs; ``` Discovered in `quill` on DefinitelyTyped, which imports from `quill-delta`, which is [written in TypeScript](https://github.com/quilljs/delta/blob/master/src/Delta.ts) with `esModuleInterop`.
Suggestion,Domain: Declaration Emit,Experience Enhancement
low
Major
372,185,314
angular
AnimationBuilder query returns zero elements when ":enter" or ":leave" is used
## I'm submitting a... <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior I am using **AnimationBuilder** to apply animations on child elements' `:enter` and `:leave` transitions. I tried queries like the ones in [the query docs](https://angular.io/api/animations/query#usage-notes). Using `:enter .child-element` and `.child-element:enter` throws the following error > ERROR Error: Unable to create the animation due to the following errors: `query(":enter .child-element")` returned zero elements. (Use `query(":enter .child-element", { optional: true })` if you wish to allow this.) ## Expected behavior Ability to query child elements with `:enter` and `:leave` transitions ## Minimal reproduction of the problem with instructions ```ts const builder = this.animationBuilder.build([ query( '.child-element', // Works // ':enter .child-element', // This does not work ! // '.child-element:enter', // This also does not work ! [ stagger(100, [useAnimation(animation)]) ] ) ]); ``` https://stackblitz.com/edit/angular-jh4gnd?file=src%2Fapp%2Fapp.component.ts ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. 
--> ## Environment <pre><code> Angular version: 7.0.0 <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [x ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: 8 - Platform: Windows Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
type: bug/fix,area: animations,freq2: medium,P3
low
Critical
372,199,810
kubernetes
Allow updating rbd monitors of persisted volumes
**Is this a BUG REPORT or FEATURE REQUEST?**: Probably a bit of both. /kind bug /kind feature /sig storage **What happened**: We needed to move our ceph cluster which in turn gave the ceph monitors new ips. In k8s the ips are stored in the storageclass and in each pv. We updated the storageclass successfully but trying to update a pv it fails: ``` $ kubectl patch pv pvc-9841e253-d453-11e8-a2fe-0ec0b66da86e -p $'spec:\n rbd:\n monitors: ["10.0.1.2:1234"]' The PersistentVolume "pvc-9841e253-d453-11e8-a2fe-0ec0b66da86e" is invalid: spec.persistentvolumesource: Forbidden: is immutable after creation ``` **What you expected to happen**: It should be possible to change the monitors easily. **How to reproduce it (as minimally and precisely as possible)**: Create a rbd backed PV and try to change the monitors section afterwards. **Environment**: - Kubernetes version (use `kubectl version`): v1.11.3
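For reference, the strategic-merge patch attempted above has this JSON shape (illustrative only; the API server rejects it either way, since `spec.persistentvolumesource` is immutable after creation):

```python
import json

# JSON equivalent of the YAML patch from the kubectl command above.
patch = {"spec": {"rbd": {"monitors": ["10.0.1.2:1234"]}}}
print(json.dumps(patch))
```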
kind/bug,sig/storage,kind/feature,lifecycle/frozen
medium
Critical
372,200,277
node
Non-unique V8 scriptUrl when using dynamic import in CJS
<!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 10.12.0 * **Platform**: Linux 64 bit * **Subsystem**: Modules <!-- Please provide more details below this comment. --> This bug is a regression in `10.12.0`, it worked in `10.11.0`. I am currently working on Node tooling and code coverage using the V8 profiler. When requesting profiling data, V8 returns an object per script. This object has a `url` value (the value passed to V8 when creating the module). To support coverage, the URL should be unique: for each file, there should be unique profiling data. The latest Node version breaks this unicity when dealing with dynamic imports in CJS. If I have the following `main.js` (commonJS) file: ``` import('./hello-world') .then(helloWorld => helloWorld.helloWorld()) ``` And execute it (with `--experimental-modules`), V8 receives the following modules with the same URL: ``` // scriptId: 76 // url: file:///data/projects/web/istanbulize/src/test/fixtures/esm-hello-world/main.js import { executor, $default } from ""; export { $default as default } if (typeof executor === "function") { // add await to this later if top level await comes along executor() } ``` ``` // scriptId: 77 // url: file:///data/projects/web/istanbulize/src/test/fixtures/esm-hello-world/main.js (function (exports, require, module, __filename, __dirname) { import('./hello-world') .then(helloWorld => helloWorld.helloWorld()) }); ``` The first module should not use this url since it does not really correspond to this file but is an internal module to bridge between CJS and ESM. 
When importing CJS modules from ESM, they get the `cjs-facade://` protocol instead of `file://`. This should be applied here too to fix the ambiguity. Older versions of Node (10.11.0 and before) used the system path instead of the file URL for the CJS module. You had `file:///main.js` and `/main.js` so it was possible to detect this case and ignore the file URL because it corresponds to the wrapper. Now that file URLs are used everywhere (a good thing), it is no longer possible to distinguish the internal wrapper from the real file unless you look at the source code.
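To illustrate why the duplicate URL breaks coverage tooling, here is a small sketch (the data shape mirrors the two script entries quoted above, with paths shortened) that groups V8 script entries by `url` and flags collisions — once two scripts share a URL, a per-file coverage merge cannot tell the bridge wrapper apart from the real module:

```python
from collections import Counter

# Two V8 script entries (ids 76 and 77 above) report the same file:// URL,
# so per-file coverage data for the two scripts collides.
scripts = [
    {"scriptId": "76", "url": "file:///project/main.js"},  # ESM/CJS bridge wrapper
    {"scriptId": "77", "url": "file:///project/main.js"},  # actual CJS module
]
counts = Counter(s["url"] for s in scripts)
collisions = [url for url, n in counts.items() if n > 1]
print(collisions)  # -> ['file:///project/main.js']
```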
help wanted,module,vm
low
Critical
372,207,520
terminal
MiniTerm sample should specify long-running task
In the MiniTerm sample you create tasks to effectively pump IO. Since these could be long-running tasks in a practical application based on the sample, the sample should probably specify TaskCreationOptions.LongRunning to avoid consuming a couple of threads from the ThreadPool.
Area-Performance,Issue-Samples,Product-Meta
low
Minor
372,222,711
rust
rustc fails to infer default types that appear inside a path
``` type MyWorld = World<u32>; struct World<S = u32> { s: S, } impl<S> World<S> { pub fn new() -> Self { unimplemented!() } } fn main() { //let mut world = World::new(); // Doesn't compile let mut world: World = World::new(); // <= This works :/ let mut world = MyWorld::new(); // This also works } ``` [playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=8fe5516b40f25a2c4dd2d7dbfd430b7f) ``` let mut world = World::new(); // Doesn't compile let mut world: World = World::new(); // This works ``` To me this seems like an oversight in how types are inferred inside a path.
A-inference
low
Critical
372,226,165
TypeScript
Imports are not emitted when overwritten by a declare statement
<!-- ๐Ÿšจ STOP ๐Ÿšจ ๐—ฆ๐—ง๐—ข๐—ฃ ๐Ÿšจ ๐‘บ๐‘ป๐‘ถ๐‘ท ๐Ÿšจ Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.2.0-dev.201xxxxx <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** **Code** ```ts import hashSum = require('hash-sum'); import shortid = require('shortid'); declare function shortid(): string declare function hashSum(input): string export function hashAny(seed?, ...argv): string { if (!seed) { seed = shortid() } else if (typeof seed !== 'string') { seed = hashSum(seed) } return String(seed) } export default hashAny; ``` **Expected behavior:** > output js file include ```ts const hashSum = require("hash-sum"); const shortid = require("shortid"); ``` **Actual behavior:** > miss require `hash-sum`, `shortid` ```ts "use strict"; Object.defineProperty(exports, "__esModule", { value: true }); function hashAny(seed, ...argv) { if (!seed) { seed = shortid(); } else if (typeof seed !== 'string') { seed = hashSum(seed); } return String(seed); } exports.hashAny = hashAny; exports.default = hashAny; ``` **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> **Related Issues:** <!-- Did you find other bugs that looked similar? -->
Bug,Domain: Error Messages
low
Critical
372,232,568
flutter
Status bar height returns 0.0 on iOS versions below 11
I tested the status bar height on different versions of iOS; on systems below iOS 11, the status bar height is reported as 0.0.
platform-ios,framework,engine,e: OS-version specific,P2,team-ios,triaged-ios
low
Minor
372,237,065
opencv
Python grabcut sample: right mouse button in input window conflicts with help pop-up
##### System information (version) - OpenCV => 3.4.3 - Operating System / Platform => Ubuntu 64 Bit - Compiler => Anaconda ##### Detailed description If anyone tries to play with the python grabcut sample demo from samples/python/grabcut.py, it does not work as expected. The expected functionality is to be able to draw a rectangle using the right mouse button in the input window. But the right mouse button is also tied to the help pop-up for the window, so the right-button-down event is not registered properly. ##### Steps to reproduce Just run the code samples/python/grabcut.py using the latest version. ##### Solution Replace this line (line 121): ```.py cv.namedWindow('input') ``` with this line: ```.py cv.namedWindow('input', cv.WINDOW_GUI_NORMAL) ```
category: highgui-gui,category: samples,incomplete
low
Minor
372,237,663
flutter
BlendMode.multiply and BlendMode.modulate are swapped
I have the following custom `CustomPainter` class: ```dart class BlendPainter extends CustomPainter { ui.Image image, mask; BlendPainter(this.image, this.mask); @override void paint(Canvas canvas, Size size) { Rect r = Offset.zero & size; Paint paint = Paint(); if (image != null && mask != null) { Size inputSize = Size(mask.width.toDouble(), mask.height.toDouble()); FittedSizes fs = applyBoxFit(BoxFit.contain, inputSize, size); Rect src = Offset.zero & fs.source; Rect dst = Alignment.center.inscribe(fs.destination, r); canvas.saveLayer(dst, Paint()); canvas.drawImageRect(image, src, dst, paint); // paint.blendMode = BlendMode.multiply; paint.blendMode = BlendMode.modulate; canvas.drawImageRect(mask, src, dst, paint); canvas.restore(); } } @override bool shouldRepaint(CustomPainter oldDelegate) => true; } ``` and for `image`: ![image](https://user-images.githubusercontent.com/5962376/47258889-7a830f00-d4a2-11e8-859f-cf01fa7da5b5.png) and `mask`: ![mask](https://user-images.githubusercontent.com/5962376/47258880-5fb09a80-d4a2-11e8-8afe-b1a7d1f8b9b5.png) I got the following result for `BlendMode.multiply`: ![multiply](https://user-images.githubusercontent.com/5962376/47264578-0f751f00-d51a-11e8-9e22-0b5e99997890.png) and for `BlendMode.modulate`: ![modulate](https://user-images.githubusercontent.com/5962376/47264583-1c920e00-d51a-11e8-8c78-c94ca0e59d43.png) I am pretty sure that `BlendMode.multiply` should also multiply the alphas of both images, so the result should not be the full rectangle - this leads me to the conclusion that these two modes are swapped or something
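If I read Skia's documented formulas correctly (Flutter renders through Skia), the two modes on premultiplied RGBA are `modulate: r = s*d` (componentwise) and `multiply: r = s*(1-da) + d*(1-sa) + s*d`. A small numeric sketch at a pixel where the mask is fully transparent over an opaque image pixel shows multiply keeping the destination there while modulate clears it, which would explain the screenshots; values below are illustrative, not taken from Skia's code:

```python
def modulate(s, d):
    # componentwise product of premultiplied RGBA
    return tuple(sc * dc for sc, dc in zip(s, d))

def multiply(s, d):
    # Skia's multiply on premultiplied RGBA: s*(1-da) + d*(1-sa) + s*d
    sa, da = s[3], d[3]
    return tuple(sc * (1 - da) + dc * (1 - sa) + sc * dc
                 for sc, dc in zip(s, d))

mask_px  = (0.0, 0.0, 0.0, 0.0)  # fully transparent (outside the rounded rect)
image_px = (1.0, 0.0, 0.0, 1.0)  # opaque red

print(modulate(mask_px, image_px))  # -> (0.0, 0.0, 0.0, 0.0): image cleared
print(multiply(mask_px, image_px))  # -> (1.0, 0.0, 0.0, 1.0): image kept
```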
engine,dependency: skia,has reproducible steps,P2,team-engine,triaged-engine,found in release: 3.19,found in release: 3.21
low
Major
372,249,687
TypeScript
Property ? is missing in wrong type
I'm seeing a regression where code that works in 3.1 does not work in tsnext. It seems like typescript is not looking at the right type (`ContourTransform`) and tried to match against a different type (`WordcloudTransform`). <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** @next (3.2) but not 3.1 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** Property ? is missing in type **Code** https://github.com/vega/vega-typings/blob/master/tests/spec/examples.ts#L3394 **Expected behavior:** The tests should not fail and they work fine in ts 3.1. **Actual behavior:** https://travis-ci.org/vega/vega-typings/jobs/444180003 ``` ERROR: 3394:3 expect TypeScript@next compile error: Type '({ name: string; url: string; transform: { type: "filter"; expr: string; }[]; } | { name: string; source: string; transform: { type: "contour"; x: { expr: string; }; y: { expr: string; }; size: { signal: string; }[]; count: { ...; }; }[]; })[]' is not assignable to type 'Data[]'. Type '{ name: string; url: string; transform: { type: "filter"; expr: string; }[]; } | { name: string; source: string; transform: { type: "contour"; x: { expr: string; }; y: { expr: string; }; size: { signal: string; }[]; count: { ...; }; }[]; }' is not assignable to type 'Data'. Type '{ name: string; source: string; transform: { type: "contour"; x: { expr: string; }; y: { expr: string; }; size: { signal: string; }[]; count: { signal: string; }; }[]; }' is not assignable to type 'Data'. 
Type '{ name: string; source: string; transform: { type: "contour"; x: { expr: string; }; y: { expr: string; }; size: { signal: string; }[]; count: { signal: string; }; }[]; }' is not assignable to type '{ url: string | SignalRef; } & { name: string; on?: OnTrigger[] | undefined; format?: FormatJSON | FormatSV | FormatDSV | ({ type: "topojson"; property?: string | undefined; } & { ...; }) | ({ ...; } & { ...; }) | { ...; } | SignalRef | undefined; transform?: Transform[] | undefined; }'. Type '{ name: string; source: string; transform: { type: "contour"; x: { expr: string; }; y: { expr: string; }; size: { signal: string; }[]; count: { signal: string; }; }[]; }' is not assignable to type '{ url: string | SignalRef; }'. Property 'url' is missing in type '{ name: string; source: string; transform: { type: "contour"; x: { expr: string; }; y: { expr: string; }; size: { signal: string; }[]; count: { signal: string; }; }[]; }'. ```
Bug
low
Critical
372,259,980
go
x/build/cmd/gopherbot: reply to commit comments saying we do not monitor them
We get from time to time a comment on a commit in the main repo. We do not monitor those and nobody gets a ping for them (except the author of the commit who might or might not check GitHub notifications and does not have to). gopherbot should reply automatically pointing them towards the issue tracker and https://github.com/golang/go/wiki/Questions. (Almost sure there was an issue for this, but couldn't find it. Please close as dup if anyone does.)
Builders,FeatureRequest
low
Major
372,269,086
rust
Unable to trace a crash from the backtrace; no trace for my own code
Windows, MSVC toolchain, release build with `debug = true` configured in `[profile.release]`, and `#[inline(never)]` applied. The problem is that I am unable to trace a crash from the backtrace: there are no frames for my own code. With a debug build the backtrace works, and I can find my own source line. ``` rustc 1.29.1 (b801ae664 2018-09-20) cargo 1.29.0 (524a578d7 2018-08-05) the trace is below: PS D:\rust-projects\hello_cargo\target\release> $env:RUST_BACKTRACE=1 PS D:\rust-projects\hello_cargo\target\release> .\hello_cargo.exe The value of x is: 5 thread 'main' panicked at 'Not a number!: ParseIntError { kind: InvalidDigit }', libcore\result.rs:945:5 stack backtrace: 0: std::sys::windows::backtrace::set_frames at C:\projects\rust\src\libstd\sys\windows\backtrace\mod.rs:104 1: std::sys::windows::backtrace::set_frames at C:\projects\rust\src\libstd\sys\windows\backtrace\mod.rs:104 2: std::sys_common::backtrace::_print at C:\projects\rust\src\libstd\sys_common\backtrace.rs:71 3: std::sys_common::backtrace::_print at C:\projects\rust\src\libstd\sys_common\backtrace.rs:71 4: std::panicking::default_hook::{{closure}} at C:\projects\rust\src\libstd\panicking.rs:211 5: std::panicking::default_hook at C:\projects\rust\src\libstd\panicking.rs:227 6: std::panicking::rust_panic_with_hook at C:\projects\rust\src\libstd\panicking.rs:475 7: std::panicking::continue_panic_fmt at C:\projects\rust\src\libstd\panicking.rs:390 8: std::panicking::rust_begin_panic at C:\projects\rust\src\libstd\panicking.rs:325 9: core::panicking::panic_fmt at C:\projects\rust\src\libcore\panicking.rs:77 10: core::result::unwrap_failed<core::num::ParseIntError> at C:\projects\rust\src\libcore\macros.rs:26 11: core::result::Result<u32, core::num::ParseIntError>::expect at C:\projects\rust\src\libcore\result.rs:809 12: core::result::Result<u32, core::num::ParseIntError>::expect at C:\projects\rust\src\libcore\result.rs:809 13: std::rt::lang_start::{{closure}}<()> at C:\projects\rust\src\libstd\rt.rs:74 14: 
std::rt::lang_start_internal::{{closure}} at C:\projects\rust\src\libstd\rt.rs:59 15: std::rt::lang_start_internal::{{closure}} at C:\projects\rust\src\libstd\rt.rs:59 16: panic_unwind::__rust_maybe_catch_panic at C:\projects\rust\src\libpanic_unwind\lib.rs:105 17: std::panicking::try at C:\projects\rust\src\libstd\panicking.rs:289 18: std::panicking::try at C:\projects\rust\src\libstd\panicking.rs:289 19: std::panicking::try at C:\projects\rust\src\libstd\panicking.rs:289 20: main 21: invoke_main at f:\dd\vctools\crt\vcstartup\src\startup\exe_common.inl:78 22: invoke_main at f:\dd\vctools\crt\vcstartup\src\startup\exe_common.inl:78 23: BaseThreadInitThunk 24: RtlUserThreadStart ```
T-libs-api,O-windows-msvc,C-bug
low
Critical
372,288,181
TypeScript
language service suggestion for Declare property '...' in constructor argument
## Search Terms - typescript declare property in constructor - typescript language service declare property in constructor ## Suggestion When a non-existing property `x` is accessed on `this` in a class method, one of the fixes that ts language service currently suggests, is: > Declare property x Which adds property `x` to the class. It would be nice to also have **Declare property x in constructor arguments**. It can use `private` access modifier by default. ## Use Cases While it's generally useful, it becomes especially handy when developing applications with DI frameworks like Angular that use constructor injection to wire dependencies together. ## Examples \- ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. new expression-level syntax)
Suggestion,Domain: Quick Fixes,Experience Enhancement
low
Minor
372,294,699
rust
Derived trait shadows a blanket default impl, specialization unshadows but the generic impl ends up being used
This example does not compile with 1.31.0-nightly (2018-10-20 155510e377ae2a8d8ee0): ```rust #![feature(specialization)] use std::borrow::Borrow; #[derive(PartialEq)] struct MyString(String); impl Borrow<str> for MyString { fn borrow(&self) -> &str { &self.0 } } impl<Rhs> PartialEq<Rhs> for MyString where Rhs: ?Sized + Borrow<str>, { default fn eq(&self, rhs: &Rhs) -> bool { println!("default impl used"); self.0 == rhs.borrow() } } #[cfg(never)] impl PartialEq<str> for MyString { fn eq(&self, other: &str) -> bool { println!("specialized impl used"); self.0 == other } } fn main() { let s = MyString(String::from("Hello, world!")); println!("{}", s == "ะ—ะดั€ะฐะฒัั‚ะฒัƒะน, ะผะธั€!"); } ``` The error output: ``` error[E0308]: mismatched types --> src/main.rs:34:25 | 34 | println!("{}", s == "ะ—ะดั€ะฐะฒัั‚ะฒัƒะน, ะผะธั€!"); | ^^^^^^^^^^^^^^^^^^ expected struct `MyString`, found reference | = note: expected type `MyString` found type `&'static str` ``` Enabling the specialized `impl PartialEq<str>` makes the example work, but it prints "default impl used". Removing the `derive` attribute also fixes the example, but only if the manually specialized impl is not present.
T-compiler,A-specialization,C-bug,requires-nightly,F-specialization
low
Critical
372,320,473
pytorch
Ubuntu 16.04: building Caffe2 from source
## โ“ Questions and Help ### Please note that this issue tracker is not a help form and this issue will be closed. cm@cm-Vostro-2421:~/pytorch$ python setup.py install Building wheel torch-1.0.0a0+ed02619 running install setup.py::run() running build_deps setup.py::build_deps::run() + SYNC_COMMAND=cp ++ command -v rsync + '[' -x /usr/bin/rsync ']' + SYNC_COMMAND='rsync -lptgoD' + CMAKE_COMMAND=cmake ++ command -v cmake3 + [[ -x '' ]] + USE_CUDA=0 + USE_ROCM=0 + USE_NNPACK=0 + USE_MKLDNN=0 + USE_GLOO_IBVERBS=0 + CAFFE2_STATIC_LINK_CUDA=0 + RERUN_CMAKE=1 + [[ 8 -gt 0 ]] + case "$1" in + USE_CUDA=1 + shift + [[ 7 -gt 0 ]] + case "$1" in + USE_NNPACK=1 + shift + [[ 6 -gt 0 ]] + case "$1" in + break + CMAKE_INSTALL='make install' + BUILD_SHARED_LIBS=ON + USER_CFLAGS= + USER_LDFLAGS= + [[ -n '' ]] + [[ -n '' ]] + [[ -n '' ]] ++ uname + '[' Linux == Darwin ']' +++ dirname ../tools/build_pytorch_libs.sh ++ cd ../tools/.. +++ pwd ++ printf '%q\n' /home/cm/pytorch + BASE_DIR=/home/cm/pytorch + TORCH_LIB_DIR=/home/cm/pytorch/torch/lib + INSTALL_DIR=/home/cm/pytorch/torch/lib/tmp_install + THIRD_PARTY_DIR=/home/cm/pytorch/third_party + C_FLAGS=' -I"/home/cm/pytorch/torch/lib/tmp_install/include" -I"/home/cm/pytorch/torch/lib/tmp_install/include/TH" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THC" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCUNN"' + C_FLAGS=' -I"/home/cm/pytorch/torch/lib/tmp_install/include" -I"/home/cm/pytorch/torch/lib/tmp_install/include/TH" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THC" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1' + 
LDFLAGS='-L"/home/cm/pytorch/torch/lib/tmp_install/lib" ' + LD_POSTFIX=.so ++ uname + [[ Linux == \D\a\r\w\i\n ]] + [[ 0 -eq 1 ]] + LDFLAGS='-L"/home/cm/pytorch/torch/lib/tmp_install/lib" -Wl,-rpath,$ORIGIN' + CPP_FLAGS=' -std=c++11 ' + GLOO_FLAGS='-DBUILD_TEST=OFF ' + THD_FLAGS= + NCCL_ROOT_DIR=/home/cm/pytorch/torch/lib/tmp_install + [[ 1 -eq 1 ]] + GLOO_FLAGS+='-DUSE_CUDA=1 -DNCCL_ROOT_DIR=/home/cm/pytorch/torch/lib/tmp_install' + [[ 0 -eq 1 ]] + CWRAP_FILES='/home/cm/pytorch/torch/lib/ATen/Declarations.cwrap;/home/cm/pytorch/torch/lib/THNN/generic/THNN.h;/home/cm/pytorch/torch/lib/THCUNN/generic/THCUNN.h;/home/cm/pytorch/torch/lib/ATen/nn.yaml' + CUDA_NVCC_FLAGS=' -I"/home/cm/pytorch/torch/lib/tmp_install/include" -I"/home/cm/pytorch/torch/lib/tmp_install/include/TH" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THC" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1' + [[ -z '' ]] + CUDA_DEVICE_DEBUG=0 + '[' -z '' ']' ++ getconf _NPROCESSORS_ONLN + MAX_JOBS=4 + BUILD_TYPE=Release + [[ -n '' ]] + [[ -n '' ]] + echo 'Building in Release mode' Building in Release mode + mkdir -p /home/cm/pytorch/torch/lib/tmp_install + for arg in '"$@"' + [[ nccl == \n\c\c\l ]] + pushd /home/cm/pytorch/third_party ~/pytorch/third_party ~/pytorch/build + build_nccl + mkdir -p build/nccl + pushd build/nccl ~/pytorch/third_party/build/nccl ~/pytorch/third_party ~/pytorch/build + [[ 1 -eq 1 ]] + cmake ../../nccl -DCMAKE_MODULE_PATH=/home/cm/pytorch/cmake/Modules_CUDA_fix -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/cm/pytorch/torch/lib/tmp_install '-DCMAKE_C_FLAGS= -I"/home/cm/pytorch/torch/lib/tmp_install/include" -I"/home/cm/pytorch/torch/lib/tmp_install/include/TH" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THC" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THS" 
-I"/home/cm/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 ' '-DCMAKE_CXX_FLAGS= -I"/home/cm/pytorch/torch/lib/tmp_install/include" -I"/home/cm/pytorch/torch/lib/tmp_install/include/TH" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THC" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCS" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THNN" -I"/home/cm/pytorch/torch/lib/tmp_install/include/THCUNN" -DOMPI_SKIP_MPICXX=1 -std=c++11 ' -DCMAKE_SHARED_LINKER_FLAGS= -DCMAKE_UTILS_PATH=/home/cm/pytorch/cmake/public/utils.cmake -DNUM_JOBS=4 -- The C compiler identification is GNU 5.4.0 -- The CXX compiler identification is GNU 5.4.0 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Looking for pthread.h -- Looking for pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found CUDA: /usr/local/cuda (found suitable version "8.0", minimum required is "7.0") -- Autodetected CUDA architecture(s): 2.1(2.0) -- Set NVCC_GENCODE for building NCCL: -gencode=arch=compute_20,code=sm_21 -- Configuring done -- Generating done -- Build files have been written to: /home/cm/pytorch/third_party/build/nccl + 
make install -j4 Scanning dependencies of target nccl [100%] Generating lib/libnccl.so make[3]: warning: -jN forced in submake: disabling jobserver mode. Generating nccl.h.in > nccl.h Compiling init.cu > /home/cm/pytorch/third_party/build/nccl/obj/init.o Compiling ring.cu > /home/cm/pytorch/third_party/build/nccl/obj/ring.o Compiling bootstrap.cu > /home/cm/pytorch/third_party/build/nccl/obj/bootstrap.o Compiling transport.cu > /home/cm/pytorch/third_party/build/nccl/obj/transport.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored init.cu:52:1: warning: โ€˜ncclNetโ€™ initialized and declared โ€˜externโ€™ ncclNet_t* ncclNet = NULL; ^ nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling misc/group.cu > /home/cm/pytorch/third_party/build/nccl/obj/misc/group.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling misc/nvmlwrap.cu > /home/cm/pytorch/third_party/build/nccl/obj/misc/nvmlwrap.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling misc/ibvwrap.cu > /home/cm/pytorch/third_party/build/nccl/obj/misc/ibvwrap.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling misc/rings.cu > /home/cm/pytorch/third_party/build/nccl/obj/misc/rings.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling misc/utils.cu > /home/cm/pytorch/third_party/build/nccl/obj/misc/utils.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling misc/enqueue.cu > /home/cm/pytorch/third_party/build/nccl/obj/misc/enqueue.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling transport/p2p.cu > /home/cm/pytorch/third_party/build/nccl/obj/transport/p2p.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling transport/shm.cu > /home/cm/pytorch/third_party/build/nccl/obj/transport/shm.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
Compiling transport/net.cu > /home/cm/pytorch/third_party/build/nccl/obj/transport/net.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling transport/net_socket.cu > /home/cm/pytorch/third_party/build/nccl/obj/transport/net_socket.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling transport/net_ib.cu > /home/cm/pytorch/third_party/build/nccl/obj/transport/net_ib.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling collectives/all_reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/all_reduce.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling collectives/all_gather.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/all_gather.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling collectives/broadcast.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/broadcast.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling collectives/reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/reduce.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling collectives/reduce_scatter.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/reduce_scatter.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling all_reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_reduce_prod.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Grabbing nccl.h > /home/cm/pytorch/third_party/build/nccl/include/nccl.h Compiling broadcast.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/broadcast_prod.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_prod.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling all_gather.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_gather_prod.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling reduce_scatter.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_scatter_prod.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling all_reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_reduce_min.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling broadcast.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/broadcast_min.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_min.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling all_gather.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_gather_min.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling reduce_scatter.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_scatter_min.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling all_reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_reduce_max.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling broadcast.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/broadcast_max.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_max.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling all_gather.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_gather_max.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling reduce_scatter.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_scatter_max.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
Compiling all_reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_reduce_sum.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling broadcast.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/broadcast_sum.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling reduce.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_sum.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling all_gather.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/all_gather_sum.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). Compiling reduce_scatter.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/reduce_scatter_sum.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored Compiling functions.cu > /home/cm/pytorch/third_party/build/nccl/obj/collectives/device/functions.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored ptxas warning : Too big maxrregcount value specified 96, will be ignored nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). nvlink fatal : Internal error: reference to deleted section Makefile:83: recipe for target '/home/cm/pytorch/third_party/build/nccl/obj/collectives/device/devlink.o' failed make[5]: *** [/home/cm/pytorch/third_party/build/nccl/obj/collectives/device/devlink.o] Error 1 Makefile:45: recipe for target 'devicelib' failed make[4]: *** [devicelib] Error 2 Makefile:24: recipe for target 'src.build' failed make[3]: *** [src.build] Error 2 CMakeFiles/nccl.dir/build.make:60: recipe for target 'lib/libnccl.so' failed make[2]: *** [lib/libnccl.so] Error 2 CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/nccl.dir/all' failed make[1]: *** [CMakeFiles/nccl.dir/all] Error 2 Makefile:127: recipe for target 'all' failed make: *** [all] Error 2 Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 libshm gloo c10d THD'
caffe2
low
Critical
372,324,498
TypeScript
API for getting an Identifier's scope
I want to write a CustomTransformers for my webpack loader config. Babel's visitors have an API for [scope](https://www.henryzoo.com/babel.github.io/docs/advanced/plugins/scope/). If I get a path for the ImportDeclaration `import * as React from "react"`, I can use `path.scope.getBinding('React').referencePaths` to get all paths that reference React. Is there any API in TypeScript to get an identifier's **scope** or its **referenced nodes/identifiers**? Also, is there any API in TypeScript to get an ImportDeclaration's moduleSpecifier's *resolved source file*? For example, the resolved source file of `import * as React from "react"` is `/home/me/my-ts-project/node_modules/react/cjs/react.development.js`.
Suggestion,API,Experience Enhancement
low
Minor
372,326,541
terminal
Reading extended attributes (RGB colors) from the screen buffer
TL;DR: How? Scenario: an app needs to read a part of the screen buffer, output something, and then, when needed, restore the original buffer content. ReadConsoleOutput has always worked seamlessly for that, but Windows now supports 256 colours, true colours and underlined text, while CHAR_INFO.Attributes is still a WORD and obviously can't hold all that luxury. I remember reading somewhere that you don't plan to extend the output APIs (WriteConsoleOutput and friends) because the same can be achieved with VT sequences (with limitations, inconvenience and more work, but, technically, yes, it can) - but what about reading? I tried to search first and found [this](https://github.com/Microsoft/console/issues/88#issuecomment-387777464) comment, but I don't think that #57 would help here - no 3rd-party terminals are involved in this case; we're reading purely static data. ![screen](https://user-images.githubusercontent.com/11453922/47342037-8cc79d80-d69a-11e8-83b8-d9d12747d7b4.png)
Issue-Feature,Product-Conhost,Area-Server
medium
Major
372,342,267
opencv
JavaScript fastNlMeansDenoising() Support
##### System information (version) - OpenCV => 3.4.3 (opencv.js) - Operating System / Platform => Ubuntu 17.10 - x86_64 GNU/Linux - Compiler => chrome engine ( web browser ) ##### Detailed description I faced this error when I use the fastNlMeansDenoising() method: `Uncaught TypeError exception: cv.fastNlMeansDenoising() is not a function` ##### Steps to reproduce ``` let src = cv.imread(IMG_SRC); let dst = new cv.Mat(); cv.fastNlMeansDenoising(src, dst, 10, 7, 21); cv.imshow('DST_ELEMENT_ID', dst); ``` ##### Request It would be helpful to expose a denoising method in the JavaScript version (it is not mentioned in the documentation).
feature,priority: low,category: photo,category: javascript (js)
low
Critical
372,376,430
godot
Incorrect texture UV for Label (glyphs' UVs are used instead of label's)
**Godot version:** 3.0.6 **OS/device:** Windows 8.1, Toshiba Satellite **Issue description:** For shaders attached to Labels, UV is not as expected. If you attach a shader to a Label and try to do a simple vertical gradient on it, UV.y = 1.0 does not correspond to the bottom of the Label. **Steps to reproduce:** 1. Create a Label. 2. Add a dynamic font of your choosing. 3. Attach the following shader. 4. Observe that the bottom of the text does not have color (1.0, 0.0, 0.0), as would be expected at UV = 1.0. 5. The multiplier must be changed to around 10.0 in order to see the expected gradient. ``` shader_type canvas_item; render_mode unshaded; // One would expect UV.y = 1.0 to correspond to the bottom of the text. // This does not seem to be the case, as the bottom of the text is not pure red. // You must change the multiplier to around 10.0 in order to see the desired gradient. uniform float multiplier = 1.0; void fragment() { float texture_alpha = texture(TEXTURE, UV).a; if (texture_alpha > 0.0) { COLOR = vec4(UV.y * multiplier, 0.0, 0.0, texture_alpha); } else { COLOR = texture(TEXTURE, UV); } } ``` **Minimal reproduction project:** [ExampleProject.zip](https://github.com/godotengine/godot/files/2499944/ExampleProject.zip)
bug,topic:rendering,confirmed,topic:gui
medium
Major
372,440,879
vscode
Expanding emmet abbreviations inside JSX inline functions not working
Issue Type: <b>Bug</b> Expanding Emmet abbreviations inside inline functions is not working. Given the example React component below I am unable to expand any abbreviations inside the render function. Expanding works in the rest of the file. ```jsx import React from 'react'; import { Form } from 'react-final-form'; const SomeComponent = () => ( <Form render={() => ( <pre>Expanding Emmet abbreviations inside this function is not working</pre> )} /> ); export default SomeComponent; ``` What bothers me the most is that it is not even working when explicitly selecting "Emmet: Expand Abbreviation" from the menu. If it were up to me, I would make the explicit choice from the menu expand anything, in any syntax. Additionally, I would like to note that "Emmet: Wrap with Abbreviation" seems to already work like that. Shouldn't "Emmet: Expand Abbreviation" work the same way? --- VS Code version: Code 1.28.2 (7f3ce96ff4729c91352ae6def877e59c561f4850, 2018-10-17T00:18:43.347Z) OS version: Darwin x64 18.0.0 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz (8 x 2200)| |GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: enabled<br>rasterization: enabled<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled| |Load (avg)|2, 2, 2| |Memory (System)|16.00GB (2.19GB free)| |Process Argv|| |Screen Reader|no| |VM|0%| </details><details><summary>Extensions (4)</summary> Extension|Author (truncated)|Version ---|---|--- vscode-eslint|dba|1.6.1 EditorConfig|Edi|0.12.5 vscode-styled-components|jpo|0.0.22 material-icon-theme|PKi|3.6.0 </details> <!-- generated by issue reporter -->
bug,emmet
low
Critical
372,451,603
TypeScript
Declarations are not generated for a high order component
**TypeScript Version:** 3.2.0-dev.201xxxxx, 2.9.1, 3.1.3 **Search Terms:** declaration, high-order, hoc **Code** ```ts export function Executor(func: () => any) { class ExecutingClass { execute() { func(); } } return ExecutingClass; } ``` **Expected behavior:** Running both `tsc` and `tsc -d` produces no errors. **Actual behavior:** When I'm running `tsc` on this code, everything works fine. When I'm running `tsc -d` on this code, I get this error: ``` hoc.ts:1:17 - error TS4060: Return type of exported function has or is using private name 'ExecutingClass'. 1 export function Executor(func: () => any) { ``` It is still possible to generate typings for this if you change the code to this: ```ts export function Executor(func: () => any) { const ExecutingClass = class { execute() { func(); } } return ExecutingClass; } export default Executor; ```
Suggestion,Domain: Declaration Emit,Experience Enhancement
low
Critical
372,492,491
go
cmd/vet: flag using %s:%d to construct network addresses
I recently diagnosed a bug in someone's Go program where a user reported that they couldn't get the program to connect, and it turned out the issue was that the programmer used `fmt.Sprintf("%s:%d", host, port)` instead of `net.JoinHostPort` to construct a network address to pass to `net.Dial`: https://twitter.com/zekjur/status/1049642773278650368. The issue is subtle: `net.JoinHostPort` correctly handles IPv6 address literals (e.g. 2001:4860:4860::8888), whereas fmt.Sprintf doesn't. A very similar issue is to use `strings.Split` instead of `net.SplitHostPort`. I noticed that both of these issues are fairly prevalent, likely because programmers aren't that accustomed to using IPv6 literals yet, but I expect them to become more common as IPv6 adoption continues to grow. I propose adding a vet check to flag uses of %s:%d format strings with arguments whose names contain port, addr, host, listen or bind. This heuristic will flag 12356 occurrences¹ I found on GitHub using BigQuery, and hopefully make programmers aware not only of net.JoinHostPort but also net.SplitHostPort, for which writing a check is significantly harder. I can send a CL to implement this. Let me know what you think. ¹ The BigQuery query I used was: ``` WITH lines AS (SELECT id, SPLIT(content, "\n") AS line FROM `gocontents.gocontents`) SELECT path, flattened_line FROM lines CROSS JOIN UNNEST(lines.line) AS flattened_line JOIN `gocontents.gofiles` AS files ON files.id = lines.id WHERE REGEXP_CONTAINS(flattened_line, r"fmt.Sprintf\(\"%s:%d\", (port|addr|host|listen|bind)") ```
Proposal,Proposal-Accepted,FeatureRequest,Analysis
medium
Critical
372,523,928
vscode
Task Manager: provide the ability to reuse custom problem matchers
I have several related tasks that share a custom output format. I'd like to be able to define a problem matcher for this format once and reference it in multiple tasks like I can for built-in matchers, eg. `"problemMatcher": "$my-matcher"`.
feature-request,tasks
medium
Critical
372,531,592
pytorch
USE_OPENMP=OFF is ignored [Caffe2]
## ๐Ÿ› Bug I've tried submitting this bug report before, but it seems that github is playing funny today so it may be a duplicate. Since f0d8a36e709dbf6a4c3aed7faf0da2b113668fe7 the ```USE_OPENMP``` cmake flag is ignored and the libcaffe2.so is always linked against libgomp.so ## To Reproduce <pre><code> $ ldd libcaffe2.so linux-vdso.so.1 => (0x00007ffe90ac2000) librt.so.1 => /lib64/librt.so.1 (0x00007fdeb32f2000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fdeb30dc000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fdeb2ed8000) libm.so.6 => /lib64/libm.so.6 (0x00007fdeb2c54000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fdeb2a37000) libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007fdeb2731000) <b>libgomp.so.1 => /usr/lib64/libgomp.so.1 (0x00007fdeb251c000)</b> libc.so.6 => /lib64/libc.so.6 (0x00007fdeb2188000) /lib64/ld-linux-x86-64.so.2 (0x00007fdeb56b5000) </pre></code> ## Expected behavior The libcaffe2.so library should not depend on ```libgomp.so``` when building with ```-DUSE_OPENMP=OFF```. ## Environment - PyTorch/Caffe2 version: 6dd71947ea39b321dbc7c357122c2d685e973e7d - OS: Centos 6 x86_64 - Build command you used (if compiling from source): ```cmake ../.. -DCMAKE_INSTALL_PREFIX="$install_prefix" -DBUILD_PYTHON=OFF -DUSE_CUDA=OFF -DUSE_NATIVE_ARCH=OFF -DUSE_GFLAGS=OFF -DUSE_GLOG=OFF -DUSE_GLOO=OFF -DBUILD_SHARED_LIBS=ON -DBUILD_TEST=OFF -DBUILD_BINARY=OFF -DUSE_LMDB=OFF -DUSE_LEVELDB=OFF -DUSE_MPI=OFF -DUSE_OPENMP=OFF -DUSE_OPENCV=OFF -DBUILD_ATEN=OFF -DUSE_NNPACK=OFF -DCAFFE2_DISABLE_NUMA=1 -DUSE_MKLDNN=OFF -DUSE_IDEEP=OFF -DUSE_MKLML=OFF -DUSE_DISTRIBUTED=OFF```
caffe2
low
Critical
372,536,137
godot
Mono: Allow the C# solution file to be created when you click "Create & Edit"
I'm working on a tutorial, and I realized that it isn't really clear to a beginner that, on new projects, they need to create the solution file or add a script before that solution file is created. So I was wondering, as a small QoL change on new projects: why not create that solution file when you click "Create & Edit" on a fresh project?
enhancement,topic:editor,usability,topic:dotnet
low
Minor
372,566,551
go
cmd/compile: recognize rand.Intn() is bounded for BCE
### What version of Go are you using (`go version`)? go version go1.11.1 windows/amd64 ### What did you do? I implemented an array-shuffling function, using the math/rand.Intn() function. https://play.golang.org/p/tlTHcU4StTF (reduced example) ### What did you expect to see? I expected the compiler to not generate bounds checking code. ### What did you see instead? Instead, said code was generated. **_Note:_** Even if I change the code to https://play.golang.org/p/-65t27s9OD-, the bounds checking code is still generated (maybe a different issue?).
Performance,compiler/runtime
low
Minor
372,582,376
javascript-algorithms
Luhn algorithm
I would like to add the Luhn algorithm (https://en.wikipedia.org/wiki/Luhn_algorithm) to "src/algorithms/uncategorized/". Is that OK with you?
enhancement
medium
Minor
372,592,220
vscode
VS Code changes indention of current line
I am using VS Code 1.28.2 on Windows 10. I'm editing an HTML file indented with tabs. But I have HTML `<pre>` sections which I want indented with spaces. Let's say I have the following code: ```html <pre> foo //I press <Enter> at the end of this line </pre> ``` * When I open the file VS Code detects I use tabs. Fine. * When I hit `<Enter>` at the end of the line that says "foo", VS Code indents the next line. Fine. * VS Code isn't smart enough to realize that I want to indent the next line with spaces instead of a tab, because it already got into its head that I want to use tabs everywhere. It's not smart enough to realize that the previous line ` foo` was indented with spaces. It uses a tab for the next line. But OK, I can live with that, too. It's understandable. * **Bug:** What isn't acceptable is that VS Code _changes the indention of the existing line_ from spaces to tabs!! Yes, when I hit `<Enter>` after "foo", it not only indents the next line but _changes the indention that comes before `foo`_ from spaces to tabs. ```html <pre> foo //bug: the two spaces have been converted to a tab! //this new line also has a tab, which I don't want but can accept for now </pre> ``` VS Code has no business changing the existing line. I simply hit `<Enter>` to go to the next line. This is a huge problem, and it has likely corrupted a bunch of lines in this file because I didn't catch on to what it was doing. Sure, we have auto-indent features to try to automatically insert indent on the _following_ line, but please don't mess with the line I already have. How can I turn this off?
bug,html,editor-autoindent,on-unit-test
low
Critical
372,632,018
rust
Const closure weirdness
The following compiles just fine: ```Rust #![feature(impl_trait_in_bindings)] fn main() { let func: impl Fn(&str) -> usize = |s| s.parse().unwrap(); } ``` However, this does not: ```Rust #![feature(impl_trait_in_bindings)] fn main() { const func: impl Fn(&str) -> usize = |s| s.parse().unwrap(); } ``` It gives the following error message: ``` error: non-defining existential type use in defining scope --> src/main.rs:4:42 | 4 | const func: impl Fn(&str) -> usize = |s| s.parse().unwrap(); | ^^^^^^^^^^^^^^^^^^^^^^ lifetime `` is part of concrete type but not used in parameter list of existential type error: aborting due to previous error error: Could not compile `playground`. ``` Why is this an error? The closure does not capture anything, so its lifetime is `'static`, and const requires `'static`.
A-lifetimes,A-closures,A-const-eval
low
Critical
372,640,814
go
cmd/cover: branching to a labeled for misses some coverage
See below for a simple program that reports statements as uncovered when they are covered. ### What version of Go are you using (`go version`)? go version go1.11 darwin/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? Mac OS 10.13.6 ### What did you do? Run "go test -cover" on the following code: demo.go: ```go package test func testit(x, y int) int { var i int if x == 0 { goto forloop } goto returnit forloop: // The problem also appears if for is changed to switch but not to if for i < y { i += 1 } return x + i returnit: return x * y } ``` demo_test.go ```go package test import ( "fmt" "testing" ) var testCases = []struct { x, y, ret int }{ {1, 1, 1}, {0, -1, 0}, {0, 3, 3}, } func TestDemo(t *testing.T) { for n, c := range testCases { t.Run(fmt.Sprint("Case#", n), func(t *testing.T) { if r := testit(c.x, c.y); r != c.ret { t.Errorf("testit(%d, %d) returned %d expected %d", c.x, c.y, r, c.ret) } }) } } ``` ### What did you expect to see? 100% coverage ### What did you see instead? ``` Chriss-MacBook-Pro:test godev$ go test -cover PASS coverage: 88.9% of statements ok test 0.005s ``` Running the html coverage tool shows the code "forloop: for i < y" as not covered. Changing the for to a switch does not change the result. However, changing the for to an if results in 100% coverage. The "goto" jumping over this code appears to be essential to the bug. This issue may be the same as #27015.
NeedsDecision,compiler/runtime
low
Critical
372,643,695
go
x/build/cmd/gopherbot: keeps repeating some actions without effect
I noticed this in the gopherbot logs recently, and it's possible to reproduce locally in dry-run mode. Gopherbot keeps on repeating some actions every iteration. These actions don't have an effect, and so gopherbot never stops repeating them. ``` 2018/10/22 07:44:15 got corpus update after 252.395017ms 2018/10/22 07:44:15 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135536 2018/10/22 07:44:15 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135536... 2018/10/22 07:44:15 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135537 2018/10/22 07:44:15 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135537... 2018/10/22 07:44:16 closing scratch CL https://golang.org/cl/71730 ... 2018/10/22 07:44:16 closing scratch CL https://golang.org/cl/71850 ... 2018/10/22 07:44:16 closing scratch CL https://golang.org/cl/72090 ... 2018/10/22 07:44:16 closing scratch CL https://golang.org/cl/72091 ... 2018/10/22 07:44:16 closing scratch CL https://golang.org/cl/72110 ... 2018/10/22 07:44:16 closing scratch CL https://golang.org/cl/72131 ... 2018/10/22 07:44:16 gopherbot ran in 1.335854405s ``` <details><summary>Reproduce Steps</summary><br> ``` $ go get -u golang.org/x/build/cmd/gopherbot $ go run golang.org/x/build/cmd/gopherbot -dry-run -daemon 2018/10/22 14:27:47 Loading data from log *maintner.netMutSource ... 2018/10/22 14:27:47 Downloading 39043 bytes of https://maintner.golang.org/logs/41 ... 2018/10/22 14:27:47 wrote /Users/dmitshur/Library/Caches/golang-maintner/0041.growing.mutlog 2018/10/22 14:27:55 Reloaded data from log *maintner.netMutSource. 2018/10/22 14:27:56 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135536 2018/10/22 14:27:56 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135536... 
2018/10/22 14:27:56 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135537 2018/10/22 14:27:56 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135537... 2018/10/22 14:27:56 [dry-run] would've closed scratch CL https://golang.org/cl/71730 ... 2018/10/22 14:27:56 [dry-run] would've closed scratch CL https://golang.org/cl/71850 ... 2018/10/22 14:27:56 [dry-run] would've closed scratch CL https://golang.org/cl/72090 ... 2018/10/22 14:27:56 [dry-run] would've closed scratch CL https://golang.org/cl/72091 ... 2018/10/22 14:27:56 [dry-run] would've closed scratch CL https://golang.org/cl/72110 ... 2018/10/22 14:27:56 [dry-run] would've closed scratch CL https://golang.org/cl/72131 ... 2018/10/22 14:27:56 gopherbot ran in 1.434313646s 2018/10/22 14:27:56 Updating data from log *maintner.netMutSource ... 2018/10/22 14:29:22 Downloading 1242 bytes of https://maintner.golang.org/logs/41 ... 2018/10/22 14:29:22 wrote /Users/dmitshur/Library/Caches/golang-maintner/0041.growing.mutlog 2018/10/22 14:29:22 Reloaded data from log *maintner.netMutSource. 2018/10/22 14:29:22 got corpus update after 1m25.647922592s 2018/10/22 14:29:23 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135536 2018/10/22 14:29:23 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135536... 2018/10/22 14:29:23 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135537 2018/10/22 14:29:23 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135537... 2018/10/22 14:29:23 [dry-run] would've closed scratch CL https://golang.org/cl/71730 ... 2018/10/22 14:29:23 [dry-run] would've closed scratch CL https://golang.org/cl/71850 ... 2018/10/22 14:29:23 [dry-run] would've closed scratch CL https://golang.org/cl/72090 ... 2018/10/22 14:29:23 [dry-run] would've closed scratch CL https://golang.org/cl/72091 ... 2018/10/22 14:29:23 [dry-run] would've closed scratch CL https://golang.org/cl/72110 ... 
2018/10/22 14:29:23 [dry-run] would've closed scratch CL https://golang.org/cl/72131 ... 2018/10/22 14:29:23 gopherbot ran in 860.347827ms 2018/10/22 14:29:23 Updating data from log *maintner.netMutSource ... 2018/10/22 14:29:23 Downloading 1317 bytes of https://maintner.golang.org/logs/41 ... 2018/10/22 14:29:23 wrote /Users/dmitshur/Library/Caches/golang-maintner/0041.growing.mutlog 2018/10/22 14:29:23 gerrit code.googlesource.com/gocloud: Ref {CLNumber:34491 Version:0} => 4702bd58135b78127c759a94658b7606e5445e46 2018/10/22 14:29:23 Reloaded data from log *maintner.netMutSource. 2018/10/22 14:29:23 got corpus update after 195.853575ms 2018/10/22 14:29:24 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135536 2018/10/22 14:29:24 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135536... 2018/10/22 14:29:24 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135537 2018/10/22 14:29:24 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135537... 2018/10/22 14:29:24 [dry-run] would've closed scratch CL https://golang.org/cl/71730 ... 2018/10/22 14:29:24 [dry-run] would've closed scratch CL https://golang.org/cl/71850 ... 2018/10/22 14:29:24 [dry-run] would've closed scratch CL https://golang.org/cl/72090 ... 2018/10/22 14:29:24 [dry-run] would've closed scratch CL https://golang.org/cl/72091 ... 2018/10/22 14:29:24 [dry-run] would've closed scratch CL https://golang.org/cl/72110 ... 2018/10/22 14:29:24 [dry-run] would've closed scratch CL https://golang.org/cl/72131 ... 2018/10/22 14:29:24 gopherbot ran in 726.996791ms 2018/10/22 14:29:24 Updating data from log *maintner.netMutSource ... 2018/10/22 14:29:41 Downloading 37 bytes of https://maintner.golang.org/logs/41 ... 2018/10/22 14:29:41 wrote /Users/dmitshur/Library/Caches/golang-maintner/0041.growing.mutlog 2018/10/22 14:29:41 Reloaded data from log *maintner.netMutSource. 
2018/10/22 14:29:41 got corpus update after 17.501948429s 2018/10/22 14:29:42 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135536 2018/10/22 14:29:42 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135536... 2018/10/22 14:29:42 No reviewers or cc: https://go-review.googlesource.com/c/gddo/+/135537 2018/10/22 14:29:42 Adding no-owners tag to change https://go-review.googlesource.com/c/gddo/+/135537... 2018/10/22 14:29:42 [dry-run] would've closed scratch CL https://golang.org/cl/71730 ... 2018/10/22 14:29:42 [dry-run] would've closed scratch CL https://golang.org/cl/71850 ... 2018/10/22 14:29:42 [dry-run] would've closed scratch CL https://golang.org/cl/72090 ... 2018/10/22 14:29:42 [dry-run] would've closed scratch CL https://golang.org/cl/72091 ... 2018/10/22 14:29:42 [dry-run] would've closed scratch CL https://golang.org/cl/72110 ... 2018/10/22 14:29:42 [dry-run] would've closed scratch CL https://golang.org/cl/72131 ... 2018/10/22 14:29:42 gopherbot ran in 723.348371ms 2018/10/22 14:29:42 Updating data from log *maintner.netMutSource ... ``` </details><br> It's pretty harmless, but would be nice to fix. The underlying issues causing this are: - [ ] #22635โ€“x/build/maintner: Gerrit CL deletions are not reflected in model - [ ] #30184โ€“x/build/maintner: GitHub issue becoming 404 is not reflected in model - [x] #28318โ€“x/build/maintner: reports incorrect Hashtags for some CLs Fixing those issues should resolve this issue. /cc @bradfitz @andybons
Builders,NeedsInvestigation,umbrella
low
Major
372,708,159
three.js
ArrayCamera rotation support (including OrbitControls.js)
It would be great to be able to rotate an ArrayCamera as you would a regular camera, i.e. either by accessing its `.rotation` property or by passing the camera into an orbit controls script. While this isn't particularly valuable for VR applications, any other type of application that leverages the optimization possible through ArrayCamera has to do a lot of workarounds to make rotation function properly.
Enhancement
low
Minor
372,716,467
TypeScript
Prototype property assignments don't work for js-native-types
_From @Yegorich555 on October 19, 2018 12:40_ - VSCode Version: 1.28.2 - OS Version: Window 10 x64 - All extensions disabled Steps to Reproduce: ``` Array.prototype.test = function() { return 'ok'; } var arr = []; arr.te - I want to see function 'test' in intellisense ``` ![image](https://user-images.githubusercontent.com/25006810/47218661-35cd7a00-d3b5-11e8-81cc-97439c61747f.png) _Copied from original issue: Microsoft/vscode#61315_
Bug,Awaiting More Feedback,Domain: JavaScript
low
Minor
372,718,558
go
go/parser: accept ...T as type in all parameter lists; leave it to go/types to complain
This could simplify the parser and also make it more error-tolerant in the presence of such errors. It would also be more consistent with what the compiler does and better match the spec (where the rules about uses of ... are in the prose and not the EBNF). Clean-up task.
NeedsFix
low
Critical
372,757,158
pytorch
Come up with a better strategy for noticing BC-breaking attribute additions to serializable classes
In https://github.com/pytorch/pytorch/pull/8367 @apaszke caught that an added field would break pickles of this class. This is much, much too subtle for us to rely solely on code review to catch instances of this. We should add some lint, or automated reminder, to check for this, or maybe another mechanism to "burn in" the minimum set of attributes, so that addition of an attribute causes a test failure. cc @ezyang @gchanan
module: bc-breaking,module: molly-guard,triaged
low
Critical
372,790,753
angular
animation for :decrement is not executing inside *ngFor
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> When trying to animate a list (using `div` elements and `*ngFor`) where items are moved up and down by one position, one at a time, the animation for `:decrement` is not executing, only the `:increment` transition is executing. After 2 rounds, the expected behavior is finally correct and both the `:decrement` and `:increment` transition are executed. ## Expected behavior <!-- Describe what the desired behavior would be. --> I would expect both the `:decrement` and `:increment` transition to be fired from the beginning since the variable to which the transition is attached is _decremented_ for one element, but _incremented_ for the other one. ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://stackblitz.com or similar (you can use this template as a starting point: https://stackblitz.com/fork/angular-gitter). --> https://stackblitz.com/edit/angular-ae6tq9 The gist of the code: ```typescript //... 
@Component({ selector: 'my-app', templateUrl: './app.component.html', animations: [ trigger('move', [ transition(':decrement', [style({ opacity: 0 }), animate('1s ease', style({ opacity: 1 }))]), transition(':increment', [style({ transform: 'scale(0.5)' }), animate('1s ease', style({ transform: 'scale(1)' }))]), ]) ], styleUrls: ['./app.component.css'] }) export class AppComponent { //... moveUp(item: string) { const index = this.items.indexOf(item); if (index > 0) { this.items.splice(index, 1); this.items.splice(index - 1, 0, item); } } moveDown(item: string) { const index = this.items.indexOf(item); if (index < this.items.length) { this.items.splice(index, 1); this.items.splice(index + 1, 0, item); } } } ``` ```html <div *ngFor="let item of items; let index = index;"> <div [@move]="index"> <span (click)="moveUp(item)">^</span>&nbsp;<span (click)="moveDown(item)">v</span> {{index}} : {{item}} </div> </div> ``` ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> Unless there is something wrong with my code, this bug is preventing animation on simple list reordering. ## Environment <pre><code> Angular version: X.Y.Z <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [x] Chrome (desktop) version 69.0.3497.100 - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [x] Firefox version 59.0.2 - [x] Safari (desktop) version 12.0 - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
type: bug/fix,area: animations,freq3: high,P3
low
Critical
372,823,540
TypeScript
incorrect leading(?) comment for variable declaration in loop statement
**TypeScript Version:** 3.2.0-dev.201xxxxx **Code** ```ts for (let i = 0; i <= 10; ++i) { let _a, _b; /// 123 } ``` **Expected behavior:** ```ts for (var i = 0; i <= 10; ++i) { var _a = void 0, _b = void 0; /// 123 } ``` **Actual behavior:** ```ts for (var i = 0; i <= 10; ++i) { var _a = void 0, _b = /// 123 void 0; /// 123 } ```
Bug,Help Wanted,Effort: Moderate,Domain: Comment Emit
low
Critical
372,846,221
pytorch
Installing pytorch from source on labs.cognitiveclass.ai
There are free linux-ppc64le Jupyter notebook environments (similar to Google Colab): 1) https://labs.cognitiveclass.ai/tools/jupyterlab/ (no GPU) 2) https://labs.cognitiveclass.ai/tools/jupyterlab-for-ibm-powerai/ (with GPU) They have a Terminal available, but "apt-get" does not work. I tried to install pytorch 1.x from the source, but failed. I was using instructions from here: https://developer.ibm.com/tutorials/install-pytorch-on-power/ Unfortunately some of these instructions don't help; in particular `apt-get install gfortran` doesn't work. I tried `conda install -c anaconda gfortran_linux-ppc64le -y` but it doesn't help either. Any ideas? ``` [ 83%] Linking CXX executable ../bin/cuda_rng_test [ 83%] Linking CXX executable ../bin/device_test [ 83%] Linking CXX executable ../bin/conv_op_cache_cudnn_test /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status caffe2/CMakeFiles/device_test.dir/build.make:95: recipe for target 'bin/device_test' failed make[2]: *** [bin/device_test] Error 1 CMakeFiles/Makefile2:2636: recipe for target
'caffe2/CMakeFiles/device_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/device_test.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... [ 83%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state_gpu.dir/python/pybind_state_dlpack.cc.o caffe2/CMakeFiles/cuda_rng_test.dir/build.make:97: recipe for target 'bin/cuda_rng_test' failed make[2]: *** [bin/cuda_rng_test] Error 1 CMakeFiles/Makefile2:2384: recipe for target 'caffe2/CMakeFiles/cuda_rng_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/cuda_rng_test.dir/all] Error 2 [ 83%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state_gpu.dir/python/pybind_state_nomni.cc.o [ 83%] Linking CXX executable ../bin/apply_test [ 83%] Linking CXX executable ../bin/net_async_tracing_test [ 83%] Linking CXX executable ../bin/cudnn_test /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status caffe2/CMakeFiles/conv_op_cache_cudnn_test.dir/build.make:97: recipe for target 'bin/conv_op_cache_cudnn_test' failed make[2]: *** [bin/conv_op_cache_cudnn_test] Error 1 CMakeFiles/Makefile2:2132: recipe for target 'caffe2/CMakeFiles/conv_op_cache_cudnn_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/conv_op_cache_cudnn_test.dir/all] Error 2 [ 83%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state_gpu.dir/python/pybind_state_registry.cc.o [ 83%] Linking CXX executable ../bin/conv_to_nnpack_transform_test /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' ... /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status caffe2/CMakeFiles/elementwise_op_gpu_test.dir/build.make:97: recipe for target 'bin/elementwise_op_gpu_test' failed make[2]: *** [bin/elementwise_op_gpu_test] Error 1 CMakeFiles/Makefile2:2048: recipe for target 'caffe2/CMakeFiles/elementwise_op_gpu_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/elementwise_op_gpu_test.dir/all] Error 2 [ 83%] Linking CXX executable ../bin/pattern_net_transform_test cc1: warning: command line option ‘-fvisibility-inlines-hidden’ is valid for C++/ObjC++ but not for C cc1: warning: command line option ‘-fvisibility-inlines-hidden’ is valid for C++/ObjC++ but not for C [ 84%] Linking CXX executable ../bin/blob_gpu_test /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status caffe2/CMakeFiles/pattern_net_transform_test.dir/build.make:95: recipe for
target 'bin/pattern_net_transform_test' failed make[2]: *** [bin/pattern_net_transform_test] Error 1 CMakeFiles/Makefile2:2468: recipe for target 'caffe2/CMakeFiles/pattern_net_transform_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/pattern_net_transform_test.dir/all] Error 2 cc1: warning: command line option ‘-fvisibility-inlines-hidden’ is valid for C++/ObjC++ but not for C cc1: warning: command line option ‘-fvisibility-inlines-hidden’ is valid for C++/ObjC++ but not for C /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status caffe2/CMakeFiles/blob_gpu_test.dir/build.make:97: recipe for target 'bin/blob_gpu_test' failed make[2]: *** [bin/blob_gpu_test] Error 1 CMakeFiles/Makefile2:2174: recipe for target 'caffe2/CMakeFiles/blob_gpu_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/blob_gpu_test.dir/all] Error 2 [ 84%] Linking CXX executable ../bin/cuda_optional_test /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status caffe2/CMakeFiles/cuda_optional_test.dir/build.make:785: recipe for target 'bin/cuda_optional_test' failed make[2]: *** [bin/cuda_optional_test] Error 1 CMakeFiles/Makefile2:2426:
recipe for target 'caffe2/CMakeFiles/cuda_optional_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/cuda_optional_test.dir/all] Error 2 [ 84%] Linking CXX executable ../bin/cuda_packedtensoraccessor_test /usr/bin/ld: warning: libgfortran.so.4, needed by /opt/conda/envs/py3.6/lib/libopenblas.so.0, not found (try using -rpath or -rpath-link) /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_concat_string@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_etime@GFORTRAN_7' /opt/conda/envs/py3.6/lib/libopenblas.so.0: undefined reference to `_gfortran_compare_string@GFORTRAN_7' collect2: error: ld returned 1 exit status caffe2/CMakeFiles/cuda_packedtensoraccessor_test.dir/build.make:794: recipe for target 'bin/cuda_packedtensoraccessor_test' failed make[2]: *** [bin/cuda_packedtensoraccessor_test] Error 1 CMakeFiles/Makefile2:2258: recipe for target 'caffe2/CMakeFiles/cuda_packedtensoraccessor_test.dir/all' failed make[1]: *** [caffe2/CMakeFiles/cuda_packedtensoraccessor_test.dir/all] Error 2 [ 84%] Linking CXX shared module python/caffe2_pybind11_state_gpu.cpython-37m-powerpc64le-linux-gnu.so [ 84%] Built target caffe2_pybind11_state_gpu Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack nccl caffe2 libshm c10d THD' ```
triaged,module: POWER
low
Critical
372,924,547
go
x/tools/cmd/go-contrib-init: requires scratch repo to be set up as x/scratch
In previous Go Contributor Workshops, I had attendees run the following: ``` $ cd $(go env GOPATH)/src/golang.org/x $ git clone https://go.googlesource.com/scratch $ cd scratch $ go-contrib-init -repo scratch All good. Happy hacking! ``` However, that's tedious and not particularly portable. For example, it doesn't work with environments that use a multi-path GOPATH. So, since @kevinburke graciously set up a memorable domain name for it, I've tried to update the instructions to be the following: ``` $ go get -d goscratch.club $ cd $(go list -f {{.Dir}} goscratch.club) $ go-contrib-init -repo scratch All good. Happy hacking! ``` However, that doesn't work, as `go-contrib-init` requires that all repos be checked out under `golang.org/x`: ``` $ pwd /home/mvdan/go/land/src/goscratch.club $ go-contrib-init -repo scratch The repo you want to work on is currently not on your system. Run "go get -d golang.org/x/scratch" to obtain this repo then go to the directory "/home/mvdan/go/land/src/golang.org/x/scratch" ``` It's been decided that `golang.org/x/scratch` should not exist (https://github.com/golang/go/issues/25921), so I think `go-contrib-init` should be taught that the `scratch` repo doesn't belong there. I think it should accept any directory for it under GOPATH. Or, once we convert the subrepos to modules, the GOPATH directory check should be removed entirely. /cc @dmitshur @bradfitz
NeedsDecision,Tools
low
Minor
372,977,021
pytorch
Fail to Run Caffe2 with Successful Build: This caffe2 python run does not have GPU support
## Issue description With the latest PyTorch code, I successfully built Caffe2. But when I tried the imports, it reported errors and exited. For `from caffe2.python import core`, it directly reported `Segmentation fault (core dumped)`. For `from caffe2.python import workspace`, it reported the following warnings that I have never encountered before: ``` WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode. Segmentation fault (core dumped) ``` Note that I have been using an older version (~May, 2018) for a while, and everything worked just fine. I updated the code today (Oct 10, 2018) with `git pull --recurse-submodules` and `git submodule update --init --recursive`, as I need the updated operators. I tried deleting the previous "build" directory and regenerating all files, but the problem still exists. --- There are similar issues such as: - https://github.com/pytorch/pytorch/issues/9484 - https://github.com/pytorch/pytorch/issues/9604 - https://github.com/facebookresearch/video-nonlocal-net/issues/6 - https://github.com/facebookresearch/Detectron/issues/370 - https://github.com/caffe2/caffe2/issues/2513 The error messages reported in these issues usually contain extra information, e.g. complaining that some *.so file is not found, but no such complaint appeared in my case. ## Code example I built Caffe2 with the following commands: ``` mkdir build && cd build cmake -DCMAKE_INSTALL_PREFIX=/home/<user>/lib/pytorch/caffe2 -DCUDNN_INCLUDE_DIR=/home/<user>/lib/cudnn-8.0-linux-x64-v7/include -DCUDNN_LIBRARY=/home/<user>/lib/cudnn-8.0-linux-x64-v7/lib64/libcudnn.so -DUSE_MPI=OFF -DUSE_FFMPEG=ON .. make -j32 make install -j32 ``` During the building process, everything looked normal. 
I set the environment variables with the following commands: ``` export PATH=<caffe2_install_dir>/bin:${PATH} export LD_LIBRARY_PATH=<caffe2_install_dir>/lib:${LD_LIBRARY_PATH} export PYTHONPATH=<pytorch_repo_dir>/build:${PYTHONPATH} ``` ## System Info - PyTorch or Caffe2: **Caffe2** - How you installed PyTorch (conda, pip, source): **source** - Build command you used (if compiling from source): *See above.* - OS: **Ubuntu 14.04** - PyTorch version: **(latest clone)** - Python version: **Python 2.7.15 :: Anaconda, Inc.** - CUDA/cuDNN version: **8.0/cudnn-8.0-linux-x64-v7** - GPU models and configuration: **(Titan Xp)** - GCC version (if compiling from source): **5.4.0** - CMake version: **3.11.3** - Versions of any other relevant libraries:
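Not part of the original report, but a minimal diagnostic sketch (assuming a Linux host): before importing `caffe2.python`, one can check whether the dynamic loader can resolve the shared libraries a compiled extension like `caffe2_pybind11_state` depends on, since a missing or wrong `LD_LIBRARY_PATH` entry is a common cause of import-time segfaults. The library names passed in below are hypothetical examples; adjust them to your build.

```python
import ctypes
import ctypes.util

def check_loadable(libnames):
    """Try to dlopen each library and report which ones resolve.

    Returns a dict mapping each requested name to True (loadable)
    or False (the loader could not find or open it).
    """
    results = {}
    for name in libnames:
        # find_library resolves e.g. "m" -> "libm.so.6"; fall back to the raw name.
        path = ctypes.util.find_library(name) or name
        try:
            ctypes.CDLL(path)
            results[name] = True
        except OSError:
            results[name] = False
    return results

# Hypothetical dependency list for a Caffe2 build; adjust to your setup.
print(check_loadable(["m", "caffe2", "cudart"]))
```

Any `False` entry points at a library that needs to be added to `LD_LIBRARY_PATH` (or installed) before the Python import can succeed.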
caffe2
low
Critical
373,076,560
TypeScript
Generic type union assignment via Pick throws TS2322
**TypeScript Version:** 3.1.3 and 3.2.0-dev.20181023 Works in Version 3.0.3 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** generic union assignment TS2322 setState **Code** ```ts interface A { a: string; } class Base<T> { prop: T; update<U extends keyof T>(props: Pick<T, U>) { /* like react setState */ } } class Test<T> extends Base<A & T> { test() { const str: string = "foo"; this.prop.a = str; //ok this.update({ a: str //err TS2322 }); } } // explicitly remove overloading properties type SaveMerge<T, R> = T & Pick<R, Exclude<keyof R, keyof T>>; class Test2<T> extends Base<SaveMerge<A, T>> { test() { const str: string = "foo"; this.prop.a = str; //ok this.update({ a: str //err TS2322 }); } } ``` **Expected behavior:** Compiles fine as in 3.0.3 **Actual behavior:** ``` test.ts:18:13 - error TS2322: Type 'string' is not assignable to type 'string & T["a"]'. Type 'string' is not assignable to type 'T["a"]'. 18 a: str //err ~ test.ts:3:5 3 a: string; ~ The expected type comes from property 'a' which is declared here on type 'Pick<A & T, "a">' test.ts:29:13 - error TS2322: Type 'string' is not assignable to type 'string & T["a"]'. Type 'string' is not assignable to type 'T["a"]'. 
29 a: str //err ~ test.ts:3:5 3 a: string; ~ The expected type comes from property 'a' which is declared here on type 'Pick<SaveMerge<A, T>, "a">' ``` **Playground Link:** http://www.typescriptlang.org/play/#src=interface%20A%20%7B%0D%0A%20%20%20%20a%3A%20string%3B%0D%0A%7D%0D%0A%0D%0Aclass%20Base%3CT%3E%20%7B%0D%0A%20%20%20%20prop%3A%20T%3B%0D%0A%0D%0A%20%20%20%20update%3CU%20extends%20keyof%20T%3E(props%3A%20Pick%3CT%2C%20U%3E)%20%7B%20%7D%0D%0A%7D%0D%0A%0D%0A%0D%0Aclass%20Test%3CT%3E%20extends%20Base%3CA%20%26%20T%3E%20%7B%0D%0A%20%20%20%20test()%20%7B%0D%0A%20%20%20%20%20%20%20%20const%20str%3A%20string%20%3D%20%22foo%22%3B%0D%0A%20%20%20%20%20%20%20%20this.prop.a%20%3D%20str%3B%20%2F%2Fok%0D%0A%20%20%20%20%20%20%20%20this.update(%7B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20a%3A%20str%20%2F%2Ferr%20TS2322%0D%0A%20%20%20%20%20%20%20%20%7D)%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0A%2F%2F%20explicitly%20remove%20overloading%20properties%0D%0Atype%20SaveMerge%3CT%2C%20R%3E%20%3D%20T%20%26%20Pick%3CR%2C%20Exclude%3Ckeyof%20R%2C%20keyof%20T%3E%3E%3B%0D%0Aclass%20Test2%3CT%3E%20extends%20Base%3CSaveMerge%3CA%2C%20T%3E%3E%20%7B%0D%0A%20%20%20%20test()%20%7B%0D%0A%20%20%20%20%20%20%20%20const%20str%3A%20string%20%3D%20%22foo%22%3B%0D%0A%20%20%20%20%20%20%20%20this.prop.a%20%3D%20str%3B%20%2F%2Fok%0D%0A%20%20%20%20%20%20%20%20this.update(%7B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20a%3A%20str%20%2F%2Ferr%20TS2322%0D%0A%20%20%20%20%20%20%20%20%7D)%3B%0D%0A%20%20%20%20%7D%0D%0A%7D **Related Issues:** Maybe #26274
Bug,Domain: Conditional Types
low
Critical
373,081,806
TypeScript
Quick info on JSX element should show inferred type arguments
**TypeScript Version:** 3.2.0-dev.20181023 **Search Terms:** **Code** ```ts class C<T> extends React.Component<T> {} const x = <C x={42} />; ``` **Expected behavior:** Quick info at the use of `C` should show `C<{ x: number }>`. **Actual behavior:** Just shows `class C<T>`.
Bug,Domain: Quick Info
low
Minor
373,086,320
flutter
Flutter Camera video size too big
I'm using an iPhone. I captured video with the camera plugin. The video length is about 15 seconds and the size is around 25 MB, no matter how many times I tried. I tried capturing video with the normal iPhone camera app and the size is around 8 MB, at most 10 MB. Is this normal behavior?
c: new feature,p: camera,package,team-ecosystem,P3,triaged-ecosystem
low
Major
373,141,831
TypeScript
Inconsistency between import fix and moduleNameResolver for baseUrl + paths behavior
**TypeScript Version:** 3.2.0-dev.20181023 **Code** ```ts /// <reference path="fourslash.ts" /> // @Filename: /src/a.ts ////export const a = 0; // @Filename: /src/dir/b.ts ////[|a|]; // @Filename: /tsconfig.json ////{ //// "compilerOptions": { //// "baseUrl": ".", //// "paths": { //// "*": ["src/types/*"] //// } //// } ////} goTo.file("/src/dir/b.ts"); verify.importFixAtPosition([ `import { a } from "src/a"; a` ]); ``` The import code fix will import from "src/a". But this will be a compile error. The function `tryLoadModuleUsingPaths` has some behavior that looks like it might be a bug: It returns `SearchResult<Resolved>` instead of just `Resolved`. Then if the `SearchResult` is not undefined (it usually isn't) we return its value (which may be undefined) -- thus we never get around to testing `baseUrl`. I'm not 100% certain that's a bug though -- it might be the correct behavior to ignore `baseUrl` when `paths` is set, in which case the import fix should be changed to not try importing from `baseUrl` when `paths` is set. CC @DanielRosenwasser @rbuckton for opinions on what the correct behavior is.
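Not from the issue — a toy Python model of the two candidate behaviors, to make the ambiguity concrete. File extensions and most real lookup details are elided; `files` is just a set of resolvable module paths, and the question modeled is whether a failed `paths` lookup falls back to `baseUrl` or not (the latter is what `tryLoadModuleUsingPaths` appears to implement).

```python
def resolve(module_name, files, base_url, paths, fall_back_to_base_url):
    """Toy model of tsconfig `paths`/`baseUrl` resolution (not the real
    compiler algorithm). Tries each `paths` pattern first, then decides
    whether a miss falls through to a plain baseUrl lookup."""
    for pattern, targets in paths.items():
        prefix, star, suffix = pattern.partition("*")
        if star and module_name.startswith(prefix) and module_name.endswith(suffix):
            matched = module_name[len(prefix):len(module_name) - len(suffix)]
            for target in targets:
                candidate = base_url + "/" + target.replace("*", matched)
                if candidate in files:
                    return candidate
    if paths and not fall_back_to_base_url:
        return None  # `paths` swallows the failure; baseUrl is never consulted
    candidate = base_url + "/" + module_name
    return candidate if candidate in files else None

# The fourslash repro: baseUrl ".", paths {"*": ["src/types/*"]}, importing "src/a".
files = {"./src/a"}
no_fallback = resolve("src/a", files, ".", {"*": ["src/types/*"]}, False)
with_fallback = resolve("src/a", files, ".", {"*": ["src/types/*"]}, True)
```

Under the no-fallback behavior the import fails (`None`), so an import fix that emits `import { a } from "src/a"` produces a compile error — exactly the inconsistency described above.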
Bug
low
Critical
373,155,290
go
cmd/cgo: mark cgo generated wrappers hidden
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version devel +c2a8d5972f Sat Sep 22 10:58:54 2018 +0200 linux/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? GOARCH="amd64" GOBIN="" GOCACHE="/home/elias/.cache/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/elias/dev/go" GOPROXY="" GORACE="" GOROOT="/home/elias/dev/go-release" GOTMPDIR="" GOTOOLDIR="/home/elias/dev/go-release/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build186016377=/tmp/go-build -gno-record-gcc-switches" ### What did you do? ``` $ cat test.go package main /* __attribute__ ((visibility ("hidden"))) void some_c_func() { } */ import "C" func main() { C.some_c_func() } $ go build test.go $ objdump -t test|grep some_c_func ``` ### What did you expect to see? All cgo wrappers for `some_c_func` marked local. ### What did you see instead? ``` 00000000006bd160 l O .data 0000000000000008 main._cgo_1622f853b35c_Cfunc_some_c_func 000000000044fb20 l F .text 0000000000000052 main._Cfunc_some_c_func 000000000044fc10 l F .text 0000000000000001 some_c_func 000000000044fc20 g F .text 0000000000000001 _cgo_1622f853b35c_Cfunc_some_c_func ``` The `_cgo_1622f853b35c_Cfunc_some_c_func` function is global, not local as I would expect. I would expect it to be local even if `some_c_func` itself is not.
help wanted,NeedsInvestigation,compiler/runtime
low
Critical
373,193,653
go
proposal: context/v2: update context package
This issue is intended to cover ideas about updating the context package for Go 2. - The current context package leads to stuttering in declarations: `ctx context.Context`. - The current `Context.WithValue` function accepts values of any type, which is easy to misuse by passing, say, a string rather than a value of some package-local type. - The name `Context` is confusing to some people, since the main use of contexts is cancelation of goroutines. - Context values are passed everywhere explicitly, which troubles some people. Some explicitness is clearly good, but can we make it simpler? See also #24050 and #20280.
v2,Proposal
high
Critical
373,207,775
react-native
TextInput with SecureEntry sometimes highlights yellow with "Strong Password" text, and becomes unusable
<!-- Requirements: please go through this checklist before opening a new issue --> - [X] Review the documentation: https://facebook.github.io/react-native - [X] Search for existing issues: https://github.com/facebook/react-native/issues - [X] Use the latest React Native release: https://github.com/facebook/react-native/releases ## Environment React Native Environment Info: System: OS: macOS 10.14 CPU: x64 Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz Memory: 54.86 MB / 16.00 GB Shell: 5.3 - /bin/zsh Binaries: Node: 8.12.0 - ~/.nvm/versions/node/v8.12.0/bin/node Yarn: 1.9.4 - ~/.yarn/bin/yarn npm: 6.4.1 - ~/.nvm/versions/node/v8.12.0/bin/npm Watchman: 4.9.0 - /usr/local/bin/watchman SDKs: iOS SDK: Platforms: iOS 12.0, macOS 10.14, tvOS 12.0, watchOS 5.0 IDEs: Xcode: 10.0/10A255 - /usr/bin/xcodebuild npmPackages: react: 16.6.0-alpha.8af6728 => 16.6.0-alpha.8af6728 react-native: ^0.57.3 => 0.57.3 npmGlobalPackages: react-native-cli: 2.0.1 ## Description I haven't been able to identify why this occurs, but sometimes users experience a yellow cover over the textField, with a "Strong Password" text on the right, and something cut off on the left. I can still tap the textInput and "Type" but the value does not change within the textInput. (The red x, and warning text is my own) ![screen shot 2018-10-23 at 2 17 54 pm](https://user-images.githubusercontent.com/8305711/47391487-c87c5a80-d6ce-11e8-91e7-477808bce562.png) I've tried adding, and removing `textContentType="password"`, but this issue still persists.
Component: TextInput,Bug
high
Critical
373,213,171
go
x/net/html: Tokenizer could have a Position method
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? go version go1.10.4 linux/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? amd64/linux ### What did you do? I have a program that reads HTML streams and extracts sections of interest. These byte slices are currently copied out of tokenizer.Raw(). But I'd rather just store the offsets and lengths into the full stream, given that I still want to hang on to the stream anyway. This would reduce the amount of copying needed and make my application more efficient. This would be really easy if html.Tokenizer had a method Position() that would return the offset of the start of the current token (or possibly the start and end of the current token) from the start of the stream returned by its reader. ### What did you expect to see? A method Position() that tells me the offset of the start of the current token. And maybe also the end as the second return value. ### What did you see instead? There's no such method. And there's no such count either in the struct.
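Not Go and not the real `html.Tokenizer` — just a small Python sketch of the pattern this request enables: a tokenizer that hands out `(start, end)` offsets into the original stream, so the caller slices the buffer it already holds instead of receiving copies from the tokenizer.

```python
class OffsetTokenizer:
    """Toy whitespace tokenizer that reports byte offsets instead of copies."""

    def __init__(self, data):
        self.data = data
        self.pos = 0

    def next(self):
        """Return (start, end) of the next token, or None at end of stream."""
        data = self.data
        while self.pos < len(data) and data[self.pos:self.pos + 1].isspace():
            self.pos += 1
        if self.pos >= len(data):
            return None
        start = self.pos
        while self.pos < len(data) and not data[self.pos:self.pos + 1].isspace():
            self.pos += 1
        return start, self.pos

data = b"<p>hello</p> <p>world</p>"
tok = OffsetTokenizer(data)
# The caller keeps `data` anyway, so slicing by offset avoids extra copies.
tokens = [data[s:e] for s, e in iter(tok.next, None)]
```

A `Position()` method on the real tokenizer would let callers do the equivalent with the raw HTML stream.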
NeedsDecision
low
Minor
373,225,157
terminal
Some installed fonts are not loading
Windows build: Microsoft Windows [Version 10.0.16299.665] What I'm trying to do: Install a font on Windows and load it into the default terminal emulator for WSL ubuntu. What's wrong: Not all installed fonts are available. As one example, I have installed [GNU unifont](http://unifoundry.com/unifont/index.html) as shown here: ![image](https://user-images.githubusercontent.com/18600213/47393305-b6a5b200-d6e4-11e8-83ec-06ec49e03b7f.png) But I am unable to locate this font in the settings pane of the default terminal. ![image](https://user-images.githubusercontent.com/18600213/47393400-ef458b80-d6e4-11e8-8bb0-79af31a0bd8f.png) ![image](https://user-images.githubusercontent.com/18600213/47393423-02f0f200-d6e5-11e8-9633-d2d2a6850ed2.png) On the other hand, other terminal emulators have no problem locating all of my fonts. Here's the font selected in ConEmu's settings pane: ![image](https://user-images.githubusercontent.com/18600213/47393525-5c592100-d6e5-11e8-804b-f8e466c07eaf.png) This is not the only font I've had issues with. For some reason, fonts like [Ubuntu Mono](https://design.ubuntu.com/font/) show up just fine. But fonts like [GNU Unifont](http://unifoundry.com/unifont/index.html) or [Everson Mono](http://www.evertype.com/emono/) will not show up. Is there some hidden feature of these fonts which makes them incompatible with WSL ubuntu's default terminal? Or is this perhaps a defect? Any help with getting these fonts added would be greatly appreciated. Thank you. As a side note: I'd also appreciate it if anyone could educate me on what is the proper name for "WSL ubuntu's default terminal."
Issue-Question,Product-Conhost,Area-Rendering
medium
Major
373,230,435
TypeScript
JSDoc generated by infer-from-usage codefix should be sorted
```js /** * @param {number} y * @return {number} * */ function f(x,y,z) { return x * y * z } ``` **Expected behavior:** Infer from usage sorts JSDoc after adding new inferences: ```js /** * @param {number} x * @param {number} y * @param {number} z * @return {number} */ function f(x,y,z) { return x * y * z } ``` **Actual behavior:** ```js /** * @param {number} y * @return {number} * @param {number} x * @param {number} z */ function f(x,y,z) { return x * y * z } ``` Note that #27978 doesn't fix this problem.
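The expected behavior above can be illustrated with a small sketch — not the codefix's actual implementation, just the ordering it should produce: `@param` tags are reordered to match the function's parameter list, and other tags (like `@return`) follow them.

```python
import re

def sort_jsdoc_params(jsdoc_lines, param_order):
    """Reorder @param lines to match the function's parameter order,
    keeping non-@param tags (e.g. @return) after the parameters."""
    params, others = {}, []
    for line in jsdoc_lines:
        m = re.search(r"@param\s+\{[^}]*\}\s+(\w+)", line)
        if m:
            params[m.group(1)] = line
        else:
            others.append(line)
    ordered = [params[name] for name in param_order if name in params]
    return ordered + others

# The example from the issue: inference added x and z after y and @return.
lines = [" * @param {number} y", " * @return {number}",
         " * @param {number} x", " * @param {number} z"]
sorted_lines = sort_jsdoc_params(lines, ["x", "y", "z"])
```

Here `sorted_lines` puts x, y, z in declaration order with `@return` last — the layout the codefix should emit.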
Bug,Domain: JavaScript
low
Minor
373,231,550
react
Update release script to handle alpha react-reconciler deps
This commit 1a57dc6 broke it
Component: Build Infrastructure,React Core Team
medium
Minor
373,242,357
pytorch
Improved performance for torch.multinomial with small batches
## ๐Ÿš€ Feature `torch.multinomial` (with replacement) has very poor performance with small batches but large numbers of samples or categories. In the regime of batch size N=1, categories C=100,000 and samples S=10,000, `numpy.choice` (running on CPU) is up to 4x faster than `torch.multinomial` running on a P100 GPU: ![vary_c](https://user-images.githubusercontent.com/2718714/47396316-0da88880-d6df-11e8-8d83-1d2282975c8e.png) ![vary_s](https://user-images.githubusercontent.com/2718714/47396317-113c0f80-d6df-11e8-8d1b-6d3f1a518be6.png) ![vary_n](https://user-images.githubusercontent.com/2718714/47396319-1305d300-d6df-11e8-8eb0-f0c6aaaa46f8.png) (Full benchmarking script available here: https://github.com/jcjohnson/pytorch-multinomial-benchmark) These benchmarks suggests that `torch.multinomial` does a good job exploiting parallelism over the batch dimension, but not over the category or sample dimensions; I feel that this could be improved to yield significantly better performance in small-batch scenarios. (There also appear to be performance issues when sampling without replacement https://github.com/pytorch/pytorch/issues/11931) ## Motivation After some benchmarking I found that a project I'm working on is bottlenecked by sampling in small-batch, large-sample, large-category regimes; based on above benchmarks I'm forced to perform the sampling on CPU with numpy for the best performance; this both feels wrong and still causes sampling to be the bottleneck. ## Pitch I'm not terribly familiar with details of the current CUDA backend, but when sampling with replacement it should be possible to exploit parallelism over the sample dimension to achieve significantly improved performance over numpy.
module: performance,module: cpu,triaged
low
Major