Columns:
id: int64 (range 393k to 2.82B)
repo: string (68 classes)
title: string (1 to 936 characters)
body: string (0 to 256k characters, nullable)
labels: string (2 to 508 characters)
priority: string (3 classes)
severity: string (3 classes)
290,242,300
vue
Functional single file component with components option.
### Version 2.5.13 ### Reproduction link NG pattern (functional) https://codesandbox.io/s/004vv2onw0 OK pattern (no functional) https://codesandbox.io/s/q9k5q8qq56 ### Steps to reproduce I found that the `components` option can't be used in a `functional` single file component. ```html <template functional> <div> <some-children /> </div> </template> <script> import SomeChildren from "./SomeChildren" export default { components: { SomeChildren } } </script> ``` This produces an `Unknown custom element` warning. ### What is expected? The `Unknown custom element` warning should not occur and the child component should be usable. ### What is actually happening? The `Unknown custom element` warning occurs. --- As a workaround, the warning does not occur when the child component is registered globally with `Vue.component`. ```js import Vue from "vue" import SomeChildren from "./SomeChildren" Vue.component("some-children", SomeChildren); export default {} // can use <some-children /> ``` <!-- generated by vue-issues. DO NOT REMOVE -->
feature request,improvement,has PR
high
Critical
290,268,889
create-react-app
Resolve symlinks using Node resolution mechanism
Copying my comment from #1107 since that issue is pretty crowded and I want it to be here instead. ---- Having read some discussions in https://github.com/nodejs/node/pull/6537 and https://github.com/isaacs/node6-module-system-change I think we're actually already being inconsistent in how we handle symlinks. Let's forget about the issue about compiling source code for a second. I feel like that can be solved by https://github.com/facebookincubator/create-react-app/issues/1333 and thus is not the most important one here. The important part here is that the resolution should match Node algorithm so that our webpack setup matches our test setup. I thought that was the case, but I was wrong. Consider this structure: ``` my-comp index.js // already compiled package.json // react is a peer dependency my-app node_modules my-comp // symlink to my-comp folder ``` It turns out that if `my-comp` doesn't declare a dependency on `react`, it *can find React in the browser builds but not in tests*. <img width="696" alt="screen shot 2018-01-21 at 12 25 54" src="https://user-images.githubusercontent.com/810438/35194148-462b1d9c-fea6-11e7-83da-8beda0655109.png"> <img width="1082" alt="screen shot 2018-01-21 at 12 26 06" src="https://user-images.githubusercontent.com/810438/35194149-46498f66-fea6-11e7-813c-ac8f71bfe48f.png"> We introduced this regression in https://github.com/facebookincubator/create-react-app/pull/1359. We probably missed it because if `my-comp` *does* explicitly depend on React (e.g. as a `devDependency`), the test passes (and in the browser we just get two Reacts).
issue: bug,issue: needs investigation
medium
Major
290,295,764
rust
#[link(name=…)] makes life hard for non-cargo build systems
I'm not sure how that feature is handled by rustc (and cargo?). My Nix-based build system (carnix) cannot build packages using it, because it has no way to tell that a library declared in this way has to be linked (I tried just grepping over the source files, but that fails because some modules might not get compiled, depending on the platform and features). I'd like at least one of the following: - help on what options I need to pass to rustc in order to compile these crates anyway, - or a new command in cargo and/or in rustc to output flags for these files (such as `cargo:rustc-flags=…`) without compiling anything, that can then be passed to rustc to actually compile the crate.
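For context, a minimal illustrative sketch of the pattern in question (the zlib symbol below is just an example, not taken from any particular crate): a crate can request a native library directly through a `#[link]` attribute, while under cargo the same effect often comes from a build script printing `cargo:rustc-link-lib=…`, which cargo then turns into an `-l` flag on its rustc invocation. Either way, an external build system still has to discover the native dependency and supply the library (and any `-L` search paths) itself, which is the information the reporter wants to extract without a full build.

```rust
// Illustrative only: declares a dependency on the system zlib via #[link]
// and calls one of its functions.
use std::os::raw::c_ulong;

#[link(name = "z")]
extern "C" {
    fn compressBound(source_len: c_ulong) -> c_ulong;
}

fn main() {
    // compressBound only computes an upper bound, so the call itself is benign.
    let bound = unsafe { compressBound(100) };
    println!("compressBound(100) = {}", bound);
}
```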
A-linkage,T-dev-tools,C-feature-request
low
Minor
290,295,881
angular
Style binding handles the CSS `calc()` function improperly
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> `style.paddingLeft="calc(10px + 1px * 1)` does nothing. (but `style.paddingLeft="calc(10px)` does) `[ngStyle]="{'padding-left': 'calc(10px + 1px * 1)'}"` works as expected. ## Expected behavior <!-- Describe what the desired behavior would be. --> `style.paddingLeft="calc(10px + 1px * 1)` works as same as `[ngStyle]="{'padding-left': 'calc(10px + 1px * 1)'}"` ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> https://stackblitz.com/edit/angular-gde28w ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> ## Environment <pre><code> Angular version: 5.x.x <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [ ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
type: bug/fix,area: animations,freq2: medium,hotlist: google,P3
low
Critical
290,311,223
rust
./x.py build --on-fail=/bin/bash has no effect
Trying to look into #43982, I was running `./x.py build --jobs=4 --on-fail=/bin/bash`. Once I hit the ICE, the build exited without dropping me to a shell as I had expected. @nagisa wanted to try and repro this. Re-uploaded the [build log](https://github.com/rust-lang/rust/files/1650313/build.log) as a gist: https://gist.github.com/Mark-Simulacrum/f660624e0195d733da752649a6cf7538
T-bootstrap,C-bug
low
Major
290,315,349
TypeScript
Request: Allow abstract classes to implement MappedTypes that are instantiated with type parameters.
Ok, so the title is a mouthful, but here's an example of what we're trying to accomplish: ```ts interface DeferredProperty<T> { get(): T; } type Deferred<T> = { [P in keyof T]: DeferredProperty<T[P]> }; ``` Here we use mapped types to express the idea that an interface type might be wrapped with a new type that exposes the same properties as it, except as functions instead of data properties. So, for example: ```ts interface Person { age: number, name: string } var deferredPerson: Deferred<Person>; var age = deferredPerson.age.get(); interface Vehicle { numWheels: number, cost: number } var deferredVehicle: Deferred<Vehicle>; var cost = deferredVehicle.cost.get(); ``` We'd like to make a lot of our types deferrable in our system, so we create a simple abstract base class that will help us out by doing some of the work for us. In other words, we'd like to be able to write: ```ts abstract class BaseDeferred<T> implements Deferred<T> { //<-- note: currently not legal // some common stuff, including concrete methods // Maybe some abstract methods. // protected abstract whatever(): void; // etc. } ``` We could then do the following: ```ts class DeferredPerson extends BaseDeferred<Person> { } class DeferredVehicle extends BaseDeferred<Vehicle> { } ``` At this point TypeScript would say: "Hey, DeferredPerson doesn't properly implement ```BaseDeferred<Person>```, it is missing ```age: DeferredProperty<number>``` and ```name: DeferredProperty<string>```." (And likewise for DeferredVehicle.) However, this isn't currently allowed as we cannot say: ```abstract class BaseDeferred<T> implements Deferred<T>``` This is a somewhat understandable restriction. After all, how can the compiler actually validate that ```BaseDeferred<T>``` is implementing ```Deferred<T>``` when it cannot know (at this point) how the ```Deferred<T>``` lookup type will expand? While understandable, it would be nice if this restriction could potentially be lifted for abstract types. Because the type is abstract, we would like the check to be done only at the time the type is concretely derived from. So, for example, when someone wrote: ```ts class DeferredPerson extends BaseDeferred<Person> { // Now the compiler creates the full type signature for BaseDeferred<Person> and then checks DeferredPerson against it. } ``` -- The workaround today is to do the following: ```ts abstract class BaseDeferred<T> { // <-- note: no implements } class DeferredPerson extends BaseDeferred<Person> implements Deferred<Person> { } class DeferredVehicle extends BaseDeferred<Vehicle> implements Deferred<Vehicle> { } ``` The compiler now appropriately does the right checks. This is unpleasant though, as it's a very simple thing to miss. Because all subclasses must implement this type, we would very much like to push that requirement up to the base class and have the enforcement applied uniformly across all subtypes. -- Thanks much, and I hope everyone is doing great! We're having a blast with TS, especially (ab)using the type system to express some very interesting things. The more crazy stuff that can be expressed (especially around constraints and variadic types) the happier we are πŸ™‚
Suggestion,In Discussion
medium
Major
290,317,266
rust
Misleading error message when passing a reference to an FnOnce and a non-reference is expected
Compiling this code: ```rust struct Foo {} impl Foo { pub fn foo(&self) -> String { "foo".to_owned() } } fn main() { let foo = Foo{}; let closure = move || foo; call(&closure); } fn call<F: FnOnce() -> Foo>(f: F) { println!("{}", f().foo()) } ``` [gives me the following error](https://play.rust-lang.org/?gist=962c9bd74d972a21741a179447a75965&version=nightly): ``` error[E0525]: expected a closure that implements the `Fn` trait, but this closure only implements `FnOnce` --> src/main.rs:11:19 | 11 | let closure = move || foo; | ^^^^^^^^^^^ 12 | 13 | call(&closure); | ---- the requirement to implement `Fn` derives from here | note: closure is `FnOnce` because it moves the variable `foo` out of its environment --> src/main.rs:11:27 | 11 | let closure = move || foo; | ^^^ ``` `call` *does* expect a closure which implements `FnOnce`, not one which implements `Fn` as the error message suggests. Accordingly, the line "expected a closure that implements the `Fn` trait, but this closure only implements `FnOnce`" is very misleading. The actual problem here is that a reference to an `FnOnce` is being passed, rather than the `FnOnce` itself. It would be great if the error message reflected this fact.
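As the last paragraph says, the real problem is that a *reference* to the closure is passed while `call` takes its argument by value; a minimal sketch of the variant that compiles, with the same definitions as above but passing the closure by value:

```rust
struct Foo {}

impl Foo {
    pub fn foo(&self) -> String {
        "foo".to_owned()
    }
}

// Same signature as in the report: takes the closure by value.
fn call<F: FnOnce() -> Foo>(f: F) {
    println!("{}", f().foo())
}

fn main() {
    let foo = Foo {};
    let closure = move || foo;
    // Passing `closure` (not `&closure`) satisfies the FnOnce bound.
    call(closure);
}
```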
C-enhancement,A-diagnostics,T-compiler
low
Critical
290,321,007
flutter
Document that WillPopScope prevents swipe to go back on MaterialPageRoute
## Steps to Reproduce 1. Add a `MaterialPageRoute` to your app's page stack ```dart return new MaterialPageRoute<Null>( settings: settings, builder: (BuildContext context) => new StoryPage(itemId: itemId), ); ``` 1. Wrap that page's `Scaffold` in a `WillPopScope` widget ```dart return new WillPopScope( onWillPop: () async { return true; }, child: new Scaffold( … ), ); ``` 1. Try to swipe to go back on iOS. The page won't swipe. This also occurs if you replace the `MaterialPageRoute` with a `CupertinoPageRoute`. I think this might be an unintended side-affect of fixing #8269. For a live example of this, [see my repo here](https://github.com/dudeofawesome/hn_flutter/blob/03077e9c1681a6223b4f95e2b41f2a516e9ebb30/lib/pages/story.dart#L308). ## Flutter Doctor ``` [βœ“] Flutter (on Mac OS X 10.13.2 17C205, locale en-US, channel master) β€’ Flutter version 0.0.21-pre.284 at /Users/DudeOfAwesome/Library/Developer/flutter β€’ Framework revision 7cdfe6fa0e (2 days ago), 2018-01-20 00:36:00 -0800 β€’ Engine revision e45eb692b1 β€’ Tools Dart version 2.0.0-dev.16.0 β€’ Engine Dart version 2.0.0-edge.93d8c9fe2a2c22dc95ec85866af108cfab71ad06 [βœ“] Android toolchain - develop for Android devices (Android SDK 27.0.3) β€’ Android SDK at /Users/DudeOfAwesome/Library/Android/sdk β€’ Android NDK at /Users/DudeOfAwesome/Library/Android/sdk/ndk-bundle β€’ Platform android-27, build-tools 27.0.3 β€’ ANDROID_HOME = /Users/DudeOfAwesome/Library/Android/sdk β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b08) [βœ“] iOS toolchain - develop for iOS devices (Xcode 9.2) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 9.2, Build version 9C40b β€’ ios-deploy 1.9.2 β€’ CocoaPods version 1.3.1 [βœ“] Android Studio (version 3.0) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b08) [βœ“] IntelliJ IDEA Ultimate Edition (version 2017.2.5) β€’ Flutter plugin version 18.1 β€’ Dart plugin version 172.4343.25 [βœ“] Connected devices β€’ iPhone X β€’ C927CB75-4D41-426B-9B32-D859B2FC5FFD β€’ ios β€’ iOS 11.2 (simulator) ```
framework,customer: mulligan (g3),d: api docs,d: examples,customer: crowd,P2,team-framework,triaged-framework
low
Critical
290,325,348
TypeScript
No error when a namespace export is used before it is assigned
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.6.2 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** namespace decorator same class **Code** ```ts @C1.Decorator export class C1{ } export namespace C1 { export function Decorator<T>(target: T) { } } @C1.Decorator @C2.Decorator export class C2 { } export namespace C2 { export function Decorator<T>(target: T) { } } ``` **Expected behavior:** if this is not a feature, decorator should be defined at runtime, or it should provide some warning or error during compilation? **Actual behavior:** decorator from namespace C1 used on class C1 is undefined, decorator from C1 is defined when used on C2, decorator from C2 is undefined **Playground Link:** https://www.typescriptlang.org/play/index.html#src=%40C1.Decorator%0D%0Aexport%20class%20C1%7B%20%7D%0D%0Aexport%20namespace%20C1%20%7B%20%0D%0A%20%20export%20function%20Decorator%3CT%3E(target%3A%20T)%20%7B%20%7D%0D%0A%7D%0D%0A%40C1.Decorator%0D%0A%40C2.Decorator%0D%0Aexport%20class%20C2%20%7B%20%7D%0D%0A%0D%0Aexport%20namespace%20C2%20%7B%20%0D%0A%20%20export%20function%20Decorator%3CT%3E(target%3A%20T)%20%7B%20%7D%0D%0A%7D **Related Issues:** I did not find any
Bug,Help Wanted
low
Critical
290,332,549
rust
The "method references Self" object-safety rule is overly restrictive
Is there any reason why the following trait method can't be object safe? ```rust trait Trait { fn take_and_return_box(self: Box<Self>) -> Box<Self>; } ``` Intuitively, it would return `Box<T>` when called on a `Box<T>` where `T: Trait`, and `Box<Trait>` when called on a `Box<Trait>`. The compiler would do an unsize coercion on the `Box<T>` returned by the vtable method, combining it with the vtable of the `Box<Trait>` that was passed in. If I'm not mistaken, the object safety rule could be relaxed to allow methods that reference the `Self` type in the return value, as long as it's a type that meets certain `CoerceUnsized` requirements: - Let `ReturnType<Self>` be the return type of the method, e.g. `Box<Self>` - For a given type `T`, let `ReturnType<T>` be `ReturnType<Self>` with every occurrence of `Self` replaced with `T` - The trait method is object safe if the following conditions are met: - `ReturnType<Trait>: Sized` - for every type `T: Unsize<Trait>`, `ReturnType<T>: CoerceUnsized<Trait>`. Note: this is very similar to what I think the object-safety rules for [arbitrary_self_types (#44874)](https://github.com/rust-lang/rust/issues/44874) will look like
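For comparison, a sketch of what is already expressible today: erasing the return type to `Box<Trait>` keeps the method object safe, at the cost of concrete callers also getting a trait object back (the `Concrete` type here is purely illustrative).

```rust
trait Trait {
    // Object safe today, because the return type no longer mentions Self.
    fn take_and_return_box(self: Box<Self>) -> Box<dyn Trait>;
}

struct Concrete;

impl Trait for Concrete {
    fn take_and_return_box(self: Box<Self>) -> Box<dyn Trait> {
        // Box<Concrete> unsize-coerces to Box<dyn Trait> in return position.
        self
    }
}

fn main() {
    let obj: Box<dyn Trait> = Box::new(Concrete);
    let _obj = obj.take_and_return_box();
}
```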
A-trait-system,T-lang,C-feature-request,T-types,A-trait-objects
low
Major
290,333,959
flutter
`TapGestureRecognizer` leaked by `_RenderSlider` and `_RenderRangeSlider`
These classes use `TapGestureRecognizer` but do not dispose of it. - `RenderToggleable` - `_RenderCupertinoSwitch` - `_RenderSlider` Is this the intended implementation? A parent class of `TapGestureRecognizer`, `PrimaryPointerGestureRecognizer`, uses `Timer` and has a `dispose` method. `LongPressGestureRecognizer` is used similarly.
c: crash,framework,has reproducible steps,P2,found in release: 3.10,team-framework,triaged-framework
low
Major
290,501,833
vue-element-admin
Can you support Parcel (https://parceljs.org)?
Webpack is too complicated. Thanks!
discussion
low
Minor
290,529,809
rust
Tracking Issue for Incremental Compilation
Incremental compilation will soon be available in stable Rust (with version 1.24) but there's still lots of room for improvement. This issue will try to give an overview of specific areas for potential optimization. ## Compile Time The main goal for incremental compilation is to decrease compile times in the common case. Here are some of things that affect it: ### Caching Efficiency The biggest topic for incremental compilation is: How efficiently are we using our caches. It can be further subdivided into the following areas: #### Data Structure Stability Caching efficiency depends on how the data in the cache is represented. If data is likely to change frequently then the cache is likely to be invalidated frequently. One example is source location information: Source location information is likely to change. Add a comment somewhere and the source location of everything below the comment has changed. As a consequence, everything in the cache that contains source location information is likely in need of frequent invalidation. It would be preferable to factor data structures in a way that confines this volatility to only few kinds of cache entries. The following issues track concrete plans to improve the situation here: - [ ] Improve caching efficiency by handling spans in a more robust way. #47389 - [x] Turn translation-related attributes into a query. #47320 #### Object File Granularity and Partitioning The problem of object file granularity can be viewed as a variation of "Data Structure Stability" but it is so important for compile times that I want to list it separately. At the moment the compiler will put machine code that comes from items in the same source-level module into the same object file. This means that changing one function in a module will lead to all functions in that module to be re-compiled. Obviously this can lead to a large amount of redundant work. For full accuracy we could in theory have one object file per function - which indeed improves re-compile times a lot in many cases - but that would increase disk space requirements to an unacceptable level and makes compilation sessions with few cache hits much more expensive (TODO: insert numbers). And as long as we don't do incremental linking it might also make linking a bottleneck. The main goal here is to re-compile as little unchanged code as possible while keeping overhead small. This is a hard problem and some approaches are: - Keep using a fixed partitioning scheme but improve the partitioning algorithm - Implement an adaptive scheme that reacts to set of changes a user makes - It might even be a good idea to write a simulator that allows to test different schemes and then feed it with actual data generated by an instrumented compiler. Adaptive schemes would require us to re-think part of our testing and benchmarking infrastructure and writing a simulator is a big project of its own, so short term we should look into improved static partitioning schemes: - [ ] Take type parameters into account when assigning generic instances to CGUs. (TODO) - [ ] Do per-MonoItem dependency tracking in order to collect data about granularity fallout. (#48211) Another avenue for improvement of how we handle object files: - [ ] Allow for re-using object files that contain unused code. (#48212) #### Whole-Cache Invalidation Currently commandline arguments are tracked at a very coarse level: Any change to a commandline argument will completely invalidate the cache. 
The red-green tracking system could take care of making this finer grained but quite a lot of refactoring would need to happen in order to make sure that commandline arguments are *only* accessible via queries. Note that this currently completely prevents sharing a cache between `cargo check` and `cargo build`. #### Avoid Redundant Work The compiler is doing redundant work at the moment. Reducing it will also positively affect incremental compilation: - [ ] Linking is not done incrementally although at least Gold and MSVC would support it. (TODO) - [x] Instances of generic functions are duplicated for every crate that uses them. #47317 - [x] Closures are unnecessarily duplicated for generic instances. #46477 - [x] Add support for split-debuginfo on platforms that allow it. #34651 #### Querify More Logic, Cache More Queries We can only cache things that are represented as queries, and we can only profit from caching for things that are (transitively) cached. There are some obvious candidates for querification: - [x] Cache the specialization_graph query. #48987 - [x] Cache type_of and some other queries. #47455 - [x] Turn translation-related attributes into a query. #47320 - [x] Querify WF-checking so it can be cached. #46753 - [x] Cache check_match and use ensure() for coherence-related queries. #46881 - [x] Enable query result caching for many more queries. #46556 A more ambitious goal would be to querify name resolution and macro expansion. ### Framework Overhead Doing dependency tracking will invariably introduce some overhead. We should strive to keep this overhead low. The main areas of overhead are: - Building the dependency graph during compiler execution - Computing query result hashes and dependency node identifiers - Loading and storing the dependency graph from/to disk Another thing that falls into this category is: - Efficiency of loading and storing cache entries The following issues track individual improvements: - [ ] Load query result cache in background or use memory mapped file. (TODO) - [ ] Investigate sharing more data inside result cache (e.g. ty::Slice). (TODO) - [ ] Reduce amount of cache sanity checking for non-debug_assertions compilers. (TODO) - [ ] Investigate making the result cache updateable in-place. #48231 - [x] Use a struct-of-arrays instead of an array-of-structs for SerializedDepGraph. #47326 - [x] Speed up result hashing and DepNode computation by caching certain stable hashes. #47294 - [x] Optimize DepGraph::try_mark_green(). #47293 - [x] Speed up leb128 encoding and decoding for unsigned values. #46919 - [x] Use an array instead of a hashmap for storing result hashes. #46842 - [x] Precompute small hash for filenames to save some work. #46839 - [x] Speed up span hashing by caching expansion context hashes. #46562 - [x] Load dep-graph in the background. #46555 - [x] Don't encode Fingerprint values with leb128. #45875 - [x] Explore delayed read-edge deduplication or getting rid of it entirely. #45873 - [x] Maybe optimize case of dependency-less anonymous queries. #45408 - [x] Implement "eval-always" queries. #45238 We should continuously profile common use cases to find out where we can further reduce framework overhead. ## Disk Space Requirements Build directories of large C/C++ and Rust code bases can be *massive*, oftentimes many gigabytes. 
Since incremental compilation has to keep around the previous version of each object file for re-use, plus LLVM IR, plus the dependency graph and query result cache, build directories can up to triple in size when incremental compilation is turned on (depending on which crates are compiled incrementally). The best way to reduce cache size is to reduce the amount of translated code that we need to cache. Solving #47317 and #46477 would help here. MIR-only RLIBs (#38913), which are one way to solve #47317, might also obviate the need to cache LLVM IR at all. ## Runtime Performance of Incrementally Compiled Code Currently delivering good runtime performance for incrementally compiled code is only a side goal. If incrementally compiled code is "fast enough" we rather try to improve compile times. However, since ThinLTO also supports an incremental mode, we could provide a middle ground between "re-compile everything for good performance" and "re-compile incrementally and only get 50% performance". - [x] Make ThinLTO compatible with incremental compilation. (https://github.com/rust-lang/rust/pull/53673) --------------- If you have any further ideas or spot anything that I've missed, let me know in the comments below!
T-compiler,A-incr-comp,C-tracking-issue,S-tracking-impl-incomplete
medium
Critical
290,547,907
vscode
Support reading/writing chunks in remote fs
This is a follow-up from #32503, where `IFileService.updateContent()` now accepts an `ITextSnapshot` to prevent loading the entire buffer into memory. I left a TODO [here](https://github.com/Microsoft/vscode/blob/master/src/vs/workbench/services/files/electron-browser/remoteFileService.ts#L366) to use `ITextSnapshot` directly instead of the current `snapshotToString` fallback.
feature-request,api,remote,api-proposal
low
Minor
290,557,932
react
Fabric Todos
For my own notes here are some spill-overs from the Fabric renderer commit. - [ ] Update currentProps for updates in the commit phase. Needs a host effect to be marked and we need a hook to do host updates in the persistent mode. - [x] Actually use currentProps when extracting events in the component tree. - [ ] Resuming will need to be able to not reuse host nodes used by another thread. - [ ] Should always clone direct siblings of a changed node, in case they will relayout.
Type: Umbrella,React Core Team
medium
Minor
290,603,650
go
x/mobile: accessing field on nil on iOS raises exception instead of panicking (gomobile)
The spec says accessing a field (`x.foo`) should panic if `x` is `nil`, but it seems that in an iOS library built with gomobile, the result is an EXC_BAD_ACCESS exception being raised. Please answer these questions before submitting your issue. Thanks! #### What did you do? If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. Here's an example that works fine in a regular Go program but the equivalent compiled into an iOS app crashes: https://play.golang.org/p/KsIyVll7NE0. #### What did you expect to see? Accessing a field on nil panics, and is handled by a surrounding defer/recover block. #### What did you see instead? Accessing a field on nil raises a native exception, crashing the program unrecoverably. #### System details ``` go version go1.9.2 darwin/amd64 GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/verm/remix/amp" GORACE="" GOROOT="/usr/local/Cellar/go/1.9.2/libexec" GOTOOLDIR="/usr/local/Cellar/go/1.9.2/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/36/92jbhtpj41s6q3w4fcq2v01w0000gn/T/go-build431793073=/tmp/go-build -gno-record-gcc-switches -fno-common" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOROOT/bin/go version: go version go1.9.2 darwin/amd64 GOROOT/bin/go tool compile -V: compile version go1.9.2 uname -v: Darwin Kernel Version 16.7.0: Mon Nov 13 21:56:25 PST 2017; root:xnu-3789.72.11~1/RELEASE_X86_64 ProductName: Mac OS X ProductVersion: 10.12.6 BuildVersion: 16G1114 lldb --version: lldb-900.0.64 Swift-4.0 ```
NeedsInvestigation,mobile
low
Critical
290,617,023
rust
2x benchmark loss in rayon-hash from multiple codegen-units
I'm seeing a huge slowdown in [rayon-hash](https://github.com/rayon-rs/rayon-hash) benchmarks, resolved with `-Ccodegen-units=1`. ``` $ rustc -Vv rustc 1.25.0-nightly (97520ccb1 2018-01-21) binary: rustc commit-hash: 97520ccb101609af63f29919bb0a39115269c89e commit-date: 2018-01-21 host: x86_64-unknown-linux-gnu release: 1.25.0-nightly LLVM version: 4.0 $ cargo bench --bench set_sum Compiling [...] Finished release [optimized] target(s) in 5.51 secs Running target/release/deps/set_sum-833cf161cf760670 running 4 tests test rayon_set_sum_parallel ... bench: 2,295,348 ns/iter (+/- 152,025) test rayon_set_sum_serial ... bench: 7,730,830 ns/iter (+/- 171,552) test std_set_sum_parallel ... bench: 10,038,209 ns/iter (+/- 188,189) test std_set_sum_serial ... bench: 7,733,258 ns/iter (+/- 134,850) test result: ok. 0 passed; 0 failed; 0 ignored; 4 measured; 0 filtered out $ RUSTFLAGS=-Ccodegen-units=1 cargo bench --bench set_sum Compiling [...] Finished release [optimized] target(s) in 6.48 secs Running target/release/deps/set_sum-833cf161cf760670 running 4 tests test rayon_set_sum_parallel ... bench: 1,092,732 ns/iter (+/- 105,979) test rayon_set_sum_serial ... bench: 6,152,751 ns/iter (+/- 83,103) test std_set_sum_parallel ... bench: 8,957,618 ns/iter (+/- 132,791) test std_set_sum_serial ... bench: 6,144,775 ns/iter (+/- 75,377) test result: ok. 0 passed; 0 failed; 0 ignored; 4 measured; 0 filtered out ``` `rayon_set_sum_parallel` is the showcase for this crate, and it suffers the most from CGU. From profiling and disassembly, this seems to mostly be a lost inlining opportunity. In the slower build, the profile is split 35% `bridge_unindexed_producer_consumer`, 34% `Iterator::fold`, 28% `Sum::sum`, and the hot loop in the first looks like: ```asm 16.72 β”‚126d0: cmpq $0x0,(%r12,%rbx,8) 6.73 β”‚126d5: ↓ jne 126e1 <rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x201> 16.65 β”‚126d7: inc %rbx 0.00 β”‚126da: cmp %rbp,%rbx 7.20 β”‚126dd: ↑ jb 126d0 <rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x1f0> 0.05 β”‚126df: ↓ jmp 1272f <rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x24f> 26.93 β”‚126e1: mov 0x0(%r13,%rbx,4),%eax 4.26 β”‚126e6: movq $0x1,0x38(%rsp) 2.27 β”‚126ef: mov %rax,0x40(%rsp) 1.88 β”‚126f4: mov %r14,%rdi 4.58 β”‚126f7: β†’ callq 15b90 <<u64 as core::iter::traits::Sum>::sum> 0.68 β”‚126fc: movq $0x1,0x38(%rsp) 2.58 β”‚12705: mov %r15,0x40(%rsp) 0.62 β”‚1270a: movq $0x1,0x48(%rsp) 0.31 β”‚12713: mov %rax,0x50(%rsp) 0.49 β”‚12718: movb $0x0,0x58(%rsp) 2.50 β”‚1271d: xor %esi,%esi 0.41 β”‚1271f: mov %r14,%rdi 0.85 β”‚12722: β†’ callq 153f0 <<core::iter::Chain<A, B> as core::iter::iterator::Iterator>::fold> 1.30 β”‚12727: mov %rax,%r15 2.16 β”‚1272a: ↑ jmp 126d7 <rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x1f7> ``` With CGU=1, 96% of the profile is in `bridge_unindexed_producer_consumer`, with this hot loop: ```asm 2.28 β”‚1426d: test %rbx,%rbx β”‚14270: ↓ je 14296 <rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x146> 5.40 β”‚14272: mov (%rbx),%ebx 2.75 β”‚14274: add %rbx,%rax 1.47 β”‚14277: lea (%rdx,%rsi,4),%rbx 0.21 β”‚1427b: nopl 0x0(%rax,%rax,1) 29.56 β”‚14280: cmp %rdi,%rsi 0.04 β”‚14283: ↓ jae 14296 <rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x146> 2.87 β”‚14285: add $0x4,%rbx 20.22 β”‚14289: cmpq $0x0,(%rcx,%rsi,8) 1.48 β”‚1428e: lea 0x1(%rsi),%rsi 8.00 β”‚14292: ↑ je 14280 <rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x130> 25.25 β”‚14294: ↑ jmp 1426d 
<rayon::iter::plumbing::bridge_unindexed_producer_consumer+0x11d> ```
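Since the report attributes most of the regression to a lost inlining opportunity, one commonly suggested mitigation for this class of codegen-unit regression (sketched below with a made-up function, not rayon-hash's actual code) is an explicit `#[inline]` hint on the small hot function, which makes its body available for inlining in every codegen unit:

```rust
// Hypothetical stand-in for the hot per-bucket work; not from rayon-hash.
#[inline]
fn sum_bucket(bucket: &[u64]) -> u64 {
    bucket.iter().sum()
}

fn main() {
    let buckets = vec![vec![1u64, 2, 3], vec![4, 5, 6]];
    let total: u64 = buckets.iter().map(|b| sum_bucket(b)).sum();
    println!("{}", total);
}
```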
I-slow,C-enhancement,A-codegen,T-compiler,C-optimization
low
Critical
290,635,294
go
path/filepath: EvalSymlinks of container mapped directory returns an error
### What version of Go are you using (`go version`)? `go version go1.9.2 windows/amd64` and `go version go1.10beta2 windows/amd64`. ### Does this issue reproduce with the latest release? Yes, this is reproducable with go `1.10beta2` and `1.9.2` ### What operating system and processor architecture are you using (`go env`)? ``` set GOARCH=amd64 set GOBIN= set GOCACHE=C:\Users\ContainerAdministrator\AppData\Local\go-build set GOEXE=.exe set GOHOSTARCH=amd64 set GOHOSTOS=windows set GOOS=windows set GOPATH=C:\gopath set GORACE= set GOROOT=C:\go set GOTMPDIR= set GOTOOLDIR=C:\go\pkg\tool\windows_amd64 set GCCGO=gccgo set CC=gcc set CXX=g++ set CGO_ENABLED=1 set CGO_CFLAGS=-g -O2 set CGO_CPPFLAGS= set CGO_CXXFLAGS=-g -O2 set CGO_FFLAGS=-g -O2 set CGO_LDFLAGS=-g -O2 set PKG_CONFIG=pkg-config set GOGCCFLAGS=-m64 -mthreads -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=C:\Users\ContainerAdministrator\AppData\Local\Temp\go-build022882258 =/tmp/go-build -gno-record-gcc-switches ``` ### What did you do? Running `filepath.EvalSymlinks` against a bind mounted directory within a windows container (for example, `\\?\ContainerMappedDirectories\D48AFB19-F38F-4DAE-A698-4D718D506B15`) returns the following error: ``` PS C:\Users\Administrator> docker run --rm -v c:\:c:\host golang:1.10-rc-nanoserver-sac2016 go run c:\host\eval_symlinks.go panic: GetFileAttributesEx \ContainerMappedDirectories: The system cannot find the file specified. goroutine 1 [running]: main.main() c:/host/eval_symlinks.go:11 +0xe6 exit status 2 ``` `eval_symlinks.go` above is the following program: ```golang package main import ( "fmt" "path/filepath" ) func main() { path, err := filepath.EvalSymlinks("c:\\host") if err != nil { panic(err) } fmt.Printf("eval path: %s\n", path) } ``` Note that the symlink is valid. For example, I can `cd` to it: ``` PS C:\gopath> cmd /c dir c:\ Volume in drive C has no label. Volume Serial Number is 8E1D-692C Directory of c:\ 01/22/2018 01:34 PM <DIR> go 01/08/2018 10:34 AM <DIR> gopath 01/22/2018 01:34 PM <SYMLINKD> host [\\?\ContainerMappedDirectories\007C5A5F-50A3-4019-BCF1-1BA20F4745 11/20/2016 03:32 AM 1,894 License.txt 01/22/2018 01:34 PM <DIR> Program Files 07/16/2016 04:09 AM <DIR> Program Files (x86) 01/22/2018 01:34 PM <DIR> Users 01/22/2018 01:34 PM <DIR> Windows 1 File(s) 1,894 bytes 7 Dir(s) 21,256,376,320 bytes free PS C:\gopath> cd \\?\ContainerMappedDirectories\007C5A5F-50A3-4019-BCF1-1BA20F4745F4 PS Microsoft.PowerShell.Core\FileSystem::\\?\ContainerMappedDirectories\007C5A5F-50A3-4019-BCF1-1BA20F4745F4> ``` golang 1.9.2 returns the same error: ``` PS C:\Users\Administrator> docker run --rm -v c:\:c:\host golang:1.9.2-nanoserver-sac2016 go run c:\host\eval_symlinks.go panic: GetFileAttributesEx \ContainerMappedDirectories: The system cannot find the file specified. goroutine 1 [running]: main.main() c:/host/eval_symlinks.go:11 +0xfe exit status 2 ``` I see that EvalSymlinks on Windows was reimplemented recently (https://github.com/golang/go/commit/66c03d39f3aa65ec522c41e56c569391786539a7), maybe there was a nuance wrt the `\\?\` path style that was missed? ### What did you expect to see? I expected `filepath.EvalSymlinks` to return the original directory (`C:\host`) or the directory that was linked to (`\\?\ContainerMappedDirectories\007C5A5F-50A3-4019-BCF1-1BA20F4745`). ### What did you see instead? The error `GetFileAttributesEx \ContainerMappedDirectories: The system cannot find the file specified`.
OS-Windows,WaitingForInfo,NeedsInvestigation
medium
Critical
290,660,592
electron
webContents.startDrag() for dragging and dropping remote files out of Electron into local filesystem
* Electron version: 1.7.10 * Operating system: Mac OS X Sierra Hey guys! I love what you've done with Electron. Keep it up! πŸ’― I'm building a remote file explorer in Electron. I would like to have the ability to drag files *out* of Electron into the Mac OS X Finder. This can be achieved using `webContents.startDrag()` but this only works for **local files** that already exist. Since my file explorer manages remote files, I do not have a local file path to pass to webContents.startDrag() yet. Is there any way to implement this kind of remote file drag-drop behavior with Electron? I was thinking of listening to the 'drop' event, intercepting the path, and downloading the files to this path via Node.js code. However, Electron never seems to call any event after the webContents.startDrag() file has been successfully moved. Any ideas? This has been asked on StackOverflow as well with no solution as of today: https://stackoverflow.com/questions/43209509/is-it-possible-to-drag-a-remote-file-out-of-electron-app-onto-the-file-system?rq=1
enhancement :sparkles:,platform/macOS
medium
Major
290,664,812
flutter
Warn that flutter run/build must be run before building directly inside Xcode
It's currently easy to land on obscure error messages when the user opens the template-created project's Xcode project directly and then tries to build. Add a build phase script to check for the staleness or absence of `Generated.xcconfig` or some such.
tool,platform-mac,a: quality,P2,team-tool,triaged-tool
low
Critical
290,761,483
vue
Vue.extend mutates original object
### Version 2.5.13 ### Reproduction link [http://jsfiddle.net/vmvabzam/](http://jsfiddle.net/vmvabzam/) ### Steps to reproduce - run the fiddle and look at the code vs the html ### What is expected? `Foo.props` should still be an array after extending it ### What is actually happening? `Foo.props` are normalized after using Vue.extend on it <!-- generated by vue-issues. DO NOT REMOVE -->
discussion
medium
Minor
290,884,109
rust
The mutable borrow is released when matching on an Option<&mut Self> in a function, but not when the function is inlined
```rust #![feature(nll)] struct Node { value: char, children: Vec<Node>, } impl Node { fn new(value: char) -> Self { Self { value, children: Vec::new(), } } fn add_child(&mut self, value: char) -> &mut Self { self.children.push(Self::new(value)); self.children.last_mut().unwrap() } fn get_child(&mut self, value: char) -> Option<&mut Self> { self.children.iter_mut().find(|n| n.value == value) } fn add_word(&mut self, word: String) { let mut cursor = self; for c in word.chars() { // The extracted version of the function works // cursor = cursor.get_or_add(c); // But the inlined version of the function does not. cursor = match cursor.get_child(c) { Some(node) => node, None => cursor.add_child(c), }; } } fn get_or_add(&mut self, value: char) -> &mut Self { match self.get_child(value) { Some(node) => node, None => self.add_child(value), } } } fn main() {} ``` ``` error[E0499]: cannot borrow `*cursor` as mutable more than once at a time --> src/main.rs:34:25 | 32 | cursor = match cursor.get_child(c) { | ------ first mutable borrow occurs here 33 | Some(node) => node, 34 | None => cursor.add_child(c), | ^^^^^^ second mutable borrow occurs here ``` [Originally from Stack Overflow](https://stackoverflow.com/q/48395307/155423)
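A workaround that keeps the loop inlined under today's borrow checker is a double lookup: test whether the child exists first, then fetch it again, so no borrow from `get_child` is held across the arm that calls `add_child`. A sketch, reusing the report's `Node` shape trimmed to the relevant methods:

```rust
struct Node {
    value: char,
    children: Vec<Node>,
}

impl Node {
    fn new(value: char) -> Self {
        Self { value, children: Vec::new() }
    }

    fn add_child(&mut self, value: char) -> &mut Self {
        self.children.push(Self::new(value));
        self.children.last_mut().unwrap()
    }

    fn get_child(&mut self, value: char) -> Option<&mut Self> {
        self.children.iter_mut().find(|n| n.value == value)
    }

    fn add_word(&mut self, word: &str) {
        let mut cursor = self;
        for c in word.chars() {
            // The borrow taken for the is_some() test ends with the condition,
            // so each branch can re-borrow *cursor independently.
            if cursor.get_child(c).is_some() {
                cursor = cursor.get_child(c).unwrap();
            } else {
                cursor = cursor.add_child(c);
            }
        }
    }
}

fn main() {
    let mut root = Node::new('\0');
    root.add_word("cargo");
}
```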
T-compiler,A-NLL,C-bug,NLL-polonius
medium
Critical
290,891,110
flutter
Document and test a mechanism to require a certain Flutter version to build
Right now there is no way to specify the required Flutter version in the `pubspec.yaml`. One should be able to pin the build to a specific version (or commit) of Flutter so that the build is reproducible in the future. Furthermore, there is no way currently to ensure developers working on a shared codebase all use the same version of Flutter, making collaboration on bugfixing and debugging potentially impossible. tl;dr it would be nice if it were possible to specify the version of Flutter to use just like one could specify the Dart SDK version to use.
c: new feature,tool,P2,team-tool,triaged-tool
low
Critical
290,893,140
flutter
Provide an easy way to select the version of Flutter to use
This ties in with #14229 β€” it would be great if there was a way to select the version of Flutter to use from the `flutter` tool. Currently, to my understanding, the manual process to do so is: ```bash $ cd flutter $ git checkout [branch, tag or commit hash] $ cd ../my-app/ $ flutter packages get ``` (assuming both the `flutter` folder and the `my-app` folder live in the same directory) What would be nice would be to be able to use a command like `flutter switch-to [version]` (e.g., `flutter switch-to 0.0.20`). Ideally running `flutter run` or `flutter build` could do this automatically, but that poses the question of how it would work for people working on multiple projects using different versions of Flutter at the same time.
c: new feature,tool,P3,team-tool,triaged-tool
high
Critical
290,922,109
create-react-app
Knowing when dev compile is done
<!-- PLEASE READ THE FIRST SECTION :-) --> ### Is this a bug report? No ### How to tell when webpack dev compile is ready? I want to be able to run a second process after the dev server is up and running and the compile is done, e.g. run a mock api server. Normally the webpack compiler emits a 'done' event that you can hook into and start a new process or do something else. With CRA, the compiler object is hidden away in the `start` script. Is there any way of accessing this event without ejecting?
issue: proposal
low
Critical
290,939,932
flutter
Enable tree shaking of platform specific code
Flutter apps can conditionally render platform-specific UI using the APIs in [platform.dart](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/foundation/platform.dart). Currently these APIs depend on non-const state, resulting in this conditional code not being tree-shakeable. We should support making this const, at least in release builds.
framework,dependency: dart,c: proposal,P3,team-framework,triaged-framework
medium
Major
290,967,813
youtube-dl
Add Support for DataCamp Courses
- [x ] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.21** - [x ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [x ] Site support request (request for adding support for a new site) ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single Course: https://www.datacamp.com/courses/intro-to-python-for-data-science - Tracks: https://www.datacamp.com/tracks/data-analyst-with-python --- ### Description of your *issue*, suggested solution and other information Please add support for downloading courses from DataCamp.
site-support-request,account-needed
low
Critical
290,971,211
go
cmd/gofmt: unexpected formatting of multiline functions in struct literals
### What version of Go are you using (`go version`)? ``` go version go1.9.2 darwin/amd64 ``` ### What operating system and processor architecture are you using (`go env`)? ``` GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/drew/go" GORACE="" GOROOT="/usr/local/opt/go/libexec" GOTOOLDIR="/usr/local/opt/go/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/g3/0kqyss7j5zj0dj3gdkz2m9lw0000gn/T/go-build269266413=/tmp/go-build -gno-record-gcc-switches -fno-common" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ``` ### What did you do? Run `gofmt` on the following code: ```go package main type x struct { i int someFunc func() } func main() { _ = &x{ i: 1, someFunc: func() { _ = 1 _ = 2 }, } _ = &x{ i: 1, someFunc: func() {}, } } ``` ### What did you expect to see? ```go package main type x struct { i int someFunc func() } func main() { _ = &x{ i: 1, someFunc: func() { _ = 1 _ = 2 }, } _ = &x{ i: 1, someFunc: func() {}, } } ``` Notice the first struct literal, that the value for `i` is indented to match `someFunc` as it is in the second literal. ### What did you see instead? The file remains unchanged. This seems to only happen when a multiline function is declared inline.
NeedsInvestigation
low
Critical
290,990,714
youtube-dl
youtube-dl doesn't see the video file if the page contains a YouTube trailer
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.01.21*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.21** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [x] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```): ``` youtube-dl -v https://www.kinofondas.lt/filmas/laikinai-/ [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', 'https://www.kinofondas.lt/filmas/laikinai-/'] [debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252 [debug] youtube-dl version 2018.01.18 [debug] Python version 3.4.4 (CPython) - Windows-10-10.0.16299 [debug] exe versions: ffmpeg N-86950-g1bef008, ffprobe N-86950-g1bef008 [debug] Proxy map: {} [generic] laikinai-: Requesting header WARNING: Falling back on generic information extractor. [generic] laikinai-: Downloading webpage [generic] laikinai-: Extracting information [download] Downloading playlist: Laikinai [generic] playlist Laikinai: Collected 1 video ids (downloading 1 of them) [download] Downloading video 1 of 1 [youtube] XCm5U-fneMo: Downloading webpage [youtube] XCm5U-fneMo: Downloading video info webpage [youtube] XCm5U-fneMo: Extracting video information [debug] Default format spec: bestvideo+bestaudio/best WARNING: Requested formats are incompatible for merge and will be merged into mkv. 
[debug] Invoking downloader on 'https://r8---sn-uxv-8ovl.googlevideo.com/videoplayback?itag=136&pl=20&mime=video%2Fmp4&keepalive=yes&mm=31&mn=sn-uxv-8ovl&aitags=133%2C134%2C135%2C136%2C160%2C242%2C243%2C244%2C247%2C278&gir=yes&requiressl=yes&mt=1516740784&mv=m&ei=EaFnWreeMtHb7ATevpzABw&ms=au&ip=<MY_IP_EDITED>&key=yt6&lmt=1425717181198456&dur=64.833&expire=1516762481&clen=12092425&beids=%5B9466592%5D&id=o-AIrFBjioNpOUWgiP4XgLbtEzJEdpb9JfQvGv-S0as8k7&source=youtube&initcwndbps=873750&sparams=aitags%2Cclen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cexpire&ipbits=0&signature=53AC0D6CE479425418ACC6DFD053D239913A655D.89313904B6B52353C80B0C3CD6A7A986A002FF85&ratebypass=yes' [download] Destination: LAIKINAI trailer-XCm5U-fneMo.f136.mp4 [download] 100% of 11.53MiB in 00:18 [debug] Invoking downloader on 'https://r8---sn-uxv-8ovl.googlevideo.com/videoplayback?itag=251&pl=20&mime=audio%2Fwebm&keepalive=yes&mm=31&mn=sn-uxv-8ovl&gir=yes&requiressl=yes&mt=1516740784&mv=m&ei=EaFnWreeMtHb7ATevpzABw&ms=au&ip=<MY_IP_EDITED>&key=yt6&lmt=1432463668288276&dur=64.881&expire=1516762481&clen=963948&beids=%5B9466592%5D&id=o-AIrFBjioNpOUWgiP4XgLbtEzJEdpb9JfQvGv-S0as8k7&source=youtube&initcwndbps=873750&sparams=clen%2Cdur%2Cei%2Cgir%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Ckeepalive%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Crequiressl%2Csource%2Cexpire&ipbits=0&signature=A86794240E36291A9F1566749AB552479FBF50F5.82B89C05D2151EDC8E8B82121D6ECA8E7EAC8D37&ratebypass=yes' [download] Destination: LAIKINAI trailer-XCm5U-fneMo.f251.webm [download] 100% of 941.36KiB in 00:04 [ffmpeg] Merging formats into "LAIKINAI trailer-XCm5U-fneMo.mkv" [debug] ffmpeg command line: ffmpeg -y -i "file:LAIKINAI trailer-XCm5U-fneMo.f136.mp4" -i "file:LAIKINAI trailer-XCm5U-fneMo.f251.webm" -c copy -map "0:v:0" -map "1:a:0" "file:LAIKINAI trailer-XCm5U-fneMo.temp.mkv" Deleting original file LAIKINAI trailer-XCm5U-fneMo.f136.mp4 (pass -k to keep) Deleting original file LAIKINAI trailer-XCm5U-fneMo.f251.webm (pass -k to keep) [download] Finished downloading playlist: Laikinai ``` --- ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Single video: https://www.kinofondas.lt/filmas/laikinai-/ (click yellow play icon in the middle of the page) Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights. --- ### Description of your *issue*, suggested solution and other information e.g. this page has no YouTube trailer https://www.kinofondas.lt/filmas/filmas-8-minutes/ so youtube-dl downloads the movie the fine. Under --list-formats it lists one file with format mp4 and resolution unknown. Same scenario, but a page with YouTube trailer included e.g. at https://www.kinofondas.lt/filmas/laikinai-/ and it downloads only the trailer from YouTube. It's not a coincidence, I tried multiple other pages that has no trailer and they all work, but as soon as there a page with YouTube trailer, the movie file gets ignored and only the trailer is downloaded, it also sees only YouTube formats under --list-formats. tldr: somehow youtube trailer is blocking the video.
site-support-request
low
Critical
291,034,259
create-react-app
Maybe use execa?
See: * https://github.com/testem/testem/pull/1205 * https://github.com/sindresorhus/execa#why
issue: proposal
low
Minor
291,048,288
pytorch
[feature request] Support AVX-512F intrinsics to vectorize operations
PyTorch has written SSE, AVX and AVX2 [intrinsics](https://github.com/pytorch/pytorch/tree/master/aten/src/TH/vector) to vectorize operations on CPU. Now AVX-512 instruction sets are becoming more and more widely available on Intel CPUs. It is necessary to add AVX-512F intrinsics to get better performance.
feature,module: cpu,triaged
low
Major
291,073,610
puppeteer
Coverage: recording cumulative coverage at points in time.
Discussed this with @aslushnikov, but documenting here. Use case: while a page is loading, report coverage snapshots at various milestones. e.g. at `DOMContLoaded`, `load`, and `networkidle0`. This would allow users to measure the effectiveness of a lazy loading strategy, for example. My goal was to create a breakdown of CSS/JS coverage at given points in time with URL attribution: <img width="627" alt="screen shot 2018-01-24 at 1 58 48 pm" src="https://user-images.githubusercontent.com/238208/35312281-c66f91d0-010e-11e8-93a2-65ebb0d73f84.png"> But this is currently hard to do b/c: - it would be useful to provide an API call that calculates the used bytes and total bytes for a coverage object. Basically make this [common pattern](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#class-coverage) more convenient. - coverage is not cumulative, numbers go up and down between snapshots - `totalBytes` (denominator) in the screenshot doesn't represent the total file size. I'd expect all of them to be `39255`, the same va reported by the devtools: <img width="1126" alt="screen shot 2018-01-24 at 2 14 55 pm" src="https://user-images.githubusercontent.com/238208/35312715-fcbb68f2-0110-11e8-8cd8-dcdf5c64f8cc.png"> POC code: http://jsbin.com/qohunokalo/1/edit?js,output ```js // start coverage const dcl = page.waitForNavigation({waitUntil: 'domcontentloaded'}).then(async response => { const {jsCoverage, cssCoverage} = await stopCoverage(page); // print metrics by URL @ domcontentloaded await startCoverage(page); }); const load = page.waitForNavigation({waitUntil: 'load'}).then(async response => { const {jsCoverage, cssCoverage} = await stopCoverage(page); // print metrics by URL @ load await startCoverage(page); }); const networkidle0 = page.waitForNavigation({waitUntil: 'networkidle0'}).then(async response => { const {jsCoverage, cssCoverage} = await stopCoverage(page); // print metrics by URL @ networkidle0 }); await Promise.all([dcl, load, networkidle0]); ```
feature,chromium
low
Minor
291,189,805
TypeScript
Keyword to force overriding methods to call super
Today I faced the following code: ```ts class A { // silly warning comment: if you override this, don't forget to call the super method to avoid memory leaks onExit() { // do some important cleaning stuff } } class B extends A { onExit() { super.onExit(); // good } } class C extends A { onExit() { // forgot to call super.onExit = memory leaks } } ``` The problem is that, unlike a constructor, there is no way to force a method overriding another one to call the parent "super" function. I wish we had a "concrete"* keyword to let the user know they must call the super method. ```ts class A { concrete onExit() { // do some cleaning stuff } } class B extends A { onExit() { super.onExit(); // no error } } class C extends A { // error: Declaration of derived method must contain a 'super' call onExit() { } } ``` In another language, I could have used the ```final``` keyword to prevent overriding the method, but then… no overriding would be allowed at all. * "concrete": in opposition to "abstract" (for lack of a better name); other ideas: "important" or "mandatory"
Suggestion,Awaiting More Feedback
high
Critical
291,192,427
pytorch
[Feature Request] clip_grad_norm for sparse gradients
1. It would be great to have another common gradient clipping strategy, i.e. clip_grad_value in PyTorch. Compared to clip_grad_norm, which is sensitive to the total number of parameters in the model, clip_grad_value will be easier to use as tuning max_value will be more straightforward than tuning max_norm. Still, AFAIK clip_grad_norm is the recommended way to do gradient clipping since it preserves the direction of the gradients while clip_grad_value does not. 2. The current implementation of clip_grad_norm cannot handle sparse gradients. A common use case is when sparse=True in nn.Embedding layers. I will issue a PR if people feel this is worth adding. cc @vincentqb @aocsa
module: sparse,triaged
low
Minor
291,288,008
youtube-dl
Add ElTrece support
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like --- ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.01.21*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.21** ### Before submitting an *issue* make sure you have: - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [x] Site support request (request for adding support for a new site) - [ ] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue* --- ### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows: ``` [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v', u'https://www.eltrecetv.com.ar/programas/simona/capitulos-completos/capitulo-2_099427'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2018.01.21 [debug] Python version 2.7.13 (CPython) - Linux-4.4.104-18.44-default-x86_64-with-SuSE-42.2-x86_64 [debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4 [debug] Proxy map: {} [generic] capitulo-2_099427: Requesting header WARNING: Falling back on generic information extractor. 
[generic] capitulo-2_099427: Downloading webpage [generic] capitulo-2_099427: Extracting information ERROR: Unsupported URL: https://www.eltrecetv.com.ar/programas/simona/capitulos-completos/capitulo-2_099427 Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2192, in _real_extract doc = compat_etree_fromstring(webpage.encode('utf-8')) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2541, in compat_etree_fromstring doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory))) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2530, in _XML parser.feed(text) File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1653, in feed self._raiseerror(v) File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1517, in _raiseerror raise err ParseError: not well-formed (invalid token): line 4, column 124 Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 784, in extract_info ie_result = ie.extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 438, in extract ie_result = self._real_extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 3108, in _real_extract raise UnsupportedError(url) UnsupportedError: Unsupported URL: https://www.eltrecetv.com.ar/programas/simona/capitulos-completos/capitulo-2_099427 ``` --- ### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**): - Programs index: https://www.eltrecetv.com.ar/capitulos-completos - Episodes from a single program: https://www.eltrecetv.com.ar/las-estrellas/capitulos-completos - Episode: https://www.eltrecetv.com.ar/programas/las-estrellas/capitulos-completos/capitulo-171_099424 - Episode link provided for embedding: http://www.eltrecetv.com.ar/node/99424/player.html --- ### Description of your *issue*, suggested solution and other information This is a TV station from Argentina, which publishes already aired shows in UHD.
site-support-request
low
Critical
291,391,051
rust
UdpSocket::set_ttl does not set IPv6 hoplimit field
It is not possible to set a hoplimit in an IPv6 packet sent over UDP. `UdpSocket::set_ttl` does not modify the hoplimit in the IPv6 packet and there seems to be no other way to do so. **I tried this code:** use std::net::{Ipv6Addr, SocketAddrV6, UdpSocket}; fn main() { let laddr = SocketAddrV6::new(Ipv6Addr::new(0,0,0,0,0,0,0,1), 50000, 0, 0); let destination = SocketAddrV6::new(Ipv6Addr::new(0,0,0,0,0,0,0,1), 50000, 0, 0); let socket = UdpSocket::bind(laddr).unwrap(); socket.connect(destination).unwrap(); socket.set_ttl(10).unwrap(); println!("{:?}", socket.ttl().unwrap()); let payload = [1,2,3,4,5,6,7,8]; socket.send(&payload).unwrap(); } **I expected to see this happen:** Since `UdpSocket` has no other function that would set the hoplimit in an IPv6 packet, I believe `set_ttl` should be doing this. I expected to capture a UDP packet with an IPv6 hoplimit field of 10. **Instead, this happened:** The output of the program is as expected: 10 I used `tcpdump -v -i lo udp port 50000` to capture the packet: tcpdump: listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes 23:43:02.146680 IP6 (flowlabel 0x487a1, hlim 64, next-header UDP (17) payload length: 16) localhost.localdomain.50000 > localhost.localdomain.50000: [bad udp cksum 0x0023 -> 0x6917!] UDP, length 8 The hoplimit field of the packet is still set to the default value of 64. ## Meta `rustc --version --verbose`: rustc 1.23.0 binary: rustc commit-hash: unknown commit-date: unknown host: x86_64-unknown-linux-gnu release: 1.23.0 LLVM version: 4.0 <!-- TRIAGEBOT_START --> <!-- TRIAGEBOT_ASSIGN_START --> <!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"kckeiks"}$$TRIAGEBOT_ASSIGN_DATA_END --> <!-- TRIAGEBOT_ASSIGN_END --> <!-- TRIAGEBOT_END -->
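Until std grows an API for this, one possible workaround (a sketch only, Unix-specific, and assuming the `libc` crate as an extra dependency) is to set `IPV6_UNICAST_HOPS` directly on the raw file descriptor:

```rust
use std::net::UdpSocket;
use std::os::unix::io::AsRawFd;

// Requires `libc` in Cargo.toml (and `extern crate libc;` on pre-2018 editions).
fn set_hop_limit(socket: &UdpSocket, hops: i32) -> std::io::Result<()> {
    // Set the IPv6 unicast hop limit via setsockopt, which std's set_ttl does not touch.
    let ret = unsafe {
        libc::setsockopt(
            socket.as_raw_fd(),
            libc::IPPROTO_IPV6,
            libc::IPV6_UNICAST_HOPS,
            &hops as *const i32 as *const libc::c_void,
            std::mem::size_of::<i32>() as libc::socklen_t,
        )
    };
    if ret == 0 {
        Ok(())
    } else {
        Err(std::io::Error::last_os_error())
    }
}
```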
C-enhancement,T-libs-api
low
Minor
291,415,161
godot
Indent dictionary key/value entries in TSCN format
**Godot version:** godot3 tip **Issue description:** I have two suggestions for making tscn files easier to parse in third party scripting languages. 1. In a tscn file, multiline values should be indented. The one that affects me the most is in the Animation sub resource. ``` tracks/0/keys = { "times": PoolRealArray( 0, 1), "transitions": PoolRealArray( 1, 1), "update": 0, "values": [ Color( 0.45112, 0.09609, 0.18779, 1 ), Color( 0.55547, 0.0429, 0.18366, 1 ) ] } ``` This would parse better if multi-line values were delineated in some way. There is no specific method in the INI format for doing so but my preferred option would be an indent like this: ``` tracks/0/keys = { "times": PoolRealArray( 0, 1), "transitions": PoolRealArray( 1, 1), "update": 0, "values": [ Color( 0.45112, 0.09609, 0.18779, 1 ), Color( 0.55547, 0.0429, 0.18366, 1 ) ] } ``` 2. In a tscn file, the AnimationPlayer node value "autoplay" is defined twice. It should probably only be defined once. **Steps to reproduce:** 1. Create a scene with an AnimationPlayer 2. Create an empty animation. 3. Set the animation to autoplay 4. Save the file. Inside the file the autoplay value is set twice. **Minimal reproduction project:** ``` [gd_scene load_steps=2 format=2] [sub_resource type="Animation" id=1] resource_name = "test1" length = 1.0 loop = true step = 0.1 [node name="AnimationPlayer" type="AnimationPlayer" index="0"] root_node = NodePath("..") autoplay = "idle" playback_process_mode = 1 playback_default_blend_time = 0.0 playback_speed = 1.0 anims/test1 = SubResource( 1 ) blend_times = [ ] autoplay = "idle" ``` 3. Finally, parsing a tscn when there's shader code stored inside could also be problematic but I figure if you're trying to do that then you kind of deserve problems.
enhancement,discussion,topic:core
low
Minor
291,548,947
youtube-dl
Login to Youtube + Google Prompt/Authenticator
I know it's possible to download Youtube videos that require you to be logged in by supplying your email and password; however, I have the Google Prompt enabled as well as an authenticator. Here's what I tried (and what failed):
- Supplying my email, password and authentication code, but I got "Invalid parameters" and at the same time my phone got the Google Prompt. I tried accepting the Google Prompt but it didn't help.
- Supplying my email and password and quickly accepting the Google Prompt as soon as it came up on my phone. Didn't work.
- Removing the Google Prompt option and relying on just the authenticator; same thing as my first attempt.

Any ideas?
bug
low
Critical
291,566,348
TypeScript
Make the type guarding on 'typeof' be transitive.
<!-- 🚨 STOP 🚨 𝗦𝗧𝗒𝗣 🚨 𝑺𝑻𝑢𝑷 🚨 --> <!-- Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the CONTRIBUTING guidelines: https://github.com/Microsoft/TypeScript/blob/master/CONTRIBUTING.md * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ --> <!-- If you have a QUESTION: THIS IS NOT A FORUM FOR QUESTIONS. Ask questions at http://stackoverflow.com/questions/tagged/typescript or https://gitter.im/Microsoft/TypeScript --> <!-- If you have a SUGGESTION: Most suggestion reports are duplicates, please search extra hard before logging a new suggestion. See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> <!-- If you have a BUG: Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.7.0 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** typeof + transitive + type guard + typeof same **Code** ```ts function test(a: string | number, b: string | number) { if (typeof a === typeof b && typeof a === 'string') return b.length } ``` **Expected behavior:** variable `b` be asserted as string type. **Actual behavior:** TS report: `Property 'length' does not exist on type 'string | number'.` at `b.length` **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> [Playground](http://www.typescriptlang.org/play/index.html#src=function%20test(a%3A%20string%20%7C%20number%2C%20b%3A%20string%20%7C%20number)%20%7B%0A%20%20if%20(typeof%20a%20%3D%3D%3D%20typeof%20b%20%26%26%20typeof%20a%20%3D%3D%3D%20'string')%20return%20b.length%0A%7D) **Related Issues:**
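Until something like this lands, the only option is to spell out both checks so each operand is narrowed on its own — a trivial workaround sketch:

```ts
function test(a: string | number, b: string | number) {
  // Narrow both operands explicitly instead of relying on typeof equality being transitive.
  if (typeof a === 'string' && typeof b === 'string') return b.length;
}
```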
Suggestion,Awaiting More Feedback
low
Critical
291,577,359
rust
Check for Integer Overflow by Default
It would be good to always check integers for overflow and thereby provide users with an integer type that actually behaves like an integer, or at least fails completely instead of giving "wrong" results. This was discussed on IRC last week and three distinct cases were identified:

1. An integer is desired and the implicit modular arithmetic is incorrect
2. Modular arithmetic is desired
3. An integer is desired but the user is sure that overflows are impossible and needs the extra speed of omitting the checks

My proposal is to make (1.) the default. For (2.) there is already [Wrapping](https://doc.rust-lang.org/std/num/struct.Wrapping.html), but (3.) should also be annotated, requiring people to assert that they have done their homework and a) are sure that overflows cannot cause problems, b) the compiler cannot infer that the situation is safe and remove the checks, and c) actually need the speed-up of omitting the checks.

I am aware that there are [checked operations](https://doc.rust-lang.org/std/primitive.i64.html#method.checked_mul) and compiler flags to keep overflow checks in release builds, but the defaults are important and the defaults are wrong!

This issue is also discussed in the following two posts:
https://mail.mozilla.org/pipermail/rust-dev/2014-June/010363.html
https://huonw.github.io/blog/2016/04/myths-and-legends-about-integer-overflow-in-rust/
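For the record, a small sketch of how cases (2.) and (1.)/(3.) are expressed with today's library APIs (the standard `Wrapping` type and the `checked_*` methods):

```rust
use std::num::Wrapping;

fn main() {
    // Case 2: modular arithmetic is explicitly requested, so wrapping is fine.
    let wrapped = Wrapping(u8::max_value()) + Wrapping(1u8);
    assert_eq!(wrapped.0, 0);

    // Cases 1/3: checked arithmetic surfaces the overflow instead of silently wrapping.
    let checked = u8::max_value().checked_add(1);
    assert_eq!(checked, None);
}
```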
C-enhancement,T-libs-api
medium
Critical
291,594,325
flutter
Downloading lines for flutter doctor should display information
## Steps to Reproduce When `flutter doctor` downloads dependencies, there are no indicators of what is going on. This is particularly important in places with slow Internet connections. I would expect a modern download CLI to include: * [ ] download size * [ ] % downloaded * [ ] progress bar * [ ] count of how many things will be downloaded (1 of 14...) ## Flutter Doctor ```none $ flutter doctor Downloading package sky_engine...(cached) Downloading common tools...(cached) Downloading darwin-x64 tools...(cached) Downloading darwin-x64 tools... Downloading android-arm-profile/darwin-x64 tools... Downloading android-arm-release/darwin-x64 tools... Downloading android-x86 tools... Downloading android-x64 tools... Downloading android-arm tools... Downloading android-arm-profile tools... Downloading android-arm-release tools... Downloading ios tools... Downloading ios-profile tools... Downloading ios-release tools... [βœ“] Flutter (on Mac OS X 10.12.6 16G1114, locale en-US, channel master) β€’ Flutter version unknown at /Users/srawlins/code/flutter β€’ Framework revision d3705f3ea9 (8 hours ago), 2018-01-24 22:27:24 -0800 β€’ Engine revision d715860925 β€’ Tools Dart version 2.0.0-dev.16.0 β€’ Engine Dart version 2.0.0-edge.8d9d68751a505426eb5f59a9d29f103fde6bd474 [βœ—] Android toolchain - develop for Android devices βœ— Unable to locate Android SDK. Install Android Studio from: https://developer.android.com/studio/index.html On first launch it will assist you in installing the Android SDK components. (or visit https://flutter.io/setup/#android-setup for detailed instructions). If Android SDK has been installed to a custom location, set $ANDROID_HOME to that location. [-] iOS toolchain - develop for iOS devices (Xcode 8.3.3) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 8.3.3, Build version 8E3004b βœ— Flutter requires a minimum Xcode version of 9.0.0. Download the latest version or update via the Mac App Store. βœ— Your Mac needs to enabled for developer mode before using Xcode for the first time. Run 'sudo DevToolsSecurity -enable' to enable developer mode. βœ— libimobiledevice and ideviceinstaller are not installed. To install, run: brew install --HEAD libimobiledevice brew install ideviceinstaller βœ— ios-deploy not installed. To install: brew install ios-deploy βœ— CocoaPods not installed. CocoaPods is used to retrieve the iOS platform side's plugin code that responds to your plugin usage on the Dart side. Without resolving iOS dependencies with CocoaPods, plugins will not work on iOS. For more info, see https://flutter.io/platform-plugins To install: brew install cocoapods pod setup [βœ—] Android Studio (not installed) β€’ Android Studio not found; download from https://developer.android.com/studio/index.html (or visit https://flutter.io/setup/#android-setup for detailed instructions). [-] Connected devices β€’ None ```
tool,a: quality,P2,team-tool,triaged-tool
low
Major
291,614,391
rust
Struct initializer `..x` syntax should work for other structs with structurally equal subset of fields
This should be allowed: ```rust struct X { a: u8, b: u16, c: u32, d: i8 } struct Y { a: u8, b: u16, c: u32, d: i8 } let x = X { a: 1, b: 2, c: 3, d: 4 }; let y = Y { a: 5, ..x }; ``` if the fields that are missing from `Y`'s struct initializer have the **same** names and types in `X` as in `Y`. This occurs very often and I often have to write code that pulls over many fields manually (e.g. to convert between structs with different (serde) attributes and to derive view structs from model structs etc.).. It would make sense to allow the above as syntactic sugar for this: ```rust let y = Y { a: 4, b: x.b, c: x.c, d: x.d }; ``` So the compiler would: 1. determine the set of fields missing from the initializer syntax (`{ b: u16, c: u32, d: i8 }` above) 2. check if those fields are present in the type of `x` with the same types You may say "just `impl From<X> for Y`" but that's exactly where many of these patterns occur! And usually I have to do conversion functions on the few fields that are not pulled over from `x`, so a proc-macro to derive `From` wouldn't work, because `y.a` would have a different type than `x.a`.
T-lang,C-feature-request
medium
Major
291,629,398
svelte
Proposal: Top-Level <:Body> Injections
Imagine I have the following component, `<TopLevelThing>`. ```HTML <div class="top-level-thing"> <p>Top level stuff.</p> </div> ``` It's intended to be used at the top-level of the `<body>`. However, I might want to use it as part of a component which is deep inside the `<body>`, nowhere near the top-level. This could be achieved with a special tag like the following. ```HTML <:Body> <TopLevelThing /> </:Body> ``` Svelte could inject this into the top-level of the `<body>` alongside other elements. The advantage is that it would retain all the functionality of a component: the lifecycle, properties, and being part of the component organisation and structure. One common use case for this could be modals. They can often relate very specifically to the organisation and structure of a chain of components. However, perhaps because of styling or some other reason it may be more logical for the modal structure to be located at the top-level of the `<body>`, rather than within the parent component structure. It's quite possible that an alternative approach, using current Svelte features, is most appropriate to these kinds of use cases. However, this approach came to my mind, and I thought it was worth airing. As an aside, this feature does raise some additional questions. For instance, if a `<:Body>` tag sounds good, would it make more sense to have a more generic `<:Injection>` tag instead, which could be used with more than just the `<Body>`? Also, might developers find it useful in some cases, of their choosing, for `<:Injection>` tags to persist even when their parent components are destroyed?
feature request,popular
high
Critical
291,639,945
vscode
Comment for erb files in rails
When doing "CTRL + / " for lines where ruby codes are written in the " <% %> " tag in html.erb files, it always leaves the last "%>" part as HTML
under-discussion,editor-commands
low
Minor
291,651,833
rust
codegen-units + ThinLTO is not as good as codegen-units = 1
We recently had a fair number of reports about code generation quality drops. One of the recent causes for the quality drop is the enablement of codegen-units and ThinLTO. It seems that ThinLTO is not capable of producing results matching those obtained by compiling without codegen-units in the first place. The list of known reports follows:

* https://github.com/rust-lang/rust/issues/47665
* https://github.com/rust-lang/rust/issues/47561
* https://github.com/rust-lang/rust/issues/47356
* https://github.com/rust-lang/rust/issues/47062 (specifically this [comment](https://github.com/rust-lang/rust/issues/47062#issuecomment-360449227))
* https://github.com/rust-lang/rust/issues/47321
* https://github.com/rust-lang/rust/issues/47770
* https://github.com/rust-lang/rust/issues/53833

Improvements to ThinLTO quality are inbound with the soon-to-happen LLVM upgrade(s); however, those do not help sufficiently, and it would be nice to figure out why ThinLTO is not doing a good enough job.

cc @alexcrichton @nikomatsakis
A-LLVM,I-slow,T-compiler
medium
Critical
291,668,697
godot
Using Viewport Texture causes "node not found" error / "Cannot get path of node as it is not in a scene tree"
___ ***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.* ___ **Godot version:** v3.0.rc3.official **OS/device including version:** Ubuntu 16.04 **Issue description:** I'm using the canvas 2D drawing functions to create a texture and then using that texture on other nodes. Instancing a 2D scene that uses local viewport textures causes runtime errors in the Output and Debugger panes but the project still runs correctly. The errors are: `Node not found: Viewport` `ViewportTexture: Path to node is invalid` The errors show in the Output pane when the parent scene is loaded and in the Debugger when the project runs. It makes testing tedious because I never know if the error in the Debugger is real or not. I'm not sure if my node setup is correct for what I'm trying to accomplish but it works. **Steps to reproduce:** I've attached a project that repros the problem. - Open the project, open the main scene, and you should see the errors in the Output pane - Run the project and you should see the errors in the Debugger pane **Minimal reproduction project:** [node-not-found-example.zip](https://github.com/godotengine/godot/files/1665135/node-not-found-example.zip)
bug,topic:core,confirmed
medium
Critical
291,847,743
TypeScript
Resolving multiple package.json "main" fields
**TL;DR:** A new compiler option `mainFields` for selecting multiple fields in `package.json` instead of just `package.json#main`. --- There are lots of related issues to this one (which I link to below), but I want to focus on just this specific proposal. --- Packages often look like this: ```js { "name": "my-package", "main": "dist/cjs/index.js", "module": "dist/esm/index.js" "source": "src/index.ts" } ``` Notice how we have multiple fields which specify multiple entry points. These entry points all refer to the same code, just in different compile states and configurations. Many tools use these fields in order to find the entry point that they care about. For example, tools like Webpack and Rollup will use `package.json#module` in order to find ES modules. Other tools will use fields like `package.json#source` (or `src`) for local package development. While these fields aren't part of the official Node module resolution algorithm. They are a community convention which has proven to be useful in lots of scenarios. For TypeScript, one such scenario that this would be useful for is with multi-package repos or "monorepos". These are repositories where the code for multiple npm packages exist and are symlinked together locally. ``` /project/ package.json /packages/ /package-one/ package.json /node_modules/ /package-two/ -> ../../package-two (symlink) /package-two/ package.json ``` Inside each package, you'll generally have a `src/` directory that gets compiled to `dist/` ``` /package-two/ package.json /src/ index.ts /dist/ index.js index.d.ts ``` Right now it is really painful to use TypeScript with one of these repos. This is because TypeScript will use the `package.json#main` to resolve to the packages `dist` folders. The problem with this is that the `dist` folders might not exist and if they do exist they might not be compiled from the most recent version of `src`. To work around this today you can add a `index.ts` file in the root of each of your packages to point to the right location and make sure that the root `index.ts` file does not get shipped to npm. ``` /package-two/ index.ts /src/index.ts ``` ```ts // package-two/index.ts export * from './src/index' ``` It sucks that you need this file, and if you ever forget to create it in a new package, you'll revert back to really crap behavior. If, instead of all that, TypeScript supported a new compiler option `mainFields` which looked like: ```js { "compilerOptions": { "mainFields": ["source", "main"] } } ``` > **Note:** [Webpack has this same configuration option](https://webpack.js.org/configuration/resolve/#resolve-mainfields) You could add `package.json#source` (in addition to `package.json#main`) and resolve it to the right location locally. The algorithm would look like this: For each `mainField`: 1. Check if the `package.json` has a field with that name 2. If the package.json does not have the field, continue to next `mainField` 3. If it field exists, check for a file at that location. 4. If no file at that location exists, continue to the next `mainField` 5. 
If the file exists, use that file as the resolved module and stop looking.

I think this is the relevant code: https://github.com/Microsoft/TypeScript/blob/b363f4f9cd6ef98f9451ccdcc7321d151195200b/src/compiler/moduleNameResolver.ts#L987-L1014

**Related Issues:**

- https://github.com/Microsoft/TypeScript/issues/21137 "Path mapping in yarn workspaces"
- https://github.com/Microsoft/TypeScript/issues/20248 "Module resolution for sub-packages picks d.ts file when .ts file is available"
- https://github.com/Microsoft/TypeScript/issues/18442 "Support `.mjs` output"
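As an illustration only (a hypothetical helper, not proposed compiler code), the lookup described in the algorithm above boils down to something like:

```ts
import * as fs from "fs";
import * as path from "path";

// Hypothetical sketch of the proposed mainFields lookup; names are illustrative.
function resolveMain(
  packageDir: string,
  pkg: { [field: string]: string | undefined },
  mainFields: string[]
): string | undefined {
  for (const field of mainFields) {
    const candidate = pkg[field];
    if (!candidate) continue;                  // field missing: try the next mainField
    const fullPath = path.join(packageDir, candidate);
    if (!fs.existsSync(fullPath)) continue;    // no file at that location: try the next mainField
    return fullPath;                           // file exists: resolve and stop looking
  }
  return undefined;
}
```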
Suggestion,Awaiting More Feedback,Scenario: Monorepos & Cross-Project References
high
Critical
291,855,947
go
fmt: Scanf works differently on Windows and Linux
### What version of Go are you using (`go version`)? go version go1.9.3 windows/amd64 and go version go1.9.3 linux/amd64 ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? on Windows: ``` set GOARCH=amd64 set GOBIN= set GOEXE=.exe set GOHOSTARCH=amd64 set GOHOSTOS=windows set GOOS=windows set GOPATH=D:\golang set GORACE= set GOROOT=C:\Go set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64 set GCCGO=gccgo set CC=gcc set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\ANS_AS~1\AppData\Local\Temp\go-build251374523=/tmp/go-build -gno-record-gcc-switches set CXX=g++ set CGO_ENABLED=1 set CGO_CFLAGS=-g -O2 set CGO_CPPFLAGS= set CGO_CXXFLAGS=-g -O2 set CGO_FFLAGS=-g -O2 set CGO_LDFLAGS=-g -O2 set PKG_CONFIG=pkg-config ``` on Linux: ``` GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOOS="linux" GOPATH="/home/ans_ashkan/go" GORACE="" GOROOT="/usr/local/go" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" CC="gcc" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build871792404=/tmp/go-build -gno-record-gcc-switches" CXX="g++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ``` ### What did you do? ```golang package main import "fmt" func main() { var firstString string var secondString string fmt.Printf("Please enter first string:\n") fmt.Scanf("%s", &firstString) fmt.Printf("Please enter second string:\n") n, err := fmt.Scanf("%s", &secondString) fmt.Println(err) } ``` ### What did you expect to see? Same behavior on Windows and Linux. Either it should capture `secondString` on Windows and Linux, or reject it on both operating systems. ### What did you see instead? on Windows: `unexpected newline`. on Linux: no errors.
OS-Windows,NeedsFix
medium
Critical
291,868,439
rust
Attributes on macro does not trigger error or warning
Using `rustc 1.25.0-nightly (9fd7da904 2018-01-25)`,

```rust
fn foo() {
    println!("hello world");
}

fn main() {
    #[rustfmt_skip]
    foo ();
}
```

the above code triggers an error:

```
error[E0658]: The attribute `rustfmt_skip` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
  --> test.rs:18:5
   |
18 |     #[rustfmt_skip]
   |     ^^^^^^^^^^^^^^^
   |
   = help: add #![feature(custom_attribute)] to the crate attributes to enable

error: aborting due to previous error
```

However, the following code compiles without any error:

```rust
fn main() {
    #[rustfmt_skip]
    println! ("hello world");
}
```

It looks like the attributes on macros are entirely ignored by the compiler. Is this expected behavior?
A-diagnostics,T-compiler,C-bug
low
Critical
291,882,490
flutter
Antialiasing behaviour when same-colour
_Latest status update: https://github.com/flutter/flutter/issues/14288#issuecomment-1890250258; some work around suggestions: https://github.com/flutter/flutter/issues/14288#issuecomment-1026332976_ ---- ## Steps to Reproduce Following source code: ```dart import 'package:flutter/material.dart'; const Color color = const Color.fromARGB(255, 100, 100, 100); void main() => runApp( new Container( color: const Color.fromARGB(255, 0, 0, 0), child: new Row( mainAxisAlignment: MainAxisAlignment.end, textDirection: TextDirection.ltr, children: [ new Expanded( child: new Container( color: color, ), ), new Expanded( child: new Container( color: color, ), ), new Expanded( child: new Container( color: color, ), ), new Expanded( child: new Container( color: color, ), ), new Expanded( child: new Container( color: color, ), ), new Expanded( child: new Container( color: color, ), ), new Expanded( child: new Container( color: color, ), ), ], ), ), ); ``` produces following result: <img src="https://user-images.githubusercontent.com/5489307/35438910-2d31024e-02a1-11e8-91a7-3fad44bbb803.png" height="450" /> Looks like background of the container is popping out and we see vertical lines. That should not be the case as all children of the row are Expanded and thus should fill the whole area. If we remove one child lines are gone. ## Logs ``` Launching lib/main.dart on Android SDK built for x86 in debug mode... Initializing gradle... Resolving dependencies... Running 'gradlew assembleDebug'... Built build/app/outputs/apk/app-debug.apk (22.4MB). I/FlutterActivityDelegate( 8398): onResume setting current activity to this D/EGL_emulation( 8398): eglMakeCurrent: 0xaad2c640: ver 3 1 (tinfo 0xa057c5b0) E/eglCodecCommon( 8398): glUtilsParamSize: unknow param 0x000082da E/eglCodecCommon( 8398): glUtilsParamSize: unknow param 0x000082da E/eglCodecCommon( 8398): glUtilsParamSize: unknow param 0x00008cdf E/eglCodecCommon( 8398): glUtilsParamSize: unknow param 0x00008cdf E/eglCodecCommon( 8398): glUtilsParamSize: unknow param 0x00008824 E/eglCodecCommon( 8398): glUtilsParamSize: unknow param 0x00008824 D/ ( 8398): HostConnection::get() New Host Connection established 0xa31a3640, tid 8416 D/EGL_emulation( 8398): eglMakeCurrent: 0xaad2c640: ver 3 1 (tinfo 0xa3183790) D/EGL_emulation( 8398): eglMakeCurrent: 0xaad2c760: ver 3 1 (tinfo 0xa057cc10) Syncing files to device Android SDK built for x86... ``` ## Flutter Doctor ``` [βœ“] Flutter (on Linux, locale en_US.UTF-8, channel master) β€’ Flutter version unknown at <path_to_flutter> β€’ Framework revision 5ae770345a (3 days ago), 2018-01-23 13:46:14 -0800 β€’ Engine revision 171d032f86 β€’ Tools Dart version 2.0.0-dev.16.0 β€’ Engine Dart version 2.0.0-edge.93d8c9fe2a2c22dc95ec85866af108cfab71ad06 [βœ“] Android toolchain - develop for Android devices (Android SDK 27.0.3) β€’ Android SDK at <path_to_android> β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-27, build-tools 27.0.3 β€’ ANDROID_HOME = <path_to_android> β€’ Java binary at: <path_to_android-studio>/jre/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01) [βœ“] Android Studio (version 3.0) β€’ Android Studio at <path_to_android-studio> β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01) [βœ“] Connected devices β€’ Android SDK built for x86 β€’ emulator-5554 β€’ android-x86 β€’ Android 8.1.0 (API 27) (emulator) ```
framework,engine,customer: crowd,c: rendering,has reproducible steps,P3,a: gamedev,found in release: 3.3,found in release: 3.4,workaround available,team-engine,triaged-engine
high
Critical
291,897,370
create-react-app
Inform the developer about new releases and how to upgrade
Hi, guys. How do you know if there is a new release to update to? I usually use `npm-check`, but other people might check manually by going to [Releases](https://github.com/facebook/create-react-app/releases), among other ways. Whatever it is, you only get the information after requesting it manually. I want to make a proposal to avoid this. If there is a new version, CRA would alert you about it when you run in development mode. So you could see a message (browser console and terminal) while you are working. Messages might vary. I'll give you some possible examples as proposals.
- Info: There is a new major release. Read more on...
- Info: There are 2 new major releases ahead of your current version. Read more on...
- Info: There is a new minor release. You can upgrade easily just by running this command on your terminal...

How cool would that be for developers who want to save time and stay focused on the app as much as possible?
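As one possible shape for this (an assumption, not a committed design), the widely used `update-notifier` package already implements the "check in the background, print a gentle hint on the next run" behaviour:

```js
// Sketch: a dev-server startup hook could do something like this.
const updateNotifier = require('update-notifier');
const pkg = require('react-scripts/package.json');

// Checks npm in the background and prints an upgrade hint in the terminal on the next run.
updateNotifier({ pkg, updateCheckInterval: 1000 * 60 * 60 * 24 }).notify();
```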
issue: proposal
low
Minor
291,900,705
vscode
Support assigning numeric value to "editor.wrappingIndent"
- VSCode Version: 1.19.3

`editor.wrappingIndent` currently supports `"none"`, `"same"`, `"indent"`. The problem with `"same"` is that it is difficult to know where the new line starts, and the problem with `"indent"` is that it is difficult to know where the new block starts. (Also, I don't like `"none"` 😜)

Many IDEs and editors support a custom numeric value for the wrapping indent. I hope this feature will also be supported in Visual Studio Code.

**Current:** `"editor.wrappingIndent": "same"`
![Current](https://user-images.githubusercontent.com/2531397/35442210-c0ee2cbc-02e9-11e8-8678-9a85dcbf0795.png)

**Desired:** `"editor.wrappingIndent": 1`
![Desired](https://user-images.githubusercontent.com/2531397/35442235-ddcd1fb4-02e9-11e8-8973-ff836bedca8b.png)
feature-request,editor-core
low
Minor
291,921,217
go
cmd/go: coverage profile should be cached with tests
As briefly discussed here: https://twitter.com/_rsc/status/956888213314068481 I don't see why Go shouldn't cache the results of `-coverprofile` when running tests, as test coverage shouldn't vary from run to run, given the same set of arguments. ### What version of Go are you using (`go version`)? `go version go1.10rc1 darwin/amd64` ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? ``` $GOOS="darwin" $GOARCH="amd64" ``` ### What did you do? Call `go test -coverprofile=coverprofile.out ./...` multiple times. ### What did you expect to see? Test results cached between runs. ``` $ go test -coverprofile=coverprofile.out ./... ok github.com/bbrks/tmp 2.893s coverage: 46.1% of statements $ go test -coverprofile=coverprofile.out ./... ok github.com/bbrks/tmp (cached) ``` ### What did you see instead? Test results were not cached between runs. ``` $ go test -coverprofile=coverprofile.out ./... ok github.com/bbrks/tmp 2.893s coverage: 46.1% of statements $ go test -coverprofile=coverprofile.out ./... ok github.com/bbrks/tmp 2.893s coverage: 46.1% of statements ```
NeedsFix
high
Critical
291,923,557
youtube-dl
[go] Add CenturyLink support
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.21** - [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections - [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones ### What is the purpose of your *issue*? - [ ] Bug report (encountered problems with youtube-dl) - [ ] Site support request (request for adding support for a new site) - [x] Feature request (request for a new functionality) - [ ] Question - [ ] Other --- ``` PS C:\Users\rumag\Desktop\youtube-dl> py -3 -m youtube_dl --all-subs --ap-username USERNAME --ap-password PASSWORD --ap-mso DTV "http://watchdisneychannel.go.com/andi-mack/video/vdka4245399/02/102-andi-mack-cast-party " -v [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['--all-subs', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '--ap-mso', 'DTV', 'http://watchdisneychannel.go.com/andi-mack/video/vdka4245399/02/102-andi-mack-cast-party', '-v'] [debug] Encodings: locale cp1252, fs utf-8, out utf-8, pref cp1252 [debug] youtube-dl version 2018.01.21 [debug] Python version 3.6.4 (CPython) - Windows-10-10.0.16299-SP0 [debug] exe versions: ffmpeg N-86911-gb664d1f, ffprobe N-86911-gb664d1f, rtmpdump 2.4 [debug] Proxy map: {} [Go] vdka4245399: Downloading JSON metadata [Go] VDKA4245399: Downloading Provider Redirect Page [Go] VDKA4245399: Downloading Provider Login Page [Go] VDKA4245399: Logging in [Go] VDKA4245399: Confirming Login [Go] VDKA4245399: Retrieving Session ERROR: Unable to download webpage: HTTP Error 401: Unauthorized (caused by <HTTPError 401: 'Unauthorized'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output. File "C:\Users\rumag\Desktop\youtube-dl\youtube_dl\extractor\common.py", line 517, in _request_webpage return self._downloader.urlopen(url_or_request) File "C:\Users\rumag\Desktop\youtube-dl\youtube_dl\YoutubeDL.py", line 2198, in urlopen return self._opener.open(req, timeout=self._socket_timeout) File "C:\Users\rumag\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 532, in open response = meth(req, response) File "C:\Users\rumag\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 642, in http_response 'http', request, response, code, msg, hdrs) File "C:\Users\rumag\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 570, in error return self._call_chain(*args) File "C:\Users\rumag\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain result = func(*args) File "C:\Users\rumag\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 650, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) ... <end of log> ``` --- - Single video: http://watchdisneychannel.go.com/andi-mack/video/vdka4245399/02/102-andi-mack-cast-party --- ### Description of *issue* Is it possible to add CenturyLink? I tried with DTV since they have a partnership, but it's not working. CenturyLink is avaible on the website. And if you could make sure the one called "CenturyLink Prism" would work, it would be nice.
tv-provider-account-needed
low
Critical
291,932,502
go
gccgo: GOARCH is not validated
Please answer these questions before submitting your issue. Thanks! ### What version of Go are you using (`go version`)? ``` go version go1.9 linux/amd64 ``` ### Does this issue reproduce with the latest release? Yes ### What did you do? ``` $ GOARCH=386 go build -v -compiler gccgo ./mypkg/ go build: when using gccgo toolchain, please pass compiler flags using -gccgoflags, not -gcflags mypkg # mypkg version.go:7:9: error: import file β€˜strconv’ not found "strconv" $ GOARCH=xxx go build -v -compiler gccgo ./mypkg/ go build: when using gccgo toolchain, please pass compiler flags using -gccgoflags, not -gcflags mypkg $ echo $? 0 ``` ### What did you expect to see? Failure when the provided `GOARCH` is invalid. ### What did you see instead? It builds for the default `GOARCH`. If a valid architecture is provided it tries to build that one, failing if the standard library is not built for it. If a bogus one is provided, no errors are returned and the default one is picked instead.
NeedsInvestigation
low
Critical
291,948,033
flutter
flutter run usage for iOS (e.g. hotrun/ios) does not include architecture type
It's a little odd that our usage stats report the arch type for Android but not iOS:

hotrun/android-arm
hotrun/ios
hotrun/android-x86
hotrun/android-x64
run/android-arm

It seems like we should either add the arch type to iOS or remove it from all of them (and get the arch type through another pivot of the data). Not a big deal.
c: new feature,team,tool,P2,team-tool,triaged-tool
low
Minor
292,005,962
go
archive/zip: invalid zip64 extra fields as per strict interpretation of APPNOTE.TXT
### What version of Go are you using (`go version`)? Looking at git master (currently d85a353 of writer.go) ### Description There seems to be an error in *archive/zip/writer.go* when creating a zip64 extra field when the local header offset is less than UINT32_MAX (e.g. for the first file in the central directory). https://github.com/golang/go/blob/master/src/archive/zip/writer.go#L114 writes the 64-bit offset of the local header to the zip64 extra field even if the 32-bit offset written at https://github.com/golang/go/blob/master/src/archive/zip/writer.go#L129 isn't equal to 0xFFFFFFFF. According to APPNOTE.TXT, this isn't allowed as per: ``` The order of the fields in the zip64 extended information record is fixed, but the fields MUST only appear if the corresponding Local or Central directory record field is set to 0xFFFF or 0xFFFFFFFF. ``` While Info-ZIP and other implementations may not care about extraneous fields, Microsoft's implementation explodes accordingly at https://referencesource.microsoft.com/#WindowsBase/Base/MS/Internal/IO/Zip/ZipIOExtraFieldZip64Element.cs,120 because it won't expect the field by this logic https://referencesource.microsoft.com/#WindowsBase/Base/MS/Internal/IO/Zip/ZipIOCentralDirectoryFileHeader.cs,92 ### Proposed solution Change https://github.com/golang/go/blob/master/src/archive/zip/writer.go#L126 to say: ```go if h.isZip64() || h.offset > uint32max { ``` instead of ```go if h.offset > uint32max { ``` to mimic the logic some lines above that decides when to write the zip64 extra field at all. This seems to be the minimal change to fix the issue, otherwise the extra field might need to vary in size. P.S. thanks to archive/zip authors for providing a readable reference
NeedsInvestigation
low
Critical
292,081,759
rust
derived Clone implementations for many-variant enums are unnecessarily large
The derived Clone implementations in Firefox weigh 192k [1], which is a lot. Of that 192k, 79k of that is just for PropertyDeclaration::clone. PropertyDeclaration is a very large enum, but most of the variants are simple POD types. Ideally Rust would coalesce those together, and then generate special cases for the types that need it. Unfortunately, it appears that adding a single non-POD variant causes all the cases to be enumerated separately, as seen in the testcase at [2]. From IRC: > eddyb bholley: yeah I think this is a valid issue > eddyb: if you do generate that sort of "(partial) identity match" it's unlikely to have anything else collapse it > eddyb: LLVM might have a pass for switch-discriminant-dependent-values > eddyb: but it probably has serious limitations in practice > eddyb: bholley: but yeah this is one of those cases where not scattering in the first place is significantly easier than gathering later` Ideally we'd do the same for PartialEq, since PropertyDeclaration::eq weighs another 61k. [1] nm --print-size --size-sort --radix=d libxul.so | grep "\.\.Clone" | grep -v Cloned | cut -d " " -f 2 | awk '{s+=$1} END {print s}' [2] https://play.rust-lang.org/?gist=0207af76d2e05acdbed913b7df96aa77&version=stable
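To make the "coalesce the POD variants" idea concrete, here is a hand-written sketch (illustrative types only, not Servo's actual PropertyDeclaration) of the shape a coalesced impl can take:

```rust
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum PodVariant { A(u32), B(f32), C(bool) }

#[allow(dead_code)]
enum Decl {
    Pod(PodVariant),     // every POD case collapses into one memcpy-able variant
    Boxed(Box<String>),  // only the non-POD case needs a real clone
}

impl Clone for Decl {
    fn clone(&self) -> Self {
        match *self {
            // One arm covers all POD cases: a plain bitwise copy.
            Decl::Pod(v) => Decl::Pod(v),
            Decl::Boxed(ref b) => Decl::Boxed(b.clone()),
        }
    }
}

fn main() {}
```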
C-enhancement,T-compiler,I-heavy,C-optimization
medium
Major
292,090,607
go
runtime/race: document and fix "trap when race detected"
This is not so much a _bug_, as an _ask_. When using the `-race` option for `go test`, I would like to see the arguments that are provided at the call site, down the stack, when a data race is detected. Currently, I only see the methods themselves. If I omit the `-race` option, I can see the data race as reported by a map concurrency problem, and this _does_ include call parameters. In my particular example, I am having trouble with a map that the `go/types#Checker` method uses. The concurrency issue is happening deep in the Go code, rather than something that I have immediate control over. (Note: I do _not_ think that this is a problem in the Go type checker library at this point; no reason to. I'm sure I'm doing something wrong on my side.) I have many goroutines running a type checker at the same time. If I had parameters on the call stack, I could tell a lot more about the situation that causes the issue. ### What version of Go are you using (`go version`)? ``` % go version go version go1.9.3 darwin/amd64 ``` ### Does this issue reproduce with the latest release? I have not tried with the 1.10 beta. ### What operating system and processor architecture are you using (`go env`)? ``` % go env GOARCH="amd64" GOBIN="" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/bropa18/work" GORACE="" GOROOT="/usr/local/Cellar/go/1.9.3/libexec" GOTOOLDIR="/usr/local/Cellar/go/1.9.3/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/hx/fh6ygqmx4ls2cs9djgk5bhkm3nt0vs/T/go-build080984833=/tmp/go-build -gno-record-gcc-switches -fno-common" CXX="clang++" CGO_ENABLED="1" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" ``` ### What did you do? Unfortunately, I don't have a _small_ working example of this scenario. I found this while debugging a race condition in my... um, complex program. I can try to create a more minimal example at a later date, if the team thinks it's helpful. ### What did you expect to see? I would _like_ to see the parameters in the call stack. This example from the map's concurrency checker is a great example: ``` fatal error: concurrent map read and map write goroutine 1506 [running]: runtime.throw(0x1299de0, 0x21) /usr/local/Cellar/go/1.9.3/libexec/src/runtime/panic.go:605 +0x95 fp=0xc43034cec8 sp=0xc43034cea8 pc=0x102b495 runtime.mapaccess1_faststr(0x124cbc0, 0xc42de16f90, 0xc43806c3c8, 0x1, 0xc438094f00) /usr/local/Cellar/go/1.9.3/libexec/src/runtime/hashmap_fast.go:217 +0x43a fp=0xc43034cf20 sp=0xc43034cec8 pc=0x100b99a go/types.(*Scope).Lookup(...) 
/usr/local/Cellar/go/1.9.3/libexec/src/go/types/scope.go:71 go/types.(*Checker).selector(0xc42a704fc0, 0xc4382ca180, 0xc42d8b0820) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/call.go:287 +0x1167 fp=0xc43034d210 sp=0xc43034cf20 pc=0x1184f27 go/types.(*Checker).typExprInternal(0xc42a704fc0, 0x13c83a0, 0xc42d8b0820, 0x0, 0x0, 0x0, 0x0, 0xc428b8f400, 0x145f6c8) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:234 +0x9f3 fp=0xc43034d3c0 sp=0xc43034d210 pc=0x11b4e43 go/types.(*Checker).typExpr(0xc42a704fc0, 0x13c83a0, 0xc42d8b0820, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:132 +0x89 fp=0xc43034d418 sp=0xc43034d3c0 pc=0x11b39f9 go/types.(*Checker).typ(0xc42a704fc0, 0x13c83a0, 0xc42d8b0820, 0x1, 0xa1970dd1f3) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:140 +0x63 fp=0xc43034d470 sp=0xc43034d418 pc=0x11b3b03 go/types.(*Checker).typExprInternal(0xc42a704fc0, 0x13c8460, 0xc42d8b0840, 0x0, 0x0, 0x0, 0x0, 0xc438094f00, 0xc430505060) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:276 +0x913 fp=0xc43034d620 sp=0xc43034d470 pc=0x11b4d63 go/types.(*Checker).typExpr(0xc42a704fc0, 0x13c8460, 0xc42d8b0840, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:132 +0x89 fp=0xc43034d678 sp=0xc43034d620 pc=0x11b39f9 go/types.(*Checker).typ(0xc42a704fc0, 0x13c8460, 0xc42d8b0840, 0xc420025300, 0x13ca860) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:140 +0x63 fp=0xc43034d6d0 sp=0xc43034d678 pc=0x11b3b03 go/types.(*Checker).collectParams(0xc42a704fc0, 0xc43823ef00, 0xc4380633b0, 0xc43823ef01, 0x0, 0x0, 0x0, 0x0) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:408 +0x4c2 fp=0xc43034d838 sp=0xc43034d6d0 pc=0x11b6372 go/types.(*Checker).funcType(0xc42a704fc0, 0xc438218cf0, 0x0, 0xc42d8b0e20) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/typexpr.go:149 +0x1e0 fp=0xc43034d948 sp=0xc43034d838 pc=0x11b3d10 go/types.(*Checker).funcDecl(0xc42a704fc0, 0xc438095220, 0xc42fdcff20) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/decl.go:333 +0xe2 fp=0xc43034d9e8 sp=0xc43034d948 pc=0x1189a92 go/types.(*Checker).objDecl(0xc42a704fc0, 0x13ca860, 0xc438095220, 0x0, 0xc43034db28, 0x0, 0x8) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/decl.go:87 +0x38b fp=0xc43034dad8 sp=0xc43034d9e8 pc=0x11883bb go/types.(*Checker).packageObjects(0xc42a704fc0, 0xc43219e400, 0x88, 0x90) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/resolver.go:463 +0x127 fp=0xc43034db78 sp=0xc43034dad8 pc=0x11a5207 go/types.(*Checker).checkFiles(0xc42a704fc0, 0xc4380e6640, 0x3, 0x3, 0x0, 0x0) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/check.go:239 +0xbd fp=0xc43034dbc0 sp=0xc43034db78 pc=0x1185e9d go/types.(*Checker).Files(0xc42a704fc0, 0xc4380e6640, 0x3, 0x3, 0x1, 0x1) /usr/local/Cellar/go/1.9.3/libexec/src/go/types/check.go:230 +0x49 fp=0xc43034dc00 sp=0xc43034dbc0 pc=0x1185db9 github.com/object88/langd.(*Loader).processComplete(0xc4200a0120, 0xc4234d0500) /Users/bropa18/work/src/github.com/object88/langd/loader.go:444 +0x5b5 fp=0xc43034deb0 sp=0xc43034dc00 pc=0x11fc465 github.com/object88/langd.(*Loader).processStateChange(0xc4200a0120, 0xc4242e6030, 0x30) /Users/bropa18/work/src/github.com/object88/langd/loader.go:360 +0x75f fp=0xc43034dfc8 sp=0xc43034deb0 pc=0x11fbcef runtime.goexit() /usr/local/Cellar/go/1.9.3/libexec/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc43034dfd0 sp=0xc43034dfc8 pc=0x105af41 created by github.com/object88/langd.(*Loader).Start.func1 
/Users/bropa18/work/src/github.com/object88/langd/loader.go:242 +0x15d ``` In particular, I can see the parameters to my `processComplete` method: ``` github.com/object88/langd.(*Loader).processComplete(0xc4200a0120, 0xc4234d0500) ``` By seeing the parameters, I can see whether or not my `processComplete` method is getting called with the same parameters simultaneously. ### What did you see instead? ``` ================== WARNING: DATA RACE Read at 0x00c44156b380 by goroutine 55: runtime.mapaccess1_faststr() /usr/local/Cellar/go/1.9.3/libexec/src/runtime/hashmap_fast.go:208 +0x0 go/types.(*Checker).selector() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/scope.go:71 +0x1af1 go/types.(*Checker).exprInternal() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/expr.go:1227 +0x2be0 go/types.(*Checker).rawExpr() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/expr.go:959 +0x91 go/types.(*Checker).exprOrType() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/expr.go:1573 +0x6c go/types.(*Checker).call() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/call.go:15 +0x8a go/types.(*Checker).exprInternal() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/expr.go:1424 +0x2a39 go/types.(*Checker).rawExpr() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/expr.go:959 +0x91 go/types.(*Checker).multiExpr() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/expr.go:1530 +0x72 go/types.(*Checker).expr() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/expr.go:1524 +0x56 go/types.(*Checker).varDecl() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/decl.go:166 +0x3bc go/types.(*Checker).objDecl() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/decl.go:81 +0x461 go/types.(*Checker).packageObjects() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/resolver.go:463 +0x17e go/types.(*Checker).checkFiles() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/check.go:239 +0xca go/types.(*Checker).Files() /usr/local/Cellar/go/1.9.3/libexec/src/go/types/check.go:230 +0x56 github.com/object88/langd.(*Loader).processComplete() /Users/bropa18/work/src/github.com/object88/langd/loader.go:444 +0x830 github.com/object88/langd.(*Loader).processStateChange() /Users/bropa18/work/src/github.com/object88/langd/loader.go:347 +0x3d2 ``` I can observe that `processComplete` was invoked, but it's missing context: ``` github.com/object88/langd.(*Loader).processComplete() ``` Without the parameter information, I don't know whether or not the same data is being passed into the method.
Documentation,RaceDetector,help wanted,Proposal-Accepted,NeedsFix
medium
Critical
292,114,458
godot
[Bullet] Fast RigidBody always go through walls and Godot Crash because of Collision
___
***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.*
___

**Godot version:**
Godot 3 RC3

**Issue description:**
A fast-moving Rigidbody3D always goes through walls; the collision is not always detected. Even if I make the wall thick and add more invisible walls to it, the body still goes through them.

Problem number 2: when I added a collision shape to the RigidBody, everything worked fine (select the player mesh and "Create Convex Collision Sibling"), but when I close the project and open it again, Godot crashes or closes itself. The only way to reopen the project is to open the scene that contains that collision shape in a text editor and remove the collision line from there.

In this case, this is the line you must remove from "Level.tscn" to open the project:

`[node name="PlayerCollisionShape" type="CollisionShape" parent="Player" index="1"]`

**Steps to reproduce:**
Use `apply_impulse` with a high value inside the rigid body. If you set a lower value it collides fine, but not with a higher value.

This is the project; if it doesn't open for you, remove the line above from Level.tscn in a text editor.

**Minimal reproduction project:**
[FastRigidBody.zip](https://github.com/godotengine/godot/files/1670027/FastRigidBody.zip)
bug,confirmed,topic:physics,topic:3d
medium
Critical
292,135,438
rust
Rust make system should respect -j1
The build system seems not to respect -j1 for make (origin report is https://bugs.gentoo.org/613794) : ``` # /usr/bin/pstree -Ulnspua 1038 init,1 └─sudo,1026 /opt/tb/bin/chr.sh /home/tinderbox/run/17.0-no-multilib_libressl_20180124-193635 /bin/bash /tmp/job.sh └─chr.sh,1038 /opt/tb/bin/chr.sh /home/tinderbox/run/17.0-no-multilib_libressl_20180124-193635 /bin/bash /tmp/job.sh └─su,1113 - root -c /bin/bash /tmp/job.sh └─bash,1125 /tmp/job.sh └─emerge,10956 -b /usr/lib/python-exec/python3.5/emerge --update dev-lang/rust └─sandbox,7479,portage /usr/lib/portage/python3.5/ebuild.sh compile └─ebuild.sh,7621 /usr/lib/portage/python3.5/ebuild.sh compile └─ebuild.sh,7820 /usr/lib/portage/python3.5/ebuild.sh compile └─python2.7,7838 ./x.py build --verbose --config=/var/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/config.toml └─python2.7,7485 ./x.py build --verbose --config=/var/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/config.toml └─bootstrap,7486 build --verbose --config=/var/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/config.toml └─cmake,23604 --build . --target install --config Release -- -j 12 └─gmake,23616 -j 12 install └─gmake,23742 -f CMakeFiles/Makefile2 all ``` Is there anything which can be improved in the Gentoo package definition or is this a rust issue ?
T-bootstrap,C-bug
low
Critical
292,160,857
rust
Allow using Fn(A, B, C) -> _ for unconstrained FnOnce::Output.
`F: Fn(A, B, C) -> R` desugars to `F: Fn<(A, B, C), Output = R>` currently. `F: Fn(A, B, C) -> _` could easily desugar to `F: Fn<(A, B, C)>`. More generally, we can allow `T: Trait<AssocTy = _>`, meaning the same as `T: Trait`. This form would make it easier to be generic over the return type without having to specify it as another generic parameter (which is worse in type definitions than `impl`s, as it leaks to users). cc @eternaleye (who suggested it) @nikomatsakis @withoutboats @Centril
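For comparison, a sketch of what you have to write today — threading the return type through as an extra generic parameter, which is exactly the leakage described above:

```rust
// What the `-> _` sugar would avoid: an explicit `R` parameter that exists only for the return type.
fn call_with<F, R>(f: F) -> R
where
    F: Fn(u32, u32) -> R,
{
    f(1, 2)
}

fn main() {
    let s = call_with(|a, b| format!("{}", a + b));
    assert_eq!(s, "3");
}
```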
T-lang,C-feature-request
medium
Major
292,195,983
rust
unsupported cyclic reference between types/traits error only when implicitly using associated type
Getting `error[E0391]: unsupported cyclic reference between types/traits detected` only when using `T::A` to refer to an associated type, but it works when using `<T as Test>::A`, which seems inconsistent.

```rust
struct Struct<T>(T);

trait Test {
    type A;
    type B;
}

// Compiles
fn test<T>() where T: Test<A = i32, B = Struct<<T as Test>::A>> { }

// error[E0391]: unsupported cyclic reference between types/traits detected
fn test2<T>() where T: Test<A = i32, B = Struct<T::A>> { }

fn main() { }
```

https://play.rust-lang.org/?gist=c2f9100b205db71e6c444501a021e8ac&version=stable
A-associated-items,T-compiler,C-bug
low
Critical
292,204,239
puppeteer
Behavior of setViewport() inconsistent in Headful
**Environment:**
* Puppeteer: 1.0.0
* Platform / OS: Ubuntu 16.04 LTS (1920x1080@1x)
* Node.js: 8.9.3
* Chromium: 65.0.3312.0 (Developer Build) (64-bit)

**What steps will reproduce the problem?**

Running the following code several times.

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    args: [
      '--start-maximized',
    ],
    headless: false,
  });

  const page = await browser.newPage();

  /**
   * This should make Chromium infer current screen resolution (only in headless = false).
   */
  await page.setViewport({
    width: 0,
    height: 0,
  });

  await page.goto('http://matanich.com/test/viewport-width', {
    waitUntil: [
      'domcontentloaded',
      'load',
    ],
  });

  let result = await page.evaluate(
    () => {
      return window.innerWidth;
    }
  );

  console.log(`Detected window.innerWidth to be ${result}.`);

  await browser.close();
})();
```

**What is the expected result?**

I should get the same `window.innerWidth` all the time.

**What happens instead?**

Seems that _sometimes_ `setViewport()` fails silently and falls back to the [default viewport size](https://github.com/GoogleChrome/puppeteer/blob/cb684ebbc4de271c7f2187f4e52c0b3a83cc0122/lib/Page.js#L58) of 800x600.

```
$ node setViewportTest.js
Detected window.innerWidth to be 1855.
$ node setViewportTest.js
Detected window.innerWidth to be 1855.
$ node setViewportTest.js
Detected window.innerWidth to be 800.
$ node setViewportTest.js
Detected window.innerWidth to be 1855.
$ node setViewportTest.js
Detected window.innerWidth to be 1855.
$ node setViewportTest.js
Detected window.innerWidth to be 1855.
$ node setViewportTest.js
Detected window.innerWidth to be 800.
```
bug,upstream,chromium,confirmed,P3
medium
Major
292,217,008
opencv
Since GPU modules are not yet supported by OpenCV-Python?
GPU modules are not yet supported by OpenCV-Python. Is there any plan to support the GPU?
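For context, a sketch of what exposed bindings could look like from Python (this assumes a CUDA-enabled build that actually exposes the `cv2.cuda` bindings, which the stock pip wheels do not):

```python
import cv2

# Assumes a CUDA-enabled OpenCV build exposing the cv2.cuda bindings.
img = cv2.imread("input.png")

gpu = cv2.cuda_GpuMat()
gpu.upload(img)                                     # host -> device
gpu_gray = cv2.cuda.cvtColor(gpu, cv2.COLOR_BGR2GRAY)
gray = gpu_gray.download()                          # device -> host
```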
feature,priority: low,category: gpu/cuda (contrib)
low
Minor
292,264,085
vscode
[folding] Allow folding block comment that starts in the middle of the line
An additional request on top of the now-closed feature request [11524](https://github.com/Microsoft/vscode/issues/11524). I sometimes like to write my block comments in this form:
```
let someflag : boolean = false;/* this flag does blah
blah blah
blah blah
blah*/
let somethingelse : string;
```
so that folding that comment completely removes it from view apart from that trailing start marker. However, the implementation for feature request 11524 does not detect this type of block comment and only seems to pick up block comments started on their own new line. I am not sure how popular that style of commenting is, but it would help me a lot if it could be picked up by that feature. Thanks.
feature-request,editor-folding
low
Minor
292,386,047
bitcoin
Timeout downloading block
When investigating my mempool statistics there are from time to time >15 minutes intervals without blocks followed immediately by two or three fast blocks and it happened too often to be just due to the high variance of exponential distributiion. Looking into the logs revealed the culprit: ``` grep Timeout -A1 debug.log | tail 2018-01-29 08:45:20 Timeout downloading block 00000000000000000031a8bf543031f89b7a2a445d5b872883b2b5375c251571 from peer=47947, disconnecting 2018-01-29 08:45:22 UpdateTip: new best=00000000000000000031a8bf543031f89b7a2a445d5b872883b2b5375c251571 height=506627 version=0x20000000 log2_work=87.990329 tx=296113862 date='2018-01-29 08:34:56' progress=0.999993 cache=85.3MiB(564264txo) -- 2018-01-29 09:41:12 Timeout downloading block 000000000000000000254d01e7f0d550506ae1d23be0d255e9e650ecbe887e6a from peer=51377, disconnecting 2018-01-29 09:41:13 UpdateTip: new best=000000000000000000254d01e7f0d550506ae1d23be0d255e9e650ecbe887e6a height=506635 version=0x20000000 log2_work=87.990749 tx=296124995 date='2018-01-29 09:31:05' progress=0.999994 cache=88.7MiB(591588txo) -- 2018-01-29 11:33:16 Timeout downloading block 00000000000000000064da61afd140ad94b973d5d355e653111c04edba9ea625 from peer=51664, disconnecting 2018-01-29 11:33:17 UpdateTip: new best=00000000000000000064da61afd140ad94b973d5d355e653111c04edba9ea625 height=506642 version=0x20000000 log2_work=87.991116 tx=296138316 date='2018-01-29 11:22:36' progress=0.999993 cache=94.6MiB(639913txo) ``` The peer seems to be always the same (IP addresses aren't logged): ``` 2018-01-29 09:42:12 receive version message: /ViaBTC:bitpeer.0.2.0/: version 70015, blocks=0, us=0.0.0.0:0, peer=51664 ``` ### What behavior did you expect? First, I'd expect a shorter timeout (it seems to be 10 minutes). While a 10 minutes timeout is not that critical for my node, it means that you shouldn't use an unmodified bitcoin core for solo mining or you risk high orphan rate. Also why is the node not banned? It is disconnected but it is allowed to immediately reconnect. Wouldn't it be possible to exploit this behaviour to make the blocks of competitors relay slowly, by just sending the inv message for them but never delivering the content. This is with bitcoin v0.15.1
P2P
low
Critical
292,399,857
rust
Wrong error: cannot move out of captured outer variable in an `FnMut` closure
> error: cannot move out of captured outer variable in an `FnMut` closure ```rust struct Foo { a: i32, b: i32, bar: Bar, } struct Bar; impl Bar { fn f<F: FnMut()>(&self, mut f: F) { f(); } } fn main() { let mut foo = Foo { a: 1, b: 2, bar: Bar }; let a = &mut foo.a; let b = &mut foo.b; (|| { // works *if true {a} else {b} = 42; })(); let mut foo = Foo { a: 1, b: 2, bar: Bar }; let a = &mut foo.a; let b = &mut foo.b; foo.bar.f(|| { // doesn't work *if true {a} else {b} = 42; }); } ``` https://play.rust-lang.org/?gist=4ce6948a92c2fcb281b3cade8574691d&version=nightly But the second case should work too!
C-enhancement,A-diagnostics,A-closures,A-borrow-checker,T-compiler,D-terse
low
Critical
292,425,175
pytorch
Rebuild from no-CUDA to CUDA leads to: error: #error "Expected GLOO_USE_CUDA to be defined"
Hello, I'm trying to build from source for the multinomial distribution, but I can't get it built. By contrast, if built with NO_DISTRIBUTED=1 the build passes, but of course things like pooling won't work, which is something I need. - OS: Ubuntu 16.04 - PyTorch version: From master branch - How you installed PyTorch (conda, pip, source): source - CUDA/cuDNN version: 9.1 - GCC version (if compiling from source): 5.4 - Error messages and/or stack traces of the bug > .../pytorch/torch/lib/gloo/gloo/cuda.h:25:2: error: #error "Expected GLOO_USE_CUDA to be defined" > #error "Expected GLOO_USE_CUDA to be defined" > CMake Error at gloo_cuda_generated_cuda_private.cu.o.cmake:203 (message): > Error generating > .../pytorch/torch/lib/build/gloo/gloo/CMakeFiles/gloo_cuda.dir//./gloo_cuda_generated_cuda_private.cu.o > > But make itself seems to be aware of CUDA 9.1 > Install the project... > -- Install configuration: "Release" > -- Installing: .../pytorch/torch/lib/tmp_install/lib/libshm.so > -- Set runtime path of ".../pytorch/torch/lib/tmp_install/lib/libshm.so" to "" > -- Up-to-date: .../pytorch/torch/lib/tmp_install/include/libshm.h > -- Installing: ...pytorch/torch/lib/tmp_install/bin/torch_shm_manager > -- CUDA detected: 9.1 > -- Added CUDA NVCC flags for: sm_30 sm_35 sm_50 sm_52 sm_60 sm_61 sm_70 > -- Found libcuda: /usr/lib/x86_64-linux-gnu/libcuda.so > -- Found libnvrtc: /usr/local/cuda/lib64/libnvrtc.so > -- Configuring done > -- Generating done > -- Build files have been written to: .../pytorch/torch/lib/build/gloo >
module: build,low priority,triaged,has workaround
low
Critical
292,512,044
go
x/playground: feature request: prepopulate file system with contents of $GOROOT/src
Adding the $GOROOT contents to the playground file system would make it possible to use more of the `go/*` packages in the playground more easily. This would be helpful for both reporting bugs in the `go/*` packages and for sharing demonstrations of how to use the `go/*` packages.
NeedsInvestigation
medium
Critical
292,521,956
angular
HttpClient.head expects payload by default. Doesn't follow spec
## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [x ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> ## Current behavior making a HEAD request with `HttpClient.head` with no `options` (or more specifically, no `options.responseType`) will always result in an exception being thrown. That is, assuming the API respects the HTTP spec. "the server MUST NOT return a message-body in the response" https://tools.ietf.org/html/rfc2616#section-9.4 Angular expects there to be a JSON payload instead of a blank "text" payload ## Expected behavior `options.responseType` should default to `'text'` when making a HEAD request ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> ## What is the motivation / use case for changing the behavior? The developer shouldn't have to specify a response type for every single HEAD request if they are following spec. ## Environment <pre><code> Angular version: X.Y.Z <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [ ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
breaking changes,freq1: low,area: common/http,type: confusing,P4
low
Critical
292,552,503
flutter
Code Push / Hot Update / out of band updates
_This is currently not on Flutter's roadmap, for reasons discussed in these comments: https://github.com/flutter/flutter/issues/14330#issuecomment-1279484739, https://github.com/flutter/flutter/issues/14330#issuecomment-485565194_ _Code push for Flutter is available as a third-party product from shorebird.dev, as discussed in this comment: https://github.com/flutter/flutter/issues/14330#issuecomment-2099442823_ _This comment gives a brief overview of various kinds of "hot update" features, and gives terminology for referring to them, which can help if you wish to communicate unambiguously about this topic: https://github.com/flutter/flutter/issues/14330#issuecomment-442274897_ ---- Often people ask if Flutter supports "code push" or "hot update" or other similar names for pushing out-of-store updates to apps. Currently we do not offer such a solution out of the box, but the primary blockers are not technological. Flutter supports just in time (JIT) or interpreter based execution on both Android and iOS devices. Currently we remove these libraries during --release builds, however we could easily include them. The primary blockers to this feature resolve around current quirks of the iOS ecosystem which may require apps to use JavaScript for this kind of over-the-air-updates functionality. Thankfully Dart supports compiling to JavaScript and so one could imagine several ways in which one compile parts of ones application to JavaScript instead of Dart and thus allows replacement of or augmentation with those parts in deployed binaries. This bug tracks adding some supported solution like this. I'll dupe all the other reports here.
c: new feature,engine,dependency: dart,customer: crowd,a: production,P3,team-engine,triaged-engine
high
Critical
292,563,620
flutter
a11y on Android: cut/paste not announced correctly
When text is selected in a text field and "cut" is chosen from the Local Context Menu, an announcement acknowledging the copy is expected (e.g. "cut \<text\>"). In Flutter that announcement is missing. Similarly, when "paste" is chosen from the Local Context Menu, an announcement acknowledging the paste (e.g. "pasted \<text\>") is expected. This one is also missing in Flutter.
a: text input,platform-android,engine,a: accessibility,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-android,triaged-android,fyi-text-input
low
Major
292,575,824
neovim
win: more fnamemodify() behavior
:echo fnamemodify('/foo/bar/baz', ':p') returns: /foo/bar/baz but gvim on Windows returns: C:\foo\bar\baz #7842 made progress here, but doesn't seem to cover this case.
platform:windows,filesystem
low
Minor
292,680,043
pytorch
[Feature Request] Would PackedSequence support unsorted sequences?
By default, sequences packed in `PackedSequence` will be sorted by length. However, in some cases where a datum contains two sequences (**A - B**), the descending length order of **A** is not that of **B**. This will cause some mismatches. Maybe the correct way is: `sort A -> feed into RNN -> unsort A -> sort B -> feed into RNN -> unsort B ` I am wondering if `PackedSequence` can support the **sort** and **unsort** operations automatically. Below is the code I implemented for an RNN: ```python def forward(self, input_ids: List, input_lens: List, lens_sorted=True): """Sort by input lens""" if lens_sorted: sorted_word_ids, sorted_lens = torch.autograd.Variable(torch.LongTensor(input_ids)), input_lens else: sorted_lens, sorted_idx = torch.sort(torch.autograd.Variable(torch.LongTensor(input_lens)), 0, descending=True) sorted_lens = sorted_lens.cpu().data.numpy().tolist() sorted_word_ids = torch.index_select(torch.autograd.Variable(torch.LongTensor(input_ids)), dim=0, index=sorted_idx) unsorted_idx = torch.zeros(sorted_idx.size()).long() \ .scatter_(0, sorted_idx.cpu().data, torch.LongTensor(list(range(len(input_ids))))) """Calculating the hidden and outputs""" # ...... """Unsort by input lens""" if lens_sorted: unsorted_outputs, unsorted_hidden = outputs, hidden else: unsorted_outputs = torch.index_select(outputs, dim=0, index=torch.autograd.Variable(unsorted_idx)) unsorted_hidden = torch.index_select(hidden, dim=1, index=torch.autograd.Variable(unsorted_idx)) return unsorted_outputs, unsorted_hidden ``` cc @albanD @mruberry
feature,module: nn,triaged
low
Minor
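A minimal sketch of the sort/pack/unsort pattern the issue above asks `PackedSequence` to automate, written against the current PyTorch API rather than the Variable-era code in the report. The helper name and shapes are illustrative, and later PyTorch releases added an `enforce_sorted=False` flag to `pack_padded_sequence` that automates exactly this step.

```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

def run_rnn_unsorted(rnn, padded, lengths):
    """Run a batch_first RNN over padded sequences that are NOT sorted by length."""
    lengths = torch.as_tensor(lengths)
    sorted_lens, sort_idx = lengths.sort(descending=True)            # sort
    packed = pack_padded_sequence(padded[sort_idx], sorted_lens.tolist(), batch_first=True)
    out_packed, _ = rnn(packed)                                       # feed into RNN
    out, _ = pad_packed_sequence(out_packed, batch_first=True)
    unsort_idx = sort_idx.argsort()                                   # inverse permutation
    return out[unsort_idx]                                            # unsort to original order

# usage sketch with made-up sizes; sequences A and B can each go through this helper
# rnn = nn.LSTM(16, 32, batch_first=True)
# out_a = run_rnn_unsorted(rnn, torch.zeros(4, 10, 16), [3, 10, 7, 5])
```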
292,707,330
pytorch
[feature request] Type-1 Multi-layer bidirectional RNN
Hello, I request another type of multi-layer bidirectional RNN. Currently the forward output and backward output are concatenated after each layer. But for language modeling, we need independent forward and backward RNNs until the last layer, with the outputs concatenated only at the last layer. Thank you. cc @ezyang @gchanan @zou3519 @csarofeen @ptrblck
module: cudnn,module: rnn,triaged,function request
medium
Critical
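A rough sketch of the requested "type-1" behavior built from two independent unidirectional LSTM stacks, with the two directions concatenated only after the last layer. This is only an illustration of the idea, not a cuDNN-fused implementation, and all module names and sizes are made up.

```python
import torch
from torch import nn

class IndependentBiRNN(nn.Module):
    """Forward and backward stacks stay separate across all layers; outputs are
    concatenated once, after the last layer (unlike nn.LSTM(bidirectional=True),
    which concatenates the directions after every layer)."""

    def __init__(self, input_size, hidden_size, num_layers):
        super().__init__()
        self.fwd = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.bwd = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)

    def forward(self, x):                                  # x: (batch, seq, feat)
        out_f, _ = self.fwd(x)
        out_b, _ = self.bwd(torch.flip(x, dims=[1]))       # run on the time-reversed input
        out_b = torch.flip(out_b, dims=[1])                # re-align in time
        return torch.cat([out_f, out_b], dim=-1)

# usage sketch: y = IndependentBiRNN(16, 32, 3)(torch.randn(4, 10, 16))  # y: (4, 10, 64)
```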
292,792,500
vscode
Extensions: "group" should provide intellisense to show valid groups to contribute menus to
![image](https://user-images.githubusercontent.com/900690/35570021-2f201ad8-05ce-11e8-8cdf-c7903449d14b.png) /cc @jrieken
help wanted,feature-request,menus
low
Minor
292,834,827
vue-element-admin
η‚Ήε‡»εˆ‡ζ’θ·―η”±ηš„ζ—Άε€™οΌŒζœ‰ζ—Άε€™δΌšζŠ₯ι”™οΌŒError: Loading chunk 1 failed. at HTMLScriptElement.d (bootstrap 7d5ba07478b35f182b62:103)
εͺζœ‰εœ¨ ζ‰“εŒ…εŽηš„ηΊΏδΈŠηŽ―ε’ƒδΌšε‡Ίι”™οΌŒη”¨ηš„ζ˜―addRouter εŠ¨ζ€ ζ·»εŠ ηš„οΌŒδΈ‡εˆ†ζ„Ÿθ°’ζ₯ΌδΈ»ζŒ‡ε―ΌοΌŒθ°’θ°’ Error: Loading chunk 1 failed. at HTMLScriptElement.d (bootstrap 7d5ba07478b35f182b62:103)
help wanted
high
Critical
292,854,717
go
x/build/cmd/coordinator: detect temp dir leaks
The build infrastructure could explicitly set $TMPDIR (or %TMP% on Windows) to an empty directory and at the end of a successful build check that the directory is still empty. If not, it could fail. Or we could do it in cmd/dist test, perhaps optionally. /cc @ianlancetaylor @alexbrainman
Builders
low
Major
292,911,309
rust
cargo not built when a target is specified
`./x.py build src/tools/cargo` completes without error and without building `cargo` when a target is specified via the command line or via a `config.toml`. I'd expect that `cargo` would build the same regardless of what target is specified, but instead it does not build at all when a target is specified. Example: ``` ➜ rust git:(master) βœ— ./x.py build src/tools/cargo --target=x86_64-unknown-fuchsia Updating submodules Finished dev [unoptimized] target(s) in 0.0 secs Build completed successfully in 0:00:04 ➜ rust git:(master) βœ— ./x.py build src/tools/cargo Updating submodules Finished dev [unoptimized] target(s) in 0.0 secs Building stage0 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Finished release [optimized] target(s) in 0.0 secs Copying stage0 std from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Building stage0 test artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Finished release [optimized] target(s) in 0.0 secs Copying stage0 test from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Building stage0 compiler artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Finished release [optimized] target(s) in 0.0 secs Copying stage0 rustc from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Building stage0 codegen artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu, llvm) Finished release [optimized] target(s) in 0.0 secs Assembling stage1 compiler (x86_64-unknown-linux-gnu) Building stage1 std artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Finished release [optimized] target(s) in 0.0 secs Copying stage1 std from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Building stage1 test artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Finished release [optimized] target(s) in 0.0 secs Copying stage1 test from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Building stage1 compiler artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Finished release [optimized] target(s) in 0.0 secs Copying stage1 rustc from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Building stage1 codegen artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu, llvm) Finished release [optimized] target(s) in 0.0 secs Assembling stage2 compiler (x86_64-unknown-linux-gnu) Uplifting stage1 std (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Copying stage2 std from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Uplifting stage1 test (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Copying stage2 test from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Uplifting stage1 rustc (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu) Copying stage2 rustc from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu) Building stage2 tool cargo (x86_64-unknown-linux-gnu) Compiling cargo v0.26.0 (file:///usr/local/google/home/cramertj/src/rust/src/tools/cargo) Finished release [optimized] target(s) in 13.21 secs Build completed successfully in 0:00:23 ``` cc @pylaligand
C-enhancement,A-cross,T-bootstrap,T-infra
low
Critical
292,936,599
vscode
Support suppressing commit character from being inserted during completion
I have a completion provider that inserts parens when completing method calls, eg. `myFunc()`. Today I tried adding `(` as a commit character so that if I've typed `myFun` and press `(` then it'll still complete. However this results in `myFunc()()` because the completion text includes the parens, the commit character then gets added, and then the automatic closing paren. Although I could remove the parens from the completion, then they wouldn't be inserted if you hit `<enter>` either. @mjbvz suggested raising this as he may have had a similar issue with snippets in JS/TS. Is there a good workaround for this, or can the API be extended in some way to handle this?
feature-request,api,suggest,api-proposal
medium
Critical
292,972,346
pytorch
[Feature request] Optimize autograd/ATen when a gradient is clearly zero
When an input parameter ***clearly*** has zero gradient (not due to numerical coincidence), the current ATen guideline requires the backward function to return a zero tensor. Since there is no special treatment of zero tensors, autograd engine will still trace back the entire graph supporting that variable. This can happen easily when ATen functions have multiple outputs (e.g. many double backward functions). For example, ```python i = deep_nn_1(...) w = deep_nn_2(...) o = conv(i, w) loss = autograd.grad(o.sum(), i, create_graph=True).sum() loss.backward() ``` In conv double backward, `gI` only depends on `gO` and `ggW`, and `gW` only depends on `gO` and `ggI`. In this case, `ggW` is zero when doing the double backward, which means that `gI` should be zero and `deep_nn_2` doesn't need to be traversed at all. However, ATen conv double backward still [would output a zero `gI`](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Convolution.cpp#L686-L688) even if it already has the check on `ggW.defined()` to compute `gI`. Then the autograd engine will go through `deep_nn_2` unnecessarily. cc @ezyang @SsnL @albanD @zou3519 @gqchen
feature,module: autograd,triaged
low
Minor
293,000,495
pytorch
[docs] Docs website search finds duplicates and produces bad snippets
Searching for `torch.nn.functional.conv2d` returns results with 4 identical links to http://pytorch.org/docs/master/nn.html?highlight=torch%20nn%20functional%20conv2d#torch.nn.functional.conv2d Link with results is: http://pytorch.org/docs/master/search.html?q=torch.nn.functional.conv2d&check_keywords=yes&area=default This is true for master and 0.3 cc @ezyang @zou3519 @jlin27 @mruberry
module: docs,triaged,module: doc infra
low
Major
293,030,789
godot
[3.x] Script Editor: Completion tooltip can be cut by edges of the editor
**Godot version:** 3.0 Stable **OS/device including version:** Windows 10 **Issue description:** The function tooltip gets cut off at the bottom of the script editor. See the attachment. ![comeon](https://user-images.githubusercontent.com/21234930/35605741-9a13d8b2-05ff-11e8-82de-2c2c1acaed82.png)
bug,topic:editor,confirmed,usability
low
Major
293,081,397
pytorch
[Feature Request] Extract glimpses from a batch of images (as in tf.image.extract_glimpse)
I think something similar to Tensorflow's extract_glimpse function would be useful. I'd like to extract a single patch from an image, but I'd like to do it to a batch of images at once with control over where the patch is centered on each image. This is useful in models that select individual patches of images/feature maps to focus on. https://www.tensorflow.org/api_docs/python/tf/image/extract_glimpse There doesn't seem to be a way to do this in Pytorch without a loop which is a bit slow. Or is there? cc @fmassa @vfdev-5
triaged,module: vision,function request
low
Major
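One possible loop-free sketch of the requested behavior, using `affine_grid` and `grid_sample` to crop one patch per image in a single batched call. It assumes square glimpses, float pixel-coordinate centers, and zero padding where a patch extends past the border; the function name and shapes are illustrative, and this is not a drop-in equivalent of `tf.image.extract_glimpse`.

```python
import torch
import torch.nn.functional as F

def extract_glimpses(images, centers, patch_size):
    """images: (N, C, H, W); centers: (N, 2) pixel coordinates (x, y) of each patch center."""
    n, c, h, w = images.shape
    theta = images.new_zeros(n, 2, 3)
    theta[:, 0, 0] = float(patch_size) / w                   # horizontal scale of the crop window
    theta[:, 1, 1] = float(patch_size) / h                   # vertical scale of the crop window
    theta[:, 0, 2] = 2.0 * centers[:, 0] / (w - 1) - 1.0     # center x in [-1, 1] coordinates
    theta[:, 1, 2] = 2.0 * centers[:, 1] / (h - 1) - 1.0     # center y in [-1, 1] coordinates
    grid = F.affine_grid(theta, torch.Size((n, c, patch_size, patch_size)))
    return F.grid_sample(images, grid)                       # (N, C, patch_size, patch_size)

# usage sketch with made-up data:
# imgs = torch.randn(8, 3, 64, 64)
# patches = extract_glimpses(imgs, torch.full((8, 2), 32.0), patch_size=16)
```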
293,204,680
pytorch
Speed up data loading for `TensorDataset` if the underlying dataset supports indexing by a list of indices
Looking at https://github.com/pytorch/pytorch/blob/5b43c22f73279c67084a2357a489420c705cb84f/torch/utils/data/dataloader.py#L259 The loader fetches one row at a time of the data set, and then combine them into a minibatch. It is quite inefficient if the underlying data set already supports indexing by a list of indices. If there are a lot of elements in a row, e.g. image data, it is relatively OK since it takes more time to process (infer + back-propagate in GPU) about the data. However if the feature dimension is small, say < 100, then data loading becomes the bottleneck. An alternative is to do the following ```python # batch = self.collate_fn([self.dataset[i] for i in indices]) batch = self.dataset[indices] ``` I applied monkey patch for my specific problem. ``` def data_loader_next(self): if self.num_workers == 0: # same-process loading indices = next(self.sample_iter) # may raise StopIteration # I know that my dataset supports index by indices. # -- batch = self.collate_fn([self.dataset[i] for i in indices]) batch = self.dataset[indices] if self.pin_memory: batch = pin_memory_batch(batch) return batch # check if the next sample has already been generated if self.rcvd_idx in self.reorder_dict: batch = self.reorder_dict.pop(self.rcvd_idx) return self._process_next_batch(batch) if self.batches_outstanding == 0: self._shutdown_workers() raise StopIteration while True: assert (not self.shutdown and self.batches_outstanding > 0) idx, batch = self.data_queue.get() self.batches_outstanding -= 1 if idx != self.rcvd_idx: # store out-of-order samples self.reorder_dict[idx] = batch continue return self._process_next_batch(batch) DataLoaderIter.next = data_loader_next DataLoaderIter.__next__ = data_loader_next ``` It speeds up the data loading by 5 times while `using num_workers=0`. cc @SsnL @VitalyFedyunin @ngimel @mruberry
module: performance,module: dataloader,triaged
low
Minor
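A minimal sketch of the batched-indexing idea from the issue above, written as a standalone generator rather than a monkey patch on `DataLoaderIter`. It assumes the tensors fit in memory and share the same first dimension; all names are illustrative.

```python
import torch

def fast_tensor_batches(tensors, batch_size, shuffle=True):
    """Yield minibatches by indexing whole tensors with a batch of indices at once,
    instead of fetching one row at a time and collating."""
    n = tensors[0].size(0)
    order = torch.randperm(n) if shuffle else torch.arange(n)
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        yield tuple(t[idx] for t in tensors)

# usage sketch with made-up feature/label tensors:
# X, y = torch.randn(10000, 50), torch.randint(0, 2, (10000,))
# for xb, yb in fast_tensor_batches((X, y), batch_size=256):
#     ...  # train step
```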
293,220,925
go
proposal: spec: introduce structured tags
This proposal is for a new syntax for struct tags, one that is formally defined in the grammar and can be validated by the compiler. ## Problem The current struct tag format is defined in the spec as a string literal. It doesn't go into any detail of what the format of that string might look like. If the user somehow stumbles upon the reflect package, a simple space-separated, key:"value" convention is mentioned. It doesn't go into detail about what the value might be, since that format is at the discretion of the package that uses said tag. There will never be a tool that will help the user write the value of a tag, similarly to what gocode does with regular code. The format itself might be poorly documented, or hard to find, leading one to guess what can be put as a value. The reflect package itself is probably not the biggest user-facing package in the standard library as well, leading to a plethora of stackoverflow questions about how multiple tags can be specified. I myself have made the error a few times of using a comma to delimit the different tags. ## Proposal EDIT: the original proposal introduced a new type. After the initial discussion, it was decided that there is no need for a new type, as a struct type or custom types whose underlying types can be constant (string/numeric/bool/...) will do just as well. A tag value can be either a struct, whose field types can be constant, or custom types, whose underlying types are constant. According to the go spec, that means a field/custom type can be either a string, a boolean, a rune, an integer, a floating-point, or a complex number. Example definition and usage: ```go package json type Rules struct { Name string OmitEmpty bool Ignore bool } func processTags(f reflect.StructField) { // reflect.StructField.Tags []interface{} for _ ,t := range f.Tags { if jt, ok := t.(Rules); ok { ... break } } } ``` ```go package sqlx type Name string ``` Users can instantiate values of such types within `struct` definitions, surrounded by `[` and `]` and delimited by `,`. The type cannot be omitted when the value is instantiated. ```go package mypackage import json import sqlx type MyStruct struct { Value string [json.Rules{Name: "value"}, sqlx.Name("value")] PrivateKey []byte [json.Rules{Ignore: true}] } ``` ## Benefits Tags are just types, they are clearly defined and are part of a package's types. Tools (such as gocode) may now be made for assisting in using such tags, reducing the cognitive burden on users. Package authors will not need to create "value" parsers for their supported tags. As a type, a tag is now a first-class citizen in godoc. Even if a tag lacks any kind of documentation, a user still has a fighting chance of using it, since they can now easily go to do definition of a tag and just look up its fields, or see the definition in godoc. Finally, if the user has misspelled something, the compiler will now inform them of an error, instead of it occurring either at runtime, or being silently ignored as is the case right now. ## Backwards compatibility To preserve backwards compatibility, string-based tags will not be removed, but merely deprecated. To ensure a unified behavior across libraries, their authors should ignore any string-based tags if any of their recognized structured tags have been included for a field. 
For example: ```go type Foo struct { Bar int `json:"bar" yaml:"bar,omitempty"` [json.OmitEmpty] } ``` A hypothetical json library, upon recognizing the presence of the `json.OmitEmpty` tag, should not bother looking for any string-based tags. Whereas the yaml library in this example will still use the defined string-based tag, since no structured yaml tags it recognizes have been included by the struct author. ## Side note This proposal is strictly for replacing the current struct tags. While the tag grammar can be extended to be applied to a lot more things than struct tags, this proposal is not suggesting that it should, and such a discussion should be done in a different proposal.
LanguageChange,Proposal,LanguageChangeReview
high
Critical
293,292,611
flutter
Add an example of a dynamically-updatable Simulation subclass
I'm trying to build a simple app which animates a red circle following the user's finger as it's dragged vertically on the screen. The problem I'm having is that the animation stops completely while the finger is dragged across the screen. As soon as dragging stops, the animation resumes normally. I believe the problem is that there is no way to update the current SpringSimulation with a new target value, but rather I'm forced to create a new one on each "drag" event. This resets the internal simulation timer to zero, causing the movement to stop. From quick experimentation I couldn't get tween animations to work either, so I believe they have the same problem. ![flutter](https://user-images.githubusercontent.com/1006207/35668561-34d16dbe-0732-11e8-89f2-5930b83cad07.gif) ## Source code ```dart import 'package:flutter/material.dart'; import 'package:flutter/physics.dart'; void main() => runApp(new MyApp()); class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return new MaterialApp( title: 'Flutter Demo', theme: new ThemeData( primarySwatch: Colors.blue, ), home: new Signature(), ); } } class Signature extends StatefulWidget { SignatureState createState() => new SignatureState(); } class SignatureState extends State<Signature> with SingleTickerProviderStateMixin { double _y = 0.0; Widget build(BuildContext context) { return new GestureDetector( onPanUpdate: (DragUpdateDetails details) { RenderBox referenceBox = context.findRenderObject(); Offset localPosition = referenceBox.globalToLocal(details.globalPosition); springSimulation = new SpringSimulation( spring, _y, localPosition.dy, animationController.velocity); animationController.animateWith(springSimulation); }, child: new CustomPaint(painter: new SignaturePainter(_y)), ); } SpringDescription spring = new SpringDescription( mass: 1.0, stiffness: 100.0, damping: 10.0); SpringSimulation springSimulation; AnimationController animationController; initState() { super.initState(); animationController = new AnimationController( vsync: this, lowerBound: double.NEGATIVE_INFINITY, upperBound: double.INFINITY, ); animationController.addListener(() { print(animationController.value); setState(() { print(animationController.value); _y = animationController.value; }); }); } } class SignaturePainter extends CustomPainter { final double y; SignaturePainter(this.y); void paint(Canvas canvas, Size size) { var paint = new Paint() ..color = Colors.red ..strokeCap = StrokeCap.round ..strokeWidth = 20.0; canvas.drawCircle(new Offset(size.width / 2, y), 20.0, paint); } bool shouldRepaint(SignaturePainter other) => other.y != y; } ``` ## Flutter Doctor ``` [βœ“] Flutter (on Linux, locale en_GB.UTF-8, channel alpha) β€’ Flutter version 0.0.21 at /home/cachapa/flutter β€’ Framework revision 2e449f06f0 (2 days ago), 2018-01-29 14:26:51 -0800 β€’ Engine revision 6921873c71 β€’ Tools Dart version 2.0.0-dev.16.0 β€’ Engine Dart version 2.0.0-edge.da1f52592ef73fe3afa485385cb995b9aec0181a [βœ“] Android toolchain - develop for Android devices (Android SDK 27.0.1) β€’ Android SDK at /home/cachapa/Android/Sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-27, build-tools 27.0.1 β€’ Java binary at: /opt/android-studio/jre/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01) [βœ“] Android Studio (version 3.0) β€’ Android Studio at /opt/android-studio β€’ Java version OpenJDK Runtime Environment 
(build 1.8.0_152-release-915-b01) [βœ“] Connected devices β€’ Pixel 2 β€’ FA79L1A10786 β€’ android-arm β€’ Android 8.1.0 (API 27) ```
framework,a: animation,d: api docs,c: proposal,P3,team-framework,triaged-framework
low
Major
293,297,130
flutter
Background text contrast should be increased in Date and time pickers
I got that piece of feedback from an a11y review. Are we following material guidelines here?
framework,f: material design,a: accessibility,f: date/time picker,c: proposal,P2,team-design,triaged-design
low
Minor
293,337,440
vscode
[json] schema Validation/Intellisense very slow when JSON deep and Schema Complex
<!-- Do you have a question? Please ask it on https://stackoverflow.com/questions/tagged/vscode. --> In my project we design an API that allows people to define web pages in JSON using a bunch of nested components. We have schema files for all components and how to combine them. When merged into 1 schema file it becomes around 7000 lines long. I am noticing a huge slowdown when a file has JSON that is relatively deep. Here is what it looks like when the depth is still okay ![fast](https://user-images.githubusercontent.com/1192452/35651240-d51b3006-0693-11e8-91d7-0a8ad9c4b07c.gif) And here is what it looks like when a deep component is defined in the same file ![slow](https://user-images.githubusercontent.com/1192452/35651340-36c7e646-0694-11e8-94d1-1c9e00ac4011.gif) <!-- Use Help > Report Issue to prefill these. --> - VSCode Version: Version 1.20.0-insider - OS Version: 10.13.3 Steps to Reproduce: 1. Set the schema for JSON to some large and complex schema 2. Open up a JSON file that uses the schema 3. Write enough JSON so that you are nested a bunch 4. Try to use IntelliSense - the whole editor slows down - the IntelliSense suggestion doesn't pop up until validation is done - it even looks like it validates the file a few times while you sit there Is it possible to just turn off JSON validation and keep the IntelliSense? Or just validate on file save? Could we show IntelliSense suggestions before validation happens? Also, when I am editing 2 JSON files the slowness is shared by the two. Even after closing the file that made everything slow, I still have to wait for some validation thing to finish before it realizes the file is closed and updates the UI. If needed I can try to reproduce this bug with generic schema files. What seems to be the issue is when I have a Collection component. This component can hold any other component, including more nested collections. If I nest a bunch of collections the problem presents itself. <!-- Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes/No Yes
bug,freeze-slow-crash-leak,json
low
Critical
293,349,921
TypeScript
Incorrect type inference for array rest assignment and inability to add annotation
<!-- 🚨 STOP 🚨 𝗦𝗧𝗒𝗣 🚨 𝑺𝑻𝑢𝑷 🚨 --> <!-- Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the CONTRIBUTING guidelines: https://github.com/Microsoft/TypeScript/blob/master/CONTRIBUTING.md * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ --> <!-- If you have a QUESTION: THIS IS NOT A FORUM FOR QUESTIONS. Ask questions at http://stackoverflow.com/questions/tagged/typescript or https://gitter.im/Microsoft/TypeScript --> <!-- If you have a SUGGESTION: Most suggestion reports are duplicates, please search extra hard before logging a new suggestion. See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> <!-- If you have a BUG: Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.7.1-insiders.20180127 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** array destructuring rest assignment **Code** ```ts // A *self-contained* demonstration of the problem follows... // Test this by running `tsc` on the command-line, rather than through another build tool such as Gulp, Webpack, etc. const [stringA, ...numbers] = ['string', 1, 2]; // const [stringA, ...numbers] = <[string, number, number]>['string', 1, 2]; // const [stringA, ...numbers]: [string, number[]] = ['string', 1, 2]; // const [stringA, ...numbers]: [string, [number, number]] = ['string', 1, 2]; ``` **Expected behavior:** Expect `numbers` to be of type `number[]` rather than` (string | number)[]`. TS also does not allow annotating as a tuple **Actual behavior:** const numbers: (string | number)[] **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> http://www.typescriptlang.org/play/index.html#src=const%20%5BstringA%2C%20...numbers%5D%20%3D%20%5B'string'%2C%201%2C%202%5D%3B%0A%2F%2F%20const%20%5BstringA%2C%20...numbers%5D%20%3D%20%3C%5Bstring%2C%20number%2C%20number%5D%3E%5B'string'%2C%201%2C%202%5D%3B%0A%2F%2F%20const%20%5BstringA%2C%20...numbers%5D%3A%20%5Bstring%2C%20number%5B%5D%5D%20%3D%20%5B'string'%2C%201%2C%202%5D%3B%0A%2F%2F%20const%20%5BstringA%2C%20...numbers%5D%3A%20%5Bstring%2C%20%5Bnumber%2C%20number%5D%5D%20%3D%20%5B'string'%2C%201%2C%202%5D%3B **Related Issues:**
Bug
low
Critical
293,367,860
vscode
searching for "IntelliSense" or "completions" doesn't provide expected result
I had someone ask how to turn off completions for Dockerfiles and it turns out the setting `editor.quickSuggestions` controls whether or not the completion list shows up. ``` json // Controls if suggestions should automatically show up while typing "editor.quickSuggestions": { "other": true, "comments": false, "strings": false }, ``` It took me a long time to find this setting because I searched for "completion" "IntelliSense" and probably a bunch of other things to no avail. With the bing search now in place, I would have expected that this setting would show up for "completion" or "intellisense", but it still doesn't. I also searched for "other" but it doesn't show up. If I search for "comments" it does (but it looks like a result of bing search).
bug,settings-editor,confirmed,settings-search
low
Major
293,459,753
rust
Unergonomic structured suggestions in rustc
While checking `span_help`s that could be `span_suggestion` I noticed that rustc contains lots of code similar to ```rust match fcx.tcx.sess.codemap().span_to_snippet(self.cast_span) { Ok(s) => { err.span_suggestion(self.cast_span, "try casting to a reference instead", format!("&{}{}", mtstr, s)); } Err(_) => { span_help!(err, self.cast_span, "did you mean `&{}{}`?", mtstr, tstr) } } ``` where we try to get a snippet and if that fails, since we can't produce a nice suggestion, we produce a `help` message that contains a message. We should probably provide a helper for that. First I thought that we could add a helper that does essentially the above without all the duplication, but with the new approximate suggestions (#47540) we can always produce the suggestion, but mark it as approximate if we need to use the fallback value. I'd assume the above example would look something like this: ```rust err.span_possibly_approximate_suggestion( self.cast_span, // span to replace "try casting to a reference instead", // message fcx.tcx.sess.codemap().span_to_snippet(self.cast_span).ok(), // optional snippet tstr, // default if snippet is none |snip| format!("&{}{}", mtstr, snip), // closure taking snippet and producing the replacement code ); ``` cc @Manishearth @nrc
C-cleanup,A-diagnostics,T-compiler
low
Minor
293,519,989
angular
[Feature request] router resolver should be cancellable
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [x] Feature request [ ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question </code></pre> I searched around the issues regarding this and did find one, but it was closed (https://github.com/angular/angular/issues/13056). I believe it should rather be a feature request. It is completely legitimate to expect that the ongoing route and the observable sitting inside the resolve function should be discarded if the user changes their mind and proceeds to another route. YouTube is an example. ## Current behavior <!-- Describe how the issue manifests. --> The scenario is that the user clicks a slow route link that fetches data necessary for a component, and while that is resolving the user hits back, or clicks on another link. I would then want to cancel the current resolve and begin the new routing. To demonstrate: https://stackblitz.com/edit/angular-u6zquh To try to solve the problem, I inject the router events into the SomeResolve class, hoping to do something like... (below is pseudo-code) ````ts @Injectable() export class SomeResolve implements Resolve<any> { constructor(private someService$: SomeService, private router$: Router) { } resolve() { return this.someService$ .takeUntil( this.router$.events.pipe( filter(NavigationStart) ) ); } } ```` It seems that all the router events are blocked until the resolver completes. Below is quoted from the Angular.io router and navigation section. > The Observable provided to the Router must complete. If the Observable does not complete, the navigation will not continue. So, this is not a solution because no events are emitted before the resolver function completes. The user still needs to wait for the ajax/observable to complete before proceeding. ## Expected behavior <!-- Describe what the desired behavior would be. --> The router events should not be blocked, or any other approach that leads to a cancellable router resolver. <pre><code> Angular version: 5.2.0 <!-- Check whether this is still an issue in the most recent Angular version -->
feature,freq3: high,area: router,feature: under consideration,feature: votes required
medium
Critical
293,555,084
vscode
Separate tab size and indent size
Visual Studio Code does not appear to have the ability to do the Tools->Options Tab settings of Visual Studio Professional, where you can specify that tabs are 8 spaces but the indent size is 4. All our code uses that style and many other editors let you do this, but Visual Studio Code doesn't seem to support this. I've tried lots of things in Visual Studio Code but not found any combination of settings that works. Using Visual Studio Code 1.19.3 (and also Visual Studio 2015 (and Professional 2012)). In Visual Studio, setting tabs at 8 and indent at 4 means (for a blank line): - Press the tab key once, indent of 4 spaces. - Press tab again, indent of one tab (removes 4 spaces, adds a tab). - Press tab a third time, indent is a tab + 4 spaces.
feature-request,editor-core
high
Critical
293,583,217
pytorch
MultiGPU hangs Titan Xp in multiprocessing/queue.py
PyTorch GitHub Issues Guidelines -------------------------------- We like to limit our issues to bug reports and feature requests. If you have a question or would like help and support, please visit our forums: https://discuss.pytorch.org/ If you are submitting a feature request, please preface the title with [feature request]. When submitting a bug report, please include the following information (where relevant): - OS: - PyTorch version: - How you installed PyTorch (conda, pip, source): - Python version: - CUDA/cuDNN version: - GPU models and configuration: - GCC version (if compiling from source): In addition, including the following information will also be very helpful for us to diagnose the problem: - A script to reproduce the bug. Please try to provide as minimal of a test case as possible. - Error messages and/or stack traces of the bug - Context around what you are trying to do **Hi there,** I have been trying for about a week and a half to get all 3 of my NVIDIA Titan Xp, but the process always hangs in the multiprocessing directory in the "queues.py" and the "synchronizing.py" whenever I use more than 1 of them. MY SETUP: - OS: macOS Sierra 10.12.5 - PyTorch version: 0.3.0.post4 - PyTorch installation via pip -Python version: 3.5 -CUDA/cuDNN version: CUDA 8.0.61 and torch.backends.cudnn.version() = 7003 -GPU models and configuration: 3 NVIDIA Titan Xps installed on PCI express board This problem seems to occur whenever I use the data parallel functionality in PyTorch. It happens on very simple calculations and in things like ConvNets etc. It loads the network architecture and then gets stuck at what seems to be the beginning of training the network. > Traceback (most recent call last): File "train.py", line 229, in <module> output = netD(inputv) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__ result = self.forward(*input, **kwargs) File "/home/chris/Documents/MPhilProjects/GANs/PyTorchApproach/DCGAN-pytorch/model.py", line 110, in forward output = nn.parallel.data_parallel(self.main, input, range(self.ngpu)) File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 113, in data_parallel outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids) File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 59, in parallel_apply thread.join() File "/usr/lib/python3.5/threading.py", line 1054, in join Traceback (most recent call last): File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap self.run() File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 36, in _worker_loop r = index_queue.get() File "/usr/lib/python3.5/multiprocessing/queues.py", line 343, in get res = self._reader.recv_bytes() File "/usr/lib/python3.5/multiprocessing/connection.py", line 216, in recv_bytes buf = self._recv_bytes(maxlength) File "/usr/lib/python3.5/multiprocessing/connection.py", line 407, in _recv_bytes buf = self._recv(4) File "/usr/lib/python3.5/multiprocessing/connection.py", line 379, in _recv chunk = read(handle, remaining) KeyboardInterrupt self._wait_for_tstate_lock() File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock Process Process-2: elif lock.acquire(block, timeout): KeyboardInterrupt Traceback (most recent call last): File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in 
_bootstrap self.run() File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 36, in _worker_loop r = index_queue.get() File "/usr/lib/python3.5/multiprocessing/queues.py", line 342, in get with self._rlock: File "/usr/lib/python3.5/multiprocessing/synchronize.py", line 96, in __enter__ return self._semlock.__enter__() KeyboardInterrupt Process Process-5: Traceback (most recent call last): File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap self.run() File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 36, in _worker_loop r = index_queue.get() File "/usr/lib/python3.5/multiprocessing/queues.py", line 342, in get with self._rlock: File "/usr/lib/python3.5/multiprocessing/synchronize.py", line 96, in __enter__ return self._semlock.__enter__() KeyboardInterrupt ^CException ignored in: <module 'threading' from '/usr/lib/python3.5/threading.py'> Traceback (most recent call last): File "/usr/lib/python3.5/threading.py", line 1288, in _shutdown t.join() File "/usr/lib/python3.5/threading.py", line 1054, in join self._wait_for_tstate_lock() File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock elif lock.acquire(block, timeout): KeyboardInterrupt I tried looking at the connections b/w them and got the following result: ![image](https://user-images.githubusercontent.com/6062598/35688012-1823ef76-0768-11e8-8fca-c6147df47a80.png) I tried testing the p2pBandwidth example that comes in cuda and got the following result: ![image](https://user-images.githubusercontent.com/6062598/35688187-786df2aa-0768-11e8-87d0-a6c7462df3bd.png) But I am still essentially clueless as to why they won't work together. Any advice / solutions would be **greatly** appreciated! Best, Michael cc @ngimel
module: multiprocessing,module: cuda,triaged,module: macos
medium
Critical
293,600,274
rust
In debug mode, print a message for attempted unwind past FFI call
See https://github.com/rust-lang/rust/issues/47616 and https://github.com/rust-lang/rust/pull/46833 . Rust now generates an abort rather than attempting to unwind through an FFI call. In debug mode, could we generate a friendly error message explaining that, rather than just aborting? (In release mode we should continue to just abort.)
C-enhancement,A-diagnostics,T-compiler
low
Critical
293,630,879
rust
Editing proc-macro crate = undefined symbol
Note: This is with incremental-comp turned off, and a compiler built yesterday. Occasionally when working on Diesel's codegen crates, I'll get an error like this: ``` error: dlsym(0x113560e00, __rustc_derive_registrar__55df2b44c5129c66de0914ff53563457_89): symbol not found --> diesel/src/lib.rs:121:1 | 121 | extern crate diesel_derives2; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` The only fix when this happens is to `cargo clean`. It randomly happens when I change some code, but not 100% of the time.
A-macros-2.0,C-bug
low
Critical
293,639,360
rust
"errror: reference to '...' is ambiguous" message in LLDB
When I try to print a local variable in LLDB, e.g. with `po foo`, and there's a function with the same name, I get this message: ``` error: reference to 'foo' is ambiguous candidate found by name lookup is 'foo' candidate found by name lookup is 'some_module::some_struct::{{impl}}::foo' ```
C-enhancement,T-dev-tools
low
Critical
293,640,047
vscode
[json] improve property suggestions with oneOf
<!-- Do you have a question? Please ask it on https://stackoverflow.com/questions/tagged/vscode. --> <!-- Use Help > Report Issue to prefill these. --> - VSCode Version: Version 1.20.0-insider - OS Version: 10.13.3 Steps to Reproduce: 1. Use this JSON schema ```json { "$schema": "http://json-schema.org/draft-04/schema#", "type": "object", "oneOf": [ { "title": "Wrapper", "type": "object", "required": [ "asset" ], "properties": { "asset": { "type": "object" } } }, { "title": "No Wrapper", "type": "object", "properties": { "id": { "type": "string" } } } ] } ``` 2. Start writing JSON using this schema and type: ```json { "" } ``` 3. Only id is suggested as an option; I expect both asset and id to be suggested. NOTE: if I make "id" also required, I get the correct suggestions. <!-- Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes/No Yes
feature-request,json
medium
Major