id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
269,302,686 | TypeScript | `SVGElement.className` should be read-only | When trying to assign a value to it, the browser yells: `TypeError: Cannot assign to read only property 'className' of object '#<SVGAElement>'`.
Furthermore, the type of the `className` property is set as `any`, but it should be `SVGAnimatedString`. | Bug,Help Wanted,Domain: lib.d.ts | low | Critical |
269,306,626 | rust | Use the system's page size instead of a hard-coded constant for File I/O? | Hi, I have a question regarding the following constant that is used throughout the `BufReader`:
https://github.com/rust-lang/rust/blob/6ccfe68076abc78392ab9e1d81b5c1a2123af657/src/libstd/sys_common/io.rs#L10
Shouldn't this constant be the system's page size (determined at runtime) rather than a hard-coded 8KB, for better I/O performance? That would avoid the system accidentally splitting memory between pages that should really be contiguous on one page. For example, if my page size were 16KB, the system could allocate the memory in an unfortunate way so that the 8KB from the `BufReader` ends up split across two pages. With a 16KB buffer, the system could map it 1:1 to a memory page. Why is the `BufReader` size hard-coded to 8KB? | I-slow,C-enhancement,T-libs,A-io | low | Major |
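The runtime lookup the reporter is asking about is cheap on most platforms. Here is a minimal sketch — in Python, purely for illustration; the Rust version would query the OS similarly (e.g. via a libc call) — of picking a buffer size rounded up to a whole number of system pages:

```python
import resource  # Unix-only; exposes the OS page size

# Ask the OS for its page size at runtime instead of hard-coding 8 KB.
page_size = resource.getpagesize()

# Round the default buffer up to a whole number of pages so the buffer
# never needlessly straddles a page boundary.
DEFAULT_BUF_SIZE = 8 * 1024
buf_size = max(DEFAULT_BUF_SIZE, page_size)
buf_size = -(-buf_size // page_size) * page_size  # ceiling to a page multiple

print(page_size, buf_size)
```

On a typical 4KB-page system this leaves the buffer at 8KB; on a 16KB-page system it grows to 16KB, which is exactly the behavior the report asks for.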
269,332,389 | go | cmd/compile: reordering struct field accesses can alter performance | ### What version of Go are you using (`go version`)?
go version devel +47c868dc1c Sat Oct 28 11:53:49 2017 +0000 linux/amd64
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/mvdan/go/land:/home/mvdan/go"
GORACE=""
GOROOT="/home/mvdan/tip"
GOTOOLDIR="/home/mvdan/tip/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build268906853=/tmp/go-build -gno-record-gcc-switches"
```
### What did you do?
https://play.golang.org/p/d5g7tuaxHW
go test -bench=.
### What did you expect to see?
Both of them performing equally.
### What did you see instead?
With 6 runs of each on an idle machine:
```
name time/op
Separate-4 2.48ns Β± 1%
Contiguous-4 2.15ns Β± 2%
```
This is a minified version of a performance issue I had in a very hot function. In particular, the function is the ASCII fast path of a rune advance method.
Here's the assembly for the two funcs:
```
"".(*T).separate STEXT nosplit size=26 args=0x10 locals=0x0
0x0000 00000 (f_test.go:9) TEXT "".(*T).separate(SB), NOSPLIT, $0-16
0x0000 00000 (f_test.go:9) MOVQ "".t+8(SP), AX
0x0005 00005 (f_test.go:10) INCQ 8(AX)
0x0009 00009 (f_test.go:11) MOVQ $0, (AX)
0x0010 00016 (f_test.go:12) MOVQ 8(AX), AX
0x0014 00020 (f_test.go:12) MOVQ AX, "".~r0+16(SP)
0x0019 00025 (f_test.go:12) RET
"".(*T).contiguous STEXT nosplit size=29 args=0x10 locals=0x0
0x0000 00000 (f_test.go:15) TEXT "".(*T).contiguous(SB), NOSPLIT, $0-16
0x0000 00000 (f_test.go:15) MOVQ "".t+8(SP), AX
0x0005 00005 (f_test.go:16) MOVQ $0, (AX)
0x000c 00012 (f_test.go:17) MOVQ 8(AX), CX
0x0010 00016 (f_test.go:17) INCQ CX
0x0013 00019 (f_test.go:17) MOVQ CX, 8(AX)
0x0017 00023 (f_test.go:18) MOVQ CX, "".~r0+16(SP)
0x001c 00028 (f_test.go:18) RET
```
Funnily enough, the faster one results in an extra instruction. How that makes sense is beyond me. Perhaps it's because the first one accesses `8(AX)` three times, and the second only twice?
My understanding of the compiler and assembly is limited, so any pointers are welcome.
/cc @randall77 @philhofer | Performance,NeedsInvestigation,compiler/runtime | low | Critical |
269,346,921 | go | proposal: encoding/json: add omitnil option | Note: This proposal already has as a [patch] from 2015 by @bakineggs, but it appears to have fallen between the cracks.
I have the following case:
```go
type Join struct {
ChannelId string `json:"channel_id"`
Accounts []Ident `json:"accounts,omitempty"`
History []TextEntry `json:"history,omitempty"`
}
```
This struct is used for message passing and the slices are only relevant (and set to non-`nil`) in some cases. However, since `encoding/json` does not differentiate between a `nil` slice and an empty slice, there will be legitimate cases where a field is excluded when it's not expected to be (e.g., the `History` slice is set, but empty).
I reiterate the proposal by Dan in his patch referenced above to support an `omitnil` option which allows this differentiation for slices and maps.
*Note for hypothetical Go 2.0:* This is already how `omitempty` works for pointers to Go's basic types (e.g., `(*int)(nil)` is omitted while pointer to `0` is not). For Go 2.0 the behavior of `omitempty` could change to omit both `nil` and `0` when specified, and then only `nil` would be omitted when `omitnil` is specified.
[patch]: https://go-review.googlesource.com/c/go/+/10686 | Proposal,Proposal-Hold | high | Critical |
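For contrast, a quick illustration — in Python, purely as an analogy, not Go code — of an encoder that does keep a nil-like value distinct from an empty collection, which is exactly the distinction `omitnil` would expose:

```python
import json

history = None   # analogous to a nil slice: the field was never set
accounts = []    # analogous to an empty, but set, slice

# Python's encoder keeps the two cases apart...
assert json.dumps(history) == "null"
assert json.dumps(accounts) == "[]"

# ...so an omitnil-style policy only needs an "is it nil?" check,
# rather than Go's len()==0 emptiness check:
def marshal(fields):
    return json.dumps({k: v for k, v in fields.items() if v is not None})

print(marshal({"accounts": accounts, "history": history}))  # → {"accounts": []}
```

The empty-but-set `accounts` survives while the never-set `history` is dropped, matching the semantics the proposal asks for.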
269,396,714 | rust | Consider more fine grained grouping for built-in lints | Copying the comment from https://github.com/rust-lang/rust/pull/45424#issuecomment-338345084:
I've also audited the remaining ungrouped lints.
In principle new lint groups can be created for them, but it looks like fine-grained lint grouping didn't find its use even in clippy, so I didn't do anything.
**Unused++.**
These can also be quite reasonably added into the `unused` group, but less obviously than those I added into it in this PR.
```
STABLE_FEATURES
RENAMED_AND_REMOVED_LINTS
UNKNOWN_LINTS
UNUSED_COMPARISONS
```
**Bad style++.**
Probably can be added into the `bad_style` group, but it currently consists only of casing-related lints.
```
NON_SHORTHAND_FIELD_PATTERNS
WHILE_TRUE
```
**Future compatibility++.**
Errors that are reported as lints for reasons unknown to me.
See the question in https://github.com/rust-lang/rust/pull/45424#discussion_r146084996 as well.
```
CONST_ERR // ?
UNKNOWN_CRATE_TYPES // Deny-by-default
NO_MANGLE_CONST_ITEMS // Deny-by-default
NO_MANGLE_GENERIC_ITEMS
```
**Restrictions.**
Something generally reasonable that can be prohibited if necessary.
```
BOX_POINTERS // Allow-by-default
UNSAFE_CODE // Allow-by-default
UNSTABLE_FEATURES // Allow-by-default
MISSING_DOCS // Allow-by-default
MISSING_COPY_IMPLEMENTATIONS // Allow-by-default
MISSING_DEBUG_IMPLEMENTATIONS // Allow-by-default
```
**Pedantic.**
Something not bad enough to always report/fix.
```
UNUSED_RESULTS // Allow-by-default
UNUSED_IMPORT_BRACES // Allow-by-default
UNUSED_QUALIFICATIONS // Allow-by-default
TRIVIAL_CASTS // Allow-by-default
TRIVIAL_NUMERIC_CASTS // Allow-by-default
VARIANT_SIZE_DIFFERENCES // Allow-by-default
UNIONS_WITH_DROP_FIELDS
```
**Obvious mistakes.**
Prevent foot shooting, some can become hard errors in principle.
```
OVERFLOWING_LITERALS
EXCEEDING_BITSHIFTS // Deny-by-default
UNCONDITIONAL_RECURSION
MUTABLE_TRANSMUTES // Deny-by-default
IMPROPER_CTYPES
PLUGIN_AS_LIBRARY
PRIVATE_NO_MANGLE_FNS
PRIVATE_NO_MANGLE_STATICS
```
**General purpose lints.**
```
WARNINGS
DEPRECATED
``` | C-enhancement,A-lints,T-lang | low | Critical |
269,404,494 | go | lib/time: update tzdata before release | The timezone database in lib/time should be updated shortly before the 1.10 release (to whatever tzdata release is current then). There was https://golang.org/cl/74230 attempting to do this just now, but it was too early. @ALTree suggested to open an issue about this, so we don't forget.
The latest available version is shown at https://www.iana.org/time-zones. | NeedsFix,release-blocker,recurring | high | Critical |
269,420,235 | pytorch | Data sampling seems to be more complicated than necessary | I love Pytorch for its flexibility and debugging friendly environment, and like Andrej Karpathy said after using Pytorch, "I have more energy. My skin is clearer. My eye sight has improved."
However, I am finding sampling from datasets a bit more convoluted than it needs to be. I was hoping for a way to efficiently extract samples (using multiprocessing) from the dataset by providing a batch index list as input, e.g.
```python
batch_indices = [5,4,2]
loader.get_indices(batch_indices)
```
where `loader` is a `DataLoader` object or a `torch.data.Dataset` object. In other words, I am looking for a simple, yet flexible sampling interface.
Currently, if I want to sample using a non-uniform distribution, first I have to define a sampler class for the `loader`, then within the class I have to define a generator that returns indices from a pre-defined list. Later, whenever the sampling distribution changes I have to re-create the sampler object that takes input values which are used to compute the new sampling distribution. I didn't find an easier way yet.
As a result, defining the data loader would be something like,
```python
loader = data.DataLoader(train_set, sampler=sampler(train_set), num_workers=2)
```
for the initialization step; and
```python
loader.sampler = sampler(train_set, some_values)
```
between epochs.
This seems to give me certain restrictions (please correct me if I am wrong),
1. dynamic sampling is not well supported with this approach - consider the case where the sampling distribution or the batch size changes after every iteration (not epoch); and
2. this makes it necessary to have an epoch-based outer loop. Once I tried to avoid having epochs by using `itertools.cycle` on a `DataLoader` with `RandomSampler`, but it gave me a bad memory leak.
Therefore, would it be an issue to have the API allow for a sampling procedure that looks like the code below? My goal is to have a flexible data sampling interface while harnessing the multiprocessing power of `DataLoader`.
```python
for i in range(n_iters):
# 1. Sample according to `probs`
indices = np.random.choice(n, batch_size, p=probs)
batch = loader.get_indices(indices)
# 2. Optimization step
opt.zero_grad()
loss = model.compute_loss(batch)
loss.backward()
opt.step()
# 3. Change probs values according to some criterion
probs = get_newProbs(probs, loss)
```
If the `DataLoader` **API** can't be changed, what if we add three extra functions to `torch.data.Dataset`?
1. `.get_indices(batch_indices, collate_fn)` - to extract a batch from indices using a collate function;
2. `.spawn_workers(num_workers)` - to initialize workers for sampling with multiprocessing; and
3. `.terminate_workers()` - to terminate the worker threads.
What do you think?
I would be happy to submit a pull request for this feature!
cc @SsnL | module: dataloader,triaged | low | Critical |
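The shape of the requested loop can be mocked with nothing but the standard library. The sketch below is illustrative only — `get_indices` is the hypothetical accessor proposed above, not an existing PyTorch API, and a plain list stands in for the `Dataset`:

```python
import random

dataset = [f"sample-{i}" for i in range(10)]  # stand-in for a Dataset

def get_indices(data, indices):
    """Hypothetical batch-by-indices accessor: gather, then 'collate'."""
    return [data[i] for i in indices]

n, batch_size = len(dataset), 3
probs = [1.0 / n] * n  # start uniform

for step in range(5):
    # 1. Sample according to the current (possibly changing) distribution
    indices = random.choices(range(n), weights=probs, k=batch_size)
    batch = get_indices(dataset, indices)
    assert len(batch) == batch_size
    # 2. (Optimization step would go here.)
    # 3. Reweight according to some criterion (here: arbitrarily upweight index 0)
    probs = [2.0 if i == 0 else 1.0 for i in range(n)]
```

The point is that the distribution and batch size can change every iteration, with no epoch boundary and no sampler object to rebuild — multiprocessing would live behind `get_indices`.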
269,446,431 | TypeScript | __metadata should register function that returns type instead of literal type | Imagine a case with circular dependencies:
```ts
class Car {
@Field owner: Person // !!! Error: Person is not defined.
}
class Person {
@Field car: Car;
}
// car has owner, owner has car
```
TypeScript metadata would be emitted here like, e.g.,
`__metadata('design:type', Person)`.
As `Person` is injected for the first time before the `Person` class is initialized, this will result in a `ReferenceError` saying `Person is not defined`.
If it'd emit metadata like:
```ts
__metadata('design:type', () => Person)
```
it'd be fine.
Later on, when using Reflect.metadata, it would also need to call the meta function instead of just returning the type.
If you think it's a good idea, do you have any suggestions about a starting point for a PR that would implement this change?
| Suggestion,Revisit,Domain: Decorators | low | Critical |
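The reason the `() => Person` form fixes the error is that a thunk defers name resolution until call time. The same mechanism, sketched here in Python purely for illustration (the identifiers mirror the example above):

```python
# At "decoration time", Person does not exist yet -- but wrapping the
# reference in a function means nothing is looked up until it's called.
def owner_type():          # analogous to () => Person
    return Person          # resolved lazily, at call time

class Person:              # defined *after* the thunk, like the circular case
    pass

# By the time a metadata consumer invokes the thunk, the class exists.
assert owner_type() is Person
```

Referencing `Person` directly at definition time would fail, exactly like the emitted `__metadata('design:type', Person)` does today.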
269,497,493 | rust | link_dead_code flag breaks sodiumoxide build | When trying to disable dead-code elimination, the project fails to build.
To be clear, this works:
`cargo test --no-run`
this doesn't:
`RUSTFLAGS=-Clink_dead_code cargo test --no-run`
```
Build log:
error: linking with `cc` failed: exit code: 1
|
= note: "cc" "-m64" "-L" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper0.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper1.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper10.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper11.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper12.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper13.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper14.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper15.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper2.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper3.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper4.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper5.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper6.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper7.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper8.rust-cgu.o" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.libwhisper9.rust-cgu.o" "-o" 
"/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libwhisper-3838df784f3557e7.crate.allocator.rust-cgu.o" "-nodefaultlibs" "-L" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps" "-L" "/usr/local/Cellar/libsodium/1.0.15/lib" "-L" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libchrono-a2ac93e1e7f3829f.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libnum-c49ae6ecf79aff3b.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libnum_iter-529703f02f959052.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libnum_integer-ef7f59b4e3fd69a0.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libnom-d63d0bb4b90726c4.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libmemchr-90d85a68cc7f0681.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libtime-9ed328ba0dc074f6.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libnum_traits-5f9924077010f966.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libsodiumoxide-18211dc8e7914ec4.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libserde-9cbcd9e9b1f85d31.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/liblibsodium_sys-0b032eb7c19f6f3e.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libbytes-d7c1bd52839d451a.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libiovec-c288cddee47c6551.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/liblibc-1d475d610e8905d5.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libbyteorder-330a9355803e29d8.rlib" "/Users/andoriyu/Dev/Heaven/libwhisper-rs/target/debug/deps/libquick_error-8a3cabb77e931a5b.rlib" 
"/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libtest-191b92e1a25a742e.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libterm-16adb5ef965afad6.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libgetopts-f78c669374ceb40f.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libstd-a812896ed8dd253f.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libpanic_unwind-f80668a71535d14a.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liballoc_jemalloc-e7385b9dc1f6352a.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libunwind-fa5ca42c4beb9fd9.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liballoc_system-39205359e68fcafd.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liblibc-6d1727ccc0bf3375.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liballoc-59037b68a5b9d10d.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libstd_unicode-db482e95dfaeb4c7.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/librand-6dde5ed2dcdc460f.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcore-ef96fd3d49f3c876.rlib" "/Users/andoriyu/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcompiler_builtins-213aeb9cef9ff383.rlib" "-l" "sodium" "-l" "System" "-l" "resolv" "-l" "pthread" "-l" "c" "-l" "m"
= note: Undefined symbols for architecture x86_64:
"_crypto_stream_aes128ctr", referenced from:
sodiumoxide::crypto::stream::aes128ctr::stream::h096683557c24d55a in libsodiumoxide-18211dc8e7914ec4.rlib(sodiumoxide-18211dc8e7914ec4.sodiumoxide1.rust-cgu.o)
"_crypto_stream_aes128ctr_xor", referenced from:
sodiumoxide::crypto::stream::aes128ctr::stream_xor::h446c7cf4f1728666 in libsodiumoxide-18211dc8e7914ec4.rlib(sodiumoxide-18211dc8e7914ec4.sodiumoxide1.rust-cgu.o)
sodiumoxide::crypto::stream::aes128ctr::stream_xor_inplace::h15e74395e3a300a9 in libsodiumoxide-18211dc8e7914ec4.rlib(sodiumoxide-18211dc8e7914ec4.sodiumoxide1.rust-cgu.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
Pretty odd...
## Meta
`rustc --version --verbose`:
```
rustc 1.23.0-nightly (269cf5026 2017-10-28)
binary: rustc
commit-hash: 269cf5026cdac6ff47f886a948e99101316d7091
commit-date: 2017-10-28
host: x86_64-apple-darwin
release: 1.23.0-nightly
LLVM version: 4.0
```
`cargo --version --verbose`:
```
cargo 0.24.0-nightly (e5562ddb0 2017-10-26)
release: 0.24.0
commit-hash: e5562ddb061b8eb5a0e754d702f164a1d42d0a21
commit-date: 2017-10-26
```
Backtrace:
| A-linkage,I-crash,P-medium,T-compiler,C-bug,link-dead-code | low | Critical |
269,565,373 | pytorch | type of torch.bernoulli and torch.multinomial inconsistent | The return types of `torch.bernoulli` and `torch.multinomial` are inconsistent:
- `torch.bernoulli` => torch.FloatTensor
- `torch.multinomial` => torch.LongTensor
I think the types should be self-consistent, i.e. both FloatTensor or both LongTensor.
cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw | module: distributions,triaged | low | Minor |
269,623,019 | TypeScript | @ts-ignore for the block scope and imports | Currently, `@ts-ignore` only mutes the errors from the line immediately below it.
It would be great to have the same for
1. the whole next block
2. also for all imports
### use case:
Refactoring: commenting out a piece of code to see what would break without it, while avoiding having to deal with the errors in the file where the commented-out code lives, of which there can be many. | Suggestion,Awaiting More Feedback,VS Code Tracked | high | Critical |
269,671,588 | opencv | Annotation squares have an offset from the cursor in `opencv_annotation` | ##### System information (version)
- OpenCV => 3.3.0
- Operating System / Platform => macOS High Sierra (10.13)
- Compiler => Installed with homebrew
##### Detailed description
When annotating images using `opencv_annotation`, the program starts the annotation markings (the red/green squares) with an offset from the cursor along the y-axis, depending on the y-position of the cursor (proportionally, it seems). The offset does not depend on where the annotation/marking started, only on the current position of the cursor.
At the bottom (y ~ 0, assuming [0,0] is in the bottom-left corner) the marking starts with a negative offset from the cursor, and the offset then gradually increases as the y-position of the cursor increases. The offset seems to be 0 about one third of the way to the top (where the y-position of the cursor is 1/3 of the height of the image).
##### Steps to reproduce (example)
If I start an annotation on the bottom of an image, the marking appears below the cursor:
<img width="400" alt="bottom" src="https://user-images.githubusercontent.com/6630430/32182217-74de28f6-bd96-11e7-8008-b22fb351a954.png">
About one third up the image, the marking is where it should be:
<img width="577" alt="one-third" src="https://user-images.githubusercontent.com/6630430/32182261-845b8562-bd96-11e7-8bf1-a3fac2dd7772.png">
When I reach the top, the marking appears above the cursor:
<img width="523" alt="top" src="https://user-images.githubusercontent.com/6630430/32182700-9d475280-bd97-11e7-8e05-3901ab362d90.png">
This makes it hard to annotate objects near the top and the bottom:
<img width="600" alt="hard annotation" src="https://user-images.githubusercontent.com/6630430/32182733-ad783552-bd97-11e7-8452-5cb092dc5068.png">
| bug,priority: low,category: highgui-gui,platform: ios/osx,category: apps | low | Minor |
269,673,629 | puppeteer | Support extensions execution contexts | The [documentation](https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#overview) show that Frames can be associated with extensions but there's no documentation on how to register an extension & run a script under one of the extensions environments (option page/background page/page action/ect...).
It would be awesome to run automated tests for web-extensions developpers. | feature,chromium | medium | Critical |
269,687,481 | opencv | The output of cv::convertPointsFromHomogeneous does not match the documentation for points at infinity. |
##### System information (version)
- OpenCV => :master:
##### Detailed description
The documentation <https://github.com/opencv/opencv/blob/master/modules/calib3d/include/opencv2/calib3d.hpp#L1290>
says the output should be (0,0,...,0) for points at infinity.
But the implementation just discards the last component.
Please refer to
<https://github.com/opencv/opencv/blob/master/modules/calib3d/src/fundam.cpp#L875>
```.cpp
float scale = sptr[i][3] != 0 ? 1.f/sptr[i][3] : 1.f;
dptr[i] = Point2f(sptr[i].x*scale, sptr[i].y*scale);
```
##### Steps to reproduce
```.cpp
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
int main()
{
std::vector<cv::Point3f> x1;
x1.push_back({2,3,0});
cv::Mat x2;
cv::convertPointsFromHomogeneous(x1, x2);
std::cout << x2 << std::endl;
return 0;
}
```
Output:
```
[2, 3]
```
instead of the expected
```
[0, 0]
```
| category: documentation,category: calib3d | low | Critical |
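A reference implementation of the *documented* behavior is tiny; sketched in Python for illustration (this mirrors what the docs promise, not what `fundam.cpp` currently does):

```python
def from_homogeneous(p):
    """Map (x, y, w) to (x/w, y/w). Per the documentation, points at
    infinity (w == 0) should map to the zero vector, not simply drop w."""
    x, y, w = p
    if w == 0:
        return (0.0, 0.0)
    return (x / w, y / w)

print(from_homogeneous((2, 3, 0)))  # → (0.0, 0.0), the documented output
print(from_homogeneous((2, 4, 2)))  # → (1.0, 2.0)
```

The current C++ code instead uses `scale = 1.f` when the last component is zero, which is what yields `[2, 3]` in the reproduction above.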
269,731,667 | angular | Animation state is not reapplied |
## I'm submitting a...
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
Even when animations are disabled, the styles of an animation are preserved. This is not bad in itself; however, when the animation state changes and we enable animations again, the styles are not updated.
## Expected behavior
Styles should update when animation becomes active again.
## Minimal reproduction of the problem with instructions
https://stackblitz.com/edit/angular-ds3u1p
The sidenav switches between a mobile and a mini variant depending on window width (xs: mobile, gt-xs: mini).
1) Go to the "mini" variant (`collapse` state is active).
2) Go to the mobile variant (`collapse` state is still active).
3) Open the sidenav and close it (`collapse` is no longer active).
4) Go to the "mini" variant (`collapse` is not active <- the problem).
## What is the motivation / use case for changing the behavior?
inconsistency
## Environment
<pre><code>
Angular version: 4.4.6
Browser:
- [x] Chrome (desktop) version 59
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: XX
- Platform: Linux (Ubuntu 16.04)
Others:
</code></pre>
| type: bug/fix,area: animations,freq2: medium,state: needs more investigation,P3 | low | Critical |
269,745,508 | TypeScript | hook into `tsc --watch` using stdout -- the unix way | This is a **feature request**.
I would like to launch `tsc -w` and then read from stdout to know when tsc -w has started and completed a build. Like so:
```js
const cp = require('child_process');
const k = cp.spawn('bash');
setImmediate(function(){
k.stdin.end(`\n tsc --watch --project x \n`);
});
let stdout = '';
k.stdout.on('data', function(d){
stdout += String(d);
if(/foobar/.test(stdout)){
makeMyDay();
}
});
```
Right now, `tsc --watch` writes output to stdout that is mostly just human-readable, and doesn't provide very useful information:
```
1:33:44 PM - File change detected. Starting incremental compilation...
../suman-types/dts/it.d.ts(6,18): error TS2430: Interface 'ITestDataObj' incorrectly extends interface 'ITestOrHookBase'.
Property 'cb' is optional in type 'ITestDataObj' but required in type 'ITestOrHookBase'.
../suman-types/dts/test-suite.d.ts(13,23): error TS2503: Cannot find namespace 'Chai'.
1:33:44 PM - Compilation complete. Watching for file changes.
```
**Proposal:**
`tsc --watch` should have a new flag, that tells `tsc --watch` to output machine readable data to stdout, instead of human readable data. For example:
`tsc --watch --machine-stdio`
Using this new flag, no existing users will be affected.
Ideally, output JSON to stdout, something like this upon a file change:
```js
const data = {source: '@tscwatch', event: 'file_change', srcfilePath: filePath, willTranspile: true/false, destinationFilePath: null / destinationFilePath};
console.log(JSON.stringify(data));
```
then after transpilation finishes, write something like this to stdout:
```js
const data = {source: '@tscwatch', event: 'compilation_complete', errors: null / errors:[]};
console.log(JSON.stringify(data));
```
If it's behind a flag (`--machine-stdio`) then it won't affect any current users, so it should be safe.
I need a truly unique field value like '@tscwatch' so that I know that the JSON is coming from a certain process. In general, it's possible to be parsing more than one JSON stream from a process, so having a unique field in each JSON object makes this better.
| Suggestion,Awaiting More Feedback | low | Critical |
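On the consumer side, the proposed JSON-lines output would be trivial to handle. A sketch — in Python for brevity, and using the hypothetical `@tscwatch` source field and event names proposed above — of filtering a mixed stdout stream:

```python
import json

# A mixed stdout stream: human-readable noise plus the proposed JSON lines.
stdout_lines = [
    "1:33:44 PM - File change detected. Starting incremental compilation...",
    '{"source": "@tscwatch", "event": "file_change", "srcfilePath": "a.ts"}',
    '{"source": "@tscwatch", "event": "compilation_complete", "errors": null}',
]

def tsc_events(lines):
    """Yield only well-formed events stamped with the unique source field."""
    for line in lines:
        try:
            msg = json.loads(line)
        except ValueError:
            continue  # not JSON: human-readable noise, skip it
        if isinstance(msg, dict) and msg.get("source") == "@tscwatch":
            yield msg

events = [m["event"] for m in tsc_events(stdout_lines)]
print(events)  # → ['file_change', 'compilation_complete']
```

This is also why the unique `source` field matters: it lets a consumer multiplex several JSON streams over one pipe without ambiguity.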
269,770,729 | create-react-app | Parse build output into a rich format we can display properly | The build overlay still looks less rich than the runtime one:
<img width="813" alt="screen shot 2017-10-30 at 22 07 12" src="https://user-images.githubusercontent.com/810438/32198125-b434c83e-bdbe-11e7-8cc3-8390b197376b.png">
It would be nice to actually parse this (if it matches known Babel and ESLint formats) and display a richer version with:
* Error message and file information in a runtime stack frame-like view
* Highlighted line that causes the issue
There would need to be tests verifying we don't regress if message format updates. | tag: enhancement | low | Critical |
269,794,239 | TypeScript | Implicit any quick fix should infer from JSX component usage | ```ts
function Foo(props) {
return <div>
{props.x}, {props.y}, {props.z}
</div>
}
let a = <Foo x={100} y="hello" z={true} />;
```
Expected:
```ts
function Foo(props: { x: number, y: string, z: boolean}) {
return <div>
{props.x}, {props.y}, {props.z}
</div>
}
let a = <Foo x={100} y="hello" z={true} />;
```
Actual:
```ts
function Foo(props: { x: React.ReactNode; y: React.ReactNode; z: React.ReactNode; }) {
return <div>
{props.x}, {props.y}, {props.z}
</div>
}
let a = <Foo x={100} y="hello" z={true} />;
```
| Bug,Domain: Quick Fixes | low | Minor |
269,801,548 | go | go/importer: fix and enable TestFor for gccgo | The go command and gccgo don't get along at the moment. Once they do, importer.TestFor needs to be enabled for gccgo and checked. It may just work. | NeedsFix | low | Minor |
269,933,043 | pytorch | High CPU use by clock_gettime syscall | I noticed, while training a network via https://github.com/abhiskk/fast-neural-style, that CPU use is consistently high in kernel time that is not IO wait. I suspected this was caused by excessive syscalls or context switches.
My platform is Ubuntu 16.04.2, kernel 4.10, CUDA 8, cudnn 6 with a 1080 Ti
To reproduce, I run `python3 neural_style/neural_style.py train --dataset /path/to/COCO2014 --vgg-model-dir vgg --batch-size 4 --save-model-dir saved-models --cuda 1`
`htop` will show high kernel time use (in detailed mode, so it isn't confused with IO waits)
`strace -f -p <PID>` shows that the calls are for `CLOCK_MONOTONIC_RAW`, which is indeed unsupported in vDSO and falls back to a syscall (see http://elixir.free-electrons.com/linux/v4.10.17/source/arch/x86/entry/vdso/vclock_gettime.c#L245). `argdist` for low overhead tracing shows ~2.5m calls per second (`sudo ./argdist -p <PID> -C 'p::sys_clock_gettime(clockid_t clk_id, struct timespec *tp):int'`)
I started manually running parts of the code in the interpreter while sampling the number of calls via `argdist` on the Python interpreter, and the culprits are `.cuda()` and `.cpu()` calls on tensors, which trigger many calls to `clock_gettime()` each (the amounts are inconsistent and range between hundreds and tens of thousands). I haven't gone down further, but it seems to me that if it's at all possible to replace `CLOCK_MONOTONIC_RAW` with `CLOCK_MONOTONIC`, it would give plenty of performance for such a minor change. Not sure if it's pytorch or CUDA that actually contains said code, though.
cc @ngimel @VitalyFedyunin | module: performance,module: cuda,triaged | low | Major |
270,010,833 | pytorch | CUDA topk is slow for some input sizes | Hi,
I was able to reproduce a configuration in which I have what I believe is a GPU synchronization issue:
```python
import torch
from torch.autograd import Variable


def accuracy_2d(output, target, topk=(1,)):
    """
    Computes the precision@k for the specified values of k
    Considers output is : NxCxHxW and target is : NxHxW
    """
    maxk = max(topk)
    total_nelem = target.size(0) * target.size(1) * target.size(2)
    _, pred = output.topk(maxk, 1, True, True)
    correct = target.unsqueeze(1).expand(pred.size())
    correct = pred.eq(correct)

    res = []
    for k in topk:
        correct_k = correct[:, :k].contiguous().view(-1).float().sum(0)
        res.append(correct_k.mul_(100.0 / total_nelem))
    return res


class AverageMeter(object):
    """Computes and stores the average and current value"""

    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count


top1 = AverageMeter()
top3 = AverageMeter()

target = torch.LongTensor(16, 172, 172).cuda()
target_var = Variable(target)
pred = Variable(torch.FloatTensor(16, 135, 172, 172).cuda())

prec1, prec3 = accuracy_2d(pred.data, target, topk=(1, 3))
top1.update(prec1[0], 16)
top3.update(prec3[0], 16)
```
I noticed that removing the last two lines of code no longer reproduces the behaviour, which suggests this is where the synchronization happens. Also, I think this is related to how `.topk` handles large kernels (cf. @soumith).
cc @ngimel @VitalyFedyunin | module: performance,module: cuda,triaged,module: sorting and selection | low | Major |
270,120,147 | react | Treat value={null} as empty string | Per @gaearon's request, I'm opening up a new issue based on https://github.com/facebook/react/issues/5013#issuecomment-340898727.
Currently, if you create an input like `<input value={null} onChange={this.handleChange} />`, the null value is a flag for React to treat this as an uncontrolled input, and a console warning is generated. However, this is often a valid condition. For example, when creating a new object (initialized w/ default values from the server, then passed to the component as props) in a form that requires an address, Address Line 2 is often optional. As such, passing null as the value to this controlled component is a very reasonable thing to do.
One can do a workaround, i.e. `<input value={foo || ''} onChange={this.handleChange} />`, but this is an error-prone approach and quite awkward.
Per the issue referenced above, the React team has planned to treat null as an empty string, but that hasn't happened yet. I'd like to propose tackling this problem in the near future.
Please let me know if I can help further. | Component: DOM,Type: Discussion | medium | Critical |
270,137,440 | go | encoding/json: JSON tags don't handle empty properties, non-standard characters | #### What did you do?
https://play.golang.org/p/RRB1VFNufW
Trying to unmarshal with tags like this:
```
type Data struct {
	Foo    string `json:"Foo"`
	Empty  string `json:""`
	Quote  string "json:\"\\\""
	Smiley string "json:\"\U0001F610\""
}
```
#### What did you expect to see?
```
{"Foo": "bla", "": "nothing", "\"": "quux", "😐": ":-|"}
{"Foo":"bla","":"nothing","\"":"quux","😐":":-|"}
```
#### What did you see instead?
```
{"Foo": "bla", "": "nothing", "\"": "quux", "😐": ":-|"}
{"Foo":"bla","Empty":"","Quote":"","Smiley":""}
```
#### System details
```
go version go1.9.2 darwin/amd64
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/schani/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/1h/7ghts1ys53xdk8y1czrjzdmw0000gn/T/go-build770874315=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOROOT/bin/go version: go version go1.9.2 darwin/amd64
GOROOT/bin/go tool compile -V: compile version go1.9.2
uname -v: Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64
ProductName: Mac OS X
ProductVersion: 10.12.6
BuildVersion: 16G29
lldb --version: lldb-900.0.50.1
Swift-4.0
gdb --version: GNU gdb (GDB) 7.12.1
```
| NeedsDecision | medium | Critical |
270,148,445 | flutter | flutter_driver finders should allow accessing descendants, first, and last | The [flutter_driver finders](https://docs.flutter.io/flutter/flutter_driver/CommonFinders-class.html) are very limited, making it difficult to find a single instance of any repeated widget. If the finders supported descendant, first, and last (as they do in [flutter_test](https://docs.flutter.io/flutter/flutter_test/CommonFinders-class.html)), this would become much easier.
As is, the only way to find one of a set of repeated elements is to assign a predictable unique key to the first element in the list. This often requires some hackery - we usually don't generate lists based on indices; list generation happens in a callback that takes some renderable data type. | c: new feature,framework,t: flutter driver,P3,team-framework,triaged-framework | medium | Critical |
270,156,954 | TypeScript | TS auto import should support configuring whether a star or a qualified import is used. | _From @dbaeumer on October 31, 2017 11:29_
Testing: #37177
- vscode source code opened in VS Code
- open a file that doesn't import `'vs/base/common/types'`
- type isStringArray
- select the entry from code complete list
- the following import is inserted:
```ts
import { isStringArray } from 'vs/base/common/types';
```
However what I want in this case is
```ts
import * as Types from 'vs/base/common/types';
```
and the code should become `Types.isStringArray`
Would be cool if I can control this via a setting.
_Copied from original issue: Microsoft/vscode#37258_ | Suggestion,Awaiting More Feedback,VS Code Tracked | medium | Critical |
270,161,625 | go | archive/zip: FileHeader.Extra API is problematic | The `FileHeader.Extra` field is used by the `Writer` to write the "extra" field for the local file header and the central-directory file header. This is problematic because the Go implementation assumes that the extra bytes used in the two headers are the same. While is this is often the case, it is not always true.
See http://mdfs.net/Docs/Comp/Archiving/Zip/ExtraField and you will notice that it frequently describes a "Local-header version" and a "Central-header version", where the formats sometimes differ.
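Whatever direction the API takes, both header variants share the same record layout (2-byte little-endian tag, 2-byte length, payload), so splitting a raw Extra field into records is straightforward. A minimal sketch, not part of the current API (`parseExtra` is an illustrative name):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// parseExtra splits a raw extra field into (tag, data) records.
// Truncated trailing bytes are ignored, mirroring lenient readers.
func parseExtra(b []byte) map[uint16][]byte {
	m := make(map[uint16][]byte)
	for len(b) >= 4 {
		tag := binary.LittleEndian.Uint16(b)
		size := int(binary.LittleEndian.Uint16(b[2:]))
		b = b[4:]
		if size > len(b) {
			break
		}
		m[tag] = b[:size]
		b = b[size:]
	}
	return m
}

func main() {
	// One record: tag 0x5455 ("UT" extended timestamp), 1 byte of data.
	raw := []byte{0x55, 0x54, 0x01, 0x00, 0x03}
	fmt.Println(parseExtra(raw)) // map[21589:[3]]
}
```

Any new API would presumably need two such byte sequences (or two record sets), one per header, rather than the single shared `Extra` slice.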
The `Reader` does not have this problem because it entirely ignores the local headers.
I haven't thought much about what the right action is moving forward, whether to deprecate this field or add new API. I just want to file this issue, so I remember to address it later. | NeedsFix | low | Minor |
270,221,205 | TypeScript | Fix setTimeout/setInterval/setImmediate functions | Fix the overly lax typings.
**TypeScript Version:** master
**Expected behavior:**
```ts
declare function setTimeout(handler: (...args: any[]) => void, timeout?: number, ...args: any[]): number;
```
**Actual behavior:**
```ts
declare function setTimeout(handler: (...args: any[]) => void, timeout: number): number;
declare function setTimeout(handler: any, timeout?: any, ...args: any[]): number;
``` | Bug,Help Wanted,Domain: lib.d.ts | low | Major |
270,237,733 | kubernetes | Support disk io requests and limits |
**Is this a BUG REPORT or FEATURE REQUEST?**:
/kind feature
Kubernetes should support disk io requests and limits.
/cc @kubernetes/sig-node-feature-requests | sig/node,kind/feature,lifecycle/frozen,needs-triage | medium | Critical |
270,250,759 | TypeScript | Error "'this' implicitly has type 'any'" when using .bind() | ```sh
❯ tsc --version
Version 2.7.0-dev.20171020
```
**Code**
tsconfig.json:
```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true,
    "noEmit": true,
    "strict": true
  },
  "files": [
    "index.d.ts",
    "index.js"
  ]
}
```
index.js:
```js
/** @type {MyObj} */
const o = {
  foo: function() {
    (function() {
      console.log(this); // <- Unexpected error here.
    }.bind(this))();
  }
};
```
index.d.ts:
```ts
interface MyObj {
  foo(this: { a: number }): void;
}
```
**How it looks in the editor:**
Context of `foo()` is defined:

But context passed to the nested function is lost:

**Expected behavior:**
There should not be error, because `this` explicitly specified by `.bind()`.
**Actual behavior:**
```sh
❯ tsc
index.js(5,25): error TS2683: 'this' implicitly has type 'any' because it does not have a type annotation.
```
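A workaround that type-checks today is to annotate `this` on the inner function expression explicitly. This is a sketch of the same shape as the example above, not a fix for the missing inference through `.bind()`:

```typescript
const o = {
  a: 1,
  foo(this: { a: number }): number {
    // Annotating `this` on the inner function expression avoids the
    // implicit-any error; ideally the checker would infer this from
    // `.bind(this)` as in the original example.
    return (function (this: { a: number }): number {
      return this.a;
    }).bind(this)();
  },
};
console.log(o.foo()); // 1
```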
| Suggestion,Awaiting More Feedback,Domain: JavaScript | medium | Critical |
270,308,907 | rust | Lint for undesirable, implicit copies | As part of https://github.com/rust-lang/rust/issues/44619, one topic that keeps coming up is that we have to find some way to mitigate the risk of large, implicit copies. Indeed, this risk exists today even without any changes to the language:
```rust
let x = [0; 1024 * 10];
let y = x; // maybe you meant to copy 10K words, but maybe you didn't.
```
In addition to performance hazards, implicit copies can have surprising semantics. For example, there are several iterator types that *would* implement `Copy`, but we were afraid that people would be surprised. Another, clearer example is a type like `Cell<i32>`, which could certainly be copy, but for this interaction:
```rust
let x = Cell::new(22);
let y = x;
x.set(23);
println!("{}", y.get()); // prints 22
```
For a time before 1.0, we briefly considered introducing a new `Pod` trait that acted like `Copy` (memcpy is safe) but without the implicit copy semantics. At the time, @eddyb argued persuasively against this, basically saying (and rightly so) that this is more of a linting concern than anything else -- the implicit copies in the example above, after all, don't lead to any sort of unsoundness, they just may not be the semantics you expected.
Since then, a number of use cases have arisen where having some kind of warning against implicit, unexpected copies would be useful:
- Iterators implementing `Copy`
- Copy/clone closures (closures that are copy can be surprising just as iterators can be)
- `Cell`, ~~`RefCell`, and other types with interior mutability implementing `Copy`~~
- Behavior of `#[derive(PartiallEq)]` and friends with packed structs
- A coercion from `&T` to `T` (i.e., it'd be nice to be able to do `foo(x)` where `foo: fn(u32)` and `x: &u32`)
I will writeup a more specific proposal in the thread below.
| A-lints,T-lang,C-tracking-issue,S-tracking-design-concerns | medium | Major |
270,362,664 | rust | assertion failed: !are_upstream_rust_objects_already_included(sess) when building rustc_private with monolithic lto | EDIT: repo for reproducing: https://github.com/matthiaskrgr/rustc_crashtest_lto , run cargo build --release
````
rustc --version #rustc 1.23.0-nightly (8b22e70b2 2017-10-31)
git clone https://github.com/rust-lang-nursery/rustfmt
cd rustfmt
git checkout 0af8825eb104e6c7b9444693d583b5fa0bd55ceb
echo "
[profile.release]
opt-level = 3
lto = true
" >> Cargo.toml
RUST_BACKTRACE=full cargo build --release --verbose
````
crashes rustc:
````
Fresh quote v0.3.15
Fresh utf8-ranges v1.0.0
Fresh num-traits v0.1.40
Fresh unicode-xid v0.0.4
Fresh getopts v0.2.15
Fresh serde v1.0.16
Fresh itoa v0.3.4
Fresh void v1.0.2
Fresh dtoa v0.4.2
Fresh diff v0.1.10
Fresh term v0.4.6
Fresh regex-syntax v0.4.1
Fresh unicode-segmentation v1.2.0
Fresh log v0.3.8
Fresh lazy_static v0.2.9
Fresh libc v0.2.32
Fresh synom v0.11.3
Fresh toml v0.4.5
Fresh unreachable v1.0.0
Fresh serde_json v1.0.4
Fresh strings v0.1.0
Fresh memchr v1.0.2
Fresh syn v0.11.11
Fresh thread_local v0.3.4
Fresh aho-corasick v0.6.3
Fresh serde_derive_internals v0.16.0
Fresh regex v0.2.2
Fresh serde_derive v1.0.16
Fresh env_logger v0.4.3
Compiling rustfmt-nightly v0.2.13 (file:///home/matthias/vcs/github/rustfmt)
Running `rustc --crate-name rustfmt src/bin/rustfmt.rs --crate-type bin --emit=dep-info,link -C opt-level=3 -C lto --cfg 'feature="cargo-fmt"' --cfg 'feature="default"' --cfg 'feature="rustfmt-format-diff"' -C metadata=8286bf522b4875a9 -C extra-filename=-8286bf522b4875a9 --out-dir /home/matthias/vcs/github/rustfmt/target/release/deps -L dependency=/home/matthias/vcs/github/rustfmt/target/release/deps --extern serde_derive=/home/matthias/vcs/github/rustfmt/target/release/deps/libserde_derive-2b4ee28cf16ac2a4.so --extern term=/home/matthias/vcs/github/rustfmt/target/release/deps/libterm-752362bbc8237001.rlib --extern serde=/home/matthias/vcs/github/rustfmt/target/release/deps/libserde-45127027bf81d438.rlib --extern log=/home/matthias/vcs/github/rustfmt/target/release/deps/liblog-d09fa7f67c1f577c.rlib --extern diff=/home/matthias/vcs/github/rustfmt/target/release/deps/libdiff-6cc97c0e6df9495d.rlib --extern getopts=/home/matthias/vcs/github/rustfmt/target/release/deps/libgetopts-8ff6434fa2a5d019.rlib --extern unicode_segmentation=/home/matthias/vcs/github/rustfmt/target/release/deps/libunicode_segmentation-6bb2cdd83d97a0ec.rlib --extern serde_json=/home/matthias/vcs/github/rustfmt/target/release/deps/libserde_json-53e4f5d05eed2957.rlib --extern strings=/home/matthias/vcs/github/rustfmt/target/release/deps/libstrings-04c4ec84130f6565.rlib --extern regex=/home/matthias/vcs/github/rustfmt/target/release/deps/libregex-48d942f70d747749.rlib --extern toml=/home/matthias/vcs/github/rustfmt/target/release/deps/libtoml-65d6559cb921e7a7.rlib --extern env_logger=/home/matthias/vcs/github/rustfmt/target/release/deps/libenv_logger-10d3b6fcb2fa4ecb.rlib --extern libc=/home/matthias/vcs/github/rustfmt/target/release/deps/liblibc-2029413d0fb43b31.rlib --extern rustfmt_nightly=/home/matthias/vcs/github/rustfmt/target/release/deps/librustfmt_nightly-13335655e8960ac0.rlib -C target-cpu=native`
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.23.0-nightly (8b22e70b2 2017-10-31) running on x86_64-unknown-linux-gnu
note: run with `RUST_BACKTRACE=1` for a backtrace
thread 'rustc' panicked at 'assertion failed: !sess.lto()', /checkout/src/librustc_trans/back/link.rs:1287:8
stack backtrace:
0: 0x7fc18198a153 - std::sys::imp::backtrace::tracing::imp::unwind_backtrace::hf409d569470ae30b
at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: 0x7fc1819847f0 - std::sys_common::backtrace::_print::h9f8ff77762968e1c
at /checkout/src/libstd/sys_common/backtrace.rs:69
2: 0x7fc181997473 - std::panicking::default_hook::{{closure}}::h233cc40af697cbfb
at /checkout/src/libstd/sys_common/backtrace.rs:58
at /checkout/src/libstd/panicking.rs:381
3: 0x7fc18199717d - std::panicking::default_hook::hefff18022ca24d92
at /checkout/src/libstd/panicking.rs:391
4: 0x7fc181997907 - std::panicking::rust_panic_with_hook::hd94a4492e4561dca
at /checkout/src/libstd/panicking.rs:577
5: 0x7fc17f79a6c1 - std::panicking::begin_panic::h1d4a7052e8a95c5a
6: 0x7fc17f75d24d - _ZN11rustc_trans4back4link13link_natively17hfbc8890611f67b24E.llvm.C0978D50
7: 0x7fc17f75879d - rustc_trans::back::link::link_binary::ha0632ae2f8eab4a2
8: 0x7fc17f770135 - <rustc_trans::LlvmTransCrate as rustc_trans_utils::trans_crate::TransCrate>::link_binary::hef3d77a5e1caaee2
9: 0x7fc181d5d03b - rustc_driver::driver::compile_input::h6d65afe4a82d280a
10: 0x7fc181da41a0 - rustc_driver::run_compiler::h6a01af2106f7c680
11: 0x7fc181d330f2 - _ZN3std10sys_common9backtrace28__rust_begin_short_backtrace17h5665586c1dd72980E.llvm.B78FDE68
12: 0x7fc1819e0a0e - __rust_maybe_catch_panic
at /checkout/src/libpanic_unwind/lib.rs:99
13: 0x7fc181d4e7a2 - _ZN50_$LT$F$u20$as$u20$alloc..boxed..FnBox$LT$A$GT$$GT$8call_box17hda2c4e140d408872E.llvm.B78FDE68
14: 0x7fc18199634b - std::sys::imp::thread::Thread::new::thread_start::h024eb26cf106639b
at /checkout/src/liballoc/boxed.rs:772
at /checkout/src/libstd/sys_common/thread.rs:24
at /checkout/src/libstd/sys/unix/thread.rs:90
15: 0x7fc17bd51739 - start_thread
16: 0x7fc18165ce7e - clone
17: 0x0 - <unknown>
error: Could not compile `rustfmt-nightly`.
Caused by:
process didn't exit successfully: `rustc --crate-name rustfmt src/bin/rustfmt.rs --crate-type bin --emit=dep-info,link -C opt-level=3 -C lto --cfg feature="cargo-fmt" --cfg feature="default" --cfg feature="rustfmt-format-diff" -C metadata=8286bf522b4875a9 -C extra-filename=-8286bf522b4875a9 --out-dir /home/matthias/vcs/github/rustfmt/target/release/deps -L dependency=/home/matthias/vcs/github/rustfmt/target/release/deps --extern serde_derive=/home/matthias/vcs/github/rustfmt/target/release/deps/libserde_derive-2b4ee28cf16ac2a4.so --extern term=/home/matthias/vcs/github/rustfmt/target/release/deps/libterm-752362bbc8237001.rlib --extern serde=/home/matthias/vcs/github/rustfmt/target/release/deps/libserde-45127027bf81d438.rlib --extern log=/home/matthias/vcs/github/rustfmt/target/release/deps/liblog-d09fa7f67c1f577c.rlib --extern diff=/home/matthias/vcs/github/rustfmt/target/release/deps/libdiff-6cc97c0e6df9495d.rlib --extern getopts=/home/matthias/vcs/github/rustfmt/target/release/deps/libgetopts-8ff6434fa2a5d019.rlib --extern unicode_segmentation=/home/matthias/vcs/github/rustfmt/target/release/deps/libunicode_segmentation-6bb2cdd83d97a0ec.rlib --extern serde_json=/home/matthias/vcs/github/rustfmt/target/release/deps/libserde_json-53e4f5d05eed2957.rlib --extern strings=/home/matthias/vcs/github/rustfmt/target/release/deps/libstrings-04c4ec84130f6565.rlib --extern regex=/home/matthias/vcs/github/rustfmt/target/release/deps/libregex-48d942f70d747749.rlib --extern toml=/home/matthias/vcs/github/rustfmt/target/release/deps/libtoml-65d6559cb921e7a7.rlib --extern env_logger=/home/matthias/vcs/github/rustfmt/target/release/deps/libenv_logger-10d3b6fcb2fa4ecb.rlib --extern libc=/home/matthias/vcs/github/rustfmt/target/release/deps/liblibc-2029413d0fb43b31.rlib --extern rustfmt_nightly=/home/matthias/vcs/github/rustfmt/target/release/deps/librustfmt_nightly-13335655e8960ac0.rlib -C target-cpu=native` (exit code: 101)
````
| I-ICE,E-needs-test,T-compiler,C-bug | low | Critical |
270,446,108 | go | proposal: encoding/json: preserve unknown fields | Yesterday I implemented https://github.com/golang/go/issues/15314, which makes it possible to optionally fail JSON decoding if an object has a key that cannot be mapped to a field in the destination struct.
In the discussion of that proposal, a few people floated the idea of having a mechanism to collect such keys/values instead of silently ignoring them or failing to parse.
The main use case I can think of is allowing for JSON to be decoded into structs, modified, and serialized back while preserving unknown keys (modulo the order in which they appeared, and potentially "duplicate" keys that are dropped due to uppercase/lowercase collisions, etc.). This behavior is supported by many languages / libraries and other serialization systems such as protocol buffers.
I propose to add this type to the JSON package:
```go
type UnknownFields map[string]interface{}
```
Users of the JSON package can then embed this type in structs for which they'd like to use the feature:
```go
type Data struct {
	json.UnknownFields

	FirstField  int
	SecondField string
}
```
On decoding, any object key/value which cannot be mapped to a field in the destination struct would be decoded and stored in UnknownFields. On encoding, any key present in UnknownFields would be added to the serialized object.
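For contrast, here is what a decode/edit/encode cycle does today without the proposed field; the unknown key is silently dropped. This runs against the current encoding/json (the `Plain` type and `roundTrip` helper are illustrative names):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Plain struct {
	FirstField  int
	SecondField string
}

// roundTrip decodes into Plain and re-encodes, as an editing tool would.
func roundTrip(in []byte) (string, error) {
	var d Plain
	if err := json.Unmarshal(in, &d); err != nil {
		return "", err
	}
	out, err := json.Marshal(d)
	return string(out), err
}

func main() {
	got, err := roundTrip([]byte(`{"FirstField":1,"SecondField":"x","extra":true}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // {"FirstField":1,"SecondField":"x"} ("extra" is gone)
}
```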
I can think of a couple edge cases which are tricky, and I propose to resolve them as follows:
##### Nested structs
It's possible for nested structs to also declare UnknownFields. In such cases any UnknownFields in nested structs should be ignored, both when decoding and encoding. Pros: it is consistent with how we already flatten fields, and it's the only way to ensure decoding is unambiguous. Cons: keys that somehow were set to UnknownFields in a child struct would be ignored on encoding.
##### Key collisions
When encoding it's possible that a key in UnknownFields would collide with another field on the struct. In such cases the key in UnknownFields should be ignored. Pros: it is consistent with the behavior in absence of UnknownFields, seems generally less error prone, it cannot happen in a plain decode/edit/encode cycle, it's unambiguous. Cons: it can possibly lead to silently dropping some values.
PS: I'm happy to do the implementation should the proposal or some variation of it be approved. | Proposal,Proposal-Hold | medium | Critical |
270,462,099 | go | image: support LJPEG | Please consider adding support for Lossless JPEG (SOF3, aka LJPEG). This format is used in medical imaging (DICOM) and RAW files of digital cameras (DNG). An argument for this to be in standard library is that most of the required parts are already in place (JPEG parser, Huffman coding), whereas an external library would need to replicate those private methods. | help wanted,NeedsFix | low | Major |
270,470,531 | kubernetes | Create guidelines, documentation and tooling about apiextension PKI | * how do I deploy extensions securely?
* how do I implement rotation?
* can I use the kubernetes certificates API?
* is there any way to improve automation of this? | priority/important-soon,kind/documentation,sig/api-machinery,sig/auth,lifecycle/frozen | low | Major |
270,473,557 | youtube-dl | YT-DL dont downloading from new page of IPRIMA TV | youtube-dl -u ******@seznam.cz --verbose http://play.iprima.cz/filmy/bitva-o-sevastopol
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-u', u'PRIVATE', u'--verbose', u'-F', u'http://play.iprima.cz/filmy/bitva-o-sevastopol']
Type account password and press [Return]:
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.10.29
[debug] Python version 2.7.13 - Linux-4.9.0-4-amd64-x86_64-with-debian-9.2
[debug] exe versions: avconv 3.2.8-1, avprobe 3.2.8-1, ffmpeg 3.2.8-1, ffprobe 3.2.8-1
[debug] Proxy map: {}
[IPrima] bitva-o-sevastopol: Downloading webpage
[IPrima] p396592: Downloading player
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 784, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 434, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/iprima.py", line 93, in _real_extract
self._sort_formats(formats)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 1072, in _sort_formats
raise ExtractorError('No video formats found')
ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
| geo-restricted,account-needed | low | Critical |
270,492,153 | go | net/http: Transport: add a ConnectionManager interface to separate the connection management from http.Transport | `http.Transport` gives us a real solid HTTP client functionality and a faithful protocol implementation. The connection pooling/management is also bundled into `http.Transport`. `http.Transport` connection management takes a stance on a few areas. For example,
- it does not limit the number of *active* connections
- it reuses available connections in a LIFO manner
There are real needs and use cases where we need a different behavior there. We may want to limit the number of active connections. We may want to have a different connection pooling policy (e.g. FIFO). But today it is not possible if you use `http.Transport`. The only option is to implement the HTTP client, but we like the protocol implementation that exists in `http.Transport`.
There are several issues filed because of the inability to override or modify the connection management behavior of `http.Transport`:
- #14984
- #6785
- #17775
- #17776
among others.
It would be great if the connection management aspect of `http.Transport` is separated from the protocol aspect of `http.Transport` and becomes pluggable (e.g. a `ConnectionManager` interface). Then we could choose to provide a different connection management implementation and mix it with the protocol support of `http.Transport`.
The `http.Transport` API would add this new optional field:
```go
type Transport struct {
	...
	// ConnMgr provides the connection management behavior.
	// If nil, a default connection manager is used (yet to be named).
	ConnMgr ConnectionManager
}
```
The connection manager should have a fairly simple API while encapsulating the complex behavior in the implementation. An incomplete API might look like:
```go
type ConnectionManager interface {
	Get() (net.Conn, error)
	Put(conn net.Conn)
}
```
It would be a pretty straightforward pool-like API. The only wrinkle might be that it should allow for the possibility that `Get` may be blocking (with a timeout) for certain implementations that want to allow timed waits for obtaining a connection from the connection manager.
It'd be great if the current "connection manager" is available publicly so some implementations can start with the base implementation and configure/customize it or override some methods as needed. | NeedsInvestigation,FeatureRequest | low | Critical |
270,525,618 | kubernetes | All etcd3 watches close after 10m or so | It appears that we are not properly handling the watch expired error by continuing the watch from the most recent RV seen on a watch. When compaction occurs watches appear to terminate (the RV we started the watch on no longer exists). We should be able to restart the watch at the last RV we received on the watch at the storage level and continue.
```
E0607 17:45:11.447234 367 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
``` | kind/bug,priority/backlog,sig/scalability,sig/api-machinery,lifecycle/frozen | medium | Critical |
270,537,122 | flutter | DefaultTextStyle docs are lacking | https://api.flutter.dev/flutter/widgets/DefaultTextStyle-class.html
Doesn't say much. Topics it might explain are things like "when would you use one of these" or "what widgets set one of these for you", etc.
My understanding is MaterialApp sets a crazy default text style, might be useful to note that here (and how you're expected to interact with that default). | framework,d: api docs,a: typography,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-framework,triaged-framework | low | Minor |
270,542,429 | rust | private_in_public lint triggered for pub associated type computed using non-pub trait | I was surprised to get a private_in_public deprecation warning for using a private trait to _compute_ an associated type that's itself pub.
De-macro'd, shortened example of what I was doing:
```rust
#![feature(try_from)]
trait Bar {
    type Inner;
}

pub struct B16(u16);
pub struct B32(u32);

impl Bar for B16 {
    type Inner = u16;
}

impl Bar for B32 {
    type Inner = u32;
}

use std::convert::{TryFrom, TryInto};

impl TryFrom<B32> for B16 {
    type Error = <<B16 as Bar>::Inner as TryFrom<<B32 as Bar>::Inner>>::Error;
    //           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    // warning: private type `<B16 as Bar>::Inner` in public interface (error E0446)
    // (Despite the fact that the actual type is core::num::TryFromIntError)

    fn try_from(x: B32) -> Result<Self, Self::Error> {
        Ok(B16(x.0.try_into()?))
    }
}

fn main() {}
```
Repro: https://play.rust-lang.org/?gist=1f84c630e07ddd54d2bf208aa85ed8bb&version=nightly
I don't understand how that type is part of the public interface, since I can't get to it using TryFrom.
(Do close if this is known and covered by things like https://github.com/rust-lang/rfcs/pull/1671#issuecomment-268422405, but I couldn't find any issues talking about this part at least.) | C-enhancement,A-visibility,T-compiler | low | Critical |
270,656,049 | flutter | Make it less confusing to use Cupertino with text under MaterialApp | Avoid user confusion w.r.t. the default error text style from MaterialApp when using Cupertino widgets. | framework,f: material design,f: cupertino,a: quality,a: typography,P2,team-design,triaged-design | low | Critical |
270,731,203 | go | cmd/cover: inconsistent treatment of comments | Run coverage on the following program using these commands:
```
$ cd $GOROOT/src/test
$ cat test.go
package test

func f() {
	if true {
		println("A")
	}
	// comment
	if true {
		// comment
		if false {
			println("B")
		}
	} else {
		// comment
		println("D")
	}
}
$ cat test_test.go
package test
import "testing"
func Test(t *testing.T) { f() }
$ go test -coverprofile=c.out test
ok test 0.015s coverage: 66.7% of statements
$ command go tool cover -html=c.out
```
This is the rendered HTML output:

Observe that the first comment is grey, not green, even though it was covered. Is this a simple bookkeeping error, or is there a design reason why we shouldn't consider the entire span from `func f() {` up to `if false` as covered, and render it green?
(This is Google internal issue 68650370.) | help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
270,788,506 | rust | Literate doctests | This might be RFC-worthy.
What if you could tie several sequential doctests together so that they operate as a single code block, but you can still write narrative comments in between in a literate style? This could cut down on `ignore`/`no_run` code blocks as well as hidden lines at the same time.
### Example
Here is an example of using mutable variables. First, let's make some bindings.
```rust
let a = 1;
let mut b = 2;
```
`a` can't be modified, but `b` can!
```rust,cont
// a += 1; // this would be an error!
b += 1;
```
Clicking the run button on either code block would open a playpen with all the code. Syntax up for debate.
cc @QuietMisdreavus | T-rustdoc,C-feature-request,A-doctests | low | Critical |
270,818,914 | go | cmd/vet: flag atomic.Value usages with interface types | Consider the following:
```go
var v atomic.Value
var err error
err = &http.ProtocolError{}
v.Store(err)
err = io.EOF
v.Store(err)
```
The intention is to have an atomic value store for errors. However, running this code panics:
```
panic: sync/atomic: store of inconsistently typed value into Value
```
This is because `atomic.Value` requires that the underlying *concrete* type be the same (which is a reasonable expectation for its implementation). When going through the `atomic.Value.Store` method call, the fact that both these are of the `error` interface is lost.
Perhaps we should add a vet check that flags usages of `atomic.Value` where the argument passed in is an interface type?
Vet criteria:
* frequency: not sure; I haven't done an analysis across the whole Go corpus.
* correctness: this is almost always wrong. Any "correct" usages should type assert to the concrete value first.
* accuracy: if the type information available can conclusively show an argument is an interface type, then very accurate.
\cc @robpike
\cc @dominikh for `staticcheck` | NeedsDecision,Analysis | low | Critical |
270,820,725 | nvm | `lts/*` should point to latest installed line, not latest available line | - Operating system and version:
Ubuntu 16.04.3 LTS
- `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
nvm --version: v0.33.6
$SHELL: /bin/bash
$HOME: /home/build
$NVM_DIR: '$HOME/.nvm'
$PREFIX: ''
$NPM_CONFIG_PREFIX: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)'
uname -a: 'Linux 4.4.0-98-generic #121-Ubuntu SMP Tue Oct 10 14:24:03 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux'
OS version: Ubuntu 16.04.3 LTS
sed: -e expression #1, char 9: Unmatched ) or \)
curl: , curl 7.47.0 (x86_64-pc-linux-gnu) libcurl/7.47.0 GnuTLS/3.4.10 zlib/1.2.8 libidn/1.32 librtmp/2.3
wget: /usr/bin/wget, GNU Wget 1.17.1 built on linux-gnu.
git: /usr/bin/git, git version 2.7.4
grep: /bin/grep (grep --color=auto), grep (GNU grep) 2.25
awk: /usr/bin/awk, GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
sed: /bin/sed, sed (GNU sed) 4.2.2
cut: /usr/bin/cut, cut (GNU coreutils) 8.25
basename: /usr/bin/basename, basename (GNU coreutils) 8.25
rm: /bin/rm, rm (GNU coreutils) 8.25
sed: -e expression #1, char 9: Unmatched ) or \)
mkdir: , mkdir (GNU coreutils) 8.25
xargs: /usr/bin/xargs, xargs (GNU findutils) 4.7.0-git
nvm current: none
which node:
which iojs:
which npm:
npm config get prefix: The program 'npm' is currently not installed. You can install it by typing:
sudo apt install npm
npm root -g: The program 'npm' is currently not installed. You can install it by typing:
sudo apt install npm
```
</details>
- `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
v4.8.3
v5.3.0
v6.10.3
v6.11.0
v6.11.1
v6.11.2
v6.11.3
v6.11.4
v6.11.5
default -> lts/* (-> N/A)
node -> stable (-> v6.11.5) (default)
stable -> 6.11 (-> v6.11.5) (default)
iojs -> N/A (default)
lts/* -> lts/carbon (-> N/A)
lts/argon -> v4.8.5 (-> N/A)
lts/boron -> v6.11.5
lts/carbon -> v8.9.0 (-> N/A)
```
</details>
- How did you install `nvm`? (e.g. install script in readme, homebrew):
install script in readme
- What steps did you perform?
Opened a new terminal session (e.g. SSH client or new tmux tab).
- What happened?
Error message:
````
N/A: version "N/A -> N/A" is not yet installed.
You need to run "nvm install N/A" to install it before using it.
````
- What did you expect to happen?
No error message.
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
No. | feature requests,bugs | medium | Critical |
270,822,351 | angular | Compiling with Closure Compiler requires unnecessary extern | <!--
PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION.
ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION.
-->
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
When compiling an AOT compiler bundle with Closure Compiler, an error prevents the bundle from being compiled.
[14:46:34] WARN node_modules/@angular/platform-browser/esm2015/platform-browser.js:1542:
Originally at:
node_modules/packages/platform-browser/src/dom/util.js:38: ERROR - variable COMPILED is undeclared
1 error(s), 0 warning(s)
As a workaround, I made COMPILED an extern, but this variable is not needed in my application. I believe this had something to do with this change:
https://github.com/angular/angular/commit/db74f44a97b545488c4e05bf4210dfb733fe8d6f
## Expected behavior
<!-- Describe what the desired behavior would be. -->
COMPILED should not be required by an application to compile AOT compiled code with Closure Compiler.
## Minimal reproduction of the problem with instructions
Follow the instructions in the README.md of this repo:
https://github.com/steveblue/angular5-closure-extern-bug
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
Angular should not require a dev to include unnecessary externs just to build with Closure Compiler.
## Environment
<pre><code>
Angular version: 5.0.0
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [ ] Chrome (desktop) version XX
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: 6.11.0
- Platform: MacOS 11.12.6
</code></pre> | freq1: low,area: compiler,type: use-case,P3 | low | Critical |
270,828,042 | opencv | cv2.so link two versions: libopencv_imgcodecs.so,libopencv_imgproc.so,libopencv_core.so | - OpenCV => 3.3
- Operating System / Platform => ubuntu 16.04 64 Bit
- Compiler => gcc version 5.4.0 20160609
I cleaned opencv-3.1 from Ubuntu 16.04, then built 3.3. When I run
`ldd /usr/lib/python2.7/dist-packages/cv2.so|grep 3.1`
it gives:
```
/lib64/ld-linux-x86-64.so.2 (0x0000557321d2a000)
libopencv_imgcodecs.so.3.1 => not found
libopencv_imgproc.so.3.1 => not found
libopencv_core.so.3.1 => not found
```
as you can see, cv2.so (from OpenCV 3.3) **links three nonexistent libs** from opencv-3.1
when
`ldd /usr/lib/python2.7/dist-packages/cv2.so|grep libopencv_core`
it gives
```
libopencv_core.so.3.3 => /usr/lib/x86_64-linux-gnu/libopencv_core.so.3.3..
libopencv_core.so.3.1 => not found
```
obviously, **two versions of libopencv_core got linked into cv2.so**. Can anybody tell me:
how did this happen?
how can I solve this problem?
my steps:
```
cmake -D CMAKE_BUILD_TYPE=RELEASE \
  -D CMAKE_INSTALL_PREFIX=$(python -c "import sys; print(sys.prefix)") \
  -D PYTHON_EXECUTABLE=$(which python) \
  -D OPENCV_EXTRA_MODULES_PATH=/home/user/opencv_contrib/modules \
  -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_IPP=ON -D WITH_OPENNI2=ON \
  -D WITH_V4L=ON -D WITH_FFMPEG=ON -D WITH_GSTREAMER=OFF -D WITH_OPENMP=ON \
  -D WITH_VTK=ON -D BUILD_opencv_java=OFF -D BUILD_opencv_python3=OFF \
  -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D WITH_NVCUVID=ON -D CUDA_FAST_MATH=ON \
  -D BUILD_opencv_cnn_3dobj=ON -D FORCE_VTK=ON -D WITH_TBB=ON -D WITH_CUBLAS=ON \
  -D CUDA_NVCC_FLAGS="-D_FORCE_INLINES" -D WITH_GDAL=ON -D WITH_XINE=ON ..
```
`make -j 48`
`make install -j 48` | category: build/install,incomplete | low | Minor |
270,846,133 | go | runtime: windows-amd64-race builder fails with errno=1455 | I noticed some failures on windows-amd64-race builder:
https://build.golang.org/log/51b118b069de539851ff1cc07769de863bf47797
```
--- FAIL: TestRace (2.21s)
race_test.go:71: failed to parse test output:
# command-line-arguments_test
runtime: VirtualAlloc of 1048576 bytes failed with errno=1455
fatal error: out of memory
...
FAIL
FAIL runtime/race 40.936s
```
and
https://build.golang.org/log/f73793faa51acd4b45e9ef88a18e462a4913448c
```
==2820==ERROR: ThreadSanitizer failed to allocate 0x000000400000 (4194304) bytes at 0x040177c00000 (error code: 1455)
runtime: newstack sp=0x529fdf0 stack=[0xc042200000, 0xc042202000]
morebuf={pc:0x4083a4 sp:0x529fe00 lr:0x0}
sched={pc:0x47bec7 sp:0x529fdf8 lr:0x0 ctxt:0x0}
runtime: gp=0xc0421ec300, gp->status=0x2
runtime: split stack overflow: 0x529fdf0 < 0xc042200000
fatal error: runtime: split stack overflow
...
FAIL runtime/trace 14.859s
...
--- FAIL: TestMutexMisuse (0.28s)
mutex_test.go:173: Mutex.Unlock: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
mutex_test.go:173: Mutex.Unlock2: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
mutex_test.go:173: RWMutex.Unlock: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
mutex_test.go:173: RWMutex.Unlock2: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
mutex_test.go:173: RWMutex.Unlock3: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
mutex_test.go:173: RWMutex.RUnlock: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
mutex_test.go:173: RWMutex.RUnlock2: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
mutex_test.go:173: RWMutex.RUnlock3: did not find failure with message about unlocked lock: fork/exec C:\Users\gopher\AppData\Local\Temp\go-build022577002\b075\sync.test.exe: The paging file is too small for this operation to complete.
FAIL
FAIL sync 2.803s
```
1455 is ERROR_COMMITMENT_LIMIT "The paging file is too small for this operation to complete."
Perhaps we just need to make builders page file larger or give PC more memory or something.
Alex | OS-Windows,NeedsInvestigation,compiler/runtime | low | Critical |
270,862,507 | kubernetes | deployment run partly in constrain of quota | <!-- This form is for bug reports and feature requests ONLY!
If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
-->
**Is this a BUG REPORT or FEATURE REQUEST?**:
> Uncomment only one, leave it on its own line:
>
> /kind bug
/kind feature
**What happened**:
A Deployment is a concept of a group of pods that complete some kind of task together, but it will run only partially under the constraint of a quota.
**What you expected to happen**:
If a Deployment cannot fully meet quota requirements, it should be refused as a whole, not run partially.
**How to reproduce it (as minimally and precisely as possible)**:
create a quota like:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"
```
then create a Deployment with 5 replicas; k8s will run 2 pods normally rather than refuse the Deployment.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
| sig/scheduling,lifecycle/frozen | low | Critical |
270,895,028 | TypeScript | typeof Foo['bar'] has strange precedence | Type queries combined with indexed access types currently produce a parse tree that is surprising in its behavior.
One would think that `typeof Foo['bar']` would be parsed as `typeof (Foo['bar'])`, which would really be something like `typeof Foo.bar`.
That's not the case. It's actually parsed as `(typeof Foo)['bar']`.
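A small demonstration of this, assuming an object `Foo` defined locally; both spellings resolve to the property type, which is why the checker happens to treat them alike:

```typescript
const Foo = { bar: "hello" };

type T1 = typeof Foo["bar"];   // parsed as (typeof Foo)["bar"], i.e. string
type T2 = (typeof Foo)["bar"]; // explicit grouping, also string

const a: T1 = "x";
const b: T2 = "y";
console.log(a, b);
```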
Conveniently, it seems that semantically (when type-checking) these are identical, but it seems strange for syntactic consumers. Do we believe this is currently working as intended? | Suggestion,Help Wanted,Revisit,Effort: Moderate,Experimentation Needed | low | Major |
270,965,476 | go | runtime/pprof: CPU profiling isn't implemented on Plan 9 | null | OS-Plan9,NeedsFix,compiler/runtime | low | Minor |
271,011,834 | bitcoin | "Rolling forward" at startup can take a long time, and is not interruptible | bitcond is spending hours "Rolling forward" upon startup, seemingly reprocessing blocks that were previously processed. There have been reports of this happening since July on stackexchange and reddit, so I thought it time an issue was raised, given it's still happening in V0.15.1. | UTXO Db and Indexes,Validation,Resource usage | high | Critical |
271,021,130 | youtube-dl | Support download from cmore? | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.10.29**
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Site support request (request for adding support for a new site)
| site-support-request,account-needed | low | Critical |
271,038,651 | rust | Blanket impl of AsRef for Deref | There are currently FIXMEs [here](https://github.com/rust-lang/rust/blob/294f0eef736aa13cadf28ce7160a18a94ca7b87c/library/core/src/convert/mod.rs#L511-L517) and [here](https://github.com/rust-lang/rust/blob/294f0eef736aa13cadf28ce7160a18a94ca7b87c/library/core/src/convert/mod.rs#L532-L538) in the core `AsRef` and `AsMut` impls to change the implementations for `&` and `&mut` to a blanket impl over `Deref` and `DerefMut`. However, this blanket impl results in a number of conflicts (even with `deref`). Because of this, I opened https://github.com/rust-lang/rust/pull/45378 to remove the FIXMEs, but @withoutboats pointed out that we could potentially use intersection specialization to resolve the issue.
I'm not sure exactly what this impl (or set of impls) would look like, so I've closed the PR and opened this issue to track the resolution of the FIXMEs. | C-cleanup,A-specialization,S-blocked | low | Major |
271,058,905 | go | runtime: race detector: more aggressive scheduler perturbations | A program was [posted to golang-nuts](https://groups.google.com/forum/#!topic/golang-nuts/U7plctxPET4), with the question βis this code thread [safe]?β
The program has a race: it executes a `select` with a default case many times in parallel, and the default case calls `close` on a channel. (You can see the race by inserting a `runtime.Gosched()`: https://play.golang.org/p/_PWTTCwPgi.)
To my surprise, the program runs without error even with the race detector enabled.
The [Go Memory Model](https://golang.org/ref/mem) defines an order between sends and receives but not between closes and sends or closes and other closes, so I think this is technically a data race (and not just a logic race). The race detector should report it.
`src/racy/main.go`:
```go
package main
import (
"fmt"
"sync/atomic"
"time"
)
func main() {
die := make(chan struct{})
i := int32(0)
closed := func() {
select {
case <-die:
atomic.AddInt32(&i, 1)
default:
close(die)
fmt.Println("closed")
}
}
N := 100000
for i := 0; i < N; i++ {
go closed()
}
time.Sleep(10 * time.Second)
fmt.Println(atomic.LoadInt32(&i))
}
```
```
$ go version
go version devel +1852573521 Wed Nov 1 16:58:36 2017 +0000 linux/amd64
$ go run -race src/racy/main.go
closed
99999
``` | RaceDetector,NeedsInvestigation,compiler/runtime | low | Critical |
271,062,176 | pytorch | improve performance of common CPU clone / contiguous calls with HPTT | HPTT provides quite a dramatic speedup for stuff like `x.transpose(0, 1).contiguous()`, calls which are quite common all over the place.
With Numpy as a baseline it provides 5x+ speedup consistently.
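PyTorch may not be installed everywhere, so here is the NumPy side of that baseline as a runnable sketch; `x.transpose(0, 1).contiguous()` in PyTorch corresponds roughly to `np.ascontiguousarray(x.T)` here, and that out-of-place copy is exactly the operation HPTT accelerates:

```python
import time

import numpy as np

# The transpose itself is a free view; the cost is the copy that
# materializes it contiguously -- the part HPTT optimizes.
x = np.arange(512 * 512, dtype=np.float64).reshape(512, 512)

start = time.perf_counter()
y = np.ascontiguousarray(x.T)
elapsed_ms = (time.perf_counter() - start) * 1e3

print(f"transpose copy: {elapsed_ms:.3f} ms, C-contiguous: {y.flags['C_CONTIGUOUS']}")
```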
The API looks pretty simple, worth thinking of integrating
https://github.com/springer13/hptt | module: cpu,triaged | low | Major |
271,071,722 | kubernetes | ControllerRevision has implicit patch and patchtype semantics | Use of the ControllerRevision object by the StatefulSet and DaemonSet controllers places strategic merge patch content in the revision data.
To successfully make use of this data, a client must know the data represents a patch, and what type of patch, in order to successfully apply it to roll back.
The nature of the data and the type of patch should be explicit in the object.
/sig apps
/cc @kow3ns | sig/apps,lifecycle/frozen | low | Major |
271,072,969 | kubernetes | ControllerRevision rollback does not handle multiple versions correctly | `kubectl rollout undo` assumes the revision content stored in a ControllerRevision applies to a fixed group/version. The content currently placed in a controller revision by the statefulset and daemonset controllers happens to be patch content of the pod template spec located at a consistent JSON path, but that is not guaranteed by the API.
The `rollout undo` commands should determine which version to apply the `undo` patch to from the information recorded in the ControllerRevision (possibly the .metadata.ownerReferences[@controller==true] kind/apiVersion)
/sig apps
/sig cli
/cc @kow3ns | kind/bug,sig/apps,sig/cli,lifecycle/frozen | low | Major |
271,075,537 | angular | Location.go doesn't work for external URLs |
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
Calling, for example, `location.go('https://www.google.com')` fails silently
## Expected behavior
<!-- Describe what the desired behavior would be. -->
`location.go('https://www.google.com')` should navigate to Google. If that's not intended to ever work, the documentation should say so, and it should throw an error. Right now, it implies that this should work.
## Minimal reproduction of the problem with instructions
<!--
For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5).
-->
https://plnkr.co/edit/guGKhOqhBM3uG4ljAEGR?p=preview
Click "go". Notice nothing happens.
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
I want to be able to navigate to external URLs in a testable way without having to provide my own injectable wrapper around window.location
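For illustration, a framework-free sketch of that wrapper (the names `LocationLike` and `ExternalNavigator` are hypothetical), with the location object injected so tests can substitute a fake:

```typescript
interface LocationLike {
  href: string;
}

// Hypothetical wrapper: production code would inject window.location,
// while tests inject a plain object and assert on it.
class ExternalNavigator {
  constructor(private loc: LocationLike) {}
  navigate(url: string): void {
    this.loc.href = url;
  }
}

const fake: LocationLike = { href: "" };
new ExternalNavigator(fake).navigate("https://www.google.com");
console.log(fake.href);
```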
## Environment
<pre><code>
Angular version: 5.0.0
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [x] Chrome (desktop) version 61
</code></pre>
| feature,area: common,workaround1: obvious,freq1: low,P4 | low | Critical |
271,118,480 | pytorch | Feature Request: Distributed send arbitrary objects | It would be useful for torch.distributed.send and .recv to be able to send arbitrary objects. I have two requests:
1. One version of send and recv that does not copy into an existing tensor, but instead returns a new tensor. This way, we can send tensors of arbitrary sizes, which is useful for many reinforcement learning settings.
2. Ability to send arbitrary Python objects, preferably over pickle streams. mpi4py supports this mode, and it is sometimes very useful to prototype with despite its inefficiencies.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar | oncall: distributed,feature,module: pickle,module: serialization,triaged | low | Major |
271,154,501 | go | runtime: TestWindowsStackMemoryCgo is flaky | CL 74490 has added TestWindowsStackMemoryCgo that has been flaky. It fails with on windows-386-2008 builder
https://build.golang.org/log/6eac250b23e6d93f36a6824e260c69c67bd639be
https://build.golang.org/log/8528fd237adab902144de46f4a1814a162ed1514
https://build.golang.org/log/790eb5b35c79bb3154775987f5294c66715ed85a
```
--- FAIL: TestWindowsStackMemoryCgo (0.03s)
crash_cgo_test.go:460: Failed to read stack usage: strconv.Atoi: parsing "59678\r\nThis application has requested the Runtime to terminate it in an unusual way.\nPlease contact the application's support team for more information.\r\nruntime: failed to create new OS thread (12)\r\n": invalid syntax
FAIL
FAIL runtime 19.962s
```
on windows-amd64-2008 builder
https://build.golang.org/log/48659fdd5ce7d5fa4b5ac88009a1bb58a3ca3989
https://build.golang.org/log/b90c3a5bac8098da6dbbd0f1de78b349885d064f
and on windows-amd64-race builder
https://build.golang.org/log/c15b87b42bd461acb0066ca98b8f22aa982b5fc2
The error 12 is (from https://docs.microsoft.com/en-us/cpp/c-runtime-library/errno-doserrno-sys-errlist-and-sys-nerr ) ENOMEM Not enough memory.
We also had trybots failed with different message
https://storage.googleapis.com/go-build-log/7228f1ad/windows-386-2008_ede19d12.log
```
--- FAIL: TestWindowsStackMemoryCgo (0.03s)
crash_cgo_test.go:460: Failed to read stack usage: strconv.Atoi: parsing "69550\r\nThis application has requested the Runtime to terminate it in an unusual way.\nPlease contact the application's support team for more information.\r\nruntime: failed to create new OS thread (13)\r\n": invalid syntax
FAIL
FAIL runtime 21.491s
```
13 is EACCES Permission denied
Alex | help wanted,OS-Windows,NeedsInvestigation,compiler/runtime | low | Critical |
271,169,883 | opencv | OpenCV Protobuf issues for >= 3.3.1 | ##### System information (version)
- OpenCV => 3.3.1
- Operating System / Platform => Ubuntu 16.04 64 Bit
- Compiler => gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
- used language here: Python 3.5.2
- cmake version: 3.5.1
##### Detailed description
After switching from OpenCV 3.3.0 to 3.3.1, when I `import cv2` in my Python 3.5.2 I get the error:
> ImportError: /usr/local/lib/libopencv_dnn_modern.so.3.3: undefined symbol: _ZTIN6google8protobuf7MessageE
I compile OpenCV from source via cmake ...
> cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=OFF -D WITH_TBB=ON -D WITH_V4L=ON -D WITH_QT=ON -D USE_GStreamer=ON -D WITH_OPENGL=ON -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules -D WITH_IPP=ON -D PYTHON_DEFAULT_EXECUTABLE=$(which python3) -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_PERF_TESTS=OFF -D BUILD_TESTS=OFF -D BUILD_DOCS=OFF -D ENABLE_FAST_MATH=1 -D PROTOBUF_PROTOC_EXECUTABLE=/usr/bin/protoc ..
... and protobuf is installed via basic apt `libprotobuf-dev protobuf-compiler` (which gives e.g. libprotoc 2.6.1) and also via pip (`pip3 show protobuf` -> Version: 3.4.0).
However, here is what I discovered:
When using OpenCV 3.3.0, cmake gives me:
> -- Protobuf: YES
Whereas OpenCV 3.3.1 (and also master branch) gives me:
> -- Protobuf: NO
(Am I assuming correctly that the latter disables building/using the protobuf source that comes with OpenCV (version 3.1.0, right?), and thus defaults to my apt protobuf, which is incompatible with what is expected?)
My main question: Why `Protobuf: NO` now?
Switching back to 3.3.0 following the same build procedure, everything is fine; thus I assume this is a bug/issue worth investigating.
Let me know if I can provide you with any files or further details on this. | category: build/install | low | Critical |
271,182,420 | nvm | nvm uninstall does not work | <!-- Thank you for being interested in nvm! Please help us by filling out the following form if youβre having trouble. If you have a feature request, or some other question, please feel free to clear out the form. Thanks! -->
- Operating system and version:
MacOS High Sierra v10.13
- `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
nvm --version: v0.33.6
$TERM_PROGRAM: iTerm.app
$SHELL: /bin/zsh
$HOME: /Users/nibo
$NVM_DIR: '$HOME/.nvm'
$PREFIX: ''
$NPM_CONFIG_PREFIX: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'zsh 5.3 (x86_64-apple-darwin17.0)'
uname -a: 'Darwin 17.0.0 Darwin Kernel Version 17.0.0: Thu Aug 24 21:48:19 PDT 2017; root:xnu-4570.1.46~2/RELEASE_X86_64 x86_64'
OS version: Mac 10.13 17A405
curl: /usr/bin/curl, curl 7.54.0 (x86_64-apple-darwin17.0) libcurl/7.54.0 LibreSSL/2.0.20 zlib/1.2.11 nghttp2/1.24.0
wget: /usr/local/bin/wget, GNU Wget 1.19.1 built on darwin16.6.0.
git: /usr/local/bin/git, git version 2.14.2
grep: grep: aliased to grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn} (grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}), grep (BSD grep) 2.5.1-FreeBSD
awk: /usr/bin/awk, awk version 20070501
sed: illegal option -- -
usage: sed script [-Ealn] [-i extension] [file ...]
sed [-Ealn] [-i extension] [-e script] ... [-f script_file] ... [file ...]
sed: /usr/bin/sed,
cut: illegal option -- -
usage: cut -b list [-n] [file ...]
cut -c list [file ...]
cut -f list [-s] [-d delim] [file ...]
cut: /usr/bin/cut,
basename: illegal option -- -
usage: basename string [suffix]
basename [-a] [-s suffix] string [...]
basename: /usr/bin/basename,
rm: illegal option -- -
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
rm: /bin/rm,
mkdir: illegal option -- -
usage: mkdir [-pv] [-m mode] directory ...
mkdir: /bin/mkdir,
xargs: illegal option -- -
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr]
[-L number] [-n number [-x]] [-P maxprocs] [-s size]
[utility [argument ...]]
xargs: /usr/bin/xargs,
nvm current: v6.11.3
which node: $NVM_DIR/versions/node/v6.11.3/bin/node
which iojs: iojs not found
which npm: $NVM_DIR/versions/node/v6.11.3/bin/npm
npm config get prefix: $NVM_DIR/versions/node/v6.11.3
npm root -g: $NVM_DIR/versions/node/v6.11.3/lib/node_modules
```
</details>
- `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
v0.12.15
v4.5.0
v6.9.1
-> v6.11.3
v7.7.1
v8.5.0
default -> 6.11.3 (-> v6.11.3)
defualt -> v6 (-> v6.11.3)
node -> stable (-> v8.5.0) (default)
stable -> 8.5 (-> v8.5.0) (default)
iojs -> N/A (default)
lts/* -> lts/boron (-> N/A)
lts/argon -> v4.8.4 (-> N/A)
lts/boron -> v6.11.4 (-> N/A)
```
</details>
- How did you install `nvm`? (e.g. install script in readme, homebrew):
install script in readme: `curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash`
- What steps did you perform?
`nvm uninstall v8.5`
- What happened?
nothing; there was no output
- What did you expect to happen?
uninstall v8.5
- Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`?
<!-- if this does not apply, please delete this section -->
- If you are having installation issues, or getting "N/A", what does `curl -I --compressed -v https://nodejs.org/dist/` print out?
<details>
<!-- do not delete the following blank line -->
```sh
```
</details>
| uninstalling,needs followup | low | Critical |
271,240,099 | rust | Trait object calls not devirtualized | ```rust
trait A {
fn k(&self, x: i32) -> i32;
}
pub struct O {
}
pub struct P {
}
impl A for O {
fn k(&self, x: i32) -> i32 {
x
}
}
impl A for P {
fn k(&self, x: i32) -> i32 {
x
}
}
pub enum R {
P(P),
O(O),
}
impl R {
fn inner(&self) -> &A {
match self {
&R::P(ref p) => p,
&R::O(ref o) => o,
}
}
pub fn k(&self, x: i32) -> i32 {
match self {
&R::P(ref p) => p.k(x),
&R::O(ref o) => o.k(x),
}
}
pub fn j(&self, x: i32) -> i32 {
self.inner().k(x)
}
}
```
compiles to
```assembly
core::ptr::drop_in_place:
push rbp
mov rbp, rsp
pop rbp
ret
core::ptr::drop_in_place:
push rbp
mov rbp, rsp
pop rbp
ret
<example::O as example::A>::k:
push rbp
mov rbp, rsp
mov eax, esi
pop rbp
ret
<example::P as example::A>::k:
push rbp
mov rbp, rsp
mov eax, esi
pop rbp
ret
example::R::k:
push rbp
mov rbp, rsp
mov eax, esi
pop rbp
ret
example::R::j:
push rbp
mov rbp, rsp
cmp byte ptr [rdi], 0
lea rdi, [rdi + 1]
lea rax, [rip + vtable.1]
lea rcx, [rip + vtable.0]
cmove rcx, rax
pop rbp
jmp qword ptr [rcx + 24]
vtable.0:
.quad core::ptr::drop_in_place
.quad 0
.quad 1
.quad <example::O as example::A>::k
vtable.1:
.quad core::ptr::drop_in_place
.quad 0
.quad 1
.quad <example::P as example::A>::k
```
Notice how much worse `R::j()` is than `R::k()`. | A-LLVM,I-slow,C-enhancement,T-compiler,A-trait-objects,C-optimization | low | Major |
271,271,397 | react | [RN] Don't receive events on unknown tags | Flow uncovered this:
https://github.com/facebook/react/blob/92b7b172cce9958b846844f0b46fd7bbd8c5140d/packages/react-native-renderer/src/ReactNativeEventEmitter.js#L174-L175
Need to verify if we can just return early and not process the events in this case. | Type: Needs Investigation,React Core Team | medium | Minor |
271,295,048 | opencv | Question about cv::VideoWriter | Hi,
In the video_writer.cpp sample I see:
```#if defined(HAVE_OPENCV_CUDACODEC) && defined(WIN32)```
are there any plans to add gpu acceleration (cuda support) to cv::VideoWriter on Mac Osx or Linux?
see: https://github.com/opencv/opencv/blob/master/samples/gpu/video_writer.cpp
whereas the video_reader.cpp sample seems platform-independent:
```#if defined(HAVE_OPENCV_CUDACODEC)```
see: https://github.com/opencv/opencv/blob/master/samples/gpu/video_reader.cpp
why does it need windows in video_writer.cpp?
Thanks,
Arnold | priority: low,category: videoio,category: gpu/cuda (contrib),platform: ios/osx | low | Minor |
271,295,587 | go | x/net/html: add Escape/Unescape transformers | The [`golang.org/x/net/html`] package contains two functions similar to the ones in the [`html`] package for escaping and unescaping HTML entities:
```
func EscapeString(string) string
func UnescapeString(string) string
```
unfortunately these require loading entire documents into memory and converting them to a string before attempting to escape them.
It would be nice if there were also an implementation of the [`Transformer`] (and [`SpanningTransformer`]) interface from [`golang.org/x/text/transform`] that could perform escaping / unescaping on long byte streams without requiring buffering the entire stream into memory.
This could either be two functions which return transformers:
```
// Escaper returns a transformer that escapes special characters.
// See EscapeString for more information.
func Escaper() transform.SpanningTransformer
// Unescaper returns a transformer that unescapes special characters.
// See UnescapeString for more information.
func Unescaper() transform.SpanningTransformer
```
or a `Transformer` type which contains all the various helper methods (String, Bytes, etc.) that transformer based packages in the text tree have.
[`golang.org/x/net/html`]: https://godoc.org/golang.org/x/net/html
[`html`]: https://godoc.org/html
[`Transformer`]: https://godoc.org/golang.org/x/text/transform#Transformer
[`SpanningTransformer`]: https://godoc.org/golang.org/x/text/transform#SpanningTransformer
[`golang.org/x/text/transform`]: https://godoc.org/golang.org/x/text/transform
/cc @mpvl | Proposal,Proposal-Accepted | low | Major |
271,304,107 | neovim | TUI: draw "virtual cursor" instead of using terminal emulator cursor | In terminal mode, users often have a choice of setting up the terminal emulator to display the current cursor position via a caret (`|`), an underline (`_`), or a block (`β`), each with its own tradeoffs.
The caret and underscore are the least obstructive (i.e. one can clearly make out what character is under them) but are also very hard to visually spot when the cursor moves to an unknown location (search for a symbol and then spend 10 seconds playing with the arrow keys trying to obtain a visual indication of where the cursor wound up after the jump). The block is easy to spot, but obscures the character under the block (did I already type `}` and navigate backwards or did I not?). Setting up the terminal to blink the cursor block helps tremendously, but one must still wait for the block to blink before being able to see the character underneath.
Recently while using [vis](https://github.com/martanne/vis), I saw a different approach that doesn't require (as far as I can tell) cooperation from the terminal emulator to pull off. The actual cursor is hidden, and instead the block corresponding to where the cursor should be is rendered in reverse/negative. This is perhaps best illustrated with screenshots:
**Underline mode**
Can you find the cursor?

**Block mode**
Do you think you can see everything in the buffer in the screenshot below?

... because this is what you see when the cursor blinks:

**Inverted Block**
How vis solves the problem

In vis, the cursor does not blink, but that can also be easily implemented with a timer that alternates between inverse and actual colors for the cursor position.
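The inverse-video trick itself needs nothing from the terminal emulator beyond standard SGR attributes. A rough sketch of the rendering step (my illustration, not vis's actual code):

```rust
// Wrap the character under the cursor in SGR reverse video so the cell reads
// as a block cursor, even though the real terminal cursor stays hidden.
fn draw_cursor_cell(ch: char) -> String {
    const REVERSE: &str = "\x1b[7m"; // SGR 7: swap foreground and background
    const RESET: &str = "\x1b[27m"; // SGR 27: reverse video off
    format!("{}{}{}", REVERSE, ch, RESET)
}
```

A blinking variant would simply alternate between this string and the plain character on a timer, as suggested above.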
I think this is a tremendous usability gain and would urge consideration of adopting this feature. | enhancement,tui | medium | Major |
271,304,947 | godot | Cannot Keyframe "Visible Characters" Attribute In The RichText Node | I tried keyframing a type-writer text effect in my game for a cut-scene using the RichText node and it does not seem to work. Is this a known issue? I was trying to keyframe the "Visible Characters" attribute. | discussion,topic:animation | low | Major |
271,319,158 | rust | [std::char] Add MAX_UTF8_LEN and MAX_UTF16_LEN | # Background
UTF-8 encoding on any character can take up to 4 bytes (`u8`). UTF-16 encoding can take up to 2 words (`u16`). This is a promise from the encoding specs, and an assumption made in many places inside rust libs and applications.
Currently, there's lots of *magic* numbers 4 and 2 everywhere in the code, creating buffer long enough to encode a character into as UTF-8 or UTF-16.
## Examples
https://github.com/rust-lang/rust/blob/b7041bfab3a83702a8026fb7a18d8ea7d54cc648/src/libcore/tests/char.rs#L236-L239
https://github.com/rust-lang/rust/blob/b7041bfab3a83702a8026fb7a18d8ea7d54cc648/src/libcore/tests/char.rs#L253-L256
# Proposal
Add the followings public definitions to `std::char` and `core::char` to be used inside the rust codebase and publicly.
```rust
pub const MAX_UTF8_LEN: usize = 4;
pub const MAX_UTF16_LEN: usize = 2;
```
# Why should we do this?
This will allow the code to be written like this:
```rust
let mut buf = [0; char::MAX_UTF16_LEN];
let b = input.encode_utf16(&mut buf);
```
This will guide users to allocate the correct amount of memory as they write the code, without requiring them to know the details of UTF-8/UTF-16 encoding, instead of waiting for a runtime error that may not surface in basic tests and is only discovered externally. It also increases readability for anyone reading such code.
Besides using these max-length values for char-level allocations, users can also use them to pre-allocate memory when encoding a list of chars into UTF-8/UTF-16.
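For illustration, this is what such a call site looks like today using only stable std APIs; the `4` and `2` are exactly the magic numbers the proposal wants to name (the proposed constants do not exist yet):

```rust
// Encode a char as UTF-8; the worst case is 4 bytes.
// Under this proposal, `4` would become `char::MAX_UTF8_LEN`.
fn utf8_bytes(c: char) -> Vec<u8> {
    let mut buf = [0u8; 4]; // magic number the proposal wants to name
    c.encode_utf8(&mut buf).as_bytes().to_vec()
}

// Same idea for UTF-16; the worst case is 2 code units.
// Under this proposal, `2` would become `char::MAX_UTF16_LEN`.
fn utf16_units(c: char) -> Vec<u16> {
    let mut buf = [0u16; 2];
    c.encode_utf16(&mut buf).to_vec()
}
```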
# How do we teach this?
The std/core libs will be updated to use these values wherever possible (see [this list](https://gist.github.com/behnam/64484153914d862a2c75d57d15fc58e4)), and docs for encoding-related functions in `char` module are updated to evangelize using these values when allocating memory to be used by the encoding functions.
* https://doc.rust-lang.org/std/primitive.char.html#method.len_utf8
* https://doc.rust-lang.org/std/primitive.char.html#method.len_utf16
* https://doc.rust-lang.org/std/primitive.char.html#method.encode_utf8
* https://doc.rust-lang.org/std/primitive.char.html#method.encode_utf16
# Alternatives
### 1) Only update the docs
We can just update the function docs to talk about these max-length values, but not name them as a `const` value.
### 2) New functions for allocations with max limit
Although this can be handy for some users, it would be limited to only one use-case of these numbers and not helpful for other operations.
----
What do you think?
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"NoobProgrammer31"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | E-easy,T-libs-api,C-feature-accepted | low | Critical |
271,324,473 | neovim | TUI: better terminal-type detection | In #7473 we incorrectly decided that a terminal supported colon-delimited RGB sequences. We need a more reliable way to detect support for that. And other capabilities, where possible.
Vim "term response" handling: https://github.com/neovim/neovim/issues/6279
- `may_req_ambiguous_char_width()` [src](https://github.com/vim/vim/blob/93343725b5fa1cf580a24302455980faacae8ee2/src/term.c#L3570)
- `req_codes_from_term`, `check_for_codes_from_term` [src](https://github.com/vim/vim/blob/93343725b5fa1cf580a24302455980faacae8ee2/src/term.c#L6428)
colon-delimited RGB:
- Motivation for colon delimiters: https://github.com/kovidgoyal/kitty/issues/160#issuecomment-341902195
- See also https://github.com/neovim/neovim/issues/7473#issuecomment-341899848
- detect via [DECRQSS](https://github.com/neovim/neovim/issues/7490#issuecomment-348645326)
cursor capability:
- Vim patch to detect xterm-compatible cursor capability: https://github.com/vim/vim/pull/2126
- Related: https://github.com/neovim/neovim/pull/5556 | enhancement,ux,tui | low | Minor |
271,339,910 | go | cmd/vendor/github.com/google/pprof/internal/driver: TestHttpsInsecure fails | ### What version of Go are you using (`go version`)?
go version devel +bb98331 Mon Nov 6 01:35:58 2017 +0000 windows/386
### What operating system and processor architecture are you using (`go env`)?
Windows XP 32 bit
```
set GOARCH=386
set GOBIN=
set GOCACHE=go-build
set GOEXE=.exe
set GOHOSTARCH=386
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\Documents and Settings\aa\dev\
set GORACE=
set GOROOT=C:\Documents and Settings\aa\dev\go
set GOTMPDIR=
set GOTOOLDIR=C:\Documents and Settings\aa\dev\go\pkg\tool\windows_386
set GCCGO=gccgo
set GO386=sse2
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m32 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\DOCUME~1\aa\LOCALS~1\Temp\go-build119289778=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
I run
```
go test -short -v -run=TestHttpsInsecure cmd/vendor/github.com/google/pprof/internal/driver
```
command
### What did you expect to see?
```
PASS
```
### What did you see instead?
```
=== RUN TestHttpsInsecure
--- FAIL: TestHttpsInsecure (15.11s)
fetch_test.go:420: fetchProfiles(https+insecure://127.0.0.1:2343/debug/pprof/profile) got non-symbolized profile: len(p.Function)==0
FAIL
exit status 1
FAIL cmd/vendor/github.com/google/pprof/internal/driver 15.261s
```
Alex | Testing,OS-Windows | low | Critical |
271,343,492 | vue | Create a package for building custom renderers | ### What problem does this feature solve?
As the author of nativescript-vue I had to create a build setup similar to Vue's in order to be able to import certain parts of Vue directly into nativescript-vue. The main source of issues was the aliases used across the Vue repository (which do make sense, btw!).
To solve that issue, I would love to have an official package for creating (and registering) custom renderers into Vue, which would enclose most of the Vue specific logic of patching / hydrating etc.
A good example of what I have in mind would be the react's package that does it: https://github.com/facebook/react/tree/master/packages/react-reconciler
I would love to get some work done on this, working with the core team to ensure the best possible quality.
### What does the proposed API look like?
```js
// my custom renderer
// for example: nativescript-vue.js
import VueRenderer from 'vue-renderer'
// a class for creating native views in NativeScript
import ViewUtils from './ViewUtils.js'
export default new VueRenderer({
  // Node operations
  createElement(tagName) {},
  createElementNS(namespace, tagName) {},
  createTextNode(text) {},
  createComment(text) {},
  insertBefore(parentNode, newNode, referenceNode) {},
  removeChild(node, child) {},
  appendChild(node, child) {},
  parentNode(node) {},
  nextSibling(node) {},
  tagName(node) {},
  setTextContent(node, text) {},
  setAttribute(node, attribute, value) {},

  // Additional methods that need to be specified
  // but for example:
  createRoot() {} // this would get called to create the root element for the root Vue instance
})
```
```js
// then in userland we could just do
import Vue from 'vue'
import NativescriptVue from 'nativescript-vue'
Vue.use(NativescriptVue)
new Vue({
  render: h => h('label', { text: 'Hello World' })
})
```
<!-- generated by vue-issues. DO NOT REMOVE --> | intend to implement,feature request | medium | Critical |
271,362,910 | flutter | Documentation for location of non-positioned children in RenderStack is wrong | The second paragraph of RenderStack's documentation says that:
> /// The final location of non-positioned children is determined by the alignment
/// parameter. The left of each non-positioned child becomes the
/// difference between the child's width and the stack's width scaled by
/// alignment.x. The top of each non-positioned child is computed
/// similarly and scaled by alignment.y. So if the alignment x and y properties
/// are 0.0 (the default) then the non-positioned children remain in the
/// upper-left corner. If the alignment x and y properties are 0.5 then the
/// non-positioned children are centered within the stack.
Sounds like it describes alignment for FractionalOffset. Current implementation uses Alignment.
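For concreteness, here is the arithmetic difference as I read the two APIs, sketched in Rust (`FractionalOffset.x` runs over [0, 1] while `Alignment.x` runs over [-1, 1]; this is my illustration, not Flutter code):

```rust
// Left edge of a non-positioned child under the old FractionalOffset wording:
// a straight fraction of the free space, with x in [0.0, 1.0].
fn left_fractional(stack_w: f64, child_w: f64, x: f64) -> f64 {
    (stack_w - child_w) * x
}

// Left edge under the current Alignment implementation: the same free space,
// but x == -1.0 is the left edge, 0.0 the center, and 1.0 the right edge.
fn left_alignment(stack_w: f64, child_w: f64, x: f64) -> f64 {
    (stack_w - child_w) * (x + 1.0) / 2.0
}
```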
I am not a native English speaker, and it is hard for me to express the intention. | framework,d: api docs,P2,team-framework,triaged-framework | low | Minor |
271,637,993 | go | context: relax recommendation against putting Contexts in structs | This is a follow-on from discussion in #14660.
Right now the context package documentation says
> Do not store Contexts inside a struct type; instead, pass a Context explicitly to each function that needs it. The Context should be the first parameter, typically named ctx: [...]
This advice seems overly restrictive. @bradfitz wrote in that issue:
> While we've told people not to add contexts to structs, I think that guidance is over-aggressive. The real advice is not to *store* contexts. They should be passed along like parameters. But if the struct is essentially just a parameter, it's okay. I think this concern can be addressed with package-level documentation and examples.
Let's address this concern with package-level documentation.
Also see @rsc's comment at https://github.com/golang/go/issues/14660#issuecomment-200145301.
/cc @Sajmani @bradfitz @rsc | Documentation,help wanted,NeedsInvestigation | medium | Major |
271,641,843 | opencv | No controls for # of threads work when using AREA resizer |
##### System information (version)
- OSX 10.11
- Clang
- OpenCV 3.2.0
##### Detailed description
OpenCV runs work for AREA resizer on 8 threads regardless of how `cv::setNumThreads` is invoked or `OPENCV_FOR_THREADS_NUM` is set. It is seemingly impossible to produce a single-threaded AREA resizer in OpenCV.
##### Steps to reproduce
```
cv::setNumThreads(1);
cv::resize(mat, dst, width, height, CV_INTER_AREA);
```
runs on 8 threads instead of 1. | feature,category: core,platform: ios/osx | low | Minor |
271,652,878 | angular | Add value change options to ControlValueAccessor writeValue and onChange | ## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[x] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
A component that implements `ControlValueAccessor` and transforms values to stay in bounds cannot respect `{ emitEvent: false }` option when value patched, it will always emit an event if the `onChange` function is called.
## Expected behavior
If a value that is out of bounds and cannot be displayed is patched to a component, the component should be able to transform the incoming value and also respect the option `{ emitEvent: false }`.
[updateControl](https://github.com/angular/angular/blob/1cfa79ca4e21788e0323baf544704ee7ef7d63ea/packages/forms/src/directives/shared.ts#L100) should use an options object like patchValue/setValue.
## Minimal reproduction of the problem with instructions
http://plnkr.co/edit/GyJLgvBkpPdD1y74lZOH?p=preview
In the reproduction you'll notice that when a value outside of the bounds is passed it will update the model. In doing so it will emit a change event to the console even if the new value was patched using `{ emitEvent: false }`.
## What is the motivation / use case for changing the behavior?
The default HTML5 slider will set the value of the slider to the max value if a value higher than the max is set. If I wanted to create a custom slider that followed the same behavior I couldn't do this without emitting an event even if `patchValue(outofboundsvalue, { emitEvent: false });` is used.
## Environment
<pre><code>
Angular version: latest
I'm currently using 4.3.3, example plnkr uses latest.
Browser:
- [x] Chrome (desktop) version 61
This will be the same behavior in all browsers, but I've only tested with Chrome.
</code></pre>
Others:
I'm not sure if this type of writeValue functionality is an anti-pattern or if a custom validator should be used instead. An HTML5 input fields will update a passed value to stay in bounds so I'm assuming Angular form components should have the option to do the same.
Also, not sure if this is a bug or intentional, so I labeled as feature request for now.
| feature,freq2: medium,area: forms,feature: under consideration | high | Critical |
271,658,509 | TypeScript | Reconsider if we need to cache file system entries given that we have tweaked triggers for watch | Determine if we still need to cache file system entries in the directory or non caching has similar performance given that watch triggers and invokes have changed since the cached directory structure host implementation | Infrastructure | low | Major |
271,823,916 | TypeScript | URLSearchParams/Headers constructor also receives iterables including Map | **TypeScript Version:** master
**Code**
```ts
new Headers(new Map([['abc', 'bcd']])).get('abc')
new Headers({ [Symbol.iterator]() { return new Map([['abc', 'bcd']]).entries() } }).get('abc')
```
**Expected behavior:** Both return `bcd`, so both should pass without errors
**Actual behavior:** 'not assignable' error
| Suggestion,Help Wanted,Domain: lib.d.ts,Experience Enhancement | low | Critical |
271,874,540 | angular | Validator.pattern adds ^ and $ for you for strings - not for patterns | See https://github.com/angular/angular/blame/6e8e3bd2488eb0896932fe1a10afefbb0dccbca9/packages/forms/src/validators.ts#L148
It is not really intuitive to add `^` and `$` if the pattern is a string rather than a RegExp. I doubt that this behavior is in any way intended.
See https://github.com/angular/angular/issues/10150 and https://github.com/angular/angular/issues/15751 | breaking changes,help wanted,freq2: medium,area: forms,type: confusing,forms: validators,P4 | low | Minor |
271,894,341 | rust | Can't compile core for msp430 - LLVM ERROR | ``` console
$ rustup default nightly-2017-11-04
$ rustup component add rust-src
$ cp -r $(rustc --print sysroot)/lib/rustlib/src/rust/src .
$ cd src/libcore
$ cargo build --target msp430-none-elf
Compiling core v0.0.0 (file:///home/japaric/tmp/src/libcore)
LLVM ERROR: Cannot select: t3: ch = AtomicFence t0, Constant:i16<7>, Constant:i16<1>
t1: i16 = Constant<7>
t2: i16 = Constant<1>
In function: _ZN4core4sync6atomic5fence17h15543a17f4e17a82E
error: Could not compile `core`.
```
found while bisecting #45834
cc @pftbest | A-LLVM,T-compiler,A-target-specs,C-bug,WG-embedded,requires-nightly,O-msp430 | low | Critical |
271,923,529 | rust | Stack overflow in fmt::Display impl | It's possible to write a impl of fmt::Display which compiles successfully and produces a stack overflow.
A minimal repro is available here: https://play.rust-lang.org/?gist=fb115e1e625e1b8038cdd13c5f9cdbb8&version=stable
The code is here for completeness of the bug report:
```rust
use std::fmt;
fn main() {
    println!("Hello, world!");
    let test = Test::Alpha;
    println!("Preparing to stack overflow");
    let test_string = test.to_string();
    println!("test string: {}", test_string);
}

pub enum Test {
    Alpha,
    Bravo,
}

impl fmt::Display for Test {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.to_string())
    }
}
```
I would expect an error at compile time
Instead the error occurs at runtime as a stack overflow.
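For context, the usual fix is to format the variant directly instead of calling `to_string()`, since the blanket `ToString` impl is written in terms of `Display` and therefore recurses. A sketch of a non-recursive impl (not part of the report):

```rust
use std::fmt;

pub enum Test {
    Alpha,
    Bravo,
}

impl fmt::Display for Test {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Calling `self.to_string()` here would re-enter `fmt`; match instead.
        let name = match *self {
            Test::Alpha => "Alpha",
            Test::Bravo => "Bravo",
        };
        write!(f, "{}", name)
    }
}
```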
## Meta
```
rustc 1.21.0 (3b72af97e 2017-10-09)
binary: rustc
commit-hash: 3b72af97e42989b2fe104d8edbaee123cdf7c58f
commit-date: 2017-10-09
host: x86_64-apple-darwin
release: 1.21.0
LLVM version: 4.0
```
```
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
Abort trap: 6
```
| E-hard,A-lints,T-compiler,C-feature-request | low | Critical |
271,947,010 | opencv | OpenCV 3.2 Persistence.hpp error when trying to write a uint64_t |
##### System information (version)
- OpenCV => 3.2
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
##### Detailed description
I am running a code that ran well until I changed a parameter from ```int``` to ```uint64_t```. The parameter is ```timeCreated```. All the other parameters are ```int``` and ```doubles```. The code is as follows
##### Steps to reproduce
```
void blah::PrintObject( cv::FileStorage& fout )
{ fout << "ID" << id;
fout << "timeCreated " << timeCreated;
fout << "numUpdatedTimeSteps" << numUpdatedTimeSteps;
fout << "numTimesUpdatedThisTimeStep" << numTimesUpdatedThisTimeStep;
fout << "time" << time;
}
```

The error I obtain is:

```
3>C:\Software\OpenCV_3.2.0\Windows_VS2015_x64_build\include\opencv2/core/persistence.hpp(1113): error C2668: 'cv::write': ambiguous call to overloaded function
```
As soon as I replace
```fout << "timeCreated " << timeCreated;```
by
```fout << "timeCreated " << static_cast<int>(timeCreated);```
everything compiles just fine. So my question is: how do I make it work for `uint64_t`? | category: build/install,RFC,incomplete | low | Critical |
271,992,110 | flutter | Create a multi-staged sequential builder layout-er | Combine components of `LayoutBuilder` and `MultiChildLayoutDelegate` to sequentially build a child, lay it out, then take the layout info from that child and feed into the build and layout of the next child until there are no more children. | c: new feature,framework,customer: google,P3,team-framework,triaged-framework | low | Major |
271,998,092 | rust | "this expression will panic at run-time" warnings can't be suppressed | Here's a reduced test-case:
```rust
let a = [0u8; 2usize];
println!("{}", a[0]);
println!("{}", a[1]);
if a.len() == 3 {
    println!("{}", a[2]);
}
```
Now some background.
In Servo we generate large amounts of rust code using different systems. Today @upsuper was trying to generate shared code for two different structs that have different fields, let's simplify it as:
```rust
struct Foo {
    bar: [u8; 2],
}

struct Bar {
    bar: [u8; 3],
}
```
To generate shared code for both, he wanted to do something like:
```rust
let instance = ${var}; // Instance is either `Foo` or `Bar`, we'll expand this twice.
do_something_with(instance.bar[0]);
do_something_with(instance.bar[1]);
if instance.bar.len() == 3 {
do_something_with(instance.bar[2]);
}
```
However, that generates an unsuppresable warning like:
```
^^^^ index out of bounds: the len is 2 but the index is 2
```
So it seems that Rust doesn't account for that code being unreachable in the `Foo` case.
In any case, being able to suppress that warning would've made this code straightforward. Instead, we probably need to work around it with something like:
```rust
#[inline(always)]
fn doit(a: &[u8]) {
    do_something_with(a[0]);
    do_something_with(a[1]);
    if a.len() == 3 {
        do_something_with(a[2]);
    }
}
doit(&instance.bar)
```
Which is not great. | A-lints,T-compiler,C-feature-request | low | Minor |
272,001,127 | go | x/build/devapp: support R=N in Gerrit Reviews | Previously, when a Change contained a comment with the contents βR=<release>β, it was displayed on dev.golang.org/release. Since moving to maintner this functionality is no longer supported.
Support it again.
@TocarIP @ianlancetaylor @bradfitz | Builders | low | Minor |
272,031,686 | rust | armv7 exhausts memory on stage0 librustc w/ debuginfo | I'm having trouble natively compiling rustc armv7-unknown-linux-gnueabihf with `--enable-debuginfo`, on both the beta and master branches. I haven't seen this on any stable release before. To reproduce the issue, I'm running on Fedora 26 armv7hl, using all default options apart from enabling debuginfo.
```
Building stage0 compiler artifacts (armv7-unknown-linux-gnueabihf -> armv7-unknown-linux-gnueabihf)
[...]
Compiling rustc v0.0.0 (file:///home/jistone/rust-beta/src/librustc)
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
error: Could not compile `rustc`.
Caused by:
process didn't exit successfully: `/home/jistone/rust-beta/build/bootstrap/debug/rustc --crate-name rustc src/librustc/lib.rs [...]` (signal: 6, SIGABRT: process abort signal)
```
I guess that C++ exception is probably LLVM running out of memory. This excerpt is from the beta sources; on master it also errors, but without the exception info, with a SIGSEGV instead. I doubt that difference is very significant; it is just luck as to which allocation happened to fail.
In either case, when I watch progress in htop, the process grows right up to 3GB virt size and then dies. That's the limit of the 32-bit user address space, of course. The resident size was just a little over 2GB at that time, so something is holding wasted memory here. It could be jemalloc, or could be extra capacity in vec/map/etc. -- I'm not sure how to tell.
My workaround for now is to use `--enable-debuginfo-only-std`.
(And FWIW, 32-bit i686 is not having this problem for me.) | A-debuginfo,O-Arm,T-compiler,E-help-wanted,I-compilemem,C-bug | low | Critical |
272,050,770 | kubernetes | DefaultTolerationSeconds admission plugin flags pollute all commands | The DefaultTolerationSeconds admission plugin configures itself via globally registered flags, rather than via plugin config, as expected.
This has several negative effects:
* The global flags are visible in completely unrelated commands, like kubectl:
```
$ kubectl options
The following options can be passed to any command:
...
--default-not-ready-toleration-seconds=300: Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds=300: Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
```
* The admission plugin is not positioned to receive config as apiserver config is updated to be able to be consumed from files, and then dynamically
The global flags should be deprecated, and a config type created to hold the config for this admission plugin (like other configurable admission plugins)
cc @kubernetes/sig-scheduling-bugs | kind/bug,priority/backlog,sig/scheduling,lifecycle/frozen | low | Critical |
272,056,515 | rust | Provide a way to get feedback on compiler error rates | As per https://github.com/rust-lang/rust/pull/45452#issuecomment-339329308, and [other conversations](https://internals.rust-lang.org/search?q=telemetry) that have been had over time, there's a feeling that we should have ways of identifying what `rustc` features are being used/encountered. This can range from full featured telemetry support in `rustc`/`cargo`, to a simple webapp accepting people to post compiler output. Right now the only ways we receive feedback for confusing output are:
- IRC/gitter
- this issue tracker
- stackoverflow
- /r/rust
If we added telemetry support on rustc, it would also _have_ to be opt-in, and it could start with being a simple flag passed to `rustc` to send a post request with the text output of the compiler. These reports could then be aggregated by type. | C-enhancement,A-diagnostics,metabug,T-compiler | low | Critical |
272,085,451 | flutter | Autocorrect tooltips don't appear on iOS | ## Steps to Reproduce
When using a native iOS app without text prediction enabled, there is a pop-up that indicates when autocorrections are going to occur, and when backspacing on a correction a pop-up appears that gives alternative choices for the word.
In flutter, the autocorrections happen silently and require backspacing into the word to make the correction. See the gif below for a comparison of the behavior.
Flutter should either integrate with the iOS autocorrection system to provide the popups or emulate them. This is a pretty big usability concern which makes it more difficult to enter text because the corrections are harder to notice and harder to fix.

## Logs
N/A, this is a design issue.
## Flutter Doctor
```
$ flutter doctor
Building flutter tool...
[✓] Flutter (on Mac OS X 10.12.6 16G29, locale en-US, channel master)
  • Flutter at /Users/jwm/Projects/flutter
  • Framework revision 337150308c (2 days ago), 2017-11-05 19:39:49 -0800
  • Engine revision e059cc0258
  • Tools Dart version 1.25.0-dev.11.0
[✗] Android toolchain - develop for Android devices
  ✗ Unable to locate Android SDK.
    Install Android Studio from: https://developer.android.com/studio/index.html
    On first launch it will assist you in installing the Android SDK components.
    (or visit https://flutter.io/setup/#android-setup for detailed instructions).
    If Android SDK has been installed to a custom location, set $ANDROID_HOME to that location.
[✓] iOS toolchain - develop for iOS devices (Xcode 9.0.1)
  • Xcode at /Applications/Xcode.app/Contents/Developer
  • Xcode 9.0.1, Build version 9A1004
  • ios-deploy 1.9.2
  • CocoaPods version 1.3.1
[✗] Android Studio (not installed)
  • Android Studio not found; download from https://developer.android.com/studio/index.html
    (or visit https://flutter.io/setup/#android-setup for detailed instructions).
[✓] IntelliJ IDEA Community Edition (version 2017.2.5)
  • Flutter plugin version 18.2
  • Dart plugin version 172.4343.25
[✓] Connected devices
  • iPhone 7 Plus • DF665BBF-F7D6-4B75-8FCA-E092AD4AD6DC • ios • iOS 11.0 (simulator)
```
| a: text input,c: new feature,platform-ios,framework,f: material design,a: fidelity,f: cupertino,customer: crowd,P2,team-ios,triaged-ios,fyi-text-input | low | Critical |
272,167,903 | storybook | How to expand to the full height of the story | I'm having problems trying to accomplish something that should be quite simple: I want a container in my story that expands to the full height of the story "canvas". I have tried setting the style of this container component to `flex: 1`, but it doesn't work. I'm using [storybook-addon-scissors](https://github.com/PeterPanen/storybook-addon-scissors) to set the size of my story, and I would like the content of my story to adapt as I change its container size. Instead of expanding as expected with `flex: 1`, this container component just expands according to the content inside it.
Thanks in advance to anyone that can shed some light here. | feature request,ui | medium | Major |
272,232,507 | TypeScript | Identifier suggestion algorithm could be improved | **TypeScript Version:** 2.7.0-dev.20171108
**Code**
```ts
const getListLength = 0;
const getListLengths = 1;
createListLength;
```
**Expected behavior:**
Suggests `getListLength`.
**Actual behavior:**
Suggests `getListLengths`. | Suggestion,Needs Proposal | low | Minor |
272,237,506 | go | cmd/gofmt: a comment at the end of a line clings onto/prevents insertion of a newline before the next line | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.9.2 darwin/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
```
### What did you do?
1) Created a file named `a.go` with the following contents:
```
package foo // import "example.org/foo"
import "fmt"
var ErrWhatever = fmt.Errorf("whatever")
```
2) Ran `gofmt a.go`. The output was:
```
package foo // import "example.org/foo"
import "fmt"
var ErrWhatever = fmt.Errorf("whatever")
```
### What did you expect to see?
Expected the `gofmt` output to have a newline between the package and the import lines; i.e. like so:
```
package foo // import "example.org/foo"
import "fmt"
var ErrWhatever = fmt.Errorf("whatever")
```
### What did you see instead?
The `gofmt` output did not have a newline between the package and the import lines.
If the import comment `// import "example.org/foo"` isn't present in the file, then the newline is produced. It looks like `gofmt`'s behavior is different when an import comment exists, but that shouldn't be the case. | help wanted,NeedsFix | medium | Critical |
272,351,226 | go | x/build/maintner: Gerrit CL and GitHub issue deletions are not reflected in model | on https://dev.golang.org/release, search for Copybaraimportfromhttps. It shows CLs that have been deleted:
```
Copybaraimportfromhttps
CL 72110 Copybara import from https://github.com/golang/scratch/pull/2 .
CL 72131 Copybara import from https://github.com/golang/scratch/pull/2.
PRESUBMITforCopybarahttps
CL 72090 PRESUBMIT for Copybara https://github.com/golang/scratch/pull/2 .
CL 72091 PRESUBMIT for Copybara https://github.com/golang/scratch/pull/2 .
```
These are pulled in via `ForeachOpenCL`.
/cc @kevinburke @bradfitz | Builders | low | Major |
272,404,734 | rust | Invalid casts for C-style enums should trigger the "overflowing_literals" lint | Is there any reason why the "integer out of range" lint doesn't trigger when you cast a C-style enum to an integer type which isn't big enough?
For example say I have the following program:
```rust
enum Foo {
    A = 1,
    C = 1234,
}

fn main() {
    let a = Foo::A as u32;
    let c = Foo::C as u8;
    let normal = 1234 as u8;
    println!("{}, {}, {}", a, c, normal);
}
```
When you compile it, you get a warning on the integer cast, but not the `Foo::C as u8` line.
```
Compiling playground v0.0.1 (file:///playground)
warning: literal out of range for u8
--> src/main.rs:9:18
|
9 | let normal = 1234 as u8;
| ^^^^
|
= note: #[warn(overflowing_literals)] on by default
Finished dev [unoptimized + debuginfo] target(s) in 0.46 secs
Running `target/debug/playground`
1, 210, 210
```
I would have thought the compiler knows what integer size a C-style enum can fit into and then be able to lint accordingly. So for this case a `Foo`'s largest variant has a value of `1234` meaning it can fit into a `u16` (or `i16`), but not a `u8`.
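As an aside, a runtime-checked conversion does surface the overflow, which is one way to guard against this today while the lint is missing (my sketch, not from the report):

```rust
use std::convert::TryFrom;

enum Foo {
    A = 1,
    C = 1234,
}

// `as` silently truncates: 1234 % 256 == 210, with no warning.
fn lossy(v: Foo) -> u8 {
    v as u8
}

// `try_from` reports the overflow instead of wrapping.
fn checked(v: Foo) -> Result<u8, std::num::TryFromIntError> {
    u8::try_from(v as u16)
}
```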
[(playground)](https://play.rust-lang.org/?gist=76a4606a72b21ea88adf7cd0858d16e8&version=nightly) | A-lints,A-diagnostics,T-compiler,C-feature-request | low | Critical |
272,419,976 | nvm | .bashrc edge case (nvm source commented out) | When updating nvm using the `curl` command rec'd in the readme (https://github.com/creationix/nvm/blame/9953a52afb275617fe5d108cefcd1354cf47bfc0/README.md#L44), this is part of the output:
```
=> nvm source string already in /Users/shoshanaberleant/.bashrc
=> bash_completion source string already in /Users/shoshanaberleant/.bashrc
```
Technically, this is true, but, in fact, the nvm source string was commented out, so the `nvm` command doesn't work.
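A stricter detection would have to skip commented-out lines. A rough sketch of that logic, written in Rust purely for illustration (the real installer is shell, and this ignores other ways a line can be disabled):

```rust
// Return true only if `needle` appears on a line that is not commented out,
// which is the distinction the plain substring check cannot make.
fn has_active_source_line(profile: &str, needle: &str) -> bool {
    profile.lines().any(|line| {
        let trimmed = line.trim_start();
        !trimmed.starts_with('#') && trimmed.contains(needle)
    })
}
```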
Here is the check:
https://github.com/creationix/nvm/blob/9953a52afb275617fe5d108cefcd1354cf47bfc0/install.sh#L346 | bugs,installing nvm: profile detection | low | Minor |
272,425,034 | TypeScript | Long (infinite?) compile times intersecting with large union | **TypeScript Version:** 2.7.0-dev.20171108
**Code**
At the bottom of `src/compiler/utilities.ts` replace the return type of `hasJSDocNodes` with:
```ts
export function hasJSDocNodes(node: Node): node is HasJSDoc & { jsdoc: JSDoc[] } {
```
(Since `hasJSDocNodes` ensures that `jsdoc` will be defined.)
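A self-contained miniature of the pattern involved (hypothetical names; with only two union members it compiles instantly — reproducing the slowdown needs a union as large as `HasJSDoc`):

```typescript
// A type predicate whose target is a union intersected with an object type
// that narrows an optional property to a required one.
interface JSDocLike { comment: string }
type FuncDecl = { kind: "func"; jsdoc?: JSDocLike[] };
type VarDecl = { kind: "var"; jsdoc?: JSDocLike[] };
type HasDoc = FuncDecl | VarDecl; // imagine dozens of members, as in HasJSDoc

function hasDocNodes(node: HasDoc): node is HasDoc & { jsdoc: JSDocLike[] } {
  return !!node.jsdoc && node.jsdoc.length > 0;
}

const n: HasDoc = { kind: "func", jsdoc: [{ comment: "/** hi */" }] };
if (hasDocNodes(n)) {
  // Inside the guard, `n.jsdoc` is `JSDocLike[]`, not `JSDocLike[] | undefined`.
  console.log(n.jsdoc.length);
}
```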
**Expected behavior:**
Eventually compiles.
**Actual behavior:**
Very long compile time. Probably has to do with `HasJSDoc` being a large union. | Bug | low | Minor |
272,432,381 | kubernetes | Kubectl reported a Deployment scaled where as replicas are unavailable | /kind bug
/sig api-machinery
**What happened**:
kubectl reported a Deployment scaled even when the replicas were unavailable.
**What you expected to happen**:
All replicas in the Deployment were unavailable, and even then the command output showed that the Deployment scaled.
**How to reproduce it (as minimally and precisely as possible)**:
Create a Deployment using the configuration file:
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.12.1
ports:
- containerPort: 80
- containerPort: 443
```
Create a ResourceQuota using the configuration file:
```
apiVersion: v1
kind: ResourceQuota
metadata:
name: quota
spec:
hard:
cpu: "10"
memory: 6Gi
pods: "10"
replicationcontrollers: "3"
services: "5"
configmaps: "5"
```
Describe the quota:
```
$ kubectl describe quota/quota
Name: quota
Namespace: default
Resource Used Hard
-------- ---- ----
configmaps 0 5
cpu 300m 10
memory 0 6Gi
pods 3 10
replicationcontrollers 0 3
services 1 5
```
Scale the deployment:
```
$ kubectl scale --replicas=12 deployment/nginx-deployment
deployment "nginx-deployment" scaled
```
The output indicates that the deployment was scaled.
Getting more details about Deployment shows that 9 replicas are unavailable:
```
$ kubectl describe deployment/nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Wed, 08 Nov 2017 15:25:03 -0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=1
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas"...
Selector: app=nginx
Replicas: 12 desired | 3 updated | 3 total | 3 available | 9 unavailable
```
Is this the expected behavior?
Also, no reason is given as to why the replicas did not scale.
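For anyone hitting this, the reason does get recorded, just not on the scale call: pod creations rejected by the quota show up as `FailedCreate` events on the owning ReplicaSet. A sketch of where to look, against a live cluster (resource names taken from the example above):

```
# `kubectl scale` only patches spec.replicas, so it reports success as soon
# as the patch is accepted. The quota rejection happens later, when the
# ReplicaSet controller tries to create the extra pods:
kubectl describe rs -l app=nginx | grep -A 3 FailedCreate

# The quota view shows the ceiling being hit:
kubectl describe quota quota
```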
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T21:07:53Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
AWS
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
| kind/feature,sig/apps,sig/cli,lifecycle/frozen | medium | Critical |
272,533,110 | angular | Please let HttpParams.set and .append accept one object-like arg with nested data | ## I'm submitting a...
[ x ] Feature request
## Current behavior
You can only set or append to HttpParams one by one.
## Expected behavior
With Http.get, .post, etc., you could pass any object with nested object-like data as options.params. First-level keys would be mapped to query params, and values JSON-encoded if they're object-like. I'd like to achieve the same without adding complexity by doing something like:
> new HttpParams().set({ where: { id: [1, 2, 3] } })
## What is the motivation / use case for changing the behavior?
In migrating to HttpClient from Http, I had to add a helper method:
```
private anyToHttpParams(obj: any): HttpParams { // requires lodash (`_`) and HttpParams from @angular/common/http
  return Object.entries(obj)
    .reduce(
      (params, [key, value]) => params.set(
        // HttpParams.set expects a string value, so coerce non-objects too
        key, _.isObjectLike(value) ? JSON.stringify(value) : String(value)
      ),
      new HttpParams()
    );
}
// this.http.get(url, { params: this.anyToHttpParams({ where: { id: [1, 2, 3] } }) });
```
For me, this was sort of a breaking change as well: I didn't get a compiler error when I simply replaced Http with HttpClient and passed objects with nested data to params as I used to with Http, but in the network console my XHR URLs changed to something like `/action?[object object]`.
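A framework-free sketch of the requested behavior — first-level keys become query params and object-like values are JSON-encoded — using the standard `URLSearchParams` instead of Angular's `HttpParams` so it runs anywhere:

```typescript
function toQueryParams(obj: Record<string, unknown>): URLSearchParams {
  const params = new URLSearchParams();
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    // JSON-encode arrays/objects; stringify primitives as-is.
    const isObjectLike = typeof value === "object" && value !== null;
    params.set(key, isObjectLike ? JSON.stringify(value) : String(value));
  }
  return params;
}

// toQueryParams({ where: { id: [1, 2, 3] } }).toString()
// → "where=%7B%22id%22%3A%5B1%2C2%2C3%5D%7D"  (i.e. where={"id":[1,2,3]})
```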
## Environment
Angular version: 5.0.1 | feature,area: common/http,P4,feature: under consideration | medium | Critical |
272,536,530 | youtube-dl | [Request] Site Support: Oprah.com | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.11.06*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.11.06**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--write-srt', u'--sub-lang', u'en', u'--ap-mso', u'nor105', u'--ap-username', u'PRIVATE', u'--ap-password', u'PRIVATE', u'http://www.oprah.com/own-queensugar/copper-sun', u'-v']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.11.06
[debug] Python version 2.7.5 - Linux-3.10.0-693.2.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core
[debug] exe versions: avconv 10.1, avprobe 10.1, ffmpeg 3.4, ffprobe 3.4
[debug] Proxy map: {}
[generic] copper-sun: Requesting header
WARNING: Falling back on generic information extractor.
[generic] copper-sun: Downloading webpage
[generic] copper-sun: Extracting information
ERROR: Unsupported URL: http://www.oprah.com/own-queensugar/copper-sun
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/youtube_dl/extractor/generic.py", line 2159, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "/usr/lib/python2.7/site-packages/youtube_dl/compat.py", line 2539, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "/usr/lib/python2.7/site-packages/youtube_dl/compat.py", line 2528, in _XML
parser.feed(text)
File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
ParseError: not well-formed (invalid token): line 72, column 72
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/youtube_dl/YoutubeDL.py", line 784, in extract_info
ie_result = ie.extract(url)
File "/usr/lib/python2.7/site-packages/youtube_dl/extractor/common.py", line 437, in extract
ie_result = self._real_extract(url)
File "/usr/lib/python2.7/site-packages/youtube_dl/extractor/generic.py", line 3059, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: http://www.oprah.com/own-queensugar/copper-sun
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.oprah.com/own-queensugar/copper-sun
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
| geo-restricted | low | Critical |
272,549,787 | axios | HTTP/2 support | Is there any plan to add HTTP/2 support for node.js? Since it is now officially supported in node.js 9, I think it's highly desirable to have this library support it as well. | state::triage | high | Critical |