id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
287,317,489 | pytorch | `from` keyword in `random_` gives error | In `tensor.random_(from=foo)`, the `from` argument cannot be passed by name because `from` is a reserved Python keyword, so calling the function with that keyword argument gives a `SyntaxError`.
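A minimal reproduction and workaround sketch (mine, not from the original report):
```python
import torch

t = torch.zeros(5)

# t.random_(from=1, to=10)  # SyntaxError: `from` is a reserved keyword

# Workaround: pass the bounds positionally instead of by keyword.
t.random_(1, 10)
print(t)
```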
cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw | module: distributions,triaged | low | Critical |
287,418,628 | youtube-dl | Error with Simpsonsworld.com / FXNetworks | I used youtube-dl to successfully download one episode from simpsonsworld.com yesterday afternoon. However, by yesterday evening, the same exact command was yielding the following error. Thank you for your consideration.
~/Desktop/temp/Simpsons$ youtube-dl -U
youtube-dl is up-to-date (2018.01.07)
~/Desktop/temp/Simpsons$ youtube-dl --ap-mso Charter_Direct --ap-username [email protected] --ap-password blahblah --verbose http://www.simpsonsworld.com/video/273376835817
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--ap-mso', u'Charter_Direct', u'--ap-username', u'PRIVATE', u'--ap-password', u'PRIVATE', u'--verbose', u'http://www.simpsonsworld.com/video/273376835817']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.01.07
[debug] Python version 2.7.12 (CPython) - Linux-4.4.0-104-generic-x86_64-with-Ubuntu-16.04-xenial
[debug] exe versions: avconv N-80901-gfebc862, avprobe N-80901-gfebc862, ffmpeg N-80901-gfebc862, ffprobe N-80901-gfebc862, rtmpdump 2.4
[debug] Proxy map: {}
[FXNetworks] 273376835817: Downloading webpage
[FXNetworks] 273376835817: Downloading Provider Redirect Page
[FXNetworks] 273376835817: Downloading Provider Login Page
[FXNetworks] 273376835817: Logging in
[FXNetworks] 273376835817: Confirming Login
[FXNetworks] 273376835817: Retrieving Session
### ERROR: Unable to download webpage: HTTP Error 401: Unauthorized (caused by HTTPError()); please report this issue on https://yt-dl.org/bug .
Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 517, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2198, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 435, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 473, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
| tv-provider-account-needed | low | Critical |
287,487,481 | flutter | request: better tools for notifying form changes | ## Feature request
Based on some samples and a Gitter chat question: there is no proper way to tell whether a user has made any changes to a form. I was told to use a boolean _saveNeeded flag and set it inside the field's validator, but that does not kick in until form.validate() is called.
The [TextFormField class](https://docs.flutter.io/flutter/material/TextFormField-class.html) does not offer an onChanged callback, nor do FormField or [Form](https://docs.flutter.io/flutter/widgets/Form-class.html).
A workaround is to set autovalidate on Form, but this produces very ugly forms because all validation warnings are shown immediately after load.
Without a dirty flag on the form, it is impossible to know the current state of the form during the onWillPop callback when the user accidentally leaves the form without saving their content.
My suggestions are (any one or more of the following; a workaround sketch follows the list)
- add a dirty state to the FormState
- add a parameter to onWillPop, indicating the dirty state
- add an onChanged callback to Form, with the changed field as parameter
or
- add an onChanged callback to FormField/TextFormField
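In the meantime, a workaround sketch (mine, not part of the proposal; it assumes a State class that owns `_saveNeeded` and `expenseJSON`): track edits with a `TextEditingController` listener instead of abusing the validator.
```
// Mark the form dirty on any keystroke so onWillPop can warn about unsaved
// edits. In build(), pass `controller: _descriptionController` to the
// TextFormField below instead of using initialValue.
TextEditingController _descriptionController;

@override
void initState() {
  super.initState();
  _descriptionController =
      new TextEditingController(text: expenseJSON['description'])
        ..addListener(() {
          _saveNeeded = true; // the user edited the field at least once
        });
}
```
For reference, here is the current code that sets `_saveNeeded` inside the validator: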
```
new Form(
key: formKey,
autovalidate: false,
onWillPop: _onWillPop,
child: new ListView(
padding: const EdgeInsets.all(16.0),
children: <Widget>[
//Description
new TextFormField(
initialValue: expenseJSON['description'],
keyboardType: TextInputType.text,
decoration: const InputDecoration(
hintText: 'Description',
labelText: 'Description *',
),
obscureText: false,
validator: (String value) {
_saveNeeded = true;
if (value.isEmpty)
return 'Description is mandatory';
if (value.length < 3)
return 'Description is mandatory';
final RegExp nameExp = new RegExp(r'^[A-Za-z ]+$');
if (!nameExp.hasMatch(value))
return 'Description contains invalid characters.';
return null;
},
onSaved: (String value) {
print('Save description to JSON: ' + value);
expenseJSON['description'] = value;
},
),
``` | c: new feature,framework,f: material design,customer: crowd,c: proposal,P2,team-design,triaged-design | low | Major |
287,531,011 | flutter | Request sample code for *Transition widgets and subclasses of ImplicitlyAnimatedWidget | As requested in #13932, we'd like to introduce *Transition widgets and subclasses of ImplicitlyAnimatedWidget for building simple animations before the tween animation tutorial. This issue requests adding sample code to these widgets' API docs, so users can understand how to use them after being referred to them by the Animations landing page or the Animations widget catalog. | framework,a: animation,from: study,d: api docs,c: proposal,P2,team-framework,triaged-framework | low | Minor |
287,536,611 | flutter | Consider renaming vsync in AnimationController to tickerProvider | From the Animation API UX study, developers had trouble understanding what `vsync` meant and how it was related to TickerProvider, when they were defining an AnimationController object. It would be clearer to rename the `vsync` property to `tickerProvider`, so Flutter only introduces one new concept instead of two to developers new to the animations API. | framework,a: animation,c: API break,from: study,c: proposal,P3,team-framework,triaged-framework | low | Major |
287,545,539 | TypeScript | Formatter: Place JSX closing tag on new line |
**TypeScript Version:** 2.7.0-dev.20180108
**Code**
```ts
<div>
wat</div>
```
**Expected behavior:**
```ts
<div>
wat
</div>
```
**Actual behavior:**
No changes are made by the formatter.
**Related:**
While not directly related, I came across this while working on a fix for #20766. I can work on a fix for this issue as well. | Suggestion,Help Wanted,Domain: Formatter | low | Critical |
287,706,728 | opencv | CUDA ORB ROI boundary conditions should be modified in "void ORB_Impl::detectAndComputeAsync" | ##### System information (version)
- OpenCV => 3.3:
- Operating System / Platform => Ubuntu16.04 Linux 4.10.0-42-generic x86_64
- Compiler => codeblocks 13.12
##### Detailed description
In CUDA ORB, if the size of the picture input is too small, detectAndComputeAsync will report
> OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in GpuMat.
In CPU ORB, by contrast, the same picture does not cause this problem when `ORB::create()` is given the same values.
After reading the source code in /opencv-3.3.1/modules/cudafeatures2d/src/orb.cpp and comparing it with CPU ORB in /opencv-3.3.1/modules/features2d/src/orb.cpp, I find that the way the ROI is computed in CUDA ORB causes the error:
In `void ORB_Impl::buildScalePyramids(InputArray _image, InputArray _mask, Stream& stream)`, it goes like this:
```
float scale = 1.0f / getScale(scaleFactor_, firstLevel_, level);
Size sz(cvRound(image.cols * scale), cvRound(image.rows * scale));
Rect inner(edgeThreshold_, edgeThreshold_, sz.width - 2 * edgeThreshold_, sz.height - 2 * edgeThreshold_);
```
It means that if `nLevels` and `edgeThreshold` in `cv::cuda::ORB::create()` are high enough, `inner` ends up outside the GpuMat and causes the problem. Even the default values `nLevels=8` and `edgeThreshold = 31` crash when I choose a [170*209] image.
However, in CPU ORB method, `void ORB_Impl::detectAndCompute( InputArray _image, InputArray _mask, std::vector<KeyPoint>& keypoints, OutputArray _descriptors, bool useProvidedKeypoints )` gives another ROI choosing method:
```
for( level = 0; level < nLevels; level++ )
{
float scale = getScale(level, firstLevel, scaleFactor);
layerScale[level] = scale;
Size sz(cvRound(image.cols/scale), cvRound(image.rows/scale));
Size wholeSize(sz.width + border*2, sz.height + border*2);
if( level_ofs.x + wholeSize.width > bufSize.width )
{
level_ofs = Point(0, level_ofs.y + level_dy);
level_dy = wholeSize.height;
}
Rect linfo(level_ofs.x + border, level_ofs.y + border, sz.width, sz.height);
layerInfo[level] = linfo;
layerOfs[level] = linfo.y*bufSize.width + linfo.x;
level_ofs.x += wholeSize.width;
}
```
Inside, `Rect linfo(level_ofs.x + border, level_ofs.y + border, sz.width, sz.height);` can avoid the problem properly.
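A possible fix sketch on the CUDA side (my suggestion only, untested against the OpenCV tree) would be to clamp the per-level ROI before using it:
```
// Hypothetical patch inside ORB_Impl::buildScalePyramids, right after `inner`
// is constructed: keep the ROI inside the level image so small inputs or deep
// pyramids cannot push it out of the GpuMat.
inner &= Rect(0, 0, sz.width, sz.height);   // intersect with the level bounds
if (inner.width <= 0 || inner.height <= 0)
    continue;                               // level too small to hold keypoints
```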
Maybe the CPU ORB is updated frequently while CUDA ORB gets less attention, since little information about CUDA ORB can be found on the Internet and ORB fixes are usually submitted against the CPU implementation. I hope the problem can be solved soon. Thanks!
##### Steps to reproduce
[orb_gpu_tidyup.txt](https://github.com/opencv/opencv/files/1621838/orb_gpu_tidyup.txt)


When `Ptr<cuda::ORB> d_orb = cuda::ORB::create(500, 1.2f, 6, 31, 0, 2, 0, 31, 20,true);` is changed to `Ptr<cuda::ORB> d_orb = cuda::ORB::create(500, 1.2f, 8, 31, 0, 2, 0, 31, 20,true);`, the error will happen.
| priority: low,category: gpu/cuda (contrib) | low | Critical |
287,751,994 | rust | [Nightly Regression] False positive warning with constants in conditional | This is modified code from a real regression in my code:
```
#![allow(unused_variables)]
fn main() {
const N: usize = 1_000;
const M: usize = 4;
const V: [u32; M] = [1, 2, 3, 4];
let x = if N <= M {
V[N - 1]
} else {
0
};
}
```
```
warning: this expression will panic at run-time
--> C:\lavoro\bugs\test.rs:9:9
|
9 | INIT[N - 1]
| ^^^^^^^^^^^ index out of bounds: the len is 4 but the index is 999
rustc 1.25.0-nightly (f62f77403 2018-01-10)
binary: rustc
commit-hash: f62f774035735a06c880c48c0b9017fcc0577e33
commit-date: 2018-01-10
host: x86_64-pc-windows-gnu
release: 1.25.0-nightly
LLVM version: 4.0
```
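A workaround sketch (mine, not from the report) that should sidestep the const-evaluated indexing, and with it the warning:
```
#![allow(unused_variables)]
fn main() {
    const N: usize = 1_000;
    const M: usize = 4;
    const V: [u32; M] = [1, 2, 3, 4];
    // `get` returns None for an out-of-bounds index instead of panicking,
    // so there is no guaranteed-to-panic expression for the compiler to flag.
    let x = if N <= M { *V.get(N - 1).unwrap_or(&0) } else { 0 };
}
```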
D language and C++17 avoid such problems using "static if" and "if constexpr". | T-compiler,C-bug,A-const-eval | low | Critical |
287,758,388 | youtube-dl | rec-tube.com | [x] Site support request (request for adding support for a new site)
https://rec-tube.com/watch/2017101813010487/?2 | site-support-request,nsfw | low | Minor |
287,783,314 | kubernetes | System-wide performance benchmarks & setting up CI-testing for them | This has been under discussion for a while now and it's time to finally do it. Opening this issue as an umbrella for benchmarks across the system.
FMU there seem to be 2 kinds of those in the codebase atm:
- benchmark unit tests (e.g `BenchmarkQuantityMarshalJSON`, `BenchmarkWatchHTTP`, `BenchmarkRandomStringGeneration`): Like normal unit tests, these are runnable using a single test binary - and we seem to have quite a few of these, mainly around api-machinery and helper libraries.
- benchmark integration tests (e.g `BenchmarkScheduling`, `BenchmarkSchedulingAntiAffinity`, `test/e2e_node/jenkins/benchmark`): These also require running 1 or 2 other components (like etcd, apiserver). I can currently find these only for some components (unless I'm missing sth), mainly around node and scheduler. IMHO we should have many more of these (e.g. for controller-manager, apiserver?) as they often help us identify performance issues in a timely, cheap and easy way.
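For concreteness, a minimal example of the first kind (illustrative only, not taken from the Kubernetes tree); it runs with nothing but `go test -bench=.`:
```go
package example

import (
	"crypto/rand"
	"encoding/hex"
	"testing"
)

// A benchmark unit test needs no other components: the test binary alone
// drives b.N iterations of the code under test.
func BenchmarkRandomStringGeneration(b *testing.B) {
	buf := make([]byte, 8)
	for i := 0; i < b.N; i++ {
		rand.Read(buf)
		_ = hex.EncodeToString(buf)
	}
}
```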
To start with, I'm going to work towards setting up scheduler benchmarks on our CI (possibly more). However, the onus of writing useful and informative benchmarks for the respective components lies with the SIGs. I'll be happy to assist (mostly from the CI-side) as much as I can here.
cc @kubernetes/sig-scalability-misc @kubernetes/sig-testing-misc
FYI @wojtek-t @porridge @smarterclayton @davidopp @bsalamat (folks involved in the discussions so far) | sig/scalability,sig/scheduling,sig/testing,lifecycle/frozen | medium | Major |
287,834,558 | go | plugin: using coverpkg when testing results in plugin version mismatch | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
1.9.2
### Does this issue reproduce with the latest release?
yes - 1.9.2 is the latest, stable release
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/vagrant/pluginbugs"
GORACE=""
GOROOT="/usr/local/go-1.9.2"
GOTOOLDIR="/usr/local/go-1.9.2/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build917652042=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
### What did you do?
https://github.com/jdef/plugins1/blob/master/README.md
### What did you expect to see?
* all tests pass successfully
### What did you see instead?
* the `go test` invocation using `-coverpkg` fails with "plugin was built with a different version of package..." message
| help wanted,NeedsFix | low | Critical |
287,873,236 | rust | Audit set of bundled MinGW import libraries | The set of import libraries that Rust bundles should strictly be what is needed to link against the libraries that come with the Rust distribution. Missing an import library causes link errors while including unnecessary import libraries bloats the distribution.
| Library | Necessary |
|--|--|
| libadvapi32.a | Yes |
| libbcrypt.a | No |
| libcomctl32.a | No |
| libcomdlg32.a | No |
| libcredui.a | Yes and missing! |
| libcrypt32.a | No |
| libdbghelp.a | Yes |
| libgdi32.a | Yes |
| libimagehlp.a | No |
| libiphlpapi.a | No |
| libkernel32.a | Yes |
| libmsimg32.a | Yes |
| libmsvcrt.a | Yes |
| libodbc32.a | No |
| libole32.a | No |
| liboleaut32.a | No |
| libopengl32.a | Yes |
| libpsapi.a | Yes |
| librpcrt4.a | No |
| libsecur32.a | Yes and missing! |
| libsetupapi.a | Yes |
| libshell32.a | Yes |
| libuser32.a | Yes |
| libuserenv.a | Yes |
| libuuid.a | No |
| libwinhttp.a | No |
| libwinmm.a | No |
| libwinspool.a | Yes |
| libws2_32.a | Yes |
| libwsock32.a | No | | A-linkage,C-enhancement,T-compiler,O-windows-gnu,T-bootstrap | low | Critical |
287,888,746 | flutter | Let flutter be installable via homebrew | Latest update: https://github.com/flutter/flutter/issues/14050#issuecomment-1012647917
A description of the desired user experience: https://github.com/flutter/flutter/issues/14050#issuecomment-446712064
----
Didn't find any existing issues on this. Opening for tracking.
Let flutter be `brew install flutter`able. | c: new feature,tool,platform-mac,a: first hour,customer: crowd,P3,team-tool,triaged-tool | high | Critical |
287,913,897 | TypeScript | Destructing assignments with initializers do not affect control flow correctly in strictNullChecks |
**TypeScript Version:** 2.7.0-dev.201xxxxx
**Code**
```ts
const { y = "A" } = { y: "B" };
y; // y's type here
```
**Expected behavior:**
y is `"B"` - `"B"` is not a type with `undefined` in its domain, so the initializer will _never_ be run. (And should be marked as unreachable code, and definitely not incorporated into the variable's type)
**Actual behavior:**
y is `"A" | "B"` - this is fine outside of `strictNullChecks`, but is lacking within it.
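For contrast (my own illustration, not part of the report), the union is only warranted when the source property can actually be undefined:
```ts
declare const src: { y?: "B" };
const { y = "A" } = src;
y; // "A" | "B" is correct here, because src.y may be undefined
```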
| Bug | low | Critical |
287,915,620 | flutter | Service protocol extensions for screenshotting need to be able to handle multiple Flutter views. | The service protocol extensions "_flutter.screenshot" and "_flutter.screenshotSkp" return a single item. This involves choosing a random view when mutliple Flutter views are present. These calls must be modified to return an array of resources. | customer: fuchsia,tool,engine,a: desktop,P3,team-macos,triaged-macos | low | Minor |
287,924,700 | TypeScript | Provide lib for modern DOM | This is basically a re-opening of #2910, which @mhegazy asked me to do.
Nowadays TS has the `lib` compiler option that allow users to selectively choose which core definitions to include or not based on their target.
It would be nice to add a new option similar to `dom` and `dom.iterable` for users targeting a modern DOM runtime. Not sure how to name this since W3C has moved from releases to a "living standard" model.
For example, the living standard includes interfaces `ChildNode` with methods `after`, `before`, `remove` and `replaceWith`; `ParentNode` with methods `prepend` and `append`.
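A rough sketch of the declarations such a lib would need to add (paraphrased from the living standard, not actual lib.d.ts content):
```ts
interface ChildNode {
    before(...nodes: (Node | string)[]): void;
    after(...nodes: (Node | string)[]): void;
    replaceWith(...nodes: (Node | string)[]): void;
    remove(): void;
}

interface ParentNode {
    prepend(...nodes: (Node | string)[]): void;
    append(...nodes: (Node | string)[]): void;
}
```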
Those methods have been in Firefox and Chrome for a long while, they're in preview in Edge (17035+).
They are very convenient and there's no reason a dev that targets modern browsers shouldn't be able to do `"lib": "dom.living"` and use them.
Unfortunately I am not sure if there's a list somewhere of all those new features... except going through the whole standard :( | Suggestion,Help Wanted,Domain: lib.d.ts,Experience Enhancement | low | Major |
287,933,302 | TypeScript | Suggestion for readonly interface | Avoid duplicating "readonly" for each property by applying it to the interface definition itself.
This is to reduce boilerplate code defining immutable objects.
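(For comparison, and not a replacement for the proposal: the `Readonly<T>` mapped type can already produce a fully read-only view today, though it does not remove the per-property keyword from the interface declaration itself.)
```ts
interface State {
    prop1: string;
    prop2: string;
}

// Every property of FrozenState is readonly, with no repetition:
type FrozenState = Readonly<State>;

declare const s: FrozenState;
// s.prop1 = "x"; // error: Cannot assign to 'prop1' because it is a read-only property.
```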
Having interface where ALL properties are read only:
```ts
interface State {
readonly prop1: string;
readonly prop2: string;
...
readonly prop22: string;
}
```
equals to:
```ts
readonly interface State {
prop1: string;
prop2: string;
...
prop22: string;
}
``` | Suggestion,Awaiting More Feedback | medium | Major |
287,963,197 | react-native | Inverted FlatList displays activity indicator at the bottom | ### Is this a bug report?
yes
### Have you read the [Contributing Guidelines](https://facebook.github.io/react-native/docs/contributing.html)?
yes
### Environment
Environment: all newer environments
Packages: any packages
Target Platform: any platform
Version: 0.53.0 and all other versions of RN that support FlatList with the ```inverted``` attribute
### Steps to Reproduce
1. ```return <FlatList {...props} inverted data={[...]} refreshing={this.state.isLoading} onRefresh={this.someListUpdatingFunction} />```
(just add the 'inverted' attribute to any RN ```<FlatList/>``` to reproduce, then try pulling **down** to refresh, then try pulling **up** to refresh)
### Expected Behavior
onRefresh should allow users to refresh using the **most common** way that users have been trained to refresh data over the years, which is to "pull **down** to refresh" and see an ActivityIndicator spinning **above** the FlatList, **never** below it... this should be the default **even when** inverted={true}
### Actual Behavior
The user has to pull UP to refresh (with the ActivityIndicator at the bottom). Literal vs. intuitive: when taken "literally", I can agree that this is expected behavior since the list is, after all, "inverted"; when taken "intuitively", however, it's not so expected, **especially** for users. I don't think app users have ever been trained to pull up to refresh data, except maybe in the "OfferUp" chat app!
### Most Applicable Use Case
- a chat app, where users pull down to load more chat history that's appended above, similar to how it's done in iOS Messages app
- UIs like terminals, event logs, chat, etc... where it's common to insert new content from the bottom and load old content when the user scrolls to the top or pulls down to refresh
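A workaround sketch for that use case (mine, not from the report; `Message` and `loadOlderMessages` are hypothetical names): on an inverted list, `onEndReached` fires when the user scrolls to the visual top, so older history can be loaded there instead of relying on pull-up-to-refresh:
```js
<FlatList
  inverted
  data={this.state.messages}
  keyExtractor={item => item.id}
  renderItem={({ item }) => <Message {...item} />} // hypothetical row component
  // With `inverted`, the list's "end" is at the visual top, so this is where
  // loading of older history belongs.
  onEndReached={this.loadOlderMessages} // hypothetical history loader
  onEndReachedThreshold={0.1}
/>
```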
### Demo
https://snack.expo.io/S1UOyWgSM
(to reproduce, just add the 'inverted' attribute to any RN ```<FlatList/>``` component using v0.53.0 or any version of RN that supports FlatList) | Good first issue,Help Wanted :octocat:,Type: Enhancement,Component: FlatList,Bug | medium | Critical |
287,980,447 | pytorch | Protect user from No module named _C import error | After installing pytorch, I tried to test that it worked by firing up python and running `import torch`.
I get the error
ImportError: No module named _C
I was sitting in the root of the pytorch git tree. Moving out of the tree solves the problem, but it's mysterious until you find someone else on the web who had it.
Please consider adding a check that the user isn't sitting in the tree during import to save newbies from this issue?
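A rough sketch of the kind of guard being requested (illustrative only, not actual torch code):
```python
import os

def _check_not_in_source_tree():
    # If the working directory looks like a pytorch checkout, `import torch`
    # resolves to the source package, which has no compiled `_C` extension.
    cwd = os.getcwd()
    looks_like_checkout = (
        os.path.isdir(os.path.join(cwd, "torch")) and
        os.path.isfile(os.path.join(cwd, "setup.py")) and
        not any(f.startswith("_C") for f in os.listdir(os.path.join(cwd, "torch")))
    )
    if looks_like_checkout:
        raise ImportError(
            "You appear to be running Python from inside the pytorch source "
            "tree; `import torch` picks up the uncompiled source package. "
            "Change out of the checkout directory and try again."
        )
```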
Thanks! | module: error checking,triaged,module: pybind | low | Critical |
288,046,199 | pytorch | CUDNN_STATUS_INTERNAL_ERROR when training with conv3d | I am trying to train an inflated 3D network.
Replacing the model with a different one does not cause the error.
Setting torch.backends.cudnn.enabled to False also fixes the problem, but that is quite inefficient.
Here is some info about the environment: cuDNN version 7003, PyTorch 0.3.
I tried different versions of PyTorch and cuDNN; nothing helps.
Removing .nv does not help either.
The GPU and drivers were checked, and nothing seems wrong.
Model code:
https://gist.github.com/yunkaili/297cfc8765269e4d677f7e4bfc7cc064
Below is the error info:
```
Traceback (most recent call last):
File "main.py", line 316, in <module>
main()
File "main.py", line 123, in main
train(train_loader, model, criterion, optimizer, epoch)
File "main.py", line 171, in train
output = model(input_var)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 68, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 78, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
raise output
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 42, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/home/liyunkai/i3d_base_model2/models.py", line 134, in forward
base_out = self.base_model(input.view((-1, sample_len, cfg.TRAIN.FRAMES_IN_SNIPPET) + input.size()[-2:]))
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/home/liyunkai/i3d_base_model2/i3d_model.py", line 163, in forward
x = self.layer4(x)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 67, in forward
input = module(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/home/liyunkai/i3d_base_model2/i3d_model.py", line 86, in forward
out = self.conv3(out)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 383, in forward
self.padding, self.dilation, self.groups)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 126, in conv3d
return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_INTERNAL_ERROR
^CException ignored in: <module 'threading' from '/usr/lib/python3.5/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 1288, in _shutdown
t.join()
File "/usr/lib/python3.5/threading.py", line 1054, in join
self._wait_for_tstate_lock()
File "/usr/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 3208, in atexit_operations
self.reset(new_session=False)
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 1205, in reset
self.displayhook.flush()
File "/usr/local/lib/python3.5/dist-packages/IPython/core/displayhook.py", line 306, in flush
gc.collect()
KeyboardInterrupt
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 3208, in atexit_operations
self.reset(new_session=False)
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 1205, in reset
self.displayhook.flush()
File "/usr/local/lib/python3.5/dist-packages/IPython/core/displayhook.py", line 306, in flush
gc.collect()
KeyboardInterrupt
THCudaCheck FAIL file=/pytorch/torch/lib/THC/generic/THCStorage.c line=184 error=77 : an illegal memory access was encountered
terminate called after throwing an instance of 'std::runtime_error'
what(): cuda runtime error (77) : an illegal memory access was encountered at /pytorch/torch/lib/THC/generic/THCStorage.c:184
``` | module: cudnn,module: convolution,triaged | low | Critical |
288,057,844 | angular | Angular - Form Validation (duplicate trigger function custom validate) |
## I'm submitting a...
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
I am experimenting with custom form validation and found something unreasonable: when the component first initializes a new FormGroup, I pass a custom validator function (customValidator) and add a console.log to observe when it runs, and I see that creating the FormGroup calls the validator three times, like this:
The logging statement is `console.log('customValidator', control);`
customValidator FormControl {validator: ƒ, asyncValidator: null, _onCollectionChange: ƒ, pristine: true, touched: false, …}
hero-form-reactive.component.ts:31 customValidator FormControl {validator: ƒ, asyncValidator: null, _onCollectionChange: ƒ, pristine: true, touched: false, …}
hero-form-reactive.component.ts:31 customValidator FormControl {validator: ƒ, asyncValidator: null, _onCollectionChange: ƒ, pristine: true, touched: false, …}
## Expected behavior
Because it is called three times, calling an API in an async validator results in duplicate requests. I can't really fix this, even temporarily, with a lodash debounce, and I think it should be called only once on init.
```javascript
ngOnInit(): void {
this.heroForm = new FormGroup({
'name': new FormControl(this.hero.name, [
Validators.required,
Validators.minLength(4),
this.customValidator.bind(this), // <----- Custom validator method call
]),
'alterEgo': new FormControl(this.hero.alterEgo),
'power': new FormControl(this.hero.power, Validators.required)
});
}
customValidator(control: FormControl) {
console.log('customValidator', control);
}
```
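One mitigation sketch (my own suggestion, not from the report): Angular 5's `updateOn` option at least limits how often an expensive or async validator runs after initialization, although it does not change the calls made while the FormGroup is being constructed:
```javascript
this.heroForm = new FormGroup({
  'name': new FormControl(this.hero.name, {
    validators: [
      Validators.required,
      Validators.minLength(4),
      this.customValidator.bind(this),
    ],
    updateOn: 'blur', // run validators on blur instead of on every value change
  }),
  'alterEgo': new FormControl(this.hero.alterEgo),
  'power': new FormControl(this.hero.power, Validators.required)
});
```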
[Link to Repro](https://plnkr.co/edit/HOOnul9tqUagPB1P4E0l?p=preview)
with file name app/reactive/hero-form-reactive.component.ts
When you open Developer Tools you will see the custom validate function called three times.
## Minimal reproduction of the problem with instructions
## What is the motivation / use case for changing the behavior?
## Environment
<pre><code>
Angular version: 5.1.0
Browser:
- [x] Chrome (desktop) version 63.0.3239.132 (Official Build) (64-bit)
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: v8.9.3
- Platform: Mac
Others:
</code></pre>
| type: bug/fix,area: forms,state: confirmed,P4 | low | Critical |
288,089,899 | go | net/http: make Transport's idle connection management aware of DNS changes? |
### What version of Go are you using (`go version`)?
go version go1.9 linux/amd64
and
go version go1.9.2 linux/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
% go env
GOARCH="amd64"
GOBIN="/home/sszuecs/go/bin"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/sszuecs/go"
GORACE=""
GOROOT="/usr/share/go"
GOTOOLDIR="/usr/share/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build505089582=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
### What did you do?
I am running `go run main.go` and change the /etc/hosts to change www.google.de to 127.0.0.1.
Depending on the order, it either fails to switch DNS after IdleConnTimeout or never performs a DNS lookup again. A new DNS lookup is attempted if you first point the name to 127.0.0.1 and afterwards comment out the entry in /etc/hosts. The problem is that if you want to switch your target load balancers via a DNS lookup, this does not happen. The workaround is commented out in the code and reliably performs the DNS failover. The cause seems to be that IdleConnTimeout is bigger than the time.Sleep duration in the code, which you can also change to see that this works. For an edge proxy with a high number of requests, the case IdleConnTimeout < process-next-request will never happen.
```go
package main

import (
	"log"
	"net"
	"net/http"
	"time"
)

func main() {
	tr := &http.Transport{
		DialContext: (&net.Dialer{
			Timeout:   5 * time.Second,
			KeepAlive: 30 * time.Second,
			DualStack: true,
		}).DialContext,
		TLSHandshakeTimeout: 5 * time.Second,
		IdleConnTimeout:     5 * time.Second,
	}
	go func(rt http.RoundTripper) {
		for {
			time.Sleep(1 * time.Second)
			req, err := http.NewRequest("GET", "https://www.google.de/", nil)
			if err != nil {
				log.Printf("Failed to do request: %v", err)
				continue
			}
			resp, err := rt.RoundTrip(req)
			if err != nil {
				log.Printf("Failed to do roundtrip: %v", err)
				continue
			}
			//io.Copy(ioutil.Discard, resp.Body)
			resp.Body.Close()
			log.Printf("resp status: %v", resp.Status)
		}
	}(tr)
	// FIX:
	// go func(transport *http.Transport) {
	// 	for {
	// 		time.Sleep(3 * time.Second)
	// 		transport.CloseIdleConnections()
	// 	}
	// }(tr)
	ch := make(chan struct{})
	<-ch
}
```
### What did you expect to see?
I want to see that IdleConnTimeout reliably closes idle connections, so that DNS is queried again for new connections, similar to what the commented-out goroutine in the code does. We need to be able to slowly fade traffic over.
### What did you see instead?
In case you start the application with /etc/hosts entry is not set, and then change it, it will never fail the request, so the new DNS lookup is not being made. | NeedsFix | high | Critical |
288,161,838 | pytorch | Better header hygiene in ATen | Steps to reproduce:
1. Edit `native_functions.yaml` to add a single new function
2. Rebuild
Expected result: Quick rebuild
Actual result: Everything in the universe rebuilds
This is because a good portion of our C++ functions include `ATen/ATen.h`, which includes the autogenerated (very long) header file, which gets modified when you edit `native_functions.yaml`.
I suggest we separate the headers which depend on methods from functions. Methods have to be defined in the class so life is hard, but functions are not, so they can be split up and only functions you actually need can be included. This requires a bit more import discipline, but you now get to avoid long rebuilds.
(Or maybe you think the cure is worse than the disease.)
cc @ezyang @bhosmer @smessmer @ljk53 | module: internals,triaged | low | Minor |
288,170,934 | vscode | [folding] "Select all Occurrences" bug | - VSCode Version: 1.19.2
- OS Version: Win 10
Steps to Reproduce:
- Open a new document and paste the following content
```
abc
abc
def
ghi
abc
abc
def
ghi
```
- Place the cursor in the first line and execute Fold (`Ctrl + Shift + [`)
- Select "abc" in the 3rd line
- Select all occurrences (`Ctrl + Shift + L`)
- type A to replace all selections with "A"
- Unfold All (`Ctrl + K` `Ctrl + J`)
- There are 3 A's in the document. Expected: 4.
| bug,editor-folding | low | Critical |
288,180,837 | youtube-dl | Support request for Family Channel websites (Family, CHRGD, Family Jr., Telemagino) | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.01.07*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.07**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
youtube-dl -v https://www.chrgd.ca/videos/roboslugs/
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.chrgd.ca/videos/roboslugs/']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.01.07
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.16299
[debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, rtmpdump 2.4
[debug] Proxy map: {}
[generic] roboslugs: Requesting header
WARNING: Falling back on generic information extractor.
[generic] roboslugs: Downloading webpage
[generic] roboslugs: Extracting information
[redirect] Following redirect to https://www.chrgd.ca/enable-javascript
[generic] enable-javascript: Requesting header
WARNING: Falling back on generic information extractor.
[generic] enable-javascript: Downloading webpage
[generic] enable-javascript: Extracting information
ERROR: Unsupported URL: https://www.chrgd.ca/enable-javascript
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp4k56gdgt\build\youtube_dl\YoutubeDL.py", line 784, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp4k56gdgt\build\youtube_dl\extractor\common.py", line 438, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp4k56gdgt\build\youtube_dl\extractor\generic.py", line 3086, in _real_extract
youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.chrgd.ca/enable-javascript
...
<end of log>
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://www.chrgd.ca/videos/roboslugs/
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
| geo-restricted | low | Critical |
288,188,812 | rust | incr.comp.: Improve caching efficiency by handling spans in a more robust way |
The Problem
-----------
Source location information (which, for simplicity, I'll just call "spans" from here on out) are the bane of incremental compilation's existence. Why is that? Unlike most other kinds of frequent changes done to source code, changing spans has (seemingly) non-local effects.
As an example, let's first consider a "regular" change of a program, like turning a `+` in an expression into a `-`. This change means that the function containing the expression has to be re-analyzed and the object file it was instantiated in has to be re-compiled. So far, so expected. Now, in contrast, consider a change that affects the span of a function, like adding a line with a comment to it. At first glance, it looks like we haven't really changed anything -- we just added a comment after all -- but that's not true. Spans are part of the HIR, the MIR, ScopeTrees, and (via debuginfo and panic messages) even LLVM IR and object files. So, adding a comment to a function will *legitimately* cause the function and its containing object file to be re-compiled. That's a bit unexpected and sad, but how is it "non-local"?
Remember that we added a new line with a comment to a function, thus changing the span of the function. What I didn't explicitly mention was that by adding this line, we shifted down *everything following that line* in the same source file, thus changing not only one function but potentially dozens of functions and type definitions. That's what I described as "non-local" effects (or rather "seemingly" non-local because shifting everything by a line is a legitimate, real change to everything that has been shifted, it's just easy to overlook).
"That's horrific!", you say, "We have to do something about it!" Well, I'm glad you think so too.
What can we do?
---------------
As stated above, the changes and the invalidation throughout the incr.comp. cache that they cause are legitimate. They are not false positives, since changing the source location of a function really changes (for example) the MIR of that function. So we cannot just be smarter about tracking spans or ignore them altogether. However, what we can do is refactoring the representation of HIR, MIR, etc, so that they don't actually contain spans anymore. The span information has to be somewhere, and we still have to be able to map various things in HIR, MIR, etc to spans, but spans can be split out into separate tables. As a consequence, HIR, MIR, and ScopeTrees will be represented in a way that is impervious to changes that don't affect their *structure*.
One way to achieve this (and the only way I know) is to introduce the concept of "abstract spans". An abstract span does not directly contain a source location, but it identifies a source location uniquely. For example, if we store all spans in a side table then the abstract span would be the key to this table. For this to bring any improvement over the current situation, an abstract span must be stable across changes that don't affect the structure of the thing containing it. E.g. shifting down a function by a line can change the thing the abstract span points to, but the value of the abstract span itself must not change. (This is simple to achieve by using a scheme that is similar to what we already do for `HirId`. Implementing it without increasing memory requirements is harder).
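To make the shape of this concrete, an illustrative-only sketch (none of these are real rustc types):
```
#[derive(Copy, Clone)]
struct Span { lo: u32, hi: u32 }

// Stable key stored in HIR/MIR instead of a Span; e.g. derived from a NodeId.
#[derive(Copy, Clone)]
struct SpanId(u32);

// One table per HirId::owner, tracked by its own DepNode: shifting a function
// down a line changes this table, but not the HIR/MIR that only store SpanIds.
struct SpanTable { spans: Vec<Span> }

impl SpanTable {
    fn get(&self, id: SpanId) -> Span {
        self.spans[id.0 as usize]
    }
}
```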
Implementation Strategies
-------------------------
There are a few prerequisites for the implementation:
- Span information must be tracked by a different `DepNode` than `Hir`, `HirBody`, `OptimizedMir`, etc, which implies that it must not be directly accessible from any of the data covered by these `DepNodes`.
- Span information must still be tracked, but in contrast to the current situation, we only want to depend on it when the information is actually used.
- Abstract spans must be stable.
These goals can be achieved by:
- Splitting out span information during HIR lowering and storing it separately. I imagine having one table per `HirId::owner` that then corresponds to one `DepNode`, making spans be tracked at the same granularity as HIR items.
- Replacing `Span` fields with abstract spans in HIR, MIR, etc. This will mean quite a bit of refactoring everywhere these spans are used (as opposed to just being copied around)
Alternatively, this could also be achieved by:
- Making the `CodeMap` inaccessible from queries and generating a map from `Span` value to `DepNode` during HIR lowering, thus effectively making the existing `Span` type abstract.
- Providing a query that allows to decode a `Span` to its contents.
- Making sure that none of the error reporting APIs take `Spans` directly.
I lean a bit towards alternative (1) but it's hard to gauge which one will lead to cleaner, more robust code in the end. Solution (1) would have a risk of false positives (too much invalidation), while solution (2) has the risk of false negatives (changes not detected) because existing APIs present tracking holes. Not detecting changes seems like the worse problem.
Regardless of the implementation, we will have to store additional tables in crate metadata that allow mapping from abstract spans to regular spans for upstream crates.
Abstract Span Representation
----------------------------
Ideally, an abstract span would not take up more space than one `u32`, which is how much space a `Span` takes up. One way to achieve this would be by making abstract spans be `struct SpanId(ast::NodeId)`. Mapping from `SpanId` to `Span` would then involve mapping from `NodeId` to `HirId`, taking the `HirId::owner` to identify the correct side table *and* `DepNode`, and then the `HirId::local_id` as key into the side table. However, this only works for the current crate. In order for this to work across crates, we would either have to make `SpanId` also contain a `CrateNum` (thus doubling its size to 8 bytes), or implement a `NodeId` remapping scheme, similar to what we do for imported `FileMaps` and formerly already had for AST "inlining". With the latter in place we might be able to remove the `HirId` from some of the HIR structs again, which would help amortize its implementation effort.
`NodeId`-based abstract spans have the restriction of only being able to represent things that have a `NodeId`. However, that should be easily solved by assigning `NodeId`s to things that at the moment have a `Span` but no `NodeId`.
`NodeId`-based abstract spans have the advantage that HIR structs would not have to store a separate span field. The `SpanId` could be generated from the already available `NodeId`.
Abstract spans could be implemented completely separately from `NodeId` and `HirId` but there's probably little advantage to doing so while quite a bit of new infrastructure would have to be put into place.
Guarding against Regressions
----------------------------
After putting so much effort into using abstract spans, we'll want to avoid that vanilla `Span` values make their way into query results again. Luckily this should be easily achievable by adding an assertion to the `HashStable` implementation for `Span` that makes sure we don't encounter unexpected invocations.
Abstract Spans Level 2
----------------------
The first goal would be to use abstract spans in everything up until and including MIR. An even more ambitious goal would be to also use abstract spans in cached LLVM IR and/or object files. That might allow us to skip re-optimizing code and just patch up source locations (if it's really just spans that have changed -- detecting that is another challenge).
Call for feedback
-----------------
Since this will probably result in quite a few changes, I'd like to get some feedback before jumping into an implementation. Here are some guiding questions:
- Did I explain the problem properly?
- Do you know an alternative to span abstraction for solving the problem?
- Which of the two implementation approaches would you choose?
- Is there a better way of implementing abstract spans?
Any kind of feedback is welcome!
cc @rust-lang/compiler | C-enhancement,T-compiler,A-incr-comp,C-optimization | medium | Critical |
288,231,272 | svelte | Whole-app optimisation | I keep bringing this up as a thing-we-should-do but it's probably time we had an issue for it with specific ideas about *what* it means and *how* to get there.
I'll kick things off with a few ideas of things we could do:
### Static properties
```html
<!-- App.html -->
<Greeting name='world'/>
<!-- Greeting.html -->
<h1>Hello {{name}}!</h1>
```
Right now, this involves creating three separate text nodes inside the `<h1>` (which we *could* collapse into one — Scott Bedard had some good ideas in Gitter), and adding update code that waits for `state.name` to change. We could replace all that with
```js
h1.textContent = 'Hello world!';
```
### Static computed properties
As a corollary to the above, if you know the values of the inputs to a computed property, and know that the computed property function is pure, you can precompute the value.
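Following the pattern above (my illustration, not actual compiler output), a pure computed property whose inputs are all statically known could be baked in the same way:
```js
// Component declares: computed: { greeting: name => 'Hello ' + name + '!' }
// and is instantiated as <Greeting name='world'/>. Instead of emitting the
// derivation plus its update code, the compiler could just emit:
h1.textContent = 'Hello world!';
```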
### Collapsing entire components
A 'component' is really two things — the main fragment, and the interface. In a lot of cases, such as the `<Greeting>` component above, we don't actually *need* the interface — we can statically determine that there are no lifecycle hooks or events, and no way that the user could get a reference to the component.
### Optimising styles
Component-level unused style removal is cool, but if we had all your styles we could start to do [Styletron](http://styletron.js.org/)-style optimisations.
---
Will add to this list as other things occur to me; please suggest others!
| feature request,popular,compiler | medium | Major |
288,315,071 | vscode | Git - Detect git repositories under ignored paths | This is a feature request.
I work in several projects, mostly Docker-related, where I need to edit code inside git repositories that are ignored inside the main one.
Example file tree:
```
main-repo/
.git/
.gitignore # <-- Here we ignore the `external_sources` folder
src/
[files]
external_sources/
extra1/
.git/
[files]
extra2/
.git/
[files]
```
The reasoning behind this is an aggregated Docker project that includes sources from many unrelated places. Developing means mounting local code, changing it, and pushing code to all repos (the main one and the ones under `external_sources`, where we mainly need to open PRs for that), but deploying to production means building a different Docker image where we download and merge external code instead of copying it from localhost.
Boilerplate aside, the feature request is to be able to find those subrepositories, even though they are ignored by the main one, and let the user use the full SCM interface (diffs, SCM section...) on them.
Right now, the diffs do not show because you are editing ignored code, and the SCM does not show these folders for the same reason.
I guess that now that multi-root workspaces are in, this shouldn't be so hard to achieve...
Thanks! | bug,help wanted,git | medium | Critical |
288,324,579 | godot | Allow specifying C# class name instead of source file path as node's script | **Godot version:**
`master` / 30d7943311cdf4efbc6794a52df0e298f5e6d975
**OS/device including version:**
_Manjaro Linux 17.1-rc2_
**Issue description:**
Currently, C# is treated in the same manner as GDScript, in that it is assumed the code would be reused in its source code form. However, it is not ideal for C# which is a compiled language and often used to create binary assemblies that can be reused in other programs.
With Godot, it is not possible to attach a C# class to a node directly (by specifying its fully qualified class name); it must be done by providing a path to the source file where the class is declared.
This is problematic, since C# allows declaring a class that has a different name from its source file, or even declaring multiple classes in a single source file.
More importantly, it makes creating a library or a framework that can be used in Godot very difficult, since it is impossible to reuse such a module in its binary form, as is generally done by a normal C# project (by using NuGet, etc.).
This limitation also prevents one from writing editor plugins, as described in #15237.
Ideally, users should be able to reference a C# type by its fully qualified name (including the namespace) and attach it to a node, as long as the assembly that contains the class is referenced by the main C# project which is associated with the current Godot project. | enhancement,discussion,topic:dotnet | high | Critical |
288,326,892 | go | cmd/go: default GOBIN to GOPATH[0]/bin when outside of GOPATH? | ### What version of Go are you using (`go version`)?
1.9.2
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
Linux, amd64
### What did you do?
```
$ docker run -it --rm --name test golang:1 bash
$ go get github.com/nats-io/go-nats-streaming
$ go install /go/src/github.com/nats-io/go-nats-streaming/examples/stan-pub.go
```
### What did you expect to see?
`/go/bin/stan-pub` binary installed, consistent with the requirements when `go install <package>` is used: only GOPATH is set, and that should be enough; GOBIN should not be required.
### What did you see instead?
Error message:
`go install: no install location for .go files listed on command line (GOBIN not set)`
| NeedsFix,GoCommand | medium | Critical |
288,334,589 | youtube-dl | support for http://new-play.tudou.com | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.07**
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://new-play.tudou.com/v/385712786.html?spm=a2hzp.8253869.0.0
- Playlist: http://id.tudou.com/i/UMTcwNTIxNjA2OA==/videos?order=2
---
### Description of your *issue*, suggested solution and other information
Tudou uses a new URL for their videos, "new-play.tudou.com". Could support be added for this? | request | low | Critical |
288,341,812 | vscode | Can we get a few more pixels for clicking the cursor at the beginning of a line? | Hello!
I'm routinely frustrated when trying to click on the beginning of a line of text to place the cursor or start a selection - I aim for it but it is very narrow (first half of the first character) and I miss and either hit the code-folding vertical or I get text-column two instead. Can that be made a little wider please?

Thanks!
*Addition by @hediet*: See #127025 for how the cursor is misleading. | bug,editor-rendering | low | Major |
288,348,823 | godot | Building the summator module from docs as shared module results in error |
**Godot version:**
Master Branch from today (1/13/2018)
**OS/device including version:**
Win 10 | MSVC 2017 x64
**Issue description:**
So after succeeding in building the summator module as a static library (by following the docs) I tried to do the same thing but make the library shared instead.
This appears to work at first but it generates a library with a name that is repeated twice **'summator.windows.tools.64.windows.tools.64.lib'** instead of **'summator.windows.tools.64.lib'.**
I've had this error before and thought it was just me, or that it would be fixed soon, but it is still present.
This is the full error:
```
scons platform=windows summator_shared=yes
scons: Reading SConscript files ...
Detected MSVC compiler: amd64
Compiled program architecture will be a 64 bit executable (forcing bits=64).
Checking for C header file mntent.h... (cached) no
scons: done reading SConscript files.
scons: Building targets ...
[ 97%] Linking Program ==> bin\godot.windows.tools.64.exe
LINK : fatal error LNK1181: cannot open input file 'summator.windows.tools.64.windows.tools.64.lib'
scons: *** [bin\godot.windows.tools.64.exe] Error 1181
scons: building terminated because of errors.
```
**Steps to reproduce:**
Following the docs http://docs.godotengine.org/en/latest/development/cpp/custom_modules_in_cpp.html#creating-a-new-module using an MSVC 2017 compiler will reproduce this.
**Minimal reproduction project:**
| bug,platform:windows,topic:buildsystem,documentation | low | Critical |
288,357,737 | create-react-app | Look into simplifying post-ejected projects | Webpack has been landing some improvements and adding sane defaults—perhaps we can revisit some of the configuration and try to remove it? Alternatively, we could try hiding more code into `react-dev-utils`. | tag: enhancement | low | Minor |
288,376,719 | pytorch | [Feature proposal] Add MC-derived optimizers | Recent papers have proposed SGD variants based on stochastic gradient MCMC algorithms that can compete with SOTA optimizers like Adam. Would anyone be interested in an implementation of Santa ([Chen et al. 2016](http://people.ee.duke.edu/~lcarin/Santa_aistats16.pdf)) and relativistic stochastic gradient descent ([Lu, Perrone et al. 2017](http://proceedings.mlr.press/v54/lu17b/lu17b.pdf))?
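If there is interest, a new optimizer would presumably follow the usual `torch.optim.Optimizer` skeleton; this is a sketch only, NOT an implementation of Santa or relativistic SGD:
```python
from torch.optim.optimizer import Optimizer


class MCMCSketchOptimizer(Optimizer):
    """Skeleton only: shows where an MCMC-derived update rule would plug in."""

    def __init__(self, params, lr=1e-3):
        defaults = dict(lr=lr)
        super(MCMCSketchOptimizer, self).__init__(params, defaults)

    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                # A real Santa/RSGD step would add preconditioning, momentum
                # and injected noise here; this placeholder is plain SGD.
                p.data -= group['lr'] * p.grad.data
        return loss
```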
cc @vincentqb | feature,module: optimizer,triaged,needs research | low | Minor |
288,407,363 | youtube-dl | [raywenderlich] Site support request |
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.01.14*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.14**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
---
youtube-dl -v -u testname -p testpass -o "C:/MyVideos/test/" https://videos.raywenderlich.com/courses/105-testing-in-ios/lessons/22
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'-o', u'C:/MyVideos/test/', u'https://videos.r
aywenderlich.com/courses/105-testing-in-ios/lessons/22']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2018.01.14
[debug] Python version 2.7.13 (CPython) - Windows-2012ServerR2-6.3.9600
[debug] exe versions: ffmpeg N-89794-gc51301db14
[debug] Proxy map: {}
[generic] 22: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 22: Downloading webpage
[generic] 22: Extracting information
ERROR: Unsupported URL: https://videos.raywenderlich.com/courses/105-testing-in-ios/lessons/22
Traceback (most recent call last):
File "c:\python27\lib\site-packages\youtube_dl\extractor\generic.py", line 2176, in _real_extract
doc = compat_etree_fromstring(webpage.encode('utf-8'))
File "c:\python27\lib\site-packages\youtube_dl\compat.py", line 2541, in compat_etree_fromstring
doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
File "c:\python27\lib\site-packages\youtube_dl\compat.py", line 2530, in _XML
parser.feed(text)
File "c:\python27\lib\xml\etree\ElementTree.py", line 1653, in feed
self._raiseerror(v)
File "c:\python27\lib\xml\etree\ElementTree.py", line 1517, in _raiseerror
raise err
ParseError: mismatched tag: line 74, column 4
Traceback (most recent call last):
File "c:\python27\lib\site-packages\youtube_dl\YoutubeDL.py", line 784, in extract_info
ie_result = ie.extract(url)
File "c:\python27\lib\site-packages\youtube_dl\extractor\common.py", line 438, in extract
ie_result = self._real_extract(url)
File "c:\python27\lib\site-packages\youtube_dl\extractor\generic.py", line 3086, in _real_extract
raise UnsupportedError(url)
UnsupportedError: Unsupported URL: https://videos.raywenderlich.com/courses/105-testing-in-ios/lessons/22
| site-support-request,account-needed | low | Critical |
288,450,846 | create-react-app | Follow up about Jest issues | Before cutting 2.0 final I'd like to have a good understanding of these:
- [ ] https://github.com/facebook/jest/issues/3251#issuecomment-337032314 (potential regression in Jest 21+)
- [ ] https://github.com/facebook/jest/issues/5311 (just a weird warning) | tag: underlying tools | low | Minor |
288,461,284 | create-react-app | Automatically install dependencies on yarn start? | Developers less familiar with modern JavaScript likely have trouble understanding why an error message like this comes up after running `yarn start`:

Would `create-react-app` benefit from adding a check on `yarn start` that ensures the latest dependencies are already installed? | issue: proposal | low | Critical |
288,531,409 | rust | Implement method suggestions for associated functions | This doesn't suggest `bar` but IMO should ([see it live](https://play.rust-lang.org/?gist=107b9a29435251c232c81653d97290c3&version=stable)):
```rust
struct A { }
impl A {
fn foo() -> Self {
/*Self::*/bar()
}
fn bar() -> Self { A { } }
}
fn main() {
let v = A::foo();
}
```
produces
```shell
error[E0425]: cannot find function `bar` in this scope
--> src/main.rs:5:17
|
5 | /*Self::*/bar()
| ^^^ not found in this scope
error: aborting due to previous error
error: Could not compile `playground`.
```
I'd like a `help`/`note` saying "Maybe you meant to call `Self::bar()`?" in this case, but if, for example, `bar` has a typo, I'd like to see more suggestions, like here:
```rust
impl A {
fn foo() -> Self {
/*Self::*/bar()
}
fn bar0() -> Self { A { } }
fn bar1() -> u32 { 0 } // mismatching-types
fn bar2() -> Self { A { } }
}
```
It should suggest whether I meant `Self::bar0` or `Self::bar2` (`Self::bar1` produces a different type). If I remove `bar0` and `bar2`, I'd expect to see a suggestion for `Self::bar1` though. | C-enhancement,A-diagnostics,A-associated-items,T-compiler,A-suggestion-diagnostics,D-papercut | low | Critical |
288,535,226 | rust | rustc should suggest using `move` for closures where a variable is `Copy` and does not live long enough | I was trying to get [this code snippet](https://play.rust-lang.org/?gist=34a9759c0547866af2d02a57658f5a63&version=stable) to work earlier:
```rust
[1, 2, 4, 7].iter()
.flat_map(|num| {
(0 .. 5).map(|x| (x, num))
})
.for_each(|(x, num)| println!("{} {}", x, num));
```
Where `rustc` gives the following error:
```
error[E0597]: `num` does not live long enough
--> src/main.rs:4:34
|
4 | (0 .. 5).map(|x| (x, num))
| --- ^^^ does not live long enough
| |
| capture occurs here
5 | })
| - borrowed value only lives until here
6 | .for_each(|(x, num)| println!("{} {}", x, num));
| - borrowed value needs to live until here
```
I found out that the solution was to add a `move` to the inner closure, but as I did not know much about the keyword this was not especially obvious.
```rust
(0 .. 5).map(move |x| (x, num))
```
I think it would be very helpful if a note could be added to situations like this to provide an easy solution:
```
help: try adding `move` to the closure like this: `(0 .. 5).map(move |x| (x, num))`
help: This means that the value is moved into the closure instead of being referenced.
``` | C-enhancement,A-diagnostics,T-compiler | low | Critical |
288,590,954 | rust | macro_rules: "no syntax variables matched as repeating at this depth" fires before "unknown macro variable" | Consider the following code:
```rust
macro_rules! a {
(begin $ard: ident end) => {
[$arg]
}
}
macro_rules! b {
(begin $($ard: ident),* end) => {
[$($arg),*]
}
}
fn main() {
let (m, n) = (1, 2);
let x = a![begin m end];
let y = b![begin n end];
}
```
This produces the following pair of error messages:
```
error: unknown macro variable `arg`
--> src/main.rs:3:10
|
3 | [$arg]
| ^^^^
...
15 | let x = a![begin m end];
| --------------- in this macro invocation
error: attempted to repeat an expression containing no syntax variables matched as repeating at this depth
--> src/main.rs:9:11
|
9 | [$($arg),*]
| ^^^^^^
error: Could not compile `playground`.
```
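(For reference, here is what was presumably intended — with the metavariable consistently named `$arg`, both macros compile, which suggests both errors above really stem from the `$ard`/`$arg` mismatch rather than from repetition depth:)
```rust
macro_rules! a {
    (begin $arg:ident end) => {
        [$arg]
    }
}

macro_rules! b {
    (begin $($arg:ident),* end) => {
        [$($arg),*]
    }
}

fn main() {
    let (m, n) = (1, 2);
    let x = a![begin m end];
    let y = b![begin n end];
    assert_eq!(x, [1]);
    assert_eq!(y, [2]);
}
```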
While the second error message is *correct* in principle, it is also misleading. When I get a message like that, my focus is on counting how deeply nested the variable is. I usually don't consider "wait is the syntax variable *misspelled*?" when I see that error message.
I think we could and should first check if a macro variable occurs at *any* depth (and report the first error if not) before we report anything about whether a match is found at the *current* depth (and report the second error if not). | C-enhancement,A-diagnostics,A-macros,T-compiler | low | Critical |
288,611,985 | node | ABI compatibility tool | ABI stability is a hard requirement when upgrading V8 within the same Node branch. This is so that native modules that uses V8 APIs and built for a particular Node branch do not need to be rebuilt.
So far, we've mostly ensured ABI stability through intensive eyeballing of changes to `deps/v8/include/*.h` and by running [CITGM](https://github.com/nodejs/citgm). This is tedious and could yield false positives.
I have played around a bit with this [ABI compliance checker](https://lvc.github.io/abi-compliance-checker/), requiring some [tweaking](https://gist.github.com/hashseed/074be0c9c2889813ad92f95bdef266c7) of V8's build files and an additional GN arg (`use_debug_fission = false`).
Using that tool is fairly inconvenient, *very* slow (took me like some 20 minutes), and the result is not all that useful. It is a [good start](https://i.imgur.com/kWvfCRI.png), but
- does not find differences in the [constants in v8.h](https://github.com/v8/v8/blob/74a2a8f6113bf64c918345c218699e1fd1d85477/include/v8.h#L8877).
- includes changes to all header files in v8, not just ones in `v8/include`.
I'm neither an expert on DWARF nor on binary compatibility, so some contribution and help here would be greatly appreciated. Alternative here is to wait until every native module has migrated to n-api, but in the meantime... | help wanted,v8 engine,tools | low | Critical |
288,673,242 | rust | Inconsistent inlining of Iterator Adaptors - Missed Optimizations | While profiling some Rust code of mine, I noticed that the following pattern does not optimize well:
```rust
vec![1,2,3,4]
.into_iter()
.map(|v| ...)
.skip_while(|v| ...)
```
`skip_while` is implemented using `find` and `find` is implemented using `try_fold`. The functions `SkipWhile::next()` and `Iterator::find()` use the `#[inline]` annotation. The function `Map::try_fold()` does not. This means that `Map::try_fold()` will not be inlined.
I started looking at the source code, and inlining of iterators seems to follow no clear rule. I could not find any bug reports related to this.
* [`Filter::try_fold` is inline](https://github.com/rust-lang/rust/blob/8ff449d505728276e822ca9a80c1e7b2da8288a2/src/libcore/iter/mod.rs#L1396)
* [`Enumerate::try_fold` is inline](https://github.com/rust-lang/rust/blob/8ff449d505728276e822ca9a80c1e7b2da8288a2/src/libcore/iter/mod.rs#L1633)
* [`Rev::try_fold` is not inline](https://github.com/rust-lang/rust/blob/8ff449d505728276e822ca9a80c1e7b2da8288a2/src/libcore/iter/mod.rs#L427)
Some iterators like [`Cloned`](https://github.com/rust-lang/rust/blob/8ff449d505728276e822ca9a80c1e7b2da8288a2/src/libcore/iter/mod.rs#L515) do not have any function marked as inline. Not even `next()` is marked as inline.
The [PR introducing `try_fold`](https://github.com/rust-lang/rust/pull/45595) does not give justification why some `try_fold`s are inline and some are not.
The `len` and `is_empty` methods of `ExactSizeIterator` implementations are also not marked as inlinable, even though they are always implemented as pass-throughs to the underlying iterator.
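(For illustration only — not the actual libcore code — this is the kind of pass-through method I mean; an `#[inline]` hint lets such thin wrappers collapse into the inner iterator's method at the call site:)
```rust
struct Wrapper<I>(I);

impl<I: Iterator> Iterator for Wrapper<I> {
    type Item = I::Item;

    #[inline]
    fn next(&mut self) -> Option<I::Item> {
        // Pure pass-through to the inner iterator.
        self.0.next()
    }
}

impl<I: ExactSizeIterator> ExactSizeIterator for Wrapper<I> {
    #[inline]
    fn len(&self) -> usize {
        // Pure pass-through to the inner iterator.
        self.0.len()
    }
}
```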
If desired I can prepare a pull request to mark those functions as inlineable. Is there a list of functions for the iterator traits (e.g., Iterator, ExactSizeIterator) which should be inline/not be inline? | I-slow,C-enhancement,T-libs-api,A-iterators | low | Critical |
288,694,808 | material-ui | Dropdown component | - [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
Stemming from discussion in [this bug](https://github.com/mui-org/material-ui/issues/8202), it would be great to have a "Dropdown" component. I see it as a sort of non-modal Popover--it closes in response to clicks outside, but doesn't swallow those clicks, so that you don't have to click twice to activate another interactive element on the page (once to dismiss the open Popover, then again to activate the other element).
My specific use case is that I have a pair of icons next to each other in an AppBar, that each open a small dropdown menu. I would like users to be able to switch between menus (or go to an input/whatever else on the page) without having to click twice.
For reference, I am converting a project of mine from reactstrap to Material UI, and previously I was using reactstrap's [Dropdown](https://reactstrap.github.io/components/dropdowns/) component, which behaves how I would like this to behave. @oliviertassinari linked me to https://material.io/guidelines/components/buttons.html#buttons-dropdown-buttons as the Material UI spec that seems to specify this case. | new feature,component: menu | high | Critical |
288,700,714 | godot | Specular aliasing becomes more visible when depth of field or glow is enabled |
**Godot version:**
Godot 3.0 rc1 official
**OS/device including version:**
Ubuntu 16.04 64 bit
GeForce GTX 750 Ti, NVidia driver 384.111
**Issue description:**
Graphical artifacts are visible (for some viewing angles and zoom levels) when `Dof Far Blur` or `Dof Near Blur` in the `environment` is enabled. These artifacts appear in the areas of "high" reflections (see images).
**Steps to reproduce:**
Open the 3D material_testers demo, enable `Dof Far Blur` and `Dof Near Blur`, then try to zoom in/out and rotate the camera. Do the same in the running scene (artifacts are visible both in the editor and in the running scene).
Artifacts:

Blur off:

Blur on:

**Minimal reproduction project:**
| enhancement,topic:rendering,confirmed,topic:3d | low | Critical |
288,709,776 | neovim | Timer callback is not called in inputlist() |
- `nvim --version`: 0.2.3-dev
- Vim (version: 8.0.0107) behaves differently? Yes
- Operating system/version: macOS High Sierra
- Terminal name/version: zsh
- `$TERM`: tmux-256color
### Steps to reproduce using `nvim -u NORC`
```vim
" test.vim
call timer_start(0, { -> execute('echomsg "Timer has called"', '') })
echomsg 'Before input()'
call input('Promot: ') | redraw
echomsg 'After input()'
call timer_start(0, { -> execute('echomsg "Timer has called"', '') })
echomsg 'Before inputlist()'
call inputlist(['foo', 'bar']) | redraw
echomsg 'After inputlist()'
```
```
nvim -u NORC
:source %
" Hit <CR> twice MANUALLY
:messages
```
### Actual behaviour
```
Before input()
Timer has called
After input()
Before inputlist()
After inputlist()
Timer has called <--- This is wrong
```
### Expected behaviour
```
Before input()
Timer has called
After input()
Before inputlist()
Timer has called <--- Should be here
After inputlist()
```
### Problem
Usually I wrap `input()` or `inputlist()` with a function to call `inputsave()` and `inputrestore()` safely like
```vim
function! s:input() abort
call inputsave()
try
return input()
finally
call inputrestore()
endtry
endfunction
```
In this case, calling `feedkeys()` just before `s:input()` does not work as expected, so I need to wrap it with `timer_start` to make sure that `feedkeys()` is called during `input()` or `inputlist()`, like
```vim
call timer_start(0, { -> feedkeys('Hello World' . "\<CR>", 't') })
let result = s:input('Prompt: ')
if result !=# 'Hello World'
throw 'FAILED!'
endif
```
But Neovim's `inputlist()` does not call the timer callback, so I cannot use the above hack for `inputlist()`.
| compatibility,complexity:low,event-loop | low | Critical |
288,767,358 | rust | Weird behavior with closure lifetime inference | In the code below, I expect everything to compile, including the commented out code. However, the commented out code does not actually compile. In particular, I believe that adding `let foo = foo` should not affect whether the code compiles, and changing a temporary into a variable should not affect whether the code compiles. (I ran into this problem, or a variant of it, when troubleshooting a problem on the #rust-beginners IRC channel.)
[Playground link](https://play.rust-lang.org/?gist=1768605487cd56aedffc6f7a91e8b58e&version=stable)
```rust
struct Foo;
fn go(x: &mut Foo) -> &mut Foo {x}
fn fix_closure<'a, F: FnOnce() -> &'a mut Foo>(x: F) -> F { x }
fn call<'a, F: FnOnce() -> &'a mut Foo>(x: F) -> &'a mut Foo { x() }
fn id<T>(x: T) -> T { x }
fn good1() {
let foo = &mut Foo;
let f = || {let foo = foo; go(foo)};
f();
}
/*
fn bad1() {
let foo = &mut Foo;
let f = || go(foo);
f();
}
*/
fn good2() {
let foo = &mut Foo;
let f = move || {let foo = foo; go(foo)};
f();
}
/*
fn bad2() {
let foo = &mut Foo;
let f = move || go(foo);
f();
}
*/
fn good3() {
let foo = &mut Foo;
call(|| go(foo));
}
/*
fn bad3() {
let foo = &mut Foo;
let f = || go(foo);
call(f);
}
fn bad4() {
let foo = &mut Foo;
(|| go(foo))();
}
*/
fn good4() {
let foo = &mut Foo;
let f = fix_closure(|| go(foo));
f();
}
/*
fn bad5() {
let foo = &mut Foo;
let f = id(|| go(foo));
f();
}
*/
fn main() {}
``` | A-lifetimes,A-closures,T-compiler,A-inference,C-bug | low | Minor |
288,876,923 | rust | missed optimization: enum move of the active variant | This code performs a memcpy of 2064 bytes when 32 bytes would suffice. [See it live @godbolt](https://godbolt.org/g/4A7jmF):
```rust
#![feature(test)]
#![feature(rustc_private)]
extern crate smallvec;
extern crate test;
use smallvec::SmallVec;
#[inline(never)]
fn clobber<T>(x: T) { test::black_box(x); }
pub fn bar() {
// Capacity for 256 `f64`s on the stack.
// The stack size of `SmallVec` is 256 * 8 + 8 (len) + 8 (discriminant) = 2064 bytes
let mut v = SmallVec::<[f64; 256]>::new();
let size = ::std::mem::size_of::<SmallVec<[f64; 256]>>();
clobber(size);
for _ in 0..300 {
v.push(3.14);
}
// The vector reallocates to the heap, the size of the active variant
// is 8 (len) + 8 (capacity) + 8 (ptr) + 8 (discriminant) = 32 bytes
clobber(v); // memcpy's 2064 bytes instead of 32bytes...
}
```
generates this assembly for the second call to `clobber`:
```asm
mov edx, 2064
mov rdi, rbx
call memcpy@PLT
mov rdi, rbx
call example::clobber
```
The layout of `SmallVec` is:
```rust
pub struct SmallVec<A: Array> {
len: usize,
data: SmallVecData<A>,
}
enum SmallVecData<A: Array> {
Inline { array: A },
Heap { ptr: *mut A::Item, capacity: usize },
}
```
> Note: If `NonZero` would be stable, `SmallVecData` could use it here to remove the 8 bytes of the discriminant.
So I guess that the problem is that moving an enum just naively generates a memcpy of the whole enum. I think this is a good default for small enums. When enums get large, they might contain many small variants, and maybe some large ones.
I don't know if there is an optimal way to solve this problem, since adding branches to only `memcpy` some variants might incur a performance cost (unless LLVM knows which variant is active and can optimize the branches away).
We should probably at least always `memcpy` small enums up to some size (16bytes? 256 bytes?) and for larger enums have some heuristic like:
- are all variants approximately equally sized? Then just `memcpy` the whole enum
- are some variants small and some variants large? How many "size classes" are there? Generate as few branches as possible here (ideally two, one for small and one for large variants, but might require more).
| I-slow,C-enhancement,T-compiler,C-optimization | low | Major |
288,886,255 | godot | There seem to be no way to access Blend Shapes values via code | **Godot version:**
3.0 Beta 2
**Issue description:**
Looks like it's impossible to change values of a Blend Shape in a mesh with a script, or at least it's not documented, even though you can change them in the editor. | topic:core,documentation | low | Major |
288,946,508 | rust | Undefined reference to `_Unwind_Resume` | I tried to compile a no_std project with Rust, but setting the `eh_unwind_resume` lang_item does not seem to work properly. Building in release with cargo or -O with rustc works properly, but a debug build fails with an `undefined reference to '_Unwind_Resume'`.
[Here](https://gist.github.com/jefftime/d7e96461138b5aa6895550b63caac66c) is the code I tried to get working.
rustc --version --verbose:
```
rustc 1.25.0-nightly (e6072a7b3 2018-01-13)
binary: rustc
commit-hash: e6072a7b3835f1875e81c9fd27799f9b20a0770c
commit-date: 2018-01-13
host: x86_64-unknown-linux-gnu
release: 1.25.0-nightly
LLVM version: 4.0
``` | A-linkage,A-LLVM,T-compiler,C-bug | medium | Critical |
288,978,876 | godot | RigidBody gravity affected if CollisionShape attached is rotated/translated when using Bullet physics | **Godot version:**
v3.0-rc1_x11.64
**OS/device including version:**
Debian 9 64-bit
**Issue description:**
Rotating/translating a CollisionShape node attached to a RigidBody affects physics (gravity) on the rigid body when the Bullet engine is used. GodotPhysics is unaffected. Basically, the rigid body either does not fall down or starts to fly/float.
**Steps to reproduce:**
Run project attached. Switch between Bullet and GodotPhysics engine.
**Minimal reproduction project:**
[GodotCollisionShapeBug.zip](https://github.com/godotengine/godot/files/1636021/GodotCollisionShapeBug.zip)
| bug,confirmed,topic:physics | low | Critical |
288,985,153 | vscode | Feature Request: Contribute commands with additional arguments | Hello, I'm creating an extension to provide easily accessible documentation shortcuts from the command palette in the editor. Currently it does not seem possible to do something like this (where `otherArg` would be an arbitrary argument in package.json):
```
"contributes": {
  "commands": [
    {
      "command": "extension.doSomething",
      "title": "Example Title",
      "otherArg": "some other argument"
    }
  ]
},
```
I see that a similar question has been asked here https://github.com/Microsoft/vscode/issues/26436 and also in a resulting (unanswered) SO post here https://stackoverflow.com/questions/43909741/can-i-pass-arguments-to-command-in-contributes-block.
Is this a feature that would be considered? If not, how might this type of thing be accomplished from the extension developer's perspective? I see there is an `args` parameter for command handler functions, but I'm not seeing a way to call `executeCommand` from the package.json file.
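In case it helps the discussion, here is the workaround I'd expect to be necessary today — a sketch with made-up command IDs: register one thin command per `package.json` entry and bake the extra argument into the handler, since the manifest itself can't carry it. From extension code (as opposed to the manifest), `executeCommand` does accept extra arguments.
```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    // Shared handler that takes the "extra" argument.
    const openDocs = (page: string) =>
        vscode.window.showInformationMessage(`Opening docs for ${page}`);

    // One thin wrapper command per contributed palette entry.
    context.subscriptions.push(
        vscode.commands.registerCommand('extension.openArrayDocs', () => openDocs('Array')),
        vscode.commands.registerCommand('extension.openStringDocs', () => openDocs('String'))
    );

    // From code, arguments can be passed directly to a command:
    vscode.commands.executeCommand('extension.openArrayDocs');
}
```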
For reference, here is a screenshot of how my equivalent package looks in the command palette in sublime text:

Thanks!
| feature-request,api,menus | medium | Major |
289,029,735 | angular | Forms: State that setErrors() will make status === INVALID regardless of value passed for key |
## I'm submitting a...
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[x] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
In the `setErrors` example for [AbstractControl](https://angular.io/api/forms/AbstractControl#setErrors) it shows passing `true` for the key. **The issue is that the control's status will be set to INVALID no matter the value of the key (or even when just passing `setErrors` an empty object)** [(see the _calculateStatus code)](https://github.com/angular/angular/blob/5.2.0/packages/forms/src/model.ts#L561-L635).
However, as the example is given, it's reasonable to infer that you clear the errors by `setErrors( {"notUnique": false})`, when really you'd need to do `setErrors(null)` or, as the example *does* show, set the control's value to a new value.
````
const login = new FormControl("someLogin");
login.setErrors({
"notUnique": true
});
expect(login.valid).toEqual(false);
expect(login.errors).toEqual({"notUnique": true});
login.setValue("someOtherLogin");
expect(login.valid).toEqual(true);
````
## Expected behavior
Update the documentation to explicitly say how to clear errors in code (this is useful to know when doing integration (template) testing, where you just want to set an error, check the template, then clear the error and check that any alerts have gone away).
## Minimal reproduction of the problem with instructions
**Failing test**
````
const login = new FormControl("someLogin");
login.setErrors({
"notUnique": true
});
expect(login.valid).toEqual(false);
expect(login.errors).toEqual({"notUnique": true});
login.setErrors({"notUnique": false});
expect(login.valid).toEqual(true); //will still be INVALID since the check: if(this.errors) will return TRUE
````
**Suggested example**
````
const login = new FormControl("someLogin");
login.setErrors({
"notUnique": true
});
expect(login.valid).toEqual(false);
expect(login.errors).toEqual({"notUnique": true});
login.setValue("someOtherLogin"); //or: login.setErrors(null); <===add this comment
expect(login.valid).toEqual(true);
````
Alternatively (though this would be a code change, not just a documentation change), in `_calculateStatus()` you would only set the status to INVALID if at least one key in the `errors` object is set to `true`.
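A hypothetical sketch of that check (not the actual Angular source):
```ts
// Only treat the errors object as meaning INVALID when at least one value is truthy.
function hasRealErrors(errors: { [key: string]: any } | null): boolean {
  return errors != null && Object.keys(errors).some(key => !!errors[key]);
}
```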
## What is the motivation / use case for changing the behavior?
When I am doing integration (template) testing to validate that alert/error messages show up when a control is invalid, and I choose to use the `setErrors()` function to create an INVALID status, I want to know how to then create a VALID status (either by setting a valid value or by passing null to the `control.setErrors()` function).
## Environment
<pre><code>
Angular version: 4.4.1
</code></pre>
| feature,state: Needs Design,breaking changes,area: forms,feature: under consideration | medium | Critical |
289,074,499 | go | runtime/pprof: labels are not added to profiles | ### What version of Go are you using (`go version`)?
I have tested on:
go version go1.9.2 darwin/amd64
go version go1.9.2 linux/amd64
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
darwin: GOARCH=amd64 GOOS=darwin
linux: GOARCH=amd64 GOOS=linux
### What did you do?
pprof labels that I add using pprof.Do() do not appear in the goroutine profile.
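The playground program is not reproduced here, but the labeling part looks roughly like this sketch (`pprof.Do` attaches the label set, and goroutines started inside the callback should inherit it):
```go
package main

import (
	"context"
	"net/http"
	_ "net/http/pprof"
	"runtime/pprof"
	"time"
)

func main() {
	// Label everything (including goroutines spawned inside) with a-label=a-value.
	pprof.Do(context.Background(), pprof.Labels("a-label", "a-value"), func(ctx context.Context) {
		go func() {
			for {
				time.Sleep(time.Second)
			}
		}()
	})

	// Serve /debug/pprof on localhost:5555 as in the steps below.
	http.ListenAndServe("localhost:5555", nil)
}
```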
Steps:
- Compile and start: https://play.golang.org/p/SgYgnDqaVKB
- Start "go tool pprof localhost:5555/debug/pprof/goroutine"
- Run the "tags" command
- See no tags, but I expect to see a tag for the label a-label=a-value
I also downloaded the file "localhost:5555/debug/pprof/goroutine"", gunzipped that file, and did not see either the label key nor value in the protobuf file.
When I run "go tool pprof localhost:5555/debug/pprof/goroutine" twice and in the second run run "tags", I see
```
(pprof) tags
bytes: Total 3
2 (66.67%): 325.31kB
1 (33.33%): 902.59kB
```
This shows that labels can work. (I expect no output on the first run, since it is reasonable for no heap memory to have been allocated.)
### What did you expect to see?
I expect to see the tags command output the label key-value pair in the program.
### What did you see instead?
The tags command reports an empty value:
```
(pprof) tags
(pprof)
``` | help wanted,NeedsFix,FeatureRequest,compiler/runtime | medium | Critical |
289,077,268 | rust | Assert well-formedness of spans | `rustc` currently has peppered throughout the codebase some checks for "backwards spans", spans whose end is earlier than their start. I believe that the current approach of actively checking for this and not failing is very reasonable in beta and stable `rustc`, but in nightly builds I would like to have an assertion so that we expose these incorrectly formatted spans and the underlying cause can be fixed.
This might take the form of anything from a simple assert and ICE (not my preference, but it would certainly bring attention to the problem) to generating a diagnostic and emitting it, allowing the compiler to continue working but making the problem visible to anyone using the nightly compiler, with text prompting a report in this issue tracker. If we go down the latter route, it probably should be a warning, so that anyone using the nightly compiler in production isn't stopped from using it due to a `rustc` bug.
 | C-enhancement,A-diagnostics,T-compiler | low | Critical |
289,105,206 | rust | Should unit tests spawn new processes instead of threads? | Imagine a (contrived) piece of code like this:
```rust
#[macro_use]
extern crate lazy_static;

use std::sync::Mutex;

lazy_static! {
    static ref Q: Mutex<Vec<u32>> = Mutex::new(Vec::new());
}

#[test]
fn test_a() {
    {
        Q.lock().unwrap().push(5);
    }
    assert_eq!(Q.lock().unwrap().len(), 1);
}

#[test]
fn test_b() {
    {
        Q.lock().unwrap().clear();
    }
    assert_eq!(Q.lock().unwrap().len(), 0);
    {
        let mut q = Q.lock().unwrap();
        q.push(5);
        q.push(6);
    }
    assert_eq!(Q.lock().unwrap().len(), 2);
}
```
This is broken by default because `cargo test` spawns multiple threads for the tests, so they race on the global state of `Q`. The user would have to know to pass `--test-threads=1`, or the test author would have to create a global mutex to synchronize each test on. This seems like a lot of unnecessary boilerplate, not to mention an unnecessary pain point for new developers to discover when they can't figure out why tests are producing weird results.
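For illustration, the global-mutex workaround mentioned above looks roughly like this (reusing the `lazy_static` setup from the snippet; note that a panicking test will also poison the mutex, which is yet more boilerplate to handle):
```rust
lazy_static! {
    // Every test touching shared state has to remember to take this lock first.
    static ref TEST_GUARD: Mutex<()> = Mutex::new(());
}

#[test]
fn test_c() {
    let _guard = TEST_GUARD.lock().unwrap();
    // ... test body that touches Q ...
}
```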
There also doesn't seem to be a solution at the moment that would enable you to achieve the benefits of running tests in parallel at all for shared state between tests. The global mutex approach effectively reduces the test code to implicitly running with --test-threads=1. Using separate processes by default (or at least providing a decorator to do it easily) would solve the vast majority of such problems without requiring authors to significantly restructure their code.
FWIW, [other](https://github.com/google/googletest/blob/master/googletest/docs/FAQ.md#why-dont-google-test-run-the-tests-in-different-threads-to-speed-things-up) [test](https://stackoverflow.com/questions/12304927/is-there-a-way-to-run-c-unit-tests-tests-in-parallel) frameworks go with the process sharding approach instead of threads. | T-libs-api,C-feature-request,A-libtest,A-process | medium | Critical |
289,118,999 | flutter | Add an option to "flutter test/drive/analyze" to make output more machine readable | Let's make flutter analysis and testing output easier to parse for automation and CI systems.
Adding an option to the flutter test/drive/analyze commands to produce parsable output like JSON would be ideal, e.g. 'flutter test --json'. | a: tests,c: new feature,tool,t: flutter driver,customer: alibaba,c: proposal,P3,team-tool,triaged-tool | medium | Major |
289,174,580 | pytorch | Very slow on CPU | My model was really slow with PyTorch, so I tried one of your examples and saw it is slow as well ([seq2seq](http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html)).
It takes a long time...:
27m 55s (- 390m 53s) (5000 6%) 2.8939
56m 31s (- 367m 27s) (10000 13%) 2.3238
85m 20s (- 341m 23s) (15000 20%) 2.0054
117m 23s (- 322m 50s) (20000 26%) 1.7787
146m 48s (- 293m 36s) (25000 33%) 1.5661
I’m running only on CPU
I’m working on Ubuntu 16.04.3.
64 cores.
Python 3.6.3 :: Anaconda custom (64-bit).
I tried using MKL (installed via conda), but it didn't help.
Any ideas?
cc @VitalyFedyunin @ngimel @zou3519 | module: performance,module: rnn,module: cpu,triaged | medium | Major |
289,208,824 | opencv | Semantic conflicts in cv::InputArray for std::array, float[] and std::vector |
##### System information (version)
- OpenCV => master
##### Detailed description
While converting
- `std::array<float, N>`
- `std::vector<float>`
- `float[]`
to `cv::InputArray`,
`std::array` is treated as a column vector, see [here][1]
```.cpp
template<typename _Tp, std::size_t _Nm> inline
_InputArray::_InputArray(const std::array<_Tp, _Nm>& arr)
{ init(FIXED_TYPE + FIXED_SIZE + STD_ARRAY + traits::Type<_Tp>::value + ACCESS_READ, arr.data(), Size(1, _Nm)); }
```
`Size(1,_Nm)`: 1 column, _Nm rows,
while `float[]` and `std::vector<float>` are treated as row vectors,
see [here][2]
```.cpp
template<typename _Tp> inline
_InputArray::_InputArray(const _Tp* vec, int n)
{ init(FIXED_TYPE + FIXED_SIZE + MATX + traits::Type<_Tp>::value + ACCESS_READ, vec, Size(n, 1)); }
```
`Size(n,1)`: n columns, 1 row,
and [here][3]
```.cpp
return szb == szi ? Size((int)szb, 1) : Size((int)(szb/CV_ELEM_SIZE(flags)), 1);
```
[3]: https://github.com/opencv/opencv/blob/master/modules/core/src/matrix.cpp#L1620
[2]: https://github.com/opencv/opencv/blob/master/modules/core/include/opencv2/core/mat.inl.hpp#L119
[1]: https://github.com/opencv/opencv/blob/master/modules/core/include/opencv2/core/mat.inl.hpp#L90
Furthermore, when a `std::vector<float>` is used to
initialize a `cv::Mat`, it is considered as a column vector,
see [here][4]
```.cpp
template<typename _Tp> inline
Mat::Mat(const std::vector<_Tp>& vec, bool copyData)
: flags(MAGIC_VAL | traits::Type<_Tp>::value | CV_MAT_CONT_FLAG), dims(2), rows((int)vec.size()),
cols(1), data(0), datastart(0), dataend(0), datalimit(0), allocator(0), u(0), size(&rows), step(0)
```
which conflicts with `cv::InputArray::getMat`, since it returns a row vector
when initialized with `std::vector<float>`.
[4]: https://github.com/opencv/opencv/blob/master/modules/core/include/opencv2/core/mat.inl.hpp#L562
 | category: core,RFC,future | low | Critical |
289,233,820 | opencv | Conflicts of cv::InputArray::size() and cv::InputArray::getSz() for std::array<cv::Mat, N> |
##### System information (version)
- OpenCV => master
##### Detailed description
For `std::array<cv::Mat, N>`,
- `cv::InputArray::getSz()` returns a column vector representation, see [here][1]
```.cpp
template<std::size_t _Nm> inline
_InputArray::_InputArray(const std::array<Mat, _Nm>& arr)
{ init(STD_ARRAY_MAT + ACCESS_READ, arr.data(), Size(1, _Nm)); }
#endif
```
- `cv::InputArray::size()` returns a row vector representation, see [here][2]
```.cpp
return sz.height==0 ? Size() : Size(sz.height, 1);
```
[2]: https://github.com/opencv/opencv/blob/master/modules/core/src/matrix.cpp#L1659
[1]: https://github.com/opencv/opencv/blob/master/modules/core/include/opencv2/core/mat.inl.hpp#L94
 | bug,category: core,RFC | low | Critical |
289,240,072 | opencv | Request to enable CV_Assert for cv::Mat::size() |
##### System information (version)
- OpenCV => master
##### Detailed description
The current version of `cv::Mat::size()`
supports only 2-D matrices, but the
assertion is enabled only in the debug mode, see
[here][1]
```.cpp
inline
Size MatSize::operator()() const
{
CV_DbgAssert(p[-1] <= 2);
return Size(p[1], p[0]);
}
```
It is error-prone to use `cv::Mat::size()` and `cv::InputArray::size()` for
n-D matrices in release mode.
Changing `CV_DbgAssert(p[-1] <= 2);` to
`CV_Assert(p[-1] <= 2);` would eliminate this concern.
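A short illustration of the hazard (a sketch; assumes a release build where `CV_DbgAssert` is compiled out):
```.cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    int dims[] = {2, 3, 4};
    cv::Mat m(3, dims, CV_8U, cv::Scalar(0));

    // For this 3-D matrix, size() silently builds a 2-D Size from the first two
    // entries of the n-D size array instead of failing loudly.
    cv::Size s = m.size();
    std::cout << s.width << " x " << s.height << std::endl;

    return 0;
}
```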
[1]: https://github.com/opencv/opencv/blob/a55aed5f42b89fd0544de1eaeb95b76c1c9d3734/modules/core/include/opencv2/core/mat.inl.hpp#L1407
 | category: core,RFC | low | Critical |
289,274,210 | go | go/types/gotype: build tags not supported | ### What version of Go are you using (`go version`)?
`go version go1.9.2 linux/amd64`
### Does this issue reproduce with the latest release?
Already using latest release, so yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOOS="linux"
### What did you do?
Created a program where one of two build tags must be provided to build, then tried to lint it with `gotype`.
`main.go`
```go
package main
import (
"./sub"
"fmt"
)
func main() {
fmt.Printf("Hello, %s!\n", sub.Get())
}
```
`sub/sub_a.go`
```go
//+build a
package sub
func Get() string {
return "world"
}
```
`sub/sub_b.go`
```go
//+build b
package sub
func Get() string {
return "everyone"
}
```
`sub/sub.go`
```go
package sub
// some shared code
```
```
$ ls
main.go sub
$ gotype .
main.go:9:29: Get not declared by package sub
$ go run -tags a main.go
Hello, world!
$ go run -tags b main.go
Hello, everyone!
```
### What did you expect to see?
`gotype` would have a `-tags` argument to specify build tags, e.g.
```
$ gotype -tags a .
```
### What did you see instead?
`gotype` has no way to specify build tags for linting
| NeedsFix | low | Major |
289,323,728 | create-react-app | Set 'currentScript' as default 'PUBLIC_URL' | Like I suggested in [#3708 (comment)](https://github.com/facebookincubator/create-react-app/issues/3708#issuecomment-358298289), `currentScript` could be a possible default for `PUBLIC_URL`.
Right now **all** asset paths are relative (ie `/static/media/logo.2e151009.png`) to the root of the app.
When setting the [`PUBLIC_URL` environment parameter](https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#adding-assets-outside-of-the-module-system) or the [`homepage` in `package.json`](https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#building-for-relative-paths), the asset paths become absolute (ie `https://app.com/static/media/logo.2e151009.png`).
Setting these 2 variables is mostly used when you have your app running in a subdirectory on your server.
My proposal is to default the `PUBLIC_URL` to the root of the app, so there will be less need of setting the `PUBLIC_URL` or `homepage` and CRA would need less configuration **by default**. If it's still necessary to set `PUBLIC_URL` or `homepage`, that would still be possible of course. 🙂
The implementation for this could be to check the file-path of the `main.*.js` script (ref: [StackOverflow](https://stackoverflow.com/q/2255689)), which could be accessed by calling `document.currentScript` (ref: [MDN](https://developer.mozilla.org/en-US/docs/Web/API/Document/currentScript)). Since this isn't possible in IE, we could use [this answer](https://stackoverflow.com/a/2255727/4001231) as a fallback.
```js
function resolvePublicPath() {
const currentScript = document.currentScript || getCurrentScriptViaFallback();
const publicPath = currentScript.src
.split('/static/js')[0] // app root directory
.split('main.')[0]; // add support for script in app root directory
return `${publicPath}/`;
}
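
// (Sketch, not part of the proposal above: one possible IE implementation of the
// fallback referenced earlier, based on the linked StackOverflow answer — during
// the synchronous parse, the currently executing script is the last one in the list.)
function getCurrentScriptViaFallback() {
  const scripts = document.getElementsByTagName('script');
  return scripts[scripts.length - 1];
}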
``` | issue: proposal | low | Minor |
289,385,274 | go | x/tools/go/buildutil: split TagsFlagDoc into multiple lines? | ### What version of Go are you using (`go version`)?
go version devel +7e054553ad Tue Jan 16 15:11:05 2018 +0000 linux/amd64
### Does this issue reproduce with the latest release?
Yes.
### What did you do?
Add a `-tags` flag to a static analysis tool of mine:
```
func init() {
flag.Var((*buildutil.TagsFlag)(&build.Default.BuildTags), "tags", buildutil.TagsFlagDoc)
}
```
Then run `mytool -h`.
### What did you expect to see?
The long flag usage line to be split in multiple lines somehow.
```
Usage of mytool:
-tags build tags
a list of build tags to consider satisfied during the build.
For more information about build tags, see the description of
build constraints in the documentation for the go/build package.
```
### What did you see instead?
```
Usage of mytool:
-tags build tags
a list of build tags to consider satisfied during the build. For more information about build tags, see the description of build constraints in the documentation for the go/build package
```
I initially thought that changing the string constant would be enough, for example by adding `"\n\t"` twice to split the one log line into three lines.
However, the `flag` package is not consistent about placing the usage line, so I am not sure if that is enough:
```
// Boolean flags of one ASCII letter are so common we
// treat them specially, putting their usage on the same line.
if len(s) <= 4 { // space, space, '-', 'x'.
s += "\t"
} else {
// Four spaces before the tab triggers good alignment
// for both 4- and 8-space tab stops.
s += "\n \t"
}
```
So perhaps this should be a proposal to add special handling of long flag usage texts in the `flag` package, to wrap them around a certain around of columns. But I imagine that such a feature wouldn't be welcome, as the package is meant to be simple, and one has multiple workarounds available like using shorter texts, using a custom `Usage` func, or using a third-party flag package.
Still, it seems to me like this is a problem with either `buildutil` or `flag`.
/cc @alandonovan @shurcooL | NeedsDecision,Tools | low | Major |
289,422,373 | rust | Panic related strings are still in binary with custom panic_fmt | It seems that a custom `panic_fmt` that doesn't touch `Arguments` does not help to strip all panic-related strings from the final binary when compiling to wasm (either emscripten or wasm32-unknown-unknown).
```rust
#![feature(lang_items)]
#![no_std]
#![no_main]
extern "C" {
fn halt();
}
#[no_mangle]
#[lang = "panic_fmt"]
pub extern "C" fn panic_fmt(
_args: ::core::fmt::Arguments,
_file: &'static str,
_line: u32,
_col: u32,
) -> ! {
loop {
unsafe { halt() }
}
}
#[lang = "eh_personality"] extern fn eh_personality() {}
#[no_mangle]
pub fn call(descriptor: u8) -> u8 {
assert!(descriptor > 0);
descriptor
}
```
Invocation:
`rustc --target=wasm32-unknown-emscripten --emit llvm-ir -C lto -C opt-level=3 src/main.rs`
```
$ rustc --version
rustc 1.24.0-nightly (1956d5535 2017-12-03)
```
this produces the following [LLVM IR](https://gist.github.com/pepyakin/ccb14c80d91baf8ec2e09c94b253a30a).
The problem is that `panic_fmt` is not using its `_args`, so it is a dead argument. However, despite this, strings for panic messages still [end up in the LLVM IR](https://gist.github.com/pepyakin/ccb14c80d91baf8ec2e09c94b253a30a#file-main-ll-L9).
It seems that running `opt -deadargelim -globaldce` helps to strip these strings.
| A-LLVM,C-enhancement,T-compiler,WG-embedded,I-heavy | low | Major |
289,452,162 | angular | ICU messages don't work on attributes |
## I'm submitting a...
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[x] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
When I try to use ICU syntax in a translated attribute (like `placeholder`) I get an error:
`Template parse errors:
Unexpected translation for attribute "placeholder" (id="...`
## Expected behavior
Either **a)** nothing fails and everything works, or **b)** it fails more gracefully, telling me this is not supported, with a note about it in the [documentation](https://angular.io/guide/i18n#nesting-plural-and-select-icu-expressions).
## Minimal reproduction of the problem with instructions
https://stackblitz.com/edit/angular-xafhzk?file=app/autocomplete-overview-example.html
https://angular-xafhzk.stackblitz.io
```html
<input placeholder="{foo, select, bar{a} baz{b} other{c}}" i18n-placeholder>
```
## What is the motivation / use case for changing the behavior?
I'm forced to do this in different ways, like `goog.getMsg`. I'd like to avoid that and have a consistent way of doing things.
## Environment
<pre><code>
Angular version: 5.2.1
</code></pre>
| feature,area: i18n,hotlist: error messages,freq2: medium,P4,feature: under consideration,area: docs | medium | Critical |
289,464,698 | go | cmd/cgo: don't use syscall.Errno type as errno return on Windows | Apologies in advance If I'm explaining anything poorly.
#### What did you do?
I made an Go binding library for a static C lib, then wrote a go application to use that. During the course of developing that application, I managed to trigger the following error:
`The process cannot access the file because another process has locked a portion of the file.`
I have a stripped down testable chunk of code (complete with a more in depth readme) here: https://github.com/technicalviking/cgotest
I'd like to stress that I'm not looking to debug the code itself. I know the outputs are not what I expected (casting the C **double variable back to [][]float64 in 'extractOutputs' shows a significant number of NaN entries, but meh); rather, I want to understand why the functionality puts the code in a state where a Windows error populates the error return value of the Go reference to the C function in this case.
**EDIT TO INVESTIGATION**
On further digging I found that calling `sqrt` on a negative number, even in a function defined in the preamble, is sufficient to trigger this behavior. The test code referenced in this issue has been updated accordingly.
#### What did you expect to see?
No errors
#### What did you see instead?
calling the c function using the mechanism
`doResponse, doError = C.bridgeIndicatorFunction(...)`
to leverage a C function pointer results in `doError` containing the following value: "The process cannot access the file because another process has locked a portion of the file."
#### System details
```
go version go1.9.2 windows/amd64
GOARCH="amd64"
GOBIN=""
GOEXE=".exe"
GOHOSTARCH="amd64"
GOHOSTOS="windows"
GOOS="windows"
GOPATH="C:\Users\dmurker\Documents\Dev\DM\Go Projects"
GORACE=""
GOROOT="C:\Program Files Dev\Go"
GOTOOLDIR="C:\Program Files Dev\Go\pkg\tool\windows_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\D-DANI~1\AppData\Local\Temp\1\go-build223821631=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOROOT/bin/go version: go version go1.9.2 windows/amd64
GOROOT/bin/go tool compile -V: compile version go1.9.2
gdb --version: GNU gdb (GDB) 7.10.1
```
| help wanted,OS-Windows,NeedsFix | low | Critical |
289,465,814 | pytorch | Compilation issue: problem with GPU capability check | cudnn 7.0.3, cuda 8.0, gcc 5.4, ubuntu 16.04
The machine has three GPUs: two Pascal ones, and one simple old one.
When I don't set CUDA_VISIBLE_DEVICES prior to compilation, automatic GPU capability discovery picks the wrong GPU and the build fails with `__shfl_xor` undefined.
The pytorch repo README doesn't mention the need to set CUDA_VISIBLE_DEVICES to a target GPU before compilation.
```
.../pytorch/aten/src/THC/generated/../THCDeviceUtils.cuh(61): error: identifier "__shfl_xor" is undefined
detected during:
instantiation of "T WARP_SHFL_XOR(T, int, int, unsigned int) [with T=float]"
.../pytorch/aten/src/THC/generated/../THCTensorMathReduce.cuh(445): here
instantiation of "void THCTensor_kernel_varInnermostDim<Real,Accreal,flag,apply_sqrt>(Real *, Real *, unsigned int, unsigned int) [with Real=half, Accreal=float, flag=true, apply_sqrt=true]"
.../pytorch/aten/src/THC/generated/../THCTensorMathReduce.cuh(505): here
instantiation of "void THCTensor_varInnermostDim<TensorTypeK,Real,Accreal,apply_sqrt>(THCState *, TensorTypeK *, TensorTypeK *, int) [with TensorTypeK=THCudaHalfTensor, Real=half, Accreal=float, apply_sqrt=true]"
...pytorch/aten/src/THC/generated/../generic/THCTensorMathReduce.cu(89): here
...pytorch/aten/src/THC/generated/../THCDeviceUtils.cuh(91): error: identifier "__shfl_down" is undefined
detected during:
instantiation of "T WARP_SHFL_DOWN(T, unsigned int, int, unsigned int) [with T=float]"
...pytorch/aten/src/THC/generated/../THCTensorMathReduce.cuh(471): here
instantiation of "void THCTensor_kernel_varInnermostDim<Real,Accreal,flag,apply_sqrt>(Real *, Real *, unsigned int, unsigned int) [with Real=half, Accreal=float, flag=true, apply_sqrt=true]"
...pytorch/aten/src/THC/generated/../THCTensorMathReduce.cuh(505): here
instantiation of "void THCTensor_varInnermostDim<TensorTypeK,Real,Accreal,apply_sqrt>(THCState *, TensorTypeK *, TensorTypeK *, int) [with TensorTypeK=THCudaHalfTensor, Real=half, Accreal=float, apply_sqrt=true]"
...pytorch/aten/src/THC/generated/../generic/THCTensorMathReduce.cu(89): here
2 errors detected in the compilation of "/tmp/tmpxft_00007616_00000000-9_THCTensorMathReduceHalf.compute_20.cpp1.ii".
CMake Error at ATen_generated_THCTensorMathReduceHalf.cu.o.cmake:267 (message):
Error generating file
.../pytorch/torch/lib/build/aten/src/ATen/CMakeFiles/ATen.dir/__/THC/generated/./ATen_generated_THCTensorMathReduceHalf.cu.o
```
cc @malfet @ngimel | module: build,module: cuda,triaged | low | Critical |
289,501,862 | godot | Unable to interact with titlebar or window chrome of editor | **Godot version:**
ff59c56 mono x64
**OS/device including version:**
Windows 10 x64
**Issue description:**
The titlebar and window chrome (including the close/minimize/maximize buttons and the edges of the window used to resize the window) are sometimes impossible to interact with. When this situation happens, the window cannot be moved via the titlebar, the titlebar buttons do not show hover effects and cannot be clicked, and the window cannot be resized.
Whenever Godot is exhibiting this behavior, opening up a different window and then going back to the editor seems to fix it temporarily. On the other hand, clicking anywhere in the editor causes this to happen once again.
**Steps to reproduce:**
The issue is intermittent, and I'm not actually sure what causes this. Once it starts happening, I can reproduce the effect 100% of the time (see above) as long as the editor isn't closed. Upon a fresh start of the editor, the issue disappears and I'm not sure what causes it to happen again. | bug,platform:windows,topic:editor,topic:porting,confirmed | medium | Major |
289,520,289 | TypeScript | 'utilities.ts' is an API surface hazard | `utilities.ts` consists of 4 different namespaces:
* one which seems to be entirely internal
* one which is entirely exposed
* one which is entirely exposed *and* is entirely type predicate functions
* one which is a random assortment of internal and exposed functions that have no consistent usage
Just from my intuition and experience, I don't think this is a good idea. It's just too easy to put a new helper in some random section of the file and forget whether or not it's being exposed.
As a naive first approximation, here's what I think would be more appropriate for each respective namespace:
* `utilities.ts`
* `publicUtilities.ts`
* `publicPredicates.ts`
* I don't know, TBD. | Infrastructure | low | Minor |
289,530,865 | flutter | The 'Pods-Runner' target has transitive dependencies | ## Steps to Reproduce
Install my [appcenter plugin](https://pub.dartlang.org/packages/appcenter) and add initialisation code in your app :
```dart
@override
initState() {
super.initState();
initPlatformState();
}
Future initPlatformState() async {
await AppCenter.start("<_app_secret>");
}
```
The code works well with the local example generated by the plugin project template, but when installing it from the Dart package it doesn't work: I get the following error.
```
The 'Pods-Runner' target has transitive dependencies that include static binaries: ...
```
I first noticed that the Podfile in the local example project doesn't have the `use_frameworks!` command, so I removed it. But that fails too, and the library doesn't seem to be present.
## Logs
Before removing `use_frameworks!` from the Podfile:
```
- Running pre install hooks
[!] The 'Pods-Runner' target has transitive dependencies that include static binaries: (/Users/alois/flutter_orange/src/ios/Pods/AppCenter/AppCenter-SDK-Apple/iOS/AppCenterAnalytics.framework, /Users/alois/flutter_orange/src/ios/Pods/AppCenter/AppCenter-SDK-Apple/iOS/AppCenter.framework, and /Users/alois/flutter_orange/src/ios/Pods/AppCenter/AppCenter-SDK-Apple/iOS/AppCenterCrashes.framework)
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer/xcode/target_validator.rb:81:in `block (2 levels) in verify_no_static_framework_transitive_dependencies'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer/xcode/target_validator.rb:73:in `each'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer/xcode/target_validator.rb:73:in `block in verify_no_static_framework_transitive_dependencies'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer/xcode/target_validator.rb:70:in `each'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer/xcode/target_validator.rb:70:in `verify_no_static_framework_transitive_dependencies'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer/xcode/target_validator.rb:36:in `validate!'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer.rb:405:in `validate_targets'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/installer.rb:118:in `install!'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/command/install.rb:41:in `run'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/claide-1.0.2/lib/claide/command.rb:334:in `run'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/lib/cocoapods/command.rb:52:in `run'
/usr/local/Cellar/cocoapods/1.3.1/libexec/gems/cocoapods-1.3.1/bin/pod:55:in `<top (required)>'
/usr/local/Cellar/cocoapods/1.3.1/libexec/bin/pod:22:in `load'
/usr/local/Cellar/cocoapods/1.3.1/libexec/bin/pod:22:in `<main>'
Error running pod install
```
After removing `use_frameworks!` from the Podfile:
```
0 CoreFoundation 0x000000010c27812b __exceptionPreprocess + 171
1 libobjc.A.dylib 0x000000010b90cf41 objc_exception_throw + 48
2 CoreFoundation 0x000000010c2f9024 -[NSObject(NSObject) doesNotRecognizeSelector:] + 132
3 CoreFoundation 0x000000010c1faf78 ___forwarding___ + 1432
4 CoreFoundation 0x000000010c1fa958 _CF_forwarding_prep_0 + 120
5 Runner 0x000000010744d972 -[MSAppCenter configure:] + 208
6 Runner 0x000000010744dc84 -[MSAppCenter start:withServices:] + 108
7 Runner 0x000000010744d1e7 +[MSAppCenter start:withServices:] + 95
8 Runner 0x000000010744a0df -[AppcenterPlugin<…>
```
| platform-ios,tool,platform-mac,t: xcode,P3,team-ios,triaged-ios | medium | Critical |
289,589,935 | opencv | Missing documentation for choosing inliers in LMeDS estimation |
##### System information (version)
- OpenCV => master
##### Detailed description
Both [cv::estimateAffine2D][1] and [cv::estimateAffinePartial2D][2]
support refining the estimated model using only inliers.
However, the documentation does not specify how the inliers are selected when
the model is estimated using the LMeDS algorithm.
The same problem applies to [cv::findHomography][4].
The relevant code is [here][3]
```.cpp
double sigma = 2.5*1.4826*(1 + 5./(count - modelPoints))*std::sqrt(minMedian);
```
It is not documented at all why the threshold `sigma` is computed as above.
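For what it's worth, this looks like the standard LMedS robust scale estimate from Rousseeuw & Leroy, with `n = count` points, `p = modelPoints` parameters and residuals `r_i`:

```latex
\hat{\sigma} = 1.4826 \left(1 + \frac{5}{n - p}\right) \sqrt{\operatorname{med}_i \, r_i^2}
```

The 1.4826 factor makes the median-based estimate consistent with the Gaussian standard deviation, the (1 + 5/(n - p)) term is Rousseeuw's finite-sample correction, and 2.5 is the customary inlier cutoff, so the `sigma` in the code is really the threshold 2.5 * sigma-hat used to select inliers. Stating this (with a reference) in the documentation would answer the question above.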
[4]: https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga4abc2ece9fab9398f2e560d53c8c9780
[3]: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/ptsetreg.cpp#L360
[2]: https://docs.opencv.org/master/d9/d0c/group__calib3d.html#gad767faff73e9cbd8b9d92b955b50062d
[1]: https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga27865b1d26bac9ce91efaee83e94d4dd
 | bug,category: documentation,category: calib3d | low | Critical |
289,593,002 | opencv | Missing estimation method, maxIteration and refinement option in cv::estimateAffine3D |
##### System information (version)
- OpenCV => master
##### Detailed description
Both [cv::estimateAffine3D][1] and [cv::estimateAffine2D][2] **SHOULD**
have similar signatures; however:
- `cv::estimateAffine2D` returns the model as the return value
- `cv::estimateAffine3D` returns the model through an output parameter
- `cv::estimateAffine2D` supports RANSAC and LMeDS
- `cv::estimateAffine3D` supports only RANSAC
- `cv::estimateAffine2D` supports specifying the maximum number of iterations
- `cv::estimateAffine3D` does not support specifying the maximum number of iterations
- `cv::estimateAffine2D` supports refining the estimated model
- `cv::estimateAffine3D` does not support refinement
[2]: https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga27865b1d26bac9ce91efaee83e94d4dd
[1]: https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga396afb6411b30770e56ab69548724715
 | category: documentation,category: calib3d,RFC,future | low | Critical |
289,594,197 | opencv | Fixed outlier ratio in LMeDS estimation |
##### System information (version)
- OpenCV => master
##### Detailed description
[class LMeDSPointSetRegistrator][1] uses
a fixed outlier ratio to determine the maximum
number of iterations required, see the code [here][2]
```.cpp
const double outlierRatio = 0.45;
```
Why is `0.45` chosen? Should it be passed as a parameter?
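For context, as far as I can tell the outlier ratio is only used to derive the number of random samples needed to hit at least one all-inlier sample with the requested confidence, i.e. the usual RANSAC/LMedS formula

```latex
k = \frac{\log(1 - \text{confidence})}{\log\left(1 - (1 - \varepsilon)^{m}\right)}
```

with ε the assumed outlier ratio (hard-coded to 0.45 here) and `m = modelPoints`. So the constant silently fixes the iteration budget for a worst case of 45% outliers; exposing it (or the resulting iteration count) as a parameter would let callers trade speed against robustness.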
[2]: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/ptsetreg.cpp#L279
[1]: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/ptsetreg.cpp#L270
 | feature,category: calib3d,future | low | Critical |
289,604,777 | create-react-app | Option not to include hash to file names | First of all, thanks for this tool! It was really helpful for me to get started. I read all the available how-tos and materials and really understand the purpose of this tool, BUT... if this app is agnostic of the backend and just produces static HTML/JS/CSS bundles, there should be an option to leave filenames as-is, since some ECM/CMS and other host systems have their own caching/versioning mechanisms. Having different file names after each build means having to manually deal with the files published previously. | issue: proposal | medium | Critical |
289,658,870 | go | x/perf/cmd/benchstat: tips or quickstart for newcomers | `benchstat` is a very useful tool, but if you're not familiar with what it does, it can be very confusing to use for the first time.
One such example is "how many times should my benchmarks run". If one has used `benchcmp` before, running each benchmark before and after exactly once, trying to use `benchstat` will result in something confusing like:
```
name old time/op new time/op delta
Decode-4 2.20s ± 0% 1.54s ± 0% ~ (p=1.000 n=1+1)
```
The answer here is that the user should be running the benchmark more times - at least 3 or 4 to get p-values low enough for a result.
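For some intuition on why a single run per side can never be significant: benchstat compares the two samples with a Mann-Whitney U test, and with `n` old and `m` new measurements the smallest two-sided p-value the exact test can report (assuming no ties) is

```latex
p_{\min} = \frac{2}{\binom{n+m}{n}}
```

so `n=m=1` gives p = 1, which is exactly the `p=1.000 n=1+1` shown above; `n=m=3` can at best reach 0.1; and `n=m=4` is the first count that can get under the default 0.05 threshold.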
However, neither `benchstat -h` nor the godoc page is very clear on this, nor do they have a "quickstart" guide. The godoc page does show an example input with many runs, and does talk about "a number of runs" and p-values, but it's not very clear if you're not familiar with statistics and benchmarking.
I believe that a quick guide would greatly improve the usability of the tool - for example:
```
$ go test -bench=. -count 5 >old.txt
$ <apply source changes>
$ go test -bench=. -count 5 >new.txt
$ benchstat old.txt new.txt
```
I think it should also introduce other best practices, such as:
* Using higher `-count` values if the benchmark numbers aren't stable
* Using `-benchmem` to also get stats on allocated objects and space
* Running the benchmarks on an idle machine not running on battery (and with power management off?)
* Adding `-run='$^'` or `-run=-` to each `go test` command to avoid running the tests too
I realise that some of these tips are more about benchmarking than `benchstat` itself. But I think it's fine to have it all there, as in general you're going to be using that tool anyway.
/cc @rsc @ALTree @aclements @AlekSi | Documentation,NeedsInvestigation | high | Critical |
289,660,948 | rust | Type inference does not take bounds from `impl` into account | In the example below I get the following error.
```
error[E0284]: type annotations required: cannot resolve `<_ as Stream>::Item == char`
--> src/main.rs:60:49
|
60 | let _: (char, &str) = satisfy(|c| c != 'a').map(|c| c).parse("").unwrap();
| ^^^
|
= note: required because of the requirements on the impl of `Parser<_>` for `Satisfy<[closure@src/main.rs:60:35: 60:47]>`
```
Perhaps this is intended, but I couldn't find anything stating that; feel free to close if that is the case.
```rust
#[derive(Clone)]
pub struct Satisfy<P> {
predicate: P,
}
trait Stream {
type Item;
}
impl<I, P> Parser<I> for Satisfy<P>
where
I: Stream,
P: FnMut(I::Item) -> bool,
{
type Output = I::Item;
}
pub fn satisfy<P, C>(predicate: P) -> Satisfy<P>
where
P: FnMut(C) -> bool,
{
Satisfy { predicate }
}
pub struct Map<P, F>(P, F);
impl<I, B, P, F> Parser<I> for Map<P, F>
where
I: Stream,
P: Parser<I>,
F: FnMut(P::Output) -> B,
{
type Output = B;
}
impl<'a> Stream for &'a str {
type Item = char;
}
pub trait Parser<Input>
where
Input: Stream,
{
type Output;
fn parse(&mut self, input: Input) -> Result<(Self::Output, Input), ()> {
unimplemented!()
}
fn map<F, B>(self, f: F) -> Map<Self, F>
where
F: FnMut(Self::Output) -> B,
Self: Sized,
{
Map(self, f)
}
}
fn main() {
let _: (char, &str) = satisfy(|c| c != 'a').map(|c| c).parse("").unwrap();
}
``` | C-enhancement,T-lang,T-compiler,A-inference | low | Critical |
289,690,841 | create-react-app | Document Babel macros support and common uses | Need to keep track:
* Relay https://github.com/facebook/relay/pull/2171
* Apollo https://github.com/facebookincubator/create-react-app/issues/3856#issuecomment-358696916
* Css-in-js like Emotion
* Maybe some i18n libs?
* SVG https://github.com/facebookincubator/create-react-app/issues/3856#issuecomment-358696916
* Maybe make one for idx? https://github.com/dralletje/idx.macro | contributions: up for grabs!,tag: documentation,difficulty: medium | medium | Major |
289,783,003 | pytorch | Consider disallowing Variables that require grad in NCCL/comm functions | The NCCL bindings and torch.cuda.comm functions now operate on Tensors or Variables, but they're not differentiable functions. We may want to raise an exception if the arguments require grad and `torch.is_grad_enabled()` is `True`.
See #4730
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @SsnL | module: autograd,triaged,module: nccl,actionable | low | Minor |
289,788,353 | go | runtime: Windows service timeout during system startup | ### What version of Go are you using (`go version`)?
go version go1.9.2 windows/amd64
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
```
set GOARCH=amd64
set GOBIN=
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\Users\XXXXX\d\go
set GORACE=
set GOROOT=C:\Go
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=C:\Users\XXXXX\AppData\Local\Temp\go-build933228923=/tmp/go-build -gno-record-gcc-switches
set CXX=g++
set CGO_ENABLED=1
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
```
### What did you do?
Configured our Go language application to start as a Windows Service, with start type of 'Automatic'. The service is expected to start when the system boots up. Sometimes the service process will not be running after the system has booted up and the 'Windows Logs / 'System' logs in Event Viewer show two logs:
```
Event ID: 7009
Level: Error
Text:
A timeout was reached (30000 milliseconds) while waiting for the <service name> service to connect.
```
```
Event ID: 7000
Level: Error
Text:
The <service name> service failed to start due to the following error:
The service did not respond to the start or control request in a timely fashion.
```
These events do not appear in the logs when the service successfully starts.
It appears that Windows terminates the service process when the service fails to respond to the Windows Service start event within the 30 s time frame. Our application contains the Windows Service code based on the sample code from `sys/windows/svc/example`.
This timeout issue occurs both with 64-bit and 32-bit versions of our executable.
### What did you expect to see?
The Go language Windows service start reliably every time the system boots up.
### What did you see instead?
The Go language service does not start reliably on some systems on boot-up.
Some observations from systems where the service often fails to start during boot-up:
1. Manually starting the service once the system is up and running has never failed to start correctly. Timestamps from the logs show the process starting quickly (no more than a few seconds).
2. Setting the service's start type to 'Automatic (Delayed Start)', where Windows starts the service 120 s after the 'Automatic' services, works without fail. The user is able to login to the desktop well before the service has been started. It is possible to change this global Windows setting to a lower value (eg 60 s) to make it less noticeable to the user, but because it will affect **all** delayed start services, it is not an acceptable solution. Some search results claim a per-service delay value can be specified, but we were not able to make this work.
3. Changing the 30000 ms (30 s) Windows service timeout value to larger values (60 s or 90 s) allowed the service to startup. However, this is a global Windows setting and it is not possible to set the timeout per-service. Again, this is not an acceptable solution.
4. We experimented with using the Windows Service "Recovery" settings to restart our service in an attempt to mitigate this problem. However, it appears that these recovery settings have no effect for services that fail to start in time.
5. We instrumented our service with timestamped logs when the first imported package is initialized and when `main()` starts. We found that `main()` was often not reached when the timeout occurred. Sometimes even the first imported package's initialization was not reached.
6. A stripped-down Windows service that just logged would often fail to start, but it was less likely to happen compared to our full-fledged service.
7. Using a simple C++ Windows Service ([code here](https://code.msdn.microsoft.com/windowsapps/CppWindowsService-cacf4948)) modified to launch our executable (running as a 'normal' application, not as a Windows Service) always launched without fail (no timeouts) during boot-up. Logs within our executable, as well as in the C++ service (just prior to the `CreateProcess()` call that started our executable) often showed large time deltas. The logs indicated it often took a long time for the Go executable to get launched, and also to reach the `main()`.
While we have some systems (VirtualBox VMs) that reliably show these startup problems, the issue is harder to reproduce on other systems. We have VirtualBox VMs where the problem occurs only occasionally and we have yet to see the timeout occur on non-virtual Windows 10 installations. Timestamp-instrumented loads on those systems sometimes show time deltas of up to 10 s during startup. While this is not long enough to trigger the 30 s timeout on a powerful real system, the concern is that in virtualized environments and/or slower systems the timeout will be reached and the service will not start.
### Sample Programs
Sample code can be found in [this GitHub repository](https://github.com/Openera/winserv), including instructions for building and installing (README.md).
Both the `launchserv.exe` and `winserv.exe` programs contain the Go Windows Service code (again, based on the sample code). The Go Windows Service sample code was combined into a single file in both cases in order to 'flatten' the main package for tracking imported package initialization.
It is intended that only one of the two should run as a Windows Service. The recommended configuration is to install `launchserv.exe` as a Windows Service ("my service"), and it will in turn launch `winserv.exe` as a child process (`winserv.exe` will run as a 'normal' program, not a service). `launchserv.exe` writes a timestamped log just prior to starting `winserv.exe`, which allows measuring the time it takes for `winserv.exe` to start.
`winserv.exe` contains a large number of standard package imports, to mimic a real program. These imports are marked as 'unused' and as such only their initializations run (variable initialization and `init()`).
The VirtualBox version of the Win10 MSEdge test VM can be downloaded from Microsoft [here](https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/) to run these test programs. The following changes may help to increase the likelihood of seeing the issue with this, or other systems:
* Increase the number of CPU cores from 1 (to say 2 or 4). This might increase the concurrent activity when the system is starting up.
* Install the Guest Additions.
* Install other software that starts services when the system boots. For example, one VM that shows the issue very reliably has two VNC server packages, OpenVPN client and an SSL VPN client (Array Networks) (as well as other software) installed.
Multiple boot-ups of the system may be required to see the problem. In many cases, a VM may sometimes demonstrate the problem, but then not do so at other times. YMMV.
When delays are seen, the two most common places are:
1. Launching `winserv.exe`; that is, between the last `launchserv.exe` log and the first `winserv.exe` log. Time deltas as high as 26 s have been seen (and as low as less than 400 ms) have been seen on an Win10/MSEdge VM.
2. The 'lots of imports' in `winserv.exe` `main.go`, between the 'b' and 'c' logs. These are the large number of unused standard package imports. Times as high as 9 s (and less than 100 ms on the low end) have been seen on the same Win10/MSEdge VM.
This service is an important component of our product; we are adding more functionality all the time. As the amount of code increases, it seems even more likely that this problem will occur. If there is anything else we can do to help resolve this issue (experiments, measurements, debugging, etc), please let us know. | help wanted,OS-Windows,NeedsInvestigation,compiler/runtime | high | Critical |
289,790,396 | kubernetes | `kubectl apply` (client-side) removes all entries when attempting to remove a single duplicated entry in a persisted object | **Is this a BUG REPORT or FEATURE REQUEST?**:
/kind bug
Lists in API objects can define a named property that should act as a "merge key". The value of that property is expected to be unique for each item in the list. However, gaps in API validation allow some types to be persisted with multiple items in the list sharing the same value for a mergeKey property.
The algorithm used by `kubectl apply` detects removals from a list based on the specified key, and communicates that removal to the server using a delete directive, specifying only the key. When duplicate items exist, that deletion directive is ambiguous, and the server implementation deletes all items with that key.
Known API types/fields which define a mergeKey but allow duplicate items to be persisted:
PodSpec (affects all workload objects containing a pod template):
* `hostAliases` (#91670)
* `imagePullSecrets` (https://github.com/kubernetes/kubernetes/issues/91629)
* `containers[*].env` (this issue, https://github.com/kubernetes/kubernetes/issues/86163, https://github.com/kubernetes/kubernetes/issues/93266, https://github.com/kubernetes/kubernetes/issues/106809, https://github.com/kubernetes/kubernetes/issues/121541, https://github.com/kubernetes/kubernetes/issues/122121)
* `containers[*].ports` (#86273, https://github.com/kubernetes/kubernetes/issues/93952, https://github.com/kubernetes/kubernetes/issues/113246)
* `volumes` (https://github.com/kubernetes/kubernetes/issues/78266)
* ~~`containers[*].volumeMounts`~~ (https://github.com/kubernetes/kubernetes/pull/35071 changed the merge key from name to mountPath, which was [a breaking change](https://github.com/kubernetes/kubernetes/issues/36024#issuecomment-261169033), but mountPath is at least required to be unique)
Service
* `ports` (name+protocol required to be unique on create in https://github.com/kubernetes/kubernetes/pull/47336, but still has issues on update in https://github.com/kubernetes/kubernetes/issues/59119, #97883, and mergeKey is still only name, xref https://github.com/kubernetes/kubernetes/issues/47249)
Original report
===
**What happened**:
For `deployment` resource:
A container has an environment variable named `x` that is duplicated (there are two env var entries with the same name; the value is also the same).
When you fix the `deployment` resource descriptor so that the environment variable named `x` appears only once and push it with `kubectl apply`, a deployment with no environment variable named `x` is created, and therefore no environment variable named `x` is passed to the replica set and pods.
**What you expected to happen**:
After fixing the `deployment`, the environment variable named `x` is defined in the `deployment` exactly once.
**How to reproduce it (as minimally and precisely as possible)**:
1. create a deployment with a container that has a duplicated environment variable
1. `kubectl apply` it
1. fix the deployment, removing one of the duplicated environment variable definitions
1. `kubectl apply` it
1. `kubectl get deployment/your-deployment -o yaml` prints the deployment without the environment variable
**Anything else we need to know?**:
nope
**Environment**:
- Kubernetes version (use `kubectl version`):
`Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T20:00:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}`
- Cloud provider or hardware configuration: private Kubernetes cluster
- OS (e.g. from /etc/os-release): N/A
- Kernel (e.g. `uname -a`): N/A
- Install tools: N/A
- Others: N/A
| kind/bug,priority/backlog,sig/api-machinery,lifecycle/frozen | high | Critical |
289,826,964 | rust | Performance regressions of compiled code over the last year | Started on an update to https://github.com/Marwes/combine after it being dormant for a while. When I ran the benchmarks to check that my changes hadn't regressed the performance I noticed that performance had regressed by ~28% (~116% with incremental compilation!) since the last time ran benchmarks (somewhere around September 2016).
I ran the benchmarks again against an old version of the library to be able to compile it with older rustc's but the regression is the same in the master branch as well.
## `cargo bench --bench http` against https://github.com/Marwes/combine/tree/v2.3.2
### cargo-0.18.0-nightly (a73a665 2017-02-14)
```
test http_requests_large ... bench: 439,961 ns/iter (+/- 30,684)
test http_requests_small ... bench: 87,508 ns/iter (+/- 5,173)
```
### rustc 1.19.0-nightly (554c685b0 2017-06-14)
```
test http_requests_large ... bench: 475,989 ns/iter (+/- 10,477)
test http_requests_small ... bench: 95,175 ns/iter (+/- 23,751)
```
### rustc 1.22.0-nightly (368122087 2017-09-06)
```
test http_requests_large ... bench: 494,088 ns/iter (+/- 27,462)
test http_requests_small ... bench: 102,798 ns/iter (+/- 67,446)
```
### rustc 1.25.0-nightly (6828cf901 2018-01-06) (CARGO_INCREMENTAL=0)
```
test http_requests_large ... bench: 551,065 ns/iter (+/- 420,621)
test http_requests_small ... bench: 112,375 ns/iter (+/- 2,098)
```
### rustc 1.25.0-nightly (6828cf901 2018-01-06) (CARGO_INCREMENTAL=1)
```
test http_requests_large ... bench: 1,001,847 ns/iter (+/- 40,639)
test http_requests_small ... bench: 188,091 ns/iter (+/- 1,958)
```
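For reference, the headline percentages appear to come from the `http_requests_small` row, comparing the 2017-02-14 numbers with the current nightly:

```latex
\frac{112{,}375}{87{,}508} \approx 1.28 \;(\approx +28\%), \qquad \frac{188{,}091}{87{,}508} \approx 2.15 \;(\approx +115\%)
```

The `http_requests_large` row shows a similar, slightly smaller regression without incremental compilation.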
I'd like to bisect this further, but the two tools I found for this do not appear to work in this case. Is there any other tool that can be used for this?
https://github.com/kamalmarhubi/rust-bisect (Outdated)
https://github.com/Mark-Simulacrum/bisect-rust/tree/master/src (Only last 90 days) | A-LLVM,I-slow,C-enhancement,P-medium,T-compiler,regression-from-stable-to-stable,A-libtest | medium | Major |
289,876,633 | rust | rustc hangs when compiling a certain broken codebase, rather than emitting an error message | Let me start by saying that I'm NOT asking for help with my project, and I KNOW my code is broken. I just want rustc to give me an error message instead of deadlocking. :)
I found this while trying to make a version of Servo without JavaScript. I started by deleting all references to mozjs from .toml files and deleting all "use js::*" lines from .rs files. That *should* cause the compile to fail almost right away with compile errors. Instead, rustc just hangs when it reaches the "script" crate.
**OS and RustC version**
This is 64-bit Linux Mint 18, corresponding to Ubuntu 16.04. My rustc version is rustc 1.23.0 (766bd11c8 2018-01-01).
**To Reproduce**
- Check out this branch of Servo: https://github.com/Max-E/servo/tree/stripped_out
- compile with "./mach build -rv"
**Expected Results**
Compilation should fail as soon as it hits the "script" crate with a huge number of error messages.
**Actual Results**
rustc just hangs. Instead of the expected CPU utilization (8 threads going full blast) there is only one thread. When I attach gdb to the rustc process, I see only two threads. One thread is stuck on a pthread_join, and another thread is in a loop that seems to call memcmp a lot.
**A Second Opinion**
A very kind bystander on IRC was able to reproduce the same problem using both the stable and nightly versions of rustc.
**An Apology**
I wish I could narrow down the problem instead of just dumping this huge codebase on you, but I frankly am not sure how to even begin doing that. | A-diagnostics,T-compiler,C-bug,I-hang | low | Critical |
289,890,718 | vue | Double value appears in textarea when using a render function in a .vue file. | ### Version
2.5.13
### Reproduction link
[https://jsfiddle.net/SunnyLyu/ntoboxev/](https://jsfiddle.net/SunnyLyu/ntoboxev/)
### Steps to reproduce
1. Sorry that the .vue file definition could not be used in jsfiddle; please follow the steps below.
2. Create a textarea component like this:
export default {
props: {
value: {
type: String
}
},
render: function(h) {
var _this = this;
return h('textarea', {
'on': {
'input': function(e){
_this.$emit('update:value', e.target.value);
}
}
})
}
};
3. Import it into a .vue file and use it as a Vue component:
<!-- ta means the component defined above -->
<ta id="main" :value.sync="msg"></ta>
4. Run all of this code.
### What is expected?
After typing '1' in the textarea once, a single character '1' should appear in the textarea, in IE11.
### What is actually happening?
After typing '1' in the textarea once, 2 characters '11' appear in the textarea at the same time, in IE11.
---
1. When just using the component definition in plain HTML, everything works well in both IE and Chrome (like the example running in jsfiddle).
2. But once the definition is used in a .vue file, the compiled result goes wrong in IE11, while it is still OK in Safari and Chrome.
 | browser quirks | low | Major |
289,898,217 | rust | False positive: "struct is never used", but it is used via associated constant from a trait | ```Rust
trait X {
const X: usize;
}
struct A;
struct B;
impl X for A {
const X: usize = 0;
}
impl X for B {
const X: usize = 1;
}
fn main() {
let x = 1;
match x {
A::X => println!("A"),
B::X => println!("B"),
_ => println!("?"),
}
}
```
```
λ rustc main.rs ~/trash
warning: struct is never used: `A`
--> main.rs:5:1
|
5 | struct A;
| ^^^^^^^^^
|
= note: #[warn(dead_code)] on by default
warning: struct is never used: `B`
--> main.rs:6:1
|
6 | struct B;
```
Looks like a false positive?
| C-enhancement,A-lints,T-compiler | low | Minor |
289,925,274 | vscode | Allow to "Scope to this" in explorer | I'd like to have the "scope" feature in the tree view to "isolate" a directory in order to have a "cleaner" view. For example, if my project root directory has 2 main sub-directories: "public" and "admin":
main-dir
\___ admin
\___ public
and I want to work only in the "public" directory, I'd right-click the "public" directory in the tree view and choose "Scope to this", and then the tree view would show only the "public" directory. | feature-request,file-explorer | high | Critical |
289,944,321 | kubernetes | Improve kubectl cp, so it doesn't require the tar binary in the container |
> /kind feature
**What happened**:
Kubectl cp currently requires the container we're copying into to include the tar binary. This is problematic when the container image is minimal and only includes the main binary run in the container and nothing else.
**What you expected to happen**:
Docker now has `docker cp`, which can copy files into a running container without any prerequisites on the container itself. Kubectl cp could use that mechanism. Obviously, this will require introducing a new feature into CRI, so it's not a small task.
**Why we need this**:
This will enable users to debug an existing (running) container, which is based on the `scratch` image and contains nothing else but the main app binary. Users would be able to get any binary they need into the container. An alternative solution could be to mount an additional volume (possibly from another container image) into a running pod (if that feature is ever implemented). | sig/node,kind/feature,sig/cli,lifecycle/frozen | high | Critical |
289,966,029 | go | runtime: stack grow panic tracing back through sigpanic from signal handler | From https://build.golang.org/log/6864350004c318139a5516a5b65d5099a88a0272:
````
panic: runtime: unexpected return pc for runtime.exitsyscall called from 0x0
fatal error: unknown caller pc
panic during panic
runtime stack:
runtime.startpanic_m()
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/panic.go:688 +0x174
runtime.startpanic()
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/panic.go:589 +0x14
runtime.throw(0x100f5e61e, 0x11)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/panic.go:615 +0x54
runtime.gentraceback(0xffffffffffffffff, 0xffffffffffffffff, 0x0, 0x106ade300, 0x0, 0x0, 0x7fffffff, 0x100f65ab0, 0x16f0177e0, 0x0, ...)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/traceback.go:286 +0x14fc
runtime.copystack(0x106ade300, 0x1000, 0x1)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/stack.go:891 +0x1c4
runtime.newstack()
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/stack.go:1063 +0x254
runtime.morestack()
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/asm_arm64.s:297 +0x68
goroutine 40 [copystack]:
runtime.mapaccess1_fast32(0x100f31600, 0x0, 0x15dc0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/hashmap_fast.go:12 +0x1b8 fp=0x106b05b60 sp=0x106b05b60 pc=0x100df6818
runtime.resolveTypeOff(0x100f34880, 0x15dc0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/type.go:227 +0x70 fp=0x106b05bc0 sp=0x106b05b60 pc=0x100e38a70
runtime.(*_type).typeOff(0x100f34880, 0x100015dc0, 0x101039901)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/type.go:239 +0x28 fp=0x106b05be0 sp=0x106b05bc0 pc=0x100e38cf8
runtime.(*itab).init(0x10126cff0, 0x0, 0x101057ee0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/iface.go:193 +0x228 fp=0x106b05c90 sp=0x106b05be0 pc=0x100df9ec8
runtime.getitab(0x100f34880, 0x100f26c00, 0x1, 0x1)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/iface.go:69 +0x3b0 fp=0x106b05d10 sp=0x106b05c90 pc=0x100df99a0
runtime.assertE2I2(0x100f34880, 0x100f26c00, 0x106b21fe0, 0x106a20058, 0x106a10798, 0x100e16068)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/iface.go:592 +0x34 fp=0x106b05d40 sp=0x106b05d10 pc=0x100dfad04
runtime.printany(0x100f26c00, 0x106b21fe0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/error.go:78 +0x48 fp=0x106b05e40 sp=0x106b05d40 pc=0x100df2a98
runtime.printpanics(0x106b05ed8)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/panic.go:417 +0x58 fp=0x106b05e60 sp=0x106b05e40 pc=0x100e15108
panic(0x100f32260, 0x1010320c0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/panic.go:550 +0x398 fp=0x106b05f00 sp=0x106b05e60 pc=0x100e15528
runtime.panicmem()
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/panic.go:63 +0x5c fp=0x106b05f20 sp=0x106b05f00 pc=0x100e1425c
runtime.sigpanic()
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/signal_unix.go:382 +0x134 fp=0x106b05f70 sp=0x106b05f20 pc=0x100e2a0f4
runtime.exitsyscall(0x3b)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/proc.go:2973 +0x48 fp=0x106b05fb0 sp=0x106b05f80 pc=0x100e1dd68
created by log/syslog.TestConcurrentReconnect
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:351 +0x1d0
goroutine 1 [chan receive]:
testing.(*T).Run(0x106abc000, 0x100f5faef, 0x17, 0x100f65808, 0x5a5ae101)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:825 +0x258
testing.runTests.func1(0x106abc000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:1063 +0x54
testing.tRunner(0x106abc000, 0x106a5ddd0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:777 +0xb0
testing.runTests(0x106a88180, 0x101036660, 0x8, 0x8, 0x100e3ab60)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:1061 +0x26c
testing.(*M).Run(0x106aba000, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:978 +0x14c
main.main()
_testmain.go:58 +0x15c
goroutine 20 [chan receive]:
testing.(*T).Parallel(0x106abc0f0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:697 +0x17c
log/syslog.TestWithSimulated(0x106abc0f0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:137 +0x28
testing.tRunner(0x106abc0f0, 0x100f65840)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:777 +0xb0
created by testing.(*T).Run
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:824 +0x244
goroutine 8 [chan receive]:
testing.(*T).Parallel(0x106ac03c0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:697 +0x17c
log/syslog.TestWrite(0x106ac03c0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:266 +0x48
testing.tRunner(0x106ac03c0, 0x100f65848)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:777 +0xb0
created by testing.(*T).Run
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:824 +0x244
goroutine 38 [semacquire]:
sync.runtime_Semacquire(0x106a182cc)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/sema.go:56 +0x2c
sync.(*WaitGroup).Wait(0x106a182c0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/sync/waitgroup.go:129 +0x68
log/syslog.TestConcurrentReconnect(0x106ac05a0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:385 +0x260
testing.tRunner(0x106ac05a0, 0x100f65808)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:777 +0xb0
created by testing.(*T).Run
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/testing/testing.go:824 +0x244
goroutine 53 [runnable]:
log/syslog.runStreamSyslog.func1(0x106a18280, 0x106a621e0, 0x100f7be40, 0x106a0e090)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74
created by log/syslog.runStreamSyslog
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74 +0x6c
goroutine 46 [runnable]:
syscall.Syscall(0x4, 0x9, 0x106ab64c0, 0x3b, 0x3b, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/syscall/asm_darwin_arm64.s:13 +0x8
syscall.write(0x9, 0x106ab64c0, 0x3b, 0x40, 0x100e54400, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/syscall/zsyscall_darwin_arm64.go:1321 +0x48
syscall.Write(0x9, 0x106ab64c0, 0x3b, 0x40, 0x7ffffffe7534aa79, 0x0, 0x1)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/syscall/syscall_unix.go:181 +0x38
internal/poll.(*FD).Write(0x106ad0500, 0x106ab64c0, 0x3b, 0x40, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_unix.go:256 +0x104
net.(*netFD).Write(0x106ad0500, 0x106ab64c0, 0x3b, 0x40, 0x100f52bc0, 0x100e80e78, 0x101032210)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/fd_unix.go:220 +0x3c
net.(*conn).Write(0x106a92070, 0x106ab64c0, 0x3b, 0x40, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/net.go:188 +0x50
fmt.Fprintf(0x100f7a1a0, 0x106a92070, 0x100f5f69a, 0x16, 0x106afbda0, 0x7, 0x7, 0x3b, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/fmt/print.go:189 +0x7c
log/syslog.(*netConn).writeString(0x106a882c0, 0xe, 0x106ac8360, 0x12, 0x100f5bb9b, 0x3, 0x100f5bd25, 0x4, 0x100f5ba0f, 0x1, ...)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog.go:295 +0x424
log/syslog.(*Writer).write(0x106a62300, 0xe, 0x100f5bd25, 0x4, 0x4, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog.go:273 +0xa8
log/syslog.(*Writer).writeAndRetry(0x106a62300, 0x6, 0x100f5bd25, 0x4, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog.go:254 +0x124
log/syslog.(*Writer).Info(0x106a62300, 0x100f5bd25, 0x4, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog.go:236 +0x38
log/syslog.TestConcurrentReconnect.func3(0x106a182c0, 0x106a1e440, 0x106a182b0, 0xf, 0x106ac05a0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:377 +0x15c
created by log/syslog.TestConcurrentReconnect
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:368 +0x244
goroutine 47 [IO wait]:
internal/poll.runtime_pollWait(0x10126ca90, 0x77, 0x10126ca98)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/netpoll.go:173 +0x3c
internal/poll.(*pollDesc).wait(0x106aba898, 0x77, 0x106afc600, 0x106afc658, 0x106ab9901)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:85 +0xa0
internal/poll.(*pollDesc).waitWrite(0x106aba898, 0x106aba800, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:94 +0x30
internal/poll.(*FD).WaitWrite(0x106aba880, 0x106a86048, 0x106a86048)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_unix.go:440 +0x2c
net.(*netFD).connect(0x106aba880, 0x100f7b380, 0x106a86048, 0x0, 0x0, 0x100f7a360, 0x106ab9960, 0x0, 0x0, 0x0, ...)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/fd_unix.go:152 +0x1ec
net.(*netFD).dial(0x106aba880, 0x100f7b380, 0x106a86048, 0x100f7b9c0, 0x0, 0x100f7b9c0, 0x106a80ba0, 0x100ef2298, 0x106a86da0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/sock_posix.go:142 +0x98
net.socket(0x100f7b380, 0x106a86048, 0x100f5bb9e, 0x3, 0x2, 0x1, 0x0, 0x0, 0x100f7b9c0, 0x0, ...)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/sock_posix.go:93 +0x128
net.internetSocket(0x100f7b380, 0x106a86048, 0x100f5bb9e, 0x3, 0x100f7b9c0, 0x0, 0x100f7b9c0, 0x106a80ba0, 0x1, 0x0, ...)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/ipsock_posix.go:141 +0xbc
net.doDialTCP(0x100f7b380, 0x106a86048, 0x100f5bb9e, 0x3, 0x0, 0x106a80ba0, 0x101039120, 0x100f7b380, 0x106a86048)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/tcpsock_posix.go:62 +0x78
net.dialTCP(0x100f7b380, 0x106a86048, 0x100f5bb9e, 0x3, 0x0, 0x106a80ba0, 0x0, 0x100e51b68, 0x150993ee4291fdb0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/tcpsock_posix.go:58 +0xb0
net.dialSingle(0x100f7b380, 0x106a86048, 0x106aba800, 0x100f7ab00, 0x106a80ba0, 0x0, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/dial.go:547 +0x300
net.dialSerial(0x100f7b380, 0x106a86048, 0x106aba800, 0x106b179c0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/dial.go:515 +0x18c
net.(*Dialer).DialContext(0x106afcda8, 0x100f7b380, 0x106a86048, 0x100f5bb9e, 0x3, 0x106a182b0, 0xf, 0x0, 0x0, 0x0, ...)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/dial.go:397 +0x510
net.(*Dialer).Dial(0x106afcda8, 0x100f5bb9e, 0x3, 0x106a182b0, 0xf, 0x0, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/dial.go:320 +0x5c
net.Dial(0x100f5bb9e, 0x3, 0x106a182b0, 0xf, 0x106ac83c0, 0x12, 0x100f5bb9b, 0x3)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/dial.go:291 +0x58
log/syslog.(*Writer).connect(0x106a62360, 0xe, 0x100f5bd25)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog.go:162 +0x10c
log/syslog.(*Writer).writeAndRetry(0x106a62360, 0x6, 0x100f5bd25, 0x4, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog.go:258 +0x90
log/syslog.(*Writer).Info(0x106a62360, 0x100f5bd25, 0x4, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog.go:236 +0x38
log/syslog.TestConcurrentReconnect.func3(0x106a182c0, 0x106a1e440, 0x106a182b0, 0xf, 0x106ac05a0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:377 +0x15c
created by log/syslog.TestConcurrentReconnect
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:368 +0x244
goroutine 39 [IO wait]:
internal/poll.runtime_pollWait(0x10126cea0, 0x72, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/netpoll.go:173 +0x3c
internal/poll.(*pollDesc).wait(0x106ad0218, 0x72, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:85 +0xa0
internal/poll.(*pollDesc).waitRead(0x106ad0218, 0xffffffffffffff00, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:90 +0x30
internal/poll.(*FD).Accept(0x106ad0200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_unix.go:372 +0x164
net.(*netFD).accept(0x106ad0200, 0x106a0e098, 0x0, 0x100e1e45c)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/fd_unix.go:238 +0x24
net.(*TCPListener).accept(0x106a0e038, 0x106a0e098, 0x100e3cc20, 0x106ae5f40)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/tcpsock_posix.go:136 +0x24
net.(*TCPListener).Accept(0x106a0e038, 0x100f65850, 0x106a18280, 0x106a621e0, 0x100f7be40)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/tcpsock.go:259 +0x34
log/syslog.runStreamSyslog(0x100f7b000, 0x106a0e038, 0x106a621e0, 0x106a18280)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:70 +0x80
log/syslog.startServer.func2(0x106a0e030, 0x100f7b000, 0x106a0e038, 0x106a621e0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:130 +0x68
created by log/syslog.startServer
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:128 +0x378
goroutine 51 [runnable]:
log/syslog.runStreamSyslog.func1(0x106a18280, 0x106a621e0, 0x100f7be40, 0x106a0e080)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74
created by log/syslog.runStreamSyslog
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74 +0x6c
goroutine 24 [IO wait]:
internal/poll.runtime_pollWait(0x10126c680, 0x72, 0x106b09000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/netpoll.go:173 +0x3c
internal/poll.(*pollDesc).wait(0x106aba318, 0x72, 0xffffffffffffff00, 0x100f7a5c0, 0x101020f80)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:85 +0xa0
internal/poll.(*pollDesc).waitRead(0x106aba318, 0x106b09000, 0x1000, 0x1000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:90 +0x30
internal/poll.(*FD).Read(0x106aba300, 0x106b09000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_unix.go:157 +0x198
net.(*netFD).Read(0x106aba300, 0x106b09000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/fd_unix.go:202 +0x3c
net.(*conn).Read(0x106a92040, 0x106b09000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/net.go:176 +0x50
bufio.(*Reader).fill(0x106afef58)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:100 +0x104
bufio.(*Reader).ReadSlice(0x106afef58, 0x100e5310a, 0x18, 0x106a2fdb8, 0x106a70d80, 0x100e53b78, 0x106aba300)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:341 +0x20
bufio.(*Reader).ReadBytes(0x106afef58, 0xbe8ed63f84bab50a, 0x12aed3521, 0x100f0de00, 0x1000, 0x100f26dc0, 0x106a2fe01)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:419 +0x48
bufio.(*Reader).ReadString(0x106a2ff58, 0x100a, 0x1000, 0x106b09000, 0x1000, 0x1000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:459 +0x28
log/syslog.runStreamSyslog.func1(0x106a18280, 0x106a621e0, 0x100f7be40, 0x106a92040)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:79 +0x154
created by log/syslog.runStreamSyslog
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74 +0x6c
goroutine 52 [runnable]:
log/syslog.runStreamSyslog.func1(0x106a18280, 0x106a621e0, 0x100f7be40, 0x106a0e088)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74
created by log/syslog.runStreamSyslog
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74 +0x6c
goroutine 26 [runnable]:
internal/poll.runtime_pollWait(0x10126c340, 0x72, 0x106b0b000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/netpoll.go:173 +0x3c
internal/poll.(*pollDesc).wait(0x106aba418, 0x72, 0xffffffffffffff00, 0x100f7a5c0, 0x101020f80)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:85 +0xa0
internal/poll.(*pollDesc).waitRead(0x106aba418, 0x106b0b000, 0x1000, 0x1000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:90 +0x30
internal/poll.(*FD).Read(0x106aba400, 0x106b0b000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_unix.go:157 +0x198
net.(*netFD).Read(0x106aba400, 0x106b0b000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/fd_unix.go:202 +0x3c
net.(*conn).Read(0x106a92050, 0x106b0b000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/net.go:176 +0x50
bufio.(*Reader).fill(0x106b00f58)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:100 +0x104
bufio.(*Reader).ReadSlice(0x106b00f58, 0x100e5310a, 0x18, 0x106a30db8, 0x106a71080, 0x100e53b78, 0x106aba400)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:341 +0x20
bufio.(*Reader).ReadBytes(0x106b00f58, 0xbe8ed63f84bb740a, 0x12aedf0a1, 0x100f0de00, 0x1000, 0x100f26dc0, 0x106a30e01)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:419 +0x48
bufio.(*Reader).ReadString(0x106a30f58, 0x100a, 0x1000, 0x106b0b000, 0x1000, 0x1000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:459 +0x28
log/syslog.runStreamSyslog.func1(0x106a18280, 0x106a621e0, 0x100f7be40, 0x106a92050)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:79 +0x154
created by log/syslog.runStreamSyslog
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74 +0x6c
goroutine 27 [runnable]:
internal/poll.runtime_pollWait(0x10126c270, 0x72, 0x106ad5000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/runtime/netpoll.go:173 +0x3c
internal/poll.(*pollDesc).wait(0x106aba498, 0x72, 0xffffffffffffff00, 0x100f7a5c0, 0x101020f80)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:85 +0xa0
internal/poll.(*pollDesc).waitRead(0x106aba498, 0x106ad5000, 0x1000, 0x1000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_poll_runtime.go:90 +0x30
internal/poll.(*FD).Read(0x106aba480, 0x106ad5000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/internal/poll/fd_unix.go:157 +0x198
net.(*netFD).Read(0x106aba480, 0x106ad5000, 0x1000, 0x1000, 0x2503, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/fd_unix.go:202 +0x3c
net.(*conn).Read(0x106a92058, 0x106ad5000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/net/net.go:176 +0x50
bufio.(*Reader).fill(0x106a42f58)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:100 +0x104
bufio.(*Reader).ReadSlice(0x106a42f58, 0xa, 0x0, 0x5, 0x2, 0x4, 0x1)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:341 +0x20
bufio.(*Reader).ReadBytes(0x106a42f58, 0xbe8ed63f84abc50a, 0x12ade44e9, 0x100f0de00, 0x1000, 0x100f26dc0, 0x106a31601)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:419 +0x48
bufio.(*Reader).ReadString(0x106a31758, 0x100a, 0x1000, 0x106ad5000, 0x1000, 0x1000)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/bufio/bufio.go:459 +0x28
log/syslog.runStreamSyslog.func1(0x106a18280, 0x106a621e0, 0x100f7be40, 0x106a92058)
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:79 +0x154
created by log/syslog.runStreamSyslog
/private/var/folders/f6/d2bhfqss2716nxm8gkv1fmb80000gn/T/workdir-host-darwin-amd64-eliasnaur-ios/go/src/log/syslog/syslog_test.go:74 +0x6c
````
The stack looks interesting to me. @aclements ? | NeedsInvestigation | low | Critical |
290,123,620 | go | x/net/html: Parse() adds duplicate elements | ### What version of Go are you using (`go version`)?
go version go1.9.2 darwin/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOOS="darwin"
### What did you do?
https://play.golang.org/p/M3PUK90dQhM
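For convenience, the playground reproduction can be sketched as a standalone program like the one below. The exact input fragment is an assumption inferred from the expected output below (the playground content is not inlined here); the program simply parses the fragment with x/net/html and renders it back:
```go
package main

import (
	"bytes"
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	// Assumed input, reconstructed from the expected output shown below.
	const in = `<strong someAttr><a href="some_link"><strong><strong><strong><strong></strong></strong></strong></strong> </a></strong>`

	doc, err := html.Parse(strings.NewReader(in))
	if err != nil {
		panic(err)
	}

	// Re-serialize the parsed tree to compare with the expected/actual HTML.
	var buf bytes.Buffer
	if err := html.Render(&buf, doc); err != nil {
		panic(err)
	}
	fmt.Println(buf.String())
}
```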
### What did you expect to see?
```
<html><head></head><body><strong someAttr=""><a href="some_link"><strong><strong><strong><strong></strong></strong></strong></strong> </a></strong></body></html>
```
### What did you see instead?
```
<html><head></head><body><strong someattr=""><a href="some_link"><strong><strong><strong><strong></strong></strong></strong></strong></a></strong><a href="some_link"> </a></body></html>
```
Notice the duplicate `<a href="some_link"></a>`
From what I understand, https://github.com/golang/net/blob/master/html/parse.go#L347 removes the identical opening element, and before it has a chance to remove the corresponding closing element, reconstructActiveFormattingElements() "fixes" the "broken" structure. | NeedsInvestigation | low | Critical |
290,143,939 | flutter | CocoaPods might not work if there are multiple targets or the target name changed | https://github.com/flutter/flutter/blob/master/packages/flutter_tools/lib/src/ios/xcodeproj.dart#L67
is hardcoded.
Discover all the target names and dynamically insert them all into Generated.xcconfig | tool,platform-mac,customer: posse (eap),P3,a: plugins,team-tool,triaged-tool | low | Minor |
290,151,038 | rust | who tests the tester? | So, by and large, compiletest is untested, as far as I know. That is -- there are no 'self tests' to make sure that it's acting as it should, invoking revisions the way we expect, and so forth. This could be a bit of a tricky thing to do, but it's definitely worth the effort.
For example, an early version of https://github.com/rust-lang/rust/pull/47605 had a subtle bug where it *appeared* to be working, but in fact was not passing the revision info through to the final test. I don't think this would have bothered travis one bit, as that would have happily run the same revision over and over.
cc @spastorino @oli-obk ... we need like a compiletest team, don't we? :) | A-testsuite,E-hard,C-enhancement,T-compiler,T-bootstrap,T-infra,A-compiletest,E-needs-design | low | Critical |
290,167,954 | rust | Poor error message when user forgets derive that has attributes | ```rust
#[macro_use]
extern crate serde_derive;
#[serde(untagged)]
enum CellIndex {
    Auto,
    Index(u32),
}
```
```
error[E0658]: The attribute `serde` is currently unknown to the compiler and may have meaning added to it in the future (see issue #29642)
 --> src/main.rs:4:1
  |
4 | #[serde(untagged)]
  | ^^^^^^^^^^^^^^^^^^
  |
  = help: add #![feature(custom_attribute)] to the crate attributes to enable
```
The error and suggestion are super misleading and I have seen this a few times in #serde. It should be possible for the compiler to observe that there are derive macros in scope with `serde` declared as an attribute, and suggest using those.
```rust
// These are in scope
#[proc_macro_derive(Serialize, attributes(serde))]
#[proc_macro_derive(Deserialize, attributes(serde))]
```
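For reference, the code the diagnostic should steer users toward would look roughly like the sketch below (assuming serde and serde_derive 1.x are declared as dependencies):
```rust
#[macro_use]
extern crate serde_derive;

// Deriving Serialize/Deserialize is what brings the `serde` container
// attribute into scope, so the attribute below is then accepted.
#[derive(Serialize, Deserialize)]
#[serde(untagged)]
enum CellIndex {
    Auto,
    Index(u32),
}
```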
A better message would not have the part about the compiler adding meaning to `#[serde]` in the future and would recommend using `#[derive(Serialize)]` or `#[derive(Deserialize)]` on the struct containing the attribute. | C-enhancement,A-diagnostics,A-macros,T-compiler,D-newcomer-roadblock,A-proc-macros | medium | Critical |
290,188,886 | youtube-dl | SOCKS proxy mechanism does not support IPv6 | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.01.18*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.01.18**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
```
$ youtube-dl.real -v --proxy socks5://fuyu.home.romanrm.net:1080/ https://www.youtube.com/watch?v=s8kOsoPZUkc
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'--proxy', u'socks5://fuyu.home.romanrm.net:1080/', u'https://www.youtube.com/watch?v=s8kOsoPZUkc']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.01.18
[debug] Python version 2.7.13 (CPython) - Linux-4.14.8-rm1+-x86_64-with-debian-8.10
[debug] exe versions: ffmpeg 3.2.5-1, ffprobe 3.2.5-1
[debug] Proxy map: {u'http': u'socks5://fuyu.home.romanrm.net:1080/', u'https': u'socks5://fuyu.home.romanrm.net:1080/'}
[youtube] s8kOsoPZUkc: Downloading webpage
ERROR: Unable to download webpage: <urlopen error [Errno -2] Name or service not known> (caused by URLError(gaierror(-2, 'Name or service not known'),))
File "/usr/local/bin/youtube-dl.real/youtube_dl/extractor/common.py", line 517, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/youtube-dl.real/youtube_dl/YoutubeDL.py", line 2198, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/local/bin/youtube-dl.real/youtube_dl/utils.py", line 1089, in https_open
req, **kwargs)
File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
raise URLError(err)
$ host fuyu.home.romanrm.net
fuyu.home.romanrm.net has IPv6 address fd39::101
$ ping6 fuyu.home.romanrm.net
PING fuyu.home.romanrm.net(fd39::101 (fd39::101)) 56 data bytes
64 bytes from fd39::101 (fd39::101): icmp_seq=1 ttl=64 time=0.238 ms
64 bytes from fd39::101 (fd39::101): icmp_seq=2 ttl=64 time=0.232 ms
^C
--- fuyu.home.romanrm.net ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.232/0.235/0.238/0.003 ms
```
---
### Description of your *issue*, suggested solution and other information
Does not work with proxy hostnames which resolve to IPv6 only.
Does not work with a bare IPv6 address specified (in brackets) instead of the hostname either. The error then becomes:
```
ERROR: Unable to download webpage: <urlopen error [Errno -9] Address family for hostname not supported> (caused by URLError(gaierror(-9, 'Address family for hostname not supported'),))
``` | bug | low | Critical |
290,200,268 | vscode | Provide API to access and change editor tab labels | Feature Request
Atom has the ability to customize tabs, which is very powerful for some requirements. For example, the Atom extension [nice-index](https://github.com/joshwcomeau/nice-index) renames the tabs of open editors that all share the name `index.js` to the names of their parent folders.


It would be very useful if we could do the same.
@bpasero I understand that the feature requested by #32836 will not be considered in the first half of 2018. My feature request is very similar, so may I ask when the ability to customize tabs in VSC might be added to the roadmap?
I was excited when I started thinking about how to implement an extension like `nice-index` in VSC, but after going through the documentation on writing VSC extensions, I found that I cannot achieve this today. I hope you will consider this feature request. Thanks for all of your great work ~ 🙏 | feature-request,api,workbench-tabs | high | Critical |
290,205,988 | angular | Bug (?) - ReactiveForms > FormArray, trackBy index > removeAt & subscriptions | Hi Angular,
I think I found a [x] bug, but I'm not sure.
I have a large reactive form with a FormArray, and I trackBy index. Each FormGroup in the array has subscriptions like:
```
// get the rides so we can iterate over them
get myRides(): FormArray {
return this.ridesForm.get('rides') as FormArray;
};
```
```
trackByFn(index, item) {
return index; // or item.id
}
```
```
// each ride that is added has subscriptions
subscribeChanges(ride: FormGroup){
ride['dateSub$'] = ride.get('date').valueChanges.subscribe(val => {
});
ride['distanceSub$'] = ride.get('distance').valueChanges.subscribe(val => {
});
ride['startCountSub$'] = ride.get('start_count').valueChanges.subscribe(val => {
});
ride['endCountSub$'] = ride.get('end_count').valueChanges.subscribe(val => {
});
}
```
Now you can also delete FormGroups using:
```
// deletes a single ride
public deleteRide(ride: FormGroup, i: number): void{
this.unsubscribeChanges(ride);
this.myRides.removeAt(i)
}
unsubscribeChanges(ride: FormGroup){
ride['dateSub$'].unsubscribe();
ride['distanceSub$'].unsubscribe();
ride['startCountSub$'].unsubscribe();
ride['endCountSub$'].unsubscribe();
}
````
But it seems that the tracking of objects goes wrong. Say we delete the FormGroup at index 3 of 6 FormGroups: the subscriptions at the index of the removed FormGroup, and below it, no longer fire, as if their binding were still on the 'old' index.
Is it not allowed to put your Subs$ on a FormGroup? Or is there a bug in the tracking/refreshing of FormGroups in the array?
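For what it's worth, a common workaround for removable rows is to track by a stable identity instead of by index. A rough sketch is below; the helper names and the `clientId` property are illustrative assumptions, not part of the code above:
```
import { FormBuilder, FormGroup } from '@angular/forms';

// Illustrative only: give each ride group a stable client-side id so *ngFor
// tracks rows by identity rather than by their (shifting) array position.
let nextRideId = 0;

export function createRideGroup(fb: FormBuilder): FormGroup {
  const ride = fb.group({
    date: [''],
    distance: [0],
    start_count: [0],
    end_count: [0],
  });
  (ride as any).clientId = nextRideId++; // kept off the form value on purpose
  return ride;
}

// Template usage: *ngFor="let ride of myRides.controls; trackBy: trackByRideId"
export function trackByRideId(index: number, ride: FormGroup): number {
  return (ride as any).clientId;
}
```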
| freq2: medium,area: forms,type: confusing,forms: Controls API,P3,core: control flow | medium | Critical |
290,209,375 | godot | RID::is_valid missing from GDNative C API | Godot 3.0 master
As I progress in porting my module, I found that `godot_rid_is_valid()` is missing from the C API of RID.
Should it be added? | enhancement,topic:gdextension | low | Minor |
290,217,408 | pytorch | Clang color diagnostics don't work with ninja | We seem to be hit by this problem: https://github.com/ninja-build/ninja/issues/174
Workaround: add `CFLAGS="-Xclang -fcolor-diagnostics"` to your environment.
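For example, to apply it for a single build (the build command itself is just a placeholder; use whatever you normally invoke):
```
# Forces colored clang diagnostics even though ninja pipes compiler output.
CFLAGS="-Xclang -fcolor-diagnostics" python setup.py build
```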
cc @malfet | module: build,triaged | low | Minor |
290,220,055 | create-react-app | Reduce installed size in 2.x | I noticed 2.x has regressed on the installed size
Before:
* 109,142,603 bytes (172.7 MB on disk) for 23,707 items
After:
* 135,452,954 bytes (216.4 MB on disk) for 29,913 items
You can check by creating a project with 2.0 alpha: https://github.com/facebookincubator/create-react-app/issues/3815.
@wtgtybhertgeghgtwtg Is this something you could look into? | tag: enhancement | low | Major |
290,233,949 | go | cmd/gofmt: extend column alignment to include assignments and declarations | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
Should apply to any version of go.
### Does this issue reproduce with the latest release?
Yep. It's a formatting enhancement.
### What operating system and processor architecture are you using (`go env`)?
Should happen in any environment.
### What did you do?
Type the following and run it through gofmt:
```go
a.b.c = 123
a.d = "def"
a.e.f.g = 456
h, i = 7, 8
j := 123
k.l := "def"
k.m.n.o := 456
p, q := 7, 8
var r = "abc"
var t, u = 1, 2
var v int = 3
```
and nothing would change with the formatting
### What did you expect to see?
What I would like to see is the column-alignment rules gofmt currently applies to structs and maps applied more uniformly, specifically to sequential assignments, sequential declarations with :=, and sequential declarations with =.
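For comparison, this is roughly the kind of alignment gofmt already performs for struct fields and keyed composite literals (a minimal illustration, using spaces for indentation here):
```go
type ride struct {
    name     string
    distance int
    done     bool
}

var defaults = map[string]int{
    "a":       1,
    "longKey": 42,
}
```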
So I'd like to see:
```go
a.b.c   = 123
a.d     = "def"
a.e.f.g = 456
h, i    = 7, 8
j       := 123
k.l     := "def"
k.m.n.o := 456
p, q    := 7, 8
var r     = "abc"
var t, u  = 1, 2
var v int = 3
```
### What did you see instead?
Saw the original code.
| NeedsFix | medium | Major |
290,241,176 | godot | Cannot register enums and constants with GDNative | Godot 3.0 master RC2
You can register classes, tool classes, signals, properties, and methods, but not enums or constants. I need them so I'm not forced to write raw numbers when an enum is expected. | enhancement,topic:core | low | Major |