id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
344,555,311 | flutter | TextInputType.emailAddress should imply autoCorrect: false | ## Steps to Reproduce
Create a textfield like this:
```dart
TextField(
  keyboardType: TextInputType.emailAddress,
)
```
Run this on iOS. (I didn't check on Android yet.)
You get something like this: ` _______ `
Start typing an email address beginning with a well-known name.
` jim_______ `
Type ".", and notice that the name gets capitalized.
` Jim.______ `
For me, this was unexpected.
To get the behaviour I expected (that an email address field wouldn't autocorrect words), I had to explicitly add:
```dart
TextField(
  keyboardType: TextInputType.emailAddress,
  autocorrect: false,
)
```
I don't think it ever makes sense to autocorrect in an email field. So I'm proposing that setting `keyboardType: TextInputType.emailAddress` should imply `autocorrect: false`, unless `autocorrect` is explicitly set to `true`.
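In the meantime, this default can be encoded at the app level. A minimal sketch, assuming a hypothetical `EmailField` wrapper widget (not part of Flutter):
```dart
import 'package:flutter/material.dart';

// Hypothetical wrapper: an email text field that disables autocorrect
// by default, while still letting callers opt back in explicitly.
class EmailField extends StatelessWidget {
  const EmailField({Key key, this.autocorrect = false}) : super(key: key);

  final bool autocorrect;

  @override
  Widget build(BuildContext context) {
    return TextField(
      keyboardType: TextInputType.emailAddress,
      autocorrect: autocorrect,
    );
  }
}
```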
```
[✓] Flutter (Channel dev, v0.5.7, on Mac OS X 10.13.5 17F77, locale en-GB)
• Flutter version 0.5.7 at /Users/steve/code/flutter
• Framework revision 66091f9696 (2 weeks ago), 2018-07-09 12:52:41 -0700
• Engine revision 6fe748490d
• Dart version 2.0.0-dev.63.0.flutter-4c9689c1d2
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.1)
• Android SDK at /Users/steve/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.1
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 9.4.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 9.4.1, Build version 9F2000
• ios-deploy 1.9.2
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.1)
• Android Studio at /Applications/Android Studio.app/Contents
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[✓] VS Code (version 1.25.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 2.16.0
[✓] Connected devices (1 available)
• iPhone X • 9655AB90-1BE3-43D8-B65C-937C0D5F86E7 • ios • iOS 11.4 (simulator)
• No issues found!
```
| a: text input,c: new feature,framework,a: fidelity,P2,team-framework,triaged-framework | low | Major |
344,565,825 | rust | Allow attr batching like #[attr1, attr2, attr3] and #[cfg_attr(cond, attr1, attr2, attr3)] | It would be useful and convenient if the compiler allowed batching attributes like `#[attr1, attr2, attr3]`, and especially with `cfg_attr`, like `#[cfg_attr(cond, attr1, attr2, attr3)]`; it would really reduce the verbosity of using `cfg_attr` (and normal attributes, too).
E.g. it would allow writing:
```rust
#[cfg_attr(feature = "default", derive(Identifiable, AsChangeset))]
#[cfg_attr(feature = "default", table_name = "mytable")]
```
As one line:
```rust
#[cfg_attr(feature = "default", derive(Identifiable, AsChangeset), table_name = "mytable")]
```
Also, I often use the [getset crate](https://github.com/Hoverbear/getset) and it currently requires writing:
```rust
#[get = "pub"]
#[set]
#[get_mut]
field: Ty,
```
when I need these accessors for a field. With batching it would be shorter:
```rust
#[get = "pub", set, get_mut]
field: Ty,
```
And a lot of similar use cases would be less verbose with attr batching, too :)
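For contrast, here is a minimal compilable sketch of the status quo (one attribute per `#[cfg_attr(...)]` line), using built-in attributes rather than the diesel/getset ones above:
```rust
// Today: each conditional attribute needs its own cfg_attr line.
#[cfg_attr(test, derive(Debug))]
#[cfg_attr(test, allow(dead_code))]
struct Config {
    name: String,
}

// Under the proposal, the two lines above could collapse into:
// #[cfg_attr(test, derive(Debug), allow(dead_code))]

fn main() {
    let _c = Config { name: String::from("x") };
}
```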
---
Btw, `#[cfg_attr(cond, attr1, attr2)]` is currently not a parse error when `cond` is false, but it probably should be (?)
Compare:
https://play.rust-lang.org/?gist=2de03f428468bafaab00376a75e66119&version=stable&mode=debug&edition=2015
and:
https://play.rust-lang.org/?gist=0bba1cc58221c6dadc3b48c61c9ce6b6&version=stable&mode=debug&edition=2015 | A-attributes,T-lang,C-feature-request | low | Critical |
344,583,183 | godot | Changing energy of Light2D node with AnimationPlayer requires clickthrough of Node in editor to reset after playing project | Godot v3.0.5
Windows 10
When using an AnimationPlayer to change the Energy of a Light2D node, failing to click the AnimationPlayer node in the editor (not changing anything, simply clicking the node) between project plays seems to cause the Energy animation track not to function during the next play, while all other animation tracks in the animation work as expected.
Steps to replicate:
Create an animation with multiple tracks, one containing keyframes that alter the Energy value of a Light2D node, with track settings continuous, linear, and wrap loop.
Play the project.
Play the project again; the Light2D node animation does not function as expected, if at all.
Click the AnimationPlayer in the editor. Do not change anything.
Play the project again and the Light2D node animation plays as expected. | bug,topic:editor | low | Critical |
344,588,486 | vscode | [scss] Breadcrumbs in scss file not working for multi-line selectors | Issue Type: Bug
Given the following `scss` file:
```scss
.btn,
.link {
  &.another-class {
  }
  &.classs {
  }
}
```

Expected the path to change to `.btn &.another-class` and `.btn &.classs` (basically the same as it works if you remove the second selector, `.link`).
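A plausible direction for a fix is to split grouped selectors on commas when building the breadcrumb label, rather than using the raw multi-line selector text. A rough sketch (the function is illustrative, not the actual vscode-css-languageservice API):
```typescript
// Illustrative: derive a single-line breadcrumb label from a rule whose
// selector has multiple comma-separated, possibly multi-line parts.
function breadcrumbLabel(selectorText: string): string {
  const parts = selectorText
    .split(",")
    .map(s => s.trim().replace(/\s+/g, " "))
    .filter(s => s.length > 0);
  return parts[0]; // e.g. ".btn,\n.link" -> ".btn"
}

console.log(breadcrumbLabel(".btn,\n.link")); // ".btn"
```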
VS Code version: Code - Insiders 1.26.0-insider (5fae91d61f12726e149fe5baed342eac2fbf8f88, 2018-07-25T02:04:01.629Z)
OS version: Windows_NT x64 10.0.17134
| bug,css-less-scss | low | Critical |
344,607,201 | flutter | GestureDetector should not cancel a tap until finger moves off of button | When a user taps on a button and then slides his/her finger around the button, if the finger moves more than the "slop" amount, the GestureDetector invokes onTapCancel. This behavior is not desired.
The desired behavior is that as long as the user's finger remains on the button, the button should remain pressed. Only upon dragging off the button should onTapCancel be invoked.
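A rough app-level workaround sketch under these assumptions (a hypothetical `HoldableButton`, using a raw `Listener` with a bounds check instead of the gesture arena's slop logic; this is not how the framework resolves gestures internally):
```dart
import 'package:flutter/material.dart';

// Hypothetical workaround: stays "pressed" while the pointer remains
// inside this widget's bounds, instead of cancelling after slop movement.
class HoldableButton extends StatefulWidget {
  const HoldableButton({Key key, this.child, this.onPressed}) : super(key: key);

  final Widget child;
  final VoidCallback onPressed;

  @override
  _HoldableButtonState createState() => _HoldableButtonState();
}

class _HoldableButtonState extends State<HoldableButton> {
  bool _pressed = false;

  // True while the pointer's global position is inside our render box.
  bool _inside(Offset globalPosition) {
    final RenderBox box = context.findRenderObject() as RenderBox;
    final Offset local = box.globalToLocal(globalPosition);
    return (Offset.zero & box.size).contains(local);
  }

  @override
  Widget build(BuildContext context) {
    return Listener(
      onPointerDown: (event) => setState(() => _pressed = true),
      onPointerMove: (event) => setState(() => _pressed = _inside(event.position)),
      onPointerUp: (event) {
        if (_pressed && widget.onPressed != null) widget.onPressed();
        setState(() => _pressed = false);
      },
      child: Opacity(opacity: _pressed ? 0.6 : 1.0, child: widget.child),
    );
  }
}
```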
CC @Hixie | c: new feature,framework,f: gestures,P3,team-framework,triaged-framework | low | Major |
344,654,049 | pytorch | TestSequenceOps.test_gather_padding failing | ```
=================================== FAILURES ===================================
_____________________ TestSequenceOps.test_gather_padding ______________________
self = <caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>
    @given(start_pad_width=st.integers(min_value=0, max_value=2),
>          end_pad_width=st.integers(min_value=0, max_value=2),
           args=_gen_test_add_padding(with_pad_data=True),
           **hu.gcs)
    def test_gather_padding(self, start_pad_width, end_pad_width, args, gc, dc):
lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py:189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../lib/python2.7/dist-packages/hypothesis/core.py:604: in execute
    result = self.test_runner(data, run)
../lib/python2.7/dist-packages/hypothesis/executors.py:58: in default_new_style_executor
    return function(data)
../lib/python2.7/dist-packages/hypothesis/core.py:595: in run
    return test(*args, **kwargs)
lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py:189: in test_gather_padding
    end_pad_width=st.integers(min_value=0, max_value=2),
../lib/python2.7/dist-packages/hypothesis/core.py:542: in test
    result = self.test(*args, **kwargs)
lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py:207: in test_gather_padding
    reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>
device_option = device_type: 1
op = input: "data"
input: "lengths"
output: "start_padding"
output: "end_padding"
n...
arg {
  name: "end_padding_width"
  i: 0
}
device_option {
  device_type: 1
}
inputs = [array([], dtype=float64), array([], dtype=int32)]
reference = <functools.partial object at 0x7fba1f33bb50>
input_device_options = None, threshold = 0.0001, output_to_grad = None
grad_reference = None, atol = 0.0001, outputs_to_check = [0, 1]
    def assertReferenceChecks(
        self,
        device_option,
        op,
        inputs,
        reference,
        input_device_options=None,
        threshold=1e-4,
        output_to_grad=None,
        grad_reference=None,
        atol=None,
        outputs_to_check=None,
    ):
        """
        This runs the reference Python function implementation
        (effectively calling `reference(*inputs)`, and compares that
        to the output of output, with an absolute/relative tolerance
        given by the `threshold` parameter.
        Useful for checking the implementation matches the Python
        (typically NumPy) implementation of the same functionality.
        Usage example:
            @given(X=hu.tensor(), inplace=st.booleans(), **hu.gcs)
            def test_softsign(self, X, inplace, gc, dc):
                op = core.CreateOperator(
                    "Softsign", ["X"], ["X" if inplace else "Y"])
                def softsign(X):
                    return (X / (1 + np.abs(X)),)
                self.assertReferenceChecks(gc, op, [X], softsign)
        """
        op = copy.deepcopy(op)
        op.device_option.CopyFrom(device_option)
        with temp_workspace():
            if (len(op.input) > len(inputs)):
                raise ValueError(
                    'must supply an input for each input on the op: %s vs %s' %
                    (op.input, inputs))
            _input_device_options = input_device_options or \
                core.InferOpBlobDevicesAsDict(op)[0]
            for (n, b) in zip(op.input, inputs):
                workspace.FeedBlob(
                    n,
                    b,
                    device_option=_input_device_options.get(n, device_option)
                )
            net = core.Net("opnet")
            net.Proto().op.extend([op])
            test_shape_inference = False
            try:
                (shapes, types) = workspace.InferShapesAndTypes([net])
                test_shape_inference = True
            except RuntimeError as e:
                # Temporarily catch runtime errors when inferring shape
                # and type info
                logging.warning(str(e))
                if os.getenv('CAFFE2_ASSERT_SHAPEINFERENCE') == '1':
                    raise e
            workspace.RunNetOnce(net)
            reference_outputs = reference(*inputs)
            if not (isinstance(reference_outputs, tuple) or
                    isinstance(reference_outputs, list)):
                raise RuntimeError(
                    "You are providing a wrong reference implementation. A "
                    "proper one should return a tuple/list of numpy arrays.")
            if not outputs_to_check:
                self.assertEqual(len(reference_outputs), len(op.output))
                outputs_to_check = list(range(len(op.output)))
            outs = []
            for (output_index, ref) in zip(outputs_to_check, reference_outputs):
                output_blob_name = op.output[output_index]
                output = workspace.FetchBlob(output_blob_name)
                if output.dtype.kind in ('S', 'O'):
                    np.testing.assert_array_equal(output, ref)
                else:
                    if atol is None:
                        atol = threshold
                    np.testing.assert_allclose(
                        output, ref, atol=atol, rtol=threshold,
                        err_msg=(
                            'Output {0} is not matching the reference'.format(
>                               output_blob_name,
                            )),
                    )
E   AssertionError:
E   Not equal to tolerance rtol=0.0001, atol=0.0001
E   Output start_padding is not matching the reference
E   (mismatch 100.0%)
E    x: array(-4618794431967920128)
E    y: array(0.)
lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py:575: AssertionError
----------------------------- Captured stderr call -----------------------------
WARNING:caffe2.python.workspace:CUDA operators do not support 64-bit doubles, please use arr.astype(np.float32) or np.int32 for ints. Blob: data type: float64
[previous line repeated 73 more times]
------------------------------ Captured log call -------------------------------
workspace.py 331 WARNING CUDA operators do not support 64-bit doubles, please use arr.astype(np.float32) or np.int32 for ints. Blob: data type: float64
[previous line repeated 73 more times]
---------------------------------- Hypothesis ----------------------------------
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([], dtype=int32), array([], dtype=float32), 0.0, 0.0), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=2, args=(array([ 8, 10, 9, 0], dtype=int32),
array([ 0.3465074 , -0.5783911 , 0.3465074 , 0.8693086 , 0.3465074 ,
-0.52675825, 0.3465074 , 0.3465074 , 0.3465074 , 0.9321062 ,
-0.9681313 , 0.3465074 , 0.85339016, -0.995281 , 0.3465074 ,
0.3465074 , 0.3465074 , -0.6266759 , 0.3465074 , 0.25781822,
0.3465074 , 0.38632435, -0.44261232, 0.3465074 , 0.86272967,
0.3465074 , 0.3465074 ], dtype=float32),
-0.7661220172644854,
0.32672292989354795), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=2, args=(array([10, 9, 10, 1, 7], dtype=int32),
array([ 0.62059796, 0.62059796, 0.62059796, -0.1115087 , 0.62059796,
0.62059796, 0.62059796, 0.62059796, 0.62059796, 0.03708775,
0.62059796, 0.62059796, -0.6321088 , 0.62059796, 0.62059796,
0.8428636 , -0.97314036, -0.664075 , 0.62059796, 0.62059796,
0.62059796, 0.62059796, 0.62059796, 0.62059796, 0.62059796,
0.5944725 , 0.62059796, 0.62059796, 0.62059796, 0.62059796,
0.62059796, -0.2803797 , 0.62059796, 0.5317449 , 0.62059796,
0.62059796, 0.62059796], dtype=float32),
-0.7638539007421339,
-0.7086946573830012), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=2, args=(array([0, 8, 7, 4, 0], dtype=int32), array([[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]],
[[0.7863818],
[0.7863818],
[0.7863818]]], dtype=float32), array([[0.41727993],
[0.41727993],
[0.41727993]], dtype=float32), array([[-0.0011362 ],
[-0.29718485],
[-0.0011362 ]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[ 0.61785346, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.02449792, -0.725019 ],
[-0.6544955 , -0.92966264],
[-0.8234622 , -0.8234622 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
    result = self.execute(data, collect=True)
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
    result = self.test_runner(data, run)
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
    return function(data)
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
    return test(*args, **kwargs)
  File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
    end_pad_width=st.integers(min_value=0, max_value=2),
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
    result = self.test(*args, **kwargs)
  File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
    reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
  File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
    output_blob_name,
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output end_padding is not matching the reference
(mismatch 100.0%)
 x: array(38654705669)
 y: array([[0., 0.],
       [0., 0.],
       [0., 0.]])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[ 0.61785346, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.02449792, -0.725019 ],
[-0.6544955 , -0.92966264],
[-0.8234622 , -0.8234622 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
    result = self.execute(data, collect=True)
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
    result = self.test_runner(data, run)
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
    return function(data)
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
    return test(*args, **kwargs)
  File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
    end_pad_width=st.integers(min_value=0, max_value=2),
  File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
    result = self.test(*args, **kwargs)
  File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
    reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
  File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
    output_blob_name,
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output end_padding is not matching the reference
(mismatch 100.0%)
 x: array(38654705669)
 y: array([[0., 0.],
       [0., 0.],
       [0., 0.]])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[ 0.61785346, 0.61785346],
[-0.62575763, 0.61785346],
[ 0.61785346, 0.61785346]],
[[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346]],
[[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346]]], dtype=float32), array([[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[ 0.2294148 , -0.71080756]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([ 2, 10], dtype=int32),
array([-0.4377519 , 0.31514454, -0.3215343 , -0.6535671 , 0.31514454,
0.06280098, 0.31514454, 0.31514454, 0.31514454, 0.31514454,
0.35126773, 0.31514454], dtype=float32),
-0.8295155601085189,
0.43787003682785286), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([ 2, 10], dtype=int32),
array([-0.4377519 , 0.31514454, -0.3215343 , -0.6535671 , 0.31514454,
0.06280098, 0.31514454, 0.31514454, 0.31514454, 0.31514454,
0.35126773, 0.31514454], dtype=float32),
-0.8295155601085189,
0.43787003682785286), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([ 2, 10], dtype=int32),
array([-0.4377519 , 0.31514454, -0.3215343 , -0.6535671 , 0.31514454,
0.06280098, 0.31514454, 0.31514454, 0.31514454, 0.31514454,
0.35126773, 0.31514454], dtype=float32),
-0.8295155601085189,
0.43787003682785286), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=0, args=(array([2, 1], dtype=int32),
array([-0.12546732, -0.12546732, -0.12546732], dtype=float32),
-0.6535670833836938,
0.6457115366779941), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([], dtype=int32),
array([], shape=(0, 3, 2), dtype=float32),
array([[-0.71080756, -0.71080756],
[-0.17237663, -0.71080756],
[-0.12546732, 0.6818886 ]], dtype=float32),
array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=2, args=(array([], dtype=int32),
array([], dtype=float32),
-0.12524509770606265,
0.11963186579509325), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([], dtype=int32),
array([], shape=(0, 3, 2), dtype=float32),
array([[-0.71080756, -0.71080756],
[-0.17237663, -0.71080756],
[-0.12546732, 0.6818886 ]], dtype=float32),
array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=2, args=(array([], dtype=int32),
array([], dtype=float32),
-0.12524509770606265,
0.11963186579509325), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([1], dtype=int32),
array([[-0.12546732, 0.61785346]], dtype=float32),
array([ 0.2294148 , -0.71080756], dtype=float32),
array([-0.50764036, -0.50764036], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([1], dtype=int32),
array([[-0.12546732, 0.61785346]], dtype=float32),
array([ 0.2294148 , -0.71080756], dtype=float32),
array([-0.50764036, -0.50764036], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([1], dtype=int32),
array([[-0.12546732, 0.61785346]], dtype=float32),
array([ 0.2294148 , -0.71080756], dtype=float32),
array([-0.50764036, -0.50764036], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([1], dtype=int32),
array([[-0.12546732, 0.61785346]], dtype=float32),
array([ 0.2294148 , -0.71080756], dtype=float32),
array([-0.50764036, -0.50764036], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([1], dtype=int32),
array([[-0.12546732, 0.61785346]], dtype=float32),
array([ 0.2294148 , -0.71080756], dtype=float32),
array([-0.50764036, -0.50764036], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([1], dtype=int32),
array([[-0.12546732, 0.61785346]], dtype=float32),
array([ 0.2294148 , -0.71080756], dtype=float32),
array([-0.50764036, -0.50764036], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([ 3, 2, 10], dtype=int32),
array([-1.3770291e-04, -1.3770291e-04, 6.1789516e-02, -6.5356708e-01,
-1.3770291e-04, -1.3770291e-04, -7.5024533e-01, -1.3770291e-04,
-1.3770291e-04, -1.3770291e-04, 3.5126773e-01, -1.3770291e-04,
-1.3770291e-04, 8.8494122e-02, -1.8391909e-01], dtype=float32),
0.7273016037656251,
0.18778884372890753), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=0, args=(array([3, 2, 1], dtype=int32),
array([-0.12546732, -0.12546732, -0.12546732, -0.12546732, -0.12546732,
-0.12546732], dtype=float32),
-0.6535670833836938,
0.6457115366779941), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([ 3, 2, 10], dtype=int32),
array([-1.3770291e-04, -1.3770291e-04, 6.1789516e-02, -6.5356708e-01,
-1.3770291e-04, -1.3770291e-04, -7.5024533e-01, -1.3770291e-04,
-1.3770291e-04, -1.3770291e-04, 3.5126773e-01, -1.3770291e-04,
-1.3770291e-04, 8.8494122e-02, -1.8391909e-01], dtype=float32),
0.7273016037656251,
0.18778884372890753), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=0, args=(array([3, 2, 1], dtype=int32),
array([-0.12546732, -0.12546732, -0.12546732, -0.12546732, -0.12546732,
-0.12546732], dtype=float32),
-0.6535670833836938,
0.6457115366779941), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=0, args=(array([3], dtype=int32),
array([-0.12546732, -0.12546732, -0.12546732], dtype=float32),
-0.6535670833836938,
0.6457115366779941), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[-0.71080756, -0.71080756],
[-0.17237663, -0.71080756],
[-0.12546732, 0.6818886 ]], dtype=float32), array([-0.50764036, -0.50764036], dtype=float32), array([-0.6544955, -0.725019 ], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=1, args=(array([3], dtype=int32),
array([0.24296041, 0.6318636 , 0.24296041], dtype=float32),
0.6178534807058208,
-0.25014082969376666), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[-0.71080756, -0.71080756],
[-0.17237663, -0.71080756],
[-0.12546732, 0.6818886 ]], dtype=float32), array([-0.50764036, -0.50764036], dtype=float32), array([-0.6544955, -0.725019 ], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=2, args=(array([3], dtype=int32), array([[[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732]]], dtype=float32), array([[-0.17237663],
[-0.9245386 ]], dtype=float32), array([[0.81811625],
[0.08328725]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[-0.71080756, -0.71080756],
[-0.17237663, -0.71080756],
[-0.12546732, 0.6818886 ]], dtype=float32), array([-0.50764036, -0.50764036], dtype=float32), array([-0.6544955, -0.725019 ], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=2, args=(array([3], dtype=int32), array([[[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732]]], dtype=float32), array([[-0.17237663],
[-0.9245386 ]], dtype=float32), array([[0.81811625],
[0.08328725]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[-0.71080756, -0.71080756],
[-0.17237663, -0.71080756],
[-0.12546732, 0.6818886 ]], dtype=float32), array([-0.50764036, -0.50764036], dtype=float32), array([-0.6544955, -0.725019 ], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=2, args=(array([3], dtype=int32), array([[[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732]]], dtype=float32), array([[-0.17237663],
[-0.9245386 ]], dtype=float32), array([[0.81811625],
[0.08328725]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[-0.17049105, 0.2294148 , -0.62575763],
[ 0.2294148 , 0.2294148 , 0.2294148 ],
[ 0.2294148 , 0.2294148 , 0.2294148 ]], dtype=float32), array([-0.71080756, -0.71080756, -0.71080756], dtype=float32), array([-0.50764036, -0.50764036, -0.50764036], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=2, args=(array([3], dtype=int32), array([[[-0.12546732],
[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732],
[-0.12546732]],
[[-0.12546732],
[-0.12546732],
[-0.12546732]]], dtype=float32), array([[-0.9245386 ],
[-0.9245386 ],
[-0.17237663]], dtype=float32), array([[ 0.81811625],
[ 0.08328725],
[-0.75024533]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[0.11963186, 0.11963186, 0.11963186],
[0.11963186, 0.11963186, 0.11963186],
[0.11963186, 0.11963186, 0.11963186]],
[[0.11963186, 0.11963186, 0.11963186],
[0.11963186, 0.11963186, 0.11963186],
[0.11963186, 0.11963186, 0.11963186]],
[[0.11963186, 0.11963186, 0.11963186],
[0.11963186, 0.11963186, 0.11963186],
[0.11963186, 0.11963186, 0.11963186]]], dtype=float32), array([[-0.4270123 , -0.9667386 , -0.4270123 ],
[ 0.15222359, 0.2294148 , 0.12841824],
[-0.4270123 , -0.4270123 , -0.7181849 ]], dtype=float32), array([[-0.8295156, -0.8295156, -0.8295156],
[-0.8295156, -0.8295156, -0.8295156],
[-0.8295156, -0.8295156, -0.8295156]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.12546732, -0.12546732],
[-0.12546732, -0.12546732],
[-0.12546732, -0.12546732]],
[[-0.12546732, -0.12546732],
[-0.12546732, -0.12546732],
[-0.12546732, -0.12546732]],
[[-0.12546732, -0.12546732],
[-0.12546732, -0.12546732],
[-0.12546732, -0.12546732]]], dtype=float32), array([[-0.71080756, -0.71080756],
[-0.17237663, -0.71080756],
[-0.71080756, 0.6818886 ]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[ 0.61785346, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.02449792, -0.725019 ],
[-0.6544955 , -0.92966264],
[-0.8234622 , -0.8234622 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
result = self.execute(data, collect=True)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
result = self.test_runner(data, run)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
return function(data)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
return test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
end_pad_width=st.integers(min_value=0, max_value=2),
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
result = self.test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
output_blob_name,
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output start_padding is not matching the reference
(mismatch 100.0%)
x: array(-4618794431967920128)
y: array([[0., 0.],
[0., 0.],
[0., 0.]])
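The mismatched `x` above looks like raw bits rather than a real value: reinterpreting the int64 `-4618794431967920128` as an IEEE-754 double yields `-0.7108075618743896`, i.e. the float32 `-0.71080756` that fills the input tensor. A minimal decoding check (plain Python `struct`, independent of Caffe2; this interpretation is my reading of the log, not confirmed against the op's source):

```python
import struct

# Reinterpret the int64 scalar printed by assert_allclose as the bytes of a double.
bits = struct.pack('<q', -4618794431967920128)   # int64 -> 8 little-endian bytes
value = struct.unpack('<d', bits)[0]             # reread the same bytes as float64
print(value)                                     # -0.7108075618743896 ~ float32 -0.71080756
```

So the `start_padding` output blob seems to come back with the wrong dtype/shape (a 0-d int64 holding the bytes of one float) instead of the expected `(3, 2)` float32 array of zeros.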
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[ 0.61785346, 0.61785346],
[-0.62575763, 0.61785346],
[ 0.61785346, 0.61785346]],
[[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346]],
[[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346],
[ 0.61785346, 0.61785346]]], dtype=float32), array([[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[ 0.2294148 , -0.71080756]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.02449792, -0.725019 ],
[-0.6544955 , -0.92966264],
[-0.8234622 , -0.8234622 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
result = self.execute(data, collect=True)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
result = self.test_runner(data, run)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
return function(data)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
return test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
end_pad_width=st.integers(min_value=0, max_value=2),
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
result = self.test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
output_blob_name,
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output start_padding is not matching the reference
(mismatch 100.0%)
x: array(-4618794431967920128)
y: array([[0., 0.],
[0., 0.],
[0., 0.]])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[ 0.2294148 , 0.2294148 ],
[-0.62575763, 0.2294148 ],
[ 0.2294148 , 0.2294148 ]],
[[ 0.2294148 , 0.2294148 ],
[ 0.2294148 , 0.2294148 ],
[ 0.2294148 , 0.2294148 ]],
[[ 0.2294148 , 0.2294148 ],
[ 0.2294148 , 0.2294148 ],
[ 0.2294148 , 0.2294148 ]]], dtype=float32), array([[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[ 0.27146652, -0.9667386 ],
[-0.62575763, 0.27146652],
[ 0.27146652, 0.27146652]],
[[ 0.27146652, 0.27146652],
[-0.7181849 , 0.27146652],
[ 0.01402934, 0.27146652]],
[[ 0.27146652, -0.54356337],
[ 0.27146652, 0.27146652],
[ 0.27146652, 0.27146652]]], dtype=float32), array([[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104]], dtype=float32), array([[-0.1372088, -0.1372088],
[-0.1372088, -0.1372088],
[-0.1372088, 0.3028629]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[ 0.27146652, 0.9559304 ],
[-0.62575763, 0.27146652],
[-0.07308909, 0.27146652]],
[[ 0.27146652, 0.27146652],
[ 0.27146652, 0.27146652],
[ 0.01402934, 0.27146652]],
[[ 0.27146652, -0.54356337],
[ 0.27146652, 0.27146652],
[ 0.27146652, 0.27146652]]], dtype=float32), array([[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104]], dtype=float32), array([[-0.1372088, -0.1372088],
[-0.1372088, -0.1372088],
[-0.1372088, 0.3028629]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[ 0.27146652, 0.9559304 ],
[-0.62575763, 0.27146652],
[ 0.07308909, 0.27146652]],
[[ 0.27146652, 0.27146652],
[ 0.27146652, 0.27146652],
[ 0.01402934, 0.27146652]],
[[ 0.27146652, -0.54356337],
[ 0.27146652, 0.27146652],
[ 0.27146652, 0.27146652]]], dtype=float32), array([[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104]], dtype=float32), array([[-0.1372088, -0.1372088],
[-0.1372088, -0.1372088],
[-0.1372088, 0.3028629]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[ 0.27146652, 0.9559304 ],
[-0.62575763, 0.27146652],
[ 0.3127855 , 0.27146652]],
[[ 0.27146652, 0.27146652],
[ 0.27146652, 0.27146652],
[ 0.01402934, 0.27146652]],
[[ 0.27146652, -0.54356337],
[ 0.27146652, 0.27146652],
[ 0.27146652, 0.27146652]]], dtype=float32), array([[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104]], dtype=float32), array([[-0.1372088, -0.1372088],
[-0.1372088, -0.1372088],
[-0.1372088, 0.3028629]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.06448297, -0.06448297],
[-0.62575763, -0.06448297],
[ 0.2294148 , -0.06448297]],
[[-0.06448297, -0.06448297],
[-0.06448297, -0.06448297],
[-0.06448297, -0.06448297]],
[[-0.06448297, -0.06448297],
[-0.06448297, -0.06448297],
[-0.06448297, -0.06448297]]], dtype=float32), array([[0.06280098, 0.06280098],
[0.06280098, 0.06280098],
[0.06280098, 0.06280098]], dtype=float32), array([[ 0.7273016 , 0.7273016 ],
[-0.81298965, 0.7273016 ],
[ 0.7273016 , 0.31250054]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.9667386 , -0.9667386 ],
[-0.62575763, -0.9667386 ],
[ 0.2294148 , -0.9667386 ]],
[[-0.9667386 , -0.9667386 ],
[-0.9667386 , -0.9667386 ],
[-0.9667386 , -0.9667386 ]],
[[-0.9667386 , -0.9667386 ],
[-0.9667386 , -0.9667386 ],
[-0.9667386 , -0.9667386 ]]], dtype=float32), array([[-0.7181849 , 0.27146652],
[-0.6544955 , -0.8447196 ],
[ 0.27146652, 0.27146652]], dtype=float32), array([[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.4377519 , -0.4377519 ],
[-0.62575763, -0.4377519 ],
[ 0.2294148 , -0.4377519 ]],
[[-0.4377519 , -0.4377519 ],
[-0.4377519 , -0.4377519 ],
[-0.4377519 , -0.4377519 ]],
[[-0.4377519 , -0.4377519 ],
[-0.4377519 , -0.4377519 ],
[-0.4377519 , -0.4377519 ]]], dtype=float32), array([[-1.3770291e-04, -1.3770291e-04],
[-8.1298965e-01, -1.3770291e-04],
[-1.3770291e-04, 6.2800981e-02]], dtype=float32), array([[-0.1372088, -0.1372088],
[-0.1372088, -0.4379044],
[-0.1372088, 0.3028629]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.7181849 , 0.27146652],
[-0.6544955 , -0.8447196 ],
[ 0.27146652, 0.27146652]], dtype=float32), array([[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104],
[-0.5017104, -0.5017104]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[0.06280098, 0.06280098],
[0.06280098, 0.06280098],
[0.06280098, 0.06280098]], dtype=float32), array([[ 0.7273016 , 0.7273016 ],
[-0.81298965, 0.7273016 ],
[ 0.7273016 , 0.31250054]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=2, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[0.9559304, 0.9559304],
[0.9559304, 0.9559304],
[0.9559304, 0.9559304]], dtype=float32), array([[-1.3770291e-04, -6.0486221e-01],
[-1.3770291e-04, -1.3770291e-04],
[-1.3770291e-04, 8.8494122e-02]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.75024533, -0.75024533],
[-0.75024533, -0.75024533],
[-0.75024533, -0.75024533]], dtype=float32), array([[-0.37570858, 0.31514454],
[-0.12539388, -0.37570858],
[-0.07394399, -0.13174728]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-1.3720880e-01, -1.2534568e-01],
[-1.3770291e-04, -4.3790439e-01],
[-1.3720880e-01, 3.0286291e-01]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=2, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-1.3770291e-04, -6.0486221e-01],
[-1.3770291e-04, -1.3770291e-04],
[-1.3770291e-04, 8.8494122e-02]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.7273016 , 0.7273016 ],
[-0.81298965, 0.7273016 ],
[ 0.7273016 , 0.31250054]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.7273016 , -0.81298965],
[ 0.7273016 , 0.7273016 ],
[ 0.7273016 , 0.31250054]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[0.31514454, 0.10083199],
[0.31514454, 0.31514454],
[0.31514454, 0.31514454]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.31514454, -0.10083199],
[ 0.31514454, 0.31514454],
[ 0.31514454, 0.31514454]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.7273016 , -0.12539388],
[ 0.7273016 , 0.7273016 ],
[ 0.7273016 , 0.31250054]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.02449792, -0.725019 ],
[-0.6544955 , -0.92966264],
[-0.8234622 , -0.8234622 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
result = self.execute(data, collect=True)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
result = self.test_runner(data, run)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
return function(data)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
return test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
end_pad_width=st.integers(min_value=0, max_value=2),
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
result = self.test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
output_blob_name,
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output start_padding is not matching the reference
(mismatch 100.0%)
x: array(-4620624399421145088)
y: array([[0., 0.],
[0., 0.],
[0., 0.]])
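The second fully shrunk failure has the same signature with a different bit pattern: `-4620624399421145088` decodes the same way to `-0.5076403617858887`, matching the float32 `-0.50764036` used as the start padding in this example:

```python
import struct

# Same bit-reinterpretation check as above, for the second failing scalar.
print(struct.unpack('<d', struct.pack('<q', -4620624399421145088))[0])
# -0.5076403617858887 ~ float32 -0.50764036 (the start-padding value above)
```

Note that every failing example in this log has `start_pad_width=0`, `end_pad_width=0`, and `gc=device_type: 1` (CUDA), which suggests the CUDA path of the gather-padding op mishandles the zero-width-padding case when writing its padding outputs.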
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.6544955, -0.725019 ],
[-0.6544955, -0.6544955],
[-0.6544955, -0.6544955]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.4379044 , -0.725019 ],
[-0.4379044 , -0.4379044 ],
[-0.4379044 , 0.03525195]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.4379044 , -0.725019 ],
[ 0.03525195, -0.4379044 ],
[-0.4379044 , -0.4379044 ]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-1.3720880e-01, -7.2501898e-01],
[-1.3770291e-04, -4.3790439e-01],
[-1.3720880e-01, 3.0286291e-01]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.02449792, -0.725019 ],
[ 0.550836 , 0.02449792],
[ 0.02449792, 0.02449792]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[ 0.7273016 , -0.725019 ],
[-0.31250054, 0.7273016 ],
[ 0.7273016 , 0.7273016 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.8234622 , -0.725019 ],
[-0.6544955 , -0.92966264],
[-0.8234622 , -0.8234622 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
result = self.execute(data, collect=True)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
result = self.test_runner(data, run)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
return function(data)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
return test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
end_pad_width=st.integers(min_value=0, max_value=2),
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
result = self.test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
output_blob_name,
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output start_padding is not matching the reference
(mismatch 100.0%)
x: array(-4620624399421145088)
y: array([[0., 0.],
[0., 0.],
[0., 0.]])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=1, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.5017104, -0.725019 ],
[-0.6544955, -0.5017104],
[-0.5017104, -0.5017104]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.25073355, -0.725019 ],
[-0.6544955 , -0.25073355],
[-0.25073355, -0.25073355]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=2, end_pad_width=2, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.3693131, -0.725019 ],
[-0.6544955, -0.3693131],
[-0.3693131, -0.3693131]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.37570858, -0.725019 ],
[-0.6544955 , 0.9936301 ],
[-0.37570858, -0.37570858]], dtype=float32)), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([3], dtype=int32), array([[[-0.71080756, -0.71080756],
[-0.62575763, -0.71080756],
[ 0.2294148 , -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]],
[[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756],
[-0.71080756, -0.71080756]]], dtype=float32), array([[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036],
[-0.50764036, -0.50764036]], dtype=float32), array([[-0.8234622 , -0.725019 ],
[-0.6544955 , -0.31397155],
[-0.8234622 , -0.8234622 ]], dtype=float32)), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
result = self.execute(data, collect=True)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
result = self.test_runner(data, run)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
return function(data)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
return test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
end_pad_width=st.integers(min_value=0, max_value=2),
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
result = self.test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
output_blob_name,
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output start_padding is not matching the reference
(mismatch 100.0%)
x: array(-4620624399421145088)
y: array([[0., 0.],
[0., 0.],
[0., 0.]])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([], dtype=int32), array([], dtype=float32), 0.0, 0.0), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
result = self.execute(data, collect=True)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
result = self.test_runner(data, run)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
return function(data)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
return test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
end_pad_width=st.integers(min_value=0, max_value=2),
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
result = self.test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
output_blob_name,
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output start_padding is not matching the reference
(mismatch 100.0%)
x: array(-4620624399421145088)
y: array(0.)
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=1, end_pad_width=0, args=(array([], dtype=int32), array([], dtype=float32), 0.0, 0.0), gc=, dc=[, device_type: 1])
Trying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([], dtype=int32),
array([], dtype=float32),
-0.6964012176709845,
-0.5087437279503267), gc=device_type: 1, dc=[, device_type: 1])
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
result = self.execute(data, collect=True)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
result = self.test_runner(data, run)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
return function(data)
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
return test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 189, in test_gather_padding
end_pad_width=st.integers(min_value=0, max_value=2),
File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
result = self.test(*args, **kwargs)
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/sequence_ops_test.py", line 207, in test_gather_padding
reference=partial(_gather_padding_ref, start_pad_width, end_pad_width))
File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/hypothesis_test_util.py", line 575, in assertReferenceChecks
output_blob_name,
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 1443, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/usr/local/lib/python2.7/dist-packages/numpy/testing/_private/utils.py", line 780, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.0001, atol=0.0001
Output start_padding is not matching the reference
(mismatch 100.0%)
x: array(-4618794431967920128)
y: array(0.)
Falsifying example: test_gather_padding(self=<caffe2.python.operator_test.sequence_ops_test.TestSequenceOps testMethod=test_gather_padding>, start_pad_width=0, end_pad_width=0, args=(array([], dtype=int32), array([], dtype=float32), 0.0, 0.0), gc=device_type: 1, dc=[, device_type: 1])
You can reproduce this example by temporarily adding @reproduce_failure('3.66.8', 'AXicY2DAAIwMAAAXAAI=') as a decorator on your test case
generated xml file: /var/lib/jenkins/workspace/caffe2_tests/python/result.xml -
```
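A note on the assertion output above, offered as a sketch rather than a confirmed diagnosis: the `x` values printed by `assert_allclose` are bare int64 scalars where float arrays were expected, and reinterpreting those bits as IEEE-754 doubles recovers values that appear as float32 inputs elsewhere in the log (for example `-4620624399421145088` decodes to `-0.50764036…`, the padding value seen throughout these examples). That suggests the output blob's bytes are being read back under the wrong dtype, not that the computed values are wildly off. The snippet below demonstrates the decoding and shows how the `@reproduce_failure` hint can be applied while debugging; the decorator arguments are copied verbatim from Hypothesis' own message, and the test signature is taken from the example lines above.

```python
import struct

from hypothesis import reproduce_failure


def int64_bits_to_double(q):
    """Reinterpret an int64 bit pattern as an IEEE-754 double."""
    return struct.unpack('<d', struct.pack('<q', q))[0]

# The scalar x values from the failing assertions above, decoded:
print(int64_bits_to_double(-4620624399421145088))  # ~ -0.50764036
print(int64_bits_to_double(-4618794431967920128))  # ~ -0.71080756

# Replaying the falsifying example, per the hint Hypothesis printed
# (remove the decorator again once the failure is understood):
#
# @reproduce_failure('3.66.8', 'AXicY2DAAIwMAAAXAAI=')
# def test_gather_padding(self, start_pad_width, end_pad_width, args, gc, dc):
#     ...
```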
Sample: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-cuda9.0-cudnn7-aten-ubuntu16.04-test/4954/consoleText | caffe2 | low | Critical |
344,654,277 | pytorch | TestConvolution.test_conv_separate_stride_pad_gradients failing | ```
21:57:49 =================================== FAILURES ===================================
21:57:49 ___________ TestConvolution.test_conv_separate_stride_pad_gradients ____________
21:57:49
21:57:49 self = <caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>
21:57:49
21:57:49 @unittest.skipIf(not workspace.has_gpu_support, "No gpu support")
21:57:49 > @given(stride_h=st.integers(1, 3),
21:57:49 stride_w=st.integers(1, 3),
21:57:49 pad_h=st.integers(0, 3),
21:57:49 pad_w=st.integers(0, 3),
21:57:49 kernel=st.integers(2, 5),
21:57:49 size=st.integers(1, 8),
21:57:49 input_channels=st.integers(1, 3),
21:57:49 output_channels=st.integers(1, 3),
21:57:49 batch_size=st.integers(1, 3),
21:57:49 order=st.sampled_from(["NCHW"]),
21:57:49 engine=st.sampled_from(["", "EIGEN"]),
21:57:49 shared_buffer=st.booleans(),
21:57:49 use_bias=st.booleans(),
21:57:49 deformable_group=st.integers(1, 3),
21:57:49 **hu.gcs_gpu_only)
21:57:49 def test_conv_separate_stride_pad_gradients(self, stride_h, stride_w,
21:57:49 pad_h, pad_w, kernel, size,
21:57:49 input_channels, output_channels,
21:57:49 batch_size, order, engine,
21:57:49 shared_buffer, use_bias,
21:57:49 deformable_group, gc, dc):
21:57:49
21:57:49 lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py:378:
21:57:49 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
21:57:49 ../lib/python2.7/dist-packages/hypothesis/core.py:604: in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 ../lib/python2.7/dist-packages/hypothesis/executors.py:58: in default_new_style_executor
21:57:49 return function(data)
21:57:49 ../lib/python2.7/dist-packages/hypothesis/core.py:595: in run
21:57:49 return test(*args, **kwargs)
21:57:49 lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py:378: in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 ../lib/python2.7/dist-packages/hypothesis/core.py:542: in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py:434: in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 E AssertionError: RuntimeError not raised
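# Reading of the traceback above (an assumption about the test structure,
# since only fragments of the test body are shown): the `__exit__` /
# "{0} not raised" frames come from unittest's assertRaises context
# manager, i.e. the failing call site is roughly
#
#     with self.assertRaises(RuntimeError):
#         self.assertDeviceChecks(dc, op, inputs, [0])
#
# so each failure in this test is a parameter combination for which the
# deformable convolution passed its device checks where the test expected
# a RuntimeError.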
21:57:49 ----------------------------- Captured stdout call -----------------------------
21:57:49 {u'X': device_type: 1
21:57:49 , u'w': device_type: 1
21:57:49 , u'o': device_type: 1
21:57:49 }
21:57:49 {u'X': device_type: 1
21:57:49 , u'b': device_type: 1
21:57:49 , u'w': device_type: 1
21:57:49 , u'o': device_type: 1
21:57:49 }
21:57:49 ---------------------------------- Hypothesis ----------------------------------
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=0, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=3, pad_h=2, pad_w=0, kernel=4, size=6, input_channels=2, output_channels=1, batch_size=3, order='NCHW', engine='EIGEN', shared_buffer=False, use_bias=False, deformable_group=3, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=1, pad_w=3, kernel=2, size=4, input_channels=1, output_channels=3, batch_size=3, order='NCHW', engine='EIGEN', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=3, stride_w=3, pad_h=2, pad_w=0, kernel=5, size=3, input_channels=2, output_channels=1, batch_size=1, order='NCHW', engine='EIGEN', shared_buffer=False, use_bias=False, deformable_group=2, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=3, stride_w=1, pad_h=2, pad_w=1, kernel=5, size=1, input_channels=2, output_channels=1, batch_size=3, order='NCHW', engine='', shared_buffer=True, use_bias=True, deformable_group=2, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=2, stride_w=1, pad_h=1, pad_w=1, kernel=5, size=4, input_channels=2, output_channels=3, batch_size=3, order='NCHW', engine='EIGEN', shared_buffer=True, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=3, stride_w=1, pad_h=3, pad_w=3, kernel=5, size=2, input_channels=2, output_channels=2, batch_size=3, order='NCHW', engine='EIGEN', shared_buffer=True, use_bias=False, deformable_group=2, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=2, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=3, batch_size=2, order='NCHW', engine='EIGEN', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=2, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=3, batch_size=1, order='NCHW', engine='EIGEN', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=2, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=3, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=2, pad_w=1, kernel=2, size=3, input_channels=1, output_channels=3, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=2, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=0, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=2, pad_w=0, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=2, pad_w=1, kernel=5, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=2, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=2, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=5, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=5, size=2, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=2, pad_w=1, kernel=2, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=2, pad_w=1, kernel=3, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=2, pad_w=1, kernel=4, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=5, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=True, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=3, stride_w=1, pad_h=1, pad_w=1, kernel=5, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=4, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=2, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=0, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=3, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=3, pad_w=1, kernel=3, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=3, kernel=3, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=4, size=4, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=2, stride_w=1, pad_h=0, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=0, pad_w=1, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=2, stride_w=1, pad_h=1, pad_w=0, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=1, pad_w=0, kernel=5, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=3, pad_h=1, pad_w=1, kernel=5, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=4, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=4, size=3, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Traceback (most recent call last):
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 689, in evaluate_test_data
21:57:49 result = self.execute(data, collect=True)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 604, in execute
21:57:49 result = self.test_runner(data, run)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/executors.py", line 58, in default_new_style_executor
21:57:49 return function(data)
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 600, in run
21:57:49 return test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 378, in test_conv_separate_stride_pad_gradients
21:57:49 @given(stride_h=st.integers(1, 3),
21:57:49 File "/usr/local/lib/python2.7/dist-packages/hypothesis/core.py", line 542, in test
21:57:49 result = self.test(*args, **kwargs)
21:57:49 File "/usr/local/caffe2/lib/python2.7/dist-packages/caffe2/python/operator_test/deform_conv_test.py", line 434, in test_conv_separate_stride_pad_gradients
21:57:49 self.assertDeviceChecks(dc, op, inputs, [0])
21:57:49 File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
21:57:49 "{0} not raised".format(exc_name))
21:57:49 AssertionError: RuntimeError not raised
21:57:49
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=0, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=2, stride_w=2, pad_h=0, pad_w=0, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=2, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=True, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=True, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=True, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=3, size=2, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=3, size=2, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=2, size=2, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=2, stride_w=1, pad_h=1, pad_w=0, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=2, stride_w=1, pad_h=0, pad_w=1, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=2, stride_w=1, pad_h=1, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=1, pad_w=0, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=0, pad_w=1, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=2, pad_h=1, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=True, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=2, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=2, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=2, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=0, pad_w=1, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=0, kernel=2, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Trying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=3, pad_h=2, pad_w=1, kernel=4, size=3, input_channels=3, output_channels=3, batch_size=3, order='NCHW', engine='EIGEN', shared_buffer=True, use_bias=True, deformable_group=3, gc=device_type: 1, dc=[device_type: 1])
21:57:49 Falsifying example: test_conv_separate_stride_pad_gradients(self=<caffe2.python.operator_test.deform_conv_test.TestConvolution testMethod=test_conv_separate_stride_pad_gradients>, stride_h=1, stride_w=1, pad_h=1, pad_w=1, kernel=3, size=1, input_channels=1, output_channels=1, batch_size=1, order='NCHW', engine='', shared_buffer=False, use_bias=False, deformable_group=1, gc=device_type: 1, dc=[device_type: 1])
21:57:49
21:57:49 You can reproduce this example by temporarily adding @reproduce_failure('3.66.8', 'AAAAAAABAAEBAAAAAAA=') as a decorator on your test case
```
sample: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-cuda9.0-cudnn7-ubuntu16.04-test/9399/console | caffe2 | low | Critical |
344,663,050 | flutter | `newSlot` not documented in framework.dart updateChild | https://github.com/flutter/flutter/blob/1cc036519cb6af2bc1af8c9bc39468bc2fc8e05f/packages/flutter/lib/src/widgets/framework.dart#L2690
There's no reference to this parameter in the documentation, but it is used 44 times in the file. | framework,d: api docs,P2,found in release: 1.21,team-framework,triaged-framework | low | Minor |
344,672,476 | godot | Blurry Icon in Tab Switcher on Linux Mint 19 | **Godot version:**
3.0.5
**OS/device including version:**
Linux Mint 19
Also reported to happen in Elementary OS
**Issue description:**
Blurry Icon in tab switchers / dock
Using Docky

Using Tab Switcher

**Steps to reproduce:**
Downloaded Godot 3.0.5 from the website, unzipped it, and launched the program. Alt-tab shows the blurriness a bit more clearly, as does Docky.
| enhancement,platform:linuxbsd,topic:editor | low | Major |
344,725,118 | pytorch | How can I be sure that I am running caffe2 with the GPU | I have installed CUDA and cuDNN, but today I ran MNIST as a test, and it showed me:
```
I0726 15:18:46.212692 6426 operator.cc:169] Engine CUDNN is not available for operator Conv.
I0726 15:18:46.212774 6426 operator.cc:169] Engine CUDNN is not available for operator MaxPool.
I0726 15:18:46.212832 6426 operator.cc:169] Engine CUDNN is not available for operator Conv.
I0726 15:18:46.212883 6426 operator.cc:169] Engine CUDNN is not available for operator MaxPool.
I0726 15:18:46.212956 6426 operator.cc:169] Engine CUDNN is not available for operator Relu.
I0726 15:18:46.212990 6426 operator.cc:169] Engine CUDNN is not available for operator Softmax.
I0726 15:19:35.518470 6426 operator.cc:169] Engine CUDNN is not available for operator Conv.
I0726 15:19:35.518534 6426 operator.cc:169] Engine CUDNN is not available for operator MaxPool.
I0726 15:19:35.518573 6426 operator.cc:169] Engine CUDNN is not available for operator Conv.
I0726 15:19:35.518604 6426 operator.cc:169] Engine CUDNN is not available for operator MaxPool.
I0726 15:19:35.518649 6426 operator.cc:169] Engine CUDNN is not available for operator Relu.
I0726 15:19:35.518672 6426 operator.cc:169] Engine CUDNN is not available for operator Softmax.
I0726 15:19:35.519259 6426 operator.cc:169] Engine CUDNN is not available for operator Conv.
I0726 15:19:35.519331 6426 operator.cc:169] Engine CUDNN is not available for operator MaxPool.
I0726 15:19:35.519382 6426 operator.cc:169] Engine CUDNN is not available for operator Conv.
I0726 15:19:35.519426 6426 operator.cc:169] Engine CUDNN is not available for operator MaxPool.
I0726 15:19:35.519488 6426 operator.cc:169] Engine CUDNN is not available for operator Relu.
I0726 15:19:35.519521 6426 operator.cc:169] Engine CUDNN is not available for operator Softmax.
```
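For reference, a minimal sketch of how GPU availability can be checked from Python (assuming a Caffe2 build that exposes `workspace.has_gpu_support` and `workspace.NumCudaDevices`):
```python
from caffe2.python import workspace

# True only if Caffe2 was compiled with GPU (CUDA) support
print(workspace.has_gpu_support)
# Number of CUDA devices visible at runtime (0 means CPU-only execution)
print(workspace.NumCudaDevices())
```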
Can anyone help me, please? Thank you! | caffe2 | low | Minor |
344,731,871 | rust | Allow to use a custom dsymutil command | `src/librustc_codegen_llvm/back/link.rs` does this:
```
match Command::new("dsymutil").arg(out_filename).output() {
```
`dsymutil` might not be in your `PATH`, or worse, have a different name (like `llvm-dsymutil`). | A-linkage,O-macos,A-debuginfo,T-compiler,C-feature-request | low | Minor |
344,790,785 | nvm | Install of old Node.js versions aborts with "Could not autodetect OpenSSL support" | - Operating system and version:
macOS High Sierra v10.13.6
- `nvm debug` output:
<details>
```sh
$ nvm debug
nvm --version: v0.33.11
$TERM_PROGRAM: Apple_Terminal
$SHELL: /bin/bash
$SHLVL: 1
$HOME: /Users/medikoo
$NVM_DIR: '$HOME/.nvm'
$PATH: $HOME/anaconda3/bin:/Library/Frameworks/Python.framework/Versions/3.6/bin:~/Library/Python/3.6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
$PREFIX: ''
$NPM_CONFIG_PREFIX: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin17)'
uname -a: 'Darwin 17.7.0 Darwin Kernel Version 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 x86_64'
OS version: Mac 10.13.6 17G65
curl: /usr/bin/curl, curl 7.54.0 (x86_64-apple-darwin17.0) libcurl/7.54.0 LibreSSL/2.0.20 zlib/1.2.11 nghttp2/1.24.0
wget: not found
git: /usr/bin/git, git version 2.15.2 (Apple Git-101.1)
grep: /usr/bin/grep, grep (BSD grep) 2.5.1-FreeBSD
awk: /usr/bin/awk, awk version 20070501
sed: illegal option -- -
usage: sed script [-Ealn] [-i extension] [file ...]
sed [-Ealn] [-i extension] [-e script] ... [-f script_file] ... [file ...]
sed: /usr/bin/sed,
cut: illegal option -- -
usage: cut -b list [-n] [file ...]
cut -c list [file ...]
cut -f list [-s] [-d delim] [file ...]
cut: /usr/bin/cut,
basename: illegal option -- -
usage: basename string [suffix]
basename [-a] [-s suffix] string [...]
basename: /usr/bin/basename,
rm: illegal option -- -
usage: rm [-f | -i] [-dPRrvW] file ...
unlink file
rm: /bin/rm,
mkdir: illegal option -- -
usage: mkdir [-pv] [-m mode] directory ...
mkdir: /bin/mkdir,
xargs: illegal option -- -
usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] [-J replstr]
[-L number] [-n number [-x]] [-P maxprocs] [-s size]
[utility [argument ...]]
xargs: /usr/bin/xargs,
nvm current: system
which node: /usr/local/bin/node
which iojs:
which npm: /usr/local/bin/npm
npm config get prefix: /usr/local
npm root -g: /usr/local/lib/node_modules
```
</details>
- `nvm ls` output:
<details>
```sh
$ nvm ls
v0.12.18
v4.8.7
v6.10.3
v6.12.2
v7.5.0
v8.9.4
v8.10.0
v8.11.1
v9.11.1
-> system
default -> system
node -> stable (-> v9.11.1) (default)
stable -> 9.11 (-> v9.11.1) (default)
iojs -> N/A (default)
lts/* -> lts/carbon (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.14.3 (-> N/A)
lts/carbon -> v8.11.3 (-> N/A)
```
</details>
- How did you install `nvm`? (e.g. install script in readme, Homebrew):
Install script in readme
- What steps did you perform?
```sh
nvm install v0.4
```
- What happened?
```
$ nvm install v0.4
Clang v3.5+ detected! CC or CXX not specified, will use Clang as C/C++ compiler!
Local cache found: $NVM_DIR/.cache/src/node-v0.4.12/node-v0.4.12.tar.gz
Checksums match! Using existing downloaded archive $NVM_DIR/.cache/src/node-v0.4.12/node-v0.4.12.tar.gz
$>./configure --prefix=/Users/medikoo/.nvm/v0.4.12 <
Checking for program g++ or c++ : /usr/bin/g++
Checking for program cpp : /usr/bin/cpp
Checking for program ar : /usr/bin/ar
Checking for program ranlib : /usr/bin/ranlib
Checking for g++ : ok
Checking for program gcc or cc : /usr/bin/gcc
Checking for gcc : ok
Checking for library dl : yes
Checking for openssl : not found
Checking for function SSL_library_init : not found
Checking for header openssl/crypto.h : not found
/Users/medikoo/.nvm/.cache/src/node-v0.4.12/files/wscript:341: error: Could not autodetect OpenSSL support. Make sure OpenSSL development packages are installed. Use configure --without-ssl to disable this message.
nvm: install v0.4.12 failed!
```
- What did you expect to happen?
I expected Node v0.4 to be installed, especially since OpenSSL is installed on my machine:
```sh
$ openssl
OpenSSL> ^C
$ which openssl
/usr/local/bin/openssl
```
| installing node | low | Critical |
344,793,696 | pytorch | PyTorch is slow when only using the CPU, and cannot utilize multiple CPU cores | When testing the acceleration effect on the CPU of decomposing convolution layers, I found that PyTorch is slow and cannot utilize multiple CPU cores.
```
import os
import numpy as np

# Hide the GPU so everything runs on the CPU
os.environ["CUDA_VISIBLE_DEVICES"] = ' '
# Generate input data
t = np.random.randn(50,128,128,64).astype(np.float32)
```
```
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
x = torch.from_numpy(t).permute(0,3,1,2)
model1 = nn.Sequential(
nn.Conv2d(64, 128, 3, stride=1, padding=1, dilation=1),
nn.Conv2d(128, 256, 3, stride=1, padding=1, dilation=1),
)
model2 = nn.Sequential(
nn.Sequential(
nn.Conv2d(64, 16, kernel_size=(1, 1)),
nn.Conv2d(16, 16, kernel_size=(3, 1), padding=(1, 0)),
nn.Conv2d(16, 16, kernel_size=(1, 3), padding=(0, 1)),
nn.Conv2d(16, 128, kernel_size=(1, 1)),
),
nn.Sequential(
nn.Conv2d(128, 32, kernel_size=(1, 1)),
nn.Conv2d(32, 32, kernel_size=(3, 1), padding=(1, 0)),
nn.Conv2d(32, 32, kernel_size=(1, 3), padding=(0, 1)),
nn.Conv2d(32, 256, kernel_size=(1, 1)),
),
)
iters = 10
t1 = time.time()
for i in range(iters):
y=model1(x)
t2 = time.time()
m1t = t2-t1
t1 = time.time()
for i in range(iters):
y=model2(x)
t2 = time.time()
m2t = t2-t1
print(m1t, m2t)
```
Output (while CPU usage stays around 100%-200%):
> 147.48149728775024 16.699654817581177

However, Keras (TF back-end) is faster and multi-threaded.
```
from keras.models import Sequential
from keras.layers import Conv2D

model1 = Sequential([
Conv2D(128,3,input_shape=(128,128,64)),
Conv2D(256,3),
])
model2 = Sequential([
Sequential([
Conv2D(16,1,input_shape=(128,128,64)),
Conv2D(16,(3,1)),
Conv2D(16,(1,3)),
Conv2D(128,(1,1)),
]),
Sequential([
Conv2D(32,1,input_shape=(100,100,128)),
Conv2D(32,(3,1)),
Conv2D(32,(1,3)),
Conv2D(256,1)
]),
])
```
Output (while CPU usage reaches 800%-1600%):
> 26.393059253692627 13.783706188201904

## System Info
Pytorch 0.4
Ubuntu 16.04
cc @VitalyFedyunin @ngimel | module: performance,module: cpu,triaged,module: multithreading | medium | Major |
344,816,824 | pytorch | onnx to caffe2 err | ## Issue description
Converting a PyTorch model to ONNX succeeds, but converting the ONNX model to Caffe2 raises an error. Please help me, thanks all.
## Code example
```python
from torch.autograd import Variable
import torch.onnx
import torchvision
dummy_input = Variable(torch.randn(10, 3, 224, 224)).cuda()
model = torchvision.models.alexnet(pretrained=False).cuda()
torch.onnx._export(model, dummy_input, "alexnet.proto", verbose=True)
import onnx
import caffe2.python.onnx.backend as backend
# load onnx object
model = onnx.load("alexnet.proto")
print(type(model))
prepared_backend = backend.prepare(model)
from caffe2.python.onnx.backend import Caffe2Backend as c2
init_net, predict_net = c2.onnx_graph_to_caffe2_net(model.graph)
with open("squeeze_init_net.pb", "wb") as f:
    f.write(init_net.SerializeToString())
with open("squeeze_predict_net.pb", "wb") as f:
    f.write(predict_net.SerializeToString())
```
## System Info
```
<class 'onnx.onnx_pb2.ModelProto'>
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-16-4691dd3e26ec> in <module>()
4 model = onnx.load("alexnet.proto")
5 print(type(model))
----> 6 prepared_backend = backend.prepare(model)
7 from caffe2.python.onnx.backend import Caffe2Backend as c2
8 init_net, predict_net = c2.onnx_graph_to_caffe2_net(model.graph)
/home/fiona/lizhen/caffe2/build/caffe2/python/onnx/backend.pyc in prepare(cls, model, device, **kwargs)
745 # Check whether we have RNN related ops
746 pred_model = ModelProto()
--> 747 pred_model.ParseFromString(cls.optimize_onnx(model.SerializeToString(), predict=True))
748 rnn_nodes = []
749 for node in pred_model.graph.node:
/home/fiona/lizhen/caffe2/build/caffe2/python/onnx/backend.pyc in optimize_onnx(input, init, predict)
701 if predict:
702 passes.append('split_predict')
--> 703 out = onnx.optimizer.optimize(input, passes)
704 return out
705
/home/fiona/anaconda2/lib/python2.7/site-packages/onnx-1.2.1-py2.7-linux-x86_64.egg/onnx/optimizer.pyc in optimize(model, passes)
43 'fuse_transpose_into_gemm']
44 if not isinstance(model, ModelProto):
---> 45 raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
46
47 model_str = model.SerializeToString()
ValueError: Optimizer only accepts ModelProto, incorrect type: <type 'str'>
```
- PyTorch and Caffe2:
- How you installed PyTorch (pip):
- OS:Ubuntu16.04
- PyTorch version:0.4.0
- Python version:anaconda python2 2.7.1
- CUDA/cuDNN version:8.0/6.0.5
- GPU models and configuration: GeForce GTX 1080 Ti/PCIe/SSE2
- GCC version (if compiling from source):5.4.0
- CMake version:3.11.1
- Versions of any other relevant libraries:
eigen:3.3.4, opencv 3.4.1
| caffe2 | low | Critical |
344,835,471 | vscode | "Quick Open" open all/multi-select files | Sometimes I need to open all files matching certain criteria.
For example:

At the moment I have to start "Quick Open", enter my search term, and then go through each entry, hitting arrow-right on my keyboard. This can take a very long time.
It would be nice if I could just open all files that are in the dialog.
It would also be good if I could select multiple files with ctrl + click and maybe shift + click to open only the files I need.
| feature-request,quick-open | low | Major |
344,890,538 | pytorch | cannot open caffe2.lib in VS2015 debug mode | Environment:
Win10 64Bit, VS2015, Caffe2 CPU Only mode
Issue:
I built caffe2.lib successfully in VS2015 debug mode, but a link error occurred when I built the caffe2_pybind11_state project.
link error message: LNK1104 can not open “../lib/debug/caffe2.lib”
The caffe2.lib file exceeds 2.6 GB; could that be the reason?
| caffe2 | low | Critical |
344,921,601 | go | regexp: investigate further performance improvements | Languages Regex Benchmark:
Language | Email(ms) | URI(ms) | IP(ms) | Total(ms)
--- | ---: | ---: | ---: | ---:
**C PCRE2** | 25.00 | 25.02 | 5.65 | 55.66
**Rust** | 31.31 | 31.73 | 6.75 | 69.79
**PHP** | 54.39 | 50.22 | 5.80 | 110.40
**Javascript** | 74.88 | 63.09 | 2.02 | 140.00
**D ldc** | 146.01 | 140.03 | 5.19 | 291.24
**D dmd** | 205.52 | 200.30 | 5.59 | 411.41
**Perl** | 246.91 | 170.74 | 45.60 | 463.24
**Crystal** | 339.79 | 280.74 | 27.03 | 647.56
**Python PyPy** | 207.96 | 177.18 | 329.85 | 714.99
**Ruby** | 354.16 | 308.55 | 52.73 | 715.44
**Java** | 382.57 | 456.34 | 297.66 | 1136.57
**Kotlin** | 395.23 | 474.31 | 293.53 | 1163.07
**Python 2** | 368.85 | 286.70 | 514.10 | 1169.65
**Python 3** | 565.71 | 416.32 | 493.07 | 1475.09
**Go** | 423.53 | 415.45 | 722.53 | 1561.51
**C# .Net Core** | 1952.13 | 1681.00 | 111.32 | 3744.45
**C# Mono** | 2463.84 | 2088.87 | 153.78 | 4706.49
In [the above benchmark](https://github.com/mariomka/regex-benchmark/issues/8), Go's regexp is even slower than Python's. This is not ideal: Python is a scripting language and Go is a compiled language, so one would expect Go to be faster.
I noticed there's an issue here: https://github.com/golang/go/issues/19629, where someone said it is because Python implements regex in C, and C is faster than Go. But Python is a cross-platform language and it can enjoy the C regex implementation on all platforms. Why can't Go do the same thing? This may be a stupid question, but I just don't understand why Go has to use cgo to call C code while Python doesn't have this limitation. Thanks.
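For context, a minimal sketch of the kind of measurement such a benchmark performs (the input file name is hypothetical; the real benchmark uses its own corpus and patterns):
```go
package main

import (
	"fmt"
	"io/ioutil"
	"regexp"
	"time"
)

func main() {
	// Hypothetical corpus file; the real benchmark ships its own input.
	data, err := ioutil.ReadFile("input-text.txt")
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`[\w\.+-]+@[\w\.+-]+\.[\w\.+-]+`) // email-like pattern
	start := time.Now()
	matches := re.FindAll(data, -1)
	fmt.Printf("%d matches in %v\n", len(matches), time.Since(start))
}
```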
| Performance,NeedsInvestigation | high | Major |
344,922,453 | pytorch | Unintuitive reduction of mini-batch loss for NLLLoss | I find the reduction method that was chosen for the NLLLoss quite unintuitive.
<img width="712" alt="screen shot 2018-07-26 at 18 33 10" src="https://user-images.githubusercontent.com/14793026/43275573-774953a0-9102-11e8-8f84-8d8f7955cd9a.png">
This introduces a weird interdependence between the chosen class weights and the chosen batch size (and more: the influence of the class weights depends on which ground-truth classes are present in the mini-batch).
Extreme case with the current implementation: with batch size one, it does not matter which class weights I choose, my net will always see the same gradients.
In other words: I would expect `F.nll_loss(..., reduce=True) == torch.mean(F.nll_loss(..., reduce=False))`, but this does not hold true when using different class weights.
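A minimal sketch demonstrating the mismatch (using the 0.4-era `reduce=` keyword; with class weights, the reduced loss divides by the sum of the per-sample weights rather than by the batch size):
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
log_probs = F.log_softmax(torch.randn(4, 3), dim=1)  # batch of 4, 3 classes
target = torch.tensor([0, 1, 2, 1])
weight = torch.tensor([1.0, 2.0, 3.0])

reduced = F.nll_loss(log_probs, target, weight=weight, reduce=True)
manual = F.nll_loss(log_probs, target, weight=weight, reduce=False).mean()
print(reduced.item(), manual.item())  # these differ
```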
In the documentation of the CrossEntropyLoss it also says the following
<img width="647" alt="screen shot 2018-07-26 at 18 45 47" src="https://user-images.githubusercontent.com/14793026/43276191-2a0489c8-9104-11e8-9348-23b16ad5543b.png">
Especially the sentence "The losses are averaged across observations for each minibatch." is very misleading with the current implementation if you are using class weights.
I can only guess that the reason this implementation was chosen is so that your loss value doesn't change when you change the class weights (which makes multiple runs with different class weights more comparable when you're just looking at the loss), but it seems to come at the cost of a very unintuitive treatment of class weights that, in my opinion, is not worth it.
cc @jlin27 @mruberry @albanD @jbschlosser | module: docs,module: nn,module: loss,triaged | low | Minor |
344,929,494 | pytorch | NetTest.OperatorWithExecutorHelper intermittently hangs | Sample log:
```
06:36:16 [ RUN ] NetTest.OperatorWithExecutorHelper
06:36:16 I0726 06:36:16.360155 1596 net_dag_utils.cc:102] Operator graph pruning prior to chain compute took: 4.741e-06 secs
06:36:16 I0726 06:36:16.360244 1596 net_async_base.cc:435] Using specified CPU pool size: 4; NUMA node id: -1
06:36:16 I0726 06:36:16.360285 1596 net_async_base.cc:448] Created shared CPU pool, size: 4; NUMA node id: -1
07:18:08 Build timed out (after 45 minutes). Marking the build as failed.
07:18:08 Build was aborted
```
sample log: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-cuda9.0-cudnn7-aten-ubuntu16.04-test/5029/console | caffe2 | low | Critical |
344,940,132 | pytorch | [doc] functionalities not documented | - [ ] FeatureAlphaDropout
- [ ] nn.init.zeros_, nn.init.ones_
- [ ] bilinear missing doc content
- [x] DistributedDataParallelCPU
- [ ] torch.as_strided
- [ ] F.softplus
- [ ] torch.default_generator | module: docs,good first issue,triaged | medium | Major |
344,941,809 | vue | Slots with only comments use fallback instead | ### Version
2.5.16
### Reproduction link
[https://jsfiddle.net/sg0bkLhv/7/](https://jsfiddle.net/sg0bkLhv/7/)
### Steps to reproduce
* Register a component with a slot
* Use the component in a Vue instance with comments=true, filling the slot with only HTML comment(s)
### What is expected?
The HTML comment is rendered into the slot.
This could be a breaking "fix" for someone who is running with comments=true and still relying on this behavior. If the current behavior is kept, I think it should at least be documented.
### What is actually happening?
The HTML comment is discarded and the slot uses its fallback content instead.
If any other content is provided together with the HTML comment, all content is kept.
---
I am developing for a CMS which uses HTML comments to provide its editing capabilities. It has the concept of "areas", which are similar to Vue's slots, so it would be handy to render areas into slots. An empty area (e.g. in a newly created page) consists solely of an HTML comment. Since the comment is stripped by Vue, the editing tools are not available.
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Major |
344,943,934 | rust | The run-pass/simd-intrinsic-float-minmax.rs fails on mips* with glibc 2.27 | This is a combination of several issues. First on Mips targets the LLVM expands @llvm.min/maxnum.v4f32 into per element fminf/fmaxf libcalls. Since glibc 2.25 [1] the semantics of these libm functions is changed and no longer follows the rules defined in LLVM lang ref [2]. Now these functions return QNAN if any of the input is SNAN. And finally the bit-pattern of libcore::f32::NAN constant is actually interpreted as SNAN by Mips hardware [3] [4] triggering the issue.
@gnzlbg
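A minimal sketch showing the bit pattern in question (under the legacy MIPS NaN encoding, the quiet bit is inverted relative to IEEE 754-2008, so this encoding is read as signaling there; see [3]):
```rust
fn main() {
    // libcore's f32 NAN constant is 0x7fc00000, a quiet NaN under IEEE 754-2008
    let bits = std::f32::NAN.to_bits();
    println!("{:#010x}", bits); // prints 0x7fc00000
}
```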
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=20947
[2] https://llvm.org/docs/LangRef.html#id480
[3] https://sourceware.org/binutils/docs/as/MIPS-NaN-Encodings.html
[4] https://dmz-portal.mips.com/wiki/MIPS_ABI_-_NaN_Interlinking#1._Introduction | O-MIPS | low | Critical |
344,950,031 | rust | External statics compile incorrectly when there's no #[link(... above the extern block | External statics return stuff from incorrect memory locations when their `extern` blocks are not marked with `#[link(name = "their_lib_name")]` explicitly .
#### Environment
```
rustc 1.29.0-nightly (6a1c0637c 2018-07-23)`
binary: rustc
commit-hash: 6a1c0637ce44aeea6c60527f4c0e7fb33f2bcd0d
commit-date: 2018-07-23
host: x86_64-pc-windows-msvc
release: 1.29.0-nightly
LLVM version: 7.0
```
#### Example
Test `cdylib` type library called `libtest`:
```Rust
// lib.rs
#[no_mangle] pub static AS_STATIC: u32 = 1337;
#[no_mangle] pub extern "C" fn as_function() -> u32 { 1337 }
```
Test binary (marked):
```Rust
// main.rs
#[link(name = "libtest")]
extern "C" {
pub static AS_STATIC: u32;
pub fn as_function() -> u32;
}
fn main() {
unsafe {
println!(
"Static: '{}' at {:p}\nFunction: '{}' at {:p}",
AS_STATIC,
&AS_STATIC as *const u32 ,
as_function(),
as_function as *const (),
);
}
}
```
Result - OK:
```
Static: '1337' at 0x7fef9bd2140
Function: '1337' at 0x13f5462b0
```
Now after removing `#[link(name = "libtest")]` from above the `extern` block in our test binary and instead linking via build script:
```Rust
// build.rs
fn main() {
println!("cargo:rustc-link-lib=dylib=libtest");
}
```
Result:
```
Static: '262284799' at 0x13fe062b0
Function: '1337' at 0x13fe062b6
``` | T-compiler,O-windows-msvc,C-bug | low | Minor |
344,957,043 | pytorch | interaction with FindCUDA causes spurious re-cmakes | (see https://github.com/pytorch/pytorch/pull/9845 for a discarded potential solution)
FindCUDA generates a C++ source file at CMake time. When building with ninja (haven't tested with make), a dependency is recorded of build.ninja on that generated file. After invoking cmake manually, that generated file is (for some reason - it doesn't make sense to me why this is the case, and poking around to look at timestamps even suggests that it's not true) considered to be newer than build.ninja. Therefore upon the next time running `ninja build`, ninja spuriously reruns CMake (but not on additional subsequent runs).
expected behavior: running `cmake; ninja` doesn't cause cmake to be run twice
environment: on linux, with cuda and ninja
to reproduce:
(1) grab the cmake command. I did this by prepending an `echo` to the cmake invocation in build_pytorch_libs.sh
(2) `cd build`
(3) manually run that cmake command
(4) `ninja -d explain`
(5) note that cmake is rerun. The very first line of output should be a message from ninja saying that the dependency on the aforementioned generated source file is triggering a rebuild of build.ninja itself
Note: I'm not sure of exactly how to characterize the underlying problem. It could be that the timestamps are recorded incorrectly (on the filesystem, they appear to be within a few ms of each other). It could be that the dependency itself is bogus. It could be either an upstream bug of FindCUDA, or a problem with our usage of it.
cc @malfet @seemethere @walterddr | module: build,triaged | low | Critical |
344,957,195 | flutter | Can't resolve symbol io.flutter.plugin dependency | ## Edit from Flutter team:
Please see https://github.com/flutter/flutter/issues/19830#issuecomment-1688736003. If following these steps still doesn't resolve the imports, please add a comment with your Android Studio version and the output of `flutter doctor -v`
## Original issue:
I'm trying to build a Flutter plugin in Android Studio using an example template.
When I open build.gradle in Android Studio and Gradle resolves dependencies, I cannot get these symbols to resolve (see screenshot below).
I'm not sure what I'm missing, since flutter doctor shows all checks passing.

| platform-android,tool,customer: crowd,P2,a: plugins,team-android,triaged-android | low | Critical |
344,965,742 | pytorch | [docs] Script for releasing new versions of the docs | I have a script but I need to clean it up a little. Leaving this issue here to remind myself.
cc @jlin27 @mruberry | module: docs,triaged | low | Minor |
344,968,969 | go | runtime: signal handling panics for signals generated by sigqueue/tgkill | ### What version of Go are you using (`go version`)?
go version devel +30d7e6449f Mon Jul 23 15:16:01 2018 +0000 linux/amd64
### Does this issue reproduce with the latest release?
Yes (Go 1.10). This is reproducible at least to 1.6 and much earlier.
### What operating system and processor architecture are you using (`go env`)?
linux/amd64
This will apply to all Linux platforms and some BSD platforms (though not Darwin since `kill` is the only way to send a signal on Darwin).
### What did you do?
```go
package main
import (
"runtime"
"syscall"
)
func main() {
runtime.LockOSThread()
pid := syscall.Getpid()
tid := syscall.Gettid()
syscall.Kill(tid, syscall.Signal(syscall.SIGSEGV))
println("kill ignored")
syscall.Tgkill(pid, tid, syscall.Signal(syscall.SIGSEGV))
println("tgkill ignored")
}
```
### What did you expect to see?
Since these aren't "real" SIGSEGV, I expect the runtime to ignore them (or deliver them to a signal channel if one is registered):
```
kill ignored
tgkill ignored
```
### What did you see instead?
```
kill ignored
unexpected fault address 0x2f860002ef35
fatal error: fault
[signal SIGSEGV: segmentation violation code=0xfffffffffffffffa addr=0x2f860002ef35 pc=0x44eb2b]
goroutine 1 [running, locked to thread]:
runtime.throw(0x46d21c, 0x5)
/home/austin/.cache/gover/1.10/src/runtime/panic.go:619 +0x81 fp=0xc42004fe98 sp=0xc42004fe78 pc=0x421f41
runtime.sigpanic()
/home/austin/.cache/gover/1.10/src/runtime/signal_unix.go:395 +0x211 fp=0xc42004fee8 sp=0xc42004fe98 pc=0x433ad1
...
```
`runtime.sighandler` explicitly checks for sigcode `SI_USER` in several places to distinguish kernel-generated signals from signals send by the user, but sigcode is only `SI_USER` for signals sent specifically by `kill`. If the signal was sent by `sigqueue` or `tkill`/`tgkill` (on Linux), sigcode is `SI_QUEUE` or `SI_TKILL`, respectively.
/cc @ianlancetaylor @bcmills because I know how much they love signals. | NeedsFix,compiler/runtime | low | Critical |
344,980,049 | pytorch | [Feature Request] Checkpoint manager | Would like to see a checkpoint manager taking a module, dataloader, and optimizer at construction that would be able to:
- Save the module
- Save the optimizer (including state)
- Save the dataloader state (be able to continue from the middle of an epoch)
- Save RNG states (CPU, GPU, including data sampler)
- Call every iteration and automatically checkpoint every X minutes (for preemptible instances)
- Automatically keep best N checkpoints (Remove rest)
- Save some kind of visualization state (like can be done with TensorBoard) | feature,triaged | low | Major |
344,985,064 | angular | Microsyntax is parsed differently depending on whether "let" is used | ## I'm submitting a...
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
I had this issue when trying to use MatRowDef (https://github.com/angular/material2/blob/master/src/lib/table/row.ts#L55), but this looks like a general issue with any structural directive that doesn't have an input matching its selector. MatRowDef is selected with "[matRowDef]", but it only has the inputs "matRowDefColumns" and "matRowDefWhen".
Depending on whether a "let" statement is used in the microsyntax, the rest of the statements throw different errors. The expectation is that a "let" statement should not affect anything else, especially when using a separator like the semi-colon, as in these examples.
This fails to compile:
```
<mat-card *matRowDef="x"></mat-card>
```
The errors are:
- Can't bind to 'matRowDef' since it isn't a known property of 'mat-card'.
- Property binding matRowDef not used by any directive on an embedded template. Make sure that the property name is spelled correctly and all directives are listed in the "@NgModule.declarations".
This is expected. The error message is confusing, because the directive exists, but because it doesn't have an input with the same name, the error is expected. As soon as I add a "let" statement, it compiles fine, though:
```
<mat-card *matRowDef="let y; x"></mat-card>
```
Why does this work? I'm still trying to assign "x" to the "matRowDef" input, which still doesn't exist. Adding the let statement seems to change the interpretation of the second statement.
Flipping them makes the compilation fail again with the same error as the first one:
```
<mat-card *matRowDef="x; let y"></mat-card>
```
Note that these are the very same statements commonly used with ngIf (example). This actually works as expected. There's no "matRowDef" input and I should be using "matRowDefColumns". Then I fix it like this:
```
<mat-card *matRowDef="columns: x; let y"></mat-card>
```
And the error is totally crazy:
```
error TS100: TypeError: Cannot read property 'toUpperCase' of undefined
```
Flipping them back now compiles:
```
<mat-card *matRowDef="let y; columns: x"></mat-card>
```
Removing the let statement while keeping the columns assignment gives me the crazy uppercase error again:
```
<mat-card *matRowDef="columns: x"></mat-card>
```
Rewriting not using microsyntax like this compiles fine:
```
<ng-template matRowDef [matRowDefColumns]="x">
<mat-card></mat-card>
</ng-template>
```
Note that I'm only trying to compile. I didn't run the code that compiled without "columns" being assigned to check if it even worked at runtime. My guess is that it wouldn't.
| type: bug/fix,freq2: medium,area: core,hotlist: google,core: ng-template and *microsyntax,P3 | low | Critical |
345,034,730 | pytorch | CRITICAL:root:Cannot load caffe2.python. Error: DLL load failed: A dynamic link library (DLL) initialization routine failed. | I just built Caffe2 using the shared libraries option on Windows 10, and I'm getting the error in the title when I try to run `from caffe2.python import core`.
- I built everything from sources
- I use only a single version of python on my PC, 2.7.14.
- I've copied the pyd file to C:\Python27\DLLs
- It works well with static linking.
I don't even understand what the error says or what its source could be. If I try `import caffe2.python` it works well, but it fails if I try `import caffe2.python.anything`. | caffe2 | low | Critical |
345,042,733 | pytorch | Stop ifdef'ing out scatter/gather (comm) in libtorch | @goldsborough @mruberry
cc @yf225 @glaringlee @ngimel | module: cpp,module: cuda,triaged | low | Minor |
345,054,797 | flutter | SizeTransition hard codes cross axis alignment. | The SizeTransition should just take an AlignmentGeometry like everything else, not a double for the `axisAlignment`, which takes a bit of learning to understand.
It hard codes right side alignment for the cross axis (the one that is not being clipped), and it shouldn't: it limits the use cases quite a lot. | framework,a: animation,a: quality,c: proposal,P2,team-framework,triaged-framework | low | Minor |
345,078,270 | flutter | Crash while trying to run "flutter packages get", give better error message when iOS project is missing | I was trying to rearrange the repo, and I moved some things around and probably have some invalid paths in my `pubspec.yaml`. In any case, it shouldn't crash, it should give a reasonable error message or recover.
It says the `assets-for-api-docs/packages/diagram_capture/ios/Runner.xcodeproj` directory is missing (indeed, it doesn't exist on the disk).
## command
flutter packages get
## exception
```
ProcessException: ProcessException: No such file or directory
Command: /usr/bin/xcodebuild -project /Users/gspencer/code/assets-for-api-docs/packages/diagram_capture/ios/Runner.xcodeproj -target Runner -showBuildSettings
```
```
#0 _ProcessImpl._runAndWait (dart:io/runtime/binprocess_patch.dart:485:7)
#1 _runNonInteractiveProcessSync (dart:io/runtime/binprocess_patch.dart:631:18)
#2 Process.runSync (dart:io/runtime/binprocess_patch.dart:66:12)
#3 LocalProcessManager.runSync (package:process/src/interface/local_process_manager.dart:83:20)
#4 _runWithLoggingSync (package:flutter_tools/src/base/process.dart:321:48)
#5 runCheckedSync (package:flutter_tools/src/base/process.dart:280:10)
#6 XcodeProjectInterpreter.getBuildSettings (package:flutter_tools/src/ios/xcodeproj.dart:180:24)
#7 CocoaPods.setupPodfile (package:flutter_tools/src/ios/cocoapods.dart:164:52)
#8 injectPlugins (package:flutter_tools/src/plugins.dart:292:17)
<asynchronous suspension>
#9 FlutterProject.ensureReadyForPlatformSpecificTooling (package:flutter_tools/src/project.dart:143:11)
<asynchronous suspension>
#10 PackagesGetCommand.runCommand (package:flutter_tools/src/commands/packages.dart:84:23)
<asynchronous suspension>
#11 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:347:18)
#12 _asyncThenWrapperHelper.<anonymous closure> (dart:async/runtime/libasync_patch.dart:77:64)
#13 _rootRunUnary (dart:async/zone.dart:1134:38)
#14 _CustomZone.runUnary (dart:async/zone.dart:1031:19)
#15 _FutureListener.handleValue (dart:async/future_impl.dart:129:18)
#16 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:638:45)
#17 Future._propagateToListeners (dart:async/future_impl.dart:667:32)
#18 Future._complete (dart:async/future_impl.dart:472:7)
#19 _SyncCompleter.complete (dart:async/future_impl.dart:51:12)
#20 _AsyncAwaitCompleter.complete.<anonymous closure> (dart:async/runtime/libasync_patch.dart:33:20)
#21 _rootRun (dart:async/zone.dart:1126:13)
#22 _CustomZone.run (dart:async/zone.dart:1023:19)
#23 _CustomZone.bindCallback.<anonymous closure> (dart:async/zone.dart:949:23)
#24 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#25 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#26 _runPendingImmediateCallback (dart:isolate/runtime/libisolate_patch.dart:113:13)
#27 _RawReceivePortImpl._handleMessage (dart:isolate/runtime/libisolate_patch.dart:166:5)
```
## flutter doctor
```
[✓] Flutter (Channel unknown, v0.5.8-pre.147, on Mac OS X 10.13.6 17G65, locale en-US)
• Flutter version 0.5.8-pre.147 at /Users/gspencer/code/flutter
• Framework revision 9c159638bc (3 hours ago), 2018-07-26 16:44:27 -0700
• Engine revision 7d17da76db
• Dart version 2.0.0-dev.69.0.flutter-937ee2e8ca
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.1)
• Android SDK at /Users/gspencer/Library/Android/sdk
• Android NDK at /Users/gspencer/Library/Android/sdk/ndk-bundle
• Platform android-28, build-tools 28.0.1
• ANDROID_HOME = /Users/gspencer/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
• All Android licenses accepted.
[✓] iOS toolchain - develop for iOS devices (Xcode 9.4.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 9.4.1, Build version 9F2000
• ios-deploy 1.9.2
• CocoaPods version 1.5.3
[✓] Android Studio (version 3.1)
• Android Studio at /Applications/Android Studio.app/Contents
✗ Flutter plugin not installed; this adds Flutter specific functionality.
• Dart plugin version 162.2924
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1024-b01)
[✓] IntelliJ IDEA Community Edition (version 2018.1.6)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 26.0.2
• Dart plugin version 181.4892.1
[✓] Connected devices (1 available)
• Pixel 2 • FA81A1A02109 • android-arm64 • Android 8.1.0 (API 27)
• No issues found!
```
| c: crash,tool,P2,team-tool,triaged-tool | low | Critical |
345,100,267 | pytorch | A serious problem when installing caffe2. Can anyone help me? | [ 88%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/tanh_op_cudnn.cc.o
In file included from /home/lgc/pytorch/caffe2/core/context_gpu.h:19:0,
from /home/lgc/pytorch/caffe2/operators/activation_ops_cudnn.h:4,
from /home/lgc/pytorch/caffe2/operators/tanh_op_cudnn.cc:3:
/home/lgc/pytorch/caffe2/core/common_cudnn.h:24:17: note: #pragma message: CUDNN version under 6.0 is supported at best effort.
#pragma message "CUDNN version under 6.0 is supported at best effort."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."
^
[ 88%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/sigmoid_op_cudnn.cc.o
In file included from /home/lgc/pytorch/caffe2/core/context_gpu.h:19:0,
from /home/lgc/pytorch/caffe2/operators/activation_ops_cudnn.h:4,
from /home/lgc/pytorch/caffe2/operators/sigmoid_op_cudnn.cc:3:
/home/lgc/pytorch/caffe2/core/common_cudnn.h:24:17: note: #pragma message: CUDNN version under 6.0 is supported at best effort.
#pragma message "CUDNN version under 6.0 is supported at best effort."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."
^
[ 88%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/transpose_op_cudnn.cc.o
In file included from /home/lgc/pytorch/caffe2/core/context_gpu.h:19:0,
from /home/lgc/pytorch/caffe2/operators/transpose_op_cudnn.cc:6:
/home/lgc/pytorch/caffe2/core/common_cudnn.h:24:17: note: #pragma message: CUDNN version under 6.0 is supported at best effort.
#pragma message "CUDNN version under 6.0 is supported at best effort."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."
^
[ 88%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/dropout_op_cudnn.cc.o
In file included from /home/lgc/pytorch/caffe2/core/context_gpu.h:19:0,
from /home/lgc/pytorch/caffe2/operators/dropout_op_cudnn.cc:1:
/home/lgc/pytorch/caffe2/core/common_cudnn.h:24:17: note: #pragma message: CUDNN version under 6.0 is supported at best effort.
#pragma message "CUDNN version under 6.0 is supported at best effort."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."
^
[ 88%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/softmax_op_cudnn.cc.o
In file included from /home/lgc/pytorch/caffe2/core/context_gpu.h:19:0,
from /home/lgc/pytorch/caffe2/operators/softmax_op_cudnn.cc:1:
/home/lgc/pytorch/caffe2/core/common_cudnn.h:24:17: note: #pragma message: CUDNN version under 6.0 is supported at best effort.
#pragma message "CUDNN version under 6.0 is supported at best effort."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."
^
[ 88%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/spatial_batch_norm_op_cudnn.cc.o
In file included from /home/lgc/pytorch/caffe2/core/context_gpu.h:19:0,
from /home/lgc/pytorch/caffe2/operators/spatial_batch_norm_op_cudnn.cc:3:
/home/lgc/pytorch/caffe2/core/common_cudnn.h:24:17: note: #pragma message: CUDNN version under 6.0 is supported at best effort.
#pragma message "CUDNN version under 6.0 is supported at best effort."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."
^
[ 89%] Building CXX object caffe2/CMakeFiles/caffe2_gpu.dir/operators/elu_op_cudnn.cc.o
In file included from /home/lgc/pytorch/caffe2/core/context_gpu.h:19:0,
from /home/lgc/pytorch/caffe2/operators/activation_ops_cudnn.h:4,
from /home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:3:
/home/lgc/pytorch/caffe2/core/common_cudnn.h:24:17: note: #pragma message: CUDNN version under 6.0 is supported at best effort.
#pragma message "CUDNN version under 6.0 is supported at best effort."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:25:17: note: #pragma message: We strongly encourage you to move to 6.0 and above.
#pragma message "We strongly encourage you to move to 6.0 and above."
^
/home/lgc/pytorch/caffe2/core/common_cudnn.h:26:17: note: #pragma message: This message is intended to annoy you enough to update.
#pragma message "This message is intended to annoy you enough to update."
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:8:25: **_error: ‘CUDNN_ACTIVATION_ELU’ was not declared in this scope_**
class CuDNNActivationOp<CUDNN_ACTIVATION_ELU> final
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:8:45: **_error: template argument 1 is invalid_**
class CuDNNActivationOp<CUDNN_ACTIVATION_ELU> final
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:54:33: **_error: ‘CUDNN_ACTIVATION_ELU’ was not declared in this scope_**
class CuDNNActivationGradientOp<CUDNN_ACTIVATION_ELU> final
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:54:53: **_error: template argument 1 is invalid_**
class CuDNNActivationGradientOp<CUDNN_ACTIVATION_ELU> final
^
In file included from /home/lgc/pytorch/caffe2/core/flags.h:23:0,
from /home/lgc/pytorch/caffe2/core/logging.h:10,
from /home/lgc/pytorch/caffe2/core/allocator.h:6,
from /home/lgc/pytorch/caffe2/core/context.h:9,
from /home/lgc/pytorch/caffe2/operators/elementwise_ops.h:10,
from /home/lgc/pytorch/caffe2/operators/elu_op.h:6,
from /home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:1:
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:48: **_error: ‘CUDNN_ACTIVATION_ELU’ was not declared in this scope_**
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:180:48: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
Registerer##RegistryName::DefaultCreator<__VA_ARGS__>, \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:68: **_error: template argument 1 is invalid_**
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:180:48: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
Registerer##RegistryName::DefaultCreator<__VA_ARGS__>, \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:48: error: ‘CUDNN_ACTIVATION_ELU’ was not declared in this scope
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:181:20: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:68: error: template argument 1 is invalid
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:181:20: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:181:33: **_error: no matching function for call to ‘DemangleType()’_**
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/registry.h:210:3: note: in expansion of macro ‘CAFFE_REGISTER_TYPED_CLASS’
CAFFE_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
In file included from /home/lgc/pytorch/caffe2/core/registry.h:21:0,
from /home/lgc/pytorch/caffe2/core/flags.h:23,
from /home/lgc/pytorch/caffe2/core/logging.h:10,
from /home/lgc/pytorch/caffe2/core/allocator.h:6,
from /home/lgc/pytorch/caffe2/core/context.h:9,
from /home/lgc/pytorch/caffe2/operators/elementwise_ops.h:10,
from /home/lgc/pytorch/caffe2/operators/elu_op.h:6,
from /home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:1:
/home/lgc/pytorch/caffe2/core/typeid.h:80:20: note: candidate: template<class T> const char* caffe2::DemangleType()
static const char* DemangleType() {
^
/home/lgc/pytorch/caffe2/core/typeid.h:80:20: note: **_template argument deduction/substitution failed:_**
In file included from /home/lgc/pytorch/caffe2/core/flags.h:23:0,
from /home/lgc/pytorch/caffe2/core/logging.h:10,
from /home/lgc/pytorch/caffe2/core/allocator.h:6,
from /home/lgc/pytorch/caffe2/core/context.h:9,
from /home/lgc/pytorch/caffe2/operators/elementwise_ops.h:10,
from /home/lgc/pytorch/caffe2/operators/elu_op.h:6,
from /home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:1:
/home/lgc/pytorch/caffe2/core/registry.h:181:33: error: template argument 1 is invalid
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/registry.h:210:3: note: in expansion of macro ‘CAFFE_REGISTER_TYPED_CLASS’
CAFFE_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:104:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(Elu, CuDNNActivationOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:107:31: **_error: ‘CUDNN_ACTIVATION_ELU’ was not declared in this scope_**
CuDNNActivationGradientOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:180:48: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
Registerer##RegistryName::DefaultCreator<__VA_ARGS__>, \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:105:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:107:51: **_error: template argument 1 is invalid_**
CuDNNActivationGradientOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:180:48: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
Registerer##RegistryName::DefaultCreator<__VA_ARGS__>, \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:105:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:107:31: **_error: ‘CUDNN_ACTIVATION_ELU’ was not declared in this scope_**
CuDNNActivationGradientOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:181:20: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:105:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:107:51: error: template argument 1 is invalid
CuDNNActivationGradientOp<CUDNN_ACTIVATION_ELU>);
^
/home/lgc/pytorch/caffe2/core/registry.h:181:20: note: in definition of macro ‘CAFFE_REGISTER_TYPED_CLASS’
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:105:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(
^
/home/lgc/pytorch/caffe2/core/registry.h:181:33: error: no matching function for call to ‘DemangleType()’
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/registry.h:210:3: note: in expansion of macro ‘CAFFE_REGISTER_TYPED_CLASS’
CAFFE_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:105:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(
^
In file included from /home/lgc/pytorch/caffe2/core/registry.h:21:0,
from /home/lgc/pytorch/caffe2/core/flags.h:23,
from /home/lgc/pytorch/caffe2/core/logging.h:10,
from /home/lgc/pytorch/caffe2/core/allocator.h:6,
from /home/lgc/pytorch/caffe2/core/context.h:9,
from /home/lgc/pytorch/caffe2/operators/elementwise_ops.h:10,
from /home/lgc/pytorch/caffe2/operators/elu_op.h:6,
from /home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:1:
/home/lgc/pytorch/caffe2/core/typeid.h:80:20: note: candidate: template<class T> const char* caffe2::DemangleType()
static const char* DemangleType() {
^
/home/lgc/pytorch/caffe2/core/typeid.h:80:20: note: template argument deduction/substitution failed:
In file included from /home/lgc/pytorch/caffe2/core/flags.h:23:0,
from /home/lgc/pytorch/caffe2/core/logging.h:10,
from /home/lgc/pytorch/caffe2/core/allocator.h:6,
from /home/lgc/pytorch/caffe2/core/context.h:9,
from /home/lgc/pytorch/caffe2/operators/elementwise_ops.h:10,
from /home/lgc/pytorch/caffe2/operators/elu_op.h:6,
from /home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:1:
/home/lgc/pytorch/caffe2/core/registry.h:181:33: error: template argument 1 is invalid
DemangleType<__VA_ARGS__>()); \
^
/home/lgc/pytorch/caffe2/core/registry.h:210:3: note: in expansion of macro ‘CAFFE_REGISTER_TYPED_CLASS’
CAFFE_REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/core/operator.h:848:3: note: in expansion of macro ‘CAFFE_REGISTER_CLASS’
CAFFE_REGISTER_CLASS( \
^
/home/lgc/pytorch/caffe2/core/operator.h:853:3: note: in expansion of macro ‘REGISTER_CUDA_OPERATOR_WITH_ENGINE’
REGISTER_CUDA_OPERATOR_WITH_ENGINE(name, CUDNN, __VA_ARGS__)
^
/home/lgc/pytorch/caffe2/operators/elu_op_cudnn.cc:105:1: note: in expansion of macro ‘REGISTER_CUDNN_OPERATOR’
REGISTER_CUDNN_OPERATOR(
^
_**caffe2/CMakeFiles/caffe2_gpu.dir/build.make:1386: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/operators/elu_op_cudnn.cc.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/operators/elu_op_cudnn.cc.o] Error 1
CMakeFiles/Makefile2:1363: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2**_
| caffe2 | low | Critical |
345,181,827 | opencv | Proposal: camera multiplexing API | # OpenCV camera multiplexing API
## Motivation
The current `cv::VideoCapture` provides only a blocking-style API for accessing cameras:
a call to `VideoCapture::read` or `VideoCapture::grab` will block the calling thread until
frame data is available.
When working with multiple cameras, this requires creating a dedicated thread for each
camera.
We propose a way to wait for multiple cameras in a single thread, and a method to check
for camera frame availability without blocking the calling thread.
## New OpenCV APIs
"videoio.hpp":
```C++
enum GrabAnyResult {
    GRAB_ANY_TIMEOUT,
    GRAB_ANY_EVENT,
    GRAB_ANY_READY
};
class GrabAnyEvent {
public:
    GrabAnyEvent();
    GrabAnyEvent(const GrabAnyEvent&);
    GrabAnyEvent& operator=(const GrabAnyEvent&);
    void signal();
    void reset();
};
struct GrabAnyItem {
    VideoCapture* capture;
    bool ready;
};
GrabAnyResult grabAny(/*some_array_type<GrabAnyItem>*/ items, GrabAnyEvent* event, int timeout_milliseconds);
```
### `cv::grabAny`
#### Parameters
* items - array of GrabAnyItem structures (can be implicitly created from `std::vector<GrabAnyItem>`, `std::array<GrabAnyItem>`, or `GrabAnyItem[]`),
used both as input and output.
The user must set the `capture` pointer to a valid, opened `VideoCapture` instance.
On return, the `ready` field will be set to `true` if a new call to `VideoCapture::grab` or `VideoCapture::read` won't block, or if some error happened (e.g. the camera was disconnected).
File-based `VideoCapture` instances are always considered ready.
This function doesn't change state or dequeue any frames from a `VideoCapture`; if a ready `VideoCapture` is passed to this function, it returns immediately.
* event - optional event which can be used to asynchronously abort the wait; can be null.
This function doesn't change the status of the event; if a signaled event is passed to this function, it
returns immediately.
The event must be manually reset by the user.
* timeout_milliseconds - timeout in milliseconds after which this function will return. If 0, it returns immediately, which can be used for checking the current `VideoCapture` status without blocking (see the sketch below). It waits indefinitely if negative.
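A minimal sketch of the zero-timeout polling pattern mentioned above, assuming the `items` vector is set up as in the Example section further down:
```C++
// Non-blocking poll: updates items[i].ready and returns immediately.
if (cv::grabAny(items, nullptr, 0) == cv::GRAB_ANY_READY)
{
    // At least one capture can now be read without blocking.
}
```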
#### Return value
* GRAB_ANY_TIMEOUT - the timeout has expired
* GRAB_ANY_EVENT - the asynchronous event was signaled; the current `VideoCapture` status is still updated in `items`.
* GRAB_ANY_READY - one or more `VideoCapture` instances are ready for reading.
#### Exceptions
* cv::Exception(CV_StsBadArg) - `items` is empty, the `capture` field of any item is null, or the `VideoCapture` instances have different backends.
* cv::Exception(CV_StsNotImplemented) - camera multiplexing is not supported by backend.
### `cv::GrabAnyEvent`
Events are implicitly shared:
```C++
GrabAnyEvent event1;
GrabAnyEvent event2 = event1; // event2 refers to same event internally
```
A `GrabAnyEvent::signal()` call is thread-safe relative to other asynchronous `GrabAnyEvent::signal()` calls.
The event state is undefined if `GrabAnyEvent::signal()` and `GrabAnyEvent::reset()` are called concurrently.
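A sketch of the intended event usage with a second thread (`shutdownRequested` is a hypothetical application hook; `<thread>`/`<chrono>` includes omitted):
```C++
cv::GrabAnyEvent stop;

// Second thread: abort the wait when the application shuts down.
std::thread watchdog([stop]() mutable {  // copies are fine: events are shared
    while (!shutdownRequested())         // hypothetical application hook
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    stop.signal();                       // thread-safe relative to other signal() calls
});

// Main thread: wait until a capture is ready or the event fires.
if (cv::grabAny(items, &stop, -1) == cv::GRAB_ANY_EVENT)
    stop.reset();                        // events are not auto-reset
watchdog.join();
```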
## TODO
Check backend support for camera multiplexing without creating instances of `VideoCapture` or `GrabAnyEvent`.
## Example
```C++
cv::VideoCapture cap0(0);
cv::VideoCapture cap1(1);
cv::GrabAnyEvent event;
...
std::vector<cv::GrabAnyItem> items(2);
items[0].capture = &cap0;
items[1].capture = &cap1;
cv::GrabAnyResult result = cv::grabAny(items, &event, 1000/*1s timeout*/);
if (result == cv::GRAB_ANY_TIMEOUT)
{
    // Handle timeout
}
else if (result == cv::GRAB_ANY_EVENT)
{
    // Event was signaled asynchronously
    event.reset();
}
else if (result == cv::GRAB_ANY_READY)
{
    cv::Mat frame;
    if (items[0].ready)
    {
        cap0.read(frame);
    }
    if (items[1].ready)
    {
        cap1.read(frame);
    }
}
``` | feature,category: videoio(camera) | low | Critical |
345,187,017 | rust | rustc --version segmentation fault | After installing rust with `$ curl -sSf https://static.rust-lang.org/rustup.sh | sh` (following info found at [https://github.com/rust-lang-nursery/rustup.rs/issues/695](https://github.com/rust-lang-nursery/rustup.rs/issues/695)), a segmentation fault occurs when running rustc.
I tried the following code and received notification of the fault instead of the version string:
```
$ rustc --version
Segmentation fault
```
When running the command via strace, the version string displays:
```
$ strace -o /dev/null rustc --version
rustc 1.27.2 (58cc626de 2018-07-18)
```
Info about the system:
```
$ strace -o /dev/null rustc --version --verbose
rustc 1.27.2 (58cc626de 2018-07-18)
binary: rustc
commit-hash: 58cc626de3301192d5d8c6dcbde43b5b44211ae2
commit-date: 2018-07-18
host: i686-unknown-linux-gnu
release: 1.27.2
LLVM version: 6.0
$ cat /etc/*-release
Fedora release 20 (Heisenbug)
NAME=Fedora
VERSION="20 (Heisenbug)"
ID=fedora
VERSION_ID=20
PRETTY_NAME="Fedora 20 (Heisenbug)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:20"
HOME_URL="https://fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=20
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=20
Fedora release 20 (Heisenbug)
Fedora release 20 (Heisenbug)
$ uname --all
Linux s7netserver 3.12.5-301.fc20.i686+PAE #1 SMP Mon Dec 16 18:42:48 EST 2013 i686 i686 i386 GNU/Linux
$ lscpu
Architecture: i686
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 55
Model name: Intel(R) Celeron(R) CPU N2807 @ 1.58GHz
Stepping: 8
CPU MHz: 498.000
CPU max MHz: 1578.0000
CPU min MHz: 498.0000
BogoMIPS: 3166.39
Virtualization: VT-x
L1d cache: 24K
L1i cache: 32K
L2 cache: 1024K
```
Backtrace:
```
$ RUST_BACKTRACE=1 rustc --version
Segmentation fault
```
gdb:
```
$ gdb -quiet --args rustc --version
Reading symbols from rustc...(no debugging symbols found)...done.
(gdb) run
Starting program: /usr/local/bin/rustc --version
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/libthread_db.so.1".
Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.
(gdb) bt
No stack.
(gdb) list
1 /* Data for i386 version of processor capability information.
2 Copyright (C) 2001-2013 Free Software Foundation, Inc.
3 This file is part of the GNU C Library.
4 Contributed by Ulrich Drepper <[email protected]>, 2001.
5
6 The GNU C Library is free software; you can redistribute it and/or
7 modify it under the terms of the GNU Lesser General Public
8 License as published by the Free Software Foundation; either
9 version 2.1 of the License, or (at your option) any later version.
10
(gdb)
```
The gdb 'edit' command leads to this being displayed:
> "/usr/src/debug/glibc-2.18/sysdeps/i386/dl-procinfo.c" 82L, 2484C
Post made by me on the Rust forum in relation to this issue:
[https://users.rust-lang.org/t/rustc-version-segmentation-fault/19100/5?u=zis](https://users.rust-lang.org/t/rustc-version-segmentation-fault/19100/5?u=zis) | I-crash,O-x86_64,T-compiler,C-bug,O-x86_32 | low | Critical |
345,199,053 | go | x/crypto/ssh: client.NewSession can hang indefinitely | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
1.9.3
### Does this issue reproduce with the latest release?
I'm not able to verify this.
### What operating system and processor architecture are you using (`go env`)?
linux, amd64
### What did you do?
In kubernetes e2e we are using ssh to fetch logs from kubernetes nodes.
In https://github.com/kubernetes/kubernetes/issues/66609 we see that it quite frequently hangs for ~90 minutes in the client.NewSession call (the stacktrace is there).
Relevant code is available here: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/log_size_monitoring.go#L245
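For reference, the mitigation we are evaluating on our side is a sketch like the following. It only bounds the wait; it does not fix the underlying hang, and it leaks the blocked goroutine (and a late session) if NewSession never returns in time:
```go
func newSessionWithTimeout(client *ssh.Client, d time.Duration) (*ssh.Session, error) {
	type result struct {
		s   *ssh.Session
		err error
	}
	ch := make(chan result, 1) // buffered: a late result is dropped, the session leaks
	go func() {
		s, err := client.NewSession()
		ch <- result{s, err}
	}()
	select {
	case r := <-ch:
		return r.s, r.err
	case <-time.After(d):
		return nil, fmt.Errorf("ssh: NewSession did not return within %v", d)
	}
}
```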
### What did you expect to see?
The attempt to create a NewSession should fail with an error if the node doesn't respond on the ssh connection.
### What did you see instead?
The attempt to create a NewSession hung for ~90 minutes.
Relevant stacktraces are available in:
* https://github.com/kubernetes/kubernetes/issues/66609
* https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability-stable2/414/build-log.txt (starts with `SIGABRT: abort`)
| NeedsInvestigation | low | Critical |
345,248,100 | rust | thread::ThreadId Display missing | Currently `thread::ThreadId` can only be debug-rendered. That's not entirely great because it renders with the `ThreadId(...)` wrapper around the number. Right now, to report the thread ID to Sentry, we transmute it so we can get the inner u64 out.
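Roughly what I'm asking for, as a sketch; this is hypothetical, since the inner integer is private today and such an impl could only live in std:
```rust
use std::fmt;

impl fmt::Display for ThreadId {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // Hypothetical field access; the actual inner u64 is private.
        write!(f, "{}", self.0)
    }
}
```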
Would it be reasonable to have a `Display` implementation to get a nice string version of the integer in it? There is a bit of precedent for this in Rust: `StatusCode` in `std::process` can be displayed and so can process IDs (which are just u32s). | T-libs-api,C-feature-request | low | Critical |
345,249,539 | vscode | Use smartCase for Find | Issue Type: <b>Feature Request</b>
The `search.smartCase` setting is really useful. Thanks for implementing it.
It would be great if it worked in the "Find" functionality too, so I can quickly perform a case sensitive search within the text in a file.
VS Code version: Code 1.25.1 (1dfc5e557209371715f655691b1235b6b26a06be, 2018-07-11T15:33:29.235Z)
OS version: Darwin x64 16.7.0
| feature-request,editor-find | medium | Major |
345,274,267 | create-react-app | The eject survey doesn't indicate the meaning of the scale on the comfortable question | On the [eject survey](http://goo.gl/forms/Bi6CZjk1EqsdelXk1), one question asks to indicate your comfort level on a scale of 1 to 10. However, there is no definition for 1 or 10, so the scale could be misread.

I assumed that 1 meant "Not very comfortable" and 10 meant "I could do this in my sleep", but others may read it differently.
I would recommend adding some clarification to the scale to improve the answers you get. | issue: proposal | low | Minor |
345,277,922 | vscode | [folding] provide non-selection aware fold level command | - VSCode Version:

- OS Version:
Win 10 64 bit
Steps to Reproduce:
1. Open a file which has different folding levels through indentation
2. Move the cursor to a specific level x
3. Run command 'Fold level x'
All areas of that level get folded, but not the one the cursor is currently in.

Does this issue occur when all extensions are disabled?: Yes
| feature-request,editor-folding | low | Major |
345,283,767 | vscode | Problem matchers should support creating related diagnostic information | See #55120 for motivation. | feature-request,tasks | low | Major |
345,284,803 | youtube-dl | Can anybody try to download from here? | https://www.game-leap.com/courses/arc-warden-the-self-arrives-at-last/arc-warden-introduction
Is it possible with youtube-dl? | site-support-request | low | Major |
345,316,316 | TypeScript | Allow ability to do an in-place override (_not_ extend) of interface properties |
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
override interface
override interface replace
## Suggestion
It would be nice to be able to override (not extend) third party libraries' interfaces to change the types/signatures of the interface properties.
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
Sometimes they make mistakes or define interfaces in ways that force me to constantly use `as` to typecast. I'd rather short-circuit it in one place so it works across my project.
## Examples
<!-- Show how this would be used and what the behavior would be -->
Reference: https://stackoverflow.com/questions/51562792/how-to-override-a-typescript-interface-without-extending
## Checklist
My suggestion meets these guidelines:
(Honestly I'm in no position to make these judgment calls, as I don't know the inner workings of this library - why ask these quesitons?)
* [X] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [ ] This could be implemented without emitting different JS based on the types of the expressions (not sure what this means)
* [ ] This isn't a runtime feature (e.g. new expression-level syntax) (???)
| Suggestion,Awaiting More Feedback | low | Critical |
345,317,321 | flutter | Default color contrast for FloatingActionButton.extended too low | (This issue was found on the Demo page for the Material Search widget in the gallery)
The contrast ratio between the text and the background of FloatingActionButton.extended is too low with the default theme. It's 2.24 and should be at least 3.0.
Current background color: `#13B9FD`
Current foreground color: `#FFFFFF` | framework,f: material design,a: accessibility,P2,team-design,triaged-design | low | Minor |
345,323,231 | flutter | Outline Button outline stroke is too light | The material outline button has a stroke that is defined at 12% opacity of the provided color. Even in its best state, where the onSurfaceColor is black and the surfaceColor is white (which happens to be the default), the contrast ratio is only 1.31:1 [1].
[1] http://contrast-ratio.com/#hsla%280%2C0%25%2C0%25%2C.12%29-on-white | framework,f: material design,a: accessibility,P2,team-design,triaged-design | low | Minor |
345,333,476 | pytorch | WERROR=1 doesn't work with FULL_CAFFE2 | It seems to toggle too many warnings for Caffe2 and then `-Werror` fails. | caffe2 | low | Critical |
345,350,094 | pytorch | Stop passing inplace/out arguments as (non-const) Tensor& to functions | Currently, when you define an inplace function or a function that writes to an output function, you get a non-const reference. E.g.,
```
Tensor& add_out(Tensor& result, const Tensor& self, const Tensor& other);
```
The use of a non-const reference here is highly misleading, and can lead newbies down the wrong path when writing implementations of these functions. We should instead pass these arguments as ordinary `const Tensor&` references.
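Concretely, a sketch of the proposed declaration (exact return type aside; the point is that the handle comes in by const reference while the storage it points to stays mutable):
```
// Proposed: the Tensor handle is const, the data it points to is not.
const Tensor& add_out(const Tensor& result, const Tensor& self, const Tensor& other);
```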
**Detailed discussion.**
Consider the function above. Assuming that `add` is implemented for tensors, is the following a valid implementation of the function?
```
Tensor& add_out(Tensor& result, const Tensor& self, const Tensor& other) {
result = self.add(other);
return result;
}
```
C++ will not give any error when you do this, but the function will not behave correctly when used from Python:
```
x = torch.randn(N)
y = torch.randn(N)
out = torch.zeros(N)
out2 = out # refers to the same tensor
add_out(x, y, out=out2)
print(out) # out is all zeros?!
```
The intention of the `add_out` function was to write the result of `x + y` into the preallocated memory of `zeros`. `out2` and `out` alias the same memory, so modifications to out2 should be reflected in out.
The reason for this is that when you wrote `result = self.add(other);`, you did not actually change the allocated memory of out, because your implementation was semantically equivalent to this Python code:
```
x = torch.randn(N)
y = torch.randn(N)
out = torch.zeros(N)
out2 = out
out2 = x.add(y) # the code you wrote
print(out) # well of course this is zeros
```
Yes, you modified the C++ reference, but C++ references are meaningless in Python (there is no such thing as C++ style references in Python). You need to modify the actual *underlying memory* of the out tensor object when you implement `add_out`, and assignment over a reference doesn't do that; all it does is change the `Tensor` *pointer* that was passed in. Other aliases to the same memory will not see the difference.
**What should you do instead?**
If the final operation you do is call another function, see if you can call its inplace/out variant instead. For example, you could implement `sub_out` as:
```
Tensor& sub_out(Tensor& result, const Tensor& self, const Tensor& other) {
return add_out(result, self, -other);
}
```
If you are actually writing a kernel (e.g., you're writing code that writes into memory locations in a loop), simply make sure that result is the right size, and just write your results directly into it (rather than allocate a fresh buffer which you would have returned). In this case, the function is typically factored into an `_out` variant that doesn't allocate, and a regular, functional variant that simply calls the `_out` implementation after allocating the result. Example in aten/src/ATen/native/ReduceOps.cpp:
```
Tensor _prod(const Tensor &self, int64_t dim_, bool keepdim) {
int64_t dim = maybe_wrap_dim(dim_, self.dim());
Tensor result = self.type().tensor();
return at::_prod_out(result, self, dim, keepdim);
}
Tensor& prod_out(Tensor& result, const Tensor& self, int64_t dim, bool keepdim) {
// code that writes the result into result
}
```
If you're writing some code that calls into an external library, and it insists on giving you a fresh tensor, you can always smooth over the problem by doing a copy. So, to fix our add_out example from the beginning:
```
Tensor& add_out(Tensor& result, const Tensor& self, const Tensor& other) {
result.resize_(self.sizes());
result.copy_(self.add(other));
}
```
Another idiom is, if an inplace version of the function is available, copy the input into the eventual result buffer, and then do the inplace operation on that buffer. Example from UnaryOps.cpp
```
Tensor& _clamp_out_cpu(Tensor& result, const Tensor& self, Scalar min, Scalar max) {
result.resize_(self.sizes());
result.copy_(self);
return _th_clamp_(result, min, max);
}
```
These strategies are less efficient, so they should be used as a last resort.
cc @yf225 @glaringlee @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang | module: internals,module: cpp,triaged,enhancement | low | Critical |
345,361,695 | pytorch | [feature request] Add COCOB Optimizer | COntinuous COin Betting (COCOB) is a stochastic subgradient descent based algorithm that does not require any learning rate.
Paper- https://arxiv.org/pdf/1705.07795.pdf
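For reviewers unfamiliar with the algorithm, here is a rough, untested sketch of how the COCOB-Backprop update (Algorithm 2 in the paper) could be expressed as a `torch.optim.Optimizer`. The state names follow the paper; please verify the update line against it before relying on this:
```python
import torch
from torch.optim import Optimizer

class COCOBBackprop(Optimizer):
    """Rough, untested sketch of COCOB-Backprop (Algorithm 2 in the paper)."""

    def __init__(self, params, alpha=100.0, eps=1e-8):
        super(COCOBBackprop, self).__init__(params, dict(alpha=alpha, eps=eps))

    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            alpha, eps = group['alpha'], group['eps']
            for p in group['params']:
                if p.grad is None:
                    continue
                g = p.grad.data
                state = self.state[p]
                if len(state) == 0:
                    state['w1'] = p.data.clone()               # initial weights
                    state['L'] = torch.full_like(p.data, eps)  # max |g_i| seen so far
                    state['G'] = torch.zeros_like(p.data)      # sum of |g_i|
                    state['R'] = torch.zeros_like(p.data)      # accumulated reward
                    state['theta'] = torch.zeros_like(p.data)  # sum of -g_i
                L, G, R, theta = state['L'], state['G'], state['R'], state['theta']
                L.copy_(torch.max(L, g.abs()))
                G.add_(g.abs())
                R.copy_(torch.clamp(R + (p.data - state['w1']) * (-g), min=0))
                theta.add_(-g)
                # w_{t+1} = w_1 + theta / (L * max(G + L, alpha * L)) * (L + R)
                p.data.copy_(state['w1'] + theta / (L * torch.max(G + L, alpha * L)) * (L + R))
        return loss
```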
cc @vincentqb | module: optimizer,triaged,function request | low | Minor |
345,367,667 | flutter | TimePicker uses disruptive announcements on Android | | framework,f: material design,a: accessibility,f: date/time picker,a: quality,P2,team-design,triaged-design | low | Minor |
345,372,352 | material-ui | Can't test pages using Material UI with Capybara |
It is literally impossible to test material ui using capybara+selenium-webdriver
- [x] This is a v1.x issue.
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Expected Behavior
This library should not use elements so wrongly that it is impossible to test projects that use it with standard automated tools.
## Current Behavior
Material-UI currently abuses elements so thoroughly that there is no way to test pages that use it with capybara/selenium-webdriver.
## Steps to Reproduce
codesandbox.io can't handle ruby deps, so no point in setting anything up there.
Link: I'll fill this in when i have a second
1. Create a Select element on the page (non-native, because otherwise you can't use multi-select)
2. try to select an option from it using capybara
3. You can't; it's impossible. You can get as far as clicking the option in the popup, but then Selenium throws an unknown error because it can't understand the DOM anymore:
`Selenium::WebDriver::Error::UnknownError: unknown error: option element is not in a select`
## Context
I'm trying to write tests for my project, using capybara & a headless browser driven by selenium-webdriver.
## Your Environment
| Tech | Version |
|--------------|---------|
| Material-UI | v1.4.1 |
| React | 16.4.6 |
| headless Firefox | 62.0b12 |
| headless chrome | 68.0.3440.75 |
| capybara | 3.1.0 |
| selenium-webdriver | 3.12.0 | | waiting for 👍,package: material-ui | low | Critical |
345,373,696 | flutter | Select to Speak reads TabBar tabs inconsistently | Sometimes it only reads visible tabs, other times it reads offscreen tabs. | framework,f: material design,a: accessibility,P2,team-design,triaged-design | low | Minor |
345,376,809 | TypeScript | Show signature help when overriding/implementing method | **TypeScript Version:** 3.1.0-dev.20180727
**Code**
```ts
class A {
m(n: number): void {}
}
class B extends A {
m(/**/)
}
```
**Expected behavior:**
Get signature help for `m(n: number): void`. Extension of #26022.
**Actual behavior:**
No signature help. | Suggestion,Domain: Signature Help,Experience Enhancement | low | Minor |
345,399,751 | go | x/build: add a builder with oldest git supported by cmd/go to catch regressions | ### What did you expect to see?
The bug fix for go breaking on Ubuntu 16 would include a test or CI change to prevent a similar regression in the future.
### What did you see instead?
A code change with no test change.
### Context
https://github.com/golang/go/issues/26501#issuecomment-407431340
| Testing,Builders,NeedsFix,modules,new-builder | low | Critical |
345,400,450 | rust | `#[macro_use]` on `use` broken with gfx-rs | With rust 2015 we can do this:
```rust
#[macro_use] extern crate gfx;
gfx_defines! {
vertex Vertex {
pos: [f32; 2] = "a_Pos",
}
pipeline rect_pipe {
vertices: gfx::VertexBuffer<Vertex> = (),
}
}
fn main() {}
```
But it's 2018 so we should deal with this warning:
```
warning: deprecated `#[macro_use]` directive used to import macros should be replaced at use sites with a `use` statement to import the macro instead
--> src/main.rs:4:1
|
4 | #[macro_use]
| ^^^^^^^^^^^^
|
```
OK, let's give it a try.
```rust
#![feature(rust_2018_preview)]
#![warn(rust_2018_idioms)]
use gfx;
#[macro_use] use gfx::gfx_defines;
gfx_defines! {
vertex Vertex {
pos: [f32; 2] = "a_Pos",
}
pipeline rect_pipe {
vertices: gfx::VertexBuffer<Vertex> = (),
}
}
fn main() {}
```
Oops!
```
error: cannot find macro `gfx_vertex_struct_meta!` in this scope
--> src/main.rs:7:1
|
7 | / gfx_defines! {
8 | | vertex Vertex {
9 | | pos: [f32; 2] = "a_Pos",
10 | | }
... |
13 | | }
14 | | }
| |_^
|
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
error: cannot find macro `gfx_pipeline!` in this scope
--> src/main.rs:7:1
|
7 | / gfx_defines! {
8 | | vertex Vertex {
9 | | pos: [f32; 2] = "a_Pos",
10 | | }
... |
13 | | }
14 | | }
| |_^
|
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
```
Looks like it wants me to import some macros which are used internally by the one that I called. Surely that's a bug in itself. Let's try and work around it though!
Two iterations later, we have `#[macro_use] use gfx::{gfx_defines, gfx_pipeline, gfx_vertex_struct_meta, gfx_impl_struct_meta, gfx_pipeline_inner};` and an odd message.
```
error[E0282]: type annotations needed
--> src/gui/render.rs:19:1
|
19 | / gfx_defines! {
20 | | vertex Vertex {
21 | | pos: [f32; 2] = "a_Pos",
22 | | }
... |
70 | | }
71 | | }
| |_^ cannot infer type for `std::ops::Range<{integer}>`
|
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
```
Cargo suggests running with `-Z external-macro-backtrace`, but then complains that no such `-Z` flag exists (yes, this is a nightly build). I guessed that I should really be passing that flag to rustc and not cargo, so I tried `RUSTFLAGS="-Z external-macro-backtrace" cargo run` and got the info I was looking for.
```
error[E0282]: type annotations needed
--> <gfx_pipeline_inner macros>:88:28
|
1 | / { $ ( $ field : ident : $ ty : ty , ) * } => {
2 | | use $ crate :: pso :: {
3 | | DataLink , DataBind , Descriptor , InitError , RawDataSet , AccessInfo } ; # [
4 | | derive ( Clone , Debug , PartialEq ) ] pub struct Data < R : $ crate ::
... |
88 | | => ( ) , } ) * } for _ in 0 .. 1 {
| | ------
| | |
| | cannot infer type for `std::ops::Range<{integer}>`
| | in this expansion of `desugaring of `...`` (#7)
| | in this macro invocation (#7)
... |
99 | | $ ( meta . $ field . bind_to ( out , & self . $ field , man , access ) ; ) * }
100 | | } }
| |________- in this expansion of `gfx_pipeline_inner!` (#6)
|
::: <gfx_defines macros>:1:1
|
1 | (
| __-
| |__|
| ||__|
| |||__|
| ||||
2 | |||| $ ( # [ $ attr : meta ] ) * vertex $ name : ident {
3 | |||| $ (
4 | |||| $ ( # [ $ field_attr : meta ] ) * $ field : ident : $ ty : ty = $ e : expr , )
... ||||
17 | |||| ) => { gfx_pipeline ! ( $ name { $ ( $ field : $ ty = $ e , ) + } ) ; } ; (
| |||| -------------------------------------------------------------- in this macro invocation (#5)
... ||||
24 | |||| $ ( $ ( # [ $ field_attr ] ) * $ field : $ ty = $ e , ) + } } gfx_defines ! (
| ||||________________________________________________________________-
| |||||________________________________________________________________|
| |||||
25 | ||||| $ ( $ tail ) + ) ; } ; (
| ||||| -
| |||||___________________|
| |||||___________________in this macro invocation (#2)
| |||| in this macro invocation (#4)
... ||||
37 | ||||| gfx_defines ! { $ keyword $ name { $ ( $ field : $ ty = $ e , ) + } }
38 | ||||| gfx_defines ! ( $ ( $ tail ) + ) ; } ;
| ||||| ---------------------------------- -
| |||||__|____________________________________|
| ||||___|____________________________________in this expansion of `gfx_defines!` (#1)
| |||____|____________________________________in this expansion of `gfx_defines!` (#2)
| ||_____|____________________________________in this expansion of `gfx_defines!` (#3)
| | | in this expansion of `gfx_defines!` (#4)
| | in this macro invocation (#3)
|
::: <gfx_pipeline macros>:1:1
|
1 | / ( $ module : ident { $ ( $ field : ident : $ ty : ty = $ value : expr , ) * }
2 | | ) => {
3 | | # [ allow ( missing_docs ) ] pub mod $ module {
4 | | # [ allow ( unused_imports ) ] use super :: * ; # [ allow ( unused_imports ) ]
5 | | use super :: gfx ; gfx_pipeline_inner ! { $ ( $ field : $ ty , ) * } pub fn
| | ------------------------------------------------- in this macro invocation (#6)
6 | | new ( ) -> Init < 'static > { Init { $ ( $ field : $ value , ) * } } } }
| |_________________________________________________________________________- in this expansion of `gfx_pipeline!` (#5)
|
::: src/gui/render.rs:19:1
|
19 | / gfx_defines! {
20 | | vertex Vertex {
21 | | pos: [f32; 2] = "a_Pos",
22 | | }
... |
70 | | }
71 | | }
| |______- in this macro invocation (#1)
```
The ASCII art is beautiful. But I still have no clue what is going on, especially considering that this was working fine when I was doing `#[macro_use] extern crate gfx;`.
`for _ in 0 .. 1 { }` appears to be a fully valid construct when it appears anywhere else but [here](https://docs.rs/gfx/0.17.1/src/gfx/macros/pso.rs.html#193).
version info:
```
rustc 1.29.0-nightly (6a1c0637c 2018-07-23)
binary: rustc
commit-hash: 6a1c0637ce44aeea6c60527f4c0e7fb33f2bcd0d
commit-date: 2018-07-23
host: x86_64-unknown-linux-gnu
release: 1.29.0-nightly
LLVM version: 7.0
```
gfx crate is at `0.17.1` | A-lints,S-needs-repro,A-edition-2018 | low | Critical |
345,436,704 | rust | Special-case `format!("{}", string_like)` for increased performance | Changes like #52767 and existence of the [useless_format](https://rust-lang-nursery.github.io/rust-clippy/v0.0.212/index.html#useless_format) clippy lint show that people end up writing `format!("{}", a_string)` way more often than they should (i.e., >0 times).
Since `format` is a proc macro/builtin in the compiler, and since this specific case should not have any side effects other than producing a `String`, I want to propose adding this special case:
When expanding a call of the `format` macro (and before using `format_args!`), check if the formatting string is literally `"{}"`, and, if the only other parameter `x` is either of type `&str`, `String`, or `Cow<str>`, expand the macro to `ToString::to_string(x)`.
This is quite a conservative and concrete refinement. It could easily be adjusted to support more string-like types.
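To spell out the first rewrite, a hypothetical illustration (this is what the expansion would be equivalent to, not actual rustc output):
```rust
let name = String::from("world");
let a = format!("{}", name);        // today: goes through the full fmt machinery
let b = ToString::to_string(&name); // proposed expansion when the literal is exactly "{}"
assert_eq!(a, b);
```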
An additional special case to consider is `format!("foo")` (with only a literal formatting string); it should expand to `String::from("foo")`. | I-slow,C-enhancement,T-libs-api | low | Major |
345,448,483 | flutter | Return index of the first visible item in ListView | In the current implementation of ListView, there is no easy way to find out which item is the first one in the visible part of the screen. There are many use cases for knowing the index of the first visible item, for instance playing the audio of the first visible item.
Could you please add this feature to the ListView? Thank you.
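In the meantime, the only workaround I know of requires fixed-extent items; a sketch, where `items` and `ItemTile` are placeholders:
```dart
final ScrollController controller = ScrollController();
const double itemHeight = 56.0; // the workaround assumes a fixed item extent

int firstVisibleIndex() => (controller.offset / itemHeight).floor();

Widget build(BuildContext context) {
  return ListView.builder(
    controller: controller,
    itemExtent: itemHeight,
    itemCount: items.length,
    itemBuilder: (context, index) => ItemTile(items[index]),
  );
}
```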
| c: new feature,framework,f: scrolling,customer: crowd,P3,team-framework,triaged-framework | high | Critical |
345,453,261 | pytorch | python caffe2/python/operator_test/activation_ops_test.py Segmentation fault (core dumped) | /Downloads$ python /home/jasonma/caffe2/pytorch/build/caffe2/python/operator_test/activation_ops_test.py
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: No module named caffe2_pybind11_state_hip
python: Relink `/usr/lib/x86_64-linux-gnu/libnspr4.so' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
python: Relink `/usr/lib/x86_64-linux-gnu/libopen-pal.so.20' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
python: Relink `/usr/lib/x86_64-linux-gnu/libopen-pal.so.20' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_getres'
python: Relink `/usr/lib/x86_64-linux-gnu/libopencv_core.so.3.2' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
python: Relink `/usr/lib/x86_64-linux-gnu/libmpi.so.20' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_getres'
python: Relink `/usr/lib/x86_64-linux-gnu/libmpi.so.20' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
Segmentation fault (core dumped)
--------------------------------------------------------
How can I fix this?
@eklitzke
@resistor
@huitseeker
@jfsantos | caffe2 | low | Critical |
345,470,008 | rust | Investigate the Ryū algorithm for a simpler/faster implementation of float -> string conversion | There's a new paper making the rounds on the topic of converting floats to their decimal string representations that claims to be both simpler and faster than prior algorithms: https://pldi18.sigplan.org/event/pldi-2018-papers-ry-fast-float-to-string-conversion . I'm particularly interested in the simplicity aspect, since I recall some old conversations regarding our current machinery for this being somewhat subtle and complicated (for speed purposes, I imagine). If we could drastically simplify the implementation without sacrificing speed, that might be a win. Good student or intern project, methinks.
(Apologies for how wishlisty this issue is, I've no clue who might be the presiding expert on our current float->string implementation.) | I-slow,T-libs-api | medium | Major |
345,484,177 | puppeteer | Request event doesn't include websocket requests |
**Tell us about your environment:**
Puppeteer version: 1.6.1
Platform / OS version: Mac OS 10.11.6
URLs (if applicable): about:blank
Node.js version: 10.7.0
**What steps will reproduce the problem?**
```js
var puppeteer = require('puppeteer');
(async () => {
var browser = await puppeteer.launch(
{headless:false}
);
var page = await browser.newPage();
await page.setRequestInterception(true);
page.on('request', req => {
console.log(req);
    let headers = req.headers();
headers['origin'] = 'http://www.multiplayerpiano.com';
req.continue({
headers: headers
});
});
page.on('console', console.log);
page.goto(`file://${__dirname}/ppage.html`)
})();
```
**What is the expected result?**
Websocket requests should be logged to console and the origin header changed by the interception
**What happens instead?**
I don't get WebSocket requests logged to the console, and their origin header is not changed. The "request" event doesn't emit for WebSocket requests.
| feature,chromium,confirmed | medium | Critical |
345,497,745 | pytorch | [feature request] Add matrix functions | Add matrix power as implemented by [numpy.linalg.matrix_power](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.matrix_power.html), matrix exponential as implemented by [scipy.linalg.expm](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.linalg.expm.html), and matrix logarithm as implemented by [scipy.linalg.logm](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.linalg.logm.html).
- [x] matrix_power
- [x] matrix_exp
- [ ] matrix_log (For an implementation see https://github.com/pytorch/pytorch/issues/9983#issuecomment-891777620)
- [ ] matrix_sqrt (For an implementation see https://github.com/pytorch/pytorch/issues/9983#issuecomment-907530049)
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk @xwang233 @Lezcano @rgommers | triaged,module: numpy,module: linear algebra,function request | high | Critical |
345,523,782 | godot | Godot slow to open, slow to edit, slow to launch simple game [Windows, caused by specific USB peripherals] | *Bugsquad edit:*
This bug has been well confirmed as something that's caused by specific USB peripherals and their drivers, apparently triggering an elusive DirectInput bug on Windows which is only reproducible in specific combinations of hardware (both host and peripheral) and drivers. This seems to be triggered particularly by some brands of USB keyboards, mice or audio devices (especially Digital Audio Converters).
See https://stackoverflow.com/questions/10967795/directinput8-enumdevices-sometimes-painfully-slow which seems to be the reference StackOverflow issue for this problem. We still don't know how to work around this in a way that wouldn't require users to manually upgrade or disable bogus Windows drivers.
----
**Godot version:**
<!-- Specify commit hash if non-official. -->
3.0.6 from Steam.
Also same issue on fresh download from https://godotengine.org/
This happened on previous versions too. It's been happening for about 3 months or so.
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
Windows 10 PRO x86_64
Version 1803
OS build 17134.167
GPU Nvidia GTX980ti
GPU driver 398.36
**Issue description:**
<!-- What happened, and what was expected. -->
Opening Godot from within Steam or from the native download takes over 40 seconds.
Opening a very simple project in edit mode takes 35 seconds.
Pressing the play icon on this project from within Godot takes 46 seconds before the game window opens.
**Steps to reproduce:**
I can reproduce this every time just by opening or creating a basic project.
I get the same issues when I launch one of the demo projects, such as multiplayer pong.
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
Here is the minimal project that takes the time mentioned, but I get this issue on all projects.
[Hello Godot.zip](https://github.com/godotengine/godot/files/2238695/Hello.Godot.zip)
I have also attached the output from the cmd windows that opens when Godot is launched.

| bug,platform:windows,topic:porting,confirmed,topic:thirdparty,high priority,performance | high | Critical |
345,533,662 | electron | Add app event: `before-browser-window-created` | It would be great to have a `before-browser-window-created` event that developers can use to create reusable modules that can globally intercept and modify `BrowserWindow` options before window creation.
It is already possible to do this for `BrowserWindow`s passed through the [`webContents 'new-window'`](https://electronjs.org/docs/api/web-contents#event-new-window) event but not for windows created from the `BrowserWindow` constructor.
This would be useful for reusable:
- Window managers
- Window state/position persisting
- Other things?
```
app.on(`before-browser-window-created`, (options: BrowserWindowConstructorOptions) => {
// do something fancy with options?
});
```
| enhancement :sparkles: | low | Minor |
345,538,351 | go | crypto/cipher: It should support another interface for CTR mode | ### What did you do?
I want to write code that can efficiently encrypt or decrypt a portion of large files using random access I/O.
**Theoretically, in CTR mode it is possible to encrypt/decrypt an arbitrary block independently.**
But lacking an alternative to the Stream interface, there is no way to take advantage of this property of CTR mode.
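For illustration, a hedged sketch of the random access I'm after, built on the current API by hand-advancing the counter; it assumes the IV is incremented as one big-endian integer over the whole block, which matches the current CTR implementation:
```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
)

// ctrAt returns a Stream positioned blockIndex blocks into the keystream
// by adding blockIndex to the IV as a big-endian counter.
func ctrAt(b cipher.Block, iv []byte, blockIndex uint64) cipher.Stream {
	ctr := make([]byte, len(iv))
	copy(ctr, iv)
	carry := blockIndex
	for i := len(ctr) - 1; i >= 0 && carry > 0; i-- {
		sum := uint64(ctr[i]) + (carry & 0xff)
		ctr[i] = byte(sum)
		carry = (carry >> 8) + (sum >> 8)
	}
	return cipher.NewCTR(b, ctr)
}

func main() {
	key := make([]byte, 32)
	iv := make([]byte, aes.BlockSize)
	b, _ := aes.NewCipher(key)

	// Decrypt 1 KiB starting at byte offset 1 MiB without touching the
	// first megabyte of ciphertext.
	off := uint64(1 << 20)
	stream := ctrAt(b, iv, off/aes.BlockSize)
	buf := make([]byte, 1024) // ciphertext read at that offset
	stream.XORKeyStream(buf, buf)
}
```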
### What did you expect to see?
Another interface for CTR mode, or maybe custom counter support would be good too.
### What did you see instead?
Only Stream interface exists. | NeedsInvestigation | low | Minor |
345,542,090 | godot | RigidBody2D won't detect collision sometimes when colliding with 2 StaticBody2D | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
master @ cfcb6e11f25adb13177ba08777263288a5ec6f61
(just before typed gdscript merge)
**OS/device including version:**
Windows 10 Home Version 1803 Build 17134.147
Processor i9-8950HK
Nvidia GTX 1050 Ti
**Issue description:**
When a RigidBody2D is falling and collides with 2 StaticBody2D, sometimes it does not detect the collision and just passes through. It may have to do with the collision points not pointing upward.

In this test, I try different CCD settings, and nothing seems to fix the issue.
**Steps to reproduce:**
1. Let a RigidBody2D fall and collide with 2 StaticBody2D one next to each other
2. The body will pass through sometimes (it should not)
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
[2D-Platformer-Demo.zip](https://github.com/godotengine/godot/files/2238933/2D-Platformer-Demo.zip) | bug,confirmed,topic:physics | low | Critical |
345,563,994 | three.js | Scene: Respect original material settings in .overrideMaterial | When setting the override material for a scene the maps and other settings on the original material are not used -- this applies to animations, mesh instances, and other uniforms and defines, as well. This jsfiddle shows that the normal map is not applied when rendering the scene with a `MeshNormalMaterial` applied:
http://jsfiddle.net/wbhrd58c/

This makes it difficult to render things like normal and depth buffers for screen effects. So when an overrideMaterial is used, the defines and uniforms of the original should be used (if they exist on the override material). This would let an override material use the color, displacementMap, normalMap, textures, skinning settings, etc. of the original.
This mechanic could be used to make post-processing passes more robust and correct, afford a depth prepass with correct vertex transformations, a deferred renderer that uses the original material properties without having to manually copy everything, and shadows that support displacement maps out of the box.
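In the meantime, a hedged workaround sketch: swap materials per mesh before rendering and hand-copy the relevant settings (just the normal map here):
```js
const saved = new Map();
scene.traverse( obj => {
  if ( ! obj.isMesh ) return;
  const override = new THREE.MeshNormalMaterial();
  if ( obj.material.normalMap ) {
    override.normalMap = obj.material.normalMap;
    override.normalScale.copy( obj.material.normalScale );
  }
  saved.set( obj, obj.material ); // remember the original material
  obj.material = override;
} );
renderer.render( scene, camera );
saved.forEach( ( material, obj ) => { obj.material = material; } );
```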
This is closer to how Unity's [Shader Replacement](https://docs.unity3d.com/Manual/SL-ShaderReplacement.html) works to afford this functionality:
> the camera renders the scene as it normally would. the objects still use their materials, but the actual shader that ends up being used is changed
In Unity if the objects material doesn't define a specific uniform then the default shader uniform value is used. To allow for backwards compatibility this could be included as an opt-in feature:
```js
Scene.setOverrideMaterial( material : Material, overrideUniforms : Boolean );
``` | Enhancement | low | Major |
345,565,747 | javascript-algorithms | My concern about no classification by Procedural/Imperative and Functional programming Paradigm | Hi, this is a great project. Thanks.
I have a concern that I would like to share.
For instance, looking at `/algorithms/math/factorial`, which is one of the most basic math topics:
https://github.com/trekhleb/javascript-algorithms/tree/master/src/algorithms/math/factorial
I found 2 implementations:
### factorial.js
```js
export default function factorial(number) {
let result = 1;
for (let i = 2; i <= number; i += 1) {
result *= i;
}
return result;
}
```
### factorialRecursive.js
```js
export default function factorialRecursive(number) {
return number > 1 ? number * factorialRecursive(number - 1) : 1;
}
```
`factorial.js` is code in a Procedural/Imperative programming style, and uses mutable variables.
`factorialRecursive.js` is recursive, and can be said to be in a functional programming style, immutable. Although this is a typical implementation that can be seen everywhere, in terms of "Big O notation" it is rather an anti-pattern.
A better, or I would say proper, way is:
### factorialFunctional.js
```js
//[...Array(5).keys()]
//-> [ 0, 1, 2, 3 ,4 ]
const natural = n => {
const arr0 = [...Array(n + 1).keys()];
const [first, ...arr] = arr0;
return arr;
};
console.log(
natural(5) //[ 1, 2, 3, 4, 5 ]
);
function factorialFunctional(number) {
const list = number < 1
? [1]
: natural(number)
const multiply = (a, b) => a * b;
return list.reduce(multiply);
}
console.log(
factorialFunctional(5)//120
);
```
This is as efficient as `factorial.js` in terms of "Big O notation", and immutable.
I think when algorithms are presented, it's significantly important to clarify what kind of programming paradigm the algorithms are based on.
Currently, it seems the contributions are added without a formal classification, and I think it's a good idea to show a guideline in the README that clarifies which paradigm every algorithm belongs to. In this manner, contributors will notice "oh, here, there is no Functional or Imperative pattern yet, so I will add.."
Thanks.
| enhancement | low | Major |
345,567,518 | flutter | DataTable only communicates selection/deselection visually | Ideally checkbox state would be communicated. We could also use a live region on the header to update the selected count without being too chatty. | framework,f: material design,a: accessibility,P2,team-design,triaged-design | low | Minor |
345,570,287 | go | cmd/compile, runtime: pack info into low bits of interface type pointers | We've generally avoided doing any bit packing, so I don't anticipate this happening, but wanted to record the idea for reference and in case it generates interesting discussion.
We could pack some information into the bottom bits of interface type pointers. As long as `sizeof(reflect.type) % 16 == 0`, it'll point within the same object, so there should be no GC impact.
For example, for strings with 0 < len <= 15, instead of `(*typ, *str)` we could have `(*typ + len(s), str.ptr)`.
Then `i.(string)` ends generating code like:
```go
if i.typ&15 == 0 {
str = *i.data
} else {
str = (i.typ&15, i.data)
}
```
(Note that strings of length 0 are already allocation-free by pointing into runtime.zero.)
You could do something similar for small ints and tiny slices (2 bits each for len, cap-len).
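Purely illustrative pseudo-runtime code for the small-int case (this couldn't exist outside the runtime as written):
```go
// stash value+1 in the low 4 bits of the type word; tag 0 means "not packed"
func smallIntFromIface(typ uintptr, data unsafe.Pointer) (int64, bool) {
	if tag := typ & 15; tag != 0 {
		return int64(tag - 1), true // values 0..14 need no allocation
	}
	return *(*int64)(data), false // ordinary boxed path
}
```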
The impact of this would be fairly localized, I think--just type switches, interface assertions, interface equality checks, and some choice bits of the runtime.
For reference, over make.bash, here are percents, counts, types, and length (and caps for slices) of calls to `convT2(E|I)(string|slice)`:
```
7.52% 117662 convT2Estring string 5
7.09% 110979 convT2Estring string 6
6.93% 108445 convT2Estring string 1
5.85% 91610 convT2Estring string 3
5.45% 85249 convT2Eslice []uint8 1 1
4.97% 77838 convT2Estring string 4
4.91% 76785 convT2Estring string 7
3.78% 59191 convT2Estring string 8
3.08% 48164 convT2Estring string 9
3.07% 48104 convT2Eslice []uint8 0 20
2.59% 40474 convT2Estring string 2
2.35% 36777 convT2Islice dwarf.byChildIndex 1 1
2.11% 33001 convT2Islice gc.methcmp 0 0
2.08% 32491 convT2Estring string 10
1.79% 28062 convT2Islice dwarf.byChildIndex 0 0
1.64% 25712 convT2Estring string 11
1.43% 22429 convT2Estring string 12
1.43% 22339 convT2Estring string 20
1.25% 19502 convT2Islice gc.byClassThenName 1 1
1.20% 18828 convT2Estring string 21
1.16% 18145 convT2Islice dwarf.byChildIndex 2 2
1.14% 17778 convT2Estring string 13
1.07% 16706 convT2Estring string 14
1.06% 16523 convT2Estring string 16
```
This scheme would cover a lot of these.
| Performance,NeedsDecision,compiler/runtime | low | Minor |
345,573,541 | node | doc: explanation about flagged feature etiquette needed | There is a lot of misunderstanding in the ecosystem about how harmony* and experimental flags should be used, most alarmingly in the case of libraries that ship support for features that are flagged.
We need to document:
- Why flags are how we (and V8*) ship experimental features
- Why you should only use flagged features at an application level
- Why you shouldn't use flagged features in production environments
- Why you shouldn't publish versions of your libraries with support for flagged features
- Given the restrictions above, ways that you *can* help us test flagged features
This seems like a lot for one person so I thought I would open an issue about it and get some collaboration going.
\*We probably shouldn't document V8's flagging, but it might be necessary to at least mention harmony flags. | help wanted,doc | low | Major |
345,577,881 | rust | #[repr(align(…))] should allow arbitrary constant expressions, not just integer literals | For example:
```rust
const ALIGN_OF_FOO: usize = 32;
#[repr(align(ALIGN_OF_FOO))]
struct Foo;
```
Currently this produces an error:
```
error[E0552]: unrecognized representation hint
--> src/main.rs:2:8
|
2 | #[repr(align(ALIGN_OF_FOO))]
| ^^^^^^^^^^^^^^^^^^^
```
A potential use case is for [opaque struct definitions](https://www.reddit.com/r/rust/comments/92m2rj/reusing_a_byte_array_as_an_object/).
I could submit this as a new RFC, but I can't think of any reason not to allow it or any ambiguity as to how it should work, so I'm hoping an issue report may be sufficient. | T-compiler,C-feature-request,A-repr | low | Critical |
345,579,258 | go | x/net/http2/h2c: support closure of all connections and graceful shutdown | Very happy to see h2c supported: https://github.com/golang/net/commit/c4299a1a0d8524c11563db160fbf9bddbceadb21
However, there is no way to enable graceful shutdown, or even close all h2c connections. We should support this.
You could use `http2.ConfigureServer` to configure an http2 server's graceful shutdown to be called when the http server is shut down, but the http server's shutdown method will not wait for the http2 connections to close because they have been hijacked.
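A hedged sketch of that partial wiring (it compiles, but `Shutdown` still returns without draining the hijacked h2c connections):
```go
package main

import (
	"context"
	"net/http"
	"time"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	h2s := &http2.Server{}
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	srv := &http.Server{Addr: ":8080", Handler: h2c.NewHandler(handler, h2s)}
	// Registers the http2 graceful-shutdown hook on srv.
	http2.ConfigureServer(srv, h2s)

	go srv.ListenAndServe()
	time.Sleep(100 * time.Millisecond)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Returns once plain HTTP/1 connections drain; the hijacked h2c
	// connections are not waited on.
	srv.Shutdown(ctx)
}
```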
I think it'd be simplest if we added a `Shutdown` and `Close` method to `*http2.Server` analogous to `*http.Server`'s methods. | NeedsInvestigation,FeatureRequest | low | Major |
345,563,994 | flutter | Flutter sending message from native to the dart side : Dart side listen to native events | I am building an audio app in Flutter. I am already able to show remote player controls on the iOS lock screen, and those events are being received.
The only problem is that those events are being received only on the native side:
How can the native side pass a message to the Dart side **without the Dart code initiating the call**?
Most of the examples show communication starting with the Dart code calling the native side, and then the native responds. _Is the inverse possible?_ _How can the inverse be achieved?_
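For reference, a hedged Dart-side sketch using an `EventChannel`, which is the standard mechanism for native-initiated event streams (the channel name is made up):
```dart
import 'package:flutter/services.dart';

const EventChannel _remoteCommands =
    EventChannel('audio_app/remote_commands');

void listenForRemoteCommands() {
  _remoteCommands.receiveBroadcastStream().listen((dynamic event) {
    // called whenever the native side pushes an event into the stream
    print('native event: $event');
  });
}
```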
| engine,d: api docs,P2,a: plugins,team-engine,triaged-engine | low | Major |
345,590,372 | rust | Rust fails to infer types for this function | [Playground](http://play.rust-lang.org/?gist=00c5e5c8d5c34d22b34ba0d9442053f4&version=stable&mode=debug&edition=2015)
[Playground that works](http://play.rust-lang.org/?gist=c73ea26df08348ed121281339f57c25b&version=stable&mode=debug&edition=2015)
As far I know, this should be a bug. There is only one impl of `From<A>` that satisfies the type constraints defined in the function, which is `impl From<A> for AMain`. Same thing with `From<B>` and `impl From<B> for BMain`.
edit: GitHub automatically opens the url with an extra slash, please remove it for the playground to load properly | T-compiler,A-inference,C-bug | low | Critical |
345,633,672 | TypeScript | Tuple iteration and merging | **Edit:** Remove indexing part
**Edit 2:** Replace tuple mapping syntax with [normal mapped types](https://github.com/Microsoft/TypeScript/pull/26063)
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
Concatenate and merge tuples' entries.
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
1. I would like to be able to extract and spread in positions other than the last part of a tuple.
1. I would like to merge a tuple into an intersection or concatenation of its members.
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
- Converting a tuple to its intersection, such as with `Object.assign`'s assignee.
- Typing `Array.prototype.concat` correctly for tuples.
## Syntax
The syntax comes in a few new forms:
- `[...A, B]` appends `B` to the tuple `A`. Similarly, `[...infer A, any] extends B ? A : []` drops the last item of the tuple and `[...any[], infer A] extends B ? A : never` extracts the first.
- `{... ...T}` for an n-ary merge and `[... ...T]` an n-ary concatenation.
## Examples
Here's the types for each method I mentioned above.
```ts
interface Array<T> {
concat<U extends any[]>(...others: U): [...this, ... ...{[I in keyof U]: (
        U[I] extends any[] ? U[I] : U[I] extends ArrayLike<infer R> ? R[] : [U[I]]
)}];
}
interface ObjectConstructor {
assign(target: {... ...typeof sources}, ...sources: object[]): typeof target;
}
```
<!-- Show how this would be used and what the behavior would be -->
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript / JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. new expression-level syntax) | Suggestion,In Discussion | medium | Critical |
346,069,201 | flutter | Dynamic Forms | How does Flutter implement dynamic forms? | a: text input,c: new feature,framework,P3,team-framework,triaged-framework | low | Major |
346,106,071 | TypeScript | Preserve executable file permission when generating from executable .ts file | Many libraries ship executable files to help with testing or administration, so it is convenient to keep the executable permission on UNIX-like systems when generating .js files from .ts files. The creator of the library can provide a build script or manually maintain the proper permissions in the distributed generated library, but this is sub-optimal.
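A hedged sketch of that build-script workaround (the emitted path is hypothetical):
```bash
tsc && chmod +x dist/jem_cli.js
```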
**TypeScript Version:** 2.9.2
**Search Terms:** file permissions executable
**Expected behavior:**
```bash
$ ll lib/jem_cli.ts
-rwxr-xr-x 1 omg omg 5.9K 30 Jul 12,58 lib/jem_cli.ts
$ tsc
TSFILE: ..../jem_cli.js
$ ll jem_cli.js
-rwxr-xr-x 1 omg omg 8.2K 30 Jul 13,23 jem_cli.js
```
**Actual behavior:**
```bash
$ ll lib/jem_cli.ts
-rwxr-xr-x 1 omg omg 5.9K 30 Jul 12,58 lib/jem_cli.ts
$ tsc
TSFILE: ..../jem_cli.js
$ ll jem_cli.js
-rw-r--r-- 1 omg omg 8.2K 30 Jul 13,23 jem_cli.js
```
| Suggestion,Help Wanted | medium | Major |
345,704,717 | rust | rustc infinite loop on recursive type (FingerTree, indirectly polymorphic/nonregular) | Spawned off of #4363, specifically the [example provided by @goffrie](https://github.com/rust-lang/rust/issues/4363#issuecomment-73419917)
This example code causes rustc to infinite loop ([playground](https://play.rust-lang.org/?gist=1addfdb2297fd4c93804a07eada62865&version=nightly&mode=release&edition=2015)):
```rust
enum FingerTree<A> {
Empty,
Single(A),
Deep(Node<A>)
}
struct Node<A> {
count: i32,
front: Digit<A>,
inner: Box<FingerTree<(A,A)>>,
back: Digit<A>
}
struct Digit<A> {
count: i32,
content: [Option<A>; 4]
}
fn FingerTree<A>() -> FingerTree<A> { FingerTree::Empty }
fn main() {
let _ = FingerTree::Deep(Node { count: 0,
front: Digit { count: 0, content: [None, None, None, None] },
inner: Box::new(FingerTree::Single((1, 2))),
back: Digit { count: 0, content: [None, None, None, None] }}
);
}
``` | P-medium,T-compiler,I-hang | low | Minor |
345,738,590 | angular | bug(animations) nested animations don't work properly when switching between states | ## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Performance issue
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
[ ] Other... Please describe:
</code></pre>
## Current behavior
Nested animations don't work properly when switching between states
## Expected behavior
Nested animations should always work.
## Minimal reproduction of the problem with instructions
Working (without switching between states; with *ngIf):
https://stackblitz.com/edit/angular-sev2pq?file=src%2Fapp%2Fapp.component.html
Not working (switching between states):
https://stackblitz.com/edit/angular-skyqmo?file=src%2Fapp%2Fapp.component.html
## What is the motivation / use case for changing the behavior?
## Environment
<pre><code>
Angular version: X.Y.Z
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [x] Chrome (desktop) version 68
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| type: bug/fix,area: animations,freq2: medium,P3 | low | Critical |
345,772,650 | vue | Typescript - Component's property types are not correct | ### Version
Vuejs: 2.5.16
Typescript: 2.8.1
### Reproduction link
[https://stackblitz.com/edit/typescript-rrnw8z?file=index.ts](https://stackblitz.com/edit/typescript-rrnw8z?file=index.ts)
### Steps to reproduce
- Use typescript
- Create a component with at least one property
### What is expected?
If your property of type `X` is not required, its type should be `X | undefined`
### What is actually happening?
If your property of type `X` is not required, its type is still `X`
---
More dangerous: The properties are not required by default and you can easily write code that will fail at runtime.
Note: The stackblitz above won't show the error because the types are not correctly assumed and `this` is assumed as `any`
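A hedged minimal repro of the runtime failure (component code is hypothetical, typed against the 2.5.x declarations):
```ts
import Vue from 'vue';

export default Vue.extend({
  props: {
    msg: String, // not required, so it may be undefined at runtime
  },
  computed: {
    shout(): string {
      // Type-checks because `this.msg` is inferred as `string`,
      // but throws at runtime when the prop is omitted.
      return this.msg.toUpperCase();
    },
  },
});
```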
<!-- generated by vue-issues. DO NOT REMOVE --> | typescript | low | Critical |
345,788,274 | pytorch | RNN gradients in eval mode in pytorch 0.4 | I need to be able to compute the gradients of an RNN with dropout temporarily turned off. In earlier versions of pytorch, it was possible to do this just by setting model.eval() and calling loss.backward(), but in pytorch 0.4 I get this error message:
```
Traceback (most recent call last):
File "dynamiceval.py", line 271, in <module>
gradstat()
File "dynamiceval.py", line 137, in gradstat
loss.backward()
File "/opt/conda/envs/pytorch04/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/envs/pytorch04/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cudnn RNN backward can only be called in training mode
```
Is there, or could there be, any workaround that allows dropout to be temporarily turned off while still allowing gradients to be computed?
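One hedged workaround sketch (attribute juggling like this is an assumption, not an official API): stay in training mode but zero the RNN's dropout probability, since the value is read at forward time:
```python
import torch
import torch.nn as nn

rnn = nn.LSTM(10, 20, num_layers=2, dropout=0.5).cuda()
inp = torch.randn(5, 3, 10, device='cuda')

saved_p = rnn.dropout
rnn.dropout = 0.0  # dropout probability is consulted at forward time
rnn.train()        # stay in training mode so cudnn backward is allowed
try:
    out, _ = rnn(inp)
    out.sum().backward()
finally:
    rnn.dropout = saved_p  # restore the original dropout setting
```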
Thanks!
| module: nn,module: rnn,triaged | medium | Critical |
345,812,045 | pytorch | Upgrade Caffe2 Hypothesis | Currently, we are pegged on Hypothesis 3.59.0 as per https://github.com/pytorch/pytorch/pull/9830
This is bad; we should endeavor to take updates to the versions of our dependencies on a timely basis.
Unfortunately, the new version of Hypothesis has some behavior changes which cause previously stable tests to start flaking. Known cases: https://github.com/pytorch/pytorch/issues/9854 https://github.com/pytorch/pytorch/issues/9853 https://github.com/pytorch/pytorch/issues/9833 https://github.com/pytorch/pytorch/issues/9832
These need to be fixed before we can upgrade. | caffe2 | low | Minor |
345,858,259 | flutter | Improved Image Widget | The current Image implementation has a series of disadvantages that should be improved:
- With Image.network, if the URL isn't available there is no way to react to that. We should offer an errorBuilder that fills the image space if a loading error occurs, or at least offer an error placeholder image that will be displayed in that case; a general builder would be more flexible (see the API sketch after this list).
- There is no loading placeholder, which could optimally be a builder too so it could even play some animation during image load time.
- We should be able to define how long images are cached, and whether on disk or in memory
- I don't know how it is currently implemented but are images scaled down before caching?
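A purely hypothetical API sketch of the above; none of these parameters exist today:
```dart
Image.network(
  'https://example.com/photo.jpg',
  loadingBuilder: (context) => const CircularProgressIndicator(),
  errorBuilder: (context, error) => const Icon(Icons.broken_image),
  cacheDuration: const Duration(days: 7), // how long to keep the cached image
  cacheOnDisk: true, // disk vs. memory caching
  decodeWidth: 300, // scale down before caching
)
```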
As an inspiration I can only recommend https://github.com/luberda-molinet/FFImageLoading an awesome Xamarin image library | c: new feature,framework,a: images,P3,team-framework,triaged-framework | low | Critical |
345,873,960 | flutter | Code samples for how to use tabBar with flexibleSpace in an AppBar | The AppBar documentation doesn't show how to get these fields to interact well: FlexibleSpace shows 'behind' the TabBar, so it can make more sense to put the TabBar into the FlexibleSpace.
We should add a code sample to the AppBar documentation that illustrates how to do this. | team,framework,f: material design,d: api docs,d: examples,P2,team-design,triaged-design | low | Minor |
345,908,327 | rust | Variadic C function calls don't decay references to pointers as other function types do | It seems like calling a variadic C function doesn't follow the exact same reference-to-pointer conversion rules as calling a regular function:
```rust
fn bar(_a: *mut i32, _b: *mut i32) {}
extern "C" fn bar2(_a: *mut i32, _b: *mut i32) {}
extern crate libc; // 0.2.42
extern "C" {
#[no_mangle]
fn sscanf(_: *const libc::c_char, _: *const libc::c_char, ...) -> libc::c_int;
}
fn main() {
let mut i: libc::c_int = 1i32;
let k: *mut libc::c_int = &mut i;
let mut l: [libc::c_int; 2] = [0; 2];
unsafe {
sscanf(k as *const libc::c_char,
b"%u,%u\x00" as *const u8 as *const libc::c_char,
&mut l[0usize],
&mut l[1usize]); // Err
}
let mut x = 1;
bar(&mut x, &mut x); // Ok
bar(&mut l[0], &mut l[1]); // Ok
bar2(&mut l[0], &mut l[1]); // Ok
}
```
([Playground](https://play.rust-lang.org/?gist=caa2db63af71f98f07b969e272a844fe&version=stable&mode=debug&edition=2015))
Errors:
```rust
Compiling playground v0.0.1 (file:///playground)
error[E0499]: cannot borrow `l[..]` as mutable more than once at a time
--> src/main.rs:20:21
|
19 | &mut l[0usize],
| --------- first mutable borrow occurs here
20 | &mut l[1usize]);
| ^^^^^^^^^- first borrow ends here
| |
| second mutable borrow occurs here
error: aborting due to previous error
For more information about this error, try `rustc --explain E0499`.
error: Could not compile `playground`.
To learn more, run the command again with --verbose.
```
In this example, calls to `bar` (rust fn) and `bar2` (extern "C" fn) will convert the mutable references to pointers implicitly, and so they succeed at compiling even though they would not normally if the signature took mutable references per rust's standard rules.
Obviously because `sscanf` is variadic, it doesn't specify pointer types in its signature. I nevertheless expected the call to compile because `sscanf` is extern "C" and rust references should decay to C pointers.
The workaround is to explicitly cast the `sscanf` references to pointers:
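For example:
```rust
unsafe {
    sscanf(k as *const libc::c_char,
           b"%u,%u\x00" as *const u8 as *const libc::c_char,
           // each mutable borrow is immediately converted to a raw
           // pointer, so the borrows no longer overlap
           &mut l[0usize] as *mut libc::c_int,
           &mut l[1usize] as *mut libc::c_int);
}
```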
| A-FFI,T-lang,C-bug | low | Critical |
345,947,227 | go | cmd/gofmt: unfinished else statement followed by if statement should maybe have indentation | ### What version of Go are you using (`go version`)?
```
go version go1.10.2 darwin/amd64
```
### Does this issue reproduce with the latest release?
Yep
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/matt/Library/Caches/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/matt/Dev"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/d4/3d8k5bgd62v9h2c5f05xkyx80000gn/T/go-build921281437=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
Started writing an `else` statement but got distracted fixing an issue elsewhere in the program and didn't finish the statement. Definitely my bad! However, gofmt [made this bug a little harder to find](https://twitter.com/mholt6/status/1023663407323467776):
https://play.golang.org/p/AdIDE1dT9Sd
Notice that the next statement is an `if`, with a comment between them. Because of the comment, `gofmt` didn't move the `if` up to the line with the `else`. That seems correct; however, since it is technically a continuation of the `else`, indenting the `if` would have been helpful for catching it.
### What did you expect to see?
```
...
} else
	// unfinished statement above
	if b == 2 {
		fmt.Println("b is 2")
	}
...
```
### What did you see instead?
```
...
} else
// unfinished statement above
if b == 2 {
	fmt.Println("b is 2")
}
...
```
This is just a proposed enhancement -- not urgent, and definitely still my fault for not finishing a line I started -- but if `gofmt` could figure this out and cue me in by indenting, that would be helpful. :)
Edit: Seems similar to https://github.com/golang/go/issues/20562 -- at least one of the comments there mentions the same issue, it looks like. Feel free to close if you think this is a duplicate. I can't quite tell if that's a separate formatting issue originally... | NeedsInvestigation | low | Critical |
345,954,821 | go | net/http: PUT: DefaultClient significantly slower than DefaultTransport when network is slow | **What version of Go are you using (go version)?**
go version go1.10.3 darwin/amd64
**Does this issue reproduce with the latest release?**
yes
**What operating system and processor architecture are you using (go env)?**
GOARCH="amd64"
GOBIN="/Users/test/go/bin"
GOCACHE="/Users/test/Library/Caches/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/test/go"
GORACE=""
GOROOT="/usr/local/Cellar/go/1.10.3/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.10.3/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/76/trlnwqwj0bs5003x850kzpg00000gn/T/go-build781190647=/tmp/go-build -gno-record-gcc-switches -fno-common"
**Scenario:**
I have a network with download speed 400kbit/s, upload 400kbit/s, and RTT 200ms.
I have an HTTP client developed in Go.
I wanted to transfer a 1MB file to a web server.
**What did you expect to see?**
File upload time should be about the same as with curl or other web clients.
**What did you see instead?**
Curl took ~28s
Go client took ~42s
**More Information on environment, tools:**
How does Go react when there are issues with the network?
To mimic network delay - https://github.com/sitespeedio/throttle
web server : https://github.com/sashgorokhov/docker-nginx-webdav
Source Code
```go
package main
import (
"io/ioutil"
"time"
"fmt"
"net/http"
"strings"
)
func main() {
client := &http.Client{}
url := "http://localhost:9080/file.txt"
rb, _ := ioutil.ReadFile("file1mb.txt")
	request, err := http.NewRequest("PUT", url, strings.NewReader(string(rb)))
	if err != nil {
		panic(err) // added check: client.Do would panic on a nil request
	}
//request.ContentLength = int64(len(string(rb)))
sTime := time.Now()
response, err := client.Do(request)
eTime := time.Since(sTime)
fmt.Println(eTime.String())
if err != nil {
fmt.Println(err)
} else {
fmt.Println(request.Header)
defer response.Body.Close()
fmt.Println(response.Status)
}
}
```
Tried with and without Content-Length header(chunked encoding) in the http request.
My concern is that for a 1MB file I already see this difference; what if the file size is 100MB+?
| Performance,NeedsInvestigation | low | Critical |
345,958,335 | godot | Unable to Force Update 2D UI Controls before Next Frame | **Godot version:**
3.0.5
**OS/device including version:**
All
**Issue description:**
Other game engines have a way to force components to update after new data is set. Godot seems unable to do this.
For example: setting text in a word-wrapped Label and finding the new vertical size, or adding children to a VBoxContainer, then using that new size to position the control (for example, moving a VBoxContainer based on its size).
If we have to wait for the next frame to find out what new sizes/etc. our controls have, then we have one frame of visual glitchiness to contend with. And that can be multiplied by however many times you need to change controls. On some systems the framerate is so fast I can't see the glitches, but on most devices I've used they are very apparent. I've had to resort to hacks, like taking parts of the background and pasting them again to hide the glitches, or keeping a hidden clone and switching back and forth, but this shouldn't need to be done and eventually becomes cumbersome.
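A hedged GDScript sketch of the wait-a-frame workaround (which is exactly the one-frame glitch complained about above):
```gdscript
func reposition_after_text_change(label, new_text):
    label.text = new_text
    yield(get_tree(), "idle_frame")  # layout updates during this frame
    label.rect_position.y -= label.rect_size.y
```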
All of this can be fixed by having a means to force a cascading update. | enhancement,topic:gui | high | Critical |
346,010,511 | pytorch | Sparse tensor use cases | We are working to increase support for sparse tensors. Currently we have [summarized the current state of sparse tensors](https://github.com/pytorch/pytorch/issues/9674) and listed out [sparse ops to support](https://github.com/pytorch/pytorch/issues/8853). We would like to collect sparse tensor use cases to facilitate design decisions and prioritize the TODO list accordingly. It will be very helpful if you can post your use cases and desired sparse ops here or at the [PyTorch Forum](https://discuss.pytorch.org/t/sparse-tensor-use-cases/22047). Thanks!
I find these questions useful when writing use cases:
- Where do I need sparse tensors? During deep learning model training?
- Do I need autograd support for the sparse ops?
A possible example would be:
I am training a model that has `mul(Sparse, Dense)` ops. I would like to have its forward and backward. I know there will be a dense gradient in the backward of `mul`, so here I am asking for a special kind of `mul` op (called `sparse_mul`) that returns a sparse grad tensor and only keeps the nnz's gradients.
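A hedged sketch of that masking behavior (`sparse_mul` itself is hypothetical; `torch.sparse_coo_tensor` is assumed available):
```python
import torch

def mask_to_pattern(dense_grad, sparse):
    idx = sparse._indices()  # 2 x nnz for a 2-D sparse tensor
    vals = dense_grad[idx[0], idx[1]]  # keep only the nnz positions
    return torch.sparse_coo_tensor(idx, vals, sparse.shape)
```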
| module: sparse,feature,triaged | high | Critical |
346,010,511 | TypeScript | type.isLiteral() returns false for boolean literals | **TypeScript Version:** 3.1.0-dev.20180728 (new in 3.0.1)
**Search Terms:** isLiteral()
**Code**
1. Create a boolean literal in the compiler api.
2. Call `isLiteral()` on it.
Change seems to be done here:
https://github.com/Microsoft/TypeScript/commit/e46d214fceb72cc76145779543bf871ce9ff66b0#diff-233e1126c0abc811c4098757f9e4516eR428
**Expected behavior:** `true` for a boolean literal type (`true` or `false`)
**Actual behavior:** `false`
I have a few tests for assumptions about the compiler api in ts-simple-ast and this one started failing. Maybe it's not a bug, but I thought I would log it anyway to find out why this behaviour changed.
Is it because it doesn't have the `Unit` flag while string and number literals do? For what it's worth, in *checker.ts* it will internally return `true` for boolean literals in `isLiteralType`. | Bug,Help Wanted,API | low | Critical |
346,029,871 | opencv | timestamp = cap.get(CV_CAP_PROP_POS_MSEC) always returns the same timestamp. | OpenCV 3.40
Centos 7
GCC compiler
I am trying to fetch the video timestamp, but it always gives me 40 and does not change as I move through frames. Is this a bug?
```
width = cap.get(CV_CAP_PROP_FRAME_WIDTH);//1280
height = cap.get(CV_CAP_PROP_FRAME_HEIGHT);//720
timestamp = cap.get(CV_CAP_PROP_POS_MSEC);//always 40
fps = cap.get(CV_CAP_PROP_FPS);//25
``` | category: videoio,incomplete,needs reproducer,needs investigation | low | Critical |
346,055,330 | TypeScript | comments before imports are stripped in AMD output unless there's a blank line after them | **TypeScript Version:** 3.0.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** preserve comments removes comments
**Code**
Compile the following with `--module amd`
```ts
// foo
import 'foo';
console.log('hi');
```
**Expected behavior:**
output contains `// foo`
**Actual behavior:**
output does not contain `// foo`.
If a newline is added between `// foo` and the `import 'foo'`, the comment will then show up in the output. If the file is compiled with `--module commonjs`, the comment will also show up in the output, in that case regardless of the whitespace.
**Playground Link:** https://www.typescriptlang.org/play/#src=%2F%2F%20foo%0D%0Aimport%20'foo'%3B%0D%0A%0D%0Aconsole.log('hi')%3B%0D%0A
**Related Issues:** this looks most similar to #6399
| Bug,Domain: Comment Emit | low | Major |
346,058,359 | kubernetes | Eviction should be able to request eviction asynchronously | **Is this a BUG REPORT or FEATURE REQUEST?**:
/kind feature
**What happened**:
`kubectl drain` got stuck because we had PodDisruptionBudget (PDB) with `spec.minAvailable: 1` and the deployment only had one replica -> allowed disruptions was always 0.
**What you expected to happen**:
Instead of `kubectl drain` getting stuck, it should actively start a new pod on a different node if the deployment allows for it (deployment.spec.strategy.rollingUpdate.maxSurge). This should be built into the eviction API.
If a misconfiguration of PodDisruptionBudget prevents a drain, `kubectl drain` or the eviction API should warn about it.
**How to reproduce it (as minimally and precisely as possible)**:
- Create a deployment with `spec.replicas: 1`
- Create a PDB with `spec.minAvailable: 1` that selects the pod of the deployment (see the manifest sketch below)
- Try to evict the pod from its node via eviction api or `kubectl drain`
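A hedged repro manifest sketch (all names are made up):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 1   # with replicas: 1, allowed disruptions is always 0
  selector:
    matchLabels:
      app: web
```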
**Environment**:
- Kubernetes version (use `kubectl version`): 1.10.5
| priority/backlog,sig/node,kind/feature,sig/apps,lifecycle/frozen | high | Critical |
346,069,201 | rust | Should std::{f32,f64}::NAN be a QNAN or SNAN ? | Right now, whether they are a QNAN or an SNAN depends on the architecture. Currently, they are an SNAN on MIPS (EDIT: r5 and older) at least, and a QNAN on most others.
It probably makes sense to offer consistent behavior here independently of whether LLVM preservers payloads, signaling/quietness, etc.
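A quick way to check on a given target, assuming IEEE 754 binary32 where the quiet bit is the top mantissa bit (bit 22):
```rust
fn main() {
    let bits = std::f32::NAN.to_bits();
    // quiet bit set => QNAN on non-legacy-MIPS targets
    println!("quiet: {}", bits & (1 << 22) != 0);
}
```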
cc @rkruppe @draganmladjenovic | T-libs-api,A-docs,A-floating-point | medium | Major |
346,106,071 | vscode | Disable undo menu item when there is no more history available | From #55389
**Feature request**
Disable the undo menu item if there is no undo history available.
(same for redo if there is no more forward history) | feature-request,menus,undo-redo | low | Major |
346,134,324 | go | cmd/gofmt: Remove redundant parenthesis in foo = (1 << 42) | go version go1.10.3 linux/amd64
Reviewing CLs showed lists of const bit-masks like foo = 1 << 42 that all had redundant parentheses around the (1 << 42). The author said they were copying similar ones in existing source. gofmt doesn't remove them. It does remove the outer pair of doubled ones, i.e. ((1 << 314)) in https://play.golang.org/p/V6jz7iKHJwt
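Concretely:
```go
const (
	foo  = (1 << 42)   // gofmt leaves this unchanged today
	foo2 = ((1 << 42)) // gofmt rewrites this to (1 << 42), outer pair only
)
// desired result in both cases: foo = 1 << 42
```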
I think simple cases like foo = (1 << 42) can always have them removed. That's the style followed by most of the stdlib, but not all. Having pointed it out on the CL, Martin Möhrmann is fixing those other cases that influenced the author, e.g. https://go-review.googlesource.com/126775, but it would be nice if gofmt avoided the possibility. | NeedsInvestigation,FeatureRequest | low | Major |
346,168,601 | flutter | URL_Launcher OnDismiss Callback or Listener | There currently isn't a way to tell when the SFSafariViewController is dismissed. It seems necessary when certain actions need to be carried out afterwards. | c: new feature,p: url_launcher,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Minor |
346,188,765 | pytorch | UnicodeDecodeError while loading caffe2 model | If you have a question or would like help and support, please ask at our
[forums](https://discuss.pytorch.org/).
If you are submitting a feature request, please preface the title with [feature request].
If you are submitting a bug report, please fill in the following details.
## Issue description
Provide a short description.
## Code example
Please try to provide a minimal example to repro the bug.
Error messages and stack traces are also helpful.
## System Info
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch or Caffe2:
- How you installed PyTorch (conda, pip, source):
- Build command you used (if compiling from source):
- OS:
- PyTorch version:
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- GCC version (if compiling from source):
- CMake version:
- Versions of any other relevant libraries:
| caffe2 | low | Critical |
346,252,713 | rust | rust-lld errors with duplicate symbol: __rustc_debug_gdb_scripts_section__ for embedded target. | Hi,
I'm using `cargo xbuild` to compile & link my code to a embedded target. @phil-opp the maintainer of cargo xbuild believes that this is a rust-ldd [issue](https://github.com/japaric/xargo/issues/218) beginning with the second comment.
In short beginning with at least nightly-2018-03-06 I started getting linking error with the signature `
note: rust-lld: error: duplicate symbol: __rustc_debug_gdb_scripts_section_`
My code is currently a private gitlab repo (I'd be happy to add anyone who needs access) but this issue has also been seen with https://github.com/toothbrush7777777/uefi-app-x64.
I've been working around this by adding the `--release` flag; but I'm at a point where I'd like to add gdb support.
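Two hedged Cargo.toml workaround sketches (assumptions, not verified fixes):
```toml
# drop debuginfo in dev builds, which removes the gdb-scripts section
# (but defeats the goal of gdb support)
[profile.dev]
debug = false

# or force a single codegen unit so the symbol is emitted only once:
# [profile.dev]
# codegen-units = 1
```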
```
ymir:munin$ just build
RUST_TARGET_PATH=`pwd`/arch/x86_64 cargo xbuild --target=x86_64-uefi
Compiling bootloader v0.1.0 (file:///home/parrisj/src/munin)
error: linking with `rust-lld` failed: exit code: 1
|
= note: "rust-lld" "-flavor" "link" "/Subsystem:EFI_Application" "/Entry:efi_main" "/LIBPATH:/home/parrisj/src/munin/target/sysroot/lib/rustlib/x86_64-uefi/lib" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.16i0u6jlhoj1fwbo.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.16u6js6g0l3k1ic6.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.170lmw1sqqfe80qo.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.1im38lueib99jsk0.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.1y16o1qfye96o7m0.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.2670nmqs1be3ww7y.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.2c1y7jr7pp6yeu55.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.2xj8e5u0nv6enw9x.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.49a7n47po4ttqjl7.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.4xq48u46a1pwiqn7.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.81jpvh8cn5k8ng8.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.8xzrsc1ux72v29j.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.98g0d9x8aw3akpe.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.9fcb3syd3ne5k0n.rcgu.o" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.c6lbtaiefvx3wya.rcgu.o" "/OUT:/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.efi" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.crate.allocator.rcgu.o" "/OPT:REF,NOICF" "/DEBUG" "/LIBPATH:/home/parrisj/src/munin/target/x86_64-uefi/debug/deps" "/LIBPATH:/home/parrisj/src/munin/target/debug/deps" "/LIBPATH:/home/parrisj/src/munin/target/sysroot/lib/rustlib/x86_64-uefi/lib" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/liblibuefi-73eea763ee59199f.rlib" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/libcty-5c786bc06dad6f6a.rlib" "/home/parrisj/src/munin/target/x86_64-uefi/debug/deps/libbitflags-5e29547bdbbce63f.rlib" "/home/parrisj/src/munin/target/sysroot/lib/rustlib/x86_64-uefi/lib/liballoc-326639068cfe9737.rlib" "/home/parrisj/src/munin/target/sysroot/lib/rustlib/x86_64-uefi/lib/libcore-27ea11fcee464c1f.rlib" "/home/parrisj/src/munin/target/sysroot/lib/rustlib/x86_64-uefi/lib/libcompiler_builtins-e9fbcd04a1fe51f6.rlib"
= note: rust-lld: error: duplicate symbol: __rustc_debug_gdb_scripts_section__ in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.16i0u6jlhoj1fwbo.rcgu.o and in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.16u6js6g0l3k1ic6.rcgu.o
rust-lld: error: duplicate symbol: __rustc_debug_gdb_scripts_section__ in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.16i0u6jlhoj1fwbo.rcgu.o and in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.170lmw1sqqfe80qo.rcgu.o
rust-lld: error: duplicate symbol: __rustc_debug_gdb_scripts_section__ in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.16i0u6jlhoj1fwbo.rcgu.o and in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.1im38lueib99jsk0.rcgu.o
rust-lld: error: duplicate symbol: __rustc_debug_gdb_scripts_section__ in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.16i0u6jlhoj1fwbo.rcgu.o and in /home/parrisj/src/munin/target/x86_64-uefi/debug/deps/bootloader-49ecc39b692d6c5e.1y16o1qfye96o7m0.rcgu.o
...
```
# Target Triple
```
{
"llvm-target": "x86_64-pc-windows-msvc",
"target-endian": "little",
"target-pointer-width": "64",
"target-c-int-width": "32",
"os": "uefi",
"arch": "x86_64",
"data-layout": "e-m:e-i64:64-f80:128-n8:16:32:64-S128",
"linker": "rust-lld",
"linker-flavor": "lld-link",
"pre-link-args": {
"lld-link": [
"/Subsystem:EFI_Application",
"/Entry:efi_main"
]
},
"panic-strategy": "abort",
"default-hidden-visibility": true,
"executables": true,
"exe-suffix": ".efi",
"is-like-windows": true
}
```
# Version Info
```
$ rustc --version
rustc 1.29.0-nightly (54628c8ea 2018-07-30)
$ cargo --version
cargo 1.29.0-nightly (2cd36b4ed 2018-07-25)
``` | A-linkage,T-compiler,C-bug,WG-embedded | low | Critical |
346,256,370 | go | cmd/doc: add example support | To go along with #25443, #25595, and #18807, it would be nice to get support for `godoc`'s `-ex` flag in `go doc`. | Proposal-Accepted,NeedsFix,FeatureRequest | medium | Major |