repo (stringclasses, 856 values) | pull_number (int64, 3–127k) | instance_id (stringlengths, 12–58) | issue_numbers (sequencelengths, 1–5) | base_commit (stringlengths, 40) | patch (stringlengths, 67–1.54M) | test_patch (stringlengths, 0–107M) | problem_statement (stringlengths, 3–307k) | hints_text (stringlengths, 0–908k) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---|
google/jax | 2,532 | google__jax-2532 | [
"2529"
] | f371bfc0bfe927051738d7a8cdca2b4581b45e2f | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -1498,9 +1498,10 @@ def zeros_like_array(x):
return full_like(x, 0)
for t in itertools.chain(dtypes.python_scalar_dtypes.keys(), array_types,
- [xla.DeviceArray]):
+ [xla.DeviceArray, pxla.ShardedDeviceArray]):
ad_util.jaxval_adders[t] = add
ad_util.jaxval_zeros_likers[xla.DeviceArray] = zeros_like_array
+ad_util.jaxval_zeros_likers[pxla.ShardedDeviceArray] = zeros_like_array
### primitives
| Add pxla.ShardedDeviceArray to jaxval_zeros_likers
This is a one-line fix at https://github.com/google/jax/blob/master/jax/lax/lax.py#L1503
Just adding this as an issue as a reminder for @mattjj
| Just add `ad_util.jaxval_zeros_likers[pxla.ShardedDeviceArray] = zeros_like_array` below that line
A thought - does `pxla.ShardedDeviceArray` also need to be added to `jaxval_adders`? | 2020-03-28T18:56:52 |
|
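For context on why the registration above is needed: the outputs of `pmap` are `ShardedDeviceArray`s, a distinct type from `DeviceArray`, so JAX's per-type AD tables need their own entries for it. A minimal check, not part of the patch, assuming at least one XLA device and a JAX version from this era:
```python
import jax
import jax.numpy as jnp
from jax.interpreters import pxla

# pmap over the leading axis produces a ShardedDeviceArray, not a DeviceArray
y = jax.pmap(lambda x: x * 2.)(jnp.arange(float(jax.device_count())))
print(type(y), isinstance(y, pxla.ShardedDeviceArray))
```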
google/jax | 2,536 | google__jax-2536 | [
"2534"
] | 614d39dcca014fa5f4e74906da6899928a665dd4 | diff --git a/jax/interpreters/pxla.py b/jax/interpreters/pxla.py
--- a/jax/interpreters/pxla.py
+++ b/jax/interpreters/pxla.py
@@ -248,6 +248,43 @@ def apply_parallel_primitive(prim, *args, **params):
def axis_index(axis_name):
+ """Return the index along the pmapped axis ``axis_name``.
+
+ Args:
+ axis_name: hashable Python object used to name the pmapped axis (see the
+ ``pmap`` docstring for more details).
+
+ Returns:
+ An integer representing the index.
+
+ For example, with 8 XLA devices available:
+
+ >>> from functools import partial
+ >>> @partial(pmap, axis_name='i')
+ ... def f(_):
+ ... return lax.axis_index('i')
+ ...
+ >>> f(np.zeros(4))
+ ShardedDeviceArray([0, 1, 2, 3], dtype=int32)
+ >>> f(np.zeros(8))
+ ShardedDeviceArray([0, 1, 2, 3, 4, 5, 6, 7], dtype=int32)
+ >>> @partial(pmap, axis_name='i')
+ ... @partial(pmap, axis_name='j')
+ ... def f(_):
+ ... return lax.axis_index('i'), lax.axis_index('j')
+ ...
+ >>> x, y = f(np.zeros((4, 2)))
+ >>> print(x)
+ [[0 0]
+ [1 1]
+ [2 2]
+ [3 3]]
+ >>> print(y)
+ [[0 1]
+ [0 1]
+ [0 1]
+ [0 1]]
+ """
dynamic_axis_env = _thread_local_state.dynamic_axis_env
frame = dynamic_axis_env[axis_name]
sizes = dynamic_axis_env.sizes[:dynamic_axis_env.index(frame)+1]
| document lax.axis_index to get mapped element index from axis name
Hello JAX team,
First of all, thanks a lot for cool JAX. This is a great tool.
This issue contains a question about your plans regarding `pmap` and a feature request.
What does a `pmap`ed function receive as an input: a slice of the input array or the full array? What happens in the case of nested `pmap`s? I guess JAX traces a function and runs copies of it on sharded chunks of an input array. If so, it would be super useful to have access to the index of that shard, as that would allow encoding more complex algorithms. Otherwise, it would still be great to have access to the shard index.
<details><summary>Example</summary>
<p>
```python
@partial(pmap, axis_name='rows')
@partial(pmap, axis_name='cols')
def f(x):
# Q: When the code is running, what the `x.shape` actually is?
#
# Dummy example of how to get shard id:
# shard_row_id = __global_axis_name__["rows"]
# shard_col_id = __global_axis_name__["cols"]
#
# Do something smart with it! E.g. process tensor in `chess` like style - blacks are positive, whites are negative.
pass
```
</p>
</details>
Thanks
| Thanks for the kind words!
I think you might find answers to these in [the SPMD cookbook](https://colab.sandbox.google.com/github/google/jax/blob/master/cloud_tpu_colabs/Pmap_Cookbook.ipynb). But perhaps you've already looked at that, based on your question, and you want clarification.
> Guess JAX traces a function and runs copies of it on sharded chunks of an input array.
That sounds right to me!
> What pmaped function receives as an input, is that a slice of input array or a full array?
It's a slice, like you'd get from the expression `x[i]` for integer `i` where `x` is a NumPy ndarray (so in particular there's no singleton axis).
In your example, the shape of `x` in the body of `f` depends on the shape of the argument to which the pmapped function `f` is applied. In general, the shape of `x` in the body will be missing two leading axes. That is,
```python
@partial(pmap, axis_name='rows')
@partial(pmap, axis_name='cols')
def f(x):
print(x.shape)
...
f(np.ones((4, 2, 9, 9))) # prints (9, 9)
```
While `pmap` is most fun when programming multiple GPUs or multiple TPU cores, you can experiment with it locally by setting the environment variable `XLA_FLAGS=--xla_force_host_platform_device_count=8` or something like that. See below for an example.
> What happens in case of nested pmaps?
In terms of what values get computed, the semantics are compositional. Without using collectives, it's just like doing nested `vmap`s, which themselves are roughly defined by `vmap(f)(xs) = np.stack([f(x) for x in xs])`. With collectives, every pmap can bind an axis variable name, and collectives refer to those names. Here's an example from the pmap cookbook:
```python
@partial(pmap, axis_name='rows')
@partial(pmap, axis_name='cols')
def f(x):
row_normed = x / lax.psum(x, 'rows')
col_normed = x / lax.psum(x, 'cols')
doubly_normed = x / lax.psum(x, ('rows', 'cols'))
return row_normed, col_normed, doubly_normed
x = np.arange(8.).reshape((4, 2))
a, b, c = f(x)
print(a)
print(b)
print(c)
```
```
[[0. , 0.0625 ],
[0.16666667, 0.1875 ],
[0.33333333, 0.3125 ],
[0.5 , 0.4375 ]])
[[0. , 1. ],
[0.4 , 0.6 ],
[0.44444444, 0.55555556],
[0.46153846, 0.53846154]])
[[0. , 0.03571429],
[0.07142857, 0.10714286],
[0.14285714, 0.17857143],
[0.21428571, 0.25 ]])
```
Here are some images from slides that might illustrate things:



> If so, that would super useful to have access to an index of that shard, that would allow encoding more complex algorithms.
Check out `lax.axis_index`, which seems to be undocumented right now:
```
$ env XLA_FLAGS=--xla_force_host_platform_device_count=8 ipython (base)
Python 3.7.4 (default, Aug 13 2019, 20:35:49)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from jax import pmap
In [2]: from jax import lax
In [3]: import jax.numpy as np
In [4]: pmap(lambda x: x + lax.axis_index('i'), axis_name='i')(np.zeros(8))
Out[4]: ShardedDeviceArray([0., 1., 2., 3., 4., 5., 6., 7.], dtype=float32)
In [6]: @partial(pmap, axis_name='rows')
...: @partial(pmap, axis_name='cols')
...: def f(_):
...: return lax.axis_index('rows'), lax.axis_index('cols')
...:
In [7]: x, y = f(np.zeros((4, 2)))
In [8]: print(x)
[[0 0]
[1 1]
[2 2]
[3 3]]
In [9]: print(y)
[[0 1]
[0 1]
[0 1]
[0 1]]
```
Is that what you have in mind?
Awesome response! Thank you @mattjj!
`lax.axis_index` is exactly what I was searching for! Please document this method ))) | 2020-03-29T20:57:03 |
|
google/jax | 2,558 | google__jax-2558 | [
"2554"
] | bd1708c70734eeae116c00ecf8e65da1aa01dbf3 | diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -185,6 +185,11 @@ def xla_primitive_callable(prim, *arg_specs, **params):
nreps = initial_style_primitive_replicas(params)
else:
nreps = 1
+ if nreps > xb.device_count(backend):
+ msg = ("compiling a primitive computation `{}` that requires {} replicas, "
+ "but only {} XLA devices are available on backend {}.")
+ raise ValueError(msg.format(prim, nreps, xb.device_count(backend),
+ backend.platform))
built_c = primitive_computation(prim, AxisEnv(nreps), backend, tuple_args,
*avals, **params)
options = xb.get_compile_options(
| diff --git a/tests/lax_control_flow_test.py b/tests/lax_control_flow_test.py
--- a/tests/lax_control_flow_test.py
+++ b/tests/lax_control_flow_test.py
@@ -1816,25 +1816,41 @@ def arange(n):
expected = onp.arange(10)
self.assertAllClose(ans, expected, check_dtypes=False)
- @jtu.skip_on_devices("tpu", "gpu", "cpu")# TODO(mattjj): follow up w/ xla
- # Issue #2554
def test_while_loop_of_pmap(self):
# code from jsnoek@
+
def body(i, x):
result = api.pmap(lambda z: lax.psum(np.sin(z), 'i'), axis_name='i')(x)
return result + x
f_loop = lambda x: lax.fori_loop(0, 3, body, x)
- ans = f_loop(np.ones(8))
+ ans = f_loop(np.ones(api.device_count()))
del body, f_loop
def body2(i, x):
result = np.broadcast_to(np.sin(x).sum(), x.shape)
return result + x
g_loop = lambda x: lax.fori_loop(0, 3, body2, x)
- expected = g_loop(np.ones(8))
+ expected = g_loop(np.ones(api.device_count()))
self.assertAllClose(ans, expected, check_dtypes=False)
+ def test_while_loop_of_pmap_error_message(self):
+
+ def body(i, x):
+ result = api.pmap(lambda z: lax.psum(np.sin(z), 'i'), axis_name='i')(x)
+ return result + x
+ f_loop = lambda x: lax.fori_loop(0, 3, body, x)
+
+ too_big = 2 * api.device_count()
+
+ self.assertRaisesRegex(
+ ValueError,
+ re.escape(
+ "compiling a primitive computation `while` that requires {} "
+ "replicas, but only {} XLA devices are available on backend {}."
+ .format(too_big, api.device_count(), jtu.device_under_test())),
+ lambda: f_loop(np.ones(too_big)))
+
if __name__ == '__main__':
absltest.main()
| Error compiling while_of_pmap
The test is lax_control_flow_tests.py:test_while_loop_of_pmap. It fails on CPU, GPU, and TPU.
For GPU the failure seems to be
```
F0331 01:03:30.304149 7266 ir_emitter_unnested.cc:1766] Non-OK-status: body->Accept(&ir_emitter_body) status: Unimplemented: Requested AllReduce not implemented on GPU; replica_count: 8; operand_count: 1; IsCrossReplicaAllReduce: 1; NCCL support: 0; first operand array element-type: F64
```
For TPU the failure is perhaps related but shows up elsewhere:
```
File "/build/work/c62b4eb2372c3477aa8032626244c507067f/google3/runfiles/google3/third_party/py/jax/interpreters/xla.py", line 169, in apply_primitive
compiled_fun = xla_primitive_callable(prim, *map(arg_spec, args), **params)
File "/build/work/c62b4eb2372c3477aa8032626244c507067f/google3/runfiles/google3/third_party/py/jax/interpreters/xla.py", line 194, in xla_primitive_callable
compiled = built_c.Compile(compile_options=options, backend=backend)
File "/build/work/c62b4eb2372c3477aa8032626244c507067f/google3/runfiles/google3/third_party/tensorflow/compiler/xla/python/xla_client.py", line 571, in Compile
return backend.compile(self.computation, compile_options)
File "/build/work/c62b4eb2372c3477aa8032626244c507067f/google3/runfiles/google3/third_party/tensorflow/compiler/xla/python/xla_client.py", line 151, in compile
compile_options.tuple_arguments)
RuntimeError: Invalid argument: Invalid (replica_count,computation_count) pair: (8,1)
```
| 2020-03-31T18:55:33 |
|
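A sketch of what the added check surfaces, mirroring `test_while_loop_of_pmap_error_message` above: asking a while-of-pmap for more replicas than there are XLA devices now fails with the clear `ValueError` instead of a backend-specific compile error (device counts and exact wording depend on the machine):
```python
import jax
import jax.numpy as jnp
from jax import lax

def body(i, x):
    # pmap inside the loop body makes the compiled `while` need one replica per element
    return jax.pmap(lambda z: lax.psum(jnp.sin(z), 'i'), axis_name='i')(x) + x

too_big = 2 * jax.device_count()
try:
    lax.fori_loop(0, 3, body, jnp.ones(too_big))
except ValueError as e:
    print(e)  # "compiling a primitive computation `while` that requires ... replicas ..."
```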
google/jax | 2,561 | google__jax-2561 | [
"2518"
] | c28c46e1911ccb2154a633ce171dfdbdc74e4738 | diff --git a/jax/experimental/stax.py b/jax/experimental/stax.py
--- a/jax/experimental/stax.py
+++ b/jax/experimental/stax.py
@@ -134,12 +134,10 @@ def apply_fun(params, x, **kwargs):
# TODO(phawkins): np.expand_dims should accept an axis tuple.
# (https://github.com/numpy/numpy/issues/12290)
ed = tuple(None if i in axis else slice(None) for i in range(np.ndim(x)))
- beta = beta[ed]
- gamma = gamma[ed]
z = normalize(x, axis, epsilon=epsilon)
- if center and scale: return gamma * z + beta
- if center: return z + beta
- if scale: return gamma * z
+ if center and scale: return gamma[ed] * z + beta[ed]
+ if center: return z + beta[ed]
+ if scale: return gamma[ed] * z
return z
return init_fun, apply_fun
| diff --git a/tests/stax_test.py b/tests/stax_test.py
--- a/tests/stax_test.py
+++ b/tests/stax_test.py
@@ -212,6 +212,19 @@ def testIssue182(self):
assert out_shape == out.shape
assert onp.allclose(onp.sum(onp.asarray(out), -1), 1.)
+ def testBatchNormNoScaleOrCenter(self):
+ key = random.PRNGKey(0)
+ axes = (0, 1, 2)
+ init_fun, apply_fun = stax.BatchNorm(axis=axes, center=False, scale=False)
+ input_shape = (4, 5, 6, 7)
+ inputs = random_inputs(onp.random.RandomState(0), input_shape)
+
+ out_shape, params = init_fun(key, input_shape)
+ out = apply_fun(params, inputs)
+ means = onp.mean(out, axis=(0, 1, 2))
+ std_devs = onp.std(out, axis=(0, 1, 2))
+ assert onp.allclose(means, onp.zeros_like(means), atol=1e-4)
+ assert onp.allclose(std_devs, onp.ones_like(std_devs), atol=1e-4)
def testBatchNormShapeNHWC(self):
key = random.PRNGKey(0)
| BatchNorm doesn't work with center or scale turned off
BatchNorm raises a TypeError when used without center, scale, or both. Seems pretty straightforward to fix (e.g., as a workaround I've set `beta = None if not center else beta[ed]` when center is False, and the same for gamma, but there might be a better solution).
Exception:
beta = beta[ed] # ed=(None, slice(None, None, None))
TypeError: tuple indices must be integers or slices, not tuple
Code:
```
from jax import np
from jax import random
from jax.experimental import stax
init_fn, apply_fn = stax.BatchNorm(axis=0, center=False, scale=False)
_, params = init_fn(random.PRNGKey(0), (-1, 1))
x = np.zeros((2, 1))
apply_fn(params, x)
```
Details:
jax==0.1.62
jaxlib==0.1.42
| 2020-04-01T00:02:31 |
|
google/jax | 2,591 | google__jax-2591 | [
"1476"
] | 64a7d172399f7649b5ef1e0609afaca65717f1ac | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -449,7 +449,18 @@ def __getattr__(self, name):
return attr
def __repr__(self):
- return 'Traced<{}>with<{}>'.format(self.aval, self._trace)
+ base = pp('Traced<{}>with<{}>'.format(self.aval, self._trace))
+ contents = self._contents()
+ if contents:
+ base += pp(' with ') >> vcat(pp('{} = '.format(name)) >> pp_payload
+ for name, pp_payload in contents)
+ return str(base)
+
+ def _contents(self):
+ try:
+ return [(name, pp(repr(getattr(self, name)))) for name in self.__slots__]
+ except AttributeError:
+ return ()
def __copy__(self):
return self
| Print values inside a vmap function
How can I print values inside a vmapped function, even without any jit?
From the advi.py example:
def elbo(logprob, rng, mean, log_std):
**sample** = diag_gaussian_sample(rng, mean, log_std)
return logprob(sample) - diag_gaussian_logpdf(sample, mean, log_std)
def batch_elbo(logprob, rng, params, num_samples):
rngs = random.split(rng, num_samples)
vectorized_elbo = vmap(partial(elbo, logprob), in_axes=(0, None, None))
return np.mean(vectorized_elbo(rngs, *params))
`print(sample)` in VS Code debug mode returns the message below:
None
Traced<ShapedArray(float32[2])>with<BatchTrace(level=2/0)>
This seems to be related to #1396.
Even if I disable all the jit, I still can't print values inside the vmapped function.
| Like `jit`, `vmap` also traces the function in order to find the operations that need to be vectorized. The print statement will show the abstract values that JAX is using to trace the function, before vmap.
I do not think there is a way right now to print; we do not yet have the jax.print operation.
I got it. Hope to support jax.print soon.
Thank you | 2020-04-03T05:03:19 |
|
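For readers landing here: printing inside a `vmap`ped function still shows a tracer rather than concrete values, but with the change above the repr also carries the tracer's payload. A minimal sketch (the exact repr text may differ):
```python
import jax
import jax.numpy as jnp

def f(x):
    print(x)  # e.g. Traced<ShapedArray(float32[])>with<BatchTrace(level=1/0)> with val = ...
    return x * 2.

jax.vmap(f)(jnp.arange(3.))
```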
google/jax | 2,593 | google__jax-2593 | [
"2578"
] | 64a7d172399f7649b5ef1e0609afaca65717f1ac | diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -252,6 +252,7 @@ def _flatten_jvp(in_tree, *args):
yield primals_out + tangents_out, out_tree
def _custom_jvp_call_bind(prim, fun, jvp, *args):
+ args = map(core.full_lower, args)
top_trace = core.find_top_trace(args)
level = (core.trace_state.trace_stack.next_level(True)
if top_trace is None else top_trace.level)
@@ -490,6 +491,7 @@ def _flatten_bwd(in_tree, out_trees, *args):
yield cts_in
def _custom_vjp_call_bind(prim, fun, fwd, bwd, *args, out_trees):
+ args = map(core.full_lower, args)
top_trace = core.find_top_trace(args)
level = (core.trace_state.trace_stack.next_level(True)
if top_trace is None else top_trace.level)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2781,6 +2781,19 @@ def g(x):
expected = jax.grad(f, 0)(2., 0.1) + jax.grad(f, 0)(2., 0.2)
self.assertAllClose(ans, expected, check_dtypes=False)
+ def test_lowering_out_of_traces(self):
+ # https://github.com/google/jax/issues/2578
+
+ class F(collections.namedtuple("F", ["a"])):
+ def __call__(self, x):
+ return jax.nn.relu(self.a) * x
+
+ @jax.jit
+ def g(f, x):
+ return f(x)
+
+ jax.grad(g, argnums=(1,))(F(2.0), 0.) # doesn't crash
+
class DeprecatedCustomTransformsTest(jtu.JaxTestCase):
| `AssertionError` with new `custom_jvp` for `jax.nn.relu`
On this [commit](https://github.com/google/jax/commit/ca23be63fbaed20192cda5f921afe177ac8dcf4d) (currently just a few commits behind head, none of which look relevant), I'm finding that
```python
import jax
import jax.numpy as np
import collections
class F(collections.namedtuple("F", ["a"])):
def __call__(self, x):
return jax.nn.relu(self.a) * x
@jax.jit
def g(f, x):
return f(x)
jax.grad(F(2.0))(0.) # works, returns DeviceArray(2., dtype=float32)
jax.grad(g, argnums=(1,))(F(2.0), 0.) # AssertionError
```
hits an `AssertionError` on this line: https://github.com/google/jax/blob/1bb9aaa88c09a90a115c0304c05b0fc25932523b/jax/interpreters/partial_eval.py#L239
Commenting out the `@jax.jit` on `g` or replacing `jax.nn.relu(self.a)` with `np.maximum(self.a, 0)` both avoid the error.
| 2020-04-03T05:55:09 |
|
google/jax | 2,596 | google__jax-2596 | [
"1212"
] | 192e9086f8bfcabdd179bbab17cd5fccbe5e0be8 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -33,6 +33,7 @@
import re
import string
import types
+from typing import Callable
import warnings
import numpy as onp
@@ -1569,7 +1570,59 @@ def nanmean(a, axis=None, dtype=None, out=None, keepdims=False):
td = lax.div(nansum(a, axis, dtype=dtype, keepdims=keepdims), normalizer)
return td
-def _make_cumulative_reduction(onp_reduction, window_reduce, init_val,
+
+# Parallel prefix-scan. See:
+# https://developer.nvidia.com/gpugems/gpugems3/part-vi-gpu-computing/chapter-39-parallel-prefix-sum-scan-cuda
+# and
+# Blelloch, Guy E. 1990. "Prefix Sums and Their Applications.", Technical Report
+# CMU-CS-90-190, School of Computer Science, Carnegie Mellon University.
+#
+# Unlike the Blelloch algorithm, we use an out-of-place algorithm that uses 2n
+# space. This is somewhat wasteful if we are interested only in the output of
+# the forward pass, but more memory-efficient if we intend to differentiate
+# through the implementation of the scan.
+def _prescan_power_of_two(x, axis: int, op: Callable, unit):
+ n = x.shape[axis]
+ assert n != 0 and n & (n - 1) == 0, "n must be a power of 2"
+
+ # Upsweep
+ xs = []
+ for d in range(0, n.bit_length() - 1):
+ x1 = lax.slice_in_dim(x, 0, None, stride=2, axis=axis)
+ xs.append(x1)
+ x2 = lax.slice_in_dim(x, 1, None, stride=2, axis=axis)
+ x = op(x1, x2)
+ total = x
+
+ # Downsweep
+ x = full_like(total, unit)
+ pad_left = [(0, 0, 0)] * len(x.shape)
+ pad_left[axis] = (1, 0, 1)
+ pad_right = [(0, 0, 0)] * len(x.shape)
+ pad_right[axis] = (0, 1, 1)
+ for w in reversed(xs):
+ x1 = lax.pad(x, x.dtype.type(0), pad_right)
+ x2 = lax.pad(x, x.dtype.type(0), pad_left)
+ w = lax.pad(w, x.dtype.type(0), pad_left)
+ x = x1 + op(x2, w)
+
+ return x, total
+
+def _parallel_prefix_scan(x, axis: int, op: Callable, unit):
+ n = x.shape[axis]
+
+ # Pads to the next largest power of two
+ nbits = n.bit_length()
+ if n == (1 << (nbits - 1)):
+ nbits -= 1
+ padding = [(0, 0, 0)] * len(x.shape)
+ padding[axis] = (0, (1 << nbits) - n, 0)
+ x = lax.pad(x, x.dtype.type(unit), padding)
+ x, product = _prescan_power_of_two(x, axis, op, unit)
+ return concatenate((lax.slice_in_dim(x, 1, n, axis=axis), product), axis=axis)
+
+
+def _make_cumulative_reduction(onp_reduction, op, unit,
squash_nan=False):
# We want to allow XLA to fuse the pad and reduce-window operators to
# avoid materializing the padded output.
@@ -1592,7 +1645,7 @@ def _cumulative_reduction(a, axis, dtype):
axis, num_dims))
if squash_nan:
- a = where(isnan(a), _constant_like(a, init_val), a)
+ a = where(isnan(a), _constant_like(a, unit), a)
if not dtype and _dtype(a) == bool_:
dtype = int_
@@ -1601,15 +1654,7 @@ def _cumulative_reduction(a, axis, dtype):
if a_shape[axis] == 0:
return a
-
- padding = [(0, 0, 0)] * num_dims
- padding[axis] = (a_shape[axis] - 1, 0, 0)
- a = lax.pad(a, _constant_like(a, init_val), padding)
- strides = [1] * num_dims
- window_dims = [1] * num_dims
- window_dims[axis] = a_shape[axis]
- return window_reduce(
- a, window_dims, strides, xla_client.PaddingType.VALID)
+ return _parallel_prefix_scan(a, axis, op, unit)
@_wraps(onp_reduction)
def cumulative_reduction(a, axis=None, dtype=None):
@@ -1619,14 +1664,14 @@ def cumulative_reduction(a, axis=None, dtype=None):
cumsum = _make_cumulative_reduction(
- onp.cumsum, lax._reduce_window_sum, 0, squash_nan=False)
+ onp.cumsum, add, 0, squash_nan=False)
cumprod = _make_cumulative_reduction(
- onp.cumprod, lax._reduce_window_prod, 1, squash_nan=False)
+ onp.cumprod, multiply, 1, squash_nan=False)
cumproduct = cumprod
nancumsum = _make_cumulative_reduction(
- onp.nancumsum, lax._reduce_window_sum, 0, squash_nan=True)
+ onp.nancumsum, add, 0, squash_nan=True)
nancumprod = _make_cumulative_reduction(
- onp.nancumprod, lax._reduce_window_prod, 1, squash_nan=True)
+ onp.nancumprod, multiply, 1, squash_nan=True)
### Array-creation functions
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1104,7 +1104,8 @@ def attempt_sideeffect(x):
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "op={}_shape=[{}]_axis={}_out_dtype={}".format(
- op, jtu.format_shape_dtype_string(shape, dtype), axis, out_dtype),
+ op, jtu.format_shape_dtype_string(shape, dtype), axis,
+ out_dtype.__name__),
"axis": axis, "shape": shape, "dtype": dtype, "out_dtype": out_dtype,
"rng_factory": jtu.rand_default, "jnp_op": getattr(jnp, op),
"onp_op": getattr(onp, op)}
@@ -1124,6 +1125,9 @@ def testCumSumProd(self, axis, shape, dtype, out_dtype, onp_op, jnp_op, rng_fact
self._CheckAgainstNumpy(onp_fun, jnp_fun, args_maker, check_dtypes=True,
tol=tol)
self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True)
+ grad_dtypes = [onp.float32, onp.float64, onp.complex64, onp.complex128]
+ if dtype in grad_dtypes and out_dtype in grad_dtypes:
+ check_grads(jnp_fun, args_maker(), order=2, rtol=1e-2)
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_dtype={}_m={}_n={}_k={}".format(
| Add autodiff support for `reduce_window`
I get the following exception trying to take the grad of a cumprod-using function, e.g.
```python
jax.jacrev(lambda x: jnp.cumprod(x, axis=-1))(np.random.randn(5))
```
```
File "jax/api.py", line 623, in batched_fun
out_flat = batching.batch(jaxtree_fun, in_flat, in_axes_, out_axes)
File "jax/interpreters/batching.py", line 45, in batch
return batch_transform(fun, sz, in_dims, out_dim_dst).call_wrapped(in_vals)
File "jax/linear_util.py", line 161, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "jax/api.py", line 501, in jacfun
y, pullback = vjp(f_partial, *dyn_args)
File "jax/api.py", line 1019, in vjp
out_primal, out_vjp = ad.vjp(jaxtree_fun, primals_flat)
File "jax/interpreters/ad.py", line 105, in vjp
out_primal, pval, jaxpr, consts = linearize(traceable, *primals)
File "jax/interpreters/ad.py", line 94, in linearize
jaxpr, out_pval, consts = pe.trace_to_jaxpr(jvpfun, in_pvals)
File "jax/interpreters/partial_eval.py", line 400, in trace_to_jaxpr
jaxpr, (out_pval, consts, env) = fun.call_wrapped(pvals)
File "jax/linear_util.py", line 161, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
... my code, ultimately calling np.cumprod ...
File "jax/numpy/lax_numpy.py", line 1206, in cumulative_reduction
return _cumulative_reduction(a, axis, dtype)
File "jax/api.py", line 151, in f_jitted
device_assignment=device_assignment)
File "jax/core.py", line 675, in call_bind
ans = full_lower(top_trace.process_call(primitive, f, tracers, params))
File "jax/interpreters/ad.py", line 260, in process_call
result = call_primitive.bind(f_jvp, pack(primals), nonzero_tangents, **params)
File "jax/core.py", line 675, in call_bind
ans = full_lower(top_trace.process_call(primitive, f, tracers, params))
File "jax/interpreters/partial_eval.py", line 116, in process_call
out_pv_const, consts = call_primitive.bind(fun, *in_consts, **params)
File "jax/core.py", line 675, in call_bind
ans = full_lower(top_trace.process_call(primitive, f, tracers, params))
File "jax/interpreters/batching.py", line 135, in process_call
val_out = call_primitive.bind(f, *vals, **params)
File "jax/core.py", line 672, in call_bind
ans = primitive.impl(f, *args, **params)
File "jax/interpreters/xla.py", line 667, in _xla_call_impl
*map(abstractify, args))
File "jax/linear_util.py", line 213, in cached_fun
ans, f_prev = cached_fun_body(f, args)
File "jax/linear_util.py", line 210, in cached_fun_body
return call(f, *args), f
File "jax/interpreters/xla.py", line 679, in _xla_callable
jaxpr, (pval, consts, env) = pe.trace_to_subjaxpr(fun, master, False).call_wrapped(pvals)
File "jax/linear_util.py", line 161, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "jax/numpy/lax_numpy.py", line 1201, in _cumulative_reduction
a, window_dims, strides, xla_client.PaddingType.VALID)
File "jax/lax/lax.py", line 939, in _reduce_window_prod
window_strides=tuple(window_strides), padding=padding)
File "jax/core.py", line 148, in bind
out_tracer = top_trace.process_primitive(self, tracers, kwargs)
File "jax/interpreters/ad.py", line 251, in process_primitive
.format(primitive))
NotImplementedError: Forward-mode differentiation rule for 'reduce_window' not implemented
```
| It's easy enough to mimic TF's behavior here:
```
def cumprod_jvp(g, ans, x):
return jnp.cumsum(g / x) * ans
cumprod = custom_transforms(jnp.cumprod)
defjvp(cumprod, cumprod_jvp)
```
This has two downsides:
a) it doesn't support the extra keyword arguments (dtype and axis). We need to extend `custom_transforms` a bit to allow that.
b) it doesn't work correctly if any entry in `x` is 0. Note that TF has the same bug (mishandling 0s). PyTorch does not have this bug because it falls back to a much more expensive quadratic algorithm if any entry is 0.
(There's also a clearly correct solution by rewriting `cumprod` using `lax.scan`, but it will most likely be slower than the current implementation.)
This is also something that we are interested in using in [pyhf](https://github.com/scikit-hep/pyhf) to compute Hessians of likelihoods. cc @kratsg @matthewfeickert | 2020-04-03T15:57:17 |
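As a reading aid for the patch above, here is an illustrative NumPy-only sketch (not the JAX implementation) of the out-of-place Blelloch-style scan it builds with `lax.slice_in_dim`/`lax.pad`: an upsweep that pairwise-combines neighbours, then a downsweep that interleaves running prefixes. It assumes the length is a power of two.
```python
import numpy as np

def prescan_pow2(x, op=np.add, unit=0):
    xs = []
    while x.size > 1:                  # upsweep: combine adjacent pairs
        x1, x2 = x[0::2], x[1::2]
        xs.append(x1)
        x = op(x1, x2)
    total = x                          # overall reduction
    x = np.full_like(total, unit)
    for w in reversed(xs):             # downsweep: interleave running prefixes
        x = np.stack([x, op(x, w)], axis=-1).reshape(-1)
    return x, total                    # exclusive scan, total

x = np.arange(1, 9)
excl, total = prescan_pow2(x)
incl = np.concatenate([excl[1:], total])   # inclusive scan, cf. np.cumsum
print(incl, np.cumsum(x))
```
The out-of-place, 2n-space form mirrors the trade-off noted in the patch comments: slightly wasteful for the forward pass alone, but friendlier to differentiate through.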
google/jax | 2,610 | google__jax-2610 | [
"2607"
] | 99944d12045a8e16b42003c3f08fc60e8f3e2ec8 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -3443,12 +3443,12 @@ def quantile(a, q, axis=None, out=None, overwrite_input=False,
msg = ("jax.numpy.quantile does not support overwrite_input=True or "
"out != None")
raise ValueError(msg)
- if interpolation != "linear":
- raise NotImplementedError("Only interpolation='linear' is implemented")
- return _quantile(a, q, axis, keepdims)
+ if interpolation not in ["linear", "lower", "higher", "midpoint", "nearest"]:
+ raise ValueError("interpolation can only be 'linear', 'lower', 'higher', 'midpoint', or 'nearest'")
+ return _quantile(a, q, axis, interpolation, keepdims)
-@partial(jit, static_argnums=(2, 3))
-def _quantile(a, q, axis, keepdims):
+@partial(jit, static_argnums=(2, 3, 4))
+def _quantile(a, q, axis, interpolation, keepdims):
a = asarray(a)
if axis is None:
a = ravel(a)
@@ -3477,7 +3477,7 @@ def _quantile(a, q, axis, keepdims):
n = a_shape[axis]
q = lax.mul(q, _constant_like(q, n - 1))
low = lax.floor(q)
- high = lax.add(low, _constant_like(low, 1))
+ high = lax.ceil(q)
high_weight = lax.sub(q, low)
low_weight = lax.sub(_constant_like(high_weight, 1), high_weight)
@@ -3506,9 +3506,23 @@ def _quantile(a, q, axis, keepdims):
broadcast_dimensions=(0,))
high_weight = lax.broadcast_in_dim(high_weight, high_value.shape,
broadcast_dimensions=(0,))
- return lax.convert_element_type(
- lax.add(lax.mul(low_value.astype(q.dtype), low_weight),
- lax.mul(high_value.astype(q.dtype), high_weight)), a.dtype)
+
+ if interpolation == "linear":
+ result = lax.add(lax.mul(low_value.astype(q.dtype), low_weight),
+ lax.mul(high_value.astype(q.dtype), high_weight))
+ elif interpolation == "lower":
+ result = low_value
+ elif interpolation == "higher":
+ result = high_value
+ elif interpolation == "nearest":
+ pred = lax.le(high_weight, _constant_like(high_weight, 0.5))
+ result = lax.select(pred, low_value, high_value)
+ elif interpolation == "midpoint":
+ result = lax.mul(lax.add(low_value, high_value), _constant_like(low_value, 0.5))
+ else:
+ raise ValueError(f"interpolation={interpolation!r} not recognized")
+
+ return lax.convert_element_type(result, a.dtype)
@_wraps(onp.percentile)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2093,19 +2093,19 @@ def testIx_(self, rng_factory, shapes, dtypes):
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name":
- "_op={}_a_shape={}_q_shape={}_axis={}_keepdims={}".format(
+ "_op={}_a_shape={}_q_shape={}_axis={}_keepdims={}_interpolation={}".format(
op,
jtu.format_shape_dtype_string(a_shape, a_dtype),
jtu.format_shape_dtype_string(q_shape, q_dtype),
- axis, keepdims),
+ axis, keepdims, interpolation),
"a_rng": jtu.rand_default(), "q_rng": q_rng, "op": op,
"a_shape": a_shape, "a_dtype": a_dtype,
"q_shape": q_shape, "q_dtype": q_dtype, "axis": axis,
- "keepdims": keepdims}
+ "keepdims": keepdims,
+ "interpolation": interpolation}
for (op, q_rng) in (
("percentile", jtu.rand_uniform(low=0., high=100.)),
("quantile", jtu.rand_uniform(low=0., high=1.)),
- ("median", jtu.rand_uniform(low=0., high=1.)),
)
for a_dtype in float_dtypes
for a_shape, axis in (
@@ -2115,20 +2115,20 @@ def testIx_(self, rng_factory, shapes, dtypes):
)
for q_dtype in [onp.float32]
for q_shape in scalar_shapes + [(4,)]
- for keepdims in [False, True]))
+ for keepdims in [False, True]
+ for interpolation in ['linear', 'lower', 'higher', 'nearest', 'midpoint']))
def testQuantile(self, op, a_rng, q_rng, a_shape, a_dtype, q_shape, q_dtype,
- axis, keepdims):
+ axis, keepdims, interpolation):
if op == "quantile" and numpy_version < (1, 15):
raise SkipTest("Numpy < 1.15 does not have np.quantile")
- if op == "median":
- args_maker = lambda: [a_rng(a_shape, a_dtype)]
- else:
- args_maker = lambda: [a_rng(a_shape, a_dtype), q_rng(q_shape, q_dtype)]
+ args_maker = lambda: [a_rng(a_shape, a_dtype), q_rng(q_shape, q_dtype)]
def onp_fun(*args):
args = [x if jnp.result_type(x) != jnp.bfloat16 else
onp.asarray(x, onp.float32) for x in args]
- return getattr(onp, op)(*args, axis=axis, keepdims=keepdims)
- jnp_fun = partial(getattr(jnp, op), axis=axis, keepdims=keepdims)
+ return getattr(onp, op)(*args, axis=axis, keepdims=keepdims,
+ interpolation=interpolation)
+ jnp_fun = partial(getattr(jnp, op), axis=axis, keepdims=keepdims,
+ interpolation=interpolation)
# TODO(phawkins): we currently set dtype=False because we aren't as
# aggressive about promoting to float64. It's not clear we want to mimic
# Numpy here.
@@ -2140,6 +2140,39 @@ def onp_fun(*args):
self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True, rtol=tol)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name":
+ "_op=median_a_shape={}_axis={}_keepdims={}".format(
+ jtu.format_shape_dtype_string(a_shape, a_dtype),
+ axis, keepdims),
+ "a_rng": jtu.rand_default(),
+ "a_shape": a_shape, "a_dtype": a_dtype,
+ "axis": axis,
+ "keepdims": keepdims}
+ for a_dtype in float_dtypes
+ for a_shape, axis in (
+ ((7,), None),
+ ((47, 7), 0),
+ ((4, 101), 1),
+ )
+ for keepdims in [False, True]))
+ def testMedian(self, a_rng, a_shape, a_dtype, axis, keepdims):
+ args_maker = lambda: [a_rng(a_shape, a_dtype)]
+ def onp_fun(*args):
+ args = [x if jnp.result_type(x) != jnp.bfloat16 else
+ onp.asarray(x, onp.float32) for x in args]
+ return onp.median(*args, axis=axis, keepdims=keepdims)
+ jnp_fun = partial(jnp.median, axis=axis, keepdims=keepdims)
+ # TODO(phawkins): we currently set dtype=False because we aren't as
+ # aggressive about promoting to float64. It's not clear we want to mimic
+ # Numpy here.
+ tol_spec = {onp.float32: 2e-4, onp.float64: 5e-6}
+ tol = jtu.tolerance(a_dtype, tol_spec)
+ self._CheckAgainstNumpy(onp_fun, jnp_fun, args_maker, check_dtypes=False,
+ tol=tol)
+ self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True, rtol=tol)
+
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}".format(
jtu.format_shape_dtype_string(shape, dtype)),
| Add all interpolation methods to quantile (and percentile)
# Description
At the moment, only the "linear" interpolation method from [`numpy.quantile`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.quantile.html), and by extension [`numpy.percentile`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.percentile.html), is implemented (as [noted in Issue #70](https://github.com/google/jax/issues/70#issuecomment-509322341))
https://github.com/google/jax/blob/1cf708ea77fae0fc1418e0b944f0115646f7f2ec/jax/numpy/lax_numpy.py#L3446-L3447
It would be useful to also have the "nearest" interpolation method, along with the others ("lower", "higher", "midpoint") implemented as well.
cc @lukasheinrich @kratsg
| Agreed, this would be nice to have. Contributions would be very welcome here!
> Agreed, this would be nice to have
Great to hear that you're in agreement. I realistically won't have time to give this a shot for at least 2 weeks, but I'll try to do so then and open a PR once I pass the tests. If someone beats me to it that's great too of course. :) | 2020-04-06T04:22:58 |
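A small usage sketch of the modes added above (assumes the patched `jnp.quantile`; expected values worked out by hand for q = 0.4 over four points):
```python
import jax.numpy as jnp

x = jnp.array([1., 2., 3., 4.])
for interp in ["linear", "lower", "higher", "midpoint", "nearest"]:
    print(interp, jnp.quantile(x, 0.4, interpolation=interp))
# approximately: linear 2.2, lower 2.0, higher 3.0, midpoint 2.5, nearest 2.0
```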
google/jax | 2,616 | google__jax-2616 | [
"2612"
] | 7629c5aab49fa51df0f35510be4ca40ed53a21fb | diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -279,7 +279,7 @@ def check_nans(prim, bufs):
def _check_nans(name, xla_shape, buf):
assert not xla_shape.is_tuple()
- if dtypes.issubdtype(xla_shape.element_type(), onp.floating):
+ if dtypes.issubdtype(xla_shape.element_type(), onp.inexact):
if onp.any(onp.isnan(buf.to_py())):
msg = "invalid value (nan) encountered in {}"
raise FloatingPointError(msg.format(name))
| jax_debug_nans does not work with complex numbers
Running with flag --jax_debug_nans:
With this, I get no error (but I do get nans):
```
@jax.jit
def fn(x):
return (x * 0.) / jnp.zeros_like(x)
print(fn(1j * jnp.ones([2, 3])))
```
but with this, I get an error (as expected):
```
@jax.jit
def fn(x):
return (x * 0.) / jnp.zeros_like(x)
print(fn(jnp.ones([2, 3])))
```
Could we add support for nan detection on complex numbers?
| The issue seems to be that the nan checker only looks for `floating` types, not `complexfloating`. Numpy can detect complex nans just fine, so this should be a one line change in this line:
https://github.com/google/jax/blob/master/jax/interpreters/xla.py#L282 | 2020-04-06T16:30:26 |
|
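A quick plain-NumPy check of the claim above: `isnan` already handles complex inputs, so widening the dtype test from `floating` to `inexact` (which covers both float and complex) is all the checker needs:
```python
import numpy as np

x = np.array([1 + 1j, complex(np.nan, 0.0)], dtype=np.complex64)
print(np.isnan(x))                          # [False  True]
print(np.issubdtype(x.dtype, np.inexact))   # True
print(np.issubdtype(x.dtype, np.floating))  # False
```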
google/jax | 2,626 | google__jax-2626 | [
"2446"
] | 44e761b33d4c79bd64f78492f5e23f74e78e0a9d | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -3481,7 +3481,7 @@ def percentile(a, q, axis=None, out=None, overwrite_input=False,
def median(a, axis=None, out=None, overwrite_input=False, keepdims=False):
q = 0.5
return quantile(a, q, axis=axis, out=out, overwrite_input=overwrite_input,
- keepdims=keepdims)
+ keepdims=keepdims, interpolation='midpoint')
def _astype(arr, dtype):
lax._check_user_dtype_supported(dtype, "astype")
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2139,7 +2139,7 @@ def onp_fun(*args):
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name":
- "_op=median_a_shape={}_axis={}_keepdims={}".format(
+ "_a_shape={}_axis={}_keepdims={}".format(
jtu.format_shape_dtype_string(a_shape, a_dtype),
axis, keepdims),
"a_rng": jtu.rand_default(),
| np.median returns nan on a list of all inf
I found the following discrepancy between numpy and jax:
```
In [1]: import numpy as np
In [2]: np.median([np.inf, np.inf, np.inf])
Out[2]: inf
In [3]: import jax.numpy as np
In [4]: np.median([np.inf, np.inf, np.inf])
/Users/stephentu/anaconda3/lib/python3.7/site-packages/jax/lib/xla_bridge.py:122: UserWarning: No GPU/TPU found, falling back to CPU.
warnings.warn('No GPU/TPU found, falling back to CPU.')
Out[4]: DeviceArray(nan, dtype=float32)
In [5]: import jax
In [6]: jax.__version__
Out[6]: '0.1.59'
```
Seems like returning `np.inf` is the sensible thing to do here.
@mattjj suggested the culprit might be here:
https://github.com/google/jax/blob/75077a14414124c13decc65c4ab1c1ac74174b81/jax/numpy/lax_numpy.py#L3374-L3375
| 2020-04-07T04:15:52 |
|
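The nan most likely comes from the linear-interpolation path forming `0 * inf` when the quantile lands exactly on an element; the midpoint form used by the fix avoids that product. A plain-NumPy illustration of the arithmetic:
```python
import numpy as np

print(0.0 * np.inf)                 # nan
print(np.inf * 1.0 + np.inf * 0.0)  # nan -- the weighted sum used by 'linear'
print((np.inf + np.inf) * 0.5)      # inf -- 'midpoint', as in the fix
```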
google/jax | 2,627 | google__jax-2627 | [
"2132"
] | 44e761b33d4c79bd64f78492f5e23f74e78e0a9d | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -26,6 +26,7 @@
import collections
import functools
+import inspect
import itertools as it
import threading
from typing import Any, Callable, Dict, Iterable, Optional, Sequence, Tuple, Union
@@ -75,7 +76,9 @@
def _check_callable(fun):
if not callable(fun):
- raise TypeError("Expected a callable value, got {}".format(fun))
+ raise TypeError(f"Expected a callable value, got {fun}")
+ if inspect.isgeneratorfunction(fun):
+ raise TypeError(f"Expected a function, got a generator function: {fun}")
class _ThreadLocalState(threading.local):
def __init__(self):
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -877,7 +877,13 @@ def test_jit_device(self):
def test_jit_of_noncallable(self):
self.assertRaisesRegex(TypeError, "Expected a callable value.*",
- lambda: api.jit(3))
+ lambda: api.jit(3))
+
+ def test_jit_of_generator(self):
+ def gen(x):
+ yield x
+ self.assertRaisesRegex(TypeError, "Expected a function, got a generator function.*",
+ lambda: api.jit(gen))
def test_issue_1062(self):
# code from https://github.com/google/jax/issues/1062 @shoyer
| Generator/yield-support
Functions with yield do not seem to be supported and give funny results in jit.
```
from jax import jit
def yf(x):
yield(x)
jyf = jit(yf)
print(yf, jyf)
print(next(yf(1.5)))
print(next(jyf(1.5)))
```
Gives:
> <function yf at 0x7f79f01a7e18> <function jit.<locals>.f_jitted at 0x7f79f01ab1e0>
1.5
Traced<ShapedArray(float32[], weak_type=True):JaxprTrace(level=-1/1)>
| It is intentional that this is not supported.
I'm not sure whether we could detect this case and issue a more graceful error. If we could, we should do so. Otherwise I think the best we could do is add a note to the documentation of `jit`. | 2020-04-07T04:33:56 |
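The patch detects this case up front with `inspect.isgeneratorfunction`, which is cheap to check before tracing; a quick illustration:
```python
import inspect

def yf(x):
    yield x

print(inspect.isgeneratorfunction(yf))           # True
print(inspect.isgeneratorfunction(lambda x: x))  # False
```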
google/jax | 2,673 | google__jax-2673 | [
"2657"
] | a3cc9a7d327f46292d1edc5fcd2d0d771adc2bb9 | diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -294,7 +294,8 @@ def _custom_jvp_call_jaxpr_abstract_eval(*_, fun_jaxpr, **__):
def _custom_jvp_call_jaxpr_jvp(primals, tangents, *, fun_jaxpr, jvp_jaxpr_thunk):
jvp_jaxpr = jvp_jaxpr_thunk()
- outs = core.jaxpr_as_fun(jvp_jaxpr)(*(primals + tangents))
+ tangents = map(ad.instantiate_zeros, primals, tangents)
+ outs = core.jaxpr_as_fun(jvp_jaxpr)(*primals, *tangents)
return split_list(outs, [len(outs) // 2])
ad.primitive_jvps[custom_jvp_call_jaxpr_p] = _custom_jvp_call_jaxpr_jvp
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2446,6 +2446,34 @@ def _expit_jvp(primals, tangents):
api.eval_shape(expit, np.ones((2, 3)))
api.eval_shape(api.grad(lambda x: expit(x).sum()), np.ones((2, 3)))
+ def test_jaxpr_zeros(self):
+ # from https://github.com/google/jax/issues/2657
+ @api.custom_jvp
+ def f(A, b):
+ return A @ b
+
+ def f_jvp(primals, tangents):
+ A, b = primals
+ dA, db = tangents
+ z = f(A, b)
+ dz = A @ db + dA @ b
+ return z, dz
+
+ f.defjvp(f_jvp)
+
+ def experiment(theta):
+ def step(q, _):
+ z = f(np.eye(3), np.ones(3) * theta)
+ q += z[0]
+ return q, q
+
+ q = 0.
+ q, _ = lax.scan(step, q, None, 4)
+ return q
+
+ grad(experiment)(1.) # doesn't crash
+
+
class CustomVJPTest(jtu.JaxTestCase):
def test_basic(self):
| TypeError: <class 'jax.ad_util.Zero'> is not a valid Jax type when I combine custom_jvp and lax.scan
Hi all
I have a difficult bug which I don't understand. It started appearing when I defined my custom gradients with `custom_jvp` (in forward mode); before, I did it in reverse mode with `custom_gradient`.
I'm up-to-date on the master branch, as of this writing.
`TypeError: <class 'jax.ad_util.Zero'> is not a valid Jax type`
I've tried to make a simple code snippet to reproduce it, but it's still rather complex because the bug, as far as I can tell, only appears when I combine different things.
I've tried lots of combinations, and so far I've seen the bug appear only when:
1) I define the gradient with `custom_jvp`
2) The custom gradient depends on `dA` (`A` does not depend on `theta`, I think it's got something to do with this)
3) I use `lax.scan` (the bug disappears when I use a python loop)
4) `theta` is used as below, directly in `step` (this is not breaking the 'pure function' requirement, right?)
```python
import jax
import jax.numpy as np
@jax.custom_jvp
def f(A, b):
return A @ b
def f_jvp(primals, tangents):
A, b = primals
dA, db = tangents
z = f(A, b)
dz = dA @ db
return z, dz
f.defjvp(f_jvp)
def experiment(theta):
def step(q, _):
z = f(np.eye(3), np.ones(3) * theta)
q += z[0]
return q, q
q = 0.
q, _ = jax.lax.scan(step, q, None, 4)
return q
experiment_grad = jax.grad(experiment)
g = experiment_grad(1.)
print(g)
```
Thanks for the help!
Rembert
| This is the stack trace:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/jax/jax/core.py in concrete_aval(x)
676 try:
--> 677 return pytype_aval_mappings[type(x)](x)
678 except KeyError as err:
KeyError: <class 'jax.ad_util.Zero'>
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-2-b58b959b770c> in <module>
29 experiment_grad = jax.grad(experiment)
30
---> 31 g = experiment_grad(1.)
32 print(g)
~/jax/jax/api.py in grad_f(*args, **kwargs)
370 @wraps(fun, docstr=docstr, argnums=argnums)
371 def grad_f(*args, **kwargs):
--> 372 _, g = value_and_grad_f(*args, **kwargs)
373 return g
374
~/jax/jax/api.py in value_and_grad_f(*args, **kwargs)
426 f_partial, dyn_args = argnums_partial(f, argnums, args)
427 if not has_aux:
--> 428 ans, vjp_py = _vjp(f_partial, *dyn_args)
429 else:
430 ans, vjp_py, aux = _vjp(f_partial, *dyn_args, has_aux=True)
~/jax/jax/api.py in _vjp(fun, *primals, **kwargs)
1386 if not has_aux:
1387 flat_fun, out_tree = flatten_fun_nokwargs(fun, in_tree)
-> 1388 out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
1389 out_tree = out_tree()
1390 else:
~/jax/jax/interpreters/ad.py in vjp(traceable, primals, has_aux)
104 def vjp(traceable, primals, has_aux=False):
105 if not has_aux:
--> 106 out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
107 else:
108 out_primals, pvals, jaxpr, consts, aux = linearize(traceable, *primals, has_aux=True)
~/jax/jax/interpreters/ad.py in linearize(traceable, *primals, **kwargs)
93 _, in_tree = tree_flatten(((primals, primals), {}))
94 jvpfun_flat, out_tree = flatten_fun(jvpfun, in_tree)
---> 95 jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals)
96 pval_primals, pval_tangents = tree_unflatten(out_tree(), out_pvals)
97 aval_primals, const_primals = unzip2(pval_primals)
~/jax/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate, stage_out, bottom)
372 with new_master(trace_type, bottom=bottom) as master:
373 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 374 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
375 assert not env
376 del master
~/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
<ipython-input-2-b58b959b770c> in experiment(theta)
23
24 q = 0.
---> 25 q, _ = jax.lax.scan(step, q, None, 4)
26 return q
27
~/jax/jax/lax/lax_control_flow.py in scan(f, init, xs, length)
846 x_dtypes = [x.dtype for x in xs_flat]
847 x_avals = tuple(_map(ShapedArray, x_shapes, x_dtypes))
--> 848 jaxpr, consts, out_tree = _initial_style_jaxpr(f, in_tree, carry_avals + x_avals)
849 out_tree_children = out_tree.children()
850 if len(out_tree_children) != 2:
~/jax/jax/lax/lax_control_flow.py in _initial_style_jaxpr(fun, in_tree, in_avals)
60 with core.initial_style_staging():
61 jaxpr, out_pvals, consts = pe.trace_to_jaxpr(
---> 62 wrapped_fun, in_pvals, instantiate=True, stage_out=False)
63 out_avals = _map(raise_to_shaped, unzip2(out_pvals)[0])
64 const_avals = tuple(raise_to_shaped(core.get_aval(c)) for c in consts)
~/jax/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate, stage_out, bottom)
372 with new_master(trace_type, bottom=bottom) as master:
373 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 374 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
375 assert not env
376 del master
~/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
<ipython-input-2-b58b959b770c> in step(q, _)
18 def experiment(theta):
19 def step(q, _):
---> 20 z = f(np.eye(3), np.ones(3) * theta)
21 q += z[0]
22 return q, q
~/jax/jax/custom_derivatives.py in __call__(self, *args, **kwargs)
211 flat_jvp, out_tree2 = _flatten_jvp(jvp, in_tree)
212 if core.trace_state.initial_style:
--> 213 out_flat = custom_jvp_call_jaxpr(flat_fun, flat_jvp, *args_flat)
214 out_tree = out_tree1()
215 else:
~/jax/jax/custom_derivatives.py in custom_jvp_call_jaxpr(fun, jvp, *args)
280 jvp_jaxpr_thunk = _memoize(lambda: _initial_style_jaxpr(jvp, in_avals * 2))
281 return custom_jvp_call_jaxpr_p.bind(*args, fun_jaxpr=fun_jaxpr,
--> 282 jvp_jaxpr_thunk=jvp_jaxpr_thunk)
283
284 def _custom_jvp_call_jaxpr_impl(*args, fun_jaxpr, **_):
~/jax/jax/core.py in bind(self, *args, **kwargs)
200
201 tracers = map(top_trace.full_raise, args)
--> 202 out_tracer = top_trace.process_primitive(self, tracers, kwargs)
203 if self.multiple_results:
204 return map(full_lower, out_tracer)
~/jax/jax/interpreters/ad.py in process_primitive(self, primitive, tracers, params)
300 "Forward-mode differentiation rule for '{}' not implemented"
301 .format(primitive)) from err
--> 302 primal_out, tangent_out = jvp(primals_in, tangents_in, **params)
303 if primitive.multiple_results:
304 return [JVPTracer(self, x, t) for x, t in zip(primal_out, tangent_out)]
~/jax/jax/custom_derivatives.py in _custom_jvp_call_jaxpr_jvp(primals, tangents, fun_jaxpr, jvp_jaxpr_thunk)
295 def _custom_jvp_call_jaxpr_jvp(primals, tangents, *, fun_jaxpr, jvp_jaxpr_thunk):
296 jvp_jaxpr = jvp_jaxpr_thunk()
--> 297 outs = core.jaxpr_as_fun(jvp_jaxpr)(*(primals + tangents))
298 return split_list(outs, [len(outs) // 2])
299 ad.primitive_jvps[custom_jvp_call_jaxpr_p] = _custom_jvp_call_jaxpr_jvp
~/jax/jax/core.py in jaxpr_as_fun(typed_jaxpr, *args)
108 @curry
109 def jaxpr_as_fun(typed_jaxpr: TypedJaxpr, *args):
--> 110 return eval_jaxpr(typed_jaxpr.jaxpr, typed_jaxpr.literals, *args)
111
112
~/jax/jax/core.py in eval_jaxpr(jaxpr, consts, *args)
267 else:
268 subfuns = []
--> 269 ans = eqn.primitive.bind(*(subfuns + in_vals), **params)
270 if eqn.primitive.multiple_results:
271 map(write, eqn.outvars, ans)
~/jax/jax/core.py in bind(self, *args, **kwargs)
200
201 tracers = map(top_trace.full_raise, args)
--> 202 out_tracer = top_trace.process_primitive(self, tracers, kwargs)
203 if self.multiple_results:
204 return map(full_lower, out_tracer)
~/jax/jax/interpreters/partial_eval.py in process_primitive(self, primitive, tracers, params)
97 return custom_partial_eval_rules[primitive](self, *tracers, **params)
98 else:
---> 99 return self.default_process_primitive(primitive, tracers, params)
100
101 def default_process_primitive(self, primitive, tracers, params):
~/jax/jax/interpreters/partial_eval.py in default_process_primitive(self, primitive, tracers, params)
103 if all(pv is None for pv in pvs):
104 return primitive.bind(*consts, **params)
--> 105 tracers = map(self.instantiate_const, tracers)
106 avals = [t.aval for t in tracers]
107 out_aval = primitive.abstract_eval(*avals, **params)
~/jax/jax/util.py in safe_map(f, *args)
32 for arg in args[1:]:
33 assert len(arg) == n, 'length mismatch: {}'.format(list(map(len, args)))
---> 34 return list(map(f, *args))
35
36 def unzip2(xys):
~/jax/jax/interpreters/partial_eval.py in instantiate_const(self, tracer)
79 return self.new_instantiated_literal(const)
80 else:
---> 81 return self.new_instantiated_const(const)
82 else:
83 raise TypeError(pv)
~/jax/jax/interpreters/partial_eval.py in new_instantiated_const(self, val)
65
66 def new_instantiated_const(self, val):
---> 67 return JaxprTracer(self, PartialVal((get_aval(val), unit)), ConstVar(val))
68
69 def new_arg(self, pval):
~/jax/jax/core.py in get_aval(x)
684 return x.aval
685 else:
--> 686 return concrete_aval(x)
687
688
~/jax/jax/core.py in concrete_aval(x)
677 return pytype_aval_mappings[type(x)](x)
678 except KeyError as err:
--> 679 raise TypeError("{} is not a valid Jax type".format(type(x))) from err
680
681
TypeError: <class 'jax.ad_util.Zero'> is not a valid Jax type
```
Thanks for the report, and for testing out this new feature even before we released it! Yes, this looks like a bug to me.
By the way, the line `dz = dA @ db` in your JVP rule tripped me up. That's a math bug, even if it should be fine to write it in a JVP rule, because the output gradient must be a linear function of the input gradients. But switching that to the correct `dz = A @ db + dA @ b` gives the same error, so that isn't the issue here. | 2020-04-10T18:42:05 |
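As a reading aid for the linearity point above: a JVP must be linear in the tangents, and the corrected rule reproduces the product rule that JAX derives on its own. A small check (hypothetical shapes, not from the issue):
```python
import jax
import jax.numpy as jnp

A, b = jnp.eye(3), jnp.ones(3)
dA, db = jnp.ones((3, 3)) * 0.1, jnp.ones(3) * 0.2

_, dz_ref = jax.jvp(lambda A, b: A @ b, (A, b), (dA, db))
print(dz_ref, A @ db + dA @ b)  # both give the product-rule tangent
```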
google/jax | 2,698 | google__jax-2698 | [
"2107"
] | 8c2901cf4a81b39d890033ef9b85429986acd9a7 | diff --git a/jax/interpreters/ad.py b/jax/interpreters/ad.py
--- a/jax/interpreters/ad.py
+++ b/jax/interpreters/ad.py
@@ -143,8 +143,11 @@ def backward_pass(jaxpr: core.Jaxpr, consts, args, cotangents_in):
def write_cotangent(v, ct):
# assert v not in primal_env
- if ct is not None and type(v) is not Literal:
+ if ct is not None and type(v) is not Literal and ct is not zero:
ct_env[v] = add_tangents(ct_env[v], ct) if v in ct_env else ct
+ if not core.skip_checks:
+ ct_aval = core.get_aval(ct_env[v])
+ assert v.aval == core.lattice_join(v.aval, ct_aval)
def read_cotangent(v):
return ct_env.get(v, zero)
diff --git a/jax/lax/__init__.py b/jax/lax/__init__.py
--- a/jax/lax/__init__.py
+++ b/jax/lax/__init__.py
@@ -20,7 +20,7 @@
_const, _eq_meet, _broadcasting_select,
_check_user_dtype_supported, _one, _const,
_upcast_fp16_for_computation, _broadcasting_shape_rule,
- _eye, _tri, _delta)
+ _eye, _tri, _delta, _ones, _zeros)
from .lax_control_flow import *
from .lax_fft import *
from .lax_parallel import *
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -39,7 +39,7 @@
import numpy as onp
import opt_einsum
-from jax import jit, device_put
+from jax import jit, device_put, custom_jvp
from .. import core
from .. import dtypes
from ..abstract_arrays import UnshapedArray, ShapedArray, ConcreteArray
@@ -583,6 +583,7 @@ def power(x1, x2):
return acc
+@custom_jvp
@_wraps(onp.logaddexp)
def logaddexp(x1, x2):
x1, x2 = _promote_shapes("logaddexp", *_promote_dtypes_inexact(x1, x2))
@@ -592,7 +593,21 @@ def logaddexp(x1, x2):
lax.add(x1, x2), # NaNs or infinities of the same sign.
lax.add(amax, lax.log1p(lax.exp(-lax.abs(delta)))))
[email protected]
+def _logaddexp_jvp(primals, tangents):
+ x1, x2 = primals
+ t1, t2 = tangents
+ x1, x2, t1, t2 = broadcast_arrays(x1, x2, t1, t2)
+ primal_out = logaddexp(x1, x2)
+ tangent_out = (t1 * exp(_replace_inf(x1) - _replace_inf(primal_out)) +
+ t2 * exp(_replace_inf(x2) - _replace_inf(primal_out)))
+ return primal_out, tangent_out
+def _replace_inf(x):
+ return lax.select(isposinf(x), zeros_like(x), x)
+
+
+@custom_jvp
@_wraps(onp.logaddexp2)
def logaddexp2(x1, x2):
x1, x2 = _promote_shapes("logaddexp2", *_promote_dtypes_inexact(x1, x2))
@@ -602,6 +617,15 @@ def logaddexp2(x1, x2):
lax.add(x1, x2), # NaNs or infinities of the same sign.
lax.add(amax, lax.div(lax.log1p(exp2(-lax.abs(delta))),
_constant_like(x1, onp.log(2)))))
[email protected]
+def _logaddexp2_jvp(primals, tangents):
+ x1, x2 = primals
+ t1, t2 = tangents
+ x1, x2, t1, t2 = broadcast_arrays(x1, x2, t1, t2)
+ primal_out = logaddexp2(x1, x2)
+ tangent_out = (t1 * 2 ** (_replace_inf(x1) - _replace_inf(primal_out)) +
+ t2 * 2 ** (_replace_inf(x2) - _replace_inf(primal_out)))
+ return primal_out, tangent_out
@_wraps(onp.log2)
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -696,8 +696,10 @@ def cases_from_gens(*gens):
class JaxTestCase(parameterized.TestCase):
"""Base class for JAX tests including numerical checks and boilerplate."""
- def tearDown(self) -> None:
- assert core.reset_trace_state()
+ # TODO(mattjj): this obscures the error messages from failures, figure out how
+ # to re-enable it
+ # def tearDown(self) -> None:
+ # assert core.reset_trace_state()
def assertArraysAllClose(self, x, y, check_dtypes, atol=None, rtol=None):
"""Assert that x and y are close (up to numerical tolerances)."""
diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2966,6 +2966,12 @@ def grad_test_spec(op, nargs, order, rng_factory, dtypes, name=None, tol=None):
grad_test_spec(jnp.arctanh, nargs=1, order=2,
rng_factory=partial(jtu.rand_uniform, -0.9, 0.9),
dtypes=[onp.float64, onp.complex64], tol=1e-4),
+ grad_test_spec(jnp.logaddexp, nargs=2, order=1,
+ rng_factory=partial(jtu.rand_uniform, -0.9, 0.9),
+ dtypes=[onp.float64], tol=1e-4),
+ grad_test_spec(jnp.logaddexp2, nargs=2, order=2,
+ rng_factory=partial(jtu.rand_uniform, -0.9, 0.9),
+ dtypes=[onp.float64], tol=1e-4),
]
GradSpecialValuesTestSpec = collections.namedtuple(
@@ -2975,7 +2981,7 @@ def grad_test_spec(op, nargs, order, rng_factory, dtypes, name=None, tol=None):
GradSpecialValuesTestSpec(jnp.arcsinh, [0., 1000.], 2),
GradSpecialValuesTestSpec(jnp.arccosh, [1000.], 2),
GradSpecialValuesTestSpec(jnp.arctanh, [0.], 2),
- GradSpecialValuesTestSpec(jnp.sinc, [0.], 1)
+ GradSpecialValuesTestSpec(jnp.sinc, [0.], 1),
]
def num_float_bits(dtype):
diff --git a/tests/nn_test.py b/tests/nn_test.py
--- a/tests/nn_test.py
+++ b/tests/nn_test.py
@@ -39,6 +39,27 @@ def testSoftplusGrad(self):
check_grads(nn.softplus, (1e-8,), order=4,
rtol=1e-2 if jtu.device_under_test() == "tpu" else None)
+ def testSoftplusGradZero(self):
+ check_grads(nn.softplus, (0.,), order=1,
+ rtol=1e-2 if jtu.device_under_test() == "tpu" else None)
+
+ def testSoftplusGradInf(self):
+ self.assertAllClose(
+ 1., jax.grad(nn.softplus)(float('inf')), check_dtypes=True)
+
+ def testSoftplusGradNegInf(self):
+ check_grads(nn.softplus, (-float('inf'),), order=1,
+ rtol=1e-2 if jtu.device_under_test() == "tpu" else None)
+
+ def testSoftplusGradNan(self):
+ check_grads(nn.softplus, (float('nan'),), order=1,
+ rtol=1e-2 if jtu.device_under_test() == "tpu" else None)
+
+ @parameterized.parameters([
+ int, np.int32, float, np.float64, np.float32, np.float64,])
+ def testSoftplusZero(self, dtype):
+ self.assertEqual(np.log(dtype(2)), nn.softplus(dtype(0)))
+
def testReluGrad(self):
rtol = 1e-2 if jtu.device_under_test() == "tpu" else None
check_grads(nn.relu, (1.,), order=3, rtol=rtol)
| grad(softplus) has wrong value at zero
`jax.grad(jax.nn.softplus)(0.0)` evaluates to 0.0, which is definitely wrong -- the right answer is 0.5.
This is easy to visualize:
```python
import matplotlib.pyplot as plt
x = jax.numpy.linspace(-2, 2, num=101)
plt.plot(x, jax.vmap(jax.grad(jax.nn.softplus))(x))
```

| `logaddexp` is implemented as:
```
  amax = lax.max(x1, x2)
  delta = lax.sub(x1, x2)
  return lax.select(isnan(delta),
                    lax.add(x1, x2),  # NaNs or infinities of the same sign.
                    lax.add(amax, lax.log1p(lax.exp(-lax.abs(delta)))))
```
The derivative of the implementation, especially the `abs` and `max`, seems problematic.
We'd probably do best to either make `logaddexp` primitive or to add a custom JVP rule for it.
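For illustration, a minimal custom-JVP sketch along those lines (using `jax.custom_jvp`; this is only a sketch, it omits the NaN/infinity handling of the real implementation and is not necessarily the exact rule that eventually landed):
```python
import jax
import jax.numpy as jnp

@jax.custom_jvp
def logaddexp(x1, x2):
  amax = jnp.maximum(x1, x2)
  return amax + jnp.log1p(jnp.exp(-jnp.abs(x1 - x2)))

@logaddexp.defjvp
def logaddexp_jvp(primals, tangents):
  x1, x2 = primals
  t1, t2 = tangents
  primal_out = logaddexp(x1, x2)
  # softmax-style weights; at x1 == x2 each weight is exactly 0.5, so the
  # derivative at the tie is well-defined (unlike differentiating max/abs).
  tangent_out = t1 * jnp.exp(x1 - primal_out) + t2 * jnp.exp(x2 - primal_out)
  return primal_out, tangent_out
```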
We have run into this on a real project so I think it would be good if we could fix this as soon as possible.
How about this version?
```python
def logaddexp2(x1, x2):
  amax = lax.max(x1, x2)
  amin = lax.min(x1, x2)
  delta = amin - amax
  return lax.select(jnp.isnan(delta),
                    lax.add(x1, x2),
                    lax.add(amax, lax.log1p(lax.exp(delta))))
```
It seems to be slightly slower than the original:
``` python
xs = np.linspace(-5, 5, 1000001)
f1 = jax.jit(jax.vmap(jax.grad(logaddexp)))
f2 = jax.jit(jax.vmap(jax.grad(logaddexp2)))
ys = np.zeros_like(xs)
f1(xs, ys)
f2(xs, ys)
%timeit f1(xs, ys)
%timeit f2(xs, ys)
```
```
100 loops, best of 3: 5.79 ms per loop
100 loops, best of 3: 6.35 ms per loop
```
But the slowdown is not too bad, should I make a PR with this version as an interim solution?
`0` seems to be handled correctly with this:

I'm attempting a solution that changes lax.max grad, #2195
@ibab That seems reasonable, though I'm not sure we can rely on XLA preserving NaN semantics under a `jit` on all backends.
@joaogui1 Thanks for digging in! I'm not sure we should couple fixing this issue to changing our derivative for `lax.max`, though. We can discuss what the best policy for `lax.max` is, but this issue seems to be crying out for a more direct solution, like a custom JVP rule.
@ibab re: NaN semantics, I have learned that XLA _will_ respect NaN semantics (and while LLVM on CPU might not, JAX currently sets flags so that it does). So your solution seems viable.
We're looking into this more...
@mattjj ok, but I think we should take a look at `lax.max` later, as right now its gradient is not compatible with the `lax.abs` gradient | 2020-04-13T18:19:03 |
google/jax | 2,712 | google__jax-2712 | [
"2663"
] | 1ac80d711b2ef274be19fb573705183de7fe9bc5 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -334,7 +334,9 @@ def grad(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
Args:
fun: Function to be differentiated. Its arguments at positions specified by
- ``argnums`` should be arrays, scalars, or standard Python containers. It
+ ``argnums`` should be arrays, scalars, or standard Python containers.
+ Argument arrays in the positions specified by ``argnums`` must be of
+ inexact (i.e., floating-point or complex) type. It
should return a scalar (which includes arrays with shape ``()`` but not
arrays with shape ``(1,)`` etc.)
argnums: Optional, integer or sequence of integers. Specifies which
| jax.numpy.tanh(2) works but jax.grad(tanh)(2) fails- int32 not liked by grad
Hi
this may be a little churlish to bring up, but ints are acceptable for some calls in jax but not for others. I assume this is not by design.
Example:
`jax.version.__version__ == '0.1.62'`
```
import jax.numpy as np
from jax import grad
aval =2
print("aval=" +str(aval))
print("np.tanh(aval)=" + str(np.tanh(aval)))
try:
    grad(np.tanh)(aval)
except TypeError as e:
    print(e)
    print("sadness - grad(np.tanh)(aval) failed ")
```
Output
aval=2
np.tanh(aval)=0.9640276
Primal inputs to reverse-mode differentiation must be of float or complex type, got type int32
sadness - grad(np.tanh)(aval) failed
| Well, I think this particular example is working as intended. You can think of your original `tanh` function as equivalent to the following:
```
import jax.numpy as jnp
def f(x):
  x = jnp.array(x, jnp.float32)
  return jnp.tanh(x)
```
In other words, the int->float cast happens *inside* the function you are differentiating. That is, you are differentiating a function that takes as input an integer and returns a float.
In general the cotangent types returned by `grad` match the input types, so we'd have to return an integer value to you, which probably isn't what you wanted. An error seems preferable.
What do you think?
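A minimal illustration of that point: if the primal passed to `grad` is already a float, there is nothing to complain about, since the cotangent can then have the same inexact type as the input:
```python
import jax
import jax.numpy as jnp

print(jax.grad(jnp.tanh)(2.0))   # float input: works
# print(jax.grad(jnp.tanh)(2))   # int input: raises the error reported above
```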
Hi Peter,
Thank you for the quick response.
I think your comment is quite reasonable and I get that the cotangent code is driven by the function and its inputs so the results are understandable.
I can think of at least two ways forward.
1. Take the expedient approach and just make it clear in the docs (it may already be there; I apologize if this is discussed in the docs and I missed it) that the input type controls the type of the cotangent output, and that since ints for gradients are not sensible, the inputs should be floats; otherwise the gradient code will throw an error.
2. Take a potentially more hazardous approach and handle ints in a more graceful way by casting them to a default float type before really starting on making the cotangent.
Given that there are probably better things to spend one's time on and approach 2 may give rise to some nasty subtle issues, I would think that option 1 would be the more sensible way of moving forward.
Dominic
| 2020-04-14T14:09:01 |
|
google/jax | 2,753 | google__jax-2753 | [
"2263"
] | 2d96cfb2666bae922e8fb9d458f6c6cb62269d76 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -278,6 +278,10 @@ def bitwise_xor(x: Array, y: Array) -> Array:
r"""Elementwise exclusive OR: :math:`x \oplus y`."""
return xor_p.bind(x, y)
+def population_count(x: Array) -> Array:
+ r"""Elementwise popcount, count the number of set bits in each element."""
+ return population_count_p.bind(x)
+
def add(x: Array, y: Array) -> Array:
r"""Elementwise addition: :math:`x + y`."""
return add_p.bind(x, y)
@@ -2023,6 +2027,8 @@ def _pow_jvp_rhs(g, ans, x, y):
xor_p = standard_naryop([_bool_or_int, _bool_or_int], 'xor')
ad.defjvp_zero(xor_p)
+population_count_p = standard_unop(_bool_or_int, 'population_count')
+
def _add_transpose(t, x, y):
# The following linearity assertion is morally true, but because in some cases we
# instantiate zeros for convenience, it doesn't always hold.
diff --git a/jax/lax_reference.py b/jax/lax_reference.py
--- a/jax/lax_reference.py
+++ b/jax/lax_reference.py
@@ -111,6 +111,31 @@ def rem(lhs, rhs):
shift_right_arithmetic = onp.right_shift
# TODO shift_right_logical
+def population_count(x):
+ assert x.dtype in (onp.uint32, onp.uint64)
+ m = [
+ 0x5555555555555555, # binary: 0101...
+ 0x3333333333333333, # binary: 00110011..
+ 0x0f0f0f0f0f0f0f0f, # binary: 4 zeros, 4 ones ...
+ 0x00ff00ff00ff00ff, # binary: 8 zeros, 8 ones ...
+ 0x0000ffff0000ffff, # binary: 16 zeros, 16 ones ...
+ 0x00000000ffffffff, # binary: 32 zeros, 32 ones
+ ]
+
+ if x.dtype == onp.uint32:
+ m = list(map(onp.uint32, m[:-1]))
+ else:
+ m = list(map(onp.uint64, m))
+
+ x = (x & m[0]) + ((x >> 1) & m[0]) # put count of each 2 bits into those 2 bits
+ x = (x & m[1]) + ((x >> 2) & m[1]) # put count of each 4 bits into those 4 bits
+ x = (x & m[2]) + ((x >> 4) & m[2]) # put count of each 8 bits into those 8 bits
+ x = (x & m[3]) + ((x >> 8) & m[3]) # put count of each 16 bits into those 16 bits
+ x = (x & m[4]) + ((x >> 16) & m[4]) # put count of each 32 bits into those 32 bits
+ if x.dtype == onp.uint64:
+ x = (x & m[5]) + ((x >> 32) & m[5]) # put count of each 64 bits into those 64 bits
+ return x
+
eq = onp.equal
ne = onp.not_equal
ge = onp.greater_equal
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -152,6 +152,7 @@ def op_record(op, nargs, dtypes, rng_factory, tol=None):
op_record("bitwise_not", 1, bool_dtypes, jtu.rand_small),
op_record("bitwise_or", 2, bool_dtypes, jtu.rand_small),
op_record("bitwise_xor", 2, bool_dtypes, jtu.rand_small),
+ op_record("population_count", 1, uint_dtypes, partial(jtu.rand_int, 1 << 32)),
op_record("add", 2, default_dtypes + complex_dtypes, jtu.rand_small),
op_record("sub", 2, default_dtypes + complex_dtypes, jtu.rand_small),
| Feature request: enable XLA PopulationCount op
I think XLA has a PopulationCount op, listed [here](https://www.tensorflow.org/xla/operation_semantics#element-wise_unary_functions). Is it straightforward for JAX to expose it in lax.py? @hawkinsp I'm thinking you might be the right person to ask.
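For reference, a small usage sketch of the op as exposed by the patch above (unsigned integer inputs; the comment just lists the expected popcounts):
```python
import jax.numpy as jnp
from jax import lax

x = jnp.array([0, 1, 2, 255], dtype=jnp.uint32)
print(lax.population_count(x))  # number of set bits per element: [0, 1, 1, 8]
```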
| 2020-04-17T12:02:08 |
|
google/jax | 2,786 | google__jax-2786 | [
"2583"
] | 18f967420c12765467db5c66d2e54f5580d08269 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2558,7 +2558,7 @@ def tensordot(a, b, axes=2, precision=None):
@_wraps(onp.einsum, lax_description=_PRECISION_DOC)
def einsum(*operands, **kwargs):
- optimize = kwargs.pop('optimize', 'auto')
+ optimize = kwargs.pop('optimize', True)
optimize = 'greedy' if optimize is True else optimize
precision = kwargs.pop('precision', None)
if kwargs:
| Consider setting optimize=True as default for einsum
We should consider changing the default of einsum to be optimized, as the search for the best contraction order will generally be faster than compilation anyway.
The contraction order calculation tends to only be slow when there are >10 tensors, which doesn't happen very often.
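For illustration, a sketch of what the change means at the call site (the `'greedy'` string mirrors the value the patch substitutes for `optimize=True`; treat other accepted values as an assumption following opt_einsum conventions):
```python
import jax.numpy as jnp

a = jnp.ones((8, 16)); b = jnp.ones((16, 32)); c = jnp.ones((32, 4))

jnp.einsum('ij,jk,kl->il', a, b, c)                     # optimized contraction order by default
jnp.einsum('ij,jk,kl->il', a, b, c, optimize='greedy')  # or request a strategy explicitly
```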
| Nice idea! What's numpy do here?
Numpy defaults to False | 2020-04-21T22:02:28 |
|
google/jax | 2,789 | google__jax-2789 | [
"2779"
] | ec03f8e2d59d99ed823ec72b298a32f5b5696160 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -4494,6 +4494,7 @@ def _sort_batch_rule(batched_args, batch_dims, *, dimension):
sort_p = standard_primitive(sort_shape, _input_dtype, 'sort')
ad.defjvp(sort_p, _sort_jvp_rule)
+xla.translations[sort_p] = partial(standard_translate, 'sort', is_stable=True)
batching.primitive_batchers[sort_p] = _sort_batch_rule
def _sort_key_val_abstract_eval(keys, values, *, dimension):
@@ -4562,7 +4563,8 @@ def _sort_key_val_batch_rule(batched_args, batch_dims, *, dimension):
sort_key_val_p.multiple_results = True
sort_key_val_p.def_impl(partial(xla.apply_primitive, sort_key_val_p))
sort_key_val_p.def_abstract_eval(_sort_key_val_abstract_eval)
-xla.translations[sort_key_val_p] = partial(standard_translate, 'sort_key_val')
+xla.translations[sort_key_val_p] = partial(standard_translate, 'sort_key_val',
+ is_stable=True)
ad.primitive_jvps[sort_key_val_p] = _sort_key_val_jvp
ad.primitive_transposes[sort_key_val_p] = _sort_key_val_transpose_rule
batching.primitive_batchers[sort_key_val_p] = _sort_key_val_batch_rule
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1095,7 +1095,6 @@ def testRepeat(self, axis, shape, dtype, repeats, rng_factory):
for return_index in [False, True]
for return_inverse in [False, True]
for return_counts in [False, True]))
- @jtu.skip_on_devices("gpu", "tpu") # https://github.com/google/jax/issues/2779
def testUnique(self, shape, dtype, return_index, return_inverse, return_counts, rng):
args_maker = lambda: [rng(shape, dtype)]
onp_fun = lambda x: onp.unique(x, return_index, return_inverse, return_counts)
diff --git a/tests/lax_scipy_sparse_test.py b/tests/lax_scipy_sparse_test.py
--- a/tests/lax_scipy_sparse_test.py
+++ b/tests/lax_scipy_sparse_test.py
@@ -101,12 +101,14 @@ def args_maker():
check_dtypes=True,
tol=3e-5)
+ # TODO(shoyer,mattjj): I had to loosen the tolerance for complex64[7,7]
+ # with preconditioner=random
self._CheckAgainstNumpy(
partial(scipy_cg, M=M, maxiter=3),
partial(lax_cg, M=M, maxiter=3),
args_maker,
check_dtypes=True,
- tol=1e-4)
+ tol=3e-3)
self._CheckAgainstNumpy(
np.linalg.solve,
| Expose stable flag on XLA sort
XLA's sort HLOs have an "is_stable" attribute, which defaults to false; in practice, the sort used on CPU is stable but the one on GPU isn't. This means that e.g. this code will fail on GPU:
```python
x = onp.array([5, 1, 2, 6, 5, -2, 2, 0, 0, 1, 0, 4])
assert jnp.argsort(x) == onp.argsort(x)
```
and #2760 won't match NumPy behavior (ties to earliest) either.
| 2020-04-22T00:40:32 |
|
google/jax | 2,794 | google__jax-2794 | [
"2792"
] | a2c06d6113ea02075bfbc924d2d6d8fd39c2f6d3 | diff --git a/jax/lax_linalg.py b/jax/lax_linalg.py
--- a/jax/lax_linalg.py
+++ b/jax/lax_linalg.py
@@ -852,17 +852,18 @@ def svd_jvp_rule(primals, tangents, full_matrices, compute_uv):
s_dim = s[..., None, :]
dS = np.matmul(np.matmul(Ut, dA), V)
ds = np.real(np.diagonal(dS, 0, -2, -1))
- F = 1 / (np.square(s_dim) - np.square(_T(s_dim)) + np.eye(k)) - np.eye(k)
+ F = 1 / (np.square(s_dim) - np.square(_T(s_dim)) + np.eye(k, dtype=A.dtype))
+ F = F - np.eye(k, dtype=A.dtype)
dSS = s_dim * dS
SdS = _T(s_dim) * dS
dU = np.matmul(U, F * (dSS + _T(dSS)))
dV = np.matmul(V, F * (SdS + _T(SdS)))
- m, n = A.shape[-2], A.shape[-1]
+ m, n = A.shape[-2:]
if m > n:
- dU = dU + np.matmul(np.eye(m) - np.matmul(U, Ut), np.matmul(dA, V)) / s_dim
+ dU = dU + np.matmul(np.eye(m, dtype=A.dtype) - np.matmul(U, Ut), np.matmul(dA, V)) / s_dim
if n > m:
- dV = dV + np.matmul(np.eye(n) - np.matmul(V, Vt), np.matmul(_H(dA), U)) / s_dim
+ dV = dV + np.matmul(np.eye(n, dtype=A.dtype) - np.matmul(V, Vt), np.matmul(_H(dA), U)) / s_dim
return (s, U, Vt), (ds, dU, _T(dV))
def _svd_cpu_gpu_translation_rule(gesvd_impl, c, operand, full_matrices, compute_uv):
diff --git a/jax/numpy/linalg.py b/jax/numpy/linalg.py
--- a/jax/numpy/linalg.py
+++ b/jax/numpy/linalg.py
@@ -33,6 +33,7 @@
from ..third_party.numpy.linalg import cond, multi_dot, tensorinv, tensorsolve
_T = lambda x: np.swapaxes(x, -1, -2)
+_H = lambda x: np.conj(np.swapaxes(x, -1, -2))
def _promote_arg_dtypes(*args):
@@ -188,32 +189,47 @@ def eigvalsh(a, UPLO='L'):
return w
+@partial(custom_jvp, nondiff_argnums=(1,))
@_wraps(onp.linalg.pinv, lax_description=textwrap.dedent("""\
It differs only in default value of `rcond`. In `numpy.linalg.pinv`, the
default `rcond` is `1e-15`. Here the default is
`10. * max(num_rows, num_cols) * np.finfo(dtype).eps`.
"""))
def pinv(a, rcond=None):
- # ported from https://github.com/numpy/numpy/blob/v1.17.0/numpy/linalg/linalg.py#L1890-L1979
+ # Uses same algorithm as
+ # https://github.com/numpy/numpy/blob/v1.17.0/numpy/linalg/linalg.py#L1890-L1979
a = np.conj(a)
- # copied from https://github.com/tensorflow/probability/blob/master/tensorflow_probability/python/math/linalg.py#L442
if rcond is None:
- max_rows_cols = max(a.shape[-2:])
- rcond = 10. * max_rows_cols * np.finfo(a.dtype).eps
+ max_rows_cols = max(a.shape[-2:])
+ rcond = 10. * max_rows_cols * np.finfo(a.dtype).eps
rcond = np.asarray(rcond)
u, s, v = svd(a, full_matrices=False)
# Singular values less than or equal to ``rcond * largest_singular_value``
# are set to zero.
cutoff = rcond[..., np.newaxis] * np.amax(s, axis=-1, keepdims=True)
- large = s > cutoff
- s = np.divide(1, s)
- s = np.where(large, s, 0)
- vT = np.swapaxes(v, -1, -2)
- uT = np.swapaxes(u, -1, -2)
- res = np.matmul(vT, np.multiply(s[..., np.newaxis], uT))
+ s = np.where(s > cutoff, s, np.inf)
+ res = np.matmul(_T(v), np.divide(_T(u), s[..., np.newaxis]))
return lax.convert_element_type(res, a.dtype)
[email protected]
+def _pinv_jvp(rcond, primals, tangents):
+ # The Differentiation of Pseudo-Inverses and Nonlinear Least Squares Problems
+ # Whose Variables Separate. Author(s): G. H. Golub and V. Pereyra. SIAM
+ # Journal on Numerical Analysis, Vol. 10, No. 2 (Apr., 1973), pp. 413-432.
+ # (via https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse#Derivative)
+ a, = primals
+ a_dot, = tangents
+ p = pinv(a, rcond=rcond)
+ m, n = a.shape[-2:]
+ # TODO(phawkins): on TPU, we would need to opt into high precision here.
+ # TODO(phawkins): consider if this can be simplified in the Hermitian case.
+ p_dot = -p @ a_dot @ p
+ p_dot = p_dot + p @ _H(p) @ _H(a_dot) @ (np.eye(m, dtype=a.dtype) - a @ p)
+ p_dot = p_dot + (np.eye(n, dtype=a.dtype) - p @ a) @ _H(a_dot) @ _H(p) @ p
+ return p, p_dot
+
+
@_wraps(onp.linalg.inv)
def inv(a):
if np.ndim(a) < 2 or a.shape[-1] != a.shape[-2]:
| diff --git a/tests/linalg_test.py b/tests/linalg_test.py
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -703,7 +703,7 @@ def args_maker():
{"testcase_name":
"_shape={}".format(jtu.format_shape_dtype_string(shape, dtype)),
"shape": shape, "dtype": dtype, "rng_factory": rng_factory}
- for shape in [(1, 1), (4, 4), (2, 70, 7), (2000, 7), (7, 10000), (70, 7, 2)]
+ for shape in [(1, 1), (4, 4), (2, 70, 7), (2000, 7), (7, 1000), (70, 7, 2)]
for dtype in float_types + complex_types
for rng_factory in [jtu.rand_default]))
@jtu.skip_on_devices("tpu") # SVD is not implemented on the TPU backend
@@ -716,6 +716,24 @@ def testPinv(self, shape, dtype, rng_factory):
self._CheckAgainstNumpy(onp.linalg.pinv, np.linalg.pinv, args_maker,
check_dtypes=True, tol=1e-3)
self._CompileAndCheck(np.linalg.pinv, args_maker, check_dtypes=True)
+ # TODO(phawkins): 1e-1 seems like a very loose tolerance.
+ jtu.check_grads(np.linalg.pinv, args_maker(), 2, rtol=1e-1)
+
+
+ def testPinvGradIssue2792(self):
+ def f(p):
+ a = np.array([[0., 0.],[-p, 1.]], np.float32) * 1 / (1 + p**2)
+ return np.linalg.pinv(a)
+ j = jax.jacobian(f)(np.float32(2.))
+ self.assertAllClose(np.array([[0., -1.], [ 0., 0.]], np.float32), j,
+ check_dtypes=True)
+
+ expected = np.array([[[[-1., 0.], [ 0., 0.]], [[0., -1.], [0., 0.]]],
+ [[[0., 0.], [-1., 0.]], [[0., 0.], [0., -1.]]]],
+ dtype=np.float32)
+ self.assertAllClose(
+ expected, jax.jacobian(np.linalg.pinv)(np.eye(2, dtype=np.float32)),
+ check_dtypes=True)
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_n={}".format(
| Add support for differentiation of pseudoinverse
Consider the following example:
```python
def f(p):
    a = jnp.array([[0.,0.],[- p, 1.]]) * 1/ (1 + p**2)
    return jnp.linalg.pinv(a)
jax.jacobian(f)(2.)
Out[75]:
DeviceArray([[nan, nan],
[nan, nan]], dtype=float32)
```
So it returns `nan` values, whereas the correct values are
```
array(
[[0., -1.],
[ 0., 0.]])
```
Oddly enough, jax is able to handle this function correctly:
```python
def f2(p):
    a = jnp.array([[- p, 1.]]) * 1/ (1 + p**2)
    return jnp.linalg.pinv(a)
In [77]: jax.jacobian(f2)(2.)
Out[77]:
DeviceArray([[-0.99999976],
[ 0. ]], dtype=float32)
```
Thoughts:
1. ~~f2 returns nd arrays and jax.jacobian seems not to handle those cases too well (probably should open another issue for that)~~ (this seems to be fixed in recent versions though)
2. There is a closed formula for the derivative of the pseudoinverse which uses the derivative of the original array: https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse#Derivative
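For reference, the closed formula referenced in point 2 (Golub & Pereyra, 1973) is also the identity the JVP in the patch above implements; writing the pseudoinverse as A⁺ and the input tangent as a dot:
```latex
\dot{A}^{+} \;=\;
  -A^{+}\,\dot{A}\,A^{+}
  \;+\; A^{+}A^{+\mathsf{H}}\,\dot{A}^{\mathsf{H}}\,\bigl(I - A A^{+}\bigr)
  \;+\; \bigl(I - A^{+}A\bigr)\,\dot{A}^{\mathsf{H}}\,A^{+\mathsf{H}}A^{+}
```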
| 2020-04-22T13:42:42 |
|
google/jax | 2,803 | google__jax-2803 | [
"2795"
] | 59bdb1fb3d1329bcb7bf71e9e869ae40c4e7f05b | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -3275,7 +3275,7 @@ def _index_to_gather(x_shape, idx):
collapsed_slice_dims = []
start_index_map = []
- index_dtype = int64 if max(x_shape) >= (1 << 31) else int32
+ index_dtype = int64 if _max(x_shape, default=0) >= (1 << 31) else int32
gather_indices = onp.zeros((0,), dtype=index_dtype) # use onp to save a compilation
# We perform three transformations to y before the scatter op, in order:
| diff --git a/tests/lax_numpy_indexing_test.py b/tests/lax_numpy_indexing_test.py
--- a/tests/lax_numpy_indexing_test.py
+++ b/tests/lax_numpy_indexing_test.py
@@ -18,6 +18,7 @@
from functools import partial
import itertools
import unittest
+import warnings
from absl.testing import absltest
from absl.testing import parameterized
@@ -983,6 +984,14 @@ def testSegmentSum(self):
expected = onp.array([13, 2, 7, 4])
self.assertAllClose(ans, expected, check_dtypes=False)
+ def testIndexDtypeError(self):
+ # https://github.com/google/jax/issues/2795
+ jnp.array(1) # get rid of startup warning
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter("error")
+ jnp.zeros(5).at[::2].set(1)
+ self.assertLen(w, 0)
+
if __name__ == "__main__":
absltest.main()
| Indexing with stride != 1 results in warning about int64
This code:
```py
import jax.numpy as jnp
jnp.zeros(5).at[::2].set(1)
```
results in a warning
```
UserWarning: Explicitly requested dtype <class 'jax.numpy.lax_numpy.int64'> requested in arange is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
```
No warning is issued when doing integer indexing or slice indexing with stride 1.
When raising warnings as errors:
```py
import warnings
warnings.simplefilter("error")
import jax.numpy as jnp
jnp.zeros(5).at[::2].set(1)
```
this is the stacktrace:
```
---------------------------------------------------------------------------
UserWarning Traceback (most recent call last)
<ipython-input-7-058b40a9a665> in <module>
----> 1 jnp.zeros(5).at[::2].set(1)
[...]/lib/python3.8/site-packages/jax/numpy/lax_numpy.py in set(self, values)
3929 See :mod:`jax.ops` for details.
3930 """
-> 3931 return ops.index_update(self.array, self.index, values)
3932
3933 def add(self, values):
[...]/lib/python3.8/site-packages/jax/ops/scatter.py in index_update(x, idx, y)
279 [1., 1., 1., 6., 6., 6.]], dtype=float32)
280 """
--> 281 return _scatter_update(x, idx, y, lax.scatter)
282
283 def segment_sum(data, segment_ids, num_segments=None):
[...]/lib/python3.8/site-packages/jax/ops/scatter.py in _scatter_update(x, idx, y, scatter_op)
45 # is more or less a transpose of the gather equivalent.
46 treedef, static_idx, dynamic_idx = np._split_index_for_jit(idx)
---> 47 return _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx)
48
49
[...]/lib/python3.8/site-packages/jax/ops/scatter.py in _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx)
56
57 idx = np._merge_static_and_dynamic_indices(treedef, static_idx, dynamic_idx)
---> 58 indexer = np._index_to_gather(np.shape(x), idx)
59
60 # Broadcast `y` to the slice output shape.
[...]/lib/python3.8/site-packages/jax/numpy/lax_numpy.py in _index_to_gather(x_shape, idx)
3367 start_index_map.append(x_axis)
3368 else:
-> 3369 i = arange(start, limit, stride, dtype=index_dtype)
3370 size = i.shape[0]
3371 slice_shape.append(size)
[...]/lib/python3.8/site-packages/jax/numpy/lax_numpy.py in arange(start, stop, step, dtype)
2114 @_wraps(onp.arange)
2115 def arange(start, stop=None, step=None, dtype=None):
-> 2116 lax._check_user_dtype_supported(dtype, "arange")
2117 if stop is None and step is None:
2118 dtype = dtype or _dtype(start)
[...]/lib/python3.8/site-packages/jax/lax/lax.py in _check_user_dtype_supported(dtype, fun_name)
5102 fun_name = "requested in {}".format(fun_name) if fun_name else ""
5103 truncated_dtype = dtypes.canonicalize_dtype(dtype).name
-> 5104 warnings.warn(msg.format(dtype, fun_name , truncated_dtype))
UserWarning: Explicitly requested dtype <class 'jax.numpy.lax_numpy.int64'> requested in arange is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
```
It looks innocuous, but the code that generates the `index_dtype` is here:
https://github.com/google/jax/blob/a2c06d6113ea02075bfbc924d2d6d8fd39c2f6d3/jax/numpy/lax_numpy.py#L3278
Why would it ever produce `int64` in this scenario?
| Wow, fantastic report. Thank you.
Wow, this is a good one!
The problem is on [this line](https://github.com/google/jax/blob/59bdb1fb3d1329bcb7bf71e9e869ae40c4e7f05b/jax/numpy/lax_numpy.py#L3278)
```python
index_dtype = int64 if max(x_shape) >= (1 << 31) else int32
```
Danger! The `max` in lax_numpy.py is not the Python builtin max, but rather it is `jax.numpy.max`. It produces a DeviceArray. That's the underlying issue.
What happens next is funny. `1 << 31` is the Python int `2147483648`, but to execute the `>=` operator, which forwards to `jax.numpy.greater_equal`, it gets cast down to int32 when the Python scalar is reconciled with the int32 DeviceArray from the above paragraph. And as we all know, `onp.array(1 << 31, onp.int32) == -2147483648`, so the conditional on the line pasted above is True!
The short-term fix is to replace `max` with `_max` on the line pasted above. I'm also going to look briefly into whether we should be catching these overflows when casting Python integers... | 2020-04-23T04:52:46 |
google/jax | 2,804 | google__jax-2804 | [
"2784"
] | 6b5e36763c8c21912c22e033cf09db108924f0c3 | diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -18,6 +18,7 @@
import itertools as it
import operator as op
+import jax
from . import core
from . import linear_util as lu
from .tree_util import tree_flatten, tree_unflatten, tree_map, tree_multimap
@@ -82,6 +83,15 @@ def sum_tangents(x, *xs):
def zeros_like_pytree(x):
return tree_map(lambda _: zero, x)
+def stop_gradient(x):
+ return tree_map(_stop_gradient, x)
+
+def _stop_gradient(x):
+ if isinstance(x, core.Tracer) or core.valid_jaxtype(x):
+ return jax.lax.stop_gradient(x)
+ else:
+ return x
+
### JVPs
@@ -199,7 +209,10 @@ def __call__(self, *args, **kwargs):
raise AttributeError(msg.format(self.__name__))
args = _resolve_kwargs(self.fun, args, kwargs)
if self.nondiff_argnums:
- dyn_argnums = [i for i in range(len(args)) if i not in self.nondiff_argnums]
+ is_nondiff = [False] * len(args)
+ for i in self.nondiff_argnums: is_nondiff[i] = True
+ args = [stop_gradient(x) if b else x for b, x in zip(is_nondiff, args)]
+ dyn_argnums = [i for i, b in enumerate(is_nondiff) if not b]
f_, dyn_args = argnums_partial(lu.wrap_init(self.fun), dyn_argnums, args)
static_args = [args[i] for i in self.nondiff_argnums]
jvp = _add_args(lu.wrap_init(self.jvp), static_args, left=True)
@@ -436,7 +449,10 @@ def __call__(self, *args, **kwargs):
raise AttributeError(msg.format(self.__name__))
args = _resolve_kwargs(self.fun, args, kwargs)
if self.nondiff_argnums:
- dyn_argnums = [i for i in range(len(args)) if i not in self.nondiff_argnums]
+ is_nondiff = [False] * len(args)
+ for i in self.nondiff_argnums: is_nondiff[i] = True
+ args = [stop_gradient(x) if b else x for b, x in zip(is_nondiff, args)]
+ dyn_argnums = [i for i, b in enumerate(is_nondiff) if not b]
f_, dyn_args = argnums_partial(lu.wrap_init(self.fun), dyn_argnums, args)
static_args = [args[i] for i in self.nondiff_argnums]
fwd, _ = argnums_partial(lu.wrap_init(self.fwd), dyn_argnums, args)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2853,6 +2853,28 @@ def g(f, x):
jax.grad(g, argnums=(1,))(F(2.0), 0.) # doesn't crash
+ def test_nondiff_argnums_stop_gradient(self):
+ # https://github.com/google/jax/issues/2784
+ @partial(api.custom_vjp, nondiff_argnums=(0, 1))
+ def _clip_gradient(lo, hi, x):
+ return x # identity function
+
+ def clip_gradient_fwd(lo, hi, x):
+ # return x, None
+ return x, (hi, )
+
+ def clip_gradient_bwd(lo, hi, _, g):
+ return (np.clip(g, lo, hi),)
+
+ _clip_gradient.defvjp(clip_gradient_fwd, clip_gradient_bwd)
+
+ def clip_gradient(x):
+ lo = -1
+ hi = x + 1 # causes things to break
+ return _clip_gradient(lo, hi, x)
+
+ jax.grad(clip_gradient)(1.) # doesn't crash
+
class DeprecatedCustomTransformsTest(jtu.JaxTestCase):
| Forward-mode differentiation rule for 'custom_lin' not implemented
Running this code produces the above error.
```
from functools import partial
import jax
import jax.numpy as np

@partial(jax.custom_vjp, nondiff_argnums=(0, 1))
def _clip_gradient(lo, hi, x):
  return x  # identity function

def clip_gradient_fwd(lo, hi, x):
  # return x, None
  return x, (hi, )

def clip_gradient_bwd(lo, hi, _, g):
  return (np.clip(g, lo, hi),)

_clip_gradient.defvjp(clip_gradient_fwd, clip_gradient_bwd)

def clip_gradient(x):
  lo = -1
  hi = x + 1  # causes things to break
  return _clip_gradient(lo, hi, x)

print(jax.grad(clip_gradient)(1.))
```
Replacing the residual with `None` (see commented out line in `clip_gradient_fwd`) makes the output
```
Traced<ConcreteArray(1.0)>with<JVPTrace(level=1/0)>
with primal = DeviceArray(1., dtype=float32)
tangent = Traced<ShapedArray(float32[]):JaxprTrace(level=0/0)>
```
I was also able to get a mismatched tracer levels error on a more complex example.
To my understanding setting `hi = x + 1` is the issue as it creates a trace of `hi` with `x`.
This example may seem a bit contrived, but I originally came across this trying to set an initial step size for `odeint` (see #2604).
IIUC we want to make concrete all `JVPTracer` instances that are in static args [here](https://github.com/google/jax/blob/master/jax/custom_derivatives.py#L441).
| I also just managed to produce ```NotImplementedError: Batching rule for 'custom_lin' not implemented``` while doing something else.
As a workaround, you can write
```python
def clip_gradient(x):
  lo = -1
  hi = jax.lax.stop_gradient(x + 1)
  return _clip_gradient(lo, hi, x)
``` | 2020-04-23T06:15:20 |
google/jax | 2,805 | google__jax-2805 | [
"2928"
] | 56f6294e377cb4e03c8e1a6fe82dade0c965d617 | diff --git a/jax/random.py b/jax/random.py
--- a/jax/random.py
+++ b/jax/random.py
@@ -1016,6 +1016,114 @@ def _gamma(key, a, shape, dtype):
return random_gamma_p.bind(key, a)[0]
+@partial(jit, static_argnums=(2, 3, 4))
+def _poisson_knuth(key, lam, shape, dtype, max_iters):
+ # Knuth's algorithm for generating Poisson random variates.
+ # Reference:
+ # https://en.wikipedia.org/wiki/Poisson_distribution#Generating_Poisson-distributed_random_variables
+
+ def body_fn(carry):
+ i, k, rng, log_prod = carry
+ rng, subkey = split(rng)
+ k = lax.select(log_prod > -lam, k + 1, k)
+ u = uniform(subkey, shape, onp.float32)
+ return i + 1, k, rng, log_prod + np.log(u)
+
+ def cond_fn(carry):
+ i, log_prod = carry[0], carry[3]
+ return (log_prod > -lam).any() & (i < max_iters)
+
+ k_init = lax.full_like(lam, 0, dtype, shape)
+ log_rate_init = lax.full_like(lam, 0, onp.float32, shape)
+ k = lax.while_loop(cond_fn, body_fn, (0, k_init, key, log_rate_init))[1]
+ return (k - 1).astype(dtype)
+
+
+@partial(jit, static_argnums=(2, 3, 4))
+def _poisson_rejection(key, lam, shape, dtype, max_iters):
+ # Transformed rejection due to Hormann.
+ # Reference:
+ # http://citeseer.ist.psu.edu/viewdoc/citations;jsessionid=1BEB35946CC807879F55D42512E5490C?doi=10.1.1.48.3054.
+ log_lam = lax.log(lam)
+ b = 0.931 + 2.53 * lax.sqrt(lam)
+ a = -0.059 + 0.02483 * b
+ inv_alpha = 1.1239 + 1.1328 / (b - 3.4)
+ v_r = 0.9277 - 3.6224 / (b - 2)
+
+ def body_fn(carry):
+ i, k_out, accepted, key = carry
+ key, subkey_0, subkey_1 = split(key, 3)
+
+ u = uniform(subkey_0, shape, lam.dtype) - 0.5
+ v = uniform(subkey_1, shape, lam.dtype)
+ u_shifted = 0.5 - abs(u)
+
+ k = lax.floor((2 * a / u_shifted + b) * u + lam + 0.43)
+ s = lax.log(v * inv_alpha / (a / (u_shifted * u_shifted) + b))
+ t = -lam + k * log_lam - lax.lgamma(k + 1)
+
+ accept1 = (u_shifted >= 0.07) & (v <= v_r)
+ reject = (k < 0) | ((u_shifted < 0.013) & (v > u_shifted))
+ accept2 = s <= t
+ accept = accept1 | (~reject & accept2)
+
+ k_out = lax.select(accept, k, k_out)
+ accepted |= accept
+
+ return i + 1, k_out, accepted, key
+
+ def cond_fn(carry):
+ i, k_out, accepted, key = carry
+ return (~accepted).any() & (i < max_iters)
+
+ k_init = lax.full_like(lam, -1, lam.dtype, shape)
+ accepted = lax.full_like(lam, False, np.bool_, shape)
+ k = lax.while_loop(cond_fn, body_fn, (0, k_init, accepted, key))[1]
+ return k.astype(dtype)
+
+
+@partial(jit, static_argnums=(2, 3))
+def _poisson(key, lam, shape, dtype):
+ # The implementation matches TensorFlow and NumPy:
+ # https://github.com/tensorflow/tensorflow/blob/v2.2.0-rc3/tensorflow/core/kernels/random_poisson_op.cc
+ # https://github.com/numpy/numpy/blob/v1.18.3/numpy/random/src/distributions/distributions.c#L574
+ # For lambda < 10, we use the Knuth algorithm; otherwise, we use transformed
+ # rejection sampling.
+ use_knuth = lam < 10
+ lam_knuth = lax.select(use_knuth, lam, lax.full_like(lam, 0.0))
+ # The acceptance probability for rejection sampling maxes out at 89% as
+ # λ -> ∞, so pick some arbitrary large value.
+ lam_rejection = lax.select(use_knuth, lax.full_like(lam, 1e5), lam)
+ max_iters = np.iinfo(dtype).max # insanely conservative
+ return lax.select(
+ use_knuth,
+ _poisson_knuth(key, lam_knuth, shape, dtype, max_iters),
+ _poisson_rejection(key, lam_rejection, shape, dtype, max_iters),
+ )
+
+
+def poisson(key, lam, shape=(), dtype=onp.int64):
+ """Sample Poisson random values with given shape and integer dtype.
+
+ Args:
+ key: a PRNGKey used as the random key.
+ lam: rate parameter (mean of the distribution), must be >= 0.
+ shape: optional, a tuple of nonnegative integers representing the result
+ shape. Default ().
+ dtype: optional, a integer dtype for the returned values (default int64 if
+ jax_enable_x64 is true, otherwise int32).
+
+ Returns:
+ A random array with the specified shape and dtype.
+ """
+ dtype = dtypes.canonicalize_dtype(dtype)
+ shape = abstract_arrays.canonicalize_shape(shape)
+ if onp.shape(lam) != shape:
+ lam = np.broadcast_to(lam, shape)
+ lam = lam.astype(onp.float32)
+ return _poisson(key, lam, shape, dtype)
+
+
def gumbel(key, shape=(), dtype=onp.float64):
"""Sample Gumbel random values with given shape and float dtype.
@@ -1039,6 +1147,7 @@ def _gumbel(key, shape, dtype):
return -np.log(-np.log(
uniform(key, shape, dtype, minval=np.finfo(dtype).eps, maxval=1.)))
+
def categorical(key, logits, axis=-1, shape=None):
"""Sample random values from categorical distributions.
@@ -1068,6 +1177,7 @@ def categorical(key, logits, axis=-1, shape=None):
sample_shape = shape[:len(shape)-len(batch_shape)]
return np.argmax(gumbel(key, sample_shape + logits.shape, logits.dtype) + logits, axis=axis)
+
def laplace(key, shape=(), dtype=onp.float64):
"""Sample Laplace random values with given shape and float dtype.
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -394,6 +394,38 @@ def testGammaGradType(self):
# Should not crash with a type error.
api.vjp(f, a, b)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_lam={}_{}".format(lam, dtype),
+ "lam": lam, "dtype": onp.dtype(dtype).name}
+ for lam in [0.5, 3, 9, 11, 50, 500]
+ for dtype in [onp.int32, onp.int64]))
+ def testPoisson(self, lam, dtype):
+ key = random.PRNGKey(0)
+ rand = lambda key, lam: random.poisson(key, lam, (10000,), dtype)
+ crand = api.jit(rand)
+
+ uncompiled_samples = rand(key, lam)
+ compiled_samples = crand(key, lam)
+
+ for samples in [uncompiled_samples, compiled_samples]:
+ self._CheckChiSquared(samples, scipy.stats.poisson(lam).pmf)
+ # TODO(shoyer): determine error bounds for moments more rigorously (e.g.,
+ # based on the central limit theorem).
+ self.assertAllClose(samples.mean(), lam, rtol=0.01, check_dtypes=False)
+ self.assertAllClose(samples.var(), lam, rtol=0.03, check_dtypes=False)
+
+ def testPoissonBatched(self):
+ key = random.PRNGKey(0)
+ lam = np.concatenate([2 * np.ones(10000), 20 * np.ones(10000)])
+ samples = random.poisson(key, lam, shape=(20000,))
+ self._CheckChiSquared(samples[:10000], scipy.stats.poisson(2.0).pmf)
+ self._CheckChiSquared(samples[10000:], scipy.stats.poisson(20.0).pmf)
+
+ def testPoissonShape(self):
+ key = random.PRNGKey(0)
+ x = random.poisson(key, onp.array([2.0, 20.0]), shape=(3, 2))
+ assert x.shape == (3, 2)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}".format(dtype), "dtype": onp.dtype(dtype).name}
for dtype in [onp.float32, onp.float64]))
| consider adding poisson random variable generator
Given the extensive list of random variables that can be generated by jax.random, poisson sticks out as an important omission.
Thanks!
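For reference, a usage sketch of the sampler added by the patch above (signature taken from the patch; the rates and shapes here are illustrative only):
```python
from jax import random
import jax.numpy as jnp

key = random.PRNGKey(0)
small = random.poisson(key, lam=3.0, shape=(5,))    # lam < 10: Knuth algorithm path
large = random.poisson(key, lam=50.0, shape=(5,))   # larger lam: transformed rejection path
batched = random.poisson(key, lam=jnp.array([0.5, 50.0]), shape=(2,))  # per-element rates
```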
| 2020-04-23T07:28:18 |
|
google/jax | 2,807 | google__jax-2807 | [
"2716"
] | 903010b7b90f5af6ff4d724721f4bc1f10237dc4 | diff --git a/jax/interpreters/pxla.py b/jax/interpreters/pxla.py
--- a/jax/interpreters/pxla.py
+++ b/jax/interpreters/pxla.py
@@ -383,26 +383,27 @@ def axis_index(axis_name):
[0 1]
[0 1]]
"""
+ return axis_index_p.bind(axis_name=axis_name)
+
+def _axis_index_bind(*, axis_name):
dynamic_axis_env = _thread_local_state.dynamic_axis_env
frame = dynamic_axis_env[axis_name]
sizes = dynamic_axis_env.sizes[:dynamic_axis_env.index(frame)+1]
nreps = dynamic_axis_env.nreps
- dummy_arg = frame.pmap_trace.pure(core.unit)
- if frame.soft_trace:
- dummy_arg = frame.soft_trace.pure(dummy_arg)
-
- return axis_index_p.bind(dummy_arg, nreps=nreps, sizes=sizes,
- soft_size=frame.soft_size, axis_name=axis_name)
+ trace = frame.pmap_trace
-def _axis_index_partial_eval(trace, _, **params):
- # This partial_eval rule adds the axis_index primitive into the jaxpr formed
- # during pmap lowering. It is like the standard JaxprTrace.process_primitive
- # rule except that we don't attempt to lower out of the trace.
out_aval = ShapedArray((), onp.int32)
out_tracer = pe.JaxprTracer(trace, pe.PartialVal.unknown(out_aval), None)
- eqn = pe.new_eqn_recipe([], [out_tracer], axis_index_p, params)
+ eqn = pe.new_eqn_recipe([], [out_tracer], axis_index_p,
+ dict(nreps=nreps, sizes=sizes,
+ soft_size=frame.soft_size, axis_name=axis_name))
out_tracer.recipe = eqn
- return out_tracer
+
+ if not frame.soft_trace:
+ return out_tracer
+ else:
+ val_out = out_tracer * frame.soft_size + onp.arange(frame.soft_size)
+ return SplitAxisTracer(frame.soft_trace, axis_name, val_out)
def _axis_index_translation_rule(c, nreps, sizes, soft_size, axis_name):
div = c.Constant(onp.array(nreps // prod(sizes), dtype=onp.uint32))
@@ -411,8 +412,8 @@ def _axis_index_translation_rule(c, nreps, sizes, soft_size, axis_name):
return c.ConvertElementType(unsigned_index, xb.dtype_to_etype(onp.int32))
axis_index_p = core.Primitive('axis_index')
+axis_index_p.def_custom_bind(_axis_index_bind)
xla.translations[axis_index_p] = _axis_index_translation_rule
-pe.custom_partial_eval_rules[axis_index_p] = _axis_index_partial_eval
### lazy device-memory persistence and result handling
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -1037,6 +1037,17 @@ def distributed_matrix_vector(x, y):
tol = 1e-1 if jtu.device_under_test() == "tpu" else 1e-3
self.assertAllClose(result, expected, check_dtypes=False, atol=tol, rtol=tol)
+ def testAxisIndexRemat(self):
+ # https://github.com/google/jax/issues/2716
+ n = len(jax.devices())
+
+ def f(key):
+ key = random.fold_in(key, jax.lax.axis_index('i'))
+ return random.bernoulli(key, p=0.5)
+
+ keys = random.split(random.PRNGKey(0), n)
+ jax.pmap(jax.remat(f), axis_name='i')(keys)
+
class PmapWithDevicesTest(jtu.JaxTestCase):
| remat(axis_index) drops dummy_arg to bind
```python
import jax
import numpy as np
def f(key):
  key = jax.random.fold_in(key, jax.lax.axis_index('i'))
  return jax.random.bernoulli(key, p=0.5)
jax.pmap(jax.remat(f), axis_name='i')(np.stack([jax.random.PRNGKey(428)] * 8))
```
Throws:
```
NotImplementedError: Evaluation rule for 'axis_index' not implemented
```
If I run args in pdb at the bind() that causes this error, I see:
```
(Pdb) args
self = axis_index
args = ()
kwargs = {'nreps': 1, 'sizes': (1,), 'soft_size': None, 'axis_name': 'model'}
```
which is missing the dummy_arg we usually give bind. Seems like it's getting lost in the partial_eval machinery for remat?
| 2020-04-23T17:59:28 |
|
google/jax | 2,810 | google__jax-2810 | [
"2759"
] | d2653a1e8a29dda3f2df40098076c83978a0098f | diff --git a/jax/interpreters/pxla.py b/jax/interpreters/pxla.py
--- a/jax/interpreters/pxla.py
+++ b/jax/interpreters/pxla.py
@@ -496,10 +496,11 @@ def block_until_ready(self):
def _value(self):
if self._npy_value is None:
self.copy_to_host_async()
- self._npy_value = onp.empty(self.aval.shape, self.aval.dtype)
+ npy_value = onp.empty(self.aval.shape, self.aval.dtype)
for i in range(0, len(self.device_buffers),
self.sharding_spec.replication_factor):
- self._npy_value[self.indices[i]] = self.device_buffers[i].to_py()
+ npy_value[self.indices[i]] = self.device_buffers[i].to_py()
+ self._npy_value = npy_value
return self._npy_value
def __getitem__(self, idx):
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -13,9 +13,11 @@
# limitations under the License.
+from concurrent.futures import ThreadPoolExecutor
from functools import partial
import os
from random import shuffle
+import threading
from unittest import SkipTest
import numpy as onp
@@ -1147,6 +1149,35 @@ def f(x, y):
self.assertAllClose(ans, expected, check_dtypes=False)
+class ShardedDeviceArrayTest(jtu.JaxTestCase):
+
+ def testThreadsafeIndexing(self):
+ # NOTE(skye): I picked these values to be big enough to cause interesting
+ # execution overlap, but small enough to not use too much memory. YMMV.
+ shape = (8, 8000, 1000)
+
+ if jax.device_count() < shape[0]:
+ raise SkipTest(f"requires {shape[0]} devices")
+
+ x = np.arange(np.prod(shape)).reshape(shape)
+ sharded_x = pmap(lambda x: x)(x)
+
+ num_threads = 10
+ futures = []
+ expected = []
+ with ThreadPoolExecutor(max_workers=num_threads) as executor:
+ for i in range(num_threads):
+ idx = i % shape[0]
+ # Mix together different kinds of indices
+ if i % 2 == 0:
+ idx = slice(idx, idx + 1)
+ futures.append(executor.submit(
+ lambda: [sharded_x[idx] for _ in range(10)][0]))
+ expected.append(x[idx])
+ actual = [f.result() for f in futures]
+ self.assertAllClose(actual, expected, check_dtypes=False)
+
+
class SpecToIndicesTest(jtu.JaxTestCase):
def testShardsPerAxis(self):
| ShardedDeviceArray access should be threadsafe
This used to be the case, but I broke it in https://github.com/google/jax/commit/07571ae4dd3fceee580aa49c4490f99ce7f6b6de here: https://github.com/google/jax/blob/master/jax/interpreters/pxla.py#L499 (self._npy_value may be accessed by another thread before it's populated). I'm planning to fix this today or Monday.
| 2020-04-23T20:49:21 |
|
google/jax | 2,812 | google__jax-2812 | [
"1017"
] | 9b6976bfd98906943c6e7a0d68a0e5999424e28c | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -293,8 +293,9 @@ def xla_computation(fun: Callable,
ROOT tuple.18 = (f32[], f32[], f32[]) tuple(all-reduce.7, all-reduce.12, all-reduce.17)
}
"""
- del static_argnums # Unused.
_check_callable(fun)
+ if isinstance(static_argnums, int):
+ static_argnums = (static_argnums,)
fun_name = getattr(fun, '__name__', 'unknown')
def make_axis_env(nreps):
@@ -311,6 +312,11 @@ def abstractify(x):
@wraps(fun)
def computation_maker(*args, **kwargs):
wrapped = lu.wrap_init(fun)
+ if static_argnums:
+ dyn_argnums = [i for i in range(len(args)) if i not in static_argnums]
+ wrapped, dyn_args = argnums_partial(wrapped, dyn_argnums, args)
+ else:
+ dyn_args = args
jax_args, in_tree = tree_flatten((args, kwargs))
jaxtree_fun, out_tree = flatten_fun(wrapped, in_tree)
avals = map(abstractify, jax_args)
@@ -1402,17 +1408,20 @@ def _vjp(fun: lu.WrappedFun, *primals, **kwargs):
return out_primal_py, vjp_py, tree_unflatten(aux_tree, aux)
-def make_jaxpr(fun: Callable) -> Callable[..., core.TypedJaxpr]:
+def make_jaxpr(fun: Callable,
+ static_argnums: Union[int, Iterable[int]] = ()
+ ) -> Callable[..., core.TypedJaxpr]:
"""Creates a function that produces its jaxpr given example args.
Args:
fun: The function whose ``jaxpr`` is to be computed. Its positional
arguments and return value should be arrays, scalars, or standard Python
containers (tuple/list/dict) thereof.
+ static_argnums: See the ``jax.jit`` docstring.
Returns:
- A wrapped version of ``fun`` that when applied to example arguments returns a
- ``TypedJaxpr`` representation of ``fun`` on those arguments.
+ A wrapped version of ``fun`` that when applied to example arguments returns
+ a ``TypedJaxpr`` representation of ``fun`` on those arguments.
A ``jaxpr`` is JAX's intermediate representation for program traces. The
``jaxpr`` language is based on the simply-typed first-order lambda calculus
@@ -1443,6 +1452,8 @@ def make_jaxpr(fun: Callable) -> Callable[..., core.TypedJaxpr]:
in [g] }
"""
_check_callable(fun)
+ if isinstance(static_argnums, int):
+ static_argnums = (static_argnums,)
def pv_like(x):
aval = xla.abstractify(x)
@@ -1451,6 +1462,11 @@ def pv_like(x):
@wraps(fun)
def jaxpr_maker(*args, **kwargs):
wrapped = lu.wrap_init(fun)
+ if static_argnums:
+ dyn_argnums = [i for i in range(len(args)) if i not in static_argnums]
+ wrapped, dyn_args = argnums_partial(wrapped, dyn_argnums, args)
+ else:
+ dyn_args = args
jax_args, in_tree = tree_flatten((args, kwargs))
jaxtree_fun, out_tree = flatten_fun(wrapped, in_tree)
in_pvals = map(pv_like, jax_args)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -879,6 +879,13 @@ def f():
out_shape, = xla_comp.GetReturnValueShape().tuple_shapes()
self.assertEqual(out_shape.dimensions(), (3, 4))
+ def test_xla_computation_static_argnums(self):
+ def f(x, y):
+ return x + y
+
+ xla_comp = api.xla_computation(f, static_argnums=(1,))(2, 3)
+ self.assertIn('constant(3)', xla_comp.GetHloText())
+
def test_jit_device(self):
device = xb.devices()[-1]
x = api.jit(lambda x: x, device=device)(3.)
@@ -1845,6 +1852,14 @@ def inner(x):
in (d,) }
""", str(jaxpr))
+ def test_make_jaxpr_static_argnums(self):
+ def f(x, y):
+ return x + y
+
+ jaxpr = api.make_jaxpr(f, static_argnums=(1,))(2, 3)
+ self.assertIn('3', str(jaxpr))
+
+
class LazyTest(jtu.JaxTestCase):
@contextmanager
| jax.xla_computation ignores its static_argnums argument
The implementation of jax.xla_computation completely ignores its static_argnums argument, which seems like an oversight.
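For illustration, the behaviour the new test above checks for once `static_argnums` is honored (a sketch mirroring that test; the exact HLO text is backend-dependent):
```python
import jax

def f(x, y):
  return x + y

xla_comp = jax.xla_computation(f, static_argnums=(1,))(2, 3)
# With y marked static, the value 3 should appear baked into the HLO as a constant:
print('constant(3)' in xla_comp.GetHloText())
```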
| 2020-04-23T23:49:57 |
|
google/jax | 2,834 | google__jax-2834 | [
"2833"
] | 77901e9fa71f5b23066c70132a983ae57f655b39 | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -989,7 +989,7 @@ def process_env_traces(post_processor: str, primitive: Primitive,
yield outs, tuple(todo) # Ensure the aux output is immutable
def _call_bind(processor: str, post_processor: str, primitive: Primitive,
- f: lu.WrappedFun, *args, **params):
+ f: lu.WrappedFun, *args, **params):
top_trace = find_top_trace(args)
level = trace_state.trace_stack.next_level(True) if top_trace is None else top_trace.level
params_tuple = tuple(params.items())
diff --git a/jax/interpreters/partial_eval.py b/jax/interpreters/partial_eval.py
--- a/jax/interpreters/partial_eval.py
+++ b/jax/interpreters/partial_eval.py
@@ -163,7 +163,8 @@ def default_process_primitive(self, primitive, tracers, params):
def process_call(self, call_primitive, f: lu.WrappedFun, tracers, params):
name = params.get('name', f.__name__)
- if self.master.trace_type is StagingJaxprTrace:
+ if (self.master.trace_type is StagingJaxprTrace
+ and call_primitive in staged_out_calls):
tracers = map(self.instantiate_const_abstracted, tracers)
else:
name = wrap_name(name, 'pe')
@@ -312,6 +313,7 @@ def _unmapped_aval(size, aval):
custom_partial_eval_rules: Dict[core.Primitive, Callable] = {}
call_partial_eval_rules: Dict[core.Primitive, Callable] = {}
+staged_out_calls: Set[core.Primitive] = set()
def partial_eval(f, trace, pvs: Sequence[Optional[AbstractValue]], instantiate=False):
diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -608,6 +608,7 @@ def _get_device(device, backend):
xla_call = partial(core.call_bind, xla_call_p)
xla_call_p.def_custom_bind(xla_call)
xla_call_p.def_impl(_xla_call_impl)
+pe.staged_out_calls.add(xla_call_p)
def _xla_call_translation_rule(c, axis_env,
in_nodes, name_stack, backend, name,
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1510,6 +1510,16 @@ def scanned_f(x, _):
jax.grad(scan_bug)(1.0) # doesn't crash
+ def test_remat_jit_static_argnum(self):
+ # https://github.com/google/jax/issues/2833
+ def f(a_bool, y):
+ if a_bool:
+ return y + 1
+ else:
+ return y
+
+ api.jit(api.remat(f, concrete=True), static_argnums=0)(True, 1) # no crash
+
def test_trivial_computations(self):
x = np.array([1, 2, 3])
y = api.jit(lambda x: x)(x)
| core.call_bind aggressively raises args to top trace
```
def f(a_bool, y):
  if a_bool:
    return y + 1
  else:
    return y
jax.jit(jax.remat(f), static_argnums=0)(True, 1)
```
Results in:
```
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected (in `bool`).
Use transformation parameters such as `static_argnums` for `jit` to avoid tracing input values.
See `https://jax.readthedocs.io/en/latest/faq.html#abstract-tracer-value-encountered-where-concrete-value-is-expected-error`.
Encountered value: Traced<ShapedArray(bool[], weak_type=True):JaxprTrace(level=-1/2)>
```
I think this arises from the full_raise occurring here when processing `remat_call_p`, which raises args to the JaxprTrace even though we told `jit` we don't want that!
https://github.com/google/jax/blob/77901e9fa71f5b23066c70132a983ae57f655b39/jax/core.py#L1001
This also applies to user-defined call primitives using `core.call_bind`, resulting in unnecessary workarounds like this one in Haiku:
https://github.com/deepmind/dm-haiku/blob/49b21f7192dfdb3dc0a49cc097c8d3b0ccabb107/haiku/_src/named_call.py#L101-L109
| I think this might be working as intended. That is, `remat` itself will raise the abstraction level. You can use `jax.remat(f, concrete=True)` if you want to handle Python control flow inside a `remat`, but when mixing that together with `jit` it's possible to do redundant FLOPs in that case, so we wanted to make it opt-in. [This comment](https://github.com/google/jax/pull/1749#issuecomment-558267584) on #1749 has some detail.
WDYT?
Hmm I just tried with `concrete=True` and it didn't work... let's see...
I'm confused because calling `JaxprTrace.full_raise` should neither produce an 'unknown' tracer (see [1](https://github.com/google/jax/blob/77901e9fa71f5b23066c70132a983ae57f655b39/jax/interpreters/partial_eval.py#L98) [2](https://github.com/google/jax/blob/77901e9fa71f5b23066c70132a983ae57f655b39/jax/interpreters/partial_eval.py#L109)) nor raise the abstraction level. An unknown is only introduced when we call `JaxprTrace.instantiate_const`, and then it's only raised past the Concrete level if we call `JaxprTrace.instantiate_const_abstracted`.
Oh... I think I have an idea...
It's because [we stage out as much as possible](https://github.com/google/jax/blob/77901e9fa71f5b23066c70132a983ae57f655b39/jax/interpreters/partial_eval.py#L167) for calls. We really only want that behavior for `xla_call` and not other call primitives... | 2020-04-25T01:03:54 |
google/jax | 2,885 | google__jax-2885 | [
"2881"
] | 52c69e88c58e3838a605eb952d9bbf3ad6195f89 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -5173,8 +5173,13 @@ def _abstractify(x):
return raise_to_shaped(core.get_aval(x))
+
def _check_user_dtype_supported(dtype, fun_name=None):
- if dtype is not None and onp.dtype(dtype) != dtypes.canonicalize_dtype(dtype):
+ onp_dtype = onp.dtype(dtype)
+ if onp_dtype.kind not in "biufc" and onp_dtype.type != dtypes.bfloat16:
+ msg = f"JAX only supports number and bool dtypes, got dtype {dtype}"
+ raise TypeError(msg)
+ if dtype is not None and onp_dtype != dtypes.canonicalize_dtype(dtype):
msg = ("Explicitly requested dtype {} {} is not available, "
"and will be truncated to dtype {}. To enable more dtypes, set the "
"jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell "
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1728,6 +1728,11 @@ def testArray(self, arg, ndmin, dtype):
self._CheckAgainstNumpy(onp_fun, jnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True)
+ def testArrayUnsupportedDtypeError(self):
+ with self.assertRaisesRegex(TypeError,
+ "JAX only supports number and bool dtypes.*"):
+ jnp.array(3, [('a','<i4'),('b','<i4')])
+
def testIssue121(self):
assert not onp.isscalar(jnp.array(3))
| np.array named columns and different column types not working
Classical numpy arrays allow specifying column names/identifiers and assigning different types to columns.
From the documentation:
> Data-type consisting of more than one element:
>
> ```
> >>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')])
> >>> x['a']
> array([1, 3])
> ```
The same documentation is given for jax's `np.array` but trying the above example yields a
```
TypeError: unhashable type: 'list'
```
for the `dtype` argument (raised in function `_check_user_dtype_supported`, line 5096, `jax/lax/lax.py`).
Is there a chance that this kind of array specifications could be supported? If not, could at least the documentation be clarified?
| JAX doesn’t support structured dtypes, but I agree that we should provide a better error message here.
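With the check added in this PR, the example from the issue fails with a clearer message (a sketch mirroring the new test above; wording per the patch):
```python
import jax.numpy as jnp

jnp.array(3, [('a', '<i4'), ('b', '<i4')])
# TypeError: JAX only supports number and bool dtypes, got dtype ...
```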
too bad, but thanks for the clarification! | 2020-04-29T17:02:12 |
google/jax | 2,903 | google__jax-2903 | [
"2899"
] | b39da1f842f1363b8b36052c0837407de0be9c2d | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2827,7 +2827,7 @@ def argmax(a, axis=None):
if axis is None:
a = ravel(a)
axis = 0
- return _argminmax(max, a, axis)
+ return _argminmax("argmax", max, a, axis)
_NANARG_DOC = """\
@@ -2850,7 +2850,7 @@ def argmin(a, axis=None):
if axis is None:
a = ravel(a)
axis = 0
- return _argminmax(min, a, axis)
+ return _argminmax("argmin", min, a, axis)
@_wraps(onp.nanargmin, lax_description=_NANARG_DOC.format("min"))
@@ -2864,7 +2864,9 @@ def nanargmin(a, axis=None):
# TODO(mattjj): redo this lowering with a call to variadic lax.reduce
-def _argminmax(op, a, axis):
+def _argminmax(name, op, a, axis):
+ if size(a) == 0:
+ raise ValueError("attempt to get {} of an empty sequence".format(name))
shape = [1] * a.ndim
shape[axis] = a.shape[axis]
idxs = lax.tie_in(a, arange(a.shape[axis])).reshape(shape)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -696,6 +696,16 @@ def jnp_fun(array_to_reduce):
raise
self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": rec.test_name.capitalize(),
+ "name": rec.name, "jnp_op": getattr(jnp, rec.name)}
+ for rec in JAX_ARGMINMAX_RECORDS))
+ def testArgMinMaxEmpty(self, name, jnp_op):
+ name = name[3:] if name.startswith("nan") else name
+ msg = "attempt to get {} of an empty sequence".format(name)
+ with self.assertRaises(ValueError, msg=msg):
+ jnp_op(onp.array([]))
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_{}_{}".format(
jtu.format_shape_dtype_string(lhs_shape, lhs_dtype),
| Argmax on empty array returns 2147483647
How to reproduce:
```
>>> jnp.argmax(jnp.array([]))
DeviceArray(2147483647, dtype=int32)
```
Confirmed to occur on CPU and TPU.
For reference, in numpy an exception is returned: "ValueError: attempt to get argmax of an empty sequence".
| 2020-04-30T15:38:28 |
|
google/jax | 2,907 | google__jax-2907 | [
"2905"
] | 3216f5ca4647a00a5a53663846a56dc0faf1fc01 | diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -16,7 +16,7 @@
from collections import defaultdict
import itertools as it
import operator as op
-from typing import Any, Callable, Dict, Sequence, Type
+from typing import Any, Callable, Dict, Sequence, Type, Optional
from absl import logging
import numpy as onp
@@ -920,16 +920,23 @@ def _device_put_device_array(x, device):
return _force(x).device_buffer
device_put_handlers[DeviceArray] = _device_put_device_array
-def _copy_device_array_to_device(x, device):
- if is_device_constant(x):
+def _copy_device_array_to_device(x: DeviceArray, device: Optional[xc.Device]):
+ if device is None:
+ # no copying to be done because there's no target specified
+ return x
+ elif is_device_constant(x):
+ # create a new DeviceArray with the same lazy expr, no copying
return DeviceArray(x.aval, device, x._lazy_expr, DeviceConstant(device))
elif xb.get_device_backend(device).platform == x.device_buffer.platform():
- if device is None or x.device_buffer.device() == device:
+ # source and target platforms are the same
+ if x.device_buffer.device() == device:
+ # no copying to be done because source equals target
return x
else:
+ # move the buffer with a device-to-device copy
moved_buf = x.device_buffer.copy_to_device(device)
else:
- # Buffers from different XLA backends are passed through the host.
+ # buffers from different XLA backends are passed through the host.
backend = xb.get_device_backend(device)
moved_buf = backend.buffer_from_pyval(x.device_buffer.to_py(), device)
return DeviceArray(x.aval, device, x._lazy_expr, moved_buf)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -323,9 +323,6 @@ def test_device_put_across_platforms(self):
x = api.device_put(val, device=cpu_device)
self.assertEqual(x.device_buffer.device(), cpu_device)
- y = api.device_put(x)
- self.assertEqual(y.device_buffer.device(), default_device)
-
def test_jit_on_all_devices(self):
# Verifies we can run the same computation on every device present, even
# if they are, for example, different models of GPU.
diff --git a/tests/multibackend_test.py b/tests/multibackend_test.py
--- a/tests/multibackend_test.py
+++ b/tests/multibackend_test.py
@@ -171,6 +171,24 @@ def my_sin(x): return np.sin(x)
result4 = api.jit(my_sin, backend="cpu")(2)
self.assertEqual(result4.device_buffer.device(), cpus[0])
+ @jtu.skip_on_devices("cpu") # test only makes sense on non-cpu backends
+ def test_indexing(self):
+ # https://github.com/google/jax/issues/2905
+ cpus = api.devices("cpu")
+
+ x = api.device_put(onp.ones(2), cpus[0])
+ y = x[0]
+ self.assertEqual(y.device_buffer.device(), cpus[0])
+
+ @jtu.skip_on_devices("cpu") # test only makes sense on non-cpu backends
+ def test_sum(self):
+ # https://github.com/google/jax/issues/2905
+ cpus = api.devices("cpu")
+
+ x = api.device_put(onp.ones(2), cpus[0])
+ y = x.sum()
+ self.assertEqual(y.device_buffer.device(), cpus[0])
+
if __name__ == "__main__":
absltest.main()
| Ops involving only CPU tensors run on device, not CPU
In the code below, running a JAX op that has CPU-backed inputs runs on device (which for my use case OOMs):
```
def CpuArray(numpy_array):
return jax.device_put(numpy_array, device=jax.devices(backend='cpu')[0])
data = CpuArray(np.ones([100, 4], dtype=np.float32))
slice_indexes = CpuArray(np.zeros([100], dtype=np.int32))
# Summing the data runs on TPU.
summed_data = jnp.sum(data[slice_indexes])
# So does slicing it.
sliced_data = data[100]
print("data device", data.device_buffer.device())
print("sliced indexes", slice_indexes.device_buffer.device())
print("sliced data", sliced_data.device_buffer.device())
print("summed data", summed_data.device_buffer.device())
```
Prints:
```
data device cpu:0
sliced indexes device cpu:0
Both output arrays are on TPU:
sliced data device TPU_0(host=0,(0,0,0,0))
summed data device TPU_0(host=0,(0,0,0,0))
```
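A hedged workaround sketch until this is fixed: pin the computation to the host with `jit(..., backend='cpu')` so the reduction runs on CPU and its output stays there.
```python
import jax
import jax.numpy as jnp
import numpy as np

cpu = jax.devices('cpu')[0]
data = jax.device_put(np.ones([100, 4], dtype=np.float32), cpu)

cpu_sum = jax.jit(jnp.sum, backend='cpu')    # force CPU execution
print(cpu_sum(data).device_buffer.device())  # cpu:0
```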
| This feels similar to #2883
I agree! I'm not sure yet if it's the same or not.
I verified that on master on a GPU machine this prints gpu:0:
```python
import jax
import numpy as onp
CPU = jax.devices('cpu')
x = jax.device_put(onp.ones(2), CPU[0])
y = x[0]
print(y.device_buffer.device())
```
Possibly also related to #2878.
I think it's because [`_rewriting_take` calls `asarray`](https://github.com/google/jax/blob/3216f5ca4647a00a5a53663846a56dc0faf1fc01/jax/numpy/lax_numpy.py#L3190), which is itself not respecting the device of its input. | 2020-04-30T20:51:03 |
google/jax | 2,927 | google__jax-2927 | [
"432"
] | a821e67d607dbcc530c5b57cd6175e20f7b07c12 | diff --git a/jaxlib/version.py b/jaxlib/version.py
--- a/jaxlib/version.py
+++ b/jaxlib/version.py
@@ -12,4 +12,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-__version__ = "0.1.45"
+__version__ = "0.1.46"
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -34,6 +34,7 @@
from . import core
from . import dtypes
from . import lax
+from . import lib
from .config import flags, bool_env
from .util import partial
from .tree_util import tree_multimap, tree_all, tree_map, tree_reduce
@@ -369,10 +370,12 @@ def test_method_wrapper(self, *args, **kwargs):
return test_method_wrapper
return skip
-# TODO(phawkins): bug https://github.com/google/jax/issues/432
+# TODO(phawkins): workaround for bug https://github.com/google/jax/issues/432
+# Delete this code after the minimum jaxlib version is 0.1.46 or greater.
skip_on_mac_linalg_bug = partial(
unittest.skipIf,
- sys.platform == "darwin" and scipy.version.version > "1.1.0",
+ (sys.platform == "darwin" and scipy.version.version > "1.1.0" and
+ lib.version < (0, 1, 46)),
"Test fails on Mac with new scipy (issue #432)")
| jax.scipy.linalg routines segfault on Mac OS X on scipy 1.2.1 or later but not scipy 1.1.0
I'm not exactly sure *why* this happens, being unfamiliar with the internal architecture, but on MacOS with Python 3.6.8, the following code segfaults if scipy 1.2.1 is installed (the version that comes by default when you `pip install jax jaxlib`):
```python
import jax.random as random
import jax.scipy.linalg as linalg
key = random.PRNGKey(42)
# For some reason, matrices smaller than (50, 50) or so do not trigger segfaults
X = random.normal(key, (500, 500))
A = X @ X.T # Drawn from standard Wishart distribution
linalg.cholesky(A)
print("Success!")
```
Output:
```
$ python -W ignore test.py
zsh: bus error python -W ignore test.py
```
If I roll back to Scipy 1.1.0, everything works:
```
$ python -W ignore test.py
Success!
```
This is a great project by the way--thanks for working on it!
Edit: after further digging, I found the following in the the Scipy 1.2 release notes:
> scipy.linalg.lapack now exposes the LAPACK routines using the Rectangular Full Packed storage (RFP) for upper triangular, lower triangular, symmetric, or Hermitian matrices; the upper trapezoidal fat matrix RZ decomposition routines are now available as well.
Perhaps this has something to do with it?
Even more edits: yet more digging has revealed scipy/scipy#9751, which hints that this might be caused by a specific (old) version of XCode. I will report back once XCode is upgraded.
| Thanks for reporting this, and for digging into it! (And for the kind words about JAX too!)
This smells like a `scipy` bug to me, but it's hard to be sure without tracking it down...
(I am increasingly leaning towards not using scipy's LAPACK kernels on CPU, because of issues like this one...)
Turns out upgrading XCode doesn't fix it. I agree that it seems like a `scipy` bug; perhaps it is because I'm still running MacOS High Sierra (at the request of department IT). In any case, pinning the dependency to Scipy 1.1.0 works for now, and deployment on Linux is no problem.
Since I'm the only affected user and the bug isn't showing up on the CI system, it might be reasonable to drop this until other people report the same problem.
One more data point: I tried building scipy from source at version 1.2.1, and the self-built version doesn't segfault. I'm using Mac OS Mojave and XCode 10.0 and I installed OpenBLAS from homebrew.
So I'm wondering if this is a problem with the PyPI-provided scipy packages.
Another data point:
Segfault with `jax.numpy.linalg.solve` and `scipy>1.1` from PyPI
The same happens for scipy 1.4.1 on macos and python 3.6.6 for the qr decomposition
for matrices where number of columns > 128
``` python
import jax.numpy as np
import jax
import numpy as onp
import jax.config as config
config.update("jax_enable_x64", True)
q, r = np.linalg.qr(onp.random.rand(2000, 128)) #works fine
q, r = np.linalg.qr(onp.random.rand(2000, 129)) #bus error: 10
```
update: the problem first seems to appear with scipy 1.2.0
I'm seeing a similar problem with Cholesky decomposition on Mac OS 10.15.4, Scipy 1.4.1 and Python 3.7.4 from Anaconda: it works for a 63 x 63 matrix but gives a bus error for 64 x 64.
I think I've figured out what's going wrong here, and why it's Mac OS specific.
The problem is that we run out of stack space and crash due to a stack overflow. Mac OS thread stacks default to 512KiB, whereas Linux defaults to 8MiB stacks. (Since these threads are part of a thread pool, you cannot work around this by changing `ulimit`, it requires code changes.)
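To make those numbers concrete, here is a purely illustrative Python snippet; it only affects threads you create yourself and does not touch XLA's internal thread pool, which is where the overflow actually happens.
```python
import threading

print(threading.stack_size())          # 0 means "use the platform default"
threading.stack_size(8 * 1024 * 1024)  # request Linux-sized (8 MiB) stacks for new threads
```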
I'm not quite sure what the best way to fix this is at the moment but I'll figure something out.
Thanks for all the hard work! Much appreciated | 2020-05-02T00:32:07 |
google/jax | 2,931 | google__jax-2931 | [
"2920"
] | 46ce80b03212dfff86624e341d8a2b59ac474482 | diff --git a/jax/experimental/ode.py b/jax/experimental/ode.py
--- a/jax/experimental/ode.py
+++ b/jax/experimental/ode.py
@@ -28,6 +28,7 @@
import jax
import jax.numpy as np
+from jax import core
from jax import lax
from jax import ops
from jax.util import safe_map, safe_zip
@@ -141,7 +142,9 @@ def odeint(func, y0, t, *args, rtol=1.4e-8, atol=1.4e-8, mxstep=np.inf):
y0: array or pytree of arrays representing the initial value for the state.
t: array of float times for evaluation, like `np.linspace(0., 10., 101)`,
in which the values must be strictly increasing.
- *args: tuple of additional arguments for `func`.
+ *args: tuple of additional arguments for `func`, which must be arrays
+ scalars, or (nested) standard Python containers (tuples, lists, dicts,
+ namedtuples, i.e. pytrees) of those types.
rtol: float, relative local error tolerance for solver (optional).
atol: float, absolute local error tolerance for solver (optional).
mxstep: int, maximum number of steps to take for each timepoint (optional).
@@ -151,6 +154,12 @@ def odeint(func, y0, t, *args, rtol=1.4e-8, atol=1.4e-8, mxstep=np.inf):
point in `t`, represented as an array (or pytree of arrays) with the same
shape/structure as `y0` except with a new leading axis of length `len(t)`.
"""
+ def _check_arg(arg):
+ if not isinstance(arg, core.Tracer) and not core.valid_jaxtype(arg):
+ msg = ("The contents of odeint *args must be arrays or scalars, but got "
+ "\n{}.")
+ raise TypeError(msg.format(arg))
+ tree_map(_check_arg, args)
return _odeint_wrapper(func, rtol, atol, mxstep, y0, t, *args)
@partial(jax.jit, static_argnums=(0, 1, 2, 3))
| stax.serial.apply_fun is not a valid JAX type inside odeint
Hi,
FWIW, I'm using a self-built jax and jaxlib following instructions from #2083.
```
#
# Name Version Build Channel
jax 0.1.64 <pip>
jaxlib 0.1.45 <pip>
```
I'm trying to get gradients through an ODE solver. First, I ran into `AssertionError` issue #2718 and I think I solved it by passing all the arguments directly into `odeint`. Then I followed instructions to solve another `AssertionError` issue #2531 by doing `vmap` of `grads` instead of `grads` of `vmap`. Now I'm getting the following error.
<details>
<summary>Full trace back.</summary>
<p>
```
----> 1 batch_grad(batch_y0, batch_t, batch_y,[1.3,1.8], [U1,U2], [U1_params,U2_params])
~/Code/jax/jax/api.py in batched_fun(*args)
805 _check_axis_sizes(in_tree, args_flat, in_axes_flat)
806 out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
--> 807 lambda: _flatten_axes(out_tree(), out_axes))
808 return tree_unflatten(out_tree(), out_flat)
809
~/Code/jax/jax/interpreters/batching.py in batch(fun, in_vals, in_dims, out_dim_dests)
32 # executes a batched version of `fun` following out_dim_dests
33 batched_fun = batch_fun(fun, in_dims, out_dim_dests)
---> 34 return batched_fun.call_wrapped(*in_vals)
35
36 @lu.transformation_with_aux
~/Code/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
~/Code/jax/jax/api.py in value_and_grad_f(*args, **kwargs)
436 f_partial, dyn_args = argnums_partial(f, argnums, args)
437 if not has_aux:
--> 438 ans, vjp_py = _vjp(f_partial, *dyn_args)
439 else:
440 ans, vjp_py, aux = _vjp(f_partial, *dyn_args, has_aux=True)
~/Code/jax/jax/api.py in _vjp(fun, *primals, **kwargs)
1437 if not has_aux:
1438 flat_fun, out_tree = flatten_fun_nokwargs(fun, in_tree)
-> 1439 out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
1440 out_tree = out_tree()
1441 else:
~/Code/jax/jax/interpreters/ad.py in vjp(traceable, primals, has_aux)
104 def vjp(traceable, primals, has_aux=False):
105 if not has_aux:
--> 106 out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
107 else:
108 out_primals, pvals, jaxpr, consts, aux = linearize(traceable, *primals, has_aux=True)
~/Code/jax/jax/interpreters/ad.py in linearize(traceable, *primals, **kwargs)
93 _, in_tree = tree_flatten(((primals, primals), {}))
94 jvpfun_flat, out_tree = flatten_fun(jvpfun, in_tree)
---> 95 jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals)
96 out_primals_pvals, out_tangents_pvals = tree_unflatten(out_tree(), out_pvals)
97 assert all(out_primal_pval.is_known() for out_primal_pval in out_primals_pvals)
~/Code/jax/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate, stage_out, bottom, trace_type)
435 with new_master(trace_type, bottom=bottom) as master:
436 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 437 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
438 assert not env
439 del master
~/Code/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
~/Code/jax/jax/api.py in f_jitted(*args, **kwargs)
152 flat_fun, out_tree = flatten_fun(f, in_tree)
153 out = xla.xla_call(flat_fun, *args_flat, device=device, backend=backend,
--> 154 name=flat_fun.__name__)
155 return tree_unflatten(out_tree(), out)
156
~/Code/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
1003 tracers = map(top_trace.full_raise, args)
1004 process = getattr(top_trace, processor)
-> 1005 outs = map(full_lower, process(primitive, f, tracers, params))
1006 return apply_todos(env_trace_todo(), outs)
1007
~/Code/jax/jax/interpreters/ad.py in process_call(self, call_primitive, f, tracers, params)
342 name = params.get('name', f.__name__)
343 params = dict(params, name=wrap_name(name, 'jvp'))
--> 344 result = call_primitive.bind(f_jvp, *primals, *nonzero_tangents, **params)
345 primal_out, tangent_out = tree_unflatten(out_tree_def(), result)
346 return [JVPTracer(self, p, t) for p, t in zip(primal_out, tangent_out)]
~/Code/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
1003 tracers = map(top_trace.full_raise, args)
1004 process = getattr(top_trace, processor)
-> 1005 outs = map(full_lower, process(primitive, f, tracers, params))
1006 return apply_todos(env_trace_todo(), outs)
1007
~/Code/jax/jax/interpreters/partial_eval.py in process_call(self, call_primitive, f, tracers, params)
175 in_pvs, in_consts = unzip2([t.pval for t in tracers])
176 fun, aux = partial_eval(f, self, in_pvs)
--> 177 out_flat = call_primitive.bind(fun, *in_consts, **params)
178 out_pvs, jaxpr, env = aux()
179 env_tracers = map(self.full_raise, env)
~/Code/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
1003 tracers = map(top_trace.full_raise, args)
1004 process = getattr(top_trace, processor)
-> 1005 outs = map(full_lower, process(primitive, f, tracers, params))
1006 return apply_todos(env_trace_todo(), outs)
1007
~/Code/jax/jax/interpreters/batching.py in process_call(self, call_primitive, f, tracers, params)
146 else:
147 f, dims_out = batch_subtrace(f, self.master, dims)
--> 148 vals_out = call_primitive.bind(f, *vals, **params)
149 return [BatchTracer(self, v, d) for v, d in zip(vals_out, dims_out())]
150
~/Code/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
999 if top_trace is None:
1000 with new_sublevel():
-> 1001 outs = primitive.impl(f, *args, **params)
1002 else:
1003 tracers = map(top_trace.full_raise, args)
~/Code/jax/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, *args)
460
461 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name):
--> 462 compiled_fun = _xla_callable(fun, device, backend, name, *map(arg_spec, args))
463 try:
464 return compiled_fun(*args)
~/Code/jax/jax/linear_util.py in memoized_fun(fun, *args)
219 fun.populate_stores(stores)
220 else:
--> 221 ans = call(fun, *args)
222 cache[key] = (ans, fun.stores)
223 return ans
~/Code/jax/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, *arg_specs)
477 pvals: Sequence[pe.PartialVal] = [pe.PartialVal.unknown(aval) for aval in abstract_args]
478 jaxpr, pvals, consts = pe.trace_to_jaxpr(
--> 479 fun, pvals, instantiate=False, stage_out=True, bottom=True)
480
481 _map(prefetch, it.chain(consts, jaxpr_literals(jaxpr)))
~/Code/jax/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate, stage_out, bottom, trace_type)
435 with new_master(trace_type, bottom=bottom) as master:
436 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 437 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
438 assert not env
439 del master
~/Code/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
<ipython-input-17-de50dc731d85> in loss(batch_y0, batch_t, batch_y, params, ufuncs, uparams)
1 @partial(jit, static_argnums=(4,))
2 def loss(batch_y0, batch_t, batch_y, params, ufuncs,uparams):
----> 3 pred_y = odeint(batch_y0,batch_t,params,ufuncs,uparams)
4 loss = np.mean(np.abs(pred_y-batch_y))
5 return loss
~/Code/jax/jax/experimental/ode.py in odeint(func, y0, t, rtol, atol, mxstep, *args)
152 shape/structure as `y0` except with a new leading axis of length `len(t)`.
153 """
--> 154 return _odeint_wrapper(func, rtol, atol, mxstep, y0, t, *args)
155
156 @partial(jax.jit, static_argnums=(0, 1, 2, 3))
~/Code/jax/jax/api.py in f_jitted(*args, **kwargs)
149 dyn_args = args
150 args_flat, in_tree = tree_flatten((dyn_args, kwargs))
--> 151 _check_args(args_flat)
152 flat_fun, out_tree = flatten_fun(f, in_tree)
153 out = xla.xla_call(flat_fun, *args_flat, device=device, backend=backend,
~/Code/jax/jax/api.py in _check_args(args)
1558 if not (isinstance(arg, core.Tracer) or _valid_jaxtype(arg)):
1559 raise TypeError("Argument '{}' of type {} is not a valid JAX type"
-> 1560 .format(arg, type(arg)))
1561
1562 def _valid_jaxtype(arg):
TypeError: Argument '<function serial.<locals>.apply_fun at 0x2b06c3d6f7a0>' of type <class 'function'> is not a valid JAX type
```
</details>
I'm passing two `stax.Serial` modules with three `Dense` layers each as an input to `odeint` to integrate the Lotka-Volterra ODEs. `ufuncs` and `uparams` contains apply functions and params of `stax.Serial` module.
```
def lv_UDE(y,t,params,ufuncs,uparams):
R, F = y
alpha, theta = params
U1, U2 = ufuncs
U1_params, U2_params = uparams
dRdt = alpha*R - U1(U1_params, y)
dFdt = -theta*F + U2(U2_params, y)
return np.array([dRdt,dFdt])
```
I'm trying to get gradients through an `odeint` w.r.t `uparams`. Is there a workaround to pass `stax.Serial` modules as an argument? Thanks in advance.
| Could you please share a full example of how you get this error? Ideally something that I could copy into a terminal and run.
Hi,
I just noticed that even the non vmapped version of a function with `stax.serial` as an input errors out with the same error message. Here's the full example. Thanks
```
import jax
import jax.numpy as np
import numpy as onp
from jax import random
from jax import grad, jit, vmap, value_and_grad
from jax.experimental.ode import odeint
from jax.experimental import stax
from functools import partial
def lv(y,t,params):
"""
original lotka-volterra equations
"""
R,F = y
alpha, beta, gamma, theta = params
dRdt = alpha*R - beta*R*F
dFdt = gamma*R*F - theta*F
return np.hstack([dRdt,dFdt])
t = np.linspace(0.,4.,num=1000)
y0 = np.array([0.44249296,4.6280594])
true_y = odeint(partial(lv,params=[1.3,0.9,0.5,1.8]),y0=y0,t=t) #training data generation
def lv_UDE(y,t,params,ufuncs,uparams):
"""
additional parameters include stax.Serial
modules and uparams associated with them
"""
R, F = y
alpha, theta = params
U1, U2 = ufuncs
U1_params, U2_params = uparams
dRdt = alpha*R - U1(U1_params, y)
dFdt = -theta*F + U2(U2_params, y)
return np.hstack([dRdt,dFdt])
#two modules of stax Serial
U1_init, U1 = stax.serial(stax.Dense(32),stax.Tanh,
stax.Dense(32), stax.Tanh,
stax.Dense(32),stax.Tanh,
stax.Dense(1))
U2_init, U2 = stax.serial(stax.Dense(32),stax.Tanh,
stax.Dense(32), stax.Tanh,
stax.Dense(32),stax.Tanh,
stax.Dense(1))
key, subkey = random.split(random.PRNGKey(0))
_,U1_params = U1_init(key,(2,)) #inputs of size 2
_,U2_params = U2_init(subkey,(2,))
key,subkey = random.split(subkey)
def get_batch():
"""
Get batches of inital conditions and
times along with true time history
"""
s = onp.random.choice(onp.arange(1000 - 20,
dtype=onp.int64), 20, replace=False)
batch_y0 = true_y[s] # (M, D)
batch_t = t[:20] # (T)
batch_y = np.stack([true_y[s + i] for i in range(20)]) # (T, M, D)
return batch_y0, batch_t, batch_y
def loss(batch_y0, batch_t, batch_y, params, ufuncs,uparams):
"""
Mean absolute loss
"""
pred_y = odeint(batch_y0,batch_t,params,ufuncs,uparams) # integrate using odeint
loss = np.mean(np.abs(pred_y-batch_y)) #calculate loss
return loss
grads = value_and_grad(loss,(5,)) #grads w.r.t uparams
batch_grad = vmap(grads,(0, None, None, None, None, None)) #vectorize over initial conditions (batch_y0)
grads(y0,t,true_y,[1.3,1.8], [U1,U2],
[U1_params,U2_params]) #non vmappped doesn't work
batch_grad(batch_y0, batch_t, batch_y,[1.3,1.8],
[U1,U2], [U1_params,U2_params]) #vmap version same error
```
Hey @skrsna , thanks for the question!
In your example, it seems the `lv_UDE` is never called. Is that intentional?
The underlying issue here is that `odeint` can't take function-valued arguments in `*args`; those must be arrays (or potentially-nested containers of arrays, like potentially-nested lists/tuples/dicts of arrays). Instead of passing `ufuncs` via the `*args` of `odeint`, maybe you can instead just write something like:
```python
def lv_UDE(ufuncs,y,t,params,uparams): # moved ufuncs to front
...
odeint(partial(lv_UDE, ufuncs), ...)
```
WDYT?
It's possible we could support passing function-valued arguments in `*args`, but I'm not sure it'd be worth the extra complexity. We could at least raise a better error...
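For completeness, a fuller hedged sketch of that closure-based workaround (reusing the names from the example above; the exact reordering is illustrative): only arrays and pytrees of arrays flow through `odeint`'s `*args`, while the `stax` apply functions are bound with `functools.partial`.
```python
from functools import partial
import jax.numpy as jnp
from jax.experimental.ode import odeint

def lv_UDE(ufuncs, y, t, params, uparams):  # callables moved to the front
  R, F = y
  alpha, theta = params
  U1, U2 = ufuncs
  U1_params, U2_params = uparams
  dRdt = alpha * R - U1(U1_params, y)
  dFdt = -theta * F + U2(U2_params, y)
  return jnp.hstack([dRdt, dFdt])

# Only arrays (and pytrees of arrays) go through *args; the callables are closed over:
# pred_y = odeint(partial(lv_UDE, (U1, U2)), y0, t,
#                 (1.3, 1.8), (U1_params, U2_params))
```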
Hi @mattjj , thanks for the super fast response. My bad I forgot to add `lv_UDE` while refactoring the code to make it look nice. I'll try your solution and update the issue with the workaround. Thanks again. | 2020-05-02T16:08:23 |
|
google/jax | 2,966 | google__jax-2966 | [
"2889"
] | dc234b6f11b25237fea1eb9c851add83812fc5f8 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -390,7 +390,7 @@ def fmax(x1, x2):
return where((x1 > x2) | isnan(x2), x1, x2)
@_wraps(onp.finfo)
-def finfo(dtype):
+def finfo(dtype):
return dtypes.finfo(dtype)
@_wraps(onp.issubdtype)
@@ -724,7 +724,7 @@ def _conv(x, y, mode, op, precision):
if ndim(x) != 1 or ndim(y) != 1:
raise ValueError(f"{op}() only support 1-dimensional inputs.")
x, y = _promote_dtypes_inexact(x, y)
-
+
out_order = slice(None)
if len(x) < len(y):
x, y = y, x
@@ -1153,6 +1153,25 @@ def ravel(a, order="C"):
return reshape(a, (size(a),), order)
+_UNRAVEL_INDEX_DOC = """\
+Unlike numpy's implementation of unravel_index, negative indices are accepted
+and out-of-bounds indices are clipped.
+"""
+
+@_wraps(onp.unravel_index, lax_description=_UNRAVEL_INDEX_DOC)
+def unravel_index(indices, shape):
+ indices = asarray(indices)
+ sizes = pad(shape, (0, 1), constant_values=1)
+ cumulative_sizes = cumprod(sizes[::-1])[::-1]
+ total_size = cumulative_sizes[0]
+ # Clip so raveling and unraveling an oob index will not change the behavior
+ clipped_indices = clip(indices, -total_size, total_size - 1)
+ # Add enough trailing dims to avoid conflict with flat_index
+ cumulative_sizes = cumulative_sizes.reshape([-1] + [1] * indices.ndim)
+ idx = clipped_indices % cumulative_sizes[:-1] // cumulative_sizes[1:]
+ return tuple(idx)
+
+
@_wraps(onp.squeeze)
def squeeze(a, axis=None):
shape_a = shape(a)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1992,6 +1992,25 @@ def testRavel(self):
args_maker = lambda: [rng.randn(3, 4).astype("float32")]
self._CompileAndCheck(lambda x: x.ravel(), args_maker, check_dtypes=True)
+ @parameterized.parameters(
+ (0, (2, 1, 3)),
+ (5, (2, 1, 3)),
+ (0, ()),
+ ([0, 1, 2], (2, 2)),
+ ([[[0, 1], [2, 3]]], (2, 2)))
+ def testUnravelIndex(self, flat_index, shape):
+ self._CheckAgainstNumpy(
+ onp.unravel_index,
+ jnp.unravel_index,
+ lambda: (flat_index, shape),
+ check_dtypes=True
+ )
+
+ def testUnravelIndexOOB(self):
+ self.assertEqual(jnp.unravel_index(2, (2,)), (1,))
+ self.assertEqual(jnp.unravel_index(-2, (2, 1, 3,)), (1, 0, 1))
+ self.assertEqual(jnp.unravel_index(-3, (2,)), (0,))
+
def testAstype(self):
rng = onp.random.RandomState(0)
args_maker = lambda: [rng.randn(3, 4).astype("float32")]
| Add np.unravel_index
This is what I've been using. If it looks reasonable, will see about putting together a PR.
```python
def unravel_index(flat_index, shape):
"""
To support abstract `flat_index` there is no bounds checking. For
overflow some dimensions of the resulting index will be out of bounds.
For negative `flat_index` the resulting index is valid, although the first
dimension will be negative.
"""
if len(shape) == 0:
return ()
size_of_next_dimension = np.pad(shape[1:], (0, 1), constant_values=1)
strides = np.cumprod(size_of_next_dimension[::-1], dtype='int32')[::-1]
_, idx = jax.lax.scan(
lambda remaining, stride: (
remaining % stride,
remaining // stride
),
flat_index,
strides
)
return tuple(idx)
```
Haven't looked at jax testing yet, would be helpful to know whether / how to incorporate these...
```python
def test_unravel_index():
assert unravel_index(0, (2, 3)) == (0, 0)
assert unravel_index(5, (2, 1, 3)) == (1, 0, 2)
assert unravel_index(0, ()) == ()
assert unravel_index(3, (2,)) == (3,)
assert unravel_index(-1, (2, 1, 3)) == (-1, 0, 2)
```
| While it's probably correct as is, I think it would be preferable to avoid the `scan`, along the following lines:
```
In [72]: np.unravel_index(692, [4,6,8,9])
Out[72]: (1, 3, 4, 8)
In [73]: index = 692
In [74]: shape = [4, 6, 8, 9]
In [75]: cp = np.cumprod(np.pad(shape, [(0, 1)], constant_values=1)[::-1])[::-1]
In [76]: index % cp[:-1] // cp[1:]
Out[76]: array([1, 3, 4, 8])
```
In general vectorizable operations will be faster than `scan`.
Want to send a PR? Tests would go in `lax_numpy_test.py`.
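A hedged translation of that recipe into `jax.numpy` (the function name is illustrative), assuming a scalar flat index and a static shape:
```python
import jax.numpy as jnp

def unravel_index_sketch(index, shape):
  sizes = jnp.pad(jnp.array(shape), (0, 1), constant_values=1)
  strides = jnp.cumprod(sizes[::-1])[::-1]
  return tuple(index % strides[:-1] // strides[1:])

unravel_index_sketch(692, (4, 6, 8, 9))  # -> (1, 3, 4, 8)
```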
Thanks @hawkinsp
Any preference on overflow behavior?
1. `np.unravel_index(3, (2,)) == (3,)`
2. `np.unravel_index(3, (2,)) == (2,)`
3. `np.unravel_index(3, (2,)) == (1,)`
Both (1) and (2) seem consistent with "JAX does not raise an error and instead returns the last value in the array". And (1) is FWIW what I expected to see when I first tried `jax.numpy.unravel_index`
Hi @mattwescott
Are you still looking into this? I could use this feature as well.
@rdaems thanks for the motivation, will put together a PR shortly | 2020-05-05T15:43:13 |
google/jax | 3,003 | google__jax-3003 | [
"2595"
] | 7b19302db0806cad6cf903eace9f50ab1617a1b5 | diff --git a/jax/numpy/__init__.py b/jax/numpy/__init__.py
--- a/jax/numpy/__init__.py
+++ b/jax/numpy/__init__.py
@@ -28,10 +28,10 @@
complex128, complex64, complex_, complexfloating, concatenate, conj,
conjugate, convolve, copysign, corrcoef, correlate, cos, cosh,
count_nonzero, cov, cross, csingle, cumprod, cumproduct, cumsum, deg2rad,
- degrees, diag, diag_indices, diagonal, diff, divide, divmod, dot, double,
- dsplit, dstack, dtype, e, ediff1d, einsum, einsum_path, empty, empty_like,
- equal, euler_gamma, exp, exp2, expand_dims, expm1, eye, fabs, finfo, fix,
- flexible, flip, fliplr, flipud, float16, float32, float64, float_,
+ degrees, diag, diag_indices, diagonal, diff, digitize, divide, divmod, dot,
+ double, dsplit, dstack, dtype, e, ediff1d, einsum, einsum_path, empty,
+ empty_like, equal, euler_gamma, exp, exp2, expand_dims, expm1, eye, fabs,
+ finfo, fix, flexible, flip, fliplr, flipud, float16, float32, float64, float_,
float_power, floating, floor, floor_divide, fmax, fmin, fmod, frexp, full,
full_like, function, gcd, geomspace, gradient, greater, greater_equal,
hamming, hanning, heaviside, hsplit, hstack, hypot, identity, iinfo, imag,
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -3823,6 +3823,18 @@ def searchsorted(a, v, side='left', sorter=None):
return _searchsorted(a, v, side)
+@_wraps(onp.digitize)
+def digitize(x, bins, right=False):
+ if len(bins) == 0:
+ return zeros(x, dtype=int32)
+ side = 'right' if not right else 'left'
+ return where(
+ bins[-1] >= bins[0],
+ searchsorted(bins, x, side=side),
+ len(bins) - searchsorted(bins[::-1], x, side=side)
+ )
+
+
@_wraps(onp.percentile)
def percentile(a, q, axis=None, out=None, overwrite_input=False,
interpolation="linear", keepdims=False):
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1431,6 +1431,28 @@ def testSearchsorted(self, ashape, vshape, side, dtype, rng_factory):
self._CheckAgainstNumpy(onp_fun, jnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_x={}_bins={}_right={}_reverse={}".format(
+ jtu.format_shape_dtype_string(xshape, dtype),
+ jtu.format_shape_dtype_string(binshape, dtype),
+ right, reverse), "xshape": xshape, "binshape": binshape,
+ "right": right, "reverse": reverse, "dtype": dtype, "rng_factory": rng_factory}
+ for xshape in [(20,), (5, 4)]
+ for binshape in [(1,), (5,)]
+ for right in [True, False]
+ for reverse in [True, False]
+ for dtype in default_dtypes
+ for rng_factory in [jtu.rand_default]
+ ))
+ def testDigitize(self, xshape, binshape, right, reverse, dtype, rng_factory):
+ order = jax.ops.index[::-1] if reverse else jax.ops.index[:]
+ rng = rng_factory(self.rng())
+ args_maker = lambda: [rng(xshape, dtype), jnp.sort(rng(binshape, dtype))[order]]
+ onp_fun = lambda x, bins: onp.digitize(x, bins, right=right)
+ jnp_fun = lambda x, bins: jnp.digitize(x, bins, right=right)
+ self._CheckAgainstNumpy(onp_fun, jnp_fun, args_maker, check_dtypes=True)
+ self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_axis={}".format(
jtu.format_test_name_suffix("", [shape] * len(dtypes), dtypes), axis),
| Implement stratified resampling np.digitize in XLA
Hi JAX team,
I'm interested in implementing efficient stratified resampling in JAX. This is a technique for variance reduction in sequential monte carlo algorithms (see section 2.2.1 in Naesseth, Lindsten, and Schön "Elements of Sequential Monte Carlo").
The algorithm provided looks like this
```python
# w.shape = (N, )
# x.shape = (N, )
# assert np.sum(w) == 1
def stratified_resampling(key, w, x):
    N = w.shape[0]
    u = (np.arange(N) + jax.random.uniform(key, (N,))) / N
    bins = np.cumsum(w)
    return x[np.digitize(u, bins)]
```
although I'm also considering a version that doesn't actually do the shuffling in the last step (saving it for a "decoding" process later on, which doesn't particularly need to be fast).
The immediate issue is that `np.digitize` is not implemented in JAX. As it were, I don't actually need the gradient for it, but as it's in the hot path and defeats the tracer it's causing some real pain. Even more ideally, I'd like to have something like the interface to `jax.random.categorical` where I can just use unnormalized logits.
How can I help implement at least tracing through `np.digitize`? Alternatively, is there a clever way of doing stratified resampling without using `np.digitize`?
Thanks
| @tel I had the same issue. `digitize` relies on `searchsorted`. The latter is not implemented in JAX either, but you can sort of hack it if you don't mind a few extra allocations and sorts, see https://github.com/google/jax/issues/2080#issuecomment-616234407.
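One simple way to emulate it in the meantime (not necessarily the approach in the linked comment) is a broadcasted comparison, which matches `np.digitize` for a 1D `x` and increasing bins, at the cost of an (N, B) intermediate:
```python
import jax.numpy as jnp

def digitize_sketch(x, bins):
  # For increasing bins, np.digitize(x, bins) is the count of bin edges <= x.
  return jnp.sum(bins[None, :] <= x[:, None], axis=-1)
```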
@jakevdp interested? | 2020-05-07T22:53:57 |
google/jax | 3,016 | google__jax-3016 | [
"3014"
] | f60184e12e279c4602c77f91be19a0c6e1eb7083 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2160,6 +2160,7 @@ def linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None,
dtype = dtype or dt
bounds_shape = list(lax.broadcast_shapes(shape(start), shape(stop)))
broadcast_start = broadcast_to(start, bounds_shape)
+ broadcast_stop = broadcast_to(stop, bounds_shape)
axis = len(bounds_shape) + axis + 1 if axis < 0 else axis
bounds_shape.insert(axis, 1)
iota_shape = [1,] * len(bounds_shape)
@@ -2167,9 +2168,18 @@ def linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None,
div = (num - 1) if endpoint else num
if num > 1:
delta = lax.convert_element_type(stop - start, dt) / div
- out = (reshape(broadcast_start, bounds_shape) +
- reshape(lax.iota(dt, num), iota_shape) *
- reshape(delta, bounds_shape))
+ if issubdtype(dtype, integer):
+ # This is similar to how numpy computes linspace, but it
+ # can fail to recover the endpoints in float32 arithmetic.
+ out = (reshape(broadcast_start, bounds_shape) +
+ reshape(lax.iota(dt, num), iota_shape) *
+ reshape(delta, bounds_shape))
+ else:
+ # This approach recovers the endpoints with float32 arithmetic,
+ # but can lead to rounding errors for integer outputs.
+ step = reshape(lax.iota(dt, num), iota_shape) / div
+ out = (reshape(broadcast_start, bounds_shape) * (1 - step) +
+ reshape(broadcast_stop, bounds_shape) * step)
elif num == 1:
delta = nan if endpoint else lax.convert_element_type(stop - start, dt)
out = reshape(broadcast_start, bounds_shape)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2912,6 +2912,20 @@ def testLinspace(self, start_shape, stop_shape, num, endpoint,
self._CompileAndCheck(jnp_op, args_maker,
check_dtypes=False, atol=tol, rtol=tol)
+ @parameterized.named_parameters(
+ jtu.cases_from_list(
+ {"testcase_name": "_dtype={}".format(dtype),
+ "dtype": dtype,
+ "rng_factory": rng_factory}
+ for dtype in number_dtypes
+ for rng_factory in [jtu.rand_default]))
+ def testLinspaceEndpoints(self, dtype, rng_factory):
+ """Regression test for Issue #3014."""
+ rng = rng_factory(self.rng())
+ endpoints = rng((2,), dtype)
+ out = jnp.linspace(*endpoints, 10, dtype=dtype)
+ self.assertAllClose(out[[0, -1]], endpoints, check_dtypes=True, rtol=0, atol=0)
+
@parameterized.named_parameters(
jtu.cases_from_list(
{"testcase_name": ("_start_shape={}_stop_shape={}_num={}_endpoint={}"
| jax.numpy.linspace() does not faithfully recover endpoints
Example:
```python
>>> import numpy as np
>>> import jax.numpy as jnp
>>> a, b = -2.3328583, 2.8459014
>>> np.linspace(a, b, 10)[-1] == b
True
>>> jnp.linspace(a, b, 10)[-1] == b
DeviceArray(False, dtype=bool)
```
| Looks to be due to float32 roundoff error:
```
>>> b - a, float32(b) - float32(a)
(5.1787598, 5.1787596)
```
...and numpy side-steps this by doing all computation in 64-bit, even if input and output dtypes are 32-bit: https://github.com/numpy/numpy/blob/fc1f196584ff6dd530982febba2322679317c632/numpy/core/function_base.py#L120-L121
I think we could address this by factoring out operations to prevent loss of precision; e.g.
```python
def linspace_bad(a, b, N):
a, b = np.float32(a), np.float32(b)
return a + (b - a) * np.arange(N, dtype=np.float32) / (N - 1)
def linspace_good(a, b, N):
a, b = np.float32(a), np.float32(b)
step = np.arange(N, dtype=np.float32) / (N - 1)
return a * (1 - step) + b * step
print(linspace_bad(-2.3328583, 2.8459015, 2))
print(linspace_good(-2.3328583, 2.8459015, 2))
```
```
[-2.3328583 2.8459013]
[-2.3328583 2.8459015]
``` | 2020-05-08T21:37:05 |
google/jax | 3,018 | google__jax-3018 | [
"3007"
] | f60184e12e279c4602c77f91be19a0c6e1eb7083 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -138,9 +138,9 @@ def f_jitted(*args, **kwargs):
if _jit_is_disabled():
return fun(*args, **kwargs)
if static_argnums and max(static_argnums) >= len(args):
- msg = ("Jitted function has static_argnums={} but was called with only {}"
+ msg = ("jitted function has static_argnums={} but was called with only {}"
" positional arguments.")
- raise TypeError(msg.format(static_argnums, len(args)))
+ raise ValueError(msg.format(static_argnums, len(args)))
f = lu.wrap_init(fun)
if static_argnums:
dyn_argnums = [i for i in range(len(args)) if i not in static_argnums]
@@ -771,21 +771,23 @@ def batched_fun(*args):
return batched_fun
-def _get_axis_size(i:int, shape: Tuple[int, ...], axis: int):
+def _get_axis_size(name: str, i:int, shape: Tuple[int, ...], axis: int):
try:
return shape[axis]
except (IndexError, TypeError) as e:
- raise ValueError(f"vmap got arg {i} of rank {len(shape)} but axis to be mapped {axis}") from e
+ raise ValueError(f"{name} got arg {i} of rank {len(shape)} "
+ f"but axis to be mapped {axis}") from e
def _mapped_axis_size(tree, vals, dims, name):
- mapped_axis_sizes = {_get_axis_size(i, onp.shape(x), d) for i, (x, d) in enumerate(zip(vals, dims))
- if d is not None}
+ mapped_axis_sizes = {_get_axis_size(name, i, onp.shape(x), d)
+ for i, (x, d) in enumerate(zip(vals, dims))
+ if d is not None}
try:
size, = mapped_axis_sizes
return size
except ValueError as e:
if not mapped_axis_sizes:
- raise ValueError("{} must have at least one non-None in_axes".format(name)) from e
+ raise ValueError(f"{name} must have at least one non-None value in in_axes") from e
msg = "{} got inconsistent sizes for array axes to be mapped:\n".format(name) + "{}"
# we switch the error message based on whether args is a tuple of arrays,
# in which case we can produce an error message based on argument indices,
@@ -1033,7 +1035,14 @@ def pmap(fun: Callable, axis_name: Optional[AxisName] = None, *, in_axes=0,
def f_pmapped(*args, **kwargs):
f = lu.wrap_init(fun)
if static_broadcasted_argnums:
- dyn_argnums = [i for i in range(len(args)) if i not in static_broadcasted_argnums]
+ if max(static_broadcasted_argnums) >= len(args):
+ msg = ("pmapped function has static_broadcasted_argnums={} but was "
+ "called with only {} positional argument{}. All static "
+ "broadcasted arguments must be passed positionally.")
+ raise ValueError(msg.format(static_broadcasted_argnums, len(args),
+ "s" if len(args) > 1 else ""))
+ dyn_argnums = [i for i in range(len(args))
+ if i not in static_broadcasted_argnums]
f, dyn_args = argnums_partial(f, dyn_argnums, args)
if isinstance(in_axes, tuple):
dyn_in_axes = tuple(in_axes[i] for i in dyn_argnums)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1080,7 +1080,7 @@ def h(a, b):
api.vmap(lambda x: x, in_axes=(jnp.array([1., 2.]),))(jnp.array([1., 2.]))
with self.assertRaisesRegex(
- ValueError, "vmap must have at least one non-None in_axes"):
+ ValueError, "vmap must have at least one non-None value in in_axes"):
# If the output is mapped, there must be a non-None in_axes
api.vmap(lambda x: x, in_axes=None)(jnp.array([1., 2.]))
@@ -1098,7 +1098,6 @@ def h(a, b):
# If the output is mapped, then there must be some out_axes specified
api.vmap(lambda x: x, out_axes=None)(jnp.array([1., 2.]))
-
def test_vmap_structured_in_axes(self):
A, B, C, D = 2, 3, 4, 5
@@ -1655,6 +1654,18 @@ def func1(x):
re.DOTALL)):
api.jit(func1)(2.)
+ def test_pmap_static_kwarg_error_message(self):
+ # https://github.com/google/jax/issues/3007
+ def f(a, b):
+ return a + b
+
+ g = jax.pmap(f, static_broadcasted_argnums=(1,))
+
+ msg = (r"pmapped function has static_broadcasted_argnums=\(1,\) but was "
+ r"called with only 1 positional argument. All static broadcasted "
+ r"arguments must be passed positionally.")
+ with self.assertRaisesRegex(ValueError, msg):
+ g(jnp.ones((1, 1)), b=1)
class JaxprTest(jtu.JaxTestCase):
| pmap static_broadcasted_argnums does not respect keyword arguments.
Tested on internal TPU donut.
This code
```python
import jax
import jax.numpy as jnp

def f(a, b):
  return a + b
g = jax.pmap(f, static_broadcasted_argnums=(1,))
g(jnp.ones((8, 1)), b=1)
```
Crashes with
```
5 return a + b
6 g = jax.pmap(f, static_broadcasted_argnums=(1,))
----> 7 g(jnp.ones((8, 1)), b=1)
3 frames
google3/third_party/py/jax/api.py in _get_axis_size(i, shape, axis)
775 return shape[axis]
776 except (IndexError, TypeError) as e:
--> 777 raise ValueError(f"vmap got arg {i} of rank {len(shape)} but axis to be mapped {axis}") from e
778
779 def _mapped_axis_size(tree, vals, dims, name):
ValueError: vmap got arg 1 of rank 0 but axis to be mapped 0
```
However, changing the last line to
```python
g(jnp.ones((8, 1)), 1) # No keyword args!
```
works as expected.
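Until that is improved, a hedged workaround sketch: pass the static argument positionally, or bake it in with `functools.partial` so `pmap` never sees a keyword argument.
```python
from functools import partial
import jax
import jax.numpy as jnp

def f(a, b):
  return a + b

g = jax.pmap(partial(f, b=1))  # b is closed over; no static_broadcasted_argnums needed
# g(jnp.ones((jax.device_count(), 1)))
```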
| Thanks for the report! Just checking: how does this compare to how `jit` works with `static_argnums` together with keyword arguments? (I'm not sure if this is a bug or just a bad error message, and perhaps comparing to `jit` can determine that.)
I just checked against `jit`, and indeed I think I'd currently consider this an issue about better errors, though we could upgrade both `jit` and `pmap` to try harder to reconcile arguments passed via keywords to parameter positions. | 2020-05-08T23:34:34 |
google/jax | 3,048 | google__jax-3048 | [
"3043"
] | 16cf84514862b73a191a02eb551028eb2b16c700 | diff --git a/jax/random.py b/jax/random.py
--- a/jax/random.py
+++ b/jax/random.py
@@ -1227,8 +1227,21 @@ def logistic(key, shape=(), dtype=onp.float64):
@partial(jit, static_argnums=(1, 2))
def _logistic(key, shape, dtype):
+ # Mathematically, we can compute the distribution by generating uniformly-distributed
+ # numbers x in the open interval (a, b) and computing:
+ # z = log[ (x - a) / (b - x))
+ # It's important to avoid x=a or x=b, which lead to infinite values for z.
+ # The uniform() function generates pseudorandom floating point numbers x in the
+ # semi-closed interval [0, 1), so if used directly with (a,b)=(0,1), it will
+ # lead to infinite output in a small number of cases (as many as 1 in 2^23 for float32).
+ #
+ # Instead, we let (a, b) = (-ε, 1) where ε is the smallest step between floating point
+ # values: then numbers in the interval (-ε, 1) are approximated by standard uniformly
+ # drawn numbers in [0, 1).
_check_shape("logistic", shape)
- return logit(uniform(key, shape, dtype))
+ x = uniform(key, shape, dtype)
+ eps = np.finfo(dtype).eps
+ return lax.log(lax.div(lax.add(lax._const(x, eps), x), lax.sub(lax._const(x, 1), x)))
def pareto(key, b, shape=None, dtype=onp.float64):
| random.logistic returns -inf
If I call jax.random.logistic with key array([2308802225, 3876068305], dtype=uint32) and shape=(10000, 12), the sampled array contains -inf. I don't think this behavior is correct, or at least it is not expected, and it causes downstream code that assumes finite floats to crash.
I can work around this, but I think it's worth fixing. Thanks!
| This is a consequence of using a 32-bit implementation of the standard logit generating function: x is drawn from a uniform distribution between 0 and 1, then transformed according to `y = log(x / (1 - x))`. For real numbers there is an infinitesimally small chance of `x=0`, which maps to `y=-inf`; in 32-bit arithmetic the chance is approximately 1 in 2^30.
I'll think about how to best address this.
Wait, it's even worse than that, because ``uniform()`` only randomizes the mantissa: this means the chance of getting -inf is about 1 in 2^23; that's confirmed empirically:
```
>>> (uniform(PRNGKey(0), (100000000,)) == 0).sum()
DeviceArray(19, dtype=int32)
```
I think the best approach would be to modify the generating algorithm to use 32-bit unsigned integers rather than 32-bit uniform floats; then the chances of hitting this would be one in 4 billion, rather than 1 in 8 million as it is currently.
Could you avoid the problem by drawing from the range `(0, 1)` instead of `[0, 1)` ?
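A hedged illustration of that suggestion: `uniform` already accepts a `minval`, so starting the interval just above zero keeps the logit finite (this avoids `x == 0` but does not increase the number of distinct samples).
```python
import jax.numpy as jnp
from jax import random

key = random.PRNGKey(0)
tiny = jnp.finfo(jnp.float32).tiny                 # smallest positive normal float32
x = random.uniform(key, (10000, 12), minval=tiny)  # x in [tiny, 1)
z = jnp.log(x / (1 - x))                           # finite everywhere
```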
Yes, that would fix the -inf issue. But it still might be preferable to use an approach that allows the result to have a cardinality greater than 2^23 | 2020-05-11T23:04:51 |
|
google/jax | 3,061 | google__jax-3061 | [
"3060"
] | 28bc4b759ee113815bfacb99546e58a91891b3f2 | diff --git a/jax/experimental/optimizers.py b/jax/experimental/optimizers.py
--- a/jax/experimental/optimizers.py
+++ b/jax/experimental/optimizers.py
@@ -499,7 +499,7 @@ def piecewise_constant(boundaries, values):
if not boundaries.ndim == values.ndim == 1:
raise ValueError("boundaries and values must be sequences")
if not boundaries.shape[0] == values.shape[0] - 1:
- raise ValueError("boundaries length must be one longer than values length")
+ raise ValueError("boundaries length must be one shorter than values length")
def schedule(i):
return values[jnp.sum(i > boundaries)]
| minor problem in error message for jax.experimental.optimizers.piecewise_constant
Hi,
I think there is a small problem with the error message for jax.experimental.optimizers.piecewise_constant.
```python
def piecewise_constant(boundaries, values):
boundaries = jnp.array(boundaries)
values = jnp.array(values)
if not boundaries.ndim == values.ndim == 1:
raise ValueError("boundaries and values must be sequences")
if not boundaries.shape[0] == values.shape[0] - 1:
raise ValueError("boundaries length must be one longer than values length")
```
I think the last line should read "boundaries length must be one less than values length" or something along those lines.
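A quick usage sketch that makes the intended relationship concrete: with two boundaries you supply three values, one per segment.
```python
from jax.experimental import optimizers

schedule = optimizers.piecewise_constant(boundaries=[100, 200],
                                         values=[1e-2, 1e-3, 1e-4])
schedule(50)   # 1e-2
schedule(150)  # 1e-3
schedule(250)  # 1e-4
```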
Cheers!
| 2020-05-12T14:22:29 |
||
google/jax | 3,082 | google__jax-3082 | [
"3070"
] | 91d1e0ddbd06360a5295a0e8a68ba055c57009e9 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -3790,7 +3790,8 @@ def quantile(a, q, axis=None, out=None, overwrite_input=False,
@partial(jit, static_argnums=(2, 3, 4))
def _quantile(a, q, axis, interpolation, keepdims):
- a = asarray(a)
+ a = asarray(a, dtype=promote_types(_dtype(a), float_))
+ q = asarray(q, dtype=promote_types(_dtype(q), float_))
if axis is None:
a = ravel(a)
axis = 0
@@ -3803,15 +3804,6 @@ def _quantile(a, q, axis, interpolation, keepdims):
if q_ndim > 1:
raise ValueError("q must be have rank <= 1, got shape {}".format(shape(q)))
- q = asarray(q)
-
- if not issubdtype(a.dtype, floating) or not issubdtype(q.dtype, floating):
- msg = "q and a arguments to quantile must be of float type, got {} and {}"
- raise TypeError(msg.format(a.dtype, q.dtype))
-
- # Promote q to at least float32 for precise interpolation.
- q = lax.convert_element_type(q, promote_types(q.dtype, float32))
-
a_shape = shape(a)
a = lax.sort(a, dimension=axis)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2434,7 +2434,7 @@ def args_maker(): return []
("percentile", partial(jtu.rand_uniform, low=0., high=100.)),
("quantile", partial(jtu.rand_uniform, low=0., high=1.)),
)
- for a_dtype in float_dtypes
+ for a_dtype in default_dtypes
for a_shape, axis in (
((7,), None),
((47, 7), 0),
@@ -2477,7 +2477,7 @@ def onp_fun(*args):
"a_shape": a_shape, "a_dtype": a_dtype,
"axis": axis,
"keepdims": keepdims}
- for a_dtype in float_dtypes
+ for a_dtype in default_dtypes
for a_shape, axis in (
((7,), None),
((47, 7), 0),
| jax.numpy.median throws error for float median
`import jax.numpy as np; np.median(np.array([i for i in range(0,10)]))` throws a `TypeError: q and a arguments to quantile must be of float type, got int32 and float32`, whereas `import numpy as np; np.median(np.array([i for i in range(0,10)]))` results in `4.5`.
<details><summary>Traceback of error (Click!)</summary>
<p>
```python
TypeError Traceback (most recent call last)
<ipython-input-28-eb17bd577cca> in <module>
----> 1 np.median(np.array([i for i in range(0,10)]))
/opt/anaconda3/lib/python3.7/site-packages/jax/numpy/lax_numpy.py in median(a, axis, out, overwrite_input, keepdims)
3556 q = 0.5
3557 return quantile(a, q, axis=axis, out=out, overwrite_input=overwrite_input,
-> 3558 keepdims=keepdims, interpolation='midpoint')
3559
3560 def _astype(arr, dtype):
/opt/anaconda3/lib/python3.7/site-packages/jax/numpy/lax_numpy.py in quantile(a, q, axis, out, overwrite_input, interpolation, keepdims)
3464 if interpolation not in ["linear", "lower", "higher", "midpoint", "nearest"]:
3465 raise ValueError("interpolation can only be 'linear', 'lower', 'higher', 'midpoint', or 'nearest'")
-> 3466 return _quantile(a, q, axis, interpolation, keepdims)
3467
3468 @partial(jit, static_argnums=(2, 3, 4))
/opt/anaconda3/lib/python3.7/site-packages/jax/api.py in f_jitted(*args, **kwargs)
151 flat_fun, out_tree = flatten_fun(f, in_tree)
152 out = xla.xla_call(flat_fun, *args_flat, device=device, backend=backend,
--> 153 name=flat_fun.__name__)
154 return tree_unflatten(out_tree(), out)
155
/opt/anaconda3/lib/python3.7/site-packages/jax/core.py in call_bind(primitive, f, *args, **params)
976 if top_trace is None:
977 with new_sublevel():
--> 978 outs = primitive.impl(f, *args, **params)
979 else:
980 tracers = map(top_trace.full_raise, args)
/opt/anaconda3/lib/python3.7/site-packages/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, *args)
461
462 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name):
--> 463 compiled_fun = _xla_callable(fun, device, backend, name, *map(arg_spec, args))
464 try:
465 return compiled_fun(*args)
/opt/anaconda3/lib/python3.7/site-packages/jax/linear_util.py in memoized_fun(fun, *args)
219 fun.populate_stores(stores)
220 else:
--> 221 ans = call(fun, *args)
222 cache[key] = (ans, fun.stores)
223 return ans
/opt/anaconda3/lib/python3.7/site-packages/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, *arg_specs)
478 pvals: Sequence[pe.PartialVal] = [pe.PartialVal.unknown(aval) for aval in abstract_args]
479 jaxpr, pvals, consts = pe.trace_to_jaxpr(
--> 480 fun, pvals, instantiate=False, stage_out=True, bottom=True)
481
482 _map(prefetch, it.chain(consts, jaxpr_literals(jaxpr)))
/opt/anaconda3/lib/python3.7/site-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate, stage_out, bottom)
419 with new_master(trace_type, bottom=bottom) as master:
420 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 421 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
422 assert not env
423 del master
/opt/anaconda3/lib/python3.7/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
/opt/anaconda3/lib/python3.7/site-packages/jax/numpy/lax_numpy.py in _quantile(a, q, axis, interpolation, keepdims)
3485 if not issubdtype(a.dtype, floating) or not issubdtype(q.dtype, floating):
3486 msg = "q and a arguments to quantile must be of float type, got {} and {}"
-> 3487 raise TypeError(msg.format(a.dtype, q.dtype))
3488
3489 # Promote q to at least float32 for precise interpolation.
TypeError: q and a arguments to quantile must be of float type, got int32 and float32
```
</p>
</details>
jax 0.1.63, Python 3.7.4
| Hello @Baschdl
A solution to this:
```
from jax import numpy as np
np.median(np.array([i for i in range(0,10)],dtype = np.float32))
```
You just need to add a **dtype** to the array.
Regards.
Shorter repro:
```
import jax.numpy as jnp
jnp.median(jnp.arange(10))
```
I'll work on a fix. | 2020-05-13T17:58:34 |
google/jax | 3,096 | google__jax-3096 | [
"3074"
] | 7c687b245b34397c13563a714ad9bf0290b419e3 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -4289,6 +4289,11 @@ def _select_and_gather_add_shape_rule(
64: onp.uint64,
}
+_INT_DTYPES = {
+ 16: onp.int16,
+ 32: onp.int32,
+ 64: onp.int64,
+}
def _select_and_gather_add_translation(
c, tangents, operand, *, select_prim, window_dimensions, window_strides,
@@ -4563,8 +4568,69 @@ def _sort_abstract_eval(*args, **kwargs):
raise TypeError(f"Arguments to sort must have equal shapes, got: {shapes}")
return args
+
+def _float_to_int_for_sort(x):
+ # Switch from a floating point value to a integer value in such a way that
+ # when using the integer value to compare, we get the same result for normal
+ # values, and -nan is treated as the smallest value, and nan is treated as
+ # the largest value.
+ # If f is a float, and
+ # x = bit_cast<int32>(f);
+ # y = x < 0 ? int32_max - x : x;
+ # then y is ordered as an int32 such that finite values have the obvious
+ # order, -0 is ordered before 0, and -NaN and NaN appear at the beginning
+ # and end of the ordering.
+ # Note that in order to avoid -x to overflow, we calculate
+ # int32_max - x as unsigned, and then convert back to signed.
+ if x.dtype == dtypes.bfloat16:
+ x = convert_element_type(x, onp.float32)
+ nbits = onp.finfo(x).bits
+ signed_dtype = _INT_DTYPES[nbits]
+ unsigned_dtype = _UINT_DTYPES[nbits]
+
+ signed = bitcast_convert_type(x, signed_dtype)
+ unsigned = bitcast_convert_type(x, unsigned_dtype)
+ flipped = bitcast_convert_type(
+ sub(unsigned_dtype(onp.iinfo(signed_dtype).max), unsigned), signed_dtype)
+ return select(lt(signed, _zero(signed)), flipped, signed)
+
+# Default comparator that sorts the operands only on their first arguments.
+# For floating point types, a total order is created where
+# -NaN < -infinity < ... < -0 < 0 < ... < infinity < NaN.
+# For complex types, the (real, imag) pairs are sorted lexicographically
+# (following NumPy's semantics).
+# This code adds complex-number support to the algorithm from:
+# https://github.com/tensorflow/tensorflow/blob/ba43780830f09da72081fe5061c436f1c6203a92/tensorflow/compiler/xla/client/lib/comparators.h#L33
+def _sort_lt_comparator(*operands):
+ assert len(operands) >= 2 and len(operands) % 2 == 0, operands
+ x, y = operands[:2]
+ assert x.dtype == y.dtype, (x.dtype, y.dtype)
+ if onp.issubdtype(x.dtype, onp.complexfloating):
+ x_keys = [_float_to_int_for_sort(real(x)), _float_to_int_for_sort(imag(x))]
+ y_keys = [_float_to_int_for_sort(real(y)), _float_to_int_for_sort(imag(y))]
+ elif onp.issubdtype(x.dtype, onp.floating):
+ x_keys = [_float_to_int_for_sort(x)]
+ y_keys = [_float_to_int_for_sort(y)]
+ else:
+ x_keys = [x]
+ y_keys = [y]
+
+ p = None
+ for xk, yk in zip(x_keys[::-1], y_keys[::-1]):
+ p = (bitwise_or(lt(xk, yk), bitwise_and(eq(xk, yk), p)) if p is not None
+ else lt(xk, yk))
+ return p
+
def _sort_translation_rule(c, *operands, dimension):
- out = xops.Sort(c, operands, dimension=dimension, is_stable=True)
+ types = [c.get_shape(x).xla_element_type() for x in operands]
+ subc = xla_bridge.make_computation_builder("sort_lt_comparator")
+ params = [xb.parameter(subc, 2 * i + j, xc.Shape.array_shape(typ, ()))
+ for i, typ in enumerate(types) for j in range(2)]
+ result = xla.lower_fun(_sort_lt_comparator,
+ multiple_results=False)(subc, *params)
+ comparator = subc.build(result)
+ out = xops.Sort(c, operands, dimension=dimension, is_stable=True,
+ comparator=comparator)
return out if len(operands) != 1 else xops.Tuple(c, [out])
def _sort_jvp(primals, tangents, *, dimension):
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -1302,13 +1302,17 @@ def testCumulativeReduce(self, op, onp_op, shape, dtype, axis, rng_factory):
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_axis={}".format(
jtu.format_shape_dtype_string(shape, dtype), axis),
- "rng_factory": rng_factory, "shape": shape, "dtype": dtype, "axis": axis}
- for dtype in [onp.float32, onp.int32, onp.uint32]
+ "shape": shape, "dtype": dtype, "axis": axis}
+ for dtype in all_dtypes
for shape in [(5,), (5, 7)]
- for axis in [-1, len(shape) - 1]
- for rng_factory in [jtu.rand_default]))
- def testSort(self, shape, dtype, axis, rng_factory):
- rng = rng_factory(self.rng())
+ for axis in [-1, len(shape) - 1]))
+ def testSort(self, shape, dtype, axis):
+ # TODO(b/141131288): enable complex-valued sorts on TPU.
+ if (onp.issubdtype(dtype, onp.complexfloating) and (
+ (jtu.device_under_test() == "cpu" and jax.lib.version <= (0, 1, 47)) or
+ jtu.device_under_test() == "tpu")):
+ raise SkipTest("Complex-valued sort not implemented")
+ rng = jtu.rand_default(self.rng())
args_maker = lambda: [rng(shape, dtype)]
fun = lambda x: lax.sort(x, dimension=axis)
self._CompileAndCheck(fun, args_maker, check_dtypes=True)
@@ -1316,13 +1320,17 @@ def testSort(self, shape, dtype, axis, rng_factory):
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_axis={}".format(
jtu.format_shape_dtype_string(shape, dtype), axis),
- "rng_factory": rng_factory, "shape": shape, "dtype": dtype, "axis": axis}
- for dtype in [onp.float32, onp.int32, onp.uint32]
+ "shape": shape, "dtype": dtype, "axis": axis}
+ for dtype in all_dtypes
for shape in [(5,), (5, 7)]
- for axis in [-1, len(shape) - 1]
- for rng_factory in [jtu.rand_default]))
- def testSortAgainstNumpy(self, shape, dtype, axis, rng_factory):
- rng = rng_factory(self.rng())
+ for axis in [-1, len(shape) - 1]))
+ def testSortAgainstNumpy(self, shape, dtype, axis):
+ # TODO(b/141131288): enable complex-valued sorts on TPU.
+ if (onp.issubdtype(dtype, onp.complexfloating) and (
+ (jtu.device_under_test() == "cpu" and jax.lib.version <= (0, 1, 47)) or
+ jtu.device_under_test() == "tpu")):
+ raise SkipTest("Complex-valued sort not implemented")
+ rng = jtu.rand_default(self.rng())
args_maker = lambda: [rng(shape, dtype)]
op = lambda x: lax.sort(x, dimension=axis)
numpy_op = lambda x: lax_reference.sort(x, axis)
@@ -1333,15 +1341,19 @@ def testSortAgainstNumpy(self, shape, dtype, axis, rng_factory):
jtu.format_shape_dtype_string(shape, key_dtype),
jtu.format_shape_dtype_string(shape, val_dtype),
axis),
- "rng_factory": rng_factory, "shape": shape,
- "key_dtype": key_dtype, "val_dtype": val_dtype, "axis": axis}
- for key_dtype in [onp.float32, onp.int32, onp.uint32]
+ "shape": shape, "key_dtype": key_dtype, "val_dtype": val_dtype,
+ "axis": axis}
+ for key_dtype in float_dtypes + complex_dtypes + int_dtypes + uint_dtypes
for val_dtype in [onp.float32, onp.int32, onp.uint32]
for shape in [(3,), (5, 3)]
- for axis in [-1, len(shape) - 1]
- for rng_factory in [jtu.rand_default]))
- def testSortKeyVal(self, shape, key_dtype, val_dtype, axis, rng_factory):
- rng = rng_factory(self.rng())
+ for axis in [-1, len(shape) - 1]))
+ def testSortKeyVal(self, shape, key_dtype, val_dtype, axis):
+ # TODO(b/141131288): enable complex-valued sorts on TPU.
+ if (onp.issubdtype(key_dtype, onp.complexfloating) and (
+ (jtu.device_under_test() == "cpu" and jax.lib.version <= (0, 1, 47)) or
+ jtu.device_under_test() == "tpu")):
+ raise SkipTest("Complex-valued sort not implemented")
+ rng = jtu.rand_default(self.rng())
# This test relies on the property that wherever keys are tied, values are
# too, since we don't guarantee the same ordering of values with equal keys.
# To avoid that case, we generate unique keys (globally in the key array).
@@ -1359,15 +1371,19 @@ def args_maker():
jtu.format_shape_dtype_string(shape, key_dtype),
jtu.format_shape_dtype_string(shape, val_dtype),
axis),
- "rng_factory": rng_factory, "shape": shape,
- "key_dtype": key_dtype, "val_dtype": val_dtype, "axis": axis}
- for key_dtype in [onp.float32, onp.int32, onp.uint32]
+ "shape": shape, "key_dtype": key_dtype, "val_dtype": val_dtype,
+ "axis": axis}
+ for key_dtype in float_dtypes + complex_dtypes + int_dtypes + uint_dtypes
for val_dtype in [onp.float32, onp.int32, onp.uint32]
for shape in [(3,), (5, 3)]
- for axis in [-1, len(shape) - 1]
- for rng_factory in [jtu.rand_default]))
- def testSortKeyValAgainstNumpy(self, shape, key_dtype, val_dtype, axis, rng_factory):
- rng = rng_factory(self.rng())
+ for axis in [-1, len(shape) - 1]))
+ def testSortKeyValAgainstNumpy(self, shape, key_dtype, val_dtype, axis):
+ # TODO(b/141131288): enable complex-valued sorts on TPU.
+ if (onp.issubdtype(key_dtype, onp.complexfloating) and (
+ (jtu.device_under_test() == "cpu" and jax.lib.version <= (0, 1, 47)) or
+ jtu.device_under_test() == "tpu")):
+ raise SkipTest("Complex-valued sort not implemented")
+ rng = jtu.rand_default(self.rng())
# This test relies on the property that wherever keys are tied, values are
# too, since we don't guarantee the same ordering of values with equal keys.
# To avoid that case, we generate unique keys (globally in the key array).
| RuntimeError: Unimplemented: complex comparison 'LT'
I'm getting a weird internal JAX error with this script:
```python
import jax.numpy as jp
def lqr_continuous_time_infinite_horizon(A, B, Q, R, N):
# Take the last dimension, in case we try to do some kind of broadcasting
# thing in the future.
x_dim = A.shape[-1]
# See https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic_regulator#Infinite-horizon,_continuous-time_LQR.
A1 = A - B @ jp.linalg.solve(R, N.T)
Q1 = Q - N @ jp.linalg.solve(R, N.T)
# See https://en.wikipedia.org/wiki/Algebraic_Riccati_equation#Solution.
H = jp.block([[A1, -B @ jp.linalg.solve(R, B.T)], [-Q1, -A1]])
eigvals, eigvectors = jp.linalg.eig(H)
argsort = jp.argsort(eigvals)
ix = argsort[:x_dim]
U = eigvectors[:, ix]
P = U[x_dim:, :] @ jp.linalg.inv(U[:x_dim, :])
K = jp.linalg.solve(R, (B.T @ P + N.T))
return K, P, eigvals[ix]
def _test_lqr(n):
import control
from jax.tree_util import tree_multimap
A = jp.eye(n)
B = jp.eye(n)
Q = jp.eye(n)
R = jp.eye(n)
N = jp.zeros((n, n))
actual = lqr_continuous_time_infinite_horizon(A, B, Q, R, N)
expected = control.lqr(A, B, Q, R, N)
assert tree_multimap(jp.allclose, actual, expected)
if __name__ == "__main__":
_test_lqr(2)
```
I'm getting:
```
❯ pipenv run python -m research.lqr
/Users/skainswo/dev/jax/jax/lib/xla_bridge.py:116: UserWarning: No GPU/TPU found, falling back to CPU.
warnings.warn('No GPU/TPU found, falling back to CPU.')
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/skainswo/dev/research/research/lqr.py", line 50, in <module>
_test_lqr1(2)
File "/Users/skainswo/dev/research/research/lqr.py", line 45, in _test_lqr1
actual = lqr_continuous_time_infinite_horizon(A, B, Q, R, N)
File "/Users/skainswo/dev/research/research/lqr.py", line 26, in lqr_continuous_time_infinite_horizon
argsort = jp.argsort(eigvals)
File "/Users/skainswo/dev/jax/jax/numpy/lax_numpy.py", line 2886, in argsort
_, perm = lax.sort_key_val(a, iota, dimension=axis)
File "/Users/skainswo/dev/jax/jax/lax/lax.py", line 1190, in sort_key_val
result = sort_key_val_p.bind(keys, values, dimension=dimension)
File "/Users/skainswo/dev/jax/jax/core.py", line 211, in bind
return self.impl(*args, **kwargs)
File "/Users/skainswo/dev/jax/jax/interpreters/xla.py", line 217, in apply_primitive
compiled_fun = xla_primitive_callable(prim, *map(arg_spec, args), **params)
File "/Users/skainswo/dev/jax/jax/interpreters/xla.py", line 254, in xla_primitive_callable
compiled = backend.compile(built_c, compile_options=options)
RuntimeError: Unimplemented: complex comparison 'LT'
```
I'm guessing that this has something to do with the fact that I'm getting complex eigenvalues out, but the error message is pretty confusing...
| Yes, this is certainly a bad error.
I'm curious if you actually want the `np.argsort` behavior on complex numbers (lexicographic order) or just a better error... either way, I guess we'll implement the `np.argsort` behavior.
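For concreteness, a small sketch of the NumPy ordering in question (lexicographic on the real part, then the imaginary part), which is what the comparator in the patch above reproduces:

```python
import numpy as np

x = np.array([2 + 1j, 1 + 2j, 1 + 1j])
np.sort(x)     # -> array([1.+1.j, 1.+2.j, 2.+1.j])
np.argsort(x)  # -> array([2, 1, 0])
```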
@hawkinsp Yeah, that's a great question. I think I'd personally prefer a better error, but only because it's not immediately clear to me what the right ordering should be. | 2020-05-14T18:51:47 |
google/jax | 3,097 | google__jax-3097 | [
"3079"
] | 7c687b245b34397c13563a714ad9bf0290b419e3 | diff --git a/jax/experimental/jet.py b/jax/experimental/jet.py
--- a/jax/experimental/jet.py
+++ b/jax/experimental/jet.py
@@ -51,16 +51,17 @@ def flatten_fun_output(*args):
yield tree_flatten(ans)
f, out_tree = flatten_fun_output(lu.wrap_init(fun))
- out_primals, out_terms = jet_fun(jet_subtrace(f)).call_wrapped(primals, series)
+ out_primals, out_terms = jet_fun(jet_subtrace(f), order).call_wrapped(primals, series)
return tree_unflatten(out_tree(), out_primals), tree_unflatten(out_tree(), out_terms)
@lu.transformation
-def jet_fun(primals, series):
+def jet_fun(order, primals, series):
with core.new_master(JetTrace) as master:
+ master.order = order
out_primals, out_terms = yield (master, primals, series), {}
del master
- out_terms = [tree_map(lambda x: onp.zeros_like(x, dtype=onp.result_type(out_primals[0])), series[0])
- if s is zero_series else s for s in out_terms]
+ out_terms = [[onp.zeros_like(p)] * order if s is zero_series else s
+ for p, s in zip(out_primals, out_terms)]
yield out_primals, out_terms
@lu.transformation
@@ -112,8 +113,8 @@ def sublift(self, val):
def process_primitive(self, primitive, tracers, params):
assert not primitive.multiple_results # TODO
+ order = self.master.order
primals_in, series_in = unzip2((t.primal, t.terms) for t in tracers)
- order, = {len(terms) for terms in series_in if terms is not zero_series}
series_in = [[zero_term] * order if s is zero_series else s
for s in series_in]
# TODO(mattjj): avoid always instantiating zeros
| [jet] Unable to determine order from zeroseries
When `series_in` consists only of `zeroseries`, [this line](https://github.com/google/jax/blob/master/jax/experimental/jet.py#L116) will fail with `ValueError: not enough values to unpack (expected 1, got 0)`.
This occurs when taking jet through `jax.nn.softplus = lambda x: jnp.logaddexp(x, 0)`, as we pass one `zeroseries` to `convert_element_type`. We can avoid the error in this case by avoiding this primitive altogether and manually converting the dtype (`lambda x: jnp.logaddexp(x, 0.)`), but I think it's better to fix this in general.
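A minimal repro sketch of the failure described above; the exact `jet` call here is an assumption, but any order >= 1 series for `x` should route a zero series into `convert_element_type`:

```python
import jax
from jax.experimental import jet

# Hypothetical repro: softplus lowers to logaddexp(x, 0), so the constant
# operand reaches convert_element_type carrying only a zero series.
primal_out, series_out = jet.jet(jax.nn.softplus, (1.0,), ((1.0, 0.0),))
```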
| 2020-05-14T19:14:17 |
||
google/jax | 3,098 | google__jax-3098 | [
"3001"
] | 007cdf2f9eb46f665ae9d39bad5992a4bf994100 | diff --git a/jax/dtypes.py b/jax/dtypes.py
--- a/jax/dtypes.py
+++ b/jax/dtypes.py
@@ -125,12 +125,22 @@ def finfo(dtype):
else:
return np.finfo(dtype)
+def _issubclass(a, b):
+ """Determines if ``a`` is a subclass of ``b``.
+
+ Similar to issubclass, but returns False instead of an exception if `a` is not
+ a class.
+ """
+ try:
+ return issubclass(a, b)
+ except TypeError:
+ return False
def issubdtype(a, b):
if a == bfloat16:
return b in [bfloat16, _bfloat16_dtype, np.floating, np.inexact,
np.number]
- if not issubclass(b, np.generic):
+ if not _issubclass(b, np.generic):
# Workaround for JAX scalar types. NumPy's issubdtype has a backward
# compatibility behavior for the second argument of issubdtype that
# interacts badly with JAX's custom scalar types. As a workaround,
| diff --git a/tests/dtypes_test.py b/tests/dtypes_test.py
--- a/tests/dtypes_test.py
+++ b/tests/dtypes_test.py
@@ -149,6 +149,7 @@ def testIsSubdtype(self):
self.assertTrue(dtypes.issubdtype(t, t))
self.assertTrue(dtypes.issubdtype(np.dtype(t).type, t))
self.assertTrue(dtypes.issubdtype(t, np.dtype(t).type))
+ self.assertTrue(dtypes.issubdtype(t, np.dtype(t)))
if t != jnp.bfloat16:
for category in [np.generic, jnp.inexact, jnp.integer, jnp.signedinteger,
jnp.unsignedinteger, jnp.floating, jnp.complexfloating]:
| `issubdtype` errors when using dtype constructor
NumPy:
```python
np.issubdtype(np.int32, np.dtype('int32')) # ==> True
```
JAX NumPy:
```python
jnp.issubdtype(jnp.int32, jnp.dtype('int32')) # ==> TypeError: issubclass() arg 1 must be a class
```
| Try: `jnp.issubdtype(jnp.int32, jnp.dtype('int32').type)`. This is an unfortunate quirk of `numpy` whereby dtype objects are sometimes interchangeable with the types they are based on. See [here](https://github.com/numpy/numpy/issues/7242), for example.
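A small sketch contrasting the two calls; the first line is the NumPy behaviour from the report, the second is the workaround above (once JAX matches NumPy, passing the dtype instance directly should also return True):

```python
import numpy as np
import jax.numpy as jnp

np.issubdtype(np.int32, np.dtype('int32'))          # True in NumPy
jnp.issubdtype(jnp.int32, jnp.dtype('int32').type)  # workaround: pass the scalar type
```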
Thanks for the pointer!
Is this issue solved?
No, I think it's still a bug. JAX should act like NumPy here. | 2020-05-14T21:33:35 |
google/jax | 3,110 | google__jax-3110 | [
"3024"
] | 812df27a2d8f61bfac95ff2867d1284cac81837f | diff --git a/jax/scipy/ndimage.py b/jax/scipy/ndimage.py
--- a/jax/scipy/ndimage.py
+++ b/jax/scipy/ndimage.py
@@ -44,13 +44,10 @@ def _nearest_indices_and_weights(coordinate):
def _linear_indices_and_weights(coordinate):
lower = jnp.floor(coordinate)
- upper = jnp.ceil(coordinate)
- l_index = lower.astype(jnp.int32)
- u_index = upper.astype(jnp.int32)
- one = coordinate.dtype.type(1)
- l_weight = one - (coordinate - lower)
- u_weight = one - l_weight # handles the edge case lower==upper
- return [(l_index, l_weight), (u_index, u_weight)]
+ upper_weight = coordinate - lower
+ lower_weight = 1 - upper_weight
+ index = lower.astype(jnp.int32)
+ return [(index, lower_weight), (index + 1, upper_weight)]
@functools.partial(api.jit, static_argnums=(2, 3, 4))
@@ -95,11 +92,12 @@ def _map_coordinates(input, coordinates, order, mode, cval):
outputs = []
for items in itertools.product(*valid_1d_interpolations):
indices, validities, weights = zip(*items)
- if any(valid is not True for valid in validities):
+ if all(valid is True for valid in validities):
+ # fast path
+ contribution = input[indices]
+ else:
all_valid = functools.reduce(operator.and_, validities)
contribution = jnp.where(all_valid, input[indices], cval)
- else:
- contribution = input[indices]
outputs.append(_nonempty_prod(weights) * contribution)
result = _nonempty_sum(outputs)
return result
| diff --git a/tests/scipy_ndimage_test.py b/tests/scipy_ndimage_test.py
--- a/tests/scipy_ndimage_test.py
+++ b/tests/scipy_ndimage_test.py
@@ -21,6 +21,7 @@
from absl.testing import parameterized
import scipy.ndimage as osp_ndimage
+from jax import grad
from jax import test_util as jtu
from jax import dtypes
from jax.scipy import ndimage as lsp_ndimage
@@ -119,6 +120,21 @@ def testMapCoordinateDocstring(self):
self.assertIn("Only linear interpolation",
lsp_ndimage.map_coordinates.__doc__)
+ def testContinuousGradients(self):
+ # regression test for https://github.com/google/jax/issues/3024
+
+ def loss(delta):
+ x = onp.arange(100.0)
+ border = 10
+ indices = onp.arange(x.size) + delta
+ # linear interpolation of the linear function y=x should be exact
+ shifted = lsp_ndimage.map_coordinates(x, [indices], order=1)
+ return ((x - shifted) ** 2)[border:-border].mean()
+
+ # analytical gradient of (x - (x - delta)) ** 2 is 2 * delta
+ self.assertAllClose(grad(loss)(0.5), 1.0, check_dtypes=False)
+ self.assertAllClose(grad(loss)(1.0), 2.0, check_dtypes=False)
+
if __name__ == "__main__":
absltest.main()
| Zero gradient when resampling an image at grid location using map_coordinates
Sorry if this is not a very minimal test case, but let me explain my use case and issue, as I believe the context will help.
I am trying to use jax for a toy image registration problem. Given two images `x1` and `x2`, I want to find the translation `u` that minimises the difference between `x1(.)` and `x2(.+u)` as measured in terms of mean square error (MSE).
The computation of the gradient of the cost function in this context is usually done by assuming the images are continuous, computing the gradient of the MSE as `-2(x1-x2)∇x2`, and computing `∇x2` with something similar to `np.gradient`.
Trying to mimic this setup with jax to avoid computing the gradient manually (this would be useful, for example, as soon as one wants to change the MSE loss for something else) fails to converge to a suitable translation (at least if initialised with an integer translation), as the gradient from jax is **exactly** zero for integer translations.
Below is a test case to illustrate the zero gradient issue. I understand there is a discontinuity of the gradient at such integer points but I was expecting to nonetheless get a proper **sub-gradient**.
I am not sure if I am doing something wrong or simply misunderstanding something, but so far I haven't managed to get the image registration to converge with jax derivatives even though it does with a numerical approximation of the gradient, or the classical continuous approximation I was referring to.
```python
import jax
import jax.numpy as jnp
import numpy as onp
from jax.scipy import ndimage as jndimage
from scipy import ndimage as ondimage
import scipy as oscipy
# This needs to run at startup
# https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#Double-(64bit)-precision
jax.config.update('jax_enable_x64', True)
# Exclude a border in the computation of the loss to try and avoid numerical issues
excl_border = 2
def run(np,ndimage):
print(f"\nRunning on {np}")
onp.random.seed(0)
x1 = onp.random.randn(20,10)
x2 = onp.random.randn(20,10)
grid_x, grid_y = np.meshgrid(np.arange(x1.shape[1]), np.arange(x1.shape[0]))
def lossfunc(du):
# Get translated grid
def_grid_x = grid_x + du[0]
def_grid_y = grid_y + du[1]
# Resample image
tmpx2_warped = jndimage.map_coordinates(x2, [def_grid_y, def_grid_x], order=1)
# Compute the MSE between the warped image and the fixed image
diff_im = tmpx2_warped[excl_border:-excl_border,excl_border:-excl_border]-x1[excl_border:-excl_border,excl_border:-excl_border]
imloss = np.mean((diff_im)**2)
return imloss
def mg_lossfunc(du):
#Get translated grid
def_grid_x = grid_x + du[0]
def_grid_y = grid_y + du[1]
# Resample image
tmpx2_warped = ndimage.map_coordinates(x2, [def_grid_y, def_grid_x], order=1)
# Compute the MSE between the warped image and the fixed image
diff_im = tmpx2_warped[excl_border:-excl_border,excl_border:-excl_border]-x1[excl_border:-excl_border,excl_border:-excl_border]
jm = np.gradient(tmpx2_warped)
jmrx = jm[0][excl_border:-excl_border,excl_border:-excl_border]
jmry = jm[1][excl_border:-excl_border,excl_border:-excl_border]
jdmx = -2.*diff_im*jmrx
jdmy = -2.*diff_im*jmry
return np.array([np.mean(jdmx), np.mean(jdmy)])
print(f"loss at 0,0: {lossfunc([0., 0.])}")
print(f"loss at 0.1,0.1: {lossfunc([0.1, 0.1])}")
print(f"Manual approx gradient at 0,0: {mg_lossfunc([0., 0.])}")
print(f"Manual approx gradient at 0.1,0.1: {mg_lossfunc([0.1, 0.1])}")
# Finite difference gradient with a large step as the image is sampled on a grid and interpolated
epsgrad = 0.1
print(f"Numerical approx gradient at 0,0: {oscipy.optimize.approx_fprime([0., 0.],lossfunc,epsgrad)}")
print(f"Numerical approx gradient at 0.1,0.1: {oscipy.optimize.approx_fprime([0.1, 0.1],lossfunc,epsgrad)}")
if np==jnp:
jg_lossfunc = lambda du:np.asarray(jax.jit(jax.grad(lossfunc))(du))
print(f"Jax gradient at 0,0: {jg_lossfunc([0., 0.])}")
print(f"Jax gradient at 0.1,0.1: {jg_lossfunc([0.1, 0.1])}")
run(onp,ondimage)
run(jnp,jndimage)
```
Outputs:
```
Running on <module 'numpy' from '/usr/local/lib/python3.6/dist-packages/numpy/__init__.py'>
loss at 0,0: 1.9514397057693333
loss at 0.1,0.1: 1.6584230856711766
Manual approx gradient at 0,0: [ 0.01189463 -0.21033731]
Manual approx gradient at 0.1,0.1: [-0.00626396 -0.12798883]
Numerical approx gradient at 0,0: [-1.72714353 -1.47912221]
Numerical approx gradient at 0.1,0.1: [-1.15234194 -0.9063109 ]
Running on <module 'jax.numpy' from '/usr/local/lib/python3.6/dist-packages/jax/numpy/__init__.py'>
loss at 0,0: 1.9514397057693333
loss at 0.1,0.1: 1.6584230856711766
Manual approx gradient at 0,0: [ 0.01189463 -0.21033731]
Manual approx gradient at 0.1,0.1: [-0.00626396 -0.12798883]
Numerical approx gradient at 0,0: [-1.72714353 -1.47912221]
Numerical approx gradient at 0.1,0.1: [-1.15234194 -0.9063109 ]
Jax gradient at 0,0: [0. 0.]
Jax gradient at 0.1,0.1: [-1.30169297 -1.05466678]
```
| I think it may be helpful to look at the function you're optimizing:
```python
import matplotlib.pyplot as plt
plt.figure()
x = jnp.linspace(-1, 2, num=91)
losses = jax.jit(jax.vmap(lossfunc))(jnp.stack([x, x], axis=1))
plt.plot(x, jax.device_get(losses), '-s')
plt.ylabel('loss')
plt.figure()
grads = jax.jit(jax.vmap(jax.grad(lossfunc)))(jnp.stack([x, x], axis=1))
plt.plot(x, jax.device_get(grads), '-s')
plt.ylabel('grad(loss)')
```


So yes, it's a little weird that the gradient is _exactly_ zero at these points instead of picking the value from one of the sides (which _might_ make more sense?) but you're going to have trouble optimizing this function with gradient based methods no matter how you calculate them, because the gradients on either side of this point go in opposite directions!
Maybe there's something different about your real use-case here?
Thanks @shoyer, the only obvious difference with the real use case is that the images are not random. A more complete example that includes the optimisation over the translation can be found here:
https://colab.research.google.com/drive/1lkV7zBPL4YLiKwTxz1uL9K188uiF6BLI?usp=sharing
I guess the classical approximation of the gradient (`-2(x1-x2)∇x2`) has a smoothing effect without which local minima at grid points are an issue. Switching the interpolation order to 3 instead of 1 might help (at the cost of computational time), but for now this is not implemented in jax:
```
NotImplementedError: jax.scipy.ndimage.map_coordinates currently requires order<=1
```
I don't know the details from the signal processing literature, but I suspect adding some sort of a low-pass filter, either before or after resampling, is important to avoid [anti-aliasing](https://en.wikipedia.org/wiki/Anti-aliasing_filter) issues when doing this sort of alignment.
Many thanks @shoyer for spending time on this. Anti-aliasing is not typically needed in such a simple translation problem. I have expanded the example a bit (and fixed a small bug in the manual approximate gradient) to rely on a very smooth set of images:

The following graph shows the gradients along a diagonal translation as computed with the manual approximate gradient (`(mx,my)`), the finite difference one (`(ax,ay)`) and the jax one (`(jx,jy)` with the spikes):

Code snippet:
<details>
<summary>Click to expand code</summary>
```python
import jax
import jax.numpy as jnp
import numpy as onp
from jax.scipy import ndimage as jndimage
from scipy import ndimage as ondimage
import scipy as oscipy
import matplotlib.pyplot as plt
# This needs to run at startup
# https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#Double-(64bit)-precision
jax.config.update('jax_enable_x64', True)
# Exclude a border in the computation of the loss to try and avoid numerical issues
excl_border = 2
def run(np,ndimage):
print(f"\n===\nRunning on {np}")
h=30
w=40
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
cx1 = np.round(w/2.)
cy1 = np.round(h/2.)
sx = w/4.
sy = h/4.
cx2 = cx1+2.
cy2 = cy1+2.
x1 = 10*np.exp( -( ((grid_x-cx1)/sx)**2 + ((grid_y-cy1)/sy)**2 ) )
x2 = 10*np.exp( -( ((grid_x-cx2)/sx)**2 + ((grid_y-cy2)/sy)**2 ) )
plt.figure()
fig, axs = plt.subplots(1,2)
axs[0].set_title('x1')
axs[0].imshow(x1)
axs[1].set_title('x2')
axs[1].imshow(x2)
plt.show()
def lossfunc(du):
# Get translated grid
def_grid_x = grid_x + du[0]
def_grid_y = grid_y + du[1]
# Resample image
tmpx2_warped = jndimage.map_coordinates(x2, [def_grid_y, def_grid_x], order=1)
# Compute the MSE between the warped image an the fixed image
diff_im = tmpx2_warped[excl_border:-excl_border,excl_border:-excl_border]-x1[excl_border:-excl_border,excl_border:-excl_border]
imloss = np.mean((diff_im)**2)
return imloss
def mg_lossfunc(du):
#Get translated grid
def_grid_x = grid_x + du[0]
def_grid_y = grid_y + du[1]
# Resample image
tmpx2_warped = ndimage.map_coordinates(x2, [def_grid_y, def_grid_x], order=1)
# Compute the MSE between the warped image an the fixed image
diff_im = tmpx2_warped[excl_border:-excl_border,excl_border:-excl_border]-x1[excl_border:-excl_border,excl_border:-excl_border]
jm = np.gradient(tmpx2_warped)
jmrx = jm[1][excl_border:-excl_border,excl_border:-excl_border]
jmry = jm[0][excl_border:-excl_border,excl_border:-excl_border]
jdmx = 2.*diff_im*jmrx
jdmy = 2.*diff_im*jmry
return np.array([np.mean(jdmx), np.mean(jdmy)])
print(f"loss at 0,0: {lossfunc([0., 0.])}")
print(f"loss at 0.1,0.1: {lossfunc([0.1, 0.1])}")
print(f"loss at -0.1,-0.1: {lossfunc([-0.1, -0.1])}")
plt.figure(figsize=(30, 2))
uu = np.arange(-1.1, 3.5, 0.05)
losses = list(map(lossfunc, np.stack([uu, uu], axis=1)))
plt.plot(uu, losses, '-s')
plt.ylabel('loss')
plt.show()
print(f"Manual approx gradient at 0,0: {mg_lossfunc([0., 0.])}")
print(f"Manual approx gradient at 0.1,0.1: {mg_lossfunc([0.1, 0.1])}")
print(f"Manual approx gradient at -0.1,-0.1: {mg_lossfunc([-0.1, -0.1])}")
print(f"Manual approx gradient at 2,-1: {mg_lossfunc([2., -1.])}")
print(f"Manual approx gradient at 2.1,-1.1: {mg_lossfunc([2.1, -1.1])}")
# Finite difference gradient with a large step as the image is sampled on a grid and interpolated
epsgrad = 0.1
ag_lossfunc = lambda du: oscipy.optimize.approx_fprime(du,lossfunc,epsgrad)
print(f"Numerical approx gradient at 0,0: {ag_lossfunc([0., 0.])}")
print(f"Numerical approx gradient at 0.1,0.1: {ag_lossfunc([0.1, 0.1])}")
print(f"Numerical approx gradient at -0.1,-0.1: {ag_lossfunc([-0.1, -0.1])}")
print(f"Numerical approx gradient at 2.,-1.: {ag_lossfunc([2., -1.])}")
print(f"Numerical approx gradient at 2.1,-1.1: {ag_lossfunc([2.1, -1.1])}")
if np==jnp:
jg_lossfunc = lambda du:np.asarray(jax.jit(jax.grad(lossfunc))(du))
print(f"Jax gradient at 0,0: {jg_lossfunc([0., 0.])}")
print(f"Jax gradient at 0.1,0.1: {jg_lossfunc([0.1, 0.1])}")
print(f"Jax gradient at -0.1,-0.1: {jg_lossfunc([-0.1, -0.1])}")
print(f"Jax gradient at 2.1,-1.1: {jg_lossfunc([2., -1.])}")
print(f"Jax gradient at 2.1,-1.1: {jg_lossfunc([2.1, -1.1])}")
plt.figure(figsize=(30, 2))
mg_grads = list(map(mg_lossfunc, np.stack([uu, uu], axis=1)))
plt.plot(uu, mg_grads, '-o')
ag_grads = list(map(ag_lossfunc, np.stack([uu, uu], axis=1)))
plt.plot(uu, ag_grads, '-+')
plt.legend(['mx', 'my', 'ax', 'ay'])
if np==jnp:
jg_lossfunc = lambda du:np.asarray(jax.jit(jax.grad(lossfunc))(du))
jg_grads = list(map(jg_lossfunc, np.stack([uu, uu], axis=1)))
plt.plot(uu, jg_grads, '-d')
plt.legend(['mx', 'my', 'ax', 'ay', 'jx', 'jy'])
plt.ylabel('grad(loss)')
plt.show()
print("\nBFGS optimisation\n")
u = np.zeros(2)
opt_opt={'disp': True, 'maxiter': 200, 'eps': epsgrad, 'gtol':1e-9}
res = oscipy.optimize.minimize(lossfunc, u, method="BFGS", options=opt_opt)
print(f"\nNumerical approx gradient - loss at optim end {res.x}: {lossfunc(res.x)}")
res = oscipy.optimize.minimize(lossfunc, u, jac=mg_lossfunc, method="BFGS", options=opt_opt)
print(f"\nManual approx gradient - loss at optim end {res.x}: {lossfunc(res.x)}")
if np==jnp:
jg_lossfunc = lambda du:np.asarray(jax.jit(jax.grad(lossfunc))(du))
res = oscipy.optimize.minimize(lossfunc, u, jac=jg_lossfunc, method="BFGS", options=opt_opt)
print(f"\nJax grad - loss at optim end {res.x}: {lossfunc(res.x)}")
run(onp,ondimage)
run(jnp,jndimage)
```
</details>
<details>
<summary>Click to expand output</summary>
```
===
Running on <module 'numpy' from '/usr/local/lib/python3.6/dist-packages/numpy/__init__.py'>
<Figure size 432x288 with 0 Axes>
loss at 0,0: 1.3439400676144586
loss at 0.1,0.1: 1.214026608789585
loss at -0.1,-0.1: 1.4720960079911962
Manual approx gradient at 0,0: [-0.4687936 -0.82254061]
Manual approx gradient at 0.1,0.1: [-0.44707712 -0.78433099]
Manual approx gradient at -0.1,-0.1: [-0.48807353 -0.85495853]
Manual approx gradient at 2,-1: [-7.75443519e-05 -1.18900303e+00]
Manual approx gradient at 2.1,-1.1: [ 0.02652188 -1.21587652]
Numerical approx gradient at 0,0: [-0.46826943 -0.83002476]
Numerical approx gradient at 0.1,0.1: [-0.44431941 -0.78719149]
Numerical approx gradient at -0.1,-0.1: [-0.46686583 -0.81231689]
Numerical approx gradient at 2.,-1.: [ 0.00328248 -1.22299187]
Numerical approx gradient at 2.1,-1.1: [ 0.02816145 -1.15391493]
BFGS optimisation
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.000708
Iterations: 4
Function evaluations: 324
Gradient evaluations: 78
Numerical approx gradient - loss at optim end [1.95190614 1.95647443]: 0.0007083630658263352
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.000000
Iterations: 8
Function evaluations: 50
Gradient evaluations: 39
Manual approx gradient - loss at optim end [1.99890299 1.99994993]: 1.507664203680706e-07
===
Running on <module 'jax.numpy' from '/usr/local/lib/python3.6/dist-packages/jax/numpy/__init__.py'>
<Figure size 432x288 with 0 Axes>
loss at 0,0: 1.3439400676144586
loss at 0.1,0.1: 1.214026608789585
loss at -0.1,-0.1: 1.4720960079911962
Manual approx gradient at 0,0: [-0.4687936 -0.82254061]
Manual approx gradient at 0.1,0.1: [-0.44707712 -0.78433099]
Manual approx gradient at -0.1,-0.1: [-0.48807353 -0.85495853]
Manual approx gradient at 2,-1: [-7.75443519e-05 -1.18900303e+00]
Manual approx gradient at 2.1,-1.1: [ 0.02296511 -1.21586866]
Numerical approx gradient at 0,0: [-0.46826943 -0.83002476]
Numerical approx gradient at 0.1,0.1: [-0.44431941 -0.78719149]
Numerical approx gradient at -0.1,-0.1: [-0.46686583 -0.81231689]
Numerical approx gradient at 2.,-1.: [ 0.00328248 -1.22299187]
Numerical approx gradient at 2.1,-1.1: [ 0.02816145 -1.15391493]
Jax gradient at 0,0: [0. 0.]
Jax gradient at 0.1,0.1: [-0.45671462 -0.80902832]
Jax gradient at -0.1,-0.1: [-0.47918073 -0.83378842]
Jax gradient at 2.1,-1.1: [0. 0.]
Jax gradient at 2.1,-1.1: [ 0.01547956 -1.17478728]
BFGS optimisation
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.000708
Iterations: 4
Function evaluations: 324
Gradient evaluations: 78
Numerical approx gradient - loss at optim end [1.95190614 1.95647443]: 0.0007083630658263352
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 9
Function evaluations: 10
Gradient evaluations: 10
Manual approx gradient - loss at optim end [2. 2.]: 5.814722077624332e-20
Optimization terminated successfully.
Current function value: 1.343940
Iterations: 0
Function evaluations: 1
Gradient evaluations: 1
Jax grad - loss at optim end [0. 0.]: 1.3439400676144586
```
</details>
OK, I agree this looks pretty bad! We shouldn't have the gradient deviate at a single point.
I bet the problem is when `lower == upper` on these lines:
https://github.com/google/jax/blob/db71f3c5fc5226c3e9c87dd9f056d1b63cfa0286/jax/scipy/ndimage.py#L45-L53
Rather than computing `upper = jnp.ceil(coordinate)`, we should probably just set `upper = lower + 1` | 2020-05-15T20:51:00 |
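A minimal sketch of that idea, mirroring the patch at the top of this entry: derive the upper index from the lower one and let the two weights always sum to 1, so a coordinate that lands exactly on a grid point is no longer a special case.

```python
import jax.numpy as jnp

def _linear_indices_and_weights(coordinate):
    lower = jnp.floor(coordinate)
    upper_weight = coordinate - lower   # exactly 0 when coordinate is on a grid point
    lower_weight = 1 - upper_weight
    index = lower.astype(jnp.int32)
    return [(index, lower_weight), (index + 1, upper_weight)]
```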
google/jax | 3,149 | google__jax-3149 | [
"3121"
] | 73b76e9976ba94a9e28759faca602a4f9f295578 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -150,7 +150,7 @@ def f_jitted(*args, **kwargs):
else:
dyn_args = args
args_flat, in_tree = tree_flatten((dyn_args, kwargs))
- _check_args(args_flat)
+ for arg in args_flat: _check_arg(arg)
flat_fun, out_tree = flatten_fun(f, in_tree)
out = xla.xla_call(flat_fun, *args_flat, device=device, backend=backend,
name=flat_fun.__name__)
@@ -370,7 +370,7 @@ def grad(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
first element is considered the output of the mathematical function to be
differentiated and the second element is auxiliary data. Default False.
holomorphic: Optional, bool. Indicates whether ``fun`` is promised to be
- holomorphic. Default False.
+ holomorphic. If True, inputs and outputs must be complex. Default False.
Returns:
A function with the same arguments as ``fun``, that evaluates the gradient
@@ -424,7 +424,7 @@ def value_and_grad(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
first element is considered the output of the mathematical function to be
differentiated and the second element is auxiliary data. Default False.
holomorphic: Optional, bool. Indicates whether ``fun`` is promised to be
- holomorphic. Default False.
+ holomorphic. If True, inputs and outputs must be complex. Default False.
Returns:
A function with the same arguments as ``fun`` that evaluates both ``fun``
@@ -454,17 +454,14 @@ def value_and_grad_f(*args, **kwargs):
f = lu.wrap_init(fun, kwargs)
f_partial, dyn_args = argnums_partial(f, argnums, args)
+ tree_map(partial(_check_input_dtype_grad, holomorphic), dyn_args)
if not has_aux:
ans, vjp_py = _vjp(f_partial, *dyn_args)
else:
ans, vjp_py, aux = _vjp(f_partial, *dyn_args, has_aux=True)
_check_scalar(ans)
dtype = dtypes.result_type(ans)
- if not (holomorphic or dtypes.issubdtype(dtype, onp.floating)):
- msg = ("Gradient only defined for real-output functions (with dtype that "
- "is a subdtype of np.floating), but got dtype {}. For holomorphic "
- "differentiation, pass holomorphic=True.")
- raise TypeError(msg.format(dtype))
+ tree_map(partial(_check_output_dtype_grad, holomorphic), ans)
g = vjp_py(onp.ones((), dtype=dtype))
g = g[0] if isinstance(argnums, int) else g
if not has_aux:
@@ -487,6 +484,38 @@ def _check_scalar(x):
else:
raise TypeError(msg("had abstract value {}".format(aval)))
+def _check_input_dtype_revderiv(name, holomorphic, x):
+ _check_arg(x)
+ aval = core.get_aval(x)
+ if holomorphic:
+ if not dtypes.issubdtype(aval.dtype, onp.complexfloating):
+ msg = (f"{name} with holomorphic=True requires inputs with complex dtype, "
+ f"but got {aval.dtype.name}.")
+ raise TypeError(msg)
+ elif not (dtypes.issubdtype(aval.dtype, onp.floating) or
+ dtypes.issubdtype(aval.dtype, onp.complexfloating)):
+ msg = (f"{name} requires real- or complex-valued inputs (input dtype that "
+ "is a sub-dtype of np.floating or np.complexfloating), "
+ f"but got {aval.dtype.name}. ")
+ raise TypeError(msg)
+_check_input_dtype_grad = partial(_check_input_dtype_revderiv, "grad")
+
+def _check_output_dtype_revderiv(name, holomorphic, x):
+ aval = core.get_aval(x)
+ if holomorphic:
+ if not dtypes.issubdtype(aval.dtype, onp.complexfloating):
+ msg = (f"{name} with holomorphic=True requires outputs with complex dtype, "
+ f"but got {aval.dtype.name}.")
+ raise TypeError(msg)
+ elif not dtypes.issubdtype(aval.dtype, onp.floating):
+ msg = (f"{name} requires real-valued outputs (output dtype that is "
+ f"a sub-dtype of np.floating), but got {aval.dtype.name}. "
+ "For holomorphic differentiation, pass holomorphic=True. "
+ "For differentiation of non-holomorphic functions involving complex "
+ "outputs, use jax.vjp directly.")
+ raise TypeError(msg)
+_check_output_dtype_grad = partial(_check_output_dtype_revderiv, "grad")
+
def jacfwd(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
holomorphic: bool = False) -> Callable:
@@ -521,21 +550,39 @@ def jacfwd(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
def jacfun(*args, **kwargs):
f = lu.wrap_init(fun, kwargs)
f_partial, dyn_args = argnums_partial(f, argnums, args)
- holomorphic or tree_map(_check_real_input_jacfwd, dyn_args)
+ tree_map(partial(_check_input_dtype_jacfwd, holomorphic), dyn_args)
pushfwd = partial(_jvp, f_partial, dyn_args)
y, jac = vmap(pushfwd, out_axes=(None, batching.last))(_std_basis(dyn_args))
+ tree_map(partial(_check_output_dtype_jacfwd, holomorphic), y)
example_args = dyn_args[0] if isinstance(argnums, int) else dyn_args
return tree_map(partial(_unravel_array_into_pytree, example_args, -1), jac)
return jacfun
-def _check_real_input_jacfwd(x):
+def _check_input_dtype_jacfwd(holomorphic, x):
+ _check_arg(x)
aval = core.get_aval(x)
- if not dtypes.issubdtype(aval.dtype, onp.floating):
- msg = ("jacfwd only defined for functions with input dtypes that are "
- "sub-dtypes of `np.floating` (i.e. that model real values), but "
- "got {}. For holomorphic differentiation, pass holomorphic=True.")
- raise TypeError(msg.format(aval.dtype.name))
+ if holomorphic:
+ if not (dtypes.issubdtype(aval.dtype, onp.complexfloating) and
+ not dtypes.issubdtype(aval.dtype, onp.floating)):
+ msg = ("jacfwd with holomorphic=True requires inputs with complex dtype, "
+ f"but got {aval.dtype.name}.")
+ raise TypeError(msg)
+ elif not dtypes.issubdtype(aval.dtype, onp.floating):
+ msg = ("jacfwd requires real-valued inputs (input dtype that is "
+ f"a sub-dtype of np.floating), but got {aval.dtype.name}. "
+ "For holomorphic differentiation, pass holomorphic=True. "
+ "For differentiation of non-holomorphic functions involving complex "
+ "inputs, use jax.jvp directly.")
+ raise TypeError(msg)
+
+def _check_output_dtype_jacfwd(holomorphic, x):
+ aval = core.get_aval(x)
+ if holomorphic:
+ if not dtypes.issubdtype(aval.dtype, onp.complexfloating):
+ msg = ("jacfwd with holomorphic=True requires outputs with complex dtype, "
+ f"but got {aval.dtype.name}.")
+ raise TypeError(msg)
def jacrev(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
@@ -571,8 +618,9 @@ def jacrev(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
def jacfun(*args, **kwargs):
f = lu.wrap_init(fun, kwargs)
f_partial, dyn_args = argnums_partial(f, argnums, args)
+ tree_map(partial(_check_input_dtype_jacrev, holomorphic), dyn_args)
y, pullback = _vjp(f_partial, *dyn_args)
- holomorphic or tree_map(_check_real_output_jacrev, y)
+ tree_map(partial(_check_output_dtype_jacrev, holomorphic), y)
jac = vmap(pullback)(_std_basis(y))
jac = jac[0] if isinstance(argnums, int) else jac
example_args = dyn_args[0] if isinstance(argnums, int) else dyn_args
@@ -582,13 +630,8 @@ def jacfun(*args, **kwargs):
return jacfun
jacobian = jacrev
-def _check_real_output_jacrev(x):
- aval = core.get_aval(x)
- if not dtypes.issubdtype(aval.dtype, onp.floating):
- msg = ("jacrev only defined for functions with output dtypes that are "
- "sub-dtypes of `np.floating` (i.e. that model real values), but "
- "got {}. For holomorphic differentiation, pass holomorphic=True.")
- raise TypeError(msg.format(aval.dtype.name))
+_check_input_dtype_jacrev = partial(_check_input_dtype_revderiv, "jacrev")
+_check_output_dtype_jacrev = partial(_check_output_dtype_revderiv, "jacrev")
def hessian(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
@@ -1070,7 +1113,7 @@ def f_pmapped(*args, **kwargs):
assert all(axis in (0, None) for axis in in_axes_flat), \
"pmap currently only supports mapping over the leading axis"
local_axis_size = _mapped_axis_size(in_tree, args, in_axes_flat, "pmap")
- _check_args(args)
+ for arg in args: _check_arg(arg)
flat_fun, out_tree = flatten_fun(f, in_tree)
out = pxla.xla_pmap(
flat_fun,
@@ -1114,7 +1157,7 @@ def f_pmapped(*args, **kwargs):
"soft_pmap currently only supports mapping over the leading axis"
mapped_invars = tuple(axis is not None for axis in in_axes_flat)
axis_size = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "soft_pmap")
- _check_args(args_flat)
+ for arg in args_flat: _check_arg(arg)
flat_fun, out_tree = flatten_fun(f, in_tree)
chunk_size, leftover = divmod(axis_size, pxla.unmapped_device_count(backend))
@@ -1489,7 +1532,7 @@ def _vjp(fun: lu.WrappedFun, *primals, **kwargs):
has_aux = kwargs.pop('has_aux', False)
assert not kwargs
primals_flat, in_tree = tree_flatten(primals)
- _check_args(primals_flat)
+ for arg in primals_flat: _check_arg(arg)
tree_map(_check_inexact_input_vjp, primals)
if not has_aux:
flat_fun, out_tree = flatten_fun_nokwargs(fun, in_tree)
@@ -1618,11 +1661,10 @@ def device_get(x):
return tree_map(_device_get, x)
-def _check_args(args):
- for arg in args:
- if not (isinstance(arg, core.Tracer) or _valid_jaxtype(arg)):
- raise TypeError("Argument '{}' of type {} is not a valid JAX type"
- .format(arg, type(arg)))
+def _check_arg(arg):
+ if not (isinstance(arg, core.Tracer) or _valid_jaxtype(arg)):
+ raise TypeError("Argument '{}' of type {} is not a valid JAX type"
+ .format(arg, type(arg)))
def _valid_jaxtype(arg):
try:
diff --git a/jax/experimental/host_callback.py b/jax/experimental/host_callback.py
--- a/jax/experimental/host_callback.py
+++ b/jax/experimental/host_callback.py
@@ -193,14 +193,14 @@ def id_tap(func: Callable, arg, *,
if func not in (_end_consumer, _unknown_testing_consumer):
api._check_callable(func)
flat_args, arg_treedef = pytree.flatten(arg)
- api._check_args(flat_args)
+ for arg in flat_args: api._check_arg(arg)
params = dict(kwargs) # we pass a copy of params to the primitive
# See definition of id_tap_p for what parameters it takes
params["func"] = func
params["arg_treedef"] = arg_treedef
if result is not None:
flat_results, result_treedef = pytree.flatten(result)
- api._check_args(flat_results)
+ for result in flat_results: api._check_arg(result)
all_args = flat_args + flat_results
params["nr_untapped"] = len(flat_results)
else:
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -801,8 +801,49 @@ def test_grad_of_int_errors(self):
dfn = grad(lambda x: x ** 2)
self.assertRaisesRegex(
TypeError,
- "Primal inputs to reverse-mode differentiation must be of float or "
- "complex type, got type int..", lambda: dfn(3))
+ (r"grad requires real- or complex-valued inputs \(input dtype that is a "
+ r"sub-dtype of np.floating or np.complexfloating\), but got int.*."),
+ lambda: dfn(3))
+
+ def test_grad_complex_result_errors(self):
+ dfn = grad(lambda x: x ** 2 + 1j)
+ self.assertRaisesRegex(
+ TypeError,
+ (r"grad requires real-valued outputs \(output dtype that is a "
+ r"sub-dtype of np.floating\), but got complex.*"),
+ lambda: dfn(3.))
+
+ def test_holomorphic_grad_of_float_errors(self):
+ dfn = grad(lambda x: x ** 2, holomorphic=True)
+ self.assertRaisesRegex(
+ TypeError,
+ (r"grad with holomorphic=True requires inputs with complex dtype, "
+ r"but got float.*"),
+ lambda: dfn(3.))
+
+ def test_holomorphic_jacrev_of_float_errors(self):
+ dfn = jacrev(lambda x: x ** 2, holomorphic=True)
+ self.assertRaisesRegex(
+ TypeError,
+ (r"jacrev with holomorphic=True requires inputs with complex dtype, "
+ r"but got float.*"),
+ lambda: dfn(3.))
+
+ def test_holomorphic_jacfwd_of_float_errors(self):
+ dfn = jacfwd(lambda x: x ** 2, holomorphic=True)
+ self.assertRaisesRegex(
+ TypeError,
+ (r"jacfwd with holomorphic=True requires inputs with complex dtype, "
+ r"but got float.*"),
+ lambda: dfn(3.))
+
+ def test_jacfwd_of_complex_errors(self):
+ dfn = jacfwd(lambda x: x ** 2)
+ self.assertRaisesRegex(
+ TypeError,
+ (r"jacfwd requires real-valued inputs \(input dtype that is a "
+ r"sub-dtype of np.floating\), but got complex.*"),
+ lambda: dfn(3. + 1j))
def test_xla_computation(self):
# these tests basically check the examples in the xla_computation docstring
| Error in backward-mode differentiation of ℝ → ℂ function
Hi,
First of all I'd like to thank you for the wonderful work you put forward with Jax.
We recently started using it in [netket](https://github.com/netket/netket), a package for solving quantum many-body problems with neural networks, and have been very pleasantly surprised by the performance and ease of use, so much that we rewrote most of our code to take advantage of it!
However, when dealing with wavefunctions we have to work a lot with complex functions. It is within this context that I noticed a silent error arises when trying to use reverse-mode over a ℝ → ℂ function (so, `grad` and `jacrev`).
Consider the following case:
```python
import jax
w=(jax.np.array([1.0]), jax.np.array([0.5]))
jax.grad(lambda w: w[0].sum() + 1j* w[1].sum(), holomorphic=True)(w)
(DeviceArray([1.], dtype=float32), DeviceArray([0.], dtype=float32))
jax.jacrev(lambda w: w[0].sum() + 1j* w[1].sum(), holomorphic=True)(w)
(DeviceArray([1.], dtype=float32), DeviceArray([0.], dtype=float32))
```
- The gradient is wrong; it should be complex-valued. I have the feeling you are probably computing the right gradient but casting it at the end to the original type, resulting in this error.
As a temporary fix, I figured out I can just promote types to be complex and compute the gradient by saying that the function is holomorphic, and re-scaling the complex part, though it would be nice if this could be fixed.
```python
wc=(jax.np.array([1.0], dtype=jax.np.complex64), jax.np.array([0.5], dtype=jax.np.complex64))
jax.jacrev(lambda w: w[0].sum() + 1j* w[1].sum(), holomorphic=True)(wc)
(DeviceArray([1.-0.j], dtype=complex64), DeviceArray([0.+1.j], dtype=complex64))
```
Slightly related, I understand why you ask the user to mark the function to be holomorphic once you hit a complex result, but in this case it's a bit weird: ℝ → ℂ functions are always holomorphic. I don't know if anything can be done about it.
cc @gcarleo
| Yes, thank you, we are really enjoying the clarity of Jax's APIs! It makes working with complex-valued quantum wave functions much easier and requires a lot less boilerplate than with other approaches.
Something related to what @PhilipVinc was saying (in the same spirit of mixed real complex typing). There is this slight issue with vjp for complex vectors and real functions and vice versa:
```python
import jax
import jax.numpy as jnp
f = jnp.sin
x = jnp.ones(3)
v = jnp.ones(3)
# R->R Works
val, f_jvp = jax.vjp(f, x)
print(f_jvp(v))
# X Real and V complex Works only if casting x to complex
x = jnp.ones(3, dtype=jnp.complex64)
v = jnp.ones(3) + 1.0j * jnp.ones(3)
val, f_jvp = jax.vjp(f, x)
print(f_jvp(v))
# X Complex and V real Works only if casting v to complex
x = jnp.ones(3) + 1.0j * jnp.ones(3)
v = jnp.ones(3, dtype=jnp.complex64)
val, f_jvp = jax.vjp(f, x)
print(f_jvp(v))
```
The second case can be implemented calling two jvp with real x, instead of casting x to complex (most likely that is going to be also more efficient than casting). I was wondering if you have plans to implement these mixed-typing cases directly into the AD engine or if we should handle them separately in our code.
Thank you!
Broadly speaking it's perfectly reasonable to implement the complex AD semantics you need in your own code: everything is pretty easy to construct out of unambiguous R -> R derivatives. JAX [makes particular choices in `grad`](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html#Complex-numbers-and-differentiation) that are different from some other libraries: we treat a R -> C or C -> C function as if it were a function onto the reals (i.e., we throw out the complex component). In the special case where f is holomorphic C -> C, that definition lines up with the complex derivative; in other cases, it doesn't.
I haven't paged this issue in yet, but I wonder if the comment on #610 might be useful.
Yes, that looks very relevant and perhaps worth making an FAQ!
Thank you @jekbradbury and @mattjj for the quick feedback!
Indeed I agree that there is absolutely no ambiguity in the vjp APIs, and the conventions taken in JAX for complex numbers are clear.
I was just remarking that it might be quite convenient for users if vjp (and jvp) had a mixed-type usage consistent with matmul.
For example, a matmul with a complex matrix and a real vector just works in numpy:
```python
import jax
import jax.numpy as jnp
# Real matrix
J = jnp.ones((3, 3))
# Complex vector
v = (1.0 + 1.0j) * jnp.ones(3)
# Mixed typing works without casting
out = jnp.matmul(J, v)
```
however an equivalent vjp wouldn't work unless one manually performs a tree flattening and unflattening
```python
# Function of real parameters
x = jnp.ones(3)
f_jvp = jax.vjp(jnp.sin, x)[1]
## Jacobian-vector product doesn't work without casting
# out = f_jvp(v)
## This **would** work if addition and scalar multiplication were overloaded for PyTrees
# out = f_jvp(v.real) + 1.0j * f_jvp(v.imag)
# This works (but it might be non-trivial for some users)
from jax.tree_util import tree_map, tree_flatten, tree_unflatten
res, td = tree_flatten(f_jvp(v.real))
ims, td = tree_flatten(f_jvp(v.imag))
out = tree_unflatten(td, [re + 1.0j * im for re, im in zip(res, ims)])
```
In the example above, casting x to complex might also be an option; however, for non-linear functions I suspect that computing a jvp for complex parameters is more than twice as expensive as calling f_jvp on real parameters?
Hmm, well in your example you have a complex vector input, and that works with `jacfwd` / `jacrev` as well because you're clearly modeling a C->C function:
```python
import jax
import jax.numpy as jnp
from jax import jacfwd, jacrev
# Real matrix
J = jnp.ones((3, 3))
# Complex vector
v = (1.0 + 1.0j) * jnp.ones(3)
def f(v):
return jnp.matmul(J, v)
print(jacrev(f, holomorphic=True)(v)) # notice complex v input
```
```
[[1.+0.j 1.+0.j 1.+0.j]
[1.+0.j 1.+0.j 1.+0.j]
[1.+0.j 1.+0.j 1.+0.j]]
```
Calling it at `jnp.ones_like(v)` works the same way, but not `jnp.ones(v.shape)`.
I think the fundamental issue here is JAX can't know that you think of `f` as a `C->C` function unless you pass in a complex input, and therefore can't know to construct an input basis with complex dtype, because Python is polymorphic and we only ascribe an input type to `f` right when we execute it using the inputs you provide.
We could:
1. raise an error if you pass `holomorphic=True` but don't provide a complex dtype input, or
2. automatically cast inputs to complex dtype when `holomorphic=True`.
I think you might be requesting the latter.
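For concreteness, a minimal sketch of what option 1 could look like; it is essentially the input check the patch at the top of this entry adds:

```python
import numpy as onp
from jax import core, dtypes

def _check_input_dtype_grad(holomorphic, x):
    # Fail loudly on a float input with holomorphic=True, or on a non-inexact
    # input, instead of silently dropping the imaginary part later.
    aval = core.get_aval(x)
    if holomorphic:
        if not dtypes.issubdtype(aval.dtype, onp.complexfloating):
            raise TypeError("grad with holomorphic=True requires inputs with "
                            f"complex dtype, but got {aval.dtype.name}.")
    elif not (dtypes.issubdtype(aval.dtype, onp.floating) or
              dtypes.issubdtype(aval.dtype, onp.complexfloating)):
        raise TypeError("grad requires real- or complex-valued inputs, "
                        f"but got {aval.dtype.name}.")
```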
WDYT? | 2020-05-19T17:01:15 |
google/jax | 3,150 | google__jax-3150 | [
"3123"
] | 73b76e9976ba94a9e28759faca602a4f9f295578 | diff --git a/jax/lax/lax_parallel.py b/jax/lax/lax_parallel.py
--- a/jax/lax/lax_parallel.py
+++ b/jax/lax/lax_parallel.py
@@ -44,6 +44,8 @@ def psum(x, axis_name, *, axis_index_groups=None):
If ``x`` is a pytree then the result is equivalent to mapping this function to
each leaf in the tree.
+ Inputs of boolean dtype are converted to integers before the reduction.
+
Args:
x: array(s) with a mapped axis named ``axis_name``.
axis_name: hashable Python object used to name a pmapped axis (see the
@@ -68,10 +70,13 @@ def psum(x, axis_name, *, axis_index_groups=None):
>>> print(y)
[ 0. 0.16666667 0.33333334 0.5 ]
"""
- leaves, treedef = tree_util.tree_flatten(x)
_validate_axis_index_groups(axis_index_groups)
- return treedef.unflatten(
- psum_p.bind(*leaves, axis_name=axis_name, axis_index_groups=axis_index_groups))
+ leaves, treedef = tree_util.tree_flatten(x)
+ leaves = [lax.convert_element_type(l, onp.int32)
+ if dtypes.dtype(l) == onp.bool_ else l for l in leaves]
+ out_flat = psum_p.bind(*leaves, axis_name=axis_name,
+ axis_index_groups=axis_index_groups)
+ return tree_util.tree_unflatten(treedef, out_flat)
def pmean(x, axis_name, *, axis_index_groups=None):
"""Compute an all-reduce mean on ``x`` over the pmapped axis ``axis_name``.
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -1184,6 +1184,26 @@ def matrix_vector(x, y, parallel=True):
self.assertAllClose(result1, result3, check_dtypes=False, atol=1e-3, rtol=1e-3)
self.assertAllClose(result1, result4, check_dtypes=False, atol=1e-3, rtol=1e-3)
+ def testPsumOnBooleanDtype(self):
+ # https://github.com/google/jax/issues/3123
+ n = xla_bridge.device_count()
+ if n > 1:
+ x = jnp.array([True, False])
+
+ out = pmap(lambda x: jax.lax.psum(x, 'i'), 'i')(x)
+ self.assertEqual(list(out), [1, 1])
+
+ out = pmap(lambda x: jax.lax.pmean(x, 'i'), 'i')(x)
+ self.assertEqual(list(out), [1/2, 1/2])
+ else:
+ x = jnp.array([True])
+
+ out = pmap(lambda x: jax.lax.psum(x, 'i'), 'i')(x)
+ self.assertEqual(list(out), [1])
+
+ out = pmap(lambda x: jax.lax.pmean(x, 'i'), 'i')(x)
+ self.assertEqual(list(out), [1])
+
class PmapWithDevicesTest(jtu.JaxTestCase):
| This is a bug in JAX's shape-checking rules; please report it!
I'm getting `This is a bug in JAX's shape-checking rules; please report it!`.
The function that creates this error is the following:
```
@functools.partial(jax.pmap, axis_name="batch")
def accuracy_and_loss_fn2(params, images, labels):
logits = resnet_apply(params, images)
loss = cross_entropy_loss(logits=logits, labels=labels)
accu = jax.lax.pmean(jnp.argmax(logits, axis=1) == labels, axis_name="batch")
return accu, loss
```
When I replace the `jax.lax.pmean` with `jnp.mean`, it does not cause the error. This is when running in a multi-host setup; I didn't try single-host yet.
Full error:
```
Traceback (most recent call last):
File "[REDACTED]/py/jax/interpreters/xla.py", line 301, in primitive_computation
return c.build()
RuntimeError: Invalid argument: Expected element type in shape to be arithmetic type for operation add; got PRED.: @ 0x55b73da20345 xla::XlaBuilder::BinaryOp()
@ 0x55b73da19dc9 xla::Add()
@ 0x7fc98c23d5f4 pybind11::cpp_function::initialize<>()::{lambda()#1}::__invoke()
@ 0x7fc98c1e9963 pybind11::cpp_function::dispatcher()
@ 0x55b745431fe7 PyCFunction_Call
@ 0x55b7454b3adf _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454ad938 PyEval_EvalCodeEx
@ 0x55b7454171a6 function_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7450f983e partial_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7454b3a04 _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454ad938 PyEval_EvalCodeEx
@ 0x55b7454171a6 function_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7450fa680 bounded_lru_cache_wrapper
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7454b3a04 _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454b7a24 fast_function
@ 0x55b7454b66bc call_function
@ 0x55b7454b375f _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454ad938 PyEval_EvalCodeEx
@ 0x55b7454171a6 function_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7450f983e partial_call
@ 0x55b7453e678f _PyObject_FastCallDict
@ 0x55b7454b669d call_function
@ 0x55b7454b375f _PyEval_EvalFrameDefault
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "[REDACTED]/py/absl/app.py", line 464, in run
_run_main(main, args)
File "[REDACTED]/py/absl/app.py", line 393, in _run_main
sys.exit(main(argv))
File "[REDACTED]/imagenet/train.py", line 255, in main
batch["image"], batch["label"])
File "[REDACTED]/py/jax/api.py", line 1051, in f_pmapped
mapped_invars=tuple(axis is not None for axis in in_axes_flat))
File "[REDACTED]/py/jax/core.py", line 1021, in _call_bind
outs = primitive.impl(f, *args, **params)
File "[REDACTED]/py/jax/interpreters/pxla.py", line 604, in xla_pmap_impl
*abstract_args)
File "[REDACTED]/py/jax/linear_util.py", line 221, in memoized_fun
ans = call(fun, *args)
File "[REDACTED]/py/jax/interpreters/pxla.py", line 702, in parallel_callable
extend_name_stack(wrap_name(name, 'pmap')), *xla_args)
File "[REDACTED]/py/jax/interpreters/xla.py", line 406, in jaxpr_subcomp
**new_params)
File "[REDACTED]/py/jax/lax/lax_parallel.py", line 311, in _psum_translation_rule
return _notuple_psum_translation_rule(c, *args, replica_groups=replica_groups)
File "[REDACTED]/py/jax/lax/lax_parallel.py", line 357, in _notuple_psum_translation_rule
return xops.Tuple(c, list(map(_translate, args)))
File "[REDACTED]/py/jax/lax/lax_parallel.py", line 356, in _translate
return psum(val)
File "[REDACTED]/py/jax/lax/lax_parallel.py", line 304, in _allreduce_translation_rule
computation = xla.primitive_subcomputation(prim, scalar, scalar)
File "[REDACTED]/py/jax/interpreters/xla.py", line 309, in primitive_subcomputation
return primitive_computation(prim, AxisEnv(1), None, False, *avals, **params)
File "[REDACTED]/py/jax/interpreters/xla.py", line 306, in primitive_computation
raise RuntimeError(msg) from e
RuntimeError: Invalid argument: Expected element type in shape to be arithmetic type for operation add; got PRED.: @ 0x55b73da20345 xla::XlaBuilder::BinaryOp()
@ 0x55b73da19dc9 xla::Add()
@ 0x7fc98c23d5f4 pybind11::cpp_function::initialize<>()::{lambda()#1}::__invoke()
@ 0x7fc98c1e9963 pybind11::cpp_function::dispatcher()
@ 0x55b745431fe7 PyCFunction_Call
@ 0x55b7454b3adf _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454ad938 PyEval_EvalCodeEx
@ 0x55b7454171a6 function_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7450f983e partial_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7454b3a04 _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454ad938 PyEval_EvalCodeEx
@ 0x55b7454171a6 function_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7450fa680 bounded_lru_cache_wrapper
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7454b3a04 _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454b7a24 fast_function
@ 0x55b7454b66bc call_function
@ 0x55b7454b375f _PyEval_EvalFrameDefault
@ 0x55b7454b7160 _PyEval_EvalCodeWithName
@ 0x55b7454ad938 PyEval_EvalCodeEx
@ 0x55b7454171a6 function_call
@ 0x55b7453e64ba PyObject_Call
@ 0x55b7450f983e partial_call
@ 0x55b7453e678f _PyObject_FastCallDict
@ 0x55b7454b669d call_function
@ 0x55b7454b375f _PyEval_EvalFrameDefault
This is a bug in JAX's shape-checking rules; please report it!
https://github.com/google/jax/issues
```
| Here is `cross_entropy_loss`, but `apply_resnet` is too much to post here. It's a ResNet variant which does work in other calls, including multi-host.
```
def cross_entropy_loss(*, logits, labels):
  logp = jax.nn.log_softmax(logits)
  loglik = jnp.take_along_axis(logp, labels[:, None], axis=1)
  return -jnp.mean(loglik)
```
For the offending line, I have also tried `jax.lax.all_gather` as well as `jax.lax.psum` and they result in the same, in case that helps.
Okay cool, presumably if we replace `apply_resnet(...)` with just a constant array of some shape we'll see the same shape error.
It does help that `psum` raises the same error!
One last question, if it's easy to answer: what's the shape of `jnp.argmax(logits, axis=1) == labels` here? We might be able to repro with just that being a constant array, so we can clear out everything except the `psum`/`pmean`.
Sorry, original message was wrong, I was confused. Shape of that is `(B,)` where `B` is the per-device batch-size, for example `(2,)` in a recent run I did.
~It *should* be `(B, C)` where `B` is batch-size and `C = 1000` in my case, but I will add a logging and run again just to be sure!~
Argh, mis-clicked.
Perfect, ty!
This error looks like it's saying we're creating an XLA add instruction on boolean inputs (which isn't supported). (But it's evidently not going through the same code path as `jax.lax.add(jax.numpy.array([False]), jax.numpy.array([True]))`, which throws a better error.)
@skye had a really smart guess: we lower `jax.core.unit` to `Pred[]`, so maybe there's a partial-eval-inserted unit cropping up here. (It'd be nice to revise things so we lower units to nothing at all in HLO... one of the many things on the todo list.)
I haven't checked yet whether this is pod-specific; on one hand it doesn't sound pod-specific, on the other hand I'm sure we have test coverage for this kind of use case...
@jekbradbury is right, in this case a boolean array is being passed to pmean. I think we should improve the error message, but still make it an error and require an explicit cast in this situation. However, there is an argument to be made for providing some kind of implicit casting functionality (either in psum or even a separate API endpoint?), since that's what numpy does, and even though psum is in lax there isn't a numpy equivalent.
Any thoughts? Otherwise I'll just improve the error message for now.
Oh, hah, I totally didn't get what @jekbradbury 's point was. Sorry! I get it now.
I think we should handle booleans (by promoting to a non-boolean type, like we do for analogous situations in lax_numpy.py).
Ok! I actually like that better, it just felt weird to have some lax primitives do casting and others not. But this way is more useful!
Yeah, it is a bit weird, but I think most lax primitives aren't user-facing. The bool->int promotion in a reduce-sum seems pretty harmless, and this is a nice canonical use case too.
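For reference, a quick sketch of the promotion being discussed, using the analogous `jax.numpy` reduction (plain `jnp.sum`, not the actual `psum` change):
```python
import jax.numpy as jnp

correct = jnp.array([True, False, True])
jnp.sum(correct)  # booleans are promoted before the reduction, giving an integer result of 2
```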
(Also, as per the title of this issue, actually we should do dtype checking in the abstract eval / shape rule, but if we do any it's not currently matching XLA's constraints.)
Wait are you sure the dtype isn't a red herring here? As I mentioned, I get the same error if I do `jax.lax.all_gather` instead of `psum`, which I should reasonably expect to work, no?
Regarding casting, my understanding is "jax is numpy" and so I would expect implicit cast of bool to float, which I use all the time in numpy.
`jax.numpy` approximates NumPy semantics. `jax.lax` in general does not. Although there's certainly a case to be made that the user-facing parallel operations should promote like NumPy, even if other `lax` operations do not.
@lucasb-eyer I'm surprised you're getting exactly the same error with an all-gather, since the error specifically talks about add: `Expected element type in shape to be arithmetic type for operation add; got PRED.` I haven't had a chance to dig into this myself with your code yet though.
Either way, I was able to repro this error with a small psum(bool) unit test, so there's definitely an issue there. I was gonna start by fixing the unit test, then rerunning your code with the fix and see what happens. It's also possible the unit test will expose the more general issue if there is one.
[`all_gather` is just a psum](https://github.com/google/jax/blob/77e31323f79d01b9e97b52546571b6c98dc4df2f/jax/lax/lax_parallel.py#L460) so it makes sense it'd be the same error. | 2020-05-19T19:55:53 |
google/jax | 3,152 | google__jax-3152 | [
"3120"
] | bc47a32c69411965017851ecd3e02444b73f4a89 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -1131,7 +1131,7 @@ def f_pmapped(*args, **kwargs):
f_pmapped.__name__ = namestr(f_pmapped.__name__, axis_name)
return f_pmapped
-class _TempAxisName(object):
+class _TempAxisName:
def __init__(self, obj):
self.obj = obj
def __repr__(self):
@@ -1139,7 +1139,7 @@ def __repr__(self):
def __hash__(self):
return hash(self.obj)
def __eq__(self, other):
- return self.obj is other.obj
+ return type(other) is _TempAxisName and self.obj is other.obj
def soft_pmap(fun: Callable, axis_name: Optional[AxisName] = None, *,
diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -435,7 +435,7 @@ def check_backend_params(params, outer_backend):
return {k: params[k] for k in params if k != 'backend'}
-class AxisEnv(object):
+class AxisEnv:
def __init__(self, nreps, names=(), sizes=(), devices=None):
assert isinstance(names, tuple)
assert isinstance(sizes, tuple)
@@ -448,7 +448,10 @@ def extend_axis_env(env, name, size):
return AxisEnv(env.nreps, env.names + (name,), env.sizes + (size,), env.devices)
def axis_read(axis_env, axis_name):
- return max(i for i, name in enumerate(axis_env.names) if name == axis_name)
+ try:
+ return max(i for i, name in enumerate(axis_env.names) if name == axis_name)
+ except ValueError:
+ raise NameError("unbound axis name: {}".format(axis_name))
def axis_groups(axis_env, name):
if isinstance(name, (list, tuple)):
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -1184,6 +1184,15 @@ def matrix_vector(x, y, parallel=True):
self.assertAllClose(result1, result3, check_dtypes=False, atol=1e-3, rtol=1e-3)
self.assertAllClose(result1, result4, check_dtypes=False, atol=1e-3, rtol=1e-3)
+ def testPmapAxisNameError(self):
+ # https://github.com/google/jax/issues/3120
+ a = np.arange(4)[np.newaxis,:]
+ def test(x):
+ return jax.lax.psum(x, axis_name='batch')
+
+ with self.assertRaisesRegex(NameError, "unbound axis name: batch"):
+ jax.pmap(test)(a)
+
def testPsumOnBooleanDtype(self):
# https://github.com/google/jax/issues/3123
n = xla_bridge.device_count()
| Request for better error message when pmap and psum don't match on axis_name.
```
import jax
import numpy as np

a = np.arange(4)[np.newaxis, :]
print(a.shape)

def test(x):
    return jax.lax.psum(x, axis_name='batch')

jax.pmap(test)(a)
```
Gives an error message of `AttributeError: 'str' object has no attribute 'obj'`.
It took me forever to figure out that the issue was that the `pmap` needed `axis_name='batch'`. It would be helpful if this gave a better error message.
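For reference, the call that works (what I eventually figured out) is:
```python
result = jax.pmap(test, axis_name='batch')(a)  # pmap now binds the 'batch' axis that psum refers to
```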
| That might win an overall bad-JAX-error-message award! (Not a competition...)
But yeah, this is just a bug. | 2020-05-19T21:46:39 |
google/jax | 3,155 | google__jax-3155 | [
"2898"
] | 850f1afd959917aa69337f253c435c84b5e53ebc | diff --git a/jax/lib/xla_bridge.py b/jax/lib/xla_bridge.py
--- a/jax/lib/xla_bridge.py
+++ b/jax/lib/xla_bridge.py
@@ -162,7 +162,7 @@ def get_device_backend(device=None):
return get_backend(platform)
-def device_count(backend=None):
+def device_count(backend: str = None):
"""Returns the total number of devices.
On most platforms, this is the same as ``local_device_count()``. However, on
@@ -179,19 +179,22 @@ def device_count(backend=None):
return int(get_backend(backend).device_count())
-def local_device_count(backend=None):
+def local_device_count(backend: str =None):
"""Returns the number of devices on this host."""
return int(get_backend(backend).local_device_count())
-def devices(backend=None):
- """Returns a list of all devices.
+def devices(backend: str = None):
+ """Returns a list of all devices for a given backend.
- Each device is represented by a subclass of Device (e.g. CpuDevice,
- GpuDevice). The length of the returned list is equal to
- ``device_count()``. Local devices can be identified by comparing
+ Each device is represented by a subclass of ``Device`` (e.g. ``CpuDevice``,
+ ``GpuDevice``). The length of the returned list is equal to
+ ``device_count(backend)``. Local devices can be identified by comparing
``Device.host_id`` to ``host_id()``.
+ If ``backend`` is ``None``, returns all the devices from the default backend.
+ The default backend is generally 'gpu' or 'tpu' if available, otherwise 'cpu'.
+
Args:
backend: This is an experimental feature and the API is likely to change.
Optional, a string representing the xla backend. 'cpu', 'gpu', or 'tpu'.
@@ -202,14 +205,28 @@ def devices(backend=None):
return get_backend(backend).devices()
-def local_devices(host_id=None, backend=None):
- """Returns a list of devices local to a given host (this host by default)."""
+def local_devices(host_id: int = None, backend: str = None):
+ """Like ``devices``, but only returns devices local to a given host.
+
+ If ``host_id`` is ``None``, returns devices local to this host.
+
+ Args:
+ host_id: the integer ID of the host. Host IDs can be retrieved via
+ ``host_ids()``.
+ backend: This is an experimental feature and the API is likely to change.
+ Optional, a string representing the xla backend. 'cpu', 'gpu', or 'tpu'.
+
+ Returns:
+ List of Device subclasses.
+ """
if host_id is None:
host_id = get_backend(backend).host_id()
+ if host_id not in host_ids():
+ raise ValueError(f"Unknown host_id {host_id}")
return [d for d in devices(backend) if d.host_id == host_id]
-def host_id(backend=None):
+def host_id(backend: str = None):
"""Returns the integer host ID of this host.
On most platforms, this will always be 0. This will vary on multi-host
@@ -225,12 +242,12 @@ def host_id(backend=None):
return get_backend(backend).host_id()
-def host_ids(backend=None):
+def host_ids(backend: str = None):
"""Returns a sorted list of all host IDs."""
return sorted(list(set(d.host_id for d in devices(backend))))
-def host_count(backend=None):
+def host_count(backend: str = None):
"""Returns the number of hosts."""
return len(host_ids(backend))
| diff --git a/tests/xla_bridge_test.py b/tests/xla_bridge_test.py
--- a/tests/xla_bridge_test.py
+++ b/tests/xla_bridge_test.py
@@ -35,7 +35,7 @@ def test_set_device_assignment_with_partition(self):
"0 2 \nComputation 1: 1 3 \n")
self.assertEqual(compile_options.device_assignment.__repr__(),
expected_device_assignment)
-
+
def test_parameter_replication_default(self):
c = xb.make_computation_builder("test")
param = xb.parameter(c, 0, xc.Shape.array_shape(xc.PrimitiveType.F32, ()))
@@ -48,6 +48,13 @@ def test_parameter_replication(self):
built_c = c.Build()
assert "parameter_replication={false}" in built_c.as_hlo_text()
+ def test_local_devices(self):
+ self.assertNotEmpty(xb.local_devices())
+ with self.assertRaisesRegex(ValueError, "Unknown host_id 100"):
+ xb.local_devices(100)
+ with self.assertRaisesRegex(RuntimeError, "Unknown backend foo"):
+ xb.local_devices(backend="foo")
+
if __name__ == "__main__":
absltest.main()
| jax.devices returns only the devices from the default backend
The documentation of jax.devices says that it returns all devices. This is not true in multi-backend setups: it returns only the devices from the default backend. This was confusing to me initially, and I also encountered this confusion in issue #2785
The simplest change would be to the documentation: if no `backend` is specified then only the devices on the default backend are returned (along with an explanation of what is the default backend).
A better change may be though to say that it returns all devices, in the order of backend priority, such that `devices()[0]` is the same as now. This may break some code though.
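To illustrate the current behavior (a sketch; the exact device lists depend on the machine):
```python
import jax

jax.devices()       # only the default backend's devices, e.g. just the GPUs on a GPU machine
jax.devices('cpu')  # the CPU backend's devices have to be requested explicitly
</antml``` wait
```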
| I searched in Google3 and could not find any code that uses the backend parameter for `jax.devices` other than tests. | 2020-05-19T22:17:18 |
google/jax | 3,156 | google__jax-3156 | [
"3093",
"2823"
] | 3141ff832b0ab04f94ad6aa7ecd00e9ae2e143cd | diff --git a/jax/lax/lax_control_flow.py b/jax/lax/lax_control_flow.py
--- a/jax/lax/lax_control_flow.py
+++ b/jax/lax/lax_control_flow.py
@@ -33,7 +33,7 @@
from jax import util
from jax.lax import lax
from jax import linear_util as lu
-from jax.abstract_arrays import ShapedArray, raise_to_shaped
+from jax.abstract_arrays import ConcreteArray, ShapedArray, raise_to_shaped
from jax.api_util import flatten_fun_nokwargs, apply_flat_fun_nokwargs
from jax.core import get_aval
from jax.interpreters import ad
@@ -267,10 +267,15 @@ def while_loop(cond_fun, body_fun, init_val):
The output from the final iteration of body_fun, of type ``a``.
"""
if jax.api._jit_is_disabled():
- val = init_val
- while cond_fun(val):
- val = body_fun(val)
- return val
+ try:
+ val = init_val
+ while cond_fun(val):
+ val = body_fun(val)
+ return val
+ except core.ConcretizationTypeError:
+ # Can't run this while_loop in Python (e.g. because there's a vmap
+ # transformation on it), so we fall back to the primitive version.
+ pass
init_vals, in_tree = tree_flatten((init_val,))
init_avals = tuple(_map(_abstractify, init_vals))
@@ -593,7 +598,7 @@ def _cond(pred, true_fun: Callable, false_fun: Callable, operand):
msg = ("Pred type must be either boolean or number, got {}.")
raise TypeError(msg.format(pred_dtype))
- if jax.api._jit_is_disabled():
+ if jax.api._jit_is_disabled() and isinstance(core.get_aval(pred), ConcreteArray):
if pred:
return true_fun(operand)
else:
| diff --git a/tests/lax_control_flow_test.py b/tests/lax_control_flow_test.py
--- a/tests/lax_control_flow_test.py
+++ b/tests/lax_control_flow_test.py
@@ -1966,6 +1966,22 @@ def cumsum(x, reverse):
with api.disable_jit():
self.assertAllClose(np.cumsum(x[::-1])[::-1], cumsum(x, True), check_dtypes=False)
+ def test_disable_jit_cond_with_vmap(self):
+ # https://github.com/google/jax/issues/3093
+ def fn(t):
+ return lax.cond(t > 0, 0, lambda x: 0, 0, lambda x: 1)
+ fn = api.vmap(fn)
+
+ with api.disable_jit():
+ outputs = fn(jnp.array([1])) # doesn't crash
+
+ def test_disable_jit_while_loop_with_vmap(self):
+ # https://github.com/google/jax/issues/2823
+ def trivial_while(y):
+ return lax.while_loop(lambda x: x < 10.0, lambda x: x + 1.0, y)
+ with api.disable_jit():
+ api.vmap(trivial_while)(jnp.array([3.0,4.0])) # doesn't crash
+
if __name__ == '__main__':
absltest.main()
diff --git a/tests/ode_test.py b/tests/ode_test.py
--- a/tests/ode_test.py
+++ b/tests/ode_test.py
@@ -155,6 +155,14 @@ def g(x):
rtol = {np.float64: 2e-15}
self.assertAllClose(ans, expected, check_dtypes=False, atol=atol, rtol=rtol)
+ def test_disable_jit_odeint_with_vmap(self):
+ # https://github.com/google/jax/issues/2598
+ with jax.disable_jit():
+ t = jax.numpy.array([0.0, 1.0])
+ x0_eval = jax.numpy.zeros((5, 2))
+ f = lambda x0: odeint(lambda x, _t: x, x0, t)
+ jax.vmap(f)(x0_eval) # doesn't crash
+
if __name__ == '__main__':
absltest.main()
| Bug with jax.disable_jit (or lax.cond, or vmap?)
Attempt at minimal reproducing example:
```python
import jax
import jax.numpy as jnp

def fn(t):
    return jax.lax.cond(t > 0, 0, lambda x: 0, 0, lambda x: 1)
fn = jax.vmap(fn)

with jax.disable_jit():
    outputs = fn(jnp.array([1]))
```
fails with
```
Encountered value: Traced<ShapedArray(bool[])>with<BatchTrace(level=11/0)>
with val = DeviceArray([ True], dtype=bool)
batch_dim = 0
```
It does not fail without the `disable_jit()` and it does not fail without the `vmap` (input a scalar in that case).
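For contrast, a quick sketch of the scalar case mentioned above, which goes through fine under `disable_jit`:
```python
import jax
import jax.numpy as jnp

fn_scalar = lambda t: jax.lax.cond(t > 0, 0, lambda x: 0, 0, lambda x: 1)
with jax.disable_jit():
    fn_scalar(jnp.array(1))  # pred is concrete here, so the Python fallback works
```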
disable_jit extensions to control flow constructs break with vmap
The extension of the `jax.disable_jit()` context to the control flow elements (e.g. `while_loop`, `cond`) isn't correct in the presence of batch tracers for `vmap`. `while_loop` and `cond` are rewritten to use Python control flow under this context, but conditional branching can't be evaluated on the batch tracers present when functions using these control flow constructs are inside a `vmap`.
e.g.
```python
import jax
import jax.numpy as jnp
from jax import lax

def trivial_while(y):
    return lax.while_loop(lambda x: x < 10.0, lambda x: x + 1.0, y)

# works:
jax.vmap(trivial_while)(jnp.array([3.0, 4.0]))

# throws a ConcretizationTypeError:
with jax.disable_jit():
    jax.vmap(trivial_while)(jnp.array([3.0, 4.0]))
```
likewise:
```python
def trivial_cond(x):
    return lax.cond(x < 1.0, x, lambda x: x, x, lambda x: 2*x)

# works
jax.vmap(trivial_cond)(jnp.array([0.5, 2.0]))

# throws a ConcretizationTypeError:
with jax.disable_jit():
    jax.vmap(trivial_cond)(jnp.array([0.5, 2.0]))
```
Perhaps the disabled_jit Python implementations for these control flow constructs could peek at the tracer value's `val` field, which I suspect will be populated in these use cases?
| Thanks for raising this! This is a known problem, though I forget if there's already an issue for it.
The problem is that:
1. when in a `disable_jit` context, the control flow primitives like `cond` [just fall back to a Python implementation](https://github.com/google/jax/blob/7c687b245b34397c13563a714ad9bf0290b419e3/jax/lax/lax_control_flow.py#L530-L534) motivated by better debugging, yet
2. this Python implementation isn't `vmap`-traceable (because in this case `vmap` abstracts the boolean `t > 0` to the Shaped level).
Probably the best option is to run the Python implementation only when `bool(pred)` can be evaluated at trace time (i.e. when that call doesn't raise an abstract value error), and call into the non-Python version when it fails.
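Concretely, that fallback might look something like this for `while_loop` (a sketch, not necessarily the exact change):
```python
if jax.api._jit_is_disabled():
  try:
    val = init_val
    while cond_fun(val):   # raises ConcretizationTypeError if the predicate is abstract (e.g. under vmap)
      val = body_fun(val)
    return val
  except jax.core.ConcretizationTypeError:
    pass  # fall back to the while_loop primitive below
```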
#2598 is possibly related? In particular commenting out [these lines](https://github.com/google/jax/blob/2c4ced2143375ed8a4f8e467c501d9c794d385cb/jax/lax/lax_control_flow.py#L219-L223) might fix it? | 2020-05-20T00:17:51 |
google/jax | 3,160 | google__jax-3160 | [
"3133"
] | 42b425d8e556583a2afce6388aa101932805aad1 | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -371,11 +371,18 @@ class Tracer(object):
__slots__ = ['_trace', '__weakref__']
def __array__(self, *args, **kw):
- raise Exception("Tracer can't be used with raw numpy functions. "
- "You might have\n"
- " import numpy as np\n"
- "instead of\n"
- " import jax.numpy as jnp")
+ msg = ("The numpy.ndarray conversion method __array__() was called on "
+ f"the JAX Tracer object {self}.\n\n"
+ "This error can occur when a JAX Tracer object is passed to a raw "
+ "numpy function, or a method on a numpy.ndarray object. You might "
+ "want to check that you are using `jnp` together with "
+ "`import jax.numpy as jnp` rather than using `np` via "
+ "`import numpy as np`. If this error arises on a line that involves "
+ "array indexing, like `x[idx]`, it may be that the array being "
+ "indexed `x` is a raw numpy.ndarray while the indices `idx` are a "
+ "JAX Tracer instance; in that case, you can instead write "
+ "`jax.device_put(x)[idx]`.")
+ raise Exception(msg)
def __init__(self, trace):
self._trace = trace
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -177,10 +177,8 @@ def test_unwrapped_numpy(self):
def f(x):
return np.exp(x)
- jtu.check_raises(lambda: grad(f)(np.zeros(3)), Exception,
- "Tracer can't be used with raw numpy functions. "
- "You might have\n import numpy as np\ninstead of\n"
- " import jax.numpy as jnp")
+ with self.assertRaisesRegex(Exception, "The numpy.ndarray conversion .*"):
+ grad(f)(np.zeros(3))
def test_binop_mismatch(self):
def f(x, y):
| Tracing a function that indexes into Numpy array gives a poor error message
The following code fails on the last line
```
import jax
import jax.numpy as jnp
import numpy as np

f = lambda i: jnp.zeros((3, 3))[i, :]
g = lambda i: np.zeros((3, 3))[i, :]
a = np.array([1, 2])
f(a) # Okay
jax.jit(f)(a) # Okay
g(a) # Okay
jax.jit(g)(a) # Fail
```
with the standard error message
```
Tracer can't be used with raw numpy functions. You might have
import numpy as np
instead of
import jax.numpy as np
```
The cause of the error is attempting to trace the `__getitem__` method of a raw numpy array. Normally "Tracer can't be used ..." errors are easy to spot because the offending call starts with `np.`, but this error is a bit more subtle and takes more time to track down. Also, binary operations that mix numpy and JAX arrays work fine, so this is an exceptional case.
Is there any way to improve this error message / detect this case? At the extreme end, could jax do without implementing the `__array__` method for implicit conversions (and replace with an explicit conversion method), to reduce the mental overhead associated with these conversions?
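One workaround, for what it's worth, is to move the raw NumPy array onto the JAX side before indexing it with traced values (a sketch):
```python
import jax
import jax.numpy as jnp
import numpy as np

x = np.zeros((3, 3))                   # raw numpy array
h = lambda i: jax.device_put(x)[i, :]  # convert first, then index with the traced i
jax.jit(h)(np.array([1, 2]))           # no longer fails
```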
| I think the only way to specialize this error would be to do some sort of call stack tracing in ``__array__``. The most useful thing may be to print the context of the call as part of the error; for example, something like this:
```python
import inspect
class MyClass:
    def __array__(self):
        frame = inspect.currentframe()
        call_frame = inspect.getouterframes(frame, 3)
        prefix = lambda i: '--> ' if i == call_frame[1].index else '    '
        lines = [prefix(i) + line for i, line in enumerate(call_frame[1].code_context)]
        raise ValueError('Tracer error:\n\n' + ''.join(lines))

x = np.zeros(10)
m = MyClass()
x[m]
```
```pytb
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-36-70e4e8c61a15> in <module>()
11 x = np.zeros(10)
12 m = MyClass()
---> 13 x[m]
<ipython-input-36-70e4e8c61a15> in __array__(self)
7 prefix = lambda i: '--> ' if i == call_frame[1].index else ' '
8 lines = [prefix(i) + line for i, line in enumerate(call_frame[1].code_context)]
----> 9 raise ValueError('Tracer error:\n\n' + ''.join(lines))
10
11 x = np.zeros(10)
ValueError: Tracer error:
x = np.zeros(10)
m = MyClass()
--> x[m]
```
I don't think the array slicing itself leaves a frame in the call stack, unfortunately. A more typical example would be something like `jnp.exp(1 + 0.5 * x[m])` or longer. In that case, Jake's class will look like it is flagging `jnp.exp` but really it is flagging the `x[m]`.
The message would be something like `--> jnp.exp(1 + 0.5 * x[m])` that, together with the suggestion in the error message that you have the wrong import, will suggest the `exp` pretty strongly.
The error message could probably be improved either way
1. It should probably mention methods, too, e.g., `Tracer can't be used with raw numpy functions or methods on numpy arrays`
2. We no longer recommend `import jax.numpy as np`
We could just call out indexing in the error message too (in all cases, not just when we can detect it). I think we've seen this multiple times. | 2020-05-20T03:14:24 |
google/jax | 3,162 | google__jax-3162 | [
"2737"
] | 7d157c71815e5386f2b449bac69ef9e3ab7e701f | diff --git a/jax/interpreters/ad.py b/jax/interpreters/ad.py
--- a/jax/interpreters/ad.py
+++ b/jax/interpreters/ad.py
@@ -170,47 +170,18 @@ def write_primal(v, val):
# forces primal_in to contain UndefinedPrimals for tangent values!
map(write_primal, jaxpr.invars, primals_in)
- def is_linear(var):
- if type(var) is Literal:
- return False
- else:
- return var not in primal_env
-
- linear_eqns = []
- for eqn in jaxpr.eqns:
- prim = eqn.primitive
- if not (prim.call_primitive or prim.map_primitive):
- if any(is_linear(v) for v in eqn.invars):
- linear_eqns.append(eqn)
- else:
- in_vals = map(read_primal, eqn.invars)
- ans = prim.bind(*in_vals, **eqn.params)
- if prim.multiple_results:
- map(write_primal, eqn.outvars, ans)
- else:
- write_primal(eqn.outvars[0], ans)
- else:
- call_jaxpr, params = core.extract_call_jaxpr(prim, eqn.params)
- if any(is_linear(v) for v in eqn.invars):
- linear_eqns.append(eqn)
- if any(not is_linear(v) for v in eqn.invars):
- # FIXME: Some invars correspond to tangents
- ans = _eval_subjaxpr_primals(prim, call_jaxpr,
- map(read_primal, eqn.invars), params)
- map(write_primal, eqn.outvars, ans)
-
# Find the last use of each cotangent so that they can be removed
# as soon as possible.
drop_cts: List[Set[Any]] = []
seen_vars: Set[Any] = set(jaxpr.invars)
- for eqn in linear_eqns:
+ for eqn in jaxpr.eqns:
read_set = set(eqn.outvars) # NOTE: eqn is not transposed yet!
drop_cts.append(read_set - seen_vars)
seen_vars |= read_set
ct_env: Dict[Any, Any] = {}
map(write_cotangent, jaxpr.outvars, cotangents_in)
- for eqn, to_drop in zip(linear_eqns[::-1], drop_cts[::-1]):
+ for eqn, to_drop in zip(jaxpr.eqns[::-1], drop_cts[::-1]):
# FIXME: Some invars correspond to tangents
invals = map(read_primal, eqn.invars)
if eqn.primitive.multiple_results:
@@ -218,9 +189,10 @@ def is_linear(var):
else:
cts_in, = map(read_cotangent, eqn.outvars)
if eqn.primitive.call_primitive or eqn.primitive.map_primitive:
+ cts_in_avals = [v.aval for v in eqn.outvars]
call_jaxpr, params = core.extract_call_jaxpr(eqn.primitive, eqn.params)
cts_out = get_primitive_transpose(eqn.primitive)(
- params, call_jaxpr, invals, cts_in)
+ params, call_jaxpr, invals, cts_in, cts_in_avals)
else:
cts_out = get_primitive_transpose(eqn.primitive)(cts_in, *invals, **eqn.params)
cts_out = [zero] * len(eqn.invars) if cts_out is zero else cts_out
@@ -232,60 +204,6 @@ def is_linear(var):
cotangents_out = map(read_cotangent, jaxpr.invars)
return cotangents_out
-def _eval_subjaxpr_primals(prim, jaxpr, in_vals, params):
- assert not jaxpr.constvars
- all_args, in_tree_def = tree_flatten((in_vals,))
- fun = lu.hashable_partial(lu.wrap_init(_eval_primals), jaxpr)
- fun, out_tree = flatten_fun_nokwargs(fun, in_tree_def)
- assert prim.map_primitive ^ prim.call_primitive
- if prim.map_primitive:
- new_mapped_invars = [m for m, x in zip(params['mapped_invars'], in_vals)
- if not is_undefined_primal(x)]
- new_params = dict(params, mapped_invars=tuple(new_mapped_invars))
- out_flat = prim.bind(fun, *all_args, **new_params)
- else:
- out_flat = prim.bind(fun, *all_args, **params)
- return tree_unflatten(out_tree(), out_flat)
-
-def _eval_primals(jaxpr, args):
- primal_env = {}
-
- def read_primal(v):
- if type(v) is Literal:
- return v.val
- else:
- return primal_env.get(v, UndefinedPrimal(v.aval))
-
- def write_primal(v, val):
- if not is_undefined_primal(val):
- primal_env[v] = val
-
- def is_linear(var):
- if type(var) is Literal:
- return False
- else:
- return var not in primal_env
-
- write_primal(core.unitvar, core.unit)
- assert not jaxpr.constvars
- map(write_primal, jaxpr.invars, args)
- for eqn in jaxpr.eqns:
- if not (eqn.primitive.call_primitive or eqn.primitive.map_primitive):
- if not any(is_linear(v) for v in eqn.invars):
- in_vals = map(read_primal, eqn.invars)
- ans = eqn.primitive.bind(*in_vals, **eqn.params)
- if eqn.primitive.multiple_results:
- map(write_primal, eqn.outvars, ans)
- else:
- write_primal(eqn.outvars[0], ans)
- else:
- call_jaxpr, params = core.extract_call_jaxpr(eqn.primitive, eqn.params)
- if any(not is_linear(v) for v in eqn.invars):
- ans = _eval_subjaxpr_primals(eqn.primitive, call_jaxpr,
- map(read_primal, eqn.invars), params)
- map(write_primal, eqn.outvars, ans)
- return map(read_primal, jaxpr.outvars)
-
class UndefinedPrimal:
__slots__ = ['aval']
def __init__(self, aval):
@@ -548,7 +466,7 @@ def traceable(num_primals, in_tree_def, *primals_and_tangents):
yield out_flat, tree_def
-def call_transpose(primitive, params, call_jaxpr, args, ct):
+def call_transpose(primitive, params, call_jaxpr, args, ct, _):
all_args, in_tree_def = tree_flatten(((), args, ct)) # empty consts
fun = lu.hashable_partial(lu.wrap_init(backward_pass), call_jaxpr)
fun, out_tree = flatten_fun_nokwargs(fun, in_tree_def)
@@ -556,9 +474,40 @@ def call_transpose(primitive, params, call_jaxpr, args, ct):
out_flat = primitive.bind(fun, *all_args, **params)
return tree_unflatten(out_tree(), out_flat)
primitive_transposes[core.call_p] = partial(call_transpose, call_p)
-primitive_transposes[pe.remat_call_p] = partial(call_transpose, pe.remat_call_p)
-def map_transpose(primitive, params, call_jaxpr, args, ct):
+
+def remat_transpose(params, call_jaxpr, primals_in, cotangents_in, cotangent_in_avals):
+ # backward_pass can only transpose linear computations, but the call_jaxpr embedded in
+ # remat contains primal (non-linear) equations too. Hence, we have to eliminate those
+ # (in this case via partial_eval) before we call into backward_pass again.
+ typed_call_jaxpr = core.TypedJaxpr(
+ call_jaxpr, [],
+ [raise_to_shaped(p.aval if is_undefined_primal(p) else get_aval(p)) for p in primals_in],
+ cotangent_in_avals)
+ primal_jaxpr, tangent_jaxpr, out_unknowns = \
+ pe.partial_eval_jaxpr(typed_call_jaxpr,
+ unknowns=map(is_undefined_primal, primals_in),
+ instantiate=True,
+ trace_type=None)
+
+ def do_transpose(primals_in, cotangents_in):
+ # NOTE: This is passing in undefined primals in place of tangent arguments, but it
+ # should all work out, because we're only computing the primal part here.
+ residuals = core.jaxpr_as_fun(primal_jaxpr)(*primals_in)[len(cotangents_in):]
+ # Now that we have a purely linear jaxpr, we can transpose it
+ cotangents_out = backward_pass(tangent_jaxpr.jaxpr, (), primals_in + residuals, cotangents_in)
+ # backward_pass will return cotangents computed for all invars, but some of them
+ # are residuals appended by partial eval, so we need to skip those before we return.
+ return cotangents_out[:len(primals_in)]
+
+ flat_args, in_tree_def = tree_flatten((primals_in, cotangents_in))
+ flat_do_transpose, out_tree = flatten_fun_nokwargs(lu.wrap_init(do_transpose), in_tree_def)
+ flat_cotangents_out = pe.remat_call_p.bind(flat_do_transpose, *flat_args, **params)
+ return tree_unflatten(out_tree(), flat_cotangents_out)
+primitive_transposes[pe.remat_call_p] = remat_transpose
+
+
+def map_transpose(primitive, params, call_jaxpr, args, ct, _):
all_args, in_tree_def = tree_flatten(((), args, ct)) # empty consts
fun = lu.hashable_partial(lu.wrap_init(backward_pass), call_jaxpr)
fun, out_tree = flatten_fun_nokwargs(fun, in_tree_def)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1567,6 +1567,59 @@ def f(a_bool, y):
api.jit(api.remat(f, concrete=True), static_argnums=0)(True, 1) # no crash
+ def test_remat_eval_counter(self):
+ # https://github.com/google/jax/issues/2737
+ add_one_p = Primitive('add_one')
+ add_one = add_one_p.bind
+
+ num_evals = 0
+
+ @contextmanager
+ def assertEvals(n):
+ start = num_evals
+ yield
+ assert num_evals - start == n
+
+ def add_one_impl(x):
+ nonlocal num_evals
+ num_evals += 1
+ return x + 1
+ add_one_p.def_impl(add_one_impl)
+
+ def add_one_jvp(pin, tin):
+ pout = add_one(pin[0])
+ return pout, pout * tin[0]
+ ad.primitive_jvps[add_one_p] = add_one_jvp
+
+ add_one_p.def_abstract_eval(lambda x: x)
+
+ v = np.zeros((1,))
+
+ f = jax.remat(add_one)
+ g = jax.remat(lambda x: add_one(f(x)))
+
+ # 2 calls needed to evaluate g
+ with assertEvals(2):
+ _, vjp = jax.vjp(g, v)
+ # 2 calls made while transposing g, 1 call made while transposing f
+ with assertEvals(3):
+ vjp(v)
+
+ @jax.util.curry
+ def call(f, *args):
+ return jax.core.call(jax.linear_util.wrap_init(lambda *args: [f(*args)]), *args)[0]
+
+ f = call(add_one)
+ g = jax.remat(lambda x: add_one(f(x)))
+
+ # 2 calls needed to evaluate g
+ with assertEvals(2):
+ _, vjp = jax.vjp(g, v)
+ # 2 calls made while transposing g, no reevaluation for transposition of f
+ with assertEvals(2):
+ vjp(v)
+
+
def test_trivial_computations(self):
x = jnp.array([1, 2, 3])
y = api.jit(lambda x: x)(x)
@@ -1721,7 +1774,7 @@ def fun(x):
jaxpr = api.make_jaxpr(fun)(0.)
self.assertMultiLineStrippedEqual("""
{ lambda b ; a.
- let
+ let
in (a, 1.0, b) }
""", str(jaxpr))
| remat + eager mode = unnecessary FLOPs in backward_pass
While `remat` takes great care to avoid unnecessary FLOPs during tracing of the primal code (see the discussion around the `concrete` parameter), it looks like some FLOPs have slipped through the cracks in the `backward_pass`! To verify you can apply the diff below, that I'm folding for readability.
<details>
```diff
diff --git a/jax/interpreters/ad.py b/jax/interpreters/ad.py
index 0b730e0a..ba8bcf5c 100644
--- a/jax/interpreters/ad.py
+++ b/jax/interpreters/ad.py
@@ -187,8 +188,14 @@ def backward_pass(jaxpr: core.Jaxpr, consts, args, cotangents_in):
if any(is_linear(v) for v in eqn.invars):
linear_eqns.append(eqn)
if any(not is_linear(v) for v in eqn.invars):
+ print('Evaluating primal call primitive')
ans = _eval_subjaxpr_primals(eqn.primitive, call_jaxpr,
map(read_primal, eqn.invars), params)
+ print('Results:', ans)
map(write_primal, eqn.outvars, ans)
ct_env: Dict[Any, Any] = {}
@@ -220,6 +228,7 @@ def _eval_subjaxpr_primals(prim, jaxpr, in_vals, params):
return tree_unflatten(out_tree(), out_flat)
def _eval_primals(jaxpr, args):
+ print('eval_primals >>>>')
primal_env = {}
def read_primal(v):
@@ -230,6 +239,7 @@ def _eval_primals(jaxpr, args):
def write_primal(v, val):
if not is_undefined_primal(val):
+ print('COMPUTED: ', val)
primal_env[v] = val
def is_linear(var):
@@ -256,6 +266,7 @@ def _eval_primals(jaxpr, args):
ans = _eval_subjaxpr_primals(eqn.primitive, call_jaxpr,
map(read_primal, eqn.invars), params)
map(write_primal, eqn.outvars, ans)
+ print('eval_primals <<<<')
return map(read_primal, jaxpr.outvars)
class UndefinedPrimal:
```
</details>
Then, running this code:
```py
@checkpoint
def g(x):
  return x ** 3

def f(x):
  return g(x) * 2

vjp(f, x)[1](x)
```
produces the following output:
```
Evaluating primal call primitive
eval_primals >>>>
COMPUTED: *
COMPUTED: [2.]
COMPUTED: [4.]
COMPUTED: [12.]
eval_primals <<<<
Results: [*, UndefinedPrimal(ShapedArray(float32[1]))]
```
indicating that the call to `_eval_subjaxpr_primals` for the purpose of getting _potential primal outputs_ out of `remat_call` evaluated a bunch of expressions on real data, only to return a unit and an undefined primal. In effect, every `remat_call` gets evaluated twice, with all results of the first evaluation getting thrown away.
A potential solution would be to make `_eval_subjaxpr_primals` only compute the primal values that actually affect the primal outputs (none in this case). The problem with that is that JVP jaxprs make no distinction between primals and tangents in their return lists, except that we might be able to depend on the convention that the first half of those always corresponds to primals, while the rest are their tangents. Not sure if that's a sound thing to do though (would that generalize to all call primitives?), so I wanted to discuss this before trying to resolve the issue. If yes, then `partial_eval._dce_jaxpr` might be a sufficient fix.
| Wow, brilliant catch!
I've forgotten the details now, but I think having some structure/bookkeeping to distinguish between primals and tangents from JVPs (and similarly for VJPs) would have been helpful when we were prototyping grad of sharded_jit (specifically in the sharded_jit translation rule I think). Not sure how relevant that is here but just throwing it out there.
@skye that's a really useful point! I think actually we might be able to collect a handful of such examples. (One that was on my mind recently: having linearity information available would make it much easier for users to write custom derivatives for higher-order functions that correctly handle closed-over traces in their function-valued arguments.)
@apaszke and I chatted this morning and, while there's still more thinking to do, it seems like it might not be a big lift to include this information in jaxprs, especially now that we include avals for each variable as a kind of type information. After all, when we differentiate things, we have the linearity information available; we just throw it away rather than recording it in the jaxpr. (There are a couple ad-hoc places we record it, like the `linear` params of `scan` and `cond`. But those feel analogous to the way we used to stash shape data in params, before we had avals available.)
cc @dougalm
Another interesting failure case that I forgot to document properly in here is that `call_p` primitive nested inside remat calls will end up behaving in the same way as if they were nested remat calls. To reproduce this you can use this code:
```python
def g(x):
  return jax.core.call(lu.wrap_init(lambda x: (np.sin(x),)), x)[0]
# g = remat(np.sin)

@remat
def f(x):
  return g(x * 2) * 4
print(jax.make_jaxpr(lambda x, y: jax.jvp(f, [x], [y]))(x, x))
```
which produces the following jaxpr:
```
{ lambda ; a b.
let c d = remat_call[ call_jaxpr={ lambda ; a b.
let c = mul a 2.0
d = mul b 2.0
e f = call[ call_jaxpr={ lambda ; a b.
let c = sin a
d = cos a
e = mul b d
in (c, e) }
name=pe(jvp(<lambda>)) ] c d
g = mul e 4.0
h = mul f 4.0
in (g, h) }
concrete=False
name=pe(jvp(f)) ] a b
in (c, d) }
```
Uncommenting the line with `remat` produces an identical jaxpr, only with `call` replaced by `remat_call`. Since `remat` is (correctly) not special cased anywhere in AD, they will be treated in exactly the same way (they share their transpose implementation). | 2020-05-20T10:58:58 |
google/jax | 3,166 | google__jax-3166 | [
"3165"
] | 96c20f3237614f7ca11f4ed288ebcc219b5d6ffd | diff --git a/jax/scipy/ndimage.py b/jax/scipy/ndimage.py
--- a/jax/scipy/ndimage.py
+++ b/jax/scipy/ndimage.py
@@ -21,6 +21,7 @@
import scipy.ndimage
from .. import api
+from .. import lax
from ..numpy import lax_numpy as jnp
from ..numpy._util import _wraps
from ..util import safe_zip as zip
@@ -36,8 +37,12 @@
}
+def _round_half_away_from_zero(a):
+ return a if jnp.issubdtype(a.dtype, jnp.integer) else lax.round(a)
+
+
def _nearest_indices_and_weights(coordinate):
- index = jnp.around(coordinate).astype(jnp.int32)
+ index = _round_half_away_from_zero(coordinate).astype(jnp.int32)
weight = coordinate.dtype.type(1)
return [(index, weight)]
@@ -53,7 +58,7 @@ def _linear_indices_and_weights(coordinate):
@functools.partial(api.jit, static_argnums=(2, 3, 4))
def _map_coordinates(input, coordinates, order, mode, cval):
input = jnp.asarray(input)
- coordinates = [jnp.asarray(c, input.dtype) for c in coordinates]
+ coordinates = [jnp.asarray(c) for c in coordinates]
cval = jnp.asarray(cval, input.dtype)
if len(coordinates) != input.ndim:
@@ -100,7 +105,9 @@ def _map_coordinates(input, coordinates, order, mode, cval):
contribution = jnp.where(all_valid, input[indices], cval)
outputs.append(_nonempty_prod(weights) * contribution)
result = _nonempty_sum(outputs)
- return result
+ if jnp.issubdtype(input.dtype, jnp.integer):
+ result = _round_half_away_from_zero(result)
+ return result.astype(input.dtype)
@_wraps(scipy.ndimage.map_coordinates, lax_description=textwrap.dedent("""\
| diff --git a/tests/scipy_ndimage_test.py b/tests/scipy_ndimage_test.py
--- a/tests/scipy_ndimage_test.py
+++ b/tests/scipy_ndimage_test.py
@@ -74,7 +74,7 @@ class NdimageTest(jtu.JaxTestCase):
"cval": cval, "impl": impl, "round_": round_}
for shape in [(5,), (3, 4), (3, 4, 5)]
for coords_shape in [(7,), (2, 3, 4)]
- for dtype in float_dtypes
+ for dtype in float_dtypes + int_dtypes
for coords_dtype in float_dtypes
for order in [0, 1]
for mode in ['wrap', 'constant', 'nearest']
@@ -100,10 +100,14 @@ def args_maker():
impl_fun = (osp_ndimage.map_coordinates if impl == "original"
else _fixed_ref_map_coordinates)
osp_op = lambda x, c: impl_fun(x, c, order=order, mode=mode, cval=cval)
- epsilon = max([dtypes.finfo(dtypes.canonicalize_dtype(d)).eps
- for d in [dtype, coords_dtype]])
- self._CheckAgainstNumpy(lsp_op, osp_op, args_maker, tol=100*epsilon,
- check_dtypes=True)
+ if dtype in float_dtypes:
+ epsilon = max([dtypes.finfo(dtypes.canonicalize_dtype(d)).eps
+ for d in [dtype, coords_dtype]])
+ self._CheckAgainstNumpy(lsp_op, osp_op, args_maker, tol=100*epsilon,
+ check_dtypes=True)
+ else:
+ self._CheckAgainstNumpy(lsp_op, osp_op, args_maker, tol=0,
+ check_dtypes=True)
def testMapCoordinatesErrors(self):
x = onp.arange(5.0)
@@ -120,6 +124,21 @@ def testMapCoordinateDocstring(self):
self.assertIn("Only linear interpolation",
lsp_ndimage.map_coordinates.__doc__)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_{}_order={}".format(onp.dtype(dtype), order),
+ "dtype": dtype, "order": order}
+ for dtype in float_dtypes + int_dtypes
+ for order in [0, 1]))
+ def testMapCoordinatesRoundHalf(self, dtype, order):
+ x = onp.arange(-3, 3, dtype=dtype)
+ c = onp.array([[.5, 1.5, 2.5, 3.5]])
+ def args_maker():
+ return x, c
+
+ lsp_op = lambda x, c: lsp_ndimage.map_coordinates(x, c, order=order)
+ osp_op = lambda x, c: osp_ndimage.map_coordinates(x, c, order=order)
+ self._CheckAgainstNumpy(lsp_op, osp_op, args_maker, check_dtypes=True)
+
def testContinuousGradients(self):
# regression test for https://github.com/google/jax/issues/3024
| jax.scipy.ndimage.map_coordinates round half is different than scipy
`jax.scipy.ndimage.map_coordinates` implements 'round half to even' when using nearest-neighbor interpolation and the coordinat values are exactly half (i.e. both 1.5 and 2.5 get rounded to 2). This is the default behavior for jax and ordinary numpy rounding (round, around, rint). However scipy.ndimage.map_coordinates appears to implement 'round half up'.
The difference between jax and scipy is illustrated with the following commands.
`>>> x = np.arange(10).astype(np.float32)`
`>>> coords = [[3.5, 4.5, 5.5, 6.5, 7.5]]`
`>>> jax.scipy.ndimage.map_coordinates(x, coords, order=0)`
`DeviceArray([4., 4., 6., 6., 8.], dtype=float32)`
`>>> scipy.ndimage.map_coordinates(x, coords, order=0)`
`array([4., 5., 6., 7., 8.], dtype=float32)`
Not sure which behavior is desired, but if the default numpy behavior is desired, the jax.scipy documentation should mention this. If the scipy behavior is desired, the rounding function at
[https://github.com/google/jax/blob/7d157c71815e5386f2b449bac69ef9e3ab7e701f/jax/scipy/ndimage.py#L40](https://github.com/google/jax/blob/7d157c71815e5386f2b449bac69ef9e3ab7e701f/jax/scipy/ndimage.py#L40)
should be changed to something like:
`index = jnp.floor(coordinate + .5).astype(jnp.int32)`
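To make the difference concrete with plain NumPy (for illustration only):
```python
import numpy as np

c = np.array([0.5, 1.5, 2.5])
np.around(c)       # array([0., 2., 2.])  -- round half to even, the current jax behavior
np.floor(c + 0.5)  # array([1., 2., 3.])  -- round half up, matching scipy here
```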
| 2020-05-20T16:40:57 |
|
google/jax | 3,173 | google__jax-3173 | [
"3168"
] | 12f26d3c8c6b3e020e17024d3cbd39b62fd631bb | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -1354,8 +1354,10 @@ def broadcast_arrays(*args):
return [broadcast_to(arg, result_shape) for arg in args]
+@_wraps(np.broadcast_to, lax_description="""\
+The JAX version does not necessarily return a view of the input.
+""")
def broadcast_to(arr, shape):
- """Like Numpy's broadcast_to but doesn't necessarily return views."""
arr = arr if isinstance(arr, ndarray) else array(arr)
shape = canonicalize_shape(shape) # check that shape is concrete
arr_shape = _shape(arr)
| broadcast_to docs are unhelpful
Right now the docs state "Like Numpy’s broadcast_to but doesn’t necessarily return views."
A few suggestions
1. Link to numpy's docs
2. Copy their docs in some way
3. Write our own version
| 2020-05-21T01:40:57 |
||
google/jax | 3,174 | google__jax-3174 | [
"3164"
] | f9c978e9d608e373965512ebc498c0a1338af3ed | diff --git a/jax/lax/lax_control_flow.py b/jax/lax/lax_control_flow.py
--- a/jax/lax/lax_control_flow.py
+++ b/jax/lax/lax_control_flow.py
@@ -375,7 +375,8 @@ def _while_loop_batching_rule(args, dims, cond_nconsts, cond_jaxpr,
body_jaxpr_batched, carry_bat_out = batching.batch_jaxpr(
body_jaxpr, size, batched, instantiate=carry_bat)
cond_jaxpr_batched, (pred_bat,) = batching.batch_jaxpr(
- cond_jaxpr, size, cconst_bat + carry_bat, instantiate=False)
+ cond_jaxpr, size, cconst_bat + carry_bat,
+ instantiate=bool(cond_jaxpr.out_avals[0].shape))
carry_bat_out = _map(partial(operator.or_, pred_bat), carry_bat_out)
if carry_bat_out == carry_bat:
break
@@ -389,7 +390,7 @@ def _while_loop_batching_rule(args, dims, cond_nconsts, cond_jaxpr,
new_consts = [batching.moveaxis(x, d, 0) if d is not batching.not_mapped and d != 0
else x for x, d in zip(consts, const_dims)]
new_init = [batching.broadcast(x, size, 0) if now_bat and not was_bat
- else batching.moveaxis(x, d, 0) if now_bat else x
+ else batching.moveaxis(x, d, 0) if now_bat and d != 0 else x
for x, d, was_bat, now_bat in zip(init, init_dims, init_bat, carry_bat)]
outs = while_p.bind(*(new_consts + new_init),
| diff --git a/tests/lax_control_flow_test.py b/tests/lax_control_flow_test.py
--- a/tests/lax_control_flow_test.py
+++ b/tests/lax_control_flow_test.py
@@ -1982,6 +1982,12 @@ def trivial_while(y):
with api.disable_jit():
api.vmap(trivial_while)(jnp.array([3.0,4.0])) # doesn't crash
+ def test_vmaps_of_while_loop(self):
+ # https://github.com/google/jax/issues/3164
+ def f(x, n): return lax.fori_loop(0, n, lambda _, x: x + 1, x)
+ x, n = jnp.arange(3), jnp.arange(4)
+ api.vmap(api.vmap(f, (None, 0)), (0, None))(x, n) # doesn't crash
+
if __name__ == '__main__':
absltest.main()
| expm shape tracing issue through two vmaps and jvp
Hi,
I've been trying to narrow down an issue that arises when a vmap is applied to a jvp that goes through a calculation which has its own vmap. Although I am new to JAX and potentially lacking an understanding of shape tracing, it seems to me that the problem occurs due to the expm matrix operation.
Consider the following code that either implements a Jacobian calculation for the matrix product operation:
> e^{I*(w[0,0]+w[0,1]})e^{I*(w[1,0]+w[1,1]})
or
> (I * w[0,0]) * (I * w[0,1]) * (I * w[1,0]) * (I * w[1,1])
where I is the 2x2 identity matrix and w is a set of weights stored as a 2x2 array. The executed calculation can be toggled by commenting/uncommenting 2 lines in matrix_operation(). The code works well for the 2nd operation but breaks down for the 1st one, throwing an assertion error related to a shape check in lax_control_flow.py.
I am wondering whether it is a bug in expm or I am missing something? I would appreciate any help!
```
import jax.numpy as jnp
import jax.scipy.linalg as la  # needed for la.expm below
from jax import vmap, jvp
from functools import partial
from jax.numpy.linalg import multi_dot

def matrix_operation(w):
    assert w.shape == (2,)
    A = jnp.identity(2)
    return la.expm(A*(w[0]+w[1])) # ----- breaks down with this line
    #return jnp.matmul(A*w[0], A*w[1]) #----- works with this line!

def cost_function(ws):
    same_w = vmap(matrix_operation)(ws)
    product = multi_dot(same_w)
    return product

def pushfwd_(func, weights, tangent):
    return jvp(func, (weights,), (tangent,))
r = 2
c = 2
weights = jnp.ones((r, c))
pushfwd = partial(pushfwd_, cost_function, weights)
# a set of vectors with a single non-zero entry equal to 1
# and same shape as weights
tangents = jnp.reshape(jnp.identity(r*c), (c*r, r, c))
print(vmap(pushfwd)(tangents)) # this breaks with la.expm above
#pushfwd(tangents[0]) # this works with la.expm above
```
| Wondering if it is related to issue #3056?
What specific error message do you see?
The short answer for what's wrong here is probably that JAX's `expm` doesn't support reverse-mode differentiation yet.
Thanks for the reply!
> The short answer for what's wrong here is probably that JAX's expm doesn't support reverse-mode differentiation yet.
Note that the code doesn't use reverse-mode differentiation; it runs JVP instead, which is implemented. For example, if the last print is replaced with pushfwd(tangents[0]), all is good.
Also, the code runs if the two body lines in cost_function(ws) are replaced with an explicit loop computing the product. So it seems that it is really the combination of two vmaps and jvp that is causing the issue to arise.
> What specific error message do you see?
Here is the full message.
```
---------------------------------------------------------------------------
> AssertionError Traceback (most recent call last)
> <ipython-input-37-192dd72c18c4> in <module>
> 23 tangents = jnp.reshape(jnp.identity(r*c), (c*r, r, c))
> 24
> ---> 25 print(vmap(pushfwd)(tangents))
>
> ~/Codes/JAX_source/jax/jax/api.py in batched_fun(*args)
> 765 in_axes_flat = _flatten_axes(in_tree, in_axes)
> 766 _ = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "vmap")
> --> 767 out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
> 768 lambda: _flatten_axes(out_tree(), out_axes))
> 769 return tree_unflatten(out_tree(), out_flat)
>
> ~/Codes/JAX_source/jax/jax/interpreters/batching.py in batch(fun, in_vals, in_dims, out_dim_dests)
> 32 # executes a batched version of `fun` following out_dim_dests
> 33 batched_fun = batch_fun(fun, in_dims, out_dim_dests)
> ---> 34 return batched_fun.call_wrapped(*in_vals)
> 35
> 36 @lu.transformation_with_aux
>
> ~/Codes/JAX_source/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
> 148 gen = None
> 149
> --> 150 ans = self.f(*args, **dict(self.params, **kwargs))
> 151 del args
> 152 while stack:
>
> <ipython-input-37-192dd72c18c4> in pushfwd_(func, weights, tangent)
> 11
> 12 def pushfwd_(func, weights, tangent):
> ---> 13 return jvp(func, (weights,), (tangent,))
> 14
> 15 r = 2
>
> ~/Codes/JAX_source/jax/jax/api.py in jvp(fun, primals, tangents)
> 1275 """
> 1276 _check_callable(fun)
> -> 1277 return _jvp(lu.wrap_init(fun), primals, tangents)
> 1278
> 1279 def _jvp(fun: lu.WrappedFun, primals, tangents):
>
> ~/Codes/JAX_source/jax/jax/api.py in _jvp(fun, primals, tangents)
> 1298 raise TypeError(msg.format(_dtype(p), _dtype(t)))
> 1299 flat_fun, out_tree = flatten_fun_nokwargs(fun, tree_def)
> -> 1300 out_primals, out_tangents = ad.jvp(flat_fun).call_wrapped(ps_flat, ts_flat)
> 1301 return (tree_unflatten(out_tree(), out_primals),
> 1302 tree_unflatten(out_tree(), out_tangents))
>
> ~/Codes/JAX_source/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
> 148 gen = None
> 149
> --> 150 ans = self.f(*args, **dict(self.params, **kwargs))
> 151 del args
> 152 while stack:
>
> <ipython-input-37-192dd72c18c4> in cost_function(ws)
> 6
> 7 def cost_function(ws):
> ----> 8 same_w = vmap(matrix_operation)(ws)
> 9 product = la.multi_dot(same_w)
> 10 return product
>
> ~/Codes/JAX_source/jax/jax/api.py in batched_fun(*args)
> 765 in_axes_flat = _flatten_axes(in_tree, in_axes)
> 766 _ = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "vmap")
> --> 767 out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
> 768 lambda: _flatten_axes(out_tree(), out_axes))
> 769 return tree_unflatten(out_tree(), out_flat)
>
> ~/Codes/JAX_source/jax/jax/interpreters/batching.py in batch(fun, in_vals, in_dims, out_dim_dests)
> 32 # executes a batched version of `fun` following out_dim_dests
> 33 batched_fun = batch_fun(fun, in_dims, out_dim_dests)
> ---> 34 return batched_fun.call_wrapped(*in_vals)
> 35
> 36 @lu.transformation_with_aux
>
> ~/Codes/JAX_source/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
> 148 gen = None
> 149
> --> 150 ans = self.f(*args, **dict(self.params, **kwargs))
> 151 del args
> 152 while stack:
>
> <ipython-input-37-192dd72c18c4> in matrix_operation(w)
> 2 assert w.shape == (2,)
> 3 A = jnp.identity(2)
> ----> 4 return la.expm(A*(w[0]+w[1])) # ----- breaks down with this line
> 5 #return jnp.matmul(A*w[0], A*w[1]) #----- works with this line!
> 6
>
> ~/Codes/JAX_source/jax/jax/api.py in __call__(self, *args)
> 1727 with core.initial_style_staging():
> 1728 jaxpr, _, consts = pe.trace_to_jaxpr(flat_fun, in_pvals, instantiate=True)
> -> 1729 outs = self.prim.bind(*it.chain(consts, args_flat), jaxpr=jaxpr,
> 1730 in_tree=in_tree, out_tree=out_tree(),
> 1731 num_consts=len(consts))
>
> ~/Codes/JAX_source/jax/jax/core.py in bind(self, *args, **kwargs)
> 212
> 213 tracers = map(top_trace.full_raise, args)
> --> 214 out_tracer = top_trace.process_primitive(self, tracers, kwargs)
> 215 if self.multiple_results:
> 216 return map(full_lower, out_tracer)
>
> ~/Codes/JAX_source/jax/jax/interpreters/batching.py in process_primitive(self, primitive, tracers, params)
> 132 # TODO(mattjj,phawkins): if no rule implemented, could vmap-via-map here
> 133 batched_primitive = get_primitive_batcher(primitive)
> --> 134 val_out, dim_out = batched_primitive(vals_in, dims_in, **params)
> 135 if primitive.multiple_results:
> 136 return map(partial(BatchTracer, self), val_out, dim_out)
>
> ~/Codes/JAX_source/jax/jax/api.py in fun_batch(args, dims, **params)
> 1750 def fun_batch(args, dims, **params):
> 1751 batched, out_dims = batching.batch_fun2(lu.wrap_init(fun_impl, params), dims)
> -> 1752 return batched.call_wrapped(*args), out_dims()
> 1753 batching.primitive_batchers[fun_p] = fun_batch
> 1754
>
> ~/Codes/JAX_source/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
> 148 gen = None
> 149
> --> 150 ans = self.f(*args, **dict(self.params, **kwargs))
> 151 del args
> 152 while stack:
>
> ~/Codes/JAX_source/jax/jax/api.py in fun_impl(*args, **params)
> 1741 def fun_impl(*args, **params):
> 1742 consts, args = split_list(args, [params['num_consts']])
> -> 1743 return core.eval_jaxpr(params['jaxpr'], consts, *args)
> 1744 fun_p.def_impl(fun_impl)
> 1745
>
> ~/Codes/JAX_source/jax/jax/core.py in eval_jaxpr(jaxpr, consts, *args)
> 279 else:
> 280 subfuns = []
> --> 281 ans = eqn.primitive.bind(*(subfuns + in_vals), **params)
> 282 if eqn.primitive.multiple_results:
> 283 map(write, eqn.outvars, ans)
>
> ~/Codes/JAX_source/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
> 1018 tracers = map(top_trace.full_raise, args)
> 1019 process = getattr(top_trace, processor)
> -> 1020 outs = map(full_lower, process(primitive, f, tracers, params))
> 1021 return apply_todos(env_trace_todo(), outs)
> 1022
>
> ~/Codes/JAX_source/jax/jax/interpreters/batching.py in process_call(self, call_primitive, f, tracers, params)
> 146 else:
> 147 f, dims_out = batch_subtrace(f, self.master, dims)
> --> 148 vals_out = call_primitive.bind(f, *vals, **params)
> 149 return [BatchTracer(self, v, d) for v, d in zip(vals_out, dims_out())]
> 150
>
> ~/Codes/JAX_source/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
> 1018 tracers = map(top_trace.full_raise, args)
> 1019 process = getattr(top_trace, processor)
> -> 1020 outs = map(full_lower, process(primitive, f, tracers, params))
> 1021 return apply_todos(env_trace_todo(), outs)
> 1022
>
> ~/Codes/JAX_source/jax/jax/interpreters/ad.py in process_call(self, call_primitive, f, tracers, params)
> 342 name = params.get('name', f.__name__)
> 343 params = dict(params, name=wrap_name(name, 'jvp'))
> --> 344 result = call_primitive.bind(f_jvp, *primals, *nonzero_tangents, **params)
> 345 primal_out, tangent_out = tree_unflatten(out_tree_def(), result)
> 346 return [JVPTracer(self, p, t) for p, t in zip(primal_out, tangent_out)]
>
> ~/Codes/JAX_source/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
> 1018 tracers = map(top_trace.full_raise, args)
> 1019 process = getattr(top_trace, processor)
> -> 1020 outs = map(full_lower, process(primitive, f, tracers, params))
> 1021 return apply_todos(env_trace_todo(), outs)
> 1022
>
> ~/Codes/JAX_source/jax/jax/interpreters/batching.py in process_call(self, call_primitive, f, tracers, params)
> 146 else:
> 147 f, dims_out = batch_subtrace(f, self.master, dims)
> --> 148 vals_out = call_primitive.bind(f, *vals, **params)
> 149 return [BatchTracer(self, v, d) for v, d in zip(vals_out, dims_out())]
> 150
>
> ~/Codes/JAX_source/jax/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
> 1014 if top_trace is None:
> 1015 with new_sublevel():
> -> 1016 outs = primitive.impl(f, *args, **params)
> 1017 else:
> 1018 tracers = map(top_trace.full_raise, args)
>
> ~/Codes/JAX_source/jax/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, *args)
> 466
> 467 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name):
> --> 468 compiled_fun = _xla_callable(fun, device, backend, name, *map(arg_spec, args))
> 469 try:
> 470 return compiled_fun(*args)
>
> ~/Codes/JAX_source/jax/jax/linear_util.py in memoized_fun(fun, *args)
> 219 fun.populate_stores(stores)
> 220 else:
> --> 221 ans = call(fun, *args)
> 222 cache[key] = (ans, fun.stores)
> 223 return ans
>
> ~/Codes/JAX_source/jax/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, *arg_specs)
> 516 xla_consts = _map(partial(xb.constant, c), consts)
> 517 xla_args = _xla_callable_args(c, abstract_args, tuple_args)
> --> 518 out_nodes = jaxpr_subcomp(
> 519 c, jaxpr, backend, AxisEnv(nreps, (), ()), xla_consts,
> 520 extend_name_stack(wrap_name(name, 'jit')), *xla_args)
>
> ~/Codes/JAX_source/jax/jax/interpreters/xla.py in jaxpr_subcomp(c, jaxpr, backend, axis_env, consts, name_stack, *args)
> 344 new_params = check_backend_params(eqn.params, backend)
> 345 rule = initial_style_translations[eqn.primitive]
> --> 346 ans = rule(c, axis_env, extend_name_stack(name_stack, eqn.primitive.name),
> 347 map(aval, eqn.invars), backend, *in_nodes, **new_params)
> 348 elif eqn.primitive in parallel_translations:
>
> ~/Codes/JAX_source/jax/jax/lax/lax_control_flow.py in _while_loop_translation_rule(c, axis_env, name_stack, avals, backend, cond_jaxpr, body_jaxpr, cond_nconsts, body_nconsts, *args)
> 288 _map(partial(xb.constant, body_c), cond_jaxpr.literals),
> 289 extend_name_stack(name_stack, 'body_pred'), *(x + z))
> --> 290 new_z = _map(partial(_pred_bcast_select, body_c, body_pred), new_z, z)
> 291 assert _map(body_c.GetShape, new_z) == _map(body_c.GetShape, z) # no broadcast
> 292 new_carry = xops.Tuple(body_c, list(itertools.chain(x, y, new_z)))
>
> ~/Codes/JAX_source/jax/jax/util.py in safe_map(f, *args)
> 32 for arg in args[1:]:
> 33 assert len(arg) == n, 'length mismatch: {}'.format(list(map(len, args)))
> ---> 34 return list(map(f, *args))
> 35
> 36 def unzip2(xys):
>
> ~/Codes/JAX_source/jax/jax/lax/lax_control_flow.py in _pred_bcast_select(c, pred, x, y)
> 302 y_shape = c.GetShape(y).dimensions()
> 303 assert x_shape == y_shape
> --> 304 assert pred_shape == x_shape[:len(pred_shape)] == y_shape[:len(pred_shape)]
> 305 bcast_pred = xops.BroadcastInDim(pred, x_shape, list(range(len(pred_shape))))
> 306 return xops.Select(bcast_pred, x, y)
>
> AssertionError:
```
Good point re JVP vs VJP.
I'm going to mark this down as a bug for now. At the very least it's an uninformative error.
Thank you! Any comment on whether it could be related to the issue brought up in #3056? That would help in understanding how to fix this.
#3056 is specific to reverse-mode (i.e. VJPs) so I don't think it's related.
I think this is a bug in our while_loop batching rule (and a while_loop is used in expm, via the fori_loop wrapper)! It's a bit hard to articulate, but I think I see it... | 2020-05-21T02:37:27 |
google/jax | 3,176 | google__jax-3176 | [
"3161"
] | ae9d1753462d07d75e9e002b13538c385949d9af | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -796,6 +796,17 @@ def vmap(fun: Callable, in_axes=0, out_axes=0) -> Callable:
>>> print(vfoo(tree).shape)
(6, 2, 5)
+ Here's another example using container types in ``in_axes``, this time a
+ dictionary, to specify the elements of the container to map over:
+
+ >>> dct = {'a': 0., 'b': np.arange(5.)}
+ >>> x = 1.
+ >>> def foo(dct, x):
+ ... return dct['a'] + dct['b'] + x
+ >>> out = vmap(foo, in_axes=({'a': None, 'b': 0}, None))(dct, x)
+ >>> print(out)
+ [1. 2. 3. 4. 5.]
+
The results of a vectorized function can be mapped or unmapped.
For example, the function below returns a pair with the first
element mapped and the second unmapped. Only for unmapped results
| Documentation on how to vmap on dictionary entries
I have a question related to #2367: is it possible to use `vmap` on an array that lives inside a dictionary? And if so, how?
To stick with the example given in #2367, how would I `vmap` over 'b', which is an array within a dictionary:
```
import jax.numpy as np
from jax import vmap
dictionary = {'a': 5., 'b': np.arange(5)}
c = 1.
d = 2.
def f(dct, c, d):
return dct['a'] + dct['b'] + c + d
result = vmap(f, magic)(dictionary, c, d)
```
I don't understand how to construct the magic `in_axes` tuple for this example, or whether it is even possible. I think it would be great if there were a related example in the [documentation of vmap](https://jax.readthedocs.io/en/latest/jax.html?highlight=vmap#jax.vmap).
| Yes, set `magic = ({'a': None, 'b': 0}, None, None)`, like this:
```python
import jax.numpy as np
from jax import vmap
dictionary = {'a': 5., 'b': np.arange(5)}
c = 1.
d = 2.
def f(dct, c, d):
return dct['a'] + dct['b'] + c + d
result = vmap(f, in_axes=({'a': None, 'b': 0}, None, None))(dictionary, c, d)
```
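As a further sketch (all values here are made up): `in_axes` only needs to be a *tree prefix* of the corresponding argument, so when every leaf of the dict is batched along the same axis, a bare integer can stand in for the whole matching dict:
```python
import jax.numpy as jnp
from jax import vmap

def f(dct, c, d):
    return dct['a'] + dct['b'] + c + d

# hypothetical inputs where *both* dict entries carry a leading batch axis of size 5
batched = {'a': jnp.zeros(5), 'b': jnp.arange(5.)}
out = vmap(f, in_axes=(0, None, None))(batched, 1., 2.)  # bare 0 stands in for {'a': 0, 'b': 0}
print(out.shape)  # (5,)
```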
This sentence in the vmap docstring is intended to describe the behavior:
> If the positional arguments to fun are container types, the corresponding element of in_axes can itself be a matching container, so that distinct array axes can be mapped for different container elements. in_axes must be a container tree prefix of the positional argument tuple passed to fun.
Here, the argument in question is a dictionary, so we make the corresponding entry of `in_axes` a dictionary too (with the same keys). | 2020-05-21T04:03:28 |
|
google/jax | 3,205 | google__jax-3205 | [
"3191"
] | 6ffde8061d20b3f3c2ce4196e7640dee4b2548dc | diff --git a/jax/scipy/stats/__init__.py b/jax/scipy/stats/__init__.py
--- a/jax/scipy/stats/__init__.py
+++ b/jax/scipy/stats/__init__.py
@@ -26,3 +26,4 @@
from . import t
from . import uniform
from . import logistic
+from . import geom
diff --git a/jax/scipy/stats/geom.py b/jax/scipy/stats/geom.py
new file mode 100644
--- /dev/null
+++ b/jax/scipy/stats/geom.py
@@ -0,0 +1,33 @@
+# Copyright 2020 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import scipy.stats as osp_stats
+
+from ... import lax
+from ...numpy import lax_numpy as jnp
+from ...numpy._util import _wraps
+from ..special import xlogy, xlog1py
+
+@_wraps(osp_stats.geom.logpmf, update_doc=False)
+def logpmf(k, p, loc=0):
+ k, p, loc = jnp._promote_args_inexact("geom.logpmf", k, p, loc)
+ zero = jnp._constant_like(k, 0)
+ one = jnp._constant_like(k, 1)
+ x = lax.sub(k, loc)
+ log_probs = xlog1py(lax.sub(x, one), -p) + lax.log(p)
+ return jnp.where(lax.le(x, zero), -jnp.inf, log_probs)
+
+@_wraps(osp_stats.geom.pmf, update_doc=False)
+def pmf(k, p, loc=0):
+ return jnp.exp(logpmf(k, p, loc))
| diff --git a/tests/scipy_stats_test.py b/tests/scipy_stats_test.py
--- a/tests/scipy_stats_test.py
+++ b/tests/scipy_stats_test.py
@@ -102,6 +102,23 @@ def args_maker():
tol=1e-4)
self._CompileAndCheck(lax_fun, args_maker, check_dtypes=True)
+ @genNamedParametersNArgs(3, jtu.rand_default)
+ def testGeomLogPmf(self, rng_factory, shapes, dtypes):
+ rng = rng_factory(self.rng())
+ scipy_fun = osp_stats.geom.logpmf
+ lax_fun = lsp_stats.geom.logpmf
+
+ def args_maker():
+ x, logit, loc = map(rng, shapes, dtypes)
+ x = onp.floor(x)
+ p = expit(logit)
+ loc = onp.floor(loc)
+ return [x, p, loc]
+
+ self._CheckAgainstNumpy(scipy_fun, lax_fun, args_maker, check_dtypes=False,
+ tol=1e-4)
+ self._CompileAndCheck(lax_fun, args_maker, check_dtypes=True)
+
@genNamedParametersNArgs(5, jtu.rand_positive)
def testBetaLogPdf(self, rng_factory, shapes, dtypes):
rng = rng_factory(self.rng())
Request for Geometric Distribution
It seems that `jax` does not have a geometric distribution in its random module the way `scipy.stats` does. Will it be added? I really need it.
| We would happily accept pull requests to add a geometric distribution, but we have no immediate plans to work on it.
It is easy to simulate geometric random variables given a uniform RNG:
```np.ceil(np.log(np.random.rand()) / np.log1p(-p))```
has a `Geometric(p)` distribution. (Source: http://www.nrbook.com/devroye/Devroye_files/chapter_ten.pdf)
@gao462 I don't have time to package this up into a PR but I encourage you to do so if you end up using it.
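A minimal JAX sketch of that inverse-CDF recipe (the helper name and defaults are mine, not an existing API):
```python
import jax
import jax.numpy as jnp

def sample_geometric(key, p, shape=()):
    # direct transcription of the recipe above: ceil(log(U) / log1p(-p)) ~ Geometric(p)
    u = jax.random.uniform(key, shape)
    return jnp.ceil(jnp.log(u) / jnp.log1p(-p)).astype(jnp.int32)

key = jax.random.PRNGKey(0)
samples = sample_geometric(key, p=0.3, shape=(1000,))
```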
| 2020-05-26T04:40:16 |
google/jax | 3,207 | google__jax-3207 | [
"3204"
] | 0eace80a6e12ded1ad7ecb3e7d6ccb41b84e97e4 | diff --git a/jax/lax/lax_control_flow.py b/jax/lax/lax_control_flow.py
--- a/jax/lax/lax_control_flow.py
+++ b/jax/lax/lax_control_flow.py
@@ -329,7 +329,7 @@ def _while_loop_translation_rule(c, axis_env, name_stack, avals, backend, *args,
body_pred, = xla.jaxpr_subcomp(body_c, cond_jaxpr.jaxpr, backend, axis_env,
_map(partial(xb.constant, body_c), cond_jaxpr.literals),
extend_name_stack(name_stack, 'body_pred'), *(x + z))
- new_z = _map(partial(_pred_bcast_select, body_c, body_pred), new_z, z)
+ new_z = _map(partial(_pred_bcast_select, body_c, body_pred), new_z, z, body_jaxpr.out_avals)
assert _map(body_c.get_shape, new_z) == _map(body_c.get_shape, z) # no broadcast
new_carry = xops.Tuple(body_c, list(itertools.chain(x, y, new_z)))
@@ -338,13 +338,14 @@ def _while_loop_translation_rule(c, axis_env, name_stack, avals, backend, *args,
_, _, z = split_list(ans_elts, [cond_nconsts, body_nconsts])
return xops.Tuple(c, z)
-def _pred_bcast_select(c, pred, x, y):
+def _pred_bcast_select(c, pred, x, y, x_y_aval: core.AbstractValue):
pred_shape = c.get_shape(pred).dimensions()
x_shape = c.get_shape(x).dimensions()
y_shape = c.get_shape(y).dimensions()
assert x_shape == y_shape
- if not c.get_shape(x).is_array() and not c.get_shape(y).is_array():
- # Two tokens
+ if x_y_aval is core.abstract_unit:
+ return x
+ elif x_y_aval is core.abstract_token:
return xops.AfterAll(c, [x, y])
else:
assert pred_shape == x_shape[:len(pred_shape)] == y_shape[:len(pred_shape)]
| diff --git a/tests/lax_control_flow_test.py b/tests/lax_control_flow_test.py
--- a/tests/lax_control_flow_test.py
+++ b/tests/lax_control_flow_test.py
@@ -334,6 +334,38 @@ def fun(x, y):
expected = (np.array([4, 3]), np.array([1, 2]))
self.assertAllClose(ans, expected, check_dtypes=False)
+ def test_issue_3204(self):
+ # Error during XLA code generation for vmap of nested loops
+ def test(a, b):
+ val = 0
+ i = 0
+ j = 0
+
+ condfun_1 = lambda inp: inp[1] < a + 1
+ condfun_2 = lambda inp: inp[2] < b + 1
+
+ def bodyfun_1(inp):
+ val, i, j = inp
+ j = 0
+
+ def bodyfun_2(inp):
+ val, i, j = inp
+ val += i + j
+ j += 1
+ return (val, i, j)
+
+ result = lax.while_loop(condfun_2, bodyfun_2, (val, i, j))
+ val = result[0]
+ i += 1
+ return (val, i, j)
+
+ result = lax.while_loop(condfun_1, bodyfun_1, (val, i, j))
+ return result[0]
+
+ arr = np.arange(5)
+ vmap_test = api.vmap(test, (0, 0))
+ vmap_test(arr, arr)
+
def testForiLoopErrors(self):
"""Test typing error messages for while."""
with self.assertRaisesRegex(
| While-loop vmap bug
The following code block runs as expected on jax 0.1.63, jaxlib 0.1.45, but fails on all later versions, including master:
```python
import jax
import jax.numpy as np
from jax.experimental import loops
def test(a,b):
with loops.Scope() as s:
s.val = 0
s.i = 0
s.j = 0
for _ in s.while_range(lambda: s.i < a + 1):
s.j = 0
for _ in s.while_range(lambda: s.j < b + 1):
s.val += s.i + s.j
s.j += 1
s.i += 1
return s.val
# vectorized version
vmap_test = jax.vmap(test, (0,0))
arr = np.arange(5)
vmap_test(arr, arr)
```
<details><summary>Click for Traceback</summary>
<p>
```python
Traceback (most recent call last):
File "test.py", line 21, in <module>
print(vmap_test(arr, arr))
File "/home/adabbott/Git/jax/jax/jax/api.py", line 858, in batched_fun
lambda: flatten_axes(out_tree(), out_axes))
File "/home/adabbott/Git/jax/jax/jax/interpreters/batching.py", line 34, in batch
return batched_fun.call_wrapped(*in_vals)
File "/home/adabbott/Git/jax/jax/jax/linear_util.py", line 150, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "test.py", line 12, in test
for _ in s.while_range(lambda: s.j < b + 1):
File "/home/adabbott/Git/jax/jax/jax/experimental/loops.py", line 341, in __next__
self.end_tracing_body()
File "/home/adabbott/Git/jax/jax/jax/experimental/loops.py", line 407, in end_tracing_body
carried_init_vals, body_typed_jaxpr, body_const_vals)
File "/home/adabbott/Git/jax/jax/jax/experimental/loops.py", line 576, in build_output_vals
body_jaxpr=body_typed_jaxpr)
File "/home/adabbott/Git/jax/jax/jax/core.py", line 212, in bind
out_tracer = top_trace.process_primitive(self, tracers, kwargs)
File "/home/adabbott/Git/jax/jax/jax/interpreters/partial_eval.py", line 141, in process_primitive
return custom_partial_eval_rules[primitive](self, *tracers, **params)
File "/home/adabbott/Git/jax/jax/jax/lax/lax_control_flow.py", line 517, in _while_partial_eval
body_jaxpr=body_jaxpr_known)
File "/home/adabbott/Git/jax/jax/jax/core.py", line 212, in bind
out_tracer = top_trace.process_primitive(self, tracers, kwargs)
File "/home/adabbott/Git/jax/jax/jax/interpreters/batching.py", line 134, in process_primitive
val_out, dim_out = batched_primitive(vals_in, dims_in, **params)
File "/home/adabbott/Git/jax/jax/jax/lax/lax_control_flow.py", line 391, in _while_loop_batching_rule
body_nconsts=body_nconsts, body_jaxpr=body_jaxpr_batched)
File "/home/adabbott/Git/jax/jax/jax/core.py", line 209, in bind
return self.impl(*args, **kwargs)
File "/home/adabbott/Git/jax/jax/jax/interpreters/xla.py", line 217, in apply_primitive
compiled_fun = xla_primitive_callable(prim, *map(arg_spec, args), **params)
File "/home/adabbott/Git/jax/jax/jax/interpreters/xla.py", line 248, in xla_primitive_callable
*avals, **params)
File "/home/adabbott/Git/jax/jax/jax/interpreters/xla.py", line 295, in primitive_computation
*xla_args, **params)
File "/home/adabbott/Git/jax/jax/jax/lax/lax_control_flow.py", line 332, in _while_loop_translation_rule
new_z = _map(partial(_pred_bcast_select, body_c, body_pred), new_z, z)
File "/home/adabbott/Git/jax/jax/jax/util.py", line 34, in safe_map
return list(map(f, *args))
File "/home/adabbott/Git/jax/jax/jax/lax/lax_control_flow.py", line 350, in _pred_bcast_select
assert pred_shape == x_shape[:len(pred_shape)] == y_shape[:len(pred_shape)]
AssertionError
```
</p>
</details>
It appears to only occur when the nested while-loop variable `b` is vectorized:
```python
# this works
vmap_test = jax.vmap(test, (0,None))
vmap_test(arr, 3)
# this fails
vmap_test = jax.vmap(test, (None,0))
vmap_test(3, arr)
```
| I should also note the same behavior occurs when using `jax.lax.while_loop` directly, without the convenience of the `loops` module:
<details><summary>Click for pure jax.lax.while_loop version </summary>
<p>
```python
import jax
import jax.numpy as np
def test(a,b):
val = 0
i = 0
j = 0
condfun_1 = lambda inp: inp[1] < a + 1
condfun_2 = lambda inp: inp[2] < b + 1
def bodyfun_1(inp):
val, i, j = inp
j = 0
def bodyfun_2(inp):
val, i, j = inp
val += i + j
j += 1
return (val, i, j)
result = jax.lax.while_loop(condfun_2, bodyfun_2, (val,i,j))
val = result[0]
i += 1
return (val, i, j)
result = jax.lax.while_loop(condfun_1, bodyfun_1, (val,i,j))
return result[0]
arr = np.arange(5)
vmap_test = jax.vmap(test, (0,0))
vmap_test(arr, arr)
```
</p>
</details> | 2020-05-26T06:43:29 |
google/jax | 3,224 | google__jax-3224 | [
"3216"
] | c5010cda473b56413b07054415b498b7ea5b5618 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -66,6 +66,14 @@
DType = Any
Shape = Sequence[int]
+def _try_broadcast_shapes(shapes):
+ # Replace 1 with 0 to avoid inconclusive comparisons for polymorphic dims:
+ out_shape = onp.max(onp.where(shapes == 1, 0, shapes), axis=0)
+ out_shape = onp.where(onp.all(shapes == 1, axis=0), 1, out_shape)
+ if not onp.all((shapes == out_shape) | (shapes == 1)):
+ return None
+ return canonicalize_shape(out_shape)
+
@cache()
def broadcast_shapes(*shapes):
"""Returns the shape that results from NumPy broadcasting of `shapes`."""
@@ -73,13 +81,11 @@ def broadcast_shapes(*shapes):
return shapes[0]
ndim = _max(len(shape) for shape in shapes)
shapes = onp.array([(1,) * (ndim - len(shape)) + shape for shape in shapes])
- is_zero = onp.any(shapes == 0, axis=0)
- max_shape = onp.max(shapes, axis=0)
- result_shape = onp.where(is_zero, 0, max_shape)
- if not onp.all((shapes == result_shape) | (shapes == 1)):
+ result_shape = _try_broadcast_shapes(shapes)
+ if result_shape is None:
raise ValueError("Incompatible shapes for broadcasting: {}"
.format(tuple(map(tuple, shapes))))
- return canonicalize_shape(result_shape)
+ return result_shape
def _identity(x): return x
@@ -1781,13 +1787,11 @@ def _broadcasting_shape_rule(name, *avals):
if len({len(shape) for shape in shapes}) != 1:
msg = '{} got arrays of different rank: {}.'
raise TypeError(msg.format(name, ', '.join(map(str, map(tuple, shapes)))))
- is_zero = onp.any(shapes == 0, axis=0)
- max_shape = onp.max(shapes, axis=0)
- result_shape = onp.where(is_zero, 0, max_shape)
- if not onp.all((shapes == result_shape) | (shapes == 1)):
+ result_shape = _try_broadcast_shapes(shapes)
+ if result_shape is None:
msg = '{} got incompatible shapes for broadcasting: {}.'
raise TypeError(msg.format(name, ', '.join(map(str, map(tuple, shapes)))))
- return tuple(result_shape)
+ return result_shape
def naryop(result_dtype, accepted_dtypes, name, translation_rule=None):
| diff --git a/tests/masking_test.py b/tests/masking_test.py
--- a/tests/masking_test.py
+++ b/tests/masking_test.py
@@ -101,6 +101,7 @@ def test_Poly_rsub(self):
assert -1 - n == -n - 1
def test_add_broadcast(self):
+ @shapecheck(['n', '(m, n)'], '(m, n)')
@shapecheck(['(m, n)', 'n'], '(m, n)')
@shapecheck(['n', ''], 'n')
def add(a, b):
| shapecheck of arithmetic is non-commutative
```python
import jax.numpy as jnp
import jax
jax.shapecheck(["(m,n)", "n"], "(m,n)")(jnp.add) # passes
jax.shapecheck(["n", "(m,n)"], "(m,n)")(jnp.add) # errors
```
This results in the error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-b3bfdb20b090> in <module>()
3
4 jax.shapecheck(["(m,n)", "n"], "(m,n)")(jnp.add) # passes
----> 5 jax.shapecheck(["n", "(m,n)"], "(m,n)")(jnp.add) # errors
google3/third_party/py/jax/api.py in shapecheck(in_shapes, out_shape, fun)
1263 flat_fun, out_tree_ = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
1264 avals = map(partial(ShapedArray, dtype=onp.float32), in_shapes)
-> 1265 out_shapes_ = [o.shape for o in pe.abstract_eval_fun(flat_fun.call_wrapped, *avals)]
1266 if out_tree != out_tree_(): raise TypeError("pytree mismatch")
1267 if not all(map(masking._shape_spec_consistent, out_shapes, out_shapes_)):
google3/third_party/py/jax/interpreters/partial_eval.py in abstract_eval_fun(fun, *avals, **params)
340 pvals_in = [PartialVal.unknown(a) for a in avals]
341 _, pvals_out, _ = trace_to_jaxpr(lu.wrap_init(fun, params), pvals_in,
--> 342 instantiate=True, stage_out=True)
343 avals_out, _ = unzip2(pvals_out)
344 for aval_out in avals_out:
google3/third_party/py/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate, stage_out, bottom, trace_type)
436 with new_master(trace_type, bottom=bottom) as master:
437 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 438 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
439 assert not env
440 del master
google3/third_party/py/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
google3/third_party/py/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
google3/third_party/py/jax/numpy/lax_numpy.py in fn(x1, x2)
350 def _maybe_bool_binop(numpy_fn, lax_fn, bool_lax_fn):
351 def fn(x1, x2):
--> 352 x1, x2 = _promote_args(numpy_fn.__name__, x1, x2)
353 return lax_fn(x1, x2) if x1.dtype != bool_ else bool_lax_fn(x1, x2)
354 return _wraps(numpy_fn)(fn)
google3/third_party/py/jax/numpy/lax_numpy.py in _promote_args(fun_name, *args)
281 """Convenience function to apply Numpy argument shape and dtype promotion."""
282 _check_arraylike(fun_name, *args)
--> 283 return _promote_shapes(fun_name, *_promote_dtypes(*args))
284
285 def _promote_args_inexact(fun_name, *args):
google3/third_party/py/jax/numpy/lax_numpy.py in _promote_shapes(fun_name, *args)
218 if FLAGS.jax_numpy_rank_promotion != "allow":
219 _rank_promotion_warning_or_error(fun_name, shapes)
--> 220 result_rank = len(lax.broadcast_shapes(*shapes))
221 return [lax.reshape(arg, (1,) * (result_rank - len(shp)) + shp)
222 if shp and len(shp) != result_rank else arg
google3/third_party/py/jax/lax/lax.py in broadcast_shapes(*shapes)
75 shapes = onp.array([(1,) * (ndim - len(shape)) + shape for shape in shapes])
76 is_zero = onp.any(shapes == 0, axis=0)
---> 77 max_shape = onp.max(shapes, axis=0)
78 result_shape = onp.where(is_zero, 0, max_shape)
79 if not onp.all((shapes == result_shape) | (shapes == 1)):
google3/third_party/py/numpy/core/fromnumeric.py in amax(a, axis, out, keepdims, initial)
2503 """
2504 return _wrapreduction(a, np.maximum, 'max', axis, None, out, keepdims=keepdims,
-> 2505 initial=initial)
2506
2507
google3/third_party/py/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
84 return reduction(axis=axis, out=out, **passkwargs)
85
---> 86 return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
87
88
google3/third_party/py/jax/interpreters/masking.py in __le__(self, other)
184
185 def __le__(self, other):
--> 186 return _ensure_poly(other) >= self
187
188 def __lt__(self, other):
google3/third_party/py/jax/interpreters/masking.py in __ge__(self, other)
181
182 raise ValueError('Polynomials comparison "{} >= {}" is inconclusive.'
--> 183 .format(self, other))
184
185 def __le__(self, other):
ValueError: Polynomials comparison "1 >= m" is inconclusive.
```
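With the `_try_broadcast_shapes` change in the patch above (which substitutes 0 for broadcastable 1s so the max never has to evaluate `1 >= m`), both argument orders should pass; a quick sanity sketch, assuming the fix is applied:
```python
from jax import shapecheck
import jax.numpy as jnp

shapecheck(["(m,n)", "n"], "(m,n)")(jnp.add)  # passed before
shapecheck(["n", "(m,n)"], "(m,n)")(jnp.add)  # previously raised the error above
```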
| ping @JuliusKunze | 2020-05-27T19:07:59 |
google/jax | 3,235 | google__jax-3235 | [
"3180"
] | 7c90023ddbbb598a2f34a805ac8c4b19f69b82e1 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -147,7 +147,7 @@ def __ne__(self, other):
return not (self == other)
def __call__(self, x):
- return array(self.dtype.type(x), dtype=self.dtype)
+ return array(x, dtype=self.dtype)
def _make_scalar_type(np_scalar_type):
return _ScalarMeta(np_scalar_type.__name__, (object,),
| diff --git a/tests/dtypes_test.py b/tests/dtypes_test.py
--- a/tests/dtypes_test.py
+++ b/tests/dtypes_test.py
@@ -173,5 +173,11 @@ class AnEnum(enum.IntEnum):
np.testing.assert_equal(np.int32(101), np.int32(AnEnum.B))
np.testing.assert_equal(jnp.int32(101), jnp.int32(AnEnum.B))
+ def testScalarCastInsideJitWorks(self):
+ # jnp.int32(tracer) should work.
+ self.assertEqual(jnp.int32(101),
+ jax.jit(lambda x: jnp.int32(x))(jnp.float32(101.4)))
+
+
if __name__ == "__main__":
absltest.main()
| Cast breaks jit
Casting via `jnp.float32(x)` breaks jit.
> *** Exception: Tracer can’t be used with raw numpy functions. You might have
> import numpy as np
> instead of
> import jax.numpy as jnp
The workaround is to use `jnp.asarray(x, dtype=jnp.float32)` instead, but the error is very confusing and it would be nice if this was done automatically under the hood by jax.
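A minimal sketch of the failure mode and the workaround (the wrapper functions are hypothetical; after the one-line fix in this PR, the first form works under `jit` as well):
```python
import jax
import jax.numpy as jnp

def bad(x):
    return jnp.float32(x)                     # pre-fix: the tracer was handed to raw numpy under jit

def ok(x):
    return jnp.asarray(x, dtype=jnp.float32)  # workaround: explicit asarray with a dtype

jax.jit(ok)(jnp.arange(3))
```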
| I suspect to fix this all we need to do is change this line of code:
https://github.com/google/jax/blob/bb2127cebd3ade161109ee4919a92aaff5c788c1/jax/numpy/lax_numpy.py#L151
to do something different for JAX array types. | 2020-05-28T17:49:46 |
google/jax | 3,320 | google__jax-3320 | [
"3317"
] | c49bb754543f89fc44bcec2ab4b7824f3b869be0 | diff --git a/jax/random.py b/jax/random.py
--- a/jax/random.py
+++ b/jax/random.py
@@ -359,6 +359,9 @@ def uniform(key: jnp.ndarray,
Returns:
A random array with the specified shape and dtype.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `uniform` must be a float dtype, "
+ f"got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _uniform(key, shape, dtype, minval, maxval)
@@ -543,6 +546,9 @@ def normal(key: jnp.ndarray,
Returns:
A random array with the specified shape and dtype.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `normal` must be a float dtype, "
+ f"got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _normal(key, shape, dtype)
@@ -581,6 +587,9 @@ def multivariate_normal(key: jnp.ndarray,
``shape + mean.shape[-1:]`` if ``shape`` is not None, or else
``broadcast_shapes(mean.shape[:-1], cov.shape[:-2]) + mean.shape[-1:]``.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `multivariate_normal` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
if shape is not None:
shape = abstract_arrays.canonicalize_shape(shape)
@@ -634,6 +643,9 @@ def truncated_normal(key: jnp.ndarray,
A random array with the specified dtype and shape given by ``shape`` if
``shape`` is not None, or else by broadcasting ``lower`` and ``upper``.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `truncated_normal` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
if shape is not None:
shape = abstract_arrays.canonicalize_shape(shape)
@@ -714,6 +726,9 @@ def beta(key: jnp.ndarray,
A random array with the specified dtype and shape given by ``shape`` if
``shape`` is not None, or else by broadcasting ``a`` and ``b``.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `beta` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
if shape is not None:
shape = abstract_arrays.canonicalize_shape(shape)
@@ -748,6 +763,9 @@ def cauchy(key, shape=(), dtype=np.float64):
Returns:
A random array with the specified shape and dtype.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `cauchy` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _cauchy(key, shape, dtype)
@@ -780,6 +798,9 @@ def dirichlet(key, alpha, shape=None, dtype=np.float64):
``shape + (alpha.shape[-1],)`` if ``shape`` is not None, or else
``alpha.shape``.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `dirichlet` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
if shape is not None:
shape = abstract_arrays.canonicalize_shape(shape)
@@ -814,6 +835,9 @@ def exponential(key, shape=(), dtype=np.float64):
Returns:
A random array with the specified shape and dtype.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `exponential` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _exponential(key, shape, dtype)
@@ -1039,6 +1063,9 @@ def gamma(key, a, shape=None, dtype=np.float64):
A random array with the specified dtype and with shape given by ``shape`` if
``shape`` is not None, or else by ``a.shape``.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `gamma` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
if shape is not None:
shape = abstract_arrays.canonicalize_shape(shape)
@@ -1178,6 +1205,9 @@ def gumbel(key, shape=(), dtype=np.float64):
Returns:
A random array with the specified shape and dtype.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `gumbel` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _gumbel(key, shape, dtype)
@@ -1232,6 +1262,9 @@ def laplace(key, shape=(), dtype=np.float64):
Returns:
A random array with the specified shape and dtype.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `laplace` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _laplace(key, shape, dtype)
@@ -1257,6 +1290,9 @@ def logistic(key, shape=(), dtype=np.float64):
Returns:
A random array with the specified shape and dtype.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `logistic` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _logistic(key, shape, dtype)
@@ -1297,6 +1333,9 @@ def pareto(key, b, shape=None, dtype=np.float64):
A random array with the specified dtype and with shape given by ``shape`` if
``shape`` is not None, or else by ``b.shape``.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `pareto` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
if shape is not None:
shape = abstract_arrays.canonicalize_shape(shape)
@@ -1331,6 +1370,9 @@ def t(key, df, shape=(), dtype=np.float64):
A random array with the specified dtype and with shape given by ``shape`` if
``shape`` is not None, or else by ``df.shape``.
"""
+ if not dtypes.issubdtype(dtype, np.floating):
+ raise ValueError(f"dtype argument to `t` must be a float "
+ f"dtype, got {dtype}")
dtype = dtypes.canonicalize_dtype(dtype)
shape = abstract_arrays.canonicalize_shape(shape)
return _t(key, df, shape, dtype)
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -687,6 +687,10 @@ def testPRNGValues(self):
random.fold_in(k, 4),
np.array([2285895361, 433833334], dtype='uint32'))
+ def testDtypeErrorMessage(self):
+ with self.assertRaisesRegex(ValueError, r"dtype argument to.*"):
+ random.normal(random.PRNGKey(0), (), dtype=jnp.int32)
+
if __name__ == "__main__":
absltest.main()
| Cannot specify data type to be int in `random.normal`
I have been trying to use `random` with data type `np.int32` or any other int type (e.g. `np.int8`, `np.int16`).
Recreating the bug:
```python
from jax import random
import jax.numpy as np
key = random.PRNGKey(4)
x = random.normal(key,dtype=np.int32)
```
Error message:
```
TypeError: No loop matching the specified signature and casting was found for ufunc nextafter
```
Full logs can be found below:
```
TypeError Traceback (most recent call last)
<ipython-input-1-eb3ab6450cb1> in <module>
3
4 key = random.PRNGKey(4)
----> 5 x = random.normal(key,dtype=np.int32)
~/anaconda3/lib/python3.6/site-packages/jax/random.py in normal(key, shape, dtype)
546 dtype = dtypes.canonicalize_dtype(dtype)
547 shape = abstract_arrays.canonicalize_shape(shape)
--> 548 return _normal(key, shape, dtype)
549
550 @partial(jit, static_argnums=(1, 2))
~/anaconda3/lib/python3.6/site-packages/jax/api.py in f_jitted(*args, **kwargs)
165 flat_fun, out_tree = flatten_fun(f, in_tree)
166 out = xla.xla_call(flat_fun, *args_flat, device=device, backend=backend,
--> 167 name=flat_fun.__name__, donated_invars=donated_invars)
168 return tree_unflatten(out_tree(), out)
169
~/anaconda3/lib/python3.6/site-packages/jax/core.py in _call_bind(processor, post_processor, primitive, f, *args, **params)
1077 if top_trace is None:
1078 with new_sublevel():
-> 1079 outs = primitive.impl(f, *args, **params)
1080 else:
1081 tracers = map(top_trace.full_raise, args)
~/anaconda3/lib/python3.6/site-packages/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, donated_invars, *args)
525
526 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name, donated_invars):
--> 527 compiled_fun = _xla_callable(fun, device, backend, name, donated_invars, *map(arg_spec, args))
528 try:
529 return compiled_fun(*args)
~/anaconda3/lib/python3.6/site-packages/jax/linear_util.py in memoized_fun(fun, *args)
219 fun.populate_stores(stores)
220 else:
--> 221 ans = call(fun, *args)
222 cache[key] = (ans, fun.stores)
223 return ans
~/anaconda3/lib/python3.6/site-packages/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
585 pvals: Sequence[pe.PartialVal] = [pe.PartialVal.unknown(aval) for aval in abstract_args]
586 jaxpr, pvals, consts = pe.trace_to_jaxpr(
--> 587 fun, pvals, instantiate=False, stage_out=True, bottom=True)
588 jaxpr, uses_outfeed = apply_outfeed_rewriter(jaxpr)
589 _map(prefetch, it.chain(consts, jaxpr_literals(jaxpr)))
~/anaconda3/lib/python3.6/site-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate, stage_out, bottom, trace_type)
450 with new_master(trace_type, bottom=bottom) as master:
451 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 452 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
453 assert not env
454 del master
~/anaconda3/lib/python3.6/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
~/anaconda3/lib/python3.6/site-packages/jax/random.py in _normal(key, shape, dtype)
551 def _normal(key, shape, dtype):
552 _check_shape("normal", shape)
--> 553 lo = np.nextafter(np.array(-1., dtype), 0., dtype=dtype)
554 hi = np.array(1., dtype)
555 u = uniform(key, shape, dtype, lo, hi)
TypeError: No loop matching the specified signature and casting was found for ufunc nextafter
```
| Thanks for reporting this!
I see this as an issue of raising a bad error message; we probably want the `random.normal` API only to accept inexact (i.e. floating point) dtypes and not int dtypes, but one can always cast the output to an integer as needed.
Thanks a lot for your prompt comment!
Yes, that's correct. I have also ended up casting these values to ints. However, just noticed I cannot use `dtype=np.complex128` or `dtype=np.complex64`. Is it still part of intended behavior of this API?
I think this is probably intended; by analogy, `np.random.normal` doesn't accept dtype at all, and can only return float64. If you want to construct complex normal values you need to do it by way of floats.
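A sketch of building complex normal draws from two float draws (variable names are mine):
```python
import jax

key = jax.random.PRNGKey(0)
k_re, k_im = jax.random.split(key)
# real and imaginary parts sampled independently, then combined into a complex array
z = jax.random.normal(k_re, (4,)) + 1j * jax.random.normal(k_im, (4,))
```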
> I think this is probably intended; by analogy, `np.random.normal` doesn't accept dtype at all, and can only return float64. If you want to construct complex normal values you need to do it by way of floats.
That's very true. I realised my previous response was irrelevant, hence deleted it.
Also, I figured one can specify the dtype using `astype`, instead of the third parameter of `jax.random.normal`.
```python
from jax import random
import jax.numpy as np
key = random.PRNGKey(4)
x = random.normal(key,(1,)).astype(np.int32)
```
This only works if I am generating an array, but not a single value. | 2020-06-04T06:00:19 |
google/jax | 3,328 | google__jax-3328 | [
"3326"
] | a63b9cc256feec01b93a1edb8520d0f94e4bcd5e | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2030,7 +2030,7 @@ def column_stack(tup):
for v in tup:
arr = array(v)
if arr.ndim < 2:
- arr = expand_dims(arr, axis=0)
+ arr = atleast_2d(arr).T
arrays.append(arr)
return concatenate(arrays, 1)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1659,6 +1659,26 @@ def testDigitize(self, xshape, binshape, right, reverse, dtype, rng_factory):
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
self._CompileAndCheck(jnp_fun, args_maker)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_{}".format(
+ jtu.format_test_name_suffix("", [shape] * len(dtypes), dtypes)),
+ "shape": shape, "dtypes": dtypes}
+ for dtypes in [
+ [np.float32],
+ [np.float32, np.float32],
+ [np.float32, np.int32, np.float32],
+ [np.float32, np.int64, np.float32],
+ [np.float32, np.int32, np.float64],
+ ]
+ for shape in [(), (2,), (3, 4), (1, 5)]))
+ def testColumnStack(self, shape, dtypes):
+ rng = jtu.rand_default(self.rng())
+ args_maker = lambda: [[rng(shape, dtype) for dtype in dtypes]]
+ np_fun = _promote_like_jnp(np.column_stack)
+ jnp_fun = jnp.column_stack
+ self._CheckAgainstNumpy(jnp_fun, np_fun, args_maker)
+ self._CompileAndCheck(jnp_fun, args_maker)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_axis={}".format(
jtu.format_test_name_suffix("", [shape] * len(dtypes), dtypes), axis),
| Incorrect output from jax.numpy.column_stack
Using the CPU-only version of JAX (`jax 0.1.69`, `jaxlib 0.1.47`) in a fresh conda environment.
```
import numpy as np
import jax.numpy as jp
x = np.array([1., 1.])
y = np.array([2., 2.])
print(np.column_stack((x, y))) # [[1. 2.], [1. 2.]], expected
print(jp.column_stack((x, y))) # [[1., 1., 2., 2.]], incorrect
```
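The root cause is visible in the patch above: a 1-D input was promoted to a *row* (`expand_dims(arr, axis=0)`) instead of a *column*, so concatenating along axis 1 glued the rows end to end. A small shape check (plain NumPy, for illustration):
```python
import numpy as np

x = np.array([1., 1.])
print(np.expand_dims(x, axis=0).shape)  # (1, 2) -- a row; axis-1 concatenation of rows gives [[1. 1. 2. 2.]]
print(np.atleast_2d(x).T.shape)         # (2, 1) -- a column, which is what column_stack needs
```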
| Hmm seems close enough up to numerical error...
Just kidding! Thanks for spotting the issue. Our tests must be lacking here.
Tests? Who needs tests? :grin: https://github.com/google/jax/search?q=column_stack | 2020-06-04T22:39:36 |
google/jax | 3,334 | google__jax-3334 | [
"3332"
] | 93646b5785667dfe1d8726a4e39e4a367db6f5f2 | diff --git a/jax/lax_linalg.py b/jax/lax_linalg.py
--- a/jax/lax_linalg.py
+++ b/jax/lax_linalg.py
@@ -854,8 +854,16 @@ def svd_impl(operand, full_matrices, compute_uv):
return s, u, vt
def svd_translation_rule(c, operand, full_matrices, compute_uv):
- raise NotImplementedError(
- "Singular value decomposition is only implemented on the CPU and GPU backends")
+ shape = c.get_shape(operand).dimensions()
+ m, n = shape[-2:]
+ u, s, v = xops.SVD(operand)
+ permutation = list(range(len(shape)))
+ permutation[-1], permutation[-2] = permutation[-2], permutation[-1]
+ vt = xops.Transpose(v, permutation)
+ if not full_matrices and m != n:
+ u = xops.SliceInDim(u, 0, min(m, n), stride=1, dimno=len(shape) - 1)
+ vt = xops.SliceInDim(vt, 0, min(m, n), stride=1, dimno=len(shape) - 2)
+ return xops.Tuple(c, [s, u, vt])
def svd_abstract_eval(operand, full_matrices, compute_uv):
if isinstance(operand, ShapedArray):
| diff --git a/tests/linalg_test.py b/tests/linalg_test.py
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -492,8 +492,10 @@ def testNorm(self, shape, dtype, ord, axis, keepdims, rng_factory):
for full_matrices in [False, True]
for compute_uv in [False, True]
for rng_factory in [jtu.rand_default]))
- @jtu.skip_on_devices("tpu")
def testSVD(self, b, m, n, dtype, full_matrices, compute_uv, rng_factory):
+ if (jnp.issubdtype(dtype, np.complexfloating) and
+ jtu.device_under_test() == "tpu"):
+ raise unittest.SkipTest("No complex SVD implementation")
rng = rng_factory(self.rng())
_skip_if_unsupported_type(dtype)
args_maker = lambda: [rng(b + (m, n), dtype)]
@@ -618,10 +620,12 @@ def testQrBatching(self, shape, dtype, rng_factory):
for shape in [(1, 1), (4, 4), (2, 3, 5), (5, 5, 5), (20, 20), (5, 10)]
for pnorm in [jnp.inf, -jnp.inf, 1, -1, 2, -2, 'fro']
for dtype in float_types + complex_types))
- @jtu.skip_on_devices("tpu") # SVD is not implemented on the TPU backend
@jtu.skip_on_devices("gpu") # TODO(#2203): numerical errors
def testCond(self, shape, pnorm, dtype):
_skip_if_unsupported_type(dtype)
+ if (jnp.issubdtype(dtype, np.complexfloating) and
+ jtu.device_under_test() == "tpu"):
+ raise unittest.SkipTest("No complex SVD implementation")
def gen_mat():
# arr_gen = jtu.rand_some_nan(self.rng())
@@ -733,8 +737,10 @@ def args_maker():
for shape in [(1, 1), (4, 4), (2, 70, 7), (2000, 7), (7, 1000), (70, 7, 2)]
for dtype in float_types + complex_types
for rng_factory in [jtu.rand_default]))
- @jtu.skip_on_devices("tpu") # SVD is not implemented on the TPU backend
def testPinv(self, shape, dtype, rng_factory):
+ if (jnp.issubdtype(dtype, np.complexfloating) and
+ jtu.device_under_test() == "tpu"):
+ raise unittest.SkipTest("No complex SVD implementation")
rng = rng_factory(self.rng())
_skip_if_unsupported_type(dtype)
args_maker = lambda: [rng(shape, dtype)]
@@ -742,10 +748,10 @@ def testPinv(self, shape, dtype, rng_factory):
self._CheckAgainstNumpy(np.linalg.pinv, jnp.linalg.pinv, args_maker,
tol=1e-2)
self._CompileAndCheck(jnp.linalg.pinv, args_maker)
- # TODO(phawkins): 1e-1 seems like a very loose tolerance.
- jtu.check_grads(jnp.linalg.pinv, args_maker(), 2, rtol=1e-1, atol=2e-1)
+ if jtu.device_under_test() != "tpu":
+ # TODO(phawkins): 1e-1 seems like a very loose tolerance.
+ jtu.check_grads(jnp.linalg.pinv, args_maker(), 2, rtol=1e-1, atol=2e-1)
- @jtu.skip_on_devices("tpu") # SVD is not implemented on the TPU backend
def testPinvGradIssue2792(self):
def f(p):
a = jnp.array([[0., 0.],[-p, 1.]], jnp.float32) * 1 / (1 + p**2)
@@ -787,8 +793,10 @@ def testMatrixPower(self, shape, dtype, n, rng_factory):
for shape in [(3, ), (1, 2), (8, 5), (4, 4), (5, 5), (50, 50)]
for dtype in float_types + complex_types
for rng_factory in [jtu.rand_default]))
- @jtu.skip_on_devices("tpu")
def testMatrixRank(self, shape, dtype, rng_factory):
+ if (jnp.issubdtype(dtype, np.complexfloating) and
+ jtu.device_under_test() == "tpu"):
+ raise unittest.SkipTest("No complex SVD implementation")
rng = rng_factory(self.rng())
_skip_if_unsupported_type(dtype)
args_maker = lambda: [rng(shape, dtype)]
| np.linalg.svd on TPU
JAX currently only supports SVD on the CPU or GPU and one gets the following error when attempting to run it on a TPU:
```
NotImplementedError: Singular value decomposition is only implemented on the CPU and GPU backends
```
As far as I know, SVD is supported in TensorFlow, so XLA primitives for it should exist. What needs to be done to add support in JAX?
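For reference, the patch above adds a TPU translation rule for `svd_p` that lowers to the XLA `SVD` op (real dtypes only, judging by the updated test skips), so the usual call works unchanged; a sketch:
```python
import jax.numpy as jnp

a = jnp.arange(12.0).reshape(3, 4)
u, s, vt = jnp.linalg.svd(a, full_matrices=False)
```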
| 2020-06-05T14:14:05 |
|
google/jax | 3,350 | google__jax-3350 | [
"2919"
] | 2a10dbbf3730ca8ff359716652b9e7fb590a365e | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -35,20 +35,21 @@
import numpy as np
import opt_einsum
-from jax import jit, device_put, custom_jvp
+from jax import jit, custom_jvp
from .vectorize import vectorize
from ._util import _wraps
from .. import core
from .. import dtypes
from ..abstract_arrays import UnshapedArray, ShapedArray, ConcreteArray, canonicalize_shape
from ..config import flags
-from ..interpreters.xla import DeviceArray
+from ..interpreters.xla import (DeviceArray, device_put, array_result_handler,
+ DeviceValue)
from ..interpreters.masking import Poly
from .. import lax
from .. import ops
from ..util import (partial, unzip2, prod as _prod,
subvals, safe_zip)
-from ..lib import pytree
+from ..tree_util import tree_leaves, tree_flatten
FLAGS = flags.FLAGS
flags.DEFINE_enum(
@@ -2107,19 +2108,24 @@ def array(object, dtype=None, copy=True, order="K", ndmin=0):
if order is not None and order != "K":
raise NotImplementedError("Only implemented for order='K'")
lax._check_user_dtype_supported(dtype, "array")
-
- if isinstance(object, ndarray) or isscalar(object):
- out = device_put(object)
- if dtype and _dtype(out) != dtypes.canonicalize_dtype(dtype):
+ dtype = dtype and dtypes.canonicalize_dtype(dtype)
+
+ if _can_call_numpy_array(object):
+ object = np.array(object, dtype=dtype, ndmin=ndmin)
+ assert type(object) not in dtypes.python_scalar_dtypes
+
+ if type(object) is np.ndarray:
+ out = _device_put_raw(object)
+ if dtype: assert _dtype(out) == dtype
+ elif isinstance(object, (DeviceValue, core.Tracer)):
+ out = object
+ if dtype and _dtype(out) != dtype:
out = lax.convert_element_type(out, dtype)
- elif hasattr(object, '__array__'):
- # this case is for duck-typed handling of objects that implement `__array__`
- out = array(object.__array__(), dtype and dtypes.canonicalize_dtype(dtype))
elif isinstance(object, (list, tuple)):
if object:
out = stack([array(elt, dtype=dtype) for elt in object])
else:
- out = array(np.array([], dtype or float_))
+ out = _device_put_raw(np.array([], dtype or float_))
else:
try:
view = memoryview(object)
@@ -2134,6 +2140,15 @@ def array(object, dtype=None, copy=True, order="K", ndmin=0):
out = lax.broadcast(out, (1,) * (ndmin - ndim(out)))
return out
+def _can_call_numpy_array(x):
+ return _all(not isinstance(l, (core.Tracer, DeviceValue))
+ for l in tree_leaves(x))
+
+def _device_put_raw(x):
+ aval = core.raise_to_shaped(core.get_aval(x))
+ return array_result_handler(None, aval)(device_put(x))
+
+
@_wraps(np.asarray)
def asarray(a, dtype=None, order=None):
lax._check_user_dtype_supported(dtype, "asarray")
@@ -3431,7 +3446,7 @@ def _split_index_for_jit(idx):
# indexing logic to handle them.
idx = _expand_bool_indices(idx)
- leaves, treedef = pytree.flatten(idx)
+ leaves, treedef = tree_flatten(idx)
dynamic = [None] * len(leaves)
static = [None] * len(leaves)
for i, x in enumerate(leaves):
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2134,16 +2134,17 @@ def __array__(self, dtype=None):
return np.array([], dtype=dtype)
assert type(jnp.array(NDArrayLike())) == jax.interpreters.xla.DeviceArray
- class DeviceArrayLike:
- def __array__(self, dtype=None):
- return jnp.array([], dtype=dtype)
- assert type(jnp.array(DeviceArrayLike())) == jax.interpreters.xla.DeviceArray
+ # NOTE(mattjj): disabled b/c __array__ must produce ndarrays
+ # class DeviceArrayLike:
+ # def __array__(self, dtype=None):
+ # return jnp.array([], dtype=dtype)
+ # assert type(jnp.array(DeviceArrayLike())) == jax.interpreters.xla.DeviceArray
def testArrayMethod(self):
class arraylike(object):
dtype = np.float32
def __array__(self, dtype=None):
- return 3.
+ return np.array(3., dtype=dtype)
a = arraylike()
ans = jnp.array(a)
assert ans == 3.
| `jax.numpy.array()` is much slower than `numpy.array()` when called on a list
As pointed out in the docs, converting lists to arrays is terribly slow compared to numpy:
```
import numpy as np
import jax.numpy as jnp
%%timeit
np.array([0] * int(1e6))
> 60.1 ms ± 578 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
jnp.array([0] * int(1e6))
> 3min 12s ± 794 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
This is somewhat remedied by first calling numpy:
```
%%timeit
jnp.array(np.array([0] * int(1e6)))
> 61.9 ms ± 574 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
Ideally, one wouldn't want to make a proxy for `array()` by defining a function that always calls numpy first, so it would be nice to either do this explicitly at a low level, or improve the implementation of `array()` so as to avoid this huge discrepancy in speed.
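For completeness, a sketch of exactly that kind of proxy (a stopgap, not a recommendation; the point of the issue is that `jnp.array` itself should be fast):
```python
import numpy as onp
import jax.numpy as jnp

def fast_array(obj, dtype=None):
    # materialize on the host with numpy first, then hand the result to jax
    return jnp.asarray(onp.asarray(obj, dtype=dtype))
```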
| This is something worth fixing!
@hawkinsp did a quick profile, I'll put the screenshot here just for posterity:

From the left half:

From the right half:

Looks like a lot of dumb stuff going on. The good news is we should be able to improve it significantly :P
The same problem happens when converting from a bunch of `jnp.array`s to `np.array`. But the bulk of the time is in the creation of the `jnp.array`s.
```
import numpy as np
import jax.numpy as jnp
%%timeit
np.array([0] * int(1e3))
10000 loops, best of 3: 119 µs per loop
%%timeit
arr = jnp.array([0] * int(1e3))
1 loops, best of 3: 212 ms per loop
arr = jnp.array([0] * int(1e3))
%%timeit
np.array(arr)
100000 loops, best of 3: 7.1 µs per loop
``` | 2020-06-07T03:11:45 |
google/jax | 3,385 | google__jax-3385 | [
"3372"
] | d55c0edeace05c51fc979d0ea2481c2cb72992ca | diff --git a/jax/scipy/special.py b/jax/scipy/special.py
--- a/jax/scipy/special.py
+++ b/jax/scipy/special.py
@@ -19,6 +19,7 @@
from .. import lax
from .. import api
+from ..interpreters import ad
from ..numpy import lax_numpy as jnp
from ..numpy.lax_numpy import (asarray, _reduction_dims, _constant_like,
_promote_args_inexact)
@@ -47,6 +48,7 @@ def betainc(a, b, x):
def digamma(x):
x, = _promote_args_inexact("digamma", x)
return lax.digamma(x)
+ad.defjvp(lax.digamma_p, lambda g, x: lax.mul(g, polygamma(1, x)))
@_wraps(osp_special.gammainc, update_doc=False)
@@ -150,6 +152,71 @@ def multigammaln(a, d):
return res + constant
+# coefs of (2k)! / B_{2k} where B are bernoulli numbers
+# those numbers are obtained using https://www.wolframalpha.com
+_BERNOULLI_COEFS = [
+ 12,
+ -720,
+ 30240,
+ -1209600,
+ 47900160,
+ -1307674368000 / 691,
+ 74724249600,
+ -10670622842880000 / 3617,
+ 5109094217170944000 / 43867,
+ -802857662698291200000 / 174611,
+ 14101100039391805440000 / 77683,
+ -1693824136731743669452800000 / 236364091,
+ 186134520519971831808000000 / 657931,
+ -37893265687455865519472640000000 / 3392780147,
+ 759790291646040068357842010112000000 / 1723168255201,
+ -134196726836183700385281186201600000000 / 7709321041217,
+]
+
+
+@_wraps(osp_special.zeta)
+def zeta(x, q=None):
+ assert q is not None, "Riemann zeta function is not implemented yet."
+ # Reference: Johansson, Fredrik.
+ # "Rigorous high-precision computation of the Hurwitz zeta function and its derivatives."
+ # Numerical Algorithms 69.2 (2015): 253-270.
+ # https://arxiv.org/abs/1309.2877 - formula (5)
+ # here we keep the same notation as in reference
+ s, a = _promote_args_inexact("zeta", x, q)
+ dtype = lax.dtype(a).type
+ s_, a_ = jnp.expand_dims(s, -1), jnp.expand_dims(a, -1)
+ # precision ~ N, M
+ N = M = dtype(8) if lax.dtype(a) == jnp.float32 else dtype(16)
+ assert M <= len(_BERNOULLI_COEFS)
+ k = np.arange(N, dtype=N.dtype)
+ S = jnp.sum((a_ + k) ** -s_, -1)
+ I = lax.div((a + N) ** (dtype(1) - s), s - dtype(1))
+ T0 = (a + N) ** -s
+ s_over_a = (s_ + np.arange(2 * M, dtype=M.dtype)) / (a_ + N)
+ T1 = jnp.cumprod(s_over_a, -1)[..., ::2]
+ coefs = np.array(_BERNOULLI_COEFS[:T1.shape[-1]], dtype=dtype)
+ T1 = T1 / coefs
+ T = T0 * (dtype(0.5) + T1.sum(-1))
+ return S + I + T
+
+
+@_wraps(osp_special.polygamma, update_doc=False)
+def polygamma(n, x):
+ assert jnp.issubdtype(lax.dtype(n), jnp.integer)
+ n, x = _promote_args_inexact("polygamma", n, x)
+ shape = lax.broadcast_shapes(n.shape, x.shape)
+ return _polygamma(jnp.broadcast_to(n, shape), jnp.broadcast_to(x, shape))
+
+
[email protected]_jvp
+def _polygamma(n, x):
+ dtype = lax.dtype(n).type
+ n_plus = n + dtype(1)
+ sign = dtype(1) - (n_plus % dtype(2)) * dtype(2)
+ return jnp.where(n == 0, digamma(x), sign * jnp.exp(gammaln(n_plus)) * zeta(n_plus, x))
+_polygamma.defjvps(None, lambda g, ans, n, x: lax.mul(g, _polygamma(n + 1, x)))
+
+
# Normal distributions
# Functions "ndtr" and "ndtri" are derived from calculations made in:
| diff --git a/tests/lax_scipy_test.py b/tests/lax_scipy_test.py
--- a/tests/lax_scipy_test.py
+++ b/tests/lax_scipy_test.py
@@ -44,18 +44,18 @@
OpRecord = collections.namedtuple(
"OpRecord",
- ["name", "nargs", "dtypes", "rng_factory", "test_autodiff", "test_name"])
+ ["name", "nargs", "dtypes", "rng_factory", "test_autodiff", "nondiff_argnums", "test_name"])
-def op_record(name, nargs, dtypes, rng_factory, test_grad, test_name=None):
+def op_record(name, nargs, dtypes, rng_factory, test_grad, nondiff_argnums=(), test_name=None):
test_name = test_name or name
- return OpRecord(name, nargs, dtypes, rng_factory, test_grad, test_name)
+ nondiff_argnums = tuple(sorted(set(nondiff_argnums)))
+ return OpRecord(name, nargs, dtypes, rng_factory, test_grad, nondiff_argnums, test_name)
JAX_SPECIAL_FUNCTION_RECORDS = [
- # TODO: digamma has no JVP implemented.
op_record("betaln", 2, float_dtypes, jtu.rand_positive, False),
op_record("betainc", 3, float_dtypes, jtu.rand_positive, False),
- op_record("digamma", 1, float_dtypes, jtu.rand_positive, False),
+ op_record("digamma", 1, float_dtypes, jtu.rand_positive, True),
op_record("gammainc", 2, float_dtypes, jtu.rand_positive, True),
op_record("gammaincc", 2, float_dtypes, jtu.rand_positive, True),
op_record("erf", 1, float_dtypes, jtu.rand_small_positive, True),
@@ -72,8 +72,12 @@ def op_record(name, nargs, dtypes, rng_factory, test_grad, test_name=None):
op_record("ndtr", 1, float_dtypes, jtu.rand_default, True),
# TODO(phawkins): gradient of entr yields NaNs.
op_record("entr", 1, float_dtypes, jtu.rand_default, False),
+ op_record("polygamma", 2, (int_dtypes, float_dtypes), jtu.rand_positive, True, (0,)),
op_record("xlogy", 2, float_dtypes, jtu.rand_default, True),
op_record("xlog1py", 2, float_dtypes, jtu.rand_default, True),
+ # TODO: enable gradient test for zeta by restricting the domain of
+ # of inputs to some reasonable intervals
+ op_record("zeta", 2, float_dtypes, jtu.rand_positive, False),
]
CombosWithReplacement = itertools.combinations_with_replacement
@@ -117,22 +121,32 @@ def lax_fun(array_to_reduce):
rec.test_name, shapes, dtypes),
"rng_factory": rec.rng_factory, "shapes": shapes, "dtypes": dtypes,
"test_autodiff": rec.test_autodiff,
+ "nondiff_argnums": rec.nondiff_argnums,
"scipy_op": getattr(osp_special, rec.name),
"lax_op": getattr(lsp_special, rec.name)}
for shapes in CombosWithReplacement(all_shapes, rec.nargs)
- for dtypes in CombosWithReplacement(rec.dtypes, rec.nargs))
+ for dtypes in (CombosWithReplacement(rec.dtypes, rec.nargs)
+ if isinstance(rec.dtypes, list) else itertools.product(*rec.dtypes)))
for rec in JAX_SPECIAL_FUNCTION_RECORDS))
def testScipySpecialFun(self, scipy_op, lax_op, rng_factory, shapes, dtypes,
- test_autodiff):
+ test_autodiff, nondiff_argnums):
rng = rng_factory(self.rng())
args_maker = self._GetArgsMaker(rng, shapes, dtypes)
args = args_maker()
self.assertAllClose(scipy_op(*args), lax_op(*args), atol=1e-3, rtol=1e-3,
check_dtypes=False)
- self._CompileAndCheck(lax_op, args_maker, rtol=1e-5)
+ self._CompileAndCheck(lax_op, args_maker, rtol=1e-4)
if test_autodiff:
- jtu.check_grads(lax_op, args, order=1,
+ def partial_lax_op(*vals):
+ list_args = list(vals)
+ for i in nondiff_argnums:
+ list_args.insert(i, args[i])
+ return lax_op(*list_args)
+
+ assert list(nondiff_argnums) == sorted(set(nondiff_argnums))
+ diff_args = [x for i, x in enumerate(args) if i not in nondiff_argnums]
+ jtu.check_grads(partial_lax_op, diff_args, order=1,
atol=jtu.if_device_under_test("tpu", .1, 1e-3),
rtol=.1, eps=1e-3)
| NotImplementedError: Forward-mode differentiation rule for 'digamma' not implemented
I'm running into the following error using jax 0.1.57 and numpyro 0.2.4 while trying to model a GammaPoisson distribution with SVI:
NotImplementedError: Forward-mode differentiation rule for 'digamma' not implemented.
This issue can be replicated by running the following:
```python
import jax
jax.hessian(jax.scipy.special.gammaln)(jax.numpy.ones(3))
jax.grad(jax.scipy.special.digamma)(jax.numpy.ones(3))
```
It sounds like the ideal case would be an implementation of [scipy.special.polygamma](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.polygamma.html). This way numpyro could handle taking the derivative of any polygamma. For more information, please look here: https://github.com/pyro-ppl/numpyro/issues/621#issuecomment-640892756.
The GammaPoisson distribution is critical for modeling in my industry, so it would be awesome to get it working.
Thank you!
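With the rules added in this PR (`digamma`'s JVP is defined via `polygamma(1, x)`), calls like the ones in the repro go through; a sketch with scalar inputs:
```python
import jax
from jax.scipy import special

trigamma_at_1 = jax.grad(special.digamma)(1.0)  # equals special.polygamma(1, 1.0), about pi**2 / 6
hess = jax.hessian(special.gammaln)(1.0)        # second derivative of log-gamma at 1, same value
```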
| This is a good topic for community contributions! | 2020-06-09T18:33:31 |
google/jax | 3,390 | google__jax-3390 | [
"3389"
] | 15bc62204ecd201baf331bc926e9d77548e67f98 | diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -75,7 +75,7 @@ def _initial_style_jaxpr(fun, in_avals):
typed_jaxpr = core.TypedJaxpr(jaxpr, consts, in_avals, out_avals)
return typed_jaxpr
-def sum_tangents(x, *xs):
+def sum_tangents(_, x, *xs):
return reduce(ad.add_tangents, xs, x)
def zeros_like_pytree(x):
@@ -196,7 +196,7 @@ def jvp(primals, tangents):
zeros = zeros_like_pytree(primal_out)
all_tangents_out = [jvp(t, primal_out, *primals) if jvp else zeros
for t, jvp in zip(tangents, jvps)]
- tangent_out = tree_multimap(sum_tangents, *all_tangents_out)
+ tangent_out = tree_multimap(sum_tangents, primal_out, *all_tangents_out)
return primal_out, tangent_out
self.defjvp(jvp)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2476,6 +2476,17 @@ def foo(x):
expected = -1.
self.assertAllClose(ans, expected, check_dtypes=False)
+ def test_custom_jvps_first_rule_is_none(self):
+ # https://github.com/google/jax/issues/3389
+ @api.custom_jvp
+ def f(x, y):
+ return x ** 2 * y
+
+ f.defjvps(None, lambda x_dot, primal_out, x, y: 2 * x * y * x_dot)
+ ans = grad(f, 1)(2., 3.) # doesn't crash
+ expected = 12.
+ self.assertAllClose(ans, expected, check_dtypes=False)
+
class CustomVJPTest(jtu.JaxTestCase):
| custom_jvp does not work when first jvp rule is None
Here is a repro code
```python
from jax import grad, custom_jvp
@custom_jvp
def f(x, y):
return x ** 2 * y
f.defjvps(None, lambda x_dot, primal_out, x, y: 2 * x * y * x_dot)
print(grad(f, 1)(2., 3.))
```
| 2020-06-09T22:20:29 |
|
google/jax | 3,413 | google__jax-3413 | [
"3412"
] | 04c9b3278812b78549d44b6aee1b3312cd6672b6 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -43,7 +43,7 @@
from ..abstract_arrays import UnshapedArray, ShapedArray, ConcreteArray, canonicalize_shape
from ..config import flags
from ..interpreters.xla import (DeviceArray, device_put, array_result_handler,
- DeviceValue)
+ DeviceValue, abstractify)
from ..interpreters.masking import Poly
from .. import lax
from .. import ops
@@ -2120,9 +2120,12 @@ def array(object, dtype=None, copy=True, order="K", ndmin=0):
out = _device_put_raw(object)
if dtype: assert _dtype(out) == dtype
elif isinstance(object, (DeviceValue, core.Tracer)):
- out = object
- if dtype and _dtype(out) != dtype:
- out = lax.convert_element_type(out, dtype)
+ if isinstance(object, DeviceArray) and copy:
+ # We perform a copy by bouncing back to the host
+ # TODO(phawkins): add a device runtime function to copy a buffer
+ out = _device_put_raw(np.asarray(object))
+ else:
+ out = object
elif isinstance(object, (list, tuple)):
if object:
out = stack([array(elt, dtype=dtype) for elt in object])
@@ -2138,6 +2141,9 @@ def array(object, dtype=None, copy=True, order="K", ndmin=0):
raise TypeError("Unexpected input type for array: {}".format(type(object)))
+ if dtype and _dtype(out) != dtype:
+ out = lax.convert_element_type(out, dtype)
+
if ndmin > ndim(out):
out = lax.broadcast(out, (1,) * (ndmin - ndim(out)))
return out
@@ -2146,9 +2152,9 @@ def _can_call_numpy_array(x):
return _all(not isinstance(l, (core.Tracer, DeviceValue))
for l in tree_leaves(x))
+# TODO(mattjj): maybe move these two functions into xla.py
def _device_put_raw(x):
- aval = core.raise_to_shaped(core.get_aval(x))
- return array_result_handler(None, aval)(device_put(x))
+ return array_result_handler(None, abstractify(x))(device_put(x))
@_wraps(np.asarray)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -3114,6 +3114,23 @@ def test_jit_nested_donate_ignored(self):
with self.assertRaisesRegex(ValueError, "nested.*not supported"):
jit_fun(a)
+ def test_jnp_array_copy(self):
+ # https://github.com/google/jax/issues/3412
+
+ @partial(api.jit, donate_argnums=(0,))
+ def _test(array):
+ return array.at[0].set(77)
+
+ x = jnp.asarray([0, 1])
+ x_copy = jnp.array(x, copy=True)
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore")
+ _test(x) # donation
+
+ # Gives: RuntimeError: Invalid argument: CopyToHostAsync() called on invalid buffer.
+ print(x_copy) # doesn't crash
+
+
# === pmap ===
@jtu.skip_on_devices("cpu", "gpu") # In/out aliasing only supported on TPU.
| jnp.asarray does not copy when copy=True
This can be a problem now that we have donate_argnums. It took me a while to realize this, before I smartened up and just used x.copy() in the example below.
```
import functools
import jax
import jax.numpy as jnp
@functools.partial(jax.jit, donate_argnums=(0,))
def _test(array: jnp.ndarray):
    return array.at[0].set(77)
x = jnp.asarray([0, 1])
x_copy = jnp.array(x, copy=True)
new_x = _test(x)
# Gives: RuntimeError: Invalid argument: CopyToHostAsync() called on invalid buffer.
print(x_copy)
```
@tomhennigan
| @hawkinsp advises that we don't have in our device runtime a way to copy buffers! He'll add one, but in the meantime I'll add something that bounces back to the host. | 2020-06-11T21:24:39 |
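For reference, a user-level sketch of the "bounce back to the host" copy described above (the same approach the patch takes internally via `_device_put_raw(np.asarray(...))`); this is only an illustrative workaround, not a library API:
```python
import numpy as np
import jax.numpy as jnp

x = jnp.asarray([0, 1])
# np.asarray pulls the buffer to host memory; re-wrapping it creates a fresh,
# independent device buffer that survives donation of `x`.
x_copy = jnp.asarray(np.asarray(x))
```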
google/jax | 3,436 | google__jax-3436 | [
"3419"
] | b2105ab370a4567aaf4eed910395f20a2bda67d0 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -1995,6 +1995,8 @@ def concatenate(arrays, axis=0):
raise ValueError("Need at least one array to concatenate.")
if ndim(arrays[0]) == 0:
raise ValueError("Zero-dimensional arrays cannot be concatenated.")
+ if axis is None:
+ return concatenate([ravel(a) for a in arrays], axis=0)
axis = _canonicalize_axis(axis, ndim(arrays[0]))
arrays = _promote_dtypes(*arrays)
# lax.concatenate can be slow to compile for wide concatenations, so form a
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1218,6 +1218,12 @@ def args_maker():
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
self._CompileAndCheck(jnp_fun, args_maker)
+ def testConcatenateAxisNone(self):
+ # https://github.com/google/jax/issues/3419
+ a = jnp.array([[1, 2], [3, 4]])
+ b = jnp.array([[5]])
+ jnp.concatenate((a, b), axis=None)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_axis={}_baseshape=[{}]_dtypes=[{}]".format(
axis, ",".join(str(d) for d in base_shape),
| concatenate fails when axis = None
When running the documentation example I'm getting an error:
```
import jax.numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6]])
np.concatenate((a, b), axis=None)
```
Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-37-28d5f29078d3> in <module>()
5 np.concatenate((a, b.T), axis=1)
6
----> 7 np.concatenate((a, b), axis=None)
8
1 frames
/usr/local/lib/python3.6/dist-packages/jax/numpy/lax_numpy.py in concatenate(arrays, axis)
1996 if ndim(arrays[0]) == 0:
1997 raise ValueError("Zero-dimensional arrays cannot be concatenated.")
-> 1998 axis = _canonicalize_axis(axis, ndim(arrays[0]))
1999 arrays = _promote_dtypes(*arrays)
2000 # lax.concatenate can be slow to compile for wide concatenations, so form a
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in _canonicalize_axis(axis, num_dims)
5490 def _canonicalize_axis(axis, num_dims):
5491 """Canonicalize an axis in [-num_dims, num_dims) to [0, num_dims)."""
-> 5492 axis = operator.index(axis)
5493 if not -num_dims <= axis < num_dims:
5494 raise ValueError(
TypeError: 'NoneType' object cannot be interpreted as an integer
```
| JAX currently does not yet support the `axis=None` behavior of NumPy (flattening before concatenation). That said, the error message here is also quite poor.
Contributions would be welcome here -- either to implement the feature or a better error message! Take a look at `concatenate` in `jax/numpy/lax_numpy.py`. | 2020-06-14T20:57:27 |
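For JAX versions without the fix above, a sketch of the same behavior in user code (mirroring the ravel-then-concatenate approach the patch uses):
```python
import jax.numpy as jnp

def concatenate_axis_none(arrays):
    # np.concatenate(..., axis=None) flattens every input before joining
    return jnp.concatenate([jnp.ravel(a) for a in arrays], axis=0)

a = jnp.array([[1, 2], [3, 4]])
b = jnp.array([[5, 6]])
print(concatenate_axis_none((a, b)))  # [1 2 3 4 5 6]
```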
google/jax | 3,439 | google__jax-3439 | [
"3399"
] | b2105ab370a4567aaf4eed910395f20a2bda67d0 | diff --git a/jax/interpreters/batching.py b/jax/interpreters/batching.py
--- a/jax/interpreters/batching.py
+++ b/jax/interpreters/batching.py
@@ -161,10 +161,12 @@ def process_map(self, map_primitive, f: lu.WrappedFun, tracers, params):
if all(dim is not_mapped for dim in dims):
return map_primitive.bind(f, *vals, **params)
else:
+ mapped_invars = params['mapped_invars']
size, = {x.shape[d] for x, d in zip(vals, dims) if d is not not_mapped}
- vals = [moveaxis(x, d, 1) if d is not not_mapped and d != 1 else x
- for x, d in zip(vals, dims)]
- dims = tuple(not_mapped if d is not_mapped else 0 for d in dims)
+ vals = [moveaxis(x, d, 1) if d == 0 and mapped_invar else x
+ for x, d, mapped_invar in zip(vals, dims, mapped_invars)]
+ dims = tuple(not_mapped if d is not_mapped else max(0, d - mapped_invar)
+ for d, mapped_invar in zip(dims, mapped_invars))
f, dims_out = batch_subtrace(f, self.master, dims)
vals_out = map_primitive.bind(f, *vals, **params)
dims_out = tuple(d + 1 if d is not not_mapped else d for d in dims_out())
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -854,6 +854,29 @@ def s(keys):
ans = s(keys) # doesn't crash
self.assertEqual(ans.shape, (13, N_DEVICES))
+ def testVmapOfPmap3(self):
+ # https://github.com/google/jax/issues/3399
+ device_count = xla_bridge.device_count()
+ if device_count < 2:
+ raise SkipTest("test requires at least two devices")
+
+ def map_version(qs, pts):
+ return jax.lax.map(lambda x: func(x, pts), qs)
+
+ def vmap_version(qs, pts):
+ return jax.vmap(func, in_axes=(0, None))(qs, pts)
+
+ def func(q, pts):
+ q_from_pmap = jax.pmap(lambda x, y: y, in_axes=(0, None))(pts, q)
+ return q, q_from_pmap
+
+ pts = jnp.ones(device_count)
+ qs = jnp.asarray(((0,0), (3,3), (2,2)))
+
+ _, expected = map_version(qs, pts)
+ _, ans = vmap_version(qs, pts)
+ self.assertAllClose(ans, expected, check_dtypes=False)
+
def testVmapOfPmapNonLeadingAxis(self):
device_count = xla_bridge.device_count()
f0 = lambda x: x
@@ -1210,7 +1233,6 @@ def testPsumOnBooleanDtype(self):
out = pmap(lambda x: jax.lax.pmean(x, 'i'), 'i')(x)
self.assertEqual(list(out), [1])
-
class PmapWithDevicesTest(jtu.JaxTestCase):
def testAllDevices(self):
| vmap(pmap) different results than map(pmap)
Likely I'm confused, but I recently tracked a bug in my program to a different output when I run (what I expect to be) the same computation with vmap vs. map.
This was run on TPUv2.
```
import functools
import jax
import jax.numpy as jnp
print(jax.local_device_count())
# In the code below, pts only exists to allow us to execute with pmap.
def map_version(qs, pts):
  return jax.lax.map(lambda x: func(x, pts), qs)

def vmap_version(qs, pts):
  return jax.vmap(func, in_axes=(0, None))(qs, pts)

def func(q, pts):
  """Returns q and also the view of q that the pmap'd lambda gets"""
  q_from_pmap = jax.pmap(lambda x, y: y, in_axes=(0, None))(pts, q)
  return q, q_from_pmap
device_count = 2
pts = jnp.ones(device_count)
qs = jnp.asarray(((0,0), (3,3), (2,2)))
print(f'qs.shape: {qs.shape} qs:\n{qs}')
print(f'points.shape: {pts.shape} {pts}')
print(f'map version\n-----------')
q, q_from_pmap = map_version(qs, pts)
print(f'q from func:{q.shape}\n{q}')
print(f'q_from_pmap:{q_from_pmap.shape}\n{q_from_pmap}')
print(f'vmap version\n-----------')
q, q_from_pmap = vmap_version(qs, pts)
print(f'q from func:{q.shape}\n{q}')
print(f'q_from_pmap:{q_from_pmap.shape}\n{q_from_pmap}')
```
Output:
```
8
qs.shape: (3, 2) qs:
[[0 0]
[3 3]
[2 2]]
points.shape: (2,) [1. 1.]
map version
-----------
q from func:(3, 2)
[[0 0]
[3 3]
[2 2]]
q_from_pmap:(3, 2, 2)
[[[0 0]
[0 0]]
[[3 3]
[3 3]]
[[2 2]
[2 2]]]
vmap version
-----------
q from func:(3, 2)
[[0 0]
[3 3]
[2 2]]
q_from_pmap:(2, 2, 3)
[[[0 3 2]
[0 3 2]]
[[0 3 2]
[0 3 2]]]
```
The map version does exactly what I would expect. The vmap version does not.
If I had to guess, it would be that vmap does some sort of virtual slicing, but that slicing is not maintained once qs is passed through to the pmapped lambda. The pmapped lambda seems to get all the rows of qs, when I would expect it to only get one row per batch.
Please let me know if I'm abusing vmap here, or if it's a bug.
**Update**
If we transpose qs in vmap_version, everything works:
```
def vmap_version(qs, pts):
  return jax.vmap(func, in_axes=(0, None))(qs.T, pts)
```
This is true for my actual program as well, which is doing quite a bit more sophisticated work than this example. Doesn't feel right. :)
| I think perhaps in #1959 (and the follow-up fix #2828) we neglected to handle `mapped_invars` correctly for `vmap`-of-`pmap`. That is, BatchTracer.process_map still assumes all arguments are mapped over by a map primitive (i.e. by `pmap`).
cc @gnecula @jekbradbury | 2020-06-14T21:46:30 |
google/jax | 3,449 | google__jax-3449 | [
"3440"
] | ea9af1b0796c49d632b0550648e11d3fcd4b3a51 | diff --git a/jax/interpreters/batching.py b/jax/interpreters/batching.py
--- a/jax/interpreters/batching.py
+++ b/jax/interpreters/batching.py
@@ -161,10 +161,12 @@ def process_map(self, map_primitive, f: lu.WrappedFun, tracers, params):
if all(dim is not_mapped for dim in dims):
return map_primitive.bind(f, *vals, **params)
else:
+ mapped_invars = params['mapped_invars']
size, = {x.shape[d] for x, d in zip(vals, dims) if d is not not_mapped}
- vals = [moveaxis(x, d, 1) if d is not not_mapped and d != 1 else x
- for x, d in zip(vals, dims)]
- dims = tuple(not_mapped if d is not_mapped else 0 for d in dims)
+ vals = [moveaxis(x, d, 1) if d == 0 and mapped_invar else x
+ for x, d, mapped_invar in zip(vals, dims, mapped_invars)]
+ dims = tuple(not_mapped if d is not_mapped else max(0, d - mapped_invar)
+ for d, mapped_invar in zip(dims, mapped_invars))
f, dims_out = batch_subtrace(f, self.master, dims)
vals_out = map_primitive.bind(f, *vals, **params)
dims_out = tuple(d + 1 if d is not not_mapped else d for d in dims_out())
| diff --git a/tests/lax_vmap_test.py b/tests/lax_vmap_test.py
--- a/tests/lax_vmap_test.py
+++ b/tests/lax_vmap_test.py
@@ -23,12 +23,12 @@
import numpy as onp
-import jax
from jax import api
from jax import dtypes
from jax import lax
from jax import test_util as jtu
from jax.lib import xla_client
+from jax.util import safe_map, safe_zip
from tests.lax_test import (all_dtypes, CombosWithReplacement,
compatible_shapes, default_dtypes, float_dtypes,
@@ -38,6 +38,10 @@
config.parse_flags_with_absl()
FLAGS = config.FLAGS
+map, unsafe_map = safe_map, map
+zip, unsafe_zip = safe_zip, zip
+
+
def all_bdims(*shapes):
bdims = (itertools.chain([cast(Optional[int], None)],
range(len(shape) + 1)) for shape in shapes)
@@ -56,17 +60,15 @@ def slicer(x, bdim):
return lambda i: lax.index_in_dim(x, i, bdim, keepdims=False)
def args_slicer(args, bdims):
- slicers = list(map(slicer, args, bdims))
+ slicers = map(slicer, args, bdims)
return lambda i: [sl(i) for sl in slicers]
class LaxVmapTest(jtu.JaxTestCase):
def _CheckBatching(self, op, bdim_size, bdims, shapes, dtypes, rng,
rtol=None, atol=None):
- batched_shapes = list(jax.util.safe_map(partial(add_bdim, bdim_size),
- bdims, shapes))
- args = [rng(shape, dtype)
- for shape, dtype in jax.util.safe_zip(batched_shapes, dtypes)]
+ batched_shapes = map(partial(add_bdim, bdim_size), bdims, shapes)
+ args = [rng(shape, dtype) for shape, dtype in zip(batched_shapes, dtypes)]
args_slice = args_slicer(args, bdims)
ans = api.vmap(op, bdims)(*args)
expected = onp.stack([op(*args_slice(i)) for i in range(bdim_size)])
@@ -642,7 +644,7 @@ def testBroadcastShapesReturnsPythonInts(self):
# Note also that we chose 3 * 5 * 3 * 5 such that it fits in the range of
# values a bfloat16 can represent exactly to avoid ties.
for dtype, rng_factory in itertools.chain(
- zip(float_dtypes + int_dtypes, itertools.repeat(jtu.rand_unique_int)))))
+ unsafe_zip(float_dtypes + int_dtypes, itertools.repeat(jtu.rand_unique_int)))))
def testTopK(self, shape, dtype, k, bdims, rng_factory):
rng = rng_factory(self.rng())
# _CheckBatching doesn't work with tuple outputs, so test outputs separately.
diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -15,6 +15,7 @@
from concurrent.futures import ThreadPoolExecutor
from functools import partial
+import itertools as it
import os
from random import shuffle
from unittest import SkipTest
@@ -37,6 +38,9 @@
from jax.interpreters import pxla
from jax.interpreters import xla
+from tests.lax_test import compatible_shapes
+from tests.lax_vmap_test import all_bdims, add_bdim, args_slicer
+
from jax.config import config
config.parse_flags_with_absl()
@@ -856,10 +860,6 @@ def s(keys):
def testVmapOfPmap3(self):
# https://github.com/google/jax/issues/3399
-
- # TODO(mattjj): re-enable
- raise SkipTest("temporarily skipping test while debugging others")
-
device_count = xla_bridge.device_count()
if device_count < 2:
raise SkipTest("test requires at least two devices")
@@ -1237,6 +1237,36 @@ def testPsumOnBooleanDtype(self):
out = pmap(lambda x: jax.lax.pmean(x, 'i'), 'i')(x)
self.assertEqual(list(out), [1])
+class VmapOfPmapTest(jtu.JaxTestCase):
+
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": f"{shapes}_{vmap_bdims}_{pmap_bdims}",
+ "shapes": shapes, "vmap_bdims": vmap_bdims, "pmap_bdims": pmap_bdims}
+ for shape_group in compatible_shapes
+ for num_args in range(1, 4)
+ for shapes in it.combinations_with_replacement(shape_group, num_args)
+ for vmap_bdims in all_bdims(*shapes)
+ for pmap_bdims in it.product([0, None], repeat=num_args)
+ if not all(bd is None for bd in pmap_bdims)
+ ))
+ def testVmapOfPmap(self, shapes, vmap_bdims, pmap_bdims):
+ vmapped_size = 3
+ pmapped_size = xla_bridge.device_count()
+
+ rng = jtu.rand_default(self.rng())
+
+ def fun(*args):
+ return sum(args)
+
+ final_shapes = map(partial(add_bdim, vmapped_size), vmap_bdims,
+ map(partial(add_bdim, pmapped_size), pmap_bdims, shapes))
+
+ args = [rng(shape, jnp.float32) for shape in final_shapes]
+ args_slice = args_slicer(args, vmap_bdims)
+ ans = vmap(pmap(fun, in_axes=pmap_bdims), vmap_bdims)(*args)
+ expected = np.stack([fun(*args_slice(i)) for i in range(vmapped_size)])
+ self.assertAllClose(ans, expected)
+
class PmapWithDevicesTest(jtu.JaxTestCase):
| systematic tests for vmap-of-pmap
We've had some bugs pop up in vmap-of-pmap, with users running into them rather than us catching them up front. We have several tests, but they're all "manual" and don't exercise all possible corner cases. We should make really systematic vmap-of-pmap tests so we're confident things work and we never regress things.
| 2020-06-15T16:11:54 |
|
google/jax | 3,453 | google__jax-3453 | [
"3450"
] | 37f4722fb33dc3e3eb8844fdf35c9c60ce2431ab | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2253,7 +2253,7 @@ def arange(start, stop=None, step=None, dtype=None):
lax._check_user_dtype_supported(dtype, "arange")
if stop is None and step is None:
dtype = dtype or _dtype(start)
- return lax.iota(dtype, start) # avoids materializing
+ return lax.iota(dtype, ceil(start)) # avoids materializing
else:
return array(np.arange(start, stop=stop, step=step, dtype=dtype))
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2504,9 +2504,11 @@ def testNpMean(self):
def testArangeOnFloats(self):
# from https://github.com/google/jax/issues/145
- expected = np.arange(0.0, 1.0, 0.1, dtype=jnp.float_)
- ans = jnp.arange(0.0, 1.0, 0.1)
- self.assertAllClose(expected, ans)
+ self.assertAllClose(np.arange(0.0, 1.0, 0.1, dtype=jnp.float_),
+ jnp.arange(0.0, 1.0, 0.1))
+ # from https://github.com/google/jax/issues/3450
+ self.assertAllClose(np.arange(2.5, dtype=jnp.float_),
+ jnp.arange(2.5))
def testSortManually(self):
# manual tests for sort are nice because we don't have to worry about ties.
| jax.numpy.arange() doesn't match numpy.arange() for fractional values
Example:
```python
In [1]: import jax.numpy as jnp
In [2]: import numpy as np
In [3]: jnp.arange(2.5)
Out[3]: DeviceArray([0., 1.], dtype=float32)
In [4]: np.arange(2.5)
Out[4]: array([0., 1., 2.])
```
| 2020-06-15T18:33:05 |
|
google/jax | 3,463 | google__jax-3463 | [
"2070"
] | a088c023ff00f7b24270b3387711c166657d0e01 | diff --git a/jax/random.py b/jax/random.py
--- a/jax/random.py
+++ b/jax/random.py
@@ -530,6 +530,63 @@ def _shuffle(key, x, axis):
return x
+def choice(key, a, shape=(), replace=True, p=None):
+ """Generates a random sample from a given 1-D array.
+
+ Args:
+ key: a PRNGKey used as the random key.
+ a : 1D array or int. If an ndarray, a random sample is generated from
+ its elements. If an int, the random sample is generated as if a were
+ arange(a).
+ shape : tuple of ints, optional. Output shape. If the given shape is,
+ e.g., ``(m, n)``, then ``m * n`` samples are drawn. Default is (),
+ in which case a single value is returned.
+ replace : boolean. Whether the sample is with or without replacement.
+ default is True.
+ p : 1-D array-like, The probabilities associated with each entry in a.
+ If not given the sample assumes a uniform distribution over all
+ entries in a.
+
+ Returns:
+ An array of shape `shape` containing samples from `a`.
+ """
+ a = jnp.asarray(a)
+ if a.ndim not in [0, 1]:
+ raise ValueError("a must be an integer or 1-dimensional")
+ n_inputs = int(a) if a.ndim == 0 else len(a)
+ n_draws = np.prod(shape).astype(int)
+ if n_draws == 0:
+ return jnp.zeros(shape, dtype=a.dtype)
+ if n_inputs <= 0:
+ raise ValueError("a must be greater than 0 unless no samples are taken")
+ if not replace and n_draws > n_inputs:
+ raise ValueError("Cannot take a larger sample than population when 'replace=False'")
+
+ if p is None:
+ if replace:
+ ind = randint(key, shape, 0, n_inputs)
+ result = ind if a.ndim == 0 else a[ind]
+ else:
+ result = permutation(key, a)[:n_draws]
+ else:
+ p = jnp.asarray(p)
+ if p.shape != (n_inputs,):
+ raise ValueError("p must be None or match the shape of a")
+ if jnp.any(p < 0):
+ raise ValueError("entries of p must be non-negative.")
+ if replace:
+ p_cuml = jnp.cumsum(p)
+ r = p_cuml[-1] * (1 - uniform(key, shape))
+ ind = jnp.searchsorted(p_cuml, r)
+ result = ind if a.ndim == 0 else a[ind]
+ else:
+ # Gumbel top-k trick: https://timvieira.github.io/blog/post/2019/09/16/algorithms-for-sampling-without-replacement/
+ g = -gumbel(key, (n_inputs,)) - jnp.log(p)
+ ind = jnp.argsort(g)[:n_draws]
+ result = ind if a.ndim == 0 else a[ind]
+ return result.reshape(shape)
+
+
def normal(key: jnp.ndarray,
shape: Sequence[int] = (),
dtype: np.dtype = np.float64) -> jnp.ndarray:
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -233,6 +233,34 @@ def testShuffle(self, dtype):
self.assertFalse(np.all(perm1 == x)) # seems unlikely!
self.assertAllClose(np.sort(perm1), x, check_dtypes=False)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_{}_shape={}_replace={}_weighted={}_array_input={}".format(
+ np.dtype(dtype).name, shape, replace, weighted, array_input),
+ "dtype": np.dtype(dtype).name, "shape": shape, "replace": replace,
+ "weighted": weighted, "array_input": array_input}
+ for dtype in [np.float32, np.float64, np.int32, np.int64]
+ for shape in [(), (5,), (4, 5)]
+ for replace in [True, False]
+ for weighted in [True, False]
+ for array_input in [True, False]))
+ def testChoice(self, dtype, shape, replace, weighted, array_input):
+ N = 100
+ key = random.PRNGKey(0)
+ x = N if not array_input else jnp.arange(N, dtype=dtype)
+ p = None if not weighted else jnp.arange(N)
+ rand = lambda key: random.choice(key, x, shape, p=p, replace=replace)
+ crand = api.jit(rand)
+
+ sample1 = rand(key)
+ sample2 = crand(key)
+
+ self.assertEqual(shape, sample1.shape)
+ if array_input:
+ self.assertEqual(x.dtype, sample1.dtype)
+ if not replace:
+ assert len(np.unique(sample1)) == len(np.ravel(sample1))
+ self.assertAllClose(sample1, sample2)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}".format(jtu.format_shape_dtype_string(shape, dtype)),
"dtype": np.dtype(dtype).name, "shape": shape}
| Add jax.random.choice
It would be nice to have a random weighted choice in jax. We need this to sample bitstrings from a quantum wave function.
| I ran into the same issue, I got around it by implementing the function described in this [blog post](https://timvieira.github.io/blog/post/2019/09/16/algorithms-for-sampling-without-replacement/). Ideally `jax.random.choice` would be implemented, but I suppose it's tricky to get it to match the API for `np.random.choice`. Perhaps `jax.random` could have a simpler sampling without replacement function?
Edit: Seems like #2066 discusses similar issues in how to match the numpy version.
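For reference, a minimal sketch of the weighted sampling-without-replacement ("Gumbel top-k") trick from that blog post — the same idea the `replace=False` branch of the patch above ends up using; `p` is assumed to be a vector of non-negative weights:
```python
import jax
import jax.numpy as jnp

def weighted_choice_without_replacement(key, a, n_draws, p):
    # perturb log-weights with Gumbel noise, then keep the top-k indices
    g = jax.random.gumbel(key, (a.shape[0],)) + jnp.log(p)
    idx = jnp.argsort(-g)[:n_draws]
    return a[idx]

key = jax.random.PRNGKey(0)
samples = weighted_choice_without_replacement(key, jnp.arange(10.), 4, jnp.ones(10) / 10)
```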
This is a significant missing feature. I'm curious @Thenerdstation if you also need sampling without replacement or if you're happy with a single sample/sampling with replacement (both much easier).
Sampling with replacement is what we need for our use case
seconded
should this take a probability vector `p`? `categorical` takes a `logits` vector. (was this for performance reasons?)
guessing api would be something like
```
jax.random.choice(key, x, size=None, replace=True, p=None, axis=0)
```
potentially relevant: `permutation` was requested https://github.com/google/jax/issues/1526 with PR pending https://github.com/google/jax/pull/1568
I also have been in need of this several times.
Both the with and without replacement case. | 2020-06-16T17:17:53 |
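A usage sketch of the API added in the patch above (signature `choice(key, a, shape=(), replace=True, p=None)`), assuming a JAX version that includes it:
```python
import jax.numpy as jnp
from jax import random

key = random.PRNGKey(0)
uniform_draws = random.choice(key, 10, shape=(5,))        # as if a = arange(10), with replacement
weighted_draws = random.choice(key, jnp.arange(4.), shape=(2,),
                               replace=False, p=jnp.array([0.1, 0.2, 0.3, 0.4]))
```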
google/jax | 3,485 | google__jax-3485 | [
"3452"
] | a05263f5ceb628f56b7eec2b4fdaef7d3abac014 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -28,30 +28,14 @@
#
import os
import sys
-import typing
sys.path.insert(0, os.path.abspath('..'))
-# Workaround to avoid expanding type aliases. See:
+# Currently type aliases are expanded. We tried a workaround along the lines of:
# https://github.com/sphinx-doc/sphinx/issues/6518#issuecomment-589613836
-
-# When building docs, enable `from __future__ import annotations` everywhere.
-def _rewrite(p):
- with open(p) as f:
- contents = f.read()
- with open(p, 'w') as f:
- f.write('from __future__ import annotations\n')
- f.write(contents)
-
-if 'READTHEDOCS' in os.environ:
- for path, dirs, files in os.walk('../jax/'):
- for file in files:
- if file.endswith('.py'):
- _rewrite(os.path.abspath(os.path.join(path, file)))
-
-# Monkey patch for the typing module to prevent it from expanding type aliases.
-typing.get_type_hints = lambda obj, *unused: obj.__annotations__
+# Unfortunately, this workaround makes Sphinx drop module-level documentation.
+# See https://github.com/google/jax/issues/3452.
# -- Project information -----------------------------------------------------
| Module level documentation no longer shown on RTD
Example:
https://jax.readthedocs.io/en/latest/jax.experimental.loops.html
This used to show the module level docstring, but not anymore.
My guess is that this might be related to inserting the line `from __future__ import annotations` into each source file?
| 2020-06-18T11:58:18 |
||
google/jax | 3,516 | google__jax-3516 | [
"3498"
] | 18798a307bc1a40e3dd2e457d09e6194efc29710 | diff --git a/jax/scipy/signal.py b/jax/scipy/signal.py
--- a/jax/scipy/signal.py
+++ b/jax/scipy/signal.py
@@ -15,8 +15,11 @@
import scipy.signal as osp_signal
import warnings
+import numpy as np
+
from .. import lax
from ..numpy import lax_numpy as jnp
+from ..numpy import linalg
from ..numpy.lax_numpy import _promote_dtypes_inexact
from ..numpy._util import _wraps
@@ -104,3 +107,33 @@ def correlate2d(in1, in2, mode='full', boundary='fill', fillvalue=0,
if jnp.ndim(in1) != 2 or jnp.ndim(in2) != 2:
raise ValueError("correlate2d() only supports {ndim}-dimensional inputs.")
return _convolve_nd(in1[::-1, ::-1], in2, mode, precision=precision)[::-1, ::-1]
+
+
+@_wraps(osp_signal.detrend)
+def detrend(data, axis=-1, type='linear', bp=0, overwrite_data=None):
+ if overwrite_data is not None:
+ raise NotImplementedError("overwrite_data argument not implemented.")
+ if type not in ['constant', 'linear']:
+ raise ValueError("Trend type must be 'linear' or 'constant'.")
+ data, = _promote_dtypes_inexact(jnp.asarray(data))
+ if type == 'constant':
+ return data - data.mean(axis, keepdims=True)
+ else:
+ N = data.shape[axis]
+ # bp is static, so we use np operations to avoid pushing to device.
+ bp = np.sort(np.unique(np.r_[0, bp, N]))
+ if bp[0] < 0 or bp[-1] > N:
+ raise ValueError("Breakpoints must be non-negative and less than length of data along given axis.")
+ data = jnp.moveaxis(data, axis, 0)
+ shape = data.shape
+ data = data.reshape(N, -1)
+ for m in range(len(bp) - 1):
+ Npts = bp[m + 1] - bp[m]
+ A = jnp.vstack([
+ jnp.ones(Npts, dtype=data.dtype),
+ jnp.arange(1, Npts + 1, dtype=data.dtype) / Npts
+ ]).T
+ sl = slice(bp[m], bp[m + 1])
+ coef, *_ = linalg.lstsq(A, data[sl])
+ data = data.at[sl].add(-jnp.matmul(A, coef, precision=lax.Precision.HIGHEST))
+ return jnp.moveaxis(data.reshape(shape), 0, axis)
| diff --git a/tests/scipy_signal_test.py b/tests/scipy_signal_test.py
--- a/tests/scipy_signal_test.py
+++ b/tests/scipy_signal_test.py
@@ -44,7 +44,7 @@ class LaxBackedScipySignalTests(jtu.JaxTestCase):
"""Tests for LAX-backed scipy.stats implementations"""
@parameterized.named_parameters(jtu.cases_from_list(
- {"testcase_name": "_op={}_xshape=[{}]_yshape=[{}]_mode={}".format(
+ {"testcase_name": "_op={}_xshape={}_yshape={}_mode={}".format(
op,
jtu.format_shape_dtype_string(xshape, dtype),
jtu.format_shape_dtype_string(yshape, dtype),
@@ -67,7 +67,7 @@ def testConvolutions(self, xshape, yshape, dtype, mode, jsp_op, osp_op):
self._CompileAndCheck(jsp_fun, args_maker)
@parameterized.named_parameters(jtu.cases_from_list(
- {"testcase_name": "op={}_xshape=[{}]_yshape=[{}]_mode={}".format(
+ {"testcase_name": "op={}_xshape={}_yshape={}_mode={}".format(
op,
jtu.format_shape_dtype_string(xshape, dtype),
jtu.format_shape_dtype_string(yshape, dtype),
@@ -89,6 +89,24 @@ def testConvolutions2D(self, xshape, yshape, dtype, mode, jsp_op, osp_op):
self._CheckAgainstNumpy(osp_fun, jsp_fun, args_maker, check_dtypes=False, tol=tol)
self._CompileAndCheck(jsp_fun, args_maker)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_shape={}_axis={}_type={}_bp={}".format(
+ jtu.format_shape_dtype_string(shape, dtype), axis, type, bp),
+ "shape": shape, "dtype": dtype, "axis": axis, "type": type, "bp": bp}
+ for shape in [(5,), (4, 5), (3, 4, 5)]
+ for dtype in default_dtypes
+ for axis in [0, -1]
+ for type in ['constant', 'linear']
+ for bp in [0, [0, 2]]))
+ def testDetrend(self, shape, dtype, axis, type, bp):
+ rng = jtu.rand_default(self.rng())
+ args_maker = lambda: [rng(shape, dtype)]
+ osp_fun = partial(osp_signal.detrend, axis=axis, type=type, bp=bp)
+ jsp_fun = partial(jsp_signal.detrend, axis=axis, type=type, bp=bp)
+ tol = {onp.float32: 1e-5, onp.float64: 1e-12}
+ self._CheckAgainstNumpy(osp_fun, jsp_fun, args_maker, tol=tol)
+ self._CompileAndCheck(jsp_fun, args_maker, rtol=tol, atol=tol)
+
if __name__ == "__main__":
absltest.main()
| scipy.signal.detrend implementation in JAX
HI,
Would it be possible to implement scipy.signal.detrend in JAX? I need to compute the gradient of a function which involves de-trending a signal computed from input parameters.
Thanks,
York
| Here's a quick implementation (not comprehensively tested yet):
```python
import jax.numpy as jnp
import numpy as np
def detrend(data, axis=-1, type='linear', bp=0):
  if type not in ['constant', 'linear', 'c', 'l']:
    raise ValueError("Trend type must be 'linear' or 'constant'.")
  data = jnp.asarray(data)
  if type in ['constant', 'c']:
    return data - jnp.mean(data, axis, keepdims=True)
  else:
    N = data.shape[axis]
    # bp is static, so we use np operations to avoid pushing to device.
    bp = np.sort(np.unique(np.r_[0, bp, N]))
    if np.any(bp > N):
      raise ValueError("Breakpoints must be less than length of data along given axis.")
    data = jnp.moveaxis(data, axis, 0)
    shape = data.shape
    data = data.reshape(N, -1)
    for m in range(len(bp) - 1):
      Npts = bp[m + 1] - bp[m]
      A = jnp.vstack([jnp.ones(Npts), jnp.arange(1, Npts + 1) / Npts]).T
      sl = slice(bp[m], bp[m + 1])
      coef, resids, rank, s = jnp.linalg.lstsq(A, data[sl])
      data = data.at[sl].add(-jnp.dot(A, coef))
    return jnp.moveaxis(data.reshape(shape), 0, axis)
```
I can work on getting it into ``jax.scipy.signal`` this week.
Thanks for the quick reply. I tested the function with my data and it's very close to the result from using scipy.signal.detrend. It's a bit slower than the original scipy version, but I suppose wrapping it in jit will speed it up for repeated evaluations. | 2020-06-22T16:58:33 |
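A small usage sketch of the gradient-through-detrend use case described in the issue, assuming a JAX version that includes the `jax.scipy.signal.detrend` added in the patch above (whether the gradient flows through depends on `lstsq` differentiation support in that version):
```python
import jax
import jax.numpy as jnp
from jax.scipy.signal import detrend

def loss(params):
    t = jnp.arange(50.)
    signal = params[0] * t + jnp.sin(params[1] * t)   # toy signal built from the parameters
    return jnp.sum(detrend(signal) ** 2)

grads = jax.grad(loss)(jnp.array([0.5, 0.3]))
```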
google/jax | 3,543 | google__jax-3543 | [
"3511"
] | 28262da1317b0e704d3fd638450fe679876ff59a | diff --git a/jax/dtypes.py b/jax/dtypes.py
--- a/jax/dtypes.py
+++ b/jax/dtypes.py
@@ -144,8 +144,10 @@ def _issubclass(a, b):
def issubdtype(a, b):
if a == bfloat16:
- return b in [bfloat16, _bfloat16_dtype, np.floating, np.inexact,
- np.number]
+ if isinstance(b, np.dtype):
+ return b == _bfloat16_dtype
+ else:
+ return b in [bfloat16, np.floating, np.inexact, np.number]
if not _issubclass(b, np.generic):
# Workaround for JAX scalar types. NumPy's issubdtype has a backward
# compatibility behavior for the second argument of issubdtype that
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -139,7 +139,7 @@ def __hash__(self):
return hash(self.dtype.type)
def __eq__(self, other):
- return id(self) == id(other) or self.dtype == other
+ return id(self) == id(other) or self.dtype.type == other
def __ne__(self, other):
return not (self == other)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,7 @@
packages=find_packages(exclude=["examples"]),
python_requires='>=3.6',
install_requires=[
- 'numpy >=1.12, <1.19', 'absl-py', 'opt_einsum'
+ 'numpy >=1.12', 'absl-py', 'opt_einsum'
],
url='https://github.com/google/jax',
license='Apache-2.0',
| fix numpy 1.19 deprecation warnings
Follow-up to #3509, the numpy 1.19 release started raising some DeprecationWarnings in tests that made our CI mad.
| 2020-06-24T18:38:33 |
||
google/jax | 3,550 | google__jax-3550 | [
"2325"
] | 696958d2bd97ecfb83feafc5d70bc9fc32cc8b6d | diff --git a/jax/lax_linalg.py b/jax/lax_linalg.py
--- a/jax/lax_linalg.py
+++ b/jax/lax_linalg.py
@@ -265,10 +265,10 @@ def eigh_jvp_rule(primals, tangents, lower):
a, = primals
a_dot, = tangents
- v, w = eigh_p.bind(symmetrize(a), lower=lower)
+ v, w_real = eigh_p.bind(symmetrize(a), lower=lower)
# for complex numbers we need eigenvalues to be full dtype of v, a:
- w = w.astype(a.dtype)
+ w = w_real.astype(a.dtype)
eye_n = jnp.eye(a.shape[-1], dtype=a.dtype)
# carefully build reciprocal delta-eigenvalue matrix, avoiding NaNs.
Fmat = jnp.reciprocal(eye_n + w[..., jnp.newaxis, :] - w[..., jnp.newaxis]) - eye_n
@@ -277,8 +277,8 @@ def eigh_jvp_rule(primals, tangents, lower):
precision=lax.Precision.HIGHEST)
vdag_adot_v = dot(dot(_H(v), a_dot), v)
dv = dot(v, jnp.multiply(Fmat, vdag_adot_v))
- dw = jnp.diagonal(vdag_adot_v, axis1=-2, axis2=-1)
- return (v, w), (dv, dw)
+ dw = jnp.real(jnp.diagonal(vdag_adot_v, axis1=-2, axis2=-1))
+ return (v, w_real), (dv, dw)
def eigh_batching_rule(batched_args, batch_dims, lower):
x, = batched_args
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -154,6 +154,15 @@ def check_close(xs, ys, atol=None, rtol=None):
assert_close = partial(_assert_numpy_close, atol=atol, rtol=rtol)
tree_all(tree_multimap(assert_close, xs, ys))
+def _check_dtypes_match(xs, ys):
+ def _assert_dtypes_match(x, y):
+ if FLAGS.jax_enable_x64:
+ assert _dtype(x) == _dtype(y)
+ else:
+ assert (dtypes.canonicalize_dtype(_dtype(x)) ==
+ dtypes.canonicalize_dtype(_dtype(y)))
+ tree_all(tree_multimap(_assert_dtypes_match, xs, ys))
+
def inner_prod(xs, ys):
def contract(x, y):
@@ -202,7 +211,9 @@ def check_jvp(f, f_jvp, args, atol=None, rtol=None, eps=EPS):
rng = np.random.RandomState(0)
tangent = tree_map(partial(rand_like, rng), args)
v_out, t_out = f_jvp(args, tangent)
+ _check_dtypes_match(v_out, t_out)
v_out_expected = f(*args)
+ _check_dtypes_match(v_out, v_out_expected)
t_out_expected = numerical_jvp(f, args, tangent, eps=eps)
# In principle we should expect exact equality of v_out and v_out_expected,
# but due to nondeterminism especially on GPU (e.g., due to convolution
diff --git a/tests/lax_numpy_indexing_test.py b/tests/lax_numpy_indexing_test.py
--- a/tests/lax_numpy_indexing_test.py
+++ b/tests/lax_numpy_indexing_test.py
@@ -418,7 +418,7 @@ def testStaticIndexingGrads(self, shape, dtype, rng_factory, indexer):
rng = rng_factory(self.rng())
tol = 1e-2 if jnp.finfo(dtype).bits == 32 else None
arg = rng(shape, dtype)
- fun = lambda x: x[indexer]**2
+ fun = lambda x: jnp.asarray(x)[indexer]**2
check_grads(fun, (arg,), 2, tol, tol, tol)
def _ReplaceSlicesWithTuples(self, idx):
diff --git a/tests/linalg_test.py b/tests/linalg_test.py
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -405,6 +405,8 @@ def testEighGradVectorComplex(self, shape, dtype, rng_factory, lower, eps):
# evaluate eigenvector gradient and groundtruth eigensystem for perturbed input matrix
f = partial(jnp.linalg.eigh, UPLO=uplo)
(w, v), (dw, dv) = jvp(f, primals=(a,), tangents=(a_dot,))
+ self.assertTrue(jnp.issubdtype(w.dtype, jnp.floating))
+ self.assertTrue(jnp.issubdtype(dw.dtype, jnp.floating))
new_a = a + a_dot
new_w, new_v = f(new_a)
new_a = (new_a + np.conj(new_a.T)) / 2
| An error from a combination of scan, complex numbers, and np.diag
A very high-level summary: I got an error when I did some complex number computation inside a lax.scan function (compute1 in the code below), but the almost identical code can be run with a for loop (compute2). Only gradient computation raises the error; forward computation is fine for both methods. If I replace np.diag(d) with v in the function f, there is also no error.
```python
import jax.numpy as np
from jax import grad, random, jit, lax
def f(carrier, x):
  h = x*np.array([[0, 1j], [-1j, 0]])
  d, v = np.linalg.eigh(h, symmetrize_input=False)
  carrier = np.diag(d) @ carrier
  return carrier, None

def compute1(p):
  u = np.eye(2, dtype=np.complex64)
  carrier, _ = lax.scan(f, u, p)
  return np.abs(np.trace(carrier))

def compute2(p):
  u = np.eye(2, dtype=np.complex64)
  for i in range(3):
    h = p[i]*np.array([[0, 1j], [-1j, 0]])
    d, v = np.linalg.eigh(h, symmetrize_input=False)
    u = np.diag(d) @ u
  return np.abs(np.trace(u))
print(grad(compute1)(np.arange(2,5,dtype=np.complex64)))
# print(compute1(np.arange(2,5,dtype=np.complex64)))
```
Error message:
/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/lib/xla_bridge.py:122: UserWarning: No GPU/TPU found, falling back to CPU.
warnings.warn('No GPU/TPU found, falling back to CPU.')
Traceback (most recent call last):
File "/home/xiaotongni/jax-qoc/test_scan.py", line 24, in <module>
print(grad(compute1)(np.arange(2,5,dtype=np.complex64)))
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/api.py", line 352, in grad_f
_, g = value_and_grad_f(*args, **kwargs)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/api.py", line 407, in value_and_grad_f
ans, vjp_py = vjp(f_partial, *dyn_args)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/api.py", line 1285, in vjp
out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/ad.py", line 106, in vjp
out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/ad.py", line 95, in linearize
jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/partial_eval.py", line 354, in trace_to_jaxpr
jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/linear_util.py", line 149, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/home/xiaotongni/jax-qoc/test_scan.py", line 12, in compute1
carrier, _ = lax.scan(f, u, p)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/lax/lax_control_flow.py", line 804, in scan
linear=(False,) * (len(consts) + len(in_flat)))
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/lax/lax_control_flow.py", line 1166, in scan_bind
num_carry=num_carry, linear=linear)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/core.py", line 182, in bind
out_tracer = top_trace.process_primitive(self, tracers, kwargs)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/ad.py", line 305, in process_primitive
primal_out, tangent_out = jvp(primals_in, tangents_in, **params)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/lax/lax_control_flow.py", line 865, in _scan_jvp
jaxpr, nonzeros, instantiate=carry_nz + [False] * num_ys)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/ad.py", line 575, in jvp_jaxpr
jaxpr_out, pvals_out, literals_out = pe.trace_to_jaxpr(f_jvp, pvals, instantiate=True)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/partial_eval.py", line 354, in trace_to_jaxpr
jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/linear_util.py", line 149, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/core.py", line 97, in jaxpr_as_fun
return eval_jaxpr(typed_jaxpr.jaxpr, typed_jaxpr.literals, *args)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/core.py", line 251, in eval_jaxpr
ans = eqn.primitive.bind(*(subfuns + in_vals), **params)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/core.py", line 182, in bind
out_tracer = top_trace.process_primitive(self, tracers, kwargs)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/ad.py", line 305, in process_primitive
primal_out, tangent_out = jvp(primals_in, tangents_in, **params)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/ad.py", line 388, in linear_jvp
val_out = primitive.bind(*primals, **params)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/core.py", line 182, in bind
out_tracer = top_trace.process_primitive(self, tracers, kwargs)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/partial_eval.py", line 98, in process_primitive
return self.default_process_primitive(primitive, tracers, params)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/interpreters/partial_eval.py", line 106, in default_process_primitive
out_aval = primitive.abstract_eval(*avals, **params)
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/lax/lax.py", line 1523, in standard_abstract_eval
return ShapedArray(shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs))
File "/home/xiaotongni/miniconda3/lib/python3.7/site-packages/jax/lax/lax.py", line 2484, in _pad_dtype_rule
raise TypeError(msg.format(operand.dtype, padding_value.dtype))
TypeError: pad operand and padding_value must be same dtype: got complex64 and float32.
| 2020-06-25T02:04:01 |
|
google/jax | 3,562 | google__jax-3562 | [
"2718"
] | 2a6fc316c3e2de99abdf4a97656abbed44a1c626 | diff --git a/jax/experimental/ode.py b/jax/experimental/ode.py
--- a/jax/experimental/ode.py
+++ b/jax/experimental/ode.py
@@ -31,15 +31,39 @@
from jax import core
from jax import lax
from jax import ops
-from jax.util import safe_map, safe_zip
+from jax.util import safe_map, safe_zip, cache, split_list
+from jax.api_util import flatten_fun_nokwargs
from jax.flatten_util import ravel_pytree
-from jax.tree_util import tree_map
+from jax.tree_util import tree_map, tree_flatten, tree_unflatten
+from jax.interpreters import partial_eval as pe
from jax import linear_util as lu
map = safe_map
zip = safe_zip
+@cache()
+def closure_convert(fun, in_tree, in_avals):
+ in_pvals = [pe.PartialVal.unknown(aval) for aval in in_avals]
+ wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
+ with core.initial_style_staging():
+ jaxpr, out_pvals, consts = pe.trace_to_jaxpr(
+ wrapped_fun, in_pvals, instantiate=True, stage_out=False)
+ out_tree = out_tree()
+ num_consts = len(consts)
+
+ def converted_fun(y, t, *consts_args):
+ consts, args = split_list(consts_args, [num_consts])
+ all_args, in_tree2 = tree_flatten((y, t, *args))
+ assert in_tree == in_tree2
+ out_flat = core.eval_jaxpr(jaxpr, consts, *all_args)
+ return tree_unflatten(out_tree, out_flat)
+
+ return converted_fun, consts
+
+def abstractify(x):
+ return core.raise_to_shaped(core.get_aval(x))
+
def ravel_first_arg(f, unravel):
return ravel_first_arg_(lu.wrap_init(f), unravel).call_wrapped
@@ -159,8 +183,12 @@ def _check_arg(arg):
msg = ("The contents of odeint *args must be arrays or scalars, but got "
"\n{}.")
raise TypeError(msg.format(arg))
- tree_map(_check_arg, args)
- return _odeint_wrapper(func, rtol, atol, mxstep, y0, t, *args)
+
+ flat_args, in_tree = tree_flatten((y0, t[0], *args))
+ in_avals = tuple(map(abstractify, flat_args))
+ converted, consts = closure_convert(func, in_tree, in_avals)
+
+ return _odeint_wrapper(converted, rtol, atol, mxstep, y0, t, *consts, *args)
@partial(jax.jit, static_argnums=(0, 1, 2, 3))
def _odeint_wrapper(func, rtol, atol, mxstep, y0, ts, *args):
| diff --git a/tests/ode_test.py b/tests/ode_test.py
--- a/tests/ode_test.py
+++ b/tests/ode_test.py
@@ -181,6 +181,37 @@ def test_disable_jit_odeint_with_vmap(self):
f = lambda x0: odeint(lambda x, _t: x, x0, t)
jax.vmap(f)(x0_eval) # doesn't crash
+ @jtu.skip_on_devices("tpu")
+ def test_grad_closure(self):
+ # simplification of https://github.com/google/jax/issues/2718
+ def experiment(x):
+ def model(y, t):
+ return -x * y
+ history = odeint(model, 1., np.arange(0, 10, 0.1))
+ return history[-1]
+ jtu.check_grads(experiment, (0.01,), modes=["rev"], order=1)
+
+ @jtu.skip_on_devices("tpu")
+ def test_grad_closure_with_vmap(self):
+ # https://github.com/google/jax/issues/2718
+ @jax.jit
+ def experiment(x):
+ def model(y, t):
+ return -x * y
+ history = odeint(model, 1., np.arange(0, 10, 0.1))
+ return history[-1]
+
+ gradfun = jax.value_and_grad(experiment)
+ t = np.arange(0., 1., 0.01)
+ h, g = jax.vmap(gradfun)(t) # doesn't crash
+ ans = h[11], g[11]
+
+ expected_h = experiment(t[11])
+ expected_g = (experiment(t[11] + 1e-5) - expected_h) / 1e-5
+ expected = expected_h, expected_g
+
+ self.assertAllClose(ans, expected, check_dtypes=False, atol=1e-2, rtol=1e-2)
+
if __name__ == '__main__':
absltest.main()
| AssertionError when taking grad of odeint with outer scope variable
Hello!
An AssertionError arises when taking the grad of odeint with a variable which is outer scope.
My setup is MacBook Pro 13 inch 2019 with MacOS Catalina 10.15.2. I have compiled jax and jaxlib from source on the current master branch.
Reproduction code (x is the outer scope variable):
from jax.experimental.ode import odeint
from jax import jit, grad, value_and_grad, vmap
import jax.numpy as np

@jit
def experiment(x):
    def model(y, t):
        dydt = -x * y
        return dydt
    history = odeint(model, 1., np.arange(0, 10, 0.1))
    return history[-1]

experiment = value_and_grad(experiment)
t = np.arange(0., 1., 0.01)
h, g = vmap(experiment)(t)
Running this gives the following output:
Traceback (most recent call last):
File "/Users/kstorm/PycharmProjects/finger_model/venv/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-a7e55fc52a04>", line 1, in <module>
runfile('/Users/kstorm/PycharmProjects/finger_model/issue.py', wdir='/Users/kstorm/PycharmProjects/finger_model')
File "/Users/kstorm/Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/193.5233.109/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/Users/kstorm/Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/193.5233.109/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/kstorm/PycharmProjects/finger_model/issue.py", line 22, in <module>
h, g = vmap(experiment)(t)
File "/Users/kstorm/PycharmProjects/finger_model/jax/jax/api.py", line 759, in batched_fun
lambda: _flatten_axes(out_tree(), out_axes))
File "/Users/kstorm/PycharmProjects/finger_model/jax/jax/interpreters/batching.py", line 34, in batch
return batched_fun.call_wrapped(*in_vals)
File "/Users/kstorm/PycharmProjects/finger_model/jax/jax/linear_util.py", line 150, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/kstorm/PycharmProjects/finger_model/jax/jax/api.py", line 428, in value_and_grad_f
ans, vjp_py = _vjp(f_partial, *dyn_args)
File "/Users/kstorm/PycharmProjects/finger_model/jax/jax/api.py", line 1389, in _vjp
out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
File "/Users/kstorm/PycharmProjects/finger_model/jax/jax/interpreters/ad.py", line 106, in vjp
out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
File "/Users/kstorm/PycharmProjects/finger_model/jax/jax/interpreters/ad.py", line 97, in linearize
assert all(out_primal_pval.is_known() for out_primal_pval in out_primals_pvals)
AssertionError
I know the problem can be solved by passing x as an argument to odeint and adding an argument to 'def model' like this:
@jit
def experiment(x):
    def model(y, t, a):
        dydt = -a * y
        return dydt
    history = odeint(model, 1., np.arange(0, 10, 0.1), x)
    return history[-1]
But I wonder why it doesn't work with outer scope variables.
Thanks in advance!
| The need for manual closure conversion is a limitation of `jax.experimental.ode.odeint`, and a consequence of how higher-order primitives are set up in JAX's internals. This issue can also come up outside of the grad or vmap context.
It would be an enhancement to `odeint` to accept functions that close over arguments. Until then, we might consider it a documentation issue, as there ought to be more guidance around it.
@froystig Could you give some more insight into what this error means? I'm hitting it without using `odeint`, but I don't have a good MWE to post yet.
@NeilGirdhar are you differentiating through a `while_loop`? See #2129. Otherwise you'll have to give us more hints!
@killianstorm To add on to what @froystig said: `jax.experimental.ode.odeint` is not actually a primitive (like the control flow primitives `lax.cond`, `lax.scan`, etc. which correctly handle closed-over tracers in their function-valued arguments). It's just a function with a `jax.custom_vjp` rule defined. And higher-order functions with `jax.custom_vjp` rules can't automatically handle closed-over tracers in their function-valued arguments.
We should document this constraint on `jax.custom_vjp` functions better (hence the "documentation" tag on this issue), but it's mentioned briefly in the last example (a `fixed_point` function) in the [custom_vjp/jvp tutorial](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html), specifically in the paragraph [just before this heading](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html#Basic-usage-of-jax.custom_jvp-and-jax.custom_vjp-APIs).
Actually, the reason why `odeint` still lives in `jax.experimental` rather than being included directly in `lax` or something is precisely that we want to upgrade it to handle closed-over tracers automatically, like the other functions/primitives in `lax` which take function-valued arguments. We just haven't done it yet! When we do, it'll graduate out of `jax.experimental`.
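For reference, the patch above (and `test_grad_closure` in the test diff) implements exactly that upgrade, so a closure-based dynamics function is intended to work; a sketch:
```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

def experiment(x):
    def model(y, t):
        return -x * y          # x is closed over; handled by the new closure conversion
    return odeint(model, 1., jnp.arange(0., 10., 0.1))[-1]

value, grad_x = jax.value_and_grad(experiment)(0.5)
```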
@mattjj You got it! I forgot to add my vjp. Thanks for the lightning fast response. I really appreciate it!
@froystig, @mattjj I understand now, thanks for the explanation! | 2020-06-25T21:22:41 |
google/jax | 3,566 | google__jax-3566 | [
"3558"
] | 4b1bb1890937e9fb81de210305c77c13fb4952ec | diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -590,3 +590,5 @@ def batched_fwd_jaxpr_thunk():
xla.initial_style_translations[custom_vjp_call_jaxpr_p] = \
xla.lower_fun_initial_style(_custom_vjp_call_jaxpr_impl)
+
+batching.primitive_batchers[ad.custom_lin_p] = ad._raise_custom_vjp_error_on_jvp
| diff --git a/tests/ode_test.py b/tests/ode_test.py
--- a/tests/ode_test.py
+++ b/tests/ode_test.py
@@ -212,6 +212,15 @@ def model(y, t):
self.assertAllClose(ans, expected, check_dtypes=False, atol=1e-2, rtol=1e-2)
+ def test_forward_mode_error(self):
+ # https://github.com/google/jax/issues/3558
+
+ def f(k):
+ return odeint(lambda x, t: k*x, 1., jnp.linspace(0, 1., 50)).sum()
+
+ with self.assertRaisesRegex(TypeError, "can't apply forward-mode.*"):
+ jax.jacfwd(f)(3.)
+
if __name__ == '__main__':
absltest.main()
| Forward mode differentiation of `odeint` is not supported, but it does not give an error
```
def f(k):
    return ode.odeint(lambda x, t: k*x, 1., jnp.linspace(0, 1., 50)).sum()
jax.jacfwd(f)(3.)
```
should fail with an error since jvp of `odeint` is not supported, but it returns a value anyway.
| Cross-ref to #3557 also: one reason this error isn't raised is that we don't detect the closed-over tracer in the dynamics function.
#2718 requests support for closure under `odeint`, and describes a related issue with how it is caught in the meantime.
In a funny way, this is accidentally doing the right thing, even though I told @john-m-jumper it wouldn't!
```python
import jax
from jax.experimental import ode
import jax.numpy as jnp
def f(k):
    return ode.odeint(lambda x, t: k*x, 1., jnp.linspace(0, 1., 50)).sum()
print(jax.jacfwd(f)(3.))
print((f(3.+1e-6) - f(3.-1e-6)) / 2e-6)
```
```
$ env JAX_ENABLE_X64=1 python issue3558.py
234.33132392618268
234.33132375316745
```
It's kind of funny: `odeint` has a custom vjp rule, and usually when one has a custom vjp rule that means we can't use forward-mode with it. However, `odeint` also isn't set up to detect closed-over tracers in the dynamics function. As a result, the tracer on `k` is eluding odeint's custom vjp rule, but that means it just gets into odeint's implementation, which is all implemented in terms of while_loops and things that are themselves forward-mode differentiable. So it all just works out!
We should not close this bug yet though; we want to make sure this keeps working when we resolve #3557 / #2718 , and moreover make sure forward-mode works like this even without smuggling a tracer in via closure.
After merging #3562, now we get an error here:
```
NotImplementedError: Batching rule for 'custom_lin' not implemented
```
That's a bad error message, but it makes sense: it's essentially from doing forward-mode on a function with a custom VJP.
At least two possible directions to go:
1. make forward-mode work with custom vjp functions, perhaps ignoring the custom rule and just tracing through the function; or
2. improve the error message. | 2020-06-26T03:31:05 |
google/jax | 3,587 | google__jax-3587 | [
"3584"
] | 7b57dc8c8043163a5e649ba66143ccef880d7d58 | diff --git a/jax/experimental/ode.py b/jax/experimental/ode.py
--- a/jax/experimental/ode.py
+++ b/jax/experimental/ode.py
@@ -29,6 +29,7 @@
import jax
import jax.numpy as jnp
from jax import core
+from jax import dtypes
from jax import lax
from jax import ops
from jax.util import safe_map, safe_zip, cache, split_list
@@ -50,16 +51,31 @@ def closure_convert(fun, in_tree, in_avals):
jaxpr, out_pvals, consts = pe.trace_to_jaxpr(
wrapped_fun, in_pvals, instantiate=True, stage_out=False)
out_tree = out_tree()
- num_consts = len(consts)
- def converted_fun(y, t, *consts_args):
- consts, args = split_list(consts_args, [num_consts])
+ # We only want to closure convert for constants with respect to which we're
+ # differentiating. As a proxy for that, we hoist consts with float dtype.
+ # TODO(mattjj): revise this approach
+ is_float = lambda c: dtypes.issubdtype(dtypes.dtype(c), jnp.inexact)
+ (closure_consts, hoisted_consts), merge = partition_list(is_float, consts)
+ num_consts = len(hoisted_consts)
+
+ def converted_fun(y, t, *hconsts_args):
+ hoisted_consts, args = split_list(hconsts_args, [num_consts])
+ consts = merge(closure_consts, hoisted_consts)
all_args, in_tree2 = tree_flatten((y, t, *args))
assert in_tree == in_tree2
out_flat = core.eval_jaxpr(jaxpr, consts, *all_args)
return tree_unflatten(out_tree, out_flat)
- return converted_fun, consts
+ return converted_fun, hoisted_consts
+
+def partition_list(choice, lst):
+ out = [], []
+ which = [out[choice(elt)].append(elt) or choice(elt) for elt in lst]
+ def merge(l1, l2):
+ i1, i2 = iter(l1), iter(l2)
+ return [next(i2 if snd else i1) for snd in which]
+ return out, merge
def abstractify(x):
return core.raise_to_shaped(core.get_aval(x))
| diff --git a/tests/ode_test.py b/tests/ode_test.py
--- a/tests/ode_test.py
+++ b/tests/ode_test.py
@@ -221,6 +221,19 @@ def f(k):
with self.assertRaisesRegex(TypeError, "can't apply forward-mode.*"):
jax.jacfwd(f)(3.)
+ @jtu.skip_on_devices("tpu")
+ def test_closure_nondiff(self):
+ # https://github.com/google/jax/issues/3584
+
+ def dz_dt(z, t):
+ return jnp.stack([z[0], z[1]])
+
+ def f(z):
+ y = odeint(dz_dt, z, jnp.arange(10.))
+ return jnp.sum(y)
+
+ jax.grad(f)(jnp.ones(2)) # doesn't crash
+
if __name__ == '__main__':
absltest.main()
| ode is not working in jax 0.1.70
Here is a repro, which works with the previous version:
```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint
def dz_dt(z, t, theta):
    """ Lotka–Volterra equations. """
    u = z[0]
    v = z[1]
    alpha, beta, gamma, delta = theta[0], theta[1], theta[2], theta[3]
    du_dt = (alpha - beta * v) * u
    dv_dt = (-gamma + delta * u) * v
    return jnp.stack([du_dt, dv_dt])

def f(z):
    y = odeint(dz_dt, z, jnp.arange(10.), jnp.ones(4))
    return jnp.sum(y)

jax.grad(f)(jnp.ones(2))
```
Running the above script raises the error `TypeError: Primal inputs to reverse-mode differentiation must be of float or complex type, got type int32`. I tried to trace the error but got no hint where `int` variables are created. I think the issue happens after https://github.com/google/jax/pull/3562.
| A simpler repro code
```
def dz_dt(z, t):
    return jnp.stack([z[0], z[1]])

def f(z):
    y = odeint(dz_dt, z, jnp.arange(10.))
    return jnp.sum(y)

jax.grad(f)(jnp.ones(2))
```
It seems to me that the indices `0`, `1` cause the issue.
Ah, this is indeed because of #3562. Thanks for catching it!
Unfortunately I've got to go afk for a while, but I should be able to fix this tonight (if no one beats me to it).
As a temporary workaround, you can use this version:
```python
from jax.experimental.ode import _odeint_wrapper
def odeint(func, y0, t, *args, rtol=1.4e-8, atol=1.4e-8, mxstep=jnp.inf):
    return _odeint_wrapper(func, rtol, atol, mxstep, y0, t, *args)
```
Thanks, @mattjj! | 2020-06-28T17:07:49 |
google/jax | 3,608 | google__jax-3608 | [
"3599"
] | 7ecb441d086264884698aececa82cdd8bc5eddf4 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -2950,8 +2950,9 @@ def _pad_dtype_rule(operand, padding_value, *, padding_config):
def _pad_shape_rule(operand, padding_value, *, padding_config):
lo, hi, interior = zip(*padding_config)
- out_shape = onp.add(onp.add(onp.add(lo, hi), operand.shape),
- onp.multiply(interior, onp.subtract(operand.shape, 1)))
+ out_shape = onp.add(
+ onp.add(onp.add(lo, hi), operand.shape),
+ onp.maximum(0, onp.multiply(interior, onp.subtract(operand.shape, 1))))
return tuple(out_shape)
def _pad_transpose(t, operand, padding_value, *, padding_config):
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -955,7 +955,7 @@ def testReshapeAgainstNumpy(self, arg_shape, out_shape, dtype, rng_factory):
{"testcase_name": "_inshape={}_pads={}"
.format(jtu.format_shape_dtype_string(shape, dtype), pads),
"shape": shape, "dtype": dtype, "pads": pads, "rng_factory": jtu.rand_small}
- for shape in [(2, 3)]
+ for shape in [(0, 2), (2, 3)]
for dtype in default_dtypes
for pads in [[(1, 2, 1), (0, 1, 0)]]))
def testPad(self, shape, dtype, pads, rng_factory):
| lax.pad breaks for zero-sized inputs
```python
from jax import lax, numpy as jnp
out = lax.pad(jnp.ones((0,)), 0., ((1, 1, 1),))
print(out.shape) # (1,)
print(out) # [0. 0.]
print(out[0]) # RuntimeError: Invalid argument: Argument does not match host shape or layout of computation parameter 0: want f32[1]{0}, got f32[2]{0}
```
Pad works as expected for zero-sized inputs with non-interior padding (i.e. padding config `((1, 1, 0),)`), so I guess this should also work (or at least give an error).
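The patched shape rule clamps the interior-padding term at zero for empty dimensions; a quick stand-alone check of the per-dimension formula (editor's sketch, plain Python):
```python
def pad_dim_size(n, lo, hi, interior):
    # Output length of lax.pad along one dimension for the padding_config
    # entry (lo, hi, interior), with the interior term clamped at zero.
    return lo + hi + n + max(0, interior * (n - 1))

print(pad_dim_size(0, 1, 1, 1))  # 2, matching lax.pad(jnp.ones((0,)), 0., ((1, 1, 1),))
print(pad_dim_size(3, 1, 2, 1))  # 8
```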
| 2020-06-30T12:35:26 |
|
google/jax | 3,619 | google__jax-3619 | [
"3613"
] | eb2a22758898b0470b24ee79d271723873d58956 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -867,10 +867,11 @@ def batched_fun(*args):
args_flat, in_tree = tree_flatten(args)
f = lu.wrap_init(fun)
flat_fun, out_tree = flatten_fun_nokwargs(f, in_tree)
- in_axes_flat = flatten_axes(in_tree, in_axes)
+ in_axes_flat = flatten_axes("vmap in_axes", in_tree, in_axes)
_ = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "vmap")
out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
- lambda: flatten_axes(out_tree(), out_axes))
+ lambda: flatten_axes("vmap out_axes", out_tree(),
+ out_axes))
return tree_unflatten(out_tree(), out_flat)
return batched_fun
@@ -1152,7 +1153,7 @@ def f_pmapped(*args, **kwargs):
dyn_args, dyn_in_axes = args, in_axes
args, in_tree = tree_flatten((dyn_args, kwargs))
donated_invars = donation_vector(donate_tuple, dyn_args, kwargs)
- in_axes_flat = flatten_axes(in_tree, (dyn_in_axes, 0))
+ in_axes_flat = flatten_axes("pmap in_axes", in_tree, (dyn_in_axes, 0))
local_axis_size = _mapped_axis_size(in_tree, args, in_axes_flat, "pmap")
for arg in args: _check_arg(arg)
flat_fun, out_tree = flatten_fun(f, in_tree)
@@ -1191,7 +1192,7 @@ def soft_pmap(fun: Callable, axis_name: Optional[AxisName] = None, *,
def f_pmapped(*args, **kwargs):
f = lu.wrap_init(fun)
args_flat, in_tree = tree_flatten((args, kwargs))
- in_axes_flat = flatten_axes(in_tree, (in_axes, 0))
+ in_axes_flat = flatten_axes("soft_pmap in_axes", in_tree, (in_axes, 0))
mapped_invars = tuple(axis is not None for axis in in_axes_flat)
axis_size = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "soft_pmap")
for arg in args_flat: _check_arg(arg)
diff --git a/jax/api_util.py b/jax/api_util.py
--- a/jax/api_util.py
+++ b/jax/api_util.py
@@ -143,7 +143,7 @@ def _argnums_partial(dyn_argnums, fixed_args, *dyn_args, **kwargs):
ans = yield args, kwargs
yield ans
-def flatten_axes(treedef, axis_tree):
+def flatten_axes(name, treedef, axis_tree):
# given an axis spec tree axis_tree (a pytree with integers and Nones at the
# leaves, i.e. the Nones are to be considered leaves) that is a tree prefix of
# the given treedef, build a complete axis spec tree with the same structure
@@ -155,10 +155,10 @@ def flatten_axes(treedef, axis_tree):
add_leaves = lambda i, x: axes.extend([i] * len(tree_flatten(x)[0]))
try:
tree_multimap(add_leaves, _replace_nones(proxy, axis_tree), dummy)
- except ValueError as e:
- msg = ("axes specification must be a tree prefix of the corresponding "
- "value, got specification {} for value {}.")
- raise ValueError(msg.format(axis_tree, treedef)) from e
+ except ValueError:
+ raise ValueError(f"{name} specification must be a tree prefix of the "
+ f"corresponding value, got specification {axis_tree} "
+ f"for value tree {treedef}.") from None
axes = [None if a is proxy else a for a in axes]
assert len(axes) == treedef.num_leaves
return axes
diff --git a/jax/interpreters/sharded_jit.py b/jax/interpreters/sharded_jit.py
--- a/jax/interpreters/sharded_jit.py
+++ b/jax/interpreters/sharded_jit.py
@@ -267,11 +267,13 @@ def wrapped(*args, **kwargs):
raise NotImplementedError("sharded_jit over kwargs not yet supported")
f = lu.wrap_init(fun)
args_flat, in_tree = tree_flatten((args, kwargs))
- in_parts_flat = tuple(flatten_axes(in_tree.children()[0], in_parts))
+ in_parts_flat = tuple(flatten_axes("sharded_jit in_parts",
+ in_tree.children()[0], in_parts))
flat_fun, out_tree = flatten_fun(f, in_tree)
# TODO(skye): having a function-typed param in a primitive seems dicey, is
# there a better way?
- out_parts_thunk = lambda: tuple(flatten_axes(out_tree(), out_parts))
+ out_parts_thunk = lambda: tuple(flatten_axes("sharded_jit out_parts",
+ out_tree(), out_parts))
out = sharded_call(
flat_fun,
*args_flat,
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1073,8 +1073,8 @@ def test_vmap_in_axes_tree_prefix_error(self):
# https://github.com/google/jax/issues/795
self.assertRaisesRegex(
ValueError,
- "axes specification must be a tree prefix of the corresponding "
- r"value, got specification \(0, 0\) for value "
+ "vmap in_axes specification must be a tree prefix of the corresponding "
+ r"value, got specification \(0, 0\) for value tree "
r"PyTreeDef\(tuple, \[\*\]\).",
lambda: api.vmap(lambda x: x, in_axes=(0, 0))(jnp.ones(3))
)
@@ -1148,7 +1148,9 @@ def h(a, b):
# Error is: TypeError: only integer scalar arrays can be converted to a scalar index
with self.assertRaisesRegex(
- ValueError, "axes specification must be a tree prefix of the corresponding value"):
+ ValueError,
+ "vmap out_axes specification must be a tree prefix of the "
+ "corresponding value.*"):
api.vmap(lambda x: x, in_axes=0, out_axes=(2, 3))(jnp.array([1., 2.]))
with self.assertRaisesRegex(
| Improve error message w/ incorrect out_axes argument to vmap
[This colab](https://colab.research.google.com/drive/1DWziQYqL-tiZHvtSiCz8QJlQtkTSqlq8?usp=sharing) shows an incorrect use of vmap where more `out_axes` arguments are passed than the number of outputs in the mapped function. The error message does not indicate that the issue is with the `out_axes` argument, as opposed to the `in_axes` argument. It would be nice to improve this so that it's clear where the issue is.
| @mattjj @tgale96 it seems to be pretty easy to pass a boolean flag `is_in_axes` to the function below to create a custom error message for `in_axes` and `out_axes`. However, it seems a bit naive and could make usage of this function cumbersome.
https://github.com/google/jax/blob/e808681f6c096e95be8532e4e901f5d410e0fb58/jax/api_util.py#L146
@IgorWilbert That line of thinking sounds right to me! Perhaps we can just pass in a string that gets included in the error message; we've used that pattern before and it seems to work out.
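A sketch of what the change buys (editor's example, mirroring the updated tests): both a bad `in_axes` spec and a bad `out_axes` spec now raise a `ValueError` whose message names the offending argument:
```python
import jax
import jax.numpy as jnp

try:
    jax.vmap(lambda x: x, in_axes=(0, 0))(jnp.ones(3))
except ValueError as e:
    print(e)  # "vmap in_axes specification must be a tree prefix of the corresponding value ..."

try:
    jax.vmap(lambda x: x, in_axes=0, out_axes=(2, 3))(jnp.array([1.0, 2.0]))
except ValueError as e:
    print(e)  # "vmap out_axes specification must be a tree prefix of the corresponding value ..."
```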
I really appreciate your thoughts and advice around here! | 2020-07-01T04:35:52 |
google/jax | 3,653 | google__jax-3653 | [
"3651"
] | 7f7fd997a35a4a9fee3a16244caaaa5b37b65f8e | diff --git a/jax/lax/lax_parallel.py b/jax/lax/lax_parallel.py
--- a/jax/lax/lax_parallel.py
+++ b/jax/lax/lax_parallel.py
@@ -364,6 +364,12 @@ def _translate(val):
return psum(val)
return xops.Tuple(c, list(map(_translate, args)))
+def _psum_transpose_rule(cts, axis_name, axis_index_groups):
+ nonzero_out_cts, treedef = tree_util.tree_flatten(cts)
+ nonzero_in_cts = psum_p.bind(*nonzero_out_cts, axis_name=axis_name,
+ axis_index_groups=axis_index_groups)
+ return tree_util.tree_unflatten(treedef, nonzero_in_cts)
+
psum_p = standard_pmap_primitive('psum', multiple_results=True)
psum_p.def_abstract_eval(
lambda *args, **params: tuple(map(raise_to_shaped, args)))
@@ -371,8 +377,7 @@ def _translate(val):
partial(_allreduce_split_axis_rule, psum_p, lax._reduce_sum)
xla.parallel_translations[psum_p] = _psum_translation_rule
pxla.parallel_pure_rules[psum_p] = lambda *args, shape: (x * prod(shape) for x in args)
-ad.deflinear(psum_p, lambda ts, axis_name, axis_index_groups: psum_p.bind(
- *ts, axis_name=axis_name, axis_index_groups=axis_index_groups))
+ad.deflinear(psum_p, _psum_transpose_rule)
pxla.multi_host_supported_collectives.add(psum_p)
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -1290,6 +1290,30 @@ def foo(x): return x
self.assertIn("The jitted function foo includes a pmap",
str(w[-1].message))
+ def testPsumZeroCotangents(self):
+ # https://github.com/google/jax/issues/3651
+ def loss(params, meta_params):
+ (net, mpo) = params
+ return meta_params * mpo * net
+
+ def inner(meta_params, params):
+ grads = jax.grad(loss)(params, meta_params)
+ grads = lax.psum(grads, axis_name="i")
+ net_grads, mpo_grads = grads
+ net = params[0] + net_grads
+ mpo = params[1]
+ return mpo * net
+
+ def outer(params):
+ meta_params = jnp.array(4.0)
+ return jax.grad(inner)(meta_params, params)
+
+ params = (jnp.array([2.0]), jnp.array([3.0]))
+ jax.pmap(outer, axis_name='i')(params) # doesn't crash
+
+ f = jax.pmap(outer, axis_name='i')
+ jtu.check_grads(f, (params,), 2, ["fwd", "rev"], 1e-3, 1e-3)
+
class VmapOfPmapTest(jtu.JaxTestCase):
| Meta-Gradient causes "TypeError: <class 'jax.ad_util.Zero'> is not a valid JAX type"
Hello,
I discovered the following unexpected behaviour while calculating meta-gradients (gradients through gradients) with JAX.
```
import jax
from jax import lax
from jax import numpy as jnp
def loss(params, meta_params):
(net, mpo) = params
return meta_params * mpo * net
def inner(meta_params, params):
grads = jax.grad(loss)(params, meta_params)
grads = lax.psum(grads, axis_name="i")
net_grads, mpo_grads = grads
net = params[0] + net_grads
mpo = params[1] # Does not work!
# mpo = params[1] + mpo_grads # Works if I add mpo_grads
return mpo * net
def outer(params):
meta_params = jnp.array(1.0)
return jax.grad(inner)(meta_params, params)
params = (jnp.array([1.0]), jnp.array([1.0]))
learner_output = jax.pmap(outer, axis_name='i')(params)
```
Outputs
```
[....]
jax/core.py in concrete_aval(x)
778 handler = pytype_aval_mappings.get(typ)
779 if handler: return handler(x)
--> 780 raise TypeError(f"{type(x)} is not a valid JAX type")
781
782
TypeError: <class 'jax.ad_util.Zero'> is not a valid JAX type
```
This only appears inside a pmap (both the CPU and TPU backends throw the same error).
The same code works without pmap, when using jit (or vmap), or if the psum is removed.
| A possible workaround seems to be to first split the gradients and then aggregate them:
```
net_grads, mpo_grads = grads
net_grads = lax.psum(net_grads, axis_name="i")
mpo_grads = lax.psum(mpo_grads, axis_name="i")
```
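For what it's worth, a condensed version of the regression test added in the patch (editor's sketch; it runs on a single local device because the mapped axis has size 1):
```python
import jax
import jax.numpy as jnp
from jax import lax

def loss(params, meta_params):
    net, mpo = params
    return meta_params * mpo * net

def inner(meta_params, params):
    grads = lax.psum(jax.grad(loss)(params, meta_params), axis_name="i")
    net = params[0] + grads[0]   # grads[1] is deliberately left unused
    return params[1] * net

def outer(params):
    return jax.grad(inner)(jnp.array(4.0), params)

params = (jnp.array([2.0]), jnp.array([3.0]))
print(jax.pmap(outer, axis_name="i")(params))  # [9.] once the Zero cotangent is handled
```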
Thanks for the report, and nice repro! This looks like one of our symbolic zeros leaking somewhere that it shouldn't be. | 2020-07-03T16:25:18 |
google/jax | 3,656 | google__jax-3656 | [
"3654"
] | 796df9c550618c15dfec157af306b08fe58eac13 | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -831,17 +831,15 @@ def error(self, arg):
return error
-def concrete_or_error(typ: Type, val: Any, context=""):
- """Like typ(val), but gives the context in the error message.
- Use with typ either `int`, or `bool`.
- """
+def concrete_or_error(force: Any, val: Any, context=""):
+ """Like force(val), but gives the context in the error message."""
if isinstance(val, Tracer):
if isinstance(val.aval, ConcreteArray):
- return typ(val.aval.val)
+ return force(val.aval.val)
else:
raise_concretization_error(val, context)
else:
- return typ(val)
+ return force(val)
class UnshapedArray(AbstractValue):
__slots__ = ['dtype', 'weak_type']
diff --git a/jax/nn/functions.py b/jax/nn/functions.py
--- a/jax/nn/functions.py
+++ b/jax/nn/functions.py
@@ -20,6 +20,7 @@
from jax import custom_jvp
from jax import dtypes
from jax import lax
+from jax import core
from jax.scipy.special import expit
import jax.numpy as jnp
@@ -263,6 +264,8 @@ def one_hot(x, num_classes, *, dtype=jnp.float64):
dtype: optional, a float dtype for the returned values (default float64 if
jax_enable_x64 is true, otherwise float32).
"""
+ num_classes = core.concrete_or_error(int, num_classes,
+ "in jax.nn.one_hot argument `num_classes`")
dtype = dtypes.canonicalize_dtype(dtype)
x = jnp.asarray(x)
lhs = x[..., jnp.newaxis]
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2282,10 +2282,16 @@ def identity(n, dtype=None):
@_wraps(np.arange)
def arange(start, stop=None, step=None, dtype=None):
lax._check_user_dtype_supported(dtype, "arange")
+ require = partial(core.concrete_or_error, np.asarray)
+ msg = "in jax.numpy.arange argument `{}`".format
if stop is None and step is None:
+ start = require(start, msg("stop"))
dtype = dtype or _dtype(start)
return lax.iota(dtype, np.ceil(start)) # avoids materializing
else:
+ start = None if start is None else require(start, msg("start"))
+ stop = None if stop is None else require(stop, msg("stop"))
+ step = None if step is None else require(step, msg("step"))
return array(np.arange(start, stop=stop, step=step, dtype=dtype))
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -3826,6 +3826,17 @@ def testReductionWithRepeatedAxisError(self):
with self.assertRaisesRegex(ValueError, r"duplicate value in 'axis': \(0, 0\)"):
jnp.sum(jnp.arange(3), (0, 0))
+ def testArangeConcretizationError(self):
+ msg = r"Abstract tracer.*\(in jax.numpy.arange argument `{}`\).*".format
+ with self.assertRaisesRegex(jax.core.ConcretizationTypeError, msg('stop')):
+ jax.jit(jnp.arange)(3)
+
+ with self.assertRaisesRegex(jax.core.ConcretizationTypeError, msg('start')):
+ jax.jit(lambda start: jnp.arange(start, 3))(0)
+
+ with self.assertRaisesRegex(jax.core.ConcretizationTypeError, msg('stop')):
+ jax.jit(lambda stop: jnp.arange(0, stop))(3)
+
# Most grad tests are at the lax level (see lax_test.py), but we add some here
# as needed for e.g. particular compound ops of interest.
diff --git a/tests/nn_test.py b/tests/nn_test.py
--- a/tests/nn_test.py
+++ b/tests/nn_test.py
@@ -148,6 +148,13 @@ def testOneHotCustomDtype(self):
[False, False, True]])
self.assertAllClose(actual, expected)
+ def testOneHotConcretizationError(self):
+ # https://github.com/google/jax/issues/3654
+ msg = r"Abstract tracer.*\(in jax.nn.one_hot argument `num_classes`\).*"
+ with self.assertRaisesRegex(core.ConcretizationTypeError, msg):
+ jax.jit(nn.one_hot)(3, 5)
+
+
InitializerRecord = collections.namedtuple(
"InitializerRecord",
["name", "initializer", "shapes"])
| bad error message when `one_hot` gets a non-static parameter
the following line: `replay_action_one_hot = jax.nn.one_hot(replay_elements['action'], num_actions)`
causes the following error:
```
Exception: The numpy.ndarray conversion method __array__() was called on the JAX Tracer object Traced<ShapedArray(int32[], weak_type=True):JaxprTrace(level=-1/1)>.
This error can occur when a JAX Tracer object is passed to a raw numpy function, or a method on a numpy.ndarray object. You might want to check that you are using `jnp` together with `import jax.numpy as jnp` rather than using `np` via `import numpy as np`. If this error arises on a line that involves array indexing, like `x[idx]`, it may be that the array being indexed `x` is a raw numpy.ndarray while the indices `idx` are a JAX Tracer instance; in that case, you can instead write `jax.device_put(x)[idx]`.
```
the context around the line in question:
```
(Pdb) type(replay_elements['action'])
<class 'jax.interpreters.partial_eval.JaxprTracer'>
(Pdb) replay_elements['action'].shape
(128,)
(Pdb) type(num_actions)
<class 'jax.interpreters.partial_eval.JaxprTracer'>
(Pdb) num_actions.shape
()
```
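A minimal sketch (editor's addition, not from the report) of the failure mode and the usual fix: `num_classes` determines the output shape, so it has to be a concrete Python int; marking it static under `jit` keeps it from being traced:
```python
import jax
import jax.numpy as jnp

actions = jnp.array([0, 2, 1])

# Fails: under jit, num_classes becomes a tracer, but it sets the output shape.
# jax.jit(jax.nn.one_hot)(actions, 4)

# Works: keep num_classes static so it stays a Python int.
one_hot = jax.jit(jax.nn.one_hot, static_argnums=1)
print(one_hot(actions, 4))
```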
| 2020-07-03T22:46:59 |
|
google/jax | 3,663 | google__jax-3663 | [
"3662"
] | 3bbe06e81ae191dfb7f71d4fa15cefd52e0a6c89 | diff --git a/jax/tree_util.py b/jax/tree_util.py
--- a/jax/tree_util.py
+++ b/jax/tree_util.py
@@ -31,7 +31,7 @@
user defined data structures and JAX transformations (e.g. `jit`). This is not
meant to be a general purpose tree-like data structure handling library.
-See the `JAX pytrees notebook <https://jax.readthedocs.io/en/latest/notebooks/JAX_pytrees.html>`_
+See the `JAX pytrees note <pytrees.html>`_
for examples.
"""
@@ -112,7 +112,7 @@ def all_leaves(iterable):
def register_pytree_node(nodetype, flatten_func, unflatten_func):
"""Extends the set of types that are considered internal nodes in pytrees.
- See `example usage <https://jax.readthedocs.io/en/latest/notebooks/JAX_pytrees.html#Pytrees-are-extensible>`_.
+ See `example usage <pytrees.html>`_.
Args:
nodetype: a Python type to treat as an internal pytree node.
| broken link from pytree_util api docs to 'pytrees notebook'
the end of the introduction at the top of this page:
https://jax.readthedocs.io/en/latest/jax.tree_util.html
reads:
> See the [JAX pytrees notebook](https://jax.readthedocs.io/en/latest/notebooks/JAX_pytrees.html) for examples.
(this is written here: https://github.com/google/jax/blob/269da0ae584cfe840f34e9f871f13c28e2772de5/jax/tree_util.py#L34)
is this the correct link? or are we missing a notebook?
https://jax.readthedocs.io/en/latest/pytrees.html
just curious because tree_util is my new favorite thing...
| 2020-07-05T13:13:22 |
||
google/jax | 3,673 | google__jax-3673 | [
"3672"
] | 23deefa71838ceeab41977ac0ab781164c914a8c | diff --git a/jax/nn/__init__.py b/jax/nn/__init__.py
--- a/jax/nn/__init__.py
+++ b/jax/nn/__init__.py
@@ -22,6 +22,7 @@
gelu,
glu,
hard_sigmoid,
+ hard_silu,
hard_swish,
hard_tanh,
leaky_relu,
@@ -36,5 +37,6 @@
soft_sign,
softmax,
softplus,
+ silu,
swish,
)
diff --git a/jax/nn/functions.py b/jax/nn/functions.py
--- a/jax/nn/functions.py
+++ b/jax/nn/functions.py
@@ -68,16 +68,18 @@ def sigmoid(x):
"""
return expit(x)
-def swish(x):
- r"""Swish activation function.
+def silu(x):
+ r"""SiLU activation function.
Computes the element-wise function:
.. math::
- \mathrm{swish}(x) = x \cdot \mathrm{sigmoid}(x) = \frac{x}{1 + e^{-x}}
+ \mathrm{silu}(x) = x \cdot \mathrm{sigmoid}(x) = \frac{x}{1 + e^{-x}}
"""
return x * sigmoid(x)
+swish = silu
+
def log_sigmoid(x):
r"""Log-sigmoid activation function.
@@ -292,12 +294,14 @@ def hard_sigmoid(x):
"""
return relu6(x + 3.) / 6.
-def hard_swish(x):
- r"""Hard Swish activation function
+def hard_silu(x):
+ r"""Hard SiLU activation function
Computes the element-wise function
.. math::
- \mathrm{hard\_swish}(x) = x \cdot \mathrm{hard\_sigmoid}(x)
+ \mathrm{hard\_silu}(x) = x \cdot \mathrm{hard\_sigmoid}(x)
"""
return x * hard_sigmoid(x)
+
+hard_swish = hard_silu
| Rename jax.nn.swish to jax.nn.silu to give appropriate credit
The swish was originally coined the "SiLU" in https://arxiv.org/pdf/1606.08415.pdf and https://arxiv.org/abs/1702.03118, long before the swish paper. Renaming other people's exact same ideas is unacceptable, and TensorFlow's naming convention implicitly erases the research and work of people outside of Google.
This request was inspired by a [recent discussion](https://www.reddit.com/r/MachineLearning/comments/hkiyir/r_google_has_a_credit_assignment_problem_in/) and a recent [tensorflow issue](https://github.com/tensorflow/tensorflow/issues/41066), but this problem has been brought up every few months for the past few years. In light of recent efforts to make the ML community more equitable and _fair_, this is a no-brainer and long overdue.
**Will this change the current api? How?**
jax.nn.swish will eventually be deprecated (jax is still new) and jax.nn.silu will be added and both of the aforementioned papers will be cited in the documentation.
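For reference, a small usage sketch (editor's addition) of what the patch above exposes: `silu(x) = x * sigmoid(x)`, with `swish` kept as an alias so existing code keeps working:
```python
import jax
import jax.numpy as jnp

x = jnp.linspace(-3.0, 3.0, 7)
print(jax.nn.silu(x))   # x * sigmoid(x)
print(jax.nn.swish(x))  # alias of silu; identical values
```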
| 2020-07-06T21:01:08 |
||
google/jax | 3,705 | google__jax-3705 | [
"3667"
] | e073e25427cbe3e1bc1698e5341d1652258d9e2a | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -45,7 +45,7 @@
donation_vector, rebase_donate_argnums)
from .tree_util import (tree_map, tree_flatten, tree_unflatten, tree_structure,
tree_transpose, tree_leaves, tree_multimap,
- treedef_is_leaf)
+ treedef_is_leaf, Partial)
from .util import (unzip2, curry, partial, safe_map, safe_zip, prod,
split_list, extend_name_stack, wrap_name)
from .lib import xla_bridge as xb
@@ -1488,7 +1488,7 @@ def _check_inexact_input_vjp(x):
"or complex type, got type {}")
raise TypeError(msg.format(aval.dtype.name))
-def _vjp_pullback_wrapper(fun, cotangent_dtypes, io_tree, py_args):
+def _vjp_pullback_wrapper(cotangent_dtypes, io_tree, fun, py_args):
in_tree_expected, out_tree = io_tree
args, in_tree = tree_flatten(py_args)
if in_tree != in_tree_expected:
@@ -1563,8 +1563,12 @@ def _vjp(fun: lu.WrappedFun, *primals, **kwargs):
out_primal, out_vjp, aux = ad.vjp(flat_fun, primals_flat, has_aux=True)
out_tree, aux_tree = out_aux_trees()
out_primal_py = tree_unflatten(out_tree, out_primal)
- vjp_py = partial(_vjp_pullback_wrapper, out_vjp,
- [_dtype(x) for x in out_primal], (out_tree, in_tree))
+ # Ensure that vjp_py is a PyTree so that we can pass it from the forward to the
+ # backward pass in a custom VJP.
+ vjp_py = Partial(partial(_vjp_pullback_wrapper,
+ [_dtype(x) for x in out_primal],
+ (out_tree, in_tree)),
+ out_vjp)
if not has_aux:
return out_primal_py, vjp_py
else:
diff --git a/jax/interpreters/ad.py b/jax/interpreters/ad.py
--- a/jax/interpreters/ad.py
+++ b/jax/interpreters/ad.py
@@ -27,7 +27,7 @@
from ..tree_util import register_pytree_node
from .. import linear_util as lu
from ..api_util import flatten_fun, flatten_fun_nokwargs
-from ..tree_util import tree_flatten, tree_unflatten
+from ..tree_util import tree_flatten, tree_unflatten, Partial
from .. import source_info_util
zip = safe_zip
@@ -109,12 +109,16 @@ def vjp(traceable, primals, has_aux=False):
out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
else:
out_primals, pvals, jaxpr, consts, aux = linearize(traceable, *primals, has_aux=True)
- def vjp_(*cts):
+
+ def unbound_vjp(pvals, jaxpr, consts, *cts):
cts = tuple(map(ignore_consts, cts, pvals))
dummy_args = [UndefinedPrimal(v.aval) for v in jaxpr.invars]
arg_cts = backward_pass(jaxpr, consts, dummy_args, cts)
return map(instantiate_zeros, arg_cts)
+ # Ensure that vjp_ is a PyTree so that we can pass it from the forward to the backward
+ # pass in a custom VJP.
+ vjp_ = Partial(partial(unbound_vjp, pvals, jaxpr), consts)
if not has_aux:
return out_primals, vjp_
else:
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2863,6 +2863,38 @@ def clip_gradient(x):
jax.grad(clip_gradient)(1.) # doesn't crash
+ def test_nestable_vjp(self):
+ # Verify that https://github.com/google/jax/issues/3667 is resolved.
+ def f(x):
+ return x ** 2
+
+ @api.custom_vjp
+ def g(x):
+ return f(x)
+
+ def g_fwd(x):
+ y, f_vjp = api.vjp(f, x)
+ return y, f_vjp
+
+ def g_bwd(f_vjp, y_bar):
+ return f_vjp(y_bar)
+
+ g.defvjp(g_fwd, g_bwd)
+
+ # Check that VJP can be nested in simple situations. For this to pass,
+ # vjp has to return a PyTree.
+ _, g_vjp = api.vjp(g, 1.0)
+ y, = g_vjp(1.0)
+ self.assertAllClose(y, jnp.array(2.0))
+
+ # Check that VJP can be nested in complex situations. For this to pass,
+ # vjp can't treat the closed-over tracer x as a static argument.
+ @jit
+ def z(x):
+ _, g_vjp = api.vjp(g, x)
+ return g_vjp
+ y, = z(1.0)(3.0)
+ self.assertAllClose(y, jnp.array(6.0))
class InvertibleADTest(jtu.JaxTestCase):
| How do you nest implementations of VJP?
If I have a function `f` that has a VJP, can I somehow build a VJP on top of it for some function `g`? My use case is that the backward pass of `g` is quite complicated and makes multiple calls to `f_vjp`. Here's a simplified example:
```python
from jax import vjp, custom_vjp
def f(x):
return x ** 2
@custom_vjp
def g(x):
return f(x)
def g_fwd(x):
return vjp(f, x)
def g_bwd(f_vjp, y_bar):
return f_vjp(y_bar)
g.defvjp(g_fwd, g_bwd)
y, g_vjp = vjp(g, 1.0)
```
This prints `TypeError: <class 'functools.partial'> is not a valid JAX type` because `vjp` returns a `functools.partial` instance instead of a pytree. It should be possible for `vjp` to return a pytree-like callable instead, since internally `vjp(f, x)` produces residuals that must be pytree-like. It's just unfortunate that once they're wrapped up into a callable, that callable is not pytree-like.
| I've looked at it a bit, and it appears that the issue is that jax.api._vjp returns
```
vjp_py = partial(_vjp_pullback_wrapper, out_vjp,
[_dtype(x) for x in out_primal], (out_tree, in_tree))
```
where `out_vjp` is
```
out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
```
and in `ad.vjp`, we see that `out_vjp` has some closed over values that are JAX trees:
```python
out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
def vjp_(*cts):
cts = tuple(map(ignore_consts, cts, pvals))
dummy_args = [UndefinedPrimal(v.aval) for v in jaxpr.invars]
arg_cts = backward_pass(jaxpr, consts, dummy_args, cts)
return map(instantiate_zeros, arg_cts)
return out_primals, vjp_
```
I guess `pvals`, `jaxprs`, and `consts` can all contain PyTrees? Could some of them be static instead? If they're definitely PyTrees, I might be able to make this change. Or is there a better way to nest VJPs that I'm missing?
This seems like a case where perhaps you are pushing the limits of what `custom_vjp` is designed to support and should be considering other options :)
That said, one way to fix this immediate issue would be to replace `partial` inside `jax.api._vjp` with `tree_util.Partial`, which is serializable as a pytree. You could also do this wrapping in user code:
```python
from jax import vjp, custom_vjp, tree_util
def f(x):
return x ** 2
@custom_vjp
def g(x):
return f(x)
def g_fwd(x):
y, f_vjp = vjp(f, x)
return y, tree_util.Partial(f_vjp)
def g_bwd(f_vjp, y_bar):
return f_vjp(y_bar)
g.defvjp(g_fwd, g_bwd)
y, g_vjp = vjp(g, 1.0)
print(y, g_vjp(1.0))
# 1.0 (DeviceArray(2., dtype=float32),)
```
In general, it's fine to use `Partial` inside the forward pass of `custom_vjp` functions as long as you are careful not to close over any tracers, like the value `x` in this case. The dtypes, constants, jaxprs and treedefs used in the closures should all be fine. (Otherwise you should get a nasty error message.)
(@mattjj please correct me if I'm mis-stating anything here)
@shoyer Nice!!! Brilliant solution.
As for other options, I think this is the simplest design: All of the nodes in my model have a regular VJP. It's just that one of the node types' VJP would normally be very complicated, so I implement it as a custom VJP, which internally delegates some of the work to a nested VJP. It seems to me to be very logical.
I find nesting patterns to be very logical: a function delegating to a nested function, a list display delegating to a nested list display, etc. My (possibly naive) feeling is that VJP and JVP should also be able to delegate to nested instances of themselves.
Since `vjp` already ensures that its arguments are pytrees, do you think it would be possible to make `vjp` return a `tree_util.Partial`?
> Since `vjp` already ensures that its arguments are pytrees, do you think it would be possible to make `vjp` return a `tree_util.Partial`?
@mattjj is the real expert here, so I'll defer to his judgment. But this seems reasonable to me.
@shoyer I tried to apply your solution to my code, but when used in a more complicated setting, I'm getting leaked tracers. Why did you say:
> as long as you are careful not to close over any tracers, like the value x in this case.
Where is x closed over?
Can you think of any alternative way of nesting VJPs? Am I missing something?
I think the reason I'm getting leaked tracers is because `tree_util.Partial` assumes that the callable is necessarily static (not always true), and more importantly it assumes that all of the arguments are pytrees (very often false). In particular, some of the internals of vjp are passing static arguments as positional arguments to partial (lists of dtypes, other functions, lists of PyTreeDef).
I think the solution is to write a more advanced `Partial` that marks its static parameters. Then, this can be used in the VJP codepath in JAX's codebase. I don't mind doing it if you think it's a good idea. What do you think?
How are you calling `Partial`? On further consideration, maybe we don't want to substitute `partial` -> `Partial`, but rather just add `Partial` on top as a wrapper.
In my example, `Partial` has no explicit argument applied at all, and is just a way of marking the function as a pytree. That might be a better approach.
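A minimal runnable sketch of that "wrap on top" idea (editor's example, with `f_vjp` as a plain stand-in for a pullback): `Partial` with no applied arguments simply marks an existing callable as a pytree with zero leaves:
```python
from jax.tree_util import Partial, tree_leaves

def f_vjp(y_bar):        # stand-in for a pullback with no traced residuals
    return 2.0 * y_bar

g = Partial(f_vjp)       # nothing applied; the function itself is static aux data
print(tree_leaves(g))    # [] -- no leaves, so nothing for a tracer to leak through
print(g(3.0))            # 6.0
```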
Thinking about this a little more carefully: of course you need to explicitly wrap values that might include tracers.
`pvals` and `consts` are the parameters that might include tracers. Both should already be pytrees.
@shoyer Thanks a lot for exploring this. I'm just passing the result of `vjp` into `Partial` just like in your comment above. The reason this doesn't work is because the function outputted by `vjp` is a `partial` whose arguments include static arguments.
Your code above does this:
```
f_vjp: partial(...)
g = Partial(f_vjp)
```
Here, `g` is a pytree whose only component is a static `partial` object. But that partial object contains within it some pytrees, so it leaks tracers.
If you try to do this:
```
g = Partial(f_vjp.func, *f_vjp.args)
```
then you find out that `f_vjp.args` also contains static components, and JAX complains that these components are not pytrees.
I guess pvals and consts are not problematic. Things which are problematic are the function stored as an argument (I think it was `vjp_`), and a list of dtypes, both of which need to be static.
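A runnable sketch (editor's example, not from the thread) of the split the eventual patch makes: keep static metadata inside a plain `functools.partial`, and route only the array-valued residuals through `tree_util.Partial` so that they appear as pytree leaves:
```python
from functools import partial
import jax.numpy as jnp
from jax.tree_util import Partial, tree_leaves

def scale(dtypes, consts, x):
    del dtypes               # static metadata, deliberately not a pytree leaf
    return consts * x

consts = jnp.full(3, 2.0)
f = Partial(partial(scale, ['float32']), consts)
print(tree_leaves(f))        # only `consts` shows up as a leaf
print(f(jnp.ones(3)))        # [2. 2. 2.]
```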
I think the long story short of this is that we might consider changing JAX's VJP codepath so that the partial function applications that it creates are created with `Partial`, and to ensure that its arguments are all pytrees. It might be convenient to create a simple box (using `flax.struct.dataclass`) like
```python
from typing import Any
from flax import struct

@struct.dataclass
class StaticBox:
  value: Any = struct.field(pytree_node=False)  # pytree_node=False keeps it out of the leaves
```
or equivalently by writing out the flatten and unflatten methods. You could then put the static parameters in boxes, and send them to Partial, and it should all work.
I'm trying to use this version of `Partial` to ensure that the output of `vjp` doesn't leak tracers:
```python
class Partial(functools.partial):
"""A version of functools.partial that works in pytrees.
Use it for partial function evaluation in a way that is compatible with JAX's
transformations, e.g., ``Partial(func, *args, **kwargs)``.
(You need to explicitly opt-in to this behavior because we didn't want to give
functools.partial different semantics than normal function closures.)
"""
def __new__(cls, func, /, *args,
callable_is_static=True,
static_argnums=(),
static_kwargs={},
**kwargs):
if isinstance(func, Partial):
raise TypeError
retval = super().__new__(cls, func, *args, **kwargs)
retval.callable_is_static = callable_is_static
retval.static_argnums = set(static_argnums)
retval.static_kwargs = static_kwargs
return retval
def tree_flatten(self):
static_args = []
tree_args = []
def _append(is_static, value):
if is_static:
static_args.append(value)
else:
tree_args.append(value)
_append(self.callable_is_static, self.func)
for i, value in enumerate(self.args):
_append(i in self.static_argnums, value)
return ((list(reversed(tree_args)), self.keywords),
(self.callable_is_static, self.static_argnums,
list(reversed(static_args)), self.static_kwargs))
@classmethod
def tree_unflatten(cls, static, trees):
callable_is_static, static_argnums, static_args, static_kwargs = static
tree_args, tree_kwargs = trees
args = []
for i in range(len(static_args) + len(tree_args)):
if i == 0:
is_static = callable_is_static
else:
is_static = i - 1 in static_argnums
if is_static:
args.append(static_args.pop(0))
else:
args.append(tree_args.pop(0))
return Partial(*reversed(args),
callable_is_static=callable_is_static,
static_argnums=static_argnums,
static_kwargs=static_kwargs,
**tree_kwargs)
def __call__(self, *args, **kwargs):
return super().__call__(*args, **self.static_kwargs, **kwargs)
```
Still getting leaked tracers, but it's getting farther than before.
I don't think this needs a new Pytree. You could just make a function that
uses a combination of functools.partial and tree_util.Partial.
google/jax | 3,725 | google__jax-3725 | [
"3718"
] | eb67571b426153f85bedec9200021fac32c2c25a | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -2657,6 +2657,20 @@ def _precision_config(precision):
def _dot_general_shape_rule(lhs, rhs, *, dimension_numbers, precision):
(lhs_contracting, rhs_contracting), (lhs_batch, rhs_batch) = dimension_numbers
+ if not all(onp.all(onp.greater_equal(d, 0)) and onp.all(onp.less(d, lhs.ndim))
+ for d in (lhs_contracting, lhs_batch)):
+ msg = ("dot_general requires lhs dimension numbers to be nonnegative and "
+ "less than the number of axes of the lhs value, got "
+ f"lhs_batch of {lhs_batch} and lhs_contracting of {lhs_contracting} "
+ f"for lhs of rank {lhs.ndim}")
+ raise TypeError(msg)
+ if not all(onp.all(onp.greater_equal(d, 0)) and onp.all(onp.less(d, rhs.ndim))
+ for d in (rhs_contracting, rhs_batch)):
+ msg = ("dot_general requires rhs dimension numbers to be nonnegative and "
+ "less than the number of axes of the rhs value, got "
+ f"rhs_batch of {rhs_batch} and rhs_contracting of {rhs_contracting} "
+ f"for rhs of rank {rhs.ndim}")
+ raise TypeError(msg)
if len(lhs_batch) != len(rhs_batch):
msg = ("dot_general requires equal numbers of lhs_batch and rhs_batch "
"dimensions, got lhs_batch {} and rhs_batch {}.")
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2783,6 +2783,12 @@ def dot(a, b, *, precision=None): # pylint: disable=missing-docstring
@_wraps(np.matmul, lax_description=_PRECISION_DOC)
def matmul(a, b, *, precision=None): # pylint: disable=missing-docstring
_check_arraylike("matmul", a, b)
+ for i, x in enumerate((a, b)):
+ if ndim(x) < 1:
+ msg = (f"matmul input operand {i} must have ndim at least 1, "
+ f"but it has ndim {ndim(x)}")
+ raise ValueError(msg)
+
a_is_vec, b_is_vec = (ndim(a) == 1), (ndim(b) == 1)
a = expand_dims(a, axis=0) if a_is_vec else a
b = expand_dims(b, axis=-1) if b_is_vec else b
| Shape-checking bug
Hi there,
I've encountered a bug with Jax's shape-checking rules that directed me to report to this forum. I've worked down to roughly the minimum self-contained bug-inducing example:
```python
import jax.numpy as np
from jax import grad, vmap
a = np.arange(4) # example vector
A = np.arange(16).reshape((4, 4)) # example matrix
def fun(x):
x @ A # this line shouldn't do anything, but commenting it out makes this code run without errors!
return np.sum(a * x)
jac_diag = vmap(grad(fun)) # element-wise gradient
input = np.arange(4).astype(float) # example input
print(fun(input)) # This runs fine
print(jac_diag(input)) # This throws a "shape-checking" error, see trace below
```
When I run this, I get an error with a long stack trace that I've copied to [Pastebin](https://pastebin.com/5ctS14SU). I'm running on Windows 10 with Windows Subsystem for Linux running Ubuntu 18.04, set up as a remote interpreter through PyCharm.
I suspect it might have something to do with the `@` operator - any ideas here? ~Rewriting that as a series of `np.dot`s might fix the issue~, but I'm curious about what's causing this.
| Update: rewriting with `np.dot`s did not resolve this problem.
Thanks for the report! Looks like a bug in `jax.numpy.matmul`, as well as a bug in our shape checking rules (the latter because XLA caught this rather than us catching it earlier in JAX).
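To make the diagnosis concrete (editor's sketch): under `vmap(grad(fun))` each `x` is 0-d, so `x @ A` becomes a matmul with a rank-0 operand; with the check added in the patch above, that now fails with a clear `ValueError` instead of tripping XLA's shape checker:
```python
import jax.numpy as jnp

try:
    jnp.matmul(jnp.asarray(1.0), jnp.ones((4, 4)))  # rank-0 left operand
except ValueError as e:
    print(e)  # matmul input operand 0 must have ndim at least 1, but it has ndim 0
```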
However, replacing `x @ A` with `np.dot(x, A)` made the issue go away for me. If it's not working for you, could you share the code that repros it? | 2020-07-12T03:08:36 |
|
google/jax | 3,729 | google__jax-3729 | [
"1888"
] | 9da9156b1b39acc6b7f5262f2d5d39e5760b2710 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -4284,9 +4284,9 @@ def _reduce_prod_tree(x, axis=0):
paddings[axis] = (0, 1, 0)
x2 = pad(x2, _const(x, 1), paddings)
x = x1 * x2
- shape = list(x.shape)
- del shape[axis]
- return reshape(x, shape)
+ if x.shape[axis] == 0:
+ return full(input_shape[non_axes], _one(x))
+ return squeeze(x, (axis,))
return api.jvp(_reduce_prod_tree, (operand,), (tangent,))
| diff --git a/tests/lax_autodiff_test.py b/tests/lax_autodiff_test.py
--- a/tests/lax_autodiff_test.py
+++ b/tests/lax_autodiff_test.py
@@ -651,6 +651,7 @@ def testTransposeGrad(self, shape, dtype, perm, rng_factory):
[(3, 4, 5), (0, 2)],
[(3, 4, 5), (0, 1, 2)],
[(3, 1), (1,)],
+ [(3, 0, 5), (1,)],
]))
def testReduceGrad(self, op, init_val, shape, dtype, dims, rng_factory):
rng = rng_factory(self.rng())
@@ -664,7 +665,8 @@ def testReduceGrad(self, op, init_val, shape, dtype, dims, rng_factory):
eps = (1.0 if dtypes.finfo(dtype).bits == 16 and op is lax.add else
1e-1 if dtype == dtypes.bfloat16 else
1e-2 if dtypes.finfo(dtype).bits == 32 else None)
- check_grads(reduce, (operand,), 2, ["fwd", "rev"], tol, tol, eps)
+ if op not in (lax.max, lax.min) or all(d > 0 for d in shape):
+ check_grads(reduce, (operand,), 2, ["fwd", "rev"], tol, tol, eps)
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_op={}_dtype={}_padding={}"
diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -469,7 +469,7 @@ def testNotImplemented(self):
def testOp(self, np_op, jnp_op, rng_factory, shapes, dtypes, check_dtypes,
tolerance, inexact):
np_op = jtu.ignore_warning(category=RuntimeWarning,
- message="invalid value.*")(np_op)
+ message="invalid value.*")(np_op)
rng = rng_factory(self.rng())
args_maker = self._GetArgsMaker(rng, shapes, dtypes, np_arrays=False)
| Runtime error when taking grad of prod of take with empty indices
When taking grad/JVP of `lambda params: np.prod(np.take(params, []))`, a runtime error is produced.
Jax version: 0.1.55
Repro:
```python
jax.grad(lambda params: np.prod(np.take(params, np.array([], np.int32), axis=0)))(np.ones(6))
```
Error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in primitive_computation(prim, *avals, **params)
179 try:
--> 180 return c.Build()
181 except RuntimeError as e:
23 frames
/usr/local/lib/python3.6/dist-packages/jax/lib/xla_bridge.py in Build(self, *args, **kwargs)
256 return super(_JaxComputationBuilder, self).Build(
--> 257 *args, **kwargs)
258
/usr/local/lib/python3.6/dist-packages/jaxlib/xla_client.py in Build(self, root, backend)
729 else:
--> 730 return Computation(self._builder.Build(), backend=backend)
731
RuntimeError: Invalid argument: Padding result in negative size for dimension 0:
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-2-26b18538a688> in <module>()
----> 1 jax.grad(lambda params: np.prod(np.take(params, np.array([], np.int32), axis=0)))(np.ones(6))
/usr/local/lib/python3.6/dist-packages/jax/api.py in grad_f(*args, **kwargs)
345 @wraps(fun, docstr=docstr, argnums=argnums)
346 def grad_f(*args, **kwargs):
--> 347 _, g = value_and_grad_f(*args, **kwargs)
348 return g
349
/usr/local/lib/python3.6/dist-packages/jax/api.py in value_and_grad_f(*args, **kwargs)
400 f_partial, dyn_args = _argnums_partial(f, argnums, args)
401 if not has_aux:
--> 402 ans, vjp_py = vjp(f_partial, *dyn_args)
403 else:
404 ans, vjp_py, aux = vjp(f_partial, *dyn_args, has_aux=True)
/usr/local/lib/python3.6/dist-packages/jax/api.py in vjp(fun, *primals, **kwargs)
1255 if not has_aux:
1256 flat_fun, out_tree = flatten_fun_nokwargs(fun, in_tree)
-> 1257 out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
1258 out_tree = out_tree()
1259 else:
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in vjp(traceable, primals, has_aux)
105 def vjp(traceable, primals, has_aux=False):
106 if not has_aux:
--> 107 out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
108 else:
109 out_primals, pvals, jaxpr, consts, aux = linearize(traceable, *primals, has_aux=True)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in linearize(traceable, *primals, **kwargs)
94 _, in_tree = tree_flatten(((primals, primals), {}))
95 jvpfun_flat, out_tree = flatten_fun(jvpfun, in_tree)
---> 96 jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals)
97 pval_primals, pval_tangents = tree_unflatten(out_tree(), out_pvals)
98 aval_primals, const_primals = unzip2(pval_primals)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, **kwargs)
341 with new_master(JaxprTrace) as master:
342 fun = trace_to_subjaxpr(fun, master, instantiate)
--> 343 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
344 assert not env
345 del master
/usr/local/lib/python3.6/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
151
152 del gen
--> 153 ans = self.f(*args, **dict(self.params, **kwargs))
154 del args
155 while stack:
<ipython-input-2-26b18538a688> in <lambda>(params)
----> 1 jax.grad(lambda params: np.prod(np.take(params, np.array([], np.int32), axis=0)))(np.ones(6))
/usr/local/lib/python3.6/dist-packages/jax/numpy/lax_numpy.py in reduction(a, axis, dtype, out, keepdims)
1127 if _dtype(a) != result_dtype:
1128 a = lax.convert_element_type(a, result_dtype)
-> 1129 result = lax.reduce(a, _reduction_init_val(a, init_val), op, dims)
1130 if keepdims:
1131 shape_with_singletons = lax.subvals(shape(a), zip(dims, (1,) * len(dims)))
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in reduce(operand, init_value, computation, dimensions)
874 monoid_reducer = _get_monoid_reducer(computation, init_value)
875 if monoid_reducer:
--> 876 return monoid_reducer(operand, dimensions)
877 else:
878 jaxpr, consts = _reduction_jaxpr(computation, _abstractify(init_value))
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in _reduce_prod(operand, axes)
925
926 def _reduce_prod(operand, axes):
--> 927 return reduce_prod_p.bind(operand, axes=tuple(axes))
928
929 def _reduce_max(operand, axes):
/usr/local/lib/python3.6/dist-packages/jax/core.py in bind(self, *args, **kwargs)
153
154 tracers = map(top_trace.full_raise, args)
--> 155 out_tracer = top_trace.process_primitive(self, tracers, kwargs)
156 if self.multiple_results:
157 return map(full_lower, out_tracer)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in process_primitive(self, primitive, tracers, params)
220 "Forward-mode differentiation rule for '{}' not implemented"
221 .format(primitive))
--> 222 primal_out, tangent_out = jvp(primals_in, tangents_in, **params)
223 if primitive.multiple_results:
224 return [JVPTracer(self, x, t) for x, t in zip(primal_out, tangent_out)]
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in standard_jvp(jvprules, primitive, primals, tangents, **params)
319 def standard_jvp(jvprules, primitive, primals, tangents, **params):
320 val_out = primitive.bind(*primals, **params)
--> 321 tangents_out = [rule(t, *primals, **params) for rule, t in zip(jvprules, tangents)
322 if rule is not None and t is not zero]
323 return val_out, reduce(add_tangents, tangents_out, zero)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in <listcomp>(.0)
320 val_out = primitive.bind(*primals, **params)
321 tangents_out = [rule(t, *primals, **params) for rule, t in zip(jvprules, tangents)
--> 322 if rule is not None and t is not zero]
323 return val_out, reduce(add_tangents, tangents_out, zero)
324
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in _reduce_prod_jvp_rule(tangent, operand, axes)
3410 left_padding = [(n, -1, 0)] + [(0, 0, 0)] * len(non_axes)
3411 right_padding = [(-1, n, 0)] + [(0, 0, 0)] * len(non_axes)
-> 3412 left_products = _reduce_window_prod(pad(operand, one, left_padding),
3413 window_dims, window_strides,
3414 xla_client.PaddingType.VALID)
/usr/local/lib/python3.6/dist-packages/jax/lax/lax.py in pad(operand, padding_value, padding_config)
634 operator.
635 """
--> 636 return pad_p.bind(operand, padding_value, padding_config=tuple(padding_config))
637
638 def rev(operand, dimensions):
/usr/local/lib/python3.6/dist-packages/jax/core.py in bind(self, *args, **kwargs)
150 top_trace = find_top_trace(args)
151 if top_trace is None:
--> 152 return self.impl(*args, **kwargs)
153
154 tracers = map(top_trace.full_raise, args)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in apply_primitive(prim, *args, **params)
138 """Impl rule that compiles and runs a single primitive 'prim' using XLA."""
139 abstract_args = map(abstractify, args)
--> 140 compiled_fun = xla_primitive_callable(prim, *abstract_args, **params)
141 return compiled_fun(*args)
142
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in xla_primitive_callable(prim, *abstract_args, **params)
150 else:
151 handle_result = aval_to_result_handler(aval_out)
--> 152 built_c = primitive_computation(prim, *abstract_args, **params)
153 compiled = built_c.Compile(compile_options=xb.get_compile_options(),
154 backend=xb.get_backend(backend))
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in primitive_computation(prim, *avals, **params)
183 "This is a bug in JAX's shape-checking rules; please report it!\n"
184 "https://github.com/google/jax/issues\n")
--> 185 raise RuntimeError(msg)
186
187 def _execute_compiled_primitive(prim, compiled, backend, result_handler, *args):
RuntimeError: Invalid argument: Padding result in negative size for dimension 0:
This is a bug in JAX's shape-checking rules; please report it!
https://github.com/google/jax/issues
```
This doesn't happen when taking a JVP of just `np.take`, i.e.
```
jax.jvp(lambda params: np.take(params, np.array([], np.int32), axis=0), (np.ones(6),), (np.ones(6),))
```
| 2020-07-12T21:12:43 |
|
google/jax | 3,735 | google__jax-3735 | [
"3706"
] | 6391cfe7d0ef9b6a03a33ff82157e83d5a88e58a | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -2402,7 +2402,7 @@ def _conv_general_dilated_shape_rule(
def _conv_general_dilated_dtype_rule(
lhs, rhs, *, window_strides, padding, lhs_dilation, rhs_dilation,
dimension_numbers, **unused_kwargs):
- return naryop_dtype_rule(_input_dtype, [_float, _float],
+ return naryop_dtype_rule(_input_dtype, [_float | _complex, _float | _complex],
'conv_general_dilated', lhs, rhs)
_conv_spec_transpose = lambda spec: (spec[1], spec[0]) + spec[2:]
@@ -2499,16 +2499,37 @@ def _conv_general_dilated_transpose_rhs(
feature_group_count=feature_group_count,
batch_group_count=batch_group_count, precision=precision)
+
def _conv_general_dilated_translation_rule(
- c, lhs, rhs, *, window_strides, padding, lhs_dilation, rhs_dilation,
- dimension_numbers, feature_group_count, batch_group_count, precision,
- **unused_kwargs):
+ c, lhs, rhs, *, window_strides, padding,
+ lhs_dilation, rhs_dilation, dimension_numbers, feature_group_count,
+ batch_group_count, precision, expand_complex_convolutions, **unused_kwargs):
assert type(dimension_numbers) is ConvDimensionNumbers
dimension_numbers = _conv_general_proto(dimension_numbers)
- return xops.ConvGeneralDilated(lhs, rhs, window_strides, padding, lhs_dilation,
- rhs_dilation, dimension_numbers,
- feature_group_count, batch_group_count,
- precision_config=_precision_config(precision))
+ precision_config = _precision_config(precision)
+ dtype = c.get_shape(lhs).numpy_dtype()
+ conv = lambda x, y: xops.ConvGeneralDilated(
+ x, y, window_strides, padding, lhs_dilation, rhs_dilation,
+ dimension_numbers, feature_group_count, batch_group_count,
+ precision_config=precision_config)
+ if expand_complex_convolutions and onp.issubdtype(dtype, onp.complexfloating):
+ # We use a trick for complex multiplication due to Gauss which uses three
+ # multiplications and five additions; instead of the naive method of four
+ # multiplications and two additions.
+ # https://en.wikipedia.org/wiki/Multiplication_algorithm#Complex_multiplication_algorithm
+ #
+ # This performance win comes with a trade-off in accuracy; especially in
+ # cases when the real and imaginary differ hugely in magnitude. The relative
+ # error bound (e.g. 1p-24 in case of float32) would be relative to the
+ # maximum of real and imaginary parts of the result instead of being
+ # satisfied by the real and imaginary parts independently of each other.
+ lhs_real, lhs_imag = xops.Real(lhs), xops.Imag(lhs)
+ rhs_real, rhs_imag = xops.Real(rhs), xops.Imag(rhs)
+ k1 = conv(xops.Add(lhs_real, lhs_imag), rhs_real)
+ k2 = conv(lhs_real, xops.Sub(rhs_imag, rhs_real))
+ k3 = conv(lhs_imag, xops.Add(rhs_real, rhs_imag))
+ return xops.Complex(xops.Sub(k1, k3), xops.Add(k1, k2))
+ return conv(lhs, rhs)
def _conv_general_dilated_batch_rule(
batched_args, batch_dims, *, window_strides, padding,
@@ -2631,7 +2652,16 @@ def _conv_general_dilated_masking_rule(
conv_general_dilated_p = standard_primitive(
_conv_general_dilated_shape_rule, _conv_general_dilated_dtype_rule,
- 'conv_general_dilated', _conv_general_dilated_translation_rule)
+ 'conv_general_dilated', partial(_conv_general_dilated_translation_rule,
+ expand_complex_convolutions=False))
+
+# TODO(b/161124619, b/161126248): XLA does not support complex convolution on
+# CPU or GPU; on these backends, lower complex convolutions away.
+xla.backend_specific_translations['cpu'][conv_general_dilated_p] = partial(
+ _conv_general_dilated_translation_rule, expand_complex_convolutions=True)
+xla.backend_specific_translations['gpu'][conv_general_dilated_p] = partial(
+ _conv_general_dilated_translation_rule, expand_complex_convolutions=True)
+
ad.defbilinear(conv_general_dilated_p,
_conv_general_dilated_transpose_lhs,
_conv_general_dilated_transpose_rhs)
| diff --git a/tests/lax_autodiff_test.py b/tests/lax_autodiff_test.py
--- a/tests/lax_autodiff_test.py
+++ b/tests/lax_autodiff_test.py
@@ -346,7 +346,7 @@ def testConvWithGeneralPaddingGrad(self, lhs_shape, rhs_shape, dtype, strides,
for strides in all_strides
for rhs_dil in rhs_dils
for lhs_dil in lhs_dils
- for dtype in grad_float_dtypes
+ for dtype in grad_inexact_dtypes
for padding in ([((0, 0), (0, 0)), ((1, 0), (0, 1))] +
([((0, -1), (0, 0))] if lhs_shape[2] != 0 else []))
for dim_nums, perms in [
diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -463,7 +463,7 @@ def numpy_fun(lhs, rhs):
(j * feature_group_count * batch_group_count, i, 4, 5))
for w in [0, 10]
for b, i, j in itertools.product([2, 3], repeat=3)]
- for dtype in float_dtypes for strides in [(1, 1), (2, 1)]
+ for dtype in inexact_dtypes for strides in [(1, 1), (2, 1)]
for padding in [((1, 2), (2, 0)), ((10, 8), (7, 13))]
for lhs_dilation, rhs_dilation in itertools.product(
[(1, 1), (1, 2), (1, 4)], repeat=2)
| conv_general_dilated do not support complex number
I am currently shifting a project from autograd to Jax which involve the convolution for complex number. In autograd, the convolution function from autograd/autograd/scipy/signal.py/convolve do not restrict the type of input, it works pretty well. However, neither jax/jax/scipy/signal.py/convolve nor jax/jax/lax/lax.py/conv_general_dilated support the usage of data format, complex64. Would i know if there's any plan for supporting complex64 or any direction for me to make it works like HIPS/autograd library? Thank you so much.
| Can you share a bit more information? What did you run, and on what hardware platform?
`---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-761948a0f0be> in <module>
5 (1,1), # lhs/image dilation
6 (1,1), # rhs/kernel dilation
----> 7 dn) # dimension_numbers = lhs, rhs, out dimension permutation
8 print("out shape: ", out.shape)
9 print("First output channel:")
~/.local/lib/python3.6/site-packages/jax/lax/lax.py in conv_general_dilated(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, dimension_numbers, feature_group_count, batch_group_count, precision)
540 batch_group_count=batch_group_count,
541 lhs_shape=lhs.shape, rhs_shape=rhs.shape,
--> 542 precision=_canonicalize_precision(precision))
543
544 def dot(lhs: Array, rhs: Array, precision: Optional[PrecisionType] = None) -> Array:
~/.local/lib/python3.6/site-packages/jax/core.py in bind(self, *args, **kwargs)
197 top_trace = find_top_trace(args)
198 if top_trace is None:
--> 199 return self.impl(*args, **kwargs)
200
201 tracers = map(top_trace.full_raise, args)
~/.local/lib/python3.6/site-packages/jax/interpreters/xla.py in apply_primitive(prim, *args, **params)
164 def apply_primitive(prim, *args, **params):
165 """Impl rule that compiles and runs a single primitive 'prim' using XLA."""
--> 166 compiled_fun = xla_primitive_callable(prim, *map(arg_spec, args), **params)
167 return compiled_fun(*args)
168
~/.local/lib/python3.6/site-packages/jax/interpreters/xla.py in xla_primitive_callable(prim, *arg_specs, **params)
172 device = _device_from_arg_devices(arg_devices)
173 backend = xb.get_device_backend(device)
--> 174 aval_out = prim.abstract_eval(*avals, **params)
175 if not prim.multiple_results:
176 handle_result = aval_to_result_handler(device, aval_out)
~/.local/lib/python3.6/site-packages/jax/lax/lax.py in standard_abstract_eval(prim, shape_rule, dtype_rule, *args, **kwargs)
1697 return ConcreteArray(prim.impl(*[x.val for x in args], **kwargs))
1698 elif least_specialized is ShapedArray:
-> 1699 return ShapedArray(shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs))
1700 elif least_specialized is UnshapedArray:
1701 return UnshapedArray(dtype_rule(*args, **kwargs))
~/.local/lib/python3.6/site-packages/jax/lax/lax.py in _conv_general_dilated_dtype_rule(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, dimension_numbers, **unused_kwargs)
2230 dimension_numbers, **unused_kwargs):
2231 return naryop_dtype_rule(_input_dtype, [_float, _float],
-> 2232 'conv_general_dilated', lhs, rhs)
2233
2234 _conv_spec_transpose = lambda spec: (spec[1], spec[0]) + spec[2:]
~/.local/lib/python3.6/site-packages/jax/lax/lax.py in naryop_dtype_rule(result_dtype, accepted_dtypes, name, *avals, **kwargs)
1737 typename = str(onp.dtype(aval_dtype).name)
1738 typenames = ', '.join(t.__name__ for t in types)
-> 1739 raise TypeError(msg.format(name, typename, i, i, typenames))
1740 _check_same_dtypes(name, False, *aval_dtypes)
1741 return result_dtype(*avals)
TypeError: conv_general_dilated does not accept dtype complex64 at position 0. Accepted dtypes at position 0 are subtypes of floating.
```
I tried to follow the convolution part of the Advanced JAX tutorial ("the sharp bits") and amend it with complex values; the error above is what I get after running the code. From my understanding, the following code from lax.py
```python
def _conv_general_dilated_dtype_rule(
    lhs, rhs, *, window_strides, padding, lhs_dilation, rhs_dilation,
    dimension_numbers, **unused_kwargs):
  return naryop_dtype_rule(_input_dtype, [_float, _float],
                           'conv_general_dilated', lhs, rhs)
```
restricts the input type to float but not complex numbers. Would changing this function (`_conv_general_dilated_dtype_rule`) lead to support for complex-valued convolution? Thank you so much.
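In the meantime, one way to emulate a complex convolution on top of the existing real-valued `conv_general_dilated` is to split both operands into real and imaginary parts and combine four real convolutions, using (a + bi)(c + di) = (ac - bd) + i(ad + bc). A rough sketch — the `complex_conv` helper below is only an illustration, not an existing JAX function:
```python
import jax.numpy as jnp
from jax import lax

def complex_conv(lhs, rhs, window_strides, padding, dn):
  # Four real-valued convolutions assembled into one complex result.
  conv = lambda l, r: lax.conv_general_dilated(
      l, r, window_strides, padding, (1, 1), (1, 1), dn)
  lr, li = jnp.real(lhs), jnp.imag(lhs)
  rr, ri = jnp.real(rhs), jnp.imag(rhs)
  return (conv(lr, rr) - conv(li, ri)) + 1j * (conv(lr, ri) + conv(li, rr))
```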
When I make that amendment to the accepted dtypes, I get the following crash instead:
`2020-07-10 16:07:06.243313: F external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_conv_runner.cc:491] %custom-call = (c64[1,200,198,3]{2,1,3,0}, u8[0]{0}) custom-call(c64[1,200,198,3]{2,1,3,0} %copy.3, c64[3,3,3,3]{1,0,2,3} %copy.4), window={size=3x3 pad=1_1x1_1}, dim_labels=b01f_01io->b01f, custom_call_target="__cudnn$convForward", metadata={op_type="conv_general_dilated" op_name="conv_general_dilated[ batch_group_count=1\n dimension_numbers=ConvDimensionNumbers(lhs_spec=(0, 3, 1, 2), rhs_spec=(3, 2, 0, 1), out_spec=(0, 3, 1, 2))\n feature_group_count=1\n lhs_dilation=(1, 1)\n lhs_shape=(1, 200, 198, 3)\n padding=((1, 1), (1, 1))\n precision=None\n rhs_dilation=(1, 1)\n rhs_shape=(3, 3, 3, 3)\n window_strides=(1, 1) ]"}, backend_config="{\"algorithm\":\"0\",\"tensor_ops_enabled\":false,\"conv_result_scale\":1,\"activation_mode\":\"0\",\"side_input_scale\":0}"`
And here is the program I ran for the complex convolution:
```python
import numpy as onp
import jax
import jax.numpy as jnp
from jax import lax
from jax.lax import conv_general_dilated
from matplotlib import pyplot as plt

img = onp.zeros((1, 200, 198, 3), dtype=jnp.complex64)
for k in range(3):
    x = 30 + 60*k
    y = 20 + 60*k
    img[0, x:x+10, y:y+10, k] = 1.0
print('image shape', img.shape)

kernel = onp.zeros((3, 3, 3, 3), dtype=jnp.complex64)
kernel += onp.array([[1, 1, 0],
                     [1, 0, -1],
                     [0, -1, -1]])[:, :, onp.newaxis, onp.newaxis]
print('kernel shape:', kernel.shape)

dn = lax.conv_dimension_numbers(img.shape,     # only ndim matters, not shape
                                kernel.shape,  # only ndim matters, not shape
                                ('NHWC', 'HWIO', 'NHWC'))  # the important bit
print(dn)

out = lax.conv_general_dilated(img,     # lhs = image tensor
                               kernel,  # rhs = conv kernel tensor
                               (1, 1),  # window strides
                               'SAME',  # padding mode
                               (1, 1),  # lhs/image dilation
                               (1, 1),  # rhs/kernel dilation
                               dn)      # dimension_numbers = lhs, rhs, out dimension permutation
print("out shape: ", out.shape)
```
Is this problem related to the XLA compiler? | 2020-07-13T15:26:11 |
google/jax | 3,751 | google__jax-3751 | [
"2514"
] | 2b7a39f92bd3380eb74c320980675b01eb0881f7 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -3334,7 +3334,7 @@ def take(a, indices, axis=None, out=None, mode=None):
index_dims = len(shape(indices))
slice_sizes = list(shape(a))
- slice_sizes[axis] = 1
+ slice_sizes[axis] = _min(indices.size, 1)
dnums = lax.GatherDimensionNumbers(
offset_dims=tuple(
list(range(axis)) +
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2761,6 +2761,12 @@ def args_maker():
self._CheckAgainstNumpy(jnp_op, np_op, args_maker)
self._CompileAndCheck(jnp_op, args_maker)
+ def testTakeEmpty(self):
+ np.testing.assert_array_equal(
+ jnp.array([], dtype=jnp.float32),
+ jnp.take(jnp.array([], jnp.float32), jnp.array([], jnp.int32)))
+
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_ishape={}_axis={}".format(
jtu.format_shape_dtype_string(x_shape, dtype), i_shape, axis),
| jax.numpy.take() fails on empty arrays (and np.take does not)
```
>>> import numpy as onp
>>> import jax.numpy as jnp
>>> onp.take([], [])
array([], dtype=float64)
>>> jnp.take([], [])
ValueError: start_indices must have an integer type
```
This error is caused because jax converts the second array to an empty float array. But another error is raised even if you work around the first one:
```
>>> jnp.take([], jnp.array([], dtype=int))
RuntimeError: Invalid argument: Slice size at index 0 in gather op is out of range, must be within [0, 1), got 1.:
This is a bug in JAX's shape-checking rules; please report it!
https://github.com/google/jax/issues
```
| 2020-07-14T13:52:21 |
|
google/jax | 3,777 | google__jax-3777 | [
"3758"
] | bfe8e4f0b4e697e1622b6d80de3493afa0de60ae | diff --git a/jax/scipy/special.py b/jax/scipy/special.py
--- a/jax/scipy/special.py
+++ b/jax/scipy/special.py
@@ -206,6 +206,7 @@ def zeta(x, q=None):
T0 = (a + N) ** -s
s_over_a = (s_ + np.arange(2 * M, dtype=M.dtype)) / (a_ + N)
T1 = jnp.cumprod(s_over_a, -1)[..., ::2]
+ T1 = jnp.clip(T1, a_max=jnp.finfo(dtype).max)
coefs = np.array(_BERNOULLI_COEFS[:T1.shape[-1]], dtype=dtype)
T1 = T1 / coefs
T = T0 * (dtype(0.5) + T1.sum(-1))
| diff --git a/tests/lax_scipy_test.py b/tests/lax_scipy_test.py
--- a/tests/lax_scipy_test.py
+++ b/tests/lax_scipy_test.py
@@ -196,6 +196,11 @@ def testIssue980(self):
self.assertAllClose(np.zeros((4,), dtype=np.float32),
lsp_special.expit(x))
+ def testIssue3758(self):
+ x = np.array([1e5, 1e19, 1e10], dtype=np.float32)
+ q = np.array([1., 40., 30.], dtype=np.float32)
+ self.assertAllClose(np.array([1., 0., 0.], dtype=np.float32), lsp_special.zeta(x, q))
+
def testXlogyShouldReturnZero(self):
self.assertAllClose(lsp_special.xlogy(0., 0.), 0., check_dtypes=False)
| jax.scipy.special zeta sometimes returns NaN
For large values of z, `jax.scipy.special.zeta` returns NaN when it should instead return 0.:
`jax.scipy.special.zeta(1e5, q=1.)` # Return NaN.
Note: I suspect TF and Scipy have this same issue:
`sp.zeta(1e19, 40.)` # Returns NaN in Scipy. I have to use a large value because Scipy is using internal float64 calculations.
`tf.math.zeta(1e10, q=30.)` # Returns NaN in TF
| @fehiepsi contributed this function originally; perhaps they have thoughts.
I think you can fix the issue by clipping [T1 term](https://github.com/google/jax/blob/master/jax/scipy/special.py#L208) by `jnp.clip(T1, a_max=jnp.finfo(dtype).max)`. Another way is to add a safemul when multiplying `0` and `inf` at the computation of [T](https://github.com/google/jax/blob/master/jax/scipy/special.py#L211), kind of
```
T = jnp.where(T0 == 0, 0., T0 * (dtype(0.5) + T1.sum(-1)))
```
That way, we don't face the `NaN` issue, even for examples that failed on `scipy` and `tf`. | 2020-07-16T20:19:29 |
google/jax | 3,778 | google__jax-3778 | [
"2972"
] | 3fb887421b7c09b1e879f1454f8930ca8c51cfc9 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -614,35 +614,6 @@ def dot_general(lhs: Array, rhs: Array, dimension_numbers: DotDimensionNumbers,
contract_dims_seq, batch_dims_seq = dimension_numbers
contract_dims = tuple(map(lambda x: tuple(x), contract_dims_seq))
batch_dims = tuple(map(lambda x: tuple(x), batch_dims_seq))
- if not dtypes.issubdtype(lhs.dtype, np.inexact):
- # TODO(b/134526360): XLA doesn't support bool or integer dots, so we emit a
- # sum of products instead.
- lhs_contract_dims, rhs_contract_dims = contract_dims
- lhs_batch_dims, rhs_batch_dims = batch_dims
- lhs_noncontract_dims = tuple(sorted(
- set(range(np.ndim(lhs))) - set(lhs_batch_dims) - set(lhs_contract_dims)))
- rhs_noncontract_dims = tuple(sorted(
- set(range(np.ndim(rhs))) - set(rhs_batch_dims) - set(rhs_contract_dims)))
- lhs = transpose(lhs,
- lhs_batch_dims + lhs_noncontract_dims + lhs_contract_dims)
- rhs = transpose(rhs,
- rhs_batch_dims + rhs_noncontract_dims + rhs_contract_dims)
-
- lhs_start_expand = len(lhs_batch_dims) + len(lhs_noncontract_dims)
- lhs_end_expand = lhs_start_expand + len(rhs_noncontract_dims)
- lhs = expand_dims(lhs, tuple(range(lhs_start_expand, lhs_end_expand)))
-
- rhs_start_expand = len(lhs_batch_dims)
- rhs_end_expand = rhs_start_expand + len(lhs_noncontract_dims)
- rhs = expand_dims(rhs, tuple(range(rhs_start_expand, rhs_end_expand)))
-
- out_ndim = (len(lhs_batch_dims) + len(lhs_noncontract_dims) +
- len(rhs_noncontract_dims))
- op_product = bitwise_and if lhs.dtype == np.bool_ else mul
- op_sum = bitwise_or if lhs.dtype == np.bool_ else add
- return reduce(op_product(lhs, rhs), _zero(lhs), op_sum,
- tuple(range(out_ndim, out_ndim + len(lhs_contract_dims))))
-
return dot_general_p.bind(lhs, rhs,
dimension_numbers=(contract_dims, batch_dims),
precision=_canonicalize_precision(precision))
@@ -2714,24 +2685,38 @@ def _dot_general_shape_rule(lhs, rhs, *, dimension_numbers, precision):
msg = ("dot_general requires equal numbers of lhs_batch and rhs_batch "
"dimensions, got lhs_batch {} and rhs_batch {}.")
raise TypeError(msg.format(lhs_batch, rhs_batch))
- if not np.all(np.equal(lhs_batch, rhs_batch)):
- msg = ("dot_general requires same lhs and rhs batch dimension numbers, "
- "got {} and {}.")
- raise TypeError(msg.format(lhs_batch, rhs_batch))
+ lhs_contracting_set, lhs_batch_set = set(lhs_contracting), set(lhs_batch)
+ rhs_contracting_set, rhs_batch_set = set(rhs_contracting), set(rhs_batch)
+ if len(lhs_batch_set) != len(lhs_batch):
+ msg = ("dot_general requires lhs batch dimensions to be distinct, got "
+ f"lhs_batch {lhs_batch}.")
+ raise TypeError(msg)
+ if len(rhs_batch_set) != len(rhs_batch):
+ msg = ("dot_general requires rhs batch dimensions to be distinct, got "
+ f"rhs_batch {rhs_batch}.")
+ raise TypeError(msg)
+ if len(lhs_contracting_set) != len(lhs_contracting):
+ msg = ("dot_general requires lhs contracting dimensions to be distinct, "
+ f"got lhs_contracting {lhs_contracting}.")
+ raise TypeError(msg)
+ if len(rhs_contracting_set) != len(rhs_contracting):
+ msg = ("dot_general requires rhs contracting dimensions to be distinct, "
+ f"got rhs_contracting {rhs_contracting}.")
+ raise TypeError(msg)
+ if lhs_contracting_set & lhs_batch_set:
+ msg = ("dot_general requires lhs batch dimensions to be disjoint from "
+ "contracting dimensions, got lhs_batch {} and lhs_contracting {}.")
+ raise TypeError(msg.format(lhs_batch, lhs_contracting))
+ if rhs_contracting_set & rhs_batch_set:
+ msg = ("dot_general requires rhs batch dimensions to be disjoint from "
+ "contracting dimensions, got rhs_batch {} and rhs_contracting {}.")
+ raise TypeError(msg.format(rhs_batch, rhs_contracting))
lhs_batch_shape = np.take(lhs.shape, lhs_batch)
rhs_batch_shape = np.take(rhs.shape, rhs_batch)
if not np.all(np.equal(lhs_batch_shape, rhs_batch_shape)):
msg = ("dot_general requires lhs batch dimensions and rhs batch dimensions "
"to have the same shape, got {} and {}.")
raise TypeError(msg.format(lhs_batch_shape, rhs_batch_shape))
- if tuple(sorted(lhs_batch)) != tuple(range(len(lhs_batch))):
- msg = ("dot_general requires lhs batch dimensions to precede contracting "
- "and non-contracting dimensions, got lhs_batch {}.")
- raise TypeError(msg.format(lhs_batch))
- if tuple(sorted(rhs_batch)) != tuple(range(len(rhs_batch))):
- msg = ("dot_general requires rhs batch dimensions to precede contracting "
- "and non-contracting dimensions, got rhs_batch {}.")
- raise TypeError(msg.format(rhs_batch))
lhs_contracting_shape = np.take(lhs.shape, lhs_contracting)
rhs_contracting_shape = np.take(rhs.shape, rhs_contracting)
if not np.all(np.equal(lhs_contracting_shape, rhs_contracting_shape)):
@@ -2739,16 +2724,16 @@ def _dot_general_shape_rule(lhs, rhs, *, dimension_numbers, precision):
"shape, got {} and {}.")
raise TypeError(msg.format(lhs_contracting_shape, rhs_contracting_shape))
- batch_shape = tuple(np.take(lhs.shape, lhs_batch))
- lhs_contract_or_batch = tuple(lhs_contracting) + tuple(lhs_batch)
+ batch_shape = tuple(lhs_batch_shape)
+ lhs_contract_or_batch = tuple(sorted(tuple(lhs_contracting) + tuple(lhs_batch)))
lhs_tensored_shape = tuple(np.delete(lhs.shape, lhs_contract_or_batch))
- rhs_contract_or_batch = tuple(rhs_contracting) + tuple(rhs_batch)
+ rhs_contract_or_batch = tuple(sorted(tuple(rhs_contracting) + tuple(rhs_batch)))
rhs_tensored_shape = tuple(np.delete(rhs.shape, rhs_contract_or_batch))
return batch_shape + lhs_tensored_shape + rhs_tensored_shape
def _dot_general_dtype_rule(lhs, rhs, *, dimension_numbers, precision):
- return naryop_dtype_rule(_input_dtype, [_num, _num], 'dot_general', lhs, rhs)
+ return naryop_dtype_rule(_input_dtype, [_any, _any], 'dot_general', lhs, rhs)
def _dot_general_transpose_lhs(g, y, *, dimension_numbers, precision,
@@ -2785,53 +2770,77 @@ def _dot_general_batch_rule(batched_args, batch_dims, *, dimension_numbers,
lhs, rhs = batched_args
lbd, rbd = batch_dims
assert lbd is not None or rbd is not None
+ def bump_dims(dims, b):
+ return tuple(np.add(dims, np.greater_equal(dims, b)))
+
if lbd is not None and rbd is not None:
# adding a batch dimension
- if lbd != 0:
- lhs = batching.moveaxis(lhs, lbd, 0)
- if rbd != 0:
- rhs = batching.moveaxis(rhs, rbd, 0)
- lhs_batch = (0,) + tuple(np.add(1, lhs_batch))
- rhs_batch = (0,) + tuple(np.add(1, rhs_batch))
- lhs_contract = tuple(np.add(1, lhs_contract))
- rhs_contract = tuple(np.add(1, rhs_contract))
+ lhs_batch = (lbd,) + bump_dims(lhs_batch, lbd)
+ rhs_batch = (rbd,) + bump_dims(rhs_batch, rbd)
+ lhs_contract = bump_dims(lhs_contract, lbd)
+ rhs_contract = bump_dims(rhs_contract, rbd)
result_batch_dim = 0
else:
# adding a tensor product dimension
if lbd is not None:
- if lhs_batch == () or lbd > np.max(lhs_batch):
- # can avoid transposes
- bump_lhs_contract = np.greater_equal(lhs_contract, lbd)
- lhs_contract = tuple(np.add(lhs_contract, bump_lhs_contract))
- result_batch_dim = lbd - len(lhs_contract) + sum(bump_lhs_contract)
- else:
- # move the new dimension to the end of lhs to avoid changing batch dims
- lhs = batching.moveaxis(lhs, lbd, lhs.ndim - 1)
- # lhs tensor product dims in result come after batch dims
- result_batch_dim = lhs.ndim - len(lhs_contract) - 1
+ other = tuple(d for d in range(lhs.ndim)
+ if d not in lhs_batch and d not in lhs_contract)
+ result_batch_dim = (len(lhs_batch) + sum(np.less(other, lbd)))
+ lhs_batch = bump_dims(lhs_batch, lbd)
+ lhs_contract = bump_dims(lhs_contract, lbd)
else:
- if rhs_batch == () or rbd > np.max(rhs_batch):
- # can avoid transposes
- bump_rhs_contract = np.greater_equal(rhs_contract, rbd)
- rhs_contract = tuple(np.add(rhs_contract, bump_rhs_contract))
- result_batch_dim = (rbd + (lhs.ndim - len(lhs_contract) - len(lhs_batch))
- - (len(rhs_contract) - sum(bump_rhs_contract)))
- else:
- # move the new dimension to the end of rhs to avoid changing batch dims
- rhs = batching.moveaxis(rhs, rbd, rhs.ndim - 1)
- # rhs tensor product dims in result come after batch dims + lhs tensor
- # product dims
- result_batch_dim = (lhs.ndim - len(lhs_contract) - len(lhs_batch) +
- rhs.ndim - len(rhs_contract) - 1)
+ other = tuple(d for d in range(rhs.ndim)
+ if d not in rhs_batch and d not in rhs_contract)
+ result_batch_dim = (lhs.ndim - len(lhs_contract) +
+ sum(np.less(other, rbd)))
+ rhs_batch = bump_dims(rhs_batch, rbd)
+ rhs_contract = bump_dims(rhs_contract, rbd)
+
new_dimension_numbers = ((lhs_contract, rhs_contract), (lhs_batch, rhs_batch))
batched_out = dot_general(lhs, rhs, new_dimension_numbers,
precision=precision)
return batched_out, int(result_batch_dim)
+def _dot_using_sum_of_products(lhs, rhs, *, dimension_numbers):
+ contract_dims, batch_dims = dimension_numbers
+ lhs_contract_dims, rhs_contract_dims = contract_dims
+ lhs_batch_dims, rhs_batch_dims = batch_dims
+ lhs_noncontract_dims = tuple(sorted(
+ set(range(np.ndim(lhs))) - set(lhs_batch_dims) - set(lhs_contract_dims)))
+ rhs_noncontract_dims = tuple(sorted(
+ set(range(np.ndim(rhs))) - set(rhs_batch_dims) - set(rhs_contract_dims)))
+ lhs = transpose(lhs,
+ lhs_batch_dims + lhs_noncontract_dims + lhs_contract_dims)
+ rhs = transpose(rhs,
+ rhs_batch_dims + rhs_noncontract_dims + rhs_contract_dims)
+
+ lhs_start_expand = len(lhs_batch_dims) + len(lhs_noncontract_dims)
+ lhs_end_expand = lhs_start_expand + len(rhs_noncontract_dims)
+ lhs = expand_dims(lhs, tuple(range(lhs_start_expand, lhs_end_expand)))
+
+ rhs_start_expand = len(lhs_batch_dims)
+ rhs_end_expand = rhs_start_expand + len(lhs_noncontract_dims)
+ rhs = expand_dims(rhs, tuple(range(rhs_start_expand, rhs_end_expand)))
+
+ out_ndim = (len(lhs_batch_dims) + len(lhs_noncontract_dims) +
+ len(rhs_noncontract_dims))
+ op_product = bitwise_and if lhs.dtype == np.bool_ else mul
+ op_sum = bitwise_or if lhs.dtype == np.bool_ else add
+ return reduce(op_product(lhs, rhs), _zero(lhs), op_sum,
+ tuple(range(out_ndim, out_ndim + len(lhs_contract_dims))))
+
def _dot_general_translation_rule(c, lhs, rhs, *, dimension_numbers, precision):
- return xops.DotGeneral(lhs, rhs,
- xc.make_dot_dimension_numbers(dimension_numbers),
- precision_config=_precision_config(precision))
+ dtype = c.get_shape(lhs).numpy_dtype()
+ if dtypes.issubdtype(dtype, np.inexact):
+ return xops.DotGeneral(lhs, rhs,
+ xc.make_dot_dimension_numbers(dimension_numbers),
+ precision_config=_precision_config(precision))
+ else:
+ # TODO(b/134526360): XLA doesn't support bool or integer dots, so we emit a
+ # sum of products instead.
+ translation = xla.lower_fun(_dot_using_sum_of_products,
+ multiple_results=False)
+ return translation(c, lhs, rhs, dimension_numbers=dimension_numbers)
def _dot_general_masking_rule(padded_vals, logical_shapes, *, dimension_numbers,
precision):
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -682,16 +682,16 @@ def check_raises_regexp(thunk, err_type, pattern):
assert re.match(pattern, str(e)), "{}\n\n{}\n".format(e, pattern)
-def _iter_eqns(jaxpr):
+def iter_eqns(jaxpr):
# TODO(necula): why doesn't this search in params?
for eqn in jaxpr.eqns:
yield eqn
for subjaxpr in core.subjaxprs(jaxpr):
- yield from _iter_eqns(subjaxpr)
+ yield from iter_eqns(subjaxpr)
def assert_dot_precision(expected_precision, fun, *args):
jaxpr = api.make_jaxpr(fun)(*args)
- precisions = [eqn.params['precision'] for eqn in _iter_eqns(jaxpr.jaxpr)
+ precisions = [eqn.params['precision'] for eqn in iter_eqns(jaxpr.jaxpr)
if eqn.primitive == lax.dot_general_p]
for precision in precisions:
msg = "Unexpected precision: {} != {}".format(expected_precision, precision)
diff --git a/tests/batching_test.py b/tests/batching_test.py
--- a/tests/batching_test.py
+++ b/tests/batching_test.py
@@ -291,11 +291,6 @@ def testDot4(self):
expected = np.einsum('ij,i->j', xs, ys)
self.assertAllClose(ans, expected, check_dtypes=False)
- def testDot5(self):
- f = vmap(partial(jnp.einsum, 'ij,j->i'), (None, 0))
- jaxpr = make_jaxpr(f)(jnp.zeros((1000, 1000)), jnp.zeros((1000, 1000)))
- assert "broadcast" not in str(jaxpr)
-
def testPad(self):
R = np.random.RandomState(0).randn
diff --git a/tests/lax_autodiff_test.py b/tests/lax_autodiff_test.py
--- a/tests/lax_autodiff_test.py
+++ b/tests/lax_autodiff_test.py
@@ -416,6 +416,8 @@ def testDotGrad(self, lhs_shape, rhs_shape, dtype, rng_factory):
((3, 5), (2, 5), (([1], [1]), ([], []))),
((5, 3), (5, 2), (([0], [0]), ([], []))),
((3, 3, 2), (3, 2, 4), (([2], [1]), ([0], [0]))),
+ ((3, 5, 2), (2, 4, 5), (([2], [0]), ([1], [2]))),
+ ((7, 3, 5, 2), (2, 2, 4, 5), (([3], [0]), ([2], [3]))),
]
for dtype in float_dtypes))
def testDotGeneralContractAndBatchGrads(self, lhs_shape, rhs_shape, dtype,
diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -743,10 +743,14 @@ def testDotAgainstNumpy(self, lhs_shape, rhs_shape, dtype, rng_factory):
"lhs_contracting": lhs_contracting, "rhs_contracting": rhs_contracting,
"rng_factory": rng_factory}
for lhs_shape, rhs_shape, lhs_contracting, rhs_contracting in [
+ [(5,), (5,), [0], [0]],
+ [(5, 7), (5,), [0], [0]],
+ [(7, 5), (5,), [1], [0]],
[(3, 5), (2, 5), [1], [1]],
[(5, 3), (5, 2), [0], [0]],
[(5, 3, 2), (5, 2, 4), [0], [0]],
[(5, 3, 2), (5, 2, 4), [0,2], [0,1]],
+ [(5, 3, 2), (3, 5, 2, 4), [0,2], [1,2]],
[(1, 2, 2, 3), (1, 2, 3, 1), [1], [1]],
[(3, 2), (2, 4), [1], [0]],
]
@@ -773,6 +777,7 @@ def fun(lhs, rhs):
"dimension_numbers": dimension_numbers, "rng_factory": rng_factory}
for lhs_shape, rhs_shape, dimension_numbers in [
((3, 3, 2), (3, 2, 4), (([2], [1]), ([0], [0]))),
+ ((3, 3, 2), (2, 3, 4), (([2], [0]), ([0], [1]))),
((3, 4, 2, 4), (3, 4, 3, 2), (([2], [3]), ([0, 1], [0, 1]))),
]
for dtype in all_dtypes
@@ -797,6 +802,7 @@ def fun(lhs, rhs):
"dimension_numbers": dimension_numbers, "rng_factory": rng_factory}
for lhs_shape, rhs_shape, dimension_numbers in [
((3, 3, 2), (3, 2, 4), (([2], [1]), ([0], [0]))),
+ ((3, 3, 2), (2, 3, 4), (([2], [0]), ([0], [1]))),
((3, 4, 2, 4), (3, 4, 3, 2), (([2], [3]), ([0, 1], [0, 1]))),
]
for dtype in all_dtypes
diff --git a/tests/lax_vmap_test.py b/tests/lax_vmap_test.py
--- a/tests/lax_vmap_test.py
+++ b/tests/lax_vmap_test.py
@@ -238,10 +238,14 @@ def testDot(self, lhs_shape, rhs_shape, dtype, bdims, rng_factory):
"lhs_contracting": lhs_contracting, "rhs_contracting": rhs_contracting,
"bdims": bdims, "rng_factory": rng_factory}
for lhs_shape, rhs_shape, lhs_contracting, rhs_contracting in [
+ [(5,), (5,), [0], [0]],
+ [(5, 7), (5,), [0], [0]],
+ [(7, 5), (5,), [1], [0]],
[(3, 5), (2, 5), [1], [1]],
[(5, 3), (5, 2), [0], [0]],
[(5, 3, 2), (5, 2, 4), [0], [0]],
[(5, 3, 2), (5, 2, 4), [0,2], [0,1]],
+ [(5, 3, 2), (3, 5, 2, 4), [0,2], [1,2]],
[(1, 2, 2, 3), (1, 2, 3, 1), [1], [1]],
[(3, 2), (2, 4), [1], [0]],
]
@@ -266,6 +270,7 @@ def testDotGeneralContractOnly(self, lhs_shape, rhs_shape, dtype,
"dimension_numbers": dimension_numbers, "bdims": bdims, "rng_factory": rng_factory}
for lhs_shape, rhs_shape, dimension_numbers in [
((3, 3, 2), (3, 2, 4), (([2], [1]), ([0], [0]))),
+ ((3, 3, 2), (2, 3, 4), (([2], [0]), ([0], [1]))),
((3, 4, 2, 4), (3, 4, 3, 2), (([2], [3]), ([0, 1], [0, 1]))),
]
for bdims in all_bdims(lhs_shape, rhs_shape)
@@ -278,6 +283,12 @@ def testDotGeneralContractAndBatch(self, lhs_shape, rhs_shape, dtype,
self._CheckBatching(dot, 5, bdims, (lhs_shape, rhs_shape), (dtype, dtype),
rng)
+ # Checks that batching didn't introduce any transposes or broadcasts.
+ jaxpr = api.make_jaxpr(dot)(np.zeros(lhs_shape, dtype),
+ np.zeros(rhs_shape, dtype))
+ for eqn in jtu.iter_eqns(jaxpr.jaxpr):
+ self.assertFalse(eqn.primitive in ["transpose", "broadcast"])
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_dtype={}_broadcast_sizes={}_bdims={}".format(
shape, np.dtype(dtype).name, broadcast_sizes, bdims),
| Inefficient `moveaxis` in `dot_general` batching rule
When batching a `dot_general` primitive, the batch axis is [moved to the end](https://github.com/google/jax/blob/b543652332a98e5c15db04651417404513aa4d1a/jax/lax/lax.py#L2563) of the array. Right now, this can cause problems for high-rank tensors because the order of axes affects the way XLA compiles the operation (in particular, can result in padding out the moved batched axis to a multiple of 128, instead of using one of the other larger axes).
Instead of moving the batch axis to the end, it should be possible to keep it in its original location, which would give the user more control over how XLA compiles the computation.
A repro that produces this behavior:
```
x = jnp.zeros((16, 3, 256, 256))
y = jnp.zeros((2, 4, 256))
z = jnp.zeros((16, 3, 4, 256, 2048))
def go(x, y, z):
  return jnp.einsum("vnn,sn,vsnj->nj", x, y, z)

@jax.jit
def repro(x, y, z):
  go_bat = jax.vmap(jax.vmap(go, (0, None, 0)), (None, 0, None))
  return go_bat(x, y, z)
repro(x, y, z)
```
| 2020-07-16T20:25:06 |
|
google/jax | 3,845 | google__jax-3845 | [
"3843"
] | f6221a663ef6c311e986aeffcef7403be09e8b15 | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -635,6 +635,11 @@ def __init__(self) -> None:
self.substack = [Sublevel(0)]
self.initial_style = False
+ def set_state(self, other: 'TraceState') -> None:
+ self.trace_stack = other.trace_stack
+ self.sustack = other.substack[:]
+ self.initial_style = other.initial_style
+
def copy(self):
new = TraceState()
new.trace_stack = self.trace_stack.copy()
diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -58,11 +58,12 @@ def _memoize(thunk):
saved_state = core.trace_state.copy()
def memoized():
if not cell:
- prev_state, core.trace_state = core.trace_state, saved_state
+ prev_state = core.trace_state.copy()
+ core.trace_state.set_state(saved_state)
try:
cell.append(thunk())
finally:
- core.trace_state = prev_state
+ core.trace_state.set_state(prev_state)
return cell[0]
return memoized
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2517,6 +2517,24 @@ def f(x, y):
expected = 12.
self.assertAllClose(ans, expected, check_dtypes=False)
+ def test_concurrent_initial_style(self):
+ # https://github.com/google/jax/issues/3843
+ def unroll(param, sequence):
+ def scan_f(prev_state, inputs):
+ return prev_state, jax.nn.sigmoid(param * inputs)
+ return jnp.sum(jax.lax.scan(scan_f, None, sequence)[1])
+
+ def run():
+ return jax.grad(unroll)(jnp.array(1.0), jnp.array([1.0]))
+
+ # we just don't want this to crash
+ n_workers = 20
+ with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as e:
+ futures = []
+ for _ in range(n_workers):
+ futures.append(e.submit(run))
+ _ = [f.result() for f in futures]
+
class CustomVJPTest(jtu.JaxTestCase):
| multi-thread jax.grad fails on jax.lax.scan
Calculating the gradient of a function in multiple independent threads sometimes causes jax to encounter errors of the form.
```
[...]
third_party/py/jax/core.py in pop(self, bottom)
605 def pop(self, bottom: bool) -> None:
606 if bottom:
--> 607 self.downward.pop()
608 else:
609 self.upward.pop()
IndexError: pop from empty list
```
In my original setup this happened only in 1% of the cases; here is a minimal example which reproduces the issue most of the time:
```
import concurrent
import jax
from jax import numpy as jnp
def unroll(param, sequence):
  def scan_f(prev_state, inputs):
    return prev_state, jax.nn.sigmoid(param * inputs)
  return jnp.sum(jax.lax.scan(scan_f, None, sequence)[1])

def run():
  return jax.grad(unroll)(jnp.array(1.0), jnp.array([1.0]))

# The more workers the more likely the issue appears. Using 1 worker works as expected.
n_workers = 20
with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as e:
  futures = []
  for _ in range(n_workers):
    futures.append(e.submit(run))
  unused_results = [f.result() for f in futures]
```
| Thanks for finding this, and the excellent repro!
I suspect the `custom_jvp` on `jax.nn.sigmoid` may be an issue here; if I replace `jax.nn.sigmoid` with `jnp.sin` the issue goes away.
(By the way, I think `import concurrent` should be `import concurrent.futures`.)
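A minimal sketch of that substitution, for anyone trying to reproduce the difference — only the `jax.nn.sigmoid` call from the repro above is swapped for `jnp.sin`, and the `unroll_sin` name is just for illustration:
```python
import jax
from jax import numpy as jnp

def unroll_sin(param, sequence):
  def scan_f(prev_state, inputs):
    # jnp.sin has no custom_jvp, unlike jax.nn.sigmoid
    return prev_state, jnp.sin(param * inputs)
  return jnp.sum(jax.lax.scan(scan_f, None, sequence)[1])

jax.grad(unroll_sin)(jnp.array(1.0), jnp.array([1.0]))
```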
I further suspect that [this state saving/restoration](https://github.com/google/jax/blob/f6221a663ef6c311e986aeffcef7403be09e8b15/jax/custom_derivatives.py#L61) is not safe.
Yes, that line is definitely not safe; it's swapping a module-level variable! We need to swap only thread-local variables.
I think I've got a fix. | 2020-07-24T01:59:45 |
google/jax | 3,879 | google__jax-3879 | [
"3877"
] | 3aa37d3af4324f53c72270296ca905189ab6fd99 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -907,7 +907,7 @@ def _mapped_axis_size(tree, vals, dims, name):
# TODO(mattjj,phawkins): add a way to inspect pytree kind more directly
if tree == tree_flatten((core.unit,) * tree.num_leaves)[1]:
lines1 = ["arg {} has shape {} and axis {} is to be mapped"
- .format(i, x.shape, d) for i, (x, d) in enumerate(zip(vals, dims))]
+ .format(i, np.shape(x), d) for i, (x, d) in enumerate(zip(vals, dims))]
sizes = collections.defaultdict(list)
for i, (x, d) in enumerate(zip(vals, dims)):
if d is not None:
@@ -919,11 +919,11 @@ def _mapped_axis_size(tree, vals, dims, name):
"axes" if len(idxs) > 1 else "an axis",
size)
for size, idxs in sizes.items()]
- raise ValueError(msg.format("\n".join(lines1 + ["so"] + lines2))) from e
+ raise ValueError(msg.format("\n".join(lines1 + ["so"] + lines2))) from None
else:
sizes = [x.shape[d] if d is not None else None for x, d in zip(vals, dims)]
sizes = tree_unflatten(tree, sizes)
- raise ValueError(msg.format("the tree of axis sizes is:\n{}".format(sizes))) from e
+ raise ValueError(msg.format("the tree of axis sizes is:\n{}".format(sizes))) from None
def pmap(fun: Callable, axis_name: Optional[AxisName] = None, *, in_axes=0,
static_broadcasted_argnums: Union[int, Iterable[int]] = (),
| vmap mismatched dim exception has another exception composing the error msg
```python
import jax
import jax.numpy as np

def recursion(depth, seed, xt, xl):
  seed, sample_seed = jax.random.split(seed)
  if depth == 2:
    return xl
  # tf refers to TensorFlow; the vmap error below is raised before this
  # line is ever traced, so the call never actually runs.
  xl = xl + tf.random.stateless_normal([], seed=sample_seed)
  return recursion(depth + 1, seed, xt, xl)

jax.vmap(recursion, in_axes=(None, None, 0, 0))(
    0, jax.random.PRNGKey(2), np.linspace(0, 1, 7), np.zeros([2]))
```
want: (something like what you get with `np.array(0)` for the first arg)
```
ValueError: vmap got inconsistent sizes for array axes to be mapped:
arg 0 has shape () and axis None is to be mapped
arg 1 has shape (2,) and axis None is to be mapped
arg 2 has shape (7,) and axis 0 is to be mapped
arg 3 has shape (2,) and axis 0 is to be mapped
so
arg 2 has an axis to be mapped of size 7
arg 3 has an axis to be mapped of size 2
```
but get
```
jax/api.py in <listcomp>(.0)
908 if tree == tree_flatten((core.unit,) * tree.num_leaves)[1]:
909 lines1 = ["arg {} has shape {} and axis {} is to be mapped"
--> 910 .format(i, x.shape, d) for i, (x, d) in enumerate(zip(vals, dims))]
911 sizes = collections.defaultdict(list)
912 for i, (x, d) in enumerate(zip(vals, dims)):
AttributeError: 'int' object has no attribute 'shape'
```
| Thanks for reporting this! | 2020-07-28T04:04:32 |
|
google/jax | 3,887 | google__jax-3887 | [
"3886"
] | 7506a3e5f0137f48415def878bea71c5977dfbd0 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -2242,7 +2242,7 @@ def _integer_pow_jvp(g, x, *, y):
xor_p = standard_naryop([_bool_or_int, _bool_or_int], 'xor')
ad.defjvp_zero(xor_p)
-population_count_p = standard_unop(_bool_or_int, 'population_count')
+population_count_p = standard_unop(_int, 'population_count')
def _add_transpose(t, x, y):
# The following linearity assertion is morally true, but because in some cases we
diff --git a/jax/lax_reference.py b/jax/lax_reference.py
--- a/jax/lax_reference.py
+++ b/jax/lax_reference.py
@@ -119,9 +119,14 @@ def rem(lhs, rhs):
# TODO shift_right_logical
def population_count(x):
+ assert np.issubdtype(x.dtype, np.integer)
dtype = x.dtype
- if x.dtype in (np.uint8, np.uint16):
- x = x.astype(np.uint32)
+ iinfo = np.iinfo(x.dtype)
+ if np.iinfo(x.dtype).bits < 32:
+ assert iinfo.kind in ('i', 'u')
+ x = x.astype(np.uint32 if iinfo.kind == 'u' else np.int32)
+ if iinfo.kind == 'i':
+ x = x.view(f"uint{np.iinfo(x.dtype).bits}")
assert x.dtype in (np.uint32, np.uint64)
m = [
0x5555555555555555, # binary: 0101...
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -137,7 +137,7 @@ def op_record(op, nargs, dtypes, rng_factory, tol=None):
op_record("bitwise_not", 1, bool_dtypes, jtu.rand_small),
op_record("bitwise_or", 2, bool_dtypes, jtu.rand_small),
op_record("bitwise_xor", 2, bool_dtypes, jtu.rand_small),
- op_record("population_count", 1, uint_dtypes, jtu.rand_int),
+ op_record("population_count", 1, int_dtypes + uint_dtypes, jtu.rand_int),
op_record("add", 2, default_dtypes + complex_dtypes, jtu.rand_small),
op_record("sub", 2, default_dtypes + complex_dtypes, jtu.rand_small),
@@ -1786,6 +1786,12 @@ def test_reduction_with_repeated_axes_error(self):
with self.assertRaisesRegex(ValueError, "duplicate value in 'axes' .*"):
lax.reduce(np.arange(3), 0, lax.add, (0, 0))
+ def test_population_count_booleans_not_supported(self):
+ # https://github.com/google/jax/issues/3886
+ msg = "population_count does not accept dtype bool"
+ with self.assertRaisesRegex(TypeError, msg):
+ lax.population_count(True)
+
class LazyConstantTest(jtu.JaxTestCase):
def _Check(self, make_const, expected):
| lax.population_count does not support boolean dtype
Even though the use case is limited (`population_count(x) == x` if `x`'s dtype is `jax.numpy.bool_`), `lax.population_count_p` is defined as `population_count_p = standard_unop(_bool_or_int, 'population_count')`. However, this is what happens if `lax.population_count` is called with an argument with dtype `jax.numpy.bool_`:
```python
>>> lax.population_count(np.array([True, False], dtype=np.bool_))
Traceback (most recent call last):
File "/jax/jax/interpreters/xla.py", line 311, in primitive_computation
return c.build()
RuntimeError: Invalid argument: Expected an integral element type in argument to PopulationCount operation; got PRED.:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/jax/jax/lax/lax.py", line 296, in population_count
return population_count_p.bind(x)
File "/jax/jax/core.py", line 275, in bind
return self.impl(*args, **kwargs)
File "/jax/jax/interpreters/xla.py", line 224, in apply_primitive
compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args), **params)
File "/jax/jax/interpreters/xla.py", line 257, in xla_primitive_callable
built_c = primitive_computation(prim, AxisEnv(nreps), backend, tuple_args,
File "/jax/jax/interpreters/xla.py", line 316, in primitive_computation
raise RuntimeError(msg) from e
RuntimeError: Invalid argument: Expected an integral element type in argument to PopulationCount operation; got PRED.:
This is a bug in JAX's shape-checking rules; please report it!
https://github.com/google/jax/issues
```
I think the simplest (and probably best) way to fix this is to remove booleans from the list of allowed types.
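For completeness, a minimal sketch of the caller-side workaround in the meantime — simply cast booleans to an unsigned integer type first:
```python
import jax.numpy as jnp
from jax import lax

x = jnp.array([True, False])
lax.population_count(x.astype(jnp.uint8))  # DeviceArray([1, 0], dtype=uint8)
```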
| Good call! Either we could remove it from the allowed types, or cast to integer internally. I lean the same way you do, that removing the purported support for booleans is probably best. | 2020-07-28T14:55:59 |
google/jax | 3,888 | google__jax-3888 | [
"3883"
] | 7506a3e5f0137f48415def878bea71c5977dfbd0 | diff --git a/jax/lax/lax.py b/jax/lax/lax.py
--- a/jax/lax/lax.py
+++ b/jax/lax/lax.py
@@ -3673,6 +3673,8 @@ def _dynamic_slice_transpose_rule(t, operand, *start_indices, slice_sizes):
[None] * len(start_indices))
def _batch_dynamic_slice_indices(indices, bdims):
+ if len(indices) == 0:
+ return np.array([], 'int32'), None
size = next((x.shape[i] for x, i in zip(indices, bdims) if i is not None), -1)
if size < 0:
return concatenate([broadcast(i, (1,)) for i in indices], 0), None
@@ -3769,8 +3771,8 @@ def _dynamic_update_slice_batching_rule(batched_args, batch_dims):
# scatter always.
operand, update, *start_idx = batched_args
operand_bd, update_bd, *start_idx_bd = batch_dims
- update_shape = (update.shape if update_bd is batching.not_mapped
- else tuple(np.delete(update.shape, update_bd)))
+ update_shape = (np.shape(update) if update_bd is batching.not_mapped
+ else tuple(np.delete(np.shape(update), update_bd)))
dims = tuple(range(len(update_shape)))
dnums = ScatterDimensionNumbers(update_window_dims=dims,
inserted_window_dims=(),
| diff --git a/tests/batching_test.py b/tests/batching_test.py
--- a/tests/batching_test.py
+++ b/tests/batching_test.py
@@ -960,6 +960,22 @@ def f(index1, index2):
expected = g(np.asarray([1]), np.asarray([2]))
self.assertAllClose(ans, expected)
+ def testIssue3883(self):
+ def scalar_f(x):
+ return lax.dynamic_slice(x, [], [])
+
+ xs = jnp.array([1, 2, 3, 4])
+ ans = vmap(scalar_f)(xs)
+ expected = jnp.array([scalar_f(x) for x in xs])
+ self.assertAllClose(ans, expected)
+
+ def scalar_f2(x):
+ return lax.dynamic_update_slice(x, 7, [])
+
+ xs = jnp.array([1, 2, 3, 4])
+ ans = vmap(scalar_f2)(xs)
+ expected = jnp.array([scalar_f2(x) for x in xs])
+ self.assertAllClose(ans, expected)
if __name__ == '__main__':
absltest.main(testLoader=jtu.JaxTestLoader())
| vmap of dynamic_slice of a scalar fails
Specifically, doing
```python
In [1]: import jax.numpy as np
In [2]: from jax import lax
In [3]: import jax
In [4]: jax.vmap(lambda x: lax.dynamic_slice(x, [], []))(np.array([1, 2, 3, 4]))
```
gives the following error
```text
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-ae0083eeb242> in <module>
----> 1 jax.vmap(lambda x: lax.dynamic_slice(x, [], []))(np.array([1, 2, 3, 4]))
~/dev/jax/jax/api.py in batched_fun(*args)
878 _ = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "vmap")
879 out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
--> 880 lambda: flatten_axes("vmap out_axes", out_tree(),
881 out_axes))
882 return tree_unflatten(out_tree(), out_flat)
~/dev/jax/jax/interpreters/batching.py in batch(fun, in_vals, in_dims, out_dim_dests)
32 # executes a batched version of `fun` following out_dim_dests
33 batched_fun = batch_fun(fun, in_dims, out_dim_dests)
---> 34 return batched_fun.call_wrapped(*in_vals)
35
36 @lu.transformation_with_aux
~/dev/jax/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
148 gen = None
149
--> 150 ans = self.f(*args, **dict(self.params, **kwargs))
151 del args
152 while stack:
<ipython-input-4-ae0083eeb242> in <lambda>(x)
----> 1 jax.vmap(lambda x: lax.dynamic_slice(x, [], []))(np.array([1, 2, 3, 4]))
~/dev/jax/jax/lax/lax.py in dynamic_slice(operand, start_indices, slice_sizes)
734 start_indices = _dynamic_slice_indices(operand, start_indices)
735 return dynamic_slice_p.bind(operand, *start_indices,
--> 736 slice_sizes=tuple(slice_sizes))
737
738 def dynamic_update_slice(operand: Array, update: Array,
~/dev/jax/jax/core.py in bind(self, *args, **kwargs)
276
277 tracers = map(top_trace.full_raise, args)
--> 278 out_tracer = top_trace.process_primitive(self, tracers, kwargs)
279 if self.multiple_results:
280 return map(full_lower, out_tracer)
~/dev/jax/jax/interpreters/batching.py in process_primitive(self, primitive, tracers, params)
132 # TODO(mattjj,phawkins): if no rule implemented, could vmap-via-map here
133 batched_primitive = get_primitive_batcher(primitive)
--> 134 val_out, dim_out = batched_primitive(vals_in, dims_in, **params)
135 if primitive.multiple_results:
136 return map(partial(BatchTracer, self), val_out, dim_out)
~/dev/jax/jax/lax/lax.py in _dynamic_slice_batching_rule(batched_args, batch_dims, slice_sizes)
3696 dnums = GatherDimensionNumbers(offset_dims=dims, collapsed_slice_dims=(),
3697 start_index_map=dims)
-> 3698 index, index_bdim = _batch_dynamic_slice_indices(start_indices, start_idx_bds)
3699 return _gather_batching_rule(
3700 [operand, index], [operand_bd, index_bdim], dimension_numbers=dnums,
~/dev/jax/jax/lax/lax.py in _batch_dynamic_slice_indices(indices, bdims)
3676 size = next((x.shape[i] for x, i in zip(indices, bdims) if i is not None), -1)
3677 if size < 0:
-> 3678 return concatenate([broadcast(i, (1,)) for i in indices], 0), None
3679 indices = concatenate(
3680 [broadcast_in_dim(x, (size, 1),
~/dev/jax/jax/lax/lax.py in concatenate(operands, dimension)
446 An array containing the concatenation.
447 """
--> 448 return concatenate_p.bind(*operands, dimension=dimension)
449
450 Precision = xla_client.PrecisionConfig.Precision
~/dev/jax/jax/core.py in bind(self, *args, **kwargs)
273 top_trace = find_top_trace(args)
274 if top_trace is None:
--> 275 return self.impl(*args, **kwargs)
276
277 tracers = map(top_trace.full_raise, args)
~/dev/jax/jax/interpreters/xla.py in apply_primitive(prim, *args, **params)
222 def apply_primitive(prim, *args, **params):
223 """Impl rule that compiles and runs a single primitive 'prim' using XLA."""
--> 224 compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args), **params)
225 return compiled_fun(*args)
226
~/dev/jax/jax/interpreters/xla.py in xla_primitive_callable(prim, *arg_specs, **params)
238 return _xla_callable(lu.wrap_init(prim_fun), device, None, "prim", donated_invars,
239 *arg_specs)
--> 240 aval_out = prim.abstract_eval(*avals, **params)
241 if not prim.multiple_results:
242 handle_result = aval_to_result_handler(device, aval_out)
~/dev/jax/jax/lax/lax.py in standard_abstract_eval(prim, shape_rule, dtype_rule, *args, **kwargs)
1845 assert all(isinstance(arg, UnshapedArray) for arg in args), args
1846 least_specialized = _max(
-> 1847 map(type, args), key=operator.attrgetter('array_abstraction_level'))
1848 if least_specialized is ConcreteArray:
1849 return ConcreteArray(prim.impl(*[x.val for x in args], **kwargs))
ValueError: max() arg is an empty sequence
```
I'm just writing a test for this now and will have a go at fixing it.
| 2020-07-28T15:23:46 |
|
google/jax | 3,922 | google__jax-3922 | [
"3919",
"3919"
] | e8c7d9e2812cc8ca1a1b8ac78860d0882122fa30 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -2108,7 +2108,10 @@ def tile(A, reps):
A = reshape(A, (1,) * (len(reps) - ndim(A)) + shape(A))
reps = (1,) * (ndim(A) - len(reps)) + tuple(reps)
for i, rep in enumerate(reps):
- A = concatenate([A] * int(rep), axis=i)
+ if rep == 0:
+ A = A[tuple(slice(0 if j == i else None) for j in range(A.ndim))]
+ elif rep != 1:
+ A = concatenate([A] * int(rep), axis=i)
return A
@_wraps(np.concatenate)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1137,7 +1137,7 @@ def args_maker():
jtu.format_shape_dtype_string(shape, dtype), reps),
"shape": shape, "dtype": dtype, "reps": reps,
"rng_factory": jtu.rand_default}
- for reps in [(), (2,), (3, 4), (2, 3, 4)]
+ for reps in [(), (2,), (3, 4), (2, 3, 4), (1, 0, 2)]
for shape, dtype in _shape_and_dtypes(all_shapes, default_dtypes)
))
def testTile(self, shape, dtype, reps, rng_factory):
| jax.numpy.tile inconsistent with np.tile (when 0 in repeats)
```python
>>> import numpy as np
>>> import jax.numpy as jnp
>>> np.tile(np.array([0, 1, 2]), (1, 1, 2))
array([[[0, 1, 2, 0, 1, 2]]])
>>> jnp.tile(jnp.array([0, 1, 2]), (1, 1, 2))
DeviceArray([[[0, 1, 2, 0, 1, 2]]], dtype=int64)
>>> np.tile(np.array([0, 1, 2]), (1, 0, 2))
array([], shape=(1, 0, 6), dtype=int64)
>>> jnp.tile(jnp.array([0, 1, 2]), (1, 0, 2))
...
ValueError: Need at least one array to concatenate
```
A fix locally is to wrap the `jax.numpy.tile` like so:
```python
if 0 in repeats:
  return jnp.array([]).reshape(np.array(tensor_in.shape) * np.array(repeats))
return jnp.tile(tensor_in, repeats)
```
| 2020-07-31T05:16:45 |
|
google/jax | 3,949 | google__jax-3949 | [
"3927"
] | 16c63e0c3126edf34b5ede7d170c6dbca3cd80b2 | diff --git a/jax/numpy/__init__.py b/jax/numpy/__init__.py
--- a/jax/numpy/__init__.py
+++ b/jax/numpy/__init__.py
@@ -37,7 +37,8 @@
fmod, frexp, full, full_like, function, gcd, geomspace, gradient, greater,
greater_equal, hamming, hanning, heaviside, histogram, histogram_bin_edges,
hsplit, hstack, hypot, identity, iinfo, imag,
- indices, inexact, in1d, inf, inner, int16, int32, int64, int8, int_, integer, intersect1d,
+ indices, inexact, in1d, inf, inner, int16, int32, int64, int8, int_, integer,
+ interp, intersect1d,
isclose, iscomplex, iscomplexobj, isfinite, isin, isinf, isnan, isneginf,
isposinf, isreal, isrealobj, isscalar, issubdtype, issubsctype, iterable,
ix_, kaiser, kron, lcm, ldexp, left_shift, less, less_equal, lexsort, linspace,
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -1266,6 +1266,32 @@ def _maybe_numpy_1_13_isclose_behavior(a, out):
def _maybe_numpy_1_13_isclose_behavior(a, out):
return out
+@_wraps(np.interp)
+def interp(x, xp, fp, left=None, right=None, period=None):
+ if shape(xp) != shape(fp) or ndim(xp) != 1:
+ raise ValueError("xp and fp must be one-dimensional arrays of equal size")
+ x, xp, fp = map(asarray, _promote_dtypes_inexact(x, xp, fp))
+ if period is not None:
+ if period == 0:
+ raise ValueError(f"period must be a non-zero value; got {period}")
+ period = abs(period)
+ x = x % period
+ xp = xp % period
+ xp, fp = lax.sort_key_val(xp, fp)
+ xp = concatenate([xp[-1:] - period, xp, xp[:1] + period])
+ fp = concatenate([fp[-1:], fp, fp[:1]])
+
+ i = clip(searchsorted(xp, x, side='right'), 1, len(xp) - 1)
+ df = fp[i] - fp[i - 1]
+ dx = xp[i] - xp[i - 1]
+ delta = x - xp[i - 1]
+ f = where((dx == 0) | (x == xp[i]), fp[i], fp[i - 1] + delta * (df / dx))
+
+ if period is None:
+ f = where(x < xp[0], fp[0] if left is None else left, f)
+ f = where(x > xp[-1], fp[-1] if right is None else right, f)
+ return f
+
@_wraps(np.in1d, lax_description="""
In the JAX version, the `assume_unique` argument is not referenced.
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1780,6 +1780,32 @@ def testIdentity(self, n, dtype):
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
self._CompileAndCheck(jnp_fun, args_maker)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_{}_period={}_left={}_right={}".format(
+ jtu.format_shape_dtype_string(shape, dtype), period, left, right),
+ "shape": shape, "dtype": dtype,
+ "period": period, "left": left, "right": right}
+ for shape in nonempty_shapes
+ for period in [None, 0.59]
+ for left in [None, 0]
+ for right in [None, 1]
+ for dtype in default_dtypes
+ # following types lack precision for meaningful tests
+ if dtype not in [np.int8, np.int16, np.float16, jnp.bfloat16]
+ ))
+ def testInterp(self, shape, dtype, period, left, right):
+ rng = jtu.rand_default(self.rng(), scale=10)
+ kwds = dict(period=period, left=left, right=right)
+ np_fun = partial(np.interp, **kwds)
+ jnp_fun = partial(jnp.interp, **kwds)
+ args_maker = lambda: [rng(shape, dtype), np.sort(rng((20,), dtype)), np.linspace(0, 1, 20)]
+
+ # skip numpy comparison for integer types with period specified, because numpy
+ # uses an unstable sort and so results differ for duplicate values.
+ if not (period and np.issubdtype(dtype, np.integer)):
+ self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker, tol={np.float32: 2E-4})
+ self._CompileAndCheck(jnp_fun, args_maker)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_x1={}_x2={}_x1_rng={}".format(
jtu.format_shape_dtype_string(x1_shape, x1_dtype),
| [FR] jnp.interp & jnp.piecewise
Hey, I am trying to create a learning rate schedule and it would be super helpful to have `interp` or `piecewise` for this task.
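A sketch of the kind of learning-rate schedule this request is about, assuming an `interp` with NumPy semantics — the breakpoints and rates below are made up for illustration:
```python
import jax.numpy as jnp

def lr_schedule(step):
  # piecewise-linear warmup to 1e-3, then linear decay to 1e-5
  return jnp.interp(step, jnp.array([0., 1000., 10000.]),
                    jnp.array([0., 1e-3, 1e-5]))
```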
| `interp` is being discussed in #3860
Here's a quick implementation of `piecewise` based on `lax.switch`... not yet thoroughly tested or benchmarked.
```python
import numpy as np
from jax import lax, partial
from jax.numpy.vectorize import vectorize
from jax.numpy.lax_numpy import _wraps, int_, where, zeros_like
@_wraps(np.piecewise)
def piecewise(x, condlist, funclist, *args, **kw):
  funclist = [lambda x: 0] + [partial(f, *args, **kw) for f in funclist]
  indices = zeros_like(x, dtype=int_)
  for i, cond in enumerate(condlist):
    indices = where(cond, i + 1, indices)
  return vectorize(lax.switch, excluded=(1,))(indices, funclist, x)
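# Hypothetical usage for a piecewise-constant learning-rate schedule
# (names and breakpoints below are illustrative only, not tested):
#   x = np.arange(10.0)
#   lr = piecewise(x, [x < 3, x >= 3],
#                  [lambda v: 1e-2, lambda v: 1e-3])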
``` | 2020-08-03T22:50:33 |
google/jax | 3,958 | google__jax-3958 | [
"3952"
] | 15a9a70bb7a818cd84fc6e0ac45f6cde16a68845 | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -506,7 +506,9 @@ def __xor__(self, other): return self.aval._xor(self, other)
def __rxor__(self, other): return self.aval._rxor(self, other)
def __invert__(self): return self.aval._invert(self)
def __lshift__(self, other): return self.aval._lshift(self, other)
+ def __rlshift__(self, other): return self.aval._rlshift(self, other)
def __rshift__(self, other): return self.aval._rshift(self, other)
+ def __rrshift__(self, other): return self.aval._rrshift(self, other)
def __getitem__(self, idx): return self.aval._getitem(self, idx)
def __nonzero__(self): return self.aval._nonzero(self)
def __bool__(self): return self.aval._bool(self)
diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -394,7 +394,6 @@ def fn(x1, x2):
bitwise_and = _one_to_one_binop(np.bitwise_and, lax.bitwise_and)
bitwise_or = _one_to_one_binop(np.bitwise_or, lax.bitwise_or)
bitwise_xor = _one_to_one_binop(np.bitwise_xor, lax.bitwise_xor)
-right_shift = _one_to_one_binop(np.right_shift, lax.shift_right_arithmetic)
left_shift = _one_to_one_binop(np.left_shift, lax.shift_left)
equal = _one_to_one_binop(np.equal, lax.eq)
multiply = _maybe_bool_binop(np.multiply, lax.mul, lax.bitwise_and)
@@ -441,6 +440,14 @@ def op(*args):
logical_xor = _logical_op(np.logical_xor, lax.bitwise_xor)
+@_wraps(np.right_shift)
+def right_shift(x1, x2):
+ x1, x2 = _promote_args(np.right_shift.__name__, x1, x2)
+ lax_fn = lax.shift_right_logical if \
+ np.issubdtype(x1.dtype, np.unsignedinteger) else lax.shift_right_arithmetic
+ return lax_fn(x1, x2)
+
+
@_wraps(np.absolute)
def absolute(x):
return x if issubdtype(_dtype(x), unsignedinteger) else lax.abs(x)
@@ -4492,6 +4499,8 @@ def _operator_round(number, ndigits=None):
"invert": bitwise_not,
"lshift": _defer_to_unrecognized_arg(left_shift),
"rshift": _defer_to_unrecognized_arg(right_shift),
+ "rlshift": _defer_to_unrecognized_arg(_swap_args(left_shift)),
+ "rrshift": _defer_to_unrecognized_arg(_swap_args(right_shift)),
"round": _operator_round,
}
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -69,6 +69,9 @@
python_scalar_dtypes = [jnp.bool_, jnp.int_, jnp.float_, jnp.complex_]
+# uint64 is problematic because with any uint type it promotes to float:
+int_dtypes_no_uint64 = [d for d in int_dtypes + unsigned_dtypes if d != np.uint64]
+
def _valid_dtypes_for_shape(shape, dtypes):
# Not all (shape, dtype) pairs are valid. In particular, Python scalars only
# have one type in each category (float, bool, etc.)
@@ -351,7 +354,8 @@ def op_record(name, nargs, dtypes, shapes, rng_factory, diff_modes,
# op_record("__and__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
# op_record("__xor__", 2, number_dtypes, all_shapes, jtu.rand_bool, []),
# op_record("__divmod__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, []),
- # TODO(mattjj): lshift, rshift
+ op_record("__lshift__", 2, int_dtypes_no_uint64, all_shapes, partial(jtu.rand_int, high=8), []),
+ op_record("__rshift__", 2, int_dtypes_no_uint64, all_shapes, partial(jtu.rand_int, high=8), []),
]
JAX_RIGHT_OPERATOR_OVERLOADS = [
@@ -370,6 +374,8 @@ def op_record(name, nargs, dtypes, shapes, rng_factory, diff_modes,
# op_record("__rand__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
# op_record("__rxor__", 2, number_dtypes, all_shapes, jtu.rand_bool, []),
# op_record("__rdivmod__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, []),
+ op_record("__rlshift__", 2, int_dtypes_no_uint64, all_shapes, partial(jtu.rand_int, high=8), []),
+ op_record("__rrshift__", 2, int_dtypes_no_uint64, all_shapes, partial(jtu.rand_int, high=8), [])
]
class _OverrideEverything(object):
@@ -392,11 +398,8 @@ class _OverrideNothing(object):
JAX_COMPOUND_OP_RECORDS += [
op_record("isclose", 2, [t for t in all_dtypes if t != jnp.bfloat16],
all_shapes, jtu.rand_small_positive, []),
- # uint64 is problematic because with any other int type it promotes to float.
- op_record("gcd", 2, [d for d in int_dtypes + unsigned_dtypes if d != np.uint64],
- all_shapes, jtu.rand_default, []),
- op_record("lcm", 2, [d for d in int_dtypes + unsigned_dtypes if d != np.uint64],
- all_shapes, jtu.rand_default, []),
+ op_record("gcd", 2, int_dtypes_no_uint64, all_shapes, jtu.rand_default, []),
+ op_record("lcm", 2, int_dtypes_no_uint64, all_shapes, jtu.rand_default, []),
]
JAX_REDUCER_NO_DTYPE_RECORDS += [
op_record("ptp", 1, number_dtypes, nonempty_shapes, jtu.rand_default, []),
@@ -594,6 +597,35 @@ def testBitwiseOp(self, np_op, jnp_op, rng_factory, shapes, dtypes):
check_dtypes=jtu.PYTHON_SCALAR_SHAPE not in shapes)
self._CompileAndCheck(jnp_op, args_maker)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": jtu.format_test_name_suffix(op.__name__, shapes, dtypes),
+ "op": op, "dtypes": dtypes, "shapes": shapes}
+ for op in [jnp.left_shift, jnp.right_shift]
+ for shapes in filter(
+ _shapes_are_broadcast_compatible,
+ # TODO numpy always promotes to shift dtype for zero-dim shapes:
+ itertools.combinations_with_replacement(nonzerodim_shapes, 2))
+ for dtypes in itertools.product(
+ *(_valid_dtypes_for_shape(s, int_dtypes_no_uint64) for s in shapes))))
+ def testShiftOpAgainstNumpy(self, op, dtypes, shapes):
+ dtype, shift_dtype = dtypes
+ signed_mix = np.issubdtype(dtype, np.signedinteger) != \
+ np.issubdtype(shift_dtype, np.signedinteger)
+ has_32 = any(np.iinfo(d).bits == 32 for d in dtypes)
+ promoting_to_64 = has_32 and signed_mix
+ if promoting_to_64 and not FLAGS.jax_enable_x64:
+ self.skipTest("np.right_shift/left_shift promoting to int64"
+ "differs from jnp in 32 bit mode.")
+
+ info, shift_info = map(np.iinfo, dtypes)
+ x_rng = jtu.rand_int(self.rng(), low=info.min, high=info.max + 1)
+ # NumPy requires shifts to be non-negative and below the bit width:
+ shift_rng = jtu.rand_int(self.rng(), high=max(info.bits, shift_info.bits))
+ args_maker = lambda: (x_rng(shapes[0], dtype), shift_rng(shapes[1], shift_dtype))
+ self._CompileAndCheck(op, args_maker)
+ np_op = getattr(np, op.__name__)
+ self._CheckAgainstNumpy(np_op, op, args_maker)
+
@parameterized.named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": "{}_inshape={}_axis={}_dtype={}_keepdims={}".format(
| jnp.right_shift incorrect on unsigned ints
It looks like ~~`lax.shift_right_arithmetic`~~ `jnp.right_shift` incorrectly treats unsigned ints as signed:
```python
import numpy as np
from jax import lax, numpy as jnp
args = np.uint8(0b10000000), np.uint8(2)
print(f"{np.right_shift(*args):#010b} (expected)")
print(f"{jnp.right_shift(*args):#010b}")
print(f"{lax.shift_right_arithmetic(*args):#010b}")
```
results in
```
0b00100000 (expected)
0b11100000
0b11100000
```
Other unsigned int types produce the same issue. Tests for `lax.shift_right_arithmetic` were missing; they will be added with https://github.com/google/jax/pull/3923.
| I think that's working as intended. `shift_right_arithmetic` does an arithmetic (signed) shift irrespective of the type. If you want a logical shift you should use `shift_right_logical`. `lax` follows XLA in making the type of the shift independent of the carrier type.
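To make that concrete, the logical shift gives the result the example at the top of the issue expected (reusing the same values):
```python
import numpy as np
from jax import lax

args = np.uint8(0b10000000), np.uint8(2)
print(f"{lax.shift_right_logical(*args):#010b}")  # 0b00100000, matches np.right_shift
```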
@hawkinsp In that case, shouldn't `jnp.right_shift` resolve to `lax.shift_right_logical` for unsigned ints to match the behavior of `np`?
I could write the [missing `jnp.right_shift`/`left_shift` tests](https://github.com/google/jax/blob/master/tests/lax_numpy_test.py#L354) and fix this.
Yes, the jnp behavior should match the np behavior when possible. A PR with a fix & tests would be a great contribution! | 2020-08-04T15:53:59 |
google/jax | 3,988 | google__jax-3988 | [
"3985"
] | e39420b55d715a6e73aae0691936945975775451 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -4377,6 +4377,11 @@ def _astype(arr, dtype):
lax._check_user_dtype_supported(dtype, "astype")
return lax.convert_element_type(arr, dtype)
+
+def _nbytes(arr):
+ return size(arr) * _dtype(arr).itemsize
+
+
def _view(arr, dtype=None, type=None):
if type is not None:
raise NotImplementedError("`type` argument of array.view()")
@@ -4533,6 +4538,7 @@ def _operator_round(number, ndigits=None):
setattr(ShapedArray, "imag", core.aval_property(imag))
setattr(ShapedArray, "astype", core.aval_method(_astype))
setattr(ShapedArray, "view", core.aval_method(_view))
+setattr(ShapedArray, "nbytes", core.aval_property(_nbytes))
# Forward operators, methods, and properties on DeviceArray to lax_numpy
@@ -4548,6 +4554,7 @@ def _operator_round(number, ndigits=None):
setattr(DeviceArray, "imag", property(imag))
setattr(DeviceArray, "astype", _astype)
setattr(DeviceArray, "view", _view)
+setattr(DeviceArray, "nbytes", property(_nbytes))
# Extra methods that are handy
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -2662,6 +2662,20 @@ def testAstype(self):
self._CheckAgainstNumpy(np_op, jnp_op, args_maker)
self._CompileAndCheck(jnp_op, args_maker)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_{}".format(
+ jtu.format_shape_dtype_string(shape, dtype)),
+ "shape": shape, "dtype": dtype}
+ for shape in array_shapes
+ for dtype in all_dtypes))
+ def testNbytes(self, shape, dtype):
+ rng = jtu.rand_default(self.rng())
+ np_op = lambda x: np.asarray(x).nbytes
+ jnp_op = lambda x: jnp.asarray(x).nbytes
+ args_maker = lambda: [rng(shape, dtype)]
+ self._CheckAgainstNumpy(np_op, jnp_op, args_maker)
+ self._CompileAndCheck(jnp_op, args_maker)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_dtype={}".format(
jtu.format_shape_dtype_string(shape, a_dtype), dtype),
| JAX arrays should implement nbytes
`nbytes` is a convenient shortcut for calculating the size in bytes of a NumPy array. The implementation is simply `array.size * array.dtype.itemsize`.
But currently, we see:
```
>>> import jax.numpy as jnp
>>> jnp.zeros((1000, 1000)).nbytes
AttributeError: 'DeviceArray' object has no attribute 'nbytes'
```
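For reference, once the property exists the expected semantics mirror NumPy exactly; a quick check (assuming the default float32 dtype with x64 disabled, i.e. 4 bytes per element):
```python
import jax.numpy as jnp

x = jnp.zeros((1000, 1000))
# nbytes is simply the element count times the per-element byte width.
assert x.nbytes == x.size * x.dtype.itemsize == 4_000_000
```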
| 2020-08-07T14:32:41 |
|
google/jax | 4,023 | google__jax-4023 | [
"4022"
] | fe9f264b55f8b99c57f803db9eb7a2c8df897e9b | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -1510,12 +1510,8 @@ def clip(a, a_min=None, a_max=None):
if a_min is None and a_max is None:
raise ValueError("At most one of a_min and a_max may be None")
if a_min is not None:
- if _dtype(a_min) != _dtype(a):
- a_min = lax.convert_element_type(a_min, _dtype(a))
a = maximum(a_min, a)
if a_max is not None:
- if _dtype(a_max) != _dtype(a):
- a_max = lax.convert_element_type(a_max, _dtype(a))
a = minimum(a_max, a)
return a
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1081,10 +1081,10 @@ def np_fun(lhs, rhs):
"shape": shape, "dtype": dtype, "a_min": a_min, "a_max": a_max,
"rng_factory": jtu.rand_default}
for shape in all_shapes for dtype in number_dtypes
- for a_min, a_max in [(-1, None), (None, 1), (-1, 1),
+ for a_min, a_max in [(-1, None), (None, 1), (-0.9, 1),
(-np.ones(1), None),
(None, np.ones(1)),
- (-np.ones(1), np.ones(1))]))
+ (np.full(1, -0.9), np.ones(1))]))
def testClipStaticBounds(self, shape, dtype, a_min, a_max, rng_factory):
rng = rng_factory(self.rng())
np_fun = lambda x: np.clip(x, a_min=a_min, a_max=a_max)
| jax.numpy.clip has unexpected behavior which diverges from numpy.clip
Hi all!
I've recently encountered an issue with `jax.numpy.clip` which produces variable outputs based on the `dtype` of the input (I have jax version `0.2.0`). For example, if we define our `a_min` and `a_max` as floats, but pass an integer type for the `a` parameter, then the limits get cast to the same integer type as the input (see [here](https://github.com/google/jax/blob/ebc5e8bfd6d091bf1fbf67de1565705f30159aaf/jax/numpy/lax_numpy.py#L1514) and [here](https://github.com/google/jax/blob/ebc5e8bfd6d091bf1fbf67de1565705f30159aaf/jax/numpy/lax_numpy.py#L1518)). This can have some undesired results, especially since users might be accustomed to the `numpy.clip` behavior which does the exact opposite. Moreover, this divergence between the two implementations is [not explicitly documented](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.clip.html), as the jax documentation just copies [the numpy doc](https://numpy.org/doc/stable/reference/generated/numpy.clip.html). To detail the issue, I've attached two code snippets:
```python
>>> import jax.numpy as jnp
>>> import numpy as np
>>>
>>> eps = .0001
>>> f = lambda x: jnp.clip(x, eps - 1, 1 - eps) # x \in [-0.9999, 0.9999]
>>> g = lambda x: np.clip(x, eps - 1, 1 - eps) # x \in [-0.9999, 0.9999]
>>>
>>> x = jnp.arange(-3, 3)
>>> f(x)
DeviceArray([0, 0, 0, 0, 0, 0], dtype=int32)
>>> g(x)
array([-0.9999, -0.9999, -0.9999, 0. , 0.9999, 0.9999])
```
Considering the implementation, the behavior is as expected only when `jax` can infer the type or the type is explicitly specified:
```python
>>> x = jnp.arange(-3, 3, dtype=jnp.float32) # Something like jnp.arange(-3.0, 3.0) will also work
>>> f(x)
DeviceArray([-0.9999, -0.9999, -0.9999, 0. , 0.9999, 0.9999], dtype=float32)
>>> g(x)
array([-0.9999, -0.9999, -0.9999, 0. , 0.9999, 0.9999],
dtype=float32)
```
My two (mutually exclusive) suggestions would be the following:
* Explicitly document the difference between the two implementations
* Change the jax implementation such that it aligns with the numpy one
I would personally prefer the latter, since not everyone will read the docs, especially if they're already acquainted with the numpy method in the first place, however, I'm not sure what implications this will have on the jax codebase.
Thanks!
| Thanks - the issue is here: https://github.com/google/jax/blob/d551cec6e80bc7c31423739e67256c98b453e530/jax/numpy/lax_numpy.py#L1514
JAX coerces the bounds to the dtype of the input array, whereas numpy promotes the array to the dtype of the bounds. I think we should fix this to match numpy's behavior. | 2020-08-11T16:46:56 |
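A minimal promotion-respecting `clip`, sketching the behavior described in the hint above: it simply leans on `jnp.maximum`/`jnp.minimum`'s usual dtype promotion instead of casting the bounds down to the input's dtype (an illustration of the idea, not the exact library code):
```python
import jax.numpy as jnp

def clip(a, a_min=None, a_max=None):
    if a_min is None and a_max is None:
        raise ValueError("At most one of a_min and a_max may be None")
    if a_min is not None:
        a = jnp.maximum(a_min, a)  # promotes a to the bound's dtype if needed
    if a_max is not None:
        a = jnp.minimum(a_max, a)
    return a

print(clip(jnp.arange(-3, 3), -0.9999, 0.9999))
# [-0.9999 -0.9999 -0.9999  0.      0.9999  0.9999]
```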
google/jax | 4,029 | google__jax-4029 | [
"4015"
] | c564aca77710df0599715d4231b7d5b7dd46984a | diff --git a/jax/experimental/host_callback.py b/jax/experimental/host_callback.py
--- a/jax/experimental/host_callback.py
+++ b/jax/experimental/host_callback.py
@@ -136,6 +136,7 @@ def power3(x):
from jax import api
from jax import core
+from jax import custom_derivatives
from jax import lax
from jax.lib import pytree
from jax.interpreters import ad, xla, batching, masking
@@ -654,6 +655,39 @@ def _rewrite_eqn(eqn: core.JaxprEqn, eqns: List[core.JaxprEqn],
eqn.params,
call_jaxpr=_rewrite_jaxpr(call_jaxpr, True,
True)), eqn.source_info))
+ elif eqn.primitive is custom_derivatives.custom_jvp_call_jaxpr_p:
+ fun_jaxpr = eqn.params["fun_jaxpr"]
+ new_invars = [*eqn.invars, input_token_var]
+ def unreachable_thunk():
+ assert False, "Should not be reached"
+ eqns.append(
+ core.new_jaxpr_eqn(
+ new_invars, eqn.outvars + [output_token_var], eqn.primitive,
+ dict(
+ eqn.params,
+ fun_jaxpr=_rewrite_typed_jaxpr(fun_jaxpr, True, True),
+ jvp_jaxpr_thunk=unreachable_thunk
+ ),
+ eqn.source_info))
+ elif eqn.primitive is custom_derivatives.custom_vjp_call_jaxpr_p:
+ fun_jaxpr = eqn.params["fun_jaxpr"]
+ new_invars = [*eqn.invars, input_token_var]
+ def unreachable_thunk():
+ assert False, "Should not be reached"
+ eqns.append(
+ core.new_jaxpr_eqn(
+ new_invars, eqn.outvars + [output_token_var], eqn.primitive,
+ dict(
+ eqn.params,
+ fun_jaxpr=_rewrite_typed_jaxpr(fun_jaxpr, True, True),
+ fwd_jaxpr_thunk=unreachable_thunk,
+ # The following are illegal values for the parameters, they
+ # should not be needed because this rewrite is just before
+ # compilation to XLA, which does not use those parameters.
+ bwd="illegal param",
+ out_trees="illegal param"
+ ),
+ eqn.source_info))
else:
raise NotImplementedError(f"outfeed rewrite {eqn.primitive}")
| diff --git a/tests/host_callback_test.py b/tests/host_callback_test.py
--- a/tests/host_callback_test.py
+++ b/tests/host_callback_test.py
@@ -90,6 +90,10 @@ def repl_floats(match_group):
what = re.sub(r"\-?\d*\.[\-\def]*", repl_floats, what)
what = re.sub(r"output_stream=[^\]\n]*", "", what)
what = re.sub(r"threshold=[^\]\n]*", "", what)
+ what = re.sub(r"bwd=[^\]\n]*", "", what)
+ what = re.sub(r"out_trees=[^\]\n]*", "", what)
+ what = re.sub(r"fwd_jaxpr_thunk=[^\]\n]*", "", what)
+ what = re.sub(r"jvp_jaxpr_thunk=[^\]\n]*", "", what)
# Empty lines
what = re.sub(r"^\s*\n", "", what, flags=re.MULTILINE)
def repl_func(match_group):
@@ -916,6 +920,94 @@ def test_pmap(self):
expected_res = jnp.stack([fun1_equiv(2. + a) for a in range(api.local_device_count())])
self.assertAllClose(expected_res, res, check_dtypes=False)
+ def test_scan_custom_jvp(self):
+ """custom JVP, inside scan.
+ This exercises the custom_jvp_call_jaxpr primitives."""
+ @api.custom_jvp
+ def f(x):
+ return x * hcb.id_print(x, output_stream=testing_stream, what="x")
+
+ @f.defjvp
+ def f_jvp(primals, tangents):
+ x, = primals
+ x_dot, = tangents
+ primal_out = f(x)
+ tangent_out = 3. * x * hcb.id_print(x_dot, output_stream=testing_stream, what="x_dot")
+ return primal_out, tangent_out
+
+ def g(x):
+ # Sum f(x_i)
+ return lax.scan(lambda carry, inp: (carry + f(inp), 0.),
+ np.full(x.shape[1:], 0.), # Like x w/o leading dim
+ x)[0]
+
+ arg = np.full((2,), 0.7)
+ self.assertAllClose(0.7 * 0.7 * 2, g(arg))
+ hcb.barrier_wait()
+ self.assertMultiLineStrippedEqual("""
+ what: x
+ 0.7
+ what: x
+ 0.7""", testing_stream.output)
+ testing_stream.reset()
+
+ self.assertAllClose(np.array([2.1, 2.1]), api.grad(g)(arg), check_dtypes=False)
+ hcb.barrier_wait()
+ self.assertMultiLineStrippedEqual("""
+ what: x
+ 0.7
+ what: x
+ 0.7
+ transforms: ({'name': 'transpose'},) what: x_dot
+ 2.1
+ transforms: ({'name': 'transpose'},) what: x_dot
+ 2.1""", testing_stream.output)
+
+ def test_scan_custom_vjp(self):
+ """custom VJP, inside scan.
+ This exercises the custom_vjp_call_jaxpr primitives."""
+ @api.custom_vjp
+ def f(x):
+ return x * hcb.id_print(x, output_stream=testing_stream, what="x")
+
+ # f_fwd: a -> (b, residual)
+ def f_fwd(x):
+ return f(x), 3. * x
+ # f_bwd: (residual, CT b) -> [CT a]
+ def f_bwd(residual, ct_b):
+ return residual * hcb.id_print(ct_b, output_stream=testing_stream, what="ct_b"),
+
+ f.defvjp(f_fwd, f_bwd)
+
+ def g(x):
+ # Sum f(x_i)
+ return lax.scan(lambda carry, inp: (carry + f(inp), 0.),
+ np.full(x.shape[1:], 0.), # Like x w/o leading dim
+ x)[0]
+
+ arg = np.full((2,), 0.7)
+
+ self.assertAllClose(0.7 * 0.7 * 2, g(arg))
+ hcb.barrier_wait()
+ self.assertMultiLineStrippedEqual("""
+ what: x
+ 0.7
+ what: x
+ 0.7""", testing_stream.output)
+ testing_stream.reset()
+
+ self.assertAllClose(np.array([2.1, 2.1]), api.grad(g)(arg), check_dtypes=False)
+ hcb.barrier_wait()
+ self.assertMultiLineStrippedEqual("""
+ what: x
+ 0.7
+ what: x
+ 0.7
+ what: ct_b
+ 1.
+ what: ct_b
+ 1.""", testing_stream.output)
+
def test_mask(self):
# TODO(necula)
raise SkipTest("masking has regressed")
@@ -1186,9 +1278,176 @@ def func(x):
linear=(False, False, False, False, False)
num_carry=3
num_consts=1
- reverse=False ] b 1 2 f a
+ reverse=False
+ unroll=1 ] b 1 2 f a
in (c, d, e, g) }""", func, [y])
+ def test_scan_custom_jvp(self):
+ """custom JVP, inside scan.
+ This exercises the custom_jvp_call_jaxpr primitives."""
+ @api.custom_jvp
+ def f(x):
+ return x * hcb.id_print(x)
+
+ @f.defjvp
+ def f_jvp(primals, tangents):
+ x, = primals
+ x_dot, = tangents
+ primal_out = f(x)
+ tangent_out = 3. * x * hcb.id_print(x_dot)
+ return primal_out, tangent_out
+
+ def g(x):
+ # Sum f(x_i)
+ return lax.scan(lambda carry, inp: (carry + f(inp), 0.),
+ np.full(x.shape[1:], 0.), # Like x w/o leading dim
+ x)[0]
+
+ arg = np.full((5,), 0.7)
+ self.assertRewrite("""
+ { lambda ; a c.
+ let b d _ = scan[ jaxpr={ lambda ; a e b.
+ let c f = custom_jvp_call_jaxpr[ fun_jaxpr={ lambda ; a d.
+ let b e = id_tap[ arg_treedef_=*
+ has_token_=True
+ nr_tapped_args_=1
+ tap_func_=_print
+ ] a d
+ c = mul a b
+ in (c, e) }
+ ] b e
+ d = add a c
+ in (d, f, 0.00) }
+ length=5
+ linear=(False, False, False)
+ num_carry=2
+ num_consts=0
+ reverse=False
+ unroll=1 ] 0.00 c a
+ in (b, d) }""", g, [arg])
+ self.assertRewrite("""
+ { lambda ; a d.
+ let _ _ e _ b =
+ scan[ jaxpr={ lambda ; a b h c d.
+ let e i = custom_jvp_call_jaxpr[ fun_jaxpr={ lambda ; a d.
+ let b e = id_tap[ arg_treedef_=*
+ has_token_=True
+ nr_tapped_args_=1
+ tap_func_=_print
+ ] a d
+ c = mul a b
+ in (c, e) }
+ ] c h
+ f = add a e
+ g = mul c 3.00
+ in (f, *, i, 0.00, g) }
+ length=5
+ linear=(False, True, False, True, False)
+ num_carry=3
+ num_consts=0
+ reverse=False
+ unroll=1 ] 0.00 * d a *
+ _ _ f _ c =
+ scan[ jaxpr={ lambda ; a b g c d.
+ let e = mul b d
+ f h = id_tap[ arg_treedef_=*
+ has_token_=True
+ nr_tapped_args_=1
+ tap_func_=_print
+ transforms=(('transpose',),) ] e g
+ in (*, b, h, *, f) }
+ length=5
+ linear=(True, True, True, False, False)
+ num_carry=3
+ num_consts=0
+ reverse=True
+ unroll=1 ] * 1.00 e * b
+ in (c, f) }""", api.grad(g), [arg])
+
+ def test_scan_custom_vjp(self):
+ """custom VJP, inside scan.
+ This exercises the custom_vjp_call_jaxpr primitives."""
+ @api.custom_vjp
+ def f(x):
+ return x * hcb.id_print(x)
+
+ # f_fwd: a -> (b, residual)
+ def f_fwd(x):
+ return f(x), 3. * x
+ # f_bwd: (residual, CT b) -> [CT a]
+ def f_bwd(residual, ct_b):
+ return residual * hcb.id_print(ct_b),
+
+ f.defvjp(f_fwd, f_bwd)
+
+ def g(x):
+ # Sum f(x_i)
+ return lax.scan(lambda carry, inp: (carry + f(inp), 0.),
+ np.full(x.shape[1:], 0.), # Like x w/o leading dim
+ x)[0]
+
+ arg = np.full((2,), 0.7)
+ self.assertRewrite("""
+ { lambda ; a c.
+ let b d _ = scan[ jaxpr={ lambda ; a e b.
+ let c f = custom_vjp_call_jaxpr[
+ fun_jaxpr={ lambda ; a d.
+ let b e = id_tap[ arg_treedef_=*
+ has_token_=True
+ nr_tapped_args_=1
+ tap_func_=_print
+ ] a d
+ c = mul a b
+ in (c, e) }
+ ] b e
+ d = add a c
+ in (d, f, 0.00) }
+ length=2
+ linear=(False, False, False)
+ num_carry=2
+ num_consts=0
+ reverse=False
+ unroll=1 ] 0.00 c a
+ in (b, d) }""", g, [arg])
+ self.assertRewrite("""
+ { lambda ; a d.
+ let _ _ e _ b =
+ scan[ jaxpr={ lambda ; a b h c d.
+ let e i = custom_vjp_call_jaxpr[
+ fun_jaxpr={ lambda ; a d.
+ let b e = id_tap[ arg_treedef_=*
+ has_token_=True
+ nr_tapped_args_=1
+ tap_func_=_print
+ ] a d
+ c = mul a b
+ in (c, e) }
+ ] c h
+ f = add a e
+ g = mul c 3.00
+ in (f, *, i, 0.00, g) }
+ length=2
+ linear=(False, True, False, True, False)
+ num_carry=3
+ num_consts=0
+ reverse=False
+ unroll=1 ] 0.00 * d a *
+ _ _ f _ c =
+ scan[ jaxpr={ lambda ; a b g c d.
+ let e h = id_tap[ arg_treedef_=*
+ has_token_=True
+ nr_tapped_args_=1
+ tap_func_=_print
+ ] b g
+ f = mul d e
+ in (*, b, h, *, f) }
+ length=2
+ linear=(True, True, True, False, False)
+ num_carry=3
+ num_consts=0
+ reverse=True
+ unroll=1 ] * 1.00 e * b
+ in (c, f) }""", api.grad(g), [arg])
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| host_callback doesn't work inside grad(odeint)
It reports "NotImplementedError: outfeed rewrite custom_vjp_call_jaxpr"
This would be quite useful to support because it would give us a way to get information about the nature of the backward pass out of odeint, e.g., to facilitate debugging https://github.com/google/jax/issues/3993.
To reproduce:
```python
from jax.experimental.ode import odeint
from jax.experimental import host_callback
import jax.numpy as jnp
import jax
def f(x, t, k):
x = host_callback.id_print(x)
return -k * x
def loss(k=1.0):
t = jnp.linspace(0, 0.001, num=2)
xs = odeint(f, 1.0, t, k)
return xs[-1]
loss(1.0) # works
jax.grad(loss)(1.0) # fails
```
The error message is:
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-1-aea888293dfa> in <module>()
14
15 loss(1.0) # works
---> 16 jax.grad(loss)(1.0) # fails
19 frames
/usr/local/lib/python3.6/dist-packages/jax/api.py in grad_f(*args, **kwargs)
427 @wraps(fun, docstr=docstr, argnums=argnums)
428 def grad_f(*args, **kwargs):
--> 429 _, g = value_and_grad_f(*args, **kwargs)
430 return g
431
/usr/local/lib/python3.6/dist-packages/jax/api.py in value_and_grad_f(*args, **kwargs)
491 dtype = dtypes.result_type(ans)
492 tree_map(partial(_check_output_dtype_grad, holomorphic), ans)
--> 493 g = vjp_py(np.ones((), dtype=dtype))
494 g = g[0] if isinstance(argnums, int) else g
495 if not has_aux:
/usr/local/lib/python3.6/dist-packages/jax/api.py in _vjp_pullback_wrapper(cotangent_dtypes, io_tree, fun, py_args)
1458 "match type of corresponding primal output ({})")
1459 raise TypeError(msg.format(_dtype(a), dtype))
-> 1460 ans = fun(*args)
1461 return tree_unflatten(out_tree, ans)
1462
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in unbound_vjp(pvals, jaxpr, consts, *cts)
115 cts = tuple(map(ignore_consts, cts, pvals))
116 dummy_args = [UndefinedPrimal(v.aval) for v in jaxpr.invars]
--> 117 arg_cts = backward_pass(jaxpr, consts, dummy_args, cts)
118 return map(instantiate_zeros, arg_cts)
119
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in backward_pass(jaxpr, consts, primals_in, cotangents_in)
201 call_jaxpr, params = core.extract_call_jaxpr(eqn.primitive, eqn.params)
202 cts_out = get_primitive_transpose(eqn.primitive)(
--> 203 params, call_jaxpr, invals, cts_in, cts_in_avals)
204 else:
205 cts_out = get_primitive_transpose(eqn.primitive)(cts_in, *invals,
/usr/local/lib/python3.6/dist-packages/jax/interpreters/ad.py in call_transpose(primitive, params, call_jaxpr, args, ct, _)
486 new_params = update_params(new_params, map(is_undefined_primal, args),
487 [type(x) is not Zero for x in ct])
--> 488 out_flat = primitive.bind(fun, *all_args, **new_params)
489 return tree_unflatten(out_tree(), out_flat)
490 primitive_transposes[core.call_p] = partial(call_transpose, call_p)
/usr/local/lib/python3.6/dist-packages/jax/core.py in bind(self, fun, *args, **params)
1132
1133 def bind(self, fun, *args, **params):
-> 1134 return call_bind(self, fun, *args, **params)
1135
1136 def process(self, trace, fun, tracers, params):
/usr/local/lib/python3.6/dist-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1121 if top_trace is None:
1122 with new_sublevel():
-> 1123 outs = primitive.impl(fun, *args, **params)
1124 else:
1125 tracers = map(top_trace.full_raise, args)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, donated_invars, *args)
525 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name, donated_invars):
526 compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
--> 527 *unsafe_map(arg_spec, args))
528 try:
529 return compiled_fun(*args)
/usr/local/lib/python3.6/dist-packages/jax/linear_util.py in memoized_fun(fun, *args)
222 fun.populate_stores(stores)
223 else:
--> 224 ans = call(fun, *args)
225 cache[key] = (ans, fun.stores)
226 return ans
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
598 fun, pvals, instantiate=False, stage_out=True, bottom=True)
599 map(prefetch, it.chain(consts, jaxpr_literals(jaxpr)))
--> 600 jaxpr = apply_outfeed_rewriter(jaxpr)
601
602 nreps = jaxpr_replicas(jaxpr)
/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py in apply_outfeed_rewriter(jaxpr)
181 def apply_outfeed_rewriter(jaxpr: core.Jaxpr) -> core.Jaxpr:
182 if outfeed_rewriter is not None:
--> 183 return outfeed_rewriter(jaxpr)
184 else:
185 return jaxpr
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in <lambda>(j)
752
753
--> 754 xla.outfeed_rewriter = lambda j: _rewrite_jaxpr(j, False, False)
755
756
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in _rewrite_jaxpr(jaxpr, has_input_token, has_output_token)
553 else:
554 output_token_var = mk_new_var(core.abstract_token)
--> 555 _rewrite_eqn(eqn, eqns, last_token_var, output_token_var, mk_new_var)
556 last_token_var = output_token_var
557
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in _rewrite_eqn(eqn, eqns, input_token_var, output_token_var, mk_new_var)
617 input_token_var
618 ] + eqn.invars[nr_const_and_carry:]
--> 619 new_jaxpr = _rewrite_typed_jaxpr(carry_jaxpr, True, True)
620 # The rewrite has put the token at end, it has to be at end of carry
621 new_jaxpr_invars = new_jaxpr.jaxpr.invars
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in _rewrite_typed_jaxpr(tjaxpr, has_input_token, has_output_token)
524 has_output_token: bool) -> core.TypedJaxpr:
525 """Rewrites a TypedJaxpr to thread the token, if needed."""
--> 526 new_jaxpr = _rewrite_jaxpr(tjaxpr.jaxpr, has_input_token, has_output_token)
527 return _mk_typed_jaxpr(new_jaxpr, tjaxpr.literals)
528
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in _rewrite_jaxpr(jaxpr, has_input_token, has_output_token)
553 else:
554 output_token_var = mk_new_var(core.abstract_token)
--> 555 _rewrite_eqn(eqn, eqns, last_token_var, output_token_var, mk_new_var)
556 last_token_var = output_token_var
557
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in _rewrite_eqn(eqn, eqns, input_token_var, output_token_var, mk_new_var)
654 eqn.params,
655 call_jaxpr=_rewrite_jaxpr(call_jaxpr, True,
--> 656 True)), eqn.source_info))
657 else:
658 raise NotImplementedError(f"outfeed rewrite {eqn.primitive}")
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in _rewrite_jaxpr(jaxpr, has_input_token, has_output_token)
553 else:
554 output_token_var = mk_new_var(core.abstract_token)
--> 555 _rewrite_eqn(eqn, eqns, last_token_var, output_token_var, mk_new_var)
556 last_token_var = output_token_var
557
/usr/local/lib/python3.6/dist-packages/jax/experimental/host_callback.py in _rewrite_eqn(eqn, eqns, input_token_var, output_token_var, mk_new_var)
656 True)), eqn.source_info))
657 else:
--> 658 raise NotImplementedError(f"outfeed rewrite {eqn.primitive}")
659
660
NotImplementedError: outfeed rewrite custom_vjp_call_jaxpr
```
| @gnecula want to take this one? | 2020-08-12T05:17:47 |
google/jax | 4,125 | google__jax-4125 | [
"4124"
] | 6bed4ee3b2c9b7f90883118dffc183ca0ed39774 | diff --git a/jax/random.py b/jax/random.py
--- a/jax/random.py
+++ b/jax/random.py
@@ -556,6 +556,9 @@ def choice(key, a, shape=(), replace=True, p=None):
Returns:
An array of shape `shape` containing samples from `a`.
"""
+ if not isinstance(shape, Sequence):
+ raise TypeError("shape argument of jax.random.choice must be a sequence, "
+ f"got {shape}")
a = jnp.asarray(a)
if a.ndim not in [0, 1]:
raise ValueError("a must be an integer or 1-dimensional")
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -848,6 +848,13 @@ def testRadamacher(self):
self.assertAllClose(
counts[1]/ num_samples, 0.5, rtol=1e-02, atol=1e-02)
+ def testChoiceShapeIsNotSequenceError(self):
+ key = random.PRNGKey(0)
+ with self.assertRaises(TypeError):
+ random.choice(key, 5, 2, replace=False)
+ with self.assertRaises(TypeError):
+ random.choice(key, 5, 2, replace=True)
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| `replace` argument changes how `jax.random.choice` treats `shape`
Using jax version 0.1.75 on a colab cpu runtime.
```python
import jax
key = jax.random.PRNGKey(0)
jax.random.choice(key, 5, 2, replace=False)
jax.random.choice(key, 5, 2, replace=True)
```
Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-19-56696338bc13> in <module>()
1 key = jax.random.PRNGKey(0)
2 jax.random.choice(key, 5, 2, replace=False)
----> 3 jax.random.choice(key, 5, 2, replace=True)
2 frames
/usr/local/lib/python3.6/dist-packages/jax/random.py in choice(key, a, shape, replace, p)
566 if p is None:
567 if replace:
--> 568 ind = randint(key, shape, 0, n_inputs)
569 result = ind if a.ndim == 0 else a[ind]
570 else:
/usr/local/lib/python3.6/dist-packages/jax/random.py in randint(key, shape, minval, maxval, dtype)
417 """
418 dtype = dtypes.canonicalize_dtype(dtype)
--> 419 shape = abstract_arrays.canonicalize_shape(shape)
420 return _randint(key, shape, minval, maxval, dtype)
421
/usr/local/lib/python3.6/dist-packages/jax/core.py in canonicalize_shape(shape)
1078 "got {}.")
1079 if any(isinstance(x, Tracer) and isinstance(get_aval(x), ShapedArray)
-> 1080 and not isinstance(get_aval(x), ConcreteArray) for x in shape):
1081 msg += ("\nIf using `jit`, try using `static_argnums` or applying `jit` to "
1082 "smaller subfunctions.")
TypeError: 'int' object is not iterable
```
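A sketch of the call pattern that works in both cases: pass the sample shape as a sequence, which is what the patch above also enforces by turning the bare-int case into an explicit `TypeError` instead of accepting it only when `replace=False`:
```python
import jax

key = jax.random.PRNGKey(0)
# shape is a tuple, so both the replace=False and replace=True paths accept it
jax.random.choice(key, 5, (2,), replace=False)
jax.random.choice(key, 5, (2,), replace=True)
```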
| 2020-08-22T02:58:43 |
|
google/jax | 4,170 | google__jax-4170 | [
"4141"
] | f7a09c63110a68161e31e9d7dc1858534154a085 | diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -857,7 +857,7 @@ def _tuple_output(*args, **kwargs):
ans = yield args, kwargs
yield (ans,)
-def lower_fun(fun, multiple_results):
+def lower_fun(fun, multiple_results, parallel=False):
# This function can only be used to lower functions that take JAX array types
# as arguments (and e.g. don't accept unit values), because it assumes it can
# map from XLA types to JAX types. In general that mapping is not possible (as
@@ -868,10 +868,14 @@ def lower_fun(fun, multiple_results):
def f(c, *xla_args, **params):
# TODO(mattjj): revise this 'calling convention'
avals = [_array_aval_from_xla_shape(c.get_shape(x)) for x in xla_args]
+ if parallel:
+ axis_env = params.pop('axis_env')
+ del params['platform']
+ else:
+ axis_env = AxisEnv(1, (), (), None)
wrapped_fun = lu.wrap_init(fun, params)
if not multiple_results:
wrapped_fun = _tuple_output(wrapped_fun)
- axis_env = AxisEnv(1, (), (), None)
if config.omnistaging_enabled:
jaxpr, _, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, avals)
outs = jaxpr_subcomp(c, jaxpr, None, axis_env, _xla_consts(c, consts), '',
diff --git a/jax/lax/lax_parallel.py b/jax/lax/lax_parallel.py
--- a/jax/lax/lax_parallel.py
+++ b/jax/lax/lax_parallel.py
@@ -16,6 +16,7 @@
"""
import collections
+import warnings
import numpy as np
@@ -243,9 +244,7 @@ def pswapaxes(x, axis_name, axis):
``axis_name``.
Returns:
- Array(s) with shape ``np.insert(np.delete(x.shape, axis), axis, axis_size)``
- where ``axis_size`` is the size of the mapped axis named ``axis_name`` in
- the input ``x``.
+ Array(s) with the same shape as ``x``.
"""
return all_to_all(x, axis_name, axis, axis)
@@ -551,13 +550,32 @@ def _ppermute_batcher(vals_in, dims_in, axis_size, axis_name, perm):
batching.collective_rules[ppermute_p] = _ppermute_batcher
+def _moveaxis(src, dst, x):
+ perm = [i for i in range(x.ndim) if i != src]
+ perm.insert(dst, src)
+ return lax.transpose(x, perm)
+
+def _all_to_all_via_all_gather(x, *, axis_name, split_axis, concat_axis):
+ global_full = all_gather(x, axis_name)
+ idx = axis_index(axis_name)
+ local_slice = lax.dynamic_index_in_dim(global_full, idx, split_axis + 1, keepdims=False)
+ return _moveaxis(0, concat_axis, local_slice)
+
def _all_to_all_translation_rule(c, x, *, split_axis, concat_axis, axis_name,
axis_env, platform):
# Workaround for AllToAll not being implemented on CPU.
replica_groups = _replica_groups(axis_env, axis_name, None)
if len(replica_groups[0]) == 1:
return x
- elif platform == 'tpu':
+ elif platform != 'tpu':
+ warnings.warn("all_to_all (and pswapaxes) are only implemented properly for TPUs. All other "
+ "backends emulate it using a very slow and memory intensive algorithm, so expect "
+ "significant slowdowns.")
+ lowering = xla.lower_fun(_all_to_all_via_all_gather, multiple_results=False, parallel=True)
+ return lowering(c, x,
+ split_axis=split_axis, concat_axis=concat_axis, axis_name=axis_name,
+ axis_env=axis_env, platform=platform)
+ else:
split_count = len(replica_groups[0])
if not all(split_count == len(g) for g in replica_groups):
raise ValueError('Replica groups must be equally sized')
@@ -574,28 +592,25 @@ def _all_to_all_translation_rule(c, x, *, split_axis, concat_axis, axis_name,
x = xops.AllToAll(x, split_axis, concat_axis, split_count, replica_groups_protos)
x = xla.lower_fun(partial(lax.squeeze, dimensions=(split_axis,)), multiple_results=False)(c, x)
return x
- else:
- raise NotImplementedError("all_to_all and pswapaxes only supported on TPU")
-
-def _all_to_all_split_axis_rule(vals, which_mapped, split_axis, concat_axis,
- axis_name):
- assert tuple(which_mapped) == (True,)
- x, = vals
- # perform the communication to swap the hardware-mapped axes
- stacked = all_to_all_p.bind(x, split_axis=split_axis + 1, concat_axis=0,
- axis_name=axis_name)
- # transpose the newly mapped axis to the front, newly unmapped to concat_axis
- out = _moveaxis(split_axis + 1, 0, stacked)
- out = _moveaxis(1, concat_axis + 1, out)
- return out, True
def _all_to_all_transpose_rule(cts, axis_name, split_axis, concat_axis):
return (all_to_all(cts, axis_name=axis_name, split_axis=concat_axis, concat_axis=split_axis),)
-def _moveaxis(src, dst, x):
- perm = [i for i in range(x.ndim) if i != src]
- perm.insert(dst, src)
- return lax.transpose(x, perm)
+def _all_to_all_batcher(vals_in, dims_in, *, axis_name, split_axis, concat_axis):
+ x, = vals_in
+ d, = dims_in
+ if d <= split_axis:
+ split_axis += 1
+ if d <= concat_axis:
+ concat_axis += 1
+ # Note: At this point split_axis and concat_axis are adjusted to the extra
+ # dimension and we have d != split_axis and d != concat_axis.
+ if split_axis < d < concat_axis:
+ d -= 1
+ elif concat_axis < d < split_axis:
+ d += 1
+ result = all_to_all_p.bind(x, axis_name=axis_name, split_axis=split_axis, concat_axis=concat_axis)
+ return result, d
def _all_to_all_abstract_eval(x, axis_name, split_axis, concat_axis):
input_aval = raise_to_shaped(x)
@@ -609,6 +624,7 @@ def _all_to_all_abstract_eval(x, axis_name, split_axis, concat_axis):
xla.parallel_translations[all_to_all_p] = _all_to_all_translation_rule
ad.deflinear(all_to_all_p, _all_to_all_transpose_rule)
pxla.multi_host_supported_collectives.add(all_to_all_p)
+batching.primitive_batchers[all_to_all_p] = _all_to_all_batcher
def _expand(dim, size, index, x):
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -97,6 +97,8 @@ def tearDownModule():
ignore_jit_of_pmap_warning = partial(
jtu.ignore_warning, message=".*jit-of-pmap.*")
+ignore_slow_all_to_all_warning = partial(
+ jtu.ignore_warning, message="all_to_all.*expect significant slowdowns.*")
class PmapTest(jtu.JaxTestCase):
def _getMeshShape(self, device_mesh_shape):
@@ -143,6 +145,7 @@ def testGather(self):
ans = f(x)
self.assertAllClose(ans, expected, check_dtypes=False)
+ @ignore_slow_all_to_all_warning()
def testTrees(self):
ptranspose = lambda x, axis_name: lax.all_to_all(x, axis_name, 0, 0)
def protate(x, axis_name):
@@ -166,9 +169,9 @@ def protate(x, axis_name):
assert_allclose(jax_f(lax.pmin)(x), np_f(np.min)(x))
assert_allclose(jax_f(lax.psum)(x), np_f(np.sum)(x))
assert_allclose(jax_f(lax.pmean)(x), np_f(np.mean)(x))
+ assert_allclose(jax_f(ptranspose)(x), np_transpose(x))
+ # NOTE: ppermute only supported on TPU.
if jtu.device_under_test() not in ("cpu", "gpu"):
- # NOTE: all-to-all and ppermute only supported on TPU.
- assert_allclose(jax_f(ptranspose)(x), np_transpose(x))
assert_allclose(jax_f(protate)(x), np_rotate(x))
def testCollectivesWithTreesOfDifferentDtypes(self):
@@ -1630,6 +1633,77 @@ def f(x, y):
x = jnp.ones((2, 2, 64, 64))
self.assertAllClose(f(jax.pmap)(x, x), f(jax.vmap)(x, x))
+ @parameterized.named_parameters(
+ {"testcase_name": f"_split={split_axis}_concat={concat_axis}_vmap={vmap_axis}",
+ "split_axis": split_axis, "concat_axis": concat_axis, "vmap_axis": vmap_axis}
+ for split_axis, concat_axis, vmap_axis in it.product(range(3), range(3), range(4)))
+ @skipIf(not jax.config.omnistaging_enabled,
+ "vmap collectives only supported when omnistaging is enabled")
+ @ignore_slow_all_to_all_warning()
+ def testAllToAllInVmap(self, split_axis, concat_axis, vmap_axis):
+ def f(x):
+ return lax.all_to_all(x, 'i', split_axis=split_axis, concat_axis=concat_axis)
+
+ def adj(axis, hidden_axes):
+ for hax in sorted(hidden_axes):
+ if hax <= axis:
+ axis += 1
+ return axis
+
+ def reference(x, split_axis, concat_axis, vmap_axis):
+ pmap_axis = 0
+ vmap_axis = adj(vmap_axis, [pmap_axis])
+ ref = x
+
+ # Step 1.
+ # Adjust the split axis to the real tensor layout and move it to
+ # position 1. Since pmap_axis is always 0 we don't have to adjust it,
+ # but we do have to adjust vmap_axis.
+ split_axis = adj(split_axis, [pmap_axis, vmap_axis])
+ ref = jnp.moveaxis(ref, split_axis, pmap_axis + 1)
+ vmap_axis = vmap_axis + (0 if split_axis < vmap_axis else 1)
+ split_axis = pmap_axis + 1 # split_axes == 1
+
+ # Step 2.
+ # Now, we move pmap_axis to the position indicated by concat_axis.
+ concat_axis = adj(concat_axis, [pmap_axis, split_axis, vmap_axis]) - 1
+ ref = jnp.moveaxis(ref, pmap_axis, concat_axis)
+ pmap_axis = 0
+ vmap_axis = vmap_axis - (1 if concat_axis >= vmap_axis else 0)
+ del split_axis, concat_axis
+
+ # Step 3. vmap_axis always ends in position 1, since out_axes=0.
+ ref = jnp.moveaxis(ref, vmap_axis, 1)
+ return ref
+
+ def verify_ref():
+ # Both the reference and the real implementation of all_to_all batching involve
+ # some pretty complicated axis arithmetic, so it would be good to verify that it's
+ # not the case that the test passes because they're both incorrect. Fortunately, it
+ # is quite easy to write out the shape function for this code, and we know
+ # that it should be equivalent to a bunch of transposes, so the code below verifies
+ # that the reference puts the right dimensions in the right places. Note that we
+ # can't do the same comparison on f, since all_to_all wouldn't allow us to swap axes of
+ # different sizes.
+ start_shape = [2, 3, 4, 5, 6]
+ instance_shape = start_shape.copy()
+ pmap_dim_id = instance_shape.pop(0)
+ vmap_dim_id = instance_shape.pop(vmap_axis)
+ split_axis_id = instance_shape.pop(split_axis)
+ instance_shape.insert(concat_axis, pmap_dim_id)
+ expected_shape = (split_axis_id, vmap_dim_id, *instance_shape)
+
+ x = np.empty(start_shape)
+ self.assertEqual(reference(x, split_axis, concat_axis, vmap_axis).shape,
+ expected_shape)
+
+ verify_ref()
+
+ shape = (jax.device_count(),) * 5
+ x = jnp.arange(np.prod(shape)).reshape(shape)
+ self.assertAllClose(pmap(vmap(f, in_axes=vmap_axis), axis_name='i')(x),
+ reference(x, split_axis, concat_axis, vmap_axis))
+
class PmapWithDevicesTest(jtu.JaxTestCase):
| No batching rule for `all_to_all`
```py
import jax
from jax import lax, vmap, pmap
import jax.numpy as jnp
def f(x):
tiled_x = jax.tree_map(lambda x: jnp.tile(x[None], [jax.device_count()] + [1] * len(x.shape)), x)
all_x = lax.pswapaxes(tiled_x, 'batch', 0)
return all_x
pmap(vmap(f), axis_name='batch')(jnp.arange(jax.device_count() * 5 * 2).reshape((jax.device_count(), 5, 2)))
```
Currently this throws an assertion error elsewhere but with #4140, it will correctly throw `NotImplementedError: Batching rule for 'all_to_all' not implemented`.
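A batching rule tells `vmap` how to push a mapped dimension through a primitive and is registered in `batching.primitive_batchers`, which is what the patch above adds for `all_to_all`. A toy, self-contained sketch using a hypothetical elementwise `double` primitive (purely illustrative, unrelated to the collective itself):
```python
import jax
import jax.numpy as jnp
from jax import core
from jax.interpreters import batching

double_p = core.Primitive("double")            # hypothetical primitive
double_p.def_impl(lambda x: 2 * x)             # concrete evaluation
double_p.def_abstract_eval(lambda aval: aval)  # shape/dtype are unchanged

def _double_batcher(vals_in, dims_in):
    # Elementwise op: the batch dimension passes through unchanged.
    x, = vals_in
    d, = dims_in
    return double_p.bind(x), d

batching.primitive_batchers[double_p] = _double_batcher

print(jax.vmap(double_p.bind)(jnp.arange(3.)))  # [0. 2. 4.]
```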
| 2020-08-28T15:24:14 |