repo (string, 856 classes) | pull_number (int64, 3 to 127k) | instance_id (string, 12 to 58 chars) | issue_numbers (sequence, 1 to 5 items) | base_commit (string, 40 chars) | patch (string, 67 to 1.54M chars) | test_patch (string, 0 to 107M chars) | problem_statement (string, 3 to 307k chars) | hints_text (string, 0 to 908k chars) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---|
google/jax | 4,172 | google__jax-4172 | [
"4165"
] | 1dab791acbce80f49598331898ade7d588930c8f | diff --git a/jax/lax/lax_control_flow.py b/jax/lax/lax_control_flow.py
--- a/jax/lax/lax_control_flow.py
+++ b/jax/lax/lax_control_flow.py
@@ -2329,33 +2329,34 @@ def associative_scan(fn, elems):
Args:
fn: Python callable implementing an associative binary operation with
- signature `r = fn(a, b)`. This must satisfy associativity:
- `fn(a, fn(b, c)) == fn(fn(a, b), c)`. The inputs and result are
- (possibly nested structures of) `Tensor`(s), matching `elems`. Each
- `Tensor` has a leading batch dimension in place of `num_elems`; the `fn`
- is expected to map over this dimension. The result `r` has the same shape
- (and structure) as the two inputs `a` and `b`.
- elems: A (possibly nested structure of) `Tensor`(s), each with leading
- dimension `num_elems`, which must be known statically.
- Returns:
- result: A (possibly nested structure of) `Tensor`(s) of the same shape
- and structure as `elems`, in which the `k`th element is the result of
- recursively applying `fn` to combine the first `k` elements of
- `elems`. For example, given `elems = [a, b, c, ...]`, the result
- would be `[a, fn(a, b), fn(fn(a, b), c), ...]`.
- #### Examples
+ signature ``r = fn(a, b)``. This must satisfy associativity:
+ ``fn(a, fn(b, c)) == fn(fn(a, b), c)``. The inputs and result are
+ (possibly nested structures of) array(s) matching ``elems``. Each
+ array has a leading dimension in place of ``num_elems``; the `fn`
+ is expected to be scanned over this dimension. The result `r` has the same
+ shape (and structure) as the two inputs ``a`` and ``b``.
+ elems: A (possibly nested structure of) array(s), each with leading
+ dimension ``num_elems``.
+
+ Returns:
+ result: A (possibly nested structure of) array(s) of the same shape
+ and structure as ``elems``, in which the ``k``th element is the result of
+ recursively applying ``fn`` to combine the first ``k`` elements of
+ ``elems``. For example, given ``elems = [a, b, c, ...]``, the result
+ would be ``[a, fn(a, b), fn(fn(a, b), c), ...]``.
- ```python
- # Example 1: Partials sums of numbers.
+ Example 1: partial sums of an array of numbers:
- np.associative_scan(operator.add, np.arange(0, 4))
- # ==> [ 0, 1, 3, 6]
+ >>> lax.associative_scan(jnp.add, jnp.arange(0, 4))
+ [ 0, 1, 3, 6]
- # Example 2: Partial products of random matrices.
+ Example 2: partial products of an array of matrices
- np.associative_scan(np.matmul, matrices)
- ```
+ >>> mats = random.uniform(random.PRNGKey(0), (4, 2, 2))
+ >>> partial_prods = lax.associative_scan(jnp.matmul, mats)
+ >>> partial_prods.shape
+ (4, 2, 2)
"""
elems_flat, tree = tree_flatten(elems)
| Documentation issue - associative_scan
https://github.com/google/jax/blob/e95d5701e33bda30299a8c87aa43b61a686c1b01/jax/lax/lax_control_flow.py#L2352
Hi,
The examples should clearly call it from `lax`, not `numpy`, and the example formatting does not follow the docstring style, so it doesn't render properly on the readthedocs page.
Adrien
| Thanks for raising the issue! We'll make certain this gets corrected. | 2020-08-28T16:32:51 |
|
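A minimal sketch of the corrected usage described by the docstring fix above, assuming a JAX version that ships `jax.lax.associative_scan`; the expected outputs in the comments follow the docstring's own examples.

```python
import jax.numpy as jnp
from jax import lax, random

# Example 1: partial sums of an array of numbers.
print(lax.associative_scan(jnp.add, jnp.arange(4)))  # [0 1 3 6]

# Example 2: partial products of a stack of matrices; the leading axis is scanned over.
mats = random.uniform(random.PRNGKey(0), (4, 2, 2))
print(lax.associative_scan(jnp.matmul, mats).shape)  # (4, 2, 2)
```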
google/jax | 4,272 | google__jax-4272 | [
"4223"
] | 83b4f3b97c9cb4a5030b3c6270d16e08dba206bd | diff --git a/jax/interpreters/pxla.py b/jax/interpreters/pxla.py
--- a/jax/interpreters/pxla.py
+++ b/jax/interpreters/pxla.py
@@ -925,7 +925,7 @@ def replicate(val, axis_size, nrep, devices=None, backend=None):
A ShardedDeviceArray of length `axis_size` where each shard is equal to
``val``.
"""
- device_count = (len(devices) if devices else xb.local_device_count())
+ device_count = (len(devices) if devices else xb.local_device_count(backend))
if nrep > device_count:
msg = ("Cannot replicate across %d replicas because only %d local devices "
"are available." % (nrep, device_count))
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -1481,6 +1481,15 @@ def pmapped_multi_step(state):
u = np.ones((device_count, 100))
multi_step_pmap(u) # doesn't crash
+ @jtu.skip_on_devices("cpu")
+ def test_replicate_backend(self):
+ # https://github.com/google/jax/issues/4223
+ def fn(indices):
+ return jnp.equal(indices, jnp.arange(3)).astype(jnp.float32)
+ mapped_fn = jax.pmap(fn, axis_name='i', backend='cpu')
+ mapped_fn = jax.pmap(mapped_fn, axis_name='j', backend='cpu')
+ indices = np.array([[[2], [1]], [[0], [0]]])
+ mapped_fn(indices) # doesn't crash
class VmapOfPmapTest(jtu.JaxTestCase):
| Nested pmap issue on GPU
Running the code with nested pmap/vmap in the reproducer below works with CPU/TPU but fails on GPU. The explicitly specified `cpu` backend when running `pmap` seems to be ignored, as there are clearly enough devices available.
Changing
`updates_at_idxs = updates[..., None] * rlax.one_hot(indices, num_classes)`
to
`updates_at_idxs = updates[..., None]`
makes things work on GPU as well. Not entirely sure how this is related to the mapped axes though.
- [Colab reproducer](https://colab.research.google.com/drive/1C8i4xUbu8wDWJKy0EvGhu2Vn7ptAky6W?usp=sharing)
| I've further isolated the error and included it as an alternative reproducer in the provided colab. This fails on GPU:
```python
import chex
import jax
import numpy as np
from jax import numpy as jnp

chex.set_n_cpu_devices(4)

def fn(indices):
    return jnp.equal(indices, jnp.arange(3)).astype(jnp.float32)

mapped_fn = jax.pmap(fn, axis_name='i', backend='cpu')
mapped_fn = jax.pmap(mapped_fn, axis_name='j', backend='cpu')
indices = np.array([[[2], [1]], [[0], [0]]])
mapped_fn(indices)
```
Thanks for the great reproducer! Thanks to that, this just took 2 mins to spot once I ran your repro. | 2020-09-12T05:15:12 |
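The one-line fix above makes `replicate` count devices on the explicitly requested backend rather than on the default one. A hedged sketch (not the PR's own test) of why that matters:

```python
import jax

# The default count reflects the default backend (e.g. GPU), while the reproducer
# above replicates onto the explicitly requested 'cpu' backend, so the availability
# check has to query the same backend.
print(jax.local_device_count())               # devices on the default backend
print(jax.local_device_count(backend="cpu"))  # devices on the CPU backend
```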
google/jax | 4,349 | google__jax-4349 | [
"4348"
] | 6614f94890429c6c4c9dd46f932e22653dd6316d | diff --git a/jax/lax/lax_parallel.py b/jax/lax/lax_parallel.py
--- a/jax/lax/lax_parallel.py
+++ b/jax/lax/lax_parallel.py
@@ -547,7 +547,6 @@ def _ppermute_batcher(vals_in, dims_in, axis_size, axis_name, perm):
ad.deflinear(ppermute_p, _ppermute_transpose_rule)
xla.parallel_translations[ppermute_p] = _ppermute_translation_rule
pxla.multi_host_supported_collectives.add(ppermute_p)
-batching.primitive_batchers[ppermute_p] = partial(_collective_batcher, pmin_p)
batching.collective_rules[ppermute_p] = _ppermute_batcher
| Ppermute batching errors
This code.
```python
import jax
import jax.numpy as jnp
from functools import partial
@partial(jax.pmap, axis_name="i")
@jax.vmap
def f(a):
    return jax.lax.ppermute(a, "i", [0, 1, 2, 3, 4, 5, 6, 7, 8])
# Run it.
f(jnp.arange(8 * 8 * 8).reshape((8, 8, 8)))
```
Dies with this
```
TypeError: _allreduce_translation_rule() got an unexpected keyword argument 'perm'
```
I think this is because of this line
https://github.com/google/jax/blob/40e20242db0ed1d8cceb1d93b78d07c248e693a6/jax/lax/lax_parallel.py#L550
For some reason, the batching primitive was set to `pmin`.
I'll go ahead and make a PR to fix this.
| 2020-09-18T20:59:37 |
||
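With the stray `pmin` batching rule removed, `ppermute` composes with `vmap` under `pmap`. A hedged, corrected variant of the reproducer (note that `perm` must be a list of `(source, destination)` pairs, which the original snippet did not use):

```python
import jax
import jax.numpy as jnp
from functools import partial

n = jax.device_count()
perm = [(i, (i + 1) % n) for i in range(n)]  # rotate shards by one device

@partial(jax.pmap, axis_name="i")
@jax.vmap
def f(a):
    return jax.lax.ppermute(a, "i", perm)

f(jnp.arange(n * n * n, dtype=jnp.float32).reshape((n, n, n)))
```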
google/jax | 4,388 | google__jax-4388 | [
"1332"
] | c875ab3ec9b2ab4794e4068e64172c9869e1b618 | diff --git a/jax/lax/lax_parallel.py b/jax/lax/lax_parallel.py
--- a/jax/lax/lax_parallel.py
+++ b/jax/lax/lax_parallel.py
@@ -557,13 +557,25 @@ def _all_to_all_translation_rule(c, x, *, split_axis, concat_axis, axis_name,
replica_groups = _replica_groups(axis_env, axis_name, None)
if len(replica_groups[0]) == 1:
return x
- else:
+ elif platform == 'tpu':
split_count = len(replica_groups[0])
if not all(split_count == len(g) for g in replica_groups):
raise ValueError('Replica groups must be equally sized')
replica_groups_protos = xc.make_replica_groups(replica_groups)
- return xops.AllToAll(x, split_axis, concat_axis, split_count,
- replica_groups_protos)
+ if concat_axis == split_axis:
+ return xops.AllToAll(x, split_axis, concat_axis, split_count,
+ replica_groups_protos)
+ else:
+ if concat_axis < split_axis:
+ split_axis += 1
+ elif split_axis < concat_axis:
+ concat_axis += 1
+ x = xla.lower_fun(partial(lax.expand_dims, dimensions=(concat_axis,)), multiple_results=False)(c, x)
+ x = xops.AllToAll(x, split_axis, concat_axis, split_count, replica_groups_protos)
+ x = xla.lower_fun(partial(lax.squeeze, dimensions=(split_axis,)), multiple_results=False)(c, x)
+ return x
+ else:
+ raise NotImplementedError("all_to_all and pswapaxes only supported on TPU")
def _all_to_all_split_axis_rule(vals, which_mapped, split_axis, concat_axis,
axis_name):
@@ -585,8 +597,15 @@ def _moveaxis(src, dst, x):
perm.insert(dst, src)
return lax.transpose(x, perm)
+def _all_to_all_abstract_eval(x, axis_name, split_axis, concat_axis):
+ input_aval = raise_to_shaped(x)
+ shape = list(input_aval.shape)
+ size = shape.pop(split_axis)
+ shape.insert(concat_axis, size)
+ return ShapedArray(tuple(shape), input_aval.dtype, weak_type=False)
+
all_to_all_p = core.Primitive('all_to_all')
-all_to_all_p.def_abstract_eval(lambda x, **params: raise_to_shaped(x))
+all_to_all_p.def_abstract_eval(_all_to_all_abstract_eval)
xla.parallel_translations[all_to_all_p] = _all_to_all_translation_rule
ad.deflinear(all_to_all_p, _all_to_all_transpose_rule)
pxla.multi_host_supported_collectives.add(all_to_all_p)
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -197,6 +197,55 @@ def testComplexPsum(self):
ans = f(x)
self.assertAllClose(ans, expected, check_dtypes=False)
+ @parameterized.named_parameters(
+ {"testcase_name": f"_split={split_axis}_concat={concat_axis}",
+ "split_axis": split_axis, "concat_axis": concat_axis}
+ for split_axis, concat_axis in it.product(range(2), range(2)))
+ def testAllToAll(self, split_axis, concat_axis):
+ if jtu.device_under_test() != "tpu":
+ raise SkipTest("all_to_all not implemented on non-TPU platforms")
+ pmap_in_axis = 0
+ shape = (xla_bridge.device_count(),) * 3
+ x = np.arange(np.prod(shape)).reshape(shape)
+
+ @partial(pmap, axis_name='i')
+ def f(x):
+ return lax.all_to_all(x, 'i', split_axis, concat_axis)
+ y = f(x)
+ if pmap_in_axis <= split_axis:
+ split_axis += 1
+ ref = jnp.moveaxis(x, (pmap_in_axis, split_axis),
+ (concat_axis + 1, 0))
+ self.assertAllClose(y, ref)
+
+ @parameterized.named_parameters(
+ {"testcase_name": f"_split={split_axis}_concat={concat_axis}",
+ "split_axis": split_axis, "concat_axis": concat_axis}
+ for split_axis, concat_axis in it.product(range(2), range(2)))
+ def testAllToAllSplitAxis(self, split_axis, concat_axis):
+ if jtu.device_under_test() != "tpu":
+ raise SkipTest("all_to_all not implemented on non-TPU platforms")
+ if xla_bridge.device_count() < 4:
+ raise SkipTest("test requires at least four devices")
+ pmap_in_axis = 0
+ shape = (4, 4, 4)
+ x = np.arange(np.prod(shape)).reshape(shape)
+
+ @partial(pmap, axis_name='i')
+ @partial(pmap, axis_name='j')
+ def f(x):
+ return lax.all_to_all(x, ('i', 'j'), split_axis, concat_axis)
+
+ unroll_shape = (2, 2, *shape[1:])
+ x_unroll = x.reshape(unroll_shape)
+ y_unroll = f(x_unroll)
+ y = y_unroll.reshape(shape)
+
+ if pmap_in_axis <= split_axis:
+ split_axis += 1
+ ref = jnp.moveaxis(x, (pmap_in_axis, split_axis),
+ (concat_axis + 1, 0))
+ self.assertAllClose(y, ref)
def testNestedBasic(self):
f = lambda x: lax.psum(lax.psum(x, 'i'), 'j')
| pmap all_to_all shape error
On an 8-device machine, if I run a trivial pmap + all_to_all with concat_axis=1:
```
import jax, numpy as onp, jax.numpy as np  # old jax convention: np = jax.numpy, onp = NumPy
from jax import lax
x = np.ones((8, 8, 64))
y = jax.pmap(lambda x: lax.all_to_all(x, axis_name='i', split_axis=0, concat_axis=1), axis_name='i')(x)
print(y.shape, onp.array(y).shape) # --> returns (8, 8, 64) (8, 1, 512) !!!
```
whereas
```
x = np.ones((8, 8, 64))
y = jax.pmap(lambda x: lax.all_to_all(x, axis_name='i', split_axis=0, concat_axis=0), axis_name='i')(x)
print(y.shape, onp.array(y).shape) # --> returns (8, 8, 64) (8, 8, 64) no shape disagreement
```
even just trying to print the repr for the former case `y` yields the error `<repr(<jax.interpreters.pxla.ShardedDeviceArray at 0x7fe547474d08>) failed: RuntimeError: Invalid argument: Argument does not match host shape or layout of computation parameter 0: want f32[8,8,64]{2,1,0}, got f32[8,1,512]{2,0,1}>`
| This occurs because the semantics of xla all_to_all vary from the supposed operations used in jax's all_to_all - xla first splits along split_axis, broadcasts across cores, then concats into the concat_axis. the abstract eval rule for all_to_all is a simple identity which isn't correct as it's written, since in the above case the `f32[8,1,512]` shape is correct for the given xla operation. In the simple case of split_axis==concat_axis as in pswapaxes everything works fine - the general jax all_to_all case would need a final transpose op to perform as described in the general case. | 2020-09-23T11:14:57 |
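The hint above explains the mismatch: XLA's AllToAll splits, exchanges, and concatenates, while JAX's `all_to_all` is meant to move the mapped axis, so the patch expresses the `split_axis != concat_axis` case on TPU as expand/AllToAll/squeeze and raises `NotImplementedError` on other platforms. A hedged sketch of a call the new abstract eval rule covers (shapes assume a multi-device TPU host):

```python
import numpy as np
import jax
from jax import lax
from functools import partial

n = jax.device_count()
x = np.arange(n * n * 4, dtype=np.float32).reshape((n, n, 4))

@partial(jax.pmap, axis_name="i")
def f(x):
    # Per-device input has shape (n, 4). Under the new abstract eval rule the
    # size-n split axis is removed and re-inserted at concat_axis, giving (4, n).
    return lax.all_to_all(x, "i", split_axis=0, concat_axis=1)

y = f(x)
print(y.shape)  # expected (n, 4, n) across devices
```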
google/jax | 4,438 | google__jax-4438 | [
"4428"
] | b609040ce64fba19facaca5256c624b6fe44c587 | diff --git a/jax/nn/functions.py b/jax/nn/functions.py
--- a/jax/nn/functions.py
+++ b/jax/nn/functions.py
@@ -171,22 +171,33 @@ def selu(x):
scale = 1.0507009873554804934193349852946
return scale * elu(x, alpha)
-def gelu(x):
+def gelu(x, approximate: bool = True):
r"""Gaussian error linear unit activation function.
- Computes the element-wise function:
+ If ``approximate=False``, computes the element-wise function:
+
+ .. math::
+ \mathrm{gelu}(x) = \frac{x}{2} \left(1 + \mathrm{erf} \left(
+ \frac{x}{\sqrt{2}} \right) \right)
+
+ If ``approximate=True``, uses the approximate formulation of GELU:
.. math::
\mathrm{gelu}(x) = \frac{x}{2} \left(1 + \mathrm{tanh} \left(
\sqrt{\frac{2}{\pi}} \left(x + 0.044715 x^3 \right) \right) \right)
- We explicitly use the approximation rather than the exact formulation for
- speed. For more information, see `Gaussian Error Linear Units (GELUs)
+ For more information, see `Gaussian Error Linear Units (GELUs)
<https://arxiv.org/abs/1606.08415>`_, section 2.
+
+ Args:
+ approximate: whether to use the approximate or exact formulation.
"""
- sqrt_2_over_pi = np.sqrt(2 / np.pi).astype(x.dtype)
- cdf = 0.5 * (1.0 + jnp.tanh(sqrt_2_over_pi * (x + 0.044715 * (x ** 3))))
- return x * cdf
+ if approximate:
+ sqrt_2_over_pi = np.sqrt(2 / np.pi).astype(x.dtype)
+ cdf = 0.5 * (1.0 + jnp.tanh(sqrt_2_over_pi * (x + 0.044715 * (x ** 3))))
+ return x * cdf
+ else:
+ return jnp.array(x * (lax.erf(x / np.sqrt(2)) + 1) / 2, dtype=x.dtype)
def glu(x, axis=-1):
"""Gated linear unit activation function."""
| diff --git a/tests/nn_test.py b/tests/nn_test.py
--- a/tests/nn_test.py
+++ b/tests/nn_test.py
@@ -15,12 +15,14 @@
"""Tests for nn module."""
import collections
+from functools import partial
import itertools
from absl.testing import absltest
from absl.testing import parameterized
import numpy as np
+import scipy.stats
from jax import core
from jax import test_util as jtu
@@ -93,9 +95,21 @@ def testGluValue(self):
val = nn.glu(jnp.array([1.0, 0.0]))
self.assertAllClose(val, jnp.array([0.5]))
+ @parameterized.parameters(False, True)
+ def testGelu(self, approximate):
+ def gelu_reference(x):
+ return x * scipy.stats.norm.cdf(x)
+ rng = jtu.rand_default(self.rng())
+ args_maker = lambda: [rng((4, 5, 6), jnp.float32)]
+ self._CheckAgainstNumpy(
+ gelu_reference, partial(nn.gelu, approximate=approximate), args_maker,
+ check_dtypes=False, tol=1e-3 if approximate else None)
+
@parameterized.parameters(*itertools.product(
(jnp.float32, jnp.bfloat16, jnp.float16),
- (nn.gelu, nn.relu, nn.softplus, nn.sigmoid)))
+ (partial(nn.gelu, approximate=False),
+ partial(nn.gelu, approximate=True),
+ nn.relu, nn.softplus, nn.sigmoid)))
def testDtypeMatchesInput(self, dtype, fn):
if dtype is jnp.float16 and jtu.device_under_test() == "tpu":
self.skipTest("float16 not supported on TPU")
| Use exact GELU
jax.nn.gelu uses the approximate form of the GELU, but [tensorflow](https://github.com/tensorflow/tensorflow/blob/938cc7bf9c4f354361e18e3ba485af53e602d341/tensorflow/python/keras/activations.py#L311) and [pytorch](https://pytorch.org/docs/stable/generated/torch.nn.GELU.html?highlight=gelu#torch.nn.GELU) use the exact version.
I believe the exact form is more numerically stable and similarly fast. Figure 16 of the [Performer paper](https://arxiv.org/pdf/2009.14794.pdf) (@xingyousong) showed the GELU running into NaN issues, and I suspect this is because jax uses the approximate version.
| Interesting! https://github.com/google/jax/pull/1556 switched to the approximate version. @trevorcai @jekbradbury
Exact GeLU is significantly slower on TPUs (easily noticeable even in end-to-end step time). We’d be happy to take a PR adding the exact implementation as an option, but keeping the approximate one as the default?
> Exact GeLU is significantly slower on TPUs
Interesting. For PyTorch this was not the case: https://github.com/pytorch/pytorch/issues/39853#issuecomment-658806898 The exact version was slightly faster.
> We’d be happy to take a PR
Sadly I don't know what optimizations would make them similarly fast like in PyTorch.
In JAX under a JIT, both versions are quite well optimized (fused, etc.). But TPUs are not very fast at certain kinds of vector math, and the exact GeLU happens to hit some of those cases (I think).
https://github.com/google/jax/pull/1556#issuecomment-545130783 says
> Confirming that my benchmarks are showing jax.grad(jax.jarrett(gelu)) as slower than jax.grad(gelu) on GPU as well.
So do you think the issue is with both TPUs and GPUs yet PyTorch on GPUs doesn't have a problem? Or do you think it's more of a TPU-specific issue?
I just tried some new JAX timings on TPUv2 (two generations old) and V100 (one generation old), mostly because I had easy access to them via Colab.
I found that on V100 when compiled with `jax.jit`, the approximate formulation is 1.12x faster on the forward pass (which seems relatively insignificant), but on TPUv2 the approximate formulation is 1.75x faster. The difference on the backward pass was much smaller. I would guess this is related to `erf` being much more expensive to compute than `tanh` on TPU. It's possible we could optimize our `erf` implementation, which would presumably improve the relative performance of the exact formulation on both platforms.
Since the performance differences are so large, at the moment it does seem like we might do best to let users choose which they want.
| 2020-10-02T13:50:45 |
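With the change above, both formulations sit behind one flag. A minimal sketch of the resulting API; the exact form computes `x * Phi(x)` via `erf`, while the default stays the tanh approximation for speed:

```python
import jax.numpy as jnp
from jax import nn

x = jnp.linspace(-3.0, 3.0, 7)
y_approx = nn.gelu(x)                    # tanh-based approximation (default)
y_exact = nn.gelu(x, approximate=False)  # erf-based exact formulation
print(jnp.max(jnp.abs(y_approx - y_exact)))  # small but nonzero difference
```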
google/jax | 4,517 | google__jax-4517 | [
"4510",
"4510",
"4510"
] | d4da9cc12cc785b0d174bdadddf29a5167239724 | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -232,9 +232,6 @@ def aval(self):
def __hash__(self):
assert False
- def __eq__(self, other):
- assert False
-
def __repr__(self):
if hasattr(self, 'hash'):
return '{}'.format(self.val)
diff --git a/jax/lax/lax_control_flow.py b/jax/lax/lax_control_flow.py
--- a/jax/lax/lax_control_flow.py
+++ b/jax/lax/lax_control_flow.py
@@ -1566,7 +1566,6 @@ def _scan_partial_eval(trace, *tracers, reverse, length, num_consts, num_carry,
jaxpr_1_opt, out_pvals_1, consts_1 = pe.trace_to_jaxpr(
lu.wrap_init(core.jaxpr_as_fun(jaxpr_1)), in_pvals_1,
instantiate=[True] * (num_carry + num_ys) + [False] * num_res)
-
jaxpr_1_opt = pe.ClosedJaxpr(pe.convert_constvars_jaxpr(jaxpr_1_opt), ())
num_consts_1 = num_consts + len(consts_1)
# any now-known residuals are intensive, so we want to revise jaxpr_2 to take
@@ -1577,6 +1576,18 @@ def _scan_partial_eval(trace, *tracers, reverse, length, num_consts, num_carry,
jaxpr_2_opt = pe.move_binders_to_front(jaxpr_2, move)
num_consts_2 = num_consts + len(intensive_residuals)
+ # As another optimization, for any extensive inputs that are just forwarded to
+ # extensive outputs, to avoid a copy (looping over dynamic-update-slice) we'd
+ # rather just forward the input tracer. That means pruning some extensive
+ # outputs from the jaxpr here, and updating out_flat below.
+ extensive_invars = jaxpr_1_opt.jaxpr.invars[num_consts_1 + num_carry:]
+ extensive_outvars = jaxpr_1_opt.jaxpr.outvars[num_carry:]
+ fwd_extensive = [num_consts + num_carry + extensive_invars.index(v)
+ if v in extensive_invars else None for v in extensive_outvars]
+ jaxpr_1_opt.jaxpr.outvars = (
+ jaxpr_1_opt.jaxpr.outvars[:num_carry] +
+ [v for i, v in zip(fwd_extensive, extensive_outvars) if i is None])
+
in_consts = (list(consts_1) + [core.unit] * num_consts +
[core.unit if uk else t.pval[1]
for uk, t in zip(unknowns[num_consts:], tracers[num_consts:])])
@@ -1587,6 +1598,15 @@ def _scan_partial_eval(trace, *tracers, reverse, length, num_consts, num_carry,
*in_consts, reverse=reverse, length=length, jaxpr=jaxpr_1_opt,
num_consts=num_consts_1, num_carry=num_carry, linear=tuple(linear_1),
unroll=unroll)
+
+ # Propagate the forwarded extensive outputs using fwd_extensive.
+ out_carry, out_extensive = split_list(out_flat, [num_carry])
+ out_extensive_iter = iter(out_extensive)
+ out_extensive = [next(out_extensive_iter) if i is None else
+ tracers[i].pval[1] if tracers[i].is_known() else tracers[i]
+ for i in fwd_extensive]
+ out_flat = out_carry + out_extensive
+
out_carry, ys, res_and_units = split_list(out_flat, [num_carry, num_ys])
extensive_residuals = [r for r, (pv, _) in zip(res_and_units, res_pvals) if pv is not None]
@@ -1802,19 +1822,19 @@ def _scan_typecheck(bind_time, *avals, reverse, length, num_consts, num_carry,
core.typecheck_assert(
all(_map(core.typematch, init_avals_jaxpr, carry_avals_jaxpr)),
f'scan input carry input and output types mismatch: '
- f'{_avals_short(init_avals_jaxpr)} vs {_avals_short(carry_avals_jaxpr)}')
+ f'\n{_avals_short(init_avals_jaxpr)}\nvs\n{_avals_short(carry_avals_jaxpr)}')
core.typecheck_assert(
all(_map(core.typecompat, const_avals_jaxpr, const_avals)),
- f'scan jaxpr takes input const types {_avals_short(const_avals_jaxpr)}, '
- f'called with consts of type {_avals_short(const_avals)}')
+ f'scan jaxpr takes input const types\n{_avals_short(const_avals_jaxpr)},\n'
+ f'called with consts of type\n{_avals_short(const_avals)}')
core.typecheck_assert(
all(_map(core.typecompat, init_avals_jaxpr, init_avals)),
- f'scan jaxpr takes input carry types {_avals_short(init_avals_jaxpr)}, '
- f'called with initial carry of type {_avals_short(init_avals)}')
+ f'scan jaxpr takes input carry types\n{_avals_short(init_avals_jaxpr)},\n'
+ f'called with initial carry of type\n{_avals_short(init_avals)}')
core.typecheck_assert(
all(_map(core.typecompat, x_avals_jaxpr, x_avals_mapped)),
- f'scan jaxpr takes input sequence types {_avals_short(x_avals_jaxpr)}, '
- f'called with sequence of type {_avals_short(x_avals)}')
+ f'scan jaxpr takes input sequence types\n{_avals_short(x_avals_jaxpr)},\n'
+ f'called with sequence of type\n{_avals_short(x_avals)}')
def scan_bind(*args, **params):
if not core.skip_checks:
| VJP of `scan` is unnecessarily copying primals that are also inputs to the scan
Consider the following computation:
```
import jax
import jax.lax as lax
import jax.numpy as jnp
import numpy as np
def cumprod(x):
s = jnp.ones((64, 1024), jnp.float32)
return lax.scan(lambda s, x: (x*s, s), s, x)
def forward_and_backward(x, ct, ct_acc):
primals, pullback = jax.vjp(cumprod, x)
return pullback((ct_acc, ct))
np.random.seed(1234)
x = jnp.asarray(np.random.randn(1024, 64, 1024))
ct = jnp.asarray(np.random.randn(1024, 64, 1024))
ct_acc = jnp.asarray(np.random.randn(64, 1024))
print(jax.make_jaxpr(forward_and_backward)(x, ct, ct_acc))
```
From the jaxpr:
```
{ lambda ; a b c.
let d = broadcast_in_dim[ broadcast_dimensions=( )
shape=(64, 1024) ] 1.0
_ _ _ _ e f =
scan[ jaxpr={ lambda ; a b c d.
let e = mul c a
in (e, *, a, *, a, c) }
length=1024
linear=(False, True, False, True)
num_carry=2
num_consts=0
reverse=False
unroll=1 ] d * a *
_ _ _ g =
scan[ jaxpr={ lambda ; a b c d e f.
let g = mul b f
h = add_any d g
i = mul b e
in (*, h, *, i) }
length=1024
linear=(True, True, True, True, False, False)
num_carry=2
num_consts=0
reverse=True
unroll=1 ] * c * b e f
in (g,) }
```
we are apparently choosing to save a copy of `x` inside the forward pass for use in the backward pass. But this is pointless: `x` was an input to the forward pass anyway, so we could have saved ourselves a somewhat expensive copy inside a loop and just forwarded the original input to the backward pass.
| 2020-10-09T03:33:55 |
||
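A hedged, scaled-down variant of the issue's reproducer for checking the effect of the forwarding optimization; the shapes are shrunk only to keep the printed jaxpr readable, and the forward scan should no longer emit a copy of `x` among its extensive outputs:

```python
import jax
import jax.lax as lax
import jax.numpy as jnp
import numpy as np

def cumprod(x):
    s = jnp.ones((4,), jnp.float32)
    return lax.scan(lambda s, x: (x * s, s), s, x)

def forward_and_backward(x, ct, ct_acc):
    primals, pullback = jax.vjp(cumprod, x)
    return pullback((ct_acc, ct))

x = jnp.asarray(np.random.randn(8, 4).astype(np.float32))
ct = jnp.asarray(np.random.randn(8, 4).astype(np.float32))
ct_acc = jnp.asarray(np.random.randn(4).astype(np.float32))

# Inspect whether the first scan forwards the input instead of copying it.
print(jax.make_jaxpr(forward_and_backward)(x, ct, ct_acc))
```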
google/jax | 4,524 | google__jax-4524 | [
"4490"
] | e194dff67f5fd948695ddf67e88486fc9f725f67 | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -1865,6 +1865,7 @@ def mean(a, axis=None, dtype=None, out=None, keepdims=False):
dtype = float_
else:
dtype = _dtype(a)
+ dtype = dtypes.canonicalize_dtype(dtype)
return lax.div(
sum(a, axis, dtype=dtype, keepdims=keepdims),
| np.mean raises warning when input is float64 in jax 0.2.1
With the new update, the following script
```python
import numpy as np
import jax.numpy as jnp
jnp.mean(np.ones(30)) # or jnp.mean(1)
```
raises the warning
```
UserWarning: Explicitly requested dtype float64 requested in sum is not available,
and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64
configuration option or the JAX_ENABLE_X64 shell environment variable. See
https://github.com/google/jax#current-gotchas for more.
warnings.warn(msg.format(dtype, fun_name , truncated_dtype))
```
I think this new behavior is unexpected so I would like to open this issue.
| Thanks for the report. I believe the reason this started appearing in 0.2.1 is because of #4444. The best fix would be to call `dtypes.canonicalize_dtype()` here before passing the dtype to `sum()`: https://github.com/google/jax/blob/d4da9cc12cc785b0d174bdadddf29a5167239724/jax/numpy/lax_numpy.py#L1864-L1867 | 2020-10-09T16:13:52 |
|
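A hedged sketch of the behaviour the one-line fix restores: canonicalizing the accumulator dtype before handing it to `sum` means no truncation warning when the input is float64 but `jax_enable_x64` is off:

```python
import numpy as np
import jax.numpy as jnp
from jax import dtypes

# Without x64 enabled, float64 canonicalizes to float32, so `mean` now requests a
# dtype that is actually available instead of triggering the truncation warning.
print(dtypes.canonicalize_dtype(np.float64))  # float32 (unless jax_enable_x64 is set)
print(jnp.mean(np.ones(30)))                  # 1.0, without the UserWarning
```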
google/jax | 4,556 | google__jax-4556 | [
"4551"
] | d1ca3b3dbe5cf546d6df48c9eb6d766b6a17a66a | diff --git a/jax/numpy/lax_numpy.py b/jax/numpy/lax_numpy.py
--- a/jax/numpy/lax_numpy.py
+++ b/jax/numpy/lax_numpy.py
@@ -3993,7 +3993,7 @@ def _index_to_gather(x_shape, idx):
idx_no_nones = [(i, d) for i, d in enumerate(idx) if d is not None]
advanced_pairs = (
(asarray(e), i, j) for j, (i, e) in enumerate(idx_no_nones)
- if isinstance(e, (Sequence, ndarray)))
+ if isscalar(e) or isinstance(e, (Sequence, ndarray)))
advanced_pairs = ((_normalize_index(e, x_shape[j]), i, j)
for e, i, j in advanced_pairs)
advanced_indexes, idx_advanced_axes, x_advanced_axes = zip(*advanced_pairs)
| diff --git a/tests/lax_numpy_indexing_test.py b/tests/lax_numpy_indexing_test.py
--- a/tests/lax_numpy_indexing_test.py
+++ b/tests/lax_numpy_indexing_test.py
@@ -148,6 +148,11 @@ def check_grads(f, args, order, atol=None, rtol=None, eps=None):
IndexSpec(shape=(3,), indexer=()),
IndexSpec(shape=(3, 4), indexer=()),
]),
+ ("TupleOfIntAndSliceAndIntArray", [
+ IndexSpec(shape=(3, 2, 3), indexer=(0, slice(None), np.arange(3))),
+ IndexSpec(shape=(3, 2, 3), indexer=(np.int32(1), slice(None), np.arange(3))),
+ IndexSpec(shape=(3, 2, 3), indexer=(np.array(2), slice(None), np.arange(3))),
+ ]),
]
STATIC_INDEXING_GRAD_TESTS = [
| Jit functions can't necessarily specialize on argument shapes
I'll disclaim that I'm not sure if this is a bug that should be fixed in Jax, or if the Jax "Sharp Bits" documentation should discuss this.
I originally stumbled upon an issue where my jit function was behaving strangely, and behaving differently than the non-jit version. Upon going over the Sharp Bits, I think this pertains to the "functions with argument-**value dependent shapes**" section. At first I thought it was simply my mistake, but then it dawned on me that the function doesn't in fact have value dependent shapes.
Here's a reproducible example:
```python
import jax
import jax.numpy as jnp
def fn(a, qs, idx):
return a[idx, :, qs].mean(axis=-1).squeeze()
jit_fn = jax.jit(fn)
a = jnp.ones((24, 8, 11))
qs = jnp.arange(11)
fn(a, qs, 4).shape # outputs (8,) as expected
jit_fn(a, qs, 4).shape # outputs (11,)
```
Note that the shape of `fn` does not actually depend on the values of `qs`. In fact, none of the "intermediate steps" of the function produce any array whose shape depends on the values of `qs`. I fixed the problem by rewriting the functions as follows:
```python
def better_fn(a, idx):
    return a[idx, :].mean(axis=-1).squeeze()
jit_better_fn = jax.jit(better_fn)
b = a[:, :, qs]
better_fn(b, idx).shape # outputs (8,)
jit_better_fn(b, idx).shape # outputs (8,)
```
This is more evidence to me that the Sharp Bits section of the documentation applies to this case, however I believe that the statement that "specializing on argument shapes is ok" from the documentation is ambiguous.
| Thanks - this is definitely a bug, and I think it's unrelated to that sharp-bits section. It's something about the XLA translation rule for numpy's fancy indexing.
By the way, any time you do have value-dependent shapes (not here) JAX will raise a clear error message, not just silently do the wrong thing.
This just looks like a really surprising indexing bug...
Turns out the `mean` and `squeeze` are superfluous; the issue is a transpose in jitted code that combines single indexing, slicing, and fancy indexing:
```python
import jax
import jax.numpy as jnp
def fn(a, i, q):
    return a[i, :, q]
jit_fn = jax.jit(fn)
a = jnp.arange(6).reshape(1, 2, 3)
q = jnp.arange(3)
print(fn(a, 0, q))
# [[0 1 2]
# [3 4 5]]
print(jit_fn(a, 0, q))
# [[0 3]
# [1 4]
# [2 5]]
```
Thanks for the quick responses! It was a bit of a tricky one to debug, I thought for sure I was doing something wrong.
via offline chat with @mattjj, it looks like the abstract eval is returning the wrong shape:
```python
jax.eval_shape(fn, a, 0, q)
# ShapeDtypeStruct(shape=(3, 2), dtype=int32)
```
(using my simplified repro).
Indeed, that's not causative, but it shows that XLA isn't to blame.
*[EDIT: oops, Jake already observed this, my comment is redundant]* I think this is a plain-old bug in our `_rewriting_take` which translates NumPy indexing to XLA operations. If we remove the `mean(axis=-1).squeeze()` part, we see that the result of the indexing expression should be of shape (8, 11), but we're producing something of shape (11, 8).
Update: issue appears to actually be on the non-jit codepath:
```python
import jax
import numpy as np
import jax.numpy as jnp

a = jnp.arange(6).reshape(1, 2, 3)

def fn(a, i):
    return a[i, :, np.arange(3)]
print(fn(a, 0))
# [[0 1 2]
# [3 4 5]]
print(fn(a, jnp.array(0)))
# [[0 3]
# [1 4]
# [2 5]]
```
Note that the latter actually matches numpy's behavior:
```python
import numpy as np
a = np.arange(6).reshape(1, 2, 3)
a[0, :, np.arange(3)]
# array([[0, 3],
# [1, 4],
# [2, 5]])
```
To be honest, I don't entirely understand why numpy gives this result. I would have expected it to be the same as this, but it is not:
```python
a[0, :, :]
# array([[0, 1, 2],
# [3, 4, 5]])
```
Even further simplified:
```python
import jax
import jax.numpy as jnp
a = jnp.arange(6).reshape(1, 2, 3)
print(a[0, :, jnp.arange(3)])
# [[0 1 2]
# [3 4 5]]
print(a[jnp.array(0), :, jnp.arange(3)])
# [[0 3]
# [1 4]
# [2 5]]
```
@mattjj points out that the issue is related to this comment: https://github.com/google/jax/blob/0660939ab016fe93aa0a897ba2f05343c6f3f380/jax/numpy/lax_numpy.py#L3979-L3981
We are treating devicearray scalars as if they are advanced indices; the fix is to treat all scalars the same. | 2020-10-12T20:59:53 |
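A hedged sketch of the behaviour once scalars are treated as advanced indices: the traced and untraced paths agree with each other and with NumPy's rule that broadcast advanced-index dimensions come first when separated by a slice:

```python
import numpy as np
import jax
import jax.numpy as jnp

a = jnp.arange(6).reshape(1, 2, 3)
idx = jnp.arange(3)

print(np.asarray(a)[0, :, np.arange(3)].shape)         # (3, 2), NumPy reference
print(a[0, :, idx].shape)                              # (3, 2)
print(a[jnp.array(0), :, idx].shape)                   # (3, 2)
print(jax.jit(lambda a, i: a[i, :, idx])(a, 0).shape)  # (3, 2)
```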
google/jax | 4,563 | google__jax-4563 | [
"4552"
] | 83d011515de4303630ba0835406043f25b89f29e | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -1234,14 +1234,15 @@ def batched_fun(*args):
return batched_fun
-def _get_axis_size(name: str, i:int, shape: Tuple[int, ...], axis: int):
- try:
- return shape[axis]
- except (IndexError, TypeError) as e:
- raise ValueError(f"{name} got arg {i} of rank {len(shape)} "
- f"but axis to be mapped {axis}") from e
-
def _mapped_axis_size(tree, vals, dims, name):
+ def _get_axis_size(name: str, i:int, shape: Tuple[int, ...], axis: int):
+ try:
+ return shape[axis]
+ except (IndexError, TypeError) as e:
+ ranks = tree_unflatten(tree, [np.ndim(x) for x, d in zip(vals, dims)])
+ raise ValueError(f"{name} got arg {i} of rank {len(shape)} but axis to be mapped {axis}. "
+ f"The tree of ranks is:\n{ranks}") from e
+
mapped_axis_sizes = {_get_axis_size(name, i, np.shape(x), d)
for i, (x, d) in enumerate(zip(vals, dims))
if d is not None}
| pmap error message for mismatched rank with PyTree args
Small usability request :)
Currently, if you pass in an argument with an incorrect rank to a pmap you get the error: `ValueError: pmap got arg {arg_num} of rank 0 but axis to be mapped 0`
The `arg_num` index will correctly point to the position of the faulty argument, but only if your arguments do not contain PyTrees. If your arguments contain PyTrees you could for example get a `ValueError: pmap got arg 35 of rank 0 but axis to be mapped 0` on a function with 5 arguments.
If you pass in an incorrectly sized argument, there is a separate error message for PyTree args (https://github.com/google/jax/blob/ee9ca569890fae79c7e82fe5bad26e5ce9e72227/jax/api.py#L1275). Maybe something similar can be done here for an incorrect rank?
| 2020-10-13T17:27:27 |
||
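A hedged sketch of how the improved message helps with pytree arguments; the `params` dict below with leaves `'w'` and `'b'` is made up for illustration, and the rank-0 leaf `'b'` is what trips the check:

```python
import jax
import jax.numpy as jnp

n = jax.local_device_count()
params = {"w": jnp.zeros((n, 3)), "b": jnp.float32(0.0)}  # 'b' has rank 0

try:
    jax.pmap(lambda p: p["w"] * p["b"])(params)
except ValueError as e:
    # The error now also prints the tree of ranks, so the offending leaf can be
    # located even when the flat argument index alone is unhelpful.
    print(e)
```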
google/jax | 4,582 | google__jax-4582 | [
"4565"
] | eb9c1ddd30d8cf0dd799405c0fffbc939119eb42 | diff --git a/jax/random.py b/jax/random.py
--- a/jax/random.py
+++ b/jax/random.py
@@ -213,6 +213,10 @@ def _threefry2x32_gpu_translation_rule(c, k1, k2, x1, x2):
c.get_shape(k1).dimensions(), c.get_shape(k2).dimensions(),
c.get_shape(x1).dimensions(), c.get_shape(x2).dimensions())
rank = len(shape)
+ if 0 in shape:
+ zeros = xla_client.ops.Broadcast(
+ xla_bridge.constant(c, np.array(0, np.uint32)), shape)
+ return xla_client.ops.Tuple(c, [zeros, zeros])
def _broadcast(x):
ndims = c.get_shape(x).rank()
return xla_client.ops.BroadcastInDim(x, shape,
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -120,6 +120,14 @@ def testThreefry2x32Large(self):
np.testing.assert_equal(result[:n], np.full((n,), 0xc4923a9c, dtype=np.uint32))
np.testing.assert_equal(result[n:], np.full((n,), 0x483df7a0, dtype=np.uint32))
+ def testThreefry2x32Empty(self):
+ # Regression test for an op-by-op crash for empty arrays in CUDA mode.
+ with api.disable_jit():
+ result = random.threefry_2x32(
+ (np.uint32(0x13198a2e), np.uint32(0x03707344)),
+ jnp.ones((10, 0,), jnp.uint32))
+ np.testing.assert_equal(result, np.zeros((10, 0,), dtype=np.uint32))
+
def testRngRandomBitsViewProperty(self):
# TODO: add 64-bit if it ever supports this property.
# TODO: will this property hold across endian-ness?
| random.uniform of 0 sized arrays fails on GPU when JIT is disabled
When JIT is enabled, calling `jax.random.uniform` with a shape that has one or more 0-sized dimensions returns an empty array of the requested dimensions.
When JIT is disabled, calling `jax.random.uniform` with the same shape fails with a "CUDA operation failed" error.
Simple repro:
```py
import jax
key = jax.numpy.array([1, 1], jax.numpy.uint32)
shape = (1, 0)
jax.random.uniform(key, shape) # ok
with jax.disable_jit():
jax.random.uniform(key, shape) # CUDA operation failed: invalid configuration argument
```
Full callstack:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-1d1849d1c41c> in <module>()
5 jax.random.uniform(key, shape) # ok
6 with jax.disable_jit():
----> 7 jax.random.uniform(key, shape) # CUDA operation failed: invalid configuration argument
9 frames
google3/third_party/py/jax/random.py in uniform(key, shape, dtype, minval, maxval)
380 dtype = dtypes.canonicalize_dtype(dtype)
381 shape = abstract_arrays.canonicalize_shape(shape)
--> 382 return _uniform(key, shape, dtype, minval, maxval) # type: ignore
383
384 @partial(jit, static_argnums=(1, 2))
google3/third_party/py/jax/api.py in f_jitted(*args, **kwargs)
190 def f_jitted(*args, **kwargs):
191 if _jit_is_disabled():
--> 192 return fun(*args, **kwargs)
193 if max(static_argnums + donate_argnums, default=-1) >= len(args):
194 msg = ("jitted function has static_argnums={}, donate_argnums={} but "
google3/third_party/py/jax/random.py in _uniform(key, shape, dtype, minval, maxval)
399 raise TypeError("uniform only accepts 32- or 64-bit dtypes.")
400
--> 401 bits = _random_bits(key, nbits, shape)
402
403 # The strategy here is to randomize only the mantissa bits with an exponent of
google3/third_party/py/jax/random.py in _random_bits(key, bit_width, shape)
312 nblocks, rem = divmod(max_count, jnp.iinfo(np.uint32).max)
313 if not nblocks:
--> 314 bits = threefry_2x32(key, lax.iota(np.uint32, rem))
315 else:
316 *subkeys, last_key = split(key, nblocks + 1)
google3/third_party/py/jax/api.py in f_jitted(*args, **kwargs)
190 def f_jitted(*args, **kwargs):
191 if _jit_is_disabled():
--> 192 return fun(*args, **kwargs)
193 if max(static_argnums + donate_argnums, default=-1) >= len(args):
194 msg = ("jitted function has static_argnums={}, donate_argnums={} but "
google3/third_party/py/jax/random.py in threefry_2x32(keypair, count)
258 x = list(jnp.split(count.ravel(), 2))
259
--> 260 x = threefry2x32_p.bind(key1, key2, x[0], x[1])
261 out = jnp.concatenate(x)
262 assert out.dtype == np.uint32
google3/third_party/py/jax/core.py in bind(self, *args, **params)
261 top_trace = find_top_trace(args)
262 tracers = map(top_trace.full_raise, args)
--> 263 out = top_trace.process_primitive(self, tracers, params)
264 return map(full_lower, out) if self.multiple_results else full_lower(out)
265
google3/third_party/py/jax/core.py in process_primitive(self, primitive, tracers, params)
570
571 def process_primitive(self, primitive, tracers, params):
--> 572 return primitive.impl(*tracers, **params)
573
574 def process_call(self, primitive, f, tracers, params):
google3/third_party/py/jax/interpreters/xla.py in apply_primitive(prim, *args, **params)
232 """Impl rule that compiles and runs a single primitive 'prim' using XLA."""
233 compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args), **params)
--> 234 return compiled_fun(*args)
235
236
google3/third_party/py/jax/interpreters/xla.py in _execute_compiled_primitive(prim, compiled, result_handler, *args)
347 device, = compiled.local_devices()
348 input_bufs = list(it.chain.from_iterable(device_put(x, device) for x in args if x is not token))
--> 349 out_bufs = compiled.execute(input_bufs)
350 if FLAGS.jax_debug_nans: check_nans(prim, out_bufs)
351 return result_handler(*out_bufs)
RuntimeError: CUDA operation failed: invalid configuration argument
```
| Appears to only affect GPU
Closer to the source, this comes from the threefry primitive:
```python
from jax.random import threefry_2x32, PRNGKey
from jax import lax, disable_jit
with disable_jit():
    bits = threefry_2x32(PRNGKey(0), lax.iota('uint32', 0))
```
Again, this errors on GPU only, and only with jit disabled.
The issueis in `jax.lib.cuda_prng`. You can see this by removing the cuda translation rule; the following no longer errors:
```python
from jax.random import threefry_2x32, PRNGKey, threefry2x32_p
from jax import lax, disable_jit
from jax.interpreters import xla
del xla.backend_specific_translations['gpu'][threefry2x32_p]
with disable_jit():
    threefry_2x32(PRNGKey(0), lax.iota('uint32', 0))
```
Assigning to @hawkinsp, since I believe he worked on the cuda prng wrappers. | 2020-10-14T18:34:59 |
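A hedged sketch of the op-by-op path the fix covers: with the zero-sized early return in the GPU translation rule, the un-jitted call returns an empty array instead of launching an invalid CUDA kernel:

```python
import jax

key = jax.random.PRNGKey(0)
with jax.disable_jit():
    out = jax.random.uniform(key, (1, 0))  # previously crashed op-by-op on GPU
print(out.shape)  # (1, 0)
```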
google/jax | 4,608 | google__jax-4608 | [
"4604"
] | 4a20eea8285d6396b50451ed884c0fe00e382821 | diff --git a/jax/experimental/host_callback.py b/jax/experimental/host_callback.py
--- a/jax/experimental/host_callback.py
+++ b/jax/experimental/host_callback.py
@@ -613,6 +613,7 @@ def _rewrite_eqn(eqn: core.JaxprEqn, eqns: List[core.JaxprEqn],
mk_new_var: Callable[[core.AbstractValue], core.Var]):
"""Rewrite an `eqn` and append equations to `eqns`.
+ This is only called if the current primitive uses outfeed.
Assume that the current token is in `input_token_var` and the resulting
token must end in `output_token_var`.
"""
@@ -698,11 +699,21 @@ def _rewrite_eqn(eqn: core.JaxprEqn, eqns: List[core.JaxprEqn],
eqn.primitive,
dict(
eqn.params,
- call_jaxpr=_rewrite_jaxpr(call_jaxpr, True,
- True),
+ call_jaxpr=_rewrite_jaxpr(call_jaxpr, True, True),
donated_invars=eqn.params["donated_invars"] + (False,)
),
eqn.source_info))
+ elif eqn.primitive is pe.remat_call_p:
+ call_jaxpr = cast(core.Jaxpr, eqn.params["call_jaxpr"])
+ eqns.append(
+ core.new_jaxpr_eqn(
+ eqn.invars + [input_token_var], eqn.outvars + [output_token_var],
+ eqn.primitive,
+ dict(
+ eqn.params,
+ call_jaxpr=_rewrite_jaxpr(call_jaxpr, True, True),
+ ),
+ eqn.source_info))
elif eqn.primitive is custom_derivatives.custom_jvp_call_jaxpr_p:
fun_jaxpr = eqn.params["fun_jaxpr"]
new_invars = [*eqn.invars, input_token_var]
diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -1305,9 +1305,13 @@ def _remat_translation_rule(c, axis_env, in_nodes,
xb.parameter(dummy_subc, 0, c.get_shape(false_op), replicated=[])
def zeros(xla_shape):
- shape, dtype = xla_shape.dimensions(), xla_shape.numpy_dtype()
- zero = xb.constant(dummy_subc, np.array(0, dtype=dtype))
- return xops.Broadcast(zero, shape)
+ if xla_shape.is_array():
+ shape, dtype = xla_shape.dimensions(), xla_shape.numpy_dtype()
+ zero = xb.constant(dummy_subc, np.array(0, dtype=dtype))
+ return xops.Broadcast(zero, shape)
+ else:
+ # It is a token
+ return xops.CreateToken(dummy_subc)
out_nodes = [zeros(s) for s in out_node_shapes]
dummy_subc = dummy_subc.build(xops.Tuple(dummy_subc, out_nodes))
| diff --git a/tests/host_callback_test.py b/tests/host_callback_test.py
--- a/tests/host_callback_test.py
+++ b/tests/host_callback_test.py
@@ -149,7 +149,6 @@ def func2(x):
x1, y1 = hcb.id_print((x * 2., x * 3.), output_stream=testing_stream)
return x1 + y1
- #assertMultiLineStrippedEqual(self, "", str(api.make_jaxpr(func2)(3.)))
self.assertEqual(3. * (2. + 3.), func2(3.))
hcb.barrier_wait()
@@ -220,8 +219,6 @@ def test_jit_no_invars(self):
def func(): # jitted function does not take arguments
return hcb.id_print(42, output_stream=testing_stream)
- #assertMultiLineStrippedEqual(self, "", str(api.make_jaxpr(api.jit(func))()))
-
self.assertAllClose(42, api.jit(func)())
hcb.barrier_wait()
assertMultiLineStrippedEqual(self, """
@@ -232,8 +229,6 @@ def test_jit_multiple_invars(self):
def func(x1, x2):
return hcb.id_print(x1 + x2, output_stream=testing_stream)
- #assertMultiLineStrippedEqual(self, "", str(api.make_jaxpr(api.jit(func))(40, 2)))
-
self.assertAllClose(42, api.jit(func)(40, 2))
hcb.barrier_wait()
assertMultiLineStrippedEqual(self, """
@@ -244,8 +239,6 @@ def test_jit_constant(self):
def func(x):
return hcb.id_print(42, result=x, output_stream=testing_stream)
- #assertMultiLineStrippedEqual(self, "", str(api.make_jaxpr(api.jit(func))(5)))
-
self.assertAllClose(5, api.jit(func)(5))
hcb.barrier_wait()
assertMultiLineStrippedEqual(self, """
@@ -727,7 +720,6 @@ def func(x):
def test_jvp(self):
jvp_fun1 = lambda x, xt: api.jvp(fun1, (x,), (xt,))
- #assertMultiLineStrippedEqual(self, "")
res_primals, res_tangents = jvp_fun1(jnp.float32(5.), jnp.float32(0.1))
self.assertAllClose(100., res_primals, check_dtypes=False)
self.assertAllClose(4., res_tangents, check_dtypes=False)
@@ -790,7 +782,6 @@ def func(x):
return x * hcb.id_print(y * 3., what="y * 3",
output_stream=testing_stream)
grad_func = api.grad(func)
- #assertMultiLineStrippedEqual(self, "", str(api.make_jaxpr(grad_func)(5.)))
res_grad = grad_func(jnp.float32(5.))
self.assertAllClose(2. * 5. * 6., res_grad, check_dtypes=False)
@@ -837,7 +828,6 @@ def func(x):
def test_vmap(self):
vmap_fun1 = api.vmap(fun1)
vargs = jnp.array([jnp.float32(4.), jnp.float32(5.)])
- #assertMultiLineStrippedEqual(self, "", str(api.make_jaxpr(vmap_fun1)(vargs)))
vmap_fun1(vargs)
hcb.barrier_wait()
assertMultiLineStrippedEqual(self, """
@@ -856,7 +846,6 @@ def func(y):
vmap_func = api.vmap(func)
vargs = jnp.array([jnp.float32(4.), jnp.float32(5.)])
- #assertMultiLineStrippedEqual(self, "", str(api.make_jaxpr(vmap_func)(vargs)))
_ = vmap_func(vargs)
hcb.barrier_wait()
assertMultiLineStrippedEqual(self, """
@@ -1177,6 +1166,20 @@ def loss(k=1.0):
api.grad(loss)(1.0) # should not fail
+ def test_remat(self):
+ def f(i, k):
+ x = hcb.id_print(k + i, output_stream=testing_stream)
+ return k * x
+
+ def loss(k):
+ return lax.fori_loop(0, 2, api.remat(f), k)
+ print(loss(3))
+ hcb.barrier_wait()
+ expected = """
+ 3
+ 10"""
+ self.assertMultiLineStrippedEqual(expected, testing_stream.output)
+
class OutfeedRewriterTest(jtu.JaxTestCase):
@@ -1184,9 +1187,12 @@ def assertRewrite(self, expected: str, func: Callable, args: Sequence,
has_input_token=True, has_output_token=True):
"""Check that the rewrite of func(*args) matches expected."""
jaxpr = api.make_jaxpr(func)(*args)
- # TODO: re-enable when we change the host_callback rewriter
- #rewritten = hcb._rewrite_closed_jaxpr(jaxpr,
- # has_input_token, has_output_token)
+ rewritten = hcb._rewrite_closed_jaxpr(jaxpr, # noqa: F841
+ has_input_token, has_output_token)
+ # Since it is somewhat annoying to update the Jaxpr assertions when we change
+ # the Jaxpr printing, we do not check these by default. It is recommended that
+ # before making changes to the code generation and Jaxpr rewriting, turn on
+ # the checking, update the expected Jaxpr, and then make the changes.
#assertMultiLineStrippedEqual(self, expected, str(rewritten))
del jaxpr
@@ -1543,5 +1549,38 @@ def g(x):
unroll=1 ] * 1.00 e * b
in (c, f) }""", api.grad(g), [arg])
+ def test_remat_loop(self):
+ def f(k, x):
+ x = hcb.id_print(k + x)
+ return -k * x
+
+ def loss(k):
+ return lax.fori_loop(0, 1, api.remat(f), k)
+
+ self.assertRewrite("""
+ { lambda ; a c.
+ let _ _ b d =
+ while[ body_jaxpr={ lambda ; a b c f.
+ let d = add a 1
+ e g = remat_call[ call_jaxpr={ lambda ; a b g.
+ let c = add a b
+ d h = id_tap[ arg_treedef_=*
+ has_token_=True
+ nr_tapped_args_=1
+ tap_func_=_print ] c g
+ e = neg a
+ f = mul e d
+ in (f, h) }
+ concrete=False
+ name=f ] a c f
+ in (d, b, e, g) }
+ body_nconsts=0
+ cond_jaxpr={ lambda ; a b c e.
+ let d = lt a b
+ in (d,) }
+ cond_nconsts=0 ] 0 1 a c
+ in (b, d) }""", loss, [2])
+
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| Host callbacks + remat causes NotImplementedError
Would it be possible to add remat to host_callback's? I am getting the following error: `NotImplementedError: outfeed rewrite remat_call`.
Repro:
```python
from jax.experimental import host_callback
import jax.numpy as jnp
import jax
from jax import lax
def f(k, x):
    x = host_callback.id_print(k+x)
    return -k * x

def loss(k=1.0):
    return lax.fori_loop(0, 1, jax.remat(f), 0)(k)
loss(1)
```
```
Stack:
<ipython-input-72-12c7a8d25b00> in <module>()
11 return lax.fori_loop(0, 1, jax.remat(f), 0)(k)
12
---> 13 loss(1)
15 frames
<ipython-input-72-12c7a8d25b00> in loss(k)
9
10 def loss(k=1.0):
---> 11 return lax.fori_loop(0, 1, jax.remat(f), 0)(k)
12
13 loss(1)
jax/lax/lax_control_flow.py in fori_loop(lower, upper, body_fun, init_val)
204 else:
205 _, _, result = while_loop(_fori_cond_fun, _fori_body_fun(body_fun),
--> 206 (lower, upper, init_val))
207 return result
208
jax/lax/lax_control_flow.py in while_loop(cond_fun, body_fun, init_val)
297 outs = while_p.bind(*itertools.chain(cond_consts, body_consts, init_vals),
298 cond_nconsts=len(cond_consts), cond_jaxpr=cond_jaxpr,
--> 299 body_nconsts=len(body_consts), body_jaxpr=body_jaxpr)
300 return tree_unflatten(body_tree, outs)
301
jax/core.py in bind(self, *args, **params)
261 top_trace = find_top_trace(args)
262 tracers = map(top_trace.full_raise, args)
--> 263 out = top_trace.process_primitive(self, tracers, params)
264 return map(full_lower, out) if self.multiple_results else full_lower(out)
265
jax/core.py in process_primitive(self, primitive, tracers, params)
570
571 def process_primitive(self, primitive, tracers, params):
--> 572 return primitive.impl(*tracers, **params)
573
574 def process_call(self, primitive, f, tracers, params):
jax/interpreters/xla.py in apply_primitive(prim, *args, **params)
231 def apply_primitive(prim, *args, **params):
232 """Impl rule that compiles and runs a single primitive 'prim' using XLA."""
--> 233 compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args), **params)
234 return compiled_fun(*args)
235
jax/interpreters/xla.py in xla_primitive_callable(prim, *arg_specs, **params)
255 return prim.bind(*args, **params)
256 return _xla_callable(lu.wrap_init(prim_fun), device, None, "prim", donated_invars,
--> 257 *arg_specs)
258 aval_out = prim.abstract_eval(*avals, **params)
259 if not prim.multiple_results:
jax/linear_util.py in memoized_fun(fun, *args)
245 fun.populate_stores(stores)
246 else:
--> 247 ans = call(fun, *args)
248 cache[key] = (ans, fun.stores)
249
jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
638 fun, pvals, instantiate=False, stage_out=True, bottom=True) # type: ignore
639 map(prefetch, it.chain(consts, jaxpr_literals(jaxpr)))
--> 640 jaxpr = apply_outfeed_rewriter(jaxpr)
641
642 nreps = jaxpr_replicas(jaxpr)
jax/interpreters/xla.py in apply_outfeed_rewriter(jaxpr)
190 def apply_outfeed_rewriter(jaxpr: core.Jaxpr) -> core.Jaxpr:
191 if outfeed_rewriter is not None:
--> 192 return outfeed_rewriter(jaxpr)
193 else:
194 return jaxpr
jax/experimental/host_callback.py in <lambda>(j)
831
832
--> 833 xla.outfeed_rewriter = lambda j: _rewrite_jaxpr(j, False, False)
834
835
jax/experimental/host_callback.py in _rewrite_jaxpr(jaxpr, has_input_token, has_output_token)
601 else:
602 output_token_var = mk_new_var(core.abstract_token)
--> 603 _rewrite_eqn(eqn, eqns, last_token_var, output_token_var, mk_new_var)
604 last_token_var = output_token_var
605
jax/experimental/host_callback.py in _rewrite_eqn(eqn, eqns, input_token_var, output_token_var, mk_new_var)
639 dict(
640 eqn.params,
--> 641 body_jaxpr=_rewrite_closed_jaxpr(body_jaxpr, True, True),
642 cond_jaxpr=_rewrite_closed_jaxpr(cond_jaxpr, True,
643 False)), eqn.source_info))
jax/experimental/host_callback.py in _rewrite_closed_jaxpr(cjaxpr, has_input_token, has_output_token)
571 has_output_token: bool) -> core.ClosedJaxpr:
572 """Rewrites a ClosedJaxpr to thread the token, if needed."""
--> 573 new_jaxpr = _rewrite_jaxpr(cjaxpr.jaxpr, has_input_token, has_output_token)
574 return core.ClosedJaxpr(new_jaxpr, cjaxpr.consts)
575
py/jax/experimental/host_callback.py in _rewrite_jaxpr(jaxpr, has_input_token, has_output_token)
601 else:
602 output_token_var = mk_new_var(core.abstract_token)
--> 603 _rewrite_eqn(eqn, eqns, last_token_var, output_token_var, mk_new_var)
604 last_token_var = output_token_var
605
jax/experimental/host_callback.py in _rewrite_eqn(eqn, eqns, input_token_var, output_token_var, mk_new_var)
738 eqn.source_info))
739 else:
--> 740 raise NotImplementedError(f"outfeed rewrite {eqn.primitive}")
741
742
NotImplementedError: outfeed rewrite remat_call
```
| @gnecula
This sounds like it would be pretty handy to have for debugging more complex gradient checkpointing schemes (like binomial checkpointing, https://github.com/google/jax/pull/4363) | 2020-10-16T08:31:28 |
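A hedged sketch mirroring the new `test_remat` above: once the outfeed rewriter knows about `remat_call`, tapping inside a rematerialized loop body works:

```python
import jax
from jax import lax
from jax.experimental import host_callback as hcb

def f(i, k):
    x = hcb.id_print(k + i)
    return k * x

def loss(k):
    return lax.fori_loop(0, 2, jax.remat(f), k)

print(loss(3.0))    # the tapped values 3.0 and 10.0 are printed, then the result
hcb.barrier_wait()  # make sure all taps have been processed
```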
google/jax | 4,623 | google__jax-4623 | [
"4622"
] | 9ea1311c7d8e763f72af1e94e7de88cbeb76cb64 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -1762,11 +1762,12 @@ def fun(*tangents):
tangent_avals = list(map(core.get_aval, tangents))
for primal_aval, tangent_aval in zip(primal_avals, tangent_avals):
try:
- core.lattice_join(primal_aval, tangent_aval)
+ core.lattice_join(primal_aval.at_least_vspace(), tangent_aval)
except TypeError as e:
msg = ("linearized function called on tangent values inconsistent with "
- "the original primal values.")
- raise ValueError(msg) from e
+ "the original primal values: "
+ f"got {tangent_aval} for primal aval {primal_aval}")
+ raise ValueError(msg)
tangents_out = eval_jaxpr(jaxpr, consts, *tangents)
return tuple(map(lambda out_pv, tan_out: out_pv.merge_with_known(tan_out),
out_pvals, tangents_out))
diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -1025,9 +1025,12 @@ def __init__(self, val, weak_type=False):
assert self.dtype != np.dtype('O'), val
def __eq__(self, other):
- return (type(self) is type(other) and self.dtype == other.dtype
- and self.shape == other.shape and self.weak_type == other.weak_type
- and np.all(self.val == other.val))
+ if (type(self) is type(other) and self.dtype == other.dtype
+ and self.shape == other.shape and self.weak_type == other.weak_type):
+ with eval_context(): # in case self.val is a DeviceArray
+ return (self.val == other.val).all()
+ else:
+ return False
def __hash__(self):
return id(self.val)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1133,16 +1133,16 @@ def __init__(self, shape, dtype):
def test_issue_871(self):
T = jnp.array([[1., 2.], [3., 4.], [5., 6.]])
x = jnp.array([1, 2, 3])
+ msg = ("linearized function called on tangent values inconsistent with "
+ "the original primal values")
y, f_jvp = api.linearize(jnp.sum, x)
- jtu.check_raises(lambda: f_jvp(T), ValueError,
- ("linearized function called on tangent values "
- "inconsistent with the original primal values."))
+ with self.assertRaisesRegex(ValueError, msg):
+ f_jvp(T)
y, f_jvp = api.linearize(api.jit(jnp.sum), x)
- jtu.check_raises(lambda: f_jvp(T), ValueError,
- ("linearized function called on tangent values "
- "inconsistent with the original primal values."))
+ with self.assertRaisesRegex(ValueError, msg):
+ f_jvp(T)
def test_partial_eval_lower(self):
# this is a simplified model of a bug that arose when we first used @jit in
@@ -1966,6 +1966,38 @@ def device_put_and_count(*args, **kwargs):
xla.device_put = orig_device_put
self.assertEqual(count, 0)
+ def test_join_concrete_arrays_with_omnistaging(self):
+ # https://github.com/google/jax/issues/4622
+ if not config.omnistaging_enabled:
+ raise unittest.SkipTest("test is omnistaging-specific")
+
+ x = jnp.array([1., 2., 3.])
+ y = jnp.array([1., 2., 4.])
+
+ @jit
+ def f():
+ core.lattice_join(core.ConcreteArray(x), core.ConcreteArray(y))
+
+ f() # doesn't crash
+
+ def test_linearize_aval_error(self):
+ # https://github.com/google/jax/issues/4622
+ f = lambda x: x
+
+ # these should not error
+ _, f_jvp = api.linearize(f, 1.)
+ f_jvp(1.)
+ _, f_jvp = api.linearize(f, np.ones(2, np.int32))
+ f_jvp(np.zeros(2, float0))
+
+ # these should error
+ _, f_jvp = api.linearize(f, 1.)
+ with self.assertRaisesRegex(ValueError, "tangent values inconsistent"):
+ f_jvp(1)
+ _, f_jvp = api.linearize(f, np.ones(2, np.int32))
+ with self.assertRaisesRegex(ValueError, "tangent values inconsistent"):
+ f_jvp(np.ones(2, np.int32))
+
class RematTest(jtu.JaxTestCase):
| Omnistaging breaks jax.scipy.sparse.linalg.cg in some settings
Not quite sure what the issue is, but I boiled it down to the following.
```python
import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg
def func(x):
return x
x = jnp.array([-50., 200.])
val, func_jvp = jax.linearize(func, x)
cg(func_jvp, val) # this fails on jax 0.2.0, but works with jax.config.disable_omnistaging()
```
I was able to reproduce this on a colab instance running jax 0.2.0.
<details><summary>The stack trace(s)</summary>
<p>
```
/usr/local/lib/python3.6/dist-packages/jax/lib/xla_bridge.py:130: UserWarning: No GPU/TPU found, falling back to CPU.
warnings.warn('No GPU/TPU found, falling back to CPU.')
---------------------------------------------------------------------------
ConcretizationTypeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/jax/api.py in fun(*tangents)
1762 try:
-> 1763 core.lattice_join(primal_aval, tangent_aval)
1764 except TypeError as e:
17 frames
/usr/local/lib/python3.6/dist-packages/jax/core.py in lattice_join(x, y)
787 elif isinstance(x, type(y)):
--> 788 return y.join(x)
789 elif isinstance(y, type(x)):
/usr/local/lib/python3.6/dist-packages/jax/core.py in join(self, other)
1019 def join(self, other) -> UnshapedArray:
-> 1020 if self == other:
1021 return self
/usr/local/lib/python3.6/dist-packages/jax/core.py in __bool__(self)
506 def __nonzero__(self): return self.aval._nonzero(self)
--> 507 def __bool__(self): return self.aval._bool(self)
508 def __int__(self): return self.aval._int(self)
/usr/local/lib/python3.6/dist-packages/jax/core.py in error(self, arg)
863 def error(self, arg):
--> 864 raise_concretization_error(arg, fname_context)
865 return error
/usr/local/lib/python3.6/dist-packages/jax/core.py in raise_concretization_error(val, context)
852 f"Encountered tracer value: {val}")
--> 853 raise ConcretizationTypeError(msg)
854
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected.
The problem arose with the `bool` function.
While tracing the function f at /usr/local/lib/python3.6/dist-packages/jax/lax/lax_control_flow.py:2085, this value became a tracer due to JAX operations on these lines:
operation m:bool[2] = eq k:float32[2] l:float32[2]
from line <ipython-input-1-fdc0af3e82fa>:12 (<module>)
You can use transformation parameters such as `static_argnums` for `jit` to avoid tracing particular arguments of transformed functions.
See https://jax.readthedocs.io/en/latest/faq.html#abstract-tracer-value-encountered-where-concrete-value-is-expected-error for more information.
Encountered tracer value: Traced<ShapedArray(bool[])>with<DynamicJaxprTrace(level=1/0)>
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
<ipython-input-1-fdc0af3e82fa> in <module>()
10
11 val, func_jvp = jax.linearize(func, x)
---> 12 print(cg(func_jvp, val))
/usr/local/lib/python3.6/dist-packages/jax/scipy/sparse/linalg.py in cg(A, b, x0, tol, atol, maxiter, M)
172 symmetric = all(map(real_valued, tree_leaves(b)))
173 x = lax.custom_linear_solve(
--> 174 A, b, solve=cg_solve, transpose_solve=cg_solve, symmetric=symmetric)
175 info = None # TODO(shoyer): return the real iteration count here
176 return x, info
/usr/local/lib/python3.6/dist-packages/jax/lax/lax_control_flow.py in custom_linear_solve(matvec, b, solve, transpose_solve, symmetric)
2094
2095 solve_jaxpr, solve_consts, out_tree = _initial_style_jaxpr(
-> 2096 _shape_checked(partial(solve, matvec), "solve"), in_args_tree, b_avals)
2097 _check_tree("solve", "b", out_tree, tree)
2098
/usr/local/lib/python3.6/dist-packages/jax/lax/lax_control_flow.py in _initial_style_jaxpr(fun, in_tree, in_avals)
70 @cache()
71 def _initial_style_jaxpr(fun: Callable, in_tree, in_avals):
---> 72 jaxpr, out_avals, consts, out_tree = _initial_style_open_jaxpr(fun, in_tree, in_avals)
73 closed_jaxpr = core.ClosedJaxpr(pe.convert_constvars_jaxpr(jaxpr), ())
74 return closed_jaxpr, consts, out_tree
/usr/local/lib/python3.6/dist-packages/jax/lax/lax_control_flow.py in _initial_style_open_jaxpr(fun, in_tree, in_avals)
65 def _initial_style_open_jaxpr(fun: Callable, in_tree, in_avals):
66 wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
---> 67 jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
68 return jaxpr, out_avals, consts, out_tree()
69
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr_dynamic(fun, in_avals)
992 main.source_info = fun_sourceinfo(fun.f) # type: ignore
993 main.jaxpr_stack = () # type: ignore
--> 994 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
995 del main
996 return jaxpr, out_avals, consts
/usr/local/lib/python3.6/dist-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1002 trace = DynamicJaxprTrace(main, core.cur_sublevel())
1003 in_tracers = map(trace.new_arg, in_avals)
-> 1004 ans = fun.call_wrapped(*in_tracers)
1005 out_tracers = map(trace.full_raise, ans)
1006 jaxpr, out_avals, consts = frame.to_jaxpr(in_tracers, out_tracers)
/usr/local/lib/python3.6/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
149
150 try:
--> 151 ans = self.f(*args, **dict(self.params, **kwargs))
152 except:
153 # Some transformations yield from inside context managers, so we have to
/usr/local/lib/python3.6/dist-packages/jax/lax/lax_control_flow.py in f(x)
2084 def _shape_checked(fun, name):
2085 def f(x):
-> 2086 y = fun(x)
2087 _check_shapes(name, "b", y, b_flat)
2088 return y
/usr/local/lib/python3.6/dist-packages/jax/scipy/sparse/linalg.py in _cg_solve(A, b, x0, maxiter, tol, atol, M)
77 return x_, r_, gamma_, p_, k + 1
78
---> 79 r0 = _sub(b, A(x0))
80 p0 = z0 = M(r0)
81 gamma0 = _vdot_tree(r0, z0)
/usr/local/lib/python3.6/dist-packages/jax/api.py in _lift_linearized(jaxpr, primal_avals, consts, io_tree, out_pvals, *py_args)
1770 out_pvals, tangents_out))
1771
-> 1772 return apply_flat_fun(fun, io_tree, *py_args)
1773
1774 def _check_inexact_input_vjp(x):
/usr/local/lib/python3.6/dist-packages/jax/api_util.py in apply_flat_fun(fun, io_tree, *py_args)
50 if in_tree != in_tree_expected:
51 raise TypeError("Expected {}, got {}".format(in_tree_expected, in_tree))
---> 52 ans = fun(*args)
53 return tree_unflatten(out_tree, ans)
54
/usr/local/lib/python3.6/dist-packages/jax/api.py in fun(*tangents)
1765 msg = ("linearized function called on tangent values inconsistent with "
1766 "the original primal values.")
-> 1767 raise ValueError(msg) from e
1768 tangents_out = eval_jaxpr(jaxpr, consts, *tangents)
1769 return tuple(map(lambda out_pv, tan_out: out_pv.merge_with_known(tan_out),
ValueError: linearized function called on tangent values inconsistent with the original primal values.
```
</p>
</details>
| 2020-10-17T01:24:20 |
|
google/jax | 4,641 | google__jax-4641 | [
"4564"
] | b7ec636cfa80583fe2a35bc26693b00c38eff10b | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -4232,10 +4232,28 @@ def _eliminate_deprecated_list_indexing(idx):
# deprecated by NumPy and exists for backward compatibility.
if not isinstance(idx, tuple):
if isinstance(idx, Sequence) and not isinstance(idx, ndarray):
+ # As of numpy 1.16, some non-tuple sequences of indices result in a warning, while
+ # others are converted to arrays, based on a set of somewhat convoluted heuristics
+ # (See https://github.com/numpy/numpy/blob/v1.19.2/numpy/core/src/multiarray/mapping.c#L179-L343)
+ # In JAX, we raise a warning for *all* non-tuple sequences, and in the future will
+ # *always* raise a TypeError here, rather than silently converting to an array or tuple
+ # depending on the contents of the list as numpy will. "Explicit is better than implicit".
+ # TODO(jakevdp): raise a TypeError here.
if _any(_should_unpack_list_index(i) for i in idx):
+ msg = ("Using a non-tuple sequence for multidimensional indexing is deprecated; "
+ "use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will "
+ "result in a TypeError. See https://github.com/google/jax/issues/4564 "
+ "for discussion of why this type of indexing is being deprecated.")
idx = tuple(idx)
else:
+ msg = ("Using a non-tuple sequence for multidimensional indexing is deprecated; "
+ "use `arr[array(seq)]` instead of `arr[seq]`. In the future this will "
+ "result in a TypeError. See https://github.com/google/jax/issues/4564 "
+ "for discussion of why this type of indexing is being deprecated.")
idx = (idx,)
+ # TODO(jakevdp): this stacklevel is appropriate for x[idx]; for ops.index_update
+ # we should use stacklevel=5; for x.at[idx].set() we should use stacklevel=6.
+ warnings.warn(msg, FutureWarning, stacklevel=4)
else:
idx = (idx,)
return idx
| diff --git a/tests/lax_numpy_indexing_test.py b/tests/lax_numpy_indexing_test.py
--- a/tests/lax_numpy_indexing_test.py
+++ b/tests/lax_numpy_indexing_test.py
@@ -14,6 +14,7 @@
import collections
+from contextlib import contextmanager
import enum
from functools import partial
import itertools
@@ -458,7 +459,7 @@ def testStaticIndexing(self, shape, dtype, indexer):
jnp_fun = lambda x: jnp.asarray(x)[indexer]
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
- self._CompileAndCheck(jnp_fun, args_maker)
+ self._CompileAndCheck(jnp_fun, args_maker)
@parameterized.named_parameters({
"testcase_name":
@@ -562,7 +563,7 @@ def jnp_fun(x, unpacked_indexer):
args_maker = lambda: [rng(shape, dtype), unpacked_indexer]
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
- self._CompileAndCheck(jnp_fun, args_maker)
+ self._CompileAndCheck(jnp_fun, args_maker)
@parameterized.named_parameters(
{"testcase_name": "{}_inshape={}_indexer={}"
@@ -613,7 +614,7 @@ def testAdvancedIntegerIndexing(self, shape, dtype, indexer):
jnp_fun = lambda x, idx: jnp.asarray(x)[idx]
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
- self._CompileAndCheck(jnp_fun, args_maker)
+ self._CompileAndCheck(jnp_fun, args_maker)
@parameterized.named_parameters(
{"testcase_name": "{}_inshape={}_indexer={}"
@@ -672,7 +673,8 @@ def testAdvancedIntegerIndexingGrads(self, shape, dtype, indexer):
tol = 1e-2 if jnp.finfo(dtype).bits == 32 else None
arg = rng(shape, dtype)
fun = lambda x: jnp.asarray(x)[indexer]
- check_grads(fun, (arg,), 2, tol, tol, eps=1.)
+ with suppress_deprecated_indexing_warnings():
+ check_grads(fun, (arg,), 2, tol, tol, eps=1.)
@parameterized.named_parameters(
{"testcase_name": "{}_inshape={}_indexer={}"
@@ -699,7 +701,7 @@ def np_fun(x, indexer_with_dummies):
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
- self._CompileAndCheck(jnp_fun, args_maker)
+ self._CompileAndCheck(jnp_fun, args_maker)
def testAdvancedIndexingManually(self):
x = np.random.RandomState(0).randn(3, 4, 5)
@@ -752,7 +754,8 @@ def testBooleanIndexingArray1D(self):
def testBooleanIndexingList1D(self):
idx = [True, True, False]
x = api.device_put(np.arange(3))
- ans = x[idx]
+ with suppress_deprecated_indexing_warnings():
+ ans = x[idx]
expected = np.arange(3)[idx]
self.assertAllClose(ans, expected, check_dtypes=False)
@@ -766,7 +769,8 @@ def testBooleanIndexingArray2DBroadcast(self):
def testBooleanIndexingList2DBroadcast(self):
idx = [True, True, False, True]
x = np.arange(8).reshape(4, 2)
- ans = api.device_put(x)[idx]
+ with suppress_deprecated_indexing_warnings():
+ ans = api.device_put(x)[idx]
expected = x[idx]
self.assertAllClose(ans, expected, check_dtypes=False)
@@ -946,7 +950,7 @@ def testStaticIndexing(self, shape, dtype, update_shape, update_dtype,
jax_fn = lambda x, y: UpdateOps.jax_fn(op, indexer, x, y)
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fn, jax_fn, args_maker)
- self._CompileAndCheck(jax_fn, args_maker)
+ self._CompileAndCheck(jax_fn, args_maker)
@parameterized.named_parameters(jtu.cases_from_list({
"testcase_name": "{}_inshape={}_indexer={}_update={}_sugared={}_op={}".format(
@@ -973,7 +977,7 @@ def testAdvancedIndexing(self, shape, dtype, update_shape, update_dtype,
jax_fn = lambda x, y: UpdateOps.jax_fn(op, indexer, x, y, unique_indices=True)
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fn, jax_fn, args_maker)
- self._CompileAndCheck(jax_fn, args_maker)
+ self._CompileAndCheck(jax_fn, args_maker)
@parameterized.named_parameters(jtu.cases_from_list({
"testcase_name": "{}_inshape={}_indexer={}_update={}_sugared={}_op={}".format(
@@ -1002,7 +1006,7 @@ def testAdvancedIndexingSorted(self, shape, dtype, update_shape, update_dtype,
op, indexer, x, y, indices_are_sorted=True, unique_indices=True)
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fn, jax_fn, args_maker, check_dtypes=True)
- self._CompileAndCheck(jax_fn, args_maker, check_dtypes=True)
+ self._CompileAndCheck(jax_fn, args_maker, check_dtypes=True)
@parameterized.named_parameters(jtu.cases_from_list({
"testcase_name": "{}_inshape={}_indexer={}_update={}_op={}_sugared={}".format(
@@ -1029,7 +1033,7 @@ def testMixedAdvancedIndexing(self, shape, dtype, update_shape, update_dtype,
jax_fn = lambda x, y: UpdateOps.jax_fn(op, indexer, x, y)
with suppress_deprecated_indexing_warnings():
self._CheckAgainstNumpy(np_fn, jax_fn, args_maker)
- self._CompileAndCheck(jax_fn, args_maker)
+ self._CompileAndCheck(jax_fn, args_maker)
@parameterized.named_parameters(jtu.cases_from_list({
"testcase_name": "{}_inshape={}_indexer={}_update={}_op={}".format(
@@ -1105,6 +1109,40 @@ def testIndexDtypeError(self):
jnp.zeros(5).at[::2].set(1)
self.assertLen(w, 0)
+ @contextmanager
+ def assertNoWarnings(self):
+ with warnings.catch_warnings(record=True) as caught_warnings:
+ yield
+ self.assertEmpty(caught_warnings)
+
+ @parameterized.named_parameters(jtu.cases_from_list({
+ "testcase_name": "idx={}".format(idx), "idx": idx, "idx_type": idx_type}
+ for idx, idx_type in [
+ ([0], "array"),
+ ([0, 0], "array"),
+ ([[0, 0]], "tuple"),
+ ([0, [0, 1]], "tuple"),
+ ([0, np.arange(2)], "tuple"),
+ ([0, None], "tuple"),
+ ([0, slice(None)], "tuple"),
+ ]))
+ def testIndexSequenceDeprecationWarning(self, idx, idx_type):
+ msg = fr"Using a non-tuple sequence for multidimensional indexing is deprecated.*arr\[{idx_type}\(seq\)\]"
+ normalize = {"array": np.array, "tuple": tuple}[idx_type]
+ x = jnp.arange(6).reshape(3, 2)
+
+ with self.assertWarnsRegex(FutureWarning, msg):
+ idx_get = x[idx]
+ with self.assertNoWarnings():
+ idx_get_norm = x[normalize(idx)]
+ self.assertArraysEqual(idx_get, idx_get_norm)
+
+ with self.assertWarnsRegex(FutureWarning, msg):
+ idx_set = x.at[idx].set(0)
+ with self.assertNoWarnings():
+ idx_set_norm = x.at[normalize(idx)].set(0)
+ self.assertArraysEqual(idx_set, idx_set_norm)
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -3696,7 +3696,7 @@ def testIssue764(self):
def testIssue776(self):
"""Tests that the scatter-add transpose rule instantiates symbolic zeros."""
def f(u):
- y = jax.ops.index_add(np.ones(10,), [2, 4, 5], u)
+ y = jnp.ones(10).at[np.array([2, 4, 5])].add(u)
# The transpose rule for lax.tie_in returns a symbolic zero for its first
# argument.
return lax.tie_in(y, 7.)
@@ -4008,7 +4008,7 @@ def testLinspaceEndpoints(self, dtype, rng_factory):
rng = rng_factory(self.rng())
endpoints = rng((2,), dtype)
out = jnp.linspace(*endpoints, 10, dtype=dtype)
- self.assertAllClose(out[[0, -1]], endpoints, rtol=0, atol=0)
+ self.assertAllClose(out[np.array([0, -1])], endpoints, rtol=0, atol=0)
@parameterized.named_parameters(
jtu.cases_from_list(
| Should JAX deprecate indexing with lists?
Since numpy 1.16, indexing with a list in place of a tuple has led to a `FutureWarning` (See https://github.com/numpy/numpy/pull/9686 for a discussion of the rationale for this):
```python
>>> import numpy as np
>>> x = np.arange(6).reshape(2, 3)
>>> idx = [[0], [1]]
>>> x[idx]
FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
array([1])
```
As mentioned in the warning, the current behavior treats the indices as identical to a tuple:
```python
>>> x[tuple(idx)]
array([1])
```
while in the future, the indices will be treated as an array:
```python
>>> x[np.array(idx)]
array([[[0, 1, 2]],
[[3, 4, 5]]])
```
JAX currently implements the old, deprecated behavior, without any warning:
```python
>>> import jax.numpy as jnp
>>> jnp.array(x)[idx]
DeviceArray([1], dtype=int32)
```
This is setting us up for a future where numpy and JAX have different indexing semantics for lists of indices. I would propose that we follow numpy and start warning about this behavior now, so that when a numpy release finally does deprecate this indexing behavior, jax will be ready to immediately follow suit.
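A minimal sketch of what such a warning could look like at the indexing entry point (the helper name and its placement are illustrative only, not the actual implementation):
```python
import warnings

def _warn_on_list_index(idx):
    # Sketch: mirror NumPy's FutureWarning for non-tuple sequence indices.
    if isinstance(idx, list):
        warnings.warn(
            "Using a non-tuple sequence for multidimensional indexing is "
            "deprecated; use arr[tuple(seq)] instead of arr[seq].",
            FutureWarning, stacklevel=3)
```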
Thoughts?
| 👍 we should definitely deprecate this. We can probably be even more aggressive about removing support for this given the experimental nature of JAX.
In the long term I would be inclined to make _all_ indexing with lists in JAX an error. Even the case where lists are treated like an array is at odds with how JAX disallows lists as arguments to functions like `jnp.sum()`. It's not much more painful to require inserting array() in expressions like `x[:, jnp.array([1, 2])]` and has the advantage of much more explicit conversion.
+1. Let's just disallow this now. | 2020-10-19T18:00:35 |
google/jax | 4,652 | google__jax-4652 | [
"4648"
] | 4b8334ab0b8d9d3f1c676866d177f3e3e6b4c250 | diff --git a/jax/scipy/stats/__init__.py b/jax/scipy/stats/__init__.py
--- a/jax/scipy/stats/__init__.py
+++ b/jax/scipy/stats/__init__.py
@@ -14,17 +14,17 @@
# flake8: noqa: F401
from . import bernoulli
-from . import poisson
from . import beta
from . import cauchy
from . import dirichlet
from . import expon
from . import gamma
+from . import geom
from . import laplace
+from . import logistic
from . import multivariate_normal
from . import norm
from . import pareto
+from . import poisson
from . import t
from . import uniform
-from . import logistic
-from . import geom
| docs: jax.scipy.stats.poisson not on the public API docs webpage
Hi. With the release of JAX [`v0.2.4`](https://github.com/google/jax/releases/tag/jax-v0.2.4) (congrats on the release btw) we noticed that `jax.scipy.stats.poisson` is not documented on the [public API docs for the `jax.scipy` package](https://jax.readthedocs.io/en/stable/jax.scipy.html#jax-scipy-stats). However, other distributions like [`jax.scipy.stats.norm` are documented](https://jax.readthedocs.io/en/stable/jax.scipy.html#module-jax.scipy.stats.norm).
Is this intentional (are there upcoming plans for `jax.scipy.stats.poisson`)? Or does `jax.scipy.stats.poisson` simply still need to be added to the Sphinx docs?
(cc @lukasheinrich @kratsg)
| 2020-10-20T13:49:07 |
||
google/jax | 4,695 | google__jax-4695 | [
"4692",
"4692"
] | 25b4070268448d1cd75220378b3db5c60551ddae | diff --git a/jax/_src/scipy/optimize/line_search.py b/jax/_src/scipy/optimize/line_search.py
--- a/jax/_src/scipy/optimize/line_search.py
+++ b/jax/_src/scipy/optimize/line_search.py
@@ -112,7 +112,7 @@ def body(state):
# This will cause the line search to stop, and since the Wolfe conditions
# are not satisfied the minimization should stop too.
- state = state._replace(failed=state.failed | (dalpha <= 1e-5))
+ state = state._replace(failed=state.failed | (dalpha <= 1e-10))
# Cubmin is sometimes nan, though in this case the bounds check will fail.
a_j_cubic = _cubicmin(state.a_lo, state.phi_lo, state.dphi_lo, state.a_hi,
diff --git a/jax/_src/scipy/optimize/minimize.py b/jax/_src/scipy/optimize/minimize.py
--- a/jax/_src/scipy/optimize/minimize.py
+++ b/jax/_src/scipy/optimize/minimize.py
@@ -11,9 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-from functools import partial
from typing import Any, Callable, Mapping, Optional, Tuple, Union
-
from .bfgs import minimize_bfgs
from typing import NamedTuple
import jax.numpy as jnp
@@ -92,7 +90,7 @@ def minimize(
if options is None:
options = {}
- fun_with_args = partial(fun, *args)
+ fun_with_args = lambda x: fun(x, *args)
if method.lower() == 'bfgs':
results = minimize_bfgs(fun_with_args, x0, **options)
| diff --git a/tests/scipy_optimize_test.py b/tests/scipy_optimize_test.py
--- a/tests/scipy_optimize_test.py
+++ b/tests/scipy_optimize_test.py
@@ -86,6 +86,14 @@ def min_op(x0):
scipy_res = scipy.optimize.minimize(func(np), x0, method='BFGS').x
self.assertAllClose(scipy_res, jax_res, atol=2e-5, check_dtypes=False)
+ def test_fixes4594(self):
+ n = 2
+ A = jnp.eye(n) * 1e4
+ def f(x):
+ return jnp.mean((A @ x) ** 2)
+ results = jax.scipy.optimize.minimize(f, jnp.ones(n), method='BFGS')
+ self.assertAllClose(results.x, jnp.zeros(n), atol=1e-6, rtol=1e-6)
+
if __name__ == "__main__":
absltest.main()
| jax.scipy.minimize optional function args not being passed properly
In `jax.scipy.optimize._minimize`,
```python
fun_with_args = partial(fun, *args)
```
incorrectly puts the `args` before the parameter when `args` are provided.
E.g.
```python
from functools import partial
func = lambda a,b: print(a,b)
args = ('b',)
partial(func, *args)('a')
# b a instead of a b as intended
```
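A minimal sketch of the intended binding (the optimization variable stays first and the extra `args` are appended after it):
```python
# Sketch: bind the extra args *after* the parameter instead of before it.
func = lambda a, b: print(a, b)
args = ('b',)
func_with_args = lambda a: func(a, *args)
func_with_args('a')  # prints: a b
```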
| 2020-10-23T14:53:00 |
|
google/jax | 4,799 | google__jax-4799 | [
"4797"
] | bd98b060b577622d19ba8cd47b611d32291a67b1 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -55,7 +55,7 @@
FLAGS = flags.FLAGS
_max = builtins.max
-_min = builtins.max
+_min = builtins.min
_reduce = functools.reduce
Array = Any
| _min = builtins.max ?
There is a very suspicious assignment `_min = builtins.max` in `lax.py`. Is this a potential bug or is there actually a reason for this?
https://github.com/google/jax/blob/5c9204a4613b5a92dc6c60c13579d783bab4976f/jax/_src/lax/lax.py#L58
| 2020-11-05T14:24:42 |
||
google/jax | 4,822 | google__jax-4822 | [
"4775"
] | fc07a26d0ce5cfdf0e28a2af04453bf608463515 | diff --git a/jax/_src/scipy/signal.py b/jax/_src/scipy/signal.py
--- a/jax/_src/scipy/signal.py
+++ b/jax/_src/scipy/signal.py
@@ -68,8 +68,6 @@ def convolve(in1, in2, mode='full', method='auto',
warnings.warn("convolve() ignores method argument")
if jnp.issubdtype(in1.dtype, jnp.complexfloating) or jnp.issubdtype(in2.dtype, jnp.complexfloating):
raise NotImplementedError("convolve() does not support complex inputs")
- if jnp.ndim(in1) != 1 or jnp.ndim(in2) != 1:
- raise ValueError("convolve() only supports 1-dimensional inputs.")
return _convolve_nd(in1, in2, mode, precision=precision)
@@ -92,12 +90,10 @@ def correlate(in1, in2, mode='full', method='auto',
warnings.warn("correlate() ignores method argument")
if jnp.issubdtype(in1.dtype, jnp.complexfloating) or jnp.issubdtype(in2.dtype, jnp.complexfloating):
raise NotImplementedError("correlate() does not support complex inputs")
- if jnp.ndim(in1) != 1 or jnp.ndim(in2) != 1:
- raise ValueError("correlate() only supports {ndim}-dimensional inputs.")
- return _convolve_nd(in1, in2[::-1], mode, precision=precision)
+ return _convolve_nd(in1, jnp.flip(in2), mode, precision=precision)
-@_wraps(osp_signal.correlate)
+@_wraps(osp_signal.correlate2d)
def correlate2d(in1, in2, mode='full', boundary='fill', fillvalue=0,
precision=None):
if boundary != 'fill' or fillvalue != 0:
| diff --git a/tests/scipy_signal_test.py b/tests/scipy_signal_test.py
--- a/tests/scipy_signal_test.py
+++ b/tests/scipy_signal_test.py
@@ -29,6 +29,7 @@
onedim_shapes = [(1,), (2,), (5,), (10,)]
twodim_shapes = [(1, 1), (2, 2), (2, 3), (3, 4), (4, 4)]
+threedim_shapes = [(2, 2, 2), (3, 3, 2), (4, 4, 2), (5, 5, 2)]
default_dtypes = jtu.dtypes.floating + jtu.dtypes.integer
@@ -42,16 +43,17 @@ class LaxBackedScipySignalTests(jtu.JaxTestCase):
op,
jtu.format_shape_dtype_string(xshape, dtype),
jtu.format_shape_dtype_string(yshape, dtype),
- mode),
+ mode), "shapeset": shapeset,
"xshape": xshape, "yshape": yshape, "dtype": dtype, "mode": mode,
"jsp_op": getattr(jsp_signal, op),
"osp_op": getattr(osp_signal, op)}
for mode in ['full', 'same', 'valid']
for op in ['convolve', 'correlate']
for dtype in default_dtypes
- for xshape in onedim_shapes
- for yshape in onedim_shapes))
- def testConvolutions(self, xshape, yshape, dtype, mode, jsp_op, osp_op):
+ for shapeset in [onedim_shapes, twodim_shapes, threedim_shapes]
+ for xshape in shapeset
+ for yshape in shapeset))
+ def testConvolutions(self, shapeset, xshape, yshape, dtype, mode, jsp_op, osp_op):
rng = jtu.rand_default(self.rng())
args_maker = lambda: [rng(xshape, dtype), rng(yshape, dtype)]
osp_fun = partial(osp_op, mode=mode)
| jax.scipy.signal.convolve should support n-dimensional inputs
`scipy.signal.convolve()` supports N-dimensional inputs; the JAX equivalent should as well.
There is already a `_convolve_nd` utility function in the module; this would involve calling that, and writing a test case to ensure the outputs match scipy's.
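A rough sketch of the behavior being asked for, assuming `_convolve_nd` already handles N-dimensional shapes once the 1-D check is dropped (shapes and tolerances below are illustrative):
```python
import numpy as np
import scipy.signal as osp_signal
import jax.numpy as jnp
from jax.scipy import signal as jsp_signal

# Desired: 2-D inputs should match scipy.signal.convolve directly.
x = np.random.randn(4, 5)
k = np.random.randn(3, 3)
expected = osp_signal.convolve(x, k, mode='same')
actual = jsp_signal.convolve(jnp.asarray(x), jnp.asarray(k), mode='same')
np.testing.assert_allclose(actual, expected, rtol=1e-4, atol=1e-4)
```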
| ... and similar for `jax.scipy.signal.correlate()`. | 2020-11-07T13:19:32 |
google/jax | 4,840 | google__jax-4840 | [
"4827"
] | 32e8dab032cb923fb5a545f7d9a6bc27057b42b9 | diff --git a/jax/experimental/optimizers.py b/jax/experimental/optimizers.py
--- a/jax/experimental/optimizers.py
+++ b/jax/experimental/optimizers.py
@@ -406,8 +406,8 @@ def update(i, g, state):
x, m, v = state
m = (1 - b1) * g + b1 * m # First moment estimate.
v = (1 - b2) * jnp.square(g) + b2 * v # Second moment estimate.
- mhat = m / (1 - b1 ** (i + 1)) # Bias correction.
- vhat = v / (1 - b2 ** (i + 1))
+ mhat = m / (1 - jnp.asarray(b1, m.dtype) ** (i + 1)) # Bias correction.
+ vhat = v / (1 - jnp.asarray(b2, m.dtype) ** (i + 1))
x = x - step_size(i) * mhat / (jnp.sqrt(vhat) + eps)
return x, m, v
def get_params(state):
@@ -442,7 +442,8 @@ def update(i, g, state):
x, m, u = state
m = (1 - b1) * g + b1 * m # First moment estimate.
u = jnp.maximum(b2 * u, jnp.abs(g)) # Update exponentially weighted infinity norm.
- x = x - (step_size(i) / (1 - b1 ** (i + 1))) * m / (u + eps)
+ x = (x - (step_size(i) / (1 - jnp.asarray(b1, m.dtype) ** (i + 1))) * m
+ / (u + eps))
return x, m, u
def get_params(state):
x, _, _ = state
| Adam turns Float32 into Float64
Dear Jax developers,
Thanks for the awesome work! When I set `jax_enable_x64=True`, I found that the Adam optimizer turns float32 into float64 after one update. This happens for `adam` exclusively, not for `sgd` or `rmsprop`. Is this a known issue or a deliberate design choice? Is there a nice way to work around it?
I am using `jax==0.2.1` and `jaxlib==0.1.55`. I run the following code purely on CPU; commenting out `jit` makes the issue disappear. The results are
```
0 float32
1 float64
2 float64
```
```
import numpy as onp
import jax.numpy as jnp
from jax.config import config
config.update('jax_enable_x64', True)
from jax import grad, jit
from jax.experimental import optimizers
x = onp.random.RandomState(0).normal(size=[10, 10]).astype('float32')
opt_init, opt_update, get_params = optimizers.adam(1e-3)
# opt_init, opt_update, get_params = optimizers.sgd(1e-3)
# opt_init, opt_update, get_params = optimizers.rmsprop(1e-3)
opt_state = opt_init(jnp.asarray(x).astype('float32'))
@jit
def loss(x):
return jnp.mean(x ** 2.)
@jit
def update(i, opt_state):
x = get_params(opt_state)
return opt_update(i, grad(loss)(x), opt_state)
print(0, x.dtype)
opt_state = update(0, opt_state)
x = get_params(opt_state)
print(1, x.dtype)
opt_state = update(1, opt_state)
x = get_params(opt_state)
print(2, x.dtype)
```
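A possible explanation (an assumption, not verified here): under `jit` with x64 enabled, the step index `i` is traced as a 64-bit integer, so the bias correction `1 - b1 ** (i + 1)` ends up in float64 and the division upcasts the float32 moment estimates. A sketch of a fix along those lines:
```python
# Sketch only: keep the bias correction in the dtype of the moment estimates
# so the Adam update does not silently promote float32 params to float64.
mhat = m / (1 - jnp.asarray(b1, m.dtype) ** (i + 1))
vhat = v / (1 - jnp.asarray(b2, v.dtype) ** (i + 1))
```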
| 2020-11-09T14:11:34 |
||
google/jax | 4,847 | google__jax-4847 | [
"4461"
] | 307b528b2146c043194dec4a6e0fe2c51ffa1a94 | diff --git a/build/build.py b/build/build.py
--- a/build/build.py
+++ b/build/build.py
@@ -179,6 +179,7 @@ def check_bazel_version(bazel_path, min_version, max_version):
common --experimental_repo_remote_exec
build --repo_env PYTHON_BIN_PATH="{python_bin_path}"
+build --action_env=PYENV_ROOT
build --python_path="{python_bin_path}"
build --repo_env TF_NEED_CUDA="{tf_need_cuda}"
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="{cuda_compute_capabilities}"
| Add Python 3.9 wheels for jaxlib
Sorry for the bother, but I'm having some problems with installing jaxlib in Python 3.9:
```
> pip install jaxlib==0.1.55
ERROR: Could not find a version that satisfies the requirement jaxlib==0.1.55 (from versions: 0.1, 0.1.1, 0.1.4)
ERROR: No matching distribution found for jaxlib==0.1.55
```
@hawkinsp
| We haven't released wheels for Python 3.9 yet. But we can try to do that! One slight blocker is that NumPy and SciPy haven't released 3.9 wheels yet (although we can most likely build them from source). So I'd be tempted to wait a week or two before trying to release 3.9 wheels.
@hawkinsp No rush! Just thought it might have been an oversight since a few versions are mysteriously there.
@hawkinsp we would like to build jaxlib from source in conda-forge, at the moment we are re-packaging the wheel. If there is any interested from the developers in doing so please open an issue in https://github.com/conda-forge/jaxlib-feedstock, we'd love to have some help!
PS: we already have scipy and numpy on py39!
FYI: [NumPy](https://pypi.org/project/numpy/#files) and [SciPy](https://pypi.org/project/scipy/#files) have now released wheels for Python 3.9. | 2020-11-09T21:27:10 |
|
google/jax | 4,896 | google__jax-4896 | [
"4696"
] | 65a7f6087cfbdeb452474c2c32f187c23bd45dbc | diff --git a/jax/_src/lax/control_flow.py b/jax/_src/lax/control_flow.py
--- a/jax/_src/lax/control_flow.py
+++ b/jax/_src/lax/control_flow.py
@@ -1021,8 +1021,11 @@ def _cond_transpose(cts, *args, branches, linear):
branches_trans = tuple(
_transpose_cond_jaxpr(jaxpr, num_res) for jaxpr in branches)
- lin_in_avals = [raise_to_shaped(a, weak_type=False) for a, l in zip(in_avals, linear) if l]
- assert all(jaxpr.out_avals == lin_in_avals for jaxpr in branches_trans)
+ lin_in_avals = [raise_to_shaped(a, weak_type=False)
+ for a, l in zip(in_avals, linear) if l]
+ assert all(core.typematch(out_aval, lin_in_aval)
+ for jaxpr in branches_trans
+ for out_aval, lin_in_aval in zip(jaxpr.out_avals, lin_in_avals))
res = ops[:num_res]
cts = _map(ad.instantiate_zeros_aval, branches[0].out_avals, cts)
diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -1317,7 +1317,8 @@ def check_jaxpr(jaxpr: Jaxpr):
- variables are typed equally throughout a jaxpr
- variable type annotations are compatible with their binding expression
- Raises `TypeError` if `jaxpr` is determined invalid. Returns `None` otherwise.
+ Raises `JaxprTypeError` if `jaxpr` is determined invalid. Returns `None`
+ otherwise.
"""
try:
_check_jaxpr(jaxpr, [v.aval for v in jaxpr.invars])
| diff --git a/tests/lax_control_flow_test.py b/tests/lax_control_flow_test.py
--- a/tests/lax_control_flow_test.py
+++ b/tests/lax_control_flow_test.py
@@ -999,6 +999,24 @@ def f(x):
self.assertAllClose(ans, expected, check_dtypes=False)
jtu.check_grads(f, (x,), order=2, modes=["fwd", "rev"])
+ def testSwitchGradWithWeakTypeMismatch(self): # issue 4696
+ branches = [
+ lambda x: x, # This preserves the weak type of x.
+ lambda x: jnp.cos(x), # This strips the weak type of x.
+ ]
+
+ def f_ref(x):
+ i = x.astype(jnp.int32)
+ return branches[i](x)
+
+ def f(x):
+ return lax.switch(x.astype(jnp.int32), branches, x)
+
+ for x in [0., 1.]:
+ ans = api.grad(f)(x)
+ expected = api.grad(f_ref)(x)
+ self.assertAllClose(ans, expected, check_dtypes=False)
+
@parameterized.named_parameters(
{"testcase_name": f"_{name}", "cond": cond}
for cond, name in COND_IMPLS)
| weak_types: error in grad of switch
This is an error on the current master that was introduced by #4161. In a backward pass over a switch statement, the in_avals and out_avals do not match if branches return data of a different weak type.
This is the same error you would see if you had branches that returned different dtypes, but here the dtypes match but the weak_types do not.
Short repro:
```python
from jax import grad, lax
import jax.numpy as jnp
branches = [
lambda x: x, # This preserves the weak type of x.
lambda x: jnp.cos(x), # This strips the weak type of x.
]
def f(x):
return lax.switch(x.astype(jnp.int32), branches, x)
grad(f)(0.0)
```
```pytb
Traceback (most recent call last):
File "tmp.py", line 12, in <module>
grad(f)(0.0)
File "/Users/vanderplas/github/google/jax/jax/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/vanderplas/github/google/jax/jax/api.py", line 750, in grad_f
_, g = value_and_grad_f(*args, **kwargs)
File "/Users/vanderplas/github/google/jax/jax/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/vanderplas/github/google/jax/jax/api.py", line 819, in value_and_grad_f
g = vjp_py(np.ones((), dtype=dtype))
File "/Users/vanderplas/github/google/jax/jax/api.py", line 1791, in _vjp_pullback_wrapper
ans = fun(*args)
File "/Users/vanderplas/github/google/jax/jax/interpreters/ad.py", line 120, in unbound_vjp
arg_cts = backward_pass(jaxpr, consts, dummy_args, cts)
File "/Users/vanderplas/github/google/jax/jax/interpreters/ad.py", line 220, in backward_pass
cts_out = get_primitive_transpose(eqn.primitive)(cts_in, *invals,
File "/Users/vanderplas/github/google/jax/jax/_src/lax/control_flow.py", line 1023, in _cond_transpose
assert all(jaxpr.out_avals == lin_in_avals for jaxpr in branches_trans)
```
| 2020-11-14T02:28:32 |
|
google/jax | 4,907 | google__jax-4907 | [
"4905"
] | 0da1fbe2850ee7d21a6a98a0c68c3423b77c7965 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -4315,6 +4315,35 @@ def _scatter_translation_rule(c, operand, scatter_indices, updates, *,
_scatter_dimensions_proto(indices_shape, dimension_numbers),
indices_are_sorted, unique_indices)
+def _scatter_add_translation_rule(
+ c, operand, scatter_indices, updates, *, update_jaxpr, update_consts,
+ dimension_numbers, indices_are_sorted, unique_indices,
+ expand_complex128=False):
+ dtype = c.get_shape(operand).numpy_dtype()
+ scatter_dims = _scatter_dimensions_proto(c.get_shape(scatter_indices),
+ dimension_numbers)
+
+ def _make_reducer(dtype):
+ subc = xla_bridge.make_computation_builder("scatter_add_reducer")
+ shape = xc.Shape.array_shape(np.dtype(dtype), ())
+ args = [xb.parameter(subc, 0, shape), xb.parameter(subc, 1, shape)]
+ out = xops.Add(args[0], args[1])
+ return subc.build(out)
+
+ if expand_complex128 and dtype == np.complex128:
+ update_computation = _make_reducer(np.float64)
+ re = xops.Scatter(xops.Real(operand), scatter_indices, xops.Real(updates),
+ update_computation, scatter_dims, indices_are_sorted,
+ unique_indices)
+ im = xops.Scatter(xops.Imag(operand), scatter_indices, xops.Imag(updates),
+ update_computation, scatter_dims, indices_are_sorted,
+ unique_indices)
+ return xops.Complex(re, im)
+ else:
+ update_computation = _make_reducer(dtype)
+ return xops.Scatter(operand, scatter_indices, updates, update_computation,
+ scatter_dims, indices_are_sorted, unique_indices)
+
def _scatter_add_jvp(primals, tangents, *, update_jaxpr, update_consts,
dimension_numbers, indices_are_sorted, unique_indices):
operand, scatter_indices, updates = primals
@@ -4454,12 +4483,14 @@ def _scatter_batching_rule(scatter_op, batched_args, batch_dims, *,
scatter_add_p = standard_primitive(
_scatter_shape_rule, _scatter_dtype_rule, 'scatter-add',
- _scatter_translation_rule)
+ _scatter_add_translation_rule)
ad.primitive_jvps[scatter_add_p] = _scatter_add_jvp
ad.primitive_transposes[scatter_add_p] = _scatter_add_transpose_rule
batching.primitive_batchers[scatter_add_p] = (
partial(_scatter_batching_rule, scatter_add))
+xla.backend_specific_translations['gpu'][scatter_add_p] = partial(
+ _scatter_add_translation_rule, expand_complex128=True)
scatter_mul_p = standard_primitive(
_scatter_shape_rule, _scatter_dtype_rule, 'scatter-mul',
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -1934,7 +1934,7 @@ def testGatherShapeCheckingRule(self, operand_shape, start_indices_shape,
"arg_shape": arg_shape, "dtype": dtype, "idxs": idxs,
"update_shape": update_shape, "dnums": dnums,
"rng_factory": rng_factory, "rng_idx_factory": rng_idx_factory}
- for dtype in float_dtypes
+ for dtype in inexact_dtypes
for arg_shape, idxs, update_shape, dnums in [
((5,), np.array([[0], [2]]), (2,), lax.ScatterDimensionNumbers(
update_window_dims=(), inserted_window_dims=(0,),
| Slow gradient calculation when indexing a complex-valued object inside a loss.
I have observed that there is a large increase in computation time when differentiating through a loss that involves indexing a complex128-valued array. Consider the following demo script.
```python
import argparse
import time
import jax.numpy as jnp
from jax import jit, grad
from jax import random
from jax.config import config
parser = argparse.ArgumentParser(description='Complex indexing and gradients in JAX')
parser.add_argument('--complex', default=True, dest='complex', action='store_true', help='Index a complex-valued object')
parser.add_argument('--no-complex', dest='complex', action='store_false')
parser.add_argument('--double', default=True, dest='double', action='store_true', help='Use double precision')
parser.add_argument('--no-double', dest='double', action='store_false')
parser.add_argument('--index', default=True, dest='index', action='store_true', help='Index the object inside the loss')
parser.add_argument('--no-index', dest='index', action='store_false')
args = parser.parse_args()
if args.double:
config.update("jax_enable_x64", True)
def loss(fvol, idx):
# Define a loss function that either indexes or does not index the array.
if args.index:
vv = fvol[idx].sum([-1])
# Doing this is fast again in complex-128.
# fvol_real = fvol.real[idx]
# fvol_imag = fvol.imag[idx]
# vv = jnp.sum(fvol_real + 1j*fvol_imag, [-1])
else:
vv = fvol
return jnp.square(jnp.linalg.norm(vv))
@jit
def loss_grad(fvol, idx):
return grad(loss)(fvol, idx)
# Create a large array.
rng = random.PRNGKey(0)
rng_real, rng_imag = random.split(rng, 2)
L = 256
fvol = random.normal(rng_real, [L**3])
if args.complex:
fvol += 1j * random.normal(rng_imag, fvol.shape)
dtype = fvol.dtype
idx = random.randint(rng, [L*L*8], minval=0, maxval=L)
# Time how long it takes to compute the loss.
for i in range(3):
start = time.time()
lv = loss(fvol, idx)
lv.block_until_ready()
elapsed = time.time() - start
print('object data: {} - time elapsed to evaluate loss: {:.5f}'.format(dtype, elapsed))
# Time how long it takes to compute the gradient of the loss.
for i in range(3):
start = time.time()
g = loss_grad(fvol, idx)
g.block_until_ready()
elapsed = time.time() - start
print('object data: {} - time elapsed to evaluate gradient: {:.5f}'.format(dtype, elapsed))
```
I am running this demo script on a V100 GPU with jaxlib 0.1.57 with Python 3.6.9. When I invoke the demo as `python demo.py --complex --double --index` (meaning that I have a complex-valued array, double precision, and a loss function that involves indexing) I get the following result:
```
object data: complex128 - time elapsed to evaluate loss: 0.63968
object data: complex128 - time elapsed to evaluate loss: 0.03179
object data: complex128 - time elapsed to evaluate loss: 0.03082
object data: complex128 - time elapsed to evaluate gradient: 4.70996
object data: complex128 - time elapsed to evaluate gradient: 4.55858
object data: complex128 - time elapsed to evaluate gradient: 4.56153
```
We see that it takes significantly longer to compute the gradient than the loss itself. If I invoke the script as `python demo.py --complex --double --no-index` (complex-valued array, double precision, and a loss that does not involve indexing) then things are better; the gradient calculation is ~3x the timing of the loss calculation.
```
object data: complex128 - time elapsed to evaluate loss: 0.18214
object data: complex128 - time elapsed to evaluate loss: 0.00098
object data: complex128 - time elapsed to evaluate loss: 0.00059
object data: complex128 - time elapsed to evaluate gradient: 0.13375
object data: complex128 - time elapsed to evaluate gradient: 0.00175
object data: complex128 - time elapsed to evaluate gradient: 0.00148
```
What about in single precision? Invoking the script as `python demo.py --complex --no-double --index` yields
```
object data: complex64 - time elapsed to evaluate loss: 0.60020
object data: complex64 - time elapsed to evaluate loss: 0.00300
object data: complex64 - time elapsed to evaluate loss: 0.00164
object data: complex64 - time elapsed to evaluate gradient: 0.18140
object data: complex64 - time elapsed to evaluate gradient: 0.03122
object data: complex64 - time elapsed to evaluate gradient: 0.03108
```
The gradient calculation is around an order of magnitude slower. What about when we have an unindexed loss function using the invocation `python demo.py --complex --no-double --no-index`?
```
object data: complex64 - time elapsed to evaluate loss: 0.17156
object data: complex64 - time elapsed to evaluate loss: 0.00114
object data: complex64 - time elapsed to evaluate loss: 0.00057
object data: complex64 - time elapsed to evaluate gradient: 0.11934
object data: complex64 - time elapsed to evaluate gradient: 0.00087
object data: complex64 - time elapsed to evaluate gradient: 0.00077
```
Finally, what happens if I consider indexing a real-valued array instead of a complex one? Using the invocation `python demo.py --no-complex --double --index`
```
object data: float64 - time elapsed to evaluate loss: 0.60958
object data: float64 - time elapsed to evaluate loss: 0.00578
object data: float64 - time elapsed to evaluate loss: 0.00180
object data: float64 - time elapsed to evaluate gradient: 0.13558
object data: float64 - time elapsed to evaluate gradient: 0.00106
object data: float64 - time elapsed to evaluate gradient: 0.00082
```
and compare this to the first output I showed where the gradient calculation was around four seconds.
I note in the demo script that there is a correction to the loss that will make the code fast again when run in double precision and indexing a complex-valued array. Replacing `vv = fvol[idx].sum([-1])` (line 25) with
```python
fvol_real = fvol.real[idx]
fvol_imag = fvol.imag[idx]
vv = jnp.sum(fvol_real + 1j*fvol_imag, [-1])
```
produces the output
```
object data: complex128 - time elapsed to evaluate loss: 0.92206
object data: complex128 - time elapsed to evaluate loss: 0.03364
object data: complex128 - time elapsed to evaluate loss: 0.03140
object data: complex128 - time elapsed to evaluate gradient: 0.20444
object data: complex128 - time elapsed to evaluate gradient: 0.06708
object data: complex128 - time elapsed to evaluate gradient: 0.06507
```
which is a significant improvement over the four seconds the code was taking before. This is just speculation, but in researching this issue my colleagues and I thought this bug could be at play here as well: https://github.com/google/jax/issues/4115
| Yes, I strongly suspect this is a duplicate of #4115.
The issue is that XLA's implementation of `scatter-add`, which is the gradient of the `gather` we use to compute indexing, relies on atomic operations, but GPUs don't have atomic operations wide enough for complex128 values. In this case, XLA falls back to a slow loop of updates without any parallelism.
That said, for this *particular* scatter, which is a scatter-add, we can use the trick you have commented out, which is to perform separate scatters on the real and imaginary parts. We could do it inside the XLA translation rule easily enough. | 2020-11-16T16:15:13 |
google/jax | 4,993 | google__jax-4993 | [
"4992"
] | 7bd67efcc604605ea5afae02dba85e7ec4cf7eab | diff --git a/jax/_src/random.py b/jax/_src/random.py
--- a/jax/_src/random.py
+++ b/jax/_src/random.py
@@ -12,32 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-"""JAX pseudo-random number generators (PRNGs).
-
-Example usage:
-
->>> rng = jax.random.PRNGKey(seed)
->>> for i in range(num_steps):
-... rng, rng_input = jax.random.split(rng)
-... params = compiled_update(rng_input, params, next(batches))
-
-Context:
-
-Among other requirements, the JAX PRNG aims to:
-(a) ensure reproducibility,
-(b) parallelize well, both in terms of vectorization (generating array values)
-and multi-replica, multi-core computation. In particular it should not use
-sequencing constraints between random function calls.
-
-The approach is based on:
-1. "Parallel random numbers: as easy as 1, 2, 3" (Salmon et al. 2011)
-2. "Splittable pseudorandom number generators using cryptographic hashing"
-(Claessen et al. 2013)
-
-See also https://github.com/google/jax/blob/master/design_notes/prng.md
-for the design and its motivation.
-"""
-
from functools import partial
from typing import Optional, Sequence, Union
diff --git a/jax/random.py b/jax/random.py
--- a/jax/random.py
+++ b/jax/random.py
@@ -12,6 +12,68 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+"""Utilities for pseudo-random number generation.
+
+The ``jax.random`` package provides a number of routines for deterministic
+generation of sequences of pseudorandom numbers.
+
+Basic usage
+-----------
+
+>>> key = jax.random.PRNGKey(seed)
+>>> for i in range(num_steps):
+... key, subkey = jax.random.split(key)
+... params = compiled_update(subkey, params, next(batches))
+
+PRNG Keys
+---------
+Unlike the *stateful* pseudorandom number generators (PRNGs) that users of NumPy and
+SciPy may be accustomed to, JAX random functions all require an explicit PRNG state to
+be passed as a first argument.
+The random state is described by two unsigned 32-bit integers that we call a **key**,
+usually generated by the :py:func:`jax.random.PRNGKey` function::
+
+ >>> from jax import random
+ >>> key = random.PRNGKey(0)
+ >>> key
+ DeviceArray([0, 0], dtype=uint32)
+
+This key can then be used in any of JAX's random number generation routines::
+
+ >>> random.uniform(key)
+ DeviceArray(0.41845703, dtype=float32)
+
+Note that using a key does not modify it, so reusing the same key will lead to the same result::
+
+ >>> random.uniform(key)
+ DeviceArray(0.41845703, dtype=float32)
+
+If you need a new random number, you can use :meth:`jax.random.split` to generate new subkeys::
+
+ >>> key, subkey = random.split(key)
+ >>> random.uniform(subkey)
+ DeviceArray(0.10536897, dtype=float32)
+
+Design and Context
+------------------
+
+Among other requirements, the JAX PRNG aims to:
+
+(a) ensure reproducibility,
+(b) parallelize well, both in terms of vectorization (generating array values)
+ and multi-replica, multi-core computation. In particular it should not use
+ sequencing constraints between random function calls.
+
+The approach is based on:
+
+1. "Parallel random numbers: as easy as 1, 2, 3" (Salmon et al. 2011)
+2. "Splittable pseudorandom number generators using cryptographic hashing"
+ (Claessen et al. 2013)
+
+See also https://github.com/google/jax/blob/master/design_notes/prng.md
+for the design and its motivation.
+"""
+
# flake8: noqa: F401
from jax._src.random import (
| JAX Random Docs Page is Blank
It seems the page for `jax.random` is not being generated correctly.
https://jax.readthedocs.io/en/latest/jax.random.html
| Thanks for reporting. I'll take a look - I suspect it may be because jax/random.py does not have a module-level docstring.
Hmm... looks like it's due to more than just the docstring. #4964 put the random functions into a private module, so sphinx sees this and skips them in autodoc. The solution, I think, is to explicitly list all functions to be documented, as we do in `jax.numpy.rst`. | 2020-11-20T23:32:57 |
|
google/jax | 4,999 | google__jax-4999 | [
"4983"
] | c7b2b9ed07dbcd848867ee04044f4b772e10d9ff | diff --git a/build/build_wheel.py b/build/build_wheel.py
--- a/build/build_wheel.py
+++ b/build/build_wheel.py
@@ -57,13 +57,19 @@ def _copy_so(src_file, dst_dir, dst_filename=None):
else:
dst_filename = src_filename
dst_file = os.path.join(dst_dir, dst_filename)
- shutil.copy(src_file, dst_file)
+ if _is_windows():
+ shutil.copyfile(src_file, dst_file)
+ else:
+ shutil.copy(src_file, dst_file)
def _copy_normal(src_file, dst_dir, dst_filename=None):
src_filename = os.path.basename(src_file)
dst_file = os.path.join(dst_dir, dst_filename or src_filename)
- shutil.copy(src_file, dst_file)
+ if _is_windows():
+ shutil.copyfile(src_file, dst_file)
+ else:
+ shutil.copy(src_file, dst_file)
def copy_file(src_file, dst_dir, dst_filename=None):
@@ -169,4 +175,3 @@ def build_wheel(sources_path, output_path):
finally:
if tmpdir:
tmpdir.cleanup()
-
| Error caused by shutil.rmtree
```
Traceback (most recent call last):
File "\\?\C:\Users\cloud\AppData\Local\Temp\Bazel.runfiles_vfpgffuf\runfiles\__main__\build\install_xla_in_source_tree.py", line 83, in <module>
shutil.rmtree(jaxlib_dir)
File "C:\Users\cloud\miniconda3\lib\shutil.py", line 516, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\cloud\miniconda3\lib\shutil.py", line 400, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\cloud\miniconda3\lib\shutil.py", line 398, in _rmtree_unsafe
os.unlink(fullname)
WindowsError: [Error 5] Access is denied.: 'D:\\jax\\build\\jaxlib\\cublas_kernels.pyd'
```
This only happens on rebuild.
The reason is that `shutil.rmtree` will not delete read-only files on Windows.
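A common generic workaround (a sketch only, not necessarily the right fix for this build script) is to clear the read-only bit from an `onerror` handler:
```python
import os
import stat
import shutil

def _remove_readonly(func, path, _excinfo):
    # Clear the read-only attribute and retry the failed operation.
    os.chmod(path, stat.S_IWRITE)
    func(path)

shutil.rmtree(jaxlib_dir, onerror=_remove_readonly)
```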
| @cloudhan can you try with https://github.com/google/jax/pull/4982 patched in?
This reworks the build a bit more so it builds a wheel file rather than writing into the source tree. Since it uses a temporary directory it doesn't need this particular line of code.
I just updated that PR to fix some more windows-specific problems and verified I was able to build a windows wheel. | 2020-11-23T16:11:48 |
|
google/jax | 5,077 | google__jax-5077 | [
"5054"
] | 621f34b6dc992a2f0ce033a527ed7f47f24118a9 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -1005,10 +1005,24 @@ def sinc(x):
_check_arraylike("sinc", x)
x, = _promote_dtypes_inexact(x)
eq_zero = lax.eq(x, lax._const(x, 0))
- safe_x = where(eq_zero, lax._const(x, 0), x)
- pi_x = lax.mul(lax._const(x, pi), safe_x)
- return where(eq_zero,
- lax._const(x, 1), lax.div(lax.sin(pi_x), pi_x))
+ pi_x = lax.mul(lax._const(x, pi), x)
+ safe_pi_x = where(eq_zero, lax._const(x, 0), pi_x)
+ return where(eq_zero, _sinc_maclaurin(0, pi_x),
+ lax.div(lax.sin(safe_pi_x), safe_pi_x))
+
+@partial(custom_jvp, nondiff_argnums=(0,))
+def _sinc_maclaurin(k, x):
+ # compute the kth derivative of x -> sin(x)/x evaluated at zero (since we
+ # compute the monomial term in the jvp rule)
+ if k % 2:
+ return lax.full_like(x, 0)
+ else:
+ return lax.full_like(x, (-1) ** (k // 2) / (k + 1))
+
+@_sinc_maclaurin.defjvp
+def _sinc_maclaurin_jvp(k, primals, tangents):
+ (x,), (t,) = primals, tangents
+ return _sinc_maclaurin(k, x), _sinc_maclaurin(k + 1, x) * t
@_wraps(np.transpose)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -4531,6 +4531,37 @@ def testOpGradSpecialValue(self, op, special_value, order):
check_grads(op, (special_value,), order, ["fwd", "rev"],
atol={np.float32: 3e-3})
+ def testSincAtZero(self):
+ # Some manual tests for sinc at zero, since it doesn't have well-behaved
+ # numerical derivatives at zero
+ def deriv(f):
+ return lambda x: api.jvp(f, (x,), (1.,))[1]
+
+ def apply_all(fns, x):
+ for f in fns:
+ x = f(x)
+ return x
+
+ d1 = 0.
+ for ops in itertools.combinations_with_replacement([deriv, api.grad], 1):
+ self.assertAllClose(apply_all(ops, jnp.sinc)(0.), d1)
+
+ d2 = -np.pi ** 2 / 3
+ for ops in itertools.combinations_with_replacement([deriv, api.grad], 2):
+ self.assertAllClose(apply_all(ops, jnp.sinc)(0.), d2)
+
+ d3 = 0.
+ for ops in itertools.combinations_with_replacement([deriv, api.grad], 3):
+ self.assertAllClose(apply_all(ops, jnp.sinc)(0.), d3)
+
+ d4 = np.pi ** 4 / 5
+ for ops in itertools.combinations_with_replacement([deriv, api.grad], 4):
+ self.assertAllClose(apply_all(ops, jnp.sinc)(0.), d4)
+
+ def testSincGradArrayInput(self):
+ # tests for a bug almost introduced in #5077
+ jax.grad(lambda x: jnp.sinc(x).sum())(jnp.arange(10.)) # doesn't crash
+
def testTakeAlongAxisIssue1521(self):
# https://github.com/google/jax/issues/1521
idx = jnp.repeat(jnp.arange(3), 10).reshape((30, 1))
| Add custom JVP rule for jnp.sinc
The current implementation has incorrect even-ordered derivatives at `x=0.0`:
```python
import matplotlib.pyplot as plt
import jax.numpy as jnp
from jax import grad, vmap
x = jnp.linspace(-5, 5, 101)
y = vmap(grad(grad(jnp.sinc)))(x)
plt.plot(x, y)
```

Related to #5039
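For reference, the expected values at zero follow from the Maclaurin series (assuming the `sin(πx)/(πx)` convention that `np.sinc` uses):
```
\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}
                       = 1 - \frac{(\pi x)^2}{6} + \frac{(\pi x)^4}{120} - \cdots
\quad\Rightarrow\quad
\operatorname{sinc}''(0) = -\frac{\pi^2}{3}, \qquad
\operatorname{sinc}^{(4)}(0) = \frac{\pi^4}{5}
```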
| 2020-12-02T08:42:49 |
|
google/jax | 5,089 | google__jax-5089 | [
"5088"
] | d6e8a701d04e8ccb98501dc25cda0b0e05c5f6e0 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -3879,6 +3879,10 @@ def replace(tup, val):
lst[axis] = val
return tuple(lst)
+ use_64bit_index = _any([type(d) is Poly or d >= (1 << 31) for d in arr.shape])
+ index_dtype = int64 if use_64bit_index else int32
+ indices = lax.convert_element_type(indices, index_dtype)
+
bcast_shape = lax.broadcast_shapes(replace(arr.shape, 1), replace(indices.shape, 1))
indices = broadcast_to(indices, replace(bcast_shape, indices.shape[axis]))
arr = broadcast_to(arr, replace(bcast_shape, arr.shape[axis]))
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -3346,10 +3346,11 @@ def testTakeEmpty(self):
@parameterized.named_parameters(jtu.cases_from_list(
- {"testcase_name": "_{}_ishape={}_axis={}".format(
- jtu.format_shape_dtype_string(x_shape, dtype), i_shape, axis),
- "rng_factory": rng_factory, "x_shape": x_shape, "i_shape": i_shape, "dtype": dtype,
- "axis": axis}
+ {"testcase_name": "_{}_index={}_axis={}".format(
+ jtu.format_shape_dtype_string(x_shape, dtype),
+ jtu.format_shape_dtype_string(i_shape, index_dtype), axis),
+ "x_shape": x_shape, "i_shape": i_shape, "dtype": dtype,
+ "index_dtype": index_dtype, "axis": axis}
for x_shape, i_shape in filter(
_shapes_are_equal_length,
filter(_shapes_are_broadcast_compatible,
@@ -3357,9 +3358,10 @@ def testTakeEmpty(self):
for axis in itertools.chain(range(len(x_shape)), [-1],
[cast(Optional[int], None)])
for dtype in default_dtypes
- for rng_factory in [jtu.rand_default]))
- def testTakeAlongAxis(self, x_shape, i_shape, dtype, axis, rng_factory):
- rng = rng_factory(self.rng())
+ for index_dtype in int_dtypes))
+ def testTakeAlongAxis(self, x_shape, i_shape, dtype, index_dtype, axis):
+ rng = jtu.rand_default(self.rng())
+
i_shape = np.array(i_shape)
if axis is None:
i_shape = [np.prod(i_shape, dtype=np.int64)]
@@ -3370,7 +3372,11 @@ def testTakeAlongAxis(self, x_shape, i_shape, dtype, axis, rng_factory):
def args_maker():
x = rng(x_shape, dtype)
n = np.prod(x_shape, dtype=np.int32) if axis is None else x_shape[axis]
- i = rng(i_shape, np.int32) % (2 * n - 1) - (n - 1)
+ if np.issubdtype(index_dtype, np.unsignedinteger):
+ index_rng = jtu.rand_int(self.rng(), 0, n)
+ else:
+ index_rng = jtu.rand_int(self.rng(), -n, n)
+ i = index_rng(i_shape, index_dtype)
return x, i
jnp_op = lambda x, i: jnp.take_along_axis(x, i, axis=axis)
@@ -3380,6 +3386,14 @@ def args_maker():
self._CheckAgainstNumpy(np_op, jnp_op, args_maker)
self._CompileAndCheck(jnp_op, args_maker)
+ def testTakeAlongAxisWithUint8IndicesDoesNotOverflow(self):
+ # https://github.com/google/jax/issues/5088
+ h = jtu.rand_default(self.rng())((256, 256, 100), np.float32)
+ g = jtu.rand_int(self.rng(), 0, 100)((256, 256, 1), np.uint8)
+ q0 = jnp.take_along_axis(h, g, axis=-1)
+ q1 = np.take_along_axis( h, g, axis=-1)
+ np.testing.assert_equal(q0, q1)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_n={}_increasing={}".format(
jtu.format_shape_dtype_string([shape], dtype),
| Incorrect results from take_along_axis with uint8 indices
I get incorrect values from `jnp.take_along_axis` for arrays larger than 128 in any dimension with `uint8` indices. It works when I cast the indices to larger bit widths.
Jax version 0.2.6
Jaxlib version 0.1.57
```
h = np.random.random((256,256,100))
g = np.random.randint(0,100, size=(256,256,1)).astype(np.uint8)
q0 = jnp.take_along_axis(h, g, axis=-1)
q1 = np.take_along_axis( h, g, axis=-1)
print(np.allclose(q0[:128,:128], q1[:128,:128])) #True
print(np.allclose(q0, q1)) #False
```
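Concretely, the cast workaround mentioned above looks like this (a sketch reusing `h`, `g`, and `q1` from the snippet; `np.int32` is just one sufficiently wide choice):
```python
q0_cast = jnp.take_along_axis(h, g.astype(np.int32), axis=-1)
print(np.allclose(q0_cast, q1))  # True
```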
| This is because of an integer overflow in an internal indexing calculation. I'll send a fix shortly. | 2020-12-03T13:43:01 |
google/jax | 5,111 | google__jax-5111 | [
"5102"
] | f776cdc1b3fdc2d5c0e6b75f614188f969a51d60 | diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -1167,8 +1167,7 @@ def copy(self):
def __repr__(self):
line_width = np.get_printoptions()["linewidth"]
- # TODO(jblespia): Should we put back self.__class__.__name__ ?
- prefix = '{}('.format("DeviceArray")
+ prefix = '{}('.format(self.__class__.__name__.lstrip('_'))
s = np.array2string(self._value, prefix=prefix, suffix=',',
separator=', ', max_line_width=line_width)
dtype_str = 'dtype={})'.format(self.dtype.name)
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -2091,6 +2091,10 @@ def test_device_put_replicated_pytree(self):
self.assertTrue(all(b.device() == d for b, d in zip(y2.device_buffers, devices)))
self.assertArraysEqual(y2, np.stack([xs['b'] for _ in devices]))
+ def test_repr(self):
+ x = jax.device_put_replicated(1, jax.devices())
+ self.assertStartsWith(repr(x), 'ShardedDeviceArray')
+
class SpecToIndicesTest(jtu.JaxTestCase):
| repr(sharded_device_array) should print ShardedDeviceArray
Just noticed that we had a trivial regression in how repr(sharded_device_array) works:
```python
In [1]: from jax.api import device_put_sharded, devices
In [2]: device_put_sharded([1], devices())
Out[2]: DeviceArray([1], dtype=int32)
```
We use repr(x) to show the type of x, e.g. in demos, so we probably want to put this back.
| Ah, [it's just this change](https://github.com/google/jax/commit/ad2de7554641d5ee986f43837bacaebc79c2337b#diff-121975a1a88d61a259f8423655e23a1b389eb18c399b267395d2aba5a196bae9R1170). @jblespiau to answer your question in that TODO, yes we should put it back ;) | 2020-12-05T05:27:17 |
google/jax | 5,156 | google__jax-5156 | [
"5154"
] | 24a27e07f54f1d0dcc39ff0a8c478eeb4e0ff236 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -417,7 +417,6 @@ def fn(x1, x2):
arcsinh = _one_to_one_unop(np.arcsinh, lax.asinh, True)
tanh = _one_to_one_unop(np.tanh, lax.tanh, True)
arcsinh = _one_to_one_unop(np.arcsinh, lax.asinh, True)
-arccosh = _one_to_one_unop(np.arccosh, lax.acosh, True)
arctanh = _one_to_one_unop(np.arctanh, lax.atanh, True)
sqrt = _one_to_one_unop(np.sqrt, lax.sqrt, True)
@@ -437,6 +436,14 @@ def fn(x1, x2):
float_power = _one_to_one_binop(np.float_power, lax.pow, True)
nextafter = _one_to_one_binop(np.nextafter, lax.nextafter, True, True)
+@_wraps(np.arccosh)
+def arccosh(x):
+ # Note: arccosh is multi-valued for complex input, and lax.acosh uses a different
+ # convention than np.arccosh.
+ out = lax.acosh(*_promote_args_inexact("arccosh", x))
+ if issubdtype(out.dtype, np.complexfloating):
+ out = where(real(out) < 0, lax.neg(out), out)
+ return out
def _comparison_op(numpy_fn, lax_fn):
def fn(x1, x2):
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -180,10 +180,10 @@ def op_record(name, nargs, dtypes, shapes, rng_factory, diff_modes,
inexact=True),
op_record("arctan2", 2, float_dtypes, all_shapes, jtu.rand_small, ["rev"],
inexact=True),
- op_record("arcsinh", 1, number_dtypes, all_shapes, jtu.rand_positive, ["rev"],
- inexact=True),
- op_record("arccosh", 1, number_dtypes, all_shapes, jtu.rand_positive, ["rev"],
- inexact=True),
+ op_record("arcsinh", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"],
+ inexact=True, tolerance={np.complex64: 2E-4, np.complex128: 2E-14}),
+ op_record("arccosh", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"],
+ inexact=True, tolerance={np.complex64: 2E-2, np.complex128: 2E-12}),
op_record("arctanh", 1, number_dtypes, all_shapes, jtu.rand_small, ["rev"],
inexact=True, tolerance={np.float64: 1e-9}),
]
| jnp.acosh and np.acosh follow different conventions
The following code outputs different results:
```python
import numpy as np
from jax import numpy as jnp
x = -4.69577+2.1128223j
print(np.arccosh(x))
print(jnp.arccosh(x))
```
While there are infinitely many possible results, numpy defines a convention in its documentation for what the result should be:
https://numpy.org/doc/stable/reference/generated/numpy.arccosh.html
I do not know if it is that important, but figured I would report since we expect `jnp` and `np` to have the same behavior :)
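For what it's worth, NumPy's documented convention picks the branch with non-negative real part, so a sign flip recovers it from the other branch; a small sketch reusing `x` from above (both values are legitimate arccosh branches):
```python
out = jnp.arccosh(x)
out_np_convention = jnp.where(out.real < 0, -out, out)
print(out_np_convention)  # matches np.arccosh(x)
```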
| Thanks! We should change JAX to match numpy's convention here. | 2020-12-10T17:37:37 |
google/jax | 5,193 | google__jax-5193 | [
"5164"
] | 34bc6ca98779a3881352796898c43473e2975cc8 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -4352,7 +4352,7 @@ def _gather_batching_rule(batched_args, batch_dims, *, dimension_numbers,
counts = broadcasted_iota(start_indices.dtype, tuple(count_shape), 0)
start_indices = concatenate([counts, start_indices], len(count_shape) - 1)
- slice_sizes = (1,) + slice_sizes
+ slice_sizes = (_min(operand.shape[0], 1),) + slice_sizes
collapsed_slice_dims = (0,) + tuple(np.add(1, dimension_numbers.collapsed_slice_dims))
offset_dims = tuple(np.add(1, dimension_numbers.offset_dims))
start_index_map = (0,) + tuple(np.add(1, dimension_numbers.start_index_map))
| diff --git a/tests/lax_vmap_test.py b/tests/lax_vmap_test.py
--- a/tests/lax_vmap_test.py
+++ b/tests/lax_vmap_test.py
@@ -74,7 +74,12 @@ def _CheckBatching(self, op, bdim_size, bdims, shapes, dtypes, rng,
args = [rng(shape, dtype) for shape, dtype in zip(batched_shapes, dtypes)]
args_slice = args_slicer(args, bdims)
ans = api.vmap(op, bdims)(*args)
- expected = np.stack([op(*args_slice(i)) for i in range(bdim_size)])
+ if bdim_size == 0:
+ args = [rng(shape, dtype) for shape, dtype in zip(shapes, dtypes)]
+ out = op(*args)
+ expected = np.zeros((0,) + out.shape, out.dtype)
+ else:
+ expected = np.stack([op(*args_slice(i)) for i in range(bdim_size)])
self.assertAllClose(ans, expected, rtol=rtol, atol=atol)
@parameterized.named_parameters(itertools.chain.from_iterable(
@@ -642,6 +647,8 @@ def testFft(self, fft_ndims, shape, bdims):
for bdims in all_bdims(shape, idxs.shape)))
def testGather(self, shape, dtype, idxs, dnums, slice_sizes, bdims):
fun = partial(lax.gather, dimension_numbers=dnums, slice_sizes=slice_sizes)
+ self._CheckBatching(fun, 0, bdims, [shape, idxs.shape], [dtype, idxs.dtype],
+ jtu.rand_default(self.rng()))
self._CheckBatching(fun, 5, bdims, [shape, idxs.shape], [dtype, idxs.dtype],
jtu.rand_default(self.rng()))
| Regression bug in associative_scan
Hi,
There has been a regression in JAX code, which I believe to be related to this [commit](https://github.com/google/jax/commit/d3db7bd4be96e1fc16daa4571ade7fbfce13197f#diff-cc3f3a1265e2fee9115dc5a9f619d402e280fb48a7ed7d91261b42813d200f1b).
I have created PR https://github.com/google/jax/pull/5165 with a failing test that reproduces it.
I will try and look further into what the problem could be, but if the person who made said commit could also have a look it would be great (@hawkinsp I believe this is you).
Adrien
| Thanks for raising this!
What's the nature of the regression? An error, or a performance regression, or something else?
The error appears to be:
```
FAILED tests/lax_control_flow_test.py::LaxControlFlowTest::testAssociativeScanFailing_2 - TypeError: Slice size at index 0 in gather op is out of range, must be within [0, 1), got 1.
```
with traceback:
```
Traceback (most recent call last):
File "/Users/phawkins/.pyenv/versions/py3.9.0/lib/python3.9/site-packages/absl/testing/parameterized.py", line 282, in bound_param_test
return test_method(self, **testcase_params)
File "/Users/phawkins/t/issue5164/tests/lax_control_flow_test.py", line 2488, in testAssociativeScanFailing
_ = lax.associative_scan(fn, elems=(ms, vs))
File "/Users/phawkins/p/jax/jax/_src/lax/control_flow.py", line 2492, in associative_scan
scans = _scan(elems_flat)
File "/Users/phawkins/p/jax/jax/_src/lax/control_flow.py", line 2476, in _scan
even_elems = combine(
File "/Users/phawkins/p/jax/jax/_src/lax/control_flow.py", line 2431, in combine
c = fn(a, b)
File "/Users/phawkins/p/jax/jax/_src/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/phawkins/p/jax/jax/api.py", line 1197, in batched_fun
out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
File "/Users/phawkins/p/jax/jax/interpreters/batching.py", line 35, in batch
return batched_fun.call_wrapped(*in_vals)
File "/Users/phawkins/p/jax/jax/linear_util.py", line 160, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/phawkins/t/issue5164/tests/lax_control_flow_test.py", line 2486, in fn
return m1 + m2, jsp.linalg.solve(m1, v2) + jsp.linalg.solve(m2, v1)
File "/Users/phawkins/p/jax/jax/_src/scipy/linalg.py", line 193, in solve
return _solve(a, b, sym_pos, lower)
File "/Users/phawkins/p/jax/jax/_src/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/phawkins/p/jax/jax/api.py", line 382, in f_jitted
return cpp_jitted_f(*args, **kwargs)
File "/Users/phawkins/p/jax/jax/api.py", line 278, in cache_miss
out_flat = xla.xla_call(
File "/Users/phawkins/p/jax/jax/core.py", line 1229, in bind
return call_bind(self, fun, *args, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1220, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 1232, in process
return trace.process_call(self, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/batching.py", line 163, in process_call
vals_out = call_primitive.bind(f, *vals, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1229, in bind
return call_bind(self, fun, *args, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1220, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 1232, in process
return trace.process_call(self, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 598, in process_call
return primitive.impl(f, *tracers, **params)
File "/Users/phawkins/p/jax/jax/interpreters/xla.py", line 569, in _xla_call_impl
compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
File "/Users/phawkins/p/jax/jax/linear_util.py", line 251, in memoized_fun
ans = call(fun, *args)
File "/Users/phawkins/p/jax/jax/interpreters/xla.py", line 645, in _xla_callable
jaxpr, out_avals, consts = pe.trace_to_jaxpr_final(fun, abstract_args)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1230, in trace_to_jaxpr_final
jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1211, in trace_to_subjaxpr_dynamic
ans = fun.call_wrapped(*in_tracers)
File "/Users/phawkins/p/jax/jax/linear_util.py", line 160, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/phawkins/p/jax/jax/_src/scipy/linalg.py", line 168, in _solve
return np_linalg.solve(a, b)
File "/Users/phawkins/p/jax/jax/_src/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/phawkins/p/jax/jax/api.py", line 382, in f_jitted
return cpp_jitted_f(*args, **kwargs)
File "/Users/phawkins/p/jax/jax/api.py", line 278, in cache_miss
out_flat = xla.xla_call(
File "/Users/phawkins/p/jax/jax/core.py", line 1229, in bind
return call_bind(self, fun, *args, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1220, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 1232, in process
return trace.process_call(self, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/batching.py", line 163, in process_call
vals_out = call_primitive.bind(f, *vals, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1229, in bind
return call_bind(self, fun, *args, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1220, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 1232, in process
return trace.process_call(self, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1085, in process_call
jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(f, self.main, in_avals)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1211, in trace_to_subjaxpr_dynamic
ans = fun.call_wrapped(*in_tracers)
File "/Users/phawkins/p/jax/jax/linear_util.py", line 160, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/phawkins/p/jax/jax/_src/numpy/linalg.py", line 450, in solve
return lax_linalg._solve(a, b)
File "/Users/phawkins/p/jax/jax/_src/lax/linalg.py", line 263, in _solve
return custom_solve(b)
File "/Users/phawkins/p/jax/jax/_src/lax/control_flow.py", line 2211, in custom_linear_solve
out_flat = linear_solve_p.bind(
File "/Users/phawkins/p/jax/jax/core.py", line 271, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/batching.py", line 149, in process_primitive
val_out, dim_out = batched_primitive(vals_in, dims_in, **params)
File "/Users/phawkins/p/jax/jax/_src/lax/control_flow.py", line 2296, in _linear_solve_batching_rule
solve_jaxpr_batched, solve_x_bat = batching.batch_jaxpr(
File "/Users/phawkins/p/jax/jax/interpreters/batching.py", line 411, in batch_jaxpr
jaxpr_out, _, consts = pe.trace_to_jaxpr_dynamic(f, avals_in)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1201, in trace_to_jaxpr_dynamic
jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1211, in trace_to_subjaxpr_dynamic
ans = fun.call_wrapped(*in_tracers)
File "/Users/phawkins/p/jax/jax/linear_util.py", line 160, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/phawkins/p/jax/jax/core.py", line 141, in jaxpr_as_fun
return eval_jaxpr(closed_jaxpr.jaxpr, closed_jaxpr.consts, *args)
File "/Users/phawkins/p/jax/jax/core.py", line 352, in eval_jaxpr
ans = eqn.primitive.bind(*(subfuns + in_vals), **bind_params)
File "/Users/phawkins/p/jax/jax/core.py", line 1229, in bind
return call_bind(self, fun, *args, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1220, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 1232, in process
return trace.process_call(self, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/batching.py", line 163, in process_call
vals_out = call_primitive.bind(f, *vals, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1229, in bind
return call_bind(self, fun, *args, **params)
File "/Users/phawkins/p/jax/jax/core.py", line 1220, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 1232, in process
return trace.process_call(self, fun, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1085, in process_call
jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(f, self.main, in_avals)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1211, in trace_to_subjaxpr_dynamic
ans = fun.call_wrapped(*in_tracers)
File "/Users/phawkins/p/jax/jax/linear_util.py", line 160, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/phawkins/p/jax/jax/core.py", line 352, in eval_jaxpr
ans = eqn.primitive.bind(*(subfuns + in_vals), **bind_params)
File "/Users/phawkins/p/jax/jax/core.py", line 271, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/batching.py", line 149, in process_primitive
val_out, dim_out = batched_primitive(vals_in, dims_in, **params)
File "/Users/phawkins/p/jax/jax/_src/lax/lax.py", line 4324, in _gather_batching_rule
return gather(operand, start_indices, dimension_numbers=dnums,
File "/Users/phawkins/p/jax/jax/_src/lax/lax.py", line 868, in gather
return gather_p.bind(
File "/Users/phawkins/p/jax/jax/core.py", line 271, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/Users/phawkins/p/jax/jax/interpreters/partial_eval.py", line 1073, in process_primitive
out_avals = primitive.abstract_eval(*avals, **params)
File "/Users/phawkins/p/jax/jax/_src/lax/lax.py", line 1989, in standard_abstract_eval
shapes, dtypes = shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs)
File "/Users/phawkins/p/jax/jax/_src/lax/lax.py", line 4223, in _gather_shape_rule
raise TypeError(f"Slice size at index {i} in gather op is out of range, "
TypeError: Slice size at index 0 in gather op is out of range, must be within [0, 1), got 1.
```
I didn't yet have time to try to decode what this means.
Hi,
It's an error. I have some code which implements Kalman filtering using prefix sum operations [(see this)](https://arxiv.org/abs/1905.13002). It used to work like a charm, but I've come back to it recently to see it fail a bit randomly (it's related to batching it seems). I've narrowed it down to the combination of associative_scan and lu permutations, hence the test added in the PR.
Adrien
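To make it concrete, the failing combination boils down to a scan whose combine function does linear solves, roughly like this (shapes illustrative, adapted from the test in that PR):
```python
import jax.numpy as jnp
import jax.scipy.linalg as jsla
from jax import lax, random

def combine(a, b):
    m1, v1 = a
    m2, v2 = b
    return m1 + m2, jsla.solve(m1, v2) + jsla.solve(m2, v1)

k1, k2 = random.split(random.PRNGKey(0))
ms = jnp.eye(2) + random.uniform(k1, (8, 2, 2))  # stack of matrices
vs = random.uniform(k2, (8, 2))                  # stack of vectors
lax.associative_scan(combine, (ms, vs))  # hits the gather error above on affected versions
```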
@hawkinsp thanks, if you disable jit in your env you'll see that the stack stops at the lu solving bit.
I'm not 100% sure if it's the LU that's to blame or the associative scan, but looking at the file history it seemed more plausible for it to be the associative_scan. | 2020-12-15T03:28:04 |
google/jax | 5,203 | google__jax-5203 | [
"5190"
] | 92c993af17904a8d5b2cfe92fe425f40ca43520e | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -266,9 +266,9 @@ def cache_miss(_, *args, **kwargs):
# (dyn_args, donated_invars, args_flat, in_tree), since otherwise we have
# work/code that is redundant between C++ and Python. We can try that later.
if max(static_argnums + donate_argnums, default=-1) >= len(args):
- raise ValueError(f"jitted function has static_argnums={static_argnums}, "
- f"donate_argnums={donate_argnums} but "
- f"was called with only {len(args)} positional arguments.")
+ msg = ("jitted function has static_argnums={}, donate_argnums={} but "
+ "was called with only {} positional arguments.")
+ raise ValueError(msg.format(static_argnums, donate_argnums, len(args)))
f = lu.wrap_init(fun)
if static_argnums:
f, dyn_args = argnums_partial_except(f, static_argnums, args)
| How to resolve ValueError `vector::reserve`?
I am trying to optimize the following function using jit:
```py
@partial(jit, static_argnums=(0, 1,))
def coocurrence_helper(pairs: np.array, label_map: Dict) -> lil_matrix:
uniques = lil_matrix(np.zeros((len(label_map), len(label_map))).astype("int32"))
for item in pairs:
if item[0]!=item[1]:
uniques[label_map[item[0]], label_map[item[1]]] += 1
return uniques
```
the routine above is used here:
```py
def _get_pairwise_frequencies(
data: pd.DataFrame, crosstab=False
) -> pd.DataFrame:
values = data.stack()
values.index = values.index.droplevel(1)
values.name = "vals"
values = optimize(values.to_frame())
pair = optimize(values.join(values, rsuffix="_2"))
label_map = dict()
for lbl, each in enumerate(values.vals.unique()):
label_map[each] = lbl
if not crosstab:
freq = coocurrence_helper(pairs = pair.values, label_map=label_map)
return ((freq / freq.sum(1).ravel()).astype(np.float32))
else:
freq = pd.crosstab(pair["vals"], pair["vals_2"])
self.index = freq.index
return csr_matrix((freq / freq.sum(1)).astype(np.float32))
```
But I get the following error:
```py
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-42-f8e638fc2bb6> in <module>
----> 1 _get_pairwise_frequencies(data)
<ipython-input-30-43adeb39c76c> in _get_pairwise_frequencies(data, crosstab)
25 label_map[each] = lbl
26 if not crosstab:
---> 27 freq = coocurrence_helper(pairs = pair.values, label_map=label_map)
28 return csr_matrix((freq / freq.sum(1).ravel()).astype(np.float32))
29 else:
~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/jax/api.py in f_jitted(*args, **kwargs)
369 return cache_miss(*args, **kwargs)[0] # probably won't return
370 else:
--> 371 return cpp_jitted_f(*args, **kwargs)
372 f_jitted._cpp_jitted_f = cpp_jitted_f
373
ValueError: vector::reserve
```
What can be the source of the issue here? Without using `static_argnums` the error message is
```py
RuntimeError: Invalid argument: Unknown NumPy type O size 8
```
with the same traceback.
| `scipy.sparse.lil_matrix` is not a JAX-compatible type, so it cannot be used as a return value in jit-compiled code. Your code should work fine if you remove `@partial(jit, static_argnums=(0, 1,))`; if you want to use JAX jit, you'll have to rewrite the jitted parts of your code to avoid the use of `scipy.sparse` objects.
That said, that's a pretty bad error message, and I'm not entirely sure how to trigger it. Can you share a small reproduction of the `vector::reserve` error?
I managed to reproduce it from the above code this way:
```python
import pandas as pd
from scipy.sparse import lil_matrix, csr_matrix
import numpy as np
from jax import jit, partial
from typing import Dict
def optimize(x):
return x
@partial(jit, static_argnums=(0, 1,))
def coocurrence_helper(pairs: np.array, label_map: Dict) -> lil_matrix:
uniques = lil_matrix(np.zeros((len(label_map), len(label_map))).astype("int32"))
for item in pairs:
if item[0]!=item[1]:
uniques[label_map[item[0]], label_map[item[1]]] += 1
return uniques
def _get_pairwise_frequencies(
data: pd.DataFrame, crosstab=False
) -> pd.DataFrame:
values = data.stack()
values.index = values.index.droplevel(1)
values.name = "vals"
values = optimize(values.to_frame())
pair = optimize(values.join(values, rsuffix="_2"))
label_map = dict()
for lbl, each in enumerate(values.vals.unique()):
label_map[each] = lbl
if not crosstab:
freq = coocurrence_helper(pairs = pair.values, label_map=label_map)
return ((freq / freq.sum(1).ravel()).astype(np.float32))
else:
freq = pd.crosstab(pair["vals"], pair["vals_2"])
self.index = freq.index
return csr_matrix((freq / freq.sum(1)).astype(np.float32))
_get_pairwise_frequencies(
pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
)
```
I'm certain it could be simplified :grin:
It seems the key to the repro is to return an invalid jax type while passing a static argument by keyword:
```python
from scipy.sparse import lil_matrix
from jax import jit, partial
@partial(jit, static_argnums=0)
def f(size) -> lil_matrix:
return lil_matrix((size, size))
f(size=4)
```
```pytb
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-57-eb6a7f1d5936> in <module>()
6 return lil_matrix((size, size))
7
----> 8 f(size=4)
/usr/local/lib/python3.6/dist-packages/jax/api.py in f_jitted(*args, **kwargs)
369 return cache_miss(*args, **kwargs)[0] # probably won't return
370 else:
--> 371 return cpp_jitted_f(*args, **kwargs)
372 f_jitted._cpp_jitted_f = cpp_jitted_f
373
ValueError: vector::reserve
```
If you instead pass the static argument by position, you get a more reasonable error:
```python
f(4)
```
```pytb
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-58-adf85db3d866> in <module>()
6 return lil_matrix((size, size))
7
----> 8 f(4)
-----[15 frames]
/usr/local/lib/python3.6/dist-packages/jax/core.py in concrete_aval(x)
860 handler = pytype_aval_mappings.get(typ)
861 if handler: return handler(x)
--> 862 raise TypeError(f"{type(x)} is not a valid JAX type")
863
864
TypeError: <class 'scipy.sparse.lil.lil_matrix'> is not a valid JAX type
```
Actually, it turns out `vector::reserve` comes from any function passing a static arg by keyword:
```python
from jax import jit
f = jit(lambda x: x, static_argnums=0)
f(x=4)
```
```pytb
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-65-ed41108eed0c> in <module>()
1 from jax import jit
2 f = jit(lambda x: x, static_argnums=0)
----> 3 f(x=4)
/usr/local/lib/python3.6/dist-packages/jax/api.py in f_jitted(*args, **kwargs)
369 return cache_miss(*args, **kwargs)[0] # probably won't return
370 else:
--> 371 return cpp_jitted_f(*args, **kwargs)
372 f_jitted._cpp_jitted_f = cpp_jitted_f
373
ValueError: vector::reserve
```
We should either handle the keyword argument appropriately in jitted functions, or if that is difficult, provide a better error to the user. | 2020-12-16T01:23:01 |
|
google/jax | 5,219 | google__jax-5219 | [
"5217"
] | 943c7794f92f32c4796cf980eedd5a11c86eea66 | diff --git a/jax/experimental/jet.py b/jax/experimental/jet.py
--- a/jax/experimental/jet.py
+++ b/jax/experimental/jet.py
@@ -229,6 +229,7 @@ def linear_prop(prim, primals_in, series_in, **params):
deflinear(lax.conj_p)
deflinear(lax.imag_p)
deflinear(lax.add_p)
+deflinear(ad_util.add_jaxvals_p)
deflinear(lax.sub_p)
deflinear(lax.convert_element_type_p)
deflinear(lax.broadcast_in_dim_p)
| diff --git a/tests/jet_test.py b/tests/jet_test.py
--- a/tests/jet_test.py
+++ b/tests/jet_test.py
@@ -19,6 +19,7 @@
import numpy as np
import unittest
+import jax
from jax import test_util as jtu
import jax.numpy as jnp
import jax.scipy.special
@@ -379,6 +380,14 @@ def g(x):
assert g_out_primals == f_out_primals
assert g_out_series == f_out_series
+ def test_add_any(self):
+ # https://github.com/google/jax/issues/5217
+ f = lambda x, eps: x * eps + eps + x
+ def g(eps):
+ x = jnp.array(1.)
+ return jax.grad(f)(x, eps)
+ jet(g, (1.,), ([1.],)) # doesn't crash
+
if __name__ == '__main__':
absltest.main(testLoader=jtu.JaxTestLoader())
| Composition of jet with grad
@mattjj
Hi all,
I noticed that sometimes composing jet with grad produces an error. See example below:
```
import jax.numpy as jnp
from jax import grad
from jax.experimental.jet import jet
f = lambda x, ε: x * ε + ε + x
def g(ε):
x = jnp.array(1.)
return grad(f)(x, ε)
jet(g, (1.,), ([1.],))
```
Any insights if this is a bug?
Thanks,
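For reference, a one-line local workaround sketch, mirroring the patch above, is to register the missing linear rule yourself until a fixed release ships:
```python
from jax import ad_util
from jax.experimental import jet
jet.deflinear(ad_util.add_jaxvals_p)
```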
| Thanks for raising this! It's a missing rule for `jet` (which needs one rule for each primitive), so it's more of an 'enhancement needed' rather than a 'bug'. Or maybe I just prefer that positive framing of things :) | 2020-12-17T20:10:32 |
google/jax | 5,220 | google__jax-5220 | [
"4672"
] | 1a5b186ceb6b1e364a35f2dbc220b3d641cc3be6 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -1135,7 +1135,6 @@ def reduce(operands: Array, init_values: Array, computation: Callable,
return convert_element_type(monoid_reducer(*flat_operands, dimensions), weak_type=weak_type)
else:
flat_init_avals = safe_map(_abstractify, flat_init_values)
- # breakpoint()
jaxpr, consts, out_tree = _variadic_reduction_jaxpr(
computation, tuple(flat_init_avals), init_value_tree)
out = reduce_p.bind(*(flat_operands + flat_init_values), computation=computation,
diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -329,7 +329,7 @@ def primitive_computation(prim, axis_env, backend, tuple_args, *avals, **params)
assert isinstance(ans, xe.XlaOp)
c.clear_op_metadata()
try:
- return c.build()
+ return c.build(ans)
except RuntimeError as e:
msg = (" ".join(map(str, e.args)) + "\n"
"This is a bug in JAX's shape-checking rules; please report it!\n"
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -2392,5 +2392,11 @@ def testUnaryWeakTypes(self, op_name, rec_dtypes):
self.assertTrue(py_op.aval.weak_type)
self.assertFalse(lax_op.aval.weak_type)
+ def testCumsumLengthOne(self):
+ # regression test for issue 4672
+ x = lax.full((1,), 1)
+ out = lax.cumsum(x)
+ self.assertArraysEqual(out, x)
+
if __name__ == '__main__':
absltest.main(testLoader=jtu.JaxTestLoader())
| jax.numpy.repeat fails with JIT disabled for dimensions of size 1
`jax.numpy.repeat` throws a `RuntimeError` when JIT is disabled and the array dimension being repeated is of size 1.
The same error occurs on both CPU and GPU.
Simple repro:
```py
import jax
import jax.numpy as jnp
with jax.disable_jit():
jnp.repeat(jnp.zeros((1, 2)), repeats=2, axis=0, total_repeat_length=2)
```
Callstack:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-88-7216fb989347> in <module>()
4
5 with jax.disable_jit():
----> 6 jnp.repeat(jnp.zeros((1, 2)), repeats=2, axis=0, total_repeat_length=2)
7
8 def run_tests():
11 frames
google3/third_party/py/jax/_src/numpy/lax_numpy.py in repeat(a, repeats, axis, total_repeat_length)
2921 x=zeros([total_repeat_length], dtype=int32),
2922 idx=scatter_indices,
-> 2923 y=1)
2924 # Cumsum again to get scatter indices for repeat, e.g. [0,1,1,3,3,3,3,3]
2925 gather_indices = cumsum(block_split_indicators) - 1
google3/third_party/py/jax/_src/ops/scatter.py in index_add(x, idx, y, indices_are_sorted, unique_indices)
141 """
142 return _scatter_update(
--> 143 x, idx, y, lax.scatter_add, indices_are_sorted, unique_indices)
144
145
google3/third_party/py/jax/_src/ops/scatter.py in _scatter_update(x, idx, y, scatter_op, indices_are_sorted, unique_indices)
50 treedef, static_idx, dynamic_idx = jnp._split_index_for_jit(idx)
51 return _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx,
---> 52 indices_are_sorted, unique_indices)
53
54
google3/third_party/py/jax/_src/ops/scatter.py in _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx, indices_are_sorted, unique_indices)
62
63 idx = jnp._merge_static_and_dynamic_indices(treedef, static_idx, dynamic_idx)
---> 64 indexer = jnp._index_to_gather(jnp.shape(x), idx)
65
66 # Broadcast `y` to the slice output shape.
google3/third_party/py/jax/_src/numpy/lax_numpy.py in _index_to_gather(x_shape, idx)
4057 advanced_pairs = ((_normalize_index(e, x_shape[j]), i, j)
4058 for e, i, j in advanced_pairs)
-> 4059 advanced_indexes, idx_advanced_axes, x_advanced_axes = zip(*advanced_pairs)
4060 advanced_axes_are_contiguous = np.all(np.diff(idx_advanced_axes) == 1)
4061
google3/third_party/py/jax/_src/numpy/lax_numpy.py in <genexpr>(.0)
4056 if isscalar(e) or isinstance(e, (Sequence, ndarray)))
4057 advanced_pairs = ((_normalize_index(e, x_shape[j]), i, j)
-> 4058 for e, i, j in advanced_pairs)
4059 advanced_indexes, idx_advanced_axes, x_advanced_axes = zip(*advanced_pairs)
4060 advanced_axes_are_contiguous = np.all(np.diff(idx_advanced_axes) == 1)
google3/third_party/py/jax/_src/numpy/lax_numpy.py in _normalize_index(index, axis_size)
3785
3786 return lax.select(
-> 3787 lax.lt(index, _constant_like(index, 0)),
3788 lax.add(index, _constant_like(index, axis_size)),
3789 index)
google3/third_party/py/jax/_src/lax/lax.py in lt(x, y)
379 def lt(x: Array, y: Array) -> Array:
380 r"""Elementwise less-than: :math:`x < y`."""
--> 381 return lt_p.bind(x, y)
382
383 def convert_element_type(operand: Array, new_dtype: DType) -> Array:
google3/third_party/py/jax/core.py in bind(self, *args, **params)
261 top_trace = find_top_trace(args)
262 tracers = map(top_trace.full_raise, args)
--> 263 out = top_trace.process_primitive(self, tracers, params)
264 return map(full_lower, out) if self.multiple_results else full_lower(out)
265
google3/third_party/py/jax/core.py in process_primitive(self, primitive, tracers, params)
571
572 def process_primitive(self, primitive, tracers, params):
--> 573 return primitive.impl(*tracers, **params)
574
575 def process_call(self, primitive, f, tracers, params):
google3/third_party/py/jax/interpreters/xla.py in apply_primitive(prim, *args, **params)
232 """Impl rule that compiles and runs a single primitive 'prim' using XLA."""
233 compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args), **params)
--> 234 return compiled_fun(*args)
235
236
google3/third_party/py/jax/interpreters/xla.py in _execute_compiled_primitive(prim, compiled, result_handler, *args)
347 device, = compiled.local_devices()
348 input_bufs = list(it.chain.from_iterable(device_put(x, device) for x in args if x is not token))
--> 349 out_bufs = compiled.execute(input_bufs)
350 if FLAGS.jax_debug_nans: check_nans(prim, out_bufs)
351 return result_handler(*out_bufs)
RuntimeError: Invalid argument: Argument does not match host shape or layout of computation parameter 0: want s32[1]{0}, got pred[]
```
| I'm getting the same error, and am stuck because of it.
I have the same issue when disabling jit, but everything works fine when it is enabled.
I've been looking into this; it seems to be coming from a strange behavior of `jnp.cumsum` with jit disabled, which can be reproduced this way:
```python
from jax import lax
import jax.numpy as jnp
print(repr(lax.cumsum(jnp.ones(1))))
```
```
DeviceArray(False, dtype=float32)
```
I'm not certain how `False` has a dtype of `float32`, but I suspect whatever bug is causing this is the root of the issue here. | 2020-12-17T20:51:57 |
google/jax | 5,223 | google__jax-5223 | [
"5218"
] | 87284970d7d5ed6e44849b3115a33fe5a0ed3278 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -5898,21 +5898,25 @@ def _stop_gradient_batch_rule(batched_args, batch_dims):
batching.primitive_batchers[ad_util.stop_gradient_p] = _stop_gradient_batch_rule
-def create_token(x):
+def create_token(_=None):
"""Creates an XLA token value with no preconditions for sequencing effects.
Experimental.
- Args:
- x: a dummy argument used to tie the CreateToken operator into a trace. The
- value of `x` is ignored.
+ The argument is ignored. It exists for backward compatibility.
"""
- # x is a dummy argument used to tie the operator into a trace.
- return create_token_p.bind(stop_gradient(x))
+ if config.omnistaging_enabled:
+ return create_token_p.bind()
+ else:
+ x = _
+ if x is None:
+ raise ValueError(
+ 'create_token needs a tie-in operand unless omnistaging is enabled.')
+ return create_token_p.bind(stop_gradient(x))
create_token_p = Primitive("create_token")
create_token_p.def_impl(partial(xla.apply_primitive, create_token_p))
-create_token_p.def_abstract_eval(lambda _: abstract_token)
+create_token_p.def_abstract_eval(lambda *_: abstract_token)
xla.translations[create_token_p] = lambda c, *_: xops.CreateToken(c)
def after_all(*operands):
| remove tie-in operands under omnistaging for primitives that don't need them
With omnistaging enabled, any primitive such as `create_token`, and its corresponding `lax` function, no longer needs to take a tie-in argument for data dependence.
Removing `create_token`'s operands also enforces that the primitive can't be linear, so it can't be transposed, and the system shouldn't try to transpose it. As @mattjj points out, a nullary primitive wouldn't appear in a jaxpr to be transposed because partial evaluation will never pick it up. This ought to address [this comment in #4292](https://github.com/google/jax/issues/4292#issuecomment-746289386).
| 2020-12-18T04:05:17 |
||
google/jax | 5,244 | google__jax-5244 | [
"5222"
] | 38224f6b0b5fd43a1c37d8027298b764af4c06c4 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -68,7 +68,8 @@
from .interpreters import masking
from .interpreters import invertible_ad as iad
from .interpreters.invertible_ad import custom_ivjp
-from .custom_derivatives import custom_jvp, custom_vjp, custom_gradient
+from .custom_derivatives import (closure_convert, custom_gradient, custom_jvp,
+ custom_vjp)
from .config import flags, config, bool_env
traceback_util.register_exclusion(__file__)
diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -19,10 +19,11 @@
from typing import Callable, Sequence, Tuple, Any
from . import core
+from . import dtypes
from . import linear_util as lu
from .tree_util import (tree_flatten, tree_unflatten, tree_map, tree_multimap,
register_pytree_node_class)
-from .util import safe_zip, safe_map, split_list
+from .util import cache, safe_zip, safe_map, split_list
from .api_util import flatten_fun_nokwargs, argnums_partial, wrap_hashably
from .core import raise_to_shaped
from .ad_util import Zero, zeros_like_aval, stop_gradient_p
@@ -837,3 +838,108 @@ def tree_flatten(self):
def tree_unflatten(cls, aux, consts):
jaxpr, in_tree, out_tree = aux
return cls(jaxpr, in_tree, out_tree, consts)
+
+
+def closure_convert(fun, *example_args):
+ """Closure conversion utility, for use with higher-order custom derivatives.
+
+ To define custom derivatives such as with ``jax.custom_vjp(f)``, the target
+ function ``f`` must take, as formal arguments, all values involved in
+ differentiation. If ``f`` is a higher-order function, in that it accepts as an
+ argument a Python function ``g``, then values stored away in ``g``'s closure
+ will not be visible to the custom derivative rules, and attempts at AD
+ involving these values will fail. One way around this is to convert the
+ closure by extracting these values, and to pass them as explicit formal
+ arguments across the custom derivative boundary. This utility carries out that
+ conversion. More precisely, it closure-converts the function ``fun``
+ specialized to the types of the arguments given in ``example_args``.
+
+ When we refer here to "values in the closure" of ``fun``, we do not mean the
+ values that are captured by Python directly when ``fun`` is defined (e.g. the
+ Python objects in ``fun.__closure__``, if the attribute exists). Rather, we
+ mean values encountered during the execution of ``fun`` on ``example_args``
+ that determine its output. This may include, for instance, arrays captured
+ transitively in Python closures, i.e. in the Python closure of functions
+ called by ``fun``, the closures of the functions that they call, and so forth.
+
+ The function ``fun`` must be a pure function.
+
+ Example usage::
+
+ def minimize(objective_fn, x0):
+ converted_fn, aux_args = closure_convert(objective_fn, x0)
+ return _minimize(converted_fn, x0, *aux_args)
+
+ @partial(custom_vjp, nondiff_argnums=(0,))
+ def _minimize(objective_fn, x0, *args):
+ z = objective_fn(x0, *args)
+ # ... find minimizer x_opt ...
+ return x_opt
+
+ def fwd(objective_fn, x0, *args):
+ y = _minimize(objective_fn, x0, *args)
+ return y, (y, args)
+
+ def rev(objective_fn, res, g):
+ y, args = res
+ y_bar = g
+ # ... custom reverse-mode AD ...
+ return x0_bar, *args_bars
+
+ _minimize.defvjp(fwd, rev)
+
+ Args:
+ fun: Python callable to be converted. Must be a pure function.
+ example_args: Arrays, scalars, or (nested) standard Python
+ containers (tuples, lists, dicts, namedtuples, i.e., pytrees)
+ thereof, used to determine the types of the formal arguments to
+ ``fun``. This type-specialized form of ``fun`` is the function
+ that will be closure converted.
+
+ """
+ flat_args, in_tree = tree_flatten(example_args)
+ in_avals = tuple(map(abstractify, flat_args))
+ return _closure_convert_for_avals(fun, in_tree, in_avals)
+
+@cache()
+def _closure_convert_for_avals(fun, in_tree, in_avals):
+ if config.omnistaging_enabled:
+ wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
+ jaxpr, out_pvals, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
+ else:
+ in_pvals = [pe.PartialVal.unknown(aval) for aval in in_avals]
+ wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
+ with core.initial_style_staging(): # type: ignore
+ jaxpr, out_pvals, consts = pe.trace_to_jaxpr(
+ wrapped_fun, in_pvals, instantiate=True, stage_out=False) # type: ignore
+ out_tree = out_tree()
+
+ # We only want to closure convert for constants with respect to which we're
+ # differentiating. As a proxy for that, we hoist consts with float dtype.
+ # TODO(mattjj): revise this approach
+ from .numpy import inexact
+ is_float = lambda c: dtypes.issubdtype(dtypes.dtype(c), inexact)
+ (closure_consts, hoisted_consts), merge = partition_list(is_float, consts)
+ num_consts = len(hoisted_consts)
+
+ def converted_fun(*args_hconsts):
+ num_args = len(args_hconsts) - num_consts
+ args, hoisted_consts = split_list(args_hconsts, [num_args])
+ consts = merge(closure_consts, hoisted_consts)
+ all_args, in_tree2 = tree_flatten(tuple(args))
+ assert in_tree == in_tree2
+ out_flat = core.eval_jaxpr(jaxpr, consts, *all_args)
+ return tree_unflatten(out_tree, out_flat)
+
+ return converted_fun, hoisted_consts
+
+def partition_list(choice, lst):
+ out = [], []
+ which = [out[choice(elt)].append(elt) or choice(elt) for elt in lst]
+ def merge(l1, l2):
+ i1, i2 = iter(l1), iter(l2)
+ return [next(i2 if snd else i1) for snd in which]
+ return out, merge
+
+def abstractify(x):
+ return core.raise_to_shaped(core.get_aval(x))
diff --git a/jax/experimental/ode.py b/jax/experimental/ode.py
--- a/jax/experimental/ode.py
+++ b/jax/experimental/ode.py
@@ -32,61 +32,17 @@
import jax
import jax.numpy as jnp
from jax import core
-from jax import dtypes
+from jax import custom_derivatives
from jax import lax
-from jax.util import safe_map, safe_zip, cache, split_list
-from jax.api_util import flatten_fun_nokwargs
+from jax.util import safe_map, safe_zip
from jax.flatten_util import ravel_pytree
-from jax.tree_util import tree_map, tree_flatten, tree_unflatten
-from jax.interpreters import partial_eval as pe
+from jax.tree_util import tree_map
from jax import linear_util as lu
-from jax import config
map = safe_map
zip = safe_zip
-@cache()
-def closure_convert(fun, in_tree, in_avals):
- if config.omnistaging_enabled:
- wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
- jaxpr, out_pvals, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
- else:
- in_pvals = [pe.PartialVal.unknown(aval) for aval in in_avals]
- wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
- with core.initial_style_staging(): # type: ignore
- jaxpr, out_pvals, consts = pe.trace_to_jaxpr(
- wrapped_fun, in_pvals, instantiate=True, stage_out=False) # type: ignore
- out_tree = out_tree()
-
- # We only want to closure convert for constants with respect to which we're
- # differentiating. As a proxy for that, we hoist consts with float dtype.
- # TODO(mattjj): revise this approach
- is_float = lambda c: dtypes.issubdtype(dtypes.dtype(c), jnp.inexact)
- (closure_consts, hoisted_consts), merge = partition_list(is_float, consts)
- num_consts = len(hoisted_consts)
-
- def converted_fun(y, t, *hconsts_args):
- hoisted_consts, args = split_list(hconsts_args, [num_consts])
- consts = merge(closure_consts, hoisted_consts)
- all_args, in_tree2 = tree_flatten((y, t, *args))
- assert in_tree == in_tree2
- out_flat = core.eval_jaxpr(jaxpr, consts, *all_args)
- return tree_unflatten(out_tree, out_flat)
-
- return converted_fun, hoisted_consts
-
-def partition_list(choice, lst):
- out = [], []
- which = [out[choice(elt)].append(elt) or choice(elt) for elt in lst]
- def merge(l1, l2):
- i1, i2 = iter(l1), iter(l2)
- return [next(i2 if snd else i1) for snd in which]
- return out, merge
-
-def abstractify(x):
- return core.raise_to_shaped(core.get_aval(x))
-
def ravel_first_arg(f, unravel):
return ravel_first_arg_(lu.wrap_init(f), unravel).call_wrapped
@@ -213,11 +169,8 @@ def _check_arg(arg):
"\n{}.")
raise TypeError(msg.format(arg))
- flat_args, in_tree = tree_flatten((y0, t[0], *args))
- in_avals = tuple(map(abstractify, flat_args))
- converted, consts = closure_convert(func, in_tree, in_avals)
-
- return _odeint_wrapper(converted, rtol, atol, mxstep, y0, t, *consts, *args)
+ converted, consts = custom_derivatives.closure_convert(func, y0, t[0], *args)
+ return _odeint_wrapper(converted, rtol, atol, mxstep, y0, t, *args, *consts)
@partial(jax.jit, static_argnums=(0, 1, 2, 3))
def _odeint_wrapper(func, rtol, atol, mxstep, y0, ts, *args):
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -4333,6 +4333,42 @@ def f(x):
api.grad(lambda x: jnp.sum(jnp.sin(x)))(jnp.arange(3.)) * jnp.array([3., 4., 5.]),
check_dtypes=False)
+ def test_closure_convert(self):
+ def minimize(objective_fn, x0):
+ converted_fn, aux_args = api.closure_convert(objective_fn, x0)
+ return _minimize(converted_fn, x0, *aux_args)
+
+ @partial(api.custom_vjp, nondiff_argnums=(0,))
+ def _minimize(objective_fn, x0, *args):
+ _ = objective_fn(x0, *args)
+ return jnp.cos(x0)
+
+ def fwd(objective_fn, x0, *args):
+ y = _minimize(objective_fn, x0, *args)
+ return y, (y, args)
+
+ def rev(objective_fn, res, g):
+ y, args = res
+ x0_bar = 17. * y
+ args_bars = [42. * a for a in args]
+ return (x0_bar, *args_bars)
+
+ _minimize.defvjp(fwd, rev)
+
+ def obj(c, x):
+ return jnp.sum((x - c) ** 2.)
+
+ def solve(c, x):
+ def closure(x):
+ return obj(c, x)
+ return jnp.sum(minimize(closure, x))
+
+ c, x = jnp.ones(2), jnp.zeros(2)
+ self.assertAllClose(solve(c, x), 2.0, check_dtypes=False)
+ g_c, g_x = api.grad(solve, argnums=(0, 1))(c, x)
+ self.assertAllClose(g_c, 42. * jnp.ones(2), check_dtypes=False)
+ self.assertAllClose(g_x, 17. * jnp.ones(2), check_dtypes=False)
+
class InvertibleADTest(jtu.JaxTestCase):
| utility for closure-conversion in higher-order functions with custom derivatives
When a higher-order python function is associated with a custom derivative (e.g. via `custom_vjp`), differentiation doesn't handle the closures of its function arguments. A workaround is to accept auxiliary arguments and thread them back into input functions that otherwise would have closed over them. An example sketch:
```python
@custom_vjp
def minimize(objective_fn, x0, objective_aux_args=()):
# ...
t = objective_fn(..., *objective_aux_args)
# ...
```
An alternative approach taken by `jax.experimental.ode.odeint` is to stage function arguments out to jaxprs, converting closures in the process. We could try and factor [this setup](https://github.com/google/jax/blob/943c7794f92f32c4796cf980eedd5a11c86eea66/jax/experimental/ode.py#L50) into a common utility for use in other similar applications.
| The workaround based on auxiliary arguments has several disadvantages. It can be tricky to enforce, and it can require inconvenient code restructuring in callers. For more on why `odeint` takes the approach that it does, see #2718, #3557, #3558, and the PR resolving them: #3562
I'm currently struggling with this problem for a `custom_jvp` for a higher-order derivative from a solver. It would be really cool to have a docstring for `closure_convert` to make it clear how to use it in other places. :-)
Since this is an issue specific to custom derivatives, I'm wondering if we ought to offer it as part of `custom_vjp`, `custom_gradient`, etc., in a way that lets you write:
```python
@partial(custom_vjp, closure_convert_argnums=0)
def minimize(objective_fn, x0):
# ...
```
@rpadams Would that work for your solver?
@mattjj Having written `ode.closure_convert` and `custom_vjp`, what do you think?
Thanks for the fast reply @froystig! If I'm understanding the situation, I think this would resolve my situation in which the thing I'm writing a `custom_vjp` for is a solver that consumes multiple levels of closures. Threading the arguments through this seems daunting.
More context: I have an energy function that depends on a Jacobian (1st deriv), that is part of a Lagrangian which gives an ODE via Euler-Lagrange (2nd deriv), which I then use an implicit solver for (3rd deriv). Then I'd like to get gradients back through that stack (4th deriv) without having to backprop through my implicit (in the "backward Euler" sense) solver using the implicit (in the "implicit function theorem" sense) gradient. There are many closures/partials/lambdas along the way that I think are not playing nicely with the outer-loop `custom_jvp`... (However, that last level thing is not mathematically painful because I did most of the IFT work with the second-order solver.)
> Threading the arguments through this seems daunting.
Based on your context, it seems that what you'd find daunting is having to pass around extra arguments explicitly on the way down to calling the solver, to avoid forming closures anywhere. Is that correct? By contrast, would you be OK threading the arguments around _within_ the solver implementation, supposing they were extracted from the incoming closures for you?
An amendment to my previous comment: it might make more sense to return to the original idea and offer a `closure_convert` utility directly—rather than an option to `custom_vjp`—so that you can control the placement and threading of arguments in your solver implementation. In my sketch above, `minimize` has implicit arguments (via the hypothetical closure conversion) that aren't apparent in its signature.
Altogether here's roughly how using this would look:
```python
def minimize(objective_fn, x0):
converted_fn, consts = closure_convert(objective_fn, x0)
return _minimize(converted_fn, x0, consts)
@partial(custom_vjp, nondiff_argnums=0)
def _minimize(objective_fn, x0, objective_aux_args):
z = objective_fn(x0, objective_aux_args)
# ...
```
Assuming we're thinking about this the same way, it's that there are many closures going into forming the objective function minimized by the solver. Threading arguments around just the solver itself isn't too bad --- I'm basically doing that already in order to avoid re-jitting my Levenberg-Marquardt implementation every time step.
I should say that part of what I'm confused about is the statement in the docs:
> A limitation to this approach is that the argument f can’t close over any values involved in differentiation.
It seems like "values involved in differentiation" necessarily covers a lot of ground, i.e., essentially everything that's gone into the objective function, no? I'm interpreting this as "you can't use closures/lambda/partials in building your objective function", but maybe that's overly broad? In my case I'm making pretty extensive use of a generator pattern, e.g., `generate_lagrangian` that hands back a Lagrangian function that I hand to a `generate_euler_lagrange` that gives a function I can hand to a `generate_time_stepper` that constructs an objective for the implicit Euler, etc. I think you guys can appreciate that this is a pretty "SICM" kind of thing I'm doing. :-)
| 2020-12-22T18:26:59 |
google/jax | 5,282 | google__jax-5282 | [
"5276"
] | ad132a2d154b4d5d279b962bd40c6c6c9c847fee | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -3301,9 +3301,12 @@ def _broadcast_in_dim_shape_rule(operand, *, shape, broadcast_dimensions):
return shape
-def _broadcast_in_dim_transpose_rule(t, *, shape, broadcast_dimensions):
- axes = tuple(np.delete(range(len(shape)), broadcast_dimensions))
- return [_reduce_sum(t, axes)]
+def _broadcast_in_dim_transpose_rule(ct, operand, *, shape, broadcast_dimensions):
+ shape_in = operand.aval.shape
+ unit_dimensions = tuple(i for i, s in enumerate(shape_in) if s == 1)
+ bdims = tuple(np.delete(broadcast_dimensions, unit_dimensions))
+ axes = tuple(np.delete(range(len(shape)), bdims))
+ return [expand_dims(_reduce_sum(ct, axes), unit_dimensions)]
def _broadcast_in_dim_batch_rule(batched_args, batch_dims, *, shape,
broadcast_dimensions):
@@ -3318,7 +3321,7 @@ def _broadcast_in_dim_batch_rule(batched_args, batch_dims, *, shape,
broadcast_in_dim_p = standard_primitive(
_broadcast_in_dim_shape_rule, _input_dtype, 'broadcast_in_dim')
broadcast_in_dim_p.def_impl(_broadcast_in_dim_impl)
-ad.deflinear(broadcast_in_dim_p, _broadcast_in_dim_transpose_rule)
+ad.deflinear2(broadcast_in_dim_p, _broadcast_in_dim_transpose_rule)
batching.primitive_batchers[broadcast_in_dim_p] = _broadcast_in_dim_batch_rule
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -1021,6 +1021,15 @@ def testBroadcastInDim(self, inshape, dtype, outshape, dimensions):
op = lambda x: lax.broadcast_in_dim(x, outshape, dimensions)
self._CompileAndCheck(op, args_maker)
+ def testBroadcastInDimOperandShapeTranspose(self):
+ # Regression test for https://github.com/google/jax/issues/5276
+ def f(x):
+ return lax.broadcast_in_dim(x, (2, 3, 4), broadcast_dimensions=(0, 1, 2)).sum()
+ def g(x):
+ return lax.broadcast_in_dim(x.reshape((3,)), (2, 3, 4), broadcast_dimensions=(1,)).sum()
+ x = np.ones((1, 3, 1))
+ self.assertArraysEqual(jax.grad(f)(x), jax.grad(g)(x))
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_outshape={}_bcdims={}".format(
jtu.format_shape_dtype_string(inshape, np.float32),
| lax.broadcast_in_dim transposition rule
I noticed that the transposition rule of lax.broadcast_in_dim assumes that the input operand has a shape that matches the broadcast dimensions in the target shape. That is, we cannot use the following:
```
x = np.ones((1, 3, 1))
lax.broadcast_in_dim(x, (2, 3, 4), broadcast_dimensions=(0, 1, 2))
```
The above would compute correctly but will fail in transpose (only if core.skip_checks == False).
Instead one must do:
```
x = np.ones((1, 3, 1))
lax.broadcast_in_dim(x.reshape((3,)), (2, 3, 4), broadcast_dimensions=(1,))
```
Either we must add a new parameter to broadcast_in_dim with the input shape, or at the very least we ought to change the shape rule to reject unsupported uses.
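The mismatch is easiest to see under differentiation, where the transpose rule actually runs; a small check along the lines of the regression test in the patch above:
```python
import jax
import numpy as np
from jax import lax

def f(x):  # operand keeps its size-1 dimensions
  return lax.broadcast_in_dim(x, (2, 3, 4), broadcast_dimensions=(0, 1, 2)).sum()

def g(x):  # operand reshaped so the size-1 dimensions are dropped first
  return lax.broadcast_in_dim(x.reshape((3,)), (2, 3, 4), broadcast_dimensions=(1,)).sum()

x = np.ones((1, 3, 1))
# With the old rule the first gradient comes back mis-shaped (or trips the
# jaxpr type check); both should have shape (1, 3, 1).
print(jax.grad(f)(x).shape, jax.grad(g)(x).shape)
```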
| We may be able to do this without a new parameter by accessing the input avals, similar to the approach that @mattjj recently used to drop the `input_dtype` argument from `convert_element_type_p`. | 2020-12-29T18:31:17 |
google/jax | 5,294 | google__jax-5294 | [
"5210"
] | 47d3e20441d4e4ff5f40905e833eb2a4b2067254 | diff --git a/jax/_src/lax/control_flow.py b/jax/_src/lax/control_flow.py
--- a/jax/_src/lax/control_flow.py
+++ b/jax/_src/lax/control_flow.py
@@ -44,7 +44,7 @@
from jax.interpreters import masking
from jax.lib import xla_bridge as xb
from jax.lib import xla_client
-from jax.util import (partial, unzip2, unzip4, safe_map, safe_zip, split_list,
+from jax.util import (partial, unzip2, unzip3, safe_map, safe_zip, split_list,
cache, extend_name_stack)
from jax.tree_util import (tree_flatten, tree_unflatten, treedef_is_leaf,
treedef_children, treedef_tuple, tree_multimap,
@@ -64,17 +64,18 @@
@cache()
def _initial_style_open_jaxpr(fun: Callable, in_tree, in_avals):
wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
- jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
- return jaxpr, out_avals, consts, out_tree()
+ jaxpr, _, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
+ return jaxpr, consts, out_tree()
@cache()
def _initial_style_jaxpr(fun: Callable, in_tree, in_avals):
- jaxpr, out_avals, consts, out_tree = _initial_style_open_jaxpr(fun, in_tree, in_avals)
+ jaxpr, consts, out_tree = _initial_style_open_jaxpr(fun, in_tree, in_avals)
closed_jaxpr = core.ClosedJaxpr(pe.convert_constvars_jaxpr(jaxpr), ())
return closed_jaxpr, consts, out_tree
+@cache()
def _initial_style_jaxprs_with_common_consts(funs: Sequence[Callable],
- in_tree, in_avals):
+ in_tree, in_avals):
# When staging the branches of a conditional into jaxprs, constants are
# extracted from each branch and converted to jaxpr arguments. To use the
# staged jaxprs as the branches to a conditional *primitive*, we need for
@@ -82,18 +83,18 @@ def _initial_style_jaxprs_with_common_consts(funs: Sequence[Callable],
# for each one, it makes another that accepts *all* constants, but only uses
# those that it needs (dropping the rest).
- jaxprs, all_out_avals, all_consts, all_out_trees = unzip4(
+ jaxprs, all_consts, all_out_trees = unzip3(
_initial_style_open_jaxpr(fun, in_tree, in_avals) for fun in funs)
newvar = core.gensym(jaxprs, suffix='_')
all_const_avals = [[raise_to_shaped(core.get_aval(c)) for c in consts]
- for consts in all_consts]
+ for consts in all_consts]
unused_const_vars = [[newvar(aval) for aval in const_avals]
- for const_avals in all_const_avals]
+ for const_avals in all_const_avals]
def pad_jaxpr_constvars(i, jaxpr):
prefix = util.concatenate(unused_const_vars[:i])
- suffix = util.concatenate(unused_const_vars[i+1:])
+ suffix = util.concatenate(unused_const_vars[i + 1:])
constvars = [*prefix, *jaxpr.constvars, *suffix]
return core.Jaxpr(constvars=constvars, invars=jaxpr.invars,
outvars=jaxpr.outvars, eqns=jaxpr.eqns)
@@ -101,7 +102,7 @@ def pad_jaxpr_constvars(i, jaxpr):
consts = util.concatenate(all_consts)
jaxprs = [pad_jaxpr_constvars(i, jaxpr) for i, jaxpr in enumerate(jaxprs)]
closed_jaxprs = [core.ClosedJaxpr(pe.convert_constvars_jaxpr(jaxpr), ())
- for jaxpr, out_avals in zip(jaxprs, all_out_avals)]
+ for jaxpr in jaxprs]
return closed_jaxprs, consts, all_out_trees
def _abstractify(x):
@@ -2606,6 +2607,8 @@ def omnistaging_disabler() -> None:
global _initial_style_open_jaxpr, _initial_style_jaxpr, \
_initial_style_jaxprs_with_common_consts
+ from jax.util import unzip4
+
@cache()
def _initial_style_open_jaxpr(fun: Callable, in_tree, in_avals):
in_pvals = [pe.PartialVal.unknown(aval) for aval in in_avals]
| jax.lax.switch host memory leak
I'm encountering a memory leak in code that repeatedly calls `jax.lax.switch`. Here's a small example that reproduces the issue in colab:
```python
import resource, gc
import numpy as np
import jax, jax.numpy as jp
d = 100
def do_thing():
i = jp.array(np.random.rand() > 0.5, dtype="int32")
x = jp.array(np.random.randn(d))
y = jp.array(np.random.randn(d))
def fn(i, x, y):
return jax.lax.switch(i, [(lambda _: x), (lambda _: y)], None)
fn(i, x, y)
stats = np.zeros(1000)
for i in range(len(stats)):
gc.collect()
do_thing()
gc.collect()
stats[i] = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss # (kilobytes)
import matplotlib.pyplot as plt
plt.plot(stats)
plt.gca().ticklabel_format(style="plain", useOffset=False)
```

| It looks like we are repeatedly compiling a `cond` primitive and never getting cache hits for it:
```
WARNING:absl:Compiling cond for args ((ShapedArray(int32[]), None), (ShapedArray(float32[100]), None), (ShapedArray(float32[100]), None)).
File "/Users/phawkins/p/jax/q.py", line 18, in <module>
do_thing()
File "/Users/phawkins/p/jax/q.py", line 13, in do_thing
fn(i, x, y)
File "/Users/phawkins/p/jax/q.py", line 12, in fn
return jax.lax.switch(i, [(lambda _: x), (lambda _: y)], None)
File "/Users/phawkins/p/jax/jax/_src/lax/control_flow.py", line 616, in switch
out = cond_p.bind(
File "/Users/phawkins/p/jax/jax/_src/lax/control_flow.py", line 1105, in cond_bind
return core.Primitive.bind(cond_p, *args, branches=branches, linear=linear)
File "/Users/phawkins/p/jax/jax/core.py", line 271, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/Users/phawkins/p/jax/jax/core.py", line 595, in process_primitive
return primitive.impl(*tracers, **params)
File "/Users/phawkins/p/jax/jax/interpreters/xla.py", line 235, in apply_primitive
compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args), **params)
File "/Users/phawkins/p/jax/jax/interpreters/xla.py", line 275, in xla_primitive_callable
traceback.print_stack()
```
(We also seem to be missing compilation logging for primitives!)
I think always recompiling here is expected, because we're passing in fresh `lambda` objects as arguments to `switch` each time.
However, I wonder if we can try to mitigate this. @apaszke recently told me (I think) that `f.__code__` is cached by CPython:
```python
In [1]: hash(lambda x: x)
Out[1]: 8752827212549
In [2]: hash(lambda x: x)
Out[2]: 8752827192743
In [3]: hash((lambda x: x).__code__)
Out[3]: -5762213570434008460
In [4]: hash((lambda x: x).__code__)
Out[4]: -5762213570434008460
```
Maybe we can leverage this to get more cache hits, at least in some cases?
In the meantime, the fix is not to pass lambdas into switches, and instead reuse the same function object.
EDIT: sorry, while writing this comment, I forgot that the real issue is the memory growth. Even if we don't get cache hits, we shouldn't leak memory!
With `__code__`, we might have a runtime/memory tradeoff to consider: does the time to hash/eq the `__code__` scale with the code size? My guess is that functions are compared by object id, by contrast.
But even if we keyed on `f.__code__` rather than on `f`, we'd still miss the cache in this case because the branch lambdas close over fresh arrays. See:
```python
>>> a, b = np.ones(4), np.ones(7)
>>> id(a), id(b)
(140197940693536, 140197940693616)
>>> f = lambda _: a
>>> g = lambda _: b
>>> hash(f.__code__) == hash(g.__code__)
False
>>> f.__code__ == g.__code__
False
```
This behavior is also consistent with our current extraction of closure-captured values as "consts" when we stage to a jaxpr.
So, keying on `__code__` might still be an improvement for code that looks like:
```python
lax.switch(..., [(lambda x: x + x), (lambda x: x * x)], x)
```
but not for the example given here.
Relatedly, the memory growth is bound to be more noticeable due to those size 100 arrays being stored alongside the lambdas, again since they're extracted when staging to jaxpr. @mattjj's suggestion to "reuse the same function object" would have forced the branches to be written so that they accept those arrays as a formal argument and wouldn't be held by the cache entry.
Concretely, does this mean that if I use a globally defined function and pass everything in through switch's third argument rather than by closure, the memory leak would go away? That's doable.
Would a `functools.partial` wrapped version of that globally defined function still be okay? (The partial would provide a plain Python int to the function, no arrays or anything like that.)
Yeah, a rewrite along the following lines ought to work around the observed leak:
```python
fst = lambda z: z[0]
snd = lambda z: z[1]
def do_thing():
i = jp.array(np.random.rand() > 0.5, dtype="int32")
x = jp.array(np.random.randn(d))
y = jp.array(np.random.randn(d))
return jax.lax.switch(i, [fst, snd], (x, y))
```
If the use of `partial` that you have in mind is in order to set things up, along the lines of:
```python
take = lambda i, z: z[i]
fst, snd = partial(take, 0), partial(take, 1)
```
then yes that should not interfere with this workaround. But pushing that partial into the switch, as in:
```python
def do_thing():
# ...
return jax.lax.switch(i, [partial(take, 0), partial(take, 1)], (x, y))
```
will leak again, since it creates new functions every time. We can mitigate the latter internally by keying on `__code__`, but we don't do that yet today.
This latter "leak" would not be as expensive or noticeable as the one you were originally seeing, since it doesn't store a size 100 array in every cache entry any longer. But the cache will still grow to account for the fresh functions, and there will be a runtime cost to staging those fresh functions out to jaxpr every time `switch` is called.
Gotcha. My real use case is basically this: I have a sequence of heterogeneous things that I flattened into a common format so I can stack them into an array. Then I have another sequence of Enum values that indicate the type of each element; these Enum values are stacked into an array of int32. At some point I need to do some computations that are structurally different depending on the type of the element, so I use a vmap with a switch inside it to basically turn the int32 back into Enum values. Here's a contrived example:
```python
import jax, jax.numpy as jp
from functools import partial
from enum import IntEnum
class Kind(IntEnum):
LEFT = 0
RIGHT = 1
def general_handler(kind, xy):
[x, y] = xy
if kind == Kind.LEFT: return x
if kind == Kind.RIGHT: return y
# prepare partial applications once
handlers = dict()
for kind in Kind:
handlers[kind] = partial(general_handler, kind)
@jax.vmap
def fn(dynamic_kind, x, y):
return jax.lax.switch(dynamic_kind,
[handlers[static_kind] for static_kind in Kind],
[x, y])
```
Based on what you've said, I think this should avoid both the leak and the cache misses.
I asked about `partial` because it doesn't create a function but a "partial object" -- a simple data structure containing, basically, the function and its partial arguments. I thought that `partial(fn, 0) == partial(fn, 0)` would hold, so a newly created partial object for the same globally defined function with the same arguments would still match the cached one, but alas partial objects don't compare that way.
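For the record, that's easy to confirm: `functools.partial` objects don't define value equality, so they compare by identity (a small check, reusing the `take` helper from above):
```python
from functools import partial

def take(i, z):
    return z[i]

# Two freshly created partials over the same function and arguments:
print(partial(take, 0) == partial(take, 0))  # False, compared by identity
```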
It looks like the original issue is resolved. Still I'd like to track the possibility of keying on `f.__code__` rather than on `id(f)` in our cache.
From a few experiments, it appears that comparing `__code__` can yield false positives. Accounting for `__closure__` might make up the difference. But `__closure__` isn't always available, e.g. on methods. In fact even `__code__` isn't always available, e.g. for builtin functions like `hash`. We can either keep things simple and avoid this change entirely, or we can key on `__code__` and `__closure__` when both are available, and otherwise key on object identity.
I'm leaning towards avoiding the change for now because (i) I'm not entirely sure about its correctness, and (ii) it doesn't solve the entire problem presented in this issue originally. We can revisit this if the special case problem (same code, same closure, fresh function object) comes up. We could tackle the overall issue of an observed leak in a separate way altogether, for instance by amending our cache eviction policy.
I'm still seeing the same problem if I modify my initial example as suggested:
```python
import resource, gc
import numpy as np
import jax, jax.numpy as jp
def xbranch(xy): return xy[0]
def ybranch(xy): return xy[1]
d = 1
def do_thing():
i = jp.array(np.random.rand() > 0.5, dtype="int32")
x = jp.array(np.random.randn(d))
y = jp.array(np.random.randn(d))
jax.lax.switch(i, [xbranch, ybranch], [x, y])
stats = np.zeros(100)
for i in range(len(stats)):
gc.collect()
do_thing()
gc.collect()
stats[i] = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss # (kilobytes)
import matplotlib.pyplot as plt
plt.plot(stats)
plt.gca().ticklabel_format(style="plain", useOffset=False)
```

The rate of growth doesn't seem to depend on the size `d` of the data, so it's something else that's being leaked. It's always ~40MB over the 100 iterations. | 2020-12-30T18:11:20 |
|
google/jax | 5,314 | google__jax-5314 | [
"5206"
] | 7c42dc91ed707ff9230c2dbb6217e8b9b28284fe | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -259,7 +259,7 @@ def _cpp_jit(
raise ValueError("can't specify both a device and a backend for jit, "
f"got device={device} and backend={backend}.")
- def cache_miss(*args, **kwargs):
+ def cache_miss(_, *args, **kwargs):
### This first part is basically the same code as in _python_jit.
# An alternative would be for cache_miss to accept from C++ the arguments
# (dyn_args, donated_invars, args_flat, in_tree), since otherwise we have
@@ -362,17 +362,20 @@ def get_jax_disable_jit_flag():
"""
return config.read("jax_disable_jit")
+ static_argnums_ = (0,) + tuple(i + 1 for i in static_argnums)
cpp_jitted_f = jax_jit.jit(fun, cache_miss, get_device_info,
get_jax_enable_x64, get_jax_disable_jit_flag,
- static_argnums)
+ static_argnums_)
# TODO(mattjj): make cpp callable follow descriptor protocol for bound methods
@wraps(fun)
@api_boundary
def f_jitted(*args, **kwargs):
+ context = getattr(core.thread_local_state.trace_state.trace_stack,
+ 'dynamic', None)
# TODO(jblespiau): Move this to C++.
if FLAGS.jax_debug_nans and not _jit_is_disabled():
- device_arrays = cpp_jitted_f(*args, **kwargs)
+ device_arrays = cpp_jitted_f(context, *args, **kwargs)
try:
xla.check_nans(xla.xla_call_p, [
da.device_buffer
@@ -384,9 +387,11 @@ def f_jitted(*args, **kwargs):
assert FLAGS.jax_debug_nans # compiled_fun can only raise in this case
print("Invalid nan value encountered in the output of a C++-jit "
"function. Calling the de-optimized version.")
- return cache_miss(*args, **kwargs)[0] # probably won't return
- else:
+ return cache_miss(context, *args, **kwargs)[0] # probably won't return
+ elif _jit_is_disabled():
return cpp_jitted_f(*args, **kwargs)
+ else:
+ return cpp_jitted_f(context, *args, **kwargs)
f_jitted._cpp_jitted_f = cpp_jitted_f
return f_jitted
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -506,6 +506,22 @@ def f(x):
f"explicit inner-jit backend specification cpu."):
f(1.)
+ def test_omnistaging(self):
+ # See https://github.com/google/jax/issues/5206
+ if not config.omnistaging_enabled:
+ raise unittest.SkipTest("test only works with omnistaging")
+
+ key_list = [None]
+
+ def init():
+ key, subkey = jax.random.split(key_list[0])
+ key_list[0] = key
+ return jax.random.normal(subkey, ())
+
+ key_list[0] = np.array([2384771982, 3928867769], dtype=np.uint32)
+ init()
+ self.jit(init)()
+ self.assertIsInstance(key_list[0], core.Tracer)
class PythonJitTest(CPPJitTest):
| The side-effect omnistaging example is broken
The example illustrating a possible issue with omnistaging and global state (https://github.com/google/jax/blob/master/design_notes/omnistaging.md#example-1) no longer leads to an escaped tracer:
```
from jax import jit
from jax import random
print('omnistaging enabled', jax.config.omnistaging_enabled)
print(jax.__version__)
key = random.PRNGKey(0)
def init():
global key
key, subkey = random.split(key)
print(key)
return random.normal(subkey, ())
print(init()) # -1.2515389
print(init()) # -0.58665067
init = jit(init)
print(init()) # 0.48648298
print(init()) # 0.48648298 !!
print(key) # Traced<ShapedArray(uint32[2])>with<DynamicJaxprTrace(level=0/1)>
print(random.normal(key, ()))
# omnistaging enabled True
# 0.2.7
# [4146024105 967050713]
# -1.2515285
# [2384771982 3928867769]
# -0.5866531
# [3382499631 3878610767]
# 0.48647928
# 0.48647928
# [3382499631 3878610767]
# Buffer(-0.4876217, dtype=float32)
```
Has this omnistaging behaviour been changed?
| Thanks for flagging this - I tried back as far as jax 0.2.0 when omnistaging was first turned on by default, and could not reproduce the expected escaped tracer error. I'm going to assign to @mattjj because he might know what's going on.
Hrm, curious. This repros for me:
```python
import jax
from jax import jit
from jax import random
key = random.PRNGKey(0)
def init():
global key
key, subkey = random.split(key)
return random.normal(subkey, ())
init = jit(init)
print(init()) # -1.2515389
print(init()) # -1.2515389
print(key) # Traced<ShapedArray(uint32[2])>with<DynamicJaxprTrace(level=0/1)>
```
If we put a single `print(init())` before jitting, then we also get the escaped tracer. But if we put two, we don't! Very curious.
I think the reason why @jakevdp wasn't able to reproduce is that he didn't rewind jaxlib as well as jax. The funny actor here is the C++ jit. If we run the original version with `JAX_CPP_JIT=0` then we get the behavior as expected:
```
$ cat 5206.py
import jax
from jax import jit
from jax import random
key = random.PRNGKey(0)
def init():
global key
key, subkey = random.split(key)
return random.normal(subkey, ())
print(init())
print(init())
init = jit(init)
print(init())
print(init())
print(key)
$ python 5206.py
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
-1.2515389
-0.58665067
0.48648298
0.48648298
[3382499631 3878610767]
$ env JAX_CPP_JIT=0 python 5206.py
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
-1.2515389
-0.58665067
0.48648298
0.48648298
Traced<ShapedArray(uint32[2])>with<DynamicJaxprTrace(level=0/1)>
```
I was tipped off because in the last line of the OP we see a `Buffer` printed out, and that smelled like a C++ issue.
I don't understand this in full yet though. cc @jblespiau in case he has ideas.
Just to underscore, I really don't know what's going on here, and I don't mean to imply that any particular code has a bug. Setting `JAX_CPP_JIT` affects the behavior, but I don't actually know if there's an issue in the C++ code, or related Python code, or if actually this is somehow intended behavior as a consequence of some of our caching decisions related to the C++ jit.
```
key = random.PRNGKey(0)
def init(a):
global key
import pdb; pdb.set_trace()
key, subkey = random.split(key)
return random.normal(subkey, ())
print(init(1))
print(init(2))
init = jit(init)
print(init(3))
print(init(4))
print(key)
```
What happens?
print(init()) << This compiles `random.split`. at this moment, random.split is provided with a tracer, it compiles, run the execution and return concrete keys.
print(key) --> prints [4146024105 967050713]
print(init()) << Here, we hit the cache for these, it returns a real value, as expected
print(key) --> [2384771982 3928867769]
init = jit(init)
print(init()) <-- This triggers the compilation of `init`. `ramdom.split` is provided with a concrete key so, it will return a concrete type
print(key) >> [3382499631 3878610767]
print(init()) <-- No longer executing Python anyway. key won't change.
print(key) >> [3382499631 3878610767]
If you want to get an abstract type, you need to store it when there is one (e.g. if you add an argument "a" and you save it, it will work):
```
global_v = None
def f(a):
global global_v
global_v = a
return a + 1
jit(f)(1)
print(global_v)
```
So for the C++, I can explain.
However, for the Python I cannot.
When we start tracing, do we set a global variable somewhere, to tell compiled functions they should trace?
i.e. when we jit `init` and we get into `random.split`, it is tracing, even though we passed concrete types.
If we think functionally (i.e. same inputs, same outputs), this seems to be the correct behavior (random.split is given concrete types, returns concrete types), while the Python execution is not functional, as its behavior changes based on some external values.
Ah yes, this is the nature of omnistaging, namely that with omnistaging things like `jit` change the behavior of operations based on global context, not just on their input arguments. That's the goal: not to rely solely on data dependence, so that we can stage out more, like constant creation and RNG operations. The cost is that JAX's tracing behavior itself isn't purely functional, in that that whether a function produces a tracer or not cannot be decided based on whether the inputs are traced.
We shouldn't get a cache hit on the `print(init())` line after applying `jit`. The reason is that it results in a worse program being staged out, at least if we imagine the random constant being arbitrarily large: there's a potentially-large random value staged out into the jitted version of `init`, as opposed to the intended behavior of staging out the PRNG generation itself (based on the small key as a constant).
But I see now why we are getting that behavior: it's because, by design, with the C++ jit we've moved the cache to the api.py level, whereas with the Python version we have it in the jit impl (i.e. the xla_call impl). In the Python version, when running the jitted version of `init`, we never hit the impl rule for `random.split` at all due to the change in global context, so there's no risk of hitting the cache. That is, the Python version is already sensitive to the necessary global context.
I think we need to make the C++ jit sensitive to the global context. Otherwise, we're not getting the full benefits of omnistaging. Concretely, I think we need to make the C++ cache depend on the value of the hashable `core.thread_local_state.trace_state.trace_stack.dynamic`.
(Thanks for catching this and letting us know, @sbodenstein ! Otherwise the ill effects are subtle, like increased memory fragmentation, and we might have been scratching our heads over the reason.)
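For concreteness, a rough sketch of that idea (illustrative only: the helper and cache below are made up, and the internal attribute path is the one quoted above, so it is specific to this jax version):
```python
import jax.core as core

def _trace_context():
    # `dynamic` only exists once omnistaging is enabled; fall back to None.
    return getattr(core.thread_local_state.trace_state.trace_stack, "dynamic", None)

_cache = {}

def cached_execute(key, compile_and_run):
    # Folding the dynamic trace context into the key means entries created
    # during ordinary execution are never reused while tracing, and vice versa.
    full_key = (key, _trace_context())
    if full_key not in _cache:
        _cache[full_key] = compile_and_run()
    return _cache[full_key]
```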
> The example illustrating a possible issue with omnistaging and global state ( https://github.com/google/jax/blob/master/design_notes/omnistaging.md#example-1 ) no longer leads to an escaped tracer:
>
> ```
> from jax import jit
> from jax import random
>
> print('omnistaging enabled', jax.config.omnistaging_enabled)
> print(jax.__version__)
>
> key = random.PRNGKey(0)
> def init():
> global key
> key, subkey = random.split(key)
> print(key)
> return random.normal(subkey, ())
> print(init()) # -1.2515389
> print(init()) # -0.58665067
> init = jit(init)
> print(init()) # 0.48648298
> print(init()) # 0.48648298 !!
> print(key) # Traced<ShapedArray(uint32[2])>with<DynamicJaxprTrace(level=0/1)>
> print(random.normal(key, ()))
>
> # omnistaging enabled True
> # 0.2.7
> # [4146024105 967050713]
> # -1.2515285
> # [2384771982 3928867769]
> # -0.5866531
> # [3382499631 3878610767]
> # 0.48647928
> # 0.48647928
> # [3382499631 3878610767]
> # Buffer(-0.4876217, dtype=float32)
> ```
>
> Has this omnistaging behaviour been changed?
Ok | 2021-01-05T16:14:41 |
google/jax | 5,316 | google__jax-5316 | [
"5313"
] | 6a8741c89ad6ed13f9973620d348f8fd5e5b8bdb | diff --git a/jax/_src/lax/linalg.py b/jax/_src/lax/linalg.py
--- a/jax/_src/lax/linalg.py
+++ b/jax/_src/lax/linalg.py
@@ -244,10 +244,11 @@ def _matvec_multiply(a, b):
return lax.dot(a, b, precision=lax.Precision.HIGHEST)
def _check_solve_shapes(a, b):
- if not (a.ndim >= 2 and a.shape[-1] == a.shape[-2] and b.ndim >= 1):
- msg = ("The arguments to solve must have shapes a=[..., m, m] and "
- "b=[..., m, k] or b=[..., m]; got a={} and b={}")
- raise ValueError(msg.format(a.shape, b.shape))
+ if not (a.ndim >= 2 and b.ndim in [a.ndim, a.ndim - 1] and
+ a.shape[-1] == a.shape[-2] == b.shape[a.ndim - 2]):
+ raise ValueError(
+ "The arguments to solve must have shapes a=[..., m, m] and "
+ f"b=[..., m, k] or b=[..., m]; got a={a.shape} and b={b.shape}")
def _solve(a, b):
_check_solve_shapes(a, b)
@@ -956,7 +957,7 @@ def lu_pivots_to_permutation(swaps, m):
@partial(vectorize, excluded={3}, signature='(n,n),(n),(n,k)->(n,k)')
def _lu_solve_core(lu, permutation, b, trans):
m = lu.shape[0]
- x = jnp.reshape(b, (m, -1))
+ x = jnp.reshape(b, (m, np.prod(b.shape[1:])))
if trans == 0:
x = x[permutation, :]
x = triangular_solve(lu, x, left_side=True, lower=True, unit_diagonal=True)
diff --git a/jax/_src/numpy/linalg.py b/jax/_src/numpy/linalg.py
--- a/jax/_src/numpy/linalg.py
+++ b/jax/_src/numpy/linalg.py
@@ -330,8 +330,8 @@ def _pinv_jvp(rcond, primals, tangents):
@_wraps(np.linalg.inv)
def inv(a):
if jnp.ndim(a) < 2 or a.shape[-1] != a.shape[-2]:
- raise ValueError("Argument to inv must have shape [..., n, n], got {}."
- .format(jnp.shape(a)))
+ raise ValueError(
+ f"Argument to inv must have shape [..., n, n], got {a.shape}.")
return solve(
a, lax.broadcast(jnp.eye(a.shape[-1], dtype=lax.dtype(a)), a.shape[:-2]))
| diff --git a/tests/linalg_test.py b/tests/linalg_test.py
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -707,6 +707,7 @@ def tensor_maker():
((8, 8), (8, 4)),
((1, 2, 2), (3, 2)),
((2, 1, 3, 3), (2, 4, 3, 4)),
+ ((1, 0, 0), (1, 0, 2)),
]
for dtype in float_types + complex_types))
def testSolve(self, lhs_shape, rhs_shape, dtype):
@@ -722,7 +723,7 @@ def testSolve(self, lhs_shape, rhs_shape, dtype):
{"testcase_name":
"_shape={}".format(jtu.format_shape_dtype_string(shape, dtype)),
"shape": shape, "dtype": dtype}
- for shape in [(1, 1), (4, 4), (2, 5, 5), (200, 200), (5, 5, 5)]
+ for shape in [(1, 1), (4, 4), (2, 5, 5), (200, 200), (5, 5, 5), (0, 0)]
for dtype in float_types))
def testInv(self, shape, dtype):
rng = jtu.rand_default(self.rng())
| Cannot invert zero-shape matrices
Hi all,
As described, calling `jax.numpy.linalg.inv(jax.numpy.zeros((0, 0)))` fails with a `ZeroDivisionError` in the shape machinery
```python-traceback
---------------------------------------------------------------------------
FilteredStackTrace Traceback (most recent call last)
<ipython-input-1-551ceea3fd37> in <module>
2
----> 3 jax.numpy.linalg.inv(jax.numpy.zeros((0, 0)))
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/linalg.py in inv(a)
334 .format(jnp.shape(a)))
--> 335 return solve(
336 a, lax.broadcast(jnp.eye(a.shape[-1], dtype=lax.dtype(a)), a.shape[:-2]))
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/linalg.py in solve(a, b)
449 a, b = _promote_arg_dtypes(jnp.asarray(a), jnp.asarray(b))
--> 450 return lax_linalg._solve(a, b)
451
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in _solve(a, b)
265 # b.shape == [..., m, k]
--> 266 return api.vmap(custom_solve, b.ndim - 1, max(a.ndim, b.ndim) - 1)(b)
267
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in custom_linear_solve(matvec, b, solve, transpose_solve, symmetric)
2182
-> 2183 solve_jaxpr, solve_consts, out_tree = _initial_style_jaxpr(
2184 _shape_checked(partial(solve, matvec), "solve"), in_args_tree, b_avals)
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in _initial_style_jaxpr(fun, in_tree, in_avals)
71 def _initial_style_jaxpr(fun: Callable, in_tree, in_avals):
---> 72 jaxpr, out_avals, consts, out_tree = _initial_style_open_jaxpr(fun, in_tree, in_avals)
73 closed_jaxpr = core.ClosedJaxpr(pe.convert_constvars_jaxpr(jaxpr), ())
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in _initial_style_open_jaxpr(fun, in_tree, in_avals)
66 wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
---> 67 jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
68 return jaxpr, out_avals, consts, out_tree()
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in f(x)
2173 def f(x):
-> 2174 y = fun(x)
2175 _check_shapes(name, "b", y, b_flat)
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in <lambda>(_, x)
258 lambda x: _matvec_multiply(a, x),
--> 259 solve=lambda _, x: lu_solve(lu_, permutation, x, trans=0),
260 transpose_solve=lambda _, x: lu_solve(lu_, permutation, x, trans=1))
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in lu_solve(lu, permutation, b, trans)
987 """LU solve with broadcasting."""
--> 988 return _lu_solve(lu, permutation, b, trans)
989
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in _lu_solve(lu, permutation, b, trans)
981 .format(lu.shape, b.shape))
--> 982 x = _lu_solve_core(lu, permutation, b, trans)
983 return x[..., 0] if rhs_vector else x
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/vectorize.py in wrapped(*args)
303 vectorized_func = api.vmap(vectorized_func, in_axes)
--> 304 return vectorized_func(*vec_args)
305
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/vectorize.py in wrapped(*args)
134 def wrapped(*args):
--> 135 out = func(*args)
136 out_shapes = map(jnp.shape, out if isinstance(out, tuple) else [out])
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/vectorize.py in new_func(*args)
175 args.insert(i, arg)
--> 176 return func(*args)
177
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in _lu_solve_core(lu, permutation, b, trans)
938 m = lu.shape[0]
--> 939 x = jnp.reshape(b, (m, -1))
940 if trans == 0:
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in reshape(a, newshape, order)
1246 try:
-> 1247 return a.reshape(newshape, order=order) # forward to method for ndarrays
1248 except AttributeError:
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in _reshape_method(a, *newshape, **kwargs)
1291 newshape = newshape[0]
-> 1292 return _reshape(a, newshape, order=order)
1293
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in _reshape(a, newshape, order)
1267 def _reshape(a, newshape, order="C"):
-> 1268 computed_newshape = _compute_newshape(a, newshape)
1269 if order == "C":
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in _compute_newshape(a, newshape)
1261 if np.any(np.equal(newshape, -1)):
-> 1262 fix = -a.size // (newshape if type(newshape) is Poly else _prod(newshape))
1263 return [d if d != -1 else fix for d in newshape]
FilteredStackTrace: ZeroDivisionError: integer division or modulo by zero
The stack trace above excludes JAX-internal frames.
The following is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
ZeroDivisionError Traceback (most recent call last)
<ipython-input-1-551ceea3fd37> in <module>
1 import jax
2
----> 3 jax.numpy.linalg.inv(jax.numpy.zeros((0, 0)))
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/linalg.py in inv(a)
333 raise ValueError("Argument to inv must have shape [..., n, n], got {}."
334 .format(jnp.shape(a)))
--> 335 return solve(
336 a, lax.broadcast(jnp.eye(a.shape[-1], dtype=lax.dtype(a)), a.shape[:-2]))
337
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
137 def reraise_with_filtered_traceback(*args, **kwargs):
138 try:
--> 139 return fun(*args, **kwargs)
140 except Exception as e:
141 if not is_under_reraiser(e):
/opt/texat-venv/lib/python3.8/site-packages/jax/api.py in f_jitted(*args, **kwargs)
369 return cache_miss(*args, **kwargs)[0] # probably won't return
370 else:
--> 371 return cpp_jitted_f(*args, **kwargs)
372 f_jitted._cpp_jitted_f = cpp_jitted_f
373
/opt/texat-venv/lib/python3.8/site-packages/jax/api.py in cache_miss(*args, **kwargs)
276 _check_arg(arg)
277 flat_fun, out_tree = flatten_fun(f, in_tree)
--> 278 out_flat = xla.xla_call(
279 flat_fun,
280 *args_flat,
/opt/texat-venv/lib/python3.8/site-packages/jax/core.py in bind(self, fun, *args, **params)
1227
1228 def bind(self, fun, *args, **params):
-> 1229 return call_bind(self, fun, *args, **params)
1230
1231 def process(self, trace, fun, tracers, params):
/opt/texat-venv/lib/python3.8/site-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1218 tracers = map(top_trace.full_raise, args)
1219 with maybe_new_sublevel(top_trace):
-> 1220 outs = primitive.process(top_trace, fun, tracers, params)
1221 return map(full_lower, apply_todos(env_trace_todo(), outs))
1222
/opt/texat-venv/lib/python3.8/site-packages/jax/core.py in process(self, trace, fun, tracers, params)
1230
1231 def process(self, trace, fun, tracers, params):
-> 1232 return trace.process_call(self, fun, tracers, params)
1233
1234 def post_process(self, trace, out_tracers, params):
/opt/texat-venv/lib/python3.8/site-packages/jax/core.py in process_call(self, primitive, f, tracers, params)
596
597 def process_call(self, primitive, f, tracers, params):
--> 598 return primitive.impl(f, *tracers, **params)
599 process_map = process_call
600
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, donated_invars, *args)
567
568 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name, donated_invars):
--> 569 compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
570 *unsafe_map(arg_spec, args))
571 try:
/opt/texat-venv/lib/python3.8/site-packages/jax/linear_util.py in memoized_fun(fun, *args)
249 fun.populate_stores(stores)
250 else:
--> 251 ans = call(fun, *args)
252 cache[key] = (ans, fun.stores)
253
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
643 abstract_args, arg_devices = unzip2(arg_specs)
644 if config.omnistaging_enabled:
--> 645 jaxpr, out_avals, consts = pe.trace_to_jaxpr_final(fun, abstract_args)
646 if any(isinstance(c, core.Tracer) for c in consts):
647 raise core.UnexpectedTracerError("Encountered an unexpected tracer.")
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr_final(fun, in_avals)
1228 main.source_info = fun_sourceinfo(fun.f) # type: ignore
1229 main.jaxpr_stack = () # type: ignore
-> 1230 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
1231 del main
1232 return jaxpr, out_avals, consts
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1209 trace = DynamicJaxprTrace(main, core.cur_sublevel())
1210 in_tracers = map(trace.new_arg, in_avals)
-> 1211 ans = fun.call_wrapped(*in_tracers)
1212 out_tracers = map(trace.full_raise, ans)
1213 jaxpr, out_avals, consts = frame.to_jaxpr(in_tracers, out_tracers)
/opt/texat-venv/lib/python3.8/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
158
159 try:
--> 160 ans = self.f(*args, **dict(self.params, **kwargs))
161 except:
162 # Some transformations yield from inside context managers, so we have to
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/linalg.py in solve(a, b)
448 def solve(a, b):
449 a, b = _promote_arg_dtypes(jnp.asarray(a), jnp.asarray(b))
--> 450 return lax_linalg._solve(a, b)
451
452
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in _solve(a, b)
264 else:
265 # b.shape == [..., m, k]
--> 266 return api.vmap(custom_solve, b.ndim - 1, max(a.ndim, b.ndim) - 1)(b)
267
268 def _T(x): return jnp.swapaxes(x, -1, -2)
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
137 def reraise_with_filtered_traceback(*args, **kwargs):
138 try:
--> 139 return fun(*args, **kwargs)
140 except Exception as e:
141 if not is_under_reraiser(e):
/opt/texat-venv/lib/python3.8/site-packages/jax/api.py in batched_fun(*args)
1184 in_axes_flat = flatten_axes("vmap in_axes", in_tree, in_axes)
1185 _ = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "vmap")
-> 1186 out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
1187 lambda: flatten_axes("vmap out_axes", out_tree(),
1188 out_axes),
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/batching.py in batch(fun, in_vals, in_dims, out_dim_dests, axis_name)
33 # executes a batched version of `fun` following out_dim_dests
34 batched_fun = batch_fun(fun, in_dims, out_dim_dests, axis_name=axis_name)
---> 35 return batched_fun.call_wrapped(*in_vals)
36
37 @lu.transformation_with_aux
/opt/texat-venv/lib/python3.8/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
158
159 try:
--> 160 ans = self.f(*args, **dict(self.params, **kwargs))
161 except:
162 # Some transformations yield from inside context managers, so we have to
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in custom_linear_solve(matvec, b, solve, transpose_solve, symmetric)
2181 _check_tree("matvec", "b", out_tree, tree)
2182
-> 2183 solve_jaxpr, solve_consts, out_tree = _initial_style_jaxpr(
2184 _shape_checked(partial(solve, matvec), "solve"), in_args_tree, b_avals)
2185 _check_tree("solve", "b", out_tree, tree)
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in _initial_style_jaxpr(fun, in_tree, in_avals)
70 @cache()
71 def _initial_style_jaxpr(fun: Callable, in_tree, in_avals):
---> 72 jaxpr, out_avals, consts, out_tree = _initial_style_open_jaxpr(fun, in_tree, in_avals)
73 closed_jaxpr = core.ClosedJaxpr(pe.convert_constvars_jaxpr(jaxpr), ())
74 return closed_jaxpr, consts, out_tree
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in _initial_style_open_jaxpr(fun, in_tree, in_avals)
65 def _initial_style_open_jaxpr(fun: Callable, in_tree, in_avals):
66 wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
---> 67 jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
68 return jaxpr, out_avals, consts, out_tree()
69
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr_dynamic(fun, in_avals)
1199 main.source_info = fun_sourceinfo(fun.f) # type: ignore
1200 main.jaxpr_stack = () # type: ignore
-> 1201 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
1202 del main
1203 return jaxpr, out_avals, consts
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1209 trace = DynamicJaxprTrace(main, core.cur_sublevel())
1210 in_tracers = map(trace.new_arg, in_avals)
-> 1211 ans = fun.call_wrapped(*in_tracers)
1212 out_tracers = map(trace.full_raise, ans)
1213 jaxpr, out_avals, consts = frame.to_jaxpr(in_tracers, out_tracers)
/opt/texat-venv/lib/python3.8/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
158
159 try:
--> 160 ans = self.f(*args, **dict(self.params, **kwargs))
161 except:
162 # Some transformations yield from inside context managers, so we have to
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/control_flow.py in f(x)
2172 def _shape_checked(fun, name):
2173 def f(x):
-> 2174 y = fun(x)
2175 _check_shapes(name, "b", y, b_flat)
2176 return y
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in <lambda>(_, x)
257 lax.custom_linear_solve,
258 lambda x: _matvec_multiply(a, x),
--> 259 solve=lambda _, x: lu_solve(lu_, permutation, x, trans=0),
260 transpose_solve=lambda _, x: lu_solve(lu_, permutation, x, trans=1))
261 if a.ndim == b.ndim + 1:
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in lu_solve(lu, permutation, b, trans)
986 def lu_solve(lu, permutation, b, trans=0):
987 """LU solve with broadcasting."""
--> 988 return _lu_solve(lu, permutation, b, trans)
989
990
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
137 def reraise_with_filtered_traceback(*args, **kwargs):
138 try:
--> 139 return fun(*args, **kwargs)
140 except Exception as e:
141 if not is_under_reraiser(e):
/opt/texat-venv/lib/python3.8/site-packages/jax/api.py in f_jitted(*args, **kwargs)
369 return cache_miss(*args, **kwargs)[0] # probably won't return
370 else:
--> 371 return cpp_jitted_f(*args, **kwargs)
372 f_jitted._cpp_jitted_f = cpp_jitted_f
373
/opt/texat-venv/lib/python3.8/site-packages/jax/api.py in cache_miss(*args, **kwargs)
276 _check_arg(arg)
277 flat_fun, out_tree = flatten_fun(f, in_tree)
--> 278 out_flat = xla.xla_call(
279 flat_fun,
280 *args_flat,
/opt/texat-venv/lib/python3.8/site-packages/jax/core.py in bind(self, fun, *args, **params)
1227
1228 def bind(self, fun, *args, **params):
-> 1229 return call_bind(self, fun, *args, **params)
1230
1231 def process(self, trace, fun, tracers, params):
/opt/texat-venv/lib/python3.8/site-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1218 tracers = map(top_trace.full_raise, args)
1219 with maybe_new_sublevel(top_trace):
-> 1220 outs = primitive.process(top_trace, fun, tracers, params)
1221 return map(full_lower, apply_todos(env_trace_todo(), outs))
1222
/opt/texat-venv/lib/python3.8/site-packages/jax/core.py in process(self, trace, fun, tracers, params)
1230
1231 def process(self, trace, fun, tracers, params):
-> 1232 return trace.process_call(self, fun, tracers, params)
1233
1234 def post_process(self, trace, out_tracers, params):
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/partial_eval.py in process_call(self, call_primitive, f, tracers, params)
1083 def process_call(self, call_primitive, f, tracers, params):
1084 in_avals = [t.aval for t in tracers]
-> 1085 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(f, self.main, in_avals)
1086 if not jaxpr.eqns:
1087 return core.eval_jaxpr(jaxpr, consts, *tracers)
/opt/texat-venv/lib/python3.8/site-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1209 trace = DynamicJaxprTrace(main, core.cur_sublevel())
1210 in_tracers = map(trace.new_arg, in_avals)
-> 1211 ans = fun.call_wrapped(*in_tracers)
1212 out_tracers = map(trace.full_raise, ans)
1213 jaxpr, out_avals, consts = frame.to_jaxpr(in_tracers, out_tracers)
/opt/texat-venv/lib/python3.8/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
158
159 try:
--> 160 ans = self.f(*args, **dict(self.params, **kwargs))
161 except:
162 # Some transformations yield from inside context managers, so we have to
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in _lu_solve(lu, permutation, b, trans)
980 "(shape {}) must match"
981 .format(lu.shape, b.shape))
--> 982 x = _lu_solve_core(lu, permutation, b, trans)
983 return x[..., 0] if rhs_vector else x
984
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/vectorize.py in wrapped(*args)
302 vmap_counts = [max(c - 1, 0) for c in vmap_counts]
303 vectorized_func = api.vmap(vectorized_func, in_axes)
--> 304 return vectorized_func(*vec_args)
305
306 return wrapped
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/vectorize.py in wrapped(*args)
133 """Check that output core dimensions match the signature."""
134 def wrapped(*args):
--> 135 out = func(*args)
136 out_shapes = map(jnp.shape, out if isinstance(out, tuple) else [out])
137
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/vectorize.py in new_func(*args)
174 for i, arg in static_args:
175 args.insert(i, arg)
--> 176 return func(*args)
177
178 return new_func, dynamic_args
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/lax/linalg.py in _lu_solve_core(lu, permutation, b, trans)
937 def _lu_solve_core(lu, permutation, b, trans):
938 m = lu.shape[0]
--> 939 x = jnp.reshape(b, (m, -1))
940 if trans == 0:
941 x = x[permutation, :]
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in reshape(a, newshape, order)
1245 def reshape(a, newshape, order="C"):
1246 try:
-> 1247 return a.reshape(newshape, order=order) # forward to method for ndarrays
1248 except AttributeError:
1249 return _reshape(a, newshape, order=order)
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in _reshape_method(a, *newshape, **kwargs)
1290 type(newshape[0]) is not Poly):
1291 newshape = newshape[0]
-> 1292 return _reshape(a, newshape, order=order)
1293
1294
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in _reshape(a, newshape, order)
1266
1267 def _reshape(a, newshape, order="C"):
-> 1268 computed_newshape = _compute_newshape(a, newshape)
1269 if order == "C":
1270 return lax.reshape(a, computed_newshape, None)
/opt/texat-venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in _compute_newshape(a, newshape)
1260 newshape = [check(size) for size in newshape] if iterable else check(newshape)
1261 if np.any(np.equal(newshape, -1)):
-> 1262 fix = -a.size // (newshape if type(newshape) is Poly else _prod(newshape))
1263 return [d if d != -1 else fix for d in newshape]
1264 else:
ZeroDivisionError: integer division or modulo by zero
```
Is there any way I can fix this without writing a new variant of the function that handles the zero shape case?
| Thanks for the report! I believe this is a bug... the numpy equivalent returns an empty matrix:
```python
import numpy as np
np.linalg.inv(np.zeros((0, 0)))
# array([], shape=(0, 0), dtype=float64)
```
And there are good arguments to be made that an empty matrix is its own inverse. | 2021-01-05T19:52:45 |
google/jax | 5,320 | google__jax-5320 | [
"826"
] | 7c42dc91ed707ff9230c2dbb6217e8b9b28284fe | diff --git a/jax/_src/lax/linalg.py b/jax/_src/lax/linalg.py
--- a/jax/_src/lax/linalg.py
+++ b/jax/_src/lax/linalg.py
@@ -253,6 +253,12 @@ def _check_solve_shapes(a, b):
def _solve(a, b):
_check_solve_shapes(a, b)
+ # Broadcast leading dimensions of b to the shape of a, as is required by
+ # custom_linear_solve.
+ out_shape = tuple(d_a if d_b == 1 else d_b
+ for d_a, d_b in zip(a.shape[:-1] + (1,), b.shape))
+ b = jnp.broadcast_to(b, out_shape)
+
# With custom_linear_solve, we can reuse the same factorization when
# computing sensitivities. This is considerably faster.
lu_, _, permutation = lu(lax.stop_gradient(a))
| diff --git a/tests/linalg_test.py b/tests/linalg_test.py
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -706,7 +706,7 @@ def tensor_maker():
((4, 4), (4,)),
((8, 8), (8, 4)),
((1, 2, 2), (3, 2)),
- ((2, 1, 3, 3), (2, 4, 3, 4)),
+ ((2, 1, 3, 3), (1, 4, 3, 4)),
((1, 0, 0), (1, 0, 2)),
]
for dtype in float_types + complex_types))
| linalg.solve doesn't respect broadcasting semantics
I'm attempting a batched matrix solve,
```python
import jax.numpy as jp
import numpy as np
A = np.eye(2)[np.newaxis]
b = np.ones((3, 2))
print(A.shape, b.shape)
print(np.linalg.solve(A, b))
print(jp.linalg.solve(A, b))
```
but I'm getting the following:
```
(1, 2, 2) (3, 2)
[[1. 1.]
[1. 1.]
[1. 1.]]
/Users/skainswo/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/jax/lib/xla_bridge.py:130: UserWarning: No GPU/TPU found, falling back to CPU.
warnings.warn('No GPU/TPU found, falling back to CPU.')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~/nu/skainswo/research/gan_with_the_wind/mle_normal.py in <module>
31 print(A.shape, b.shape)
32 print(np.linalg.solve(A, b))
---> 33 print(jp.linalg.solve(A, b))
~/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/jax/numpy/linalg.py in solve(a, b)
247 x = b if a_ndims == b_ndims else b[..., None]
248
--> 249 permutation = lax_linalg.lu_pivots_to_permutation(pivots, m)
250 x = x[..., permutation, :]
251
~/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/jax/lax_linalg.py in lu_pivots_to_permutation(swaps, k)
462 return swaps, permutation
463
--> 464 n, = np.shape(swaps)
465 permutation = np.arange(k)
466 _, permutation = lax.fori_loop(
ValueError: too many values to unpack (expected 1)
```
I'm on jax 0.1.36 and numpy 1.16.4.
| I also observed this problem and solved it by making `A` and `b` have the same batch_shape (using `np.broadcast_to`).
@fehiepsi Hmm that's not actually working for me. I now have shapes `(3, 2, 2)` and `(3, 2)` for A and b, respectively but I'm still seeing the same error.
And `A.shape == (3, 2, 2)` `b.shape == (3, 2, 1)` doesn't work either.
Sorry, I used `jax.scipy.linalg.solve_triangular` so the behaviour is a bit different. The case `(3, 2, 2)` and `(3, 2)`/`(3, 2, 1)` does not work with `jax.numpy.solve` on my system either.
@fehiepsi Ah, that would explain it!
Yes, `jax.numpy.linalg.solve` simply doesn't support broadcasting at the moment.
As a workaround, you could most likely wrap a `vmap` around an unbatched solve; I see no reason why that wouldn't work. The underlying primitives should support batching, it's only the Python numpy layer on top that does not support batching.
@hawkinsp I can see how `vmap` would work when batching over a single dimension, but is there a solution for batching across multiple dimensions? For example `A.shape == (5, 3, 2, 2) b.shape == (5, 3, 2, 1)`.
Well, hopefully it's moot now because `np.linalg.solve` should support broadcasting.
But you could use `vmap` with multiple batch dimensions, either by flattening them with a `reshape`, or by applying `vmap` repeatedly. If this is a common pattern, we might want to add support to `vmap` itself to map over multiple dimensions.
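A minimal sketch of the nested-`vmap` approach, using the shapes from the question above (illustrative only):
```python
import numpy as np
import jax
import jax.numpy as jnp

A = np.random.rand(5, 3, 2, 2)
b = np.random.rand(5, 3, 2, 1)

# One vmap per leading batch dimension; the innermost call sees plain
# (2, 2) and (2, 1) operands, which jnp.linalg.solve handles without batching.
batched_solve = jax.vmap(jax.vmap(jnp.linalg.solve))
print(batched_solve(A, b).shape)  # (5, 3, 2, 1)
```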
@hawkinsp Roger that! Yeah, I would totally vote for vmap supporting batching over a variable number of dimensions, i.e. "batch over all but the last n dimensions".
Hi!
I'm not sure if it is better to open a new issue or write here (if so, just let me know), but I still seem to have a problem with solve and the broadcasting semantics.
In my problem I have to compute the bilinear `y' S y` when the vector `y` has shape `(dim, )` and `S` is a batch of matrices of shape `(comp, dim, dim)`.
My code works with the standard numpy's implementation of `linalg.solve` but not with jax's:
```python
import numpy as np
import jax.numpy as jnp
from jax import random
RNG = random.PRNGKey(42)
keyY, keyS = random.split(RNG)
dim = 3; comp = 4
y = jnp.expand_dims(random.uniform(keyY, shape=(dim, )), 0)
S = random.uniform(keyS, shape=(comp, dim, dim))
print(y.shape)
print(S.shape)
for solver in [np.linalg.solve, jnp.linalg.solve]:
print(jnp.sum(y * solver(S, y), axis=1).shape)
```
This works for pure numpy code, but for jax it fails, and from what I can see (output trace provided below) it appears that `jnp.linalg.solve` is not performing the broadcasting over the first dimension of y. What am I doing wrong?
I am working with jax version `0.2.7` and numpy `1.19.2`.
Error Trace:
```
---------------------------------------------------------------------------
FilteredStackTrace Traceback (most recent call last)
<ipython-input-5-766e89175182> in <module>
1 for solver in [np.linalg.solve, jnp.linalg.solve]:
----> 2 print(jnp.sum(y * solver(S, y), axis=1).shape)
3
~/.miniconda3/envs/pyintel/lib/python3.7/site-packages/jax/_src/numpy/linalg.py in solve(a, b)
449 a, b = _promote_arg_dtypes(jnp.asarray(a), jnp.asarray(b))
--> 450 return lax_linalg._solve(a, b)
451
~/.miniconda3/envs/pyintel/lib/python3.7/site-packages/jax/_src/lax/linalg.py in _solve(a, b)
265 # b.shape == [..., m, k]
--> 266 return api.vmap(custom_solve, b.ndim - 1, max(a.ndim, b.ndim) - 1)(b)
267
~/.miniconda3/envs/pyintel/lib/python3.7/site-packages/jax/_src/lax/control_flow.py in custom_linear_solve(matvec, b, solve, transpose_solve, symmetric)
2179 matvec_jaxpr, matvec_consts, out_tree = _initial_style_jaxpr(
-> 2180 _shape_checked(matvec, "matvec"), in_args_tree, b_avals)
2181 _check_tree("matvec", "b", out_tree, tree)
~/.miniconda3/envs/pyintel/lib/python3.7/site-packages/jax/_src/lax/control_flow.py in _initial_style_jaxpr(fun, in_tree, in_avals)
71 def _initial_style_jaxpr(fun: Callable, in_tree, in_avals):
---> 72 jaxpr, out_avals, consts, out_tree = _initial_style_open_jaxpr(fun, in_tree, in_avals)
73 closed_jaxpr = core.ClosedJaxpr(pe.convert_constvars_jaxpr(jaxpr), ())
~/.miniconda3/envs/pyintel/lib/python3.7/site-packages/jax/_src/lax/control_flow.py in _initial_style_open_jaxpr(fun, in_tree, in_avals)
66 wrapped_fun, out_tree = flatten_fun_nokwargs(lu.wrap_init(fun), in_tree)
---> 67 jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals)
68 return jaxpr, out_avals, consts, out_tree()
~/.miniconda3/envs/pyintel/lib/python3.7/site-packages/jax/_src/lax/control_flow.py in f(x)
2174 y = fun(x)
-> 2175 _check_shapes(name, "b", y, b_flat)
2176 return y
~/.miniconda3/envs/pyintel/lib/python3.7/site-packages/jax/_src/lax/control_flow.py in _check_shapes(func_name, expected_name, actual, expected)
2125 raise ValueError(
-> 2126 f"{func_name}() output shapes must match {expected_name}, "
2127 f"got {actual_shapes} and {expected_shapes}")
FilteredStackTrace: ValueError: matvec() output shapes must match b, got [(4, 3)] and [(1, 3)]
The stack trace above excludes JAX-internal frames.
```
It looks like this is a bug related to broadcasting within `np.linalg.solve`. Shorter repro:
```python
import numpy as np
import jax.numpy as jnp
S = np.random.rand(4, 3, 3)
y = np.random.rand(1, 3)
print(np.linalg.solve(S, y).shape)
# (4, 3)
print(jnp.linalg.solve(S, y).shape)
# ValueError: matvec() output shapes must match b, got [(4, 3)] and [(1, 3)]
``` | 2021-01-05T23:09:42 |
google/jax | 5,333 | google__jax-5333 | [
"5331"
] | cacda5d16b1822a7c6cb15d4011469b500ed9f03 | diff --git a/jax/_src/lax/linalg.py b/jax/_src/lax/linalg.py
--- a/jax/_src/lax/linalg.py
+++ b/jax/_src/lax/linalg.py
@@ -1128,6 +1128,10 @@ def svd_impl(operand, full_matrices, compute_uv):
def svd_translation_rule(c, operand, full_matrices, compute_uv):
shape = c.get_shape(operand).dimensions()
m, n = shape[-2:]
+ if m == 0 or n == 0:
+ return xla.lower_fun(_empty_svd, multiple_results=True)(
+ c, operand, full_matrices=full_matrices, compute_uv=compute_uv)
+
u, s, v = xops.SVD(operand)
permutation = list(range(len(shape)))
permutation[-1], permutation[-2] = permutation[-2], permutation[-1]
@@ -1200,10 +1204,31 @@ def svd_jvp_rule(primals, tangents, full_matrices, compute_uv):
return (s, U, Vt), (ds, dU, _H(dV))
+def _empty_svd(a, *, full_matrices, compute_uv):
+ batch_shape = a.shape[:-2]
+ m, n = a.shape[-2:]
+ s = jnp.empty(batch_shape + (0,), dtype=lax_internal._complex_basetype(a.dtype))
+ if not compute_uv:
+ return (s,)
+ if full_matrices:
+ size = max(m, n)
+ u = jnp.broadcast_to(jnp.eye(size, dtype=a.dtype), batch_shape + (size, size))
+ else:
+ u = jnp.empty(batch_shape + (m, n), dtype=a.dtype)
+ v = jnp.empty(batch_shape + (0, 0), dtype=a.dtype)
+ if m < n:
+ u, v = v, u
+ return s, u, v
+
def _svd_cpu_gpu_translation_rule(gesvd_impl, c, operand, full_matrices, compute_uv):
+ shape = c.get_shape(operand).dimensions()
+ m, n = shape[-2:]
+ batch_dims = shape[:-2]
+
+ if m == 0 or n == 0:
+ return xla.lower_fun(_empty_svd, multiple_results=True)(
+ c, operand, full_matrices=full_matrices, compute_uv=compute_uv)
- shape = c.get_shape(operand)
- batch_dims = shape.dimensions()[:-2]
s, u, vt, info = gesvd_impl(c, operand,
full_matrices=full_matrices,
compute_uv=compute_uv)
diff --git a/jax/_src/numpy/linalg.py b/jax/_src/numpy/linalg.py
--- a/jax/_src/numpy/linalg.py
+++ b/jax/_src/numpy/linalg.py
@@ -303,7 +303,7 @@ def pinv(a, rcond=None):
u, s, v = svd(a, full_matrices=False)
# Singular values less than or equal to ``rcond * largest_singular_value``
# are set to zero.
- cutoff = rcond[..., jnp.newaxis] * jnp.amax(s, axis=-1, keepdims=True)
+ cutoff = rcond[..., jnp.newaxis] * jnp.amax(s, axis=-1, keepdims=True, initial=-jnp.inf)
s = jnp.where(s > cutoff, s, jnp.inf)
res = jnp.matmul(_T(v), jnp.divide(_T(u), s[..., jnp.newaxis]))
return lax.convert_element_type(res, a.dtype)
| diff --git a/tests/linalg_test.py b/tests/linalg_test.py
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -513,8 +513,8 @@ def testNorm(self, shape, dtype, ord, axis, keepdims):
"b": b, "m": m, "n": n, "dtype": dtype, "full_matrices": full_matrices,
"compute_uv": compute_uv}
for b in [(), (3,), (2, 3)]
- for m in [2, 7, 29, 53]
- for n in [2, 7, 29, 53]
+ for m in [0, 2, 7, 29, 53]
+ for n in [0, 2, 7, 29, 53]
for dtype in float_types + complex_types
for full_matrices in [False, True]
for compute_uv in [False, True]))
@@ -529,7 +529,7 @@ def testSVD(self, b, m, n, dtype, full_matrices, compute_uv):
# Norm, adjusted for dimension and type.
def norm(x):
norm = np.linalg.norm(x, axis=(-2, -1))
- return norm / (max(m, n) * jnp.finfo(dtype).eps)
+ return norm / (max(1, m, n) * jnp.finfo(dtype).eps)
a, = args_maker()
out = jnp.linalg.svd(a, full_matrices=full_matrices, compute_uv=compute_uv)
@@ -773,7 +773,8 @@ def args_maker():
{"testcase_name":
"_shape={}".format(jtu.format_shape_dtype_string(shape, dtype)),
"shape": shape, "dtype": dtype}
- for shape in [(1, 1), (4, 4), (2, 70, 7), (2000, 7), (7, 1000), (70, 7, 2)]
+ for shape in [(1, 1), (4, 4), (2, 70, 7), (2000, 7), (7, 1000), (70, 7, 2),
+ (2, 0, 0), (3, 0, 2), (1, 0)]
for dtype in float_types + complex_types))
def testPinv(self, shape, dtype):
if (jnp.issubdtype(dtype, np.complexfloating) and
| linalg.svd and linalg.pinv fail for zero-size matrices
Hi all,
This issue follows from #5313 with PR #5316
`jnp.linalg.svd(jnp.zeros((0,0)))` and `jnp.linalg.pinv(jnp.zeros((0,0)))` both fail with runtime errors.
I will try and take a look if I find time, but I'm not sure how soon that will be I'm afraid :(
| Note that this works correctly on GPU, but results in an error on CPU.
I made the fix for `pinv`. Note that by comparison, scipy raises a similar error for svd of an empty matrix (see https://github.com/scipy/scipy/issues/1532) while numpy returns empty matrices:
```python
>>> np.linalg.svd(np.zeros((0, 0)))
(array([], shape=(0, 0), dtype=float64),
array([], dtype=float64),
array([], shape=(0, 0), dtype=float64))
```
Since JAX CPU translation rules rely on scipy LAPACK wrappers, jax has the same issue as scipy. Numpy specifically special-cases the empty matrix, as can be seen here: https://github.com/numpy/numpy/blob/5cae51e794d69dd553104099305e9f92db237c53/numpy/linalg/umath_linalg.c.src#L2770
Interestingly:
```python
>>> np.linalg.svd(np.zeros((0, 3)))
(array([], shape=(0, 0), dtype=float64),
array([], dtype=float64),
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]))
>>> np.linalg.svd(np.zeros((3, 0)))
(array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]),
array([], dtype=float64),
array([], shape=(0, 0), dtype=float64))
```
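For reference, the patch above takes the same route on the JAX side, special-casing empty inputs through an `_empty_svd` helper lowered with `xla.lower_fun`. A rough pure-numpy sketch of the special case illustrated above (illustrative only; the helper's actual body is not shown in this diff):
```python
import numpy as np

def empty_svd(a, full_matrices=True, compute_uv=True):
    # Return shapes matching numpy's behaviour for zero-size matrices.
    *batch, m, n = a.shape
    k = min(m, n)
    s = np.zeros((*batch, k), dtype=a.real.dtype)  # singular values are real
    if not compute_uv:
        return s
    u_cols, vt_rows = (m, n) if full_matrices else (k, k)
    u = np.broadcast_to(np.eye(m, u_cols, dtype=a.dtype), (*batch, m, u_cols))
    vt = np.broadcast_to(np.eye(vt_rows, n, dtype=a.dtype), (*batch, vt_rows, n))
    return u, s, vt
```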
Wow, thanks for digging in to the special cases here Jake. | 2021-01-06T19:32:14 |
google/jax | 5,360 | google__jax-5360 | [
"5358"
] | 3f85b300c8b2770dd0aef0c78b883b4820f27412 | diff --git a/jax/util.py b/jax/util.py
--- a/jax/util.py
+++ b/jax/util.py
@@ -21,6 +21,7 @@
import numpy as np
+partial = functools.partial
def safe_zip(*args):
n = len(args[0])
@@ -90,12 +91,6 @@ def split_dict(dct, names):
def concatenate(xs):
return list(it.chain.from_iterable(xs))
-def partial(fun, *args, **kwargs):
- wrapped = functools.partial(fun, *args, **kwargs)
- functools.update_wrapper(wrapped, fun)
- wrapped._bound_args = args
- return wrapped
-
class partialmethod(functools.partial):
def __get__(self, instance, owner):
if instance is None:
| jax.partial breaks signature inspection
```python
import inspect
from jax import partial
def f(x, y):
return x + y
add_one = partial(f, y=1)
inspect.signature(add_one).bind(1)
# TypeError: missing a required argument: 'y'
```
Compare to `functools.partial`:
```python
import inspect
from functools import partial
def f(x, y):
return x + y
add_one = partial(f, y=1)
inspect.signature(add_one).bind(1)
# <BoundArguments (x=1)>
```
Proposed fix is to alias `jax.partial` to `functools.partial`, and possibly one day deprecate it.
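For context, the removed `jax.util.partial` (see the diff above) wrapped `functools.partial` with `functools.update_wrapper`, which sets `__wrapped__`; `inspect.signature` follows `__wrapped__` by default, so it reports `f`'s original signature and the bound `y` is lost. A minimal reproduction of that mechanism:
```python
import functools
import inspect

def f(x, y):
    return x + y

add_one = functools.partial(f, y=1)
functools.update_wrapper(add_one, f)   # mirrors what the old jax.util.partial did

# inspect.signature unwraps through __wrapped__ by default, so the bound
# y=1 disappears from the reported signature and bind(1) fails:
print(inspect.signature(add_one))                         # (x, y)
print(inspect.signature(add_one, follow_wrapped=False))   # (x, *, y=1)
```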
| 2021-01-08T20:27:21 |
||
google/jax | 5,387 | google__jax-5387 | [
"912"
] | 628d84ea617ff8190714b7c3aa1847a133837538 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -29,7 +29,8 @@
import operator
import threading
import weakref
-from typing import Any, Callable, Iterable, List, NamedTuple, Optional, Sequence, Tuple, TypeVar, Union
+from typing import (Any, Callable, Iterable, List, NamedTuple, Optional,
+ Sequence, Tuple, TypeVar, Union)
from warnings import warn
import numpy as np
@@ -49,7 +50,7 @@
from ._src.traceback_util import api_boundary
from .tree_util import (tree_map, tree_flatten, tree_unflatten, tree_structure,
tree_transpose, tree_leaves, tree_multimap,
- treedef_is_leaf, Partial)
+ treedef_is_leaf, treedef_children, Partial)
from ._src.util import (unzip2, curry, partial, safe_map, safe_zip, prod,
split_list, extend_name_stack, wrap_name, cache, wraps,
HashableFunction)
@@ -1076,13 +1077,13 @@ def vmap(fun: F, in_axes=0, out_axes=0, axis_name=None) -> F:
(tuple/list/dict) thereof specifying which input array axes to map over.
If each positional argument to ``fun`` is an array, then ``in_axes`` can
- be an integer, a None, or a tuple of integers and Nones with
- length equal to the number of positional arguments to ``fun``.
- An integer or ``None`` indicates which array axis to map over for all
- arguments (with ``None`` indicating not to map any axis), and a tuple
- indicates which axis to map for each corresponding positional argument.
- Axis integers must be in the range ``[-ndim, ndim)`` for each array, where
- ``ndim`` is the number of dimensions of the corresponding input array.
+ be an integer, a None, or a tuple of integers and Nones with length equal
+ to the number of positional arguments to ``fun``. An integer or ``None``
+ indicates which array axis to map over for all arguments (with ``None``
+ indicating not to map any axis), and a tuple indicates which axis to map
+ for each corresponding positional argument. Axis integers must be in the
+ range ``[-ndim, ndim)`` for each array, where ``ndim`` is the number of
+ dimensions (axes) of the corresponding input array.
If the positional arguments to ``fun`` are container types, the
corresponding element of ``in_axes`` can itself be a matching container,
@@ -1091,17 +1092,22 @@ def vmap(fun: F, in_axes=0, out_axes=0, axis_name=None) -> F:
argument tuple passed to ``fun``.
At least one positional argument must have ``in_axes`` not None. The sizes
- of the mapped input axes for all mapped positional arguments must all
- be equal.
+ of the mapped input axes for all mapped positional arguments must all be
+ equal.
+
+ Arguments passed as keywords are always mapped over their leading axis
+ (i.e. axis index 0).
+
+ See below for examples.
out_axes: An integer, None, or (nested) standard Python container
(tuple/list/dict) thereof indicating where the mapped axis should appear
in the output. All outputs with a mapped axis must have a non-None
- ``out_axes`` specification. Axis integers must be
- in the range ``[-ndim, ndim)`` for each output array, where ``ndim`` is
- the number of dimensions of the array returned by the :func:`vmap`-ed
- function, which is one more than the number of dimensions of the
- corresponding array returned by ``fun``.
+ ``out_axes`` specification. Axis integers must be in the range ``[-ndim,
+ ndim)`` for each output array, where ``ndim`` is the number of dimensions
+ (axes) of the array returned by the :func:`vmap`-ed function, which is one
+ more than the number of dimensions (axes) of the corresponding array
+ returned by ``fun``.
Returns:
Batched/vectorized version of ``fun`` with arguments that correspond to
@@ -1160,22 +1166,22 @@ def vmap(fun: F, in_axes=0, out_axes=0, axis_name=None) -> F:
>>> print(out)
[1. 2. 3. 4. 5.]
- The results of a vectorized function can be mapped or unmapped.
- For example, the function below returns a pair with the first
- element mapped and the second unmapped. Only for unmapped results
- we can specify ``out_axes`` to be ``None`` (to keep it unmapped).
+ The results of a vectorized function can be mapped or unmapped. For example,
+ the function below returns a pair with the first element mapped and the second
+ unmapped. Only for unmapped results we can specify ``out_axes`` to be ``None``
+ (to keep it unmapped).
>>> print(vmap(lambda x, y: (x + y, y * 2.), in_axes=(0, None), out_axes=(0, None))(jnp.arange(2.), 4.))
(DeviceArray([4., 5.], dtype=float32), 8.0)
- If the ``out_axes`` is specified for an unmapped result, the result is broadcast
- across the mapped axis:
+ If the ``out_axes`` is specified for an unmapped result, the result is
+ broadcast across the mapped axis:
>>> print(vmap(lambda x, y: (x + y, y * 2.), in_axes=(0, None), out_axes=0)(jnp.arange(2.), 4.))
(DeviceArray([4., 5.], dtype=float32), DeviceArray([8., 8.], dtype=float32))
- If the ``out_axes`` is specified for a mapped result, the result is
- transposed accordingly.
+ If the ``out_axes`` is specified for a mapped result, the result is transposed
+ accordingly.
"""
_check_callable(fun)
docstr = ("Vectorized version of {fun}. Takes similar arguments as {fun} "
@@ -1196,21 +1202,21 @@ def vmap(fun: F, in_axes=0, out_axes=0, axis_name=None) -> F:
in_axes_, out_axes_ = tree_leaves(in_axes), tree_leaves(out_axes)
if not all(isinstance(l, (type(None), int)) for l in in_axes_):
- raise TypeError("vmap in_axes must be an int, None, or (nested) container with "
- f"those types as leaves, but got {in_axes}.")
+ raise TypeError("vmap in_axes must be an int, None, or (nested) container "
+ f"with those types as leaves, but got {in_axes}.")
if not all(isinstance(l, (type(None), int)) for l in out_axes_):
- raise TypeError("vmap out_axes must be an int, None, or (nested) container with "
- f"those types as leaves, but got {out_axes}.")
+ raise TypeError("vmap out_axes must be an int, None, or (nested) container "
+ f"with those types as leaves, but got {out_axes}.")
del in_axes_, out_axes_
@wraps(fun, docstr=docstr)
@api_boundary
- def batched_fun(*args):
- args_flat, in_tree = tree_flatten(args)
+ def batched_fun(*args, **kwargs):
+ args_flat, in_tree = tree_flatten((args, kwargs))
f = lu.wrap_init(fun)
- flat_fun, out_tree = flatten_fun_nokwargs(f, in_tree)
- in_axes_flat = flatten_axes("vmap in_axes", in_tree, in_axes)
- _ = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "vmap")
+ flat_fun, out_tree = flatten_fun(f, in_tree)
+ in_axes_flat = flatten_axes("vmap in_axes", in_tree, (in_axes, 0), kws=True)
+ _ = _mapped_axis_size(in_tree, args_flat, in_axes_flat, "vmap", kws=True)
out_flat = batching.batch(flat_fun, args_flat, in_axes_flat,
lambda: flatten_axes("vmap out_axes", out_tree(),
out_axes),
@@ -1219,14 +1225,14 @@ def batched_fun(*args):
return batched_fun
-def _mapped_axis_size(tree, vals, dims, name):
+def _mapped_axis_size(tree, vals, dims, name, *, kws=False):
def _get_axis_size(name: str, i:int, shape: Tuple[int, ...], axis: int):
try:
return shape[axis]
except (IndexError, TypeError) as e:
ranks = tree_unflatten(tree, [np.ndim(x) for x, d in zip(vals, dims)])
- raise ValueError(f"{name} got arg {i} of rank {len(shape)} but axis to be mapped {axis}. "
- f"The tree of ranks is:\n{ranks}") from e
+ raise ValueError(f"{name} got arg {i} of rank {len(shape)} but axis to be "
+ f"mapped {axis}. The tree of ranks is:\n{ranks}") from e
mapped_axis_sizes = {_get_axis_size(name, i, np.shape(x), d)
for i, (x, d) in enumerate(zip(vals, dims))
@@ -1241,6 +1247,11 @@ def _get_axis_size(name: str, i:int, shape: Tuple[int, ...], axis: int):
# we switch the error message based on whether args is a tuple of arrays,
# in which case we can produce an error message based on argument indices,
# or if it has nested containers.
+ if kws:
+ # if keyword arguments are included in the tree, we make adapt the error
+ # message only to be about the positional arguments
+ tree, leaf = treedef_children(tree)
+ assert treedef_is_leaf(leaf)
# TODO(mattjj,phawkins): add a way to inspect pytree kind more directly
if tree == tree_flatten((core.unit,) * tree.num_leaves)[1]:
lines1 = [f"arg {i} has shape {np.shape(x)} and axis {d} is to be mapped"
@@ -1277,39 +1288,35 @@ def pmap(
) -> F:
"""Parallel map with support for collective operations.
- The purpose of :py:func:`pmap` is to express single-program multiple-data (SPMD)
- programs. Applying :py:func:`pmap` to a function will compile the function with XLA
- (similarly to :py:func:`jit`), then execute it in parallel on XLA devices, such as
- multiple GPUs or multiple TPU cores. Semantically it is comparable to
- :py:func:`vmap` because both transformations map a function over array axes, but
- where :py:func:`vmap` vectorizes functions by pushing the mapped axis down into
- primitive operations, :py:func:`pmap` instead replicates the function and executes
- each replica on its own XLA device in parallel.
-
- Another key difference with :py:func:`vmap` is that while :py:func:`vmap` can only express
- pure maps, :py:func:`pmap` enables the use of parallel SPMD collective operations,
- like all-reduce sum.
+ The purpose of :py:func:`pmap` is to express single-program multiple-data
+ (SPMD) programs. Applying :py:func:`pmap` to a function will compile the
+ function with XLA (similarly to :py:func:`jit`), then execute it in parallel
+ on XLA devices, such as multiple GPUs or multiple TPU cores. Semantically it
+ is comparable to :py:func:`vmap` because both transformations map a function
+ over array axes, but where :py:func:`vmap` vectorizes functions by pushing the
+ mapped axis down into primitive operations, :py:func:`pmap` instead replicates
+ the function and executes each replica on its own XLA device in parallel.
The mapped axis size must be less than or equal to the number of local XLA
devices available, as returned by :py:func:`jax.local_device_count()` (unless
- ``devices`` is specified, see below). For nested :py:func:`pmap` calls, the product
- of the mapped axis sizes must be less than or equal to the number of XLA
- devices.
+ ``devices`` is specified, see below). For nested :py:func:`pmap` calls, the
+ product of the mapped axis sizes must be less than or equal to the number of
+ XLA devices.
.. note::
:py:func:`pmap` compiles ``fun``, so while it can be combined with
:py:func:`jit`, it's usually unnecessary.
- **Multi-host platforms:** On multi-host platforms such as TPU pods, :py:func:`pmap`
- is designed to be used in SPMD Python programs, where every host is running
- the same Python code such that all hosts run the same pmapped function in the
- same order. Each host should still call the pmapped function with mapped axis
- size equal to the number of *local* devices (unless ``devices`` is specified,
- see below), and an array of the same leading axis size will be returned as
- usual. However, any collective operations in ``fun`` will be computed over
- *all* participating devices, including those on other hosts, via
- device-to-device communication. Conceptually, this can be thought of as
- running a pmap over a single array sharded across hosts, where each host
+ **Multi-host platforms:** On multi-host platforms such as TPU pods,
+ :py:func:`pmap` is designed to be used in SPMD Python programs, where every
+ host is running the same Python code such that all hosts run the same pmapped
+ function in the same order. Each host should still call the pmapped function
+ with mapped axis size equal to the number of *local* devices (unless
+ ``devices`` is specified, see below), and an array of the same leading axis
+ size will be returned as usual. However, any collective operations in ``fun``
+ will be computed over *all* participating devices, including those on other
+ hosts, via device-to-device communication. Conceptually, this can be thought
+ of as running a pmap over a single array sharded across hosts, where each host
"sees" only its local shard of the input and output. The SPMD model requires
that the same multi-host pmaps must be run in the same order on all devices,
but they can be interspersed with arbitrary operations running on a single
@@ -1324,7 +1331,9 @@ def pmap(
axis_name: Optional, a hashable Python object used to identify the mapped
axis so that parallel collectives can be applied.
in_axes: A non-negative integer, None, or nested Python container thereof
- that specifies which axes in the input to map over (see :py:func:`vmap`).
+ that specifies which axes of positional arguments to map over. Arguments
+ passed as keywords are always mapped over their leading axis (i.e. axis
+ index 0). See :py:func:`vmap` for details.
out_axes: A non-negative integer, None, or nested Python container thereof
indicating where the mapped axis should appear in the output. All outputs
with a mapped axis must have a non-None ``out_axes`` specification
@@ -1332,8 +1341,8 @@ def pmap(
static_broadcasted_argnums: An int or collection of ints specifying which
positional arguments to treat as static (compile-time constant).
Operations that only depend on static arguments will be constant-folded.
- Calling the pmapped function with different values for these constants will
- trigger recompilation. If the pmapped function is called with fewer
+ Calling the pmapped function with different values for these constants
+ will trigger recompilation. If the pmapped function is called with fewer
positional arguments than indicated by ``static_argnums`` then an error is
raised. Each of the static arguments will be broadcasted to all devices.
Arguments that are not arrays or containers thereof must be marked as
@@ -1342,8 +1351,8 @@ def pmap(
Optional, a sequence of Devices to map over. (Available devices can be
retrieved via jax.devices()). If specified, the size of the mapped axis
must be equal to the number of local devices in the sequence. Nested
- :py:func:`pmap` s with ``devices`` specified in either the inner or outer :py:func:`pmap`
- are not yet supported.
+ :py:func:`pmap` s with ``devices`` specified in either the inner or outer
+ :py:func:`pmap` are not yet supported.
backend: This is an experimental feature and the API is likely to change.
Optional, a string representing the XLA backend. 'cpu', 'gpu', or 'tpu'.
axis_size: Optional; the size of the mapped axis.
@@ -1362,12 +1371,11 @@ def pmap(
Returns:
A parallelized version of ``fun`` with arguments that correspond to those of
- ``fun`` but with extra array axes at positions indicated by ``in_axes``
- and with output that has an additional leading array axis (with the same
- size).
+ ``fun`` but with extra array axes at positions indicated by ``in_axes`` and
+ with output that has an additional leading array axis (with the same size).
- For example, assuming 8 XLA devices are available, :py:func:`pmap` can be used as a
- map along a leading array axis:
+ For example, assuming 8 XLA devices are available, :py:func:`pmap` can be used
+ as a map along a leading array axis:
>>> import jax.numpy as jnp
>>>
@@ -1395,9 +1403,9 @@ def pmap(
>>> pmap(lambda x: x ** 2)(jnp.arange(9)) # doctest: +SKIP
ValueError: ... requires 9 replicas, but only 8 XLA devices are available
- As with :py:func:`vmap`, using ``None`` in ``in_axes`` indicates that an argument
- doesn't have an extra axis and should be broadcasted, rather than mapped,
- across the replicas:
+ As with :py:func:`vmap`, using ``None`` in ``in_axes`` indicates that an
+ argument doesn't have an extra axis and should be broadcasted, rather than
+ mapped, across the replicas:
>>> x, y = jnp.arange(2.), 4.
>>> out = pmap(lambda x, y: (x + y, y * 2.), in_axes=(0, None))(x, y) # doctest: +SKIP
@@ -1529,8 +1537,8 @@ def f_pmapped(*args, **kwargs):
donated_invars = (False,) * len(args)
in_axes_flat = flatten_axes("pmap in_axes", in_tree, (dyn_in_axes, 0))
global_arg_shapes_flat = flatten_axes("pmap global_arg_shapes", in_tree,
- (dyn_global_arg_shapes, None))
- local_axis_size = _mapped_axis_size(in_tree, args, in_axes_flat, "pmap")
+ (dyn_global_arg_shapes, None), kws=True)
+ local_axis_size = _mapped_axis_size(in_tree, args, in_axes_flat, "pmap", kws=True)
for arg in args: _check_arg(arg)
flat_fun, out_tree = flatten_fun(f, in_tree)
if not config.omnistaging_enabled and out_axes != 0:
diff --git a/jax/api_util.py b/jax/api_util.py
--- a/jax/api_util.py
+++ b/jax/api_util.py
@@ -16,7 +16,7 @@
from typing import Any, Tuple, Union
from .tree_util import (tree_flatten, tree_unflatten, tree_multimap, _replace_nones,
- tree_structure)
+ tree_structure, treedef_children, treedef_is_leaf)
from . import linear_util as lu
from ._src.util import safe_map, WrapHashably, Hashable
from .core import unit
@@ -166,7 +166,7 @@ def wrap_hashably(arg):
else:
return Hashable(arg)
-def flatten_axes(name, treedef, axis_tree):
+def flatten_axes(name, treedef, axis_tree, *, kws=False):
# given an axis spec tree axis_tree (a pytree with integers and Nones at the
# leaves, i.e. the Nones are to be considered leaves) that is a tree prefix of
# the given treedef, build a complete axis spec tree with the same structure
@@ -179,6 +179,12 @@ def flatten_axes(name, treedef, axis_tree):
try:
tree_multimap(add_leaves, _replace_nones(proxy, axis_tree), dummy)
except ValueError:
+ if kws:
+ # if keyword arguments are included in the tree, we make adapt the error
+ # message only to be about the positional arguments
+ treedef, leaf = treedef_children(treedef)
+ assert treedef_is_leaf(leaf)
+ axis_tree, _ = axis_tree
raise ValueError(f"{name} specification must be a tree prefix of the "
f"corresponding value, got specification {axis_tree} "
f"for value tree {treedef}.") from None
| diff --git a/tests/batching_test.py b/tests/batching_test.py
--- a/tests/batching_test.py
+++ b/tests/batching_test.py
@@ -1160,6 +1160,16 @@ def f(x, y):
self.assertAllClose(x_bar, jnp.dot(z_bar, y.T))
self.assertAllClose(y_bar, jnp.dot(x.T, z_bar))
+ def testVmapKwargs(self):
+ # https://github.com/google/jax/issues/912
+
+ def f(a, b):
+ return (2*a, 3*b)
+
+ x = vmap(f)(jnp.array([1]), jnp.array([2])) # works
+ y = vmap(f)(a=jnp.array([1]), b=jnp.array([2])) # doesn't work
+ self.assertAllClose(x, y)
+
if __name__ == '__main__':
absltest.main(testLoader=jtu.JaxTestLoader())
| vmap doesn't handle named arguments.
Minimalish example:
```python
import jax
import jax.numpy as jnp

def f(a, b):
    return (2*a, 3*b)

jax.vmap(f)(jnp.array([1]), jnp.array([2]))       # works
jax.vmap(f)(a=jnp.array([1]), b=jnp.array([2]))   # doesn't work
```
The last line fails with `TypeError: reduce() of empty sequence with no initial value`.
| I came across a situation where `vmap` will appear to allow keyword arguments but produce incorrect results. In my case the incorrect results were valid values which allowed learning to continue (badly). Ideally an exception would be raised until keyword arguments are supported.
```python
import jax
import jax.numpy as jnp

def wrapped_argmax(dummy, array):
    return jnp.argmax(array)
batch_wrapped_argmax = jax.vmap(wrapped_argmax)
array = jnp.array([[0, 3, 0], [0, 0, 2], [1, 0, 0]], dtype=jnp.float32)
dummy = jnp.zeros_like(array)
no_kw = batch_wrapped_argmax(
dummy,
array,
)
print("good", no_kw)
kw = batch_wrapped_argmax(
dummy,
array=array,
)
print("bad ", kw)
kw_slice = batch_wrapped_argmax(
dummy[2:3],
array=array[2:3],
)
print("good", kw_slice)
try:
batch_wrapped_argmax(
dummy=dummy,
array=array,
)
except Exception as err:
print(err)
# Output:
# good [1 2 0]
# bad [1 1 1]
# good [0]
# reduce() of empty sequence with no initial value
```
@mattjj Is it possible to revert this change or (probably better) replace the NotImplementedError by a warning? Some code that we use in jaxmd relies on passing kwargs to vmapped function (with the expectation that they will not be vmapped over, so for us it's working as expected).
> Some code that we use in jaxmd relies on passing kwargs to vmapped function (with the expectation that they will not be vmapped over, so for us it's working as expected).
I don't know the details of what you are trying to do, but would it be perhaps easier/clearer to use something like `functools.partial` to feed in the non-vmapped arguments in such cases?
So, something like the following?
```
g = functools.partial(f, a=a, b=b) # setting generic, non-vmapped, kwargs
h = jax.vmap(g)
z = h(x, y)
```
I appreciate this might be slightly less elegant for your use case. But I think the alternative may be confusing at times, if the code behaves unintuitively to users (but doesn't fail)?
I think ideally named arguments would eventually be supported for `vmap` to map over. I suppose that would break your current code, since it relies on them behaving differently?
Ah, thanks for the recommendation. If it feels like my suggestion will adversely affect user experience then I can use a pattern like that.
When calling `f(x, y, a=a, b=b)` in a loop, however, it feels a bit awkward to call `jit(vmap(g))` in the loop (though I guess it will hit the cache and so shouldn't be costly). I am definitely not proposing that jax be silent re: use of kwargs. I just think a warning stating that kwargs are not batched over might suffice to let users know about that behavior and reason about it accordingly.
Regarding future compatibility, I would assume - as is the case with positional arguments - that eventually vmap will allow one to decide whether or not kwargs are to be vmapped over.
@sschoenholz actually `jit(vmap(g))` _won't_ hit the cache, because `vmap` (like other transformations) produces a new Python callable object each time it's applied. (I promised Adam I'd write a doc explaining when to expect cache hits...)
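(A small illustration of that point, with hypothetical placeholder code rather than anything from the thread: build the transformed callable once, outside the loop, so the same Python callable, and hence `jit`'s cache, is reused.)
```python
import functools
import jax
import jax.numpy as jnp

def f(x, y, a, b):
    return a * x + b * y

a, b = 2.0, 3.0
batches = [(jnp.arange(4.0), jnp.ones(4)) for _ in range(3)]

# jax.jit(jax.vmap(...)) inside the loop would create a fresh callable each
# iteration and miss the compilation cache; hoisting it out avoids that.
g = jax.jit(jax.vmap(functools.partial(f, a=a, b=b)))
for x, y in batches:
    z = g(x, y)
```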
As for the main topic of the thread: the main reason kwargs are hard to support with vmapped functions is that the way in_axes are specified really depends on arguments being passed as positional. But we can cover some special cases, including the one in the OP. I'll try that! | 2021-01-13T03:41:55 |
google/jax | 5,423 | google__jax-5423 | [
"5275"
] | 8275da24336170424aa224c95ea033d80cf1d85c | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,6 +26,7 @@
author='JAX team',
author_email='[email protected]',
packages=find_packages(exclude=["examples"]),
+ package_data={'jax': ['py.typed']},
python_requires='>=3.6',
install_requires=[
'numpy >=1.12',
| PEP 561 Compliance
It's really great to see type annotations throughout the library! When I try to use `mypy` on my project which uses JAX, I get the following error:
```bash
error: Skipping analyzing 'jax': found module but no type hints or library stubs
```
I think there needs to be an empty `py.typed` file in the root of the source files, as per [PEP 561](https://www.python.org/dev/peps/pep-0561/), and a corresponding entry in the `setup.py`:
```python
setup(
    ...
    package_data={'jax': ['py.typed']},
    ...
)
```
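A quick way to check whether an installed copy ships the marker (a hypothetical check, not part of the proposal itself):
```python
import pathlib
import jax

marker = pathlib.Path(jax.__file__).parent / "py.typed"
print(marker.exists())  # True once the installed package ships the PEP 561 marker
```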
| This is in progress in #4711
Thanks @jakevdp! | 2021-01-14T20:09:33 |
|
google/jax | 5,468 | google__jax-5468 | [
"5440"
] | 34e798ff26018c69566557dbb97a82e327402842 | diff --git a/jax/interpreters/batching.py b/jax/interpreters/batching.py
--- a/jax/interpreters/batching.py
+++ b/jax/interpreters/batching.py
@@ -259,6 +259,8 @@ def process_custom_vjp_call(self, prim, fun, fwd, bwd, tracers, *, out_trees):
out_dims = out_dims[-len(out_vals) % len(out_dims):]
return [BatchTracer(self, v, d) for v, d in zip(out_vals, out_dims)]
+ post_process_custom_vjp_call = post_process_custom_jvp_call
+
def _main_trace_for_axis_names(main_trace: core.MainTrace,
axis_name: Union[core.AxisName, Tuple[core.AxisName, ...]]
) -> bool:
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -3866,6 +3866,47 @@ def g(x, y):
expected = jnp.cos(3.)
self.assertAllClose(ans, expected, check_dtypes=False)
+ def test_closed_over_tracer2(self):
+ def outer(x):
+ @api.custom_vjp
+ def f(y):
+ return x * y
+ def f_fwd(y):
+ return f(y), jnp.cos(y)
+ def f_rev(cos_y, g):
+ return (cos_y * g,)
+ f.defvjp(f_fwd, f_rev)
+ return f
+
+ @api.vmap
+ def g(x):
+ return outer(x)(3.)
+
+ ans = g(np.arange(3.))
+ expected = np.arange(3.) * 3
+ self.assertAllClose(ans, expected, check_dtypes=False)
+
+ def test_closed_over_tracer3(self):
+ def outer(x):
+ @api.custom_vjp
+ def f(y):
+ return x * y
+ def f_fwd(y):
+ return f(y), (x, jnp.cos(y))
+ def f_rev(res, g):
+ x, cos_y = res
+ return (cos_y * g * x,)
+ f.defvjp(f_fwd, f_rev)
+ return api.grad(f)
+
+ @api.vmap
+ def g(x):
+ return outer(x)(3.)
+
+ ans = g(np.arange(3.))
+ expected = np.cos(3.) * np.arange(3.)
+ self.assertAllClose(ans, expected, check_dtypes=False)
+
def test_nondiff_arg_tracer_error(self):
# This is similar to the old (now skipped) test_nondiff_arg_tracer, except
# we're testing for the error message that that usage pattern now raises.
| Error when `custom_vjp`-wrapped original function closes over batch (vmap) tracer
Defining a `custom_vjp` and then calling its forward implementation (without even using `vjp` at all!) fails with a confusing error message when:
- the arguments to the `custom_vjp` function are not batched
- but the function closes over some value that is batched
```
def bad(x, y):
@jax.custom_derivatives.custom_vjp
def go(y): return x
def never_called(*args): raise NotImplementedError()
go.defvjp(never_called, never_called)
return go(y)
bad(1., 2.) # -> 1.
jax.vmap(bad)(jnp.array([1.]), jnp.array([2.])) # -> [1.]
jax.vmap(bad, in_axes=[0, None])(jnp.array([1.]), 2.) # -> error
```
The error in question:
```
<ipython-input-79-21aaf8a8397e> in <module>()
----> 1 jax.vmap(bad, in_axes=[0, None])(jnp.array([1.]), 2.)
```
<details>
<summary><code>--- (click to expand backtrace) ---</code></summary>
<pre><code>
.../jax/api.py in batched_fun(*args)
1215 lambda: flatten_axes("vmap out_axes", out_tree(),
1216 out_axes),
-> 1217 axis_name=axis_name)
1218 return tree_unflatten(out_tree(), out_flat)
1219
<br><br>
.../jax/interpreters/batching.py in batch(fun, in_vals, in_dims, out_dim_dests, axis_name)
33 # executes a batched version of `fun` following out_dim_dests
34 batched_fun = batch_fun(fun, in_dims, out_dim_dests, axis_name=axis_name)
---> 35 return batched_fun.call_wrapped(*in_vals)
36
37 @lu.transformation_with_aux
<br><br>
.../jax/linear_util.py in call_wrapped(self, *args, **kwargs)
158
159 try:
--> 160 ans = self.f(*args, **dict(self.params, **kwargs))
161 except:
162 # Some transformations yield from inside context managers, so we have to
<br><br>
<ipython-input-75-0975376ba419> in bad(x, y)
4 def never_called(*args): raise NotImplementedError()
5 go.defvjp(never_called, never_called)
----> 6 return go(y)
<br><br>
.../jax/custom_derivatives.py in __call__(self, *args, **kwargs)
488 if config.omnistaging_enabled:
489 out_flat = custom_vjp_call_p.bind(flat_fun, flat_fwd, flat_bwd, *args_flat,
--> 490 out_trees=out_trees)
491 fst, aux = lu.merge_linear_aux(out_tree, out_trees)
492 out_tree = aux if fst else aux[0]
<br><br>
.../jax/custom_derivatives.py in bind(self, fun, fwd, bwd, out_trees, *args)
577 with core.maybe_new_sublevel(top_trace):
578 outs = top_trace.process_custom_vjp_call(self, fun, fwd, bwd, tracers,
--> 579 out_trees=out_trees)
580 _, env_trace_todo = lu.merge_linear_aux(env_trace_todo1, env_trace_todo2)
581 return _apply_todos(env_trace_todo, map(core.full_lower, outs))
<br><br>
.../jax/core.py in process_custom_vjp_call(***failed resolving arguments***)
605 def process_custom_vjp_call(self, primitive, fun, fwd, bwd, tracers, out_trees):
606 del primitive, fwd, bwd, out_trees # Unused.
--> 607 return fun.call_wrapped(*tracers)
608
609
<br><br>
.../jax/linear_util.py in call_wrapped(***failed resolving arguments***)
171 while stack:
172 gen, out_store = stack.pop()
--> 173 ans = gen.send(ans)
174 if out_store is not None:
175 ans, side = ans
<br><br>
.../jax/core.py in process_env_traces(primitive, level, params_tuple, out_axes_transforms, *args)
1195 trace = ans._trace.main.with_cur_sublevel()
1196 outs = map(trace.full_raise, outs)
-> 1197 outs, cur_todo = primitive.post_process(trace, outs, params)
1198 if isinstance(primitive, MapPrimitive):
1199 cur_todo, out_axes_transform = cur_todo
</code></pre>
</details>
```
.../jax/custom_derivatives.py in post_process(self, trace, out_tracers, params)
586
587 def post_process(self, trace, out_tracers, params):
--> 588 return trace.post_process_custom_vjp_call(out_tracers, params)
589 custom_vjp_call_p = CustomVJPCallPrimitive('custom_vjp_call')
590
AttributeError: 'BatchTrace' object has no attribute 'post_process_custom_vjp_call'
```
Note that the existing `api_test.test_bwd_closes_over_tracer` test covers closing over a tracer in the backward pass implementation. However, this error happens when closing over a tracer in the non-vjp implementation, and happens even if we don't ask for any gradients.
| @mattjj @jekbradbury | 2021-01-20T03:11:14 |
google/jax | 5,495 | google__jax-5495 | [
"5463"
] | 9ccfc9fd48b097c78c2d0fae515ff6bd52c0f681 | diff --git a/jax/core.py b/jax/core.py
--- a/jax/core.py
+++ b/jax/core.py
@@ -1134,6 +1134,7 @@ def join(self, other):
else:
assert False, f"Cannot join {self} with {other}"
def str_short(self): return 'Tok'
+ def at_least_vspace(self): return self
abstract_token = AbstractToken()
diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -367,10 +367,9 @@ def _execute_replicated_primitive(prim, compiled, result_handler, *args):
def check_special(prim, bufs):
- for buf in bufs:
- # TODO(jblespiau): We can simply use buf.xla_shape() when version 0.1.58 is
- # the default.
- _check_special(prim.name, getattr(buf, "xla_shape", buf.shape)(), buf)
+ if FLAGS.jax_debug_infs or FLAGS.jax_debug_nans:
+ for buf in bufs:
+ _check_special(prim.name, buf.xla_shape(), buf)
def _check_special(name, xla_shape, buf):
assert not xla_shape.is_tuple()
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2160,6 +2160,20 @@ def test_linearize_aval_error(self):
with self.assertRaisesRegex(ValueError, "tangent values inconsistent"):
f_jvp(np.ones(2, np.int32))
+ def test_grad_of_token_consuming_primitive(self):
+ # https://github.com/google/jax/issues/5463
+ tokentest_p = core.Primitive("tokentest")
+ tokentest_p.def_impl(partial(xla.apply_primitive, tokentest_p))
+ tokentest_p.def_abstract_eval(lambda x, y: x)
+ xla.translations[tokentest_p] = lambda c, x, y: x
+ ad.defjvp(tokentest_p, (lambda g, x, token: x), None)
+
+ token = jax.lax.create_token(123)
+ arr = jnp.ones((3, 2))
+ res, vjp_fun = jax.vjp(lambda x: tokentest_p.bind(x, token), arr)
+ # Should not crash.
+ vjp_fun(arr)
+
class RematTest(jtu.JaxTestCase):
| Breaking changes on vjp with tokens for `jax>=0.2.8`?
After updating jax to the latest version I'm starting to see test failures on [mpi4jax](https://github.com/PhilipVinc/mpi4jax) when calling `vjp` on functions that accept tokens.
see below:
```python
import jax
import jax.numpy as jnp
from mpi4py import MPI


def test_allreduce_vjp():
    from mpi4jax import Allreduce
token = jax.lax.create_token(123)
arr = jnp.ones((3, 2))
_arr = arr.copy()
res, vjp_fun = jax.vjp(lambda x: Allreduce(x, op=MPI.SUM, token=token)[0], arr)
(vjp,) = vjp_fun(_arr)
expected, _ = Allreduce(arr, op=MPI.SUM)
assert jnp.array_equal(expected, res)
assert jnp.array_equal(_arr, vjp)
```
fails with error
```python
tests/test_collective_ops.py:202:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/api.py:1866: in vjp
return _vjp(lu.wrap_init(fun), *primals, has_aux=has_aux)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/api.py:1874: in _vjp
out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/interpreters/ad.py:114: in vjp
out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/interpreters/ad.py:101: in linearize
jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/interpreters/partial_eval.py:506: in trace_to_jaxpr
jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/linear_util.py:160: in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
tests/test_collective_ops.py:202: in <lambda>
res, vjp_fun = jax.vjp(lambda x: Allreduce(x, op=MPI.SUM, token=token)[0], arr)
mpi4jax/validation.py:90: in wrapped
return function(*args, **kwargs)
mpi4jax/collective_ops/allreduce.py:68: in Allreduce
return mpi_allreduce_p.bind(x, token, op=op, comm=comm, transpose=_transpose)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/core.py:270: in bind
tracers = map(top_trace.full_raise, args)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/_src/util.py:37: in safe_map
return list(map(f, *args))
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/core.py:377: in full_raise
return self.pure(val)
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/interpreters/ad.py:269: in pure
tangent_zero = Zero(get_aval(val).at_least_vspace())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = AbstractToken()
def at_least_vspace(self):
> raise NotImplementedError("must override")
E NotImplementedError: must override
../../../../../Documents/pythonenvs/mpi4jax_env/lib/python3.8/site-packages/jax/core.py:803: NotImplementedError
```
In mpi4jax, the culprit seems to be
`return mpi_allreduce_p.bind(x, token, op=op, comm=comm, transpose=_transpose)`, which normally works fine, also under jit, but if under a vjp transformation he does not like the token to be there?
This code was working with a previous version of jax.
| This test was successful with `jax==0.2.7` and is broken with `jax>=0.2.8` | 2021-01-22T15:58:53 |
google/jax | 5,526 | google__jax-5526 | [
"5522"
] | 1fd1faa06ca12fed2332ec52b943b4a76bd99da1 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -2000,6 +2000,7 @@ def transposed_fun(out_cotangent):
def make_jaxpr(fun: Callable,
static_argnums: Union[int, Iterable[int]] = (),
+ axis_env: Optional[Sequence[Tuple[AxisName, int]]] = None,
return_shape: bool = False,
) -> Callable[..., core.ClosedJaxpr]:
"""Creates a function that produces its jaxpr given example args.
@@ -2009,6 +2010,12 @@ def make_jaxpr(fun: Callable,
arguments and return value should be arrays, scalars, or standard Python
containers (tuple/list/dict) thereof.
static_argnums: See the :py:func:`jax.jit` docstring.
+ axis_env: Optional, a sequence of pairs where the first element is an axis
+ name and the second element is a positive integer representing the size of
+ the mapped axis with that name. This parameter is useful when lowering
+ functions that involve parallel communication collectives, and it
+ specifies the axis name/size environment that would be set up by
+ applications of :py:func:`jax.pmap`.
return_shape: Optional boolean, defaults to ``False``. If ``True``, the
wrapped function returns a pair where the first element is the ``jaxpr``
and the second element is a pytree with the same structure as
@@ -2069,8 +2076,14 @@ def jaxpr_maker(*args, **kwargs):
jaxtree_fun, out_tree = flatten_fun(wrapped, in_tree)
in_avals = [raise_to_shaped(core.get_aval(x)) for x in jax_args]
if config.omnistaging_enabled:
- jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(jaxtree_fun, in_avals)
+ with ExitStack() as stack:
+ for axis_name, size in axis_env or []:
+ stack.enter_context(core.extend_axis_env(axis_name, size, None))
+ jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(jaxtree_fun, in_avals)
else:
+ if axis_env:
+ raise NotImplementedError(
+ "axis_env argument to make_jaxpr only supported with omnistaging.")
in_pvals = [pe.PartialVal.unknown(a) for a in in_avals]
jaxpr, out_pvals, consts = pe.trace_to_jaxpr(
jaxtree_fun, in_pvals, instantiate=True, stage_out=True) # type: ignore
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2775,6 +2775,15 @@ def test_make_jaxpr_return_shape(self):
api.ShapeDtypeStruct(shape=(2,), dtype=jnp.float32))
self.assertEqual(shape_tree, expected)
+ def test_make_jaxpr_axis_env(self):
+ if not config.omnistaging_enabled:
+ raise unittest.SkipTest("test only works with omnistaging")
+
+ def f(x):
+ return x - lax.psum(x, 'i')
+ jaxpr = api.make_jaxpr(f, axis_env=[('i', 4)])(2)
+ self.assertIn('psum', str(jaxpr))
+
class LazyTest(jtu.JaxTestCase):
| `make_jaxpr` does not work with any parallel operators
Currently `make_jaxpr` does not work when there are any parallel operations, such as `psum`/`pmean`, inside the computation. This is also reflected in the fact that the `jaxpr` returned by `make_jaxpr` is, in general, not aware of any parallel computation.
This breaks any code that needs to analyze or internally transform functions based on their `jaxpr`.
`make_jaxpr` needs some way of adding the parallel environment to it.
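For reference, the patch above addresses this by adding an `axis_env` argument; a sketch of the intended usage, based on the test added in this PR (requires omnistaging):
```python
import jax

def f(x):
    return x - jax.lax.psum(x, 'i')

# Without an axis environment this fails, since the axis name 'i' is only
# bound inside pmap; with the new argument the name/size can be supplied:
jaxpr = jax.make_jaxpr(f, axis_env=[('i', 4)])(2)
print(jaxpr)
```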
| 2021-01-27T01:25:46 |
|
google/jax | 5,547 | google__jax-5547 | [
"5536"
] | 3d5d0737662528bb628f2b1a3cafae16083b0c09 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -1147,7 +1147,15 @@ def reduce(operands: Array, init_values: Array, computation: Callable,
@cache()
def _reduction_jaxpr(computation, aval):
pval = pe.PartialVal.unknown(aval)
- comp = lu.wrap_init(lambda x, y: (computation(x, y),))
+ @lu.wrap_init
+ def comp(x, y):
+ result = computation(x, y)
+ if not (isinstance(result, core.Tracer) or core.valid_jaxtype(result)):
+ raise ValueError(
+ f"Invalid return type from reduction function: {type(result)}\n"
+ f"Reduction functions should only return an array.\n"
+ f"Full return value: {result}")
+ return (result,)
jaxpr, _, consts = pe.trace_to_jaxpr(comp, (pval, pval), instantiate=False)
return jaxpr, consts
diff --git a/jax/interpreters/partial_eval.py b/jax/interpreters/partial_eval.py
--- a/jax/interpreters/partial_eval.py
+++ b/jax/interpreters/partial_eval.py
@@ -517,6 +517,10 @@ def trace_to_subjaxpr(main: core.MainTrace, instantiate: Union[bool, Sequence[bo
trace = JaxprTrace(main, core.cur_sublevel())
in_tracers = map(trace.new_arg, pvals)
ans = yield in_tracers, {}
+ assert isinstance(ans, (list, tuple)), (
+ f"Got unexpected return type when tracing function to jaxpr: {ans}")
+ assert all(isinstance(x, core.Tracer) or core.valid_jaxtype(x) for x in ans), (
+ f"Got unexpected return type when tracing function to jaxpr: {ans}")
instantiate = [instantiate] * len(ans) if isinstance(instantiate, bool) else instantiate
out_tracers = map(trace.full_raise, map(core.full_lower, ans))
out_tracers = map(partial(instantiate_const_at, trace), instantiate, out_tracers)
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -1571,6 +1571,15 @@ def zero_stride_test():
with self.assertRaisesRegex(TypeError, "must have every element be"):
failure_fun()
+ with self.assertRaisesRegex(
+ ValueError,
+ "Invalid return type from reduction function: <class 'list'>\n"
+ "Reduction functions should only return an array.\n"
+ "Full return value: .*"):
+ return lax.reduce_window(
+ np.ones((1,)), 0., lambda x, y: [x + y],
+ padding='VALID', window_dimensions=(1,), window_strides=(1,))
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": (f"_shape={shape}_windowdimensions={window_dimensions}"
f"_basedilation={base_dilation}_windowdilation="
| Bug in shape checking rule when calling jax.lax.reduce_window
I stumbled on this by accident and the error message says to report it:
```python
import numpy as np
import jax
import jax.numpy as jnp

jax.lax.reduce_window(
    np.zeros((10, 28, 28, 3), dtype=np.float32), jnp.inf, lambda *args: [jax.lax.min(*args)], [1,3,3,1], [1,2,2,1], "SAME")
```
The computation argument appears to be the one triggering the bug; plain `jax.lax.min` works fine.
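For reference, the immediate trigger is the reduction function returning a Python list instead of an array, which the patch above now reports with a clear error; a corrected call would look like this (illustrative):
```python
import numpy as np
import jax

jax.lax.reduce_window(
    np.zeros((10, 28, 28, 3), dtype=np.float32),
    np.array(np.inf, dtype=np.float32),   # identity for a min-reduction
    jax.lax.min,                          # return an array, not a list
    (1, 3, 3, 1), (1, 2, 2, 1), "SAME")
```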
| 2021-01-29T00:28:34 |
|
google/jax | 5,584 | google__jax-5584 | [
"5570"
] | 2a7697858a0c5e05fd08a8493535879d1a44723d | diff --git a/jax/_src/scipy/stats/multivariate_normal.py b/jax/_src/scipy/stats/multivariate_normal.py
--- a/jax/_src/scipy/stats/multivariate_normal.py
+++ b/jax/_src/scipy/stats/multivariate_normal.py
@@ -44,7 +44,7 @@ def logpdf(x, mean, cov, allow_singular=None):
L = lax.linalg.cholesky(cov)
y = lax.linalg.triangular_solve(L, x - mean, lower=True, transpose_a=True)
return (-1/2 * jnp.einsum('...i,...i->...', y, y) - n/2*np.log(2*np.pi)
- - jnp.log(L.diagonal()).sum())
+ - jnp.log(L.diagonal(axis1=-1, axis2=-2)).sum(-1))
@_wraps(osp_stats.multivariate_normal.pdf, update_doc=False)
def pdf(x, mean, cov):
| diff --git a/tests/scipy_stats_test.py b/tests/scipy_stats_test.py
--- a/tests/scipy_stats_test.py
+++ b/tests/scipy_stats_test.py
@@ -20,6 +20,7 @@
import numpy as np
import scipy.stats as osp_stats
+from jax import api
from jax import test_util as jtu
from jax.scipy import stats as lsp_stats
from jax.scipy.special import expit
@@ -470,6 +471,24 @@ def args_maker():
self._CompileAndCheck(lsp_stats.multivariate_normal.logpdf, args_maker,
rtol=1e-4, atol=1e-4)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_ndim={}_nbatch={}_dtype={}".format(ndim, nbatch, dtype.__name__),
+ "ndim": ndim, "nbatch": nbatch, "dtype": dtype}
+ for ndim in [2, 3]
+ for nbatch in [1, 3, 5]
+ for dtype in jtu.dtypes.floating))
+ def testMultivariateNormalLogpdfBatch(self, ndim, nbatch, dtype):
+ # Regression test for #5570
+ rng = jtu.rand_default(self.rng())
+ x = rng((nbatch, ndim), dtype)
+ mean = 5 * rng((nbatch, ndim), dtype)
+ factor = rng((nbatch, ndim, 2 * ndim), dtype)
+ cov = factor @ factor.transpose(0, 2, 1)
+
+ result1 = lsp_stats.multivariate_normal.logpdf(x, mean, cov)
+ result2 = api.vmap(lsp_stats.multivariate_normal.logpdf)(x, mean, cov)
+ self.assertArraysEqual(result1, result2)
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| `inf` using multivariate_normal.logpdf with multiple batches
`multivariate_normal.logpdf` gives `inf`s when there are multiple batches:
```
>>> from jax import scipy
>>> import jax.numpy as np
>>> from jax.scipy.stats import multivariate_normal
>>> multivariate_normal.logpdf(np.array([[1. ,2.],
... [2., 3.]]),
... np.array([[1., 2.],
... [2., 3.]]),
... np.array([[[1., 0.2],
... [0.2, 1.]],
... [[1., 0.3],
... [0.3, 1.]]]))
DeviceArray([inf, inf], dtype=float32)
```
| Thanks for the report!
I beleive the issue is that the diagonal of the decomposed covariance is not properly handled in the block case. It looks like changing this line: https://github.com/google/jax/blob/bb0750f31a35e2661834e8b532781785fe63496e/jax/_src/scipy/stats/multivariate_normal.py#L46-L47
to this:
```python
return (-1/2 * jnp.einsum('...i,...i->...', y, y) - n/2*np.log(2*np.pi)
- jnp.log(L.diagonal(axis1=-1, axis2=-2)).sum(-1))
```
will make things work correctly.
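(For intuition, an illustration not from the thread: on a stack of matrices, the default `diagonal()` pairs the leading batch axis with a matrix axis, which is what goes wrong in the batched case.)
```python
import numpy as np

L = np.zeros((5, 3, 3))  # a batch of five 3x3 Cholesky factors
print(L.diagonal().shape)                    # (3, 3): pairs the batch axis with a matrix axis
print(L.diagonal(axis1=-2, axis2=-1).shape)  # (5, 3): one diagonal per batch element
```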
I can prepare a fix with tests sometime tomorrow if nobody gets to it before that. | 2021-02-01T18:03:17 |
google/jax | 5,649 | google__jax-5649 | [
"5645"
] | 3575bc7639a8f222417d33adc2112ebe766770af | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -1665,6 +1665,7 @@ def broadcast_arrays(*args):
""")
def broadcast_to(arr, shape):
arr = arr if isinstance(arr, ndarray) else array(arr)
+ shape = (shape,) if ndim(shape) == 0 else shape
shape = canonicalize_shape(shape) # check that shape is concrete
arr_shape = _shape(arr)
if arr_shape == shape:
@@ -2879,17 +2880,26 @@ def ones_like(a, dtype=None, shape=None):
@_wraps(np.full)
def full(shape, fill_value, dtype=None):
lax._check_user_dtype_supported(dtype, "full")
- shape = (shape,) if ndim(shape) == 0 else shape
- return lax.full(shape, fill_value, dtype)
+ _check_arraylike("full", fill_value)
+ if ndim(fill_value) == 0:
+ shape = (shape,) if ndim(shape) == 0 else shape
+ return lax.full(shape, fill_value, dtype)
+ else:
+ return broadcast_to(asarray(fill_value, dtype=dtype), shape)
@_wraps(np.full_like)
def full_like(a, fill_value, dtype=None, shape=None):
- _check_arraylike("full_like", a)
lax._check_user_dtype_supported(dtype, "full_like")
- if np.isscalar(shape):
- shape = (shape,)
- return lax.full_like(a, fill_value, dtype, shape)
+ _check_arraylike("full_like", a, fill_value)
+ if shape is not None:
+ shape = (shape,) if ndim(shape) == 0 else shape
+ if ndim(fill_value) == 0:
+ return lax.full_like(a, fill_value, dtype, shape)
+ else:
+ shape = np.shape(a) if shape is None else shape
+ dtype = _dtype(a) if dtype is None else dtype
+ return broadcast_to(asarray(fill_value, dtype=dtype), shape)
@_wraps(np.zeros)
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -452,6 +452,8 @@ def _dims_of_shape(shape):
return shape
elif isinstance(shape, ScalarShape):
return ()
+ elif np.ndim(shape) == 0:
+ return (shape,)
else:
raise TypeError(type(shape))
@@ -467,6 +469,9 @@ def _cast_to_shape(value, shape, dtype):
elif type(shape) in (list, tuple):
assert np.shape(value) == tuple(shape)
return value
+ elif np.ndim(shape) == 0:
+ assert np.shape(value) == (shape,)
+ return value
else:
raise TypeError(type(shape))
diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -61,7 +61,7 @@
array_shapes = nonempty_array_shapes + empty_array_shapes
nonzerodim_shapes = nonempty_nonscalar_array_shapes + empty_array_shapes
nonempty_shapes = scalar_shapes + nonempty_array_shapes
-all_shapes = scalar_shapes + array_shapes
+all_shapes = scalar_shapes + array_shapes
float_dtypes = jtu.dtypes.all_floating
complex_dtypes = jtu.dtypes.complex
@@ -92,7 +92,7 @@ def _shape_and_dtypes(shapes, dtypes):
yield (shape, dtype)
def _compatible_shapes(shape):
- if shape in scalar_shapes:
+ if shape in scalar_shapes or np.ndim(shape) == 0:
return [shape]
return (shape[n:] for n in range(len(shape) + 1))
@@ -2483,19 +2483,21 @@ def testHVDStack(self, shape, op, dtypes):
self._CompileAndCheck(jnp_fun, args_maker)
@parameterized.named_parameters(jtu.cases_from_list(
- {"testcase_name": "_inshape={}_outdtype={}".format(
+ {"testcase_name": "_inshape={}_outdtype={}_fillshape={}".format(
jtu.format_shape_dtype_string(shape, fill_value_dtype),
- np.dtype(out_dtype).name if out_dtype else "None"),
- "shape": shape, "fill_value_dtype": fill_value_dtype,
- "out_dtype": out_dtype}
+ np.dtype(out_dtype).name if out_dtype else "None",
+ fill_value_shape),
+ "fill_value_dtype": fill_value_dtype, "fill_value_shape": fill_value_shape,
+ "shape": shape, "out_dtype": out_dtype}
for shape in array_shapes + [3, np.array(7, dtype=np.int32)]
for fill_value_dtype in default_dtypes
+ for fill_value_shape in _compatible_shapes(shape)
for out_dtype in [None] + default_dtypes))
- def testFull(self, shape, fill_value_dtype, out_dtype):
+ def testFull(self, shape, fill_value_dtype, fill_value_shape, out_dtype):
rng = jtu.rand_default(self.rng())
np_fun = lambda fill_value: np.full(shape, fill_value, dtype=out_dtype)
jnp_fun = lambda fill_value: jnp.full(shape, fill_value, dtype=out_dtype)
- args_maker = lambda: [rng((), fill_value_dtype)]
+ args_maker = lambda: [rng(fill_value_shape, fill_value_dtype)]
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
self._CompileAndCheck(jnp_fun, args_maker)
@@ -2558,20 +2560,20 @@ def testOnesWithInvalidShape(self):
@unittest.skipIf(numpy_version < (1, 17), "shape parameter not supported in older numpy")
@parameterized.named_parameters(jtu.cases_from_list(
- {"testcase_name": "_inshape={}_filldtype={}_outdtype={}_outshape={}".format(
+ {"testcase_name": "_inshape={}_filldtype={}_fillshape={}_outdtype={}_outshape={}".format(
jtu.format_shape_dtype_string(shape, in_dtype),
- np.dtype(fill_value_dtype).name,
- np.dtype(out_dtype).name,
- out_shape),
+ np.dtype(fill_value_dtype).name, fill_value_shape,
+ np.dtype(out_dtype).name, out_shape),
"shape": shape, "in_dtype": in_dtype,
- "fill_value_dtype": fill_value_dtype, "out_dtype": out_dtype,
- "out_shape": out_shape}
+ "fill_value_dtype": fill_value_dtype, "fill_value_shape": fill_value_shape,
+ "out_dtype": out_dtype, "out_shape": out_shape}
for shape in array_shapes
for out_shape in [None] + array_shapes
for in_dtype in default_dtypes
for fill_value_dtype in default_dtypes
+ for fill_value_shape in _compatible_shapes(shape if out_shape is None else out_shape)
for out_dtype in default_dtypes))
- def testFullLike(self, shape, in_dtype, fill_value_dtype, out_dtype, out_shape):
+ def testFullLike(self, shape, in_dtype, fill_value_dtype, fill_value_shape, out_dtype, out_shape):
if numpy_version < (1, 19) and out_shape == ():
raise SkipTest("Numpy < 1.19 treats out_shape=() like out_shape=None")
rng = jtu.rand_default(self.rng())
@@ -2579,7 +2581,7 @@ def testFullLike(self, shape, in_dtype, fill_value_dtype, out_dtype, out_shape):
x, fill_value, dtype=out_dtype, shape=out_shape)
jnp_fun = lambda x, fill_value: jnp.full_like(
x, fill_value, dtype=out_dtype, shape=out_shape)
- args_maker = lambda: [rng(shape, in_dtype), rng((), fill_value_dtype)]
+ args_maker = lambda: [rng(shape, in_dtype), rng(fill_value_shape, fill_value_dtype)]
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker)
self._CompileAndCheck(jnp_fun, args_maker)
@@ -4546,6 +4548,7 @@ def body(i, xy):
[(3,), (2, 1, 3)],
[(3,), (3, 3)],
[(1,), (3,)],
+ [(1,), 3],
])
def testBroadcastTo(self, from_shape, to_shape):
rng = jtu.rand_default(self.rng())
| jax.numpy.full((2,2),[1,2]) fails
The last example at https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.full.html
`jax.numpy.full((2,2),[1,2])`
does not work. It gives
`TypeError: full must be called with scalar fill_value, got fill_value.shape (2,).`
| Thanks for the report! We’ll try to get non-scalar inputs implemented soon. | 2021-02-05T18:09:15 |
google/jax | 5,690 | google__jax-5690 | [
"5683"
] | c2455d97e95f5c3c31a96571a1236b1f786ad93d | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -2318,8 +2318,14 @@ def __init__(self, shape, dtype):
>>> print(out.dtype)
float32
"""
+ def dtype(x):
+ try:
+ return dtypes.result_type(x)
+ except ValueError:
+ return dtypes.result_type(getattr(x, 'dtype'))
+
def abstractify(x):
- return ShapedArray(np.shape(x), dtypes.result_type(x))
+ return ShapedArray(np.shape(x), dtype(x))
args_flat, in_tree = tree_flatten((args, kwargs))
wrapped_fun, out_tree = flatten_fun(lu.wrap_init(fun), in_tree)
out = pe.abstract_eval_fun(wrapped_fun.call_wrapped,
diff --git a/jax/dtypes.py b/jax/dtypes.py
--- a/jax/dtypes.py
+++ b/jax/dtypes.py
@@ -322,10 +322,12 @@ def dtype(x):
return np.result_type(x)
def _result_type_raw(*args):
- if len(args) < 2:
+ if len(args) == 1:
return _jax_type(args[0])
return _least_upper_bound(*{_jax_type(arg) for arg in args})
def result_type(*args):
"""Convenience function to apply Numpy argument dtype promotion."""
+ if len(args) == 0:
+ raise ValueError("at least one array or dtype is required")
return canonicalize_dtype(_result_type_raw(*args))
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1222,6 +1222,17 @@ def __init__(self, shape, dtype):
self.assertEqual(out_shape.shape, (3, 5))
+ def test_eval_shape_duck_typing2(self):
+ # https://github.com/google/jax/issues/5683
+ class EasyDict(dict):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.__dict__ = self
+
+ x = EasyDict(shape=(3,), dtype=np.dtype('float32'))
+ out_shape = api.eval_shape(lambda x: x, x) # doesn't crash
+ self.assertEqual(out_shape.shape, (3,))
+
def test_issue_871(self):
T = jnp.array([[1., 2.], [3., 4.], [5., 6.]])
x = jnp.array([1, 2, 3])
| jax.eval_shape does not understand arbitrary object dtype
```python
import objax
import jax
m = objax.nn.Sequential([objax.nn.Linear(3, 4)])
v = objax.random.uniform(((32, 3)))
jax.eval_shape(m, v) # ShapeDtypeStruct(shape=(32, 4), dtype=float32)
d = objax.util.EasyDict(shape=(32, 3), dtype=v.dtype)
print(repr(d.shape), repr(d.dtype)) # (32, 3) dtype('float32')
print(hasattr(d, 'shape'), hasattr(d, 'dtype')) # True True
jax.eval_shape(m, d)
#
Traceback (most recent call last):
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-24-d48e201250f2>", line 1, in <module>
jax.eval_shape(m, d)
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/jax/api.py", line 2283, in eval_shape
*map(abstractify, args_flat))
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/jax/util.py", line 36, in safe_map
return list(map(f, *args))
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/jax/api.py", line 2279, in abstractify
return ShapedArray(np.shape(x), dtypes.result_type(x))
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/jax/dtypes.py", line 294, in result_type
return canonicalize_dtype(dtype(args[0]))
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/jax/dtypes.py", line 288, in dtype
return np.result_type(x)
File "<__array_function__ internals>", line 5, in result_type
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/numpy/core/_internal.py", line 64, in _usefields
names, formats, offsets, titles = _makenames_list(adict, align)
File "/usr/local/google/home/dberth/jax3/lib/python3.8/site-packages/numpy/core/_internal.py", line 40, in _makenames_list
format = dtype(obj[0], align=align)
TypeError: data type not understood
```
The class EasyDict is simply this:
```python
class EasyDict(dict):
"""Custom dictionary that allows to access dict values as attributes."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__dict__ = self
```
| Thanks for surfacing this!
JAX in general does not support the object dtype. I think we could make the error message clearer there. But actually I think the main issue is something else.
I think the main issue here is that `eval_shape` isn't living up to its contract of just requiring that its input has `shape` and `dtype` attributes. That is a bug, either in how we're calling `np.shape` or how we've implemented `dtypes.result_type` (both called in `jax.eval_shape`).
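A small illustration of where it goes wrong, based on the traceback above (the exact numpy error text may vary by version): because the duck-typed object is a `dict` subclass, `np.result_type` tries to parse it as a structured-dtype specification instead of consulting its `dtype` attribute.
```python
import numpy as np

d = {"shape": (32, 3), "dtype": np.float32}
np.result_type(d)  # fails: numpy tries to parse the dict as a structured dtype spec
```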
By the way, is the main goal with `EasyDict` to set up something like [`types.SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace)? I believe `eval_shape` works with those. | 2021-02-09T19:20:11 |
google/jax | 5,720 | google__jax-5720 | [
"5719"
] | 697e983b6da1bea9e3726765d77cb58672857e68 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -1659,7 +1659,7 @@ def bincount(x, weights=None, minlength=0, *, length=None):
x = core.concrete_or_error(asarray, x,
"The error occured because of argument 'x' of jnp.bincount. "
"To avoid this error, pass a static `length` argument.")
- length = max(x) + 1
+ length = max(x, initial=-1) + 1
length = _max(length, minlength)
if ndim(x) != 1:
raise ValueError("only 1-dimensional input supported.")
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -4001,7 +4001,7 @@ def testAtLeastNdLiterals(self, pytype, dtype, op):
"weights": weights,
"minlength": minlength,
"length": length}
- for shape in [(5,), (10,)]
+ for shape in [(0,), (5,), (10,)]
for dtype in int_dtypes
for weights in [True, False]
for minlength in [0, 20]
| bincount of empty array raises
Running
`jnp.bincount(jnp.int32([]))`
raises
`ValueError: zero-size array to reduction operation max which has no identity`
In contrast, `np.bincount([])` produces `array([], dtype=int64)`
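For reference, a small plain-numpy sketch of the identity the fix above relies on: the `initial` argument gives the empty reduction a well-defined value, so the inferred output length becomes 0, matching `np.bincount([])`.
```python
import numpy as np

x = np.array([], dtype=np.int32)
# max over an empty array normally has no identity; `initial` supplies one,
# so the inferred bincount length is max(x, initial=-1) + 1 == 0.
print(np.max(x, initial=-1) + 1)  # 0
```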
| Thanks for the report! | 2021-02-12T21:19:14 |
google/jax | 5,740 | google__jax-5740 | [
"5711"
] | 29f63fea0a655a751775bbad6f1cdf7666b5c2ea | diff --git a/jax/_src/scipy/stats/dirichlet.py b/jax/_src/scipy/stats/dirichlet.py
--- a/jax/_src/scipy/stats/dirichlet.py
+++ b/jax/_src/scipy/stats/dirichlet.py
@@ -13,29 +13,40 @@
# limitations under the License.
-import numpy as np
import scipy.stats as osp_stats
from jax import lax
from jax._src.numpy import lax_numpy as jnp
from jax._src.numpy.util import _wraps
from jax.scipy.special import gammaln, xlogy
+from jax._src.numpy.lax_numpy import _promote_dtypes_inexact
def _is_simplex(x):
- x_sum = jnp.sum(x, axis=-1)
- return jnp.all(x > 0, axis=-1) & (x_sum <= 1) & (x_sum > 1 - 1e-6)
+ x_sum = jnp.sum(x, axis=0)
+ return jnp.all(x > 0, axis=0) & (abs(x_sum - 1) < 1E-6)
@_wraps(osp_stats.dirichlet.logpdf, update_doc=False)
def logpdf(x, alpha):
- args = (np.ones((0,), lax.dtype(x)), np.ones((1,), lax.dtype(alpha)))
- to_dtype = lax.dtype(osp_stats.dirichlet.logpdf(*args))
- x, alpha = [lax.convert_element_type(arg, to_dtype) for arg in (x, alpha)]
- one = jnp._constant_like(x, 1)
- normalize_term = jnp.sum(gammaln(alpha), axis=-1) - gammaln(jnp.sum(alpha, axis=-1))
- log_probs = lax.sub(jnp.sum(xlogy(lax.sub(alpha, one), x), axis=-1), normalize_term)
- return jnp.where(_is_simplex(x), log_probs, -jnp.inf)
+ x, alpha = _promote_dtypes_inexact(x, alpha)
+ if alpha.ndim != 1:
+ raise ValueError(
+ f"`alpha` must be one-dimensional; got alpha.shape={alpha.shape}"
+ )
+ if x.shape[0] not in (alpha.shape[0], alpha.shape[0] - 1):
+ raise ValueError(
+ "`x` must have either the same number of entries as `alpha` "
+ f"or one entry fewer; got x.shape={x.shape}, alpha.shape={alpha.shape}"
+ )
+ one = jnp._constant_like(x, 1)
+ if x.shape[0] != alpha.shape[0]:
+ x = jnp.concatenate([x, lax.sub(one, x.sum(0, keepdims=True))], axis=0)
+ normalize_term = jnp.sum(gammaln(alpha)) - gammaln(jnp.sum(alpha))
+ if x.ndim > 1:
+ alpha = lax.broadcast_in_dim(alpha, alpha.shape + (1,) * (x.ndim - 1), (0,))
+ log_probs = lax.sub(jnp.sum(xlogy(lax.sub(alpha, one), x), axis=0), normalize_term)
+ return jnp.where(_is_simplex(x), log_probs, -jnp.inf)
@_wraps(osp_stats.dirichlet.pdf, update_doc=False)
| diff --git a/tests/scipy_stats_test.py b/tests/scipy_stats_test.py
--- a/tests/scipy_stats_test.py
+++ b/tests/scipy_stats_test.py
@@ -30,6 +30,7 @@
config.parse_flags_with_absl()
all_shapes = [(), (4,), (3, 4), (3, 1), (1, 4), (2, 1, 4)]
+one_and_two_dim_shapes = [(4,), (3, 4), (3, 1), (1, 4)]
scipy_version = tuple(map(int, osp.version.version.split('.')[:2]))
@@ -163,22 +164,44 @@ def args_maker():
tol=1e-4)
self._CompileAndCheck(lax_fun, args_maker)
- @genNamedParametersNArgs(2)
+ @parameterized.named_parameters(
+ jtu.cases_from_list(
+ {"testcase_name": jtu.format_test_name_suffix("", [x_shape, alpha_shape], dtypes),
+ "shapes": [x_shape, alpha_shape], "dtypes": dtypes}
+ for x_shape in one_and_two_dim_shapes
+ for alpha_shape in [(x_shape[0],), (x_shape[0] + 1,)]
+ for dtypes in itertools.combinations_with_replacement(jtu.dtypes.floating, 2)
+ ))
def testDirichletLogPdf(self, shapes, dtypes):
rng = jtu.rand_positive(self.rng())
- scipy_fun = osp_stats.cauchy.logpdf
- lax_fun = lsp_stats.cauchy.logpdf
- dim = 4
- shapes = (shapes[0] + (dim,), shapes[1] + (dim,))
+
+ def _normalize(x, alpha):
+ x_norm = x.sum(0) + (0.0 if x.shape[0] == alpha.shape[0] else 0.1)
+ return (x / x_norm).astype(x.dtype), alpha
+
+ def lax_fun(x, alpha):
+ return lsp_stats.dirichlet.logpdf(*_normalize(x, alpha))
+
+ def scipy_fun(x, alpha):
+ # scipy validates the x normalization using float64 arithmetic, so we must
+ # cast x to float64 before normalization to ensure this passes.
+ x, alpha = _normalize(x.astype('float64'), alpha)
+
+ result = osp_stats.dirichlet.logpdf(x, alpha)
+ # if x.shape is (N, 1), scipy flattens the output, while JAX returns arrays
+ # of a consistent rank. This check ensures the results have the same shape.
+ return result if x.ndim == 1 else np.atleast_1d(result)
def args_maker():
+ # Don't normalize here, because we want normalization to happen at 64-bit
+ # precision in the scipy version.
x, alpha = map(rng, shapes, dtypes)
- x = x / np.sum(x, axis=-1, keepdims=True)
- return [x, alpha]
+ return x, alpha
+ tol = {np.float32: 1E-3, np.float64: 1e-5}
self._CheckAgainstNumpy(scipy_fun, lax_fun, args_maker, check_dtypes=False,
- tol=1e-4)
- self._CompileAndCheck(lax_fun, args_maker)
+ tol=tol)
+ self._CompileAndCheck(lax_fun, args_maker, atol=tol, rtol=tol)
@genNamedParametersNArgs(3)
def testExponLogPdf(self, shapes, dtypes):
| JAX Dirichlet Log PDF output different than SciPy
```
jax.scipy.stats.dirichlet.logpdf(np.array([9.2820191190831968786056904718861915171146e-01,
1.7047643862547270932061849180172430351377e-03,
3.8956840931979931121065252597190919914283e-03,
5.3582474241543359694261994263797532767057e-02,
6.7548574969396027109502789187445159768686e-03,
2.8701918527402454049679558778507271199487e-03,
2.9901160210044600180900875585621179197915e-03]), np.array([2.0922036170959472656250000000000000000000e+00,
2.5644928216934204101562500000000000000000e-01,
1.0264828205108642578125000000000000000000e+00,
6.1082868576049804687500000000000000000000e+00,
5.9878396987915039062500000000000000000000e-01,
2.6282947063446044921875000000000000000000e+00,
7.7161555290222167968750000000000000000000e+00], dtype=np.float32))
```
outputs `-inf` whereas
```
scipy.stats.dirichlet.logpdf(np.array([9.2820191190831968786056904718861915171146e-01,
1.7047643862547270932061849180172430351377e-03,
3.8956840931979931121065252597190919914283e-03,
5.3582474241543359694261994263797532767057e-02,
6.7548574969396027109502789187445159768686e-03,
2.8701918527402454049679558778507271199487e-03,
2.9901160210044600180900875585621179197915e-03]), np.array([2.0922036170959472656250000000000000000000e+00,
2.5644928216934204101562500000000000000000e-01,
1.0264828205108642578125000000000000000000e+00,
6.1082868576049804687500000000000000000000e+00,
5.9878396987915039062500000000000000000000e-01,
2.6282947063446044921875000000000000000000e+00,
7.7161555290222167968750000000000000000000e+00], dtype=np.float32))
```
outputs -31.38.
I tried using 64-bit floats but that didn't work either.
| What version of JAX are you using? On version 0.2.9 I get the correct output.
Same version. It might be a precision issue. I can repro with:
```
from jax.config import config
config.update("jax_enable_x64", True)
```
at the top of the script and cannot without those additional lines.
Digging around the JAX code, I think it is this (line 38, `dirichlet.py`):
```
return jnp.where(_is_simplex(x), log_probs, -jnp.inf)
```
where
```
def _is_simplex(x):
x_sum = jnp.sum(x, axis=-1)
return jnp.all(x > 0, axis=-1) & (x_sum <= 1) & (x_sum > 1 - 1e-6)
```
Something about the precision makes the sum of those fractions marginally greater than 1. I guess this counts as user error, but I will say that getting negative infinity is much worse than an exception (I thought my training was inserting non-positive `alpha` parameters).
Ah, I see. It looks like JAX and scipy use different criteria to check if the value is close enough to 1. Jax is here: https://github.com/google/jax/blob/ab21409e0b2a561f217c833ca6f915b9dcefffd0/jax/_src/scipy/stats/dirichlet.py#L27
And scipy is here: https://github.com/scipy/scipy/blob/fa119c98b18e3dfc6e158e22e806831f40d660d3/scipy/stats/_multivariate.py#L1287
Basically, jax defines the valid input range as `1 - 1E-6 < x_sum <= 1` and scipy defines it as `1 - 10E-10 <= x_sum <= 1 + 10E-10`
The fix here would be to make these constraints match.
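A side-by-side sketch of the two predicates (function names here are illustrative): a sum that lands marginally above 1 is rejected outright by the first, strictly one-sided check, while the second at least treats both sides of 1 symmetrically.
```python
import numpy as np

def jax_is_simplex(x):
    # criterion from the JAX snippet above: one-sided at the top
    s = np.sum(x, axis=-1)
    return np.all(x > 0, axis=-1) & (s <= 1) & (s > 1 - 1e-6)

def scipy_is_simplex(x):
    # scipy's criterion: a symmetric 10e-10 band around 1
    s = np.sum(x, axis=-1)
    return np.all(x > 0, axis=-1) & (np.abs(s - 1) <= 10e-10)
```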
Nice.
The SciPy tolerance seems more forgiving; normalizing to within the current JAX tolerance is trickier than it seems (dividing by the sum of the array isn't sufficient, so some form of truncating the precision and then summing might have to do).
I started working on this, and it looks like the problems run even deeper. Due to a copy/paste typo, we never even test dirichlet distributions (note `cauchy` here): https://github.com/google/jax/blob/3c87a36831109ddcbc95e3844108c78f51d00300/tests/scipy_stats_test.py#L166-L170
And because of this we never discovered that the implementation in JAX is incompatible with the implementation in scipy in ways that are deeper than the tolerance around the sum of x.
For example:
```python
import numpy as np
import scipy.stats
import jax.scipy.stats
np.random.seed(0)
x = np.random.rand(4)
x /= x.sum()
alpha = np.ones(5)
print(scipy.stats.dirichlet.logpdf(x, alpha))
# 3.1780538303479458
print(jax.scipy.stats.dirichlet.logpdf(x, alpha))
# ValueError: Incompatible shapes for broadcasting: ((5,), (4,), (1,))
``` | 2021-02-16T20:55:55 |
google/jax | 5,751 | google__jax-5751 | [
"5638"
] | 0163c92e584946f5ab2d6553e6129fbe426c187f | diff --git a/jax/lib/xla_bridge.py b/jax/lib/xla_bridge.py
--- a/jax/lib/xla_bridge.py
+++ b/jax/lib/xla_bridge.py
@@ -144,7 +144,9 @@ def _get_local_backend(platform=None):
_tpu_backend = None
def _get_tpu_driver_backend(platform):
- del platform
+ if platform == "cpu":
+ return _get_local_backend("cpu")
+
global _tpu_backend
if _tpu_backend is None:
backend_target = FLAGS.jax_backend_target
| jax.devices() behaves unintuitively
Hi! I noticed some unexpected (?) behaviour in the following code:
```python
import jax
from jax.config import config
config.update("jax_backend_target", 'grpc://some endpoint:8470')
config.update("jax_xla_backend", 'tpu_driver')
print(jax.devices('cpu')) #prints tpu devices instead of cpu
```
I can get the expected behaviour if I do
```python
import jax
print(jax.devices('cpu')) #prints cpu devices
from jax.config import config
config.update("jax_backend_target", 'grpc://11.111.11.1118470')
config.update("jax_xla_backend", 'tpu_driver')
print(jax.devices('cpu')) #now prints cpu devices
print(jax.devices('tpu')) #now prints tpu devices
```
I think this may be related to the caching of `jax.lib.xla_bridge.get_backend()`. Not sure if this is expected behaviour or a bug.
I noticed this because I was trying to `jit` a few smaller functions on the host VM CPU during a larger TPU computation. I tried using the `backend=cpu` argument and `device_put`, but was unable to obtain the desired behaviour. In the end the only thing that seemed to work was to clear the cache of `get_backend()` and reconfigure `jax.config` to cpu.
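A rough sketch of that workaround (the helper name is mine; the exact cache-clearing call is spelled out later in this thread):
```python
import jax
from jax.config import config

def reconfigure_backend(**flags):
    # hypothetical helper: apply whatever jax_* flag changes are needed, then
    # drop the memoized backend so the next JAX call picks up the new settings
    for name, value in flags.items():
        config.update(name, value)
    jax.lib.xla_bridge.get_backend.cache_clear()

# e.g. reconfigure_backend(jax_xla_backend='tpu_driver',
#                          jax_backend_target='grpc://some endpoint:8470')
```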
| @jacksonwb
Aren't you two already using the new Cloud TPU alpha program? That doesn't have this problem :)
This is indeed a bug in the `tpu_driver` client. Given that we're focusing our efforts on the new Cloud TPU architecture and you have a workaround, we probably won't prioritize fixing this. Please let me know if I'm underestimating the impact of this though and we can reconsider!
We are using the new Cloud TPU alpha program, but still have some applications on the old Cloud TPU setup due to requirements for GKE support.
Currently we have a workaround involving calls to `jax.lib.xla_bridge.get_backend.cache_clear()` when switching from TPU to CPU jitting, which seems to be working, but I'm not sure what other unintended consequences this may be creating. | 2021-02-17T01:27:16 |
|
google/jax | 5,762 | google__jax-5762 | [
"5326"
] | 29f63fea0a655a751775bbad6f1cdf7666b5c2ea | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -3536,13 +3536,22 @@ def t_op():
def _pad_batch_rule(batched_args, batch_dims, *, padding_config):
operand, padding_value = batched_args
operand_bdim, padding_value_bdim = batch_dims
+ if operand_bdim is None:
+ operand_bdim = 0
+ operand = broadcast(operand, (padding_value.shape[padding_value_bdim],))
+
+ padding_config = list(padding_config)
+ padding_config.insert(operand_bdim, (0, 0, 0))
if padding_value_bdim is None:
- assert operand_bdim is not None
- padding_config = list(padding_config)
- padding_config.insert(operand_bdim, (0, 0, 0))
return pad(operand, padding_value, padding_config), operand_bdim
- else:
- raise NotImplementedError # loop and stack
+
+ assert padding_value_bdim == 0, padding_value_bdim
+
+ x = pad(operand, _zero(operand), padding_config)
+ mask = pad(full_like(operand, True, np.bool_), False, padding_config)
+ broadcasted_padding = broadcast_in_dim(padding_value, x.shape,
+ (operand_bdim,))
+ return select(mask, x, broadcasted_padding), operand_bdim
def _pad_translation_rule(c, operand, padding_value, *, padding_config):
return xops.Pad(operand, padding_value,
| diff --git a/tests/lax_vmap_test.py b/tests/lax_vmap_test.py
--- a/tests/lax_vmap_test.py
+++ b/tests/lax_vmap_test.py
@@ -369,13 +369,13 @@ def testReshape(self, arg_shape, out_shape, dtype, dimensions, bdims):
.format(jtu.format_shape_dtype_string(shape, dtype), pads, bdims),
"shape": shape, "dtype": dtype, "pads": pads, "bdims": bdims}
for shape in [(2, 3)]
- for bdims in all_bdims(shape)
+ for bdims in all_bdims(shape, ())
for dtype in default_dtypes
for pads in [[(1, 2, 1), (0, 1, 0)]]))
def testPad(self, shape, dtype, pads, bdims):
rng = jtu.rand_small(self.rng())
- fun = lambda operand: lax.pad(operand, np.array(0, dtype), pads)
- self._CheckBatching(fun, 5, bdims, (shape,), (dtype,), rng)
+ fun = lambda operand, padding: lax.pad(operand, padding, pads)
+ self._CheckBatching(fun, 5, bdims, (shape, ()), (dtype, dtype), rng)
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_predshape={}_argshapes={}_bdims={}".format(
| incomplete implementation for batching rule for pad
For example:
```
def fun(x, pad_val): # x: f32[3], pad_val: f32[]
return lax.pad(x, pad_val, [(1, 1, 0)])
res = fun(np.ones((3,)), 7.)
# Pad the first row with 7 and the second with 8
res_batch = jax.vmap(fun)(np.ones((2, 3)), np.array([7, 8.]))
```
Raises NotImplementedError in the _pad_batch_rule:
```
File "/Users/necula/Source/jax/jax/experimental/jax2tf/tests/batch_poly_test.py", line 119, in fun
return lax.pad(x, pad_val, [(1, 1, 0)])
File "/Users/necula/Source/jax/jax/_src/lax/lax.py", line 750, in pad
return pad_p.bind(operand, padding_value, padding_config=tuple(padding_config))
File "/Users/necula/Source/jax/jax/core.py", line 271, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/Users/necula/Source/jax/jax/interpreters/batching.py", line 149, in process_primitive
val_out, dim_out = batched_primitive(vals_in, dims_in, **params)
File "/Users/necula/Source/jax/jax/_src/lax/lax.py", line 3470, in _pad_batch_rule
raise NotImplementedError # loop and stack
```
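Until a proper batching rule lands, the same effect can be had at the call site with a mask-and-select trick (the approach the patch above takes internally); only operand batching of `lax.pad` is relied on here:
```python
import jax
import jax.numpy as jnp
from jax import lax

def pad_with_value(x, pad_val):  # x: f32[3], pad_val: f32[]
    padded = lax.pad(x, jnp.zeros((), x.dtype), [(1, 1, 0)])
    keep = lax.pad(jnp.ones_like(x, dtype=bool), jnp.array(False), [(1, 1, 0)])
    return jnp.where(keep, padded, pad_val)

print(jax.vmap(pad_with_value)(jnp.ones((2, 3)), jnp.array([7.0, 8.0])))
# first row padded with 7s, second with 8s
```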
| I remember seeing a CL from George about this. Should this be closed?
Does the test pass?
I filed the bug so that I don't forget about it. I am happy to work on it, but I do not have the code written.
Oh i was thinking your XlaPad shape CL...Feel free to unassign this. | 2021-02-17T18:31:00 |
google/jax | 5,768 | google__jax-5768 | [
"5766"
] | ba269b407f710ca15af6dc60356e8131d9c6bb7e | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -4142,12 +4142,15 @@ def take(a, indices, axis: Optional[int] = None, out=None, mode=None):
else:
axis_idx = _canonicalize_axis(axis, ndim(a))
- if mode == "raise":
+ if mode is None:
+ # lax.gather() does not support negative indices, so we wrap them here
+ indices = where(indices < 0, indices + a.shape[axis_idx], indices)
+ elif mode == "raise":
# TODO(phawkins): we have no way to report out of bounds errors yet.
raise NotImplementedError("The 'raise' mode to jnp.take is not supported.")
elif mode == "wrap":
indices = mod(indices, _constant_like(indices, a.shape[axis_idx]))
- elif mode != "clip" and mode is not None:
+ elif mode != "clip":
raise ValueError("Invalid mode '{}' for np.take".format(mode))
index_dims = len(shape(indices))
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -3621,7 +3621,7 @@ def testUnpackbits(self, shape, dtype, axis, bitorder, count):
[cast(Optional[int], None)])
for dtype in all_dtypes
for index_dtype in int_dtypes
- for mode in ['wrap', 'clip']))
+ for mode in [None, 'wrap', 'clip']))
def testTake(self, shape, dtype, index_shape, index_dtype, axis, mode):
def args_maker():
x = rng(shape, dtype)
@@ -3629,7 +3629,10 @@ def args_maker():
return x, i
rng = jtu.rand_default(self.rng())
- rng_indices = jtu.rand_int(self.rng(), -5, 5)
+ if mode is None:
+ rng_indices = jtu.rand_int(self.rng(), -shape[axis or 0], shape[axis or 0])
+ else:
+ rng_indices = jtu.rand_int(self.rng(), -5, 5)
jnp_op = lambda x, i: jnp.take(x, i, axis=axis, mode=mode)
np_op = lambda x, i: np.take(x, i, axis=axis, mode=mode)
self._CheckAgainstNumpy(np_op, jnp_op, args_maker)
| Bug in jnp.take with negative indexing
Hello, I am seeing the following issue with jax.numpy.take and -1 index:
`A = jnp.array([[1, 2], [3, 4], [5, 6]])`
`A[0]` --> [1, 2]
`A[-1]` --> [5, 6]
`A.take([0, -1], axis=0)` --> [[1, 2], [1, 2]]
`jnp.take(A, [0, -1], axis=0)` --> [[1, 2], [1, 2]]
Thanks!
| Thanks for the report! It looks like the issue is that this is lowered to `lax.gather`, which doesn't support negative indices.
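To make that concrete, wrapping the indices by hand (essentially what the fix above does) restores the expected rows, and the `mode='wrap'` workaround mentioned next achieves the same thing via a modulus:
```python
import jax.numpy as jnp

A = jnp.array([[1, 2], [3, 4], [5, 6]])
idx = jnp.array([0, -1])
wrapped = jnp.where(idx < 0, idx + A.shape[0], idx)
print(jnp.take(A, wrapped, axis=0))           # [[1 2] [5 6]]
print(jnp.take(A, idx, axis=0, mode='wrap'))  # same result
```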
One workaround currently would be to use `mode='wrap'`, which does wrap negative indices correctly. | 2021-02-17T20:54:41 |
google/jax | 5,777 | google__jax-5777 | [
"5776"
] | 8ad511822808138a0f07ead29db5a8403189005b | diff --git a/jax/api_util.py b/jax/api_util.py
--- a/jax/api_util.py
+++ b/jax/api_util.py
@@ -75,7 +75,11 @@ def apply_flat_fun_nokwargs(fun, io_tree, py_args):
@lu.transformation_with_aux
def flatten_fun_nokwargs2(in_tree, *args_flat):
py_args = tree_unflatten(in_tree, args_flat)
- ans, aux = yield py_args, {}
+ pair = yield py_args, {}
+ if not isinstance(pair, (list, tuple)) or len(pair) != 2:
+ raise TypeError("expected function with aux output to return a two-element "
+ f"tuple, but got type {type(pair)} with value {repr(pair)}")
+ ans, aux = pair
ans_flat, ans_tree = tree_flatten(ans)
aux_flat, aux_tree = tree_flatten(aux)
yield (ans_flat, aux_flat), (ans_tree, aux_tree)
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -876,6 +876,16 @@ def test_grad_and_aux_basic(self):
self.assertAllClose(g, grad(lambda x: x**3)(3.))
self.assertAllClose(aux, [9.], check_dtypes=False)
+ def test_grad_and_aux_error(self):
+ with self.assertRaisesRegex(TypeError, "two-element tuple"):
+ grad(lambda x: (1, 2, 3), has_aux=True)(1.)
+
+ with self.assertRaisesRegex(TypeError, "two-element tuple"):
+ grad(lambda x: x, has_aux=True)(1.)
+
+ with self.assertRaisesRegex(TypeError, "two-element tuple"):
+ grad(lambda x: (x,), has_aux=True)(1.)
+
def test_grad_and_aux_nested(self):
def f(x):
g, aux = grad(lambda x: (x**3, [x**3]), has_aux=True)(x)
@@ -2319,6 +2329,7 @@ def __jax_array__(self):
self.assertEqual(f(x), f(a))
+
class RematTest(jtu.JaxTestCase):
def test_remat_basic(self):
| bad error message when with grad(f, has_aux=True) when f doesn't return a pair
```python
import jax
jax.grad(lambda x: (1, 2, 3), has_aux=True)(1.)
```
```
ValueError: too many values to unpack (expected 2)
```
| 2021-02-18T17:47:41 |
|
google/jax | 5,851 | google__jax-5851 | [
"5532"
] | 625bb8040e5e74bf271c3934afeb80232990a9dd | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -355,7 +355,11 @@ def get_jax_enable_x64():
functions decorated with jax.jit), so we delay inspecting the value
of the jax_enable_x64 flag until JIT time.
"""
- return config.x64_enabled
+ # TODO(jblespiau): Delete when jaxlib 0.1.62 is the minimal version.
+ if lib._xla_extension_version >= 4:
+ return config.read("jax_enable_x64")
+ else:
+ return config.x64_enabled
def get_jax_disable_jit_flag():
"""Returns the value of the `jax_disable_jit` flag.
@@ -375,8 +379,10 @@ def get_jax_disable_jit_flag():
@wraps(fun)
@api_boundary
def f_jitted(*args, **kwargs):
+ # TODO(jblespiau): We can remove `config.x64_enabled` when jaxlib has
+ # extension version 4
context = (getattr(core.thread_local_state.trace_state.trace_stack,
- 'dynamic', None), config.x64_enabled)
+ "dynamic", None), config.x64_enabled)
# TODO(jblespiau): Move this to C++.
if (FLAGS.jax_debug_nans or FLAGS.jax_debug_infs) and not _jit_is_disabled():
device_arrays = cpp_jitted_f(context, *args, **kwargs)
diff --git a/jax/config.py b/jax/config.py
--- a/jax/config.py
+++ b/jax/config.py
@@ -16,6 +16,7 @@
import sys
import threading
from typing import Optional
+from jax import lib
def bool_env(varname: str, default: bool) -> bool:
"""Read an environment variable and interpret it as a boolean.
@@ -49,6 +50,7 @@ def __init__(self):
class Config:
+ # TODO(jakevdp): Remove when minimum jaxlib is has extension version 4
_thread_local_state = _ThreadLocalState()
def __init__(self):
@@ -149,13 +151,23 @@ def disable_omnistaging(self):
@property
def x64_enabled(self):
- if self._thread_local_state.enable_x64 is None:
- self._thread_local_state.enable_x64 = bool(self.read('jax_enable_x64'))
- return self._thread_local_state.enable_x64
+ if lib._xla_extension_version >= 4:
+ if lib.jax_jit.get_enable_x64() is None:
+ lib.jax_jit.set_enable_x64(bool(self.read('jax_enable_x64')))
+ return lib.jax_jit.get_enable_x64()
+ else:
+ # TODO(jakevdp): Remove when minimum jaxlib is has extension version 4
+ if self._thread_local_state.enable_x64 is None:
+ self._thread_local_state.enable_x64 = bool(self.read('jax_enable_x64'))
+ return self._thread_local_state.enable_x64
# TODO(jakevdp): make this public when thread-local x64 is fully implemented.
def _set_x64_enabled(self, state):
- self._thread_local_state.enable_x64 = bool(state)
+ if lib._xla_extension_version >= 4:
+ lib.jax_jit.set_enable_x64(bool(state))
+ else:
+ # TODO(jakevdp): Remove when minimum jaxlib is has extension version 4
+ self._thread_local_state.enable_x64 = bool(state)
class NameSpace(object):
diff --git a/jax/experimental/x64_context.py b/jax/experimental/x64_context.py
--- a/jax/experimental/x64_context.py
+++ b/jax/experimental/x64_context.py
@@ -18,7 +18,7 @@
"""
from contextlib import contextmanager
-from jax.config import config
+from jax import config
@contextmanager
def enable_x64():
| diff --git a/tests/x64_context_test.py b/tests/x64_context_test.py
--- a/tests/x64_context_test.py
+++ b/tests/x64_context_test.py
@@ -16,10 +16,16 @@
import concurrent.futures
import time
-from absl.testing import absltest, parameterized
-
-from jax import api, lax, partial, random
-from jax.config import config, FLAGS
+from absl.testing import absltest
+from absl.testing import parameterized
+
+import jax
+from jax import api
+from jax import lax
+from jax import partial
+from jax import random
+from jax.config import config
+from jax.config import FLAGS
from jax.experimental import enable_x64, disable_x64
import jax.numpy as jnp
import jax.test_util as jtu
@@ -48,11 +54,35 @@ def test_make_array(self, jit):
func = _maybe_jit(jit, lambda: jnp.arange(10.0))
dtype_start = func().dtype
with enable_x64():
- self.assertEqual(func().dtype, 'float64')
+ self.assertEqual(func().dtype, "float64")
with disable_x64():
- self.assertEqual(func().dtype, 'float32')
+ self.assertEqual(func().dtype, "float32")
self.assertEqual(func().dtype, dtype_start)
+ @parameterized.named_parameters(
+ jtu.cases_from_list({
+ "testcase_name": "_jit={}_f_{}".format(jit, f.__name__),
+ "jit": jit,
+ "enable_or_disable": f
+ } for jit in ["python", "cpp", None] for f in [enable_x64, disable_x64]))
+ def test_correctly_capture_default(self, jit, enable_or_disable):
+ if jit == "cpp" and not config.omnistaging_enabled:
+ self.skipTest("cpp_jit requires omnistaging")
+
+ # The fact we defined a jitted function with a block with a different value
+ # of `config.enable_x64` has no impact on the output.
+ with enable_or_disable():
+ func = _maybe_jit(jit, lambda: jnp.arange(10.0))
+ func()
+
+ expected_dtype = "float64" if config.read("jax_enable_x64") else "float32"
+ self.assertEqual(func().dtype, expected_dtype)
+
+ with enable_x64():
+ self.assertEqual(func().dtype, "float64")
+ with disable_x64():
+ self.assertEqual(func().dtype, "float32")
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_jit={}".format(jit), "jit": jit}
for jit in ["python", "cpp", None]))
@@ -113,9 +143,11 @@ def func_x64():
self.assertEqual(x32.result(), jnp.int32)
def test_jit_cache(self):
- # TODO(jakevdp): enable this test when CPP jit cache is fixed.
- if FLAGS.experimental_cpp_jit:
- self.skipTest("Known failure due to https://github.com/google/jax/issues/5532")
+ if jtu.device_under_test() == "tpu":
+ self.skipTest("64-bit random not available on TPU")
+ if jax.lib._xla_extension_version < 4 and FLAGS.experimental_cpp_jit:
+ self.skipTest(
+ "Known failure due to https://github.com/google/jax/issues/5532")
f = partial(random.uniform, random.PRNGKey(0), (1,), 'float64', -1, 1)
with disable_x64():
| experimental.enable_x64 fails for random.uniform
Short repro:
```python
from jax import random
from jax.experimental import disable_x64, enable_x64
f = lambda: random.uniform(random.PRNGKey(0), (10,), 'float64', -1, 1)
with disable_x64():
f()
with enable_x64():
f()
f() # <--- fails
```
Failure:
```pytb
Traceback (most recent call last):
File "tmp.py", line 11, in <module>
f() # <--- fails
File "tmp.py", line 4, in <lambda>
f = lambda: random.uniform(random.PRNGKey(0), (10,), 'float64', -1, 1)
File "/Users/vanderplas/github/google/jax/jax/_src/random.py", line 365, in uniform
return _uniform(key, shape, dtype, minval, maxval) # type: ignore
jax._src.traceback_util.FilteredStackTrace: RuntimeError: Invalid argument: Argument does not match host shape or layout of computation parameter 1: want s64[], got s32[]: while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).
The stack trace above excludes JAX-internal frames.
The following is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "tmp.py", line 11, in <module>
f() # <--- fails
File "tmp.py", line 4, in <lambda>
f = lambda: random.uniform(random.PRNGKey(0), (10,), 'float64', -1, 1)
File "/Users/vanderplas/github/google/jax/jax/_src/random.py", line 365, in uniform
return _uniform(key, shape, dtype, minval, maxval) # type: ignore
File "/Users/vanderplas/github/google/jax/jax/_src/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/vanderplas/github/google/jax/jax/api.py", line 401, in f_jitted
return cpp_jitted_f(context, *args, **kwargs)
RuntimeError: Invalid argument: Argument does not match host shape or layout of computation parameter 1: want s64[], got s32[]: while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).
```
| This is not a problem with the python JIT path, but is a problem with the CPP jit. I think the issue is here: https://github.com/google/jax/blob/f4b5ff9d466c3f93d850efd772eee0ef03601a05/jax/api.py#L370-L372
The cpp JIT determines the X64 flag value only the first time the CPP function is called: https://github.com/tensorflow/tensorflow/blob/26bb688b0cba1e56fdee7ecde7ebde0d72ce2066/tensorflow/compiler/xla/python/jax_jit.cc#L1108
and the result is stored in an `absl` constant, which I believe essentially acts as a global constant: https://github.com/tensorflow/tensorflow/blob/26bb688b0cba1e56fdee7ecde7ebde0d72ce2066/tensorflow/compiler/xla/python/jax_jit.cc#L900
This means that compiling the function twice with different X64 flags can make the second compiled function fail on subsequent calls because the global cache of the X64 flag reflects the value at the first rather than the second compilation.
I think that fixing this will require changing the C++ side to relax the assumption that `jax_enable_x64` is constant within a session.
Absolutely.
Just to check on the semantics: is the context manager expected to change the code at run time or at definition time? (I strongly expect at runtime.)
```
with enable_x64():
  f = jax.jit(...)
  f(x)  # => I expect it should be with x64?
f(x)      # => I imagine it should be with whatever global value is defined in the program?
with disable_x64():
  f(x)  # => I imagine it should be x32?
```
Assuming these semantics, the C++ side will need to know, for each call, the local value of enable_x64.
There are a few options:
- Call Python each time, from C++, to get the value of the flag. I dislike this approach, as the goal is to call into Python as little as possible.
- Pass the value from Python to C++, e.g. as the first argument, take it from C++, and consider the rest of the arguments as the jitted function call.
- Have jax_enable_x64 live in C++, and have the context manager update this value, and access this value from C++. This is what is done for DisableJit:
https://github.com/tensorflow/tensorflow/blob/26bb688b0cba1e56fdee7ecde7ebde0d72ce2066/tensorflow/compiler/xla/python/jax_jit.cc#L1141
The difficulty of that option is that we need to move the value from the config to C++ (i.e. when we update the flags, we should set the C++ value).
> Is the context manager expected to change the code at run time or definition time? (I strongly expect at runtime).
Indeed, the semantics are that that the code is changed at run-time. For the CPP jit, we accomplish this by using the current value of the X64 flag as part of the cache key: https://github.com/google/jax/blob/f4b5ff9d466c3f93d850efd772eee0ef03601a05/jax/_src/util.py#L187-L203
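A toy Python-level sketch (purely illustrative, not the real dispatcher) of what "part of the cache key" means here — toggling the flag between calls selects a separate cached executable instead of reusing a stale one:
```python
_cache = {}

def fake_compile(fn, enable_x64):
    # stand-in for the real compile step; it just records the mode it was built for
    def compiled(*args):
        return fn(*args), ("x64" if enable_x64 else "x32")
    return compiled

def dispatch(fn, enable_x64, *args):
    key = (fn, enable_x64, tuple(type(a) for a in args))
    if key not in _cache:
        _cache[key] = fake_compile(fn, enable_x64)
    return _cache[key](*args)

f = lambda x: x + 1
print(dispatch(f, True, 1))   # builds an "x64" entry
print(dispatch(f, False, 1))  # different key -> separate "x32" entry
```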
> Assuming this semantics, it means the C++ will need for each call to know what is the local value for enable_x64.
This is True. Our initial attempt at that was to add the X64 flag to the context used to define the cache key: https://github.com/google/jax/blob/f4b5ff9d466c3f93d850efd772eee0ef03601a05/jax/api.py#L378-L379
But obviously this has to be threaded through more deeply. I think the second approach is probably the best, as it basically matches the approach in the python JIT and avoids extra complication.
> - Pass the value from Python to C++, e.g. as the first argument, take it from C++, and consider the rest of the arguments as the jitted function call.
In fact, the value is already available to C++ as part of the `context` variable as shown above... though it would make sense to make this more explicit in the function's contract if we go this route. | 2021-02-25T18:06:51 |
google/jax | 5,868 | google__jax-5868 | [
"5785"
] | 640e62c7dab22b582a729de7b2eea1ce2c6b480d | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -330,6 +330,47 @@ def _promote_args_inexact(fun_name, *args):
_check_no_float0s(fun_name, *args)
return _promote_shapes(fun_name, *_promote_dtypes_inexact(*args))
+def _convert_and_clip_integer(val, dtype):
+ """
+ Convert integer-typed val to specified integer dtype, clipping to dtype
+ range rather than wrapping.
+
+ Args:
+ val: value to be converted
+ dtype: dtype of output
+
+ Returns:
+ equivalent of val in new dtype
+
+ Examples
+ --------
+ Normal integer type conversion will wrap:
+
+ >>> val = jnp.uint32(0xFFFFFFFF)
+ >>> val.astype('int32')
+ DeviceArray(-1, dtype=int32)
+
+ This function clips to the values representable in the new type:
+
+ >>> _convert_and_clip_integer(val, 'int32')
+ DeviceArray(2147483647, dtype=int32)
+ """
+ val = val if isinstance(val, ndarray) else asarray(val)
+ dtype = dtypes.canonicalize_dtype(dtype)
+ if not (issubdtype(dtype, integer) and issubdtype(val.dtype, integer)):
+ raise TypeError("_convert_and_clip_integer only accepts integer dtypes.")
+
+ val_dtype = dtypes.canonicalize_dtype(val.dtype)
+ if val_dtype != val.dtype:
+ # TODO(jakevdp): this is a weird corner case; need to figure out how to handle it.
+ # This happens in X32 mode and can either come from a jax value created in another
+ # context, or a Python integer converted to int64.
+ pass
+ min_val = _constant_like(val, _max(iinfo(dtype).min, iinfo(val_dtype).min))
+ max_val = _constant_like(val, _min(iinfo(dtype).max, iinfo(val_dtype).max))
+ return clip(val, min_val, max_val).astype(dtype)
+
+
def _constant_like(x, const):
return np.array(const, dtype=_dtype(x))
diff --git a/jax/_src/random.py b/jax/_src/random.py
--- a/jax/_src/random.py
+++ b/jax/_src/random.py
@@ -25,7 +25,7 @@
from jax import dtypes
from jax.core import NamedShape
from jax.api import jit, vmap
-from jax._src.numpy.lax_numpy import _constant_like, asarray
+from jax._src.numpy.lax_numpy import _constant_like, _convert_and_clip_integer, asarray
from jax.lib import xla_bridge
from jax.lib import xla_client
from jax.lib import cuda_prng
@@ -441,20 +441,28 @@ def randint(key: jnp.ndarray,
def _randint(key, shape, minval, maxval, dtype):
_check_shape("randint", shape, np.shape(minval), np.shape(maxval))
if not jnp.issubdtype(dtype, np.integer):
- raise TypeError("randint only accepts integer dtypes.")
-
- minval = lax.convert_element_type(minval, dtype)
- maxval = lax.convert_element_type(maxval, dtype)
+ raise TypeError(f"randint only accepts integer dtypes, got {dtype}")
+
+ minval = _asarray(minval)
+ maxval = _asarray(maxval)
+ if not jnp.issubdtype(minval.dtype, np.integer):
+ minval = minval.astype(int)
+ if not jnp.issubdtype(maxval.dtype, np.integer):
+ maxval = maxval.astype(int)
+
+ # Flag where maxval is greater than the maximum value of dtype
+ # in order to handle cases like randint(key, shape, 0, 256, 'uint8')
+ maxval_out_of_range = lax.gt(
+ maxval, _convert_and_clip_integer(jnp.array(jnp.iinfo(dtype).max, dtype), maxval.dtype))
+
+ minval = _convert_and_clip_integer(minval, dtype)
+ maxval = _convert_and_clip_integer(maxval, dtype)
minval = lax.broadcast_to_rank(minval, len(shape))
maxval = lax.broadcast_to_rank(maxval, len(shape))
nbits = jnp.iinfo(dtype).bits
if nbits not in (8, 16, 32, 64):
- raise TypeError("randint only accepts 8-, 16-, 32-, or 64-bit dtypes.")
-
- # if we don't have minval < maxval, just always return minval
- # https://github.com/google/jax/issues/222
- maxval = lax.max(lax.add(minval, np.array(1, dtype)), maxval)
+ raise TypeError(f"randint only accepts 8-, 16-, 32-, or 64-bit dtypes, got {dtype}")
# This algorithm is biased whenever (maxval - minval) is not a power of 2.
# We generate double the number of random bits required by the dtype so as to
@@ -466,6 +474,18 @@ def _randint(key, shape, minval, maxval, dtype):
unsigned_dtype = _UINT_DTYPES[nbits]
span = lax.convert_element_type(maxval - minval, unsigned_dtype)
+ # Ensure that span=1 when maxval <= minval, so minval is always returned;
+ # https://github.com/google/jax/issues/222
+ span = lax.select(maxval <= minval, lax.full_like(span, 1), span)
+
+ # When maxval is out of range, the span has to be one larger.
+ # If span is already the maximum representable value, this will wrap to zero,
+ # causing remainders below to have no effect, which is the correct semantics.
+ span = lax.select(
+ maxval_out_of_range & (maxval > minval),
+ lax.add(span, lax._const(span, 1)),
+ span)
+
# To compute a remainder operation on an integer that might have twice as many
# bits as we can represent in the native unsigned dtype, we compute a
# multiplier equal to 2**nbits % span. To avoid overflow, we use the identity:
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -26,6 +26,7 @@
from jax import api
from jax import core
+from jax import dtypes
from jax import grad
from jax import lax
from jax import numpy as jnp
@@ -983,6 +984,34 @@ def test_random_split_doesnt_device_put_during_tracing(self):
api.jit(random.split)(key)
self.assertEqual(count[0], 1) # 1 for the argument device_put
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": f"_dtype={dtype}", "dtype": dtype}
+ for dtype in int_dtypes + uint_dtypes))
+ def test_randint_bounds(self, dtype):
+ min = np.iinfo(dtype).min
+ max = np.iinfo(dtype).max
+ key = random.PRNGKey(1701)
+ shape = (10,)
+ if np.iinfo(dtype).bits < np.iinfo(dtypes.canonicalize_dtype(int)).bits:
+ expected = random.randint(key, shape, min, max, dtype)
+ self.assertArraysEqual(expected, random.randint(key, shape, min - 12345, max + 12345, dtype))
+ else:
+ self.assertRaises(OverflowError, random.randint, key, shape, min - 12345, max + 12345, dtype)
+
+ def test_randint_out_of_range(self):
+ key = random.PRNGKey(0)
+
+ r = random.randint(key, (10,), 255, 256, np.uint8)
+ self.assertAllClose(r, jnp.full_like(r, 255))
+
+ r = random.randint(key, (1000,), -128, 128, np.int8)
+ self.assertGreater((r == -128).sum(), 0)
+ self.assertGreater((r == 127).sum(), 0)
+
+ r = random.randint(key, (1000,), -1000, 1000, np.uint8)
+ self.assertGreater((r == 0).sum(), 0)
+ self.assertGreater((r == 255).sum(), 0)
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| jax.random.randint cannot generate values equal to DTYPE_MAX
According to randint docstring: `Sample uniform random values in [minval, maxval) with given shape/dtype.`
However because of the conversion here https://github.com/google/jax/blob/5bbb449ae5849c508194c8eb5b10c101f1fa22ae/jax/_src/random.py#L439 passing `maxval={dtype max value + 1}` will generate an all-zero array.
The behavior in numpy is different (and correct):
```
jax.random.randint(jax.random.PRNGKey(0), shape=(10,), minval=0, maxval=256, dtype=jnp.uint8)
>> DeviceArray([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=uint8)
np.random.randint(size=(10,), low=0, high=256, dtype=np.uint8)
>> array([ 87, 136, 195, 254, 95, 94, 113, 7, 66, 13], dtype=uint8)
```
A workaround is to not pass the dtype to randint (or use a dtype with a higher maximum value), and cast the output:
```
jax.random.randint(jax.random.PRNGKey(0), shape=(10,), minval=0, maxval=256).astype(jnp.uint8)
>> DeviceArray([178, 233, 99, 238, 39, 149, 46, 139, 68, 142], dtype=uint8)
```
| Thinking about this a bit... this is really tricky for JAX to handle in a general way. Consider the case of `max = 1 << 64`, which is one greater than the maximum uint64. Numpy handles it as expected:
```python
>>> np.random.randint(0, 1<<64, dtype='uint64')
17392714807819977315
```
But if you increase this bound by one, numpy returns an error:
```
>>> np.random.randint(0, 1<<64 + 1, dtype='uint64')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-30-e02298560197> in <module>
----> 1 np.random.randint(0, 1<<64 + 1, dtype='uint64')
mtrand.pyx in numpy.random.mtrand.RandomState.randint()
_bounded_integers.pyx in numpy.random._bounded_integers._rand_uint64()
ValueError: high is out of bounds for uint64
```
Neither of these numbers (`1 << 64` and `1 << 64 + 1`) are representable as 64-bit integers or floats; numpy handles these by making use of Python's built-in arbitrary precision integers, as mentioned in the note here: https://github.com/numpy/numpy/blob/5cae51e794d69dd553104099305e9f92db237c53/numpy/random/_bounded_integers.pyx.in#L286-L292
JAX cannot do this, because it compiles to XLA which only has fixed-precision integers, and has no native way to represent larger numbers such as `1 << 64`.
How might we work around this? Some possibilities:
1. Follow NumPy and do our arithmetic in Python. The downside of this is that it would only work for concrete values of minval and maxval; i.e. you could no longer jit-compile code that passes minval and maxval as JAX arrays. You might imagine getting around this by only using the Python int logic in non-traced code; the problem there is that this would cause `randint` to act differently depending on whether it is wrapped in JIT, which is something we like to avoid.
2. Use some sort of extended representation (i.e. a pair of int64s) to represent the minval and maxval in order to handle larger bounds. However, this would require deeper changes to JAX's JIT dispatch mechanism, because currently a Python integer passed to a jitted function is **always** converted to a dtyped integer before the function can do any logic on it, because that's the only way JAX knows how to handle numerical data. I'm not sure how I'd approach changing this for cases where larger integers are required.
Neither of these is really a viable solution, and I can't think of any other alternatives.
The above shouldn't stop us from fixing the issue for smaller types, though: for example things like `random.randint(key, shape, minval=0, maxval=256, dtype='uint8')` should be workable with a few tweaks. But we would not be able to error on `maxval > 256` as numpy does, because JAX cannot branch on non-static values.
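Until then, a possible user-level helper in that spirit (the `randint_wide` name and the clipping are mine): sample in the default wider integer dtype, where `iinfo(uint8).max + 1` is still representable, then cast down.
```python
import numpy as np
import jax.numpy as jnp
from jax import random

def randint_wide(key, shape, minval, maxval, dtype):
    # clip the requested bounds to what the target dtype can hold, sample in
    # the default (wider) integer dtype, then cast down
    info = np.iinfo(dtype)
    lo = max(minval, int(info.min))
    hi = min(maxval, int(info.max) + 1)
    return random.randint(key, shape, lo, hi).astype(dtype)

print(randint_wide(random.PRNGKey(0), (10,), 0, 256, jnp.uint8))
# uint8 samples drawn from the full [0, 256) range
```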
I started some work on this in #5868... it gets quite complicated to handle all the possible corner cases!
Could we fix this by adding an `inclusive`/`exclusive` option on the API?
> Could we fix this by adding an inclusive/exclusive option on the API?
That's an option, although unless users know to think about it, things like `randint(key, shape, 0, 256, 'uint8')` will continue to silently return zeros.
We should report (everywhere) when we see integer overflow casting Python integers to typed integers. Someone even started on that in the past.
That would be useful - I think currently we only get that via numpy when we overflow int64.
Also, even if we go the `inclusive / exclusive` route, I've come across some other corner cases that we'll need to fix as well, for example:
```python
from jax import random
random.randint(random.PRNGKey(0), (10,), 254, 254, dtype='uint8')
# Buffer([254, 254, 254, 254, 254, 254, 254, 254, 254, 254], dtype=uint8)
random.randint(random.PRNGKey(0), (10,), 255, 255, dtype='uint8')
# Buffer([111, 139, 85, 222, 6, 223, 108, 27, 20, 191], dtype=uint8)
```
Related: https://github.com/google/jax/issues/2006#issuecomment-575273556 | 2021-02-26T21:57:18 |
google/jax | 5,872 | google__jax-5872 | [
"5370"
] | a0c5a80971a39920eeeb89fd67e0f42a74d0a784 | diff --git a/jax/_src/scipy/special.py b/jax/_src/scipy/special.py
--- a/jax/_src/scipy/special.py
+++ b/jax/_src/scipy/special.py
@@ -101,7 +101,8 @@ def expit(x):
@_wraps(osp_special.logsumexp)
def logsumexp(a, axis=None, b=None, keepdims=False, return_sign=False):
if b is not None:
- a, b = jnp.broadcast_arrays(a, b)
+ a, b = _promote_args_inexact("logsumexp", a, b)
+ a = jnp.where(b != 0, a, -jnp.inf)
pos_dims, dims = _reduction_dims(a, axis)
amax = jnp.max(a, axis=dims, keepdims=keepdims)
amax = lax.stop_gradient(lax.select(lax.is_finite(amax), amax, lax.full_like(amax, 0)))
| diff --git a/tests/lax_scipy_test.py b/tests/lax_scipy_test.py
--- a/tests/lax_scipy_test.py
+++ b/tests/lax_scipy_test.py
@@ -146,6 +146,14 @@ def lax_fun(array_to_reduce):
self._CheckAgainstNumpy(scipy_fun, lax_fun, args_maker)
self._CompileAndCheck(lax_fun, args_maker)
+ def testLogSumExpZeros(self):
+ # Regression test for https://github.com/google/jax/issues/5370
+ scipy_fun = lambda a, b: osp_special.logsumexp(a, b=b)
+ lax_fun = lambda a, b: lsp_special.logsumexp(a, b=b)
+ args_maker = lambda: [np.array([-1000, -2]), np.array([1, 0])]
+ self._CheckAgainstNumpy(scipy_fun, lax_fun, args_maker)
+ self._CompileAndCheck(lax_fun, args_maker)
+
@parameterized.named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix(
| Bug in logsumexp when b argument has 0s
It looks like logsumexp can give the wrong answer if its `b` argument has 0s at the indices where the `a` argument is large. For example
``` python
import jax.numpy as jnp
from jax.scipy.special import logsumexp
a = jnp.array([-1000.0, -2.0])
b = jnp.array([1.0, 0.0])
out = logsumexp(a, b=b) # DeviceArray(-inf, dtype=float32)
# out should be DeviceArray(-1000.0, dtype=float32)
```
The bug seems to happen because the implementation subtracts the max value of `a` without accounting for `b`. A simple fix would be to add these two lines of code inside the initial `if b is not None` check:
``` python
a = a + jnp.where(b, jnp.log(jnp.abs(b)), -jnp.inf)
b = jnp.sign(b)
```
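The merged fix above takes a slightly different route, masking `a` instead of rescaling it; the same masking also works as a call-site workaround on releases that still have the bug:
```python
import jax.numpy as jnp
from jax.scipy.special import logsumexp

a = jnp.array([-1000.0, -2.0])
b = jnp.array([1.0, 0.0])
a_masked = jnp.where(b != 0, a, -jnp.inf)  # zero-weight terms can no longer dominate the max
print(logsumexp(a_masked, b=b))            # -1000.0
```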
| Thanks for the report! Are you interested in contributing a pull request to fix this issue?
Sure!
Great! Feel free to submit a pull request with the fix, and ping me when you do. We'll want to make sure to add a new test case that covers this bug (i.e. one that would fail with the current code, but passes with the fix). Let me know if any questions come up along the way | 2021-02-27T01:05:55 |
google/jax | 5,933 | google__jax-5933 | [
"5931"
] | 02cf04b60b615e882e7c0c14e22ff4a3e1a1718c | diff --git a/jax/_src/scipy/special.py b/jax/_src/scipy/special.py
--- a/jax/_src/scipy/special.py
+++ b/jax/_src/scipy/special.py
@@ -103,6 +103,8 @@ def logsumexp(a, axis=None, b=None, keepdims=False, return_sign=False):
if b is not None:
a, b = _promote_args_inexact("logsumexp", a, b)
a = jnp.where(b != 0, a, -jnp.inf)
+ else:
+ a, = _promote_args_inexact("logsumexp", a)
pos_dims, dims = _reduction_dims(a, axis)
amax = jnp.max(a, axis=dims, keepdims=keepdims)
amax = lax.stop_gradient(lax.select(lax.is_finite(amax), amax, lax.full_like(amax, 0)))
| diff --git a/tests/lax_scipy_test.py b/tests/lax_scipy_test.py
--- a/tests/lax_scipy_test.py
+++ b/tests/lax_scipy_test.py
@@ -106,7 +106,7 @@ def _GetArgsMaker(self, rng, shapes, dtypes):
"shapes": shapes, "dtype": dtype,
"axis": axis, "keepdims": keepdims,
"return_sign": return_sign, "use_b": use_b}
- for shape_group in compatible_shapes for dtype in float_dtypes
+ for shape_group in compatible_shapes for dtype in float_dtypes + int_dtypes
for use_b in [False, True]
for shapes in itertools.product(*(
(shape_group, shape_group) if use_b else (shape_group,)))
@@ -143,8 +143,9 @@ def lax_fun(array_to_reduce):
return_sign=return_sign)
args_maker = lambda: [rng(shapes[0], dtype)]
+ tol = {np.float32: 1E-6, np.float64: 1E-14}
self._CheckAgainstNumpy(scipy_fun, lax_fun, args_maker)
- self._CompileAndCheck(lax_fun, args_maker)
+ self._CompileAndCheck(lax_fun, args_maker, rtol=tol, atol=tol)
def testLogSumExpZeros(self):
# Regression test for https://github.com/google/jax/issues/5370
| jax.scipy.special.logsumexp raises OverflowError
`jax.scipy.special.logsumexp` seems to be broken in jax 0.2.9.
The following code:
```
import jax, scipy
print(jax.scipy.special.logsumexp(jax.numpy.array([-1, 1])))
print(scipy.special.logsumexp(jax.numpy.array([-1, 1])))
```
Raises the following error:
```
Traceback (most recent call last):
File "txt.py", line 2, in <module>
print(jax.scipy.special.logsumexp(jax.numpy.array([-1, 1])))
File "/opt/conda/lib/python3.7/site-packages/jax/_src/scipy/special.py", line 107, in logsumexp
amax = lax.reduce(a, _constant_like(a, -np.inf), lax.max, dims)
File "/opt/conda/lib/python3.7/site-packages/jax/_src/numpy/lax_numpy.py", line 335, in _constant_like
return np.array(const, dtype=_dtype(x))
OverflowError: cannot convert float infinity to integer
```
While the line from scipy runs fine. This was reproduced for the handful of other inputs I tried
| `jax.scipy.special.logsumexp` does not work on integers. If you use floats like
```python
jax.scipy.special.logsumexp(jax.numpy.array([-1., 1.]))
# -> DeviceArray(1.1269281, dtype=float32)
```
it works. You may still consider this a bug, just FYI.
Thanks for the report. Indeed I think the issue is that `logsumexp` currently fails for integer input (although the error is more explicit on master due to changes in #5872). The fix will be to internally promote inputs to floating point; the workaround right now is to convert inputs manually:
```python
import jax, scipy
import jax.numpy as jnp
print(jax.scipy.special.logsumexp(jax.numpy.array([-1, 1]).astype(jnp.float32)))
``` | 2021-03-04T17:09:13 |
google/jax | 5,978 | google__jax-5978 | [
"5976"
] | 00c295771ac7b649bd4084d8dbf69a0d7f03a0e8 | diff --git a/jax/_src/random.py b/jax/_src/random.py
--- a/jax/_src/random.py
+++ b/jax/_src/random.py
@@ -1173,11 +1173,12 @@ def _poisson(key, lam, shape, dtype):
# λ -> ∞, so pick some arbitrary large value.
lam_rejection = lax.select(use_knuth, lax.full_like(lam, 1e5), lam)
max_iters = dtype.type(jnp.iinfo(dtype).max) # insanely conservative
- return lax.select(
- use_knuth,
- _poisson_knuth(key, lam_knuth, shape, dtype, max_iters),
- _poisson_rejection(key, lam_rejection, shape, dtype, max_iters),
+ result = lax.select(
+ use_knuth,
+ _poisson_knuth(key, lam_knuth, shape, dtype, max_iters),
+ _poisson_rejection(key, lam_rejection, shape, dtype, max_iters),
)
+ return lax.select(lam == 0, jnp.zeros_like(result), result)
def poisson(key, lam, shape=(), dtype=dtypes.int_):
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -538,6 +538,12 @@ def testPoissonShape(self):
x = random.poisson(key, np.array([2.0, 20.0]), shape=(3, 2))
assert x.shape == (3, 2)
+ def testPoissonZeros(self):
+ key = random.PRNGKey(0)
+ lam = jnp.concatenate([jnp.zeros(10), 20 * jnp.ones(10)])
+ samples = random.poisson(key, lam, shape=(2, 20))
+ self.assertArraysEqual(samples[:, :10], jnp.zeros_like(samples[:, :10]))
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_dtype={}".format(np.dtype(dtype).name), "dtype": dtype}
for dtype in jtu.dtypes.floating))
| Bug in jax.random.poisson for lam=0
Dear people from JAX,
Thank you for writing this software, I love it! I am using JAX for some optics simulations, for which I need the Poisson random number generator to add noise to my simulated measurements. I noticed that `poisson(key, lam=0)` always returns -1, which makes no sense and does not match the numpy implementation.
In JAX, I would write:
```python
import jax.numpy as jnp
import jax
key = jax.random.PRNGKey(0)
original = jnp.zeros(10,)
x = jax.random.poisson(key,original,(10,))
print(x)
# [-1 -1 -1 -1 -1 -1 -1 -1 -1 -1]
```
Numpy:
```python
import numpy as np
rng = np.random.default_rng()
s = rng.poisson(0, 10)
print(s)
# [0 0 0 0 0 0 0 0 0 0]
```
Looking at the source, the problem seems to be in `_poisson_knuth`. An obvious solution would be to add a statement that just returns zero wherever `lam == 0`, but there's probably a more elegant solution?
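Until a fix lands, a user-level workaround along those lines is to mask the samples rather than the rate (illustrative only):
```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
lam = jnp.zeros(10)
raw = jax.random.poisson(key, lam, (10,))  # returns -1s on affected versions
samples = jnp.where(lam == 0, 0, raw)      # force the lam == 0 entries back to 0
print(samples)                             # [0 0 0 0 0 0 0 0 0 0]
```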
| Thanks for the report! I'll take a look. | 2021-03-08T17:27:50 |
google/jax | 5,990 | google__jax-5990 | [
"5987"
] | 0b88b0ea9b250c3dc908d30ebd22478d0cd5b08b | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -607,12 +607,14 @@ def power(x1, x2):
# TODO(phawkins): add integer pow support to XLA.
bits = 6 # Anything more would overflow for any x1 > 1
- acc = ones(shape(x1), dtype=dtype)
+ zero = _constant_like(x2, 0)
+ one = _constant_like(x2, 1)
+ # Initialize acc carefully such that pow(0, x2) is zero for x2 != 0
+ acc = where(lax.bitwise_and(lax.eq(x1, zero), lax.ne(x2, zero)), zero, one)
for _ in range(bits):
- acc = where(lax.bitwise_and(x2, _constant_like(x2, 1)),
- lax.mul(acc, x1), acc)
+ acc = where(lax.bitwise_and(x2, one), lax.mul(acc, x1), acc)
x1 = lax.mul(x1, x1)
- x2 = lax.shift_right_logical(x2, _constant_like(x2, 1))
+ x2 = lax.shift_right_logical(x2, one)
return acc
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1754,6 +1754,16 @@ def testIntegerPower(self, ptype):
self.assertLen(eqns, 1)
self.assertEqual(eqns[0].primitive, lax.integer_pow_p)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_x={}_y={}".format(x, y), "x": x, "y": y}
+ for x in [-1, 0, 1]
+ for y in [0, 32, 64, 128]))
+ def testIntegerPowerOverflow(self, x, y):
+ # Regression test for https://github.com/google/jax/issues/5987
+ args_maker = lambda: [x, y]
+ self._CheckAgainstNumpy(np.power, jnp.power, args_maker)
+ self._CompileAndCheck(jnp.power, args_maker)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_axis={}".format(
jtu.format_shape_dtype_string(shape, dtype), axis),
| Exponentiation on int32 data with base 0 gives different results when jitted
Example
```python
from jax import numpy as jnp
import jax
EXP_MULTIPLIER = 1
def power(a, b):
return a**b
a = jnp.int32(0)
b = jnp.int32(EXP_MULTIPLIER * 64)
result = power(a, b)
result_jit = jax.jit(power)(a, b)
assert result == result_jit, f'{result} != {result_jit}'
```
```
AssertionError: 0 != 1
```
| That's very curious - thanks for the report
```python
from jax import jit
import jax.numpy as jnp
print(jnp.power(0, 64))
# 0
print(jit(jnp.power)(0, 64))
# 1
```
That we observe differences between `jit` and non-`jit` isn't surprising: that's the difference between `lax.integer_pow` and the implementation in `lax_numpy` for specific unknown integers. But why is the latter implementation doing that?
Ah: it only looks at the first 6 bits of the exponent, but in this special case (0) we actually need to perform at least one multiplication.
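A stripped-down numpy model of that fixed-width loop (mirroring the bit-loop implementation shown in the patch above, i.e. the path taken when the exponent is a traced value such as under `jit`) makes the failure visible: for `y = 64` the six low bits are all zero, so the accumulator is never multiplied by the base and stays at 1.
```python
import numpy as np

def toy_power(x, y, bits=6):
    acc = np.int32(1)
    for _ in range(bits):  # only the low `bits` bits of y are ever examined
        if y & 1:
            acc = acc * x
        x = x * x
        y = y >> 1
    return acc

print(toy_power(np.int32(0), np.int32(64)))  # 1, even though 0 ** 64 should be 0
```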
Wow this is nasty :)
@hawkinsp, @jakevdp do you have an approximate timeline for when this might be fixed?
`x ** y` overflows for any integer `x > 1` and `y >= 64` and therefore has undefined output under those conditions. We are just missing a special case for `x == 0`. I think the fix is trivial and it is just to add `lax.select(x == 0, 0, ...)` to the return value.
Jake self-assigned this though so I'll let him look at it.
I'll try to get a fix in this morning. | 2021-03-09T17:37:11 |
google/jax | 5,997 | google__jax-5997 | [
"5988"
] | 6515b5f676916ece50a1f029b8c1557c8a93515b | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -5141,12 +5141,12 @@ def piecewise(x, condlist, funclist, *args, **kw):
funclist = [0] + list(funclist)
else:
raise ValueError(f"with {nc} condition(s), either {nc} or {nc+1} functions are expected; got {nf}")
- indices = argmax(cumsum(vstack([zeros_like(condlist[:1]), condlist]), 0), 0)
+ indices = argmax(cumsum(concatenate([zeros_like(condlist[:1]), condlist], 0), 0), 0)
dtype = _dtype(x)
def _call(f):
return lambda x: f(x, *args, **kw).astype(dtype)
def _const(v):
- return lambda x: full_like(x, v)
+ return lambda x: array(v, dtype=dtype)
funclist = [_call(f) if callable(f) else _const(f) for f in funclist]
return vectorize(lax.switch, excluded=(1,))(indices, funclist, x)
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -875,7 +875,7 @@ def assertMultiLineStrippedEqual(self, expected, what):
msg="Found\n{}\nExpecting\n{}".format(what, expected))
def _CompileAndCheck(self, fun, args_maker, *, check_dtypes=True,
- rtol=None, atol=None):
+ rtol=None, atol=None, check_cache_misses=True):
"""Helper method for running JAX compilation and allclose assertions."""
args = args_maker()
@@ -892,10 +892,11 @@ def wrapped_fun(*args):
cache_misses = xla.xla_primitive_callable.cache_info().misses
python_ans = fun(*args)
- self.assertEqual(
- cache_misses, xla.xla_primitive_callable.cache_info().misses,
- "Compilation detected during second call of {} in op-by-op "
- "mode.".format(fun))
+ if check_cache_misses:
+ self.assertEqual(
+ cache_misses, xla.xla_primitive_callable.cache_info().misses,
+ "Compilation detected during second call of {} in op-by-op "
+ "mode.".format(fun))
cfun = api.jit(wrapped_fun)
python_should_be_executing = True
diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1647,17 +1647,18 @@ def testExtract(self, shape, dtype):
"shape": shape, "dtype": dtype, "ncond": ncond, "nfunc": nfunc}
for ncond in [1, 2, 3]
for nfunc in [ncond, ncond + 1]
- for shape in nonempty_nonscalar_array_shapes
+ for shape in all_shapes
for dtype in all_dtypes))
def testPiecewise(self, shape, dtype, ncond, nfunc):
rng = jtu.rand_default(self.rng())
rng_bool = jtu.rand_int(self.rng(), 0, 2)
funclist = [lambda x: x - 1, 1, lambda x: x, 0][:nfunc]
- args_maker = lambda: (rng(shape, dtype), list(rng_bool((ncond,) + shape, bool)))
+ args_maker = lambda: (rng(shape, dtype), [rng_bool(shape, bool) for i in range(ncond)])
np_fun = partial(np.piecewise, funclist=funclist)
jnp_fun = partial(jnp.piecewise, funclist=funclist)
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker, check_dtypes=True)
- self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True)
+ # This is a higher-order function, so the cache miss check will fail.
+ self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True, check_cache_misses=False)
@parameterized.named_parameters(jtu.cases_from_list(
| jnp.piecewise breaks on scalar inputs
Based on the documentation, and by analogy to `onp.piecewise()`:
```python
import numpy as onp
onp.piecewise(onp.array(0.1), [onp.array(True), onp.array(False)], [-1, 1])
```
JAX's alternative should work on scalar inputs (zero-dimensional arrays):
```python
from jax import numpy as jnp
jnp.piecewise(jnp.array(0.1), [jnp.array(True), jnp.array(False)], [-1, 1])
```
However, this breaks with the following error on jax 0.2.8 and 0.2.10:
> TypeError Traceback (most recent call last)
> <ipython-input-6-3886dc4a1876> in <module>
> ----> 1 jnp.piecewise(jnp.array(0.1), [jnp.array(True), jnp.array(False)], [-1, 1])
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in piecewise(x, condlist, funclist, *args, **kw)
> 5132 else:
> 5133 raise ValueError(f"with {nc} condition(s), either {nc} or {nc+1} functions are expected; got {nf}")
> -> 5134 indices = argmax(cumsum(vstack([zeros_like(condlist[:1]), condlist]), 0), 0)
> 5135 dtype = _dtype(x)
> 5136 def _call(f):
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in vstack(tup)
> 2741 @_wraps(np.vstack)
> 2742 def vstack(tup):
> -> 2743 return concatenate([atleast_2d(m) for m in tup], axis=0)
> 2744 row_stack = vstack
> 2745
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in concatenate(arrays, axis)
> 2734 else:
> 2735 while len(arrays) > 1:
> -> 2736 arrays = [lax.concatenate(arrays[i:i+k], axis)
> 2737 for i in range(0, len(arrays), k)]
> 2738 return arrays[0]
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py in <listcomp>(.0)
> 2734 else:
> 2735 while len(arrays) > 1:
> -> 2736 arrays = [lax.concatenate(arrays[i:i+k], axis)
> 2737 for i in range(0, len(arrays), k)]
> 2738 return arrays[0]
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/lax/lax.py in concatenate(operands, dimension)
> 490 An array containing the concatenation.
> 491 """
> --> 492 return concatenate_p.bind(*operands, dimension=dimension)
> 493
> 494 Precision = xla_client.PrecisionConfig.Precision
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/core.py in bind(self, *args, **params)
> 282 top_trace = find_top_trace(args)
> 283 tracers = map(top_trace.full_raise, args)
> --> 284 out = top_trace.process_primitive(self, tracers, params)
> 285 return map(full_lower, out) if self.multiple_results else full_lower(out)
> 286
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/core.py in process_primitive(self, primitive, tracers, params)
> 620
> 621 def process_primitive(self, primitive, tracers, params):
> --> 622 return primitive.impl(*tracers, **params)
> 623
> 624 def process_call(self, primitive, f, tracers, params):
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/interpreters/xla.py in apply_primitive(prim, *args, **params)
> 239 def apply_primitive(prim, *args, **params):
> 240 """Impl rule that compiles and runs a single primitive 'prim' using XLA."""
> --> 241 compiled_fun = xla_primitive_callable(prim, *unsafe_map(arg_spec, args), **params)
> 242 return compiled_fun(*args)
> 243
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/util.py in wrapper(*args, **kwargs)
> 196 return f(*args, **kwargs)
> 197 else:
> --> 198 return cached(bool(config.x64_enabled), *args, **kwargs)
> 199
> 200 wrapper.cache_clear = cached.cache_clear
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/util.py in cached(_, *args, **kwargs)
> 189 @functools.lru_cache(max_size)
> 190 def cached(_, *args, **kwargs):
> --> 191 return f(*args, **kwargs)
> 192
> 193 @functools.wraps(f)
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/interpreters/xla.py in xla_primitive_callable(prim, *arg_specs, **params)
> 264 return _xla_callable(lu.wrap_init(prim_fun), device, None, "prim", donated_invars,
> 265 *arg_specs)
> --> 266 aval_out = prim.abstract_eval(*avals, **params)
> 267 if not prim.multiple_results:
> 268 handle_result = aval_to_result_handler(device, aval_out)
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/lax/lax.py in standard_abstract_eval(prim, shape_rule, dtype_rule, weak_type_rule, *avals, **kwargs)
> 1992 weak_type=weak_type)
> 1993 elif least_specialized is ShapedArray:
> -> 1994 return ShapedArray(shape_rule(*avals, **kwargs), dtype_rule(*avals, **kwargs),
> 1995 weak_type=weak_type)
> 1996 elif least_specialized is UnshapedArray:
>
> ~/opt/anaconda3/envs/grape_jax210/lib/python3.8/site-packages/jax/_src/lax/lax.py in _concatenate_shape_rule(*operands, **kwargs)
> 3425 "dimension {} for shapes {}.")
> 3426 shapes = [operand.shape for operand in operands]
> -> 3427 raise TypeError(msg.format(dimension, ", ".join(map(str, shapes))))
> 3428
> 3429 concat_size = sum(o.shape[dimension] for o in operands)
>
> TypeError: Cannot concatenate arrays with shapes that differ in dimensions other than the one being concatenated: concatenating along dimension 0 for shapes (1, 1), (1, 2).
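A possible interim workaround (a hedged sketch): lift the input and conditions to 1-D arrays and take the single element of the result:

```python
from jax import numpy as jnp

x = jnp.atleast_1d(jnp.array(0.1))
condlist = [jnp.atleast_1d(jnp.array(True)), jnp.atleast_1d(jnp.array(False))]
result = jnp.piecewise(x, condlist, [-1, 1])[0]
```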
| Thanks for the report! | 2021-03-09T19:28:32 |
google/jax | 6,011 | google__jax-6011 | [
"5949"
] | 3e45a8376cff541808dfab2abe6db2f7ade8ee42 | diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -744,7 +744,7 @@ def set_up_aliases(c, xla_args, out_tuple, donated_args, tuple_args):
for arg_index, arg in enumerate(xla_args):
if donated_args[arg_index]:
for param_index, element in flatten_shape(c.GetShape(arg)):
- key = (element.dimensions(), element.numpy_dtype())
+ key = (element.dimensions(), element.xla_element_type())
if tuple_args:
param_number = 0
param_index = (arg_index,) + tuple(param_index)
@@ -756,7 +756,7 @@ def set_up_aliases(c, xla_args, out_tuple, donated_args, tuple_args):
# Consume donations for outputs.
out_donated_args = list(donated_args)
for output_index, element in flatten_shape(c.GetShape(out_tuple)):
- key = (element.dimensions(), element.numpy_dtype())
+ key = (element.dimensions(), element.xla_element_type())
if donations.get(key, ()):
param_number, param_index, arg_index = donations[key].popleft()
out_donated_args[arg_index] = False
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2216,6 +2216,10 @@ def test_grad_of_token_consuming_primitive(self):
# Should not crash.
vjp_fun(arr)
+ def test_jit_returning_token(self):
+ x = jax.jit(jax.lax.create_token)(1.0)
+ self.assertIsInstance(x, jax.interpreters.xla.Token)
+
def test_leak_checker_catches_a_jit_leak(self):
if not config.omnistaging_enabled:
raise unittest.SkipTest("test only works with omnistaging")
| Cannot jit GPU functions returning Token objects
Cannot jit GPU functions that return Token objects, as reported in #5707
Seen with `jax==0.2.9` and `jaxlib==0.1.61+CUDA111`.
The same snippet works fine when running on CPU (`jax.config.update('jax_platform_name', 'cpu')`).
Reproducer:
```python
>>> import jax
>>> jax.devices()
[GpuDevice(id=0)]
>>> jax.jit(jax.lax.create_token)(1.0)
```
It leads to the error:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
jax._src.traceback_util.FilteredStackTrace: RuntimeError: Unimplemented: Unimplemented primitive type TOKEN
The stack trace above excludes JAX-internal frames.
The following is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/_src/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/api.py", line 398, in f_jitted
return cpp_jitted_f(context, *args, **kwargs)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/api.py", line 289, in cache_miss
out_flat = xla.xla_call(
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/core.py", line 1275, in bind
return call_bind(self, fun, *args, **params)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/core.py", line 1266, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/core.py", line 1278, in process
return trace.process_call(self, fun, tracers, params)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/core.py", line 631, in process_call
return primitive.impl(f, *tracers, **params)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/interpreters/xla.py", line 580, in _xla_call_impl
compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/linear_util.py", line 260, in memoized_fun
ans = call(fun, *args)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/interpreters/xla.py", line 714, in _xla_callable
donated_invars = set_up_aliases(c, xla_args, out_tuple, donated_invars, tuple_args)
File "/home/filippovicentini/Documents/pythonenvs/mpi4jax_env/lib64/python3.8/site-packages/jax/interpreters/xla.py", line 753, in set_up_aliases
key = (element.dimensions(), element.numpy_dtype())
RuntimeError: Unimplemented: Unimplemented primitive type TOKEN
```
| Seems to be a followup from #5707 | 2021-03-10T15:19:35 |
google/jax | 6,028 | google__jax-6028 | [
"6027"
] | ea07d41947b96c2446084a454451c940f78ffc98 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -30,7 +30,7 @@
import operator
import os
import types
-from typing import Sequence, FrozenSet, Optional, Tuple, Union, cast
+from typing import Any, Sequence, FrozenSet, Optional, Tuple, Union, cast
from textwrap import dedent as _dedent
import warnings
@@ -1302,6 +1302,22 @@ def _reshape(a, *args, order="C"):
else:
raise ValueError("Unexpected value for 'order' argument: {}.".format(order))
+def _ensure_index_tuple(x: Any) -> Tuple[int, ...]:
+ """Convert x to a tuple of indices."""
+ try:
+ return (operator.index(x),)
+ except TypeError:
+ return tuple(map(operator.index, x))
+
+def _transpose(a, *args):
+ if not args:
+ axis = None
+ elif len(args) == 1:
+ axis = args[0] if args[0] is None else _ensure_index_tuple(args[0])
+ else:
+ axis = _ensure_index_tuple(args)
+ return transpose(a, axis)
+
@_wraps(np.ravel)
def ravel(a, order="C"):
_check_arraylike("ravel", a)
@@ -5334,7 +5350,7 @@ def _operator_round(number, ndigits=None):
_diff_methods = ["clip", "conj", "conjugate", "cumprod", "cumsum",
"diagonal", "dot", "max", "mean", "min", "prod", "ptp",
"ravel", "repeat", "sort", "squeeze", "std", "sum",
- "swapaxes", "take", "tile", "trace", "transpose", "var"]
+ "swapaxes", "take", "tile", "trace", "var"]
# These methods are mentioned explicitly by nondiff_methods, so we create
# _not_implemented implementations of them here rather than in __init__.py.
@@ -5351,6 +5367,7 @@ def _operator_round(number, ndigits=None):
for method_name in _nondiff_methods + _diff_methods:
setattr(ShapedArray, method_name, core.aval_method(globals()[method_name]))
setattr(ShapedArray, "reshape", core.aval_method(_reshape))
+setattr(ShapedArray, "transpose", core.aval_method(_transpose))
setattr(ShapedArray, "flatten", core.aval_method(ravel))
setattr(ShapedArray, "T", core.aval_property(transpose))
setattr(ShapedArray, "real", core.aval_property(real))
@@ -5368,6 +5385,7 @@ def _operator_round(number, ndigits=None):
for method_name in _nondiff_methods + _diff_methods:
setattr(device_array, method_name, globals()[method_name])
setattr(device_array, "reshape", _reshape)
+ setattr(device_array, "transpose", _transpose)
setattr(device_array, "flatten", ravel)
setattr(device_array, "T", property(transpose))
setattr(device_array, "real", property(real))
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -1661,6 +1661,28 @@ def testPiecewise(self, shape, dtype, ncond, nfunc):
self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True, check_cache_misses=False)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "{}_perm={}_{}".format(
+ jtu.format_shape_dtype_string(shape, dtype), perm, arg_type),
+ "dtype": dtype, "shape": shape, "perm": perm, "arg_type": arg_type}
+ for dtype in default_dtypes
+ for shape in array_shapes
+ for arg_type in ["splat", "value"]
+ for perm in [None, tuple(np.random.RandomState(0).permutation(np.zeros(shape).ndim))]))
+ def testTransposeTuple(self, shape, dtype, perm, arg_type):
+ rng = jtu.rand_some_zero(self.rng())
+ args_maker = lambda: [rng(shape, dtype)]
+ if arg_type == "value":
+ np_fun = lambda x: x.transpose(perm)
+ jnp_fun = lambda x: jnp.array(x).transpose(perm)
+ else:
+ np_fun = lambda x: x.transpose(*(perm or ()))
+ jnp_fun = lambda x: jnp.array(x).transpose(*(perm or ()))
+
+ self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker, check_dtypes=True)
+ self._CompileAndCheck(jnp_fun, args_maker, check_dtypes=True)
+
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "{}_trim={}".format(
jtu.format_shape_dtype_string(a_shape, dtype), trim),
| Inconsistent transpose behaviour between jax.numpy and numpy
Example:
```
# Fine.
np.arange(6).reshape(1, 3, 2).transpose(2, 1, 0)
# TypeError: transpose() takes from 1 to 2 positional arguments but 4 were given
jnp.arange(6).reshape(1, 3, 2).transpose(2, 1, 0)
```
| Thanks for the report - the multi-argument version of this is not yet implemented. Until we rectify that, you can pass a tuple of axes:
```python
jnp.arange(6).reshape(1, 3, 2).transpose((2, 1, 0))
``` | 2021-03-11T17:31:26 |
google/jax | 6,048 | google__jax-6048 | [
"6044"
] | 8d3b4ac2f36761513ae132b59180e9770fe34922 | diff --git a/jax/interpreters/pxla.py b/jax/interpreters/pxla.py
--- a/jax/interpreters/pxla.py
+++ b/jax/interpreters/pxla.py
@@ -1165,6 +1165,9 @@ def partitioned_sharding_spec(num_partitions: int,
def execute_replicated(compiled, backend, in_handler, out_handler, *args):
input_bufs = in_handler(args)
out_bufs = compiled.execute_sharded_on_local_devices(input_bufs)
+ if xla.needs_check_special():
+ for bufs in out_bufs:
+ xla.check_special("parallel computation", bufs)
return out_handler(out_bufs)
diff --git a/jax/interpreters/xla.py b/jax/interpreters/xla.py
--- a/jax/interpreters/xla.py
+++ b/jax/interpreters/xla.py
@@ -357,7 +357,7 @@ def _execute_compiled_primitive(prim, compiled, result_handler, *args):
device, = compiled.local_devices()
input_bufs = list(it.chain.from_iterable(device_put(x, device) for x in args if x is not token))
out_bufs = compiled.execute(input_bufs)
- check_special(prim, out_bufs)
+ check_special(prim.name, out_bufs)
return result_handler(*out_bufs)
def _execute_replicated_primitive(prim, compiled, result_handler, *args):
@@ -370,11 +370,13 @@ def _execute_replicated_primitive(prim, compiled, result_handler, *args):
]
return result_handler(*out_bufs)
+def needs_check_special():
+ return FLAGS.jax_debug_infs or FLAGS.jax_debug_nans
-def check_special(prim, bufs):
- if FLAGS.jax_debug_infs or FLAGS.jax_debug_nans:
+def check_special(name, bufs):
+ if needs_check_special():
for buf in bufs:
- _check_special(prim.name, buf.xla_shape(), buf)
+ _check_special(name, buf.xla_shape(), buf)
def _check_special(name, xla_shape, buf):
assert not xla_shape.is_tuple()
@@ -845,7 +847,7 @@ def _execute_compiled(compiled: XlaExecutable, avals, handlers, *args):
device, = compiled.local_devices()
input_bufs = list(it.chain.from_iterable(device_put(x, device) for x in args if x is not token))
out_bufs = compiled.execute(input_bufs)
- check_special(xla_call_p, out_bufs)
+ check_special(xla_call_p.name, out_bufs)
return [handler(*bs) for handler, bs in zip(handlers, _partition_outputs(avals, out_bufs))]
def _execute_replicated(compiled: XlaExecutable, avals, handlers, *args):
@@ -856,7 +858,7 @@ def _execute_replicated(compiled: XlaExecutable, avals, handlers, *args):
buf[0] for buf in compiled.execute_sharded_on_local_devices(
list(zip(*input_bufs)))
]
- check_special(xla_call_p, out_bufs)
+ check_special(xla_call_p.name, out_bufs)
return [handler(*bs) for handler, bs in zip(handlers, _partition_outputs(avals, out_bufs))]
def _execute_trivial(jaxpr, device: Optional[Device], consts, avals, handlers, *args):
| diff --git a/tests/debug_nans_test.py b/tests/debug_nans_test.py
--- a/tests/debug_nans_test.py
+++ b/tests/debug_nans_test.py
@@ -18,9 +18,11 @@
import jax
import numpy as np
+from unittest import SkipTest
from jax import test_util as jtu
from jax import numpy as jnp
+from jax.experimental import pjit
from jax.config import config
config.parse_flags_with_absl()
@@ -49,6 +51,12 @@ def testJitComputationNoNaN(self):
ans = jax.jit(jnp.tanh)(A)
ans.block_until_ready()
+ def testJitComputationNaN(self):
+ A = jnp.array(0.)
+ with self.assertRaises(FloatingPointError):
+ ans = jax.jit(lambda x: 0. / x)(A)
+ ans.block_until_ready()
+
def testSingleResultPrimitiveNaN(self):
A = jnp.array(0.)
with self.assertRaises(FloatingPointError):
@@ -71,6 +79,67 @@ def f(x):
with self.assertRaisesRegex(FloatingPointError, msg):
f(1)
+ def testPmap(self):
+ f = jax.pmap(lambda x: 0. / x)
+
+ with self.assertRaisesRegex(
+ FloatingPointError,
+ r"invalid value \(nan\) encountered in parallel computation"):
+ ans = f(jnp.array([0.]))
+ ans.block_until_ready()
+
+ if jax.device_count() >= 2:
+ with self.assertRaisesRegex(
+ FloatingPointError,
+ r"invalid value \(nan\) encountered in parallel computation"):
+ ans = f(jnp.array([1., 0.]))
+ ans.block_until_ready()
+
+ def testPmapNoNaN(self):
+ ans = jax.pmap(lambda x: 0. / x)(jnp.array([1.]))
+ ans.block_until_ready()
+
+ @jtu.ignore_warning(message=".*is an experimental.*")
+ def testXmap(self):
+ if not config.omnistaging_enabled:
+ raise SkipTest("xmap requires omnistaging")
+
+ f = jax.experimental.maps.xmap(
+ lambda x: 0. / x,
+ in_axes=['i'],
+ out_axes=['i'],
+ axis_resources={'i': 'x'})
+
+ with jax.experimental.maps.mesh(np.array(jax.local_devices()[:1]), ('x',)):
+ with self.assertRaisesRegex(
+ FloatingPointError,
+ r"invalid value \(nan\) encountered in parallel computation"):
+ ans = f(jnp.array([0.]))
+ ans.block_until_ready()
+
+ if jax.device_count() >= 2:
+ with jax.experimental.maps.mesh(np.array(jax.local_devices()[:2]), ('x',)):
+ with self.assertRaises(FloatingPointError):
+ ans = f(jnp.array([1., 0.]))
+ ans.block_until_ready()
+
+ @jtu.ignore_warning(message=".*is an experimental.*")
+ @jtu.skip_on_devices("cpu", "gpu")
+ def testPjit(self):
+ if jax.device_count() < 2:
+ raise SkipTest("test requires >=2 devices")
+
+ p = jax.experimental.PartitionSpec('x')
+ f = pjit.pjit(lambda x: 0. / x,
+ in_axis_resources=p,
+ out_axis_resources=p)
+
+ with jax.experimental.maps.mesh(np.array(jax.local_devices()[:2]), ('x',)):
+ with self.assertRaises(FloatingPointError):
+ ans = f(jnp.array([0., 1.]))
+ ans.block_until_ready()
+
+ # TODO(skye): add parallel inf tests, ideally by factoring out test logic
class DebugInfsTest(jtu.JaxTestCase):
| Debugging nan when using 'pmap'
I was trying to set up error propagation by returning nan values as explained in this [issue](https://github.com/google/jax/issues/4257) and I noticed that to make it work for a _pmap_-ed function I also need to _jit_ it.
So something like this doesn't give me an error:
```python
from jax.config import config
config.update("jax_debug_nans", True)
@jax.pmap
def my_func(x):
return jnp.nan
x = jnp.zeros((1,))
y = my_func(x)
```
While by adding `jax.jit` I get the error (which is what I want):
```python
from jax.config import config
config.update("jax_debug_nans", True)
@jax.jit
@jax.pmap
def my_func(x):
return jnp.nan
x = jnp.zeros((1,))
y = my_func(x)
```
So basically a _pmap_-ed function doesn't behave like a _jit_-ed function or a plain function with regard to debugging NaNs with `jax_debug_nans`. Adding `jax.jit` is an easy fix, but I think it would help if this were documented, maybe somewhere [here](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#Debugging-NaNs). Personally it took me quite a bit of time to figure out what was going on.
| We should just make `jax_debug_nans` work with pmap (and also `jax_debug_infs` while we're at it). pmap is generally supposed to work like jit + the parallel mapping, and we also generally don't recommend wrapping pmap in jit since it can result in extra data transfers. I'll work on a fix, thanks for reporting! | 2021-03-12T22:23:18 |
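For reference, a sketch of the user-facing behaviour the accompanying test asserts once this lands:

```python
import jax
import jax.numpy as jnp
from jax.config import config
config.update("jax_debug_nans", True)

# With the fix, the pmap-ed call raises FloatingPointError directly,
# with no need to wrap it in jax.jit.
jax.pmap(lambda x: 0. / x)(jnp.array([0.]))  # -> FloatingPointError
```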
google/jax | 6,068 | google__jax-6068 | [
"6051"
] | 63c06ef77e84bb5b3582fe23b17d8dfd2f5ecd0c | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -272,20 +272,18 @@ def _promote_dtypes(*args):
if len(args) < 2:
return args
else:
- to_dtype_raw = dtypes._result_type_raw(*args)
- weak_type = to_dtype_raw in set(dtypes._weak_types)
- to_dtype = dtypes.canonicalize_dtype(to_dtype_raw)
+ to_dtype, weak_type = dtypes._lattice_result_type(*args)
+ to_dtype = dtypes.canonicalize_dtype(to_dtype)
return [lax.convert_element_type(x, to_dtype, weak_type) for x in args]
def _promote_dtypes_inexact(*args):
"""Convenience function to apply Numpy argument dtype promotion.
Promotes arguments to an inexact type."""
- to_dtype_raw = dtypes._result_type_raw(*args)
- to_dtype = dtypes.canonicalize_dtype(to_dtype_raw)
+ to_dtype, weak_type = dtypes._lattice_result_type(*args)
+ to_dtype = dtypes.canonicalize_dtype(to_dtype)
to_dtype_inexact = _to_inexact_dtype(to_dtype)
- weak_type = (to_dtype == to_dtype_inexact
- and to_dtype_raw in set(dtypes._weak_types))
+ weak_type = (weak_type and to_dtype == to_dtype_inexact)
return [lax.convert_element_type(x, to_dtype_inexact, weak_type) for x in args]
def _to_inexact_dtype(dtype):
diff --git a/jax/dtypes.py b/jax/dtypes.py
--- a/jax/dtypes.py
+++ b/jax/dtypes.py
@@ -224,17 +224,13 @@ def dtype_real(typ):
np.dtype('complex128'),
] + _weak_types # type: ignore[operator]
-def _jax_type(value):
- """Return the jax type for a value or type."""
- # Note: `x in _weak_types` can return false positives due to dtype comparator overloading.
- if any(value is typ for typ in _weak_types):
- return value
- dtype_ = dtype(value)
- if is_weakly_typed(value):
- pytype = type(dtype_.type(0).item())
- if pytype in _weak_types:
- return pytype
- return dtype_
+def _jax_type(dtype, weak_type):
+ """Return the jax type for a dtype and weak type."""
+ return type(dtype.type(0).item()) if (weak_type and dtype != bool) else dtype
+
+def _dtype_and_weaktype(value):
+ """Return a (dtype, weak_type) tuple for the given input."""
+ return dtype(value), any(value is typ for typ in _weak_types) or is_weakly_typed(value)
def _type_promotion_lattice():
"""
@@ -266,6 +262,14 @@ def _make_lattice_upper_bounds():
@functools.lru_cache(512) # don't use util.memoize because there is no X64 dependence.
def _least_upper_bound(*nodes):
+ """Compute the least upper bound of a set of nodes.
+
+ Args:
+ nodes: sequence of entries from _jax_types
+ Returns:
+ the _jax_type representing the least upper bound of the input nodes
+ on the promotion lattice.
+ """
# This function computes the least upper bound of a set of nodes N within a partially
# ordered set defined by the lattice generated above.
# Given a partially ordered set S, let the set of upper bounds of n ∈ S be
@@ -325,13 +329,23 @@ def dtype(x):
return python_scalar_dtypes[type(x)]
return np.result_type(x)
-def _result_type_raw(*args):
- if len(args) == 1:
- return _jax_type(args[0])
- return _least_upper_bound(*{_jax_type(arg) for arg in args})
+def _lattice_result_type(*args):
+ dtypes, weak_types = zip(*(_dtype_and_weaktype(arg) for arg in args))
+ if len(dtypes) == 1:
+ return dtypes[0], weak_types[0]
+
+ # If all inputs are weakly typed, we compute the bound of the strongly-typed
+ # counterparts and apply the weak type at the end. This avoids returning the
+ # incorrect result with non-canonical weak types (e.g. weak int16).
+ if all(weak_types):
+ result_type = _least_upper_bound(*{_jax_type(dtype, False) for dtype in dtypes})
+ return dtype(result_type), True
+ else:
+ result_type = _least_upper_bound(*{_jax_type(d, w) for d, w in zip(dtypes, weak_types)})
+ return dtype(result_type), any(result_type is t for t in _weak_types)
def result_type(*args):
- """Convenience function to apply Numpy argument dtype promotion."""
+ """Convenience function to apply JAX argument dtype promotion."""
if len(args) == 0:
raise ValueError("at least one array or dtype is required")
- return canonicalize_dtype(_result_type_raw(*args))
+ return canonicalize_dtype(_lattice_result_type(*args)[0])
| diff --git a/tests/dtypes_test.py b/tests/dtypes_test.py
--- a/tests/dtypes_test.py
+++ b/tests/dtypes_test.py
@@ -24,6 +24,7 @@
import jax
from jax import dtypes
+from jax import lax
from jax import numpy as jnp
from jax import test_util as jtu
from jax.interpreters import xla
@@ -34,7 +35,7 @@
bool_dtypes = [np.dtype('bool')]
signed_dtypes = [np.dtype('int8'), np.dtype('int16'), np.dtype('int32'),
- np.dtype('int64'), np.dtype('longlong'), np.dtype('intc')]
+ np.dtype('int64')]
unsigned_dtypes = [np.dtype('uint8'), np.dtype('uint16'), np.dtype('uint32'),
np.dtype('uint64')]
@@ -210,7 +211,7 @@ class TestPromotionTables(jtu.JaxTestCase):
"jaxtype": jaxtype}
for jaxtype in dtypes._jax_types)
def testJaxTypeFromType(self, jaxtype):
- self.assertIs(dtypes._jax_type(jaxtype), jaxtype)
+ self.assertIs(dtypes._jax_type(*dtypes._dtype_and_weaktype(jaxtype)), jaxtype)
@parameterized.named_parameters(
{"testcase_name": "_jaxtype={}".format(jaxtype),
@@ -221,7 +222,7 @@ def testJaxTypeFromVal(self, jaxtype):
val = jaxtype(0)
except TypeError:
val = jaxtype.type(0)
- self.assertIs(dtypes._jax_type(val), jaxtype)
+ self.assertIs(dtypes._jax_type(*dtypes._dtype_and_weaktype(val)), jaxtype)
@jtu.ignore_warning(category=UserWarning,
message="Explicitly requested dtype.*")
@@ -327,5 +328,30 @@ def testBinaryPromotionJitInvariance(self, xtype, ytype, xfun, yfun):
args_maker = lambda: [xtype(1), ytype(1)]
self._CompileAndCheck(f, args_maker, check_dtypes=True)
+ @parameterized.named_parameters(
+ {"testcase_name": "_dtype={}_weak_type={}".format(dtype, weak_type),
+ "dtype": dtype, "weak_type": weak_type}
+ for dtype in all_dtypes
+ for weak_type in [True, False]
+ )
+ def testUnaryPromotion(self, dtype, weak_type):
+ # Regression test for https://github.com/google/jax/issues/6051
+ x = lax.convert_element_type(0, dtype, weak_type=weak_type)
+ y = jnp.array(0, dtype=dtypes.result_type(x))
+ assert x.dtype == y.dtype
+
+ @parameterized.named_parameters(
+ {"testcase_name": "_dtype={}_weak_type={}".format(dtype, weak_type),
+ "dtype": dtype, "weak_type": weak_type}
+ for dtype in all_dtypes
+ for weak_type in [True, False]
+ )
+ def testBinaryNonPromotion(self, dtype, weak_type):
+ # Regression test for https://github.com/google/jax/issues/6051
+ x = lax.convert_element_type(0, dtype, weak_type=weak_type)
+ y = (x + x)
+ assert x.dtype == y.dtype
+ assert dtypes.is_weakly_typed(y) == dtypes.is_weakly_typed(x)
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| Non-canonical weak types can lead to failed promotion
```python
from jax import lax
x = lax.convert_element_type(1, 'int16', weak_type=True)
print(x)
# DeviceArray(1, dtype=int16)
print(x + 1)
# TypeError: add requires arguments to have the same dtypes, got int16, int32.
```
This happens because of something @mattjj noted in #6000: `dtypes.result_type` essentially canonicalizes all weak types before doing any promotion.
| Might also be related to #6018... | 2021-03-15T21:17:35 |
google/jax | 6,130 | google__jax-6130 | [
"6129"
] | d1a8ad076b86d752bc89a2850c9fbd24c415e2f0 | diff --git a/jax/lib/xla_bridge.py b/jax/lib/xla_bridge.py
--- a/jax/lib/xla_bridge.py
+++ b/jax/lib/xla_bridge.py
@@ -29,11 +29,13 @@
logging._warn_preinit_stderr = 0
from ..config import flags
-from jax._src import util
+from jax._src import util, traceback_util
from .. import dtypes
import numpy as np
import threading
+traceback_util.register_exclusion(__file__)
+
try:
from . import tpu_client
except ImportError:
@@ -335,11 +337,12 @@ def constant(builder, py_val, canonicalize_types=True):
Returns:
A representation of the constant, either a ComputationDataHandle or None
"""
- py_type = type(py_val)
- if py_type in _constant_handlers:
- return _constant_handlers[py_type](builder, py_val, canonicalize_types)
- else:
- raise TypeError("No constant handler for type: {}".format(py_type))
+ for t in type(py_val).mro():
+ handler = _constant_handlers.get(t)
+ if handler: return handler(builder, py_val, canonicalize_types)
+ if hasattr(py_val, '__jax_array__'):
+ return constant(builder, py_val.__jax_array__(), canonicalize_types)
+ raise TypeError("No constant handler for type: {}".format(type(py_val)))
# HLO instructions optionally can be annotated to say how the output should be
# spatially partitioned (represented in XLA as OpSharding protos, see
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -16,6 +16,7 @@
import collections
from contextlib import contextmanager
import copy
+import enum
from functools import partial
import re
import unittest
@@ -2359,6 +2360,19 @@ def __jax_array__(self):
for f in [jnp.isscalar, jnp.size, jnp.shape, jnp.dtype]:
self.assertEqual(f(x), f(a))
+ def test_constant_handler_mro(self):
+ # https://github.com/google/jax/issues/6129
+
+ class Foo(enum.IntEnum):
+ bar = 1
+
+ @api.pmap
+ def f(_):
+ return Foo.bar
+
+ ans = f(jnp.arange(1)) # doesn't crash
+ expected = jnp.arange(1) + 1
+ self.assertAllClose(ans, expected)
class RematTest(jtu.JaxTestCase):
| no constant handler for IntEnum
```python
import enum
import jax
class Foo(enum.IntEnum):
bar = 1
baz = 2
@jax.pmap
def f(_):
return Foo.bar
f(jax.numpy.arange(1))
```
The same works with `jax.jit` instead of `jax.pmap`.
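The pmap path fails because constant handlers were looked up by the value's exact type; for an `IntEnum` member that type is the enum class itself, and `int` only appears further down the MRO (a minimal sketch of the type relationship, standard Python semantics):

```python
import enum

class Foo(enum.IntEnum):
    bar = 1

# The constant's exact type is the enum class, not int, so a handler table
# keyed on type(value) misses it; int only shows up later in the MRO.
print(type(Foo.bar) is int)           # False
print(int in type(Foo.bar).__mro__)   # True
```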
| 2021-03-19T01:05:59 |
|
google/jax | 6,136 | google__jax-6136 | [
"1028"
] | d75becbf676e392bcf08e91594e8495bbc39317d | diff --git a/jax/flatten_util.py b/jax/flatten_util.py
--- a/jax/flatten_util.py
+++ b/jax/flatten_util.py
@@ -12,12 +12,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import warnings
+
+import numpy as np
from .tree_util import tree_flatten, tree_unflatten
-from ._src.util import safe_zip
+from ._src.util import safe_zip, unzip2
import jax.numpy as jnp
-from jax.api import vjp
+from jax import dtypes
+from jax import lax
zip = safe_zip
@@ -26,18 +30,40 @@ def ravel_pytree(pytree):
"""Ravel (i.e. flatten) a pytree of arrays down to a 1D array.
Args:
- pytree: a pytree to ravel.
+ pytree: a pytree of arrays and scalars to ravel.
Returns:
A pair where the first element is a 1D array representing the flattened and
- concatenated leaf values, and the second element is a callable for
- unflattening a 1D vector of the same length back to a pytree of of the same
- structure as the input ``pytree``.
+ concatenated leaf values, with dtype determined by promoting the dtypes of
+ leaf values, and the second element is a callable for unflattening a 1D
+ vector of the same length back to a pytree of of the same structure as the
+ input ``pytree``. If the input pytree is empty (i.e. has no leaves) then as
+ a convention a 1D empty array of dtype float32 is returned in the first
+ component of the output.
+
+ For details on dtype promotion, see
+ https://jax.readthedocs.io/en/latest/type_promotion.html.
+
"""
leaves, treedef = tree_flatten(pytree)
- flat, unravel_list = vjp(_ravel_list, *leaves)
+ flat, unravel_list = _ravel_list(leaves)
unravel_pytree = lambda flat: tree_unflatten(treedef, unravel_list(flat))
return flat, unravel_pytree
-def _ravel_list(*lst):
- return jnp.concatenate([jnp.ravel(elt) for elt in lst]) if lst else jnp.array([])
+def _ravel_list(lst):
+ if not lst: return jnp.array([], jnp.float32), lambda _: []
+ from_dtypes = [dtypes.dtype(l) for l in lst]
+ to_dtype = dtypes.result_type(*from_dtypes)
+ sizes, shapes = unzip2((jnp.size(x), jnp.shape(x)) for x in lst)
+ indices = np.cumsum(sizes)
+
+ def unravel(arr):
+ chunks = jnp.split(arr, indices[:-1])
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore") # ignore complex-to-real cast warning
+ return [lax.convert_element_type(chunk.reshape(shape), dtype)
+ for chunk, shape, dtype in zip(chunks, shapes, from_dtypes)]
+
+ ravel = lambda e: jnp.ravel(lax.convert_element_type(e, to_dtype))
+ raveled = jnp.concatenate([ravel(e) for e in lst])
+ return raveled, unravel
| diff --git a/tests/tree_util_test.py b/tests/tree_util_test.py
--- a/tests/tree_util_test.py
+++ b/tests/tree_util_test.py
@@ -20,6 +20,9 @@
from jax import test_util as jtu
from jax import tree_util
+from jax import flatten_util
+from jax import dtypes
+import jax.numpy as jnp
def _dummy_func(*args, **kwargs):
@@ -274,5 +277,56 @@ def testTransposeWithCustomObject(self):
FlatCache({"a": [3, 4], "b": [5, 6]}))
self.assertEqual(expected, actual)
+
+class RavelUtilTest(jtu.JaxTestCase):
+
+ def testFloats(self):
+ tree = [jnp.array([3.], jnp.float32),
+ jnp.array([[1., 2.], [3., 4.]], jnp.float32)]
+ raveled, unravel = flatten_util.ravel_pytree(tree)
+ self.assertEqual(raveled.dtype, jnp.float32)
+ tree_ = unravel(raveled)
+ self.assertAllClose(tree, tree_, atol=0., rtol=0.)
+
+ def testInts(self):
+ tree = [jnp.array([3], jnp.int32),
+ jnp.array([[1, 2], [3, 4]], jnp.int32)]
+ raveled, unravel = flatten_util.ravel_pytree(tree)
+ self.assertEqual(raveled.dtype, jnp.int32)
+ tree_ = unravel(raveled)
+ self.assertAllClose(tree, tree_, atol=0., rtol=0.)
+
+ def testMixedFloatInt(self):
+ tree = [jnp.array([3], jnp.int32),
+ jnp.array([[1., 2.], [3., 4.]], jnp.float32)]
+ raveled, unravel = flatten_util.ravel_pytree(tree)
+ self.assertEqual(raveled.dtype, dtypes.promote_types(jnp.float32, jnp.int32))
+ tree_ = unravel(raveled)
+ self.assertAllClose(tree, tree_, atol=0., rtol=0.)
+
+ def testMixedIntBool(self):
+ tree = [jnp.array([0], jnp.bool_),
+ jnp.array([[1, 2], [3, 4]], jnp.int32)]
+ raveled, unravel = flatten_util.ravel_pytree(tree)
+ self.assertEqual(raveled.dtype, dtypes.promote_types(jnp.bool_, jnp.int32))
+ tree_ = unravel(raveled)
+ self.assertAllClose(tree, tree_, atol=0., rtol=0.)
+
+ def testMixedFloatComplex(self):
+ tree = [jnp.array([1.], jnp.float32),
+ jnp.array([[1, 2 + 3j], [3, 4]], jnp.complex64)]
+ raveled, unravel = flatten_util.ravel_pytree(tree)
+ self.assertEqual(raveled.dtype, dtypes.promote_types(jnp.float32, jnp.complex64))
+ tree_ = unravel(raveled)
+ self.assertAllClose(tree, tree_, atol=0., rtol=0.)
+
+ def testEmpty(self):
+ tree = []
+ raveled, unravel = flatten_util.ravel_pytree(tree)
+ self.assertEqual(raveled.dtype, jnp.float32) # convention
+ tree_ = unravel(raveled)
+ self.assertAllClose(tree, tree_, atol=0., rtol=0.)
+
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| `ravel_pytree` does not work with int input
After https://github.com/google/jax/pull/897, `vjp` does not accept int input. Because `ravel_pytree` uses `vjp` for its ravel/unravel logic, something like `ravel_pytree([1])` no longer works. It would be nice if we were able to disable this check in `vjp`. Perhaps we can provide an optional kwarg `disable_check_inexact_input` in the `vjp` function to maintain this behaviour.
cc @hawkinsp @neerajprad
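A possible interim workaround (sketch): cast integer leaves to an inexact dtype before raveling, at the cost of getting float leaves back from the unravel:

```python
import jax.numpy as jnp
from jax import tree_util
from jax.flatten_util import ravel_pytree

tree = [1, jnp.arange(3)]
float_tree = tree_util.tree_map(lambda x: jnp.asarray(x, jnp.float32), tree)
flat, unravel = ravel_pytree(float_tree)  # works, since all leaves are now inexact
```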
| 2021-03-19T17:09:48 |
|
google/jax | 6,137 | google__jax-6137 | [
"6134"
] | d75becbf676e392bcf08e91594e8495bbc39317d | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -768,7 +768,7 @@ def value_and_grad(fun: Callable, argnums: Union[int, Sequence[int]] = 0,
"same shape as the arguments at positions {argnums}.")
_check_callable(fun)
- argnums = _ensure_index(argnums)
+ argnums = core.concrete_or_error(_ensure_index, argnums)
@wraps(fun, docstr=docstr, argnums=argnums)
@api_boundary
| 'TypeError: iteration over a 0-d array' when using vmap
When I'm using vmap as shown in the following example I get 'TypeError: iteration over a 0-d array', but I expect an array with the second derivatives for both input variables. I'm using jax 0.2.10 and jaxlib 0.1.64.
```
import jax.numpy as jnp
from jax import grad, vmap
def target_function(x, y):
single_input = jnp.array([x, y])
return jnp.sum(single_input ** 3)
single_input = jnp.array([1., 1.])
vmap_second_derivatives = (
lambda variable_position: grad(grad(target_function, variable_position), variable_position)(*single_input)
)
variable_positions = jnp.array([0, 1])
second_derivatives = vmap(vmap_second_derivatives)(variable_positions)
```
**The full error message:**
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Traceback (most recent call last):
File "/Users/davidanton/develop/issue_vmap/issueVMAP.py", line 18, in <module>
second_derivatives = vmap(vmap_second_derivatives)(variable_positions)
File "/Users/davidanton/develop/issue_vmap/issueVMAP.py", line 13, in <lambda>
lambda variable_position: grad(grad(target_function, variable_position), variable_position)(*single_input)
jax._src.traceback_util.FilteredStackTrace: TypeError: iteration over a 0-d array
The stack trace above excludes JAX-internal frames.
The following is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/davidanton/develop/issue_vmap/issueVMAP.py", line 18, in <module>
second_derivatives = vmap(vmap_second_derivatives)(variable_positions)
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/_src/traceback_util.py", line 139, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/api.py", line 1237, in batched_fun
out_flat = batching.batch(
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/linear_util.py", line 166, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/davidanton/develop/issue_vmap/issueVMAP.py", line 13, in <lambda>
lambda variable_position: grad(grad(target_function, variable_position), variable_position)(*single_input)
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/api.py", line 748, in grad
value_and_grad_f = value_and_grad(fun, argnums, has_aux=has_aux,
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/api.py", line 808, in value_and_grad
argnums = _ensure_index(argnums)
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/api_util.py", line 38, in _ensure_index
return tuple(map(operator.index, x))
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/_src/util.py", line 37, in safe_map
args = list(map(list, args))
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/core.py", line 497, in __iter__
return iter(self.aval._iter(self))
File "/Users/davidanton/develop/issue_vmap/venv/lib/python3.9/site-packages/jax/_src/lax/lax.py", line 1943, in _iter
raise TypeError("iteration over a 0-d array") # same as numpy error
TypeError: iteration over a 0-d array
**Calling the function without vmap, on the other hand, works fine.**
```
import jax.numpy as jnp
from jax import grad, vmap
def target_function(x, y):
single_input = jnp.array([x, y])
return jnp.sum(single_input ** 3)
single_input = jnp.array([1., 1.])
vmap_second_derivatives = (
lambda variable_position: grad(grad(target_function, variable_position), variable_position)(*single_input)
)
print(vmap_second_derivatives(0))
print(vmap_second_derivatives(1))
```
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
6.0
6.0
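A possible workaround (sketch): `argnums` must be a static Python value rather than a vmapped tracer, so a plain Python loop over the variable positions gives the intended second derivatives.

```python
import jax
import jax.numpy as jnp

def target_function(x, y):
    return jnp.sum(jnp.array([x, y]) ** 3)

second_derivatives = jnp.stack(
    [jax.grad(jax.grad(target_function, i), i)(1., 1.) for i in (0, 1)]
)
print(second_derivatives)  # [6. 6.]
```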
| 2021-03-19T17:15:07 |
||
google/jax | 6,144 | google__jax-6144 | [
"6121"
] | 4f8814a760122450287eefe30a8e9fd16a83a412 | diff --git a/jax/_src/numpy/linalg.py b/jax/_src/numpy/linalg.py
--- a/jax/_src/numpy/linalg.py
+++ b/jax/_src/numpy/linalg.py
@@ -202,7 +202,7 @@ def _cofactor_solve(a, b):
"a=[..., m, m] and b=[..., m, m]; got a={} and b={}")
raise ValueError(msg.format(a_shape, b_shape))
if a_shape[-1] == 1:
- return a[0, 0], b
+ return a[..., 0, 0], b
# lu contains u in the upper triangular matrix and l in the strict lower
# triangular matrix.
# The diagonal of l is set to ones without loss of generality.
| diff --git a/tests/linalg_test.py b/tests/linalg_test.py
--- a/tests/linalg_test.py
+++ b/tests/linalg_test.py
@@ -119,6 +119,12 @@ def testDetGrad(self, shape, dtype):
a[0] = 0
jtu.check_grads(jnp.linalg.det, (a,), 1, atol=1e-1, rtol=1e-1)
+ def testDetGradIssue6121(self):
+ f = lambda x: jnp.linalg.det(x).sum()
+ x = jnp.ones((16, 1, 1))
+ jax.grad(f)(x)
+ jtu.check_grads(f, (x,), 2, atol=1e-1, rtol=1e-1)
+
def testDetGradOfSingularMatrixCorank1(self):
# Rank 2 matrix with nonzero gradient
a = jnp.array([[ 50, -30, 45],
| Gradient of jnp.linalg.det fails
Hi,
I am quite new to JAX but there seems to be an issue with taking the derivative through `jnp.linalg.det`. The following code fails (it works for functions like `jnp.linalg.slogdet`):
```python
import jax
import jax.numpy as jnp
key = jax.random.PRNGKey(42)
jax.grad(lambda x: jnp.linalg.det(x).sum())(jax.random.normal(key, (16, 1, 1)))
```
```
---------------------------------------------------------------------------
FilteredStackTrace Traceback (most recent call last)
<ipython-input-64-74aa1dd68db7> in <module>
1 key = jax.random.PRNGKey(42)
----> 2 jax.grad(lambda x: jnp.linalg.det(x).sum())(jax.random.normal(key, (16, 1, 1)))
<ipython-input-64-74aa1dd68db7> in <lambda>(x)
1 key = jax.random.PRNGKey(42)
----> 2 jax.grad(lambda x: jnp.linalg.det(x).sum())(jax.random.normal(key, (16, 1, 1)))
FilteredStackTrace: TypeError: Custom JVP rule must produce primal and tangent outputs with equal shapes and dtypes, but got float32[1] and float32[16] respectively.
The stack trace above excludes JAX-internal frames.
The following is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-64-74aa1dd68db7> in <module>
1 key = jax.random.PRNGKey(42)
----> 2 jax.grad(lambda x: jnp.linalg.det(x).sum())(jax.random.normal(key, (16, 1, 1)))
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
137 def reraise_with_filtered_traceback(*args, **kwargs):
138 try:
--> 139 return fun(*args, **kwargs)
140 except Exception as e:
141 if not is_under_reraiser(e):
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/api.py in grad_f(*args, **kwargs)
758 @api_boundary
759 def grad_f(*args, **kwargs):
--> 760 _, g = value_and_grad_f(*args, **kwargs)
761 return g
762
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
137 def reraise_with_filtered_traceback(*args, **kwargs):
138 try:
--> 139 return fun(*args, **kwargs)
140 except Exception as e:
141 if not is_under_reraiser(e):
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/api.py in value_and_grad_f(*args, **kwargs)
821 tree_map(partial(_check_input_dtype_grad, holomorphic, allow_int), dyn_args)
822 if not has_aux:
--> 823 ans, vjp_py = _vjp(f_partial, *dyn_args)
824 else:
825 ans, vjp_py, aux = _vjp(f_partial, *dyn_args, has_aux=True)
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/api.py in _vjp(fun, has_aux, *primals)
1894 if not has_aux:
1895 flat_fun, out_tree = flatten_fun_nokwargs(fun, in_tree)
-> 1896 out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
1897 out_tree = out_tree()
1898 else:
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/interpreters/ad.py in vjp(traceable, primals, has_aux)
112 def vjp(traceable, primals, has_aux=False):
113 if not has_aux:
--> 114 out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
115 else:
116 out_primals, pvals, jaxpr, consts, aux = linearize(traceable, *primals, has_aux=True)
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/interpreters/ad.py in linearize(traceable, *primals, **kwargs)
99 _, in_tree = tree_flatten(((primals, primals), {}))
100 jvpfun_flat, out_tree = flatten_fun(jvpfun, in_tree)
--> 101 jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals)
102 out_primals_pvals, out_tangents_pvals = tree_unflatten(out_tree(), out_pvals)
103 assert all(out_primal_pval.is_known() for out_primal_pval in out_primals_pvals)
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate)
504 with core.new_main(JaxprTrace) as main:
505 fun = trace_to_subjaxpr(fun, main, instantiate)
--> 506 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
507 assert not env
508 del main, fun, env
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
164
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
168 # Some transformations yield from inside context managers, so we have to
<ipython-input-64-74aa1dd68db7> in <lambda>(x)
1 key = jax.random.PRNGKey(42)
----> 2 jax.grad(lambda x: jnp.linalg.det(x).sum())(jax.random.normal(key, (16, 1, 1)))
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/custom_derivatives.py in __call__(self, *args, **kwargs)
215 flat_jvp, out_tree2 = _flatten_jvp(jvp, in_tree)
216 if config.omnistaging_enabled:
--> 217 out_flat = custom_jvp_call_p.bind(flat_fun, flat_jvp, *args_flat)
218 _, out_tree = lu.merge_linear_aux(out_tree1, out_tree2)
219 else:
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/custom_derivatives.py in bind(self, fun, jvp, *args)
281 tracers = map(top_trace.full_raise, args) # type: ignore
282 with core.maybe_new_sublevel(top_trace):
--> 283 outs = top_trace.process_custom_jvp_call(self, fun, jvp, tracers) # type: ignore
284 _, env_trace_todo = lu.merge_linear_aux(env_trace_todo1, env_trace_todo2)
285 return _apply_todos(env_trace_todo, map(core.full_lower, outs))
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/interpreters/ad.py in process_custom_jvp_call(self, _, __, f_jvp, tracers)
350 # currently handle float0s
351 tangents_in = map(replace_float0s, primals_in, tangents_in)
--> 352 outs = f_jvp.call_wrapped(*it.chain(primals_in, tangents_in))
353 primals_out, tangents_out = split_list(outs, [len(outs) // 2])
354 tangents_out = map(recast_to_float0, primals_out, tangents_out)
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
177 while stack:
178 gen, out_store = stack.pop()
--> 179 ans = gen.send(ans)
180 if out_store is not None:
181 ans, side = ans
~/miniconda3/envs/jax/lib/python3.8/site-packages/jax/custom_derivatives.py in _flatten_jvp(in_tree, *args)
259 msg = ("Custom JVP rule must produce primal and tangent outputs with "
260 "equal shapes and dtypes, but got {} and {} respectively.")
--> 261 raise TypeError(msg.format(av1.str_short(), av2.str_short()))
262 else:
263 msg = ("Custom JVP rule must produce primal and tangent outputs with "
TypeError: Custom JVP rule must produce primal and tangent outputs with equal shapes and dtypes, but got float32[1] and float32[16] respectively.
```
| 2021-03-20T02:15:43 |
|
google/jax | 6,145 | google__jax-6145 | [
"6096"
] | f8c36d9c0a0920c5d68f578cbf3e05d3a5aaa182 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -2113,6 +2113,7 @@ def _naryop_weak_type_rule(name, *avals, **kwargs):
return all(aval.weak_type for aval in avals)
def naryop(result_dtype, accepted_dtypes, name, translation_rule=None):
+ # TODO(frostig,mattjj): only used with arity > 2 once, simplify
dtype_rule = partial(naryop_dtype_rule, result_dtype, accepted_dtypes, name)
shape_rule = partial(_broadcasting_shape_rule, name)
weak_type_rule = partial(_naryop_weak_type_rule, name)
diff --git a/jax/interpreters/batching.py b/jax/interpreters/batching.py
--- a/jax/interpreters/batching.py
+++ b/jax/interpreters/batching.py
@@ -331,7 +331,8 @@ def broadcast_batcher(prim, args, dims, **params):
return (out, (d,) * len(out)) if prim.multiple_results else (out, d)
else:
size, = {shape[d] for shape, d in shapes if d is not not_mapped}
- args = [bdim_at_front(x, d, size) for x, d in zip(args, dims)]
+ args = [bdim_at_front(x, d, size) if np.ndim(x) else x
+ for x, d in zip(args, dims)]
ndim = max(np.ndim(x) for x in args) # special-case scalar broadcasting
args = [_handle_scalar_broadcasting(ndim, x, d) for x, d in zip(args, dims)]
out = prim.bind(*args, **params)
| diff --git a/tests/batching_test.py b/tests/batching_test.py
--- a/tests/batching_test.py
+++ b/tests/batching_test.py
@@ -21,6 +21,7 @@
import jax
import jax.numpy as jnp
+import jax.scipy as jsp
from jax import test_util as jtu
from jax import lax
from jax._src.lax import parallel
@@ -1240,5 +1241,13 @@ def testNonJaxTypedOutput(self):
TypeError, "Output from batched function.*is not a valid JAX type"):
vmap(lambda x: "hello")(np.arange(5))
+ def testIssue6096(self):
+ def f(x):
+ return jsp.special.betainc(jnp.ones(3), 1., x)
+
+ self.assertEqual(f(jnp.ones(3)).shape, (3,))
+ self.assertEqual(jax.vmap(f)(jnp.ones((2, 3))).shape, (2, 3))
+
+
if __name__ == '__main__':
absltest.main(testLoader=jtu.JaxTestLoader())
| betainc batching rule is not quite correct
This issue can be replicated using the following code
```python
import jax
import jax.numpy as jnp
from jax.scipy.special import betainc
def f(x):
return betainc(jnp.ones(3), 1., x)
assert f(jnp.ones(3)).shape == (3,)
assert jax.vmap(f)(jnp.ones((2, 3))).shape == (2, 3)
# TypeError: regularized_incomplete_beta got arrays of different rank: (2, 3), (2,), (2, 3).
```
| 2021-03-20T04:09:11 |
|
google/jax | 6,169 | google__jax-6169 | [
"6008"
] | ecd8f51e231c2f3443e7496bc094834b5d63b24e | diff --git a/jax/_src/numpy/linalg.py b/jax/_src/numpy/linalg.py
--- a/jax/_src/numpy/linalg.py
+++ b/jax/_src/numpy/linalg.py
@@ -253,7 +253,11 @@ def _det_jvp(primals, tangents):
return y, jnp.trace(z, axis1=-1, axis2=-2)
-@_wraps(np.linalg.eig)
+@_wraps(np.linalg.eig, lax_description="""
+This differs from ``numpy.linalg.eig`` in that the return type of
+``jax.numpy.linalg.eig`` is always ``complex64`` for 32-bit input,
+and ``complex128`` for 64-bit input.
+""")
def eig(a):
a = _promote_arg_dtypes(jnp.asarray(a))
return lax_linalg.eig(a, compute_left_eigenvectors=False)
| NumPy compatibility of jax.numpy.linalg.eig for real-valued inputs
`jax.numpy.linalg.eig` for real-valued input always gives complex-valued output. However, NumPy casts the result to a real dtype if the imaginary parts of the eigenvalues are all zero.
```python
In [1]: import jax.numpy as jnp
In [2]: import numpy as np
In [3]: np.linalg.eig(np.diag((1, 2, 3)))
...:
Out[3]:
(array([1., 2., 3.]),
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]))
In [4]: jnp.linalg.eig(np.diag((1, 2, 3)))
...:
Out[4]:
[DeviceArray([1.+0.j, 2.+0.j, 3.+0.j], dtype=complex64),
DeviceArray([[1.+0.j, 0.+0.j, 0.+0.j],
[0.+0.j, 1.+0.j, 0.+0.j],
[0.+0.j, 0.+0.j, 1.+0.j]], dtype=complex64)]
```
Is this the intended behavior?
| Yes, this is intended. Due to JAX's XLA compilation model, it is not possible to make the return type of a function dependent on the values contained in the returned array.
There's probably still an action item here: we should document the difference. | 2021-03-22T17:43:32 |
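If a real-valued result is needed, a caller-side sketch is to cast explicitly (asserting realness yourself), or to use `eigh` when the matrix is known to be symmetric/Hermitian:

```python
import numpy as np
import jax.numpy as jnp

a = np.diag((1., 2., 3.))
w, v = jnp.linalg.eig(a)
w_real, v_real = w.real, v.real   # explicit cast; the caller asserts realness
w_h, v_h = jnp.linalg.eigh(a)     # real outputs by construction for symmetric input
```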
|
google/jax | 6,226 | google__jax-6226 | [
"6022"
] | 700eb89d008e5e61bd89dfbe5df1433e0abbb93c | diff --git a/jax/_src/lax/parallel.py b/jax/_src/lax/parallel.py
--- a/jax/_src/lax/parallel.py
+++ b/jax/_src/lax/parallel.py
@@ -759,7 +759,7 @@ def pos_reduce(x):
size = len(axis_index_groups[0])
else:
size = prod([core.axis_frame(name).size for name in named_axes]) # type: ignore
- return tuple(size * pos_reduce(x) for x in args)
+ return tuple(lax._const(x, size) * pos_reduce(x) for x in args)
return core.Primitive.bind(
psum_p, *args, axes=axes, axis_index_groups=axis_index_groups)
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -1695,6 +1695,21 @@ def testArgAllReduce(self, shape, dtype, axis, collective, bulk_op):
expected = bulk_op(x, axis=axis)
self.assertAllClose(ans, expected, check_dtypes=False)
+ @parameterized.named_parameters(
+ {"testcase_name": "_dtype={}".format(
+ jtu.format_shape_dtype_string((), dtype)),
+ "dtype": dtype}
+ for dtype in [np.float32, np.int32]
+ )
+ def testPmapDtype(self, dtype):
+ # Regression test for https://github.com/google/jax/issues/6022
+ @partial(pmap, axis_name='i')
+ def func(_):
+ return jax.lax.psum(dtype(0), axis_name='i')
+ unused_arg = jnp.arange(xla_bridge.device_count())
+ out_dtype = func(unused_arg).dtype
+ self.assertEqual(out_dtype, dtype)
+
class VmapOfPmapTest(jtu.JaxTestCase):
| psum over 32 bit numpy arrays returns 64 bit values when 64 bits are enabled
```python
import jax
import jax.lax as lax
import jax.numpy as jnp
import functools
jax.config.update('jax_enable_x64', True)
@functools.partial(jax.pmap, axis_name='i')
def foo(_):
# Note that I never explicitly construct a numpy array, grad returns it sometimes.
x = jax.grad(lambda x: jnp.array(0., jnp.float32))(jnp.array(0., jnp.float32))
print(type(x))
print(x.dtype)
y = jax.lax.psum(x, axis_name='i')
print(type(y))
print(y.dtype)
return y
foo(jnp.arange([jax.device_count()], dtype=jnp.float32))
```
Outputs:
```
<class 'numpy.ndarray'>
float32
<class 'numpy.float64'>
float64
```
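A possible interim workaround (sketch) is to cast the reduction result back to the input's dtype:

```python
import jax

def psum_keep_dtype(x, axis_name):
    # Cast back in case the reduction promoted a 32-bit input to 64 bits.
    return jax.lax.psum(x, axis_name=axis_name).astype(x.dtype)
```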
| 2021-03-25T19:44:02 |
|
google/jax | 6,232 | google__jax-6232 | [
"6223"
] | 69f88e2ea99765e5461b40f0001d8c8abeec660e | diff --git a/jax/_src/lax/fft.py b/jax/_src/lax/fft.py
--- a/jax/_src/lax/fft.py
+++ b/jax/_src/lax/fft.py
@@ -114,7 +114,9 @@ def _irfft_transpose(t, fft_lengths):
scale = 1 / prod(fft_lengths)
out = scale * mask * x
assert out.dtype == _complex_dtype(t.dtype), (out.dtype, t.dtype)
- return out
+ # Use JAX's convention for complex gradients
+ # https://github.com/google/jax/issues/6223#issuecomment-807740707
+ return lax.conj(out)
def fft_transpose_rule(t, operand, fft_type, fft_lengths):
if fft_type == xla_client.FftType.RFFT:
| diff --git a/tests/fft_test.py b/tests/fft_test.py
--- a/tests/fft_test.py
+++ b/tests/fft_test.py
@@ -20,6 +20,7 @@
from absl.testing import absltest
from absl.testing import parameterized
+import jax
from jax import lax
from jax import numpy as jnp
from jax import test_util as jtu
@@ -116,6 +117,21 @@ def testFftn(self, inverse, real, shape, dtype, axes):
tol = 0.15
jtu.check_grads(jnp_fn, args_maker(), order=2, atol=tol, rtol=tol)
+ def testIrfftTranspose(self):
+ # regression test for https://github.com/google/jax/issues/6223
+ def build_matrix(linear_func, size):
+ return jax.vmap(linear_func)(jnp.eye(size, size))
+
+ def func(x):
+ return jnp.fft.irfft(jnp.concatenate([jnp.zeros(1), x[:2] + 1j*x[2:]]))
+
+ def func_transpose(x):
+ return jax.linear_transpose(func, x)(x)[0]
+
+ matrix = build_matrix(func, 4)
+ matrix2 = build_matrix(func_transpose, 4).T
+ self.assertAllClose(matrix, matrix2)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_inverse={}_real={}".format(inverse, real),
"inverse": inverse, "real": real}
| linear_transpose involving fft seemingly incorrect
I was planning to implement functionality that requires me to be able to take the transpose of a complicated linear function, which includes fourier transforms. I noticed that the fft module seems to implement the rules for transposition; the last few lines of https://jax.readthedocs.io/en/latest/_modules/jax/_src/lax/fft.html seem to pertain to it.
I am not familiar enough with the jax API at present to point out any bugs; but below is a simple test that demonstrates what I believe to be a bug. The build_matrix helper function explicitly constructs the coefficients of the linear function by feeding it all delta functions in sequence. If I linear_transpose my function, that should be identical to the matrix transpose of the built matrix. Yet it doesnt. It seems as if im getting my result in reverse order (plus another off-by-one index bug I think).
This seems to me like a bug in the implementation of the transpose rules for FFTs; but again, I'm not qualified to spot it myself.
While on the topic, a slightly related question: when viewed as linear operators, convolution and correlation are each other's transposes. Should I trust JAX to figure out efficient transformations along these lines (assuming the underlying FFT rules are bug-free), or is it likely better for me to override the linear transpose of a convolution with my own handcrafted correlation function (and vice versa)? (A numerical spot-check of this relationship is sketched after the repro code below.)
Code to reproduce:
```python
import numpy as np
import jax
from jax import numpy as jnp
import matplotlib.pyplot as plt
np.random.seed(0)
signal = np.cumsum(np.random.randn(2**8))
signal_jax = jnp.array(signal)
x = np.linspace(-1, 1, len(signal))
psf = np.clip(0.2 - np.abs(x), 0, 1) * (x > 0)
psf /= psf.sum()
psf_jax = jnp.array(psf)
jrfft = jax.jit(jnp.fft.rfft)
jirfft = jax.jit(jnp.fft.irfft)
@jax.jit
def convolve(a, b):
fa = jrfft(a)
fb = jrfft(b)
return jirfft(fa * fb)
@jax.jit
def correlate(a, b):
"""NOTE: can this be implemented as a transposition rule according to:
https://jax.readthedocs.io/en/latest/_modules/jax/_src/lax/fft.html
"""
fa = jrfft(a).conj()
fb = jrfft(b)
return jirfft(fa * fb)
def psf_convolve(psf):
"""statically bind psf arg"""
psf = jax.numpy.fft.ifftshift(psf)
return lambda a: convolve(psf, a)
def psf_correlate(psf):
"""statically bind psf arg. psf assumed to be centered"""
psf = jax.numpy.fft.ifftshift(psf)
return lambda a: correlate(psf, a)
import types
def build_matrix(func, shape):
"""explicitly evaluate coeeficient matrix of linear operator func by calling it repeatedly with delta functions"""
i, j = shape
Z = []
I = jnp.eye(i, j)
for r in range(i):
z = func(I[r])
Z.append(z)
return jnp.array(Z)
func = psf_convolve(psf_jax)
arr = types.SimpleNamespace(shape=signal_jax.shape, dtype=np.float32)
func_trans = lambda a: jax.linear_transpose(func, arr)(a)[0]
N = len(signal)
plt.figure()
M = build_matrix(func, (N, N)).T
plt.imshow(M)
plt.figure()
M = build_matrix(func_trans, (N, N))
plt.imshow(M)
plt.show()
```
- [ ] If applicable, include full error messages/tracebacks.
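On the correlation question above: for the circular, FFT-based `convolve`/`correlate` defined in the repro script, the linear transpose of "convolve with a fixed kernel" is "correlate with that same kernel". A spot-check one could run after the script above (a sketch; it assumes the definitions above are in scope and that the `irfft` transpose fix from this PR is in place):

```python
# Assumes `convolve`, `correlate`, `psf_jax`, `signal`, plus `jax`, `jnp`, `np`
# from the repro script above.
k = psf_jax
conv_k = lambda a: convolve(k, a)

v = jnp.array(np.random.randn(len(signal)), dtype=jnp.float32)
t1 = jax.linear_transpose(conv_k, v)(v)[0]   # transpose of "circular convolution with k"
t2 = correlate(k, v)                         # circular correlation with the same k
print(jnp.allclose(t1, t2, atol=1e-4))       # expected: True
```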
| @mattjj @hawkinsp @shoyer do any of you have cycles to look into this? (Or cc someone else who may)
I'm not quite sure what's going on yet (you may be correct about a bad transpose rule somewhere), but here's a slightly simplified example that seems to reproduce the same issue:
```python
import numpy as np
import jax
import jax.numpy as jnp
def build_matrix(func, size):
return jax.vmap(func)(jnp.eye(size, size))
def identity(x):
# return x # this works
return jnp.fft.irfft(jnp.fft.rfft(x)) # this doesn't
identity_trans = lambda a: jax.linear_transpose(identity, a)(a)[0]
N = 4
M_forward = build_matrix(identity, N)
M_backward = build_matrix(identity_trans, N).T
print('Forward matrix:')
print(M_forward)
print('Backward matrix:')
print(M_backward)
```
```
Forward matrix:
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
Backward matrix:
[[1. 0. 0. 0.]
[0. 0. 0. 1.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]]
```
A bit more experimentation suggests that the culprit may be the transpose of `irfft`, which appears to be off by a complex conjugate:
```
import numpy as np
import jax
import jax.numpy as jnp
def build_matrix(func, size):
return jax.vmap(func)(jnp.eye(size, size))
def func(x):
# return jnp.fft.irfft(jnp.fft.rfft(x))
# return jnp.fft.fft(x).real
# return jnp.concatenate([jnp.fft.rfft(x).real, jnp.zeros(1)])
return jnp.fft.irfft(jnp.concatenate([jnp.zeros(1), x[:2] + 1j*x[2:]]))
func_trans = lambda a: jax.linear_transpose(func, a)(a)[0]
N = 4
M_forward = build_matrix(func, N)
M_backward = build_matrix(func_trans, N).T
print('Forward matrix:')
print(M_forward)
print('Backward matrix:')
print(M_backward)
```
```
Forward matrix:
[[ 0.5 0. -0.5 0. ]
[ 0.25 -0.25 0.25 -0.25]
[ 0. -0.5 0. 0.5 ]
[ 0. 0. 0. 0. ]]
Backward matrix:
[[ 0.5 0. -0.5 0. ]
[ 0.25 -0.25 0.25 -0.25]
[ 0. 0.5 0. -0.5 ]
[ 0. 0. 0. 0. ]]
```
This makes sense, given that the transpose rule was copied from TensorFlow, which follows a [different convention for complex gradients](https://github.com/google/jax/issues/4891). | 2021-03-26T00:11:23 |
google/jax | 6,256 | google__jax-6256 | [
"5832"
] | 2ed6bbed3b98afc94da38506fb0431db5e26bf4d | diff --git a/jax/custom_derivatives.py b/jax/custom_derivatives.py
--- a/jax/custom_derivatives.py
+++ b/jax/custom_derivatives.py
@@ -654,8 +654,8 @@ def batched_fwd_jaxpr_thunk():
fwd_args_batched = [0 if b else not_mapped for b in args_batched]
fwd_out_dims = lambda: out_dims2[0]
- batched_bwd = batching.batch(bwd, axis_name, axis_size, fwd_out_dims,
- fwd_args_batched)
+ batched_bwd = batching.batch_custom_vjp_bwd(bwd, axis_name, axis_size, fwd_out_dims,
+ fwd_args_batched)
batched_outs = custom_vjp_call_jaxpr_p.bind(
*args, fun_jaxpr=batched_fun_jaxpr,
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -2470,6 +2470,37 @@ def f_rev(_, g):
api.grad(lambda x: f(f(f(x))))(1.)
+ def test_custom_vjp_scan_batching_edge_case(self):
+ # https://github.com/google/jax/issues/5832
+ @jax.custom_vjp
+ def mul(x, coeff): return x * coeff
+ def mul_fwd(x, coeff): return mul(x, coeff), (x, coeff)
+ def mul_bwd(res, g):
+ x, coeff = res
+ g_x = g * coeff
+ g_coeff = (x * g).sum()
+ return g_x, g_coeff
+ mul.defvjp(mul_fwd, mul_bwd)
+
+ def scan_over_mul(x, coeff):
+ def f_(x, t):
+ return mul(x, coeff), None
+ y, _ = jax.lax.scan(f_, x, jnp.arange(3))
+ return y
+
+ key = jax.random.PRNGKey(0)
+ key1, key2 = jax.random.split(key, 2)
+ x_batch = jax.random.normal(key1, (3, 2))
+ covector_batch = jax.random.normal(key2, (3, 2))
+ coeff = jnp.array(1.)
+
+ batched_scan_over_mul = jax.vmap(scan_over_mul, in_axes=(0, None), out_axes=0)
+ res, vjp_fun = jax.vjp(batched_scan_over_mul, x_batch, coeff)
+ vjp_fun(covector_batch) # doesn't crash
+
+ jtu.check_grads(batched_scan_over_mul, (x_batch, coeff), order=2,
+ modes=['rev'])
+
class RematTest(jtu.JaxTestCase):
| Error with custom_vjp + scan + vmap (jax==0.2.9)
Hello,
Using lax.scan over a function with a custom_vjp, then vmapping the resulting function, and then attempting backward mode differentiation using jax.vjp, leads to an error with jax 0.2.9. Jax 0.2.8 works.
This is the code to reproduce the error:
```
import jax
import jax.numpy as jnp
@jax.custom_vjp
def mul(x, coeff): return x * coeff
def mul_fwd(x, coeff): return mul(x, coeff), (x, coeff)
def mul_bwd(res, g):
x, coeff = res
g_x = g * coeff
g_coeff = (x * g).sum()
return g_x, g_coeff
mul.defvjp(mul_fwd, mul_bwd)
def scan_over_mul(x, coeff):
def f_(x, t):
return mul(x, coeff), None
y, _ = jax.lax.scan(f_, x, jnp.arange(3))
return y
key = jax.random.PRNGKey(0)
key1, key2 = jax.random.split(key, 2)
x_batch = jax.random.normal(key1, (3, 2))
covector_batch = jax.random.normal(key2, (3, 2))
coeff = jnp.array(1.)
batched_scan_over_mul = jax.vmap(scan_over_mul, in_axes=(0, None), out_axes=0)
res = batched_scan_over_mul(x_batch, coeff)
res, vjp_fun = jax.vjp(batched_scan_over_mul, x_batch, coeff)
grads = vjp_fun(covector_batch) # This line throws ValueError
print(grads)
```
Things work as expected (i.e., no error, and `grads[1]`, which corresponds to `coeff`, has only one element) if doing one of the following:
- scanning over a function without a custom_vjp,
- replacing `res, vjp_fun = jax.vjp(batched_scan_over_mul, x_batch, coeff)` by `with jax.disable_jit(): res, vjp_fun = jax.vjp(batched_scan_over_mul, x_batch, coeff)`
- replacing scan by a python for loop.
- Using jax 0.2.8 instead of 0.2.9. The master branch throws the same error as 0.2.9.
Here is the error:
```
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Traceback (most recent call last):
File "/Users/florianhopfmueller/code/debug_vmap_error.py", line 38, in <module>
grads = vjp_fun(covector_batch)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/api.py", line 1834, in _vjp_pullback_wrapper
ans = fun(*args)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/interpreters/ad.py", line 121, in unbound_vjp
arg_cts = backward_pass(jaxpr, consts, dummy_args, cts)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/interpreters/ad.py", line 227, in backward_pass
cts_out = get_primitive_transpose(eqn.primitive)(cts_in, *invals,
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/_src/lax/control_flow.py", line 1693, in _scan_transpose
jaxpr_trans = _transpose_scan_jaxpr(
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/_src/lax/control_flow.py", line 1728, in _transpose_scan_jaxpr
return _make_closed_jaxpr(transposed, res1_avals + c_avals + b_avals + res2_avals)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/_src/lax/control_flow.py", line 1732, in _make_closed_jaxpr
jaxpr, out_avals, consts = pe.trace_to_jaxpr_dynamic(traceable, in_avals)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/interpreters/partial_eval.py", line 1186, in trace_to_jaxpr_dynamic
jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/interpreters/partial_eval.py", line 1196, in trace_to_subjaxpr_dynamic
ans = fun.call_wrapped(*in_tracers)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/linear_util.py", line 166, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/_src/lax/control_flow.py", line 1722, in transposed
cbar_abar = ad.backward_pass(jaxpr.jaxpr, jaxpr.consts, primals, b_bar)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/interpreters/ad.py", line 227, in backward_pass
cts_out = get_primitive_transpose(eqn.primitive)(cts_in, *invals,
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/interpreters/ad.py", line 687, in _custom_lin_transpose
cts_in = bwd.call_wrapped(*res, *cts_out)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/linear_util.py", line 179, in call_wrapped
ans = gen.send(ans)
File "/Users/florianhopfmueller/opt/anaconda3/envs/jax029/lib/python3.9/site-packages/jax/interpreters/batching.py", line 70, in _match_axes
raise ValueError(msg)
ValueError: vmap has mapped output but out_axes is None
```
Please let me know if this is expected, or if you need any more info to address it! Thanks a lot.
| Thanks for the excellent report! | 2021-03-28T18:30:29 |
google/jax | 6,257 | google__jax-6257 | [
"5365",
"5365"
] | 634397dc5925270a543fcbbd8b5f9ef5e2f3e8a4 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -4555,6 +4555,7 @@ def _scatter_add_jvp(primals, tangents, *, update_jaxpr, update_consts,
dimension_numbers, indices_are_sorted, unique_indices):
operand, scatter_indices, updates = primals
g_operand, g_scatter_indices, g_updates = tangents
+ del g_scatter_indices # ignored
val_out = scatter_add_p.bind(
operand, scatter_indices, updates, update_jaxpr=update_jaxpr,
update_consts=update_consts, dimension_numbers=dimension_numbers,
diff --git a/jax/experimental/jet.py b/jax/experimental/jet.py
--- a/jax/experimental/jet.py
+++ b/jax/experimental/jet.py
@@ -163,11 +163,11 @@ def process_custom_vjp_call(self, primitive, fun, fwd, bwd, tracers, out_trees):
return fun.call_wrapped(*tracers)
-class ZeroTerm(object): pass
+class ZeroTerm: pass
zero_term = ZeroTerm()
register_pytree_node(ZeroTerm, lambda z: ((), None), lambda _, xs: zero_term)
-class ZeroSeries(object): pass
+class ZeroSeries: pass
zero_series = ZeroSeries()
register_pytree_node(ZeroSeries, lambda z: ((), None), lambda _, xs: zero_series)
@@ -549,7 +549,6 @@ def _select_taylor_rule(primal_in, series_in, **params):
return primal_out, series_out
jet_rules[lax.select_p] = _select_taylor_rule
-
def _lax_max_taylor_rule(primal_in, series_in):
x, y = primal_in
@@ -589,3 +588,14 @@ def _custom_jvp_call_jaxpr_rule(primals_in, series_in, *, fun_jaxpr,
del jvp_jaxpr_thunk
return jet(core.jaxpr_as_fun(fun_jaxpr), primals_in, series_in)
jet_rules[custom_jvp_call_jaxpr_p] = _custom_jvp_call_jaxpr_rule
+
+def _scatter_add_rule(primals_in, series_in, *, update_jaxpr, update_consts,
+ dimension_numbers, indices_are_sorted, unique_indices):
+ bind = partial(lax.scatter_add_p.bind, update_jaxpr=update_jaxpr,
+ update_consts=update_consts, dimension_numbers=dimension_numbers,
+ indices_are_sorted=indices_are_sorted, unique_indices=unique_indices)
+ operand, scatter_indices, updates = primals_in
+ primal_out = bind(operand, scatter_indices, updates)
+ series_out = [bind(d1, scatter_indices, d2) for d1, _, d2 in zip(*series_in)]
+ return primal_out, series_out
+jet_rules[lax.scatter_add_p] = _scatter_add_rule
| diff --git a/tests/jet_test.py b/tests/jet_test.py
--- a/tests/jet_test.py
+++ b/tests/jet_test.py
@@ -389,6 +389,26 @@ def g(eps):
return jax.grad(f)(x, eps)
jet(g, (1.,), ([1.],)) # doesn't crash
+ def test_scatter_add(self):
+ # very basic test from https://github.com/google/jax/issues/5365
+ def f(x):
+ x0 = x[0]
+ x1 = x[1]
+ return (x0**5 + x1**5).sum()
+
+ def h(eps):
+ from jax import jacfwd, grad
+
+ x = jnp.array([1., 1.])
+ μ = eps * x
+
+ def F(t):
+ return f(x + t * μ)
+
+ return grad(jacfwd(F))(0.)
+
+ self.check_jet(h, (0.,), ([1., 2., 3.],))
+
if __name__ == '__main__':
absltest.main(testLoader=jtu.JaxTestLoader())
| Enhancement needed: Composition of jet, grad and jacfwd
@mattjj
Hi all,
Composing the jet + jacfwd + grad sometimes produces an error:
```
import jax.numpy as jnp
from jax.experimental.jet import jet
def f(x):
x0 = x[0]
x1 = x[1]
return (x0**5 + x1**5).sum()
def h(ε):
from jax import jacfwd, grad
x = jnp.array([1., 1.])
μ = ε * x
def F(t):
return f(x + t * μ)
return grad(jacfwd(F))(0.)
jet(h, (0.,), ([1.],))
```
Any insights?
Thanks!
| What's the error?
Here:
python3.8/site-packages/jax/experimental/jet.py in process_primitive(self, primitive, tracers, params)
125 if t is zero_term else t for t in series]
126 for x, series in zip(primals_in, series_in)]
--> 127 rule = jet_rules[primitive]
128 primal_out, terms_out = rule(primals_in, series_in, **params)
129 if not primitive.multiple_results:
KeyError: scatter-add
Adding
`deflinear(lax.scatter_add_p)`
to jet.py seems to solve the problem.
Thanks
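For context, the merged patch above takes a slightly different route than `deflinear`: it registers an explicit jet rule that re-applies `scatter_add_p` to each Taylor coefficient (scatter-add is linear in its operand and updates). With that rule in place, the composition from the report goes through; a self-contained sketch mirroring the added regression test (the series values here are just illustrative):

```python
import jax
import jax.numpy as jnp
from jax.experimental.jet import jet

def f(x):
    return (x[0]**5 + x[1]**5).sum()

def h(eps):
    x = jnp.array([1., 1.])
    F = lambda t: f(x + t * (eps * x))
    return jax.grad(jax.jacfwd(F))(0.)

print(jet(h, (0.,), ([1., 2., 3.],)))   # previously raised KeyError: scatter-add
```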
| 2021-03-29T02:59:12 |
google/jax | 6,269 | google__jax-6269 | [
"5952"
] | 88f5e26482dd314357a24186a8d7799c866d760b | diff --git a/jax/_src/random.py b/jax/_src/random.py
--- a/jax/_src/random.py
+++ b/jax/_src/random.py
@@ -59,6 +59,11 @@ def PRNGKey(seed: int) -> jnp.ndarray:
key is constructed from a 64-bit seed by effectively bit-casting to a pair
of uint32 values (or from a 32-bit seed by first padding out with zeros).
"""
+ # Avoid overflowerror in X32 mode by first converting ints to int64.
+ # This breaks JIT invariance of PRNGKey for large ints, but supports the
+ # common use-case of instantiating PRNGKey with Python hashes in X32 mode.
+ if isinstance(seed, int):
+ seed = np.int64(seed)
seed_arr = jnp.asarray(seed)
if seed_arr.shape:
raise TypeError(f"PRNGKey seed must be a scalar; got {seed!r}.")
@@ -279,7 +284,7 @@ def fold_in(key: jnp.ndarray, data: int) -> jnp.ndarray:
A new PRNGKey that is a deterministic function of the inputs and is
statistically safe for producing a stream of new pseudo-random values.
"""
- return _fold_in(key, data)
+ return _fold_in(key, jnp.uint32(data))
@jit
def _fold_in(key, data):
diff --git a/jax/dtypes.py b/jax/dtypes.py
--- a/jax/dtypes.py
+++ b/jax/dtypes.py
@@ -121,8 +121,7 @@ def _scalar_type_to_dtype(typ: type, value: Any = None):
---------------------------------------------------------------------------
OverflowError: Python int 9223372036854775808 too large to convert to int64
"""
- dtype = python_scalar_dtypes[typ]
- # TODO(jakevdp): use proper overflow for int32.
+ dtype = canonicalize_dtype(python_scalar_dtypes[typ])
if typ is int and value is not None:
if value < np.iinfo(dtype).min or value > np.iinfo(dtype).max:
raise OverflowError(f"Python int {value} too large to convert to {dtype}")
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -5031,30 +5031,28 @@ def test_partial_eval(self):
for jit_type in [None, "python", "cpp"]
if not (jit_type is None and func == 'identity')))
def test_integer_overflow(self, jit_type, func):
- def jit(f, **kwargs):
- if jit_type is None:
- return f
- elif jit_type == "python":
- return api._python_jit(f, **kwargs)
- elif jit_type == "cpp":
- return api._cpp_jit(f, **kwargs)
- else:
- raise ValueError(f"invalid jit_type={jit_type}")
- func = jit({
+ if jit_type == "cpp" and not config.x64_enabled and jax.lib.version < (0, 1, 65):
+ self.skipTest("int32 overflow not yet implemented in CPP JIT.")
+ funcdict = {
'identity': lambda x: x,
'asarray': jnp.asarray,
- 'device_put': api.device_put
- }[func])
-
- int64_max = np.iinfo(np.int64).max
- int64_min = np.iinfo(np.int64).min
-
- int_dtype = dtypes.canonicalize_dtype(np.int64)
-
- self.assertEqual(func(int64_max).dtype, int_dtype)
- self.assertEqual(func(int64_min).dtype, int_dtype)
- self.assertRaises(OverflowError, func, int64_max + 1)
- self.assertRaises(OverflowError, func, int64_min - 1)
+ 'device_put': api.device_put,
+ }
+ jit = {
+ 'python': api._python_jit,
+ 'cpp': api._cpp_jit,
+ None: lambda x: x,
+ }
+ f = jit[jit_type](funcdict[func])
+
+ int_dtype = dtypes.canonicalize_dtype(jnp.int_)
+ int_max = np.iinfo(int_dtype).max
+ int_min = np.iinfo(int_dtype).min
+
+ self.assertEqual(f(int_max).dtype, int_dtype)
+ self.assertEqual(f(int_min).dtype, int_dtype)
+ self.assertRaises(OverflowError, f, int_max + 1)
+ self.assertRaises(OverflowError, f, int_min - 1)
if __name__ == '__main__':
diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -3275,17 +3275,17 @@ def testArrayUnsupportedDtypeError(self):
jnp.array(3, [('a','<i4'),('b','<i4')])
def testArrayFromInteger(self):
- # TODO(jakevdp): implement X32 overflow and canonicalize these
- int_max = jnp.iinfo(jnp.int64).max
- int_min = jnp.iinfo(jnp.int64).min
+ int_dtype = dtypes.canonicalize_dtype(jnp.int64)
+ int_max = jnp.iinfo(int_dtype).max
+ int_min = jnp.iinfo(int_dtype).min
# Values at extremes are converted correctly.
for val in [int_min, 0, int_max]:
- self.assertEqual(jnp.array(val).dtype, dtypes.canonicalize_dtype('int64'))
+ self.assertEqual(jnp.array(val).dtype, int_dtype)
# out of bounds leads to an OverflowError
val = int_max + 1
- with self.assertRaisesRegex(OverflowError, f"Python int {val} too large to convert to int64"):
+ with self.assertRaisesRegex(OverflowError, f"Python int {val} too large to convert to {int_dtype.name}"):
jnp.array(val)
# explicit uint64 should work
diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -946,6 +946,9 @@ def f(x):
]
))
def test_prng_seeds_and_keys(self, seed, type, jit, key):
+ if (jit and type is int and not config.x64_enabled and
+ (seed < np.iinfo('int32').min or seed > np.iinfo('int32').max)):
+ self.skipTest("Expected failure: integer out of range for jit.")
seed = type(seed)
if jit:
actual = api.jit(random.PRNGKey)(seed)
@@ -961,6 +964,8 @@ def test_prng_seeds_and_keys(self, seed, type, jit, key):
def test_prng_jit_invariance(self, seed, type):
if type == "int" and seed == (1 << 64) - 1:
self.skipTest("Expected failure: Python int too large.")
+ if not config.x64_enabled and seed > np.iinfo(np.int32).max:
+ self.skipTest("Expected failure: Python int too large.")
type = {"int": int, "np.array": np.array, "jnp.array": jnp.array}[type]
args_maker = lambda: [type(seed)]
self._CompileAndCheck(random.PRNGKey, args_maker)
| Raise OverflowError in X32 mode for large Python integers
Python integers are arbitrary width, meaning that they can represent values that are not representable by numpy's `int64`. In these cases, numpy raises an overflow error. For example:
```python
>>> import numpy as np
>>> np.array(1 << 65, dtype=int)
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-36-bc194570c3ad> in <module>
----> 1 np.array(1 << 65, dtype=int)
OverflowError: Python int too large to convert to C long
```
In X64 mode, JAX inherits this property via numpy:
```python
>>> import jax.numpy as jnp
>>> from jax import config; config.update('jax_enable_x64', True)
>>> jnp.array(1 << 65, dtype=int)
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-3-e4d8a251ed7e> in <module>
----> 1 jnp.array(1 << 65, dtype=int)
~/github/google/jax/jax/_src/numpy/lax_numpy.py in array(object, dtype, copy, order, ndmin)
2874
2875 if _can_call_numpy_array(object):
-> 2876 object = _np_array(object, dtype=dtype, ndmin=ndmin, copy=False)
2877
2878 assert type(object) not in dtypes.python_scalar_dtypes
~/github/google/jax/jax/_src/numpy/lax_numpy.py in _np_array(obj, dtype, **kwargs)
228 uses Jax's default dtypes.
229 """
--> 230 arr = np.array(obj, dtype=dtype, **kwargs)
231 obj_dtype = getattr(obj, 'dtype', None)
232 arr_dtype = np.dtype(arr.dtype).type
OverflowError: Python int too large to convert to C long
```
With X64 disabled, however, values that are too large to be stored in `int32` are silently truncated to zero:
```python
>>> import jax.numpy as jnp
>>> jnp.array(1 << 33)
DeviceArray(0, dtype=int32)
```
Such silent data loss is problematic, particularly because X32 is the default mode.
This PR adds a value check when Python integers are coerced to arrays, so that an `OverflowError` is raised as appropriate for the default integer type:
```python
>>> import jax.numpy as jnp
>>> jnp.array(1 << 33)
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-2-50cb794a933c> in <module>
----> 1 jnp.array(1 << 33)
~/github/google/jax/jax/_src/numpy/lax_numpy.py in array(object, dtype, copy, order, ndmin)
2872
2873 if _can_call_numpy_array(object):
-> 2874 object = dtypes.coerce_to_array(object, dtype=dtype, ndmin=ndmin, copy=False)
2875
2876 assert type(object) not in dtypes.python_scalar_dtypes
~/github/google/jax/jax/dtypes.py in coerce_to_array(x, dtype, ndmin, copy)
125 info = np.iinfo(dtype)
126 if not info.min <= x <= info.max:
--> 127 raise OverflowError(f"Python int {x} too large to convert to {dtype}")
128 return np.array(x, dtype=dtype, ndmin=ndmin, copy=copy)
129
OverflowError: Python int 8589934592 too large to convert to int32
```
This also required a minor change to `random.PRNGKey`'s seed handling, as the previous behavior was built on the assumption that out-of-bounds inputs were truncated rather than leading to an error.
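A sketch of the intended post-change behavior in the default X32 mode, based on the examples above and the comment added in the patch:

```python
import jax.numpy as jnp
from jax import random

try:
    jnp.array(1 << 33)      # too large for int32: raises instead of silently returning 0
except OverflowError as e:
    print(e)                # Python int 8589934592 too large to convert to int32

# PRNGKey still accepts large Python int seeds (e.g. hashes): an int seed is
# first widened to int64 before the key is constructed.
key = random.PRNGKey(hash("some string"))
```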
| @hawkinsp – I requested your review even though this still has a number of downstream impacts. If this change is something we want, I can start fixing/mitigating those. Thanks! | 2021-03-29T18:57:33 |
google/jax | 6,290 | google__jax-6290 | [
"6289"
] | 1f1d3dffe2de969c3485535b9bbd5a375600b551 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -1993,7 +1993,10 @@ def _reduction_init_val(a, init_val):
sign, info = np.sign(init_val), iinfo(a_dtype)
return np.array(info.min if sign < 0 else info.max, dtype=a_dtype)
-_cast_to_bool = partial(lax.convert_element_type, new_dtype=bool_)
+def _cast_to_bool(operand):
+ with warnings.catch_warnings():
+ warnings.filterwarnings("ignore", category=np.ComplexWarning)
+ return lax.convert_element_type(operand, bool_)
@_wraps(np.sum, skip_params=['out'])
def sum(a, axis: Optional[Union[int, Tuple[int, ...]]] = None, dtype=None,
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -785,9 +785,7 @@ def np_fun(x):
res = res if not is_bf16_nan_test else res.astype(jnp.bfloat16)
return res
np_fun = _promote_like_jnp(np_fun, inexact)
- np_fun = jtu.ignore_warning(category=np.ComplexWarning)(np_fun)
jnp_fun = lambda x: jnp_op(x, axis, keepdims=keepdims)
- jnp_fun = jtu.ignore_warning(category=jnp.ComplexWarning)(jnp_fun)
args_maker = lambda: [rng(shape, dtype)]
tol = {np.float16: 0.002}
self._CheckAgainstNumpy(np_fun, jnp_fun, args_maker, tol=tol)
| Warning when using jnp.any, jnp.all on complex arrays
The `numpy` equivalents operate quietly on complex arrays, while the `jax.numpy` variants raise a ComplexWarning
```python
In [1]: x = np.random.randn(2) + 1j * np.random.randn(2)
In [2]: np.any(x)
Out[2]: True
In [3]: jax.numpy.any(jax.device_put(x))
/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:1945: ComplexWarning: Casting complex values to real discards the imaginary part
a = preproc(a) if preproc else a
Out[3]: DeviceArray(True, dtype=bool)
```
- jax 0.2.11
- jaxlib 0.1.64
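Until running a version with the fix, a user can silence just this warning in the same way the patch above now does internally around the bool cast; a sketch:

```python
import warnings
import numpy as np
import jax.numpy as jnp

x = jnp.array([0.0 + 0.0j, 1.0 + 2.0j])
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=np.ComplexWarning)
    print(jnp.any(x))   # True, without the ComplexWarning
```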
| Thanks for the report! | 2021-03-30T18:11:02 |
google/jax | 6,321 | google__jax-6321 | [
"6252"
] | 3e980a79f6cd9b318860a32540e400b5cea73981 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -1928,10 +1928,12 @@ def _isposneginf(infinity, x, out):
return full_like(x, False, dtype=bool_)
isposinf = _wraps(np.isposinf, skip_params=['out'])(
- lambda x, out=None: _isposneginf(inf, x, out))
+ lambda x, out=None: _isposneginf(inf, x, out)
+)
isneginf = _wraps(np.isneginf, skip_params=['out'])(
- lambda x, out=None: _isposneginf(-inf, x, out))
+ lambda x, out=None: _isposneginf(-inf, x, out)
+)
@_wraps(np.isnan)
def isnan(x):
diff --git a/jax/_src/numpy/util.py b/jax/_src/numpy/util.py
--- a/jax/_src/numpy/util.py
+++ b/jax/_src/numpy/util.py
@@ -21,6 +21,7 @@
_section_break = re.compile(r"\n(?=[^\n]{3,15}\n-{3,15})", re.MULTILINE)
_numpy_signature_re = re.compile(r'^([\w., ]+=)?\s*[\w\.]+\([\w\W]*?\)$', re.MULTILINE)
_versionadded = re.compile(r'^\s+\.\.\s+versionadded::', re.MULTILINE)
+_docreference = re.compile(r':doc:`(.*?)\s*<.*?>`')
class ParsedDoc(NamedTuple):
"""
@@ -48,6 +49,10 @@ def _parse_numpydoc(docstr: Optional[str]) -> ParsedDoc:
if docstr is None or not docstr.strip():
return ParsedDoc(docstr)
+ # Remove any :doc: directives in the docstring to avoid sphinx errors
+ docstr = _docreference.sub(
+ lambda match: f"{match.groups()[0]}", docstr)
+
signature, body = "", docstr
match = _numpy_signature_re.match(body)
if match:
| Building JAX documentation raise error about jnp.array
I have tried to build JAX documentation using `sphinx-build -b html docs docs/build/html` command, error raised.
Error messages:
```
Running Sphinx v3.5.3
[autosummary] generating autosummary for: _autosummary/jax.core.ClosedJaxpr.rst, _autosummary/jax.core.Jaxpr.rst, _autosummary/jax.image.resize.rst, _autosummary/jax.image.scale_and_translate.rst, _autosummary/jax.lax.abs.rst, _autosummary/jax.lax.acos.rst, _autosummary/jax.lax.add.rst, _autosummary/jax.lax.all_gather.rst, _autosummary/jax.lax.all_to_all.rst, _autosummary/jax.lax.argmax.rst, ..., notebooks/neural_network_with_tfds_data.ipynb, notebooks/quickstart.ipynb, notebooks/score_matching.ipynb, notebooks/thinking_in_jax.ipynb, notebooks/vmapped_log_probs.ipynb, profiling.md, pytrees.md, rank_promotion_warning.rst, transformations.md, type_promotion.rst
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from https://numpy.org/doc/stable/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/scipy/reference/objects.inv...
myst v0.13.5: MdParserConfig(renderer='sphinx', commonmark_only=False, dmath_allow_labels=True, dmath_allow_space=True, dmath_allow_digits=True, update_mathjax=True, enable_extensions=['dollarmath'], disable_syntax=[], url_schemes=None, heading_anchors=None, html_meta=[], footnote_transition=True, substitutions=[], sub_delimiters=['{', '}'])
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 752 source files that are out of date
updating environment: [new config] 752 added, 0 changed, 0 removed
/Users/minhoheo/jax/jax/_src/numpy/lax_numpy.py:docstring of jax._src.numpy.lax_numpy.array:1: WARNING: Inline emphasis start-string without end-string.
Extension error (sphinx_autodoc_typehints):
Handler <function process_docstring at 0x10cfe1700> for event 'autodoc-process-docstring' threw an exception (exception: unmatched ')' (<unknown>, line 1))
```
Environments:
- OS: macOS Big Sur
- python 3.8.2
- numpy: 1.20.0
- jax: 0.2.11
- jaxlib: 0.1.64
| I don't recognize this one, and the CI doc builds don't indicate any problems: https://readthedocs.org/projects/jax/builds/
I would try updating all the doc dependencies (listed in `docs/requirements.txt`), clear the build cache and try again.
Thank you for your response!
As you mentioned, I updated all the dependencies, but it still did not work.
So I tried again with python 3.7.9 (same as CI) and it worked fine.
I am not sure if the python version matter.
Glad it's working with Python 3.7! I build the docs locally on OSX with Python 3.8 and haven't seen any issues. Is it possible that you have a different set of dependencies installed in your 3.8 environment?
I am appreciated you tested yourself!
I have tested with new virtualenv and got errors.
(Edit: The only thing I did with new virtualenv is install to jax, jaxlib and requirements in docs/)
Here is the build command I ran to get more specific error messages.
In "docs" directory,
`python -m sphinx -T -E -W --keep-going -b html -d _build/doctrees -D jupyter_execute_notebooks=off -D language=en . _build/html`
And here is the error messages
```
Running Sphinx v3.5.3
loading translations [en]... done
making output directory... done
[autosummary] generating autosummary for: async_dispatch.rst, autodidax.ipynb, changelog.md, concurrency.rst, custom_vjp_update.md, developer.md, device_memory_profiling.md, errors.rst, faq.rst, glossary.rst, ..., notebooks/neural_network_with_tfds_data.ipynb, notebooks/quickstart.ipynb, notebooks/score_matching.ipynb, notebooks/thinking_in_jax.ipynb, notebooks/vmapped_log_probs.ipynb, profiling.md, pytrees.md, rank_promotion_warning.rst, transformations.md, type_promotion.rst
[autosummary] generating autosummary for: /Users/minhoheo/jax/docs/_autosummary/jax.core.ClosedJaxpr.rst, /Users/minhoheo/jax/docs/_autosummary/jax.core.Jaxpr.rst, /Users/minhoheo/jax/docs/_autosummary/jax.image.resize.rst, /Users/minhoheo/jax/docs/_autosummary/jax.image.scale_and_translate.rst, /Users/minhoheo/jax/docs/_autosummary/jax.lax.abs.rst, /Users/minhoheo/jax/docs/_autosummary/jax.lax.acos.rst, /Users/minhoheo/jax/docs/_autosummary/jax.lax.add.rst, /Users/minhoheo/jax/docs/_autosummary/jax.lax.all_gather.rst, /Users/minhoheo/jax/docs/_autosummary/jax.lax.all_to_all.rst, /Users/minhoheo/jax/docs/_autosummary/jax.lax.argmax.rst, ..., /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.norm.pdf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.norm.ppf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.pareto.logpdf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.pareto.pdf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.poisson.logpmf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.poisson.pmf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.t.logpdf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.t.pdf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.uniform.logpdf.rst, /Users/minhoheo/jax/docs/_autosummary/jax.scipy.stats.uniform.pdf.rst
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from https://numpy.org/doc/stable/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/scipy/reference/objects.inv...
myst v0.13.5: MdParserConfig(renderer='sphinx', commonmark_only=False, dmath_allow_labels=True, dmath_allow_space=True, dmath_allow_digits=True, update_mathjax=True, enable_extensions=['dollarmath'], disable_syntax=[], url_schemes=None, heading_anchors=None, html_meta=[], footnote_transition=True, substitutions=[], sub_delimiters=['{', '}'])
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 60 source files that are out of date
updating environment: [new config] 753 added, 0 changed, 0 removed
/Users/minhoheo/jax/jax/_src/numpy/lax_numpy.py:docstring of jax._src.numpy.lax_numpy.array:1: WARNING: Inline emphasis start-string without end-string.
Traceback (most recent call last):
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/events.py", line 111, in emit
results.append(listener.handler(self.app, *args))
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx_autodoc_typehints.py", line 376, in process_docstring
type_hints = get_all_type_hints(obj, name)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx_autodoc_typehints.py", line 231, in get_all_type_hints
rv = backfill_type_hints(obj, name)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx_autodoc_typehints.py", line 272, in backfill_type_hints
obj_ast = ast.parse(textwrap.dedent(inspect.getsource(obj)), **parse_kwargs)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ast.py", line 47, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 1
lambda x, out=None: _isposneginf(-inf, x, out))
^
SyntaxError: unmatched ')'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/cmd/build.py", line 280, in build_main
app.build(args.force_all, filenames)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/application.py", line 352, in build
self.builder.build_update()
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 296, in build_update
self.build(to_build,
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 310, in build
updated_docnames = set(self.read())
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 417, in read
self._read_serial(docnames)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 438, in _read_serial
self.read_doc(docname)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 478, in read_doc
doctree = read_doc(self.app, self.env, self.env.doc2path(docname))
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/io.py", line 221, in read_doc
pub.publish()
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/core.py", line 217, in publish
self.document = self.reader.read(self.source, self.parser,
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/io.py", line 126, in read
self.parse()
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/readers/__init__.py", line 77, in parse
self.parser.parse(self.input, document)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/parsers.py", line 104, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 170, in run
results = StateMachineWS.run(self, input_lines, input_offset,
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/statemachine.py", line 241, in run
context, next_state, result = self.check_line(
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 2769, in underline
self.section(title, source, style, lineno - 1, messages)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 327, in section
self.new_subsection(title, lineno, messages)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 393, in new_subsection
newabsoffset = self.nested_parse(
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 281, in nested_parse
state_machine.run(block, input_offset, memo=self.memo,
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 196, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/statemachine.py", line 241, in run
context, next_state, result = self.check_line(
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 2344, in explicit_markup
self.explicit_list(blank_finish)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 2369, in explicit_list
newline_offset, blank_finish = self.nested_list_parse(
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 318, in nested_list_parse
state_machine.run(block, input_offset, memo=self.memo,
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 196, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/statemachine.py", line 241, in run
context, next_state, result = self.check_line(
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 2647, in explicit_markup
nodelist, blank_finish = self.explicit_construct(match)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 2354, in explicit_construct
return method(self, expmatch)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 2096, in directive
return self.run_directive(
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/docutils/parsers/rst/states.py", line 2146, in run_directive
result = directive_instance.run()
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/ext/autodoc/directive.py", line 167, in run
documenter.generate(more_content=self.content)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/ext/autodoc/__init__.py", line 967, in generate
self.add_content(more_content)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/ext/autodoc/__init__.py", line 628, in add_content
for i, line in enumerate(self.process_doc(docstrings)):
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/ext/autodoc/__init__.py", line 569, in process_doc
self.env.app.emit('autodoc-process-docstring',
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/application.py", line 462, in emit
return self.events.emit(event, *args, allowed_exceptions=allowed_exceptions)
File "/Users/minhoheo/test_doc/lib/python3.8/site-packages/sphinx/events.py", line 119, in emit
raise ExtensionError(__("Handler %r for event %r threw an exception") %
sphinx.errors.ExtensionError: Handler <function process_docstring at 0x113405430> for event 'autodoc-process-docstring' threw an exception (exception: unmatched ')' (<unknown>, line 1))
Extension error (sphinx_autodoc_typehints):
Handler <function process_docstring at 0x113405430> for event 'autodoc-process-docstring' threw an exception (exception: unmatched ')' (<unknown>, line 1))
```
Here is Python version
```
Python 3.8.2 (default, Dec 21 2020, 15:06:04)
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
And here is pip list
```
Package Version
----------------------------- ---------
absl-py 0.12.0
alabaster 0.7.12
apipkg 1.5
appnope 0.1.2
argon2-cffi 20.1.0
async-generator 1.10
attrs 20.3.0
Babel 2.9.0
backcall 0.2.0
bleach 3.3.0
certifi 2020.12.5
cffi 1.14.5
chardet 4.0.0
colorama 0.4.4
cycler 0.10.0
decorator 4.4.2
defusedxml 0.7.1
docutils 0.16
entrypoints 0.3
execnet 1.8.0
flatbuffers 1.12
gitdb 4.0.7
GitPython 3.1.14
idna 2.10
imagesize 1.2.0
importlib-metadata 3.9.1
iniconfig 1.1.1
ipykernel 5.5.0
ipython 7.22.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jax 0.2.11
jaxlib 0.1.64
jedi 0.18.0
Jinja2 2.11.3
joblib 1.0.1
jsonschema 3.2.0
jupyter-cache 0.4.2
jupyter-client 6.1.12
jupyter-core 4.7.1
jupyter-sphinx 0.3.1
jupyterlab-widgets 1.0.0
kiwisolver 1.3.1
markdown-it-py 0.6.2
MarkupSafe 1.1.1
matplotlib 3.4.0
mdit-py-plugins 0.2.6
mistune 0.8.4
myst-nb 0.12.0
myst-parser 0.13.5
nbclient 0.5.3
nbconvert 5.6.1
nbdime 2.1.0
nbformat 5.1.2
nest-asyncio 1.5.1
notebook 6.3.0
numpy 1.20.2
opt-einsum 3.3.0
packaging 20.9
pandocfilters 1.4.3
parso 0.8.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.1.2
pip 21.0.1
pluggy 0.13.1
prometheus-client 0.9.0
prompt-toolkit 3.0.18
ptyprocess 0.7.0
py 1.10.0
pycparser 2.20
Pygments 2.8.1
pyparsing 2.4.7
pyrsistent 0.17.3
pytest 6.2.2
pytest-forked 1.3.0
pytest-xdist 2.2.1
python-dateutil 2.8.1
pytz 2021.1
PyYAML 5.4.1
pyzmq 22.0.3
requests 2.25.1
scikit-learn 0.24.1
scipy 1.6.2
Send2Trash 1.5.0
setuptools 41.2.0
six 1.15.0
sklearn 0.0
smmap 4.0.0
snowballstemmer 2.1.0
Sphinx 3.5.3
sphinx-autodoc-typehints 1.11.1
sphinx-rtd-theme 0.5.1
sphinx-togglebutton 0.2.3
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 1.0.3
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.4
SQLAlchemy 1.3.23
terminado 0.9.4
testpath 0.4.4
threadpoolctl 2.1.0
toml 0.10.2
tornado 6.1
traitlets 5.0.5
urllib3 1.26.4
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.36.2
widgetsnbextension 3.5.1
zipp 3.4.1
```
Looks quite similar to this bug: https://github.com/agronholm/sphinx-autodoc-typehints/issues/148
Thanks for searching. Yes indeed it is similar.
But it seems that there is no direct solution for this kind of issue.
And since it's working with Python 3.7, maybe it would be better to close this issue?
Or leave it open? What do you think?
I just hit this error in Python 3.8 on OSX. I'm not sure what changed in my environment
#6318 addresses the new warning you saw above with numpy 1.20
I suspect this is why it works in 3.7 but not in 3.8: https://github.com/agronholm/sphinx-autodoc-typehints/blob/49face656c51e370b68108a4a59c7032ed398a2e/sphinx_autodoc_typehints.py#L290-L299
Ah, the culprit is the inspect module:
```python
>>> import sys; sys.version_info
sys.version_info(major=3, minor=8, micro=2, releaselevel='final', serial=0)
>>> import jax.numpy as jnp
>>> import inspect
>>> inspect.getsource(jnp.isposinf)
' lambda x, out=None: _isposneginf(inf, x, out))\n'
```
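That retrieved fragment is exactly what `sphinx-autodoc-typehints` passes to `ast.parse` (via `backfill_type_hints` in the traceback above), which is where the `unmatched ')'` comes from; a minimal reproduction, and presumably why the JAX-side patch above moves the closing parenthesis of the `_wraps(...)` call onto its own line:

```python
import ast, textwrap

# What inspect.getsource returns for the lambda on Python 3.8 (see above):
src = "    lambda x, out=None: _isposneginf(inf, x, out))\n"
ast.parse(textwrap.dedent(src))   # SyntaxError: unmatched ')'
```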
I sent a fix in https://github.com/agronholm/sphinx-autodoc-typehints/pull/168; it looks like there's been some activity in the repo over the last week or two, so hopefully this fix can be put into a release soon! | 2021-04-01T20:15:09 |
|
google/jax | 6,328 | google__jax-6328 | [
"6283"
] | 3c1ee0644b580b1c7cf7fd32043916da7b487390 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -5449,6 +5449,34 @@ def reduce_window_shape_tuple(operand_shape, window_dimensions, window_strides,
_reduce_window_batch_rule, _reduce_window_min)
+def _reduce_precision_shape_rule(operand, *, exponent_bits, mantissa_bits):
+ exponent_bits = operator.index(exponent_bits)
+ mantissa_bits = operator.index(mantissa_bits)
+ if exponent_bits < 1:
+ raise ValueError(f"reduce_precision: exponent_bits must be positive; got {exponent_bits}")
+ if mantissa_bits < 0:
+ raise ValueError(f"reduce_precision: mantissa_bits must be non-negative; got {mantissa_bits}")
+ return operand.shape
+
+
+reduce_precision_p = standard_primitive(
+ _reduce_precision_shape_rule,
+ partial(unop_dtype_rule, _identity, _float, 'reduce_precision'),
+ name='reduce_precision')
+
+
+def reduce_precision(operand, exponent_bits, mantissa_bits):
+ """Wraps XLA's `ReducePrecision
+ <https://www.tensorflow.org/xla/operation_semantics#reduceprecision>`_
+ operator.
+ """
+ exponent_bits = core.concrete_or_error(
+ operator.index, exponent_bits, "exponent_bits argument of lax.reduce_precision")
+ mantissa_bits = core.concrete_or_error(
+ operator.index, mantissa_bits, "mantissa_bits argument of lax.reduce_precision")
+ return reduce_precision_p.bind(operand, exponent_bits=exponent_bits, mantissa_bits=mantissa_bits)
+
+
def _select_and_scatter_shape_rule(
operand, source, init_value, *, select_jaxpr, select_consts, scatter_jaxpr,
scatter_consts, window_dimensions, window_strides, padding):
diff --git a/jax/experimental/jax2tf/jax2tf.py b/jax/experimental/jax2tf/jax2tf.py
--- a/jax/experimental/jax2tf/jax2tf.py
+++ b/jax/experimental/jax2tf/jax2tf.py
@@ -794,6 +794,7 @@ def _unexpected_primitive(p: core.Primitive, *args, **kwargs):
"igamma_grad_a",
"random_gamma_grad",
+ "reduce_precision",
# Not high priority?
"after_all", "all_to_all", "create_token",
diff --git a/jax/lax/__init__.py b/jax/lax/__init__.py
--- a/jax/lax/__init__.py
+++ b/jax/lax/__init__.py
@@ -209,6 +209,8 @@
reduce_min_p,
reduce_or_p,
reduce_p,
+ reduce_precision,
+ reduce_precision_p,
reduce_prod_p,
reduce_sum_p,
reduce_window,
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -1693,6 +1693,25 @@ def np_fun(x):
self._CompileAndCheck(fun, args_maker)
self._CheckAgainstNumpy(np_fun, fun, args_maker)
+
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "_shape={}_out_dtype={}".format(
+ jtu.format_shape_dtype_string(shape, dtype),
+ jtu.format_shape_dtype_string(shape, out_dtype)),
+ "shape": shape, "dtype": dtype, "out_dtype": out_dtype}
+ for shape in [(), (3,), (3, 4)]
+ for dtype in float_dtypes
+ for out_dtype in float_dtypes))
+ def testReducePrecision(self, shape, dtype, out_dtype):
+ rng = jtu.rand_default(self.rng())
+ args_maker = lambda: [rng(shape, dtype)]
+ info = dtypes.finfo(out_dtype)
+ fun = lambda x: lax.reduce_precision(x, info.nexp, info.nmant)
+ np_fun = lambda x: np.asarray(x).astype(out_dtype).astype(dtype)
+ self._CheckAgainstNumpy(np_fun, fun, args_maker)
+ self._CompileAndCheck(fun, args_maker)
+
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_axis={}_isstable={}".format(
jtu.format_shape_dtype_string(shape, dtype), axis, is_stable),
| Add lax.reduce_precision primitive corresponding to XLA's ReducePrecision op
I want to round a float32 tensor x to bfloat16 values and get it back as float32.
The natural thing is: x.astype(jnp.bfloat16).astype(jnp.float32)
But it looks like XLA optimizes this away to just x again?
Could we have an op to enforce bfloat16 rounding? (e.g., to simulate bfloat16 training in parts of the net)
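For reference, the primitive added in the patch above makes the rounding explicit so XLA cannot optimize it away; a minimal sketch (bfloat16 has 8 exponent bits and 7 explicit mantissa bits):

```python
import jax.numpy as jnp
from jax import jit, lax

@jit
def round_to_bfloat16(x):
    # Keeps float32 storage, but discards precision beyond bfloat16.
    return lax.reduce_precision(x, exponent_bits=8, mantissa_bits=7)

print(round_to_bfloat16(jnp.float32(jnp.pi)))   # ~3.140625
```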
| It looks like your strategy works, at least in simple cases:
```python
import jax.numpy as jnp
from jax import jit
@jit
def round(x):
return x.astype(jnp.bfloat16).astype(jnp.float32)
x = jnp.float32(jnp.pi)
y = round(x)
print(x, y, x == y)
# 3.1415927 3.140625 False
```
Do you have an example of a place where this approach fails?
XLA is allowed to make optimizations that increase precision. I know the TPU backend does so in some cases, if nothing else. The only way to guarantee that precision is not increased by XLA would be to add a JAX primitive that wraps the `ReducePrecision` operator, which exists explicitly for this purpose.
I'll take a look at exposing this in `lax.py` | 2021-04-02T20:18:30 |
google/jax | 6,354 | google__jax-6354 | [
"6353"
] | db367216b9fdb923dff3542842a5f7785d2e3116 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -1111,7 +1111,8 @@ def flip(m, axis: Optional[Union[int, Tuple[int, ...]]] = None):
_check_arraylike("flip", m)
if axis is None:
return lax.rev(m, list(range(len(shape(m)))))
- return lax.rev(m, [_canonicalize_axis(axis, ndim(m))])
+ axis = _ensure_index_tuple(axis)
+ return lax.rev(m, [_canonicalize_axis(ax, ndim(m)) for ax in axis])
@_wraps(np.fliplr)
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -3449,7 +3449,7 @@ def testTracingPrimitiveWithNoTranslationErrorMessage(self):
"shape": shape, "dtype": dtype, "axis": axis}
for shape in [(3,), (2, 3)]
for dtype in default_dtypes
- for axis in list(range(-len(shape), len(shape))) + [None] # Test negative axes
+ for axis in list(range(-len(shape), len(shape))) + [None] + [tuple(range(len(shape)))] # Test negative axes and tuples
))
def testFlip(self, shape, dtype, axis):
rng = jtu.rand_default(self.rng())
| jnp.flip crashes when given a tuple of axes
Please:
- [x] Check for duplicate issues.
- [x] Provide a complete example of how to reproduce the bug, wrapped in triple backticks like this:
```python
import jax.numpy as jnp
jnp.flip(jnp.zeros((3, 3, 3)), axis=(1,2))
```
- [x] If applicable, include full error messages/tracebacks.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/buehrle/dev/polyssimo/.venv/lib/python3.6/site-packages/jax/_src/numpy/lax_numpy.py", line 1114, in flip
return lax.rev(m, [_canonicalize_axis(axis, ndim(m))])
File "/home/buehrle/dev/polyssimo/.venv/lib/python3.6/site-packages/jax/_src/util.py", line 262, in canonicalize_axis
axis = operator.index(axis)
TypeError: 'tuple' object cannot be interpreted as an integer
```
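With the fix in the patch above, a tuple of axes is accepted and matches NumPy; a quick check:

```python
import numpy as np
import jax.numpy as jnp

a = np.arange(8).reshape(2, 2, 2)
np.testing.assert_array_equal(np.flip(a, axis=(1, 2)),
                              jnp.flip(jnp.asarray(a), axis=(1, 2)))
```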
| 2021-04-06T15:17:01 |
|
google/jax | 6,399 | google__jax-6399 | [
"6372"
] | f1a6397948a179665b7977592949dc33093e9b33 | diff --git a/jax/api.py b/jax/api.py
--- a/jax/api.py
+++ b/jax/api.py
@@ -1259,14 +1259,12 @@ def vmap(fun: F, in_axes=0, out_axes=0, axis_name=None) -> F:
# rather than raising an error. https://github.com/google/jax/issues/2367
in_axes = tuple(in_axes)
- in_axes_, out_axes_ = tree_leaves(in_axes), tree_leaves(out_axes)
- if not all(isinstance(l, (type(None), int)) for l in in_axes_):
+ if not all(type(l) is int for l in tree_leaves(in_axes)):
raise TypeError("vmap in_axes must be an int, None, or (nested) container "
f"with those types as leaves, but got {in_axes}.")
- if not all(isinstance(l, (type(None), int)) for l in out_axes_):
+ if not all(type(l) is int for l in tree_leaves(out_axes)):
raise TypeError("vmap out_axes must be an int, None, or (nested) container "
f"with those types as leaves, but got {out_axes}.")
- del in_axes_, out_axes_
@wraps(fun, docstr=docstr)
@api_boundary
@@ -1560,6 +1558,13 @@ def pmap(
donate_tuple = rebase_donate_argnums(_ensure_index_tuple(donate_argnums),
static_broadcasted_tuple)
+ if not all(type(l) is int for l in tree_leaves(in_axes)):
+ raise TypeError("pmap in_axes must be an int, None, or (nested) container "
+ f"with those types as leaves, but got {in_axes}.")
+ if not all(type(l) is int for l in tree_leaves(out_axes)):
+ raise TypeError("pmap out_axes must be an int, None, or (nested) container "
+ f"with those types as leaves, but got {out_axes}.")
+
@wraps(fun)
@api_boundary
def f_pmapped(*args, **kwargs):
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1939,6 +1939,16 @@ def foo(tree_arg):
foo, in_axes=((0, collections.OrderedDict([('a', 1), ('b', 2)])),))
self.assertEqual(vfoo(tree).shape, (6, 2, 5))
+ def test_vmap_in_axes_bool_error(self):
+ # https://github.com/google/jax/issues/6372
+ with self.assertRaisesRegex(TypeError, "must be an int"):
+ api.vmap(lambda x: x, in_axes=False)(jnp.zeros(3))
+
+ def test_pmap_in_axes_bool_error(self):
+ # https://github.com/google/jax/issues/6372
+ with self.assertRaisesRegex(TypeError, "must be an int"):
+ api.pmap(lambda x: x, in_axes=False)(jnp.zeros(1))
+
def test_pmap_global_cache(self):
def f(x, y):
return x, y
| Confusing `vmap` user experience when passing in `False` into `in_axes`
`False` is interpreted as 0 in the `in_axes` argument to `jax.vmap`. This is an invalid value (and some users may expect it to behave like `None`). See this code:
```python
import jax.numpy as jnp
def f(x, y):
return jnp.sum(x) + jnp.sum(y)
print(jax.vmap(f, in_axes=[False, 0])(arr, arr))
print(jax.vmap(f, in_axes=[None, 0])(arr, arr))
print(jax.vmap(f, in_axes=[0, 0])(arr, arr))
```
prints
```
[ 6 14]
[13 17]
[ 6 14]
```
The `vmap` documentation states that each item in `in_axes` must be either an integer or None. @mbz was confused by this and accidentally thought that `False` would behave like `None`. But instead it's interpreted as 0.
It would be good to raise an error for users accidentally using `False` in `in_axes`.
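A self-contained version of the snippet above (the report omits `import jax` and the definition of `arr`; the 2x2 array below is just an assumed example, since any integer array with row sums 3 and 7 reproduces the printed output), followed by a short illustration of why `False` slipped through and the stricter check the patch above switches to:

```python
import jax
import jax.numpy as jnp

def f(x, y):
  return jnp.sum(x) + jnp.sum(y)

arr = jnp.array([[1, 2], [3, 4]])  # assumed input, not from the original report

print(jax.vmap(f, in_axes=[False, 0])(arr, arr))  # [ 6 14]  <- False silently behaves like 0
print(jax.vmap(f, in_axes=[None, 0])(arr, arr))   # [13 17]
print(jax.vmap(f, in_axes=[0, 0])(arr, arr))      # [ 6 14]

# bool is a subclass of int, so an isinstance-based leaf check accepts it:
assert isinstance(True, int) and issubclass(bool, int)
# The patch therefore checks `type(l) is int`, which rejects booleans with a clear TypeError:
assert type(False) is not int
```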
| Thanks for raising this! I think the trouble is [on this line](https://github.com/google/jax/blob/f1a6397948a179665b7977592949dc33093e9b33/jax/api.py#L1263), due to the fact that `isinstance(True, int)` (like `issubclass(bool, int)`) is True in Python. | 2021-04-09T21:48:06 |
google/jax | 6,408 | google__jax-6408 | [
"6403"
] | 2fea627cbcadb8e66ee4c1af4de09bea20d947c1 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -5573,6 +5573,8 @@ def _select_and_scatter_add_transpose(
t, source, operand, *, select_prim, window_dimensions, window_strides,
padding):
assert ad.is_undefined_primal(source) and not ad.is_undefined_primal(operand)
+ if type(t) is ad_util.Zero:
+ return [ad_util.Zero(source.aval), None]
ones = (1,) * len(window_dimensions)
source_t = _select_and_gather_add(t, operand, select_prim, window_dimensions,
window_strides, padding, ones, ones)
| Gradient through Neural Tangent Kernel works for CNN without MaxPool, but not for CNN with MaxPool
Hi all,
I am working on a project that requires taking the gradient of Neural Tangent Kernel with respect to the parameters at which the Jacobian is being evaluated.
However, I’m getting an error when jitting the NTK computation function if my network uses maxpooling.
I have prepared two Colab notebooks that reproduce the issue:
[CNN without MaxPool](https://colab.research.google.com/drive/1k_cCuNxz2rXeXFrVnwFimWtvOzfVPjdo?usp=sharing)
[CNN with MaxPool](https://colab.research.google.com/drive/1WxK6WjSrZ4FdzJpY3hg0LSZ8gSkM7Bf-?usp=sharing)
In the first notebook, I show that computing the gradients with respect to the NTK evaluation parameters works just fine if I don’t include a maxpooling operation in the CNN, and in the second notebook, I show that including a maxpooling operation results in an error:
> TypeError: Value Zero(ShapedArray(float32[11,28,28,32])) with type <class 'jax.ad_util.Zero'> is not a valid JAX type
Oddly, the error message disappears if I remove all the jitting from the NTK function.
I would really appreciate it if you could help me understand the cause of this issue and help me fix it.
Thank you so much!
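For context (a reading of the patch above rather than a full diagnosis): max-pooling's reverse-mode differentiation goes through `select_and_scatter_add`, and its transpose rule previously assumed the incoming cotangent `t` was a concrete array. Under `jit`, differentiating twice (as the NTK-gradient computation does) can hand that rule a symbolic `ad_util.Zero` instead, which is what produces the "not a valid JAX type" error; the fix adds an early return for that case:

```python
# The guard added to _select_and_scatter_add_transpose in jax/_src/lax/lax.py (see the patch above):
if type(t) is ad_util.Zero:
    return [ad_util.Zero(source.aval), None]
```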
| 2021-04-12T12:12:14 |
||
google/jax | 6,414 | google__jax-6414 | [
"6405"
] | ad342419b87fbe08afe9a5f959704dee3a654a0d | diff --git a/jax/_src/random.py b/jax/_src/random.py
--- a/jax/_src/random.py
+++ b/jax/_src/random.py
@@ -563,7 +563,7 @@ def _shuffle(key, x, axis) -> jnp.ndarray:
# another analysis (where the keys are generated one bit at a time).
exponent = 3 # see tjablin@'s analysis for explanation of this parameter
uint32max = jnp.iinfo(np.uint32).max
- num_rounds = int(np.ceil(exponent * np.log(x.size) / np.log(uint32max)))
+ num_rounds = int(np.ceil(exponent * np.log(max(1, x.size)) / np.log(uint32max)))
for _ in range(num_rounds):
key, subkey = split(key)
| diff --git a/tests/random_test.py b/tests/random_test.py
--- a/tests/random_test.py
+++ b/tests/random_test.py
@@ -296,7 +296,7 @@ def testChoice(self, dtype, shape, replace, weighted, array_input):
{"testcase_name": "_{}".format(jtu.format_shape_dtype_string(shape, dtype)),
"dtype": dtype, "shape": shape}
for dtype in jtu.dtypes.floating + jtu.dtypes.integer
- for shape in [100, (10, 10), (10, 5, 2)]))
+ for shape in [100, (10, 10), (10, 5, 2), 0, 1, (0, 5), (1, 5)]))
def testPermutationArray(self, dtype, shape):
key = random.PRNGKey(0)
x = jnp.arange(np.prod(shape)).reshape(shape).astype(dtype)
@@ -307,7 +307,8 @@ def testPermutationArray(self, dtype, shape):
perm2 = crand(key)
self.assertAllClose(perm1, perm2)
- self.assertFalse(np.all(perm1 == x)) # seems unlikely!
+ if x.shape[0] > 1:
+ self.assertFalse(np.all(perm1 == x)) # seems unlikely!
self.assertAllClose(np.sort(perm1.ravel()), x.ravel(), check_dtypes=False)
self.assertArraysAllClose(
x, jnp.arange(np.prod(shape)).reshape(shape).astype(dtype))
| jax.random.permutation throws error when called on length zero input
When random.permutation is called with a length zero array, it should most naturally return a length zero array. Instead it throws an error message `cannot convert float infinity to integer`.
As an example, the code
```python
skey = random.PRNGKey(0)  # here `random` is `jax.random`
random.permutation(skey, np.ones((0,)))
```
returns an error message
```
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-23-13d34f8e8bed> in <module>()
----> 1 random.permutation(skey, np.ones((0,)))
12 frames
google3/third_party/py/jax/_src/random.py in permutation(key, x)
539 return _shuffle(key, jnp.arange(x), 0)
540 elif np.ndim(x) == 1:
--> 541 return _shuffle(key, x, 0)
542 else:
543 assert isinstance(x, jnp.ndarray)
google3/third_party/py/jax/api.py in cache_miss(*args, **kwargs)
421 backend=backend,
422 name=flat_fun.__name__,
--> 423 donated_invars=donated_invars)
424 out_pytree_def = out_tree()
425 out = tree_unflatten(out_pytree_def, out_flat)
google3/third_party/py/jax/core.py in bind(self, fun, *args, **params)
1556
1557 def bind(self, fun, *args, **params):
-> 1558 return call_bind(self, fun, *args, **params)
1559
1560 def process(self, trace, fun, tracers, params):
google3/third_party/py/jax/core.py in call_bind(primitive, fun, *args, **params)
1547 tracers = map(top_trace.full_raise, args)
1548 with maybe_new_sublevel(top_trace):
-> 1549 outs = primitive.process(top_trace, fun, tracers, params)
1550 return map(full_lower, apply_todos(env_trace_todo(), outs))
1551
google3/third_party/py/jax/core.py in process(self, trace, fun, tracers, params)
1559
1560 def process(self, trace, fun, tracers, params):
-> 1561 return trace.process_call(self, fun, tracers, params)
1562
1563 def post_process(self, trace, out_tracers, params):
google3/third_party/py/jax/core.py in process_call(self, primitive, f, tracers, params)
597
598 def process_call(self, primitive, f, tracers, params):
--> 599 return primitive.impl(f, *tracers, **params)
600 process_map = process_call
601
google3/third_party/py/jax/interpreters/xla.py in _xla_call_impl(fun, device, backend, name, donated_invars, *args)
576 def _xla_call_impl(fun: lu.WrappedFun, *args, device, backend, name, donated_invars):
577 compiled_fun = _xla_callable(fun, device, backend, name, donated_invars,
--> 578 *unsafe_map(arg_spec, args))
579 try:
580 return compiled_fun(*args)
google3/third_party/py/jax/linear_util.py in memoized_fun(fun, *args)
258 fun.populate_stores(stores)
259 else:
--> 260 ans = call(fun, *args)
261 cache[key] = (ans, fun.stores)
262
google3/third_party/py/jax/interpreters/xla.py in _xla_callable(fun, device, backend, name, donated_invars, *arg_specs)
649
650 abstract_args, arg_devices = unzip2(arg_specs)
--> 651 jaxpr, out_avals, consts = pe.trace_to_jaxpr_final(fun, abstract_args, transform_name="jit")
652 if any(isinstance(c, core.Tracer) for c in consts):
653 raise core.UnexpectedTracerError("Encountered an unexpected tracer.")
google3/third_party/py/jax/interpreters/partial_eval.py in trace_to_jaxpr_final(fun, in_avals, transform_name)
1207 main.source_info = fun_sourceinfo(fun.f, transform_name) # type: ignore
1208 main.jaxpr_stack = () # type: ignore
-> 1209 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
1210 del fun, main
1211 return jaxpr, out_avals, consts
google3/third_party/py/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1186 trace = DynamicJaxprTrace(main, core.cur_sublevel())
1187 in_tracers = map(trace.new_arg, in_avals)
-> 1188 ans = fun.call_wrapped(*in_tracers)
1189 out_tracers = map(trace.full_raise, ans)
1190 jaxpr, out_avals, consts = frame.to_jaxpr(in_tracers, out_tracers)
google3/third_party/py/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
164
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
168 # Some transformations yield from inside context managers, so we have to
google3/third_party/py/jax/_src/random.py in _shuffle(key, x, axis)
564 exponent = 3 # see tjablin@'s analysis for explanation of this parameter
565 uint32max = jnp.iinfo(np.uint32).max
--> 566 num_rounds = int(np.ceil(exponent * np.log(x.size) / np.log(uint32max)))
567
568 for _ in range(num_rounds):
OverflowError: cannot convert float infinity to integer
```
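For reference, the arithmetic behind the failure can be reproduced outside JAX; the snippet below mirrors the round-count expression in `_shuffle` (and the `max(1, x.size)` guard from the diff above), but is standalone:
```python
import numpy as np

exponent = 3
uint32max = np.iinfo(np.uint32).max

with np.errstate(divide="ignore"):
    rounds = exponent * np.log(0) / np.log(uint32max)
print(rounds)  # -inf, and int(np.ceil(rounds)) raises "cannot convert float infinity to integer"

# The patched expression clamps the argument of log, so an empty input gives 0 rounds:
print(int(np.ceil(exponent * np.log(max(1, 0)) / np.log(uint32max))))  # 0
```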
| Thanks for the report! I'll take a look. | 2021-04-12T16:52:48 |
google/jax | 6,416 | google__jax-6416 | [
"6404"
] | 631653c42d4a9274080ca46a66b006076c727ad9 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -4714,7 +4714,13 @@ def _index_to_gather(x_shape, idx, normalize_indices=True):
use_64bit_index = _any([not core.is_constant_dim(d) or d >= (1 << 31) for d in x_shape])
index_dtype = int64 if use_64bit_index else int32
- gather_indices = np.zeros((0,), dtype=index_dtype) # use np to save a compilation
+
+ # Gather indices.
+ # Pairs of (array, start_dim) values. These will be broadcast into
+ # gather_indices_shape, with the array dimensions aligned to start_dim, and
+ # then concatenated.
+ gather_indices = []
+ gather_indices_shape = []
# We perform three transformations to y before the scatter op, in order:
# First, y is broadcast to slice_shape. In general `y` only need broadcast to
@@ -4740,15 +4746,12 @@ def _index_to_gather(x_shape, idx, normalize_indices=True):
advanced_indexes = broadcast_arrays(*advanced_indexes)
shape = advanced_indexes[0].shape
ndim = len(shape)
- advanced_indexes = [
- lax.convert_element_type(lax.reshape(a, shape + (1,)), index_dtype)
- for a in advanced_indexes]
-
- # Broadcast gather_indices from [..., k] to [..., 1, 1, ..., 1, k].
- gather_indices = lax.broadcast_in_dim(
- gather_indices, np.insert(gather_indices.shape, -1, shape),
- tuple(range(gather_indices.ndim - 1)) + (gather_indices.ndim + ndim - 1,))
- gather_indices = concatenate([gather_indices] + advanced_indexes, -1)
+
+ start_dim = len(gather_indices_shape)
+ gather_indices += ((lax.convert_element_type(a, index_dtype), start_dim)
+ for a in advanced_indexes)
+ gather_indices_shape += shape
+
start_index_map.extend(x_advanced_axes)
collapsed_slice_dims.extend(x_advanced_axes)
slice_shape.extend(shape)
@@ -4772,8 +4775,7 @@ def _index_to_gather(x_shape, idx, normalize_indices=True):
raise IndexError(f"index is out of bounds for axis {x_axis} with size 0")
i = _normalize_index(i, x_shape[x_axis]) if normalize_indices else i
i = lax.convert_element_type(i, index_dtype)
- i = broadcast_to(i, tuple(gather_indices.shape[:-1]) + (1,))
- gather_indices = concatenate((gather_indices, i), -1)
+ gather_indices.append((i, len(gather_indices_shape)))
collapsed_slice_dims.append(x_axis)
gather_slice_shape.append(1)
start_index_map.append(x_axis)
@@ -4807,8 +4809,7 @@ def _index_to_gather(x_shape, idx, normalize_indices=True):
reversed_y_dims.append(collapsed_y_axis)
if stride == 1:
i = lax.convert_element_type(start, index_dtype)
- i = broadcast_to(i, tuple(gather_indices.shape[:-1]) + (1,))
- gather_indices = concatenate((gather_indices, i), -1)
+ gather_indices.append((i, len(gather_indices_shape)))
slice_shape.append(limit - start)
gather_slice_shape.append(limit - start)
offset_dims.append(collapsed_y_axis)
@@ -4818,18 +4819,9 @@ def _index_to_gather(x_shape, idx, normalize_indices=True):
size = i.shape[0]
slice_shape.append(size)
gather_slice_shape.append(1)
- gather_indices_shape = tuple(gather_indices.shape[:-1]) + (size,)
- i = lax.broadcast_in_dim(
- i, shape=gather_indices_shape + (1,),
- broadcast_dimensions=(len(gather_indices_shape) - 1,))
- gather_indices = lax.broadcast_in_dim(
- gather_indices,
- shape=gather_indices_shape + (len(start_index_map),),
- broadcast_dimensions=(
- tuple(range(len(gather_indices_shape) - 1)) +
- (len(gather_indices_shape),)))
- gather_indices = concatenate(
- (gather_indices, i), len(gather_indices_shape))
+ gather_indices.append((i, len(gather_indices_shape)))
+ gather_indices_shape.append(size)
+
start_index_map.append(x_axis)
collapsed_slice_dims.append(x_axis)
@@ -4846,6 +4838,19 @@ def _index_to_gather(x_shape, idx, normalize_indices=True):
msg = "Indexing mode not yet supported. Open a feature request!\n{}"
raise IndexError(msg.format(idx))
+ if len(gather_indices) == 0:
+ gather_indices_array = np.zeros((0,), dtype=index_dtype)
+ elif len(gather_indices) == 1:
+ g, _ = gather_indices[0]
+ gather_indices_array = lax.expand_dims(g, (g.ndim,))
+ else:
+ last_dim = len(gather_indices_shape)
+ gather_indices_shape.append(1)
+ gather_indices_array = lax.concatenate([
+ lax.broadcast_in_dim(g, gather_indices_shape, tuple(range(i, i + g.ndim)))
+ for g, i in gather_indices],
+ last_dim)
+
dnums = lax.GatherDimensionNumbers(
offset_dims = tuple(offset_dims),
collapsed_slice_dims = tuple(sorted(collapsed_slice_dims)),
@@ -4857,7 +4862,7 @@ def _index_to_gather(x_shape, idx, normalize_indices=True):
gather_slice_shape=gather_slice_shape,
reversed_y_dims=reversed_y_dims,
dnums=dnums,
- gather_indices=gather_indices)
+ gather_indices=gather_indices_array)
def _should_unpack_list_index(x):
"""Helper for _eliminate_deprecated_list_indexing."""
| Indexing into arrays hoists empty array constants out of Jaxprs
Repro:
```
def test_fun(x):
return x[0] + x[1] + x[2] + x[3]
```
Jaxpr:
```
{ lambda a b c d ; e.
let f = lt 0 0
g = add 0 4
h = select f g 0
i = convert_element_type[ new_dtype=int32
weak_type=False ] h
j = broadcast_in_dim[ broadcast_dimensions=( )
shape=(1,) ] i
k = concatenate[ dimension=0 ] a j
l = gather[ dimension_numbers=GatherDimensionNumbers(offset_dims=(), collapsed_slice_dims=(0,), start_index_map=(0,))
slice_sizes=(1,) ] e k
[...]
}
```
where `a`, `b`, `c`, and `d` are empty int32 arrays - one per indexing operation: `[array([], dtype=int32), array([], dtype=int32), array([], dtype=int32), array([], dtype=int32)]`
In severe cases, thousands of these constants inflate the size of the unoptimized HLO and result in slow compilation. In code with higher-order functions, there is evidence that these constants and the associated `lt`, `add`, and `select` operations do not get optimized away.
Workarounds: `jnp.take` does not create empty constants and `lax.slice` results in even more compact code in situations where its semantics are applicable.
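A minimal sketch of both workarounds (the helper names are purely illustrative):
```python
import jax
import jax.numpy as jnp
from jax import lax

x = jnp.arange(8.0)

def with_take(x):
    # scalar static indices via jnp.take avoid the empty index constants
    return jnp.take(x, 0) + jnp.take(x, 1) + jnp.take(x, 2) + jnp.take(x, 3)

def with_slice(x):
    # when a contiguous static slice suffices, lax.slice is even more compact
    return jnp.sum(lax.slice(x, (0,), (4,)))

print(jax.make_jaxpr(with_take)(x))
print(jax.make_jaxpr(with_slice)(x))
```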
| 2021-04-12T19:46:07 |
||
google/jax | 6,439 | google__jax-6439 | [
"6431"
] | 3c003a68fcc2ecea575a58bae033b951fc0e08a5 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -5170,9 +5170,10 @@ def _quantile(a, q, axis, interpolation, keepdims, squash_nans):
raise ValueError("q must be have rank <= 1, got shape {}".format(shape(q)))
a_shape = shape(a)
- a = lax.sort(a, dimension=axis)
if squash_nans:
+ a = where(isnan(a), nan, a) # Ensure nans are positive so they sort to the end.
+ a = lax.sort(a, dimension=axis)
counts = sum(logical_not(isnan(a)), axis=axis, dtype=q.dtype,
keepdims=keepdims)
shape_after_reduction = counts.shape
@@ -5200,6 +5201,7 @@ def _quantile(a, q, axis, interpolation, keepdims, squash_nans):
index[axis] = high
high_value = a[tuple(index)]
else:
+ a = lax.sort(a, dimension=axis)
n = a_shape[axis]
q = lax.mul(q, _constant_like(q, n - 1))
low = lax.floor(q)
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -666,10 +666,13 @@ def rand(shape, dtype):
return base_rand(shape, dtype)
dims = _dims_of_shape(shape)
- nan_flips = rng.rand(*dims) < 0.1
+ r = rng.rand(*dims)
+ nan_flips = r < 0.1
+ neg_nan_flips = r < 0.05
vals = base_rand(shape, dtype)
vals = np.where(nan_flips, np.array(np.nan, dtype=dtype), vals)
+ vals = np.where(neg_nan_flips, np.array(-np.nan, dtype=dtype), vals)
return _cast_to_shape(np.asarray(vals, dtype=dtype), shape, dtype)
| nanmedian (sometimes) incorrectly returns nan
`nanmedian` of a logged array containing `nan`s incorrectly returns `nan`
```python
import jax
import jax.numpy as np
import itertools
M = np.e # or w/e
a = np.array([0, M, 0])
# log of nans
la1 = np.log(np.array([np.nan, M, np.nan]))
la2 = np.log(np.where(a > 0, a, np.nan))
la3 = np.log(jax.ops.index_update(a, a <= 0, np.nan))
# nans inserted after log
la4 = np.array([np.nan, np.log(M), np.nan])
la5 = np.log(a); la5 = np.where(np.isinf(la5), np.nan, la5)
la6 = np.log(a); la6 = jax.ops.index_update(la5, np.isinf(la6), np.nan)
for pair in itertools.combinations((la1,la2,la3,la4,la5,la6), 2):
assert np.array_equal(*pair, equal_nan=True) # all True
print(np.nanmedian(la1)) # nan
print(np.nanmedian(la2)) # nan
print(np.nanmedian(la3)) # nan
print(np.nanmedian(la4)) # log(M)
print(np.nanmedian(la5)) # log(M)
print(np.nanmedian(la6)) # log(M)
```
Other nan functions seem to work as expected, e.g.
```python
print(np.nanmean(la1)) # log(M)
print(np.nanmean(la4)) # log(M)
```
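The fix in the diff above canonicalizes NaNs before sorting; a small standalone sketch of the idea, assuming (as the test patch does) that `-np.nan` produces a negative-signed NaN on your platform:
```python
import numpy as np
import jax.numpy as jnp
from jax import lax

a = jnp.array([-np.nan, 1.0, -np.nan], dtype=jnp.float32)

# XLA's sort is sign-aware for NaNs, so negative-signed NaNs land at the front:
print(lax.sort(a))  # e.g. [nan nan  1.]

# Rewriting every NaN as a canonical (positive) NaN restores the
# "NaNs sort to the end" assumption the median code relies on:
print(lax.sort(jnp.where(jnp.isnan(a), jnp.nan, a)))  # [ 1. nan nan]
```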
| Thanks for the report! This one is pretty strange... it looks like for some reason the particular float representation of the nan value matters!
```python
print(la1.view('uint8'))
print(la4.view('uint8'))
# [255 255 255 255 0 0 128 63 255 255 255 255]
# [ 0 0 192 127 0 0 128 63 0 0 192 127]
```
It probably has something to do with how these react to the sorting used under the hood in `median`.
indeed! interesting..
```
jax.lax.sort(la1) # DeviceArray([nan, nan, 1.], dtype=float32)
jax.lax.sort(la4) # DeviceArray([ 1., nan, nan], dtype=float32)
jax.lax.sort(np.hstack([la1,la4])) # DeviceArray([nan, nan, 1., 1., nan, nan], dtype=float32)
``` | 2021-04-14T17:37:58 |
google/jax | 6,470 | google__jax-6470 | [
"1459"
] | 919b11e81ada26bc449431f2cc4fe0fab7d136c6 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -141,7 +141,7 @@ def nextafter(x1: Array, x2: Array) -> Array:
For the smallest usable (i.e. normal) float, use ``tiny`` of ``jnp.finfo``.
"""
- return nextafter_p.bind(_brcast(x1, x2), _brcast(x2, x1))
+ return nextafter_p.bind(x1, x2)
def floor(x: Array) -> Array:
r"""Elementwise floor: :math:`\left\lfloor x \right\rfloor`."""
@@ -287,7 +287,7 @@ def complex(x: Array, y: Array) -> Array:
Builds a complex number from real and imaginary parts.
"""
- return complex_p.bind(_brcast(x, y), _brcast(y, x))
+ return complex_p.bind(x, y)
def conj(x: Array) -> Array:
r"""Elementwise complex conjugate function: :math:`\overline{x}`."""
@@ -2146,55 +2146,48 @@ def naryop(result_dtype, accepted_dtypes, name, translation_rule=None):
return prim
standard_naryop = partial(naryop, _input_dtype)
-
+# Decorator for translation rules which adds explicit broadcasting of positional
+# arguments. This is necessary only for a handful of primitives whose XLA
+# implementations do not support broadcasting.
def _broadcast_translate(translate: Callable):
- # Decorator for translation rules which adds explicit broadcasting of
- # positional arguments. This is necessary only for a handful of primitives
- # whose XLA implementations do not support broadcasting.
- def _broadcast_array(array, array_shape, result_shape):
- if array_shape == result_shape:
- return array
- bcast_dims = tuple(range(len(result_shape) - len(array_shape),
- len(result_shape)))
- result = xops.BroadcastInDim(array, result_shape, bcast_dims)
+ def _broadcast_array(x, shape, result_shape):
+ if shape == result_shape:
+ return x
+ bcast_dims = tuple(range(len(result_shape) - len(shape), len(result_shape)))
+ result = xops.BroadcastInDim(x, result_shape, bcast_dims)
return result
def _broadcasted_translation_rule(c, *args, **kwargs):
- shapes = [c.get_shape(arg).dimensions() for arg in args]
+ shapes = [c.get_shape(x).dimensions() for x in args]
result_shape = broadcast_shapes(*shapes)
- args = [_broadcast_array(arg, arg_shape, result_shape)
- for arg, arg_shape in zip(args, shapes)]
+ args = [_broadcast_array(x, s, result_shape) for x, s in zip(args, shapes)]
return translate(c, *args, **kwargs)
return _broadcasted_translation_rule
-# NOTE(mattjj): this isn't great for orchestrate fwd mode because it means JVPs
-# get two extra ops in them: a reshape and a broadcast_in_dim (or sometimes just
-# a broadcast). but saving the shape info with the primitives isn't great either
-# because then we can't trace these ops without shape data.
-def _brcast(x, *others):
- # Used in jvprules to make naryop broadcasting explicit for transposability.
- # Requires shape info during jvp tracing, which isn't strictly necessary.
- # We don't need full numpy broadcasting, but otherwise the logic is the same
- # so we reuse the broadcast_shapes function after filtering out scalars.
- shapes = tuple(filter(None, map(np.shape, (x,) + others)))
- shape = shapes and broadcast_shapes(*shapes)
- if np.shape(x) != shape:
- return _brcast_to(x, shape)
- else:
+# Like autograd.numpy.numpy_vjps.unbroadcast, this utility handles transposition
+# involving linear primitives with implicit broadcasting.
+def _unbroadcast(aval, x):
+ if not isinstance(aval, ShapedArray):
+ raise TypeError("transpose with implicit broadcasting of unshaped values")
+ x_shape = np.shape(x)
+ if aval.shape == x_shape:
return x
+ assert not aval.shape or len(x_shape) == len(aval.shape)
+ if not aval.shape:
+ return _reduce_sum(x, list(range(len(x_shape))))
+ else:
+ dims = [i for i, (a, b) in enumerate(zip(x_shape, aval.shape)) if a != b]
+ if config.jax_enable_checks: assert all(aval.shape[i] == 1 for i in dims)
+ return reshape(_reduce_sum(x, dims), aval.shape)
-
-def _brcast_to(x, shape):
+def _maybe_broadcast(target_shape, x):
x_shape = np.shape(x)
- assert x_shape != shape
- if x_shape:
- assert len(x_shape) == len(shape)
- broadcast_dimensions, = np.where(np.equal(x_shape, shape))
- squeezed_dimensions, = np.where(np.not_equal(x_shape, shape))
- squeezed = squeeze(x, squeezed_dimensions)
- return broadcast_in_dim(squeezed, shape, broadcast_dimensions)
+ if x_shape == target_shape:
+ return x
else:
- return broadcast(x, shape)
+ dims = [i for i, (a, b) in enumerate(zip(x_shape, target_shape)) if a == b]
+ squeeze_shape = [x_shape[i] for i in dims]
+ return broadcast_in_dim(reshape(x, squeeze_shape), target_shape, dims)
_float = {np.floating}
@@ -2224,9 +2217,10 @@ def _sign_translation_rule(c, x):
sign_p = standard_unop(_num, 'sign', translation_rule=_sign_translation_rule)
ad.defjvp_zero(sign_p)
-nextafter_p = standard_naryop(
- [_float, _float], 'nextafter',
- translation_rule=_broadcast_translate(partial(standard_translate, 'next_after')))
+_nextafter_translation_rule = \
+ _broadcast_translate(partial(standard_translate, 'next_after'))
+nextafter_p = standard_naryop([_float, _float], 'nextafter',
+ translation_rule=_nextafter_translation_rule)
floor_p = standard_unop(_float, 'floor')
ad.defjvp_zero(floor_p)
@@ -2346,8 +2340,8 @@ def atan_translation_rule(x):
atan2_p = standard_naryop([_float, _float], 'atan2')
ad.defjvp(atan2_p,
- lambda g, x, y: _brcast(g, y) * (y / (square(x) + square(y))),
- lambda g, x, y: _brcast(g, x) * -x / (square(x) + square(y)))
+ lambda g, x, y: g * (y / (square(x) + square(y))),
+ lambda g, x, y: g * -x / (square(x) + square(y)))
sinh_p = standard_unop(_float | _complex, 'sinh')
ad.defjvp(sinh_p, lambda g, x: mul(g, cosh(x)))
@@ -2398,10 +2392,10 @@ def betainc_grad_not_implemented(g, a, b, x):
'igamma_grad_a')))
def igamma_gradx(g, a, x):
- return _brcast(g, a, x) * exp(-x + (a - _ones(a)) * log(x) - lgamma(a))
+ return g * exp(-x + (a - _ones(a)) * log(x) - lgamma(a))
def igamma_grada(g, a, x):
- return _brcast(g, a, x) * igamma_grad_a(a, x)
+ return g * igamma_grad_a(a, x)
ad.defjvp(igamma_p, igamma_grada, igamma_gradx)
@@ -2452,10 +2446,29 @@ def _bessel_i1e_jvp(g, y, x):
imag_p = unop(_complex_basetype, _complex, 'imag')
ad.deflinear2(imag_p, lambda t, _: [complex(np.zeros((), _dtype(t)), neg(t))])
+
+def _complex_transpose_rule(t, x, y):
+ assert ad.is_undefined_primal(x) or ad.is_undefined_primal(y)
+ if ad.is_undefined_primal(x) and ad.is_undefined_primal(y):
+ if type(t) is ad_util.Zero:
+ return [ad_util.Zero(x.aval), ad_util.Zero(y.aval)]
+ else:
+ return [_unbroadcast(x.aval, real(t)), _unbroadcast(y.aval, imag(neg(t)))]
+ elif ad.is_undefined_primal(x):
+ if type(t) is ad_util.Zero:
+ return [ad_util.Zero(x.aval), None]
+ else:
+ return [_unbroadcast(x.aval, real(t)), None]
+ else:
+ if type(t) is ad_util.Zero:
+ return [None, ad_util.Zero(y.aval)]
+ else:
+ return [None, _unbroadcast(y.aval, imag(neg(t)))]
+
_complex_dtype = lambda dtype, *args: (np.zeros((), dtype) + np.zeros((), np.complex64)).dtype
complex_p = naryop(_complex_dtype, [_complex_elem_types, _complex_elem_types],
'complex')
-ad.deflinear2(complex_p, lambda t, *args: [real(t), imag(neg(t))])
+ad.deflinear2(complex_p, _complex_transpose_rule)
conj_p = unop(_complex_dtype, _complex_elem_types | _complex, 'conj')
@@ -2494,10 +2507,10 @@ def _abs_jvp_rule(g, ans, x):
def _pow_jvp_lhs(g, ans, x, y):
jac = mul(y, pow(x, select(eq(y, _zeros(y)), _ones(y), sub(y, _ones(y)))))
- return mul(_brcast(g, y), jac)
+ return mul(g, jac)
def _pow_jvp_rhs(g, ans, x, y):
- return mul(_brcast(g, x), mul(log(_replace_zero(x)), ans))
+ return mul(g, mul(log(_replace_zero(x)), ans))
ad.defjvp2(pow_p, _pow_jvp_lhs, _pow_jvp_rhs)
@@ -2554,61 +2567,112 @@ def _integer_pow_jvp(g, x, *, y):
clz_p = standard_unop(_int, 'clz')
+def _add_jvp(primals, tangents):
+ x, y = primals
+ xdot, ydot = tangents
+ primal_out = add(x, y)
+ if type(xdot) is type(ydot) is ad_util.Zero:
+ return primal_out, ad_util.Zero.from_value(primal_out)
+ if type(xdot) is ad_util.Zero:
+ return primal_out, _maybe_broadcast(primal_out.shape, ydot)
+ elif type(ydot) is ad_util.Zero:
+ return primal_out, _maybe_broadcast(primal_out.shape, xdot)
+ else:
+ return primal_out, add(xdot, ydot)
+
def _add_transpose(t, x, y):
- # The following linearity assertion is morally true, but because in some cases we
- # instantiate zeros for convenience, it doesn't always hold.
+ # Morally the following assertion is true, but because we instantiate zeros in
+ # some places (e.g. in custom_jvp) it may not always hold. For example, see
+ # api_test.py's CustomJVPTest.test_jaxpr_zeros.
# assert ad.is_undefined_primal(x) and ad.is_undefined_primal(y)
- return [t, t]
+ x_aval = x.aval if ad.is_undefined_primal(x) else _abstractify(x)
+ y_aval = y.aval if ad.is_undefined_primal(y) else _abstractify(y)
+ if type(t) is ad_util.Zero:
+ return [ad_util.Zero(x_aval), ad_util.Zero(y_aval)]
+ else:
+ return [_unbroadcast(x_aval, t), _unbroadcast(y_aval, t)]
-add_p = standard_naryop([_num, _num], 'add')
-ad.defjvp(add_p, lambda g, x, y: _brcast(g, y), lambda g, x, y: _brcast(g, x))
-ad.primitive_transposes[add_p] = _add_transpose
def _add_inverse(r, x, y):
xr = r - y
yr = r - x
return xr, yr
+
+add_p = standard_naryop([_num, _num], 'add')
+ad.primitive_jvps[add_p] = _add_jvp
+ad.primitive_transposes[add_p] = _add_transpose
iad.definverse(add_p, _add_inverse)
+def _sub_jvp(primals, tangents):
+ x, y = primals
+ xdot, ydot = tangents
+ primal_out = sub(x, y)
+ if type(xdot) is type(ydot) is ad_util.Zero:
+ return primal_out, ad_util.Zero.from_value(primal_out)
+ if type(xdot) is ad_util.Zero:
+ return primal_out, _maybe_broadcast(primal_out.shape, neg(ydot))
+ elif type(ydot) is ad_util.Zero:
+ return primal_out, _maybe_broadcast(primal_out.shape, xdot)
+ else:
+ return primal_out, sub(xdot, ydot)
+
def _sub_transpose(t, x, y):
- # The following linearity assertion is morally true, but because in some cases
- # we instantiate zeros for convenience, it doesn't always hold.
- # TODO(mattjj): re-enable this assertion, don't return None below
+ # Morally the following assertion is true, but see the comment in add_p's
+ # transpose rule.
# assert ad.is_undefined_primal(x) and ad.is_undefined_primal(y)
+ x_aval = x.aval if ad.is_undefined_primal(x) else _abstractify(x)
+ y_aval = y.aval if ad.is_undefined_primal(y) else _abstractify(y)
if type(t) is ad_util.Zero:
- x_bar = ad_util.Zero(x.aval) if ad.is_undefined_primal(x) else None
- y_bar = ad_util.Zero(y.aval) if ad.is_undefined_primal(y) else None
- return [x_bar, y_bar]
+ return [ad_util.Zero(x_aval), ad_util.Zero(y_aval)]
else:
- return [t, neg(t)]
+ return [_unbroadcast(x_aval, t), _unbroadcast(y_aval, neg(t))]
sub_p = standard_naryop([_num, _num], 'sub')
-ad.defjvp(sub_p,
- lambda g, x, y: _brcast(g, y),
- lambda g, x, y: _brcast(neg(g), x))
+ad.primitive_jvps[sub_p] = _sub_jvp
ad.primitive_transposes[sub_p] = _sub_transpose
-mul_p = standard_naryop([_num, _num], 'mul')
-ad.defbilinear_broadcasting(_brcast, mul_p, mul, mul)
+
+def _mul_transpose(ct, x, y):
+ assert ad.is_undefined_primal(x) ^ ad.is_undefined_primal(y)
+ if ad.is_undefined_primal(x):
+ if type(ct) is ad_util.Zero:
+ return [ad_util.Zero(x.aval), None]
+ else:
+ return [_unbroadcast(x.aval, mul(ct, y)), None]
+ else:
+ if type(ct) is ad_util.Zero:
+ return [None, ad_util.Zero(y.aval)]
+ else:
+ return [None, _unbroadcast(y.aval, mul(x, ct))]
+
def _mul_inverse(r, x, y):
xr = r / y
yr = r / x
return xr, yr
+
+mul_p = standard_naryop([_num, _num], 'mul')
+ad.defjvp(mul_p,
+ lambda xdot, x, y: mul(xdot, y),
+ lambda ydot, x, y: mul(x, ydot))
+ad.primitive_transposes[mul_p] = _mul_transpose
iad.definverse(mul_p, _mul_inverse)
def _div_transpose_rule(cotangent, x, y):
assert ad.is_undefined_primal(x) and not ad.is_undefined_primal(y)
- res = ad_util.Zero(x.aval) if type(cotangent) is ad_util.Zero else div(cotangent, y)
- return res, None
+ if type(cotangent) is ad_util.Zero:
+ return [ad_util.Zero(x.aval), None]
+ else:
+ return [_unbroadcast(x.aval, div(cotangent, y)), None]
div_p = standard_naryop([_num, _num], 'div')
ad.defjvp(div_p,
- lambda g, x, y: div(_brcast(g, y), y),
- lambda g, x, y: mul(mul(neg(_brcast(g, x)), x), integer_pow(y, -2)))
+ lambda g, x, y: div(g, y),
+ lambda g, x, y: mul(mul(neg(g), x), integer_pow(y, -2)))
ad.primitive_transposes[div_p] = _div_transpose_rule
rem_p = standard_naryop([_num, _num], 'rem')
-ad.defjvp(rem_p,
- lambda g, x, y: _brcast(g, y),
- lambda g, x, y: mul(_brcast(neg(g), x), floor(div(x, y))))
+ad.defjvp(
+ rem_p,
+ lambda g, x, y: _maybe_broadcast(broadcast_shapes(np.shape(x), np.shape(y)), g),
+ lambda g, x, y: mul(neg(g), floor(div(x, y))))
def _broadcasting_select(c, which, x, y):
@@ -2639,15 +2703,15 @@ def _minmax_translation_rule(c, x, y, *, minmax=None, cmp=None):
[_any, _any], 'max', translation_rule=partial(
_minmax_translation_rule, minmax=xops.Max, cmp=xops.Gt))
ad.defjvp2(max_p,
- lambda g, ans, x, y: mul(_brcast(g, y), _balanced_eq(x, ans, y)),
- lambda g, ans, x, y: mul(_brcast(g, x), _balanced_eq(y, ans, x)))
+ lambda g, ans, x, y: mul(g, _balanced_eq(x, ans, y)),
+ lambda g, ans, x, y: mul(g, _balanced_eq(y, ans, x)))
min_p: core.Primitive = standard_naryop(
[_any, _any], 'min', translation_rule=partial(
_minmax_translation_rule, minmax=xops.Min, cmp=xops.Lt))
ad.defjvp2(min_p,
- lambda g, ans, x, y: mul(_brcast(g, y), _balanced_eq(x, ans, y)),
- lambda g, ans, x, y: mul(_brcast(g, x), _balanced_eq(y, ans, x)))
+ lambda g, ans, x, y: mul(g, _balanced_eq(x, ans, y)),
+ lambda g, ans, x, y: mul(g, _balanced_eq(y, ans, x)))
shift_left_p = standard_naryop([_int, _int], 'shift_left')
ad.defjvp_zero(shift_left_p)
@@ -3393,12 +3457,12 @@ def _clamp_shape_rule(min, operand, max):
ad.defjvp(clamp_p,
lambda g, min, operand, max:
select(bitwise_and(gt(min, operand), lt(min, max)),
- _brcast(g, operand), _zeros(operand)),
+ g, _zeros(operand)),
lambda g, min, operand, max:
select(bitwise_and(gt(operand, min), lt(operand, max)),
g, _zeros(operand)),
lambda g, min, operand, max:
- select(lt(max, operand), _brcast(g, operand), _zeros(operand)))
+ select(lt(max, operand), g, _zeros(operand)))
batching.defbroadcasting(clamp_p)
diff --git a/jax/interpreters/ad.py b/jax/interpreters/ad.py
--- a/jax/interpreters/ad.py
+++ b/jax/interpreters/ad.py
@@ -471,13 +471,12 @@ def add_tangents(x, y):
return add_jaxvals(x, y)
-def defbilinear_broadcasting(bcast, prim, lhs_rule, rhs_rule):
+def defbilinear(prim, lhs_rule, rhs_rule):
assert isinstance(prim, Primitive)
- lhs_jvp = lambda g, x, y, **kwargs: prim.bind(bcast(g, y), y, **kwargs)
- rhs_jvp = lambda g, x, y, **kwargs: prim.bind(x, bcast(g, x), **kwargs)
+ lhs_jvp = lambda g, x, y, **kwargs: prim.bind(g, y, **kwargs)
+ rhs_jvp = lambda g, x, y, **kwargs: prim.bind(x, g, **kwargs)
defjvp(prim, lhs_jvp, rhs_jvp)
primitive_transposes[prim] = partial(bilinear_transpose, lhs_rule, rhs_rule)
-defbilinear: Callable = partial(defbilinear_broadcasting, lambda g, x: g)
def bilinear_transpose(lhs_rule, rhs_rule, cotangent, x, y, **kwargs):
assert is_undefined_primal(x) ^ is_undefined_primal(y)
diff --git a/jax/interpreters/batching.py b/jax/interpreters/batching.py
--- a/jax/interpreters/batching.py
+++ b/jax/interpreters/batching.py
@@ -91,7 +91,11 @@ class BatchTracer(Tracer):
__slots__ = ['val', 'batch_dim']
def __init__(self, trace, val, batch_dim: Optional[int]):
- assert not config.jax_enable_checks or type(batch_dim) in (int, NotMapped) # type: ignore
+ if config.jax_enable_checks:
+ assert type(batch_dim) in (int, NotMapped)
+ if type(batch_dim) is int:
+ aval = raise_to_shaped(core.get_aval(val))
+ assert aval is core.abstract_unit or 0 <= batch_dim < len(aval.shape) # type: ignore
self._trace = trace
self.val = val
self.batch_dim = batch_dim
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -3782,6 +3782,186 @@ def foo(x):
expected = 2. * jnp.ones(3)
self.assertAllClose(ans, expected, check_dtypes=False)
+ def test_custom_jvp_vmap_broadcasting_interaction(self):
+ # https://github.com/google/jax/issues/6452
+ def f2(y, z):
+ v1 = z
+ v2 = jnp.sum(y) + z
+ return jnp.logaddexp(v1, v2)
+
+ def f1(y, z):
+ v = api.vmap(lambda _y: f2(_y, z))(y)
+ return jnp.sum(v)
+
+ y = jnp.ones((3, 2))
+ f = lambda z: f1(y, z)
+ z = 0.1
+ val, g = api.value_and_grad(f)(z)
+ self.assertEqual(val.shape, ())
+ self.assertEqual(g.shape, ())
+
+ def test_custom_jvp_vmap_broadcasting_interaction_2(self):
+ # https://github.com/google/jax/issues/5849
+ @api.custom_jvp
+ def transform(box, R):
+ if jnp.isscalar(box) or box.size == 1:
+ return R * box
+ elif box.ndim == 2:
+ return jnp.einsum('ij,j->i', box, R)
+ raise ValueError()
+
+ @transform.defjvp
+ def transform_jvp(primals, tangents):
+ box, R = primals
+ dbox, dR = tangents
+ return (transform(box, R), dR + transform(dbox, R))
+
+ def periodic_general(box):
+ def displacement_fn(Ra, Rb, **kwargs):
+ _box = kwargs.get('box', box)
+ return transform(_box, Ra - Rb)
+
+ return displacement_fn
+
+ N = 250
+
+ scalar_box = 1.0
+ displacement = periodic_general(scalar_box)
+
+ key = jax.random.PRNGKey(0)
+ R = jax.random.uniform(key, (N, 2))
+
+ def energy_fn(box):
+ d = partial(displacement, box=box)
+ d = api.vmap(api.vmap(d, (None, 0)), (0, None))
+ return jnp.sum(d(R, R) ** 2)
+
+ self.assertEqual(grad(energy_fn)(scalar_box).shape, ())
+
+ def test_custom_jvp_implicit_broadcasting(self):
+ # https://github.com/google/jax/issues/6357
+ if config.x64_enabled:
+ raise unittest.SkipTest("test only applies when x64 is disabled")
+
+ @jax.custom_jvp
+ def projection_unit_simplex(x: jnp.ndarray) -> jnp.ndarray:
+ """Projection onto the unit simplex."""
+ s = 1.0
+ n_features = x.shape[0]
+ u = jnp.sort(x)[::-1]
+ cssv = jnp.cumsum(u) - s
+ ind = jnp.arange(n_features) + 1
+ cond = u - cssv / ind > 0
+ idx = jnp.count_nonzero(cond)
+ threshold = cssv[idx - 1] / idx.astype(x.dtype)
+ return jax.nn.relu(x - threshold)
+
+
+ @projection_unit_simplex.defjvp
+ def projection_unit_simplex_jvp(primals, tangents):
+ x, = primals
+ x_dot, = tangents
+ primal_out = projection_unit_simplex(x)
+ supp = primal_out > 0
+ card = jnp.count_nonzero(supp)
+ tangent_out = supp * x_dot - (jnp.dot(supp, x_dot) / card) * supp
+ return primal_out, tangent_out
+
+ rng = np.random.RandomState(0)
+ x = rng.rand(5).astype(np.float32)
+
+ J_rev = jax.jacrev(projection_unit_simplex)(x)
+ J_fwd = jax.jacfwd(projection_unit_simplex)(x)
+
+ p = projection_unit_simplex(x)
+ support = (p > 0).astype(jnp.int32)
+ cardinality = jnp.count_nonzero(support)
+ J_true = jnp.diag(support) - jnp.outer(support, support) / cardinality
+ self.assertAllClose(J_true, J_fwd)
+ self.assertAllClose(J_true, J_rev)
+
+ proj = jax.vmap(projection_unit_simplex)
+
+ def fun(X):
+ return jnp.sum(proj(X) ** 2)
+
+ rng = np.random.RandomState(0)
+ X = rng.rand(4, 5).astype(np.float32)
+ U = rng.rand(4, 5)
+ U /= np.sqrt(np.sum(U ** 2))
+ U = U.astype(np.float32)
+
+ eps = 1e-3
+ dir_deriv_num = (fun(X + eps * U) - fun(X - eps * U)) / (2 * eps)
+ dir_deriv = jnp.vdot(jax.grad(fun)(X), U)
+ self.assertAllClose(dir_deriv, dir_deriv_num, atol=1e-3)
+
+ def test_vmap_inside_defjvp(self):
+ # https://github.com/google/jax/issues/3201
+ seed = 47
+ key = jax.random.PRNGKey(seed)
+ mat = jax.random.normal(key, (2, 3))
+
+ @jax.custom_jvp
+ def f(mat, aux):
+ num_rows, num_cols = mat.shape
+ return jnp.ones((num_rows, 1)) / num_cols
+
+ @f.defjvp
+ def f_jvp(primals, tangents):
+ mat, aux = primals
+ vec, _ = tangents
+ output = f(*primals)
+ num_rows, num_cols = mat.shape
+ size = num_rows * num_cols
+ # -----
+ bd_mat = mat.reshape(1, 1, num_rows, num_cols)
+ bd_mat = jnp.tile(bd_mat, reps=(num_rows, num_cols))
+ bd_mat = bd_mat.reshape(size, num_rows, num_cols)
+ # -----
+ rowsum = jnp.sum(mat, axis=1, keepdims=True)
+ colsum = jnp.sum(mat, axis=0, keepdims=True)
+ bd_rowsum = jnp.tile(rowsum, reps=(1, num_rows))
+ bd_colsum = jnp.tile(colsum, reps=(num_cols, 1))
+ # -----
+ bd_vec = vec.reshape(size, 1)
+ # -----
+ def operate(mx, val):
+ buf = 0
+ for i in range(2):
+ buf = buf + jnp.matmul(mx, bd_colsum) / jnp.power(aux, i)
+ buf = jnp.matmul(bd_rowsum, buf)
+ return buf * val
+ # -----
+ # Vertorizing will raise shape error
+ bd_buf = jax.vmap(operate, in_axes=(0, 0), out_axes=0)(bd_mat, bd_vec)
+ # -----
+ bd_buf = bd_buf / aux
+ jvp = jnp.sum(bd_buf, axis=0)
+ jvp = jnp.mean(jvp, axis=1, keepdims=True)
+ # -----
+ # JVP ends successfully, but still raise an error
+ return (output, jvp)
+
+ jax.grad(lambda mat, aux: jnp.sum(f(mat, aux)))(mat, 0.5) # doesn't crash
+
+ def test_custom_jvp_unbroadcasting(self):
+ # https://github.com/google/jax/issues/3056
+ a = jnp.array([1., 1.])
+
+ @jax.custom_jvp
+ def f(x):
+ return a * x
+
+ @f.defjvp
+ def f_jvp(primals, tangents):
+ x, = primals
+ dx, = tangents
+ return a * x, a * dx
+
+ shape = grad(lambda x: jnp.sum(f(x)))(jnp.array(1.)).shape
+ self.assertEqual(shape, ())
+
class CustomVJPTest(jtu.JaxTestCase):
diff --git a/tests/djax_test.py b/tests/djax_test.py
--- a/tests/djax_test.py
+++ b/tests/djax_test.py
@@ -161,6 +161,7 @@ def g(x):
class DJaxBatchingTests(jtu.JaxTestCase):
def test_nonzero(self):
+ raise absltest.SkipTest("TODO") # TODO broke this somehow
@djax.djit
def f(x):
return nonzero(x)
diff --git a/tests/lax_autodiff_test.py b/tests/lax_autodiff_test.py
--- a/tests/lax_autodiff_test.py
+++ b/tests/lax_autodiff_test.py
@@ -36,7 +36,9 @@
FLAGS = config.FLAGS
-compatible_shapes = [[(3,)], [(3, 4), (3, 1), (1, 4)], [(2, 3, 4), (2, 1, 4)]]
+compatible_shapes = [[(3,)],
+ [(), (3, 4), (3, 1), (1, 4)],
+ [(2, 3, 4), (2, 1, 4)]]
GradTestSpec = collections.namedtuple(
| Consider revising broadcasting JVP strategy
We should think about reversing the decision documented here, and locking shapes into more primitives at trace time: https://github.com/google/jax/blob/064014b53cd5cffb1ea44f6bc6f9e4e9074c4752/jax/lax/lax.py#L1518-L1521
(It appears that XLA isn't always able to optimize away the resulting cruft once it's been transposed, perhaps because it can't fuse two `reduce_sum`s -- the transpose of `broadcast_in_dim` -- separated by a `reshape`.)
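The `_unbroadcast` helper added in the diff transposes implicit broadcasting by summing over the broadcast dimensions; for intuition, here is a plain-numpy sketch of that rule (not JAX's implementation):
```python
import numpy as np

def unbroadcast(target_shape, ct):
    """Sum a cotangent back down to `target_shape`, undoing numpy-style
    broadcasting. A plain-numpy sketch of `_unbroadcast` from the diff."""
    target_shape = tuple(target_shape)
    if ct.shape == target_shape:
        return ct
    # Sum away leading axes added by broadcasting.
    ct = ct.sum(axis=tuple(range(ct.ndim - len(target_shape))))
    # Then sum (keeping dims) over axes that were expanded from size 1.
    axes = tuple(i for i, n in enumerate(target_shape)
                 if n == 1 and ct.shape[i] != 1)
    return ct.sum(axis=axes, keepdims=True)

ct = np.ones((2, 3, 4))
print(unbroadcast((4,), ct).shape)       # (4,), e.g. the gradient of a bias term
print(unbroadcast((3, 1), ct[0]).shape)  # (3, 1)
```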
| Concretely, the following code generates a Jaxpr with the reduce/reshape/reduce/reshape pattern, which XLA currently does not optimize out on GPU or CPU.
```
import jax
import jax.numpy as jnp
def forward(b, x):
# x is [B, F]
return x + b
def loss(b, x, target):
# x is [B1, B2, F]
y = jax.vmap(forward, in_axes=(None, 0))(b, x)
return jnp.mean(jnp.square(y))
feat_size = 4
data = jnp.zeros(shape=[2, 3, feat_size])
bias = jnp.zeros([feat_size])
jax.make_jaxpr(jax.grad(loss))(bias, data)
```
Generates:
```
{ lambda c ; ; a b.
let d = reshape[ new_sizes=(1, 4)
dimensions=None
old_sizes=(4,) ] a
e = broadcast_in_dim[ shape=(2, 1, 4)
broadcast_dimensions=(1, 2) ] d
f = add b e
g = mul c f
h = mul c f
i = add_any g h
j = reduce_sum[ axes=(1,)
input_shape=(2, 3, 4) ] i
k = reshape[ new_sizes=(2, 1, 4)
dimensions=None
old_sizes=(2, 4) ] j
l = reduce_sum[ axes=(0,)
input_shape=(2, 1, 4) ] k
m = reshape[ new_sizes=(4,)
dimensions=None
old_sizes=(1, 4) ] l
in [m] }
``` | 2021-04-15T22:26:56 |
google/jax | 6,526 | google__jax-6526 | [
"6410"
] | 42d2e7620a81db03460738afcde13c9359b33701 | diff --git a/jax/_src/api.py b/jax/_src/api.py
--- a/jax/_src/api.py
+++ b/jax/_src/api.py
@@ -1627,9 +1627,14 @@ def f_pmapped(*args, **kwargs):
lambda: (0,) * out_tree().num_leaves,
closure=out_axes)
else:
+ # out_axes_thunk closes over the out_axes, they are flattened here to make
+ # them hashable.
+ out_axes_leaves, out_axes_treedef = tree_flatten(out_axes)
out_axes_thunk = HashableFunction(
- lambda: tuple(flatten_axes("pmap out_axes", out_tree(), out_axes)),
- closure=out_axes)
+ lambda: tuple(flatten_axes("pmap out_axes", out_tree(),
+ tree_unflatten(out_axes_treedef,
+ list(out_axes_leaves)))),
+ closure=(tuple(out_axes_leaves), out_axes_treedef))
out = pxla.xla_pmap(
flat_fun, *args, backend=backend, axis_name=axis_name,
axis_size=local_axis_size, global_axis_size=axis_size,
diff --git a/jax/experimental/maps.py b/jax/experimental/maps.py
--- a/jax/experimental/maps.py
+++ b/jax/experimental/maps.py
@@ -488,9 +488,15 @@ def fun_mapped(*args):
# TODO: Check that:
# - two axes mapped to the same resource never coincide (even inside f)
in_axes_flat = flatten_axes("xmap in_axes", in_tree, in_axes)
+
+ # out_axes_thunk closes over the out_axes, they are flattened here to make
+ # them hashable.
+ out_axes_leaves, out_axes_treedef = tree_flatten(out_axes)
out_axes_thunk = HashableFunction(
- lambda: tuple(flatten_axes("xmap out_axes", out_tree(), out_axes)),
- closure=out_axes)
+ lambda: tuple(flatten_axes("xmap out_axes", out_tree(),
+ tree_unflatten(out_axes_treedef,
+ list(out_axes_leaves)))),
+ closure=(tuple(out_axes_leaves), out_axes_treedef))
axis_resource_count = _get_axis_resource_count(normalized_axis_resources, resource_env)
for axis, size in axis_sizes.items():
| diff --git a/tests/pmap_test.py b/tests/pmap_test.py
--- a/tests/pmap_test.py
+++ b/tests/pmap_test.py
@@ -2033,6 +2033,15 @@ def f(x, y):
self.assertAllClose(f(x, y),
(jnp.sin(x.transpose((1, 0, 2)) + y).transpose((1, 2, 0)), y * 2))
+ def testPmapDictOutAxes(self):
+ # see issue #6410
+ @partial(pmap, out_axes={'a': 0})
+ def f(x):
+ return {'a': x}
+ device_count = xla_bridge.device_count()
+ x = jnp.arange(device_count)
+ tree_util.tree_multimap(self.assertAllClose, f(x), {'a': x})
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": f"_{in_axes}_{out_axes}",
"in_axes": in_axes, "out_axes": out_axes}
diff --git a/tests/xmap_test.py b/tests/xmap_test.py
--- a/tests/xmap_test.py
+++ b/tests/xmap_test.py
@@ -1142,6 +1142,12 @@ def testNegativeAxes(self):
with self.assertRaisesRegex(ValueError, "xmap doesn't support negative axes in out_axes"):
xmap(lambda x: x, in_axes={0: 'i'}, out_axes={-1: 'i'})(jnp.ones((5,)))
+ @ignore_xmap_warning()
+ def testDictOutAxes(self):
+ # see issue #6410
+ out = xmap(lambda x: x, in_axes=[...], out_axes={"a": [...]})({"a": 1})
+ self.assertEqual(out, {"a": 1})
+
@ignore_xmap_warning()
def testListAxesRankAssertion(self):
error = (r"xmap argument has an in_axes specification of \['i', None\], which "
| dict out_axes in xmap gives a hash error
```
import jax
from jax.experimental.maps import xmap

def f(x):
return {"a": x}
x = jax.numpy.ones((2, 3))
xmap(f, in_axes=[...], out_axes={"a": [...]})(x)
```
gives
```
[...]
jax/linear_util.py in memoized_fun(fun, *args)
253 else:
254 key = (fun.transforms, fun.params, args, config.x64_enabled)
--> 255 result = cache.get(key, None)
256 if result is not None:
257 ans, stores = result
jax/_src/util.py in __hash__(self)
375
376 def __hash__(self):
--> 377 return hash((self.f.__code__, self.closure))
378
379 def __call__(self, *args, **kwargs):
jax/_src/util.py in __hash__(self)
375
376 def __hash__(self):
--> 377 return hash((self.f.__code__, self.closure))
378
379 def __call__(self, *args, **kwargs):
TypeError: unhashable type: 'dict'
```
(The closure of out_axes_thunk seems to contain the out_axes as an unhashable dict!)
Making out_axes a dict-less prefix works:
```
def f(x):
return {"a": x}
x = jax.numpy.ones((2, 3))
xmap(f, in_axes=[...], out_axes=[...])(x)
> no error
```
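For reference, the fix works by flattening the `out_axes` pytree into hashable pieces before closing over them; a minimal standalone sketch:
```python
import jax

out_axes = {"a": [...]}  # the spec from the repro above

# A dict (and a list) is unhashable, so a thunk that closes over it directly
# cannot participate in the cache key:
#   hash(out_axes)  ->  TypeError: unhashable type: 'dict'

# Flattening up front yields hashable pieces, and the original structure can
# be rebuilt lazily inside the thunk:
leaves, treedef = jax.tree_util.tree_flatten(out_axes)
closure = (tuple(leaves), treedef)
print(hash(closure))                                        # works
print(jax.tree_util.tree_unflatten(treedef, list(leaves)))  # {'a': [Ellipsis]}
```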
| 2021-04-21T11:08:23 |
|
google/jax | 6,590 | google__jax-6590 | [
"6444"
] | 6fd806b964769d806161f57bbda270f82fc3907b | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -5228,6 +5228,7 @@ def _quantile(a, q, axis, interpolation, keepdims, squash_nans):
index[axis] = high
high_value = a[tuple(index)]
else:
+ a = where(any(isnan(a), axis=axis, keepdims=True), nan, a)
a = lax.sort(a, dimension=axis)
n = a_shape[axis]
q = lax.mul(q, _constant_like(q, n - 1))
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -4037,7 +4037,7 @@ def args_maker(): return []
jtu.format_shape_dtype_string(a_shape, a_dtype),
jtu.format_shape_dtype_string(q_shape, q_dtype),
axis, keepdims, interpolation),
- "a_rng": jtu.rand_some_nan if 'nan' in op else jtu.rand_default,
+ "a_rng": jtu.rand_some_nan,
"q_rng": q_rng, "op": op,
"a_shape": a_shape, "a_dtype": a_dtype,
"q_shape": q_shape, "q_dtype": q_dtype, "axis": axis,
@@ -4068,6 +4068,9 @@ def testQuantile(self, op, a_rng, q_rng, a_shape, a_dtype, q_shape, q_dtype,
args_maker = lambda: [a_rng(a_shape, a_dtype)]
else:
args_maker = lambda: [a_rng(a_shape, a_dtype), q_rng(q_shape, q_dtype)]
+
+ # TODO(jakevdp): remove this ignore_warning when minimum numpy version is 1.17.0
+ @jtu.ignore_warning(category=RuntimeWarning, message="Invalid value encountered.*")
def np_fun(*args):
args = [x if jnp.result_type(x) != jnp.bfloat16 else
np.asarray(x, np.float32) for x in args]
| jnp.quantile should return NaN for arrays containing NaNs
```python
import numpy as np
import jax.numpy as jnp
x = np.array([1, 2, np.nan])
print(np.quantile(x, 0.5))
# nan
print(jnp.quantile(x, 0.5))
# 2.0
```
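The added line poisons any slice that contains a NaN before sorting, so the interpolated result is NaN as in numpy; a one-dimensional sketch of the idea:
```python
import jax.numpy as jnp

x = jnp.array([1.0, 2.0, jnp.nan])

poisoned = jnp.where(jnp.any(jnp.isnan(x), keepdims=True), jnp.nan, x)
print(poisoned)                      # [nan nan nan]
print(jnp.quantile(poisoned, 0.5))   # nan
```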
| Related to #6439 | 2021-04-29T17:15:46 |
google/jax | 6,701 | google__jax-6701 | [
"773"
] | fa9ca33e60af0b2f7defc93dff060c4e78aca3d4 | diff --git a/build/build.py b/build/build.py
--- a/build/build.py
+++ b/build/build.py
@@ -79,17 +79,22 @@ def check_python_version(python_version):
BAZEL_BASE_URI = "https://github.com/bazelbuild/bazel/releases/download/3.7.2/"
BazelPackage = collections.namedtuple("BazelPackage", ["file", "sha256"])
bazel_packages = {
- "Linux":
+ ("Linux", "x86_64"):
BazelPackage(
file="bazel-3.7.2-linux-x86_64",
sha256=
"70dc0bee198a4c3d332925a32d464d9036a831977501f66d4996854ad4e4fc0d"),
- "Darwin":
+ ("Linux", "aarch64"):
+ BazelPackage(
+ file="bazel-3.7.2-linux-arm64",
+ sha256=
+ "6ebd9eccbcb8f63c92a324c0c86cec11963aa9dcb914dd4718f592fdfeda9823"),
+ ("Darwin", "x86_64"):
BazelPackage(
file="bazel-3.7.2-darwin-x86_64",
sha256=
"80c82e93a12ba30021692b11c78007807e82383a673be1602573b944beb359ab"),
- "Windows":
+ ("Windows", "x86_64"):
BazelPackage(
file="bazel-3.7.2-windows-x86_64.exe",
sha256=
@@ -99,7 +104,7 @@ def check_python_version(python_version):
def download_and_verify_bazel():
"""Downloads a bazel binary from Github, verifying its SHA256 hash."""
- package = bazel_packages.get(platform.system())
+ package = bazel_packages.get((platform.system(), platform.machine()))
if package is None:
return None
diff --git a/build/build_wheel.py b/build/build_wheel.py
--- a/build/build_wheel.py
+++ b/build/build_wheel.py
@@ -216,12 +216,16 @@ def prepare_wheel(sources_path):
def build_wheel(sources_path, output_path):
"""Builds a wheel in `output_path` using the source tree in `sources_path`."""
- platform_name = {
- "Linux": "manylinux2010",
- "Darwin": "macosx_10_9",
- "Windows": "win",
- }[platform.system()]
- cpu_name = "amd64" if platform.system() == "Windows" else "x86_64"
+ if platform.system() == "Windows":
+ cpu_name = "amd64"
+ platform_name = "win"
+ else:
+ platform_name, cpu_name = {
+ ("Linux", "x86_64"): ("manylinux2010", "x86_64"),
+ ("Linux", "aarch64"): ("manylinux2014", "aarch64"),
+ ("Darwin", "x86_64"): ("macosx_10_9", "x86_64"),
+ ("Darwin", "arm64"): ("macosx_11_0", "arm64"),
+ }[(platform.system(), platform.machine())]
python_tag_arg = (f"--python-tag=cp{sys.version_info.major}"
f"{sys.version_info.minor}")
platform_tag_arg = f"--plat-name={platform_name}_{cpu_name}"
| Build for Ubuntu (ARM-based machine)
I was trying to build jax on my Jetson Nano device. Although it failed at first, I finally managed to build it by making the following change at [line 326](https://github.com/google/jax/blob/master/build/build.py#L326) in `build/build.py`:
```
[":install_xla_in_source_tree", os.getcwd()] + ["--cpu=arm"])
```
which follows the suggestion from [here](https://blog.bazel.build/2019/02/11/configurable-builds-part-1.html).
I wonder if it would be useful to support this in a more general setting (maybe add a flag as a build option) so that other people can build this wonderful package on ARM-based machines. If not, I am happy to share this piece of information in case anyone else runs into the same problem.
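For reference, the build changes key everything on both the OS and the CPU architecture; a minimal sketch of that dispatch (the mapping below is abridged from `build_wheel.py` and purely illustrative):
```python
import platform

PLATFORM_TAGS = {
    ("Linux", "x86_64"): ("manylinux2010", "x86_64"),
    ("Linux", "aarch64"): ("manylinux2014", "aarch64"),
    ("Darwin", "x86_64"): ("macosx_10_9", "x86_64"),
}

key = (platform.system(), platform.machine())
print(key, "->", PLATFORM_TAGS.get(key, "unsupported here; build from source"))
```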
| Thanks! It's great to know that JAX works on ARM! I'll add a flag to the build file to make this easier (or add a way to pass your own flags, like `--cpu` to the bazel command line.)
Thank you very much! BTW when I was importing jax it shows the following:
```
>>> from jax import random
>>> key = random.PRNGKey(0)
2019-05-28 16:38:35.228715: W external/org_tensorflow/tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-05-28 16:38:38.662329: F external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:175] Check failed: err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: unknown error
Aborted (core dumped)
```
I don't think it is a problem of jax itself but I wonder if there is a way to get around it. (This also happens when calling TF, but TF somehow ignores it and continue to compute on GPU.)
Thank you for your help!
Since we don't have such a device we can't easily debug it. Contributions are welcome.
The first message is a warning (`W`) and seems benign. The second message is a fatal error (`F`), but I don't know what's causing it since the error message from the CUDA libraries is not helpful at all...
Thank you so much!
@mark-fangzhou-xie
Hi, When I follow your instructions to build jax on Nano, it shows the error:
```
Traceback (most recent call last):
File "build/build.py", line 380, in <module>
main()
File "build/build.py", line 333, in main
check_bazel_version(bazel_path, min_version="2.0.0", max_version=None)
File "build/build.py", line 157, in check_bazel_version
version_output = shell([bazel_path, "--bazelrc=/dev/null", "version"])
File "build/build.py", line 47, in shell
output = subprocess.check_output(cmd)
File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 389, in check_output
**kwargs).stdout
File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 466, in run
with Popen(*popenargs, **kwargs) as process:
File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 769, in __init__
restore_signals, start_new_session)
File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 1516, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 8] Exec format error: './bazel-2.0.0-linux-x86_64'
```
How can I fix it? Thanks
I use the command: python build/build.py --enable_cuda
> @mark-fangzhou-xie
> Hi, When I follow your instructions to build jax on Nano, it shows the error:
>
> ```
> Traceback (most recent call last):
> File "build/build.py", line 380, in <module>
> main()
> File "build/build.py", line 333, in main
> check_bazel_version(bazel_path, min_version="2.0.0", max_version=None)
> File "build/build.py", line 157, in check_bazel_version
> version_output = shell([bazel_path, "--bazelrc=/dev/null", "version"])
> File "build/build.py", line 47, in shell
> output = subprocess.check_output(cmd)
> File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 389, in check_output
> **kwargs).stdout
> File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 466, in run
> with Popen(*popenargs, **kwargs) as process:
> File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 769, in __init__
> restore_signals, start_new_session)
> File "/home/lly2014/archiconda3/lib/python3.7/subprocess.py", line 1516, in _execute_child
> raise child_exception_type(errno_num, err_msg, err_filename)
> OSError: [Errno 8] Exec format error: './bazel-2.0.0-linux-x86_64'
> ```
>
> How can I fix it? Thanks
> I use the command: python build/build.py --enable_cuda
As you can see, I didn't really solve the problem myself (lack of experience in C++ and CUDA), so I really can't help you with this.
I later built a fully-fledged Ubuntu machine and it works smoothly :)
> I was trying to build jax on my Jetson Nano device. Although failed at first, I finally managed to build it by making the following change at line 326 in build/build.py:
didn't you mange to build it?
> > I was trying to build jax on my Jetson Nano device. Although failed at first, I finally managed to build it by making the following change at line 326 in build/build.py:
>
> didn't you mange to build it?
The build seemed to be successful. That's true. No error came out during the building process. And I thought it was a success. But that doesn't mean it will work. If you notice, I didn't even manage to set the random number key later on after importing it. I wouldn't call that "success" to be honest.
As for now, I completely forgot what I did at that time and don't have nano at hand. There isn't really much I could do to help you here.
Did you have any luck further? I'm trying to build it on my Jetson nano currently and I'm having issues myself haha. | 2021-05-10T15:09:50 |
|
google/jax | 6,702 | google__jax-6702 | [
"856"
] | fa9ca33e60af0b2f7defc93dff060c4e78aca3d4 | diff --git a/jax/_src/dtypes.py b/jax/_src/dtypes.py
--- a/jax/_src/dtypes.py
+++ b/jax/_src/dtypes.py
@@ -232,7 +232,8 @@ def issubdtype(a, b):
np.dtype('float64'),
np.dtype('complex64'),
np.dtype('complex128'),
-] + _weak_types # type: ignore[operator]
+]
+_jax_dtype_set = set(_jax_types) | {float0}
def _jax_type(dtype, weak_type):
"""Return the jax type for a dtype and weak type."""
@@ -247,7 +248,8 @@ def _type_promotion_lattice():
Return the type promotion lattice in the form of a DAG.
This DAG maps each type to its immediately higher type on the lattice.
"""
- b1, u1, u2, u4, u8, i1, i2, i4, i8, bf, f2, f4, f8, c4, c8, i_, f_, c_ = _jax_types
+ b1, u1, u2, u4, u8, i1, i2, i4, i8, bf, f2, f4, f8, c4, c8 = _jax_types
+ i_, f_, c_ = _weak_types
return {
b1: [i_],
u1: [i2, u2], u2: [i4, u4], u4: [i8, u8], u8: [f_],
@@ -275,7 +277,7 @@ def _least_upper_bound(*nodes):
"""Compute the least upper bound of a set of nodes.
Args:
- nodes: sequence of entries from _jax_types
+ nodes: sequence of entries from _jax_types + _weak_types
Returns:
the _jax_type representing the least upper bound of the input nodes
on the promotion lattice.
@@ -337,7 +339,11 @@ def is_python_scalar(x):
def dtype(x):
if type(x) in python_scalar_dtypes:
return python_scalar_dtypes[type(x)]
- return np.result_type(x)
+ dt = np.result_type(x)
+ if dt not in _jax_dtype_set:
+ raise TypeError(f"Value '{x}' with dtype {dt} is not a valid JAX array "
+ "type. Only arrays of numeric types are supported by JAX.")
+ return dt
def _lattice_result_type(*args):
dtypes, weak_types = zip(*(_dtype_and_weaktype(arg) for arg in args))
| diff --git a/tests/api_test.py b/tests/api_test.py
--- a/tests/api_test.py
+++ b/tests/api_test.py
@@ -1812,6 +1812,10 @@ def check_warning(warn, nowarn):
check_warning(lambda: jnp.arange(1.0).astype("int64"),
lambda: jnp.arange(1.0).astype(int))
+ def test_error_for_invalid_dtype(self):
+ with self.assertRaisesRegex(TypeError, ".*not a valid JAX array type.*"):
+ lax.add(jnp.array(7), np.array("hello"))
+
def test_vmap_preserves_docstr(self):
def superfun(a):
"""Does things with stuff."""
| jax.numpy.subtract throws runtime error when one of the arrays is object type
When subtracting a traditional numpy array of dtype object whose entries are all floats from a jax.numpy array, jax.numpy.subtract throws a (somewhat inscrutable) runtime error. Subtraction works fine in numpy.
See minimal reproduction [here](https://colab.research.google.com/drive/1RsORCiwcWchwD7XiN4FsPvfx2ZjpMMac).
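With the change above, the failure becomes an explicit `TypeError` raised when the dtype is inspected; a rough sketch of the expected behaviour (illustrative, the exact message is the one added in the diff):
```python
import numpy as np
import jax.numpy as jnp

obj_arr = np.array([1.0, 2.0], dtype=object)

try:
    jnp.subtract(jnp.ones(2), obj_arr)
except TypeError as e:
    print(e)  # "... is not a valid JAX array type. Only arrays of numeric types are supported by JAX."
```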
| Indeed, JAX doesn't work with object dtype arrays. Let's close this issue by documenting that somewhere (EDIT) and by having a better error message if possible.
Thanks for the clear repro! I'm curious about where you find an application for object arrays.
Cool, thank you!
This came up because I had a dataset that was in the form of a list of tuples of `numpy` arrays. Extracting training batches from the list in native `numpy` and passing them to a training loop written in JAX resulted in the error. | 2021-05-10T15:53:27 |
google/jax | 6,703 | google__jax-6703 | [
"2349"
] | ec2f1d0f676c2e8be0fd5649f32a4b86152383b6 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -2286,6 +2286,9 @@ def flatnonzero(a):
def _nan_reduction(a, name, jnp_reduction, init_val, nan_if_all_nan,
axis=None, keepdims=None, **kwargs):
_check_arraylike(name, a)
+ if not issubdtype(_dtype(a), inexact):
+ return jnp_reduction(a, axis=axis, keepdims=keepdims, **kwargs)
+
out = jnp_reduction(where(isnan(a), _reduction_init_val(a, init_val), a),
axis=axis, keepdims=keepdims, **kwargs)
if nan_if_all_nan:
| diff --git a/tests/lax_numpy_test.py b/tests/lax_numpy_test.py
--- a/tests/lax_numpy_test.py
+++ b/tests/lax_numpy_test.py
@@ -339,7 +339,7 @@ def op_record(name, nargs, dtypes, shapes, rng_factory, diff_modes,
op_record("sum", 1, all_dtypes, all_shapes, jtu.rand_default, []),
op_record("nanmean", 1, inexact_dtypes, nonempty_shapes, jtu.rand_some_nan,
[], inexact=True),
- op_record("nanprod", 1, inexact_dtypes, all_shapes, jtu.rand_some_nan, []),
+ op_record("nanprod", 1, all_dtypes, all_shapes, jtu.rand_some_nan, []),
op_record("nansum", 1, number_dtypes, all_shapes, jtu.rand_some_nan, []),
]
@@ -370,8 +370,8 @@ def op_record(name, nargs, dtypes, shapes, rng_factory, diff_modes,
inexact=True),
op_record("std", 1, all_dtypes, nonempty_shapes, jtu.rand_default, [],
inexact=True),
- op_record("nanmax", 1, inexact_dtypes, nonempty_shapes, jtu.rand_some_nan, []),
- op_record("nanmin", 1, inexact_dtypes, nonempty_shapes, jtu.rand_some_nan, []),
+ op_record("nanmax", 1, all_dtypes, nonempty_shapes, jtu.rand_some_nan, []),
+ op_record("nanmin", 1, all_dtypes, nonempty_shapes, jtu.rand_some_nan, []),
op_record("nanvar", 1, all_dtypes, nonempty_shapes, jtu.rand_some_nan,
[], inexact=True),
op_record("nanstd", 1, all_dtypes, nonempty_shapes, jtu.rand_some_nan,
| jnp.nanmax does not work with int inputs
Just flagging that `jnp.nanmax` has issues when the input is int. For example:
```
#Numpy version:
np.nanmax(np.arange(3*3*3).reshape(3,3,3), axis=(1,2))
>> array([ 8, 17, 26])
```
versus:
```
jnp.nanmax(jnp.arange(3*3*3).reshape(3,3,3), axis=(1,2))
>> ValueError: cannot convert float NaN to integer
```
Converting to float works, as does jnp.nanmean. Issue seems to be converting nan to int here:
https://github.com/google/jax/blob/52a41311c5666d0188c9f6966e3ef7e31ccb4e81/jax/numpy/lax_numpy.py#L275
| 2021-05-10T17:21:58 |
|
google/jax | 6,704 | google__jax-6704 | [
"6697"
] | ca1b764bba628b6d0eae080ab305627c8f371d65 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -5837,18 +5837,20 @@ class _IndexUpdateHelper:
# Note: this docstring will appear as the docstring for the `at` property.
"""Indexable helper object to call indexed update functions.
- The `at` property is syntactic sugar for calling the indexed update functions
+ The ``at`` property is syntactic sugar for calling the indexed update functions
defined in :mod:`jax.ops`, and acts as a pure equivalent of in-place
- modificatons.
+ modificatons. For further information, see `Syntactic Sugar for Index Update Operators
+ <https://jax.readthedocs.io/en/latest/jax.ops.html#syntactic-sugar-for-indexed-update-operators>`_.
In particular:
+
- ``x = x.at[idx].set(y)`` is a pure equivalent of ``x[idx] = y``.
- ``x = x.at[idx].add(y)`` is a pure equivalent of ``x[idx] += y``.
- ``x = x.at[idx].mul(y)`` is a pure equivalent of ``x[idx] *= y``.
- ``x = x.at[idx].min(y)`` is a pure equivalent of
- ``x[idx] = minimum(x[idx], y)``.
+ ``x[idx] = minimum(x[idx], y)``.
- ``x = x.at[idx].max(y)`` is a pure equivalent of
- ``x[idx] = maximum(x[idx], y)``.
+ ``x[idx] = maximum(x[idx], y)``.
"""
__slots__ = ("array",)
| Make it easier to find documentation for the DeviceArray.at attribute
It's currently very hard to find documentation about the syntax for slice updates. I'm not sure if I'm unable to find something on it because "at" is a stop word, or if it isn't documented at all. I was only able to find out about this feature because of the FLAX documentation.
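For context, the indexed-update syntax being discussed (the pure replacement for in-place assignment) looks like this; a small sketch assuming `import jax.numpy as jnp`:
```python
x = jnp.zeros(5)
x = x.at[2].set(1.0)   # pure equivalent of x[2] = 1.0
x = x.at[2].add(3.0)   # pure equivalent of x[2] += 3.0
# x is now [0., 0., 4., 0., 0.]
```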
| Hi - thanks for opening the issue. The attribute is documented here: https://jax.readthedocs.io/en/latest/jax.ops.html#syntactic-sugar-for-indexed-update-operators
I'm not sure how to make this more discoverable. Do you have any suggestions?
Hi @jakevdp
> Do you have any suggestions?
In the Glossary https://jax.readthedocs.io/en/latest/glossary.html where it says:
```
`DeviceArray`
JAX’s analog of the `numpy.ndarray`. See `jax.interpreters.xla.DeviceArray`.
```
`jax.interpreters.xla.DeviceArray` is not "clickable". Maybe fixing that could resolve the discoverability for some users?
And then, you could maybe add a reference to https://jax.readthedocs.io/en/latest/jax.ops.html#syntactic-sugar-for-indexed-update-operators in the `jax.interpreters.xla.DeviceArray` API docs because the two can be (f)used together.
Ah, yeah that's a good idea. It's a bit non-trivial, though, because `DeviceArray` doesn't actually have the `at` attribute: instead it is defined on the two implementations of `DeviceArray` that are used by the Python JIT and the C++ JIT respectively. Offhand I don't know any way to make sphinx document that in a non-confusing way (this is why it's not been done already). I'll try a few things and see if I can figure out a workaround.
> Hi @jakevdp
>
> > Do you have any suggestions?
>
> In the Glossary https://jax.readthedocs.io/en/latest/glossary.html where it says:
>
> ```
> `DeviceArray`
>
> JAX’s analog of the `numpy.ndarray`. See `jax.interpreters.xla.DeviceArray`.
> ```
>
> `jax.interpreters.xla.DeviceArray` is not "clickable". Maybe fixing that could resolve the discoverability for some users?
This is basically exactly why I had trouble finding it I think. There being two implementations makes sense, but having some docs on their shared interface sounds great. | 2021-05-10T18:31:47 |
|
google/jax | 6,706 | google__jax-6706 | [
"2694"
] | 1509f995ed9c4db93b6097350879824a36277513 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -49,6 +49,7 @@
from jax import lax
from jax._src.lax.lax import _device_put_raw
from jax import ops
+from jax._src.ops import scatter
from jax._src.util import (partial, unzip2, prod as _prod, subvals, safe_zip,
canonicalize_axis as _canonicalize_axis, maybe_named_axis)
from jax.tree_util import tree_leaves, tree_flatten, tree_map
@@ -5842,14 +5843,17 @@ class _IndexUpdateHelper:
The ``at`` property is syntactic sugar for calling the indexed update functions
defined in :mod:`jax.ops`, and acts as a pure equivalent of in-place
- modificatons. For further information, see `Syntactic Sugar for Index Update Operators
- <https://jax.readthedocs.io/en/latest/jax.ops.html#syntactic-sugar-for-indexed-update-operators>`_.
+ modificatons. For further information, see `Indexed Update Operators
+ <https://jax.readthedocs.io/en/latest/jax.ops.html#indexed-update-operators>`_.
In particular:
- ``x = x.at[idx].set(y)`` is a pure equivalent of ``x[idx] = y``.
- ``x = x.at[idx].add(y)`` is a pure equivalent of ``x[idx] += y``.
- - ``x = x.at[idx].mul(y)`` is a pure equivalent of ``x[idx] *= y``.
+ - ``x = x.at[idx].multiply(y)`` (aka ``mul``) is a pure equivalent of
+ ``x[idx] *= y``.
+ - ``x = x.at[idx].divide(y)`` is a pure equivalent of ``x[idx] /= y``.
+ - ``x = x.at[idx].power(y)`` is a pure equivalent of ``x[idx] **= y``.
- ``x = x.at[idx].min(y)`` is a pure equivalent of
``x[idx] = minimum(x[idx], y)``.
- ``x = x.at[idx].max(y)`` is a pure equivalent of
@@ -5866,6 +5870,8 @@ def __getitem__(self, index):
def __repr__(self):
return f"_IndexUpdateHelper({repr(self.array)})"
+_power = power
+_divide = divide
class _IndexUpdateRef:
"""Helper object to call indexed update functions for an (advanced) index.
@@ -5886,74 +5892,100 @@ def __repr__(self):
def set(self, values, indices_are_sorted=False, unique_indices=False):
"""Pure equivalent of ``x[idx] = y``.
- ``x.at[idx].set(y)`` is syntactic sugar for
- ``jax.ops.index_update(x, jax.ops.index[idx], y)``, and
- returns the value of ``x`` that would result from the NumPy-style
+ Returns the value of ``x`` that would result from the NumPy-style
:mod:indexed assignment <numpy.doc.indexing>` ``x[idx] = y``.
See :mod:`jax.ops` for details.
"""
- return ops.index_update(self.array, self.index, values,
- indices_are_sorted=indices_are_sorted,
- unique_indices=unique_indices)
+ return scatter._scatter_update(self.array, self.index, values, lax.scatter,
+ indices_are_sorted=indices_are_sorted,
+ unique_indices=unique_indices)
def add(self, values, indices_are_sorted=False, unique_indices=False):
"""Pure equivalent of ``x[idx] += y``.
- ``x.at[idx].add(y)`` is syntactic sugar for
- ``jax.ops.index_add(x, jax.ops.index[idx], y)``, and
- returns the value of ``x`` that would result from the NumPy-style
+ Returns the value of ``x`` that would result from the NumPy-style
:mod:indexed assignment <numpy.doc.indexing>` ``x[idx] += y``.
See :mod:`jax.ops` for details.
"""
- return ops.index_add(self.array, self.index, values,
- indices_are_sorted=indices_are_sorted,
- unique_indices=unique_indices)
+ return scatter._scatter_update(self.array, self.index, values,
+ lax.scatter_add,
+ indices_are_sorted=indices_are_sorted,
+ unique_indices=unique_indices)
- def mul(self, values, indices_are_sorted=False, unique_indices=False):
- """Pure equivalent of ``x[idx] += y``.
+ def multiply(self, values, indices_are_sorted=False, unique_indices=False):
+ """Pure equivalent of ``x[idx] *= y``.
- ``x.at[idx].mul(y)`` is syntactic sugar for
- ``jax.ops.index_mul(x, jax.ops.index[idx], y)``, and
- returns the value of ``x`` that would result from the NumPy-style
+ Returns the value of ``x`` that would result from the NumPy-style
:mod:indexed assignment <numpy.doc.indexing>` ``x[idx] *= y``.
See :mod:`jax.ops` for details.
"""
- return ops.index_mul(self.array, self.index, values,
- indices_are_sorted=indices_are_sorted,
- unique_indices=unique_indices)
+ return scatter._scatter_update(self.array, self.index, values,
+ lax.scatter_mul,
+ indices_are_sorted=indices_are_sorted,
+ unique_indices=unique_indices)
+ mul = multiply
+
+ def divide(self, values, indices_are_sorted=False, unique_indices=False):
+ """Pure equivalent of ``x[idx] /= y``.
+
+ Returns the value of ``x`` that would result from the NumPy-style
+ :mod:indexed assignment <numpy.doc.indexing>` ``x[idx] /= y``.
+
+ See :mod:`jax.ops` for details.
+ """
+ return _divide(
+ self.array,
+ scatter._scatter_update(ones_like(self.array), self.index, values,
+ lax.scatter_mul,
+ indices_are_sorted=indices_are_sorted,
+ unique_indices=unique_indices))
+
+ def power(self, values, indices_are_sorted=False, unique_indices=False):
+ """Pure equivalent of ``x[idx] **= y``.
+
+ Returns the value of ``x`` that would result from the NumPy-style
+ :mod:indexed assignment <numpy.doc.indexing>` ``x[idx] **= y``.
+
+ See :mod:`jax.ops` for details.
+ """
+ return _power(
+ self.array,
+ scatter._scatter_update(ones_like(self.array), self.index, values,
+ lax.scatter_mul,
+ indices_are_sorted=indices_are_sorted,
+ unique_indices=unique_indices))
def min(self, values, indices_are_sorted=False, unique_indices=False):
"""Pure equivalent of ``x[idx] = minimum(x[idx], y)``.
- ``x.at[idx].min(y)`` is syntactic sugar for
- ``jax.ops.index_min(x, jax.ops.index[idx], y)``, and
- returns the value of ``x`` that would result from the NumPy-style
+ Returns the value of ``x`` that would result from the NumPy-style
:mod:indexed assignment <numpy.doc.indexing>`
``x[idx] = minimum(x[idx], y)``.
See :mod:`jax.ops` for details.
"""
- return ops.index_min(self.array, self.index, values,
- indices_are_sorted=indices_are_sorted,
- unique_indices=unique_indices)
+ return scatter._scatter_update(self.array, self.index, values,
+ lax.scatter_min,
+ indices_are_sorted=indices_are_sorted,
+ unique_indices=unique_indices)
def max(self, values, indices_are_sorted=False, unique_indices=False):
"""Pure equivalent of ``x[idx] = maximum(x[idx], y)``.
- ``x.at[idx].max(y)`` is syntactic sugar for
- ``jax.ops.index_max(x, jax.ops.index[idx], y)``, and
- returns the value of ``x`` that would result from the NumPy-style
+ Returns the value of ``x`` that would result from the NumPy-style
:mod:indexed assignment <numpy.doc.indexing>`
``x[idx] = maximum(x[idx], y)``.
See :mod:`jax.ops` for details.
"""
- return ops.index_max(self.array, self.index, values,
- indices_are_sorted=indices_are_sorted,
- unique_indices=unique_indices)
+ return scatter._scatter_update(self.array, self.index, values,
+ lax.scatter_max,
+ indices_are_sorted=indices_are_sorted,
+ unique_indices=unique_indices)
+
setattr(_DeviceArray, "at", property(_IndexUpdateHelper))
setattr(_CppDeviceArray, "at", property(_IndexUpdateHelper))
| diff --git a/jax/test_util.py b/jax/test_util.py
--- a/jax/test_util.py
+++ b/jax/test_util.py
@@ -138,7 +138,10 @@ def _assert_numpy_allclose(a, b, atol=None, rtol=None, err_msg=''):
kw = {}
if atol: kw["atol"] = atol
if rtol: kw["rtol"] = rtol
- np.testing.assert_allclose(a, b, **kw, err_msg=err_msg)
+ with np.errstate(invalid='ignore'):
+ # TODO(phawkins): surprisingly, assert_allclose sometimes reports invalid
+ # value errors. It should not do that.
+ np.testing.assert_allclose(a, b, **kw, err_msg=err_msg)
def tolerance(dtype, tol=None):
tol = {} if tol is None else tol
diff --git a/tests/lax_numpy_indexing_test.py b/tests/lax_numpy_indexing_test.py
--- a/tests/lax_numpy_indexing_test.py
+++ b/tests/lax_numpy_indexing_test.py
@@ -849,8 +849,10 @@ class UpdateOps(enum.Enum):
UPDATE = 0
ADD = 1
MUL = 2
- MIN = 3
- MAX = 4
+ DIV = 3
+ POW = 4
+ MIN = 5
+ MAX = 6
def np_fn(op, indexer, x, y):
x = x.copy()
@@ -858,6 +860,10 @@ def np_fn(op, indexer, x, y):
UpdateOps.UPDATE: lambda: y,
UpdateOps.ADD: lambda: x[indexer] + y,
UpdateOps.MUL: lambda: x[indexer] * y,
+ UpdateOps.DIV: jtu.ignore_warning(category=RuntimeWarning)(
+ lambda: x[indexer] / y.astype(x.dtype)),
+ UpdateOps.POW: jtu.ignore_warning(category=RuntimeWarning)(
+ lambda: x[indexer] ** y.astype(x.dtype)),
UpdateOps.MIN: lambda: np.minimum(x[indexer], y),
UpdateOps.MAX: lambda: np.maximum(x[indexer], y),
}[op]()
@@ -880,12 +886,21 @@ def sugar_fn(op, indexer, x, y, indices_are_sorted=False,
return {
UpdateOps.UPDATE: x.at[indexer].set,
UpdateOps.ADD: x.at[indexer].add,
- UpdateOps.MUL: x.at[indexer].mul,
+ UpdateOps.MUL: x.at[indexer].multiply,
+ UpdateOps.DIV: x.at[indexer].divide,
+ UpdateOps.POW: x.at[indexer].power,
UpdateOps.MIN: x.at[indexer].min,
UpdateOps.MAX: x.at[indexer].max,
}[op](y, indices_are_sorted=indices_are_sorted,
unique_indices=unique_indices)
+ def dtypes(op):
+ if op == UpdateOps.UPDATE:
+ return all_dtypes
+ elif op == UpdateOps.DIV or op == UpdateOps.POW:
+ return jtu.dtypes.inexact
+ else:
+ return default_dtypes
class IndexedUpdateTest(jtu.JaxTestCase):
@@ -899,10 +914,10 @@ class IndexedUpdateTest(jtu.JaxTestCase):
} for name, index_specs in s(STATIC_INDEXING_TESTS)
for shape, indexer in s(index_specs)
for op in s(UpdateOps)
- for dtype in s(all_dtypes if op == UpdateOps.UPDATE else default_dtypes)
+ for dtype in s(UpdateOps.dtypes(op))
for update_shape in s(_broadcastable_shapes(_update_shape(shape, indexer)))
for update_dtype in s([dtype] if op == UpdateOps.ADD else all_dtypes)
- for sugared in s([True, False]))))
+ for sugared in (s([True, False]) if op not in [UpdateOps.DIV, UpdateOps.POW] else [True]))))
def testStaticIndexing(self, shape, dtype, update_shape, update_dtype,
indexer, sugared, op):
rng = jtu.rand_default(self.rng())
@@ -912,87 +927,78 @@ def testStaticIndexing(self, shape, dtype, update_shape, update_dtype,
jax_fn = lambda x, y: UpdateOps.sugar_fn(op, indexer, x, y)
else:
jax_fn = lambda x, y: UpdateOps.jax_fn(op, indexer, x, y)
- self._CheckAgainstNumpy(np_fn, jax_fn, args_maker)
+ self._CheckAgainstNumpy(np_fn, jax_fn, args_maker,
+ tol={np.complex128: 1e-14})
self._CompileAndCheck(jax_fn, args_maker)
@parameterized.named_parameters(jtu.named_cases_from_sampler(lambda s: ({
- "testcase_name": "{}_inshape={}_indexer={}_update={}_sugared={}_op={}".format(
+ "testcase_name": "{}_inshape={}_indexer={}_update={}_op={}".format(
name, jtu.format_shape_dtype_string(shape, dtype), indexer,
- jtu.format_shape_dtype_string(update_shape, update_dtype), sugared, op.name),
+ jtu.format_shape_dtype_string(update_shape, update_dtype), op.name),
"shape": shape, "dtype": dtype, "indexer": indexer,
"update_shape": update_shape, "update_dtype": update_dtype,
- "op": op, "sugared": sugared
+ "op": op
} for name, index_specs in s(ADVANCED_INDEXING_TESTS_NO_REPEATS)
for shape, indexer in s(index_specs)
for op in s(UpdateOps)
- for dtype in s(all_dtypes if op == UpdateOps.UPDATE else default_dtypes)
+ for dtype in s(UpdateOps.dtypes(op))
for update_shape in s(_broadcastable_shapes(_update_shape(shape, indexer)))
- for update_dtype in s([dtype] if op == UpdateOps.ADD else all_dtypes)
- for sugared in s([True, False]))))
+ for update_dtype in s([dtype] if op == UpdateOps.ADD else all_dtypes))))
def testAdvancedIndexing(self, shape, dtype, update_shape, update_dtype,
- indexer, sugared, op):
+ indexer, op):
rng = jtu.rand_default(self.rng())
args_maker = lambda: [rng(shape, dtype), rng(update_shape, update_dtype)]
np_fn = lambda x, y: UpdateOps.np_fn(op, indexer, x, y)
- if sugared:
- jax_fn = lambda x, y: UpdateOps.sugar_fn(op, indexer, x, y, unique_indices=True)
- else:
- jax_fn = lambda x, y: UpdateOps.jax_fn(op, indexer, x, y, unique_indices=True)
- self._CheckAgainstNumpy(np_fn, jax_fn, args_maker)
+ jax_fn = lambda x, y: UpdateOps.sugar_fn(op, indexer, x, y, unique_indices=True)
+ self._CheckAgainstNumpy(np_fn, jax_fn, args_maker,
+ tol={np.complex128: 1e-14})
self._CompileAndCheck(jax_fn, args_maker)
@parameterized.named_parameters(jtu.named_cases_from_sampler(lambda s: ({
- "testcase_name": "{}_inshape={}_indexer={}_update={}_sugared={}_op={}".format(
+ "testcase_name": "{}_inshape={}_indexer={}_update={}_op={}".format(
name, jtu.format_shape_dtype_string(shape, dtype), indexer,
- jtu.format_shape_dtype_string(update_shape, update_dtype), sugared, op.name),
+ jtu.format_shape_dtype_string(update_shape, update_dtype), op.name),
"shape": shape, "dtype": dtype, "indexer": indexer,
"update_shape": update_shape, "update_dtype": update_dtype,
- "op": op, "sugared": sugared
+ "op": op
} for name, index_specs in s(ADVANCED_INDEXING_TESTS_NO_REPEATS_SORTED)
for shape, indexer in s(index_specs)
for op in s(UpdateOps)
- for dtype in s(all_dtypes if op == UpdateOps.UPDATE else default_dtypes)
+ for dtype in s(UpdateOps.dtypes(op))
for update_shape in s(_broadcastable_shapes(_update_shape(shape, indexer)))
- for update_dtype in s([dtype] if op == UpdateOps.ADD else all_dtypes)
- for sugared in s([True, False]))))
+ for update_dtype in s([dtype] if op == UpdateOps.ADD else all_dtypes))))
def testAdvancedIndexingSorted(self, shape, dtype, update_shape, update_dtype,
- indexer, sugared, op):
+ indexer, op):
rng = jtu.rand_default(self.rng())
args_maker = lambda: [rng(shape, dtype), rng(update_shape, update_dtype)]
np_fn = lambda x, y: UpdateOps.np_fn(op, indexer, x, y)
- if sugared:
- jax_fn = lambda x, y: UpdateOps.sugar_fn(
- op, indexer, x, y, indices_are_sorted=True, unique_indices=True)
- else:
- jax_fn = lambda x, y: UpdateOps.jax_fn(
- op, indexer, x, y, indices_are_sorted=True, unique_indices=True)
- self._CheckAgainstNumpy(np_fn, jax_fn, args_maker, check_dtypes=True)
+ jax_fn = lambda x, y: UpdateOps.sugar_fn(
+ op, indexer, x, y, indices_are_sorted=True, unique_indices=True)
+ self._CheckAgainstNumpy(np_fn, jax_fn, args_maker, check_dtypes=True,
+ tol={np.complex128: 1e-14})
self._CompileAndCheck(jax_fn, args_maker, check_dtypes=True)
@parameterized.named_parameters(jtu.named_cases_from_sampler(lambda s: ({
- "testcase_name": "{}_inshape={}_indexer={}_update={}_op={}_sugared={}".format(
+ "testcase_name": "{}_inshape={}_indexer={}_update={}_op={}".format(
name, jtu.format_shape_dtype_string(shape, dtype), indexer,
- jtu.format_shape_dtype_string(update_shape, update_dtype), op.name, sugared),
+ jtu.format_shape_dtype_string(update_shape, update_dtype), op.name),
"shape": shape, "dtype": dtype, "indexer": indexer,
"update_shape": update_shape, "update_dtype": update_dtype,
- "op": op, "sugared": sugared
+ "op": op
} for name, index_specs in s(MIXED_ADVANCED_INDEXING_TESTS_NO_REPEATS)
for shape, indexer in s(index_specs)
for op in s(UpdateOps)
- for dtype in s(all_dtypes if op == UpdateOps.UPDATE else default_dtypes)
+ for dtype in s(UpdateOps.dtypes(op))
for update_shape in s(_broadcastable_shapes(_update_shape(shape, indexer)))
- for update_dtype in s([dtype] if op == UpdateOps.ADD else all_dtypes)
- for sugared in s([True, False]))))
+ for update_dtype in s([dtype] if op == UpdateOps.ADD else all_dtypes))))
def testMixedAdvancedIndexing(self, shape, dtype, update_shape, update_dtype,
- indexer, sugared, op):
+ indexer, op):
rng = jtu.rand_default(self.rng())
args_maker = lambda: [rng(shape, dtype), rng(update_shape, update_dtype)]
np_fn = lambda x, y: UpdateOps.np_fn(op, indexer, x, y)
- if sugared:
- jax_fn = lambda x, y: UpdateOps.sugar_fn(op, indexer, x, y)
- else:
- jax_fn = lambda x, y: UpdateOps.jax_fn(op, indexer, x, y)
- self._CheckAgainstNumpy(np_fn, jax_fn, args_maker)
+ jax_fn = lambda x, y: UpdateOps.sugar_fn(op, indexer, x, y)
+ self._CheckAgainstNumpy(np_fn, jax_fn, args_maker,
+ tol={np.complex128: 1e-14})
self._CompileAndCheck(jax_fn, args_maker)
@parameterized.named_parameters(jtu.cases_from_list({
@@ -1012,7 +1018,7 @@ def testMixedAdvancedIndexing(self, shape, dtype, update_shape, update_dtype,
def testStaticIndexingGrads(self, shape, dtype, update_shape, update_dtype,
indexer, op):
rng = jtu.rand_default(self.rng())
- jax_fn = lambda x, y: UpdateOps.jax_fn(op, indexer, x, y)
+ jax_fn = lambda x, y: UpdateOps.sugar_fn(op, indexer, x, y)
x = rng(shape, dtype)
y = rng(update_shape, update_dtype)
check_grads(jax_fn, (x, y), 2, rtol=1e-3, atol=1e-3, eps=1.)
| Add index_mul, index_div, index_pow
Currently the only scatter operations implemented as primitives are update, add, min and max. `index_update` has limited support for transposing, which means some custom_jvp rules implemented using `index_update` cannot be automatically converted into vjp form. Extending the indexing operations would fix this, in particular in the fix for issue #2380.
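With the patch above applied, the corresponding `.at` sugar would look like the following sketch (`divide`/`power` are restricted to inexact dtypes per the tests):
```python
import jax.numpy as jnp

x = jnp.full(4, 2.0)
x = x.at[1].multiply(3.0)  # pure version of x[1] *= 3.0
x = x.at[2].divide(4.0)    # pure version of x[2] /= 4.0
x = x.at[3].power(3.0)     # pure version of x[3] **= 3.0
# x is now [2., 6., 0.5, 8.]
```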
| One third of the way there!
Though it's probably not high priority | 2021-05-10T21:53:07 |
google/jax | 6,728 | google__jax-6728 | [
"6605"
] | f7717c89bd842a8dc6112e19c72a461bc739d414 | diff --git a/jax/_src/numpy/lax_numpy.py b/jax/_src/numpy/lax_numpy.py
--- a/jax/_src/numpy/lax_numpy.py
+++ b/jax/_src/numpy/lax_numpy.py
@@ -2658,8 +2658,8 @@ def _broadcast_to_pairs(nvals, nd, name):
# pad
return tuple((nvals.flat[0], nvals.flat[0]) for i in range(nd))
else:
- raise ValueError(f"{name} given unexpected structure: {nvals}. "
- f"See docstring for valid {name} formats.")
+ raise ValueError(f"jnp.pad: {name} with nd={nd} has unsupported shape {nvals.shape}. "
+ f"Valid shapes are ({nd}, 2), (1, 2), (2,), (1,), or ().")
@partial(jit, static_argnums=(1, 2, 4, 5, 6))
| Issue better error message when jax.numpy.pad() pad_widths mismatches array rank
The following JAX code:
```
y = jnp.pad(jnp.zeros((2,2)), [[0,0],[0,0],[0,0]], mode='wrap')
```
gives an error:
```
ValueError: pad_width given unexpected structure: [[0 0]
[0 0]
[0 0]]. See docstring for valid pad_width formats.
```
This is because the length of the pad_widths list needs to match the rank of the array. Can we make this error message more explicit?
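For comparison, a call whose `pad_width` length matches the array rank succeeds (sketch):
```python
import jax.numpy as jnp

x = jnp.zeros((2, 2))
jnp.pad(x, [[0, 1], [1, 0]], mode='wrap')  # rank-2 array, two (before, after) pairs: OK
```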
| It would certainly be possible to be more explicit, but in this case it's tricky because there are so many possible reasons that the input is invalid. Here is the chain of options: https://github.com/google/jax/blob/9c63c772a97623453aecc0680dd4601cc8aa107a/jax/_src/numpy/lax_numpy.py#L2639-L2656
It's tough to guess what the user intended when there are so many possible correct inputs. Maybe in this case we could raise a more specific error if `nvals.ndim == 2 and nvals.shape[1] == 2 and nvals.shape[0] not in (1, nd)`. What do you think?
One simple thing might be just to report `np.ndm(x)` in the error message. | 2021-05-11T22:46:16 |
|
google/jax | 6,764 | google__jax-6764 | [
"6762"
] | bf63107046162f296f7aabd73eb0e09bca33cc61 | diff --git a/jax/_src/lax/lax.py b/jax/_src/lax/lax.py
--- a/jax/_src/lax/lax.py
+++ b/jax/_src/lax/lax.py
@@ -5382,7 +5382,8 @@ def _argminmax_translation_rule(value_comparator, identity,
x_index = xb.parameter(subc, 1, index_shape)
y_value = xb.parameter(subc, 2, value_shape)
y_index = xb.parameter(subc, 3, index_shape)
- which_value = value_comparator(x_value, y_value)
+ which_value = xops.Or(value_comparator(x_value, y_value),
+ xops.Ne(x_value, x_value))
which_index = xops.Or(which_value, xops.And(xops.Eq(x_value, y_value),
xops.Lt(x_index, y_index)))
xops.Tuple(subc, [xops.Select(which_value, x_value, y_value),
@@ -5402,8 +5403,8 @@ def _argminmax_gpu_translation_rule(op, a, *, axes, index_dtype):
idxs = tie_in(a, broadcasted_iota(index_dtype, a.shape, axis))
maxval = np.array(dtypes.iinfo(index_dtype).max, dtype=index_dtype)
maxval = broadcast(tie_in(a, maxval), a.shape)
- mask_idxs = select(eq(a, expand_dims(op(a, (axis,)), (axis,))), idxs,
- maxval)
+ maxvals = expand_dims(op(a, (axis,)), (axis,))
+ mask_idxs = select(eq(a, maxvals) | ne(a, a), idxs, maxval)
return _reduce_min(mask_idxs, (axis,))
_argmin_translation_rule = partial(_argminmax_translation_rule, xops.Lt,
| diff --git a/tests/lax_test.py b/tests/lax_test.py
--- a/tests/lax_test.py
+++ b/tests/lax_test.py
@@ -2560,6 +2560,10 @@ def testArgMinMaxWeakType(self, jax_fn, weak_type):
x_out_jit = api.jit(op)(x_in)
self.assertEqual(dtypes.is_weakly_typed(x_out_jit), False)
+ def testArgMaxOfNanChoosesNaN(self):
+ self.assertEqual(lax.argmax(np.array([0., np.nan]), axis=0,
+ index_dtype=np.int32), 1)
+
@parameterized.named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}".format(rec.op),
"op_name": rec.op, "rec_dtypes": rec.dtypes}
| Inconsistent edge case handling in jax.random.categorical across devices
When provided NaN probabilities, `jax.random.categorical` returns different results depending on the device.
```
# GPU
> jax.random.categorical(rng_key, jnp.array([0, jnp.nan]))
DeviceArray(2147483647, dtype=int32)
# CPU
> jax.random.categorical(rng_key, jnp.array([0, jnp.nan]))
DeviceArray(1, dtype=int32)
```
Arguably both are not ideal (being valid values, bugs can go unnoticed - especially in the CPU case where 1 is a perfectly valid value coming from this function) but the inconsistency in particular can make debugging very tricky and should be fixed.
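The test added above pins down the intended behavior at the `lax.argmax` level; after this fix a NaN entry is treated as the maximum on all backends:
```python
import numpy as np
from jax import lax

lax.argmax(np.array([0.0, np.nan]), axis=0, index_dtype=np.int32)  # -> 1
```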
| The issue is actually that `argmax` is handled differently on GPU, and its implementation requires an equality comparison, which fails for NaNs. | 2021-05-17T13:51:35 |
google/jax | 6,781 | google__jax-6781 | [
"6780"
] | 683289c4ad8bdabb28b236530a932303f47d35cb | diff --git a/jax/_src/image/scale.py b/jax/_src/image/scale.py
--- a/jax/_src/image/scale.py
+++ b/jax/_src/image/scale.py
@@ -24,9 +24,8 @@
def _fill_lanczos_kernel(radius, x):
y = radius * jnp.sin(np.pi * x) * jnp.sin(np.pi * x / radius)
- with np.errstate(divide='ignore', invalid='ignore'):
- out = y / (np.pi ** 2 * x ** 2)
- out = jnp.where(x <= 1e-3, 1., out)
+ # out = y / (np.pi ** 2 * x ** 2) where x >1e-3, 1 otherwise
+ out = jnp.where(x > 1e-3, jnp.divide(y, jnp.where(x != 0, np.pi**2 * x**2, 1)), 1)
return jnp.where(x > radius, 0., out)
@@ -63,10 +62,10 @@ def compute_weight_mat(input_size: int, output_size: int, scale,
weights = kernel(x)
total_weight_sum = jnp.sum(weights, axis=0, keepdims=True)
- with np.errstate(invalid='ignore', divide='ignore'):
- weights = jnp.where(
- jnp.abs(total_weight_sum) > 1000. * np.finfo(np.float32).eps,
- weights / total_weight_sum, 0)
+ weights = jnp.where(
+ jnp.abs(total_weight_sum) > 1000. * np.finfo(np.float32).eps,
+ jnp.divide(weights, jnp.where(total_weight_sum != 0, total_weight_sum, 1)),
+ 0)
# Zero out weights where the sample location is completely outside the input
# range.
# Note sample_f has already had the 0.5 removed, hence the weird range below.
| diff --git a/tests/image_test.py b/tests/image_test.py
--- a/tests/image_test.py
+++ b/tests/image_test.py
@@ -324,6 +324,41 @@ def jit_fn(in_array, s, t):
output = jax.jit(jit_fn)(x, scale_a, translation_a)
self.assertAllClose(output, expected, atol=2e-03)
+ @parameterized.named_parameters(jtu.cases_from_list(
+ {"testcase_name": "antialias={}".format(antialias),
+ "antialias": antialias}
+ for antialias in [True, False]))
+ def testScaleAndTranslateGradFinite(self, antialias):
+ image_shape = [1, 6, 7, 1]
+ target_shape = [1, 3, 3, 1]
+
+ data = [
+ 51, 38, 32, 89, 41, 21, 97, 51, 33, 87, 89, 34, 21, 97, 43, 25, 25, 92,
+ 41, 11, 84, 11, 55, 111, 23, 99, 50, 83, 13, 92, 52, 43, 90, 43, 14, 89,
+ 71, 32, 23, 23, 35, 93
+ ]
+
+ x = jnp.array(data, dtype=jnp.float32).reshape(image_shape)
+ scale_a = jnp.array([1.0, 0.35, 0.4, 1.0], dtype=jnp.float32)
+ translation_a = jnp.array([0.0, 0.2, 0.1, 0.0], dtype=jnp.float32)
+
+ def scale_fn(s):
+ return jnp.sum(jax.image.scale_and_translate(
+ x, target_shape, (0, 1, 2, 3), s, translation_a, "linear", antialias,
+ precision=jax.lax.Precision.HIGHEST))
+
+ scale_out = jax.grad(scale_fn)(scale_a)
+ self.assertTrue(jnp.all(jnp.isfinite(scale_out)))
+
+ def translate_fn(t):
+ return jnp.sum(jax.image.scale_and_translate(
+ x, target_shape, (0, 1, 2, 3), scale_a, t, "linear", antialias,
+ precision=jax.lax.Precision.HIGHEST))
+
+ translate_out = jax.grad(translate_fn)(translation_a)
+ self.assertTrue(jnp.all(jnp.isfinite(translate_out)))
+
+
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
| nans in gradient of scale_and_translate
I'd like to take gradients of `scale_and_translate` wrt the translation parameters. Currently the gradients are `nan` whenever the translation is >1. Example:
```python
import jax
import jax.numpy as jnp
from jax.image import scale_and_translate
xin = jnp.r_[:16].reshape(4, 4) + 1.
xin = jax.device_put(xin)
def f(x):
out = scale_and_translate(xin, shape=xin.shape, spatial_dims=(0, 1),
scale=jnp.array([1.0, 1.0]),
translation=jnp.array([x[0], x[1]]),
method='linear', antialias=True)
return out
f((1.1, 1.2)) # raises an error if JAX_DEBUG_NANS=1
g = lambda x: jnp.sum(f(x))
print(jax.grad(g)( (1.1, 1.2))) # nans
```
If `JAX_DEBUG_NANS=1` is set, the script throws an exception at the evaluation of `f`:
```
Traceback (most recent call last):
File "resize.py", line 15, in <module>
f((1.1, 1.2)) # raises an error if JAX_DEBUG_NANS=1
File "resize.py", line 9, in f
out = scale_and_translate(xin, shape=xin.shape, spatial_dims=(0, 1),
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/_src/image/scale.py", line 219, in scale_and_translate
return _scale_and_translate(image, shape, spatial_dims, scale, translation,
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/_src/image/scale.py", line 92, in _scale_and_translate
w = compute_weight_mat(m, n, scale[i], translation[i],
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/_src/image/scale.py", line 69, in compute_weight_mat
weights / total_weight_sum, 0)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 5666, in deferring_binary_op
return binary_op(self, other)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 569, in true_divide
return lax.div(x1, x2)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/_src/lax/lax.py", line 352, in div
return div_p.bind(x, y)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/core.py", line 264, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/core.py", line 606, in process_primitive
return primitive.impl(*tracers, **params)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/interpreters/xla.py", line 232, in apply_primitive
return compiled_fun(*args)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/interpreters/xla.py", line 351, in _execute_compiled_primitive
check_special(prim.name, out_bufs)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/interpreters/xla.py", line 370, in check_special
_check_special(name, buf.xla_shape(), buf)
File "/home/pfister/miniconda3/envs/scico/lib/python3.8/site-packages/jax/interpreters/xla.py", line 376, in _check_special
raise FloatingPointError(f"invalid value (nan) encountered in {name}")
FloatingPointError: invalid value (nan) encountered in div
```
The issue is in this block:
https://github.com/google/jax/blob/c85e835951c68b4383dbc9ef25bef99ea996fd84/jax/_src/image/scale.py#L66-L69
in particular, the computation `weights/total_weight_sum` can involve a divide by 0, even though those locations get set to zero in the course of the `jnp.where`.
I'll submit a PR shortly with the "where inside of where" fix.
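The "where inside of where" pattern used in the patch above avoids producing the NaN in the first place, so it cannot leak into gradients; a generic sketch (the helper name and `eps` are illustrative, not part of the JAX API):
```python
import jax.numpy as jnp

def safe_divide(num, den, eps=1e-12):
    # Make the unselected branch harmless: divide by a nonzero dummy value
    # where the real denominator is (near) zero, then mask the result.
    den_ok = jnp.abs(den) > eps
    return jnp.where(den_ok, num / jnp.where(den_ok, den, 1.0), 0.0)
```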
| 2021-05-18T21:37:22 |
|
google/jax | 6,785 | google__jax-6785 | [
"6756",
"5292"
] | 62603fde67eea6087e3b094f72f7cc214cf9701c | diff --git a/jax/_src/scipy/optimize/bfgs.py b/jax/_src/scipy/optimize/bfgs.py
--- a/jax/_src/scipy/optimize/bfgs.py
+++ b/jax/_src/scipy/optimize/bfgs.py
@@ -55,6 +55,7 @@ class _BFGSResults(NamedTuple):
f_k: jnp.ndarray
g_k: jnp.ndarray
H_k: jnp.ndarray
+ old_old_fval: jnp.ndarray
status: Union[int, jnp.ndarray]
line_search_status: Union[int, jnp.ndarray]
@@ -108,6 +109,7 @@ def minimize_bfgs(
f_k=f_0,
g_k=g_0,
H_k=initial_H,
+ old_old_fval=f_0 + jnp.linalg.norm(g_0) / 2,
status=0,
line_search_status=0,
)
@@ -124,6 +126,7 @@ def body_fun(state):
state.x_k,
p_k,
old_fval=state.f_k,
+ old_old_fval=state.old_old_fval,
gfk=state.g_k,
maxiter=line_search_maxiter,
)
@@ -153,7 +156,8 @@ def body_fun(state):
x_k=x_kp1,
f_k=f_kp1,
g_k=g_kp1,
- H_k=H_kp1
+ H_k=H_kp1,
+ old_old_fval=state.f_k,
)
return state
diff --git a/jax/_src/scipy/optimize/line_search.py b/jax/_src/scipy/optimize/line_search.py
--- a/jax/_src/scipy/optimize/line_search.py
+++ b/jax/_src/scipy/optimize/line_search.py
@@ -104,15 +104,16 @@ def _zoom(restricted_func_and_grad, wolfe_one, wolfe_two, a_lo, phi_lo,
def body(state):
# Body of zoom algorithm. We use boolean arithmetic to avoid using jax.cond
# so that it works on GPU/TPU.
+ dalpha = (state.a_hi - state.a_lo)
a = jnp.minimum(state.a_hi, state.a_lo)
b = jnp.maximum(state.a_hi, state.a_lo)
- dalpha = (b - a)
cchk = delta1 * dalpha
qchk = delta2 * dalpha
# This will cause the line search to stop, and since the Wolfe conditions
# are not satisfied the minimization should stop too.
- state = state._replace(failed=state.failed | (dalpha <= 1e-10))
+ threshold = jnp.where((jnp.finfo(dalpha).bits < 64), 1e-5, 1e-10)
+ state = state._replace(failed=state.failed | (dalpha <= threshold))
# Cubmin is sometimes nan, though in this case the bounds check will fail.
a_j_cubic = _cubicmin(state.a_lo, state.phi_lo, state.dphi_lo, state.a_hi,
@@ -169,9 +170,9 @@ def body(state):
hi_to_lo,
state._asdict(),
dict(
- a_hi=a_lo,
- phi_hi=phi_lo,
- dphi_hi=dphi_lo,
+ a_hi=state.a_lo,
+ phi_hi=state.phi_lo,
+ dphi_hi=state.dphi_lo,
a_rec=state.a_hi,
phi_rec=state.phi_hi,
),
@@ -191,6 +192,9 @@ def body(state):
),
)
state = state._replace(j=state.j + 1)
+ # Choose higher cutoff for maxiter than Scipy as Jax takes longer to find
+ # the same value - possibly floating point issues?
+ state = state._replace(failed= state.failed | state.j >= 30)
return state
state = while_loop(lambda state: (~state.done) & (~pass_through) & (~state.failed),
@@ -213,7 +217,6 @@ class _LineSearchState(NamedTuple):
phi_star: Union[float, jnp.ndarray]
dphi_star: Union[float, jnp.ndarray]
g_star: jnp.ndarray
- saddle_point: Union[bool, jnp.ndarray]
class _LineSearchResults(NamedTuple):
@@ -269,6 +272,11 @@ def restricted_func_and_grad(t):
else:
phi_0 = old_fval
dphi_0 = jnp.dot(gfk, pk)
+ if old_old_fval is not None:
+ candidate_start_value = 1.01 * 2 * (phi_0 - old_old_fval) / dphi_0
+ start_value = jnp.where(candidate_start_value > 1, 1.0, candidate_start_value)
+ else:
+ start_value = 1
def wolfe_one(a_i, phi_i):
# actually negation of W1
@@ -292,18 +300,12 @@ def wolfe_two(dphi_i):
phi_star=phi_0,
dphi_star=dphi_0,
g_star=gfk,
- saddle_point=False,
)
def body(state):
# no amax in this version, we just double as in scipy.
# unlike original algorithm we do our next choice at the start of this loop
- a_i = jnp.where(state.i == 1, 1., state.a_i1 * 2.)
- # if a_i <= 0 then something went wrong. In practice any really small step
- # length is a failure. Likely means the search pk is not good, perhaps we
- # are at a saddle point.
- saddle_point = a_i < 1e-5
- state = state._replace(failed=saddle_point, saddle_point=saddle_point)
+ a_i = jnp.where(state.i == 1, start_value, state.a_i1 * 2.)
phi_i, dphi_i, g_i = restricted_func_and_grad(a_i)
state = state._replace(nfev=state.nfev + 1,
@@ -384,25 +386,28 @@ def body(state):
state)
status = jnp.where(
- state.failed & (~state.saddle_point),
+ state.failed,
jnp.array(1), # zoom failed
- jnp.where(
- state.failed & state.saddle_point,
- jnp.array(2), # saddle point reached,
jnp.where(
state.i > maxiter,
jnp.array(3), # maxiter reached
jnp.array(0), # passed (should be)
),
- ),
)
+ # Step sizes which are too small causes the optimizer to get stuck with a
+ # direction of zero in <64 bit mode - avoid with a floor on minimum step size.
+ alpha_k = state.a_star
+ alpha_k = jnp.where((jnp.finfo(alpha_k).bits != 64)
+ & (jnp.abs(alpha_k) < 1e-8),
+ jnp.sign(alpha_k) * 1e-8,
+ alpha_k)
results = _LineSearchResults(
failed=state.failed | (~state.done),
nit=state.i - 1, # because iterations started at 1
nfev=state.nfev,
ngev=state.ngev,
k=state.i,
- a_k=state.a_star,
+ a_k=alpha_k,
f_k=state.phi_star,
g_k=state.g_star,
status=status,
| diff --git a/tests/scipy_optimize_test.py b/tests/scipy_optimize_test.py
--- a/tests/scipy_optimize_test.py
+++ b/tests/scipy_optimize_test.py
@@ -57,6 +57,13 @@ def func(p):
return func
+def zakharovFromIndices(x, ii):
+ sum1 = (x**2).sum()
+ sum2 = (0.5*ii*x).sum()
+ answer = sum1+sum2**2+sum2**4
+ return answer
+
+
class TestBFGS(jtu.JaxTestCase):
@parameterized.named_parameters(jtu.cases_from_list(
@@ -94,6 +101,37 @@ def f(x):
results = jax.scipy.optimize.minimize(f, jnp.ones(n), method='BFGS')
self.assertAllClose(results.x, jnp.zeros(n), atol=1e-6, rtol=1e-6)
+ @jtu.skip_on_flag('jax_enable_x64', False)
+ def test_zakharov(self):
+ def zakharov_fn(x):
+ ii = jnp.arange(1, len(x) + 1, step=1)
+ answer = zakharovFromIndices(x=x, ii=ii)
+ return answer
+
+ x0 = jnp.array([600.0, 700.0, 200.0, 100.0, 90.0, 1e4])
+ eval_func = jax.jit(zakharov_fn)
+ jax_res = jax.scipy.optimize.minimize(fun=eval_func, x0=x0, method='BFGS')
+ self.assertLess(jax_res.fun, 1e-6)
+
+ def test_minimize_bad_initial_values(self):
+ # This test runs deliberately "bad" initial values to test that handling
+ # of failed line search, etc. is the same across implementations
+ initial_value = jnp.array([92, 0.001])
+ opt_fn = himmelblau(jnp)
+ jax_res = jax.scipy.optimize.minimize(
+ fun=opt_fn,
+ x0=initial_value,
+ method='BFGS',
+ ).x
+ scipy_res = scipy.optimize.minimize(
+ fun=opt_fn,
+ jac=jax.grad(opt_fn),
+ method='BFGS',
+ x0=initial_value
+ ).x
+ self.assertAllClose(scipy_res, jax_res, atol=2e-5, check_dtypes=False)
+
+
def test_args_must_be_tuple(self):
A = jnp.eye(2) * 1e4
def f(x):
| jax.scipy.optimize hangs
I'm comparing performance for the scipy BFGS implementation and the replica in jax, but the one in jax keeps hanging and/or being 100x slower. To make sure this is not only my system, I have verified that the code below hangs on Colab.
```python
import jax
import jax.numpy as jnp
import jax.scipy.optimize
import jax.random
jX = jnp.arange(-2, 8, 0.03) # A 1D evaluation grid
targetp = jnp.array(
[1.3615158, 3.1700504, 0.3901142, 0.8582838,
-0.24486497, 3.078153, 1.2610698, 2.276195,
0.84759176, 1.563921, -0.91339713, 0.23969328]) # The "true" parameters
def fun(p): # This function was once auto-generated, so it looks silly
arg1 = p[2] + p[3] * jnp.tanh(p[4] + p[5] * jX)
arg2 = p[6] + p[7] * jnp.log(jnp.abs(p[8] + p[9] * jnp.arctan(p[10] + p[11] * jX)))
return p[0] + p[1] * arg1 * arg2
targety = fun(targetp)
def mse_loss(p):
return ((fun(p) - targety) ** 2).mean()
```
I can optimize `mse_loss` just fine in scipy.optimize.minimize, but when I run the code below, the interpreter hangs on the second iteration:
```python
key = jax.random.PRNGKey(1337)
keys = jax.random.split(key, 10)
for k in keys: # Try optimize from different starting points
x0 = targetp + jax.random.normal(k)
optx = jax.scipy.optimize.minimize(mse_loss, x0, method='BFGS')
print(optx.fun, optx.nfev)
```
I have verified this with and without GPU and with and without jitted functions.
[This](https://colab.research.google.com/drive/1sBwLh5w_XRDABHJB2SrmwZtOTXu33EOD) is a link to the above code in Colab.
jax.scipy.optimize.minimize convergence problem
I attempted to minimize a simple test function with scipy.optimize.minimize and with jax.scipy.optimize.minimize. I used identical parameters and start point. scipy.optimize.minimize converged to the function minimum but jax.scipy.optimize.minimize did not.
```python
import time
import scipy.optimize
import jax
import jax.numpy as jnp
import jax.scipy.optimize
import jax.config
import autograd.numpy as anp
import autograd
jax.config.update("jax_enable_x64", True)
def zakharovFromIndices(x, ii):
sum1 = (x**2).sum()
sum2 = (0.5*ii*x).sum()
answer = sum1+sum2**2+sum2**4
return answer
def zakharov_jaxNumpy(x):
ii = jnp.arange(1, len(x)+1, step=1)
answer = zakharovFromIndices(x=x, ii=ii)
return answer
def zakharov_autogradNumpy(x):
ii = anp.arange(1, len(x)+1, step=1)
answer = zakharovFromIndices(x=x, ii=ii)
return answer
jEvalFunc = jax.jit(zakharov_jaxNumpy)
aEvalFunc = zakharov_autogradNumpy
aGradFunc = autograd.grad(aEvalFunc)
x0 = [600.0, 700.0, 200.0, 100.0, 90.0, 1e4]
toleranceChange = 1e-9
maxIter = 10000
jx0 = jnp.array(x0)
ax0 = anp.array(x0)
aOptimRes_x0 = scipy.optimize.minimize(fun=aEvalFunc, x0=ax0, method='BFGS', jac=aGradFunc)
jOptimRes_x0 = jax.scipy.optimize.minimize(fun=jEvalFunc, x0=jx0, method='BFGS')
print("scipy.optimize converged?: {}".format(aOptimRes_x0.fun<1e-6))
print("jax.scipy.optimize converged?: {}".format(jOptimRes_x0.fun<1e-6))
```
|
I tried to scale the Hessian matrix at the first iteration with the method proposed by Nocedal and Wright in "Numerical Optimization" (p. 143, formula (6.20)), but it doesn't solve the issue. Similar problems have been mentioned in #5139. The Scipy implementation uses a second line search when the first one doesn't succeed, which could explain the performance differences. | 2021-05-19T10:58:18 |
google/jax | 6,793 | google__jax-6793 | [
"6745"
] | 49421d02843e4f4a15947ca384dae647418cefda | diff --git a/jax/_src/util.py b/jax/_src/util.py
--- a/jax/_src/util.py
+++ b/jax/_src/util.py
@@ -330,10 +330,6 @@ def tuple_delete(t, idx):
assert 0 <= idx < len(t), (idx, len(t))
return t[:idx] + t[idx + 1:]
-def tuple_replace(t, idx, val):
- assert 0 <= idx < len(t), (idx, len(t))
- return t[:idx] + (val,) + t[idx:]
-
# TODO(mattjj): replace with dataclass when Python 2 support is removed
def taggedtuple(name, fields) -> Callable[..., Any]:
"""Lightweight version of namedtuple where equality depends on the type."""
| tuple_replace function in util.py seems incorrect
In the file jax._src.util.py
Current definition
```python
def tuple_replace(t, idx, val):
assert 0 <= idx < len(t), (idx, len(t))
return t[:idx] + (val,) + t[idx:]
```
looks like an almost copy of tuple_insert. I think the correct definition should be:
```python
def tuple_replace(t, idx, val):
assert 0 <= idx < len(t), (idx, len(t))
return t[:idx] + (val,) + t[idx+1:]
```
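A quick illustration of the difference on a small tuple:
```python
t = (1, 2, 3)
# current definition (acts like an insert): tuple_replace(t, 1, 9) -> (1, 9, 2, 3)
# corrected definition (a true replace):    tuple_replace(t, 1, 9) -> (1, 9, 3)
```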
| You're correct, although the function appears to be unused. | 2021-05-19T19:30:10 |
|
google/jax | 6,798 | google__jax-6798 | [
"6791"
] | 04e5914fc6858db9adc273197e3e2da749b8af52 | diff --git a/jax/experimental/jax2tf/jax2tf.py b/jax/experimental/jax2tf/jax2tf.py
--- a/jax/experimental/jax2tf/jax2tf.py
+++ b/jax/experimental/jax2tf/jax2tf.py
@@ -170,6 +170,9 @@ def convert(fun: Callable, *,
The conversion fails if it cannot ensure that the it would produce the same
sequence of TF ops for any non-zero values of the dimension variables.
+ polymorphic_shapes are only supported for positional arguments; shape
+ polymorphism is not supported for keyword arguments.
+
See [the README](https://github.com/google/jax/blob/master/jax/experimental/jax2tf/README.md#shape-polymorphic-conversion)
for more details.
@@ -192,7 +195,7 @@ def convert(fun: Callable, *,
"""
api._check_callable(fun)
- def converted_fun(*args: TfVal) -> TfVal:
+ def converted_fun(*args: TfVal, **kwargs: TfVal) -> TfVal:
# TODO: is there a better way to check if we are inside a transformation?
if not core.trace_state_clean():
raise ValueError("convert must be used outside all JAX transformations."
@@ -204,21 +207,23 @@ def check_arg(a):
"be NumPy array, scalar, tf.Variable, or tf.Tensor")
raise TypeError(msg)
tree_util.tree_map(check_arg, args)
+ tree_util.tree_map(check_arg, list(kwargs.values()))
# Name input tensors
args = tuple(
tree_util.tree_map(lambda x, i=i: tf.identity(x, f"jax2tf_arg_{i}"), a) # type: ignore
for i, a in enumerate(args))
+ kwargs = {k: tf.identity(v, f"jax2tf_arg_{k}") for k, v in kwargs.items()}
# This function may take pytrees of TfVals. We can only set
# tf.custom_gradient on functions that take a flat argument list.
- args_flat, in_tree = tree_util.tree_flatten((args, {}))
+ args_flat, in_tree = tree_util.tree_flatten((args, kwargs))
if polymorphic_shapes is None:
polymorphic_shapes_ = (None,) * len(args)
else:
if not isinstance(polymorphic_shapes, Sequence) or len(args) != len(polymorphic_shapes):
- msg = ("polymorphic_shapes must be a sequence with the same length as the argument list "
+ msg = ("polymorphic_shapes must be a sequence with the same length as the positional argument list "
f"({len(args)}). Got polymorphic_shapes={polymorphic_shapes}.")
raise TypeError(msg)
polymorphic_shapes_ = tuple(polymorphic_shapes)
@@ -227,6 +232,9 @@ def check_arg(a):
polymorphic_shapes_flat = tuple(api_util.flatten_axes("jax2tf.convert polymorphic_shapes",
in_tree.children()[0],
polymorphic_shapes_))
+ # Add kwargs shapes.
+ polymorphic_shapes_flat = polymorphic_shapes_flat + tuple(
+ (None,) * (len(args_flat) - len(polymorphic_shapes_flat)))
# Construct the abstract values for the flat arguments, possibly based on
# the input shapes and the polymorphic_shapes if given. May create new shape
| diff --git a/jax/experimental/jax2tf/tests/jax2tf_test.py b/jax/experimental/jax2tf/tests/jax2tf_test.py
--- a/jax/experimental/jax2tf/tests/jax2tf_test.py
+++ b/jax/experimental/jax2tf/tests/jax2tf_test.py
@@ -517,6 +517,15 @@ def jax_fn_array(x):
tf_fn_array(np.array([3, 4, 5])), np.array([4.5, 10, 17.5],
jnp.bfloat16))
+ def test_kwargs(self):
+ # Re: https://github.com/google/jax/issues/6791
+ def f_jax(*, x):
+ return jnp.sum(x)
+ f_tf = jax2tf.convert(f_jax)
+ self.assertAllClose(
+ f_tf(x=np.zeros(3, dtype=np.float32)), # Call with kwargs.
+ np.zeros((), dtype=np.float32))
+
def test_enable_xla(self):
# Tests that enable_xla flag is properly scoped to a conversion.
def fun(x):
| jax2tf does not preserve argument names
When converting a function from JAX to TensorFlow using the experimental `jax2tf.convert`, the function's argument names are not preserved, so it is not possible to call the converted function with named (keyword) arguments instead of positional ones.
For example, the following snippet will break:
```python
def f_jax(x):
return jnp.sum(x)
f_tf = jax2tf.convert(f_jax)
x = jnp.zeros(3)
print(f_tf(x=x))
```
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
print(f_tf(x=x))
TypeError: converted_fun() got an unexpected keyword argument 'x'
```
My workaround so far has been to wrap the converted function in another function, and then proceed to export the saved model.
```python
def f_jax(x):
return jnp.sum(x)
f_tf = jax2tf.convert(f_jax)
f_wrap = lambda x: f_tf(x)
x = jnp.zeros(3)
print(f_wrap(x=x))
```
Since jax2tf is experimental, I expected to find some issues. However, this one is not listed as a [known issue](https://github.com/google/jax/tree/master/jax/experimental/jax2tf#known-issues). An important use case for this is to allow models with multiple inputs to be called via Tensorflow Serving.
| 2021-05-20T03:48:22 |